https://zbmath.org/?q=an:0406.57031

# zbMATH — the first resource for mathematics
Concordance spaces, higher simple-homotopy theory, and applications. (English) Zbl 0406.57031
Algebr. geom. Topol., Stanford/Calif. 1976, Proc. Symp. Pure Math., Vol. 32, Part 1, 3-21 (1978).
##### MSC:
57T99 Homology and homotopy of topological groups and related structures
18F25 Algebraic $K$-theory and $L$-theory (category-theoretic aspects)
55R99 Fiber spaces and bundles in algebraic topology
55Q10 Stable homotopy groups
57Q10 Simple homotopy type, Whitehead torsion, Reidemeister-Franz torsion, etc.
57Q60 Cobordism and concordance in PL-topology
https://www.vedantu.com/maths/simplify-questions
# Simplify Questions
## Let’s Learn Some Interesting Simplify Questions
Do you feel lost when it comes to Maths problems? If that's the case, simplification questions happen to be your best friend. Join us as we walk you through this problem-solving technique, with examples of how it works and how it can make difficult Maths problems easier to solve!
Simplification questions are useful because they help you solve more complicated problems with ease. They are asked in Maths to help the learner simplify a concept, getting rid of complications so that the reasoning behind the question becomes clear.
## What are Simplification Questions?
The word simplification refers to any process of making something as easy as possible. Simplification questions are questions that ask you to reduce a Maths expression to its simplest value, like the one below:
Example: Simplify $4(10+15\div\,5\times 4-2\times 2)$
Ans: The answer is: $72$
The basic idea behind simplifying is that you want to transform your number or expression into its simplest form by carrying out the operations in the correct order and combining like terms.
## What is the Simplification Formula?
BODMAS: This is an acronym for the order of steps you should follow when evaluating an expression: Brackets, Order (powers and roots), Division, Multiplication, Addition, and Subtraction. The objective is to reduce the expression to a single value by applying the operations in this order; the letters are expanded in the list below, and a worked check follows the list.
• B: Brackets
• O: Order (powers and roots)
• D: Division
• M: Multiplication
• A: Addition
• S: Subtraction
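As a worked check of the rule, here is the example from above evaluated one BODMAS stage at a time (inside the bracket, division and multiplication run from left to right before addition and subtraction):
$4(10+15\div 5\times 4-2\times 2)$
$=4(10+3\times 4-4)$
$=4(10+12-4)$
$=4\times 18$
$=72$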
## How Do You Solve Simplification Questions?
To solve simplification problems, you must understand the process behind simplifying. Work through the BODMAS order one stage at a time:
• Step 1: Work out everything inside brackets first, starting from the innermost bracket. Example: Simplify (a+bc)+(b-d)(a-c)(d+b)-2 using the BODMAS simplification formula.
• Step 2: Evaluate any orders (powers and roots) next.
• Step 3: Carry out division and multiplication, working from left to right. Example: Simplify (ab+cd)(ab-bc)(a-b)+(c-d) by multiplying out the bracketed factors.
• Step 4: Carry out addition and subtraction, again from left to right, combining like terms. Example: Simplify x(x+3)+(x-9); the expansion is shown below.
• Step 5: Repeat these steps until no brackets remain and no like terms are left to combine.
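Expanding the Step 4 example term by term:
$x(x+3)+(x-9)$
$=x^2+3x+x-9$
$=x^2+4x-9$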
## How Do You Simplify a Fraction?
You simplify a fraction by dividing the numerator and the denominator by their greatest common divisor (GCD).
• For example, one of the simplification questions with solutions is: $\dfrac{12}{18}$ simplifies to $\dfrac{2}{3}$, because dividing 12 and 18 by their GCD, 6, gives $\dfrac{12\div 6}{18\div 6}=\dfrac{2}{3}$.
• For example, $\dfrac{20}{10}$ simplifies to $\dfrac{2}{1}=2$ by dividing both 20 and 10 by their GCD, 10.
A fraction is in its simplest form once the numerator and the denominator share no common factor other than 1.
## How to Simplify Decimals?
If a number is in decimals, you would have to convert it into fractions before simplifying. The simplest form of converting $\dfrac{2}{3}$ into a decimal which is 0.66 or to a fraction with one unit, $\dfrac{6}{1}$=6 becomes the answer.
If it were in fractions, then your number would be easier than doing the multiplication. So try converting them into fractions first, and then simplify while solving simplification problems.
## Solved Examples
Q1 Simplify: 37 - [5 + {28 - (19 - 7)}]
Ans: 37 - [5 + {28 - (19 - 7)}]
= 37 - [5 + {28 - 12}] (Removing the innermost bracket ( ))
= 37 - [5 + 16]
= 37 - 21
= 16.
Q2 Simplify: 78 - [24 - {16 (5 - $\overline{4+1}$)}]
Ans: 78 - [24 - {16 (5 - $\overline{4+1}$)}]
= 78 - [24 - {16 (5 - 5)}] (Removing the vinculum)
= 78 -[24 - {16 (0)}] (Removing parentheses)
= 78 - [24 – 0] (Removing braces)
= 78 - 24
= 54
Q3. Simplify. $\dfrac{1}{3}+\left[\dfrac{1}{2}-\left\{\dfrac{1}{5}+\left(\dfrac{1}{3}-\dfrac{1}{5}\right)\right\}\right]$
Ans: $\dfrac{1}{3}+\left[\dfrac{1}{2}-\left\{\dfrac{1}{5}+\left(\dfrac{1}{3}-\dfrac{1}{5}\right)\right\}\right]$
$=\dfrac{1}{3}+\left[\dfrac{1}{2}-\left\{\dfrac{1}{5}+\dfrac{2}{15}\right\}\right]$
$=\dfrac{1}{3}+\left[\dfrac{1}{2}-\dfrac{1}{3}\right]$
$=\dfrac{1}{3}+\dfrac{1}{6}$
$= \dfrac{2+1}{6}$
$=\dfrac{3}{6}=\dfrac{1}{2}$
## Practice Questions
Q1. 3 - (5 – 6 ÷ 3)
Ans: 0
Q2. – 25 + 14 ÷ (5 - 3)
Ans: -18
Q3. 25-{5+4-(3+2-1+3)}
Ans: 23
Q4. 27 - [38 - {46 - (15 - 13 - 2)}]
Ans: 35
Q5. 36 - [18 - {14 - (15 - 4 ÷ 2 × 2)}]
Ans: 21
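To see how these unwind, here is Q4 worked in full, starting from the innermost bracket:
27 - [38 - {46 - (15 - 13 - 2)}]
= 27 - [38 - {46 - 0}]
= 27 - [38 - 46]
= 27 - (-8)
= 35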
## Summary
We've covered several strategies for simplification questions in this article. We've focused on the BODMAS method, which is a helpful way to simplify equations, and we've learned to simplify fractions by cancelling out common factors.
In our opinion, it is one of the quickest ways to learn Maths, because you can use the same rule over and over again. At first you may not fully understand what the rule means, but as you keep applying it in real-life situations and games, you begin to grasp why the order of operations matters.
## FAQs on Simplify Questions
1. How many types of Simplification Questions are there?
There are four fundamental types, split into four different core concepts. The first is decomposing a problem into parts. The second is calculating with addition and subtraction; this includes breaking multi-digit numbers and fractions into smaller pieces by elimination or division. The third is simplifying linear equations by rearranging the original equation or using the BODMAS framework. Lastly, there is simplifying square roots using the BODMAS formula, which we learned above: Brackets, Order, Division, Multiplication, Addition, and Subtraction, in that order.
2. What is the meaning of simplifying in Maths?
The meaning of simplification in Maths is to reduce an equation or fraction to a simpler form.
3. Why are simplification questions practical?
Simplification refers to any process that makes something as simple as feasible. Maths questions that can be reduced in this way are known as simplification questions, and they make Maths easier for all. Simplification is considered one of the most important parts of bank exams. Though the number of questions from this part is limited, its usefulness across all quantitative sections is immense. Simplification helps develop much-needed speed in the real exam. The questions from this section are generally simple, where you have to fill in a missing expression or a missing value.
https://blender.stackexchange.com/questions/211110/no-longer-able-to-mirror-objects

# No longer able to mirror objects
I'm still fairly new to Blender, and over the last few days I had been following sculpting tutorials and was able to mirror objects with no problem. But today when I was trying to mirror them, it is not working. I also get a pop-up at the bottom saying "failed to set value". I tried to look up how to fix this but did not get the answer I was looking for, and I have even updated to the latest version and still cannot get the mirroring to work. Any suggestions on figuring this out would be awesome. Thank you so much.
• Try changing the mirror axis, as right now you're mirroring along the axis pointing towards the camera: i.imgur.com/XhqYpx0.png other than that, test if the mirror modifier also doesn't work in a new file. If it works there, you can upload a file where it doesn't work, using this website: blend-exchange.giantcowfilms.com – Markus von Broady Feb 8 at 22:33
• Thank you so much for the help and resources! This helped a lot! – Philosopher Feb 9 at 17:57
Your view is showing the $Y$ and $Z$ axes (the green and blue lines across the screen, respectively), but the modifier is set to mirror only on $X$, which is perpendicular to this view and therefore impossible to see.

You can enable mirroring along $Y$ on the modifier.
https://daftarayam.net/box-and-mnmye/1cf5de-differential-forms-in-algebraic-geometry

# Differential forms in algebraic geometry

In the mathematical fields of differential geometry and tensor calculus, differential forms are an approach to multivariable calculus that is independent of coordinates. A differential k-form can be integrated over an oriented k-dimensional submanifold: a 1-form can be thought of as measuring an infinitesimal oriented length, a 2-form an infinitesimal oriented area, and so on. In higher dimensions, $dx^{i_1}\wedge\cdots\wedge dx^{i_m}=0$ if any two of the indices $i_1,\ldots,i_m$ are equal, in the same way that the "volume" enclosed by a parallelotope whose edge vectors are linearly dependent is zero.

The modern notion of differential forms was pioneered by Élie Cartan. The exterior product $\wedge$ builds higher-degree forms out of lower-degree ones, and the exterior derivative $d$ satisfies the fundamental identity $d^2=0$. Forms pull back along smooth maps, and the change of variables formula for integration becomes the simple statement that an integral is preserved under pullback: under a change of coordinates a differential n-form changes by the Jacobian determinant $J$, while a measure changes by $|J|$. Stokes' theorem generalizes the fundamental theorem of calculus and underlies the duality between de Rham cohomology and the homology of chains. Under suitable hypotheses it is also possible to integrate along the fibers of a smooth submersion, for which Dieudonné (1972) proves a generalized Fubini formula.

Differential forms appear throughout geometry and physics. In Maxwell's theory of electromagnetism, the Faraday 2-form, or electromagnetic field strength, is the curvature form of a connection on a principal bundle with structure group $U(1)$, the prototypical example of a gauge theory; theories such as Yang–Mills theory, in which the Lie group is not abelian, generalize this picture. The Wirtinger inequality for 2-forms underlies numerous minimality results for complex analytic manifolds and is a key ingredient in Gromov's inequality for complex projective space in systolic geometry. In algebraic geometry, the algebra of differential forms on a variety $V$ is built from the coordinate ring $k[V]$ together with a derivation (a k-linear map satisfying the Leibniz rule) on $k[V]$, and differential forms are an indispensable tool for studying the global geometry of varieties, analytic spaces, and compact Kähler manifolds.

The classic text by Raoul Bott and Loring W. Tu, Differential Forms in Algebraic Topology (Graduate Texts in Mathematics 82), uses the de Rham theory of differential forms as a prototype of cohomology, making the machinery of algebraic topology easier to assimilate; its material is structured around four core areas: de Rham theory, the Čech–de Rham complex, spectral sequences, and characteristic classes.
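As a minimal worked instance of the identity $d^2=0$ (a standard computation, supplied here for illustration rather than recovered from the page): for a smooth function $f$ on $\mathbb{R}^n$,

$df=\sum_{i}\frac{\partial f}{\partial x^{i}}\,dx^{i},\qquad d(df)=\sum_{i<j}\left(\frac{\partial^{2}f}{\partial x^{i}\,\partial x^{j}}-\frac{\partial^{2}f}{\partial x^{j}\,\partial x^{i}}\right)dx^{i}\wedge dx^{j}=0,$

since mixed partial derivatives commute while the wedge product anticommutes.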
https://aimath.org/workshops/upcoming/rigidhigherrank/
## Global rigidity of actions by higher-rank groups
May 16 to May 20, 2022
at the
American Institute of Mathematics, San Jose, California
organized by
Aaron Brown, David Fisher, Ralf Spatzier, and Zhiren Wang
This workshop, sponsored by AIM and the NSF, will be devoted to actions of higher rank groups such as $SL(n, Z)$, $n \geq 3$, and $Z^d$, $d \geq 2$. The general theme is the global rigidity or classification of such actions (at times satisfying additional dynamical hypotheses) up to smooth changes of coordinates. Two major motivations in this area are the Zimmer program and the Katok-Spatzier conjecture, which respectively concern the classification of actions by lattices in higher rank Lie groups and Anosov actions by higher rank abelian groups. During the last few years, there have been numerous breakthroughs for both types of groups, including the proof of Zimmer's conjecture for $SL(n, Z)$ and cocompact lattices of higher rank $R$-split simple groups, and recent work advancing the classification of abelian Anosov actions. A large volume of new techniques has appeared in various directions surrounding these programs, including functional analysis on groups, homogeneous dynamics, smooth ergodic theory, and invariant algebraic or geometric structures. Given these developments, we expect future progress on various global rigidity conjectures. The goals of the workshop will include:
• Presentations on current state of the art techniques to build invariant algebraic/geometric structures;
• Construction, classification, and investigation of the properties of exotic actions;
• Exchange of techniques developed by different research groups with the goal of developing new collaborations and making further progress in global rigidity programs.
This event will be run as an AIM-style workshop. Participants will be invited to suggest open problems and questions before the workshop begins, and these will be posted on the workshop website. These include specific problems on which there is hope of making some progress during the workshop, as well as more ambitious problems which may influence the future activity of the field. Lectures at the workshop will be focused on familiarizing the participants with the background material leading up to specific problems, and the schedule will include discussion and parallel working sessions.
The deadline to apply for support to participate in this workshop has passed.
https://mathematica.stackexchange.com/questions/42286/minimum-in-a-nested-list/42292

# Minimum in a nested list [duplicate]
I have a nested list of {x,y,z} triples and I want to find out the values of x and y where z is minimum. I can write a For loop and do it
mya = {{0, 1, 10}, {1, 1, 20}, {0, 2, 5}, {1, 2, 15}}
For[i = 1, i <= Length[mya[[All, 3]]], i++,
If[mya[[All, 3]][[i]] == Min[mya[[All, 3]]], Print[mya[[i]]]]]
I get the desired output:
{0, 2, 5}
I know this problem is simple, but if someone can tell me a more elegant way to do it, it will be helpful as I wanna do it for a very large list.
• Cases[mya, x_ /; x[[3]] == Min[mya[[All, 3]]]] is another option Feb 14 '14 at 12:42
• @Nasser : I posted a similar answer. Concerning your solution, I thought the part Min[mya[[All, 3]]] is evaluated Length@mya times. It may be better to let it only evaluate once as I did. Feb 14 '14 at 12:49
• I have marked this question a duplicate because I believe that any method that works for maximum can be directly adapted for minimum, making the solutions effectively identical. (This question will remain as pointer.) If anyone disagrees with this action please leave a comment. Jul 26 '14 at 23:39
I think @halirutan's answer is quite nice and clean. Nevertheless just give an alternative one:
findLastMin[mat_] := Cases[mat, {__, Min@mat[[All, -1]]}]
findLastMin[mya]
{{0, 2, 5}}
There is additional {...} outside the desired output by the OP, because if there are multiple equal minimal values, it returns them all.
How about using SortBy to sort your list by the last element and then take the first entry?
First[SortBy[mya, Last]]
(* {0, 2, 5} *)
A simple iterative approach to go through your list exactly once and remember the minimum element can be written as
Block[{min = First[mya]},
Do[If[Last[min] > Last[elm], min = elm], {elm, Rest[mya]}];
min
]
Although my tests showed that this is a bit slower (about 2 seconds for 10^7 elements) than the first approach.
A faster approach than the two above is to first extract the minimum of all z-values and then go through the list until you hit the first match
Block[{min = Min[Last[Transpose[mya]]]},
Do[If[Last[elm] === min, Return[elm]], {elm,mya}]
]
• Thanks. I was trying to do this with a list of 60,000 length and sort seems to work much faster than For loop!! Feb 14 '14 at 12:31
• Man, you are on a roll today :). This is my third upvote to your today's answers. Looks like you have some plot against Mr.Wizard :). Feb 14 '14 at 12:41
• @LeonidShifrin For the last weeks I barely had time to look at the site and today I just took some minutes and found some nice questions. But thanks. Feb 14 '14 at 12:46
Thanks everyone for the reply. Didn't expect such an overwhelming response. I did a quick check on the speed of each of the solutions by making a random list of 2×10^7 elements and comparing the timings of the four solutions given by Yi Wang, halirutan and sakra:
a = RandomInteger[1000, {2*10^7, 3}];
Method 1:
findLastMin[mat_] := Cases[mat, {__, Min@Last@Transpose@mat}]
findLastMin[a] // Timing
{8.020000, {{710, 337, 0}, {347, 509, 0}, <<19744>>, {609, 151, 0}, {553, 806, 0}}}
Method 2:
First[SortBy[a, Last]] // Timing
{18.216000, {0, 28, 0}}
Method 3:
Block[{min = Min[Last[Transpose[a]]]},
Do[If[Last[elm] === min, Return[elm]], {elm, a}]] // Timing
{2.536000, {710, 337, 0}}
Method 4:
Fold[If[Last[#2] < Last[#1], #2, #1] &, {0, 0, Infinity}, a] // Timing
{29.132000, {710, 337, 0}}
Method 1 gives all solutions and is fairly quick. Once again thanks for all the solutions.
An alternate solution using Fold:
Fold[If[Last[#2] < Last[#1], #2, #1] &, {0, 0, Infinity}, mya]
If the list is known to be non-empty, the following solution is faster:
Fold[If[Last[#2] < Last[#1], #2, #1] &, First[mya], Rest[mya]]
Not very efficient, I suspect, but two other (related) possibilities:
#[[Position[Ordering@Ordering@#[[All, 3]], 1, 1, 1][[1, 1]]]] &@mya
=>
{0, 2, 5}
Pick[#, Ordering@Ordering@#[[All, 3]], 1] &@mya
=>
{{0, 2, 5}}
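A note for future readers: Mathematica 10 (released a few months after this thread) added the built-in MinimalBy, which selects all elements of a list minimizing a given function. Assuming version 10 or later, the whole task becomes a one-liner:
MinimalBy[mya, Last]
(* {{0, 2, 5}} *)
Like the Cases-based solutions above, it returns every tied minimum; wrap it in First if a single element is wanted.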
https://www.tutorialspoint.com/what-is-the-concept-of-heinz-s-dilemma-in-kohlberg-s-theory

# What is the concept of Heinz's dilemma in Kohlberg's theory?
Kohlberg proposed that people progress in moral reasoning based on their ethical behavior. He postulated this theory by studying how children's thinking develops throughout their growth into adulthood. He conveyed that younger children make judgments based on the consequences that might occur, while older children make judgments based on their intuitions.
He believed that there are six stages of moral development, which can be classified more generally into three levels: the Pre-conventional level, the Conventional level, and the Post-conventional level. To explain this, Lawrence Kohlberg quoted an example popularly known as Heinz's Dilemma.
## The Story of Heinz
The story of Heinz, an ordinary middle-aged, middle-class man, is considered as an example. His wife suffers from a dreadful disease. Doctors believe that only a special drug, which was invented recently and is available at the BIG pharma store, can save his wife.
When Heinz went to buy the drug, the drug-seller priced it at $2,000, while the actual manufacturing cost of the drug is $20. Heinz borrowed money from friends and lenders and could finally collect only $1,000. Though Heinz pleaded a lot, the greedy drug-seller refused to sell the drug at a lower price. Now, Heinz had no option left but to steal the drug from the shop to save the life of his wife.
Now, to resolve Heinz's dilemma, the thinker has three options.
• Heinz should not steal the drug, because it is disobedience of the law.
• Heinz can steal the drug but should be punished by the law.
• Heinz can steal the drug and no law should punish him.
Each option can be answered in a way that denotes a different level of moral thinking.
• Heinz should not steal the drug, because it is disobedience of the law.
This decision leaves Heinz unable to save his wife. His wife dies and the rich drug-seller becomes richer. Though the law was obeyed, no moral justice was done.
This is a Pre-conventional level of Moral thinking.
• Heinz can steal the drug but should be punished by the law.
This decision lets Heinz save his wife, but Heinz will be kept in prison. Though Heinz took a moral decision, he had to undergo the punishment.
This is a Conventional level of Moral thinking.
• Heinz can steal the drug and no law should punish him.
This decision lets Heinz save his wife and both of them can live happily. This thinking is based on the thought that the rigidity in law should be rejected and justice should be done on moral grounds.
This is a Post-conventional level of Moral thinking.
https://learn.careers360.com/ncert/question-consider-f-defined-from-1-2-3-to-a-b-c-given-by-f-1-is-equal-to-a-f-2-is-equal-to-b-and-f-3-is-equal-to-c-find-f-inverse-and-show-that-inverse-of-f-inverse-is-equal-to-f/

# Consider f : {1, 2, 3} → {a, b, c} given by f(1) = a, f(2) = b and f(3) = c. Find f⁻¹ and show that (f⁻¹)⁻¹ = f.
Q11. Consider $f : \{1, 2, 3\} \rightarrow \{a, b, c\}$ given by $f (1) = a$, $f (2) = b$ and $f (3) = c$. Find $f^{-1}$ and show that $(f^{-1})^{-1} = f$.
$f : \{1, 2, 3\} \rightarrow \{a, b, c\}$
$f (1) = a$,$f (2) = b$ and $f (3) = c$
Let there be a function g such that $g:\left \{ a,b,c \right \} \rightarrow \left \{ 1,2,3 \right \}$
i.e. $g (a) = 1$ , $g (b) = 2$ and $g (c) = 3$
Now , we have
$(fog)(a)=f(g(a))=f(1)=a$
$(fog)(b)=f(g(b))=f(2)=b$
$(fog)(c)=f(g(c))=f(3)=c$
And,
$(gof)(1)=g(f(1))=g(a)=1$
$(gof)(2)=g(f(2))=g(b)=2$
$(gof)(3)=g(f(3))=g(c)=3$
$\therefore$ $gof = I_X \, \, \, and \, \, \, fog=I_Y$, where $X =\left \{ 1,2,3 \right \}\, and \, Y=\left \{ a,b,c \right \}$
Hence, $f^{-1}$ exists and $f^{-1}$ is g.
$f^{-1}:\left \{ a,b,c \right \} \rightarrow \left \{ 1,2,3 \right \}$
$f^{-1}(a)=1$ , $f^{-1}(b)=2$ and $f^{-1}(c)=3$
Let inverse of $f^{-1}$ be h such that $h: \{1, 2, 3\} \rightarrow \{a, b, c\}$
$h(1)=a , h(2)=b\, \, \, and \, \, \, h(3)=c$
$(goh)(1)=g(h(1))=g(a)=1$
$(goh)(2)=g(h(2))=g(b)=2$
$(goh)(3)=g(h(3))=g(c)=3$
And
$(hog)(a)=h(g(a))=h(1)=a$
$(hog)(b)=h(g(b))=h(2)=b$
$(hog)(c)=h(g(c))=h(3)=c$
$\therefore$ $goh=I_X \, \, \, and\, \, \, hog=I_Y$, where $X =\left \{ 1,2,3 \right \}\, and \, Y=\left \{ a,b,c \right \}$
Thus, $g^{-1}= h =( f^{-1})^{-1}$
It is noted that h=f.
Hence,$( f^{-1})^{-1}=h=f$.
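A brief remark on why this holds in general (not part of the textbook solution): for any invertible function $f$, $f$ itself satisfies the two identities that define the inverse of $f^{-1}$, namely $f\circ f^{-1}=I$ and $f^{-1}\circ f=I$; since inverses are unique, it follows that $( f^{-1})^{-1}=f$.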
http://www.journaltocs.ac.uk/index.php?action=browse&subAction=subjects&publisherID=60&journalID=25968&pageb=3&userQueryID=&sort=&local_page=&sorType=&sorCol=
National Academy Science Letters — Hybrid journal (it can contain Open Access articles). ISSN (Print) 0250-541X; ISSN (Online) 2250-1754. Published by Springer-Verlag.
• Trends in Depth-Wise Occurrence of Potential Fishing Zones in North Andhra
• Abstract: The potential fishing zone (PFZ) advisories for the north Andhra Pradesh coast from the Indian National Centre for Ocean Information Services, Hyderabad, were analyzed year-wise for the two years 2012–2013 (January to December) for the frequency of occurrences at different depths. Grid-wise plotting of PFZs showed that high and very high frequency grids were less abundant in near-shore coastal regions. The plotting also revealed that of the three zones compared, the continental slope area, beyond 100 m depth, has the maximum number of very high frequency hit grids, followed by the mid-continental shelf area, between 50 and 100 m depth. The frequency of occurrence of PFZs rises as the depth increases up to 200 m; beyond that, the frequencies start decreasing. There is also an increasing trend in the frequency of occurrence of PFZs moving towards higher latitudes, from Kakinada waters to Kalingapatnam waters.
PubDate: 2019-03-13
• Evaluation of Variation in Cuticular Wax Yield with Season, Solvent, and Species in Calotropis
• Abstract: The cuticle is the protective layer of the aerial parts of plants. The present study investigates the effects of species, season, solvent, size, and side of leaves on cuticular wax yield in Calotropis procera and Calotropis gigantea. Epicuticular wax was isolated using acetone and chloroform separately. Statistical data revealed that C. procera had a higher cuticular wax yield (0.1573 mg cm−2) than C. gigantea (0.1197 mg cm−2). Season and size of the leaves were observed to significantly influence wax yield in both species. However, the side of the leaf does not influence wax yield in either species.
PubDate: 2019-03-13
• First Record of Some Earthworm Species (Oligochaeta: Megadrile) from Kerala Part of Western Ghats Biodiversity Hotspot, Southwest India
• Abstract: Until now, the occurrence of 98 earthworm species has been reported from the Western Ghats biodiversity hotspot of Kerala. Most of the known species were recorded more than 80–90 years ago. A recent survey of the earthworms of the state has revealed the presence of two more species, namely Octolasion tyrtaeum (Savigny, 1826) and Mallehulla indica Julka & Rao, 1982. Here, we discuss the details of the specimens collected, their distribution, etc., in the state.
PubDate: 2019-03-13
• Effect of Phosphorus, Sulphur and Micronutrients (Zinc and Boron) Levels on Performance of Chickpea (Cicer arietinum L.)
• Abstract: A field experiment was carried out during Rabi, 2013–2014, to study the effect of different treatment combinations on the performance of chickpea (Cicer arietinum L.). The treatments comprised three fertilizer levels—F1: 40 kg P2O5 ha−1, F2: 60 kg P2O5 + 20 kg S ha−1 and F3: 80 kg P2O5 + 40 kg S ha−1—and four micronutrient levels—M0: control, M1: 3 kg Zn ha−1, M2: spraying of boron (0.3%) and M3: 3 kg Zn ha−1 + spraying of boron (0.3%)—in a split plot design with three replications. At the higher level of fertilizer (F2: 60 kg P2O5 + 20 kg S ha−1), the yield attributes and yield performance of chickpea, along with soil properties, were found to be better irrespective of the micronutrients applied. The maximum seed yield (22.22 q ha−1) was recorded with the application of M3 (3 kg Zn ha−1 + spraying of boron (0.3%)). At every level of fertilizer, micronutrients augmented the yield attributes and yield of the crop along with soil properties. Combined application of micronutrients proved superior to their sole application with respect to yield and nutrient uptake in chickpea.
PubDate: 2019-03-13
• Studies on Establishment of a Population of Pteris vittata Linn.
• Abstract: Pteris vittata, the Brake fern, is a terrestrial, perennial, widely distributed species. It is well adapted to moist and shady as well as xeric habitats. It has been observed that during the last decade the species has become a noxious weed in Lucknow and nearby areas. During the eighties, only a few populations were present, whereas at present the species has spread across almost the entire Lucknow region and nearby areas. In the present study, the cause of this extensive colonization has been investigated. For the assessment of its wide distributional behavior and colonization ability, viable spores were sown and the stages from germination to maturation and establishment of the colony were thoroughly investigated. It was observed that mature plants were formed in 3 months, whereas colonization was achieved in about 3–4 months. The accessions studied were also cytologically investigated; the species is sexual, tetraploid with n = 58 chromosomes. The details of the developmental pattern and colonization are discussed in the present communication.
PubDate: 2019-03-11
• On Certain J-Colouring Parameters of Graphs
• Abstract: In this paper, a new type of colouring called J-colouring is introduced. This colouring concept is motivated by the newly introduced invariant called the rainbow neighbourhood number of a graph. The study considers maximal colouring as opposed to minimum colouring. An upper bound for a connected graph is presented, and a number of explicit results are presented for cycles, complete graphs, wheel graphs and complete l-partite graphs.
PubDate: 2019-03-09
• Tillage Practices and Rabi Crops Affect Energetics of Rainfed Rice-Based Cropping System of Chhattisgarh
• Abstract: In rice-based cropping systems, intensive tillage operations, which consume a huge amount of energy in the form of fuel and labor, are carried out after the harvesting of rice for growing the next crop. Modification of tillage practices may not only reduce energy consumption but could also make the system more dynamic and efficient. The present study, involving four tillage practices and six different rabi crops, was undertaken in a strip plot design with three replications to understand the effect of tillage practices and rabi crops on the energetics of a rainfed rice-based cropping system. The results clearly demonstrated that zero tillage direct drilling of seeds on the 2nd day after harvesting (DAH) of rice with toria, and minimum tillage with line sowing of seeds on the 3rd DAH of rice with safflower, recorded 40% less energy input and 59% more energy output, respectively, than the farmers' practice of broadcasting seeds and fertilizers on the 12th DAH of rice with safflower. Among the tillage practices, zero tillage direct drilling of seeds on the 2nd DAH of rice recorded 63 and 74% higher energy productivity and energy intensity, respectively, over the farmers' practice. Among the rabi crops, significantly higher energy productivity, energy intensity and net energy (0.84 kg MJ−1, 6.74 MJ Rs−1 and 66.72 × 10³ MJ ha−1, respectively) were recorded under safflower. With higher energy productivity and intensity, ZT direct drilling of seeds on the 2nd DAH of rice, with safflower, was found best for the energetic management of the rainfed rice-based cropping system of Chhattisgarh.
PubDate: 2019-03-08
• Antifungal Activity of Some Ethnomedicinally Important Tuberous Plants of Family Liliaceae
• Abstract: Fungal spores are often present in air and soil and may cause internal as well as external infections. Phytochemicals extracted from plants of the family Liliaceae have been identified as having antifungal and antibiotic properties. The present study deals with the antifungal activity of crude leaf extracts of five species of Asparagus L. and four species of Chlorophytum Ker. Gawl. of the family Liliaceae, using different polar and non-polar solvents: methanol, petroleum ether and acetone. The effects of the different plant extracts were tested on a yeast (Candida albicans) and a mold (Aspergillus niger) using potato dextrose agar medium by the agar well diffusion method. The zone of inhibition produced by the different plant extracts was calculated. Among these plants, Chlorophytum borivilianum Santapau and Fernandes., Chlorophytum tuberosum Baker. and Asparagus racemosus Willd. showed maximum antifungal activity against the two clinical fungi Candida albicans and Aspergillus niger. Of the three solvents, the acetone extract showed the most significant reduction, while methanol showed the least reduction, in the growth of these two opportunistic fungi. Streptomycin was used as the control drug for the antifungal studies.
PubDate: 2019-03-08
• Attenuation Effect as a Tool to Explain sp3 Carbon (–CH2–) is a Good Electron Insulator and sp2 Carbon (–CH=CH–) is a Good Electron Transmitter: An Undergraduate 1-h Chemistry Classroom Tutorial
• Abstract: A physical basis of chemical reactivity in organic molecules is the set of electronic effects exerted by substituents that govern the rate of a given reaction. This is known as the "substituent effect." This concept was first developed by Hammett in the form of a linear free-energy relationship (LFER), popularly known as the "Hammett equation." This substituent effect generally attenuates in an exponential manner as the distance between the reaction center and the substituent increases, as developed by Williams (Free-energy relationships in organic and bioorganic chemistry, Royal Society of Chemistry, Cambridge, 2003) in the form of an empirical exponential equation. Using the Hammett equation and with the help of Williams' (2003) explanation of the attenuation effect, we have tried to explain why an sp3 carbon is a good σ-electron insulator and an sp2 carbon is a good π-electron transmitter.
PubDate: 2019-03-08
• Cu2O/Nano-CuFe2O4 as a Magnetically Recoverable Catalyst for Ligand-Free Synthesis of Imidazo[1,2-a]Pyridines and 3-Aroylimidazo[1,2-a]Pyridines
• Abstract: Cu2O/nano-CuFe2O4 was found to be an efficient and magnetically separable heterogeneous catalyst for the solvent-free synthesis of imidazo[1,2-a]pyridine derivatives. This nano-magnetic composite was also extended as an efficient and recoverable catalyst for the synthesis of 3-aroylimidazo[1,2-a]pyridine derivatives using air as the green oxidant under ligand- and additive-free conditions. Readily available, inexpensive starting materials, a simple procedure, short reaction times, ease of preparation of the catalyst, the stability of the catalyst in air and its compatibility with a wide variety of substrates are merits of the presented methodology. Furthermore, the catalyst was easily separated by an external magnet; it was recovered and reused five times without significant loss of catalytic activity.
PubDate: 2019-03-08
• Effects of Thidiazuron (TDZ) on Direct Shoot Organogenesis of Gymnocladus assamicus: A Threatened and Critically Endangered Species from Northeast India
• Abstract: An efficient morphogenic protocol was developed for direct shoot organogenesis of Gymnocladus assamicus, an IUCN Red List threatened and critically endangered species from Northeast India. This species is used as a leech repellent for domestic animals, its seedpods as detergent, and its roasted seeds as a substitute for coffee and groundnut. The wild population is rapidly shrinking due to various anthropogenic pressures and poor regeneration. Therefore, the present study was taken up to assess the morphogenic potential through direct shoot organogenesis, which has not been reported previously. Cotyledonary nodal explants showed 100% response in Murashige and Skoog (MS) medium fortified with 0.75 mg L−1 thidiazuron (TDZ) alone or in combination with 1 mg L−1 IBA, in comparison with the other combinations tested. The cotyledonary node was found to be the best source of explant, producing 10.80 ± 0.39 shoots per explant. Further, shoots were transferred to proliferation and elongation medium fortified with 0.25 mg L−1 TDZ in MS medium, which produced 12.06 ± 0.31 shoots per explant. MS medium fortified with 1.5 mg L−1 IAA showed the highest root induction frequency (76%) with mean root number 2.03 ± 0.19 and root length 3.26 ± 0.27 cm. The micropropagated plantlets were transferred to soil after acclimatization with a 68% success rate.
PubDate: 2019-03-08
• Reproductive Behaviour of Lemon (Citrus limon Burm.) Affected by Different Pruning Intensities and Integrated Nutrient Management Under Various Growing Seasons
• Abstract: The main objective of this study was to determine the reproductive behaviour of lemon (Citrus limon Burm.) as affected by different pruning intensities and integrated nutrient management under various growing seasons. The experiment was laid out in a two-factorial randomized block design with four levels of pruning and seven levels of nutrients, consisting of the recommended dose of fertilizers (RDF) and different combinations of organic manure (vermicompost), inorganic fertilizer, biofertilizer (Azotobacter) and mycorrhiza (VAM), and their interaction, to study their effect on plant reproductive behaviour during 2013–2015 on 9-year-old lemon plants in three growing seasons. The investigation revealed that the reproductive parameters, viz. number of flowers per plant, fruit set percentage and fruit yield, were highest in lightly pruned plants fed with 75% RDF + Vermicompost + Azotobacter + Vesicular Arbuscular Mycorrhiza at Ambe, Mrig and Hasth bahar, respectively. Among the three cropping seasons, Ambe bahar recorded the best result with respect to yield, followed by Mrig and Hasth bahar.
PubDate: 2019-03-08
• Production and Identification of Omega-6 Fatty Acid (11,14-Eicosadienoic
Acid) Using Fungi as a Model
• Abstract: Production of essential fatty acids from fungi is an attractive topic in the field of biotechnology, so this study focuses on the production of 11,14-eicosadienoic acid, which is considered an essential fatty acid. This fatty acid is produced when fungi are grown in nitrogen-limiting media. Penicillium chrysogenum, Rhizopus stolonifer, and Trichoderma harzianum are the keystones of this study. P. chrysogenum strains were isolated from Iraqi soil and set aside in the Biology species Bank, Science College for Women, Baghdad University; R. stolonifer and T. harzianum strains were isolated from Iraqi soil and set aside in the Biology species bank, College of Science, University of Kufa. These species have been identified by Dr. Mohammad Mohsien Abdulhusien Alrufae. Penicillium chrysogenum, Rhizopus stolonifer, and Trichoderma harzianum were cultivated on media used for lipid production with a limited nitrogen source and an excessive carbon source. Batch culture was the mode of cultivation used for fungal growth. Biomass for the P. chrysogenum culture was 10 g/l with a total lipid content of 4.18%. There is no evidence for the production of 11,14-eicosadienoic acid in this fungus; in contrast, R. stolonifer and T. harzianum showed high concentrations of 11,14-eicosadienoic acid (16.8% and 16%, respectively). The Rhizopus stolonifer culture showed 13 g/l biomass with 6% lipid from biomass, whereas Trichoderma produced 8 g/l biomass and 6.24% total lipid content from the biomass. Rhizopus stolonifer and Trichoderma harzianum could be considered alternative sources for omega-6 fatty acids (11,14-eicosadienoic acid).
PubDate: 2019-03-07
• Equivalence of Planar Čech Nerves and Complexes
• Abstract: This article introduces proximal Čech nerves and Čech complexes, restricted to finite, bounded regions K of the Euclidean plane. A Čech nerve is a collection of intersecting balls. A main result of this article is an extension of the Edelsbrunner–Harer Nerve Theorem for Čech nerves and Čech complexes.
PubDate: 2019-03-07
• An Optimized Energy Saving Model for Hybrid Security Protocol in WMN
• Authors: R. Regan; J. Martin Leo Manickam
Abstract: Wireless mesh network (WMN) is an emerging field of research with a large number of applications and associated constraints. WMN is used as a new wireless broadband network structure which is completely based on IP technologies. It has the ability to produce high speed and a wide area of coverage, and it also provides a high capacity for handling nodes. Security and privacy (helping to authenticate messages, identify valid nodes and remove malevolent nodes) are two major problems in WMN. Unfortunately, in mesh networks most privacy-preserving schemes are vulnerable to attacks. The most dangerous attack to be noted in mesh networks is the node impersonation attack, which makes them more insecure. The mesh routers and clients play a vital role in mesh networks, where they act as a backbone and help the mesh networks achieve their target in an efficient way. An important factor in the wireless mesh network is to provide a trusted handoff between the nodes, which requires effective access authentication. This area can be considered vulnerable, and there is a chance of attacks which make the network unstable. Achieving seamless handoff is a complex case in every dynamic heterogeneous wireless mesh network, because providing security for such a structure is very difficult and the existing procedures for securing heterogeneous networks give protection only against certain types of attacks. In this paper, we use an optimization algorithm for finding the best positions for deploying mesh routers and develop a hybrid and secured model for detecting the node impersonation attack by combining ECDSA with CHAP. We also show how our proposed model can handle throughput, authentication delay, etc., without facing problems such as energy consumption and delay.
PubDate: 2019-02-12
DOI: 10.1007/s40009-019-0789-4
• Polyfunctional Application on Modified Cotton Fabric
• Authors: Ramasamy Rajesh Kumar; Kumanan Bharathi Yazhini; Halliah Gurumallesh Prabu; Zhou Qixing
Abstract: A simple and facile method for fabricating cotton fabric with flame retardancy is adopted in the present work. This study deals with crosslinking of cotton fabric using different polycarboxylic acids, such as citric acid and 1,2,3,4-butane tetracarboxylic acid, and different catalysts, such as sodium hypophosphite and sodium propionate, through the conventional pad-dry-cure method for flame-retardant application. The results exhibited moderate waterproofing durability and flame retardancy of the cotton fabric after treatment, offering a good opportunity to accelerate the large-scale production of textile materials for new industrial applications.
PubDate: 2019-02-11
DOI: 10.1007/s40009-019-00793-2
• Entanglement of the Non-Gaussian Two-Mode Quantum Vortex State
• Authors: Vikram Singh; Devendra Kumar Mishra
Abstract: We study the entanglement properties of a non-Gaussian two-mode vortex state that was theoretically proposed by Agarwal [New J Phys 13:073008 (2011)] by using the technique of photon subtraction from a two-mode squeezed state and detection of one photon by a single-photon detector. There are different conditions to quantify the entanglement of non-classical states. We compare the entanglement conditions for this state in terms of the Hillery–Zubairy (HZ) criterion, Hillery–Dung–Zhong (HDZ) criterion, Shchukin–Vogel (SV) criterion, and Duan–Giedke–Cirac–Zoller (DGCZ) criterion. We confirm that this non-Gaussian state shows strong entanglement under these different conditions, thus suggesting that this state may have potential applications in quantum information processing.
PubDate: 2019-02-11
DOI: 10.1007/s40009-018-0762-7
• Square Signed Graph
• Authors: Deepa Sinha; Deepakshi Sharma
Abstract: The square graph $$G^2$$ of a graph $$G=(V,E)$$ is a graph with the same vertex set as G, in which two vertices are adjacent in $$G^2$$ when their distance in G is at most two. In this paper, we characterize signed graphs (or sigraphs) that are square root signed graphs of some signed graph. Also, we determine whether, for a given signed graph, its square signed graph and the line graph of its square signed graph are balanced. Each theorem is supported by a respective algorithm.
PubDate: 2019-02-11
DOI: 10.1007/s40009-018-0781-4
• Relationship Between Randić Index, Sum-Connectivity Index, Harmonic Index and π-Electron Energy for Benzenoid Hydrocarbons
• Authors: H. S. Ramane; V. B. Joshi; R. B. Jummannaver; S. D. Shindhe
Abstract: The relationship between Randić index, sum-connectivity index, harmonic index and π-electron energy of some benzenoid hydrocarbons is obtained.
PubDate: 2019-02-11
DOI: 10.1007/s40009-019-0782-y
• Leaves in a Particular Class of Trees
• Authors: Mehri Javanian
Abstract: In this paper, we investigate leaves of random paged digital search trees, an important generalized version of digital search trees.
PubDate: 2019-02-11
DOI: 10.1007/s40009-018-0778-z
https://physics.stackexchange.com/questions/36079/unit-of-torque-with-radians | # Unit of torque with radians?
Usually, the angular frequency $\omega$ is given in $\mathrm{1/s}$. I find it more consistent to give it in $\mathrm{rad/s}$. The angular momentum $L$ is then given in $\mathrm{rad \cdot kg \cdot m^2 / s}$.
However, the relation for torque $\tau$ says: $$\tau \cdot t = L$$
So the torque should not be measured in $\mathrm{N \cdot m}$ but $\mathrm{rad \cdot N \cdot m}$. Would that then be completely consistent?
• More on radians: physics.stackexchange.com/q/33542/2451 and links therein. Sep 10, 2012 at 19:34
• Actually the unit of angular momentum, using radians, is $\mathrm{kg\cdot m^2/s/rad}$.
– alfC
Dec 27, 2015 at 7:56
• ...although I am getting to the conclusion that in a consistent radian system $\mathrm{rad^2} = 1$ and $\mathrm{rad} = 1/\mathrm{rad}$. In the same way that the product of two pseudovectors is a vector.
– alfC
Dec 27, 2015 at 8:49
• The unit of torque is joules per radian, which is technically equal to $\mathrm{N\cdot m/rad}$. Sep 12, 2020 at 18:19
OP wrote (v1):
So the torque should not be measured in N⋅m but rad⋅N⋅m. Would that then be completely consistent?
No, that would not be consistent with the elementary definition of torque $\vec{\tau}=\vec{r} \times \vec{F}$ as a cross-product between a position vector $\vec{r}$ and a force vector $\vec{F}$.
An angle in radians is the ratio between the length of a circle arc and its radius, and is therefore dimensionless.
For instance, the angular version $\tau = I \alpha$ of Newton's 2nd law is only true (without an extra conversion factor) if the angle behind the angular acceleration $\alpha$ is measured in radians.
However, it should be mentioned that due to the formula
$$W~=~\int \tau ~d\theta,$$
for angular work, torque can be viewed as energy per angle, i.e., the SI unit of torque is also joules per radian. See also this Wikipedia page and this Phys.SE question.
• Where can I get another $\mathrm{rad}$ from? Or is that the reason one does not use $\mathrm{rad}$ in those contexts? Sep 10, 2012 at 15:39
• You would have to put a conversion coefficient with a value of one radian in front of $\vec{r}\times\vec{F}$. It would be possible to reformulate all the equations of physics in this way to explicitly include radians, but it would make things messier than they are. Sep 10, 2012 at 18:08
• @queueoverflow: $\mathrm{rad}$ is not a unit like meter or second. It is basically $1$. You can multiply anything with $1$ or $\mathrm{rad}$ without changing its meaning. My advice is never to use $\mathrm{rad}$. It is more confusing than helpful. Sep 11, 2012 at 1:20
• @DavidZaslavsky: Okay, that makes completely sense. Oct 18, 2012 at 15:17
• In some sense the unit "rad" gives the information that the quantity is a pseudovector. It is "generated" by the $\times$ operation.
– alfC
Dec 27, 2015 at 2:27
Anthony French of MIT, in a private communication to me years ago, finally got me to understand when to write radians as a unit and when to omit it. Here is the answer.
If the quantity in question has a numerical value that depends on whether the angular unit is expressed in degrees, radians, revolutions, or something similar, then explicitly include the appropriate unit. If the quantity's numerical value does NOT depend on the angular unit, then omit the angular unit. As an example, consider angular velocity and linear velocity. Angular velocity's numerical value depends on whether one uses degrees or radians: $50\;{}^\circ/\mathrm{s}$ isn't the same as $50\;\mathrm{rad/s}$. Linear velocity, though, has a numerical value that is independent of any angular unit, so when we calculate $v = \omega r$ we never write $\frac{\mathrm{rad} \cdot \mathrm{m}}{\mathrm{s}}$ as the unit. We simply write $\mathrm{m/s}$.
• I'd say the last example only works because there is an implicit 1/rad on the right side that converts radius to circumference. Sep 10, 2012 at 17:22
• There is only one rad on the right hand side, and it appears explicitly in the unit of $\omega$. The resulting product, linear velocity, has a value that can be measured with only a calibrated stick and a clock, with no regard for angular units.
– user11266
Sep 10, 2012 at 19:40
• Actually, the result in your example is $m/s$ no matter what. The reason is that $\vec v = \vec\omega \times \vec r$ and the $\times$ operation is the one that eliminates the radian units.
– alfC
Dec 27, 2015 at 8:46
Rotational work is not torque times angle. It is torque times (angle in rad) = torque $$\times$$ (the number of radians in the angle). Torque has been understood (for millennia) to be what would be called $$\mathbf{r}\times\mathbf{F}$$ today. The dimension is length $$\times$$ force or (mass $$\times$$ length-squared)/(time-squared), which is the same as the dimension of energy. To distinguish torque from energy, we give energy in units of Joules and torque in units of newton-metres (never Joules).
• Hello! It is preferable to use MathJax (LaTeX) to display formulas. You can find a tutorial at MathJax basic tutorial and quick reference. Please edit your answer accordingly. Thanks! Apr 6, 2021 at 21:33
I came across this question when doing numerics with the Python package pint, where angles can be specified in $$\rm cycles$$, $$\rm rad$$ and $$\rm deg$$ (and some aliases, such as $$\rm turns$$, $$\rm revolutions$$).
... and then I ran exactly into this situation: I needed to calculate an angular acceleration from a torque. That should be, for constant moment of inertia $$I$$,
$$\frac{d\omega}{dt} = M/I$$
but when you think of angles as a quantity with dimension - in my case angular velocities given in $$\rm revolutions~per~minute~(rpm)$$, it would be a unit mismatch.
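For concreteness, a minimal sketch of this bookkeeping with pint (my own snippet, not part of the original post; it assumes pint's default registry, where a revolution/turn is $$2\pi$$ radians, the radian itself is dimensionless, and rpm is a predefined alias):
import pint
ureg = pint.UnitRegistry()
omega = 300 * ureg.rpm                     # angular velocity in revolutions per minute
print(omega.to(ureg.rad / ureg.s))         # ~31.4 rad/s, conversion factor 2*pi/60
M = 2.0 * ureg.N * ureg.m                  # torque
I = 0.5 * ureg.kg * ureg.m**2              # moment of inertia
alpha = (M / I).to(ureg.rad / ureg.s**2)   # succeeds because pint treats rad as dimensionless
print(alpha)                               # 4.0 rad/s**2: the "missing" rad appears for free
So pint resolves the mismatch exactly along the lines discussed here: it quietly treats the radian as the number 1.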
Ultimately such things come down to conventions. If we argue that there is a natural unit of something, we'd end up not needing units at all; for instance we don't need the meter, we can just use light-seconds as the basic unit of length. One $$\rm meter$$ would then be roughly $$3.335~\rm nanoseconds$$.
And indeed similar situations exist. In physics, unit systems with 3 base units for length, time and mass are common, as opposed to the 7 base units of SI. The unit of current is eliminated by saying that two unit charges at rest at a distance of one unit length exert one unit of force on each other by the Coulomb law, which gives the charge a fractional dimension of $$\rm (mass)^{1/2} (length)^{3/2} (time)^{-1}$$.
So why have units at all? I'd say it comes down to something similar to "type safety" in programming. When you add a time and a length, you typically rightfully get suspicious. When you expect a velocity, but get a mass - likewise.
Now, in the equation above, should be add the angle units somewhere? Should we add $$\rm rad$$ to the torque? Probably not, because omitting units by deciding on a natural unit is not uniquely reversible. We don't know if we should introduce $$\tilde\omega = \omega/\rm rad$$, $$\tilde M = M\rm rad$$, $$\tilde I=I/\rm rad$$ or a mixture of all of them with fractional powers.
Also, at this point we have to ask ourselves: Are we looking at an angular velocity given in $$\rm rad/s$$, or is it $$\rm cycles/s$$? Both constitute perfectly natural units of angular velocity, though $$\rm cycles/s$$ is commonly written as $$\rm Hz~(Hertz)$$, similar to the distinction of $$\rm Joule$$ for energy and the technically equivalent $$\rm Nm$$ for torques.
Such problems are quite common when working with literature that uses different unit systems (e.g. one of the various electrodynamic unit systems with 3 base units, vs SI). For instance the unitless dielectric susceptibility $$\chi$$ differs by a factor of $$4\pi$$ across unit systems; this factor essentially comes down to whether we write the Coulomb law as $$F = \frac{q_1 q_2}{4\pi r^2}$$ or $$F = \frac{q_1 q_2}{r^2}$$.
The only special thing about angles is that their natural units occur in geometry, without insights into laws of nature. But given how easy it is to mix up cycles, radians, and degrees (e.g. between the frequency quantities $$\omega$$ and $$f$$), maybe "angle" has as much a right to be a base quantity as "current".
http://tex.stackexchange.com/questions/66798/offset-color-block-behind-section-titles | # Offset color block behind section titles
Joseph Wright's CV source contains a snippet of code that produces an offset color block behind section titles using the titlesec package, like so:
\documentclass[11pt]{article}
\usepackage{lipsum,xcolor}
\usepackage{titlesec}
\titleformat{\section}{\Large}{}{0 em}
{%
\begingroup
\color{gray!30}%
\titleline{\leaders\hrule height 0.6 em\hfill\kern 0 pt\relax}%
\endgroup
\nobreak
\vspace{-1.2 em}%
\nobreak
}
\begin{document}
\section{Lorem Ipsum}
\lipsum[6]
\end{document}
However, when section numbering is added via \thesection in the \titleformat command, this nice effect seems to break:
How could this code be modified such that section numbering doesn't cause any alignment issues? It'd be nice if this could be achieved while still using the titlesec package.
Here's one possibility, using the explicit option for titlesec (so that the title text is available as #1 and can be overprinted on the rule); the example shows the definitions needed for numbered and unnumbered sections:
\documentclass[11pt]{article}
\usepackage{lipsum,xcolor}
\usepackage[explicit]{titlesec}
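% 'explicit' makes the title text available as #1 in \titleformat's last argument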
\titleformat{\section}{\Large}{}{0em}
{%
\begingroup
\color{gray!30}%
\titleline{\leaders\hrule height 0.6 em\hfill\kern 0 pt\relax}%
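% the grey rule is drawn first; the next line backs up over it and overprints the number and title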
\endgroup\vskip-1.2em\thesection\hskip0.5em#1
\nobreak
}
\titleformat{name=\section,numberless}{\Large}{}{0em}
{%
\begingroup
\color{gray!30}%
\titleline{\leaders\hrule height 0.6 em\hfill\kern 0 pt\relax}%
\endgroup\vskip-1.2em#1
\nobreak
}
\begin{document}
\section{Lorem Ipsum}
\lipsum[6]
\section*{Lorem Ipsum}
\lipsum[6]
\end{document}
https://www.physicsforums.com/threads/scale-invariant-inverse-square-potential.907106/ | # A Scale invariant inverse square potential
1. Mar 10, 2017
### hilbert2
Yesterday, I was thinking about a problem I had encountered many years before, the central force problem with a $V(r) \propto r^{-2}$ potential...
If we have a Hamiltonian operator
$H = -\frac{\hbar^2}{2m}\nabla^2 - \frac{A}{r^2}$
and do a coordinate transformation $\mathbf{r} \rightarrow \lambda \mathbf{r}$, it's easy to see that if $\psi (x,y,z)$ is an eigenfunction of that $H$, then also any scaled function $\psi (\lambda x, \lambda y, \lambda z)$ is, but with a different eigenvalue and normalization.
In the classical mechanical case, you can make the scaling $\mathbf{x} \rightarrow \lambda \mathbf{x}$ and $\mathbf{p} \rightarrow \mathbf{p}/\lambda$ to turn one possible phase space trajectory of the orbiting point mass in this potential into another possible trajectory.
Questions:
1. Does the quantum inverse square potential system really have a continuum spectrum of eigenfunctions, as it seems here? Why is this different from the situation with a hydrogen atom?
2. How could I explain, as simply as possible, why a scaling $x \rightarrow \lambda x$ requires a simultaneous scaling $p \rightarrow p/\lambda$ in the classical mechanical case? In the quantum problem this is obvious because the momentum operator $p_x$ contains a differentiation with respect to x, but it seems to be more difficult to explain in classical mechanical terms.
p.s. don't confuse this with inverse square force...
2. Mar 10, 2017
### strangerep
I'm not sure why you'd think it wouldn't be different from the H-atom (or classical Kepler) case, since it's a different Hamiltonian.
(Btw, classical Kepler does have a dilation-like symmetry, but it's non-uniform between space and time. See Kepler's 3rd law.)
A symmetry must preserve something, else it's not a symmetry. In the classical case, one typically wants to preserve the Poisson bracket, so we consider only canonical transformations that do so.
Alternatively, one could consider symmetries as preserving the Hamilton equations of motion, but then one must introduce a corresponding scale transformation for time.
Last edited: Mar 10, 2017
3. Mar 11, 2017
### hilbert2
Thanks. Usually bound states form a discrete spectrum; I guess the 1/r2 potential is a "falling to center" problem where the only bound state is one in which the orbiting point mass is perfectly localized at the origin...
4. Mar 11, 2017
### jostpuur
If $\psi(\vec{x})$ is a solution with an energy $E$, then $\overline{\psi}(\vec{x})=\psi(\lambda\vec{x})$ is a solution with an energy $\overline{E}=\lambda^2 E$.
Edit: I'm removing my conclusions because they contained a mistake related to the sign of the energy.
Last edited: Mar 11, 2017
5. Mar 11, 2017
### hilbert2
But there seems to be nothing in the scaling property that wouldn't allow $\lambda$ to be an imaginary number, in which case you could make wavefunctions with arbitrarily large negative energies.
EDIT: I guess something like $V(r) \propto -e^{A/r}$ would lead to falling-to-center behavior because there's an essential singularity, but I'm not sure.
6. Mar 11, 2017
### jostpuur
There are a lot of problems in using imaginary values for $\lambda$, and there is no reason to assume that you could change the sign of the energy by some change of variable.
7. Mar 11, 2017
### strangerep
But things get quite weird if you allow imaginary $\lambda$. Position and momenta become imaginary. And time (which, afaict, must scale as $\lambda^2$ to preserve Hamilton's equations) would reverse.
It might be interesting to explore the larger dynamical symmetry group for this problem. I've already worked through it for classical Kepler, so maybe I can adapt my computations.
8. Mar 12, 2017
### jostpuur
It is a common belief that localized states come with a discrete spectrum with energy levels below the background flat potential, and non-localized states with a continuum spectrum with energy levels above the background flat potential. If this $\frac{1}{r^2}$ potential debunks that belief as false, or gets close to debunking it by introducing some subtleties, it would be quite interesting.
I checked how far I could get by old fashioned technical PDE solving.
$$\Big(-\frac{\hbar^2}{2m}\nabla^2 - \frac{A}{\|\vec{x}\|^2}\Big)\psi(\vec{x}) = E\psi(\vec{x})$$
If we substitute attempt $\psi(\vec{x})=\xi(\|\vec{x}\|)$, the PDE will be solved for $\vec{x}\neq 0$ when
$$-\frac{\hbar^2}{2m}\Big(\frac{2}{r}\xi'(r) + \xi''(r)\Big) - \frac{A}{r^2}\xi(r) = E\xi(r)$$
is solved for $r>0$. If we substitute $\xi(r)=\frac{f(r)}{r}$, then some terms cancel, and equation
$$\frac{2}{r}\xi'(r) + \xi''(r) = \frac{f''(r)}{r}$$
turns out to be true. The differential equation for $f$ can be written as
$$f''(r) = \Big(\alpha + \frac{\beta}{r^2}\Big)f(r)$$
The case $E<0$ is the same as $\alpha>0$, and $A>0$ is the same as $\beta<0$, so these cases are of the most interest.
Does anyone have any idea about the solutions for $f$?
Last edited: Mar 12, 2017
9. Mar 12, 2017
### jostpuur
As usual, the special cases $\alpha=0$ and $\beta=0$ are easier. The $\beta=0$ case has no relevance, but $\alpha=0$ is equivalent to $E=0$, and there is nothing wrong with this energy, so we get a few special solutions this way. Returning to the original notation in three dimensions, we can state the result in the form that if we set
$$\psi(\vec{x}) = \|\vec{x}\|^{-\frac{1}{2} \pm \sqrt{\frac{1}{4} - \frac{2mA}{\hbar^2}}}$$
then this will be a solution to the PDE with energy $E=0$.
Unfortunately the scaling cannot be used to generate more solutions out of these, because the energy level remains constant, and the normalization factor will cancel changes to the wave function. These are some special solutions, anyway, and they are better than having nothing.
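(As a quick check, not from the thread, using the standard identity $\nabla^2 r^s = s(s+1)r^{s-2}$ for radial powers in three dimensions: substituting $\psi = r^s$ into $H\psi = 0$ gives)
$$-\frac{\hbar^2}{2m}\,s(s+1) - A = 0 \quad\Longrightarrow\quad s = -\frac{1}{2} \pm \sqrt{\frac{1}{4} - \frac{2mA}{\hbar^2}},$$
in agreement with the exponent above.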
10. Mar 12, 2017
### hilbert2
Yeah, that's true. For some reason I had the idea that central force scattering states would have a $\psi (r) \propto e^{ikr}$-like behavior and that you could make those normalizable with $\mathbf{x} \rightarrow i\mathbf{x}$, but it's not like that after all.
11. Mar 12, 2017
### hilbert2
12. Mar 12, 2017
### jostpuur
It is possible to guess and foresee how the eigenstates of this Hamiltonian are going to behave. You can compare them to the plane waves on a flat potential. The plane waves come with continuum energy levels, and they are not localized, but it is still possible to write localized wave packets out of them with continuous linear combinations (integrals). In the same way, the eigenstates of this Hamiltonian are not going to be localized. It could be that the eigenstates will look a little bit localized, because they might approach zero as $\|\vec{x}\|\to\infty$, but actually they are going to approach zero so slowly that they will not really be localized states. Despite these eigenstates not being localized, it will be possible to write localized wave packets out of them with continuous linear combinations (integrals). Then the time evolution of many of these wave packets is going to be such that they get sucked into the origin like into "a black hole".
13. Mar 12, 2017
### hilbert2
I'll probably try to do a Crank-Nicolson or diffusion Monte Carlo integration of the radial Schrödinger equation in imaginary time some day, using a Gaussian initial state $\psi (r,t_0 ) = A\exp (-Br^2 )$, to see whether all the probability density collapses to the origin as $s = it \rightarrow \infty$. I just need to use a potential $\frac{1}{r^2 + \delta}$, where the delta is some very small number, to prevent division by zero.
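A minimal sketch of that imaginary-time Crank-Nicolson integration (my own illustration, with $\hbar = m = 1$, $u(r) = r\psi(r)$, and made-up values for $A$, $\delta$, the grid and the step sizes):
import numpy as np

# grid, regularized potential, and Hamiltonian for u(r) = r*psi(r), Dirichlet ends
N, rmax = 400, 20.0
r = np.linspace(rmax / N, rmax, N)   # avoid r = 0
dr = r[1] - r[0]
A, delta = 0.5, 1e-4                 # A > 1/4: the "falling to center" regime
main = 1.0 / dr**2 - A / (r**2 + delta)
off = -0.5 / dr**2 * np.ones(N - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# Crank-Nicolson in imaginary time: (I + ds/2 H) u_new = (I - ds/2 H) u_old
ds = 1e-3
Aop = np.eye(N) + 0.5 * ds * H
Bop = np.eye(N) - 0.5 * ds * H

u = r * np.exp(-r**2)                # Gaussian initial psi, so u = r*psi
for _ in range(2000):
    u = np.linalg.solve(Aop, Bop @ u)
    u /= np.sqrt(np.sum(u**2) * dr)  # renormalize after each step

print(np.sum(u * (H @ u)) * dr)      # <H>: drops as the density piles up near r = 0
For fixed $\delta$ the energy settles at the (very negative) ground state of the regularized potential, and it dives further as $\delta$ is reduced, which is the falling-to-center behavior discussed above.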
14. Mar 12, 2017
### hilbert2
Write "DSolve[D[f[r],r,r]==(a+b/r^2)*f[r],f,r]" in Wolfram Alpha or Mathematica, and you get a solution with BesselJ and BesselY functions in it. The BesselY functions are singular at the origin, so they're usually not valid forms of a wave function, but that could be an artifact of the falling-to-center property.
15. Mar 12, 2017
### Orodruin
Staff Emeritus
First of all, let $\hbar = 1$. The Hamiltonian is then of the form
$$H = -\frac{1}{2m} \nabla^2 - \frac{A'}{r^2} \quad \Longrightarrow \quad 2m H = -\nabla^2 - \frac{A}{r^2}$$
where $A = 2mA'$ has been introduced for brevity of notation. We are looking at the problem of finding eigenvalues to this operator, i.e., $2m H \psi = \lambda \psi$. Using variable separation leads to $\psi = R(r) Y_{\ell}^m (\theta, \varphi)$, where $Y_{\ell}^m$ is a spherical harmonic. Insertion into the eigenvalue equation then gives us
$$- R''(r) - \frac{2R'(r)}{r} + \frac{\ell(\ell + 1)}{r^2} R(r) - \frac{A}{r^2} R(r) - \lambda R(r) = 0$$
or, more compactly,
$$- R''(r) - \frac{2R'(r)}{r} + \frac{A_\ell}{r^2} R(r) - \lambda R(r) = 0,$$
where $A_\ell = \ell(\ell + 1)-A$. Using the already mentioned substitution $R(r) = f(r)/r^{1/2}$, we find that
$$R'(r) = \frac{f'(r)}{r^{1/2}} - \frac{1}{2}\frac{f(r)}{r^{3/2}}, \quad R''(r) = \frac{f''(r)}{r^{1/2}} - \frac{f'(r)}{r^{3/2}} + \frac{3}{4} \frac{f(r)}{r^{5/2}}.$$
Inserted into the differential equation for $R(r)$, this leads to
$$r^2 f''(r) - r f'(r) + \frac{3}{4} f(r) + 2r f'(r) - f(r) - A_\ell f(r) + \lambda r^2 f(r) = r^2 f''(r) + r f'(r) - \underbrace{\left(A_\ell + \frac{1}{4}\right)}_{\equiv B^2} f(r) + \lambda r^2 f(r) = 0.$$
For real $B$, this is Bessel's differential equation with the general solution
$$f(r) = c_1 J_B(\sqrt{\lambda} r) + c_2 Y_B(\sqrt{\lambda} r)$$
for positive $\lambda$. The Bessel functions of the second kind $Y_B$ are singular at the origin and we are left with $f(r) \propto J_B(\sqrt{\lambda} r)$. Now, for negative $\lambda$, we would instead have
$$f(r) = c_1 I_B(\sqrt{-\lambda} r) + c_2 K_B(\sqrt{-\lambda} r).$$
Here, $I_B$ grows unbounded as $r\to \infty$ and $K_B$ is singular at the origin, indicating that neither of these functions can be used to find normalised states. The conclusion is that there are no normalisable bound states as long as
$$B^2 = A_\ell + \frac 14 = \ell(\ell + 1) - A + \frac 14 = \left(\ell + \frac 12\right)^2 - A \geq 0,$$
which is always satisfied if $A \leq 1/4$. I dare not say what happens when $A > 1/4$.
16. Mar 12, 2017
### strangerep
Thanks for mentioning that! (I skimmed it a long time ago, but had completely forgotten about it.)
Summarizing...
It shows the importance of constructing a bona-fide self-adjoint quantum Hamiltonian on the entire domain of interest. Indeed, the superficial scaling property of this Hamiltonian leads to eigenstates with different energies, but which are not orthogonal -- so we know something is seriously wrong. (Eigenstates of a self-adjoint operator with distinct eigenvalues ought to be orthogonal.)
The choice of a self-adjoint extension to $H$ at $r=0$ breaks the ordinary (continuous) scaling invariance, but some choices allow a discrete scaling symmetry.
Apparently, in the quantum case, it "bounces" with a phase change at the singularity.
https://www.physicsforums.com/threads/transition-dipole-moment-polarized-absorption.864331/ | # Transition dipole moment - polarized absorption
Nemanja989
Hi everyone,
I am interested how is polarized light absorbed by a molecule or an atom. Unfortunately, I come to a problem in the derivation where a complex vector in a real space appears. This is something I never seen before and I do not know how to interpret it. Therefore I would like to ask you for help about this issue.
From the harmonic perturbation theory and the dipole approximation we obtain the transition rate between two states, $\vert i>$ - initial and $\vert j>$ - final, and this rate is governed by the following matrix element:
$$|<j| \frac{e}{m} \vec{e}\cdot \vec{p}|i>|^2$$
where e is the electron charge, m is the electron mass, $\vec e$ is light polarization vector (it is an unit vector, having information only about direction) and $\vec p$ is the electron momentum operator in the vector form, for more details see Woodgate's book.
Now, if one assumes that $\vec e = (\cos\alpha,\cos\beta,\cos\gamma)$, where $\alpha$, $\beta$ and $\gamma$ are the angles of the $\vec e$ to the $x$, $y$ and $z$ axes of the coordinate frame and $\vec p = (\hat{p}_x, \hat{p}_y, \hat{p}_z)$, we then obtain the following expression:
$$<j|\vec{e}\cdot \vec{p}|i> = \frac{i\Delta E}{\hbar}[\cos\alpha <j|ex|i>+\cos\beta <j|ey|i>+\cos\gamma<j|ez|i>]$$
which is equal to:
$$<j|\vec{e}\cdot \vec{p}|i> = \cos\alpha D_x+\cos\beta D_y+\cos\gamma D_z= \vec{e}\cdot \vec{D}$$
Where $\vec{D}$ is the vector proportional to the dipole moment vector.
Now this is the point where my problems start. Namely, it is obvious that $\vec{D}$ is a complex vector, and I am not sure if the next expression even has a physical meaning:
$$\vec{e}\cdot \vec{D} = |D|\cos\delta$$
Where $\delta$ is the "angle" between the polarization vector $\vec{e}$ (an exactly known direction in real space) and the complex dipole moment vector $\vec{D}$.
This is the result that I obtain from the experiment as well, but unfortunately I do not understand it well enough. I read some mathematical literature on the topic of angles between complex vectors, but I could not understand it very well. Therefore I would appreciate it if anyone could help me understand what the direction of the complex dipole moment vector would be.
On the other hand, if we for simplicity assume that the light travels along the $z$ direction, and proceed with the complex value of the $\vec{D}$ we have:
$$|<j| \frac{e}{m} \vec{e}\cdot \vec{p}|i>|^2 =$$
$$=(\cos\phi D_x + \sin\phi D_y)(\cos\phi D^*_x + \sin\phi D^*_y) =$$
$$=\cos^2 \phi |D_x|^2+ \sin^2 \phi |D_y|^2 + (D_x D^*_y+D^*_x D_y)\cos\phi \sin\phi =$$
$$=\cos^2 \phi |D_x|^2+ \sin^2 \phi |D_y|^2 + \sin2\phi \cos(\alpha_1-\beta_1) |D_x||D_y|$$
where $D_x=|D_x| e^{i\alpha_1}$, $D_y=|D_y| e^{i\beta_1}$ and $D_z=|D_z| e^{i\gamma_1}$. By comparing this result to my measurements there is a problem with the $\sin2\phi$ term, and even more explicitly it is in contradiction with the previously obtained result that has only a $\cos\delta$ dependence.
I would really appreciate if anyone could help me about this problem or even give any kind of comment.
Best!
Your expression is still completely general if you assume that ##\delta## is complex, too.
Namely, ##D=D_r+i D_i## where both ##D_r## and ##D_i## are real vectors. Then ##e \cdot D=e\cdot D_r +i\, e\cdot D_i =|D_r| \cos \delta_r+i |D_i| \cos \delta_i = |D| (|D_r|/|D| \cos \delta_r+i |D_i|/|D|\cos \delta_i)## so that ##\cos \delta=|D_r|/|D| \cos \delta_r+i |D_i|/|D|\cos \delta_i##, where ##|D|^2=|D_r|^2+|D_i|^2##.
Last edited:
Nemanja989
Thank you for your reply, I really appreciate it. In that case both expressions are equivalent:
$$\vec{D}=\vec{D}_r+i \vec{D}_i$$
with $\vec{D}_r$ and $\vec{D}_i$ being defined as:
$$\vec{D}=( |D_x|e^{i\alpha_1},|D_y|e^{i\beta_1} ,|D_z|e^{i\gamma_1})=$$
$$( |D_x|\cos\alpha_1,|D_y|\cos\beta_1 ,|D_z|\cos\gamma_1)+i( |D_x|\sin\alpha_1,|D_y|\sin\beta_1 ,|D_z|\sin\gamma_1)$$
then
$$\vec{e}\cdot\vec{D}= \vec{e} \cdot \vec{D}_r +i \vec{e} \cdot \vec{D}_i$$
and the absorption is directly proportional to the
$$|\vec{e}\cdot\vec{D}|^2= |\vec{e} \cdot \vec{D}_r|^2 + |\vec{e} \cdot \vec{D}_i|^2$$
by using the previous definitions of the angles,
$$\vec{e} \cdot \vec{D}_r = |D_x|\cos\alpha_1 \cos\alpha + |D_y|\cos\beta_1 \cos\beta+ |D_z|\cos\gamma_1 \cos\gamma$$
and
$$\vec{e} \cdot \vec{D}_i = |D_x|\sin\alpha_1 \cos\alpha + |D_y|\sin\beta_1 \cos\beta+ |D_z|\sin\gamma_1 \cos\gamma$$
and after taking squares of these expressions we get:
$$|\vec{e}\cdot\vec{D}|^2= |D_x|^2\cos^2\alpha + |D_y|^2\cos^2\beta + |D_z|^2\cos^2\gamma + 2 |D_x||D_y| \cos\alpha \cos\beta \cos(\alpha_1-\beta_1) + 2 |D_x||D_z| \cos\alpha \cos\gamma\cos(\alpha_1-\gamma_1) + 2 |D_y||D_z| \cos\beta\cos\gamma\cos(\beta_1-\gamma_1)$$
which in the case of $\gamma = 90^{\circ}$ is exactly the same as the last expression from my previous post.
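A quick numerical sanity check of that expansion (my own snippet, not from the thread; random numbers stand in for the matrix elements, and the components of ##e## play the role of ##(\cos\alpha,\cos\beta,\cos\gamma)##):
import numpy as np

rng = np.random.default_rng(0)
D = rng.normal(size=3) + 1j * rng.normal(size=3)   # complex dipole vector
v = rng.normal(size=3)
e = v / np.linalg.norm(v)                          # real unit polarization vector

lhs = abs(np.dot(e, D)) ** 2                       # |e . D|^2 computed directly

mag, ph = np.abs(D), np.angle(D)                   # |D_k| and the phases alpha_1, beta_1, gamma_1
rhs = sum(mag[k]**2 * e[k]**2 for k in range(3))
rhs += 2 * sum(mag[i] * mag[j] * e[i] * e[j] * np.cos(ph[i] - ph[j])
               for i in range(3) for j in range(i + 1, 3))
print(np.isclose(lhs, rhs))                        # True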
Then, in general, if a molecule or an atom absorbs light polarized along an arbitrary direction, the last expression in this post is the expression that I should use?
Please correct me if I am wrong.
Sounds good.
Nemanja989
Thanks!
Nemanja989
Hi again,
To me it seems that there is a problem with the previous derivation.
Let us for simplicity consider a 2D case.
$$B_{01}=\frac{\pi e^2}{\epsilon\hbar^2\omega^2_{01} m^2}|\langle\Psi_{S_1}|\vec{e}\cdot\vec{p}|\Psi_{S_0}\rangle|^2$$
and define
$$\begin{split} D_r & =\langle\Psi_{S_1}|\vec{e}\cdot\vec{p}|\Psi_{S_0}\rangle \\ & =\frac{i\Delta E}{\hbar}(\cos\phi\langle\Psi_{S_1}|x|\Psi_{S_0}\rangle+\sin\phi\langle\Psi_{S_1}|y|\Psi_{S_0}\rangle)\\ & \propto\cos\phi D_x+\sin\phi D_y\\ & =\vec{e}\cdot\vec{D}\\ &=|\vec{D}|\cos\delta \end{split}$$
Since they are bound states, ## |\Psi_{S_1}\rangle ## and ## |\Psi_{S_0}\rangle ## are real functions up to a multiplicative normalization constant, which could be a complex number. Therefore, ## D_x ## and ## D_y ## must be real values multiplied by the same complex number, which is later taken care of by the modulus squared.
Namely,
##
\begin{split}
\Psi_{S_0}=e^{i\alpha}|A|\psi_{S_0} \\
\Psi_{S_1}=e^{i\beta}|B|\psi_{S_1}
\end{split}
##
Here ## \psi_{S_0} ## and ## \psi_{S_1} ## are real functions and ## e^{i\alpha}|A| ## and ## e^{i\beta}|B| ## complex normalization constants.
##
\begin{split}
D_x=e^{i(\alpha-\beta)}|A||B|\langle\psi_{S_1}|x|\psi_{S_0}\rangle\\
D_y=e^{i(\alpha-\beta)}|A||B|\langle\psi_{S_1}|y|\psi_{S_0}\rangle
\end{split}
##
##
D_r=e^{i(\alpha-\beta)}|A||B|(\cos\phi\langle\psi_{S_1}|x|\psi_{S_0}\rangle+\sin\phi\langle\psi_{S_1}|y|\psi_{S_0}\rangle)
##
##
\begin{split}
|D_r|^2&=|A|^2|B|^2|\cos\phi\langle\psi_{S_1}|x|\psi_{S_0}\rangle+\sin\phi\langle\psi_{S_1}|y|\psi_{S_0}\rangle|^2\\
&=|A|^2|B|^2|\cos\phi D_x+\sin\phi D_y|^2
\end{split}
##
and further,
##
\begin{split}
|D_r|^2&=|A|^2|B|^2|R|^2|\cos(\phi-\theta)|^2\\
&=|A|^2|B|^2R^2\cos^2(\phi-\theta)
\end{split}
##
Where ##R^2=D_x^2+D_y^2## and ##\tan\theta=\frac{D_y}{D_x}##. Depending on the values of ## D_x## and ##D_y## we have ##\theta\in [0,2\pi]##. Although ##D_x## and ##D_y## are known only up to a complex multiplicative constant, their ratio is well defined.
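(For reference, the step above is just the standard harmonic-addition identity:)
$$a\cos\phi + b\sin\phi = \sqrt{a^2 + b^2}\,\cos(\phi - \theta), \qquad \tan\theta = \frac{b}{a}.$$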
What makes me suspicious of this result comes from a consideration of a symmetric molecule, where wavefunctions are either odd or even. In this case there is absolutely no argument that light absorption should be more pronounced along the angle ##\theta## than along ##-\theta##. And in order to have a symmetric absorption, either ##D_x## or ##D_y## needs to be zero.
I assume that this might be some bad math on my side, but so far I have not found it.
Later I was searching through the literature which deals with this subject, and found in paper ( Don L. Peterson, William T. Simpson, Polarized Electronic Absorption Spectrum of Amides with Assignments of Transitions, J. Am. Chem. Soc. 79 (1957) 2375-2382) the following argument:
"Crystal spectra must be understood as involving absorption of energy out of two independent beams along the principal directions, or, equivalently, as requiring that the light be represented as a statistical ensemble having parts polarized along the two principal directions. The weights of the two streams of photons, oppositely polarized, are given by the cosine squared law. It is believed that this phenomenon is an example of a disturbance due to the possibility of there having been a “measurement” (absorption of a photon by a crystal oscillator) thus leading to the reduction of the wave function of the light."
Results from this paper fit this argument very well, and therefore it seems to be experimentally validated. There are some newer results which use the same idea that a photon is absorbed by one or the other absorption axis.
Although this model would fit my results very well, I am a bit concerned about this argument. Namely, I am not an expert on the collapse of the wavefunction onto basis functions, and hence I would like to ask those of you here who know much more about it to share your insight with me and everyone else interested in this topic.
https://studydaddy.com/question/bus-505-week-3-discussion-2 | QUESTION
# BUS 505 Week 3 Discussion 2
This archive of BUS 505 Week 3 Discussion 2 comprises:
Describe the purpose, general requirements, and awarding of the contract in a sealed- bidding process. From the e-Activity, provide at least two examples where sealed bidding had a positive effect on the contract selection process.
Discuss two of the problems that might be encountered by an agency in producing a sealed bid, and how apparent and obvious mistakes can be addressed. From the e-Activity, provide at least two examples to support your response.
https://datascience.stackexchange.com/questions/60018/how-to-create-multiple-plot-from-a-panda-dataframe | # how to create multiple plot from a panda Dataframe
I want to plot multiple plots. The data is stored in a pandas dataframe and each row should be a separate plot. Each row has an ID (ZRD_ID), which doesn't matter, and a date (TAG), plus 24 values to be plotted.
import pandas as pd
import numpy as np
df = pd.read_csv('./Result_set_edited.csv')
df = df.drop("ZRD_ID", axis=1).drop("TAG", axis=1)
x = df.iloc[[0]]
print(df.head())
returns:
W01 W02 W03 W04 W05 ... W20 W21 W22 W23 W24
0 72616 156076 141025 72629 72631 ... 0 0 0 0 0
1 67114 171650 139920 67291 67292 ... 172924 93511 72445 72445 72445
2 66893 161919 134041 66913 66911 ... 166244 86672 67114 67120 67124
3 66603 171297 134227 66615 66631 ... 166078 86622 66871 66877 66879
4 66759 167198 133523 67126 67128 ... 163999 74525 66562 66568 66574
To start easier, since I am really new to this, I thought of plotting the first row alone first.
Since the columns are named 'W01', 'W02', ..., 'W24' I thought I could use them as labels for the x-axis. I just didn't find a way to do so, since it's the header of the df I guess. So I created a new array and tried to plot it with the first row of my Dataframe:
y = np.arange(0,24,1)
y.reshape(1,24)
print(y)
print(df.iloc[[0]].values)
plt.plot(y, x)
plt.show()
when trying to plot my values I get the following Error:
ValueError: x and y must have same first dimension, but have shapes (24,) and (1, 24)
Thanks for the help on how to fix the error for plotting the first row.
PS: I would appreciate some hints on how to improve my question, since it is my first one.
Cheers
## 1 Answer
If I correctly got what you meant: you want to plot the first row of all the columns? I guess this could work:
x = range(0, 24)
y = df.iloc[0, :].values
xticks = df.columns.tolist()
I'm assuming you want a line plot:
plt.plot(x, y)
plt.xticks(xticks)  # to change the x ticks of the graph.
plt.show()
The '0' signifies the first row and ':' for all the columns.
• Thanks! it worked with a little change: xticks = x (instead of the recommended xticks = df.columns.tolist()) – CRoNiC Sep 11 '19 at 12:27
• I'm glad it worked. – Gozie Sep 11 '19 at 18:12
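Putting the question and the comment-thread fix together, a self-contained sketch (my own; the CSV path is the asker's, the rest mirrors the code above) that labels the x-axis with the column names and, as originally asked, draws one figure per row:
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv('./Result_set_edited.csv')
df = df.drop(columns=['ZRD_ID', 'TAG'])

x = range(len(df.columns))         # 0..23, one position per column
plt.plot(x, df.iloc[0, :].values)  # first row as a 1-D array
plt.xticks(x, df.columns)          # positions plus 'W01'...'W24' labels
plt.show()

# one separate figure per row
for _, row in df.iterrows():
    plt.figure()
    plt.plot(x, row.values)
    plt.xticks(x, df.columns)
plt.show()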
https://www.taylorfrancis.com/books/9780203871027/chapters/10.4324/9780203871027-11 | chapter 6
18 Pages
## Atmospheric motion: principles
For a body to follow a curved path there must be an inward acceleration (c) towards the center of rotation. This is expressed by:
c = −mV²/r
where m = the moving mass, V = its velocity and r = the radius of curvature. This effect is sometimes regarded for convenience as a centrifugal ‘force’ operating radially outward (see Note 1). In the case of the earth itself, this is valid. The centrifugal effect due to rotation has in fact resulted in a slight bulging of the earth’s mass in low latitudes and a flattening near the poles. The small decrease in apparent gravity towards the equator (see Note 2) reflects the effect of the centrifugal force working against the gravitational attraction directed towards the earth’s center. It is therefore only necessary to consider the forces involved in the rotation of the air around a local axis of high or low pressure. Here the curved path of the air (parallel to the isobars) is maintained by an inward-acting, or centripetal, acceleration.
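As a quick illustrative calculation (numbers mine, not from the text): for air moving at V = 10 m s⁻¹ around a pressure centre with r = 500 km, the required inward acceleration per unit mass is V²/r = (10)²/(5 × 10⁵) = 2 × 10⁻⁴ m s⁻², tiny compared with gravity but significant for horizontal motion.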
http://www.chegg.com/homework-help/questions-and-answers/3xy-2-2ycosx-dy-dx-x-y-3-y-2sinx-q3592197 | ## A quick description of your question...
(3xy^2 + 2y cos x) dy/dx = x - y^3 + y^2 sin x
• This is exact, because (d/dy) (y^3 - y^2sinx - x) = (d/dx)(3xy^2 + 2ycosx).
First, we integrate the dx coefficient with respect to x:
xy^3 + y^2 cos x - x^2/2 + f(y) for some function f in y.
Since the DE is exact, differentiating this with respect to y will give the dy coefficient:
3xy^2 + 2y cos x + f'(y) = 3xy^2 + 2y cos x
==> f'(y) = 0
==> f(y) = C.
xy^3 + y^2 cos x - x^2/2 + C = 0.
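• A quick sympy check of the exactness test and the potential function (my own snippet, not part of the original answer; it verifies the two zero results claimed above):
import sympy as sp

x, y = sp.symbols('x y')
M = y**3 - y**2*sp.sin(x) - x          # dx coefficient
N = 3*x*y**2 + 2*y*sp.cos(x)           # dy coefficient
print(sp.simplify(sp.diff(M, y) - sp.diff(N, x)))   # 0, so the equation is exact
F = sp.integrate(M, x)                 # x*y**3 + y**2*cos(x) - x**2/2
print(sp.simplify(sp.diff(F, y) - N))  # 0, so f'(y) = 0 as claimed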
https://allenai.github.io/allennlp-docs/api/allennlp.common.testing.html | allennlp.common.testing¶
Utilities and helpers for writing tests.
class allennlp.common.testing.test_case.AllenNlpTestCase(methodName='runTest')[source]
Bases: unittest.case.TestCase
A custom subclass of TestCase that disables some of the more verbose AllenNLP logging and that creates and destroys a temp directory as a test fixture.
FIXTURES_ROOT = PosixPath('/local/deploy/agent3/work/8feb324ce7c68d53/allennlp/tests/fixtures')
MODULE_ROOT = PosixPath('/local/deploy/agent3/work/8feb324ce7c68d53/allennlp')
PROJECT_ROOT = PosixPath('/local/deploy/agent3/work/8feb324ce7c68d53')
TESTS_ROOT = PosixPath('/local/deploy/agent3/work/8feb324ce7c68d53/allennlp/tests')
TOOLS_ROOT = PosixPath('/local/deploy/agent3/work/8feb324ce7c68d53/allennlp/tools')
setUp(self)[source]
Hook method for setting up the test fixture before exercising it.
tearDown(self)[source]
Hook method for deconstructing the test fixture after testing it.
class allennlp.common.testing.model_test_case.ModelTestCase(methodName='runTest')[source]
A subclass of AllenNlpTestCase with added methods for testing Model subclasses.
assert_fields_equal(self, field1, field2, name:str, tolerance:float=1e-06) → None[source]
static check_model_computes_gradients_correctly(model:allennlp.models.model.Model, model_batch:Dict[str, Union[Any, Dict[str, Any]]], params_to_ignore:Set[str]=None)[source]
ensure_batch_predictions_are_consistent(self, keys_to_ignore:Iterable[str]=())[source]
Ensures that the model performs the same on a batch of instances as on individual instances. Ignores metrics matching the regexp .*loss.* and those specified explicitly.
Parameters
keys_to_ignore : Iterable[str], optional (default=())
Names of metrics that should not be taken into account, e.g. “batch_weight”.
ensure_model_can_train_save_and_load(self, param_file:str, tolerance:float=0.0001, cuda_device:int=-1, gradients_to_ignore:Set[str]=None, overrides:str='')[source]
Parameters
param_file : str
Path to a training configuration file that we will use to train the model for this test.
tolerance : float, optional (default=1e-4)
When comparing model predictions between the originally-trained model and the model after saving and loading, we will use this tolerance value (passed as rtol to numpy.testing.assert_allclose).
cuda_device : int, optional (default=-1)
The device to run the test on.
gradients_to_ignore : Set[str], optional (default=None)
This test runs a gradient check to make sure that we’re actually computing gradients for all of the parameters in the model. If you really want to ignore certain parameters when doing that check, you can pass their names here. This is not recommended unless you’re really sure you don’t need to have non-zero gradients for those parameters (e.g., some of the beam search / state machine models have infrequently-used parameters that are hard to force the model to use in a small test).
overrides : str, optional (default = “”)
A JSON string that we will use to override values in the input parameter file.
set_up_model(self, param_file, dataset_file)[source] | 2019-07-16 04:12:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23315107822418213, "perplexity": 4677.1096993479305}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524502.23/warc/CC-MAIN-20190716035206-20190716061206-00500.warc.gz"} |
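A minimal sketch of how these hooks are typically combined in a test module; the config and dataset fixture paths below are hypothetical, and the sketch assumes set_up_model stores the config path on self.param_file (as it does in AllenNLP's own tests):

```python
# Hypothetical test module; adjust the fixture paths to your project.
from allennlp.common.testing import ModelTestCase

class MyModelTest(ModelTestCase):
    def setUp(self):
        super().setUp()
        # Register a training config and a tiny dataset fixture.
        self.set_up_model("tests/fixtures/my_model/experiment.json",
                          "tests/fixtures/my_model/dataset.txt")

    def test_model_can_train_save_and_load(self):
        # Trains briefly, saves, reloads, and compares predictions within tolerance.
        self.ensure_model_can_train_save_and_load(self.param_file)

    def test_batch_predictions_are_consistent(self):
        # Batch of instances must match individually-processed instances.
        self.ensure_batch_predictions_are_consistent()
```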
https://www.nature.com/articles/s41467-020-17856-4?error=cookies_not_supported&code=d9fa47cb-7fbd-4b92-af29-2522745b8dda
# Behavioral and neuronal underpinnings of safety in numbers in fruit flies
## Abstract
Living in a group allows individuals to decrease their defenses, enabling other beneficial behaviors such as foraging. The detection of a threat through social cues is widely reported, however, the safety cues that guide animals to break away from a defensive behavior and resume alternate activities remain elusive. Here we show that fruit flies display a graded decrease in freezing behavior, triggered by an inescapable threat, with increasing group sizes. Furthermore, flies use the cessation of movement of other flies as a cue of threat and its resumption as a cue of safety. Finally, we find that lobula columnar neurons, LC11, mediate the propensity for freezing flies to resume moving in response to the movement of others. By identifying visual motion cues, and the neurons involved in their processing, as the basis of a social safety cue this study brings new insights into the neuronal basis of safety in numbers.
## Introduction
Predation is thought to be a key factor driving group formation and social behavior (reviewed in ref. 1). It has long been established that being in a group can constitute an anti-predatory strategy2,3, as it affords the use of social cues to detect predators4,5,6,7, enables coordinated defensive responses8 or simply dilutes the probability of each individual to be predated3. A major consequence of this safety in numbers effect, reported in taxa throughout the animal kingdom, is that animals tend to decrease their individual vigilance9, stress levels10, or defensive behaviors11 when in a social setting.
One of the most studied benefits of being in a group is the facilitated detection of behaviorally significant cues in the environment, as information about their presence can quickly spread across a large group of individuals12. In the context of threat detection, most research has focused on actively emitted signals, such as alarm calls and foot stamping (reviewed in refs. 13,14). However, cues generated by movement patterns produced by defensive responses of surrounding prey can play a crucial role in predator detection. For example, crested pigeons use distinct wing whistles produced by conspecific escape flights5 and rats use silence resulting from freezing, as alarm cues4. Recently, it has also been suggested that seismic waves produced by fast running in elephants promote vigilance in conspecifics15. This form of social detection of threat may be advantageous as it does not require the active production of a signal that may render the emitter more conspicuous and thus vulnerable. Although few studies demonstrated this phenomenon, it is described in distant vertebrate species.
Because living in a group allows individuals to decrease their defenses, it also enables other globally beneficial behaviors such as foraging. These selective forces on the evolution of social behavior have been demonstrated in a wide range of animals, from invertebrates to mammals1,2. Despite its wide prevalence, the mechanisms that lead to a decrease in defensive behaviors are largely unknown. Hence, in order to gain mechanistic insight into how increasing group size impacts defense behaviors, we decided to use Drosophila melanogaster since it allows the use of groups of varying size, the large number of replicates required for detailed behavioral analysis and genetic access to specific neuronal subtypes. Importantly, fruit flies display social behaviors in different contexts16,17,18,19,20,21, namely social regulation of anti-predation strategies, such as the socially transmitted suppression of egg laying in the presence of predatory wasps17 or the reduction in erratic turns during evasive flights when in a group, compared to when alone, in the presence of dragonflies21.
In this study, we show that Drosophila melanogaster regulate their freezing behavior in response to threat as a function of group size. We identify the motion of others as a key regulator of freezing, with its cessation acting as a signal of danger and its presence constituting a safety signal. We further identify lobula columnar neurons 11 as major mediators of the usage of the movement of others as a safety cue. The identification of the sensory neurons responsible for social regulation of freezing opens up the possibility to gain mechanistic insight into the safety in numbers effect.
## Results
### Flies in groups display lower sustained freezing responses
To simulate a predator’s attack, we used a looming stimulus (Fig. 1a), an expanding dark disc that mimics an object on collision course and elicits defense responses in visual animals, including humans (reviewed in refs. 22,23,24). Individually tested fruit flies respond to looming stimuli with escapes in the form of jumps25,26, in-flight evasive maneuvers27 or running, as well as with freezing28,29 when in an enclosed environment. In our setup, the presentation of 20 looming stimuli (Fig. 1a) elicited reliable freezing responses for flies tested individually and in groups of up to 10 individuals (Fig. 1b–e; Supplementary Fig. 1 shows that running and jumps are less prominent in these arenas). The fraction of flies freezing increased as the stimulation period progressed for flies tested individually and in groups of up to five flies; in groups of 6–10 individuals, the fraction of flies freezing only transiently increased with each looming stimulus (Fig. 1b). The fraction of flies freezing was maximal for individuals and minimal for groups of 6–10, while groups of 2–5 flies showed intermediate responses (Fig. 1b). The step-wise decrease between groups of five and six flies does not seem to depend on fly density, as testing groups of five flies in a chamber that is 1 cm smaller, creating a density similar to that in groups of 7, did not impact freezing responses (Supplementary Fig. 2). At the level of each individual fly’s behavior, flies tested alone spent more time freezing, 76.67%, interquartile range (IQR) 39.75–90.42%, during the stimulation period than flies in any of the groups tested (Fig. 1c; statistical comparisons in Supplementary Table 1). Flies in groups of 2–5 spent similar amounts of time freezing (for groups of 2: 31.67%, IQR 9.46–64.38% and for groups of 5: 43.08%, IQR 11.79–76.50%), while flies in groups of 6–10 displayed the lowest levels of freezing (for groups of 6: 8.08%, IQR 3.04–17.46% and for groups of 10: 3.33%, IQR 2–7.67%; Fig. 1c; statistical comparisons in Supplementary Table 1). The decrease in time spent freezing for flies tested in groups of 2–5, compared to individuals, was not due to a decrease in the probability of entering freezing after a looming stimulus (Fig. 1d; statistical comparisons in Supplementary Table 2), but rather to an increase in the probability of stopping freezing, i.e., resuming movement, before the following stimulus presentation (individually tested flies: P(Fexit) = 0.08, IQR 0–0.21, groups of 2: P(Fexit) = 0.31 IQR 0.11–0.78, groups of 5: P(Fexit) = 0.54 IQR 0.31–0.90; Fig. 1e; statistical comparisons in Supplementary Table 3). Flies in groups of 6–10 were not only more likely to stop freezing (groups of 6: P(Fexit) = 0.93, IQR 0.80–1, groups of 10: P(Fexit) = 1, IQR 0.83–1; Fig. 1e; statistical comparisons in Supplementary Table 3), but also less likely to enter freezing (groups of 6: P(Fentry) = 0.35, IQR 0.20–0.46, groups of 10: P(Fentry) = 0.21, IQR 0.10–0.36; Fig. 1d; statistical comparisons in Supplementary Table 2) compared to the other conditions. The decrease in persistent freezing with the increase in group size suggests that there is a signal conveyed by the other flies that increases in intensity with the increase in the number of flies tested together.
### Absence of movement promotes freezing
We next examined whether flies respond to each other. We started by exploring the effect on freezing onset, as freezing has been shown to constitute an alarm cue in rodents, such that one rat freezing can lead another to freeze4. We decided to focus on groups of five flies, which showed intermediate freezing levels (Fig. 1). The onset of freezing both for individually tested flies and in groups of five occurred during and shortly after a looming stimulus (Fig. 2a). This window, of ~1 s, in principle allows for social modulation of freezing onset. Indeed, the probability of freezing onset at time t gradually increased with increasing numbers of flies freezing at time t−1 (see Methods section), indicating that flies increase their propensity to freeze the more flies around them were freezing. This synchronization in freezing could result from flies being influenced by the other flies or simply time locking of freezing to the looming stimulus. To disambiguate between these possibilities we shuffled flies across groups, such that the virtual groups thus formed were composed of flies that were not together when exposed to looming. If the looming stimulus was the sole source of synchrony for freezing onset, then we should see a similar increase in probability of freezing by the focal fly with increasing number of ‘surrounding’ flies freezing in the shuffled group. We found a weaker modulation of freezing onset by the number of flies freezing in randomly shuffled groups compared to that of the real groups of five flies (Fig. 2b; G-test, g = 190.96, p < 0.0001, df = 4). We corroborated this result by testing single flies surrounded by four fly-sized magnets whose speed and direction of circular movements we could control (Fig. 2c–f). During baseline, the magnets moved at the average walking speed of flies in our arenas, 12 mm per s, with short pauses as the direction of movement changed. Stopping the magnets upon the first looming stimulus and throughout the entire stimulation period led to increased time freezing (Fig. 2d) and increased probability of freezing entry upon looming (Fig. 2e), compared to all controls – individuals alone, magnets not moving throughout the entirety of the experiment and the exact same protocol (magnets moving during baseline then freezing) but in the absence of looming stimuli. The transition from motion to freezing is thus important, but not sufficient to drive freezing, since flies surrounded by magnets that do not move for the entire experiment froze to individually tested levels, but flies exposed to magnets that move and then freeze in the absence of looming stimuli did not freeze. Together these results suggest that flies use freezing by others as an alarm cue, which increases their propensity to freeze to an external threat, the looming stimulus.
### Movement of neighbors leads to freezing exit
As the strongest effect observed across all group sizes was on freezing exit, i.e., the resumption of movement, we asked whether the propensity to exit freezing was also dependent on the number of surrounding flies that were freezing. To this end, we performed a similar analysis as for freezing onset and found that the higher the number of flies freezing, the lower the probability of the focal fly to exit from freezing. This effect was also decreased in shuffled groups (Fig. 3a; G-test, g = 170.81, p < 0.0001, df = 4). We then examined the contribution of mechanosensory signals in the decrease in freezing and found that collisions between flies play a minor role in the observed effect (Supplementary Fig. 3; statistical comparisons in Supplementary Tables 46), contrary to what happens with socially-mediated odor avoidance16. Next, we explored our intuition that motion cues from the other flies were the main players affecting exit from looming-triggered freezing. We formalized the motion cue (Fig. 3b), perceived by a focal fly, as the summed motion cues produced by the other four surrounding flies (we multiplied the speed of each fly by the angle on the retina, a function of the size of the fly and its distance to the focal fly, Fig. 3b). We then analyzed separately the summed motion cue perceived by focal flies during freezing bouts that terminated before the following looming stimulus (freezing with exit) and continuous freezing bouts (with no breaks in between looming stimuli; representative examples in Fig. 3b). Freezing bouts with exit had higher motion cue values (Fig. 3c) compared to continuous bouts (p < 0.0001, Freezing without exit = 0.64 IQR: 0.00–2.11, Freezing with exit = 2.79 IQR: 1.28–5.08).
We hypothesized that once flies start freezing, upon a looming stimulus, two processes determine whether a fly will exit freezing, resuming activity, or remain freezing: (1) an individual decision process, whereby flies make this binary decision irrespective of what the other flies are doing, possibly reflecting the number of looming stimuli the flies were exposed to and how much time has elapsed since the onset of freezing; (2) a social decision process whereby flies integrate the motion cues generated by their neighbors relying on this information to decide whether to stop freezing. To test this possibility, we modeled the decision to stay freezing or resume activity as a binary decision that follows a logistic function taking into account two parameters, the individual probability of exiting freezing before the next looming stimulus, and the motion cues of others (see Methods section). With this simple model we can predict whether a fly will stay freezing during the entire inter-looming interval or whether it resumes activity in between looming stimuli, (area under the receiver operating characteristic curve AUROC = 0.87 ± 0.019, Fig. 3d). In addition, we found that the social cues explained a large fraction of the variance while individual behavior explains a small fraction (average variance explained by β-coefficient of social cues, βs = 0.85 ± 0.019, variance explained by β-coefficient for individual behavior βi = 0.15 ± 0.019, Fig. 3d, e).
To further test whether motion cues from others constitute a safety signal, we manipulated the motion cues perceived by the focal fly, while maintaining the number of flies in the group constant. An increase in the social motion cues, should enhance the group effect, and hence decrease the freezing responses of a focal fly. We compared groups of five wild-type flies with groups of one wild-type and four blind flies (norpA mutants; Fig. 4a). Blind flies do not perceive the looming stimulus and walk for the duration of the experiment; when a focal fly freezes surrounded by four blind flies it is thus exposed to a higher motion cue during the stimulation period than a focal fly in a group of five wild-type flies (Fig. 4a). When surrounded by blind flies, the fraction of focal flies freezing throughout the stimulation period was lower than the fraction of flies freezing in a group of wild-type flies (Fig. 4b). Further, the increase in motion cues in groups with blind flies decreased the amount of time a fly froze compared to that of groups of wild-type flies (6.17% IQR 2.17–15.25% versus 19.58% IQR 8.20–57.12; p < 0.0001; Fig. 4c). This reduction in freezing resulted mostly from a decreased probability of freezing entry (wild-type groups: P(Fentry) = 2.57 IQR 0.15–0.39, groups with blind flies: P(Fentry) = 0.49 IQR 0.25–0.61, p < 0.0001; Fig. 4d) and slightly increased probability of exiting freezing (wild-type groups: P(Fexit) = 0.83 IQR 0.39–1, groups with blind flies: P(Fexit) = 0.89 IQR 0.71–1; Fig. 4e). Hence, a focal fly surrounded by four blind flies behaves similarly to flies in groups of more than six individuals. Importantly, the decrease in persistent freezing was not due to an increased role of collisions on freezing breaks (Supplementary Fig. 4). We further tested whether any type of visual signal could alter individual freezing in the same manner as the motion cues generated by flies in the group, by presenting a visual stimulus with randomly appearing black dots with the same change of luminance as the looming stimulus but without motion (used as control stimulus in our previous study28) 4.5 s after each looming presentation. This stimulus, which could work as a distractor, did not alter the proportion of time freezing nor the probability of freezing entry or exit (Supplementary Fig. 5). Finally, we also assessed the role of other sensory cues, namely olfaction and gustation. Using near-anosmic mutants and testing the impact of contacts, required for gustatory cues, on the logistic regression model we found that olfaction and gustation are unlikely to play a role in the group response (Supplementary Fig. 6).
Together, these results show that flies use motion cues generated by their neighbors to decide whether to stay or exit freezing, raising the possibility that motion cues produced by others could constitute a safety signal leading flies to resume activity.
### Lobula columnar neurons 11 mediate group effect
Having identified motion cues of others as the leading source of the group effect on freezing, we decided to test the role of visual projection neurons responsive to the movement of small objects. In particular, lobula columnar 11 (LC11)30,31 neurons have been shown to respond to moving objects of angular sizes31 that could be generated by moving flies within our arenas. Furthermore, the behavioral relevance of these neurons was as yet unidentified. To silence LC11 neurons we used one fly line, an LC11-GAL431, to drive the expression of either Kir 2.132, a potassium channel that hyperpolarizes neurons decreasing their ability to fire action potentials, or tetanus toxin light chain (TNT), which cleaves neuronal synaptobrevin preventing synaptic release of neurotransmitter33. Constitutively silencing LC11 neurons did not alter looming-triggered freezing of flies tested individually (Supplementary Fig. 7). Conversely, for LC11-silenced flies tested in groups of five, the fraction of flies freezing increased throughout the experiment (Fig. 5a). Moreover, experimental flies in groups of five froze longer (~3.5-fold increase for LC11-GAL4>Kir2.1, and ~2-fold increase for LC11-GAL4> (+) TNT compared to controls; Fig. 5b), which was not due to an increase in the probability of freezing entry (Fig. 5c), but rather to a decrease in the probability of freezing exit (Fig. 5d; LC11-GAL4>Kir2.1 0.077 IQR 0.00-0.17 and Empty-GAL4>Kir2.1 0.59 IQR 0.15-1; LC11-GAL4>(+)TNT 0.17 IQR 0.06–0.50, and LC11-GAL4>(−)TNT 0.33 IQR 0.14–0.77). These data, together with the identification of visual motion cues as mediators of group freezing responses, point to the role of LC11 neurons in this process. However, given that LC11-GAL4, despite its sparseness, also directs expression outside these neurons, namely in the descending neurons DNg2634, we cannot at this moment fully rule out the effect of expression outside LC11. In addition, the observed effect of silencing neurons targeted by the LC11-GAL4 line on freezing in groups may be adult specific or due to developmental effects. Finally, to assert the specificity of our manipulation we expressed Kir2.1 in another LC neuron class, LC2030, which are not known to respond to small moving objects, and found that it does not alter group behavior (Supplementary Fig. 8). In summary, silencing LC11 neurons renders flies less sensitive to the motion of others, specifically decreasing its use as a safety cue that downregulates freezing.
## Discussion
In this study, we show that flies in groups display a reduction in freezing responses that scales with group size. Detailed behavioral analysis and quantitative modeling together with behavioral and genetic manipulations, allowed us to identify freezing as a sign of danger and activity as a safety cue. These findings are consistent with the hypothesis that safety in numbers may partially be explained by the use of information provided by the behavior of others. Moreover, we show that visual projection LC11 neurons are involved in processing motion cues of others to downregulate freezing.
With the experiments reported here, we extend to invertebrates the notion of defensive behaviors, in this case freezing, as alarm cues. In addition, freezing may constitute a public cue that can be used by any surrounding animal regardless of species. Indeed, we show that freezing by dummy flies enhances freezing in response to looming stimuli.
Importantly, we also identify a social cue of safety. In our paradigm, flies responded to the threatening looming stimulus with freezing. At some point after the stimulus, flies can exit freezing resuming movement, until a new looming stimulus is presented, triggering freezing again. The more stimuli the flies were exposed to the less likely they are to exit freezing before the next looming. This pattern suggests that the resumption of activity reflects the level of safety, such that when in groups the movement of others can constitute a cue of safety leading to further activity. Using a logistic regression model and manipulating the levels of movement by neighboring flies we demonstrated that motion cues of others strongly determine the propensity of flies to resume activity. In a prior study4 we showed that when we present an auditory cue of movement to rats that are freezing in response to the display of freezing by another rat, they resume activity. Although in line with the present findings, we did not explicitly test whether this motion cue constituted a safety cue, as we have done here.
While there are known examples of the use of auditory motion cues to infer the presence or absence of a threat in vertebrate species, here we show that flies use visual motion cues. This may relate to the fact that Drosophila melanogaster use short-range auditory signals, whereas visual cues can be detected at larger distances. Silencing LC11 neurons, which process visual information and respond to the motion of small visual objects, disrupted the use of motion cues from neighboring flies as a safety cue. Though motion also generates vibration cues and these can be used to detect the movement of other flies35, our results suggest visual cues play a predominant role. Furthermore, other LC neurons have been implicated in processing visual stimuli in social contexts, namely fru+ LC10a, important for the ability of males to follow the female during courtship36. LC cells in the fly seem to be tuned for distinct visual features, and activating specific LC cells leads to distinct approach or defensive responses30. It will be interesting to study to what extent there is specificity or overlap in visual projection neurons for behaviors triggered by the motion of others. The parallels between visual systems of flies and humans (reviewed in refs. 37,38), despite the lack of any common ancestor with an image-forming visual system, suggest that shared mechanisms underlying visuomotor transformations represent general solutions to common problems that all organisms face individually or as a group.
Motion plays a crucial role in predator-prey interactions. Predator and prey both use motion cues to detect each other using these to make decisions about when and how to strike or whether and how to escape39,40,41,42. Furthermore, prey animals also use motion cues from other prey as an indirect cue of a predator’s presence4,12,43. We believe that the current study opens a new path to study how animals in groups integrate motion cues generated by predators, their own movement, and that of others to select the appropriate defensive responses.
## Methods
### Fly lines and husbandry
Flies were kept at 25 °C and 70% humidity in a 12 h:12 h dark:light cycle. Experimental animals were mated females, tested only once when 4–6 days old.
Wild-type flies used were Canton-S. LC11-GAL4 w[1118]; P{y[+t7.7] w[+mC] = GMR22H02-GAL4}attP2, LC20-splitGAL4 w[1118]; P{y[+t7.7] w[+mC]=R35B06-GAL4.DBD}attP2 PBac{y[+mDint2] w[+mC]=R17A04-p65.AD}VK00027 and w[*] norpA[36] were obtained from the Bloomington stock center. 10XUAS-IVS-eGFPKir2.1 (attP2) flies were obtained from the Card laboratory at Janelia farm. UAS-CD8::GFP; lexAop-rCD2::RFP44 recombined with nSyb-lexA.DBD::QF.AD (obtained from the Bloomington stock center) were obtained from Wolf Huetteroth, University Leipzig. UAS-(+) TNT and UAS-(−)TNT33 were obtained from the Chiappe lab, Champalimaud Research. The olfactory mutant IR8a1; IR25a2; GR63a1, ORCO1 were obtained from the Benton lab, University of Lausanne.
### Behavioral apparatus and visual stimulation
We imaged unrestrained flies in 5 mm thick, 11° slanted polyacetal arenas with 68 mm diameter (central flat portion diameter 32 mm). Flies were not restricted to the arena floor, as during initial experiments we observed no difference in defensive responses for flies on the floor or ceiling. Visual stimulation (20 500-ms looming stimuli, a black circle in a white background, with a virtual object length of 10 mm and speed 25 cm per s (l/v value of 40 ms) as in ref. 28) was presented on an Asus monitor running at 144 Hz, tilted 45° over the stage (Fig. 1a). For the experiments with random dots, 4.5 s after the looming presentation we presented a visual stimulus consisting of appearing black dots at random locations on the screen to reach the same change in luminance as the looming stimulus28.
The stage contained two arenas, backlit by a custom-built infrared (850 nm) LED array. Videos were obtained using two USB3 cameras (PointGrey Flea3) with an 850-nm-long pass filter, one for each arena.
For the experiments with the magnets (Fig. 2), we used an electromechanical device developed by the Scientific Hardware Platform at the Champalimaud Centre for the Unknown. It consists of an adapted setup in which a rotating transparent disc with five incorporated neodymium magnets moves under the arena. A circular movement is induced by an electric DC gearhead motor transmitted via a belt to the disc. This allows magnetic material placed on the arena to move around in synchronized motion. The motor is controlled by a custom-made electronic device, connected to the computer, through a dedicated Champalimaud Hardware Platform-developed software. For the experiments of freezing magnets during stimulation, with or without stimulus, the magnets rotated at 12 mm per s with a change in direction every 50 s during the baseline; as soon as the stimulation period started, in synchrony with the first looming stimulus, the magnets ceased movement, until the end of the experiment.
### Video acquisition and analysis
Videos were acquired using Bonsai45 at 60 Hz and 1280 width × 960 height resolution. We used IdTracker46 to obtain the position throughout the video of each individual fly. The video and the IdTracker trajectories file were then fed to the ‘Fly motion quantifier’, developed by the Scientific Software Platform at the Champalimaud Centre for the Unknown in order to obtain the final csv file containing not only position and speed for each fly, but also pixel change in a region of interest (ROI) around each fly, defined by a circle with a 30 pixel radius around the center of mass of the fly.
### Data analysis
Data were analyzed using custom scripts in spyder (python 3.5). Statistical testing was done in GraphPad Prism 7.03, and non-parametric, Kruskal–Wallis followed by Dunn’s multiple comparison test or two-tailed Mann–Whitney tests were chosen, as data were not normally distributed (Shapiro–Wilk test). Probabilities were compared using the χ2 contingency test in python (G-test).
Freezing was classified as 500 ms periods with a median pixel change over that time period <30 pixels within the ROI. The proportion of time spent freezing was quantified as the proportion of 500 ms bins during which the fly was freezing.
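A toy re-implementation of this classification step (the function and parameter names are ours, not from the paper's code, which is available on request):

```python
# At 60 Hz, a 500-ms bin spans 30 frames; a bin counts as freezing when
# the median pixel change within the fly's ROI over the bin is < 30.
import numpy as np

def freezing_bins(pixel_change, fps=60, bin_ms=500, threshold=30):
    frames_per_bin = fps * bin_ms // 1000
    n_bins = len(pixel_change) // frames_per_bin
    arr = np.asarray(pixel_change)[: n_bins * frames_per_bin]
    arr = arr.reshape(n_bins, frames_per_bin)
    return np.median(arr, axis=1) < threshold  # boolean per 500-ms bin
```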
We calculated the proportion of freezing entries upon looming and exits between looming stimuli (Fig. 1) using the following definitions: (1) freezing entries corresponded to events where the fly was not freezing before the looming stimulus (a 1-s time window was used) and was freezing in the first 500-ms bin after the looming stimulus; (2) freezing exits were only considered if sustained, that is, when the fly froze upon looming but exited from freezing and was still moving by the time the next looming occurred, i.e., the first 500-ms bin after looming the fly was freezing and in the last 500-ms bin before the next looming the fly was not freezing.
To determine the time of freezing onset or offset (Figs. 2a, b and 3a), we used a rolling window of pixel change (500-ms bins sliding frame by frame) and the same criterion for a freezing bin as above. Time stamps of freezing onset and offset were used to calculate the probability of entering and exiting freezing as a function of the number of flies freezing. For freezing entries after looming as well as probabilities of entering and exiting freezing, we considered only instances in which the preceding 500-ms bin was either fully non-freezing or freezing. To determine the numbers of others freezing at freezing entry or exit we used a 10-frame bin preceding the freezing onset or offset timestamp.
Distances between the center of mass of each fly were calculated using the formula $$\sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}$$, and we considered a collision had taken place when the flies reached a distance of 25 pixels. The motion cue was determined as $$\sum \mathrm{speed} \times \mathrm{angle\;on\;the\;retina}\;(\theta)$$, where $$\theta = 2\arctan\left(\frac{\mathrm{size}}{2 \times \mathrm{distance}}\right)$$.
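The motion-cue computation lends itself to a few lines of NumPy; in this sketch the units and the fly-size constant are assumptions (the paper expresses size only through the angle formula):

```python
# Summed motion cue seen by a focal fly: speed x retinal angle, over neighbours.
import numpy as np

def summed_motion_cue(focal_xy, others_xy, others_speed, fly_size=2.5):
    d = np.linalg.norm(others_xy - focal_xy, axis=1)  # distance to each neighbour
    theta = 2 * np.arctan(fly_size / (2 * d))         # angle subtended on the retina
    return float(np.sum(others_speed * theta))
```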
To analyze the motion cue for freezing bouts with and without exit (Fig. 3b, c), we defined freezing bouts with exit as bouts where flies were freezing in the 500 ms following the looming stimulus offset and resumed moving before the next looming stimulus (up until the last 500 ms before the looming stimulus onset) and freezing bouts without exit as those where freezing persisted until the next looming. Cumulative proportions of motion cues for freezing with and without exit were compared using the Kolmogorov–Smirnov test.
To model the decision to stay frozen or resume movement we used the scikit-learn logistic regression model. Briefly, we analyzed freezing behavior in between looming stimuli, categorizing freezing bouts into two types: freezing bouts that ended with an exit before the next looming (to which we assigned a value of 1), and continuous freezing bouts, without an exit until the next looming (value of 0). We used freezing bout type as the dependent variable. The independent variables were the probability of an individual fly exiting from freezing within the same inter-looming interval (calculated from the data of flies tested individually) (Vi); and the sum of the motion cue generated by neighboring flies, divided by the bout length (Vs). We performed a K-fold cross-validation with four splits and used 10,000-times bootstrapped data with replacement. To determine the explanatory power of each predictor, we determined the associated fraction of variance using the following formula (shown for variable Vi): $$\frac{\sum V_{\mathrm{i}}\beta_{\mathrm{i}}}{\sum V_{\mathrm{i}}\beta_{\mathrm{i}} + \sum V_{\mathrm{s}}\beta_{\mathrm{s}}}$$, where βs and βi are, respectively, the β-coefficients of social cues and individual behavior.
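In scikit-learn terms, the model amounts to something like the sketch below, run here on stand-in data; the paper's exact preprocessing and the 10,000× bootstrap are omitted:

```python
# Two-predictor logistic model of freezing-bout outcome, on toy data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
v_i = rng.random(n)             # individual exit probability for the interval
v_s = rng.exponential(2.0, n)   # summed neighbour motion cue / bout length
X = np.column_stack([v_i, v_s])
# Toy labels: 1 = bout ended before the next loom, 0 = continuous freezing.
y = (0.15 * v_i + 0.85 * v_s / v_s.max() + rng.normal(0, 0.1, n) > 0.5).astype(int)

auc = cross_val_score(LogisticRegression(), X, y, cv=4, scoring="roc_auc")
print(f"AUROC: {auc.mean():.2f} +/- {auc.std():.2f}")
```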
### Imaging
LC11-GAL4>UAS-CD8::GFP; nSyb-lexA>lexAop-rCD2::RFP and LC20-splitGAL4>UAS-CD8::GFP; nSyb-lexA>lexAop-rCD2::RFP 3-day-old females were processed for native fluorescence imaging as in ref. 47. In brief, brains were dissected in ice-cold 4% PFA and post-fixed in 4% PFA for 40–50 min. After 3 × 20 min washes with PBST (0.01 M PBS with 0.5% TritonX) and 2 × 20 min washes in PBS (0.01 M), brains were embedded in Vectashield and imaged with a ×16 oil immersion lens on a Zeiss LSM 800 confocal microscope.
### Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
## Data availability
All raw data files are available at https://doi.org/10.6084/m9.figshare.12554663. Source data are provided with this paper.
## Code availability
Code available upon request.
## References
1. Alexander, R. D. The evolution of social systems. Annu. Rev. Ecol. Syst. 5, 325–383 (1974).
2. Hamilton, W. D. Geometry for the selfish herd. J. Theor. Biol. 31, 295–311 (1971).
3. Foster, W. A. & Treherne, J. E. Evidence for the dilution effect in the selfish herd from fish predation on a marine insect. Nature 293, 466–467 (1981).
4. Pereira, A. G., Cruz, A., Lima, S. Q. & Moita, M. A. Silence resulting from the cessation of movement signals danger. Curr. Biol. 22, R627–R628 (2012).
5. Murray, T. G. et al. Sounds of modified flight feathers reliably signal danger in a pigeon. Curr. Biol. 27, 3520–3525 (2017).
6. Blum, M. Alarm pheromones. Annu. Rev. Entomol. 14, 57–80 (1969).
7. Gill, S. A. & Bierema, A. M. K. On the meaning of alarm calls: a review of functional reference in avian alarm calling. Ethology 119, 449–461 (2013).
8. Ono, M. & Sasaki, M. Heat production by balling in the Japanese honeybee, Apis cerana japonica, as a defensive behavior against the hornet, Vespa simillima xanthoptera (Hymenoptera: Vespidae). Experientia 43, 3–6 (1987).
9. Underwood, R. Vigilance behaviour in grazing African antelopes. Behaviour 79, 81–107 (1982).
10. Queiroz, H. & Magurran, A. E. Safety in numbers? Shoaling behaviour of the Amazonian red-bellied piranha. Biol. Lett. 1, 155–157 (2005).
11. Faustino, A. I., Tacão-Monteiro, A. & Oliveira, R. F. Mechanisms of social buffering of fear in zebrafish. Sci. Rep. 7, 1–10 (2017).
12. Handegard, N. O. et al. The dynamics of coordinated group hunting and collective information transfer among schooling prey. Curr. Biol. 22, 1213–1217 (2012).
13. Pereira, A. G. & Moita, M. A. Is there anybody out there? Neural circuits of threat detection in vertebrates. Curr. Opin. Neurobiol. 41, 179–187 (2016).
14. Rose, T. A., Munn, A. J., Ramp, D. & Banks, P. B. Foot-thumping as an alarm signal in macropodoid marsupials: prevalence and hypotheses of function. Mammal. Rev. 36, 281–298 (2006).
15. Mortimer, B., Rees, W. L., Koelemeijer, P. & Nissen-Meyer, T. Classifying elephant behaviour through seismic vibrations. Curr. Biol. 28, R547–R548 (2018).
16. Ramdya, P. et al. Mechanosensory interactions drive collective behaviour in Drosophila. Nature 519, 233–236 (2015).
17. Sarin, S. & Dukas, R. Social learning about egg-laying substrates in fruitflies. Proc. Biol. Sci. 276, 4323–4328 (2009).
18. Battesti, M., Moreno, C., Joly, D. & Mery, F. Spread of social information and dynamics of social transmission within Drosophila groups. Curr. Biol. 22, 309–313 (2012).
19. Danchin, E. et al. Cultural flies: conformist social learning in fruitflies predicts long-lasting mate-choice traditions. Science 362, 1025–1030 (2018).
20. Kacsoh, B. Z., Bozler, J., Ramaswami, M. & Bosco, G. Social communication of predator-induced changes in Drosophila behavior and germ line physiology. Elife 4, 1–36 (2015).
21. Combes, S. A., Rundle, D. E., Iwasaki, J. M. & Crall, J. D. Linking biomechanics and ecology through predator-prey interactions: flight performance of dragonflies and their prey. J. Exp. Biol. 215, 903–913 (2012).
22. Fotowat, H. & Gabbiani, F. Collision detection as a model for sensory-motor integration. Annu. Rev. Neurosci. 34, 1–19 (2011).
23. Herberholz, J. & Marquart, G. D. Decision making and behavioral choice during predator avoidance. Front. Neurosci. 6, 1–15 (2012).
24. Peek, M. Y. & Card, G. M. Comparative approaches to escape. Curr. Opin. Neurobiol. 41, 167–173 (2016).
25. Card, G. & Dickinson, M. H. Visually mediated motor planning in the escape response of Drosophila. Curr. Biol. 18, 1300–1307 (2008).
26. von Reyn, C. R. et al. A spike-timing mechanism for action selection. Nat. Neurosci. 17, 962–970 (2014).
27. Muijres, F. T., Elzinga, M. J., Melis, J. M. & Dickinson, M. H. Flies evade looming targets by executing rapid visually directed banked turns. Science 344, 172–177 (2014).
28. Zacarias, R., Namiki, S., Card, G. M., Vasconcelos, M. L. & Moita, M. A. Speed dependent descending control of freezing behavior in Drosophila melanogaster. Nat. Commun. 9, 3697 (2018).
29. Gibson, W. T. et al. Behavioral responses to a repetitive visual threat stimulus express a persistent state of defensive arousal in Drosophila. Curr. Biol. 25, 1401–1415 (2015).
30. Wu, M. et al. Visual projection neurons in the Drosophila lobula link feature detection to distinct behavioral programs. Elife 5, 1–43 (2016).
31. Keleş, M. F. & Frye, M. A. Object-detecting neurons in Drosophila. Curr. Biol. 27, 680–687 (2017).
32. Baines, R. A., Uhler, J. P., Thompson, A., Sweeney, S. T. & Bate, M. Altered electrical properties in Drosophila neurons developing without synaptic transmission. J. Neurosci. 21, 1523–1531 (2001).
33. Sweeney, S. T., Broadie, K., Keane, J., Niemann, H. & O’Kane, C. J. Targeted expression of tetanus toxin light chain in Drosophila specifically eliminates synaptic transmission and causes behavioral defects. Neuron 14, 341–351 (1995).
34. Namiki, S., Dickinson, M. H., Wong, A. M., Korff, W. & Card, G. M. The functional organization of descending sensory-motor pathways in Drosophila. Elife 7, 1–50 (2018).
35. Ejima, A. & Griffith, L. C. Courtship initiation is stimulated by acoustic signals in Drosophila melanogaster. PLoS ONE 3, 1–9 (2008).
36. Ribeiro, I. M. A. et al. Visual projection neurons mediating directed courtship in Drosophila. Cell 174, 607–621.e18 (2018).
37. Sanes, J. R. & Zipursky, S. L. Design principles of insect and vertebrate visual systems. Neuron 66, 15–36 (2010).
38. Joly, J. S., Recher, G., Brombin, A., Ngo, K. & Hartenstein, V. A conserved developmental mechanism builds complex visual systems in insects and vertebrates. Curr. Biol. 26, R1001–R1009 (2016).
39. Catania, K. C., Hare, J. F. & Campbell, K. L. Water shrews detect movement, shape, and smell to find prey underwater. Proc. Natl Acad. Sci. USA 105, 571–576 (2008).
40. Carr, C. E. & Christensen-Dalsgaard, J. Sound localization strategies in three predators. Brain Behav. Evol. 86, 17–27 (2015).
41. Friedel, P., Young, B. A. & van Hemmen, J. L. Auditory localization of ground-borne vibrations in snakes. Phys. Rev. Lett. 100, 2–5 (2008).
42. Zhao, Z. et al. Zona incerta GABAergic neurons integrate prey-related sensory signals and induce an appetitive drive to promote hunting. Nat. Neurosci. 22, 921–932 (2019).
43. Hingee, M. & Magrath, R. D. Flights of fear: a mechanical wing whistle sounds the alarm in a flocking bird. Proc. R. Soc. B 276, 4173–4179 (2009).
44. Lee, T., Lee, A. & Luo, L. Development of the Drosophila mushroom bodies: sequential generation of three distinct types of neurons from a neuroblast. Development 126, 4065–4076 (1999).
45. Lopes, G. et al. Bonsai: an event-based framework for processing and controlling data streams. Front. Neuroinform. 9, 1–14 (2015).
46. Pérez-Escudero, A., Vicente-Page, J., Hinz, R. C., Arganda, S. & de Polavieja, G. G. idTracker: tracking individuals in a group by automatic identification of unmarked animals. Nat. Methods 11, 743–748 (2014).
47. Pitman, J. L. et al. A pair of inhibitory neurons are required to sustain labile memory in the Drosophila mushroom body. Curr. Biol. 21, 855–861 (2011).
## Acknowledgements
We would like to thank: the Scientific Software Platform at the Champalimaud Centre for Unknown for developing the Fly motion quantifier; the Scientific Hardware platform for developing the magnet setup; Wolf Huetteroth (University of Leipzig) for help with imaging fly lines; Ricardo Vieira for help streamlining the video analysis pipeline; Gil Costa for the illustrations in Figs. 1a, 3b, and 4a; the Moita lab, particularly Anna Hobbiss and Ricardo Neto, as well as Eugenia Chiappe and Gonzalo de Polavieja for fruitful discussions and comments on the manuscript; Alfonso Renart and João Afonso for help with the logistic regression model; and Rui Gonçalves for invaluable fly pushing technical assistance during the revision process. This work was supported by Fundação Champalimaud, ERCStG337747-CoCO and ERCCoG819630-A-Fro.
## Author information
### Contributions
C.H.F. performed all experiments and analyzed the data. C.H.F. and M.A.M. designed the experiments, discussed results, and wrote the manuscript.
### Corresponding authors
Correspondence to Clara H. Ferreira or Marta A. Moita.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
## Additional information
Peer review information Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
## About this article
### Cite this article
Ferreira, C.H., Moita, M.A. Behavioral and neuronal underpinnings of safety in numbers in fruit flies. Nat Commun 11, 4182 (2020). https://doi.org/10.1038/s41467-020-17856-4
https://www.sekonet.pl/g2aug/119bfb-arithmetic-expression-wikipedia | ### arithmetic expression wikipedia
An arithmetic expression is one or more characters or symbols associated with arithmetic, such as 1+2=3 or 8*6; in programming, it is a non-text expression whose evaluation results in a numeric value. An arithmetic expression contains only arithmetic operators and operands (numbers or variables), and it must be well-formed: the allowed operators must have the correct number of inputs in the correct places, the characters that make up these inputs must be valid, and there must be a clear order of operations. Formal languages allow formalizing the concept of well-formed expressions, and semantic rules may declare that certain well-formed expressions nonetheless have an undefined value, for instance when they involve division by zero: x/y evaluated for x = 10, y = 5 gives 2, but is undefined for y = 0. Two expressions are said to be equivalent if, for each combination of values of the free variables, they represent the same function; for example, 4x + 8x is equivalent to the simpler expression 12x. In elementary mathematics, a term is either a single number or variable, or the product of several numbers or variables; terms are separated by a + or − sign in an overall expression, so 3, 4x, and 5yzw are three separate terms.

There are several common ways of writing expressions. Infix notation, the usual arithmetic style, places operators between their operands, as in (A+B). Postfix notation (Reverse Polish notation) places operators after their operands, as in A B +. In hardware, an arithmetic logic unit (ALU) is a combinational digital circuit that performs arithmetic and bitwise operations on integer binary numbers, in contrast to a floating-point unit (FPU), which operates on floating-point numbers. In a Unix shell, the construct $(( expression )) evaluates an arithmetic expression, and it obeys the same basic rules as all other $... substitutions. A classic programming exercise is to create a program which parses and evaluates arithmetic expressions, where an abstract syntax tree (AST) must be used in evaluation and the input may not be directly evaluated (e.g. by calling eval or a similar language feature).

A related notion is the arithmetic progression: a sequence of numbers, such as 3, 8, 13, 18, 23, 28, …, in which the difference between consecutive terms is constant. The sum of the members of a finite arithmetic progression is called an arithmetic series; taking the example 2 + 5 + 8 + 11 + 14, the sum of the first and last terms is 16, and 16 × 5 = 80 is twice the sum, so the series totals 40. The product of the members of a finite arithmetic progression also has a closed form, in terms of the Gamma function, valid when the ratio of the first term to the common difference is positive.
http://www.acooke.org/cute/WhyandHowW0.html | # C[omp]ute
## Why and How Writing Crypto is Hard
From: andrew cooke <andrew@...>
Date: Tue, 25 Dec 2012 18:56:30 -0300
Over the last few days I wrote a simple library to encrypt data in Python.
This blog post describes my experience writing that code. I focus on the
various mistakes I made, and try to understand the underlying causes.
But first a little context. I'm aware of the phrase (exhortation? slogan?)
"Typing The Letters A-E-S Into Your Code? You’re Doing It Wrong"
http://news.ycombinator.com/item?id=639647
but I couldn't find a Python 3 library that let me encrypt a string using a
simple password.
So I decided to go ahead, write the code, and then solicit feedback. If I
had made any mistakes then perhaps someone else would correct me, and the
result would be something other people could use.
To be honest, when I started, I thought I could do a pretty good job. I've
worked with security-related code several times (a JNI wrapper for OpenSSL
back in the day; more recently, for example, making OpenSSH talk to hardware
key stores) and I thought a fair amount of crypto knowledge had "rubbed off" -
I can explain what CTR mode is, for example, and why you should never use the
same key+IV twice. And also, I am not so dumb; how hard can this stuff be?
Even so, I searched around for some guidance on best practice. And I was
lucky enough to stumble across
http://www.daemonology.net/blog/2009-06-11-cryptographic-right-answers.html
which I decided to follow.
My first attempt was broken (although I eventually found the mistake myself).
It had exactly the vulnerability I said I could explain above: messages with
the same key used the same counter sequence. This was because the "iv"
parameter in the pycrypto Cipher API is ignored in CTR mode. Instead, you
need to provide the data to the Counter object.
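For reference, the working approach looks something like this (a sketch, not
production code; the key and nonce handling is simplified):

    from Crypto.Cipher import AES
    from Crypto.Util import Counter
    from os import urandom

    key = urandom(32)   # AES-256 key
    nonce = urandom(8)  # fresh per-message value

    # the per-message data must go into the Counter; the "iv" argument
    # to AES.new() is silently ignored in CTR mode
    ctr = Counter.new(64, prefix=nonce)  # 64-bit counter + 8-byte prefix
    cipher = AES.new(key, AES.MODE_CTR, counter=ctr)
    ciphertext = cipher.encrypt(b"some secret data")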
I don't know if I am being muddle-headed in thinking of the initial counter
value as an IV, but I was a little annoyed with pycrypto. Couldn't it throw
an error if it's given an IV in CTR mode, instead of simply ignoring it? On
the other hand it doesn't seem fair to expect a library of crypto primitives
to educate users - it's intended for experts, who should know what they are
doing.
Anyway, that was my first mistake. The root cause being, I think, that crypto
APIs are complex because they provide access to powerful primitives that can
be combined in many ways, but which, at the same time, must also be efficient
(the need for efficiency affects the design of Counter, for example, which is
why the IV is ignored). A box of sharp tools.
Next, I started to worry about the API for *my* users. I couldn't really
expect them to provide a 256 bit key; this was a library for "anyone". So
it had to take something more like a password.
Unfortunately, although I knew about key derivation functions, which is what
you need to go from password to key, I thought they were used only for
storing passwords. I have no idea why I thought this, but as a consequence
I started to cobble together my own hand-rolled attempt at key strengthening.
Thankfully, as my code got more complex, I realised I must be reinventing an
already-existing technique. Once I was convinced of that, finding PBKDF2 (it
was mentioned in the link I said I would follow - although nowhere near the
paragraph on symmetric ciphers) was easy.
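In modern Python the derivation itself is a single standard library call
(a sketch; the iteration count is illustrative and should be tuned upwards):

    import hashlib
    from os import urandom

    salt = urandom(16)  # random; stored alongside the message
    key = hashlib.pbkdf2_hmac('sha256', b'my password', salt,
                              100000, dklen=32)  # 256-bit key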
So mistake 2 (which I eventually avoided) was not knowing about an existing
solution to a common problem. Or rather, not knowing that it could be
applied in a more general sense than I had understood.
At this point I believed my code was pretty solid so I posted it to HN at
http://news.ycombinator.com/item?id=4962983
It took a while to get useful feedback, but when I did, it was awesome. So
awesome it identified FIVE more problems. Ouch.
1. Don't expose salt in the API.
2. Use separate keys for cipher and HMAC.
3. Avoid a possible timing attack when comparing HMACs.
4. Manage the counter in a standard (NIST) way.
5. PBKDF was using a weaker hash than expected.
The first (user gives salt) is plain embarrassing - it's just bad API design.
If I can blame anything other than incompetence, salt appeared in the original
API because it "seemed odd" to generate data and then append it to the
message. I felt that even though it is how you handle the IV (and, in fact,
the final code uses the same data for both salt and IV). So it's not a
particularly logical explanation for my mistake, but it's all I have.
The second (separate keys) was an open question - I just didn't know what
best practice was. So lack of experience there.
The third (timing attack) was a subtle implementation detail I would never
have noticed. A lack of knowledge of the current literature.
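For the record, the standard fix is a constant-time comparison, which the
standard library now provides (a sketch; expected_hmac and received_hmac
are placeholders):

    import hmac

    # compare_digest takes the same time no matter where the first
    # mismatching byte is, so timing leaks nothing to an attacker
    if not hmac.compare_digest(expected_hmac, received_hmac):
        raise ValueError("HMAC check failed")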
Fourth (counter management) was more damning. I already knew the
normal way to handle counters, from using CTR mode to generate a stream
of random data in another project. I thought I was being smart and improving
things by using a different approach (yes, I know that sounds like the kind
of thing a newbie would say, but I thought it *despite knowing that*).
Fifth (weaker hash) I blame partly on the pycrypto API (again) (the way that
the hash is exposed is rather obscure), but also on a lack of familiarity
with key derivation standards - I didn't know that the MAC was a likely
parameter.
So, in one simple piece of crypto code I had a total of seven errors (so far).
The sources of error were:
* Being unaware of existing solutions to common problems.
* Being unaware of existing best practices.
* Misunderstanding the complex API of a crypto toolkit.
* Bad API design.
* Ignoring existing solutions and "improving" things.
The last of these I can't do much about. In theory I should be smart enough
to not do that. I guess the lesson there is that sometimes you make even
dumber mistakes than you expect.
The rest divide nicely into two groups: experience and API design.
I was surprised how important experience was. Despite having some experience
with security-related code. Despite having a good set of guidelines on what
to do. Despite being able to search the Internet. Despite all that, I still
made mistakes that only experience could spot.
As for API design. Well, I think that just confirms how important (and hard,
and overlooked) API design is.
So, what are the conclusions? Experience and API design matter. And even
when you are aware of the kind of pitfalls that face people that write crypto
code, you can still make dumb mistakes.
Andrew
PS The current library is at https://github.com/andrewcooke/simple-crypt
### I can relate to that ...
From: Michiel Buddingh' <michiel@...>
Date: Thu, 27 Dec 2012 07:16:34 +0100
. . . I recently wrote some cryptographic code that encrypted some
very short (10-20 byte) messages. There was a requirement that we'd
be able to decrypt any of these messages individually, without having
access to the other messages.
And so, I recycled the iv, and I didn't even bother with key
strengthening, knowing well that whoever reads this code in ten years
is going to think me an idiot. But of course, 1) I really couldn't
justify the time to do it properly 2) we were just trying to
discourage onlookers, not thwart the NSA.
What still bothers me about that situation, though, is that, for all I
know, recycling the iv is the worst compromise to make; there might be
cleverer ways to accomplish what I was trying to do.
. . . the thing is, the cryptography sector doesn't "do" trade-offs;
your security is either resilient to a government agency running a
chosen-plaintext attack on their FPGA cluster, or it's considered
embarrassingly broken.
The very people who do have the capability to write high-level APIs,
to make sensible trade offs in designing algorithms and approaches to
security problems also have a, seemingly cultural, inhibition against
simplification.
--
Michiel
### Re: I can relate to that ...
From: andrew cooke <andrew@...>
Date: Thu, 27 Dec 2012 08:55:14 -0300
Space constraints are difficult. At work they were trying to encrypt the body
of SMS messages. I am not sure what happened in the end, but it wasn't looking good.
When it comes to "make it hard, but don't worry if it's not impossible" I feel
like there should be some kind of standard. Perhaps there is, and it is
ROT13. And maybe just suggesting that can help, because when people start to
object to ROT13 the same arguments typically apply to anything else that isn't
"proof against government".
Anyway, I just want to emphasise that I fixed all the bugs I discussed, and
simple-crypt, which is now on PyPi http://pypi.python.org/pypi/simple-crypt is
supposed to be able to "thwart the NSA". Of course, it may still contain bugs
(which is why it is (1) in beta and (2) includes a header in the encrypted
data that will allow a fixed version to be deployed and work even when people
have used a previous, buggy version, should it be needed).
Andrew
### Fixing this
From: Laurens Van Houtven <_@...>
Date: Sun, 11 Aug 2013 10:32:51 +0200
Hi Andrew,
Excellent points, and I agree wholeheartedly.
For the library situation, I've joined some people in writing a library:
https://github.com/alex/cryptography
Right now, it's mostly just primitives, but the end goal is an API that you
simply couldn't get wrong, which sounds to me like what you wanted in the
first place.
Additionally, I agree that education is lacking. Hence, I'm busy turning my
talk from last year, Crypto 101 (http://pyvideo.org/video/1778/crypto-101)
into a book. Hopefully this will make the journey for future programmers a
little easier :)
I elucidated further in an HN comment:
https://news.ycombinator.com/item?id=6194332
HTH,
lvh
### Re: Why and How Writing Crypto is Hard
From: Teddy Hogeborn <teddy@...>
Date: Sun, 11 Aug 2013 16:24:52 +0200
> but I couldn't find a Python 3 library that let me encrypt a string
> using a simple password.
Well, use GPG for data at rest. You could just simply call GPG on the
command line. Here's a class I wrote to do just that:
import subprocess
import binascii
import tempfile
import os            # needed by _cleanup() below

class PGPError(Exception):
    """Exception if encryption/decryption fails"""
    pass

class PGPEngine(object):
    """A simple class for OpenPGP symmetric encryption & decryption

    with PGPEngine() as pgp:
        password = b"password"
        data = b"plaintext data"
        crypto = pgp.encrypt(data, password)
        decrypted = pgp.decrypt(crypto, password)
    """
    def __init__(self):
        self.tempdir = tempfile.mkdtemp()
        self.gnupgargs = ['--batch',
                          '--homedir', self.tempdir,  # isolated GnuPG home
                          '--force-mdc',
                          '--quiet',
                          '--no-use-agent']

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self._cleanup()
        return False

    def __del__(self):
        self._cleanup()

    def _cleanup(self):
        if self.tempdir is not None:
            # Delete contents of tempdir
            for root, dirs, files in os.walk(self.tempdir,
                                             topdown = False):
                for filename in files:
                    os.remove(os.path.join(root, filename))
                for dirname in dirs:
                    os.rmdir(os.path.join(root, dirname))
            # Remove tempdir
            os.rmdir(self.tempdir)
            self.tempdir = None

    def password_encode(self, password):
        # Passphrase can not be empty and can not contain newlines or
        # NUL bytes.  So we prefix it and hex encode it.  The password
        # must be a bytes object.
        return b"foo" + binascii.hexlify(password)

    def encrypt(self, data, password):
        passphrase = self.password_encode(password)
        with tempfile.NamedTemporaryFile(dir=self.tempdir) as passfile:
            passfile.write(passphrase)
            passfile.flush()
            proc = subprocess.Popen(['gpg', '--symmetric',
                                     '--passphrase-file',
                                     passfile.name]
                                    + self.gnupgargs,
                                    stdin = subprocess.PIPE,
                                    stdout = subprocess.PIPE,
                                    stderr = subprocess.PIPE)
            ciphertext, err = proc.communicate(input = data)
            if proc.returncode != 0:
                raise PGPError(err)
        return ciphertext

    def decrypt(self, data, password):
        passphrase = self.password_encode(password)
        with tempfile.NamedTemporaryFile(dir = self.tempdir) as passfile:
            passfile.write(passphrase)
            passfile.flush()
            proc = subprocess.Popen(['gpg', '--decrypt',
                                     '--passphrase-file',
                                     passfile.name]
                                    + self.gnupgargs,
                                    stdin = subprocess.PIPE,
                                    stdout = subprocess.PIPE,
                                    stderr = subprocess.PIPE)
            decrypted_plaintext, err = proc.communicate(input = data)
            if proc.returncode != 0:
                raise PGPError(err)
        return decrypted_plaintext
/Teddy Hogeborn
--
The Mandos Project
http://www.recompile.se/mandos | 2014-07-24 21:29:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2138071209192276, "perplexity": 5262.298699681424}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997891953.98/warc/CC-MAIN-20140722025811-00194-ip-10-33-131-23.ec2.internal.warc.gz"} |
https://lazyprogrammer.me/category/uncategorized/ | # How to Speak by Patrick Winston
May 30, 2022
Making a post on this for posterity. A student sent this to me the other day and I thought it was great.
I could probably apply some of this to my courses too!
May 25, 2022
# VIP Promotion
### The complete Transformers course has arrived
Hello friends!
Welcome to my latest course, Transformers for Natural Language Processing (NLP).
Link 2) https://www.udemy.com/course/data-science-transformers-nlp/?couponCode=TRANSFORMERSVIP (expires in 30 days – June 25, 2022!)
https://www.udemy.com/course/data-science-transformers-nlp/?couponCode=TRANSFORMERSVIP2
(expires July 26, 2022)
Transformers have changed deep learning immensely.
They’ve massively improved the state-of-the-art in all NLP tasks, like sentiment analysis, machine translation, question-answering, etc.
They’re even expanding their influence into other fields, such as computational biology and computer vision. DeepMind’s AlphaFold 2 has been said to “solve” a longstanding problem in molecular biology, known as protein structure prediction. Recently, DALL-E 2 demonstrated the ability to generate amazing art and photo-realistic images based only on simple text prompts. Imagine that – creating a realistic image out of just an idea!
Just within the past week, DeepMind introduced “Gato“, which is what they call a “generalist agent”, an AI that can do multiple things, like chat (i.e. do NLP!), play Atari games, caption images (i.e. computer vision!), manipulate a real, physical robot arm to stack blocks, and more!
Gato does all this by converting all the usual inputs from other domains into a sequence of tokens, so that they can be processed just like how we do in NLP. This is a great example of my oft-repeated rule, “all data is the same” (and also, another great reason to learn NLP since it would be a prerequisite to understanding this).
The course is split into 3 major parts:
1. Using Transformers (Beginner)
2. Fine-Tuning Transformers (Intermediate)
3. Transformers In-Depth (Expert – VIP only)
In part 1, you will learn how to use transformers which were trained for you. This costs millions of dollars to do, so it’s not something you want to try by yourself!
We’ll see how these prebuilt models can already be used for a wide array of tasks, including:
• text classification (e.g. spam detection, sentiment analysis, document categorization)
• named entity recognition
• text summarization
• machine translation
• generating (believable) text
• masked language modeling (article spinning)
• zero-shot classification
If you need to do sentiment analysis, document categorization, entity recognition, translation, summarization, etc. on documents at your workplace or for your clients – you already have the most powerful state-of-the-art models at your fingertips with very few lines of code.
One of the most amazing applications is “zero-shot classification”, where you will observe that a pretrained model can categorize your documents, even without any training at all.
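As a rough illustration (not the exact code from the course), zero-shot classification with the Hugging Face pipeline API really is just a few lines:

```python
from transformers import pipeline

# downloads a default pretrained model on first use
classifier = pipeline("zero-shot-classification")

result = classifier(
    "The quarterly report shows revenue grew 20% year over year.",
    candidate_labels=["finance", "sports", "politics"],
)
print(result["labels"][0])  # most likely label - with zero training
```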
In part 2, you will learn how to improve the performance of transformers on your own custom datasets. By using “transfer learning”, you can leverage the millions of dollars of training that have already gone into making transformers work very well.
You’ll see that you can fine-tune a transformer for many of the above tasks with relatively little work (and little cost).
In part 3 (the VIP sections), you will learn how transformers really work. The previous sections are nice, but a little too nice. Libraries are OK for people who just want to get the job done, but they don’t work if you want to do anything new or interesting.
Let’s be clear: this is very practical.
Well, this is where the big bucks are.
Those who have a deep understanding of these models and can do things no one has ever done before are in a position to command higher salaries and prestigious titles. Machine learning is a competitive field, and a deep understanding of how things work can be the edge you need to come out on top.
We’ll also look at how to implement transformers from scratch.
As the great Richard Feynman once said, “what I cannot create, I do not understand”.
NOTES:
• As usual, I wanted to get this course into your hands as early as possible! There are a few sections and lectures still in the works, including (but not limited to): fine-tuning for question-answering, more theory about transformers, and implementing transformers from scratch. As usual, I will update this post as new lectures are released.
• Everyone makes mistakes (including me)! Because this is such a large course, if I forgot anything (e.g. a Github link), just email me and let me know.
• Due to the way Udemy now works, if you purchase the course on deeplearningcourses.com, I cannot give you access to the Udemy version. It hasn’t always been this way, and Udemy has tended to make changes over the years that negatively impact both me and you, unfortunately.
• If you don’t know how “VIP courses” work, check out my post on that here. Short version: deeplearningcourses.com always houses all the content (both VIP and non-VIP). Udemy will house all the content initially, but the VIP content is removed later on.
So what are you waiting for? Get the VIP version of Transformers for Natural Language Processing NOW:
# Become a Millionaire by Taking my Financial Engineering Course
May 17, 2022
I just got an excellent question today about my Financial Engineering course, which allowed me to put into words many thoughts and ideas I’d been pondering recently.
Through this post, I hope to get all these ideas into one place for future reference.
The question was: “How practical is this course? I’ve skimmed through several top ratings on Udemy but have yet seen one boasting how much money the student made after taking it.”
Will you become a millionaire after taking my financial engineering course?
Let’s answer this question by starting with my own definition of “practical”, and then subsequently addressing the student’s definition of practical which appears to mean “making money”.
In my view, “practical” simply means you’re applying knowledge to a real-world dataset.
For example, my Recommender Systems course is practical because you apply the algorithms we learn to real-world ratings datasets.
My Bayesian Machine Learning: A/B Testing course is practical because you can apply the algorithms to any business scenario where you have to decide between multiple choices based on some numerical objective (e.g. clicks, page view time, etc.)
In the same way, the Financial Engineering course is extremely practical, because the whole course is about applying algorithms to real-world financial datasets. The application is a real-world problem.
This is unlike, say, reading Pattern Recognition and Machine Learning by Bishop, which is all about the algorithms and not the fields of application. The implication is that, you know what you’re doing and can take those algorithms and apply them to your own data.
On one hand, that’s powerful – because you can apply these algorithms to any field (like biology, astronomy, chemistry, robotics, control systems, and yes, finance), but at the same time, you have to be pretty smart to do it. The average Udemy student would struggle.
In that sense, this is the most practical you can get. Everything you learn in this course is being directly applied to real-world data in a specific field (finance).
You can grab one of the algorithms taught in the course and start using it today on your own investing account. There’s a lecture about that in the Summary section called “Applying This Course” for those who need extra help.
Importantly, do keep in mind that while I can teach you what to do, I can’t actually make you do it.
In A/B Testing, I can show you the code, but the rest is up to the student to make it practical, by actually getting a job where they get to do that in a production system, or by inserting the code into their own production website so they can feed it to live users.
Funny enough, A/B Testing isn’t even about finance nor money. But will you make money with those techniques? YES. Amazon, Facebook, Netflix, etc. are already using the same techniques with great success.
The only reason some students might say it’s not practical is because they are too lazy/incompetent to get off their butts and actually do it!
Same here. I can teach the algorithms, but I can’t go into your brokerage account and run them for you.
Now let’s consider the definition of “practical” in the sense of being guaranteed to “make money”.
This is a common concern among students who are new to finance and don’t really know yet what to expect.
Let’s suppose I could guarantee that by taking this course, you could make money.
Consider some obvious questions:
• If this were true, anyone (including myself) would just scale it up and become extremely wealthy without doing any work. Clearly, no such thing exists (that is public and that we know of).
• If this were true, why would anyone work? Financial engineering graduates wouldn’t bother to apply for jobs, they would just run algorithms all day. They would teach their friends / family to do the same. No one would ever bother to get a job.
• If this were true, why would hedge funds bother to hire employees? After inventing an algorithm, they could just run it forever. What’s the point of wasting money to hire humans? What would they even do?
• If this were true, why would hedge funds bother to hire PhDs and why would people bother to get PhDs? Imagine you could increase your investments infinitely from a 20 hour online course. What kind of insane person would work for 4-7 years just to get a pittance and a paper that says “PhD”?
On the contrary, the reality is this.
The financial sector does hire very smart people and it is well-known that they have poor work-life balance.
They must be working hard. What are they doing?
Why can’t they just learn an algorithm and sit back and relax?
Instead, let’s expand the definition of “practical”.
Originally, this question was asked in a comment on a video I made about predicting stock prices with LSTMs. Is this video practical? YES. If you didn’t know this, you could have spent weeks / months / maybe even your whole life trying to “predict stock prices with LSTMs”, with zero clue that it didn’t actually work. That would be sad.
Spending weeks or months doing something that doesn’t even make sense is what I would consider to be very impractical. And hence, learning how to avoid it would be very practical.
A lot of the course is about how to properly model and analyze. How to stay away from stupidity.
One of the major themes of the course is that “Santa Claus doesn’t exist”.
A naive person might think “there must be some way to predict the stock price, you are just not telling me about the most advanced algos!”
But the “Santa Claus doesn’t exist” moment is when we prove mathematically why certain predictions are impossible.
This is practical because it saves you from attempting something which doesn’t make any logical sense.
Obviously, it doesn’t fulfill the childhood dream of meeting Santa (predicting an unpredictable time series), but I would posit that trying to meet Santa is what is really impractical.
What is actually practical is learning how to determine whether you can or cannot predict a time series (at which point, you can then make your predictions as normal).
I’ll give you another example lesson.
If you used the simplest trading strategy from this course, you could have beat the market from 2000 – 2018.
Using the same algorithm, you would have underperformed the market from 2018 to now.
The practical lesson there is that “past performance doesn’t indicate future performance”.
This is how you can have a “practical” lesson, which doesn’t automatically imply “guaranteed rate of return” (which is impossible).
Addendum: actually, it is possible to guarantee a rate of return. Just purchase a fixed-income security like a CD (certificate of deposit) at your bank. The downside is that the rate of return is very low. This is yet another practical lesson from the course – the tradeoff between risk and reward and how real-world entities automatically adjust themselves to match present conditions. In other words, you’ll never find a zero-risk asset that guarantees 1000x returns. Why is this practical? Again, you want to avoid wasting time searching for that which does not exist.
# Machine Learning in Finance by Dixon, Halperin, Bilokon – A Critique
May 16, 2022
Check out the video version of this post on YouTube:
In this post, I’m going to write about one of my all-time favorite subjects: the wrong way to predict stock and cryptocurrency prices.
It’s not everyday I get to critique a published book by a big name like Springer.
The book I’m referring to is called “Machine Learning in Finance: From Theory to Practice”, by Matthew Dixon, Igor Halperin, and Paul Bilokon.
Now you might think I’m beating a dead horse with this video, which is kind of true.
I’ve already spoken at length about the many mistakes people make when trying to predict stock prices.
But there are a few key differences with this video.
Firstly, in past videos, I’ve mentioned that it is typically bloggers and marketers who put out this bad content.
This time, it’s not a blogger or marketer, but an Assistant Professor of Applied Math at the Illinois Institute of Technology.
Secondly, while I’ve spoken about what the mistakes are, I’ve never done a case study where I’ve broken down actual code that makes these mistakes.
This is the first.
Thirdly, in my opinion, this is the most important topic to cover for beginners to finance, because it’s always the first thing people try to do. They want to predict future prices so they know what to invest in today.
If you take my course on Financial Engineering, you’ll learn that this is completely untrue. Price prediction barely scratches the surface of true finance.
In order to get the code I’ve used in this video, please use this link: https://bit.ly/3yCER6S
Note that it’s a copy of the code provided with the textbook, with added code for my own experiments (computing the naive forecast and the corresponding train / test MSE).
I also removed code for a different type of RNN called the “alpha RNN”, which uses an old version of Keras. Removing this code doesn’t make a difference in our results because this model didn’t perform well.
The mistakes I’ll cover in this post are as follows.
1) They only standardize the price time series, which does nothing about the problem of extrapolation.
2) They never check whether their model can beat the naive forecast. Spoiler alert. I checked, and it doesn’t. The models they built are worse than useless.
3) They present a misleading train-test split plot that makes the models look far better than they really are.
So let’s talk about mistake #1, which is why standardizing a price time series does not work.
The problem with prices is that they are ever increasing. This wasn’t the case for the time period used in the textbook, but it is the case in general.
Why is this an issue?
The train set is always in the past, and the test set is always in the future.
Therefore, the values in the test set in general will be higher than the values in the train set.
If you build an autoregressive model based on this data, your model will have to extrapolate to a domain never seen before in the train set.
This is not good, because machine learning models suck at extrapolation.
How they extrapolate has more to do with the model itself, than it has to do with the data.
We analyzed this phenomenon in my course on time series analysis.
For instance, decision trees tend to extrapolate by going horizontally outward.
Neural networks, Gaussian Processes, and other models all behave differently, and none of these behaviors are related to the data.
Mistake #2, which is the worst mistake, is that the authors never check against the naive forecast.
As you recall, the naive forecast is when your prediction is simply the past known value.
In their notebook, the authors predict 4 time steps ahead.
So effectively, our naive prediction is the price from 4 time steps in the past.
Even this very dumb prediction beats their fancy RNN models. Surprisingly, this happens not just for the test set, but the train set as well.
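Checking this yourself takes only a few lines; here is a sketch, assuming prices is a 1-D NumPy array of the (standardized) series:

```python
import numpy as np

horizon = 4                    # same horizon the textbook uses
y_true = prices[horizon:]
y_naive = prices[:-horizon]    # "prediction" = the price 4 steps ago

mse_naive = np.mean((y_true - y_naive) ** 2)
print(mse_naive)               # compare against the RNN's train/test MSE
```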
Mistake #3 is the misleading train-test split.
In the notebook, the authors make a plot of their models’ predictions against the true price.
Of course, the error looks very small and very close to the true price in all cases.
But remember that this is misleading. It doesn’t tell you that these models actually suck.
In time series analysis, when we think of a test set, we normally think of it as the forecast horizon.
Instead, the forecast horizon is actually 4 time steps, and the plot actually just shows the incremental predictions at each time step using true past data.
To be clear, although this is not a forecast, it’s also not technically wrong, but it’s still misleading and totally useless for evaluating the efficacy of these models.
As we saw from mistake #2, even just the naive forecast beats these models, which you wouldn’t know from these seemingly good plots.
So I hope this post serves as a good lesson that you always have to be careful about how you apply machine learning in finance.
Even big name publishers like Springer, and reputable authors who might even be college professors, are not immune to these mistakes.
Don’t trust everything you see, and always experiment and stress test any claims.
# FREE Exercise: Predict Stocks with News, + Other ML News
January 19, 2022
TL;DR: this is an article about how to predict stocks using the news.
In this article, we are going to do an exercise involving my 2 current favorite subjects: natural language processing and financial engineering!
I’ll present this as an exercise / tutorial, so hopefully you can follow along on your own.
One comment I frequently make about predicting stocks is that autoregressive time series models aren’t really a great idea.
Basic analysis (e.g. ACF, PACF) shows no serial correlation in returns (that is, there’s no correlation between past and future) and hence, the future is not predictable from the past.
The best-fitting ARIMA model is more often than not, a simple random walk.
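You can verify this in a couple of lines with statsmodels; a sketch, assuming a DataFrame df with the closing price in a 'Close' column:

```python
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

returns = df['Close'].pct_change().dropna()  # work with returns, not prices
plot_acf(returns)   # spikes should stay inside the confidence band
plot_pacf(returns)
plt.show()
```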
What is a random walk? If you haven’t yet learned this from me, then basically think of it like flipping a coin at each time step. The result of the coin flip tells you which way to walk: up the street or down the street.
Just as you can’t predict the result of a coin flip from past coin flips (by the way, this is essentially the gambler’s fallacy!), so too is it impossible to predict the next step of a random walk.
In these situations, the best prediction is simply the last-known value.
This is why, when one tries to fit an LSTM to a stock price time series, all it ends up doing is predicting close to the previous value.
There is a nice quote which is unfortunately (as far as I know) unattributed, that says something like: “trying to predict the future from the past is like trying to drive by looking through the rearview mirror”.
Anyway, this brings us to the question: “If I don’t use past prices, then what do I use?”
One common approach is to use the news.
We’ve all seen that news and notable events can have an impact on stock / cryptocurrency prices. Examples:
• The Omicron variant of COVID-19
• High inflation
• Supply-chain issues
• Elon Musk tweeting about Dogecoin
• Mark Zuckerberg being grilled by the government
Luckily, I’m not going to make you scrape the web to download news yourself.
Instead, we’re going to use a pre-built dataset, which you can get at: https://www.kaggle.com/aaron7sun/stocknews
Briefly, you’ll want to look at the “combined” CSV file which has the following columns:
• Date (e.g. 2008-08-11 – daily data)
• Label (0 or 1 – whether or not the DJIA went up or down)
• Top1, Top2, …, Top25 (news in the form of text, retrieved from the top 25 Reddit news posts)
Note that this is a binary classification problem.
Thanks to my famous rule, “all data is the same”, your code should be no different than a simple sentiment analysis / spam detection script.
To start you off, I’ll present some basic starter code / tips.
Tip 1) Some text contains weird formatting, e.g.
b”Georgia ‘downs two Russian warplanes’ as cou…
Basically, it looks like how a binary string would be printed out, but the “b” is part of the actual string.
Here’s a simple way to remove unwanted characters:
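```python
# one possible cleanup (a sketch - adapt to taste)
def remove_b_wrapper(text):
    # strip the literal b"..." or b'...' wrapper left in the raw data
    if text.startswith('b"') or text.startswith("b'"):
        text = text[2:-1]
    return text

df['Top1'] = df['Top1'].apply(remove_b_wrapper)  # repeat for Top2..Top25
```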
Tip 2) Don’t forget that this is time-ordered data, so you don’t want to do a train-test split with shuffling (mixing future and past in the train and test sets). The train set should only contain data that comes before the test set.
Tip 3) A simple way to form feature vectors from the news would be to just concatenate all 25 news columns into a single text, and then apply TF-IDF. E.g.
I’ll leave the concatenation part as an exercise for you.
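Putting tips 2 and 3 together, a starter sketch might look like this (the combined-text column, which I'm calling 'AllNews' here, is the part left to you):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# tip 2: time-ordered split - train strictly precedes test
Ntrain = int(len(df) * 0.8)
df_train, df_test = df.iloc[:Ntrain], df.iloc[Ntrain:]

# tip 3: TF-IDF on the concatenated news columns
tfidf = TfidfVectorizer(max_features=2000)
Xtrain = tfidf.fit_transform(df_train['AllNews'])
Xtest = tfidf.transform(df_test['AllNews'])
ytrain, ytest = df_train['Label'], df_test['Label']
```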
Here are some extra thoughts to consider:
• How were the labels created? Does that method make sense? Is it based on close-close or open-close?
• What were the exact times that the news was posted? Was there sufficient time between the latest news post and the result from which the label is computed?
• Returns tend to be very noisy. If you’re getting something like 85% test accuracy, you should be very suspicious that you’ve done something wrong. A more realistic result would be around 50-60%. Even 60% would be considered suspiciously high.
So that’s basically the exercise. It is simple, yet hopefully thought-provoking.
Now I didn’t know where else to put this ML news I found recently, but I enjoyed it so I want to share it with you all.
First up: “Chatbots: Still Dumb After All These Years”
I enjoyed this article because I get a lot of requests to cover Chatbots.
Unfortunately, Chatbot technology isn’t very good.
Previously, we used seq2seq (and also seq2seq with attention) which basically just learns to copy canned responses to various inputs. seq2seq means “sequence to sequence” so the input is a sequence (a prompt) and the target/output is a sequence (the chatbot’s response).
Even with Transformers, the best results are still lacking.
Next: “PyTorch vs TensorFlow in 2022”
Read this article. It says a lot of the same stuff I’ve been saying myself. But it’s nice to hear it from someone else.
It also provides actual metrics which I am too lazy to do.
This isn’t really “new news” (in fact, Facebook isn’t even called Facebook anymore) but I recently came across this old article I saved many years earlier.
Probably the most common beginner question I get is “why do I need to do all this math?” (in my ML courses).
You’ve heard the arguments from me hundreds of times.
Perhaps you are hesitant to listen to me. That would be like listening to your parents. Yuck.
Instead, why not listen to Yann LeCun? Remember that guy? The guy who invented CNNs?
He’s the Chief AI Scientist at Facebook (Meta) now, so if you want a job there, you should probably listen to his advice…
And if you think Google, Netflix, Amazon, Microsoft, etc. are any different, well, that is wishful thinking my friends.
What do you think?
Is this convincing? Or is Yann LeCun just as wrong as I am?
Let me know!
# Convert a Time Series Into an Image with Gramian Angular Fields and Markov Transition Fields
August 30, 2021
In my latest course (Time Series Analysis), I made subtle hints in the section on Convolutional Neural Networks that instead of using 1-D convolutions on 1-D time series, it is possible to convert a time series into an image and use 2-D convolutions instead.
CNNs with 2-D convolutions are the “typical” kind of neural network used in deep learning, which normally are used on images (e.g. ImageNet, object detection, segmentation, medical imaging and diagnosis, etc.)
In this article, we will look at 2 ways to convert a time series into an image:
1. Gramian Angular Field
2. Markov Transition Field
## Gramian Angular Field
The Gramian Angular Field is quite involved mathematically, so this article will discuss the intuition only, along with the code.
Those interested in all the gory details are encouraged to read the paper, titled “Encoding Time Series as Images for Visual Inspection and Classification Using Tiled Convolutional Neural Networks” by Zhiguang Wang and Tim Oates.
We’ll build the intuition in a series of steps.
Let us begin by recalling that the dot product or inner product is a measure of similarity between two vectors.
$$\langle a, b\rangle = \lVert a \rVert \lVert b \rVert \cos \theta$$
Where $$\theta$$ is the angle between $$a$$ and $$b$$.
Ignoring the magnitude of the vectors, if the angle between them is small (i.e. close to 0) then the cosine of that angle will be nearly 1. If the vectors are perpendicular (the angle is 90°), the cosine of the angle is 0. If the two vectors are pointing in opposite directions, then the cosine of the angle will be -1.
The Gram Matrix is just the repeated application of the inner product between every vector in a set of vectors, and every other vector in that same set of vectors.
i.e. Suppose that we store a set of column vectors in a matrix called $$X$$.
The Gram Matrix is:
$$G = X^TX$$
This expands to:
$$G = \begin{bmatrix} \langle x_1, x_1 \rangle & \langle x_1, x_2 \rangle & \cdots & \langle x_1, x_N \rangle \\ \langle x_2, x_1 \rangle & \langle x_2, x_2 \rangle & \cdots & \langle x_2, x_N \rangle \\ \vdots & \vdots & \ddots & \vdots \\ \langle x_N, x_1 \rangle & \langle x_N, x_2 \rangle & \cdots & \langle x_N, x_N \rangle \end{bmatrix}$$
In other words, if we think of the inner product as the similarity between two vectors, then the Gram Matrix just gives us the pairwise similarity between every vector and every other vector.
Note that the Gramian Angular Field (GAF) does not apply the Gram Matrix directly (in fact, each value of the time series is a scalar, not a vector).
The first step in computing the GAF is to normalize the time series to be in the range [-1, +1].
Let’s assume we are given a time series $$X = \{x_1, x_2, \ldots, x_N\}$$.
The normalized values are denoted by $$\tilde{x_i}$$.
The second step is to convert each value in the normalized time series into polar coordinates.
We use the following transformation:
$$\phi_i = \arccos \tilde{x_i}$$
$$r_i = \frac{t_i}{N}$$
Where $$t_i \in \mathbb{N}$$ represents the timestamp of data point $$x_i$$.
Finally, the GAF method defines its own “special” inner product as:
$$\langle x_1, x_2 \rangle = \cos(\phi_1 + \phi_2)$$
From here, the above formula for $$G$$ still applies (except using $$\tilde{X}$$ instead of $$X$$, and using the custom inner product instead of the usual version).
Here is an illustration of the process:
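In NumPy, the whole process takes only a few lines (a sketch on a toy sine wave):

```python
import numpy as np

x = np.sin(np.linspace(0, 4 * np.pi, 100))  # toy time series

# step 1: rescale to [-1, +1]
x_norm = 2 * (x - x.min()) / (x.max() - x.min()) - 1

# step 2: angular coordinate
phi = np.arccos(np.clip(x_norm, -1, 1))

# step 3: "special" inner product between every pair of time steps
G = np.cos(np.add.outer(phi, phi))  # shape (100, 100)
```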
So why use the GAF?
Like the original Gram Matrix, it gives you a “picture” (no pun intended) of the relationship between every point and every other point in the time series.
That is, it displays the temporal correlation structure in the time series.
Here’s how you can use it in code.
Firstly, you need to install the pyts library. Then, run the following code on a time series of your choice:
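```python
# a minimal sketch (not necessarily the exact code from the course);
# x is assumed to be a 1-D NumPy array holding your time series
import matplotlib.pyplot as plt
from pyts.image import GramianAngularField

X = x.reshape(1, -1)  # pyts expects shape (n_samples, n_timestamps)
gaf = GramianAngularField(image_size=64, method='summation')
X_gaf = gaf.fit_transform(X)  # shape (1, 64, 64)

plt.imshow(X_gaf[0])
plt.show()
```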
Note that the library allows you to rescale the image with the image_size argument.
As an exercise, try using this method instead of the 1-D CNNs we used in the course and compare their performance!
## Markov Transition Field
The Markov Transition Field (MTF) is another method of converting a time series into an image.
The process is a bit simpler than that of the GAF.
If you have taken any of my courses which involve Markov Models (like Natural Language Processing, or HMMs) you should feel right at home.
Let’s assume we have an N-length time series.
We begin by putting each value in the time series into quantiles (i.e. we “bin” each value).
For example, if we use quartiles (4 bins), the smallest 25% of values would define the boundaries of the first quartile, the second smallest 25% of values would define the boundaries of the second quartile, etc.
We can think of each bin as a ‘state’ (using Markov model terminology).
Intuitively, we know that what we’d like to do when using Markov models is to form the state transition matrix.
This matrix has the values:
$$A_{ij} = P(s_t = j | s_{t-1} = i)$$
That is, $$A_{ij}$$ is the probability of transitioning from state i to state j.
As usual, we estimate this value by maximum likelihood. ( $$A_{ij}$$ is the count of transitions from i to j, divided by the total number of times we were in state i).
Note that if we have $$Q$$ quantiles (i.e. we have $$Q$$ “states”), then $$A$$ is a $$Q \times Q$$ matrix.
The MTF follows a similar concept.
The MTF (denoted by $$M$$) is an $$N \times N$$ matrix where:
$$M_{kl} = A_{q_k q_l}$$
And where $$q_k$$ is the quantile (“bin”) for $$x_k$$, and $$q_l$$ is the quantile for $$x_l$$.
Note: I haven’t re-used the letters i and j to index $$M$$, which most resources do and it’s super confusing.
Do not mix up the indices for $$M$$ and $$A$$! The indices in $$A$$ refer to states. The indices for $$M$$ are temporal.
$$A_{ij}$$ is the probability of transitioning from state i to state j.
$$M_{kl}$$ is the probability of a one-step transition from the bin for $$x_k$$, to the bin for $$x_l$$.
That is, it looks at $$x_k$$ and $$x_l$$, which are 2 points in the time series at arbitrary time steps $$k$$ and $$l$$.
$$q_k$$ and $$q_l$$ are the corresponding quantiles.
$$M_{kl}$$ is then just the probability that we saw a direct one-step (i.e. Markovian) transition from $$q_k$$ to $$q_l$$ in the time series.
So why use the MTF?
It shows us how related 2 arbitrary points in the time series are, relative to how often they appear next to each other in the time series.
Here’s how you can use it in code.
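```python
# a minimal sketch (not necessarily the exact code from the course);
# x is assumed to be a 1-D NumPy array holding your time series
from pyts.image import MarkovTransitionField

X = x.reshape(1, -1)  # (n_samples, n_timestamps)
mtf = MarkovTransitionField(image_size=64, n_bins=8)
X_mtf = mtf.fit_transform(X)  # shape (1, 64, 64)
```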
Note that the library allows you to rescale the image with the image_size argument.
As an exercise, try using this method instead of the 1-D CNNs we used in the course and compare their performance!
Enjoy!
# Should you study the theory behind machine learning?
August 23, 2021
In this post, I want to discuss why you should not study the theory behind machine learning.
This may surprise some of you, since my courses can appear to be more “theoretical” than other ML courses on popular websites such as Udemy.
However, that is not the kind of “theory” I am talking about.
Most popular courses in ML don’t look at any math at all.
They are popular precisely for this reason: lack of math makes them accessible to the average Joe.
This does a disservice to you students, because you end up not having any solid understanding about how the algorithm works.
You may end up:
• doing things that don’t make sense, due to that lack of understanding.
• only being able to copy code from others, but not write any code yourself.
• not knowing how to apply algorithms to new kinds of data, without someone showing you how first.
For more discussion on that, see my post: “Why do you need math for machine learning and deep learning?
But let’s make this clear: math != theory.
When we look at math in my courses, we only look at the math needed to derive the algorithm and understand how it works at an intuitive level.
Yes, believe it or not, we are using math to improve our intuition.
This is despite what many beginners might think. When they see math, they automatically assume “math” = “not intuitive”, and that “intuitive” = “pictures, animations, and purposely avoiding math”.
That’s OK if you want to read a news article in the NY Times about ML, but not when you want to be a practitioner of ML.
Those are 2 different levels of “intuition” (layman vs. practitioner).
To see an extreme example of this, one need not look any further than Albert Einstein. Einstein was great at communicating his ideas to the public. Everyone can easily understand the layman interpretation of general relativity (mass bends space and time). But this is not the same as being a practitioner of relativistic physics.
Everyone has seen this picture and understands what it means at a high level. But does that mean you are a physicist or that you can “do physics”?
Anyway, that was just an aside so we don’t confuse “math used for intuition” and “layman intuition” and “theory”. These are 3 separate things. Just because you’re looking at some math, does not automatically imply you’re looking at “theory”.
What do we mean by “theory”?
Here’s a simple question to consider. Why does gradient descent work?
Despite the fact that we have used gradient descent in many of my courses, and derived the gradient descent update rules for neural networks, SVMs, and other models, we have never discussed why it works.
And that’s OK!
The “mathematical intuition” is enough.
But let’s get back to the question of this article: Why is the Lazy Programmer saying we should not study theory?
Well, this is the kind of “theory” that gets so deep, it:
• Does not produce any near-term gains in your work
• Requires a very high level of math ability (e.g. real analysis, optimization, dynamical systems)
• Is on the cutting-edge of understanding, and thus very difficult, likely to be disputed or even superseded in the near future
Case in point: although we have been using gradient descent for years in my courses (and decades before that in general), our understanding is still not yet complete.
Here’s an article that just came out this year on gradient descent (August 2021): “Computer Scientists Discover Limits of Major Research Algorithm”.
Here’s a direct link to the corresponding paper, called “The Complexity of Gradient Descent: CLS = PPAD ∩ PLS”: https://arxiv.org/abs/2011.01929
There will be more papers on these “theory” topics in the years to come.
My advice is not to go down this path, unless you really enjoy it, you are doing graduate research (e.g. PhD-level), you don’t mind if ideas you spent years and years working on might be proven incorrect, and you have a very high level of math ability in subjects like real analysis, optimization, and dynamical systems.
# Predicting Stock Prices with Facebook Prophet
August 3, 2021
Prophet is Facebook’s library for time series forecasting. It is mainly geared towards business datasets (e.g. predicting adspend or CPU usage), but a natural question that comes up with my students whenever we talk about time series is: “can it predict stock prices?”
In this article, I will discuss how to use FB Prophet to predict stock prices, and I’ll also show you what not to do (things I’ve seen in other popular blogs). Furthermore, we will benchmark the Prophet model with the naive forecast, to check whether or not one would really want to use this.
Note: This is an excerpt from my full VIP course, “Time Series Analysis, Forecasting, and Machine Learning”. If you want the code for this example, along with many, many other code examples on stock prices, sales data, and smartphone data, get the course!
The Prophet section will be part of the VIP version only, so get it now while the VIP coupon is still active!
## How does Prophet work?
The Prophet model is a 3 component, non-autoregressive time series model. Specifically:
$$y(t) = g(t) + s(t) + h(t) + \varepsilon(t)$$
The Prophet model is not autoregressive, unlike ARIMA, exponential smoothing, and the other methods we study in a typical time series course (including my own).
The 3 components are:
1) The trend $$g(t)$$ which can be either linear or logistic.
2) The seasonality $$s(t)$$, modeled using a Fourier series.
3) The holiday component $$h(t)$$, which is essentially a one-hot vector “dotted” with a vector of weights, each representing the contribution from their respective holiday.
## How to use Prophet for predicting stock prices
In my course, we do 3 experiments. Our data is Google’s stock price from approximately 2013-2018, but we only use the first 2 years as training data.
The first experiment is “plug-and-play” into Prophet with the default settings.
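For reference, the plug-and-play version is only a few lines (a sketch; df is assumed to already have Prophet's required 'ds' and 'y' columns):

```python
from prophet import Prophet  # on older installs: from fbprophet import Prophet

m = Prophet()  # all default settings
m.fit(df)      # df has columns 'ds' (dates) and 'y' (the price)

future = m.make_future_dataframe(periods=365)
forecast = m.predict(future)
m.plot(forecast)
m.plot_components(forecast)
```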
Here are the results:
Unfortunately, Prophet mistakenly believes there is a weekly seasonal component, which is the reason for the little “hairs” in the forecast.
When we plot the components of the model, we see that Prophet has somehow managed to find some weekly seasonality.
Of course, this is completely wrong! The model believes that the stock price increases on the weekends, which is highly unlikely because we don’t have any data for the weekend.
The second experiment is an example of what not to do. I saw this in every other popular blog, which is yet another “data point” that should convince you not to trust these popular data science blogs you find online (except for mine, obviously).
In this experiment, we set daily_seasonality to True in the model constructor.
Here are the results.
It seems like those weird little “hairs” coming from the weekly seasonal component have disappeared.
“The Lazy Programmer is wrong!” you may proclaim.
However, this is because you may not understand what daily seasonality really means.
Let’s see what happens when we plot the components.
This plot should make you very suspicious. Pay attention to the final chart.
“Daily seasonality” pertains to a pattern that repeats every day with sub-daily changes.
This cannot be the case, because our data only has daily granularity!
Lesson: don’t listen to those “popular” blogs.
For experiment 3, we set weekly seasonality to False. Alternatively, you could try playing around with the priors.
Here are the results.
Notice that the “little hairs” are again not present.
## Is this model actually good?
Just because you can make a nice chart, does not mean you have done anything useful.
In fact, you see the exact same mistakes in those blog articles and terrible Udemy courses promising to “predict stock prices with LSTMs” (which I will call out every chance I get).
One of the major mistakes I see in nearly every blog post about predicting stock prices is that they don’t bother to compare it to a benchmark. And as you’ll see, the benchmark for stock prices is quite a low bar – there is no reason not to compare.
Your model is only useful if it can beat the benchmark.
For stock price predictions, the benchmark is typically the naive forecast, which is the optimal forecast for a random walk.
Random walks are often used as a model for stock prices since they share some common attributes.
For those unfamiliar, the naive forecast is simply where you predict the last-known value.
Example: If today’s price on July 5 is $200 and I want to make a forecast with a 5-day horizon, then I will predict$200 for July 6, $200 for July 7, …, and$200 for July 10.
I won’t bore you with the code (although it’s included in the course if you’re interested), but the answer is: Prophet does not beat the naive forecast.
In fact, it does not beat the naive forecast on any horizon I tried (5 days, 30 days, 60 days).
Sidenote: it’d be a good exercise to try 1 day as well.
Are stock prices really random walks? Although this particular example provides evidence supporting the random walk hypothesis, in my course, the GARCH section will provide strong evidence against it! Again, it’s all explained in my latest course, “Time Series Analysis, Forecasting, and Machine Learning”. Only the VIP version will contain the sections on Prophet, GARCH, and other important tools.
The VIP version is intended to be limited-time only, and the current coupon expires in less than one month!
Get your copy today while you still can.
# Why do you need math for machine learning and deep learning?
July 9, 2021
In this article, I will demonstrate why math is necessary for machine learning, data science, deep learning, and AI.
Most of my students have already heard this from me countless times. College-level math is a prerequisite for nearly all of my courses already.
Perhaps you may believe I am biased, because I’m the one teaching these courses which require all this math.
It would seem that I am just some crazy guy, making things extra hard for you because I like making things difficult.
WRONG.
You’ve heard it from me many times. Now you’ll hear it from others.
## Example #1
Let’s begin with one of the most famous professors in ML, Daphne Koller, who co-founded Coursera.
In this clip, Lex Fridman asks what advice she would have for those interested in beginning a journey into AI and machine learning.
One important thing she mentions, which I have seen time and time again in my own experience, is that those without typical prerequisite math backgrounds often make mistakes and do things that don’t make sense.
She’s being nice here, but I’ve met many of these folks who not only have no idea that what they are doing does not make sense, they also tend to be overly confident about it!
Then it becomes a burden for me, because I have to put in more effort explaining the basics to you just to convince you that you are wrong.
For that reason, I generally advise against hiring people for ML roles if they do not know basic math.
## Example #2
I enjoyed this strongly worded Reddit comment.
[Screenshots of the original post and the top comment appeared here.]
## Example #3
Not exactly machine learning, but a very related field: quant finance.
In fact, many students taking my courses dream about applying ML to finance.
Well, it’s going to be pretty hard if you can’t pass these interview questions.
http://www.math.kent.edu/~oana/math60070/InterviewProblems.pdf
Think about this logically: All quants who have a job can pass these kinds of interview questions. But you cannot. How well do you think you will do compared to them?
## Example #4
Entrepreneur and angel investor Naval Ravikant explains why deriving (what we do in all of my in-depth machine learning courses) is much more important than memorizing on the Joe Rogan Experience.
Most beginner-level Udemy courses don’t derive anything – they just tell you random facts about ML algorithms and then jump straight to the usual 3 lines of scikit-learn code. Useless!
## Example #5
I found this in a thread about Lambda School (one of the many “developer bootcamps” in existence these days) getting sued for lying about its job placement rates and cutting down on its staff.
Two interesting comments here from people “in the know” about how bootcamps did not really help unless the student already had a math / science / STEM background. The first comment is striking because it is written by a former recruiter (who has the ability to see who does and doesn’t get the job).
That is to say, it is difficult to go from random guy off the street to professional software engineer from just a bootcamp alone (the implication here is that we can apply similar reasoning to online courses).
In this case, it wasn’t even that the math was being directly applied. A math / science background is important because it teaches you how to think properly. If 2 people can complete a bootcamp or online course, but only one has a STEM background and knows how to apply what they learned, that one will get the job, and the other will not.
Importantly, note that it’s not about the credentials, it’s purely about ability, as per the comments below.
## Example #6
This is from a thread concerning Yann LeCun’s deep learning course at NYU. As usual, someone makes a comment that you don’t need such courses when you can just plug your data into Tensorflow like everyone else. Another, more experienced developer sets them straight.
## Example #7
Hey, you guys have heard of Yann LeCun, right? Remember that guy? The guy who invented CNNs?
Let’s see what he has to say:
Math. Math. Oh and perhaps some more math.
That’s the gist of the advice to students interested in AI from Facebook’s Yann LeCun and Joaquin Quiñonero Candela, who run the company’s Artificial Intelligence Lab and Applied Machine Learning group respectively.
Tech companies often advocate STEM (science, technology, engineering and math), but today’s tips are particularly pointed. The pair specifically note that students should ~~eat their vegetables~~ take Calc I, Calc II, Calc III, Linear Algebra, Probability and Statistics as early as possible.
# Time Series: How to convert AR(p) to VAR(1) and VAR(p) to VAR(1)
July 1, 2021
This is a very condensed post, mainly just so I could write down the equations I need for my Time Series Analysis course. 😉
However, if you find it useful – I am happy to hear that!
[Get 75% off the VIP version here]
Suppose we have an AR(2):

$$y_t = b + \phi_1 y_{t-1} + \phi_2 y_{t-2} + \varepsilon_t$$
Suppose we create a vector containing both $$y_t$$ and $$y_{t -1}$$:
$$\begin{bmatrix} y_t \\ y_{t-1} \end{bmatrix}$$
We can write our AR(2) as follows:
$$\begin{bmatrix} y_t \\ y_{t-1} \end{bmatrix} = \begin{bmatrix} b \\ 0 \end{bmatrix} + \begin{bmatrix} \phi_1 & \phi_2 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} y_{t-1} \\ y_{t-2} \end{bmatrix} + \begin{bmatrix} \varepsilon_t \\ 0 \end{bmatrix}$$
Exercise: expand the above to see that you get back the original AR(2). Note that the 2nd line just ends up giving you $$y_{t-1} = y_{t-1}$$.
The above is just a VAR(1)!
You can see this by letting:
$$\textbf{z}_t = \begin{bmatrix} y_t \\ y_{t-1} \end{bmatrix}$$
$$\textbf{b}’ = \begin{bmatrix} b \\ 0 \end{bmatrix}$$
$$\boldsymbol{\Phi}’_1 = \begin{bmatrix} \phi_1 & \phi_2 \\ 1 & 0 \end{bmatrix}$$
$$\boldsymbol{\eta}_t = \begin{bmatrix} \varepsilon_t \\ 0 \end{bmatrix}$$.
Then we get:
$$\textbf{z}_t = \textbf{b}’ + \boldsymbol{\Phi}’_1\textbf{z}_{t-1} + \boldsymbol{\eta}_t$$
Which is a VAR(1).
Now let us try to do the same thing with an AR(3).
$$y_t = b + \phi_1 y_{t-1} + \phi_2 y_{t-2} + \phi_3 y_{t-3} + \varepsilon_t$$
We can write our AR(3) as follows:
$$\begin{bmatrix} y_t \\ y_{t-1} \\ y_{t-2} \end{bmatrix} = \begin{bmatrix} b \\ 0 \\ 0 \end{bmatrix} + \begin{bmatrix} \phi_1 & \phi_2 & \phi_3 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} y_{t-1} \\ y_{t-2} \\ y_{t-3} \end{bmatrix} + \begin{bmatrix} \varepsilon_t \\ 0 \\ 0 \end{bmatrix}$$
Note that this is also a VAR(1).
Of course, we can just repeat the same pattern for AR(p).
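If you want to play with this numerically, here’s a small sketch (mine, not from the course) that builds the VAR(1) companion form of a general AR(p):

```python
import numpy as np

def ar_to_var1(b, phis):
    """Companion form of y_t = b + phi_1 y_{t-1} + ... + phi_p y_{t-p} + eps_t."""
    p = len(phis)
    Phi = np.zeros((p, p))
    Phi[0, :] = phis               # first row carries the AR coefficients
    Phi[1:, :-1] = np.eye(p - 1)   # shifted identity copies y_{t-j} down one slot
    bvec = np.zeros(p)
    bvec[0] = b
    return bvec, Phi

bvec, Phi = ar_to_var1(0.5, [0.6, 0.3])
print(Phi)  # [[0.6 0.3], [1.  0. ]] -- matches the AR(2) matrix above
```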
The cool thing is, we can extend this to VAR(p) as well, to show that any VAR(p) can be expressed as a VAR(1).
Suppose we have a VAR(3).
$$\textbf{y}_t = \textbf{b} + \boldsymbol{\Phi}_1 \textbf{y}_{t-1} + \boldsymbol{\Phi}_2 \textbf{y}_{t-2} + \boldsymbol{\Phi}_3 \textbf{y}_{t-3} + \boldsymbol{ \varepsilon }_t$$
Now suppose that we create a new vector by concatenating $$\textbf{y}_t$$, $$\textbf{y}_{t-1}$$, and $$\textbf{y}_{t-2}$$. We get:
$$\begin{bmatrix} \textbf{y}_t \\ \textbf{y}_{t-1} \\ \textbf{y}_{t-2} \end{bmatrix} = \begin{bmatrix} \textbf{b} \\ 0 \\ 0 \end{bmatrix} + \begin{bmatrix} \boldsymbol{\Phi}_1 & \boldsymbol{\Phi}_2 & \boldsymbol{\Phi}_3 \\ I & 0 & 0 \\ 0 & I & 0 \end{bmatrix} \begin{bmatrix} \textbf{y}_{t-1} \\ \textbf{y}_{t-2} \\ \textbf{y}_{t-3} \end{bmatrix} + \begin{bmatrix} \boldsymbol{\varepsilon_t} \\ 0 \\ 0 \end{bmatrix}$$
This is a VAR(1)!
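The same construction in code for the vector case — a sketch (mine) of the block companion matrix, where `Phis` holds the coefficient matrices $$\boldsymbol{\Phi}_1, \dots, \boldsymbol{\Phi}_p$$:

```python
import numpy as np

def var_to_var1(Phis):
    """Block companion matrix of a k-dimensional VAR(p)."""
    p, k = len(Phis), Phis[0].shape[0]
    big = np.zeros((k * p, k * p))
    big[:k, :] = np.hstack(Phis)        # top block row: Phi_1 ... Phi_p
    big[k:, :-k] = np.eye(k * (p - 1))  # identity blocks on the subdiagonal
    return big

Phis = [0.5 * np.eye(2), 0.2 * np.eye(2), 0.1 * np.eye(2)]
print(var_to_var1(Phis).shape)  # (6, 6): a VAR(1) in the stacked vector z_t
```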
http://googology.wikia.com/wiki/User_blog:LittlePeng9/The_largest_number_ever | # The largest number ever
WhfgFbLbhXabj,GuvfOybtcbfgVfa'gVagraqrqGbOrGnxraFrevbhfyl what?
So, by employing methods from model theory, I have found a way to actually find the largest natural number ever, thus setting a limit for all of our attempts of finding larger and larger numbers. Before you say that, I'll answer - you CAN add 1, but that won't make a difference.
In order to do this, I'm going to need an axiomatic system for number theory in order to find the largest number ever. But I find Peano arithmetic too mainstream; let's deal with something weaker (after all, a weaker theory can't give us larger numbers (can it?)). I'll be using, throughout this blog post, Robinson arithmetic. For those unfamiliar with it, here are the axioms:
1. $$Sx\neq 0$$
2. $$(Sx=Sy)\Rightarrow x=y$$
3. $$x=0\lor \exists y(Sy=x)$$
4. $$x+0=x$$
5. $$x+Sy=S(x+y)$$
6. $$x\cdot 0=0$$
7. $$x\cdot Sy=x\cdot y+x$$
where $$S$$ is a unary function (successor), and $$+,\cdot$$ are binary functions (addition and multiplication).
We also define $$x\leq y\Leftrightarrow_{def}\exists z: x+z=y$$.
Now, in order to get the largest number ever, we need something larger than $$0,1,2,3,...$$, after all we all know that this set has no largest element. But we want the largest element. So what should we do? Let's add the largest number! I'll denote this number $$N$$. But what would be a successor of this largest number? Let's make it $$N$$ itself! But we have to do addition and multiplication too... Let's make all of these operations equal to $$N$$ if they involve $$N$$! Except for $$N\cdot 0$$. This will be $$0$$. $$0$$ is special. But not very special. Fark the logic, I want $$0\cdot N=N$$.
But now you can shout "YOU CAN'T JUST ADD THE LARGEST NUMBER, THIS MAKES NO SENSE!" And here, my dear, you are wrong. This isn't wrong. This is actually correct. My system satisfies all the axioms, so how can this be wrong? Axiomatic systems are designed to model reality after all. For those of you who don't believe me, here is verification of the axioms:
1. Straightforward, $$SN$$ isn't $$0$$, and no other successor is $$0$$.
2. Similar, $$N$$ isn't counterexample, so there is none.
3. Ditto.
4. Ditto.
5. If $$x=N$$, then $$x+Sy=N+Sy=N=SN=S(N+y)=S(x+y)$$. If $$y=N$$ then $$x+Sy=x+SN=x+N=N=SN=S(x+N)=S(x+y)$$.
6. $$0$$ is special.
7. If $$x=N$$, then $$x\cdot Sy=N\cdot Sy=N=x\cdot y+N=x\cdot y+x$$, because $$Sy$$ is never special, by 1. If $$y=N$$ then $$x\cdot Sy=x\cdot SN=x\cdot N=N=N+x=x\cdot N+x=x\cdot y+x$$ because fark logic.
So as you have seen above, all of my definitions are actually perfectly logical, so they must be right (for those interested, the deduction rules in propositional logic are sound, i.e. if the axioms are logically valid, then so is everything we can derive from them, which is a justification for my claim). Thus $$N$$ must really exist.
One last thing is to verify that $$N$$ is indeed the largest. By our definition of $$\leq$$, we have to check that, for every $$x$$, there is a $$z$$ with $$x+z=N$$. But $$z=N$$ will do the trick! So $$N$$ is indeed the largest natural number we can have. And it's not infinity, it's an element of Robinson arithmetic's universe of discourse, i.e. natural numbers.
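If you don't trust my hand-verification, here is a small Python sketch (mine, not part of the original post) that brute-force checks axioms 1–7 on a finite sample of the structure $$\mathbb N \cup \{N\}$$:

```python
# Model-check the Robinson axioms on naturals plus an adjoined top element.
TOP = "N"  # the "largest number" from the post

def S(x):
    return TOP if x == TOP else x + 1

def add(x, y):
    return TOP if TOP in (x, y) else x + y

def mul(x, y):
    if y == TOP:                     # includes 0 * TOP = TOP ("fark logic")
        return TOP
    if x == TOP:
        return 0 if y == 0 else TOP  # TOP * 0 = 0, as axiom 6 demands
    return x * y

dom = list(range(6)) + [TOP]
for x in dom:
    assert S(x) != 0                                    # axiom 1
    assert add(x, 0) == x                               # axiom 4
    assert mul(x, 0) == 0                               # axiom 6
    assert x == 0 or any(S(y) == x for y in dom)        # axiom 3 (TOP = S(TOP))
    for y in dom:
        if S(x) == S(y):
            assert x == y                               # axiom 2
        assert add(x, S(y)) == S(add(x, y))             # axiom 5
        assert mul(x, S(y)) == add(mul(x, y), x)        # axiom 7
print("all Robinson axioms hold on the finite sample")
```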
For the sake of giving a name to $$N$$, I'm gonna give it a name of Robinson number.
GuvfOybtcbfgJnfJevggraFbGungVgYbbxfYvxrCresrpgylInyvqQrevingvba,OhgGurerNerSrjYbtvpnyZvfgnxrfJuvpuVYrnirLbhGbSvaq
https://www.gamedev.net/forums/topic/519899-shared-objects-in-linux/ | # Shared objects in Linux
## Recommended Posts
Hi everyone. I have a problem I don't know how to solve. I have an abstract class, yeah pure virtual, which serves as an interface. Then in a shared object I made an implementation of the base class. With another class I made all the loading things. Everything compiles fine. But when I run it, it says something like:
Quote:
ERROR: Couldn't load DPI_Render/DPI_render_OGL/libdpiogl.so DPI_Render/DPI_render_OGL/libdpiogl.so: undefined symbol: _ZN11cDPI_RenderD2Ev
where cDPI_Render is the name of my abstract class. Here are the relevant portions of my code.
```cpp
// This is the base class
#ifndef DPI_RENDER_DEVICE
#define DPI_RENDER_DEVICE

class cDPI_Render {
public:
    cDPI_Render(){};
    virtual ~cDPI_Render(){};
    virtual bool Init(int width, int height)=0;
    virtual void ClearScreen(float r, float g, float b)=0;
    virtual void Update(void)=0;
    virtual void Release(void)=0;
};

extern "C" {
    bool createRenderDevice(cDPI_Render *dpird_ptr);
    void releaseRenderDevice(cDPI_Render *dpird_ptr);
};

#endif
```
I used extern "C" so the names don't get screwed up in the shared object, so I could get them easily with dlsym(...). This is the code that fails:
```cpp
bool cSOLoader::create(int flag, char *shName)
{
    bool (*render_ptr)(cDPI_Render *dpir_ptr);
    bool (*input_ptr)(cDPI_Input *dpii_ptr);
    bool hres;
    switch(flag)
    {
    case DPI_RENDER:
        sh_module[DPI_RENDER]=dlopen(shName, RTLD_NOW|RTLD_GLOBAL); /* Here it fails */
        // ...
```
And finally, the test program which shows how I load the library.

```cpp
#include "DPI_Render/dpi_render.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    cDPI_Render *render;
    printf("Iniciando\n");
    return 0;
}
```
Somehow the name of the abstract class gets screwed when compiling the shared object. Any ideas how to fix this?
---
extern "C" only works for functions. Classes related symbols are always going to get mangled to encode more informations in them than just the name.
There a small command line utility that comes with binutils (and so should be available if gcc is) called c++filt that can decode a mangled c++ symbol name.
So, with that said, if you run _ZN11cDPI_RenderD2Ev through c++filt, you'll see that it is cDPI_Render::~cDPI_Render().
What probably happens is that the virtual function table for cDPI_Render in libdpiogl.so references cDPI_Render's destructor, but the actual non-inline version of the destructor hasn't been generated because it's declared inline.
A virtual function is always going to be called through the virtual function table though, so declaring them inline is pointless and makes little sense.
So my suggestion is: try to put cDPI_Render's destructor body in a cpp file instead of directly in the header file.
---
Well. Testing what you said, first I commented out the constructor and the destructor and it worked fine (although possibly with memory leaks). Then I uncommented both, but deleted the body of both.
I'm still getting the same problem.
Do I really have to create a cpp just to solve this problem? I mean, neither the constructor nor the destructor actually have any code, so what other way can this be solved?
EDIT:
Alright. I don't know why, but this time the code worked. I left it just as I posted it above and it works. ??????????
Any ideas, what's going on?
---
The root of your problem is that there was no implementation of the destructor in the runtime linkloader's search path at the time it was needed.
If it didn't work before but works now, I would suspect some procedural problem in which you are picking up older libraries at run time, and after some futzing around the newer libraries get put in place.
There is also the problem of where the class-specific code gets emitted: it's pretty arbitrary but you can force it with GCC by providing a non-inline destructor in a separate .cc file. That would eliminate the possibility that the compiler or linker is guessing (and guessing wrong) about where to emit the destructor code. Don't forget that even a default (empty) destructor may have automatically-generated code.
---
Well, this time i've got some inner problems with this code.
At first it seems to run fine.
But after I tried to use the implementation through the interface I get a segmentation violation. I used GDB to see what's going on.
What I've got is that in dlopen (where I open the shared library) these messages appear:
Quote:
I still don't get why it can't find the correct symbols. I mean, the .so is not in a system dir, it's inside the project folder hierarchy, and I load it explicitly.
Any suggestions?
---
I don't really have any idea regarding that seg fault, but something occurred to me: that pure virtual class that you implement in your shared object, it's located in your executable, right?
The problem is that by default, symbols from the executable itself aren't exported and thus shared objects can't import them. You have to pass the -E option to the linker when linking the executable (using -Wl,-E when invoking gcc if you link by calling the gcc front-end) for that.
There is some info about how those things can cause issues in the gcc FAQ: http://gcc.gnu.org/faq.html#dso
---
Quote:
Original post by Dospro: Do I really have to create a cpp just to solve this problem? I mean, neither the constructor nor the destructor actually have any code, so what other way can this be solved?
The problem. The ctor/dtor have not been compiled into the *.so by the compiler. Whether they do or do not have any code is entirely moot.
The solution. Somewhere in your code - at least once - you must have included that header file into a cpp file. The simple act of doing that, will allow the compiler to compile those empty ctor/dtors into empty code blocks in your so. C++ compilers do not compile header files.
As for your segfault, my guess from looking at your func prototypes is that you're doing something very dumb indeed... I'm guessing that you are doing this in your dll:
```cpp
bool createRenderDevice(cDPI_Render *dpird_ptr){
    // the address will be lost when you exit the scope.
    dpird_ptr = new dllRenderer();
    return true;
}
```
which is pretty darned wrong if you ask me. surely it should be this... ?
```cpp
bool createRenderDevice(cDPI_Render** dpird_ptr){
    *dpird_ptr = new dllRenderer();
    return true;
}
```
Which would explain why you seg fault since the pointer passed to createRenderDevice will still be un-initialised after the function is called.
---
Quote:
The solution. Somewhere in your code - at least once - you must have included that header file into a cpp file. The simple act of doing that, will allow the compiler to compile those empty ctor/dtors into empty code blocks in your so. C++ compilers do not compile header files.
Well, actually, i include the header file inside the .so cpp.
About the wrong code, I don't understand why the address is lost?
I pass a pointer, and when I use the operator new the pointer points to this new address? Am I wrong? Why would I need double pointers?
---
Quote:
Original post by Dospro: About the wrong code, I don't understand why the address is lost? I pass a pointer, and when I use the operator new the pointer points to this new address? Am I wrong? Why would I need double pointers?
Google 'passing argument by value vs by reference in C++' and you shall have your answer. I'm guessing you skipped all of the chapters on functions and pointers.....
```cpp
void func(int a){ a = 22; }

int b = 1;
func(b);
cout << b << endl; // 1
```
And that is what your code does:
```cpp
void func(int* a){ a = new int; }

int* b = 0;
func(b);
cout << b << endl; // prints 0
```
Once you've understood the difference between passing args by value vs reference, you'd realise pretty quickly that you are trying to modify the value of b within func, which means you have to pass it by reference – not by value as you are doing. i.e.
```cpp
void func(int** a){ *a = new int; }

int* b = 0;
func(&b);
cout << b << endl; // prints some random address
```
or even
void func(int*& a){ a = new int;}int* b = 0;func(b);cout << b << endl; // prints some random address
---
Well, I already understand both types of argument passing quite well.
What i didn't actually know was that pointers can actually be passed as reference.
Still, kind of confusing, but works.
Thanks, now everything works fine.
Thanks everybody.
https://math.stackexchange.com/questions/1885736/alternate-characterization-of-linearity | # Alternate Characterization of Linearity
This question is prompted by this video on matrices and linear transformations, which I highly recommend as a pedagogical tool. In it, the author characterizes linear transformations in the following way (I'm paraphrasing and formalizing)
Define a line in a vector space to be a set of the form $L = \{u + t\,v: t \in \Bbb R\}$ for some vectors $u$ and $v$ (note: $L$ may consist of a single point). That is, $L$ is an affine subspace of dimension at most $1$.
A function $T: \Bbb R^n \to \Bbb R^m$ is linear if:
1. $T(0) = 0$
2. For any line $L \subset \Bbb R^n$, the image $T(L)$ is a line in $\Bbb R^m$
I like this definition because of its geometric appeal and the fact that it manages to "put the line in linear".
Of course, the traditional definition of a linear map is one which preserves linear combinations.
My Question:
1. How should one prove that a function satisfying this definition preserves linear combinations?
2. Can this be proven in a beginner-friendly way?
I'll admit I haven't really banged my head against this one, but here are my thoughts: it is equivalent to prove that a function that satisfies only the second condition (i.e. maps lines to lines) is an affine transformation, i.e. that it preserves affine combinations. From there, it would suffice to note that if $T$ is affine, then $x \mapsto T(x) - T(0)$ is linear.
That being said, I don't see a quick way to handle that proof off the top of my head. Moreover, if this really is the quickest way to reach a proof, it seems that proving this in linear algebra 101 is a bit too ambitious (which is not to say that this fact fails to be pedagogically useful). I'm guessing a little real-analysis might have to come in at some point.
• Does this definition generalize to vector spaces over any field? What about modules over a ring? I, too, don't see a quick proof. – Brian Fitzpatrick Aug 8 '16 at 19:40
• @BrianFitzpatrick great question, I have no clue – Omnomnomnom Aug 8 '16 at 19:54
• I'm starting to think this is false. What about $T:\Bbb R\to\Bbb R$ given by $T(x)=x^3$? – Brian Fitzpatrick Aug 8 '16 at 21:04
• Or any origin-preserving surjective map $\Bbb R\to\Bbb R$? – Brian Fitzpatrick Aug 8 '16 at 21:08
• And I think the first answer to this question gives us a nonlinear line-preserving and origin-preserving map $\Bbb R^2\to\Bbb R^2$. – Brian Fitzpatrick Aug 9 '16 at 0:35
Assume that $X$ and $Y$ are any real vector spaces and that $f:X\to Y$ is so that for any $u,v\in X$ there are $u',v'\in Y$ such that $f(\{u+tv\ |\ t\in\mathbb R\})=\{u'+sv'\ |\ s\in\mathbb R\}$ and $f(0)=0$. Then $f$ satisfies $f(u+qv)=f(u)+qf(v),\ u,v\in X,q\in\mathbb Q$.
The assumption can be rewritten as follows: There are functions $a,b:X\to Y$ and $g:\mathbb R\to\mathbb R$ such that $$f(u+tv)=a(u)+g(t)b(v),\qquad u,v\in X,t\in\mathbb R.$$
We first obtain from
\begin{align} 0&=f(0+0\cdot v)=a(0)+g(0)b(v), \\ 0&=f(0+t\cdot 0)=a(0)+g(t)b(0), \\ 0&=f(tv-tv)=a(tv)+g(-t)b(v), \\ 0&=f(tv+t(-v))=a(tv)+g(t)b(-v), \end{align}
that
\begin{align*} g(0)b(v)&=g(t)b(0), \\ a(tv)&=-g(-t)b(v)=-g(t)b(-v),\qquad v\in X,t\in\mathbb R. \end{align*}

Case 1: $g(0)\neq 0$: Then $b(v)=b(0)$. If $b(0)\neq0$, then $g(t)=g(0)$ and $a(v)=-g(1)b(-v)=-g(0)b(0)$ and $f\equiv0$. If $b(0)=0$, then $a(v)=-g(1)b(-v)=0$ and again $f\equiv0$.

Case 2: $g(0)=0$: Then $a(u)=f(u)$ for all $u\in X$. In particular,
\begin{align*} f(u+tv)&=f(u)+g(t)b(v), \\ f(u+v)&=f(u)+g(1)b(v),\qquad u,v\in X,t\in\mathbb R.\tag{1} \end{align*}
If $g(1)=0$, then $f(v)=f(0+v)=f(0)=0$ and $f\equiv0$. We therefore assume that $g(1)\neq0$. Then letting $u=0$ in (1) gives $f(v)=g(1)b(v)$. On the other hand, letting $u=-v$ in (1) gives $f(-v)=-g(1)b(v)$ and hence $f(-v)=-f(v)$ and
\begin{align} f(u+tv)=f(u)+\tfrac{g(t)}{g(1)} f(v),\qquad u,v\in X,t\in\mathbb R.\tag{2} \end{align}

Suppose that $f\not\equiv0$. Then there is some $x\in X$ such that $f(x)\neq0$. From (2) we obtain
\begin{align} f((s+t)x)&=\tfrac{g(s+t)}{g(1)}f(x), \\ f((s+t)x)&=f(sx+tx)=f(sx)+\tfrac{g(t)}{g(1)}f(x)=\left(\tfrac{g(s)}{g(1)}+\tfrac{g(t)}{g(1)}\right)f(x) \end{align}
and consequently $g(s+t)=g(s)+g(t)$ for all $s,t\in\mathbb R$. Thus, $g(q)=qg(1)$ for all $q\in\mathbb Q$ and $$f(u+qv)=f(u)+qf(v),\qquad u,v\in X,q\in\mathbb Q.$$
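One remark that is not part of the answer above: passing from $\mathbb Q$ to $\mathbb R$ requires a regularity assumption on $f$ (say, continuity), since $g$ only satisfies Cauchy's functional equation, which has non-linear solutions in general. Under continuity:

$$g(s+t)=g(s)+g(t)\ \ \forall s,t\in\mathbb R \quad\Longrightarrow\quad g(t)=t\,g(1)\ \ \forall t\in\mathbb R,$$

by approximating any real $t$ with rationals $q_n\to t$; hence $f(u+tv)=f(u)+tf(v)$ for all real $t$, i.e. $f$ is linear (using $f(0)=0$).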
http://math.stackexchange.com/questions/222418/example-of-a-discontinuous-and-bounded-function-for-the-limiting-case-w1-n | Example of a discontinuous and bounded function for the limiting case $W^{1,n}$
Let $\Omega = B(0,1)$ be the open unit disc in $\mathbb{R}^2$. I'm looking for an example of a discontinuous and bounded function in $W^{1,2}(\Omega)$.
I know the example $u(x) = \log \left( \log \left(1 + \frac{1}{|x|}\right)\right)$ of a discontinuous but unbounded function in $W^{1,2}(\Omega)$. I've tried playing with things like $(x,y) \mapsto \frac{x}{(x^2 + y^2)^{1/2}}$ but it didn't get me far. Any insight on how to try and construct such examples and how to expect such functions to behave would be much welcomed!
1 Answer
One can get an example just by composing the function $u(x,y)$ with the function $f(x) = \sin(x)$. By some variant of a chain rule for Sobolev functions, composing a function $u \in W^{1,p}(\Omega)$ with a function $f \in C^1_B(\mathbb{R})$ results in a function in $W^{1,p}(\Omega)$. Choosing for $f(x)$ a bounded function that doesn't have a limit when $x \rightarrow \infty$ and composing it with an unbounded $u$ gives the required example.
Of course, the belonging of $f \circ u$ to $W^{1,2}(\Omega)$ can be easily checked directly.
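As a quick numerical sanity check (my addition, not part of the answer), one can verify that the radial Dirichlet energy of $\sin(u)$ on the punctured disc stays bounded as the inner cutoff shrinks:

```python
# For u(r) = log(log(1 + 1/r)) on the unit disc, the W^{1,2} seminorm of
# sin(u) is the radial integral 2*pi * int_0^1 |d/dr sin(u(r))|^2 r dr.
import numpy as np
from scipy.integrate import quad

def du(r):
    # derivative of u(r) = log(log(1 + 1/r))
    return -1.0 / (np.log(1 + 1/r) * r * (r + 1))

integrand = lambda r: (np.cos(np.log(np.log(1 + 1/r))) * du(r))**2 * r

for eps in (1e-4, 1e-8, 1e-12):
    val, _ = quad(integrand, eps, 1, limit=200)
    print(eps, 2 * np.pi * val)  # values settle instead of blowing up
```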
https://plainmath.net/algebra-i/9052-find-when-equal-where-satisfies-recurrence-relation-equal-ffrac-equal | Wribreeminsl
2021-01-08
Find $f\left(n\right)$ when $n=10^k$, where $f$ satisfies the recurrence relation $f\left(n\right)=f\left(\frac{n}{10}\right)$ with $f\left(1\right)=10$.
delilnaT
Since we need to find $f\left(n\right)$ when $n=10^k$, we need to determine a pattern when $n$ is a power of 10.
Substituting $n=10$ into $f\left(n\right)=f\left(\frac{n}{10}\right)$ gives $f\left(10\right)=f\left(\frac{10}{10}\right)=f\left(1\right)$.
Since $f\left(1\right)=10$, then $f\left(10\right)=10$.
Substituting $n=100$ into $f\left(n\right)=f\left(\frac{n}{10}\right)$ gives $f\left(100\right)=f\left(\frac{100}{10}\right)=f\left(10\right)$.
Since $f\left(10\right)=10$, then $f\left(100\right)=10$.
Substituting $n=1000$ into $f\left(n\right)=f\left(\frac{n}{10}\right)$ gives $f\left(1000\right)=f\left(\frac{1000}{10}\right)=f\left(100\right)$.
Since $f\left(100\right)=10$, then $f\left(1000\right)=10$.
Based on these results, when $n$ is a power of 10, the value of $f(n)$ is 10. Therefore, when $n=10^k$ we have $f\left(n\right)=10.$
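A quick computational check of this pattern (my addition, not part of the original answer):

```python
def f(n):
    # the recurrence f(n) = f(n // 10) with base case f(1) = 10
    return 10 if n == 1 else f(n // 10)

assert all(f(10**k) == 10 for k in range(7))
print([f(10**k) for k in range(4)])  # [10, 10, 10, 10]
```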
https://deepai.org/publication/distributed-kalman-filtering-distributed-optimization-viewpoint | # Distributed Kalman-filtering: Distributed optimization viewpoint
We consider the Kalman-filtering problem with multiple sensors which are connected through a communication network. If all measurements are delivered to one place called fusion center and processed together, we call the process centralized Kalman-filtering (CKF). When there is no fusion center, each sensor can also solve the problem by using local measurements and exchanging information with its neighboring sensors, which is called distributed Kalman-filtering (DKF). Noting that CKF problem is a maximum likelihood estimation problem, which is a quadratic optimization problem, we reformulate DKF problem as a consensus optimization problem, resulting in that DKF problem can be solved by many existing distributed optimization algorithms. A new DKF algorithm employing the distributed dual ascent method is provided and its performance is evaluated through numerical experiments.
## I Introduction
It goes without saying that the Kalman-filter, an optimal state estimator for dynamic systems, has had a huge impact on various fields such as engineering, science, economics, etc. [1, 2, 3, 4]. Basically, the filter predicts the expectation of the system state and its covariance based on the dynamic model and the statistical information on the model uncertainty or process noise, and then correct them using new measurement, sensor model, and the information on measurement noise. When multiple sensors possibly different types are available, we can just combine the sensor models to process the measurements altogether.
Thanks to the rapid development of sensor devices and communication technology, we are now able to monitor large scale systems or environments such as traffic network, plants, forest, sea, etc. In those systems, sensors are geometrically distributed, may have different types, and usually not synchronized. To process the measurements, the basic idea would be to deliver all the data to one place, usually called fusion center, and do the correction step as in the case of multiple sensors. This is called the centralized Kalman-filtering (CKF). As expected, CKF requires a powerful computing device to handle a large number of measurements and sensor models, is exposed to a single point of failure, and is difficult to scale up. In order to overcome these drawbacks, researchers developed the distributed Kalman-filtering (DKF) in which each sensor in the network solves the problem by using local measurements and communicating with its neighbors. Compared with CKF, DKF is advantageous in terms of scalability, robustness to component loss, and computational cost, and thus the literature on this topic is expanding rapidly
[5, 6, 7, 8, 9, 10, 11, 12]. For more details on DKF, see the survey [13] and references therein.
Some relevant results are summarized as follows. In [5], the author proposed scalable distributed Kalman-Bucy filtering algorithms in which each node only communicates with its neighbors. An algorithm with average consensus filters using the internal models of signals being exchanged is proposed in [7]. It is noted that the algorithm works in a single-time scale. In the work [11], the authors proposed a continuous-time algorithm that makes each norm of all local error covariance matrices be bounded, thus overcomes a major drawback of [5]. In [10], an algorithm with a high gain coupling term in the error covariance matrix is introduced and it is shown that the local error covariance matrix approximately converges to that of the steady-state centralized Kalman-filter. An in-depth discussion on distributed Kalman-filtering problem has been provided in [14, 15], and the algorithms that exchange the measurements themselves, or exchange certain signals instead of the measurements are proposed, respectively.
Although each of the existing algorithms has its own novel ideas and advantages, to the best of the authors’ knowledge, we do not have a unified viewpoint for DKF problem. Motivated by this, it is the aim of this paper to provide a framework for the problem from the perspective of distributed optimization.
We start by observing that the correction step of Kalman-filtering is basically an optimization problem [2, 3, 4], and then formulate DKF problem as a consensus optimization problem, which provides a fresh look at the problem. As a result, DKF problem can be solved by many existing distributed optimization algorithms [16, 17, 18, 19, 20], and various DKF algorithms can be expected to follow. As an instance, a new DKF algorithm employing the dual ascent method [20], one of the basic algorithms for distributed optimization problems, is provided in this paper.
This paper is organized as follows. In Section II, we recall CKF problem from the optimization perspective, and connects DKF problem to a distributed optimization problem. A new DKF algorithm based on dual ascent method is proposed in Section III, and numerical experiments evaluating the proposed algorithm is conducted in Section IV.
Notation: For matrices $A_1, \dots, A_N$, $\mathrm{diag}(A_1, \dots, A_N)$ denotes the block diagonal matrix composed of $A_1$ to $A_N$; for scalars $a_1, \dots, a_N$, $\mathrm{diag}(a_1, \dots, a_N)$ is defined similarly. $1_N$ denotes the vector whose components are all 1, and $I_n$ is the identity matrix whose dimension is $n$. The maximum and minimum eigenvalues of a matrix $A$ are denoted by $\sigma_{\max}(A)$ and $\sigma_{\min}(A)$, respectively. For a random variable $x$, $x \sim \mathcal N(\mu, \Sigma)$ denotes that $x$ is normally distributed with mean $\mu$ and covariance $\Sigma$, and $\mathbb E\{x\}$ denotes the expected value of a random variable $x$. The half vectorization of a symmetric matrix $A \in \mathbb R^{n \times n}$ is denoted by $\mathrm{vech}(A)$, whose elements $a_{ij}$ of $A$ are filled in column-major order, and $\mathrm{vech}^{-1}$ denotes the inverse function of $\mathrm{vech}$. For a function $f$, $\nabla f$ denotes the gradient vector.
Graph theory: For a network consisting of $N$ nodes, the communication among nodes is modeled by a graph $\mathcal G$. Let $A = [a_{ij}]$ be an adjacency matrix associated to $\mathcal G$, where $a_{ij}$ is the weight of the edge between nodes $i$ and $j$. If node $j$ communicates to node $i$ then $a_{ij} > 0$, or $a_{ij} = 0$ if not. Assume there is no self edge, i.e., $a_{ii} = 0$. The Laplacian matrix associated to the graph $\mathcal G$, denoted by $L$, is a matrix such that $[L]_{ii} = \sum_{j \neq i} a_{ij}$ and $[L]_{ij} = -a_{ij}$ for $i \neq j$. $\mathcal N_i$ is the set of nodes communicating with node $i$, i.e., $\mathcal N_i = \{j : a_{ij} > 0\}$.
## Ii Distributed Kalman-filtering and Its Connection to Consensus Optimization
In this section, we recall CKF problem in terms of optimization, which is the maximum likelihood estimation [2], and establish a connection between DKF and distributed optimization.
Consider a discrete-time linear system with $N$ sensors described by

$$x_{k+1} = F x_k + w_k \qquad (1a)$$

$$y_k = H x_k + v_k = \begin{bmatrix} H_1 \\ H_2 \\ \vdots \\ H_N \end{bmatrix} x_k + \begin{bmatrix} v_{1,k} \\ v_{2,k} \\ \vdots \\ v_{N,k} \end{bmatrix} \qquad (1b)$$

where $x_k \in \mathbb R^n$ is the state vector of the dynamic system, $y_k \in \mathbb R^m$ is the output vector, and $y_{i,k} \in \mathbb R^{m_i}$ is the output associated to sensor $i$; the $m_i$’s satisfy $\sum_{i=1}^N m_i = m$. $F$ is the system matrix and $H$ is the output matrix consisting of $H_i$, which is the output matrix associated to sensor $i$. $w_k$ with $w_k \sim \mathcal N(0, Q)$ is the process noise, $v_{i,k}$ is the measurement noise on sensor $i$, and $v_k = [v_{1,k}^\top, \dots, v_{N,k}^\top]^\top$ with $v_{i,k} \sim \mathcal N(0, R_i)$. Assume that the pair $(F, H)$ is observable, and each $v_{i,k}$ is uncorrelated to $v_{j,k}$ for $i \neq j$.
### Ii-a Centralized Kalman-filtering problem from the optimization perspective
If all measurements from the $N$ sensors are delivered to one place and processed together, the problem can be seen as one with an imaginary sensor that measures $y_k$ with complete knowledge of $H$ and $R = \mathrm{diag}(R_1, \dots, R_N)$, thus called centralized Kalman-filtering. The filtering consists of two steps, prediction and correction. In the prediction step, the predicted estimate and error covariance matrix are obtained based on the previous estimate, error covariance matrix, and the system dynamics. The update rules are given by

$$\hat x_{k|k-1} = F \hat x_{k-1}$$

$$P_{k|k-1} = \mathbb E\{e_{k|k-1} e_{k|k-1}^\top\} = F\, \mathbb E\{e_{k-1} e_{k-1}^\top\}\, F^\top + \mathbb E\{w_k w_k^\top\} = F P_{k-1} F^\top + Q$$

where $\hat x_{k-1}$ and $P_{k-1}$ are the estimate and error covariance matrix at the previous time, respectively, and $e_{k|k-1} = x_k - \hat x_{k|k-1}$, $e_{k-1} = x_{k-1} - \hat x_{k-1}$. Assume that $P_0$ is initialized as a positive definite matrix (usually set as a scaled identity).
In the correction step, the predicted estimate and the error covariance matrix are updated based on the current measurements containing the measurement noise. The correction step can be regarded as a process to find the optimal parameter (estimate) from the predicted estimate $\hat x_{k|k-1}$, error covariance $P_{k|k-1}$, and the observation $y_k$. In fact, it is known that this step is an optimization problem (maximum likelihood estimation, MLE [2]) and we recall the details below.

Let $z_k = [y_k^\top,\ \hat x_{k|k-1}^\top]^\top$ and $\bar H_c = [H^\top,\ I_n]^\top$. Then, $z_k = \bar H_c x_k + \zeta_k$ where $\zeta_k \sim \mathcal N(0, S_k)$ with $S_k = \mathrm{diag}(R, P_{k|k-1})$. For the random variable $z_k$, the likelihood function is given by

$$L(\xi_c) = \frac{1}{\sqrt{(2\pi)^{m+n} |S_k|}}\, e^{-\frac{1}{2}(z_k - \bar H_c \xi_c)^\top S_k^{-1} (z_k - \bar H_c \xi_c)}$$

where the right-hand side is nothing but the probability density function of $z_k$ with the free variable $\xi_c$.

Now, the maximum likelihood estimate is defined as

$$\hat x_k := \underset{\xi_c}{\mathrm{argmax}}\ L(\xi_c).$$

Since $L(\xi_c)$ is a monotonically decreasing function with respect to $f_c(\xi_c) := \frac{1}{2}(z_k - \bar H_c \xi_c)^\top S_k^{-1} (z_k - \bar H_c \xi_c)$, $\hat x_k$ can also be obtained by

$$\hat x_k = \underset{\xi_c}{\mathrm{argmin}}\ f_c(\xi_c) = \hat x_{k|k-1} + K(y_k - H \hat x_{k|k-1}) \qquad (2)$$

where $K = (\bar H_c^\top S_k^{-1} \bar H_c)^{-1} H^\top R^{-1}$. With the matrix inversion lemma, the Kalman gain can be written as $K = P_{k|k-1} H^\top (H P_{k|k-1} H^\top + R)^{-1}$, which appears in the standard Kalman-filtering.

On the other hand, by the definition of the error covariance $P_k$, the update rule of the error covariance matrix of CKF is given by

$$P_k = (\bar H_c^\top S_k^{-1} \bar H_c)^{-1} = (H^\top R^{-1} H + P_{k|k-1}^{-1})^{-1} \qquad (3)$$

$$\phantom{P_k} = P_{k|k-1} - (H^\top R^{-1} H + P_{k|k-1}^{-1})^{-1} H^\top R^{-1} H P_{k|k-1}.$$

For more details, see [4, 2, 3].
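For reference, a minimal sketch of the centralized correction step corresponding to (2)–(3); this is a standard Kalman update, and the equivalence of the two gain/covariance forms is exactly the matrix inversion lemma:

```python
import numpy as np

def ckf_correct(x_pred, P_pred, H, R, y):
    """Centralized correction step, cf. (2)-(3)."""
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain (inversion-lemma form)
    x = x_pred + K @ (y - H @ x_pred)      # estimate update, eq. (2)
    P = P_pred - K @ H @ P_pred            # covariance update, equivalent to (3)
    return x, P
```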
### Ii-B Derivation of distributed Kalman-filtering problem
Now, we consider a sensor network which consists of $N$ sensors and suppose that each sensor runs an estimator without the fusion center. Each estimator in the network tries to find the optimal estimate by processing the local measurement and exchanging information with its neighbors through the communication network. The communication network among estimators is modeled by a graph $\mathcal G$ and the Laplacian matrix associated with $\mathcal G$ is denoted by $L$. Under the setting (1), estimator $i$ measures only the local measurement $y_{i,k}$, and the parameters $H_i$ and $R_i$ are kept private to estimator $i$. It is noted that the pair $(F, H_i)$ is not necessarily observable. We assume that the graph $\mathcal G$ is connected and undirected, i.e., $a_{ij} = a_{ji}$, and $F$ and $Q$ are open to all estimators.

Similar to CKF, DKF has two steps, local prediction and distributed correction. In the local prediction step, each estimator predicts

$$\hat x_{i,k|k-1} = F \hat x_{i,k-1}$$

$$P_{i,k|k-1} = F P_{i,k-1} F^\top + Q$$

where $\hat x_{i,k-1}$ and $P_{i,k-1}$ are local estimates of $\hat x_{k-1}$ and $P_{k-1}$, respectively, that estimator $i$ holds.

In the distributed correction step, each estimator solves the maximum likelihood estimation in a distributed manner. The objective function of CKF can be rewritten as

$$\sum_{i=1}^N f_i(\xi_c) = \sum_{i=1}^N \frac{1}{2} (\bar z_{i,k} - \bar H_i \xi_c)^\top \bar S_{i,k}^{-1} (\bar z_{i,k} - \bar H_i \xi_c)$$

where $\bar z_{i,k} = [y_{i,k}^\top,\ \hat x_{i,k|k-1}^\top]^\top$, $\bar H_i = [H_i^\top,\ I_n]^\top$, and $\bar S_{i,k} = \mathrm{diag}(R_i, N P_{i,k|k-1})$. We assume that $\hat x_{i,k|k-1} = \hat x_{k|k-1}$ and $P_{i,k|k-1} = P_{k|k-1}$. This makes sense when each sensor reached a consensus on $\hat x_{k-1}$ and $P_{k-1}$ in the previous correction step.

Assuming that each estimator $i$ holds its own optimization variable $\xi_i$ for $\xi_c$, DKF problem is written as the following consensus optimization problem.

$$\text{minimize} \quad \sum_{i=1}^N f_i(\xi_i) \qquad (4a)$$

$$\text{subject to} \quad \xi_1 = \cdots = \xi_N. \qquad (4b)$$

If there exists a distributed algorithm that finds a minimizer of (4), we say that the algorithm solves DKF problem.
Since the kernel of the Laplacian $L$ is $\mathrm{span}\{1_N\}$, the constraints (4b) can be written with $\bar L := L \otimes I_n$ as $\bar L \xi = 0$ where $\xi = [\xi_1^\top, \dots, \xi_N^\top]^\top$. To proceed, we define the Lagrangian to solve the problem (4) as

$$\mathcal L(\xi, \lambda) = \sum_{i=1}^N f_i(\xi_i) + \lambda^\top \bar L \xi \qquad (5)$$

where $\lambda = [\lambda_1^\top, \dots, \lambda_N^\top]^\top$ is the Lagrange multiplier (dual variable) associated with (4b) and $\lambda_i \in \mathbb R^n$. We decompose the Lagrangian into local ones defined by

$$\mathcal L_i(\xi_i, \lambda_i) = f_i(\xi_i) + \lambda_i^\top \sum_{j \in \mathcal N_i} a_{ij} (\xi_i - \xi_j). \qquad (6)$$

For the Lagrangian (5), the partial derivatives over $\xi$ and $\lambda$ are given by

$$\nabla_\xi \mathcal L(\xi, \lambda) = -\bar H^\top \bar S_k^{-1} (\bar z_k - \bar H \xi) + \bar L \lambda$$

$$\nabla_\lambda \mathcal L(\xi, \lambda) = \bar L \xi,$$

where $\bar H = \mathrm{diag}(\bar H_1, \dots, \bar H_N)$, $\bar S_k = \mathrm{diag}(\bar S_{1,k}, \dots, \bar S_{N,k})$, and $\bar z_k = [\bar z_{1,k}^\top, \dots, \bar z_{N,k}^\top]^\top$. Then, the optimality condition for $(\xi^*, \lambda^*)$ becomes the following saddle point equation (KKT conditions), namely

$$\begin{bmatrix} -\bar H^\top \bar S_k^{-1} \bar H & -\bar L \\ \bar L & 0 \end{bmatrix} \begin{bmatrix} \xi^* \\ \lambda^* \end{bmatrix} = \begin{bmatrix} -\bar H^\top \bar S_k^{-1} \bar z_k \\ 0 \end{bmatrix} \qquad (7)$$

where the first row is the dual feasibility (stationarity) equation and the second row is the primal feasibility equation.
###### Lemma 1
The solutions to DKF problem (7) are parameterized as $\xi^* = (1_N \otimes I_n)\xi^\dagger$ and $\lambda^* = \bar\lambda^* + (1_N \otimes I_n)c$, where $\xi^\dagger$ and $\bar\lambda^*$ are unique vectors and $c$ is an arbitrary vector. If $(\xi^*, \lambda^*)$ is an optimal solution to DKF problem, then $\xi^\dagger$ is the optimal solution to CKF problem.

By multiplying $(1_N^\top \otimes I_n)$ to the dual feasibility equation in (7), one can obtain

$$(1_N^\top \otimes I_n)\, \bar H^\top \bar S_k^{-1} \bar H\, \xi^* = (1_N^\top \otimes I_n)\, \bar H^\top \bar S_k^{-1} \bar z_k. \qquad (8)$$

The primal feasibility equation in (7) implies that $\xi^* = (1_N \otimes I_n)\xi^\dagger$ for some $\xi^\dagger$, hence (8) becomes

$$(1_N^\top \otimes I_n)\, \bar H^\top \bar S_k^{-1} \bar H\, (1_N \otimes I_n)\, \xi^\dagger = (1_N^\top \otimes I_n)\, \bar H^\top \bar S_k^{-1} \bar z_k.$$

From $\bar H_i^\top \bar S_{i,k}^{-1} \bar H_i = H_i^\top R_i^{-1} H_i + \frac{1}{N} P_{k|k-1}^{-1}$, one has

$$\Big(P_{k|k-1}^{-1} + \sum_{i=1}^N H_i^\top R_i^{-1} H_i\Big)\xi^\dagger = P_{k|k-1}^{-1}\hat x_{k|k-1} + \sum_{i=1}^N H_i^\top R_i^{-1} y_{i,k}.$$

Since $\sum_{i=1}^N H_i^\top R_i^{-1} H_i = H^\top R^{-1} H$ and $\sum_{i=1}^N H_i^\top R_i^{-1} y_{i,k} = H^\top R^{-1} y_k$, it follows that

$$\xi^\dagger = \hat x_{k|k-1} + K_k (y_k - H \hat x_{k|k-1}) \qquad (9)$$

where $K_k = (H^\top R^{-1} H + P_{k|k-1}^{-1})^{-1} H^\top R^{-1}$ and, by the matrix inversion lemma, we have $K_k = P_{k|k-1} H^\top (H P_{k|k-1} H^\top + R)^{-1}$. From the fact that the right-hand side of the above equation is the same as the update rule (2) of CKF, it follows that $\xi^\dagger$ is the optimal estimate $\hat x_k$ of CKF.

On the other hand, one can observe that the optimal dual variable is not unique since the dual feasibility equation

$$(L \otimes I_n)\, \lambda^* = \bar H^\top \bar S_k^{-1} \big(\bar z_k - \bar H (1_N \otimes I_n)\xi^\dagger\big) \qquad (10)$$

is singular. To find $\lambda^*$, consider the orthonormal matrix $U = [U_1,\ \bar U]$ such that $U^\top L U = \Lambda = \mathrm{diag}(0, \bar\Lambda)$ where $U_1 = \frac{1}{\sqrt N} 1_N$, $\bar U$
consists of the eigenvectors associated with the non-zero eigenvalues of
$L$, denoted by $\bar\Lambda = \mathrm{diag}(\sigma_2, \dots, \sigma_N)$. Left multiplying $(U^\top \otimes I_n)$ to the equation (10) yields

$$(\Lambda U^\top \otimes I_n)\, \lambda^* = (U^\top \otimes I_n)\, b$$

where $b := \bar H^\top \bar S_k^{-1}(\bar z_k - \bar H(1_N \otimes I_n)\xi^\dagger)$. Hence, the optimal dual variable becomes $\lambda^* = (\bar U \bar\Lambda^{-1} \bar U^\top \otimes I_n)\, b + (1_N \otimes I_n)\, c$ where $c$ is an arbitrary vector. This completes the proof.
### Ii-C Information form of DKF problem
It is well known that the dual of the Kalman-filter is the Information filter, which uses the canonical parameterization $(\Omega, \tau) = (P^{-1}, P^{-1}\hat x)$ to represent the normal (Gaussian) distribution [4]. With the canonical parameterization, DKF problem (4) can also be written in information form.

Let $\eta_i$, $\Omega_{i,k|k-1} = P_{i,k|k-1}^{-1}$, and $\tau_{i,k|k-1} = \Omega_{i,k|k-1}\hat x_{i,k|k-1}$, which are the local decision variable for the information vector of estimator $i$, the locally predicted information matrix, and the locally predicted information vector, respectively. With these transformations, we rewrite the problem (4) as

$$\text{minimize} \quad \sum_{i=1}^N h_i(\eta_i) \qquad (11a)$$

$$\text{subject to} \quad \eta_1 = \cdots = \eta_N \qquad (11b)$$

where

$$h_i(\eta_i) = \frac{1}{2}\Big(\eta_i^\top \Phi_i^{-1} \eta_i - \eta_i^\top \Phi_i^{-1}\big(H_i^\top R_i^{-1} y_i + \tfrac{1}{N}\tau_{i,k|k-1}\big) + y_i^\top R_i^{-1} y_i + \tfrac{1}{N}\tau_{i,k|k-1}^\top \Omega_{i,k|k-1}^{-1} \tau_{i,k|k-1}\Big)$$

and $\Phi_i := H_i^\top R_i^{-1} H_i + \frac{1}{N}\Omega_{i,k|k-1}$. For the distributed problem (11), the Lagrangian is given by

$$\mathcal L_\eta(\eta, \nu) = \sum_{i=1}^N h_i(\eta_i) + \nu^\top \bar L \eta$$

where $\eta = [\eta_1^\top, \dots, \eta_N^\top]^\top$ and $\nu$ is the Lagrange multiplier. The associated saddle point equation becomes

$$\begin{bmatrix} -(\bar H^\top \tilde S_k^{-1} \bar H)^{-1} & -\bar L \\ \bar L & 0 \end{bmatrix}\begin{bmatrix} \eta^* \\ \nu^* \end{bmatrix} = \begin{bmatrix} -\bar H^\top \tilde S_k^{-1} \tilde z_k \\ 0 \end{bmatrix}$$

where $\tilde S_k$ and $\tilde z_k$ denote the corresponding quantities in information form.
### Ii-D Interpretations of existing DKF algorithm from the optimization perspective
One of the recent DKF algorithms, Consensus on Information (CI) [14, 15] can be interpreted in the provided framework. CI consists of three steps, prediction, local correction, and consensus. In the prediction step, each estimator predicts the estimate based on the system dynamics and previous estimate similar to the standard information filter algorithm. Each estimator also updates the estimate with local measurements and output matrix in the local correction step. After that, the estimators find the agreed estimate by averaging the local estimates in the consensus step.
In the provided framework, CI can be viewed as the algorithm which solves the problem (11) through the two steps, the local correction step and the consensus step. In the former step, each estimator finds the local minimizer (estimate) of the local objective function $h_i$. Since the partial derivative of $h_i$ becomes

$$\nabla_{\eta_i} h_i(\eta_i) = \Phi_i^{-1}\eta_i - \Phi_i^{-1}\big(H_i^\top R_i^{-1} y_i + \tfrac{1}{N}\tau_{i,k|k-1}\big)$$

the local minimizer can be obtained by $\eta_i = H_i^\top R_i^{-1} y_i + \frac{1}{N}\tau_{i,k|k-1}$, which is the local update rule of CI.¹ The local minimizer, however, can be different among estimators, since it minimizes only the local objective function $h_i$, which violates the constraint (11b).

¹In CI, the scalar $\frac{1}{N}$ is neglected [14].

The consensus step of CI performs a role to find an agreed (average) value of the local estimates, using a doubly stochastic matrix, and the results of the consensus step satisfy the constraint (11b). The agreed estimate, however, may not be the global minimizer of (11), which means that the consensus step cannot guarantee the convergence of the estimates to that of CKF.
## Iii A Solution to DKF Problem
One can observe that (5) is strictly convex in $\xi$ and differentiable, and the local objective function $f_i$ is a quadratic function, hence strong duality holds. In addition, from the fact that $\bar H^\top \bar S_k^{-1} \bar H$ is a nonsingular and block diagonal matrix, the optimality conditions (7) are already in a distributed form. This implies that the minimizer $\xi^*$ can be obtained in a distributed manner as long as $\lambda^*$ is given, i.e., $\xi^* = (\bar H^\top \bar S_k^{-1} \bar H)^{-1}(\bar H^\top \bar S_k^{-1} \bar z_k - \bar L \lambda^*)$.
Based on the above discussion, we see that one possible algorithm solving (4), guaranteeing the asymptotic convergence to the global minimizer $\xi^*$, is the dual ascent method [16, 20] which is given by

$$\xi_{l+1} = (\bar H^\top \bar S_k^{-1} \bar H)^{-1}(\bar H^\top \bar S_k^{-1} \bar z_k - \bar L \lambda_l) \qquad (12a)$$

$$\lambda_{l+1} = \lambda_l + \alpha_\lambda \bar L \xi_{l+1} \qquad (12b)$$

where $\alpha_\lambda$ is a step size. The update rule (12) can be written locally as

$$\xi_{i,l+1} = \hat x_{i,k|k-1} + K_{i,k}(y_{i,k} - H_i \hat x_{i,k|k-1}) - \psi_{i,l} \qquad (13a)$$

$$\lambda_{i,l+1} = \lambda_{i,l} + \alpha_\lambda \sum_{j \in \mathcal N_i} a_{ij}(\xi_{i,l+1} - \xi_{j,l+1}) \qquad (13b)$$

where $K_{i,k} = (\bar H_i^\top \bar S_{i,k}^{-1} \bar H_i)^{-1} H_i^\top R_i^{-1}$, $\psi_{i,l} = (\bar H_i^\top \bar S_{i,k}^{-1} \bar H_i)^{-1} \sum_{j \in \mathcal N_i} a_{ij}(\lambda_{i,l} - \lambda_{j,l})$, and $l$ is the iteration index to find the minimizer.
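Below is a schematic per-estimator sketch (my own illustration, not the paper's code) of one synchronous sweep of (13); $\bar S_{i,k}$, $K_{i,k}$, and $\psi_{i,l}$ follow the definitions above, and a connected graph is assumed so every node has a neighbor:

```python
import numpy as np

def dual_ascent_step(xi, lam, x_pred, P_pred, H, R, y, A, alpha):
    """One sweep of (13a)-(13b) over all N estimators."""
    N = len(xi)
    xi_new = []
    for i in range(N):
        # local information matrix M = Hbar_i^T Sbar_i^{-1} Hbar_i
        M = H[i].T @ np.linalg.inv(R[i]) @ H[i] + np.linalg.inv(P_pred[i]) / N
        Minv = np.linalg.inv(M)
        K = Minv @ H[i].T @ np.linalg.inv(R[i])        # local gain K_{i,k}
        disagree = sum(A[i, j] * (lam[i] - lam[j])     # Laplacian disagreement
                       for j in range(N) if A[i, j] > 0)
        psi = Minv @ disagree                          # psi_{i,l}
        xi_new.append(x_pred[i] + K @ (y[i] - H[i] @ x_pred[i]) - psi)
    lam_new = [lam[i] + alpha * sum(A[i, j] * (xi_new[i] - xi_new[j])
                                    for j in range(N) if A[i, j] > 0)
               for i in range(N)]
    return xi_new, lam_new
```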
Regarding the convergence of the update rule (13), we have the following result.
###### Lemma 2
Assume that the network is undirected and connected. Then, the sequence $\{\xi_{i,l}\}$ generated by the dual ascent method (13) converges to the solution $\hat x_k$ of CKF problem (2) as $l$ goes to infinity, provided that the step size is chosen such that

$$\alpha_\lambda < \frac{2}{\sigma_N^2 \max_i\{\|(\bar H_i^\top \bar S_{i,k}^{-1} \bar H_i)^{-1}\|\}} \qquad (14)$$

where $\sigma_N$ is the maximum eigenvalue of $L$. Moreover, the sequence $\{\lambda_{i,l}\}$ converges to a vector which is uniquely determined by the initial conditions of the $\lambda_{i,0}$'s.
Substituting the dual feasibility equation into the primal feasibility equation of (7) yields

$$\bar L(\bar H^\top \bar S_k^{-1} \bar H)^{-1} \bar L \lambda^* = \bar L(\bar H^\top \bar S_k^{-1} \bar H)^{-1} \bar H^\top \bar S_k^{-1} \bar z_k. \qquad (15)$$

Now let $e^\lambda_l := \lambda_l - \lambda^*$. Then, one obtains

$$e^\lambda_{l+1} = \lambda_l + \alpha_\lambda \bar L \xi_{l+1} - \lambda^* = \lambda_l + \alpha_\lambda \bar L (\bar H^\top \bar S_k^{-1} \bar H)^{-1}(\bar H^\top \bar S_k^{-1} \bar z_k - \bar L \lambda_l) - \lambda^*.$$

From the identity (15), we have

$$e^\lambda_{l+1} = \big(I - \alpha_\lambda \bar L(\bar H^\top \bar S_k^{-1} \bar H)^{-1} \bar L\big)\, e^\lambda_l =: (I - \alpha_\lambda \tilde A_\lambda)\, e^\lambda_l. \qquad (16)$$

Here, $\tilde A_\lambda$ is a symmetric positive semi-definite matrix which has $n$ zero eigenvalues, and it holds that $\sigma_{\max}(\tilde A_\lambda) \leq \sigma_N^2 \max_i\{\|(\bar H_i^\top \bar S_{i,k}^{-1} \bar H_i)^{-1}\|\}$. Since the smallest eigenvalue of $\tilde A_\lambda$ is zero, it follows that if $\alpha_\lambda$ is chosen such that $|1 - \alpha_\lambda \sigma_{\max}(\tilde A_\lambda)| < 1$, all eigenvalues of $I - \alpha_\lambda \tilde A_\lambda$, except $1$, are located inside the unit circle. The bound (14) ensures this.

Regarding the convergence of $\lambda_l$, we proceed as follows. With the orthonormal matrix $U$ used in Lemma 1, $\tilde A_\lambda$ can be written as

$$\tilde A_\lambda = (U\Lambda U^\top \otimes I_n)(\bar H^\top \bar S_k^{-1} \bar H)^{-1}(U\Lambda U^\top \otimes I_n) = (U \otimes I_n)\, \mathrm{diag}(0_n, M_{sub})\, (U^\top \otimes I_n)$$

where $M_{sub}$ is the submatrix of $(U^\top \otimes I_n)\tilde A_\lambda (U \otimes I_n)$ with the first $n$ rows and first $n$ columns removed. In the new coordinates $\bar e^\lambda_l$, defined by $\bar e^\lambda_l = (U^\top \otimes I_n)\, e^\lambda_l$, the error dynamics of the dual variable can be expressed as

$$\bar e^\lambda_{l+1} = \mathrm{diag}(I, I - \alpha_\lambda M_{sub})\, \bar e^\lambda_l.$$

From this equation, we know that the first $n$ components of $\bar e^\lambda_l$, denoted by $\tilde e^\lambda_l$, remain the same for any $l$, i.e., $\tilde e^\lambda_l = \tilde e^\lambda_0$, $\forall l \geq 0$. Moreover, with $\alpha_\lambda$ chosen as (14), which guarantees that the matrix $I - \alpha_\lambda M_{sub}$ has all its eigenvalues inside the unit circle, we have $\lim_{l\to\infty}\bar e^\lambda_l = [\tilde e^{\lambda\top}_0,\ 0]^\top$, from which it follows that

$$\lim_{l \to \infty} e^\lambda_l = (U \otimes I_n)\,[\tilde e^{\lambda\top}_0,\ 0]^\top = (U_1 \otimes I_n)(U_1^\top \otimes I_n)\, e^\lambda_0. \qquad (17)$$

Recalling that $e^\lambda_l = \lambda_l - \lambda^*$, we have from (17)

$$\lim_{l\to\infty} \lambda_l = \lambda^* + (U_1 U_1^\top \otimes I_n)(\lambda_0 - \lambda^*).$$

Applying $U_1 = \frac{1}{\sqrt N} 1_N$ (for $U_1$ and $\bar U$, see the proof of Lemma 1), we have

$$\lim_{l\to\infty} \lambda_l = (\bar U \bar\Lambda^{-1} \bar U^\top \otimes I_n)\, b + (1_N \otimes I_n)\, \mathrm{avg}(\lambda_{i,0})$$

where
https://teachingtangents.wordpress.com/2012/09/05/new-blogger-initiation-3/ | # New Blogger Initiation 3
or “Why Two and Two Makes Fish”
Almost out of time yet again (so glad this was timed to coincide with the start of school), but I suppose part of the point is to see if you can handle reflecting as you go during the school year. My choice for week 3:
1. Introduce and show the solution to a math problem that you particularly like.
When I read this prompt, I thought of one of my favorite random teaching tangents (finally worked that in!); I have a few things I try to work into each class whenever ~~I need to kill some time~~ a student asks a question that I can address with this idea/concept/problem. Unfortunately, these favorite mini-lessons are often not exactly tied to any particular required content, but I’ll have students a year or two later remember these and not much else.
So here’s a somewhat paraphrased, somewhat fictional account of one of my favorite problems and its solution:
Student: I liked math when it was easier; 2 + 2 = 4 is always true … why can’t algebra always work the same way?
Teacher: Well actually, you really need to remember math is completely made up by humans. I mean 2 + 2 = FISH is honestly just as valid if you know what you’re doing …
Student: Stop joking around …
Teacher: Let me show you – but you have to be willing to let me bend the rules and change the meaning of a few things. Math is a game; when you know the rules well enough, you know how to bend, break, or even make up your own rules.
I’m going to almost use normal addition, but you’re limited to combining four symbols: 1, 2, 3, and FISH (Greek alpha). First, I need to tell you that FISH is sort of like zero but not exactly. Also, I’m going to use ‘circle-plus’ since this isn’t quite normal addition …
<scribbling on board>
Student: … um, isn’t that the exact same thing as usual???
Teacher: so far yes, but how can we complete the table without using any new symbols (only FISH, 1, 2, and/or 3), and have the table be consistent – it has to make sense
<student suggestions, teacher prompting/questioning, more scribbling>
Student: are you just making this up?
Teacher: I already said I’m just making it up, but it has to make sense! Okay, let’s try another one … with fewer numbers and multiplication instead of addition … yeah, this should do it … we’re going to use the symbol i; there’s a rule that $i^2=-1$ by the way {yes, I can sort of use LaTeX}
<scribbling, questions, more scribbling>
Student: Okay, you can play games and move around squiggles on a piece of paper just so … what’s the big deal?
Teacher: You happen to skateboard, right?
Student: Yeah, so?
Teacher: Come here … face the class; this is position zero. Show me a 180° … good, reset then show me a 90° … okay, same thing but 270° … fine, 360° … wait that’s the same as the starting position?
Student: duh
Teacher: Let’s make it interesting then … let’s start building tricks or turns on top of one another … show me a 90° followed by another 90° without a reset.
Student: 180° of course
Teacher: Reset, then show me a 180° followed by a 270° … you might want to sit there and actually work through the turns.
<student attempts, teacher helps, asks for a few other examples if necessary>
Teacher: So tell me what we just figured out …
Student: Well, you can kinda sorta add angles together but if you get to 360° you start over – it’s the same as 0° in a circle.
Teacher: Great, now consider a square in the coordinate plane … it’s basically like the skateboarding stuff we just discussed?
<scribbling>
Teacher: We’re just adding angles of rotation together, so let’s make a table … you should be getting the hang of this by now … work with a partner
<work, work, work>
Student:

| ⊕ | 0° | 90° | 180° | 270° |
|---|----|-----|------|------|
| 0° | 0° | 90° | 180° | 270° |
| 90° | 90° | 180° | 270° | 0° |
| 180° | 180° | 270° | 0° | 90° |
| 270° | 270° | 0° | 90° | 180° |
Teacher: Great, notice anything yet?
Student: Not sure … this table starts over like the FISH stuff?
Teacher: Yeah! You’re getting there – can’t we really just think of these angles as multiples of 90, though?
Student: a 180 is two 90s … 270 is three … 360 resets to zero …
Teacher: You’ve just about got it … look at all three tables … color code them if you have to …
<student looks it over>
Student: The pattern is the same on each table isn’t it !?!!?!?!!?
Teacher: Awesome! The relationships are the same even though the ideas seem completely unrelated. On the first one, I used FISH instead of zero, because I wanted you focusing on relationships to get started. This problem demonstrates the basic concept of something called a group. You were just working on college-level math by the way – the kind for math majors even. We’re going genius-level in here …
Student: You’re kidding me – that seems easier than what we’re working on now!
Teacher: It turns out once you make it past calculus – math is mostly things you already know how to do but more abstract and even more interested in the ‘why’ than the ‘how’ … too bad so many people get turned off by the tedious calculation bits along the way
Student: This was the best thing we’ve done all year – it makes more sense than a lot of the other stuff …
Teacher: Remind me to show you the one about the empty trashcan, empty bag, and empty bottle not being empty anymore sometime … we should probably get back to factoring quadratics before class is over
Student: Yuck
Teacher: I know, I know … but this is a topic guaranteed to be on the state-sponsored high-stakes exit exam.
[Partially due to internet issues this is now over an hour ‘late’] | 2017-06-23 06:53:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7183281779289246, "perplexity": 1076.0884073773857}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320023.23/warc/CC-MAIN-20170623063716-20170623083716-00167.warc.gz"} |
https://proxies123.com/set-theory-limit-problem-from-brazil-olympiad/ | # set theory – LIMIT PROBLEM FROM BRAZIL OLYMPIAD
Actually I found this problem at MSE, where it was unanswered. I find it really challenging and I am clueless about how to continue. So here's the original argument of the author.
Let $M,k$ be two positive integers. Define $X_{M,k}$ as the set of the numbers $p_1^{\alpha_1}\cdot p_2^{\alpha_2} \cdots p_r^{\alpha_r}$ where the $p_i$ are prime numbers such that $M \leq p_1 < p_2 < \cdots < p_r$ and $\alpha_i \geq k$. Prove that there are positive real numbers $\beta(M,k)$ and $c(M,k)$ such that

$$\lim_{n\rightarrow\infty} \frac{\left| X_{M,k} \cap \{0,1,\dots,n\} \right|}{n^{\beta(M,k)}} = c(M,k)$$

and determine the value of $\beta(M,k)$.
First let's simplify what $X_{M,k}$ is…

\begin{align*} X_{M,k} &= \{p_1^{\alpha_1}\cdot p_2^{\alpha_2} \cdots p_r^{\alpha_r} \mid M \leq p_i < p_{i+1} \text{ and } \alpha_i \geq k\}\\ &= \{(p_1^k p_2^k \cdots p_r^k) \cdot p_1^{\gamma_1}\cdot p_2^{\gamma_2} \cdots p_r^{\gamma_r} \mid M \leq p_i < p_{i+1} \text{ and } \gamma_i \geq 0\}\\ &= \{s \cdot p_1^{\gamma_1}\cdot p_2^{\gamma_2} \cdots p_r^{\gamma_r} \mid M \leq p_i < p_{i+1} \text{ and } \gamma_i \geq 0\} \text{ where } s = p_1^k p_2^k \cdots p_r^k \end{align*}

We can then see that $X_{M,k}$ is a proper subset of the set of multiples of $s$.

Therefore we can see that $|X_{M,k} \cap \{0,1,\dots,n\}| < |\{\text{multiples of } s \text{ that are } \leq n\}| < n$.
Using that, and the fact that $\beta(M,k) > 0$, we can conclude that:

$$\frac{| X_{M,k} \cap \{0,1,\dots,n\} |}{n^{\beta(M,k)}} < \frac{n}{n^{\beta(M,k)}} = n^{1-\beta(M,k)}$$

Now see that $| X_{M,k} \cap \{0,1,\dots,n\} | > 0$ only when $n \geq s$. Therefore, it's also valid to say that for every $n \geq s$ it follows that:

$$0 = \frac{0}{n^{\beta(M,k)}} < \frac{| X_{M,k} \cap \{0,1,\dots,n\} |}{n^{\beta(M,k)}} < \frac{n}{n^{\beta(M,k)}} = n^{1-\beta(M,k)}$$

Finally, see that it's possible to let $n^{1-\beta(M,k)} = n^0 = 1$ if we take $\beta(M,k) = |X_{M,k} \cap \{s\}| = 1$.

Joining my conclusions till now, we can say that if we fix $\beta(M,k) = |X_{M,k} \cap \{s\}| = 1$, then for every $n \geq s$ it's valid that:

$$0 = \frac{0}{n} < \frac{| X_{M,k} \cap \{0,1,\dots,n\} |}{n} < \frac{n}{n} = 1$$
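(Not part of the original post: since the question stayed open, here is my own quick numerical experiment, a Python sketch, that at least suggests the value of $\beta(M,k)$. It enumerates the elements of $X_{M,k}$ up to $n$ and prints $\log(\text{count})/\log n$; the ratio drifts slowly towards $1/k$ as $n$ grows, consistent with $\beta(M,k) = 1/k$, the exponent one would also guess from the dominant terms $p^k \leq n$. This is a heuristic only, not a proof.)

import math

def primes_up_to(limit):
    # simple sieve of Eratosthenes
    if limit < 2:
        return []
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return [p for p in range(2, limit + 1) if sieve[p]]

def count_X(M, k, n):
    # count elements of X_{M,k} in {1, ..., n}; the empty product 1 is
    # included, which does not affect the growth rate
    ps = [p for p in primes_up_to(int(round(n ** (1.0 / k))) + 1) if p >= M]
    count = 0

    def dfs(i, value):
        nonlocal count
        count += 1
        for j in range(i, len(ps)):
            v = value * ps[j] ** k      # every prime enters with exponent >= k
            if v > n:
                break                   # primes are sorted, so we can stop
            while v <= n:
                dfs(j + 1, v)
                v *= ps[j]              # raise the exponent further

    dfs(0, 1)
    return count

for n in [10**4, 10**5, 10**6, 10**7]:
    c = count_X(M=2, k=2, n=n)
    print(n, c, round(math.log(c) / math.log(n), 3))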
https://indico.cern.ch/event/838224/ | # NICA days in Warsaw
Europe/Zurich
• Tuesday, 22 October
• 10:00 10:20
Particle production properties at SPS energy range - recent results from NA61/SHINE experiment 20m
Speaker: Szymon Mateusz Pulawski (University of Silesia (PL))
Particle production properties at SPS energy range - recent results from NA61/SHINE experiment.
Szymon Pulawski for the NA61/SHINE Collaboration
The research programme of the NA61 collaboration covers a wide range of hadronic physics in the CERN SPS energy range. It encompasses measurements of hadron-hadron, hadron-nucleus as well as nucleus-nucleus collisions. The latter are analyzed to better understand the properties of hot and dense nuclear matter. In this contribution, recent results on particle production in proton-proton, Be+Be and Ar+Sc interactions at beam momenta of 19/20A, 30/31A, 40A, 75/80A and 150/158A GeV/c will be presented, and synergies with the future NICA programme will be emphasised.
• 10:20 10:40
Fluctuations and correlations study at NA61/SHINE 20m
Speaker: Daria Prokhorova (St Petersburg State University (RU))
Fluctuations and correlations study at NA61/SHINE
Daria Prokhorova for the NA61/SHINE Collaboration
The strong interactions program of NA61/SHINE, the fixed-target experiment at the CERN SPS, focuses on the search for the critical point of strongly interacting matter. The strategy of the Collaboration is to perform a comprehensive two-dimensional scan of the $\mu_{B}$-$T$ phase diagram by changing the collision energy and the system size. In this scenario, if the system freeze-out occurs in the vicinity of the possible critical point, a region of enhanced fluctuations is expected to be observed in the final fluctuation measures.

The talk will review the ongoing NA61/SHINE analysis of multiplicity and transverse momentum fluctuations in terms of intensive and strongly intensive quantities. Furthermore, their pseudorapidity dependence, which corresponds to an additional scan in the baryon chemical potential $\mu_{B}$ at the freeze-out stage, will be presented together with the study of the higher moments of multiplicity distributions. Eventually, the talk sheds light on the approaches used to correct the results and on possible obstacles in the analysis.
• 10:40 11:00
Physics motivations and plans for open charm 20m
Speaker: Pawel Piotr Staszel (Jagiellonian University (PL))
• 11:00 11:20
NA61/SHINE detector upgrade 20m
Speaker: Dariusz Tefelski (Warsaw University of Technology (PL))
The NA61/SHINE detector, at the CERN SPS, is undergoing a major upgrade during the LHC Long Shutdown 2 period (2019-2021). The upgrade is essential to fulfil the requirements of the new open charm measurement programme. It is necessary to stress that this new physics goal can be achieved only if the readout rate is increased by a factor of 10 and the resolution of the secondary vertex in the high-multiplicity track environment of Pb-Pb events is improved. The following elements of the experiment are part of the upgrade: Time Projection Chambers (TPC), Vertex Detector (VD), Beam Position Detectors (BPD), Particle Spectator Detector (PSD) and Time of Flight (TOF) detectors. On top of the detectors, a new Trigger and Data Acquisition (TDAQ) system is being developed. In the proposed talk, the progress on the design and development of the new detectors and the TDAQ system for the NA61/SHINE experiment will be presented.
• 11:20 11:40
Synergy in the development of the NA61, BM@N and CBM forward hadron calorimeters 20m
Speaker: Fedor Guber (Russian Academy of Sciences (RU))
• 11:40 12:00
Upgrade of the NA61/SHINE ToF system based on MRPCs for the NICA experiments 20m
Speakers: Mr Aleksandr Dmitriev (Joint Institute for Nuclear Research (RU)), Mr Alexandr Dmitriev (Joint Institute for Nuclear Research (RU))
• 12:00 12:20
ALICE technologies proposed for the VD of NA61/SHINE in connection with the future MPD Si tracker 20m
Speaker: Dr Grigori Feofilov (St Petersburg State University (RU))
• 12:20 12:40
TPC, gas system 20m
Speaker: Magdalena Kuich (University of Warsaw (PL)) | 2020-05-31 13:41:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4590878188610077, "perplexity": 6917.175847911183}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347413406.70/warc/CC-MAIN-20200531120339-20200531150339-00505.warc.gz"} |
https://pcarbo.github.io/varbvs/varbvs-R/docs/reference/varbvs.html | Compute fully-factorized variational approximation for Bayesian variable selection in linear (family = gaussian) or logistic regression (family = binomial). More precisely, find the "best" fully-factorized approximation to the posterior distribution of the coefficients, with spike-and-slab priors on the coefficients. By "best", we mean the approximating distribution that locally minimizes the Kullback-Leibler divergence between the approximating distribution and the exact posterior.
varbvs (X, Z, y, family = c("gaussian","binomial"), sigma, sa,
logodds, alpha, mu, eta, update.sigma, update.sa,
optimize.eta, initialize.params, nr = 100, sa0 = 1, n0 = 10,
tol = 1e-4, maxiter = 1e4, verbose = TRUE)
## Arguments
X: n x p input matrix, where n is the number of samples, and p is the number of variables. X cannot be sparse, and cannot have any missing values (NA).

Z: n x m covariate data matrix, where m is the number of covariates. Do not supply an intercept as a covariate (i.e., a column of ones), because an intercept is automatically included in the regression model. For no covariates, set Z = NULL. The covariates are assigned an improper, uniform prior. Although improper priors are generally not advisable because they can result in improper posteriors and Bayes factors, this choice allows us to easily integrate out these covariates.

y: Vector of length n containing observations of a binary (family = "binomial") or continuous (family = "gaussian") outcome. For a binary outcome, all entries of y must be 0 or 1.

family: "gaussian" for the linear regression model, or "binomial" for the logistic regression model.

sigma: Candidate settings for the residual variance parameter. Must be of the same length as inputs sa and logodds (or have length equal to the number of columns of logodds). Only used for linear regression, and will generate an error if family = "binomial". If missing, the residual variance parameter is automatically fitted to the data by computing an approximate maximum-likelihood (ML) estimate.

sa: Hyperparameter sa is the prior variance of regression coefficients for variables that are included in the model. This prior variance is always scaled by sigma (for logistic regression, we take sigma = 1). Scaling the variance of the coefficients in this way is necessary to ensure that this prior is invariant to measurement scale (e.g., switching from grams to kilograms). This input specifies the candidate settings for sa, of the same length as inputs sigma and logodds (or of length equal to the number of columns of logodds). If missing, the prior variance is automatically fitted to the data by computing approximate maximum-likelihood (ML) estimates, or maximum a posteriori estimates when n0 > 0 and sa0 > 0.

logodds: Hyperparameter logodds is the prior log-odds that a variable is included in the regression model; it is defined as $$logodds = log10(q/(1-q)),$$ where q is the prior probability that a variable is included in the regression model. Note that we use the base-10 logarithm instead of the natural logarithm because it is usually more natural to specify prior log-odds settings in this way. The prior log-odds may also be specified separately for each variable, which is useful if there is prior information about which variables are most relevant to the outcome. This is accomplished by setting logodds to a p x ns matrix, where p is the number of variables, and ns is the number of hyperparameter settings. Note that it is not possible to fit the logodds parameter; if the logodds input is not provided, it is set to the default value when sa and sigma are missing, and otherwise an error is generated.

alpha: Good initial estimate for the variational parameter alpha for each hyperparameter setting. Either missing, or a p x ns matrix, where p is the number of variables, and ns is the number of hyperparameter settings.

mu: Good initial estimate for the variational parameter mu for each hyperparameter setting. Either missing, or a p x ns matrix, where p is the number of variables, and ns is the number of hyperparameter settings.

eta: Good initial estimate of the additional free parameters specifying the variational approximation to the logistic regression factors. Either missing, or an n x ns matrix, where n is the number of samples, and ns is the number of hyperparameter settings.

update.sigma: Setting this to TRUE ensures that sigma is always fitted to the data, in which case the input vector sigma is used to provide initial estimates.

update.sa: Setting this to TRUE ensures that sa is always fitted to the data, in which case the input vector sa is used to provide initial estimates.

optimize.eta: When optimize.eta = TRUE, eta is fitted to the data during the inner-loop coordinate ascent updates, even when good estimates of eta are provided as input.

initialize.params: If FALSE, the initialization stage of the variational inference algorithm (see below) is skipped, which saves computation time for large data sets.

nr: Number of samples of "model PVE" to draw from the posterior.

sa0: Scale parameter for a scaled inverse chi-square prior on hyperparameter sa. Must be >= 0.

n0: Number of degrees of freedom for a scaled inverse chi-square prior on hyperparameter sa. Must be >= 0. Large settings of n0 provide greater stability of the parameter estimates for cases when the model is "sparse"; that is, when few variables are included in the model.

tol: Convergence tolerance for the inner loop.

maxiter: Maximum number of inner-loop iterations.

verbose: If verbose = TRUE, print progress of the algorithm to the console.
## Regression models
Two types of outcomes (y) are modeled: (1) a continuous outcome, also called a "quantitative trait" in the genetics literature; or (2) a binary outcome with possible values 0 and 1. For the former, set family = "gaussian", in which case the outcome is i.i.d. normal with mean $$u0 + Z*u + X*b$$ and variance sigma, in which u and b are vectors of regression coefficients, and u0 is the intercept. In the second case, we use logistic regression to model the outcome, in which the probability that y = 1 is equal to $$sigmoid(u0 + Z*u + X*b).$$ See help(sigmoid) for a description of the sigmoid function. Note that the regression always includes an intercept term (u0).
## Co-ordinate ascent optimization procedure
For both regression models, the fitting procedure consists of an inner loop and an outer loop. The outer loop iterates over each of the hyperparameter settings (sa, sigma and logodds). Given a setting of the hyperparameters, the inner loop cycles through coordinate ascent updates to tighten the lower bound on the marginal likelihood, $$Pr(Y | X, sigma, sa, logodds)$$. The inner-loop coordinate ascent updates terminate when either (1) the maximum number of inner-loop iterations is reached, as specified by maxiter, or (2) the maximum change in the estimated posterior inclusion probabilities between successive iterations is less than tol.
To provide a more accurate variational approximation of the posterior distribution, by default the fitting procedure has two stages. In the first stage, the entire fitting procedure is run to completion, and the variational parameters (alpha, mu, s, eta) corresponding to the maximum lower bound are then used to initialize the coordinate ascent updates in a second stage. Although this has the effect of doubling the computation time (in the worst case), the final posterior estimates tend to be more accurate with this two-stage fitting procedure.
## Variational approximation
Outputs alpha, mu and s specify the approximate posterior distribution of the regression coefficients. Each of these outputs is a p x ns matrix. For the ith hyperparameter setting, alpha[,i] is the variational estimate of the posterior inclusion probability (PIP) for each variable; mu[,i] is the variational estimate of the posterior mean coefficient given that it is included in the model; and s[,i] is the estimated posterior variance of the coefficient given that it is included in the model. These are also the quantities that are optimized as part of the inner loop coordinate ascent updates. An additional free parameter, eta, is needed for fast computation with the logistic regression model (family = "binomial"). The fitted value of eta is returned as an n x ns matrix.
The variational estimates should be interpreted carefully, especially when variables are strongly correlated. For example, consider the simple scenario in which 2 candidate variables are closely correlated, and at least one of them explains the outcome with probability close to 1. Under the correct posterior distribution, we would expect that each variable is included with probability ~0.5. However, the variational approximation, due to the conditional independence assumption, will typically get this wrong, and concentrate most of the posterior weight on one variable (the actual variable that is chosen will depend on the starting conditions of the optimization). Although the individual PIPs are incorrect, a statistic summarizing the variable selection for both correlated variables (e.g., the total number of variables included in the model) should be reasonably accurate.
## References
P. Carbonetto and M. Stephens (2012). Scalable variational inference for Bayesian variable selection in regression, and its accuracy in genetic association studies. Bayesian Analysis 7, 73--108.
Y. Guan and M. Stephens (2011). Bayesian variable selection regression for genome-wide association studies and other large-scale problems. Annals of Applied Statistics 5, 1780--1815.
X. Zhou, P. Carbonetto and M. Stephens (2013). Polygenic modeling with Bayesian sparse linear mixed models. PLoS Genetics 9, e1003264.
## See also

summary.varbvs, varbvscoefcred, varbvspve, varbvsnorm, varbvsbin, varbvsbinz, normalizelogweights, varbvs-internal
## Examples
library(varbvs)   # also provides the randn and sigmoid helpers used below
# LINEAR REGRESSION EXAMPLE
# -------------------------
# Data are 200 uncorrelated ("unlinked") single nucleotide polymorphisms
# (SNPs) with simulated genotypes, in which the first 20 of them have an
# effect on the outcome. Also generate data for 3 covariates.
maf <- 0.05 + 0.45*runif(200)
X <- (runif(400*200) < maf) + (runif(400*200) < maf)
X <- matrix(as.double(X),400,200,byrow = TRUE)
Z <- randn(400,3)
# Generate the ground-truth regression coefficients for the variables
# (X) and additional 3 covariates (Z). Adjust the QTL effects so that
# the variables (SNPs) explain 50 percent of the variance in the
# outcome.
u <- c(-1,2,1)
beta <- c(rnorm(20),rep(0,180))
beta <- 1/sd(c(X %*% beta)) * beta
# Generate the quantitative trait measurements.
y <- c(-2 + Z %*% u + X %*% beta + rnorm(400))
# Fit the variable selection model.
fit <- varbvs(X,Z,y,logodds = seq(-3,-1,0.1))
print(summary(fit))
# Compute the posterior mean estimate of hyperparameter sa.
sa <- with(fit,sum(sa * w))
# Compare estimated outcomes against observed outcomes.
y.fit <- predict(fit,X,Z)
print(cor(y,y.fit))
# LOGISTIC REGRESSION EXAMPLE
# ---------------------------
# Data are 100 uncorrelated ("unlinked") single nucleotide polymorphisms
# (SNPs) with simulated genotypes, in which the first 10 of them have an
# effect on the outcome. Also generate data for 2 covariates.
maf <- 0.05 + 0.45*runif(100)
X <- (runif(750*100) < maf) + (runif(750*100) < maf)
X <- matrix(as.double(X),750,100,byrow = TRUE)
Z <- randn(750,2)
# Generate the ground-truth regression coefficients for the variables
# (X) and additional 2 covariates (Z).
u <- c(-1,1)
beta <- c(0.5*rnorm(10),rep(0,90))
# Simulate the binary trait (case-control status) as a coin toss with
# success rates given by the logistic regression.
y <- as.double(runif(750) < sigmoid(-1 + Z %*% u + X %*% beta))
# Fit the variable selection model.
fit <- varbvs(X,Z,y,"binomial",logodds = seq(-2,-0.5,0.5))
print(summary(fit)) | 2021-10-23 15:23:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8102337121963501, "perplexity": 1418.9380862867827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585696.21/warc/CC-MAIN-20211023130922-20211023160922-00239.warc.gz"} |
http://math.stackexchange.com/questions/204647/solving-a-formal-power-series-equation/231416 | # Solving a formal power series equation
I want to find a function $f(x,y)$ which can satisfy the following equation,
$$\prod _{n=1} ^{\infty} \frac{1+x^n}{(1-x^{n/2}y^{n/2})(1-x^{n/2}y^{-n/2})} = \exp \left[ \sum _{n=1} ^\infty \frac{f(x^n,y^n)}{n(1-x^{2n})}\right]$$
• I would like to know how this is solved.
In a certain paper where I ran into this, it is claimed that the function is,
$$f(x,y) = \sqrt{x}(y + 1/y) + x(1+y^2 + 1/y) + x^{3/2}(y^3+1/y^3) + x^2(y^4+1/y^4) + \sum _{n=5}^\infty x^{n/2}(y^n + 1/y^n - y^{n-4} - 1/y^{n-4})$$
The paper doesn't state any proof or explanation for how this was obtained, but perturbatively the above can be checked to be correct!
Now I tried to do something obvious, but it didn't work! (Below I write $I_{ST}(x,y)$ for the sought function $f(x,y)$.)
\begin{eqnarray} \prod _ {n=1} ^{\infty} \frac{ (1+x^n) }{1+x^n -x^{\frac{n}{2}} \left(y^{\frac{n}{2}} + y^{-\frac{n}{2}}\right) } = \exp \left[ \sum _ {n=1} ^{\infty} \frac{ I_{ST}(x^n,y^n) } {n (1-x^{2n}) } \right] \\ \Rightarrow \sum_{n=1}^{\infty} \left\{ \ln (1+x^n) - \ln(1-(\sqrt{xy})^n) - \ln\left(1- \left(\sqrt{\frac{x}{y}}\right)^n\right) \right\} = \sum_{n=1}^\infty \frac{I_{ST}(x^n,y^n)} {n(1-x^{2n})} \end{eqnarray}
Now we expand the logarithms and we have,
\begin{eqnarray} \sum _ {n=1} ^ {\infty} \left \{ \sum _{a=1}^{\infty} (-1)^{a+1} \frac{x^{na}}{a} + \sum_{b=1} ^{\infty} \frac{ (\sqrt{xy})^{nb} } {b} + \sum _{c=1}^{\infty} \frac{ \left(\sqrt{\frac{x}{y}}\right)^{nc} }{c} \right \} = \sum _{n=1} ^\infty \frac{I_{ST}(x^n,y^n)} {n(1-x^{2n})} \\ \Rightarrow \sum _{a=1} ^{\infty} \frac{1}{a} \left\{ \sum _{n=1} ^{\infty} \left( (-1)^{a+1}x^{na} + (xy)^{\frac{na}{2}} + \left(\frac{x}{y}\right)^{\frac{na}{2}} \right) \right\} = \sum _{n=1} ^\infty \frac{I_{ST}(x^n,y^n)} {n(1-x^{2n})} \end{eqnarray}
By matching the patterns on both sides one sees that one way this equality can hold is if, $$\begin{eqnarray} I_{ST}(x,y) = (1-x^2) \sum _{n=1} ^{\infty} \left\{ x^n + (xy)^{\frac{n}{2}} + (\frac{x}{y})^{\frac{n}{2}} \right\} \\ \Rightarrow I_{ST} (x,y) = (1-x^2) \left(-1 + \frac{1}{1-x} -1 + \frac{1}{1-\sqrt{xy}} - 1 + \frac{1}{1-\sqrt{\frac{x}{y}} } \right) \end {eqnarray}$$ But this solution doesn't satisfy the original equation!
what paper did this come from? – john mangual Nov 6 '12 at 15:34
What a goofy looking function. Why do they need this anyway? Taking the log of both sides, you should get:
$$\log(1+x^n) - \log(1 - x^{n/2}y^{n/2}) - \log(1 - x^{n/2}y^{-n/2})$$
Then
$$\log ( 1 + x^n) = x^n - \frac{1}{2}x^{2n} + \frac{1}{3}x^{3n} - \dots$$
and also

$$-\log (1 - x^{n/2}y^{n/2}) = (xy)^{n/2} + \frac{1}{2} (xy)^n + \frac{1}{3} (xy)^{3n/2} + \dots$$
and

$$-\log (1 - x^{n/2}y^{-n/2}) = (x/y)^{n/2} + \frac{1}{2} (x/y)^n + \frac{1}{3} (x/y)^{3n/2} + \dots$$
I suppose if you add from $n=1 \to \infty$ you will get the correct $f(x,y)$.
https://blender.stackexchange.com/questions/118838/why-are-two-of-the-teeth-stuck-in-space-away-from-my-dinosaur | # Why are two of the teeth stuck in space away from my dinosaur?
I'm sure there are a lot of other issues with my file, because this is basically my first actual animated rig in Blender, but the only issue I'm really concerned with right now is the two bottom teeth that are floating in space.
In edit mode with the mesh selected, as well as in the armature's rest position, the teeth are in exactly the right place. But in pose mode, where I've actually animated the dinosaur, two of the teeth are floating in space. Please let me know how to get them to stay in the mouth where they're supposed to be!
Without knowing the details of your rig, the most significant difference between the two floating teeth and the adjacent teeth or the teeth on the other side is the vertex weight in the DEF-jaw vertex group. | 2020-08-15 12:15:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2403857707977295, "perplexity": 576.7990356608651}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740838.3/warc/CC-MAIN-20200815094903-20200815124903-00545.warc.gz"} |
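If those tooth vertices are missing from the DEF-jaw group, or carry a much smaller weight than their neighbours, they will not follow the jaw bone in pose mode even though everything lines up in the rest position. As an illustration (my own sketch, not part of the original answer, assuming the group really is named DEF-jaw), the weights can be inspected from Blender's Python console:

import bpy

# Print the DEF-jaw weight of every selected vertex of the active mesh.
# Run this in Object Mode: vertex selection only syncs to the mesh data
# outside of Edit Mode.
obj = bpy.context.object
group = obj.vertex_groups.get("DEF-jaw")
if group is None:
    raise RuntimeError("no vertex group named 'DEF-jaw' on this object")

for v in obj.data.vertices:
    if v.select:
        # v.groups holds (group index, weight) pairs for this vertex
        weight = next((g.weight for g in v.groups if g.group == group.index), 0.0)
        print("vertex", v.index, "DEF-jaw weight:", weight)

Any tooth vertex printing 0.0 while its well-behaved neighbours print values near 1.0 is the likely culprit; assigning it to the group with a matching weight should pin the tooth back into the mouth.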
https://polymake.org/doku.php/user_guide/howto/shell_custom | user_guide:howto:shell_custom
Here we won't speak about really mighty features like defining your own rules or object types; they are described elsewhere. Instead we'll start with tiny, easy-to-use fine-tuning tools.

You can tune many aspects of polymake's behavior by changing the values of numerous variables which are dedicated to keeping user-visible settings. In the following, these variables are referred to by their polymake nickname, custom variables (which actually has been borrowed from the xemacs terminology). The definitions of custom variables are scattered over dozens of rule files; fortunately, polymake offers two ways of accessing them in a more systematic and comfortable manner.
1. All definitions are repeated in a text file residing in your home directory: ~/.polymake/customize.pl . You are invited to load it in your favorite text editor and study its contents. The variables are sorted there first by application, then by package name. Most of them will appear deactivated (that is, the lines start with #, which is the perl fashion of making comments). This means that the default values assigned to them somewhere in the polymake source code are in effect. If you want to change them, simply remove the # sign and fill in your desired value.

By the way, the color values don't need to be entered in numerical RGB notation; any color name listed in the system color list addressed in $Visual::Color::RGBtxt_path can be used instead.

Some variable definitions appear accompanied by a preceding line of the form ARCH('xyz') . These are special in that they are dynamically set by auto-configuration routines lurking in some rule files, and are therefore potentially dependent on the computer architecture. (Recall that you may use polymake on alternating computer platforms having different paths to programs etc.) The preferred way of changing these variables is to use the reconfigure command as described below, because additional consistency checks may be associated with some of them. But in many cases they can be easily edited as well.

A few variables are stored in a different file: ~/.polymake/prefer.pl . They are separated from the rest because they don't belong to any application but rather control universal facilities like history editing in the interactive shell or locating extensions.

When specifying various search paths, you may use ~ as an abbreviation for your home directory; other environment variables can be referred to as $ENV{name}.
Please remember that you shouldn't edit any of these files as long as a polymake process is running anywhere under your account. Sometimes polymake needs to store some changes there on its own behalf, but this happens immediately before the exit; so either your or polymake's changes will definitely be lost.
2. There are two interactive commands manipulating custom variables:
set_custom $name=value;
set_custom @name=(value, ...);
set_custom %name=(key => value, ...);
set_custom $name{key}=value;

set a new value of a scalar, an array, a hash map, or a single value therein

reset_custom $name;
reset_custom @name;
reset_custom %name;
reset_custom $name{key};

restore the default value
Both commands come into effect immediately, but also mark the variable as changed, so that the new value will also appear in your personal customization file after the session ends. You can also change the custom variables for the rest of the current session only, without updating the file, by a plain assignment. If you want to change some value temporarily, just to influence the evaluation of the next expression, write the local keyword instead of set_custom. (It's not polymake's black magic, just a normal perl operator.)
#### Configuring applications
There are two custom variables related to applications. The list @start_applications contains names of all applications to be loaded at the very beginning of the interactive session (although the process of loading applications is totally transparent to you, having loaded your favorites in advance avoids annoying delays during the session). The variable $default_application names the application to be made current at the beginning of the session. Until you change this, it will be polytope for its undisputed merits as the oldest and most prominent application in polymake.
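For example, the following illustrative snippet (made up here, using only the variables just described) makes graph the current application at startup and preloads two applications, then undoes both changes:

# start in the graph application from now on
set_custom $default_application="graph";
# preload the two most frequently used applications
set_custom @start_applications=("polytope","graph");
# changed your mind? back to the factory defaults:
reset_custom $default_application;
reset_custom @start_applications;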
Preferences are lists of choices among different tools capable of performing the same task. When polymake can't make a choice based on objective criteria, it consults the preference lists and takes the tool listed first. For example, there are several programs capable of drawing a 3-d polytope: javaview, povray, geomview, etc. Another example is the convex hull computation, for which up to five different algorithms (depending on the coordinate type) come into consideration. The sensible choice between them can't be made based on quantitative estimates alone; instead, your intuition and, sometimes, personal taste must take over the leadership.
As with custom variables, polymake offers two ways of handling the preferences:
1. Manually editing the file ~/.polymake/prefer.pl . The preferences are stored in the last section of this file, grouped by applications. What you see there are exact copies of prefer commands as they appear in the polymake rules. Modify them to your taste. They have the same syntax as the interactive commands described below.
2. Calling interactive commands:
prefer "label";
declare some tool to be the preferred one for any tasks it may perform. For example, saying prefer "jreality"; instructs polymake to call jReality to display any kind of 2-d, 3-d, and 4-d drawings related to polytopes, as well as graphs visualized with spring embedding model. You can specify your wishes more precisely, though: saying prefer "graphviz.graph"; makes the neato program from the graphviz package the default tool for visualizing graphs, letting jReality be responsible for all the rest.
prefer "*.task label1, label2 ...";
establish a specific order of preferred tools for special task. For example, a command prefer "*.convex_hull cdd, lrs, beneath_beyond"; directs polymake to always try the cdd convex hull computation first; if it fails, the lrs algorithm will be applied; if both fail, then beneath_beyond, and as the last resort anything else without specific order.
reset_preference "label";
restore the settings to the pristine state. It accepts a tool name or a wildcard expression as its argument and restores the effect of any matching prefer command encountered in the rule files. The most radical form reset_preference "*"; forgets any preferences you've ever changed and restores the “factory settings”.
Both commands come immediately into effect; before exiting the interactive session the changes will be stored in your preference file.
prefer_now "labels";
does the same as prefer, but restores the previous setting as soon as the current input is completely evaluated. No persistent changes are made. This command can be seen as the local modification of prefer (standard perl does not allow applying local to anything but variable assignments).
show_preferences;
display all active preferences in the current application. If you want to find out about all tools involved in the preference mechanics, including inactive ones, use the TAB completion in the prefer command or browse the help system starting at the topic '/preferences' .
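Putting the preference commands together, an illustrative session (all command names and labels are taken from the examples above) might look like this:

# always try the cdd convex hull code first, then lrs
prefer "*.convex_hull cdd, lrs";
# use lrs just while the next input is evaluated
prefer_now "lrs";
# inspect what is currently active
show_preferences;
# forget everything and restore the factory settings
reset_preference "*";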
Remark: the location of the configuration files can be changed from its default value ~/.polymake by setting the environment variable POLYMAKE_USER_DIR=/other/location, or overridden temporarily, just for one session, with the command-line option --config-path.
https://www.tutorialspoint.com/how-can-a-specific-tint-be-added-to-grayscale-images-in-scikit-learn-in-python | # How can a specific tint be added to grayscale images in scikit-learn in Python?
A tint is added by choosing 'R', 'G' and 'B' multiplier values and applying them to the grayscale-derived image.

Below is a Python program that implements this using scikit-image (skimage), a Python library for image processing, together with matplotlib for displaying the results.
## Example
import matplotlib.pyplot as plt
from skimage import io
from skimage import color

# Read the original RGB image from disk (keep your own path here).
path = "path to puppy_1.jpg"
orig_img = io.imread(path)

# Convert to grayscale, then back to a 3-channel RGB image so that
# per-channel multipliers can be applied.
grayscale_img = color.rgb2gray(orig_img)
image = color.gray2rgb(grayscale_img)

# R, G, B multipliers that define the two tints.
red_multiplier = [0.7, 0, 0]
yellow_multiplier = [1, 0.9, 0]

fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(8, 4),
                               sharex=True, sharey=True)
ax1.imshow(red_multiplier * image)
ax1.set_title('Red-tinted image')
ax2.imshow(yellow_multiplier * image)
ax2.set_title('Yellow-tinted image')
plt.show()
## Explanation
• The required packages are imported into the environment.
• The path where the image is stored is defined.
• The 'imread' function is used to visit the path and read the image.
• The function 'rgb2gray' is used to convert the image from the RGB color space to grayscale.
• The function 'gray2rgb' is used to convert the grayscale image back to a 3-channel RGB image.
• The R, G, B values for the multipliers are defined and applied to the image.
• The 'imshow' function is used to display the tinted images.
• The matplotlib library is used to render the figure.
• The output is displayed on the console.
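To also save a tinted image to disk (not covered in the original article), matplotlib's imsave can be used; it accepts float RGB arrays with values in the range [0, 1]. The file name below is just an example:

# Save the yellow-tinted image; the array already holds floats in [0, 1].
plt.imsave("tinted_puppy.png", yellow_multiplier * image)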
http://www.eng-tips.com/viewthread.cfm?qid=355039 | ×
INTELLIGENT WORK FORUMS
FOR ENGINEERING PROFESSIONALS
Are you an
Engineering professional?
Join Eng-Tips Forums!
• Talk With Other Members
• Be Notified Of Responses
• Keyword Search
Favorite Forums
• Automated Signatures
• Best Of All, It's Free!
*Eng-Tips's functionality depends on members receiving e-mail. By joining you are opting in to receive e-mail.
#### Posting Guidelines
Promoting, selling, recruiting, coursework and thesis posting is forbidden.
# Electromagnetic Interference problem
## Electromagnetic Interference problem
(OP)
I am performing vibration analysis on a motor that is around 50 feet away from the control panel.
The data translation module I am using has a USB output, so I had to install a USB over CAT5 converter to run the connection out to the control panel (since the length limitation on USB is 16.4 feet).
The only problem is that the contractor ran the CAT5 converter box right under a soft starter in the control room, and now my program freezes every time I run the motor while connected inside the control room.
I need to leave a PC inside the control room for a few months and be able to remote into it for recording purposes.
I believe that the soft starter is causing electromagnetic interference with the USB to CAT5 converter, which in turn causes my analysis software to freeze.
Would a Faraday cage solve this problem?
Are there any kind of EMI-shielded USB cables out there? Or am I right about the USB to CAT5 converter's circuitry receiving a massive spike from the soft starter?
### RE: Electromagnetic Interference problem
It is difficult to say without seeing the installation and perhaps also doing a few measurements. But it is more often common-mode voltages than EMI that cause the problem.
First of all, I usually try to connect all power supplies to an outlet that has the same ground as the system I am recording signals from. That usually reduces common mode voltages so that the data transmission works.
Second, I use rather heavy HF ferrite toroids on the USB cable. Put as many turns as you can through the toroid.
If that doesn't help, get someone used to this kind of problem to help you. A Faraday cage - I do not think that will help. And it isn't practical to arrange in an effective way either. If it helps, it is usually because you bond the different pieces of equipment together when you install them in a Faraday cage.
Gunnar Englund
www.gke.org
--------------------------------------
Half full - Half empty? I don't mind. It's what in it that counts.
### RE: Electromagnetic Interference problem
#### Quote (Cyanogen281)
The only problem is that the contractor ran the cat 5 converter box right under a soft starter
This is a bit unclear. Is the box under the soft starter, or just the CAT5 cable?
In any event,
1. Is your USB to CAT5 converter in a metal box?
2. Shielded ethernet cable is available. Is yours shielded?
If you've got a plastic box and unshielded cable, the noise from the soft starter has two different paths into your system.
PS - please don't post the same question in more than one forum, unless you do so as described in the forum policies.
Best to you,
Goober Dave
Haven't seen the forum policies? Do so now: Forum Policies
### RE: Electromagnetic Interference problem
(OP)
I should have been more specific...
I am measuring data from a motor 50 feet away.
I am using a USB over CAT5 ethernet converter to get around the USB length limitation (16.4 ft).
The data translation module is powered by the USB port on the PC (the connection goes over the CAT5 until right inside the control room, where it is converted back to USB, coincidentally right under the soft starter).
The converter box itself, I think, is picking up electromagnetic interference from the soft starter right above it...
Any advice would be greatly appreciated.
The converter looks something like this Link
### RE: Electromagnetic Interference problem
You were specific enough in the OP.
It is very seldom that such a converter is disturbed by emission "through the air". It is usually HF common-mode voltages/currents that cause the trouble.
You can illustrate that the "air" emission is minuscule by introducing a screen (copper plate or steel plate) between the two devices, grounding the plate, and seeing if anything changes. You do not need a complete "cage" to remove 20+ dB of interference, but you will probably still have the interference left in your system.
And, if that actually works - fine, there's your solution.
Did you try to increase the distance between culprit and victim?
Gunnar Englund
www.gke.org
### RE: Electromagnetic Interference problem
(OP)
We tried bringing the USB cable outside of the control room (effectively increasing the distance) to see if that had any effect on the EMI, and the vibration analysis software worked great.
It's only when the converter is inside of the control room that this problem occurs.
I have looked through the settings in the software and there is no type of filter option.
I am looking into something called Mu-Metal tape that might be the solution to my problem.
### RE: Electromagnetic Interference problem
Mu-metal is only for magnetic fields. Not so much for electro-magnetic interference. It requires special care and lots of experience to be applied correctly. Did you try the other tips?
Gunnar Englund
www.gke.org
### RE: Electromagnetic Interference problem
(OP)
We cannot move the computer outside of the control room.
Would this be an option for wrapping around the USB over CAT5 Converter? Link
Thanks for all the help guys.
### RE: Electromagnetic Interference problem
Best to you,
Goober Dave
### RE: Electromagnetic Interference problem
You can try moving the Cat 5 to USB converter to a different location. That should not be too much trouble.
### RE: Electromagnetic Interference problem
(OP)
The converter needs to be in the same room as the computer with the vibration analysis software on it.
If I use the copper tape I linked in the post above, I assume I should ground it to something outside of the control panel where the soft starter is located.
I have attached a picture of what I have going on and the predicament I am in.
(Yes I know my artistic skills are pathetic at best)
Thanks for all the help guys!
### RE: Electromagnetic Interference problem
Ground whatever you're using to the metal can that it sits in. That enclosure is surely grounded well.
Remember, shielding the thing is just an experiment as Gunnar mentioned above. It may or may not prove to be your solution.
Best to you,
Goober Dave
### RE: Electromagnetic Interference problem
(OP)
Can anyone think of anything else (other than EMI from the soft starter) that could be causing this issue with the vibration analysis program?
The program freezes when the soft starter kicks on, but after I close out of the program and open it back up (with the motor still running) it functions as it should.
Any other ideas/opinions on this would be greatly appreciated.
### RE: Electromagnetic Interference problem
There are plenty of USB extenders which operate over fibre. Fibre doesn't care a damn about magnetic fields, grounding problems, or pretty much anything other than physical damage. Electrically, little short of a direct hit by lightning would bother it.
### RE: Electromagnetic Interference problem
Use a desktop machine instead of a laptop... might have better grounding and not freeze the system.
Dan - Owner
http://www.Hi-TecDesigns.com
### RE: Electromagnetic Interference problem
Or...
Use a laptop (powered by battery alone, isolated on a wood table) instead of a desktop. If a ground loop is the issue, then the complete lack of grounding should break the loop.
PS: It's too bad that EMI isn't fuzzy red lines as shown in the sketch. It'd be so much easier if it were visible like that.
### RE: Electromagnetic Interference problem
(OP)
I have tried using battery power on the laptop only (holding it in my hands) and the vibration analysis program still freezes.
I thought it might be a ground loop issue so I installed a UPS powered HP PC out there a week ago and still the same problem exists...
Which leads me back to the USB over CAT5 Converter...
The CAT5 cable is UTP (unshielded twisted pair), the USB is unshielded, and the converter is a plastic box likely to have sensitive integrated circuitry in it.
### RE: Electromagnetic Interference problem
(OP)
I have thought about going and buying a hundred feet of CAT6 just to ensure that it isn't the Ethernet cable causing problems with my system.
Could anyone explain to me how a USB to Ethernet converter works? Could I use a USB to Ethernet converter on one end and just run the cat5 straight into my computer? Basically taking out the converter under the soft starter?
Are there any conflicting protocols I should be aware of?
Again, thanks for all the help guys.
### RE: Electromagnetic Interference problem
That would depend on your software, but it's certainly doable in the general sense: http://www.usb-over-ethernet.com/. You probably need a software driver as marketed by that site, since normal Ethernet drivers only understand TCP/IP, which would not be what's coming out of your ethernet cable.
TTFN
### RE: Electromagnetic Interference problem
If I have this right in my head:
You have a vibration analyzer on the motor, and its output is USB. Your PC is in the control room. So out by the motor is a USB-to-ethernet converter and in the control room is an ethernet-to-USB converter for plugging into the pc?
Why not try a long USB extension and keep the active (repeater) portion outside of the room?
http://www.newegg.com/Product/Product.aspx?Item=N8...
Best to you,
Goober Dave
### RE: Electromagnetic Interference problem
I agree with Gunnar when he said that ground loops are a common (mode, LOL) problem; but you've probably eliminated that possibility with your hand held laptop experiment.
How about just getting some very large (2-inch wide) braid and shielding the entire cable including the dongle?
Beware one problem doesn't exclude the other. Be prepared to repeat the hand held laptop experiment in case it's both ground loop and EMI.
### RE: Electromagnetic Interference problem
There is such a thing as shielded Cat 5/6 cable. I don't think that going from Cat 5 to Cat 6 cable will make any difference. Instead, try going to a shielded type of cable. You will also need special shielded RJ-45 connectors. I still don't know why you can't change the location of the converter within the control room. Mount it as far as possible from the soft starter and as close to your computer as possible.
### RE: Electromagnetic Interference problem
How about unbolting the converter from the wall and temporarily setting it somewhere else, like on the floor, just as a troubleshooting method.
Keith Cress
kcress - http://www.flaminsystems.com
### RE: Electromagnetic Interference problem
Your USB to Ethernet adapter is going to be a microprocessor/microcontroller with a USB port and an Ethernet port. The firmware initializes both ports and then passes the data through. You can easily do this in your controller if it has the proper ports.
Z
### RE: Electromagnetic Interference problem
You've stated nothing regarding how your
1. "data translation" hardware is grounded or isolated from ground
2. cable is routed (tray, conduit), or its degree of isolation from adjacent power wiring.
It sounds like a ground loop issue. There is test equipment for isolating ground loops; have you used such hardware?
### RE: Electromagnetic Interference problem
(OP)
Our data translation hardware is grounded next to the motor that we are measuring the vibration from.
When we move the computer (Laptop for this application) away from the motor and begin the vibration analysis everything works as it should.
The CAT5 cable that is run into the control room is, unfortunately, run in the same conduit as the power wiring.
We have isolated the problem to the USB to Ethernet converter and it only seems to freeze the program when the USB/CAT5 converter is inside the control room.
Is a Faraday cage completely out of the question at this point? Wouldn't that shield the USB/CAT5 converter? Perhaps some copper tape wrapped around the device, then grounding the tape?
### RE: Electromagnetic Interference problem
#### Quote:
The CAT5 cable that is run into the control room is, unfortunately, run in the same conduit as the power wiring.
You've now lost whatever money _that_ saved, just in your time.
A separate conduit for the signal cable basically amounts to a Faraday cage, without the expense of trying to pull a foil-wrapped signal cable in with the unshielded one.
Mike Halloran
Pembroke Pines, FL, USA
### RE: Electromagnetic Interference problem
An optical fibre doesn't know or care about EMI, regardless of how bad it gets. Run your USB data over fibre.
### RE: Electromagnetic Interference problem
You can get a good sheet metal guy to Faraday-ize your USB/Ethernet converter, or just wrap the heck out of it (and also the cables next to it) with aluminum foil and see if it really works.
I like ScottyUK's suggestion the best.
Best to you,
Goober Dave
### RE: Electromagnetic Interference problem
Another reason to move the signal cable to a separate conduit, on a planet where the NEC applies, is that the Cat5 cable probably isn't rated for voltages as high as the motor voltage.
Mike Halloran
Pembroke Pines, FL, USA
https://github.com/hpfem/esco2012-boa/blob/master/roemer.tex | # hpfem/esco2012-boa
\title{Sensitivity Analysis Techniques for the Quantification of Uncertainty in Electromagnetic Simulations}
\tocauthor{U. Roemer}
\author{}
\institute{}
\maketitle

\begin{center}
{\large Ulrich R\"omer}\\
Technische Universit\"at Darmstadt\\
{\tt [email protected]} \\
\vspace{4mm}{\large Stephan Koch}\\
Technische Universit\"at Darmstadt\\
{\tt [email protected]} \\
\vspace{4mm}{\large Thomas Weiland}\\
Technische Universit\"at Darmstadt\\
{\tt [email protected]}
\end{center}

\section*{Abstract}
The input parameters of models used for the simulation of technical devices exhibit uncertainties, e.g., due to the manufacturing process. Consequently, the model outputs, representing physical quantities of interest, also deviate from their nominal values. The quantification of these uncertainties is important with respect to the reliability of numerical simulations. Given the statistical descriptions of the input uncertainty, the problem can be treated systematically in a probabilistic setting. On the contrary, deterministic approaches, where tolerance bounds and statistics of the outputs are determined by perturbation techniques, may prove useful in several situations \cite{Babuska,Harbrecht}. Restrictive design specifications, for instance, may only require worst case tolerances. Moreover, cheap and efficient approximation schemes can be obtained. Therefore, this work addresses deterministic techniques for the quantification of uncertainty, mainly applied to low-frequency approximations of Maxwell's equations. Special emphasis is put on sensitivity analysis techniques for the variation of geometrical parameters \cite{Hiptmair}. Equally, variations in the material parameters as well as sources will be considered. Numerical examples obtained by the Finite Element Method will be given and discussed.

\bibliographystyle{plain}
\begin{thebibliography}{10}

\bibitem{Babuska}
{\sc I. Babu\v{s}ka, F. Nobile, and R. Tempone}.
{Worst case scenario analysis for elliptic problems with uncertainty}.
Numerische Mathematik, 101(2):185--219, 2005.

\bibitem{Harbrecht}
{\sc Helmut Harbrecht}.
{On output functionals of boundary value problems on stochastic domains}.
Mathematical Methods in the Applied Sciences, 33(1):91--102, 2010.

\bibitem{Hiptmair}
{\sc Ralf Hiptmair and Jingzhi Li}.
{Shape derivatives in differential forms I: An intrinsic perspective}.
Technical Report 2011/42, Seminar for Applied Mathematics, ETH Z\"urich, 2011.

\end{thebibliography}
http://jdc.math.uwo.ca/M1600a/l/17.html | ## Announcements:
Read Section 3.5 for next class. This is also core material. We aren't covering 3.4. Work through recommended homework questions.
Quiz 5 will focus on 3.1, 3.2 and the first half of 3.3 (up to and including Example 3.26).
Midterm: Saturday, October 25, 7-10pm. It will cover the material up to and including the lecture on Monday, Oct 20. Practice midterms are available on the exercises page.
Office hour: Today, 11:30-noon, MC103B.
Help Centers: Monday-Friday 2:30-6:30 in MC 106.
## Partial review of Lecture 16:
Definition: An inverse of an $n \times n$ matrix $A$ is an $n \times n$ matrix $A'$ such that $$A A' = I \qtext{and} A' A = I .$$ If such an $A'$ exists, we say that $A$ is invertible.
Theorem 3.6: If $A$ is an invertible matrix, then its inverse is unique.
Because of this, we write $A^{-1}$ for the inverse of $A$, when $A$ is invertible. We do not write $\frac{1}{A}$.
Example: If $A = \bmat{rr} 1 & 2 \\ 3 & 7 \emat$, then $A^{-1} = \bmat{rr} 7 & -2 \\ -3 & 1 \emat$ is the inverse of $A$.
But the zero matrix and the matrix $B = \bmat{rr} -1 & 3 \\ 2 & -6 \emat$ are not invertible.
Theorem 3.7: If $A$ is an invertible $n \times n$ matrix, then the system $A \vx = \vb$ has the unique solution $\vx = A^{-1} \vb$ for any $\vb$ in $\R^n$.
Remark: This is not in general an efficient way to solve a system.
Theorem 3.8: The matrix $A = \bmat{cc} a & b \\ c & d \emat$ is invertible if and only if $ad - bc \neq 0$. When this is the case, $$A^{-1} = \frac{1}{ad-bc} \, \bmat{rr} \red{d} & \red{-}b \\ \red{-}c & \red{a} \emat .$$
We call $ad-bc$ the determinant of $A$, and write it $\det A$. It determines whether or not $A$ is invertible, and also shows up in the formula for $A^{-1}$.
### Properties of Invertible Matrices
Theorem 3.9: Assume $A$ and $B$ are invertible matrices of the same size. Then:
1. $A^{-1}$ is invertible and $(A^{-1})^{-1} = \query{A}$
2. If $c$ is a non-zero scalar, then $cA$ is invertible and $(cA)^{-1} = \query{\frac{1}{c} A^{-1}}$
3. $AB$ is invertible and $(AB)^{-1} = \query{B^{-1} A^{-1}}$ (socks and shoes rule)
4. $A^T$ is invertible and $(A^T)^{-1} = \query{(A^{-1})^T}$
5. $A^n$ is invertible for all $n \geq 0$ and $(A^n)^{-1} = \query{(A^{-1})^n}$
To verify these, in every case you just check that the matrix shown is an inverse.
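For instance, for (c): $$\kern-6ex (AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = A I A^{-1} = A A^{-1} = I,$$ and $(B^{-1}A^{-1})(AB) = I$ by the same kind of computation, so $B^{-1}A^{-1}$ is an inverse of $AB$.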
Remark: Property (c) is the most important, and generalizes to more than two matrices, e.g. $(ABC)^{-1} = C^{-1} B^{-1} A^{-1}$.
Remark: For $n$ a positive integer, we define $A^{-n}$ to be $(A^{-1})^n = (A^n)^{-1}$. Then $A^n A^{-n} = I = A^0$, and more generally $A^r A^s = A^{r+s}$ for all integers $r$ and $s$.
Remark: There is no formula for $(A+B)^{-1}$. In fact, $A+B$ might not be invertible, even if $A$ and $B$ are.
We can use these properties to solve a matrix equation for an unknown matrix.
## New material
### The fundamental theorem of invertible matrices:
Very important! Will be used repeatedly, and expanded later.
Theorem 3.12: Let $A$ be an $n \times n$ matrix. The following are equivalent:
a. $A$ is invertible.
b. $A \vx = \vb$ has a unique solution for every $\vb \in \R^n$.
c. $A \vx = \vec 0$ has only the trivial (zero) solution.
d. The reduced row echelon form of $A$ is $I_n$.
Proof: We have seen that (a) $\implies$ (b) in Theorem 3.7 above.
We'll use our past work on solving systems to show that (b) $\implies$ (c) $\implies$ (d) $\implies$ (b), which will prove that (b), (c) and (d) are equivalent.
We will only partially explain why (b) implies (a).
(b) $\implies$ (c): If $A \vx = \vb$ has a unique solution for every $\vb$, then in particular this holds when $\vb$ is the zero vector; since the trivial solution always solves $A \vx = \vec 0$, it must be the only solution.
(c) $\implies$ (d): Suppose that $A \vx = \vec 0$ has only the trivial solution.
That means that the rank of $A$ must be $n$.
So in reduced row echelon form, every row must have a leading $1$.
The only $n \times n$ matrix in reduced row echelon form with $n$ leading $1$'s is the identity matrix.
(d) $\implies$ (b): If the reduced row echelon form of $A$ is $I_n$, then the augmented matrix $[A \mid \vb\,]$ row reduces to $[I_n \mid \vc\,]$, from which you can read off the unique solution $\vx = \vc$.
(b) $\implies$ (a) (partly): Assume $A \vx = \vb$ has a solution for every $\vb$.
That means we can find $\vx_1, \ldots, \vx_n$ such that $A \vx_i = \ve_i$ for each $i$.
If we let $B = [ \vx_1 \mid \cdots \mid \vx_n\,]$ be the matrix with the $\vx_i$'s as columns, then $$\kern-8ex AB = A \, [ \vx_1 \mid \cdots \mid \vx_n\,] = [ A \vx_1 \mid \cdots \mid A \vx_n\,] = [ \ve_1 \mid \cdots \mid \ve_n \,] = I_n .$$ So we have found a right inverse for $A$.
It turns out that $BA= I_n$ as well, but this is harder to see. $\qquad\Box$
Note: We have omitted (e) from the theorem, since we aren't covering elementary matrices. They are used in the text to prove the other half of (b) $\implies$ (a).
We will see many important applications of Theorem 3.12. For now, we illustrate one theoretical application and one computational application.
Theorem 3.13: Let $A$ be a square matrix. If $B$ is a square matrix such that either $AB=I$ or $BA=I$, then $A$ is invertible and $B = A^{-1}$.
Proof: If $BA = I$, then the system $A \vx = \vec 0$ has only the trivial solution, as we saw in the challenge problem. So (c) is true. Therefore (a) is true, i.e. $A$ is invertible. Then: $$\kern-6ex B = BI = BAA^{-1} = IA^{-1} = A^{-1} .$$ (The uniqueness argument again!)$\quad\Box$
This is very useful! It means you only need to check multiplication in one order to know you have an inverse.
### Gauss-Jordan method for computing the inverse
Motivate on board: we'd like to find a $B$ such that $AB = I$.
Theorem 3.14: Let $A$ be a square matrix. If a sequence of row operations reduces $A$ to $I$, then the same sequence of row operations transforms $I$ into $A^{-1}$.
Why does this work? It's the combination of our arguments that (d) $\implies$ (b) and (b) $\implies$ (a). If we row reduce $[ A \mid \ve_i\,]$ to $[ I \mid \vc_i \,]$, then $A \vc_i = \ve_i$. So if $B$ is the matrix whose columns are the $\vc_i$'s, then $AB = I$. So, by Theorem 3.13, $B = A^{-1}$.
The trick is to notice that we can solve all of the systems $A \vx = \ve_i$ at once by row reducing $[A \mid I\,]$. The matrix on the right will be exactly $B$!
Example on board: Find the inverse of $A = \bmat{rr} 1 & 2 \\ 3 & 7 \emat$.
Illustrate proof of Theorem 3.14.
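One possible sequence of row operations (a sketch; any correct sequence reaches the same answer): $$\kern-6ex \bmat{rr|rr} 1 & 2 & 1 & 0 \\ 3 & 7 & 0 & 1 \emat \xrightarrow{R_2 - 3R_1} \bmat{rr|rr} 1 & 2 & 1 & 0 \\ 0 & 1 & -3 & 1 \emat \xrightarrow{R_1 - 2R_2} \bmat{rr|rr} 1 & 0 & 7 & -2 \\ 0 & 1 & -3 & 1 \emat ,$$ and the right-hand block is exactly the inverse $\bmat{rr} 7 & -2 \\ -3 & 1 \emat$ found earlier.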
Example on board: Find the inverse of $A = \bmat{rrr} 1 & 0 & 2 \\ 2 & 1 & 3 \\ 1 & -2 & 5 \emat$. Illustrate proof of Theorem 3.14.
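If the arithmetic goes right, the reduction should end with (a check, computed independently of the board work): $$A^{-1} = \bmat{rrr} 11 & -4 & -2 \\ -7 & 3 & 1 \\ -5 & 2 & 1 \emat ,$$ which you can confirm by multiplying out $A A^{-1} = I_3$. (Here $\det A = 1$, so the entries even come out as integers.)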
Example on board: Find the inverse of $B = \bmat{rr} -1 & 3 \\ 2 & -6 \emat$.
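A sketch of what goes wrong in this case: $$\kern-6ex \bmat{rr|rr} -1 & 3 & 1 & 0 \\ 2 & -6 & 0 & 1 \emat \xrightarrow{R_2 + 2R_1} \bmat{rr|rr} -1 & 3 & 1 & 0 \\ 0 & 0 & 2 & 1 \emat .$$ A zero row appears in the left-hand portion, so $B$ is not invertible, consistent with $\det B = (-1)(-6) - (3)(2) = 0$.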
So now we have a general purpose method for determining whether a matrix $A$ is invertible, and finding the inverse:
1. Form the $n \times 2n$ matrix $[A \mid I\,]$.
2. Use row operations to get it into reduced row echelon form.
3. If a zero row appears in the left-hand portion, then $A$ is not invertible.
4. Otherwise, $A$ will turn into $I$, and the right hand portion is $A^{-1}$.
The trend continues: when given a problem to solve in linear algebra, we usually find a way to solve it using row reduction!
Note that finding $A^{-1}$ is more work than solving a system $A \vx = \vb$.
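(Standard operation counts, not from these notes: for large $n$, reducing $[A \mid I\,]$ takes roughly three times as many arithmetic operations as reducing $[A \mid \vb\,]$, so compute the inverse only when you actually need the inverse itself.)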
We aren't covering inverse matrices over $\Z_m$.
### Questions:
Question: Let $A$ be a $4 \times 4$ matrix with rank $3$. Is $A$ invertible? What if the rank is $4$?
True/false: If $A$ is a square matrix, and the column vectors of $A$ are linearly independent, then $A$ is invertible.
True/false: If $A$ and $B$ are square matrices such that $AB$ is not invertible, then at least one of $A$ and $B$ is not invertible.
True/false: If $A$ and $B$ are matrices such that $AB = I$, then $BA = I$.
Question: Find invertible matrices $A$ and $B$ such that $A+B$ is not invertible.
https://intelligencemission.com/free-energy-generator-in-speaker-magnet-free-electricity-check.html | Let’s look at the B field of the earth and recall how any magnet works; if you pass Free Power current through Free Power wire it generates Free Power magnetic field around that wire. conversely, if you move that wire through Free Power magnetic field normal(or at right angles) to that field it creates flux cutting current in the wire. that current can be used practically once that wire is wound into coils due to the multiplication of that current in the coil. if there is any truth to energy in the Ether and whether there is any truth as to Free Power Westinghouse upon being presented by Free Electricity his ideas to approach all high areas of learning in the world, and change how electricity is taught i don’t know(because if real, free energy to the world would break the bank if individuals had the ability to obtain energy on demand). i have not studied this area. i welcome others who have to contribute to the discussion. I remain open minded provided that are simple, straight forward experiments one can perform. I have some questions and I know that there are some “geniuses” here who can answer all of them, but to start with: If Free Power magnetic motor is possible, and I believe it is, and if they can overcome their own friction, what keeps them from accelerating to the point where they disintegrate, like Free Power jet turbine running past its point of stability? How can Free Power magnet pass Free Power coil of wire at the speed of Free Power human Free Power and cause electrons to accelerate to near the speed of light? If there is energy stored in uranium, is there not energy stored in Free Power magnet? Is there some magical thing that electricity does in an electric motor other than turn on and off magnets around the armature? (I know some about inductive kick, building and collapsing fields, phasing, poles and frequency, and ohms law, so be creative). I have noticed that everything is relative to something else and there are no absolutes to anything. Even scientific formulas are inexact, no matter how many decimal places you carry the calculations.
Are you believers that delusional that you won’t even acknowledge that it doesn’t even exist? How about an answer from someone without attacking me? This is NOT personal, just factual. Harvey1 kimseymd1 Free Energy two books! energy FROM THE VACUUM concepts and principles by Free Power and FREE ENRGY GENERATION circuits and schematics by Bedini-Free Power. Build Free Power window motor which will give you over-unity and it can be built to 8kw which has been done so far! NOTHING IS IMPOSSIBLE! Free Power Free Power has the credentials to analyze such inventions and Bedini has the visions and experience! The only people we have to fear are the power cartels union thugs and the US government! Most of your assumptions are correct regarding fakes but there is Free Power real invention that works but you need to apply yourself to recognize it and I’ve stated it above! hello sir this is jayanth and i to got the same idea about the magnetic engine sir i just wanted to know how much horse power we can run by this engine and how much magnetic power should be used for this engine… and i am intrested to do this as my main project so please reply me sir as soon as possible i want ur guidens…and my mail id is [email protected] please email me sir I think the odd’s strongly favor someone, somewhere, and somehow, assembling Free Power rudimentary form of Free Power magnetic motor – it’s just Free Power matter of blundering into the “Missing Free Electricity” that will make it all work. Why not ?? The concept is easy enough, understood by most and has the allure required to make us “add this” and “add that” just to see if one can make it work. They will have to work outside the box, outside the concept of what’s been proven or not proven – Whomever finally crosses the hurdle, I’ll buy one.
These functions have a minimum in chemical equilibrium, as long as certain variables (T, and V or p) are held constant. In addition, they also have theoretical importance in deriving Maxwell relations. Work other than p dV may be added, e.g., for electrochemical cells, or f dx work in elastic materials and in muscle contraction. Other forms of work which must sometimes be considered are stress-strain, magnetic (as in adiabatic demagnetization used in the approach to absolute zero), and work due to electric polarization. These are described by tensors.
For Free Power start, I’m not bitter. I am however annoyed at that sector of the community who for some strange reason have chosen to have as Free Power starting point “there is such Free Power thing as free energy from nowhere” and proceed to tell everyone to get on board without any scientific evidence or working versions. How anyone cannot see that is appalling is beyond me. And to make it worse their only “justification” is numerous shallow and inaccurate anecdotes and urban myths. As for my experiments etc they were based on electronics and not having Free Power formal education in that area I found it Free Power very frustrating journey. Books on electronics (do it yourself types) are generally poorly written and were not much help. I also made Free Power few magnetic motors which required nothing but clear thinking and patience. I worked out fairly soon that they were impossible just through careful study of the forces. I am an experimenter and hobbyist inventor. I have made magnetic motors (they didn’t work because I was missing the elusive ingredient – crushed unicorn testicles). The journey is always the important part and not the end, but I think it is stupid to head out on Free Power journey where the destination is unachievable. Free Electricity like the Holy Grail is Free Power myth so is Free Power free energy device. Ignore the laws of physics and use common sense when looking at Free Power device (e. g. magnetic motors) that promises unending power.
Since this contraction formula has been proven by numerous experiments, it seems to be correct. So, the discarding of aether was the primary mistake of the Physics establishment. Empty space is not empty. It has physical properties: an impedance, a constant of electrical permittivity, and a constant of magnetic permeability. Truly empty space would have no such properties! The Aether is seething with energy. Some physicists, like Misner, Thorne, and Wheeler in their book "Gravitation", calculate that a cubic centimeter of space has about ten to the 94th power grams of energy. Using the formula E=mc^2, that comes to a tremendous amount of energy. If only an exceedingly small portion of this "Zero Point energy" could be tapped, it would amount to a lot! Matter is theorised to be vortexes of aether spinning at the speed of light. That is why electron-positron pair production can occur in empty space if a sufficiently strong electric field is imposed on that space. In that respect matter can be created. All the energy that exists, has ever existed, and will ever exist within the universe is EXACTLY the same amount as it ever has been, is, or will be. You can't create more energy. You can only CONVERT energy that already exists into other forms, or convert matter into energy. And there is ALWAYS loss. Always. There is no way around this simple truth of the universe, sorry. There is a serious problem with your argument. "Give me one miracle and we will explain the rest." Then where did all that mass and energy that made the so-called "Big Bang" come from? Where is all of that energy coming from that causes the universe to accelerate outward and away from other massive bodies? Therein lies the real magic, doesn't it? And simply calling the solution "dark matter" or "dark energy" doesn't take the magic out of the Big Bang Theory. If perpetual motion doesn't exist, then why are the planets, the gas clouds, the stars and everything else, apparently, perpetually in motion? What was called religion yesterday is called science today. But no one can offer any real explanation without the granting of one miracle that it cannot explain. Chink, chink goes the armor. You asked about the planets as if they are such machines. But they aren't. Do they spin and orbit for a very long time? Yes. Forever? No. But let's assume for the sake of argument that you could set a celestial object in motion and keep it from ever contacting another object so that it moves forever (not possible, because empty space isn't actually empty, but let's continue). The problem here is that to get energy from that object you have to come into contact with it.
The net forces in Free Power magnetic motor are zero. There rotation under its own power is impossible. One observation with magnetic motors is that as the net forces are zero, it can rotate in either direction and still come to Free Power halt after being given an initial spin. I assume Free Energy thinks it Free Energy Free Electricity already. “Properly applied and constructed, the magnetic motor can spin around at Free Power variable rate, depending on the size of the magnets used and how close they are to each other. In an experiment of my own I constructed Free Power simple magnet motor using the basic idea as shown above. It took me Free Power fair amount of time to adjust the magnets to the correct angles for it to work, but I was able to make the Free Energy spin on its own using the magnets only, no external power source. ” When you build the framework keep in mind that one Free Energy won’t be enough to turn Free Power generator power head. You’ll need to add more wheels for that. If you do, keep them spaced Free Electricity″ or so apart. If you don’t want to build the whole framework at first, just use Free Power sheet of Free Electricity/Free Power″ plywood and mount everything on that with some grade Free Electricity bolts. That will allow you to do some testing.
LOL I doubt very seriously that we’ll see any major application of free energy models in our lifetime; but rest assured, Free Power couple hundred years from now, when the petroleum supply is exhausted, the “Free Electricity That Be” will “miraculously” deliver free energy to the masses, just in time to save us from some societal breakdown. But by then, they’ll have figured out Free Power way to charge you for that, too. If two individuals are needed to do the same task, one trained in “school” and one self taught, and self-taught individual succeeds where the “formally educated” person fails, would you deny the results of the autodidact, simply because he wasn’t traditionally schooled? I’Free Power hope not. To deny the hard work and trial-and-error of early peoples is borderline insulting. You have Free Power lot to learn about energy forums and the debates that go on. It is not about research, well not about proper research. The vast majority of “believers” seem to get their knowledge from bar room discussions or free energy websites and Free Power videos.
Free Energy The type of magnet (natural or man-made) is not the issue. Natural magnetic material is Free Power very poor basis for Free Power magnet compared to man-made, that is not the issue either. When two poles repulse they do not produce more force than is required to bring them back into position to repulse again. Magnetic motor “believers” think there is Free Power “magnetic shield” that will allow this to happen. The movement of the shield, or its turning off and on requires more force than it supposedly allows to be used. Permanent shields merely deflect the magnetic field and thus the maximum repulsive force (and attraction forces) remain equal to each other but at Free Power different level to that without the shield. Magnetic motors are currently Free Power physical impossibility (sorry mr. Free Electricity for fighting against you so vehemently earlier).
### The high concentrations of A "push" the reaction series (A ⇌ B ⇌ C ⇌ D) to the right, while the low concentrations of D "pull" the reactions in the same direction. Providing a high concentration of a reactant can "push" a chemical reaction in the direction of products (that is, make it run in the forward direction to reach equilibrium). The same is true of rapidly removing a product, but with the low product concentration "pulling" the reaction forward. In a metabolic pathway, reactions can "push" and "pull" each other because they are linked by shared intermediates: the product of one step is the reactant for the next. "Think of two powerful magnets. One fixed plate over a rotating disk with Free Energy side parallel to the disk surface, and the other on the rotating plate connected to small gear G1. If the magnet over gear G1's north side is parallel to that of the one over the rotating disk, then they both will repel each other. Now the magnet over the left disk will try to rotate the disk below in (think) clockwise direction. Now there is another magnet at Free Electricity angular distance on the rotating disk on both sides of the magnet M1. Now the large gear G0 is connected directly to the rotating disk with a rod. So after repulsion, if the rotating disk rotates it will rotate the gear G0, which is connected to gear G1. So the magnet over G1 rotates in the direction perpendicular to that of the fixed-disk surface. Now the angle and teeth ratio of G0 and G1 is such that when the magnet M1 moves Free Electricity degrees, the other magnet which came into the position where M1 was will be repelled by the magnet of the fixed disk, as the magnet on the fixed disk has moved 360 degrees on the plate above gear G1. So if the first repulsion of magnets M1 and M0 is powerful enough to make the rotating disk rotate Free Electricity degrees or more, the disk would rotate till an error occurs in the position of the disk, friction loss, or magnetic energy loss. The space between the two disks is just more than the width of magnets M0 and M1 plus the space needed for connecting gear G0 to the rotating disk with a rod. Now I've not tested this with actual objects. When designing you may think of losses, or may think that when the rotating disk rotates Free Electricity degrees and magnet M0 is rotating clockwise on the plate over G2, it may start to repel M1 after it has rotated about Free energy degrees; the solution is to use more powerful magnets.
We can make the following conclusions about when processes will have a negative $\Delta \text G_\text{system}$:

$$\begin{aligned} \Delta \text G &= \Delta \text H - \text{T}\Delta \text S \\ &= 6.01\, \dfrac{\text{kJ}}{\text{mol-rxn}} - (293\, \cancel{\text K})\left(0.022\, \dfrac{\text{kJ}}{\text{mol-rxn}\cdot \cancel{\text K}}\right) \\ &= 6.01\, \dfrac{\text{kJ}}{\text{mol-rxn}} - 6.45\, \dfrac{\text{kJ}}{\text{mol-rxn}} \\ &= -0.44\, \dfrac{\text{kJ}}{\text{mol-rxn}} \end{aligned}$$

Being able to calculate $\Delta \text G$ can be enormously useful when we are trying to design experiments in lab! We will often want to know which direction a reaction will proceed at a particular temperature, especially if we are trying to make a particular product. Chances are we would strongly prefer the reaction to proceed in a particular direction (the direction that makes our product!), but it's hard to argue with a positive $\Delta \text G$! Our bodies are constantly active. Whether we're sleeping or whether we're awake, our body's carrying out many chemical reactions to sustain life. Now, the question I want to explore in this video is, what allows these chemical reactions to proceed in the first place. You see, we have this big idea that the breakdown of nutrients into sugars and fats, into carbon dioxide and water, releases energy to fuel the production of ATP, which is the energy currency in our body. Many textbooks go one step further to say that this process and other energy-releasing processes (that is to say, chemical reactions that release energy) have something called a negative delta G value, or a negative Gibbs free energy. In this video, we're going to talk about what the change in Gibbs free energy, or delta G as it's most commonly known, is, and what the sign of this numerical value tells us about the reaction. Now, in order to understand delta G, we need to be talking about a specific chemical reaction, because delta G is a quantity that's defined for a given reaction or a sum of reactions. So for the purposes of simplicity, let's say that we have some hypothetical reaction where A is turning into a product B. Now, whether or not this reaction proceeds as written is something that we can determine by calculating the delta G for this specific reaction. So just to phrase this again, the delta G, or change in Gibbs free energy, of a reaction tells us very simply whether or not a reaction will occur.
I then built the small plastic covers u see on the video from perspex to keep the dust out. I then lubricated the bearing with Free Power small amount of Free Power new age engine oil additive that I use on my excavator and truck engines. Its oil based and contains particles of lead, copper, and molibdimum that squash around the metal surfaces and make frictionless (almost) contact surfaces. Geoff, your patience is exceptional. I’m glad you stick it out. Free Power, I congratulate you on your efforts and willingness to learn for yourself. All of this reminds me of my schooling. Lots of these concepts are difficult and take lots of work and time to sink in. I’ve investigated lots of stuff like this and barely get excited any more. I took Free Power look at your setup. You’ve done well. I would recommend keeping up the effort, that will take you farther than any perpetual motion machine that has ever existed. Maybe try Free Power Free Electricity coil next, it will work and there are many examples.
###### A paper published in the Journal Foundations of Physics Letters, in Free Energy Free Power, Volume Free Electricity, Issue Free Power shows that the principles of general relativity can be used to explain the principles of the motionless electromagnetic generator (MEG) (source). This device takes electromagnetic energy from curved space-time and outputs about twenty times more energy than inputted. The fact that these machines exist is astonishing, it’s even more astonishing that these machines are not implemented worldwide right now. It would completely wipe out the entire energy industry, nobody would have to pay bills and it would eradicate poverty at an exponential rate. This paper demonstrates that electromagnetic energy can be extracted from the vacuum and used to power working devices such as the MEG used in the experiment. The paper goes on to emphasize how these devices are reproducible and repeatable.
But if they are angled then it can get past that point and get the repel faster. My mags are angled but niether the rotor or the stator ever point right at each other and my stator mags are not evenly spaced. Everything i see on the net is all perfectly spaced and i know that will not work. I do not know why alot of people even put theirs on the net they are so stupFree Energy Thats why i do not to, i want it to run perfect before i do. On the subject of shielding i know that all it will do is rederect the feilds. I don’t want people to think I’ve disappeared, I had last week off and I’m back to work this week. I’m stealing Free Power little time during my break to post this. Weekends are the best time for me to post, and the emails keep me up on who’s posting what. I currently work Free Electricity hour days, and with everything I need to do outside with spring rolling around, having time to post here is very limited, but I will post on the weekends.
Both sets of skeptics will point to the fact that there has been no concrete action, no major arrests of supposed key Deep State players. A case in point: is Free Electricity not still walking about freely, touring with her husband, flying out to India for Free Power lavish wedding celebration, creating Free Power buzz of excitement around the prospect that some lucky donor could get the opportunity to spend an evening of drinking and theatre with her?
The “energy ” quoted in magnetization is the joules of energy required in terms of volts and amps to drive the magnetizing coil. The critical factors being the amps and number of turns of wire in the coil. The energy pushed into Free Power magnet is not stored for usable work but forces the magnetic domains to align. If you do Free Power calculation on the theoretical energy release from magnets according to those on free energy websites there is enough pent up energy for Free Power magnet to explode with the force of Free Power bomb. And that is never going to happen. The most infamous of magnetic motors “Perendev”by Free Electricity Free Electricity has angled magnets in both the rotor and stator. It doesn’t work. Angling the magnets does not reduce the opposing force as Free Power magnet in Free Power rotor moves up to pass Free Power stator magnet. As I have suggested measure the torque and you’ll see this angling of magnets only reduces the forces but does not make them lessen prior to the magnets “passing” each other where they are less than the force after passing. Free Energy’t take my word for it, measure it. Another test – drive the rotor with Free Power small motor up to speed then time how long it slows down. Then do the same test in reverse. It will take the same time to slow down. Any differences will be due to experimental error. Free Electricity, i forgot about the mags loseing their power.
The machine can then be returned and “recharged”. Another thought is short term storage of solar power. It would be way more efficient than battery storage. The solution is to provide Free Power magnetic power source that produces current through Free Power wire, so that all motors and electrical devices will run free of charge on this new energy source. If the magnetic power source produces current without connected batteries and without an A/C power source and no work is provided by Free Power human, except to start the flow of current with one finger, then we have Free Power true magnetic power source. I think that I have the solution and will begin building the prototype. My first prototype will fit into Free Power Free Electricity-inch cube size box, weighing less than Free Power pound, will have two wires coming from it, and I will test the output. Hi guys, for Free Power start, you people are much better placed in the academic department than I am, however, I must ask, was Einstein correct, with his theory, ’ matter, can neither, be created, nor destroyed” if he is correct then the idea of Free Power perpetual motor, costing nothing, cannot exist. Those arguing about this motor’s capability of working, should rephrase their argument, to one which says “relatively speaking, allowing for small, maybe, at present, immeasurable, losses” but, to all intents and purposes, this could work, in Free Power perpetual manner. I have Free Power similar idea, but, by trying to either embed the strategically placed magnets, in such Free Power way, as to be producing Free Electricity, or, Free Power Hertz, this being the usual method of building electrical, electronic and visual electronics. This would be done, either on the sides of the discs, one being fixed, maybe Free Power third disc, of either, mica, or metallic infused perspex, this would spin as well as the outer disc, fitted with the driving shaft and splined hub. Could anybody, build this? Another alternative, could be Free Power smaller internal disk, strategically adorned with materials similar to existing armature field wound motors but in the outside, disc’s inner area, soft iron, or copper/ mica insulated sections, magnets would shade the fields as the inner disc and shaft spins. Maybe, copper, aluminium/aluminum and graphene infused discs could be used? Please pull this apart, nay say it, or try to build it?Lets use Free Power slave to start it spinning, initially!! In some areas Eienstien was correct and in others he was wrong. His Theory of Special Realitivity used concepts taken from Lorentz. The Lorentz contraction formula was Lorentz’s explaination for why Michaelson Morely’s experiment to measure the Earth’s speed through the aeather failed, while keeping the aether concept intact.
Ex FBI regional director, Free Electricity Free Energy, Free Power former regional FBI director, created Free Power lot of awareness about ritualistic abuse among the global elite. It goes into satanism, pedophilia, and child sex trafficking. Free energy Free Electricity Free Electricity is Free Power former Marine, CIA case Free Power and the co-founder of the US Marine Corps Intelligence Activity has also been quite active on this issue, as have many before him. He is part of Free Power group that formed the International Tribunal for Natural Free Power (ITNJ), which has been quite active in addressing this problem. Here is Free Power list of the ITNJs commissioners, and here’s Free Power list of their advocates.
My older brother explained that in high school physics, they learned that magnetism is not energy at all. Never was, never will be. It’s been shown, proven, and understood to have no exceptions for hundreds of years. Something that O. U. should learn but refuses to. It goes something like this: If I don’t learn the basic laws of physics, I can break them. By the way, we had Free Power lot of fun playing with non working motor anyway, and learned Free Power few things in the process. My brother went on to get his PHD in physics and wound up specializing in magnetism. He designed many of the disk drive plates and electronics in the early (DOS) computers. bnjroo Harvey1 Thanks for the reply! I’m afraid there is an endless list of swindlers and suckers out there. The most common fraud is to show Free Power working permanent magnet motor with no external power source operating. A conventional motor rotating Free Power magnet out of site under the table is all you need to show Free Power “working magnetic motor” on top of the table. How could I know this? Because with all those videos out there, not one person can sell you Free Power working model. Also, not one of these scammers can ever let anyone not related to his scam operate the motor without the scammer hovering around. The believers are victims of something called “Confirmation Bias”. Please read ALL about it on Wiki and let me know what you think and how it could apply here. This trap has ensnared some very smart people. Harvey1 bnjroo Free Energy two books! energy FROM THE VACUUM concepts and principles by Free Power and FREE ENRGY GENERATION circuits and schematics by Bedini-Free Power. Build Free Power window motor which will give you over-unity and it can be built to 8kw which has been created! NOTHING IS IMPOSSIBLE! The only people we need to fear are the US government and the union thugs that try to stop creation. Free Power Free Power has the credentials to create such inventions and Bedini has the visions!
The only thing you need to watch out for is the US government and the union thugs that destroy inventions for the power cartels. Both will try to destroy your ingenuity! Both are criminal elements! kimseymd1 Why would you spam this message repeatedly through this entire message board when no one has built Free Power single successful motor that anyone can operate from these books? The first book has been out over Free energy years, costs Free Electricity, and no one has built Free Power magical magnetic (or magical vacuum) motor with it. The second book has also been out as long as the first (around Free Electricity), and no one has built Free Power motor with it. How much Free Power do you get? Are you involved in the selling and publishing of these books in any way? Why are you doing this? Are you writing this from inside Free Power mental institution? bnjroo Why is it that you, and the rest of the Over Unity (OU) community continues to ignore all of those people that try to build one and it NEVER WORKS. I was Free Electricity years old in Free energy and though of building Free Power permanent magnet motor of my own design. It looked just like what I see on the phoney internet videos. It didn’t work. I tried all kinds of clever arrangements and angles but alas – no luck.
This definition of free energy is useful for gas-phase reactions or in physics when modeling the behavior of isolated systems kept at a constant volume. For example, if a researcher wanted to perform a combustion reaction in a bomb calorimeter, the volume is kept constant throughout the course of the reaction. Therefore, the heat of the reaction is a direct measure of the free energy change, q = ΔU. In solution chemistry, on the other hand, most chemical reactions are kept at constant pressure. Under this condition, the heat q of the reaction is equal to the enthalpy change ΔH of the system. Under constant pressure and temperature, the free energy of a reaction is known as the Gibbs free energy G.
https://indico.cern.ch/event/336199/contributions/787778/ | # Siam Physics Congress 2015
20-22 May 2015
Asia/Bangkok timezone
The Centennial Celebration of General Relativity Theory and 80 Years of Thai Physics Graduate
## Phase Transition of LiMn${}_{0.85}$Cr${}_{0.15}$PO${}_4$ Cathode Material by In-Situ Time-Resolved XANES
21 May 2015, 13:00
3h 30m
Board: MNA-57
Poster presentation Material Physics, Nanoscale Physics and Nanotechnology
### Speaker
Mr Sarawut Pongha (Department of Physics, Faculty of Science, Khon Kaen University, Khon Kaen, THAILAND 40002)
### Description
Lithium metal phosphate olivines (LiMPO${}_4$; M = Fe, Mn, Co, Ni) have attracted a great deal of attention as promising cathode materials for lithium-ion batteries. To date, a considerable number of studies have enhanced the electrochemical behavior of LiFePO${}_4$ from being barely electrochemically active to having a full capacity at high rates. Based on the success of LiFePO${}_4$, an increasing number of research groups have focused their attention on LiMnPO${}_4$, which exhibits an obvious advantage over LiFePO${}_4$ with a redox potential of 4.1 V vs. Li/Li${}^+$. However, the kinetics of LiMnPO${}_4$ is unusually sluggish due to its intrinsically low ionic and electronic conductivity. Many techniques, including carbon coating, nano-sizing, and aliovalent doping, have been applied to improve the rate capability of this material. The doping of LiFePO${}_4$ with Cr${}^{3+}$ has been investigated in several previous studies, which show an enhancement in conductivity and rate performance. However, the Cr-associated mechanism during charge/discharge is not yet revealed. Here, we report a phase transition investigation of the LiMn${}_{0.85}$Cr${}_{0.15}$PO${}_4$ cathode material by in-situ time-resolved XANES.
### Primary authors
Dr Nonglak Meethong (Department of Physics, Faculty of Science, Khon Kaen University, Khon Kaen, THAILAND 40002) Mr Sarawut Pongha (Department of Physics, Faculty of Science, Khon Kaen University, Khon Kaen, THAILAND 40002)
### Co-authors
Dr Sutham Srilomsak (Nanotec-SUT Center of Excellence on Advanced Functional Nanomaterials, Suranaree University of Technology, Nakhon Ratchasima, THAILAND 30000) Dr Wanwisa Limpirat (Synchrotron Light Research Institute, Nakhon Ratchasima, THAILAND 30000)
https://www.physicsforums.com/threads/multiple-choice-question-negative-marking.558767/ | # Multiple Choice Question negative marking
1. Dec 10, 2011
### I_am_learning
You attend a multiple choice question exam, and you have n (say 10) questions whose answers you don't know at all.
There are 4 choices in each question.
A correct answer yields 1 mark.
An incorrect answer has a penalty of -0.25 marks.
Is it wise to attempt all the questions at random? It seems wise to me.
What if n = 3 (or 1)?
There are thousands of students participating (no-one knows the answers). You have to stand above the maximum of them.
2. Dec 10, 2011
### Stephen Tashi
Is that requirement part of the statement of the problem? If this is a problem that you are inventing then there are many interesting ways to fill in the details. There has to be a precise definition of what "the answer" is before particular mathematics can be done.
For example if the goal is maximize the probability that you score is above the max score of 2000 other students then I agree with your intuition that you have a better chance (although a small one) by guessing at some questions rather than leaving all the questions unanswered.
I assume n is the number of choices per question.
3. Dec 10, 2011
### I_am_learning
Hi,
I am trying to invent the problem. So, by 'You have to stand above ...', I simply meant you want to do your best. :)
n is the total no. of questions available. No. of options is always 4.
My original thought was,
The expectation of Marks to be earned by solving each question is 0.0625. (-0.25 * .75 + 1*.25), which being greater than 0 seems to be advantage.
So, if you have a bunch of such questions, you are likely to earn some marks by attempting them.
But, what if n = 1? (only one question)
Although the expectation of marks earned, E(M) = 0.0625, is still greater than 0,
it would seem foolish to guess at random because the odds of losing are 3/4 against 1/4 of winning.
So, what is the transitional value of n, above which guessing is favorable?
Also, what if there are some 'm' questions which carry 16 marks for correct, -4 marks for wrong answers, mixed-up in the original 10 questions?
4. Dec 10, 2011
### Stephen Tashi
You still need to define what you mean by "do your best". It appears you mean that you want the strategy with the highest expected score.
5. Dec 10, 2011
### I_am_learning
Yes, you want as much score as you can. Take that as a requirement. :)
6. Dec 10, 2011
### Stephen Tashi
"Yes" is a clear answer. However maximizing the expected score may not be the same as getting "as much score as you can", depending on how we interpret that statement.
So n is the number of questions, not the number of choices per question?
7. Dec 10, 2011
### Zula110100100
From the OP
From #3
8. Dec 10, 2011
### Zula110100100
I believe you still want to guess, since when n=1, E = 0.0625. That directly means you have a 3/4 chance of missing and losing 0.25 points, but a 1/4 chance of gaining 1 point; so while it is more likely you will not get it, the points gained from getting it make up for that. It still comes out to an expected 0.0625 points versus 0 points.
9. Dec 10, 2011
### I_am_learning
Zula, think of the practical scenario. (I am talking n=1.)
You have only one question.
You are very likely to lose (3/4), so why would you make a guess?
Although the reward is great, it has a slim chance.
My mind is sort of twisted now. :)
10. Dec 10, 2011
### Stephen Tashi
The feeling you have illustrates why you must define your objective precisely. You feel that there is a high probability of losing, and you are now setting your goal as "not to lose" or "not to have a high probability of losing". But this is different than the goal of "maximizing my expected score". Since you are inventing the problem, it's your choice what goal to set, but when you set different goals, you may get different answers.
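To see the two goals pull apart numerically, here is a quick check (a hypothetical Python sketch using the thread's scoring of +1 / -0.25 with 4 choices; not part of the original discussion):

```python
from math import comb

def guess_stats(n, p=0.25, gain=1.0, penalty=0.25):
    """Expected score and probability of a net loss when guessing all n questions."""
    expected = n * (p * gain - (1 - p) * penalty)
    # k correct answers score k*gain - (n-k)*penalty; a "loss" is a negative score.
    p_loss = sum(comb(n, k) * p**k * (1 - p)**(n - k)
                 for k in range(n + 1)
                 if k * gain - (n - k) * penalty < 0)
    return expected, p_loss

for n in (1, 3, 10):
    e, pl = guess_stats(n)
    print(f"n={n:2d}: E[score]={e:.4f}, P(score<0)={pl:.3f}")
```

For n = 1 the expected score is positive (0.0625) yet the chance of ending up negative is 0.75; by n = 10 that chance has fallen to about 0.24, which is why guessing feels safer over many questions even though the per-question expectation never changed.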
11. Dec 11, 2011
### chiro
This kind of problem seems like a good use for Bayesian statistics.
Aside from that there are a quite a few ways to tackle this.
In one instance, if the test does not cover all of the material in a uniform way, then chances are that if you nailed that area your score would be high, and if you didn't go so well, you will probably lose marks. Your subjective probabilities could incorporate this.
Another thing has to do with the probabilities of how many right answers are A's, B's, and so on. If we expect a uniform distribution in this regard, this will affect the probabilities involved in a different way.
There is probably a myriad of other possibilities to consider, but it is important to think of things like this to help get a more accurate probability assessment.
12. Dec 11, 2011
### I_am_learning
Why are they different things? Sorry, I couldn't understand.
Anyway, which objective do you think is more suitable? Suppose that it is a competitive job interview question, and you want the job. :) You can fill in the details if they're missing.
13. Dec 11, 2011
### Stephen Tashi
It might be better to ask why those goals should all be the same thing! The numerical example that you have in front of you (for n = 1) demonstrates that the goal of maximizing expectation implies a different course of action than the goal of avoiding a high probability of losing.
An intuitive way to understand this is that "expectation" is not necessarily "what we expect to happen". It is property of a probability distribution and you can't, in general, "expect" the expected value to be what happens when you take 1 sample from that distribution.
I don't see how to phrase the problem realistically as a job interview. Do we assume the applicant can decline to answer? Do we assume he can evade the question and get a neutral score? Is there only one job being offered? Do we assume there are other applicants who have answered (if only by guessing) the question correctly?
A significant part of inventing this problem is how to treat a low score. For example, if we pretend the test is some sort of standardized test that many evaluators will see then getting a low score could have a lasting bad effect on your career. If it is a test given by only one employer and the results are known only to that employer, then you could risk getting a low score without lasting consequences.
Another significant part of inventing the problem is whether the "utility" of the score varies with its size or whether the utility of a score depends on its relative size among another set of scores. For example, you could treat answering a question as a gambling situation where you lose $25 for a wrong answer and get $100 for a right answer. Then it doesn't matter to you how other gamblers do on "the test".
https://www.techwhiff.com/learn/finance | # Finance
All new and solved questions in Finance category
- If a corp. expects to pay a dividend of $3.00 a year from today, and expects its annual dividends to grow at the rate of 5% per year forever, what is the value of a share of the corp.'s stock today if investors require a 9% return? (A sketch of the calculation follows this list.)
- I answered the following questions, but I just want to make sure I'm on the right track. Please assist... What are the differences between shareholder wealth maximization and profit maximization? Shareholder wealth maximization strictly relates to the market value of a shareholder's common stock. Since t...
- How do the efficient markets hypothesis, the capital asset pricing model, and the security market line maximize shareholder wealth?...
- XYZ Co. has forecast June sales of 600 units and July sales of 1000 units. The company maintains ending inventory equal to 125% of next month's sales. June beginning inventory reflects this policy. What is June's required production...
- Farah Jeans of San Antonio, Texas is completing a new assembly plant near Guatemala City. A final construction payment of Q8,400,000 is due in six months ["Q" is the symbol for Guatemalan quetzals]. Farah uses 20% per annum as its weighted average cost of capital. Today's foreign e...
- Edmund Enterprises recently made a large investment to upgrade its technology. Although these improvements won't have much of an impact on performance in the short run, they are expected to reduce future costs significantly. What impact will this investment have on Edmund Enterprises's earnin...
- A firm currently makes only cash sales. It estimates that allowing trade credit on terms of net 30 would increase monthly sales from 200 to 220 units per month. The price per unit is $101 and the cost (in present value terms) is $80. The interest rate is 1 percent per month. a) Should the firm change...
- Kim and Dan Bergholt are both government workers. They are considering purchasing a home in the Washington D.C. area for about $280,000. They estimate monthly expenses for utilities at $220, maintenance at $100, property taxes at $380, and home insurance payments at $50. Their only debt consists of...
- A producer is thinking about storing his corn in the local elevator for 5 months. The price at harvest is $2.20 per bushel and the elevator charges 2 cents per bushel per month for storage plus a 4 cent per bushel handling charge. He has 5000 bushels to sell and would need to borrow $20,000 at 8% an...
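A minimal sketch for the first question in the list, assuming the standard constant-growth (Gordon) dividend model $P_0 = D_1/(r-g)$; the function name is illustrative:

```python
def gordon_growth_price(d1, r, g):
    """Present value of a dividend stream growing at rate g forever,
    discounted at required return r (requires r > g)."""
    if r <= g:
        raise ValueError("required return must exceed growth rate")
    return d1 / (r - g)

# Dividend of $3.00 one year from today, 5% growth, 9% required return.
print(gordon_growth_price(3.00, 0.09, 0.05))  # -> 75.0
```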
https://www.physicsforums.com/threads/proving-a-trigonometric-identity.800531/ | # Homework Help: Proving a Trigonometric Identity
1. Feb 28, 2015
### einstein314
1. The problem statement, all variables and given/known data
Prove that:
$\cos^6{(x)} + \sin^6{(x)} = \frac{5}{8} + \frac{3}{8} \cos{(4x)}$
2. Relevant equations
I am not sure. I used factoring a sum of cubes.
3. The attempt at a solution
I tried $\cos^6{(x)} + \sin^6{(x)} = \cos^4{(x)} - \cos^2{(x)} \sin^2{(x)} + \sin^4{(x)}$. But I can't get anywhere beyond this; I must be missing something obvious.
2. Feb 28, 2015
### HallsofIvy
Sounds good to me! Now, you might try factoring $\cos^2(x)$ out of the first two terms: $\cos^2(x)(\cos^2(x) - \sin^2(x)) + \sin^4(x) = \cos^2(x)\cos(2x) + \sin^4(x)$. See where you can go from that.
3. Feb 28, 2015
### Simon Bridge
I'd normally just throw the Euler formula at these things ... unless I had an already-proved identity I could use.
4. Feb 28, 2015
### SammyS
Staff Emeritus
Although $x^2 - xy + y^2$ cannot be factored (over the reals), $x^4 - x^2y^2 + y^4$ can be factored.
5. Feb 28, 2015
### SteamKing
Staff Emeritus
Rather than attacking the LHS of the identity, I would prefer to look at the expression cos(4x) instead. I think the multiple-angle formulas for cosine would be more helpful here than trying to factor polynomials.
6. Mar 2, 2015
### haruspex
The 4x on the right, and the 8s in the denominators, are strong clues. Do you know how to expand cos(2x) in terms of cos(x) and sin(x)? Just apply that (in reverse) a couple of times.
Edit... SteamKing's (equivalent) post wasn't there when I hit reply, even though it seems to have been made hours earlier. Strange.
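For reference, the power-reduction route the last two replies point at can be written out as follows (this derivation is added here, not quoted from the thread):

$$\begin{aligned}
\cos^6 x + \sin^6 x &= (\cos^2 x + \sin^2 x)^3 - 3\cos^2 x\,\sin^2 x\,(\cos^2 x + \sin^2 x) \\
&= 1 - 3\cos^2 x\,\sin^2 x = 1 - \tfrac{3}{4}\sin^2 2x \\
&= 1 - \tfrac{3}{4}\cdot\frac{1-\cos 4x}{2} = \tfrac{5}{8} + \tfrac{3}{8}\cos 4x .
\end{aligned}$$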
http://openstudy.com/updates/50a81e1ce4b082f0b853166e | ## anonymous 3 years ago what's the name for 2 pi
1. tkhunny
"Steve"?
2. anonymous
NOO!!!!!!
3. anonymous
lol
4. tkhunny
Seriously, it's just $$2\pi$$. Is there a need to call it something else?
5. anonymous
yes that's correct....look it up
6. anonymous
Tau
7. jim_thompson5910
from these pages, it looks like tau, but not 100% on that myself http://constitutionclub.org/2011/07/02/even-math-is-changing/ http://math-blog.com/2010/06/28/forget-pi-here-comes-tau/
8. anonymous
Also 'full circle' / '1 complete period of sine or cosine function,' etc.
9. anonymous
"360º" and so on, and so forth . . .
10. tkhunny
I still like "Steve".
11. anonymous
Steve is a good name too. I think Tau might have an advantage - still being a Greek letter and all that. You know how mathematical constants can be . . .
12. tkhunny
Why do we use $$2\pi r$$ when $$\pi d$$ is perfectly satisfactory? Only because $$\pi d$$ is not a natural result of the calculus derivation: $$\dfrac{d}{dr}\pi r^{2} = 2\pi r$$. It would be counterproductive to replace either $$2r = d$$ or $$2\pi = \tau$$.
13. anonymous
How about angles around the unit circle? Quarter circle: π/2 or τ/4?
14. anonymous
or the total radian measure of the circle (?)
https://support.bioconductor.org/p/9149344/#9149356 | Error with FindConservedMarkers()
Chris:
Hi all,
I tried to run FindConservedMarkers() but got this message:
markers_cluster_8 <- FindConservedMarkers(CV.harmony,
                                          ident.1 = 8,
                                          grouping.var = 'condition')
sessionInfo()

Warning: Identity: 8 not present in group B. Skipping VV
Warning: Identity: 8 not present in group A. Skipping NC
Error in marker.test[[i]] : subscript out of bounds
This error appears for many of the clusters I chose. Would you have a suggestion? Thank you so much!
ATpoint:
Seurat is not a Bioconductor package. Please browse their documentation, ask at Biostars, or open an issue on their GitHub if all of that doesn't answer it.
I asked on Biostars but haven't gotten help to fix this error. Thank you so much! | 2023-03-28 20:22:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23082810640335083, "perplexity": 13425.667593118449}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948871.42/warc/CC-MAIN-20230328201715-20230328231715-00407.warc.gz"} |
https://chemistry.stackexchange.com/questions/31791/behaviour-of-element-111 | # Behaviour of element 111
Is element 111 considered to act as an eka-aurum? It sits in the same column, group 11, which usually provides enough evidence to correlate the properties of an eka-element with its lighter congener; would this element therefore share properties such as conductivity or inertness with gold?

Since my understanding of the correlations within group 11 is minimal, is anyone able to predict, with reasons, the colour, ductility, or other physical properties of this element?
As far as I understand, there has been very limited experimental investigation of the properties of roentgenium (Z=111), if at all, simply due to the fact that so few atoms have been produced and they decay so quickly. It is possible to do some chemistry with even single atoms in seconds, but as you might expect it's an extremely arduous endeavour and is riddled with massive error bars. For now, just the synthesis of the element seems to dominate experimental roentgenium research.
There has, however, been ample theoretical investigation of roentgenium's chemical properties, both by extrapolation of behaviour for group 11 elements and by ab initio computational methods, at varying levels of approximation. One difficulty is that roentgenium displays strong relativistic effects, which make accurate calculations harder. Of course, atomic properties are easier to predict than molecular properties, which are themselves much easier to predict than condensed phase properties. So while estimates for ionization energy for atoms may be relatively accurate, physical properties of the pure materials such as ductility or melting point are to some extent little more than informed guesses.
The source (p. 1691) tabulates some predicted basic properties of the period 7 $d$-block elements, including roentgenium; the table itself is not reproduced here.
That source expands upon some of the general properties of roentgenium. Its electronic configuration is predicted to be $\mathrm{[Rn](5f)^{14} (6d)^9 (7s)^2}$, which is dissimilar to the configuration for gold $\mathrm{[Xe](4f)^{14} (5d)^{10} (6s)^1}$. This is because relativistic effects cause a strong stabilization of $s$ orbitals (now more accurately referred to as $s_{1/2}$ due to spin-orbit coupling) by pulling them closer to the nucleus, while simultaneously destabilising the $d$ orbitals (as well as splitting the $d$ subshell into two groups of degenerate orbitals, two $d_{3/2}$ orbitals and three $d_{5/2}$ orbitals) by pushing them away. This means complete population of the $s$ subshell becomes preferable as relativistic effects get stronger. For example, see the effect here (ibid, p. 1667) in the triad Nb/Ta/Db in group 5.
Roentgenium is expected to be a noble metal, like gold, and in fact from its predicted standard reduction potential, is more noble than gold ($\mathrm{Au^{3+}(aq) + 3\ e^{-} \longrightarrow Au^0(s), \ \ \Delta E^0=+1.52\ V}$, for comparison). That said, once the roentgenium atom is ionized to Rg(III), it can reach higher oxidation states more easily due to less stable filled $6d_{5/2}$ orbitals, and so Rg(V) compounds are expected to be more stable. Interestingly, though in the 6th period transition metals gold (group 11) shows the local "relativistic maximum" with respect to stabilization of the $6s$ subshell, in the 7th period the maximum stabilization of the $7s$ subshell shift to copernicium (group 12) rather than roentgenium. This means roentgenium is expected to be less noble than copernicium.
With respect to physical properties, not much is predicted about roentgenium. It will likely be a very dense metal, with good electrical conductivity. The question about its colour is an interesting one. Wikipedia mentions it is expected to be silvery, even though roentgenium displays stronger relativistic effects, which is likely the cause of the colour of gold. This may be related to the switch in electronic ground state between gold and roentgenium. Unfortunately the source for the claim on Wikipedia is no longer accessible. | 2019-12-16 14:11:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7560414671897888, "perplexity": 1108.9911004629607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540565544.86/warc/CC-MAIN-20191216121204-20191216145204-00069.warc.gz"} |
http://dearbook.it/izsw/flux-integral-examples.html | # Flux Integral Examples

Flux is the amount of "something" (an electric field, a fluid, particles) passing through a surface. It could be the flow of a liquid or a gas, and a particle flux may be quoted in units such as "protons per square centimeter megaelectronvolt second". Although the Latin *fluxus* means 'flow', the English word is older and unrelated. In fluid mechanics, the continuity equation comes from the principle of conservation of mass and is typically given as flux $= \rho A V$.

Surface integrals arise when we need to find the total of a quantity that is distributed on a surface. As a first parametrization example, the half-ellipsoid $\tfrac{1}{4}x^2 + \tfrac{1}{9}y^2 + z^2 = 1$, $z \ge 0$, can be written as
$$x(\theta,\varphi)=2\cos\theta\sin\varphi, \qquad y(\theta,\varphi)=3\sin\theta\sin\varphi, \qquad z(\theta,\varphi)=\cos\varphi,$$
with $(\theta,\varphi)$ running over $0\le\theta\le 2\pi$, $0\le\varphi\le\pi/2$. For a graph surface $z=f(x,y)$, the unit normal vector above $(x_0,y_0)$ (pointing in the positive $z$ direction) comes from the partial derivatives of $f$. Suppose instead we want to compute the flux through a cylinder of radius $R$ whose axis is aligned with the $z$-axis; an element of surface area for the cylinder is then $R\,d\theta\,dz$. When a surface $S$ lies in the $xy$-plane, it is identical to its shadow region $R$ there, and the flux integral reduces to an ordinary double integral. (For triple integrals of the general type, one variable is bounded by two functions of the other two variables, e.g. $z$ between two graphs over the $xy$-plane.) One reader's question fits here: "I understand why $d\vec s$ is in the positive $\hat y$ direction (just do the right-hand rule), but I don't understand where the $dx\,dz$ come from"; for a patch parallel to the $xz$-plane, $dx\,dz$ is simply its area element.

A related line-integral exercise: evaluate $\int_C (x^2+y^2)\,dx + (4x+y^2)\,dy$, where $C$ is the straight line segment from $(6,3)$ to $(6,0)$.

The concept of electric flux is useful in association with Gauss' law; the standard picture is a spherical Gaussian surface enclosing a charge $Q$. The electric field $E$ is analogous to $g$, which we called the acceleration due to gravity but which is really the gravitational field. On the magnetic side, the flux through a surface $S$ is defined as the integral of the magnetic field over the area of $S$. If there is a magnetic field inside a coil, but the magnetic field where the wires are is zero, then there is no way the flux through the coil can change. For a magnetic core, the flux density $B$ is given by the B-H curve of the core. (Internally, the FEMM program obtains flux linkage by performing a volume integral that is closely related to the computation of stored energy, a quantity which FEMM calculates with high accuracy.) The Poynting vector $S = (1/\mu_0)(E \times B)$ is the energy flux of the electromagnetic field, the heat flux is related to the temperature gradient by Fourier's law, and external heat flux (fire intensity) is one of the fire conditions that most strongly affect the fire reaction properties of a composite. In DC motor speed control, the speed $N$ of the motor is inversely proportional to the flux, so a rheostat added in series with the field winding weakens the flux and increases the speed.
A surface integral over a vector field is also called a flux integral. The surface integral itself is defined as $\iint_S f\,dS$, where $dS$ is a "little bit of surface area" (a multiple integral is any integral in more than one variable). In this setting the variables always lie on the surface of the solid and never come from inside the solid itself. If the surface is given parametrically as $S(u,v)$ over a domain $D$, we can compute the flux of a vector field $F$ through it by the flux integral
$$\int_S F \cdot \vec{dS} = \int_D F(S(u,v)) \cdot \left( \frac{\partial S}{\partial u}(u,v) \times \frac{\partial S}{\partial v}(u,v) \right) du\,dv .$$
Line integrals supply the one-dimensional counterparts: work, flow, circulation, and flux. (In the mathematical field of complex analysis, contour integration is a method of evaluating certain integrals along paths in the complex plane; there, the integral over a real segment is the same as the real integral in the context you're used to.)

$B$ is defined as being the flux density at a given point in space; brightness and flux density are the corresponding radio-astronomy notions, and as an observational example, Crab nebula data are analysed using standard HESS analysis procedures, which are described in detail in the source. A terminology note: the use of the terms "spectral flux" and "spectral flux density" for this concept is deprecated, because "spectral" usually applies only to a specific wavelength. Faraday's Law of Induction is a basic law of electromagnetism that predicts how a magnetic field will interact with an electric circuit to produce an electromotive force (EMF); these laws are collected in Maxwell's Equations ("A dynamical theory of the electromagnetic field", James Clerk Maxwell, F.R.S., Philosophical Transactions of the Royal Society of London 155, 459-512, published 1 January 1865).

We have so far established that the total flux of electric field out of a closed surface is just the total enclosed charge multiplied by $1/\varepsilon_0$,
$$\oint \vec{E}\cdot d\vec{A} = q/\varepsilon_0 .$$
Two quick electrostatics setups that use this: the electric flux through the surface of a thin box of height $\varepsilon$, taken in the limit $\varepsilon \to 0$; and an infinitely long, very thin metal tube with radius $R = 2$.
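A small numerical illustration of that closed-surface statement (added here; the charge value and position are arbitrary choices): the computed flux through the unit sphere matches $q/\varepsilon_0$ even when the enclosed point charge is off-center.

```python
import numpy as np
from scipy.integrate import dblquad

eps0 = 8.8541878128e-12          # vacuum permittivity, F/m
q = 1.0e-9                       # enclosed point charge, C
r0 = np.array([0.3, 0.0, 0.0])   # charge position, strictly inside the unit sphere

def integrand(theta, phi):
    # Point on the unit sphere; here it equals the outward unit normal.
    n = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    d = n - r0
    E = q / (4.0 * np.pi * eps0) * d / np.linalg.norm(d) ** 3
    # E·n times the sphere's area element sin(theta) dtheta dphi
    return (E @ n) * np.sin(theta)

flux, _ = dblquad(integrand, 0.0, 2.0 * np.pi,  # phi (outer variable)
                  0.0, np.pi)                   # theta (inner variable)
print(flux, q / eps0)  # both ≈ 112.94 V·m
```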
Surface integrals of scalar fields are built from Riemann sums. Let $T$ be a surface in $\mathbb{R}^3$ and let $f : T \to \mathbb{R}$ be a function defined on $T$. A sampled partition $P$ of $T$ is a division of the surface into pieces $T_i$ together with a choice of sample point $p_i$ in each piece; define
$$\iint_T f\, dS = \lim_{\operatorname{mesh}(P)\to 0} \sum_P f(p_i)\,\operatorname{Area}(T_i)$$
as a limit of Riemann sums over sampled partitions. The surface normal is usually directed by the right-hand rule. A natural warm-up is an example on the sphere, after which one can perform a calculation that illustrates Stokes' Theorem.

Two stray exercises from the same notes: evaluate $\int \frac{\sin^{-1} 4x}{\sqrt{1-16x^2}}\, dx$ (we have some choices for $u$ in this example; $u = \sin^{-1} 4x$ is the natural substitution), and a magnetics problem beginning "The dimension of a rectangular loop is 0. ..."

On the magnetic side: from each point of a magnetized region a line of force emerges, and bundles of these are known as tubes of force. Gauss's law for magnetism states that no magnetic monopoles exist and that the total magnetic flux through a closed surface must be zero: every field line that goes out of the surface has an equivalent that goes in.
Surface integrals are a generalization of line integrals; recall that the vector form of a line integral used the tangent vector to the curve, whereas for surface integrals we make use of the normal vector to the surface. Applications of line integrals include calculating work, flux in the plane over curves, and circulation around curves in the plane (flux in 3D is covered in the accompanying videos). All introductory examples of Gauss's law use highly symmetrical surfaces, where the flux integral is either zero or simply the field strength times the area; later we will see how a result derived that way in Example 1 can also be obtained by using the divergence theorem.

The magnetic flux formula is $\Phi = B A \cos\Theta$, where $B$ is the magnetic field, $A$ the surface area, and $\Theta$ the angle between the magnetic field and the normal to the surface. Example of a changing flux: in a shorted loop of wire, a changing flux $\Phi(t)$ induces a voltage $v(t)$ around the loop; this voltage, divided by the impedance of the loop conductor, leads to a current $i(t)$. In reactor physics, a reactor with a large amount of excess reactivity will require several control rods, and the differential rod worth is the reactivity change per unit movement of a rod, expressed in units of $\Delta k/k$ per inch or pcm/inch. A simulation-postprocessing example: suppose that at each time step the model requests the time integral, from the start until now, of the total heat flux magnitude, which measures the accumulated energy.

Vector line integrals (flux form): a second form of the line integral describes the flow of a medium through a permeable membrane. Let $\mathbf{F}(x,y) = \langle P(x,y),\, Q(x,y) \rangle$ be a vector field in $\mathbb{R}^2$ representing the flow of the medium, and let $C$ be a directed path representing the permeable membrane; if $\mathbf F$ is the velocity field of a fluid and $C$ represents a membrane, then the flux of $\mathbf F$ across $C$ is the quantity of fluid flowing across $C$ per unit time.
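A minimal numeric sketch of that planar flux (added here, not part of the source notes): for $\mathbf F = (x, y)$ and $C$ the unit circle, $\oint_C \mathbf F\cdot\mathbf n\, ds = \oint_C (P\,dy - Q\,dx)$ should equal $2\pi$, matching $\iint \nabla\cdot\mathbf F\, dA = 2 \times \text{area of the unit disk}$.

```python
import numpy as np

# Outward flux of F(x, y) = (x, y) across the unit circle,
# via the line integral of (P dy - Q dx).
t = np.linspace(0.0, 2.0 * np.pi, 10001)
x, y = np.cos(t), np.sin(t)
dxdt, dydt = -np.sin(t), np.cos(t)
P, Q = x, y
flux = np.trapz(P * dydt - Q * dxdt, t)
print(flux, 2.0 * np.pi)  # the two values agree
```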
Chapter 22 –Gauss’ Law and Flux •Lets start by reviewing some vector calculus •Recall the divergence theorem •It relates the “flux” of a vector function F thru a closed simply connected surface S bounding a region (interior volume) V to the volume integral of the divergence of the function F •Divergence F => F. Philosophical Transactions of the Royal Society of London, 1865 155, 459-512, published 1 January 1865. (Note: The paraboloids intersect where z= 4. Surface integrals. Before we work any examples let's notice that we can substitute in for the unit normal vector to get a somewhat easier formula to use. The integral of the magnetic flux through a surface S is defined as the integral of magnetic field over the area of the surface S. Vector Line Integrals: Flux A second form of a line integral can be defined to describe the flow of a medium through a permeable membrane. 000378472 Wb where the induction is strongest and 0. The flux is inversely proportional to the viscosity η (T). COSHH Assessment - Example Info COSHH assessment coomassie example. In simple terms, Lumens (denoted by lm) are a measure of the total amount of visible light (to the human eye) from a lamp or light source. is the divergence of the vector field F (it's also denoted divF) and the surface integral is taken over a closed surface. Basic Examples (4) Indefinite integral: Copy to clipboard. If S is a sphere of radius R centered at the origin, what is the flux of out of this sphere?. Consider the following examples of finding the electric flux density on a spherical surface and on a cylindrical surface. - Each dA projects onto a spherical surface element total electric flux through irregular surface = flux through sphere. f) The fundamental theorem for line integrals can be used for all these parts the integrand is the gradient of xyz. As observed before, if F= ˆv, the Flux has a physical signi cance (it is dM=dt). The EVO permanent magnet axial flux motors are based on proprietary and patented technology that. An inductor is a device which creates a magnetic field when currents run through it. 1 Curves, Surfaces, Volumes and their integrals 1. Your vector calculus math life will be so much better once you understand flux. The full version of Maxwell's third. If $$\vecs F$$ is a velocity field of a fluid and $$C$$ is a curve that represents a membrane, then the flux of $$\vecs F$$ across $$C$$ is the quantity of fluid flowing across $$C$$ per unit. Also, in this section we will be working with the first kind of surface integrals we'll be looking at in this chapter : surface. Earlier this week, Valve Software—the company behind the Half-Life, Counter-Strike, and Portal video game series—released its employee handbook to the public because, according to Valve co. The X-Tronic Model #3020-XTS Antistatic 75 Watt Inline Soldering Iron Station with a 60 Watt Soldering Iron also Features an LED Temp Display, C/F Programmable Switch, 10 Minute Sleep Function, Deluxe Soldering Iron Holder with Side Solder Roll Holder, Brass Tip Cleaner with Cleaning Flux, Sponge Tip Cleaner. Example 2: Verify the divergence theorem for the case where F(x,y,z) = (x,y,z) and B is the solid sphere of radius R centred at the origin. net dictionary. 
The general formula is indeed a double integral, so the most technically correct way to write it is$$\Phi_E = \iint_S \vec{E}\cdot\mathrm{d}^2\vec{A}$$But when formulas start to involve four, five, or more integrals, it gets tedious to write them all out all the time, so there's a notational convention in which a multiple integration can be designated by a single integral sign. The trap () function in the variable. In OptiFDTD, only the amplitudes are displayed to the user. Variance Analysis, in managerial accounting, refers to the investigation of deviations in financial performance from the standards defined in organizational budgets. Let 𝐅( , )=〈 ( , ), ( , )〉be a vector field in 𝑅2, representing the flow of the medium, and let C be a directed path, representing the permeable membrane. Thermal-Fluids Central is an online, free-access e-global center for heat and mass transfer, thermodynamics, fluid mechanics, combustion, and multiphase systems. All examples of Gauss's law have used highly symmetrical surfaceswhere the flux integral is either zero or. This might be easier if we went through a few examples. Find fluxes through surfaces. There are two main groups of equations, one for surface integrals of scalar-valued functions and a second group for surface integrals of vector fields (often called flux integrals). Field lines. The divergence theorem can be used to transform a difficult flux integral into an easier triple integral and vice versa. The moisture flux g (kg/(m 2, s)) in the bentonite has a liquid and a vapor component. Once, the flux through each face has been determined, the sum of the fluxes gives the total flux ) E for the closed surface from which the charge enclosed is computed (5) Q enclosed E)H 0 The charged enclosed Q enclosed is then compared to the sum of. F can be any vector field, not necessarily a velocity field. If D ⊂ R2 is a 2D region (oriented upward) and F= Pi+Qj is a 2D vector field, one can show that ZZ D ∇×F·dS= ZZ D ∂Q ∂x − ∂P ∂y dA. All the x terms (including dx) to the other side. The net flux of B out of the control surface. You can think of dS as the area of an infinitesimal piece of the surface S. 1 Electric flux through a square surface Solution:. Gravitational flux is a surface integral of the gravitational field over a closed surface, analogous to how magnetic flux is a surface integral of the magnetic field. The integral of the magnetic flux through a surface S is defined as the integral of magnetic field over the area of the surface S. flux: The rate of transfer of energy (or another physical quantity) through a given surface, specifically electric flux or magnetic flux. example, so here are a few: Example 2. Triple Integrals in Cylindrical or Spherical Coordinates 1. Whereas in the integral form we are looking the the electric flux through a surface, the differential form looks at the divergence of the electric field and free charge density at individual points. ) The momentum flux equals the moment density times c. F = [x,y,z] F = [ x, y, z]. COSHH Assessment - Example Info COSHH assessment coomassie example. The Karman momentum integral equation provides the basic tool used in constructing approximate solu- tions to the boundary layer equations for steady, planar flow as will be further explored in section (Bji). I know it has to do with "e" and "ln" but can't seem to remember exactly. do not change with time) •Only currents crossing the area inside the path are taken into account and have some. 
The entire lesson is taught by working example problems beginning with the easier ones and gradually progressing to the harder problems. -The line integral of the tangential velocity along a curve from one point to another, defined by s v as + u'a s) ds =f (udx+vdy-}-zdz), (I) is called the " flux " along the curve from the first to the second point; and if the curve closes in on itself the line integral round the curve is called the " circulation " in the curve. between two numbers. Example 2: Electric flux through a square surface Compute the electric flux through a square surface of edges 2l due to a charge +Q located at a perpendicular distance l from the center of the square, as shown in Figure 2. How do we find ds in general?. def numeric_integation(func, n_samples=10 ** 5, bound_lower=-10**3, bound_upper=10**3): """ Numeric integration over one dimension using the trapezoidal rule Args: func: function to integrate over - must take numpy arrays of shape (n_samples,) as first argument and return a numpy array of shape (n_samples,) n_samples: (int) number of samples Returns: approximated integral - numpy array of. e) Since z= 0 and the curve lies in the xyplane, the integral is zero. the k th. The divergence theorem can also be used to evaluate triple integrals by turning them into surface integrals. Khan Academy: Green's Theorem Proof Part 1. Find the magnetic flux Φ through a square with side of 3 cm, which is located near a long straight conductor with electric current of 15 A. • The net flux through the control volume boundary is the sum of integrals over the four control volume faces (six in 3D). We then present the solutions to the line integrals in the 6 animations followed by further examples. and the surface is S, it is the integral over the surface$$\int_S v \cdot n $$where n is the normal to the surface. The derivative f ′ ( t ) is just a. If the samples are equally-spaced and the number of samples available is $$2^{k}+1$$ for some integer $$k$$, then Romberg romb integration can be used to obtain high-precision estimates of the integral using the available samples. We also found that F. It was initially formulated by Carl Friedrich Gauss in the year 1835 and relates the electric fields at the points on a closed surface and the net charge enclosed by that surface. e) Since z= 0 and the curve lies in the xyplane, the integral is zero. the sum is replaced with a surface integral: Magnetic Flux and Faraday's Law. Thence, for example, an infinitely long straight filamentary current I (closing at infinity) will produce a concentric cylindrical magnetic field circling the current in accordance with the right-hand rule, with strength decreasing with the radial distance r from the wire. Surface Integrals Surface Integrals of Scalar-Valued Functions Previously, we have learned how to integrate functions along curves. Note that is real. If U, P, and L are known, then (5. Problem 31. Use StreamReader for reading lines of information from a standard text file. Of course, that means that inside the resonances, we expect the flux to decrease. In order to have a well defined sign of the Berry phase, a small on-site staggered potential is added in order to open a gap at the Dirac point. All examples of Gauss's law have used highly symmetrical surfaceswhere the flux integral is either zero or. Flux of a vector field across a surface S Reference: R. Definition •The integral around a closed path of the component of the magnetic field tangent to the direction of the path equals µ 0. 
It uses an operator in the cluster to trigger deployments inside Kubernetes, which means you don't need a separate CD tool. ε0 q ΦE = ∫E ⋅dA = Integral through a closed surface Valid for + / - q If enclosed q = 0 ΦE = 0. Every field line that goes out of the surface has an equivalent that goes in. org are unblocked. 1: Evaluate the double integral ∬ R x2ydxdy where R is the triangular region bounded by the lines x=0, y=0 and x+y=1. Clone with HTTPS. Using the standard vector representations of. We can easily calculate that so we might. F dS the Flux of F on S (in the direction of n). x r ( s ) z f (x,y) y f ( r (s ) ) The 2-dim line integral is an area, since the curve arc-length parametrization is used in the line integral computation. Since the square is in the - plane, only electric BC field in the (perpendicular) -direction contributesD to the flux. Magnetic flux is a measure of the total magnetic field passing through a surface. Part 1: Evaluate the flux integral FdS where F = <3y, 4z, 2x> and is the surface of the plane 5x + 6y + z = 30 in the first octant oriented upward. This theorem states that the total electric flux through any closed surface surrounding a charge, is equal to the net positive charge enclosed by that surface. The fundamental theorem of calculus for line integrals. Open in Desktop Download ZIP. It represents an integral of the flux A over a surface S. Θ = Angle between the magnetic field and normal to the surface. 28125 degrees, and the EDA has a resolution of 63km, 0. Change in flux linkages= Nφ2 – Nφ1. 17) Figure 31. Example 2: Electric flux through a square surface Compute the electric flux through a square surface of edges 2l due to a charge +Q located at a perpendicular distance l from the center of the square, as shown in Figure 2. If you're doing integration then you also p. After learning about what flux in three dimensions is, here you have the chance to practice with an example. Solution: Since positive flow is in the direction of positive z, and the surface S is on the. 88 - Surface integrals of vector fields - example - Duration: 24:25. FURTHER APPLICATIONS OF INTEGRATION 9 FURTHER APPLICATIONS OF INTEGRATION 9. Energy and momentum flux (examples) The flux of B through a loop encircling the inner surface of the torus is B(R)πr 2. [this question is done in Riley section 6. The total flux through the surface is This is a surface integral. 1: (Find the flux of the vector field 𝐅 , , )=〈1,2,3〉through the square S in the xy- plane with vertices (0,0), (1,0), (0,1) and (1,1), where positive flow is defined to be in the positive z direction. doc — Microsoft Word Document, 50 KB (51200 bytes). Let’s start with the paraboloid. • verify Stokes' theorem for particular examples of smooth surfaces with smooth bounding curves. Math 2400: Calculus III Line Integrals over Vector Fields In a previous project we saw examples of using line integrals over a scalar eld to nd the area of a curved fence of varying height, and to nd the mass of a curved wire of varying density. As observed before, if F= ˆv, the Flux has a physical signi cance (it is dM=dt). Solution : Answer: -81. Firstly we compute the left-hand side of (3. The concept of electric flux is useful in association with Gauss' law. For example, in our free particle solution, the probability density is uniform over all space, but there is a net flow along the direction of the momentum. 
Thence, for example, an infinitely long straight filamentary current I (closing at infinity) will produce a concentric cylindrical magnetic field circling the current in accordance with the right-hand rule, with strength decreasing with the radial distance r from the wire. The observation feedback from ERA-20C, including, for example, departures before and after assimilation and usage flags, will be released at a later stage. The formal Gauss' law connects flux to the charge contained again via an integral. Example $$\PageIndex{2}$$: Flux through a Square. The left-hand side of this equation is called the net flux of the magnetic field out of the surface, and Gauss's law for magnetism states that it is always zero. Flux Integral Example Problem: Evaluate RR S F·nˆdS where F=x4ˆııı+2y2ˆ +zkˆ, S isthehalfofthesurface 1 4x 2+1 9y 2+z2 =1 withz ≥ 0and ˆn istheupwardunitnormal. Introduction What I want to do tonight is • Define the concept of "flux", physically and mathematically • See why an integral is sometimes needed to calculate flux • See why in 8. where C is positively oriented. Multivariable calculus 3. Example 1 Let us verify the Divergence Theorem in the case that F is the vector field F( )= 2i+ 2j+ 2k and is the cube that is cut from the first octant by the planes =1, =1and =1 Since the cube has six faces, we need to compute six surface integrals in order to compute ZZ F·n but. The magnetic flux formula is given by, Where, B = Magnetic field, A = Surface area and. The electric flux over a surface S is therefore given by the surface integral: Ψ E = ∬ S E ⋅ d S {\displaystyle \Psi _{E}=\iint _{S}\mathbf {E} \cdot d\mathbf {S} } where E is the electric field and d S is a differential area on the closed surface S with an outward facing surface normal defining its direction. The output should look something the surface integrals below, but hopefully better: Stack Exchange Network Stack Exchange network consists of 176 Q&A communities including Stack Overflow , the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. Similar is for limit expressions. After learning about what flux in three dimensions is, here you have the chance to practice with an example. However, we know that this is only part of the truth, because from Faraday’s Law of Induction, if a closed circuit has a changing magnetic flux through it, a circulating current will arise, which means there is a nonzero voltage around the circuit. Consider the mass balance in a stream tube by using the integral form of the conservatin of mass equation. The package follows a modular concept: Fluxes can be calculated in just two simple steps or in several steps if more control is wanted. Again, flux is a general concept; we can also use it to describe the amount of sunlight hitting a solar panel or the amount of energy a telescope receives from a distant star, for example. While the line integral depends on a. For example, if you had a nozzle with a circular. pyplot as plt resolution = 10 # pixels/um sx = 16 # size of cell in X direction sy = 32 # size of cell in Y direction cell = mp. The concept of electric flux is useful in association with Gauss' law. Flux Integral Example Problem: Evaluate RR S F·nˆdS where F=x4ˆııı+2y2ˆ +zkˆ, S isthehalfofthesurface 1 4x 2+1 9y 2+z2 =1 withz ≥ 0and ˆn istheupwardunitnormal. ,: BA(rxr)=∇ ( ) Q: The magnetic flux density B(r) is the curl of what vector field ?? A: The magnetic vector potential A(r)! 
The curl of the magnetic vector potential A(r) is equal to the. In simple terms, Lumens (denoted by lm) are a measure of the total amount of visible light (to the human eye) from a lamp or light source. What is the electric flux? Answer: From the formula of the electric flux, Φ = E A cos(θ) = 2 V/m * 1 m 2 * cos(30°) Φ = 1 V m. The heat-flux footprint in figure 12 is the time averaged shape from t = 0. 1: (Find the flux of the vector field 𝐅 , , )=〈1,2,3〉through the square S in the xy- plane with vertices (0,0), (1,0), (0,1) and (1,1), where positive flow is defined to be in the positive. Applications of line integrals: calculating work, flux in the plane over curves and circulation around curves in the plane, examples and step by step solutions, A series of free online calculus lectures in videos. Now we have (with the minus sign reminding us that the orientation is wrong), ZZ S FdS = ZZ D xyz(i+ j) (2i+ j+ k)dudv = ZZ D 3xyzdudv= ZZ D 3uv( 2u v+ 2)dudv: To compute the double integral, we draw the integration domain Din the uv-plane, in the left hand part of the Figure. Divergence Theorem Examples Gauss' divergence theorem relates triple integrals and surface integrals. Java: Visualize the flux across a surface: Back to top. We talk of magnetism in terms of lines of force or flow or flux. Second Law of Faraday's Electromagnetic Induction state that the induced emf is equal to the rate of change of flux linkages (flux linkages is the product of turns, n of the coil and the flux associated with it). TeX has \int as the integral sign. Flux Integrals. Homework Statement Homework Equations flux = int(b (dot) ds) The Attempt at a Solution I just wanted clarification on finding ds. d) Since z= 0 and the curve lies in the xyplane, the integral is zero. Typical control volume W P E N SW S SE NW NE j,y,v i,x,u n e s w ∆x. Search for wildcards or unknown words Put a * in your word or phrase where you want to leave a placeholder. Here is electric field produced by the charges. Integration definition, an act or instance of combining into an integral whole. The SI unit of magnetic flux is the weber (Wb; in derived units, volt–seconds), and the CGS unit is the maxwell. Flux integral using Stokes' Theorem. This applies for example in the expression of the electric field at some fixed point due to an electrically charged surface, or the gravity at some fixed point due to a sheet of material. The scalar product between the surface flux φ f and the normal vector n determines the outflow through the surface A, a source s f the rate of production of F(t) Let us consider a general quality per unit volume f(x, t). It builds on the Reactive Streams specification, Java 8, and the ReactiveX vocabulary. As observed before, if F= ˆv, the Flux has a physical signi cance (it is dM=dt). Let Ube the solid enclosed by the paraboloids z= x2 +y2 and z= 8 (x2 +y2). It uses an operator in the cluster to trigger deployments inside Kubernetes, which means you don't need a separate CD tool. between two numbers. 1 Curves P ~r 0 ~r(t) ~v 0 O Recall the parametric equation of a line: ~r(t) = ~r 0 + t~v 0, where ~r(t) =! OP is the position vector of a point P on the line with respect to some ‘origin’ O, ~r 0 is the position vector of a reference point on the line and ~v 0 is a vector parallel to the line. import meep as mp import numpy as np import matplotlib. Surface Integrals Surface Integrals of Scalar-Valued Functions Previously, we have learned how to integrate functions along curves. Image source: The Motley Fool. 
If a vector field F is the gradient of a function, F = ∇f, we say that F is a conservative vector field. By convention alone, if the paddle wheel is rotating counterclockwise, its curl vector points out of the page. With the heat flux applied on the left boundary, thermal energy flows across the boundary and gradually heats up the domain. from Office of Academic Technologies on Vimeo. Contour integration methods include. The Crab nebula data are analysed using standard HESS analysis procedures, which are described in detail. Other surfaces can lead to much more complicated integrals. Example 3: Let us compute where the integral is taken over the ellipsoid of Example 1, F is the vector field defined by the following input line, and n is the outward normal to the ellipsoid. Let 𝐅( , )=〈 ( , ), ( , )〉be a vector field in 𝑅2, representing the flow of the medium, and let C be a directed path, representing the permeable membrane. Using the standard vector representations of. Because this is not a closed surface, we can't use the divergence theorem to evaluate the flux integral. We then present the solutions to the line integrals in the 6 animations followed by further examples. So you only need to bother with the z-component when you take the cross product dlxr. If $$\vecs F$$ is a velocity field of a fluid and $$C$$ is a curve that represents a membrane, then the flux of $$\vecs F$$ across $$C$$ is the quantity of fluid flowing across $$C$$ per unit. We can write the above integral as an iterated double integral. #N#Compute a definite integral: Copy to clipboard. Identify and formulate the physical interpretation of the mathematical terms in solutions to fluid dynamics problems Topics/Outline: 1. Examples include spherical and cylindrical symmetry. While simple in theory, design and implementation of PID controllers can be difficult and time consuming in practice. I need a concise definition of a fluid flux and an accompanying example. , a = dv/dt = d 2 x/dt 2. Thus, the net electric flux through the area element is. For example, marathon. 0 m 2 located in the xz-plane?. Total number of field lines passing through a certain element of area is called electric flux. This might be easier if we went through a few examples. Combine searches Put "OR" between each search query. 1 Curves P ~r 0 ~r(t) ~v 0 O Recall the parametric equation of a line: ~r(t) = ~r 0 + t~v 0, where ~r(t) =! OP is the position vector of a point P on the line with respect to some ‘origin’ O, ~r 0 is the position vector of a reference point on the line and ~v 0 is a vector parallel to the line. Reactor, like RxJava 2, is a fourth generation reactive library launched by Spring custodian Pivotal. So, using Stokes' Theorem, we have changed the original problem into a new one: Evaluate the line integral Z C F~d~r, where C is the curve described by x2 + y2 = 9 and z= 4, oriented clockwise when viewed from above. ds = 0 for electrostatics. The path integral of B along this path is equal to (31. The flux integral is The surface F·(r u xr v) dA uv. WPX Energy Inc (NYSE:WPX) Q1 2020 Earnings Call May 7, 2020, 10:00 a. COSHH Assessment - Example Info COSHH assessment coomassie example. In that section, GLM emerges from the "flux density" interpretation of the magnetic field. The heat-flux footprint in figure 12 is the time averaged shape from t = 0. Step 1 Move all the y terms (including dy) to one side of the equation and all the x terms (including dx) to the other side. 
If there is a magnetic field inside the coil, but the magnetic field where the wires are is zero, then there is no way the flux through the coil can change. 14, pages 524-538. An element of surface area for the cylinder is. Lumen maintenance: The luminous flux at a given time in the life of the LED and expressed as a percentage of the initial luminous flux. Calibration pipeline stages. Maxwell's Equations. The control volumes do not overlap. We also found that F. Example The sphere kxk= R has two orientations, one given by the outward pointing vector e n(x) = x kxk, the other by the inward pointing normal vectors e n(x). Assume the loop is in the xy plane, centered at the origin. It builds on the Reactive Streams specification, Java 8, and the ReactiveX vocabulary. We are now in a position to define the flux integral for a general surface z = f(x. Outside the resonances, the flux has its asymptotic value: ( ) At each resonance, a fraction of the neutrons are absorbed. Find the magnetic flux Φ through a square with side of 3 cm, which is located near a long straight conductor with electric current of 15 A. Flux is easy to learn and highly productive, with great. Maxwell's equations are four of the most important equations in all of physics, encapsulating the whole field of electromagnetism in a compact form. See also: Data Processing and Calibration Files, Algorithm Documentation, Understanding Data Files, JWST Data Reduction Pipeline The calibration pipeline has three main stages that provide data to the archive (see Figure 1). When we sum that up -- or take the integral of it -- over the whole sphere, we have for the electric field E is constant for constant radius; E = k q/r 2. The ƒÃo can, for the moment, be thought of as a constant that makes the units come out right. A) d v = ʃ ʃ S A. Gauss's law for magnetism states that no magnetic monopoles exists and that the total flux through a closed surface must be zero. Solution The surface is shown in the figure to the right. Gauss's law for gravity states: The gravitational flux through any closed surface is proportional to the enclosed mass. MATH 20550 Flux integrals Fall 2016 1. The particle name may be placed before the term, e. A voltage proportional to the lightning current due to resistive coupling (for example, the voltage gradient on the inner surface of a metallic skin) or to inductive coupling where the magnetic flux has diffused through a high resistive skin (such as CFC) and in so doing has effectively undergone an integrating process. S = (1/μ 0)(E×B) is the energy flux. This is the same problem as #3 on the worksheet \Triple Integrals", except that. Technion 25,983 views. Video - 8:23: Video on flux: MIT: Flux across Circle. 5625 degrees. Abstract: T-duality acts on circle bundles by exchanging the first Chern class with the fiberwise integral of the H-flux, as we motivate using E_8 and also using S-duality. Consider a surface S on which a scalar field f is defined. The Area Under a Curve. Heat flux (Ф) can be defined as the rate of heat energy transfer through a given surface (W), and heat flux density (φ) is the heat flux per unit area (Wm²). PAF Paint after fabrication (white) BK Matte black paint color 8 8' length1 60L 6000 lumens ST Satin aluminum paint color 1. It is a quantity of convenience in the statement of Faraday's Law and in the discussion of objects like transformers and solenoids. 
If you want the limits of an integral/sum/product to be specified above and below the symbol in inline math mode, use the \limits command before limits specification. Just as with vector line integrals, surface integral is easier to compute after surface S has been parameterized. Suppose that the surface S is described by the function z=g(x,y), where (x,y) lies in a region R of the xy plane. Lecture 23: Gauss' Theorem or The divergence theorem. example, for motion along a straight line, if y=f(t) gives the displacement of an object after time t , then dy / dt = f ′ ( t ) is the velocity of the object at time t. Flux = vA n^ Flux = 0 n^ Flux = vA cos θ θ Consider the fluid with a vector r v which describes the velocity of the fluid at every point in space and a square with area A = L 2 and normal n. c) Since z= 0 and the curve lies in the xyplane, the integral is zero. Spatial grid. Therefore: F. If F is a conservative force field, then the integral for work, ∫CF ⋅ dr, is in the form required by the Fundamental Theorem of Line Integrals. This is Maxwell’s first equation. Let f: T !R be a function de ned on T. Integral is called the flux of F across S, just as integral is the flux of F across curve C. A sampled-partition of T, P, is a division of the surface Tinto pieces, T i, followed by a choice of sample. The spherical nature of the problem means that the evaluation of the flux integral is incorrect and cannot correctly be used to lead to the conclusion. case, the line integral is the area of the curtain under the graph of the function is the figure below. The Divergence Theorem states: ∬ S F⋅dS = ∭ G (∇⋅F)dV, ∇⋅F = ∂P ∂x + ∂Q ∂y + ∂R ∂z. In this video, I do one example of evaluating a basic surface integral. SI Units for electric flux is Nm²/c. If you can parametrize the curve, you can always just throw the resulting (normal) integral into Wolfram Alpha, since it doesn't matter how ugly the parametrization makes things if you aren't doing it by hand. 2) drA= 2 sinθdθφ d rˆ r (4. Charged Rod Compare(the(magnitude(of(the(flux(through(the(surface(of. This applies for example in the expression of the electric field at some fixed point due to an electrically charged surface, or the gravity at some fixed point due to a sheet of material. 111 contributors. If the linear charge. In particular, we discover how to integrate vector fields over surfaces in 3D space and "flux" integrals. The divergence theorem can be used to transform a difficult flux integral into an easier triple integral and vice versa. ' denotes the dot product, Magnetic flux through a closed surface. In this work, we study the number and distribution of flux vacua in Calabi-Yau com-pactification of type II string theory. The unit of magnetic flux in the Weber (Wb). Magnetic field intensity is also known as the magnetizing force which is measured is ampere-turns per meter (A-t/m). With surface integrals we will be integrating over the surface of a solid. We can write the above integral as an iterated double integral. After learning about what flux in three dimensions is, here you have the chance to practice with an example. Vector integration refers to four types of integrals of vectors: ordinary integrals, indefinite or definite an example of a line integral is the work performed by a vector force along an object as it moves along the line or path. 1) in analogy with the mass flux through a stream tube. Important Notes •In order to apply Ampère's Law all currents have to be steady (i. 
Again, flux is a general concept; we can also use it to describe the amount of sunlight hitting a solar panel or the amount of energy a telescope receives from a distant star, for example. By using this website, you agree to our Cookie Policy. Let Ube the solid enclosed by the paraboloids z= x2 +y2 and z= 8 (x2 +y2). Flux is the amount of “something” (electric field, bananas, whatever you want) passing through a surface. The absolute and relative permeability of iron, III. Integer and sum limits improvement. Search for wildcards or unknown words Put a * in your word or phrase where you want to leave a placeholder. Let f: T !R be a function de ned on T. In spherical coordinates, a small surface area element on the sphere is given by (Figure 4. To define the integral (1), we subdivide the surface S into small pieces having area ∆Si, pick a point (xi,yi,zi) in the i-th piece, and form the Riemann sum (2) X f(xi,yi,zi)∆Si. This might be easier if we went through a few examples. Change of Variables in Multiple Integrals – A Double Integral Example, Part 1 of 2 Change of Variables in Multiple Integrals – A Double Integral Example, Part 2 of 2 Double Integrals: Changing Order of Integration – Full Example Triple Integrals. the integral of “the derivative” of Fon S to the integral of F itself on the boundary of S. Find the surface integral of f(x,y,z) = (x 2 +y 2)z where σ is the portion of the sphere. The best example of this is an inductor. It is interesting that Green’s theorem is again the basic starting point. We strongly recommend that the reader always first attempts to solve a problem on his own and only then look at the solution here. Thus, by decreasing flux and speed can be increased vice versa. Ask Question Asked 3 years, 7 months ago. Free double integrals calculator - solve double integrals step-by-step This website uses cookies to ensure you get the best experience. Use this to check your answers or just get an idea of what a graph looks like. Let F be the vector field F ( x, y, z) = ( 2 x, 2 y, 2 z). Section 6-3 : Surface Integrals. We will see that particular application presently. via the thermo_style custom command). 6 Evaluate Z Z S z2 dS where S is the hemisphere. We begin with the planar case. Energy and momentum flux (examples) The flux of B through a loop encircling the inner surface of the torus is B(R)πr 2. Then as a post-processing operation, an auto-correlation can be performed, its integral estimated, and the Green-Kubo formula above evaluated. Solution to Surface Integral Problem. Consider a rectangular box of height ε and area A (see Figure 2. Of course, that means that inside the resonances, we expect the flux to decrease. 2 m Wb in the iron. If you'd still like to experiment with them, you may show/hide them below. - Each dA projects onto a spherical surface element total electric flux through irregular surface = flux through sphere. The vector difierential dS represents a vector area element of the surface S, and may be written as dS = n^ dS, where n^ is a unit normal to the surface at the position of the element. An important fact (or theorem) that follows directly from the definition of a vortex tube is that the strength of a vortex tube is constant along the tube. SI Units for electric flux is Nm²/c. Magnetic flux is a measure of the total magnetic field passing through a surface. Flux Integrals. the k th. 
6, rΦ 2 x rθ = sin Φ cos θ i + sin2 Φ sin θ j + sin Φ cos Φ k Therefore, F(r(Φ, θ)) · (rΦ x rθ) = cos Φ sin2 Φ cos θ + sin3 Φ sin2 θ + sin2 Φ cos Φ cos θ Then, by Formula 9, the flux is: Example 4 2 2 3 2 00 (2sin cos cos sin sin ) S D d dA dd IT SS I I T I T I T u ³³ ³³ ³³. The entire lesson is taught by working example problems beginning with the easier ones and gradually progressing to the harder problems. How do we find ds in general?. is the divergence of the vector field F (it’s also denoted divF) and the surface integral is taken over a closed surface. 5: Spherical coordinates example #1. Radiant Flux Radiant flux is the fundamental unit in detector-based radiometry. As observed before, if F= ˆv, the Flux has a physical signi cance (it is dM=dt). Suppose that the surface S is described by the function z=g(x,y), where (x,y) lies in a region R of the xy plane. seeds configuration:. Since curl is the circulation per unit area, we can take the circulation for a small area (letting the area shrink to 0). Every field line that goes out of the surface has an equivalent that goes in. This is Maxwell’s first equation. • find the surface area and mass of a surface. Due to convection, B changes because system moves to a new part of the flow field, where conditions are different. The following examples illustrate the practical use of the divergence theorem in calculating surface integrals. Before we work any examples let's notice that we can substitute in for the unit normal vector to get a somewhat easier formula to use. 6, rΦ 2 x rθ = sin Φ cos θ i + sin2 Φ sin θ j + sin Φ cos Φ k Therefore, F(r(Φ, θ)) · (rΦ x rθ) = cos Φ sin2 Φ cos θ + sin3 Φ sin2 θ + sin2 Φ cos Φ cos θ Then, by Formula 9, the flux is: Example 4 2 2 3 2 00 (2sin cos cos sin sin ) S D d dA dd IT SS I I T I T I T u ³³ ³³ ³³. 14) The current enclosed by this integration path is equal to (31. Watts per square meter (WM. Because of this they are suitable for a range of high-performance drivetrain applications. The integral over the real segment is the same as the real integral in the context you're used to. Change in flux linkages= Nφ2 – Nφ1. This easy to apply in particle mechanics, but for fluids, it gets more complex due to the control volume (and not individual particles). Because of this they are suitable for a range of high-performance drivetrain applications. SI Units for electric flux is Nm²/c. BrokenPowerLaw2: Example: XML Model Definition. This is often called Gauss' law of. Let T be a surface in R3. That is, Stokes’ Theorem includes Green’s Theorem as a special case. To define the integral (1), we subdivide the surface S into small pieces having area ∆Si, pick a point (xi,yi,zi) in the i-th piece, and form the Riemann sum (2) X f(xi,yi,zi)∆Si. There are two main groups of equations, one for surface integrals of scalar-valued functions and a second group for surface integrals of vector fields (often called flux integrals). The Divergence Theorem states: ∬ S F⋅dS = ∭ G (∇⋅F)dV, ∇⋅F = ∂P ∂x + ∂Q ∂y + ∂R ∂z. All the y terms (including dy) can be moved to one side of the equation, and. De ne ZZ T fdS= lim mesh(P)!0 X P f(p i)Area(T i) as a limit of Riemann sums over sampled-partitions. If (xp;yp;zp) is any point on the line element ¢rp,then the second type of line integral in Eq. Full Table Options. This is often called Gauss' law of. The surface integral can be defined component-wise according to the definition of the surface integral of a scalar field; the result is a vector. 
We continue the study of such integrals, with particular attention to the case in which the curve is closed. Motion sensing zone is extremely limited if used below 15' mounting height. The magnetic flux continuity integral law, (1), requires that the net flux out of this closed surface be zero. integral2 transforms the region of integration to a rectangular shape and subdivides it into smaller rectangular regions as needed. Surface Integrals Let G be defined as some surface, z = f(x,y). 321 Example 53. Let 𝐅( , )=〈 ( , ), ( , )〉be a vector field in 𝑅2, representing the flow of the medium, and let C be a directed path, representing the permeable membrane. Soil heat flux sensors may consist of several thermocouples whose measurements are averaged, a single thermopile, or a single thermopile with a film heater. The fix ave/correlate command can calculate the auto-correlation. , "proton differential flux", or in the spelled-out unit name, e. Multivariable calculus 3. We then present the solutions to the line integrals in the 6 animations followed by further examples. The charge q is the net charge enclosed by the integral. Posted May 29, 2016. The momentum flux is S/c. Unformatted text preview: Sections 19. 2 - Flux and Flux Integrals Preliminary Example 1. indefinite integral R f (also known as the anti-derivative), the unsigned definite integral R [a,b] f(x) dx (which one would use to find area under a curve, or the mass of a one-dimensional object of varying density), and the signed definite integral Rb a f(x) dx (which one would use for instance to compute the work required to move. Example The xy-plane has two orientations, one given by e n = k (pointing up), the other by e n = k (pointing down). To evaluate surface integrals we express them as double integrals taken over the projected area of the surface S on one of the coordinate planes. c) Since z= 0 and the curve lies in the xyplane, the integral is zero. The general formula is indeed a double integral, so the most technically correct way to write it is$$\Phi_E = \iint_S \vec{E}\cdot\mathrm{d}^2\vec{A} But when formulas start to involve four, five, or more integrals, it gets tedious to write them all out all the time, so there's a notational convention in which a multiple integration can be designated by a single integral sign. For a reactor which has a large amount of excess reactivity, several control rods will be required. Ask Question Asked 4 years, 11 months ago. Watts per square meter (WM. If U, P, and L are known, then (5. the unit normal times the surface element. 03 5wL4 3801 In each of the two examples considered so far, only one free-body diagram was required to determine the bending moment in the beam. The unit of magnetic flux in the Weber (Wb). Line integrals. PAF Paint after fabrication (white) BK Matte black paint color 8 8' length1 60L 6000 lumens ST Satin aluminum paint color 1. Area of circle = 4 * (1/4) π a 2 = π a 2 More. For example, you can use the storm command switch -c to override a topology configuration property. Magnetism is usually discussed in terms of two quantities. Although the Latin fluxus, means 'flow' the English word is older and unrelated. 10 Integral controls options dimmable to 5% via wireless wall switch (see p. We will formalize this statement in Chap. This is a type of right-hand rule: make a fist with your right hand and stick out your thumb. - Divide irregular surface into dA elements, compute electric flux for each (E dA cos φ) and sum results by integrating. 
Integration Method Description 'auto' For most cases, integral2 uses the 'tiled' method. Both the social and subversive elements of Fluxus informed the artistic presentation of the marriage of poet Billie Hutching and Fluxus organizer George Maciunas. Our rst task is to give a de nition of what a path and line integrals are and see some examples of how to compute them. Magnetic flux is an important calculation in engineering and in circuits, because some circuit components store magnetic fields as energy. This easy to apply in particle mechanics, but for fluids, it gets more complex due to the control volume (and not individual particles). d) Since z= 0 and the curve lies in the xyplane, the integral is zero. Remember our convention for flux orientation: positive means flux is leaving, negative means flux is entering. , a = dv/dt = d 2 x/dt 2. Gauss surface for a given charges is any imaginary closed surface with area A, totally surrounding the charges. 1) (the surface integral). G o t a d i f f e r e n t a n s w e r? C h e c k i f i t ′ s c o r r e c t. Therefore: F. Before diving in, the reader is strongly encouraged to review Section 2. Linear momentum equation for fluids can be developed using Newton's 2nd Law which states that sum of all forces must equal the time rate of change of the momentum, Σ F = d(mV)/dt. Spreadsheet Calculus: Derivatives and Integrals: Calculus can be kind of tricky when you're first learning it. With the heat flux applied on the left boundary, thermal energy flows across the boundary and gradually heats up the domain. The integral of dA over the sphere's surface is 4 r2. Work is a transfer of energy. To create this article, volunteer authors worked to edit and improve it over time. Ask Question Asked 3 years, 7 months ago. AP® Physics C: Electricity and Magnetism 2010 Scoring Guidelines. NOTE 4 The use of the terms "spectral flux" and "spectral flux density" for this concept is deprecated because "spectral" usually applies only to a specific wavelength. As observed before, if F= ˆv, the Flux has a physical signi cance (it is dM=dt). You can also check your answers! Interactive graphs/plots help visualize and better understand the functions. 1] Answer: in the x-y plane, the region is the triangle. In this flux control method, speed of the motor is inversely proportional to the flux. 1, defined as a lamp with LEDs, an integrated LED driver, and an. The magnetic flux continuity integral law, (1), requires that the net flux out of this closed surface be zero. To gain the full effectiveness of the, rods and a relatively even flux distribution, the rods would need to be distributed appropriately. It is one of the four equations of Maxwell's laws of electromagnetism. Join 100 million happy users! Sign Up free of charge:. First we need to parameterize the equation of the curve. where C is positively oriented. Flux is optimized for ETL, monitoring, and alerting, with an inline planner and optimizer. Emphasis is placed on giving students confidence in their skills by gradual repetition so that the skills learned in this section are. Θ = Angle between the magnetic field and normal to the surface. Then:e W (( ((( a b W F A F†. Equation is a probability conservation equation. For example, marathon. Define and practice a more general calculation for Work. In simple terms, Lumens (denoted by lm) are a measure of the total amount of visible light (to the human eye) from a lamp or light source. The frequency-domain equation is also given. 
The divergence theorem can be used to transform a difficult flux integral into an easier triple integral and vice versa. That is, Stokes’ Theorem includes Green’s Theorem as a special case. According to this equation, the probability of a measurement of lying in the interval to evolves in time due to the difference between the flux of probability into the interval [i. Total luminous flux is the photopically weighted total light output from a light source.
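As a worked illustration of the Divergence Theorem above (an added example, using the field F = (2x, 2y, 2z) mentioned earlier and an outward-oriented sphere of radius R chosen for simplicity):

$$\iint_S \mathbf{F}\cdot d\mathbf{S} = \iiint_G (\nabla\cdot\mathbf{F})\,dV = \iiint_G 6\,dV = 6\cdot\tfrac{4}{3}\pi R^3 = 8\pi R^3,$$

since ∇·F = 2 + 2 + 2 = 6 and the triple integral is over the ball G bounded by the sphere S.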
| 2020-06-06 01:34:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7990542650222778, "perplexity": 939.595995278296}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348509264.96/warc/CC-MAIN-20200606000537-20200606030537-00443.warc.gz"}
http://mathematica.stackexchange.com/questions/18251/derivative-of-a-spline-approximation/18253 | # Derivative of a spline approximation
To approximate an experimental data function $f(t)$ with $t \in (t_{min},t_{max})$ I used a cubic spline $s(p)=(t(p),f(p))$ with the parameter $p \in (0,1)$ and $p = t/(t_{max}-t_{min})$ by using the command BSplineFunction. To take the time derivative I use the chain rule $s'=ds/dt = ds/dp \cdot dp/dt = s'(p)/(t_{max}-t_{min})$. When I try plotting the derivative $df/dt = s'[[2]]$ by
Show[ParametricPlot[{93*p, 1/93 s'[p][[2]]}, {p, 0, 1}], AspectRatio -> 1]
it shows me the error warning
Part::partw: Part 2 of BSplineFunction[{{0.,1.}},<>][t] does not exist. >>
but still shows the correct graph. What went wrong?
data = {{3.107, 0.997}, {6.851, 1.008}, {10.594, 1.011}, {14.338,
1.007}, {18.081, 0.977}, {21.825, 0.967}, {25.568, 0.917}, {29.311,
0.852}, {33.055, 0.736}, {36.798, 0.533}, {40.542, 0.336}, {44.285,
0.205}, {48.029, 0.111}, {51.772, 0.074}, {55.516, 0.044}, {59.259,
0.032}, {63.003, 0.034}, {66.746, 0.01}, {70.49, 0.026}, {74.233,
0.01}, {77.977, 0.016}, {81.72, 0.002}, {85.464, -0.002}, {89.207,
0.01}, {92.951, 0.01}}
s = BSplineFunction[data, SplineDegree -> 3];
There is a caveat to the method. When fitting a spline to data, p is not necessarily proportional to the independent variable. In my case, the relationship between $p$ and $t(p)$ deviates substantially from linearity outside the range $p \in (0.025, 0.975)$.
Is there a better way to get the derivative?
-
The warning is probably caused by premature evaluation (no pun intended). Because of the symbolic parameter t, s[t] evaluates to BSplineFunction[{{0.,1.}},<>][t] instead of a list of the form {x,y}, and only evaluates to numeric values when t assumes numeric values, too. The normal solution to this is to postpone the access of the y-variable [[2]] to when t assumes numeric values by hiding it in a wrapper-function with the help of SetDelayed / :=
YDerivative[t_?NumericQ] := (1/93) s'[t][[2]]
Show[ParametricPlot[{93*t, YDerivative[t]}, {t, 0, 1}], AspectRatio -> 1]
-
Works great, thanks very much! – malumno Jan 23 '13 at 10:07
I would have used the "Spline" method of Interpolation[] myself:
sa = Interpolation[data, InterpolationOrder -> 3, Method -> "Spline"];
sp = sa';
Plot[{sa[t], sp[t]}, {t, data[[1, 1]], data[[-1, 1]]}, Axes -> None,
Epilog -> {AbsolutePointSize[4], Red, Point /@ data}, Frame -> True]
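Side note (added; not from the original answers): roughly the same distinction exists outside Mathematica, e.g. in Python/SciPy, where UnivariateSpline with s=0 interpolates through every point while s>0 gives a smoothing spline, mirroring the Interpolation[] vs BSplineFunction[] difference discussed here. Variable names below are illustrative, with data holding the (t, f) pairs from the question:

import numpy as np
from scipy.interpolate import UnivariateSpline

t, f = np.array(data).T                       # data = the (t, f) pairs above
interp = UnivariateSpline(t, f, k=3, s=0)     # interpolating: passes through all points
smooth = UnivariateSpline(t, f, k=3, s=0.01)  # smoothing: damps noise in the derivative
dfdt = smooth.derivative()                    # spline object for df/dt; call as dfdt(t)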
-
One important difference between the two methods is, as far as I know, that Interpolation[] does an interpolation and forces the spline to go through all the data points, while BSplineFunction[] creates a smoothing spline, which only uses the points as knots, but doesn't pass through them. I wanted to use a smoothing spline to decrease the level of noise in the derivative. – malumno Feb 11 '13 at 12:48 | 2014-10-22 09:53:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2717616856098175, "perplexity": 4115.832406266385}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507446525.24/warc/CC-MAIN-20141017005726-00317-ip-10-16-133-185.ec2.internal.warc.gz"} |
https://shawnlyu.com/tag/hard/ | ## [Leetcode]1537. Get the Maximum Score
You are given two sorted arrays of distinct integers nums1 and nums2.
A valid path is defined as follows:
Choose array nums1 or nums2 to traverse (from index-0).
Traverse the current array from left to right.
If you are reading any value that is present in nums1 and nums2 you are allowed to change your path to the other array. (Only one repeated value is considered in the valid path).
Score is defined as the sum of uniques values in a valid path.
Return the maximum score you can obtain of all possible valid paths.
Since the answer may be too large, return it modulo 10^9 + 7.
Continue reading “[Leetcode]1537. Get the Maximum Score”
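One common approach (an added sketch, not necessarily the post's solution) is a two-pointer sweep that keeps the best running sum on each array and merges the two at every shared value, since that is where a path may switch:

def maxSum(nums1, nums2):
    MOD = 10**9 + 7
    i = j = 0
    a = b = 0  # best path sums ending at the current position in nums1 / nums2
    while i < len(nums1) or j < len(nums2):
        if i < len(nums1) and (j == len(nums2) or nums1[i] < nums2[j]):
            a += nums1[i]; i += 1
        elif j < len(nums2) and (i == len(nums1) or nums2[j] < nums1[i]):
            b += nums2[j]; j += 1
        else:  # common value: both paths can switch here
            a = b = max(a, b) + nums1[i]
            i += 1; j += 1
    return max(a, b) % MOD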
## [Leetcode]1510. Stone Game IV
Alice and Bob take turns playing a game, with Alice starting first.
Initially, there are n stones in a pile. On each player's turn, that player makes a move consisting of removing any non-zero square number of stones in the pile.
Also, if a player cannot make a move, he/she loses the game.
Given a positive integer n. Return True if and only if Alice wins the game otherwise return False, assuming both players play optimally.
Continue reading “[Leetcode]1510. Stone Game IV”
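A short memoized win/lose sketch (added; one standard way to solve it, not necessarily the post's code): a position is winning if some square-number move leaves the opponent in a losing position.

from functools import lru_cache

def winnerSquareGame(n):
    @lru_cache(maxsize=None)
    def win(m):
        if m == 0:
            return False  # no move available: the current player loses
        k = 1
        while k * k <= m:
            if not win(m - k * k):
                return True
            k += 1
        return False
    return win(n)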
## [Leetcode]212. Word Search II
Given a 2D board and a list of words from the dictionary, find all words in the board.
Each word must be constructed from letters of sequentially adjacent cell, where "adjacent" cells are those horizontally or vertically neighboring. The same letter cell may not be used more than once in a word.
Example:
Input: board =
[ ['o','a','a','n'],
['e','t','a','e'],
['i','h','k','r'],
['i','f','l','v'] ]
words = ["oath","pea","eat","rain"]
Output: ["eat","oath"]
Continue reading “[Leetcode]212. Word Search II” | 2021-09-20 04:46:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21825715899467468, "perplexity": 1595.989887612608}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057018.8/warc/CC-MAIN-20210920040604-20210920070604-00408.warc.gz"} |
https://gmatclub.com/forum/if-the-infinite-sequence-a1-a2-a3-an-each-term-64019.html |
# If the infinite sequence a1, a2, a3, ..., an, ..., each term
Director
Joined: 10 Feb 2006
Posts: 648
If the infinite sequence a1, a2, a3, ..., an, ..., each term [#permalink]
Updated on: 25 Mar 2014, 08:48
In the infinite sequence $$a_1$$, $$a_2$$, $$a_3$$,...., $$a_n$$, each term after the first is equal to twice the previous term. If $$a_5-a_2=12$$, what is the value of $$a_1$$?
A. 4
C. 2
D. 12/7
E. 6/7
The sequence looks more like x,x^2,x^4,x^8,x^16
a5 = x^16
a2=x^2
x^16-x^2 = 12
x^2(x^8 -1 ) = 12
I'm lost here. Thanks
OPEN DISCUSSION OF THIS QUESTION IS HERE: if-the-infinite-sequence-a1-a2-a3-an-each-term-134617.html
_________________
GMAT the final frontie!!!.
Originally posted by alimad on 16 May 2008, 03:50.
Last edited by Bunuel on 25 Mar 2014, 08:48, edited 1 time in total.
Renamed the topic, edited the question, added the OA and moved to PS forum.
Manager
Joined: 27 Jul 2007
Posts: 103
16 May 2008, 05:46
Is it 6/7? The sequence will be x, 2x, 4x, 8x, ... (not x^2, x^4, etc.)
Manager
Joined: 11 Apr 2008
Posts: 147
Schools: Kellogg(A), Wharton(W), Columbia(D)
16 May 2008, 05:51
In the infinite sequence, a1,a2,a3,,,,an, each term after the first is equal to twice the previous term. If a5-a2 =12, what is the value of a1?
The sequence looks more like x,x^2,x^4,x^8,x^16
a5 = x^16
a2=x^2
x^16-x^2 = 12
x^2(x^8 -1 ) = 12
I'm lost here. Thanks
The sequence is
x, 2*x, 2*(2*x), 2*(2*(2*x)) .....
i.e.
nth term = 2^(n-1)x
a5=2^4 * x
a2=2^2*x
=> a5-a2 = (16-4)x= 12x
thus, 12x=12
and the first term x=1
SVP
Joined: 29 Mar 2007
Posts: 2489
16 May 2008, 07:17
In the infinite sequence, a1,a2,a3,,,,an, each term after the first is equal to twice the previous term. If a5-a2 =12, what is the value of a1?
The sequence looks more like x,x^2,x^4,x^8,x^16
a5 = x^16
a2=x^2
x^16-x^2 = 12
x^2(x^8 -1 ) = 12
I'm lost here. Thanks
x. 2x. 4x. 8x. 16x. 16x-2x=12 14x=12. x=6/7
SVP
Joined: 29 Mar 2007
Posts: 2489
16 May 2008, 07:18
anirudhoswal wrote:
In the infinite sequence, a1,a2,a3,,,,an, each term after the first is equal to twice the previous term. If a5-a2 =12, what is the value of a1?
The sequence looks more like x,x^2,x^4,x^8,x^16
a5 = x^16
a2=x^2
x^16-x^2 = 12
x^2(x^8 -1 ) = 12
I'm lost here. Thanks
The sequence is
x, 2*x, 2*(2*x), 2*(2*(2*x)) .....
i.e.
nth term = 2^(n-1)x
a5=2^4 * x
a2=2^2*x
=> a5-a2 = (16-4)x= 12x
thus, 12x=12
and the first term x=1
This cannot be correct.
Just try it: 1, 2, 4, 8, 16. 16 - 2 doesn't equal 12.
Director
Joined: 23 Sep 2007
Posts: 761
16 May 2008, 19:00
The OA is 6/7
Attachments
infinitesequence.JPG
Intern
Joined: 20 Feb 2014
Posts: 3
Re: In the infinite sequence, a1,a2,a3,,,,an, each term after [#permalink]
25 Mar 2014, 07:49
a5=2^4*x
a2=2^1*x
So a5-a2=16x-2x=14x
14x=12 => x=6/7
Math Expert
Joined: 02 Sep 2009
Posts: 47206
Re: If the infinite sequence a1, a2, a3, ..., an, ..., each term [#permalink]
25 Mar 2014, 08:46
In the infinite sequence $$a_1$$, $$a_2$$, $$a_3$$,...., $$a_n$$, each term after the first is equal to twice the previous term. If $$a_5-a_2=12$$, what is the value of $$a_1$$?
A. 4
C. 2
D. 12/7
E. 6/7
The formula for calculating $$n_{th}$$ term would be $$a_n=2^{n-1}*a_1$$ . So:
$$a_5=2^4*a_1$$;
$$a_2=2*a_1$$;
Given: $$a_5-a_2=2^4*a_1-2*a_1=12$$ --> $$2^4*a_1-2*a_1=12$$ --> $$a_1=\frac{12}{14}=\frac{6}{7}$$.
OPEN DISCUSSION OF THIS QUESTION IS HERE: if-the-infinite-sequence-a1-a2-a3-an-each-term-134617.html
_________________
Senior Manager
Joined: 06 Dec 2016
Posts: 251
Re: If the infinite sequence a1, a2, a3, ..., an, ..., each term [#permalink]
30 Aug 2017, 11:27
Process of elimination could work as well: testing a1 = 6/7 gives a5 - a2 = 96/7 - 12/7 = 84/7 = 12.
Target Test Prep Representative
Affiliations: Target Test Prep
Joined: 04 Mar 2011
Posts: 2679
Re: If the infinite sequence a1, a2, a3, ..., an, ..., each term [#permalink]
02 Sep 2017, 07:08
In the infinite sequence $$a_1$$, $$a_2$$, $$a_3$$,...., $$a_n$$, each term after the first is equal to twice the previous term. If $$a_5-a_2=12$$, what is the value of $$a_1$$?
A. 4
C. 2
D. 12/7
E. 6/7
We can let a_1 = x, a_2 = 2x, a_3 = 4x, a_4 = 8x and a_5 = 16x. Thus:
16x - 2x = 12
14x = 12
x = 12/14 = 6/7
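A quick numeric check of this value (added; a throwaway snippet, any language would do):

a1 = 6 / 7
seq = [a1 * 2 ** k for k in range(5)]  # a1 through a5
print(seq[4] - seq[1])                 # 12.0 (up to float rounding), as required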
_________________
Jeffery Miller
GMAT Quant Self-Study Course
500+ lessons 3000+ practice problems 800+ HD solutions
Re: If the infinite sequence a1, a2, a3, ..., an, ..., each term [#permalink] 02 Sep 2017, 07:08
| 2018-07-23 10:07:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8395461440086365, "perplexity": 8554.763402273897}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676596204.93/warc/CC-MAIN-20180723090751-20180723110751-00460.warc.gz"}
http://smellslikeml.com/bluebot_II.html | # SmellsLikeML
#### Hacking the iTag Bluetooth Tracker
1/27/18
Bluetooth, Botnets, Networks, Alexa Skills, Home Automation, IoT
In the previous experiments, I used a Samsung device. Unfortunately, this required me to initiate a scan for nearby devices through android's OS. This seems to be a standard security feature of modern mobile operating systems.
To tag an important item with bluetooth, we found the DILITEC tracker appealing. This product features bluetooth low energy and a button that can be set to trigger cameras to take photos and drop pins on maps, so naturally the mobile app will require access to all these applications in your phone.
This means that without the phone, you will not be able to recover the keys. And so we are motivated by the fact that alexa devices are expected to remain stationary and commands can be issued around the home by voice.
To these ends, we turn to investigating bluetooth communications with the iTag device. This adafruit tutorial on hacking your smart light was extremely helpful and forms the basis for how we discovered, connected, and began to reverse engineer features of the iTag.
Specifically, we used hcitool to scan for the device named iTag:
sudo hcitool lescan
With the target address, we can use gatttool interactively running:
sudo gatttool -I
Next, we connect with the target device.
connect AA:BB:CC:DD:EE:FF
We expect the reply Connection successful. Then we can enter 'primary' into the prompt to discover the primary services. For other options, we could simply enter help.
The information on display using the gatttool prompt should be cross referenced against gatt services specifications. We can run 'char-desc' on a characteristic's handle to get more information and reference this output against the gatt characteristics specification. For instance, we find UUIDs beginning in 1800, 180f, 1802, and ffe0 corresponding to 'generic access', 'battery service', 'immediate alert', and the 'custom service' respectively. Inquiring on the 'immediate alert' service, we find a UUID beginning with '00002a06' which corresponds to '0x2A06' in the characteristics chart the 'alert level' is controlled. In code, we have:
primary
char-desc 0x000b 0x000b
You're probably curious about the 'custom service'... If you are connected to the device and you give the button a push, you will receive a notification of the form:
Notification handle = 0x000e value: 01
Here, the notification references the handle 0x000e associated with custom services.
At this point in the adafruit example, I simply tried out the char-write-cmd command on the alert service handle using the same value they used to manipulate the light color only to find it triggered a repeated beeping alarm. Very interesting!!
char-write-cmd 0x000b 58010301ff00ff0000
Pop out the battery to stop this. Now we have a way to kick off an alarm on the tagger using Alexa. To pin this down, I would need to study the packets being sent between devices. Here I refer to another part in the adafruit series to use wireshark to get more information about the communications.
After reviewing a few exchanges, you find the structure of the packets. Generally, they seem to take a couple bits followed by '111000000001'. Experimenting interactively, you'll quickly stumble upon '0100111000000001' to turn on the alarm. Naturally, replacing the 1 with a 0, '0000111000000001' shuts off the alarm. Putting this into python, we have a function like:
#!/usr/bin/env python
import pexpect

def sound_alarm(mac='AA:BB:CC:DD:EE:FF'):
    # spawn an interactive gatttool session and connect to the tag first
    # (the original snippet expected 'Connection successful' without connecting)
    child = pexpect.spawn('gatttool -I')
    child.sendline('connect ' + mac)
    child.expect('Connection successful', timeout=30)
    # write the 'alarm on' value discovered above to the alert handle
    child.sendline('char-write-cmd 0x000b 0100111000000001')
This can be used to trigger the alarm and Alexa can help us to do this with a convenient voice UI.
#### Making our skill smarter
Rather than running the alexa code through a lambda function, we opt to create a flask server running flask-ask. This allows us to run code on our own computers where we can make use of machine learning libraries like XGBoost.
Getting the data is easy, we have a simple script to query the device for RSSI signal strength which we can run on a crontab every 2 minutes. These readings are quite noisy and tend to time out before getting the signal leaving many missing readings. However, with readings from 4 machines every 2 minutes, samples quickly accumulate.
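A minimal sketch of such a logging script (added; the MAC address is the placeholder used earlier, and `hcitool rssi` only reports while a connection is open, which matches the frequent timeouts mentioned above):

import subprocess, datetime

MAC = 'AA:BB:CC:DD:EE:FF'  # placeholder tag address

def log_rssi(path='logfile.txt'):
    try:
        out = subprocess.run(['hcitool', 'rssi', MAC], capture_output=True,
                             text=True, timeout=10).stdout
        rssi = out.strip().rsplit(':', 1)[-1].strip()  # e.g. 'RSSI return value: -7'
    except subprocess.TimeoutExpired:
        rssi = 'NA'  # missing reading
    with open(path, 'a') as fh:
        fh.write('%s %s\n' % (datetime.datetime.now().isoformat(), rssi))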
Every few minutes, I would relocate the beacon and insert a marker into the log file like:
echo "KITCHEN" >> logfile.txt
This helped to label the samples and after accumulating readings from many different nooks and crannies, I parse the logfile and form a pandas dataframe with roughly 1000 readings taken over a day. Here, we can coarsen the labels somewhat. This will make learning easier and since we can trigger an alarm, we only care about being within earshot. I divide the space into 4 quadrants labeled: BEDROOM, BATHROOM, KITCHEN, and LIVING ROOM.
With each sample comprised of RSSI readings from 4 machines, we expect simple models to work well. Since we expect many missing values, we try XGBoost which handles missing values easily and trains in seconds.
from xgboost import XGBClassifier

clf = XGBClassifier(max_depth=4, learning_rate=0.05,
                    n_estimators=300,
                    objective='multi:softmax',
                    n_jobs=10, num_class=4)
# X_train / y_train (names assumed): RSSI feature rows and room labels
clf.fit(X_train, y_train)
We pickle the model so that the flask server can read off the last N lines to construct test samples. Here, we use the idea that a lost item has likely been sitting for a few minutes and collect samples for the last 10 minutes, then we generate predictions for the last 5 readings and return the mode prediction to limit the impact of model error.
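A sketch of that post-processing step (added; function and variable names are assumptions, not from the original code):

from collections import Counter

def locate(clf, last_samples):
    # last_samples: the 5 most recent feature rows (RSSI from the 4 machines)
    preds = clf.predict(last_samples)
    return Counter(preds).most_common(1)[0][0]  # mode prediction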
Check out the hackster project and the github repo for details. | 2018-05-23 20:46:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3609570860862732, "perplexity": 3167.0423964887623}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865809.59/warc/CC-MAIN-20180523200115-20180523220115-00602.warc.gz"} |
https://lhcbproject.web.cern.ch/lhcbproject/Publications/LHCbProjectPublic/LHCb-CONF-2020-002.html | Combination of the ATLAS, CMS and LHCb results on the $B^0_{(s)} \to \mu^+ \mu^-$ decays
Abstract
A combination of results on the rare $B^0_{s} \to \mu^+ \mu^-$ and $B^0 \to \mu^+ \mu^-$ decays from the ATLAS, CMS, and LHCb experiments, using data collected at the Large Hadron Collider between 2011 and 2016, is presented. The $B^0_{s} \to \mu^+ \mu^-$ branching fraction is obtained to be $\left( 2.69 ^{+ 0.37}_{- 0.35} \right) \times 10^{-9}$ and the effective lifetime of the $B^0_{s} \to \mu^+ \mu^-$ decay is measured to be $\tau_{B^0_{s} \to \mu^+ \mu^-} = 1.91^{+0.37}_{-0.35}\,\mathrm{ps}$. An upper limit on the $B^0 \to \mu^+ \mu^-$ branching fraction is evaluated to be $\mathcal{B}(B^0 \to \mu^+ \mu^-) < 1.6\,(1.9)\times 10^{-10}$ at 90% (95%) confidence level. An upper limit on the ratio of the $B^0 \to \mu^+ \mu^-$ and $B^0_{s} \to \mu^+ \mu^-$ branching fractions is obtained to be $0.052\,(0.060)$ at 90% (95%) confidence level.
Figures and captions
Figure 1: In the left-hand plot, the two-dimensional likelihood contours of the results for the $B ^0_ s \rightarrow \mu^+\mu^-$ and $B ^0 \rightarrow \mu^+\mu^-$ decays for the three experiments are shown together with their combination. The dataset used was collected from 2011 to 2016. The red dashed line represents the ATLAS experiment, the green dot-dashed line the CMS experiment, the blue long-dashed line the LHCb experiment and the continuous line their combination. For each experiment and for the combination, likelihood contours correspond to the values of $-2 \Delta \mathrm{ln} \mathcal{L} =$ 2.3, 6.2, and 11.8, respectively. In the right-hand plot, the combination of the three experiments is shown with contours of different shades; likelihood contours correspond to the values of $-2\Delta \mathrm{ln} \mathcal{L} =$ 2.3, 6.2, 11.8, 19.3, and 30.2, represented in order from darkest to least dark colour. In both plots, the red point shows the SM predictions with their uncertainties. The published results from the three experiments are detailed in Ref. [1,2,3].

Figure 2: Value of $-2 \Delta \mathrm{ln} \mathcal{L}$ for ${\cal B}( B ^0_ s \rightarrow \mu^+\mu^- )$ (left) and ${\cal B}( B ^0 \rightarrow \mu^+\mu^- )$ (right), shown in both as a solid black line. In the left-hand plot, the dark (light) green dashed lines represent the $1\sigma$ ($2\sigma$) interval. In the right-hand plot, the dark (light) blue dashed lines represent the 90% (95%) CL. In both plots, the red solid band shows the SM prediction with its uncertainty. The published results from the three experiments are detailed in Ref. [1,2,3].

Figure 3: Value of $-2 \Delta \mathrm{ln} \mathcal{L}$ for the ratio of the $B ^0 \rightarrow \mu^+\mu^-$ and $B ^0_ s \rightarrow \mu^+\mu^-$ branching fractions, $\mathcal{R}$, shown as a solid black line. The light (dark) blue dashed line represents the 90% (95%) CL and the red solid band shows the SM prediction with its uncertainty. The published results from the three experiments are detailed in Ref. [1,2,3].

Figure 4: Value of $-2 \Delta \mathrm{ln} \mathcal{L}$ for the combination of CMS and LHCb measurements [2,3] of the $B ^0_ s \rightarrow \mu^+\mu^-$ effective lifetime, shown as a solid black line. The dark and light green dashed lines represent the intervals corresponding to $-2 \Delta \mathrm{ln} \mathcal{L} =$ 1 and 4, respectively, and the red solid band shows the SM prediction with its uncertainty.
Created on 06 March 2021. | 2021-03-06 14:29:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8850961923599243, "perplexity": 2122.9510423811776}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178375096.65/warc/CC-MAIN-20210306131539-20210306161539-00086.warc.gz"} |
https://www.gradesaver.com/textbooks/math/algebra/college-algebra-11th-edition/chapter-1-section-1-3-complex-numbers-1-3-exercises-page-103/79 | ## College Algebra (11th Edition)
$0-\frac{2}{3}i$
Multiply the numerator and denominator by $i$ (which, up to a real factor, is the conjugate of the denominator $3i$): $\frac{2}{3i}\times\frac{i}{i}$. Expand: $\frac{2i}{3i^2}$. Remember that $i^2=-1$: $\frac{2i}{-3}$. Simplify: $0-\frac{2}{3}i$ | 2018-09-22 03:30:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9979885220527649, "perplexity": 815.2745530203507}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158011.18/warc/CC-MAIN-20180922024918-20180922045318-00303.warc.gz"}
https://www.gamedev.net/forums/topic/315666-checking-particle-collisions-with-sprites/ |
# Checking Particle collisions with Sprites
## Recommended Posts
Hey everybody! I posted a vector question in this thread a couple of days ago, and in that thread I came upon a way to check collisions between particles and sprites without a lot of pain. However, I feel it is fairly inefficient.
for (std::vector<Sprite*>::iterator i = SpritePool.begin(); i != SpritePool.end(); i++)
{
    for (std::vector<Particle*>::iterator j = ParticlePool.begin(); j != ParticlePool.end(); j++)
    {
        //update particles / sprite-particle collision
        //kill particles when necessary
    }
    //update sprites / sprite-sprite collision
    //kill sprites when necessary
}
See, I have a particlepool vector and a spritepool vector, and I need to check collisions between elements in the two. This is the best way I can think to do it without re-writing the whole thing to throw everything into one big vector. Give me your thoughts and maybe things you've done related to this. Thanks in advance! toXic1337
The best solution I can think of is to divide the world up into sections and then do your collisions between objects that are in the current or neighbouring sections. You could do this using some kind of tree (like a quad tree) or you could just use a plain 2D array of world sections. You would have to change your code around a bit, but it's better than your current code, where the amount of checking that has to be done for each particle scales linearly with the number of objects in the entire world, limiting the size of world you can use. See the sketch after this reply for a concrete version of the idea.
Hope that helps
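Editor's sketch of the grid idea described above (Python for brevity; the x/y members and the 200-pixel cell size are assumptions): bucket sprites into grid cells so each particle is only tested against sprites in its own cell and the neighbouring ones.

from collections import defaultdict

CELL = 200  # e.g. an 800x600 world split into 200x200 sections

def candidate_pairs(sprites, particles):
    grid = defaultdict(list)
    for s in sprites:
        grid[(int(s.x) // CELL, int(s.y) // CELL)].append(s)
    pairs = []
    for p in particles:
        cx, cy = int(p.x) // CELL, int(p.y) // CELL
        # check the particle's cell and its neighbours only
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for s in grid.get((cx + dx, cy + dy), []):
                    pairs.append((s, p))  # candidates for bbox / per-pixel tests
    return pairs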
I understand what you're saying, and it is a good way to limit the number of calculations that are being done each cycle but..
My world is only 800x600, so dividing it up won't really help all that much. I suppose I could divide it into 200x200px sections and that would help. But will the basic algorithm I've stated even work correctly?
It seems as though it would, but it's hard to tell without testing (in class ATM).
Any other ideas? I mean is there a common way of testing particle-sprite collisions? Should they all be in the same vector/list?
Thanks by the way,
toXic1337
If the world is small enough, which 800x600 is, then you should be grand with checking everything against everything like you are doing at the moment. You would only need to make the system more complicated if the world's size was arbitrary.
And I wouldn't be too worried about speed here. Just make sure that you do bounding box checks first, and then per-pixel checks if those succeed, if you are using per-pixel collision detection. You'll be surprised by how much you can actually do each frame as long as you aren't too sloppy.
[nitpicking]
And it's better and faster to use ++i and ++j instead of i++ and j++ in for loops.
[/nitpicking]
What's the difference between ++j and j++?
Thanks for the tidbit/nitpicking [lol]
Thanks for your input stro! ratings++; ... er... ++ratings; [lol]
toXic1337
++j returns j after increment. j++ returns j before.
int intplusplus(int j)
{
    // post-increment: return the old value
    int rtn;
    rtn = j;
    j = j + 1;
    return (rtn);
}

int plusplusint(int j)
{
    // pre-increment: return the new value
    j = j + 1;
    return (j);
}
I'm not sure how much difference that makes [especially after optimization] but the thinking is that ++j is one less copy, and less memory management fiddling.
Can't remember where I read it, but I think the pre- and post-increment are only really different on STL containers...
| 2018-11-17 06:33:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21147094666957855, "perplexity": 2547.669961836554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743294.62/warc/CC-MAIN-20181117061450-20181117083450-00245.warc.gz"}
https://math.stackexchange.com/questions/1456900/splitting-field-of-a-polynomial-over-a-finite-field | # splitting field of a polynomial over a finite field
I just realized that finding the splitting field of a polynomial over finite fields is not as "straightforward" as over $\mathbb{Q}$.
I am struggling with the following problem:
"Find the splitting field of $f(x)= x^{15}-2$ over $\mathbb{Z}_7=\Bbb F_7$, the finite field of $7$ elements."
By direct computation, $f(x)$ has no roots in $\mathbb{Z}_7$; however, I do not know how to prove that $f$ is actually irreducible.
I just found this lecture http://hyperelliptic.org/tanja/teaching/CCI11/online-ff.pdf
Using lemma 67, I can conclude that my polynomial is irreducible (although the proof seems a little weird)
Therefore, I think that the splitting field is $F= \mathbb{Z}_7(\alpha,\zeta)$ where $\alpha^{15} = 2$ and $\zeta$ is a primitive $15$th root of unity.
I want to describe $F$ as $\mathbb{F}_{7^n}$ for a suitable $n$.
• Do you mean $\Bbb Z_7$ as the $7$-adic integers or do you really mean $\Bbb Z/7\Bbb Z=\Bbb F_7$ the field with $7$ elements? Sep 29 '15 at 19:39
• @AdamHughes: there is a clue in the title. Sep 29 '15 at 19:42
• @RobArthan thanks for the pointer. I'll edit the question. Sep 29 '15 at 19:43
• I mean the finite field with 7 elements. I will change to avoid such confusions. Sep 29 '15 at 19:43
• @Groups: We cannot make such a conclusion. And this polynomial is not irreducible. See my answer for a factorization. Sep 30 '15 at 12:08
There is a typo in the statement of Lemma 67 in your source. The $n$th roots of unity are in $\Bbb{F}_p$ only if $n\mid p-1$ or, iff $p\equiv1\pmod n$ (not $n\equiv1\pmod p$ as is written there). Therefore that Lemma does not apply.
In fact, the polynomial $x^{15}-2$ is NOT irreducible in $\Bbb{F}_7[x]$. This follows trivially from the fact that $3^5=243\equiv-2\pmod 7$. Therefore $$x^{15}-2=(x^3)^5+3^5=(x^3+3)(x^{12}-3x^9+3^2x^6-3^3x^3+3^4).$$
We immediately see that $x^3+3$ has no zeros in $\Bbb{F}_7$ (the cubes in that field are $0,\pm1$), so it is irreducible. Therefore the polynomial has a zero $\alpha$ in $\Bbb{F}_{7^3}$.
To get the splitting field of $x^{15}-2$ we need, as you observed, the primitive 15th roots of unity. We easily see that $$7^4=2401\equiv1\pmod{15}.$$ The multiplicative group of the field $\Bbb{F}_{7^4}$ is cyclic of order $7^4-1$, and thus it contains a primitive 15th root of unity $\zeta$.
A consequence of all this is that the splitting field of this polynomial is $$\Bbb{F}_7[\alpha,\zeta]=\Bbb{F}_{7^{12}}.$$
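A quick added sanity check (Python, not part of the original answers) that $n=12$ is the least exponent with $45 \mid 7^n - 1$:

n, p = 1, 7 % 45
while p != 1:
    n, p = n + 1, (p * 7) % 45
print(n)  # 12, so the splitting field is GF(7^12)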
• Raising to fifth power is a permutation of $\Bbb{F}_7$, so I was immediately confident that this polynomial is not irreducible. You don't need Mathematica/WA/CASofYourChoice to find that. To see that the degree 12 factor above is irreducible you need more precise information from Eric Wofsey's answer (+1): the zeros of this polynomial form a coset of the group of fifteenth roots of unity inside the bigger group of 45th roots of unity, and it is easy to see that the said coset then contains primitive 45th roots of unity. Those have minimal polynomials of degree 12. Sep 30 '15 at 12:05
• Could you explain please the following: I see that $x^3+3$ has no zeros in $\mathbb{F}_7$. How it follows that the polynomial has a zero $\alpha$ in $\mathbb{F}_{7^3}$?
– ZFR
Mar 1 '19 at 16:20
• @ZFR Do you see why $p(x)=x^3+3$ is irreducible in $\Bbb{F}_7[x]$? It follows that one of the ways of constructing $\Bbb{F}_{7^3}$ is to form the quotient ring $K=\Bbb{F}_7[t]/\langle p(t)\rangle$. $K$ is a field because an irreducible polynomial generates a maximal ideal. And the coset $\alpha=t+\langle p(t)\rangle$ is then automatically a zero of $p$. Given that $|K|=7^3$ we can conclude that $K\simeq\Bbb{F}_{7^3}$. Assuming you have proven uniqueness of a finite field (up to isomorphism) of a given cardinality. Mar 2 '19 at 6:20
Here is a way to find the splitting field without having to factor the polynomial. Observe that $2$ is a primitive cube root of $1$ in $\mathbb{F}_7$, so $x^{15}-2$ splits completely in $\mathbb{F}_{7^n}$ iff there is a primitive $45$th root of $1$ in $\mathbb{F}_{7^n}$. It follows that the splitting field is $\mathbb{F}_{7^n}$ for the least $n$ such that $7^n-1$ is divisible by $45$. Doing some arithmetic mod $45$, it is not hard to compute that this $n$ is $12$. | 2021-09-24 18:14:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9462668895721436, "perplexity": 98.78144263476582}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057564.48/warc/CC-MAIN-20210924171348-20210924201348-00319.warc.gz"} |
https://proxies-free.com/c-calculation-of-the-sine-function/ | # c++ – Calculation of the sine function
I have been attempting to implement a function calculating values of the sine function.
I know that there are several similar threads regarding this topic, but my goal was to try to implement such a function in my own way as an exercise. For the time being I have the code given below in C++, which seems to be working (I have compared its outputs with the sine values calculated by Excel).
``````#define PI 3.14
#define TABLE_SIZE 256
#define STEP_SIZE (2*PI/255)
double lut[TABLE_SIZE] = {
0.0000, 0.0245, 0.0490, 0.0735, 0.0980, 0.1223, 0.1467, 0.1709,
0.1950, 0.2190, 0.2429, 0.2666, 0.2901, 0.3135, 0.3367, 0.3597,
0.3825, 0.4050, 0.4274, 0.4494, 0.4712, 0.4927, 0.5139, 0.5348,
0.5553, 0.5756, 0.5954, 0.6150, 0.6341, 0.6529, 0.6713, 0.6893,
0.7068, 0.7240, 0.7407, 0.7569, 0.7727, 0.7881, 0.8029, 0.8173,
0.8312, 0.8446, 0.8575, 0.8698, 0.8817, 0.8930, 0.9037, 0.9140,
0.9237, 0.9328, 0.9413, 0.9493, 0.9568, 0.9636, 0.9699, 0.9756,
0.9806, 0.9852, 0.9891, 0.9924, 0.9951, 0.9972, 0.9988, 0.9997,
1.0000, 0.9997, 0.9988, 0.9974, 0.9953, 0.9926, 0.9893, 0.9854,
0.9810, 0.9759, 0.9703, 0.9640, 0.9572, 0.9498, 0.9419, 0.9333,
0.9243, 0.9146, 0.9044, 0.8937, 0.8824, 0.8706, 0.8583, 0.8454,
0.8321, 0.8182, 0.8039, 0.7890, 0.7737, 0.7580, 0.7417, 0.7251,
0.7080, 0.6904, 0.6725, 0.6541, 0.6354, 0.6162, 0.5967, 0.5769,
0.5566, 0.5361, 0.5152, 0.4941, 0.4726, 0.4508, 0.4288, 0.4065,
0.3840, 0.3612, 0.3382, 0.3150, 0.2917, 0.2681, 0.2444, 0.2205,
0.1966, 0.1724, 0.1482, 0.1239, 0.0996, 0.0751, 0.0506, 0.0261,
0.0016, -0.0229, -0.0475, -0.0719, -0.0964, -0.1208, -0.1451, -0.1693,
-0.1934, -0.2174, -0.2413, -0.2650, -0.2886, -0.3120, -0.3352, -0.3582,
-0.3810, -0.4036, -0.4259, -0.4480, -0.4698, -0.4913, -0.5125, -0.5334,
-0.5540, -0.5743, -0.5942, -0.6137, -0.6329, -0.6517, -0.6701, -0.6881,
-0.7057, -0.7229, -0.7396, -0.7559, -0.7717, -0.7871, -0.8020, -0.8164,
-0.8303, -0.8437, -0.8566, -0.8690, -0.8809, -0.8923, -0.9031, -0.9133,
-0.9230, -0.9322, -0.9408, -0.9488, -0.9563, -0.9632, -0.9695, -0.9752,
-0.9803, -0.9849, -0.9888, -0.9922, -0.9950, -0.9971, -0.9987, -0.9996,
-1.0000, -0.9998, -0.9989, -0.9975, -0.9954, -0.9928, -0.9895, -0.9857,
-0.9813, -0.9762, -0.9706, -0.9644, -0.9577, -0.9503, -0.9424, -0.9339,
-0.9249, -0.9153, -0.9051, -0.8944, -0.8832, -0.8714, -0.8591, -0.8463,
-0.8330, -0.8191, -0.8048, -0.7900, -0.7747, -0.7590, -0.7428, -0.7262,
-0.7091, -0.6916, -0.6736, -0.6553, -0.6366, -0.6175, -0.5980, -0.5782,
-0.5580, -0.5374, -0.5166, -0.4954, -0.4740, -0.4522, -0.4302, -0.4080,
-0.3854, -0.3627, -0.3397, -0.3166, -0.2932, -0.2696, -0.2459, -0.2221,
-0.1981, -0.1740, -0.1498, -0.1255, -0.1011, -0.0767, -0.0522, -0.0277
};
double sine(double x, double lut[TABLE_SIZE])
{
    bool negateTableValue = false;
    if (x < 0) {
        // sin(-x) = -sin(x)
        x = -x;
        negateTableValue = true;
    }
    // Index of the table entry just below x, and the next entry up.
    uint8_t index_01 = x/STEP_SIZE;
    uint8_t index_02 = (index_01 + 1);
    // Linear interpolation between the two neighbouring table entries.
    double aux = (lut[index_02] - lut[index_01])/STEP_SIZE*(x - index_01*STEP_SIZE) + lut[index_01];
    if (negateTableValue) {
        return -aux;
    } else {
        return aux;
    }
}
```
The sine value calculation is based on a look-up table containing pre-computed values of the sine function covering the whole period $\left\langle 0, 2\pi\right\rangle$ with 256 values. I decided to use the linear interpolation method to improve the precision.
I have one doubt regarding the linear interpolation. Namely, I have been using a table with 256
entries, but most solutions that exploit linear interpolation use a look-up table with one additional entry. I would say that it isn't necessary in my case because the index variables are `uint8_t` type, i.e. they can store values from the range `0-255`. But I would like to hear others' opinions. Thank you in advance.
https://planetmath.org/BamonsTheorem | # Bamòn’s theorem
Every quadratic vector field in $\mathbb{R}^{2}$ has a finite number of limit cycles.[BR]
## Historical note
This theorem is weaker than Dulac's theorem, but at the time [BR] was published there was a known gap in the proof of Dulac's theorem.
## References
• BR: R. Bamòn, Quadratic vector fields in the plane have a finite number of limit cycles, Publ. I.H.E.S. 64 (1986), 111-142. http://archive.numdam.org/ARCHIVE/PMIHES/PMIHES_1986__64_/PMIHES_1986__64__111_0/PMIHES_1986__64__111_0.pdf
Title Bamòn’s theorem BamonsTheorem 2013-03-22 14:28:37 2013-03-22 14:28:37 Daume (40) Daume (40) 5 Daume (40) Theorem msc 34C07 | 2021-01-17 07:29:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 1, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7212476134300232, "perplexity": 5328.524386477683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703509973.34/warc/CC-MAIN-20210117051021-20210117081021-00429.warc.gz"} |
https://support.thunderheadeng.com/release-notes/pyrosim/2021/2021-4-1201/ | # PyroSim 2021.4.1201
Released on December 1, 2021
To install this version of PyroSim on your computer, choose an installer below. The "Portable Zip" option of PyroSim can be run without overwriting the version currently installed on your computer.
This release adds new features and bug fixes.
This version of PyroSim is designed for FDS version 6.7.6 and uses version 1.8.0_302 of the OpenJDK Java VM.
#### Changes since PyroSim 2021.3.0901:
• Added support for mathematical controls.
• Added an option to color appearances by using the object's color (see the documentation).
• Added the ability to have duplicate IDs for 3D slices.
• Added the ability to partition a selection of meshes from the context menu.
• Added recognition of holes/cavities within imported STL objects.
• Added orientation to Device Tool Properties dialog.
• Added an option to exclude unwanted Scenarios when exporting.
• Added a pre-simulation warning if PyroSim detects no openings to ambient conditions.
• Improved the display of imported STL objects (e.g. smooth objects now look smooth).
• Improved support for importing solid STL objects.
• Updated scientific notation to show the plus sign in the exponent.
• Updated the Stop and Kill action so that when the user stops the FDS simulation, Results does not pop up automatically.
• Updated Device snapping to include the centers of cells and solid faces.
• Updated FDS input file generation for meshes with transforms to use the more recent transform ID format rather than the legacy mesh number format. Both variations are accepted by import and paste.
• Updated JRE to version 1.8.0_302
• Fixed a bug where importing some FBX files might result in an ArrayIndexOutOfBoundsException error.
• Fixed a bug where deleting a material that contained a circular reference to another material in its byproducts would cause a crash.
• Fixed a bug where fire spread rate would persist in the Record View for a vent if the associated surface type was changed from a burner type.
• Fixed an untranslated heading in Simulation Parameters dialog.
• Fixed a bug that caused the view option Preview FDS Blocks to incorrectly become enabled when loading legacy PSM files.
• Fixed a bug where textures might display incorrectly on obstructions that intersect holes.
• Fixed a bug where the Spray Models dialog displays an incorrect unit of pressure in English mode.
• Fixed a bug where imported PRES records were not being added.
• Fixed a bug that caused PyroSim to parse and write thermocouple PROP namelist entries using legacy BEAD_ naming. When importing FDS input files, PyroSim will still accept the legacy thermocouple PROP entries.
• Fixed a bug that could cause PyroSim to use the bundled version of FDS for simulations after specifying a custom version in Preferences.
#### Changes to Results:
• Added gamma correction for improved lighting.
• In the Occupant Proximity Analysis dialog, increased the time that tooltips remain visible.
• Updated the default value for anisotropic filtering from 16 to 4.
• Fixed a bug where FDS slices that span multiple meshes were not being grouped or showing the mesh number in the Navigation View.
• Fixed a bug where FDS boundary output in multiple-mesh models all indicated they were in Mesh 0 in the Navigation View.
• Fixed a bug where textures might display incorrectly on PyroSim obstructions that intersect holes with control logic.
• Fixed typos.
#### Known Issues:
• Using a mathematical control as the input source to a deadband control can cause unpredictable FDS behavior.
See all other PyroSim Release Notes.
https://mathsgee.com/45687/let-say-that-integer-value-if-there-exist-integers-for-which | 18 views
Let $P(x, y)=2 x^2-6 x y+5 y^2$. We say that an integer $a$ is a value of $P$ if there exist integers $b, c$ for which $a=P(b, c)$.
(a) How many elements of $\{1,2, \ldots, 100\}$ are values of $P$ ?
(b) Prove that a product of values of $P$ is also a value of $P$.
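Not part of the original post, but part (a) invites a brute-force check. Expanding shows $2P(b,c) = (2b-3c)^2 + c^2$, which bounds the search region; a quick sketch:

```
#include <cstdio>
#include <set>

// Collect all values P(b, c) = 2b^2 - 6bc + 5c^2 that land in {1, ..., 100}.
// Since 2P = (2b - 3c)^2 + c^2, any value P <= 100 needs |c| <= 14 and
// |2b - 3c| <= 14, so scanning b, c in [-60, 60] is more than enough.
int main() {
    std::set<int> values;
    for (int b = -60; b <= 60; ++b)
        for (int c = -60; c <= 60; ++c) {
            int a = 2 * b * b - 6 * b * c + 5 * c * c;
            if (a >= 1 && a <= 100) values.insert(a);
        }
    std::printf("values of P in {1,...,100}: %zu\n", values.size());
    return 0;
}
```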
https://informesia.com/2901/write-the-formula-to-find-the-magnitude-of-the-gravitational-force-between-the-earth-and-an-object-on-the-surface-of-the-earth | 43 views
Write the formula to find the magnitude of the gravitational force between the earth and an object on the surface of the earth.
The formula for the magnitude of gravitational force between the earth and an object on its surface is
$$F=G \frac{M_{e} m}{R_{e}^{2}}$$
where $F$ is the gravitational force.
$G$ is the gravitational constant.
$M_{e}$ is the mass of the earth.
$m$ is the mass of the object on the surface of the earth.
$R_{e}$ is the radius of the earth.
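As a quick sanity check (an added illustration, not part of the original answer), plugging in standard values for the earth gives roughly $9.8\ \mathrm{N}$ on a $1\ \mathrm{kg}$ object at the surface, i.e. $F \approx mg$:

```
#include <cstdio>

int main() {
    const double G  = 6.674e-11;  // gravitational constant, N m^2 / kg^2
    const double Me = 5.972e24;   // mass of the earth, kg
    const double Re = 6.371e6;    // radius of the earth, m
    const double m  = 1.0;        // mass of the object, kg

    double F = G * Me * m / (Re * Re);
    std::printf("F = %.2f N\n", F);  // about 9.82 N
    return 0;
}
```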
https://en.wikipedia.org/wiki/Inverse_trigonometric_function | # Inverse trigonometric functions
In mathematics, the inverse trigonometric functions (occasionally called cyclometric functions[1]) are the inverse functions of the trigonometric functions (with suitably restricted domains). Specifically, they are the inverses of the sine, cosine, tangent, cotangent, secant, and cosecant functions. They are used to obtain an angle from any of the angle's trigonometric ratios. Inverse trigonometric functions are widely used in engineering, navigation, physics, and geometry.
## Notation
There are several notations used for the inverse trigonometric functions.
The most common convention is to name inverse trigonometric functions using an arc- prefix, e.g., arcsin(x), arccos(x), arctan(x), etc. This convention is used throughout the article. When measuring in radians, an angle of θ radians will correspond to an arc whose length is rθ, where r is the radius of the circle. Thus, in the unit circle, "the arc whose cosine is x" is the same as "the angle whose cosine is x", because the length of the arc of the circle in radii is the same as the measurement of the angle in radians.[2] Similarly, in computer programming languages (also Excel) the inverse trigonometric functions are usually called asin, acos, atan.
The notations sin−1(x), cos−1(x), tan−1(x), etc., as introduced by John Herschel in 1813,[3][4] are often used as well, but this convention logically conflicts with the common semantics for expressions like sin2(x), which refer to numeric power rather than function composition, and therefore may result in confusion between multiplicative inverse and compositional inverse. The confusion is somewhat ameliorated by the fact that each of the reciprocal trigonometric functions has its own name—for example, (cos(x))−1 = sec(x). Nevertheless, certain authors advise against using it for its ambiguity.[5]
Another convention used by a few authors[6] is to use a majuscule (capital/upper-case) first letter along with a −1 superscript, e.g., Sin−1(x), Cos−1(x), Tan−1(x), etc., which avoids confusing them with the multiplicative inverse, which should be represented by sin−1(x), cos−1(x), etc.
## Basic properties
### Principal values
Since none of the six trigonometric functions are one-to-one, they are restricted in order to have inverse functions. Therefore, the ranges of the inverse functions are proper subsets of the domains of the original functions.
For example, using function in the sense of multivalued functions, just as the square root function y = √x could be defined from y² = x, the function y = arcsin(x) is defined so that sin(y) = x. There are multiple numbers y such that sin(y) = x; for example, sin(0) = 0, but also sin(π) = 0, sin(2π) = 0, etc. When only one value is desired, the function may be restricted to its principal branch. With this restriction, for each x in the domain the expression arcsin(x) will evaluate only to a single value, called its principal value. These properties apply to all the inverse trigonometric functions.
The principal inverses are listed in the following table.
| Name | Usual notation | Definition | Domain of x for real result | Range of usual principal value (radians) | Range of usual principal value (degrees) |
| --- | --- | --- | --- | --- | --- |
| arcsine | y = arcsin(x) | x = sin(y) | −1 ≤ x ≤ 1 | −π/2 ≤ y ≤ π/2 | −90° ≤ y ≤ 90° |
| arccosine | y = arccos(x) | x = cos(y) | −1 ≤ x ≤ 1 | 0 ≤ y ≤ π | 0° ≤ y ≤ 180° |
| arctangent | y = arctan(x) | x = tan(y) | all real numbers | −π/2 < y < π/2 | −90° < y < 90° |
| arccotangent | y = arccot(x) | x = cot(y) | all real numbers | 0 < y < π | 0° < y < 180° |
| arcsecant | y = arcsec(x) | x = sec(y) | x ≤ −1 or 1 ≤ x | 0 ≤ y < π/2 or π/2 < y ≤ π | 0° ≤ y < 90° or 90° < y ≤ 180° |
| arccosecant | y = arccsc(x) | x = csc(y) | x ≤ −1 or 1 ≤ x | −π/2 ≤ y < 0 or 0 < y ≤ π/2 | −90° ≤ y < 0° or 0° < y ≤ 90° |
(Note: Some authors define the range of arcsecant to be ( 0 ≤ y < π/2 or π ≤ y < 3π/2 ), because the tangent function is nonnegative on this domain. This makes some computations more consistent. For example, using this range, tan(arcsec(x)) = √(x² − 1), whereas with the range ( 0 ≤ y < π/2 or π/2 < y ≤ π ) we would have to write tan(arcsec(x)) = ±√(x² − 1), since tangent is nonnegative on 0 ≤ y < π/2 but nonpositive on π/2 < y ≤ π. For a similar reason, the same authors define the range of arccosecant to be ( −π < y ≤ −π/2 or 0 < y ≤ π/2 ).)
If x is allowed to be a complex number, then the range of y applies only to its real part.
### Relationships between trigonometric functions and inverse trigonometric functions
Trigonometric functions of inverse trigonometric functions are tabulated below. A quick way to derive them is by considering the geometry of a right-angled triangle, with one side of length 1 and another side of length x (any real number between 0 and 1), then applying the Pythagorean theorem and the definitions of the trigonometric ratios. Purely algebraic derivations are longer.
| ${\displaystyle \theta }$ | ${\displaystyle \sin(\theta )}$ | ${\displaystyle \cos(\theta )}$ | ${\displaystyle \tan(\theta )}$ |
| --- | --- | --- | --- |
| ${\displaystyle \arcsin(x)}$ | ${\displaystyle \sin(\arcsin(x))=x}$ | ${\displaystyle \cos(\arcsin(x))={\sqrt {1-x^{2}}}}$ | ${\displaystyle \tan(\arcsin(x))={\frac {x}{\sqrt {1-x^{2}}}}}$ |
| ${\displaystyle \arccos(x)}$ | ${\displaystyle \sin(\arccos(x))={\sqrt {1-x^{2}}}}$ | ${\displaystyle \cos(\arccos(x))=x}$ | ${\displaystyle \tan(\arccos(x))={\frac {\sqrt {1-x^{2}}}{x}}}$ |
| ${\displaystyle \arctan(x)}$ | ${\displaystyle \sin(\arctan(x))={\frac {x}{\sqrt {1+x^{2}}}}}$ | ${\displaystyle \cos(\arctan(x))={\frac {1}{\sqrt {1+x^{2}}}}}$ | ${\displaystyle \tan(\arctan(x))=x}$ |
| ${\displaystyle \operatorname {arccsc}(x)}$ | ${\displaystyle \sin(\operatorname {arccsc}(x))={\frac {1}{x}}}$ | ${\displaystyle \cos(\operatorname {arccsc}(x))={\frac {\sqrt {x^{2}-1}}{x}}}$ | ${\displaystyle \tan(\operatorname {arccsc}(x))={\frac {1}{\sqrt {x^{2}-1}}}}$ |
| ${\displaystyle \operatorname {arcsec}(x)}$ | ${\displaystyle \sin(\operatorname {arcsec}(x))={\frac {\sqrt {x^{2}-1}}{x}}}$ | ${\displaystyle \cos(\operatorname {arcsec}(x))={\frac {1}{x}}}$ | ${\displaystyle \tan(\operatorname {arcsec}(x))={\sqrt {x^{2}-1}}}$ |
| ${\displaystyle \operatorname {arccot}(x)}$ | ${\displaystyle \sin(\operatorname {arccot}(x))={\frac {1}{\sqrt {1+x^{2}}}}}$ | ${\displaystyle \cos(\operatorname {arccot}(x))={\frac {x}{\sqrt {1+x^{2}}}}}$ | ${\displaystyle \tan(\operatorname {arccot}(x))={\frac {1}{x}}}$ |
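For illustration (not part of the original article), these identities are easy to spot-check numerically; a minimal sketch for the arctangent row:

```
#include <cmath>
#include <cstdio>

int main() {
    // Check sin(arctan(x)) = x/sqrt(1+x^2) and cos(arctan(x)) = 1/sqrt(1+x^2).
    for (double x : {0.25, 1.0, 3.0}) {
        double t = std::atan(x);
        std::printf("x=%4.2f  sin: %.6f vs %.6f   cos: %.6f vs %.6f\n",
                    x, std::sin(t), x / std::sqrt(1 + x * x),
                    std::cos(t), 1 / std::sqrt(1 + x * x));
    }
    return 0;
}
```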
### Relationships among the inverse trigonometric functions
The usual principal values of the arcsin(x) (red) and arccos(x) (blue) functions graphed on the cartesian plane.
The usual principal values of the arctan(x) and arccot(x) functions graphed on the cartesian plane.
Principal values of the arcsec(x) and arccsc(x) functions graphed on the cartesian plane.
Complementary angles:
{\displaystyle {\begin{aligned}\arccos(x)&={\frac {\pi }{2}}-\arcsin(x)\\[0.5em]\operatorname {arccot}(x)&={\frac {\pi }{2}}-\arctan(x)\\[0.5em]\operatorname {arccsc}(x)&={\frac {\pi }{2}}-\operatorname {arcsec}(x)\end{aligned}}}
Negative arguments:
{\displaystyle {\begin{aligned}\arcsin(-x)&=-\arcsin(x)\\\arccos(-x)&=\pi -\arccos(x)\\\arctan(-x)&=-\arctan(x)\\\operatorname {arccot}(-x)&=\pi -\operatorname {arccot}(x)\\\operatorname {arcsec}(-x)&=\pi -\operatorname {arcsec}(x)\\\operatorname {arccsc}(-x)&=-\operatorname {arccsc}(x)\end{aligned}}}
Reciprocal arguments:
{\displaystyle {\begin{aligned}\arccos \left({\frac {1}{x}}\right)&=\operatorname {arcsec}(x)\\[0.3em]\arcsin \left({\frac {1}{x}}\right)&=\operatorname {arccsc}(x)\\[0.3em]\arctan \left({\frac {1}{x}}\right)&={\frac {\pi }{2}}-\arctan(x)=\operatorname {arccot}(x)\,,{\text{ if }}x>0\\[0.3em]\arctan \left({\frac {1}{x}}\right)&=-{\frac {\pi }{2}}-\arctan(x)=\operatorname {arccot}(x)-\pi \,,{\text{ if }}x<0\\[0.3em]\operatorname {arccot} \left({\frac {1}{x}}\right)&={\frac {\pi }{2}}-\operatorname {arccot}(x)=\arctan(x)\,,{\text{ if }}x>0\\[0.3em]\operatorname {arccot} \left({\frac {1}{x}}\right)&={\frac {3\pi }{2}}-\operatorname {arccot}(x)=\pi +\arctan(x)\,,{\text{ if }}x<0\\[0.3em]\operatorname {arcsec} \left({\frac {1}{x}}\right)&=\arccos(x)\\[0.3em]\operatorname {arccsc} \left({\frac {1}{x}}\right)&=\arcsin(x)\end{aligned}}}
If you only have a fragment of a sine table:
{\displaystyle {\begin{aligned}\arccos(x)&=\arcsin \left({\sqrt {1-x^{2}}}\right)\,,{\text{ if }}0\leq x\leq 1\\\arccos(x)&={\frac {1}{2}}\arccos \left(2x^{2}-1\right)\,,{\text{ if }}0\leq x\leq 1\\\arcsin(x)&={\frac {1}{2}}\arccos \left(1-2x^{2}\right)\,,{\text{ if }}0\leq x\leq 1\\\arctan(x)&=\arcsin \left({\frac {x}{\sqrt {x^{2}+1}}}\right)\end{aligned}}}
Whenever the square root of a complex number is used here, we choose the root with the positive real part (or positive imaginary part if the square was negative real).
From the half-angle formula, ${\displaystyle \tan \left({\tfrac {\theta }{2}}\right)={\tfrac {\sin \theta }{1+\cos \theta }}}$, we get:
{\displaystyle {\begin{aligned}\arcsin(x)&=2\arctan \left({\frac {x}{1+{\sqrt {1-x^{2}}}}}\right)\\[0.5em]\arccos(x)&=2\arctan \left({\frac {\sqrt {1-x^{2}}}{1+x}}\right)\,,{\text{ if }}-1<x\leq 1\end{aligned}}}

The arctangent addition formula:
${\displaystyle \arctan(u)+\arctan(v)=\arctan \left({\frac {u+v}{1-uv}}\right){\pmod {\pi }}\,,\quad uv\neq 1\,.}$
This is derived from the tangent addition formula
${\displaystyle \tan(\alpha +\beta )={\frac {\tan(\alpha )+\tan(\beta )}{1-\tan(\alpha )\tan(\beta )}}\,,}$
by letting
${\displaystyle \alpha =\arctan(u)\,,\quad \beta =\arctan(v)\,.}$
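For illustration (not part of the original article), here is a quick numerical check of the addition formula; the values are chosen with uv > 1 so that the identity visibly holds only modulo π:

```
#include <cmath>
#include <cstdio>

int main() {
    double u = 2.0, v = 3.0;  // uv = 6 > 1
    double lhs = std::atan(u) + std::atan(v);
    double rhs = std::atan((u + v) / (1 - u * v));
    // lhs - rhs comes out as pi, confirming equality mod pi.
    std::printf("lhs = %.9f, rhs = %.9f, lhs - rhs = %.9f\n", lhs, rhs, lhs - rhs);
    return 0;
}
```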
## In calculus
### Derivatives of inverse trigonometric functions
The derivatives for complex values of z are as follows:
{\displaystyle {\begin{aligned}{\frac {\mathrm {d} }{\mathrm {d} z}}\arcsin(z)&{}={\frac {1}{\sqrt {1-z^{2}}}}\;;&z&{}\neq -1,+1\\{\frac {\mathrm {d} }{\mathrm {d} z}}\arccos(z)&{}=-{\frac {1}{\sqrt {1-z^{2}}}}\;;&z&{}\neq -1,+1\\{\frac {\mathrm {d} }{\mathrm {d} z}}\arctan(z)&{}={\frac {1}{1+z^{2}}}\;;&z&{}\neq -\mathrm {i} ,+\mathrm {i} \\{\frac {\mathrm {d} }{\mathrm {d} z}}\operatorname {arccot}(z)&{}=-{\frac {1}{1+z^{2}}}\;;&z&{}\neq -\mathrm {i} ,+\mathrm {i} \\{\frac {\mathrm {d} }{\mathrm {d} z}}\operatorname {arcsec}(z)&{}={\frac {1}{z^{2}{\sqrt {1-{\frac {1}{z^{2}}}}}}}\;;&z&{}\neq -1,0,+1\\{\frac {\mathrm {d} }{\mathrm {d} z}}\operatorname {arccsc}(z)&{}=-{\frac {1}{z^{2}{\sqrt {1-{\frac {1}{z^{2}}}}}}}\;;&z&{}\neq -1,0,+1\end{aligned}}}
Only for real values of x:
{\displaystyle {\begin{aligned}{\frac {\mathrm {d} }{\mathrm {d} x}}\operatorname {arcsec}(x)&{}={\frac {1}{|x|{\sqrt {x^{2}-1}}}}\;;&|x|>1\\{\frac {\mathrm {d} }{\mathrm {d} x}}\operatorname {arccsc}(x)&{}=-{\frac {1}{|x|{\sqrt {x^{2}-1}}}}\;;&|x|>1\end{aligned}}}
For a sample derivation: if ${\displaystyle \theta =\arcsin x\!}$, we get:
${\displaystyle {\frac {\mathrm {d} \arcsin(x)}{\mathrm {d} x}}={\frac {\mathrm {d} \theta }{\mathrm {d} \sin(\theta )}}={\frac {\mathrm {d} \theta }{\cos(\theta )\mathrm {d} \theta }}={\frac {1}{\cos(\theta )}}={\frac {1}{\sqrt {1-\sin ^{2}(\theta )}}}={\frac {1}{\sqrt {1-x^{2}}}}}$
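A central-difference sanity check of this derivative (an added sketch, not from the article):

```
#include <cmath>
#include <cstdio>

int main() {
    double x = 0.5, h = 1e-6;
    double numeric  = (std::asin(x + h) - std::asin(x - h)) / (2 * h);
    double analytic = 1.0 / std::sqrt(1 - x * x);
    std::printf("numeric %.9f vs analytic %.9f\n", numeric, analytic);
    return 0;
}
```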
### Expression as definite integrals
Integrating the derivative and fixing the value at one point gives an expression for the inverse trigonometric function as a definite integral:
{\displaystyle {\begin{aligned}\arcsin(x)&{}=\int _{0}^{x}{\frac {1}{\sqrt {1-z^{2}}}}\,\mathrm {d} z\;,&|x|&{}\leq 1\\\arccos(x)&{}=\int _{x}^{1}{\frac {1}{\sqrt {1-z^{2}}}}\,\mathrm {d} z\;,&|x|&{}\leq 1\\\arctan(x)&{}=\int _{0}^{x}{\frac {1}{z^{2}+1}}\,\mathrm {d} z\;,\\\operatorname {arccot}(x)&{}=\int _{x}^{\infty }{\frac {1}{z^{2}+1}}\,\mathrm {d} z\;,\\\operatorname {arcsec}(x)&{}=\int _{1}^{x}{\frac {1}{z{\sqrt {z^{2}-1}}}}\,\mathrm {d} z=\mathrm {\pi } +\int _{x}^{-1}{\frac {1}{z{\sqrt {z^{2}-1}}}}\,\mathrm {d} z\;,&x&{}\geq 1\\\operatorname {arccsc}(x)&{}=\int _{x}^{\infty }{\frac {1}{z{\sqrt {z^{2}-1}}}}\,\mathrm {d} z=\int _{-\infty }^{x}{\frac {1}{z{\sqrt {z^{2}-1}}}}\,\mathrm {d} z\;,&x&{}\geq 1\\\end{aligned}}}
When x equals 1, the integrals with limited domains are improper integrals, but still well-defined.
### Infinite series
Like the sine and cosine functions, the inverse trigonometric functions can be calculated using power series, as follows. For arcsine, the series can be derived by expanding its derivative, ${\displaystyle {\frac {1}{\sqrt {1-z^{2}}}}}$, as a binomial series, and integrating term by term (using the integral definition as above). The series for arctangent can similarly be derived by expanding its derivative ${\displaystyle {\frac {1}{1+z^{2}}}}$ in a geometric series and applying the integral definition above (see Leibniz series).
${\displaystyle \arcsin(z)=z+\left({\frac {1}{2}}\right){\frac {z^{3}}{3}}+\left({\frac {1\cdot 3}{2\cdot 4}}\right){\frac {z^{5}}{5}}+\left({\frac {1\cdot 3\cdot 5}{2\cdot 4\cdot 6}}\right){\frac {z^{7}}{7}}+\cdots =\sum _{n=0}^{\infty }{\frac {{\binom {2n}{n}}z^{2n+1}}{4^{n}(2n+1)}}\,;\qquad |z|\leq 1}$
${\displaystyle \arccos(z)={\frac {\pi }{2}}-\arcsin(z)={\frac {\pi }{2}}-\left(z+\left({\frac {1}{2}}\right){\frac {z^{3}}{3}}+\left({\frac {1\cdot 3}{2\cdot 4}}\right){\frac {z^{5}}{5}}+\cdots \right)={\frac {\pi }{2}}-\sum _{n=0}^{\infty }{\frac {{\binom {2n}{n}}z^{2n+1}}{4^{n}(2n+1)}}\,;\qquad |z|\leq 1}$
${\displaystyle \arctan(z)=z-{\frac {z^{3}}{3}}+{\frac {z^{5}}{5}}-{\frac {z^{7}}{7}}+\cdots =\sum _{n=0}^{\infty }{\frac {(-1)^{n}z^{2n+1}}{2n+1}}\,;\qquad |z|\leq 1\qquad z\neq \mathrm {i} ,-\mathrm {i} }$
${\displaystyle \operatorname {arccot}(z)={\frac {\pi }{2}}-\arctan(z)={\frac {\pi }{2}}-\left(z-{\frac {z^{3}}{3}}+{\frac {z^{5}}{5}}-{\frac {z^{7}}{7}}+\cdots \right)={\frac {\pi }{2}}-\sum _{n=0}^{\infty }{\frac {(-1)^{n}z^{2n+1}}{2n+1}}\,;\qquad |z|\leq 1\qquad z\neq \mathrm {i} ,-\mathrm {i} }$
${\displaystyle \operatorname {arcsec}(z)=\arccos \left({\frac {1}{z}}\right)={\frac {\pi }{2}}-\left(z^{-1}+\left({\frac {1}{2}}\right){\frac {z^{-3}}{3}}+\left({\frac {1\cdot 3}{2\cdot 4}}\right){\frac {z^{-5}}{5}}+\cdots \right)={\frac {\pi }{2}}-\sum _{n=0}^{\infty }{\frac {{\binom {2n}{n}}z^{-(2n+1)}}{4^{n}(2n+1)}}\,;\qquad |z|\geq 1}$
${\displaystyle \operatorname {arccsc}(z)=\arcsin \left({\frac {1}{z}}\right)=z^{-1}+\left({\frac {1}{2}}\right){\frac {z^{-3}}{3}}+\left({\frac {1\cdot 3}{2\cdot 4}}\right){\frac {z^{-5}}{5}}+\cdots =\sum _{n=0}^{\infty }{\frac {{\binom {2n}{n}}z^{-(2n+1)}}{4^{n}(2n+1)}}\,;\qquad |z|\geq 1}$
${\displaystyle 2\arcsin ^{2}{\frac {x}{2}}=\sum _{n=1}^{\infty }{\frac {x^{2n}}{n^{2}{\binom {2n}{n}}}}}$ [7]
${\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n^{2}{\binom {2n}{n}}}}={\frac {\pi ^{2}}{18}}}$
${\displaystyle \sum _{n=1}^{\infty }{\frac {(-1)^{n+1}}{n^{2}{\binom {2n}{n}}}}=2\ln ^{2}\varphi }$, where ${\displaystyle \varphi }$ is the golden ratio.
Leonhard Euler found a more efficient series for the arctangent, which is:
${\displaystyle \arctan(z)={\frac {z}{1+z^{2}}}\sum _{n=0}^{\infty }\prod _{k=1}^{n}{\frac {2kz^{2}}{(2k+1)(1+z^{2})}}\,.}$
(Notice that the term in the sum for n = 0 is the empty product which is 1.)
Alternatively, this can be expressed:
${\displaystyle \arctan z=\sum _{n=0}^{\infty }{\frac {2^{2n}(n!)^{2}}{(2n+1)!}}\;{\frac {z^{2n+1}}{(1+z^{2})^{n+1}}}}$
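As an added sketch (not from the article), Euler's series is convenient to evaluate because each summand is the previous one multiplied by a simple factor:

```
#include <cmath>
#include <cstdio>

// Euler's series: arctan(z) = z/(1+z^2) * sum of products; summand n is
// summand n-1 times 2n*z^2 / ((2n+1)*(1+z^2)), starting from the empty product 1.
double arctan_euler(double z, int terms) {
    double w = z * z / (1 + z * z);
    double term = 1.0, sum = 1.0;
    for (int n = 1; n < terms; ++n) {
        term *= 2.0 * n / (2.0 * n + 1.0) * w;
        sum += term;
    }
    return z / (1 + z * z) * sum;
}

int main() {
    std::printf("euler: %.12f\n", arctan_euler(1.0, 40));
    std::printf("atan:  %.12f\n", std::atan(1.0));  // pi/4
    return 0;
}
```

Even at z = 1, where the Leibniz-type series above converges painfully slowly, the factor w = 1/2 makes this sum converge geometrically.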
#### Variant: Continued fractions for arctangent
Two alternatives to the power series for arctangent are these generalized continued fractions:
${\displaystyle \arctan(z)={\frac {z}{1+{\cfrac {(1z)^{2}}{3-1z^{2}+{\cfrac {(3z)^{2}}{5-3z^{2}+{\cfrac {(5z)^{2}}{7-5z^{2}+{\cfrac {(7z)^{2}}{9-7z^{2}+\ddots }}}}}}}}}}={\frac {z}{1+{\cfrac {(1z)^{2}}{3+{\cfrac {(2z)^{2}}{5+{\cfrac {(3z)^{2}}{7+{\cfrac {(4z)^{2}}{9+\ddots }}}}}}}}}}}$
The second of these is valid in the cut complex plane. There are two cuts, from −i to the point at infinity, going down the imaginary axis, and from i to the point at infinity, going up the same axis. It works best for real numbers running from −1 to 1. The partial denominators are the odd natural numbers, and the partial numerators (after the first) are just (nz)2, with each perfect square appearing once. The first was developed by Leonhard Euler; the second by Carl Friedrich Gauss utilizing the Gaussian hypergeometric series.
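An added sketch (not from the article) evaluating the second continued fraction bottom-up to a fixed depth:

```
#include <cmath>
#include <cstdio>

// Gauss's continued fraction:
// arctan(z) = z / (1 + (1z)^2 / (3 + (2z)^2 / (5 + (3z)^2 / (7 + ...)))).
double arctan_cf(double z, int depth) {
    double f = 2.0 * depth + 1.0;              // innermost partial denominator
    for (int n = depth; n >= 1; --n)
        f = (2.0 * n - 1.0) + (n * z) * (n * z) / f;
    return z / f;
}

int main() {
    std::printf("cf:   %.12f\n", arctan_cf(1.0, 20));
    std::printf("atan: %.12f\n", std::atan(1.0));
    return 0;
}
```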
### Indefinite integrals of inverse trigonometric functions
For real and complex values of z:
{\displaystyle {\begin{aligned}\int \arcsin(z)\,\mathrm {d} z&{}=z\,\arcsin(z)+{\sqrt {1-z^{2}}}+C\\\int \arccos(z)\,\mathrm {d} z&{}=z\,\arccos(z)-{\sqrt {1-z^{2}}}+C\\\int \arctan(z)\,\mathrm {d} z&{}=z\,\arctan(z)-{\frac {1}{2}}\ln \left(1+z^{2}\right)+C\\\int \operatorname {arccot}(z)\,\mathrm {d} z&{}=z\,\operatorname {arccot}(z)+{\frac {1}{2}}\ln \left(1+z^{2}\right)+C\\\int \operatorname {arcsec}(z)\,\mathrm {d} z&{}=z\,\operatorname {arcsec}(z)-\ln \left[z\left(1+{\sqrt {\frac {z^{2}-1}{z^{2}}}}\right)\right]+C\\\int \operatorname {arccsc}(z)\,\mathrm {d} z&{}=z\,\operatorname {arccsc}(z)+\ln \left[z\left(1+{\sqrt {\frac {z^{2}-1}{z^{2}}}}\right)\right]+C\end{aligned}}}
For real x ≥ 1:
{\displaystyle {\begin{aligned}\int \operatorname {arcsec}(x)\,\mathrm {d} x&{}=x\,\operatorname {arcsec}(x)-\ln \left(x+{\sqrt {x^{2}-1}}\right)+C\\\int \operatorname {arccsc}(x)\,\mathrm {d} x&{}=x\,\operatorname {arccsc}(x)+\ln \left(x+{\sqrt {x^{2}-1}}\right)+C\end{aligned}}}
For all real x not between -1 and 1:
{\displaystyle {\begin{aligned}\int \operatorname {arcsec}(x)\,\mathrm {d} x&{}=x\,\operatorname {arcsec}(x)-\operatorname {sgn}(x)\ln \left(\left|x+{\sqrt {x^{2}-1}}\right|\right)+C\\\int \operatorname {arccsc}(x)\,\mathrm {d} x&{}=x\,\operatorname {arccsc}(x)+\operatorname {sgn}(x)\ln \left(\left|x+{\sqrt {x^{2}-1}}\right|\right)+C\end{aligned}}}
The absolute value is necessary to compensate for both negative and positive values of the arcsecant and arccosecant functions. The signum function is also necessary due to the absolute values in the derivatives of the two functions, which create two different solutions for positive and negative values of x. These can be further simplified using the logarithmic definitions of the inverse hyperbolic functions:
{\displaystyle {\begin{aligned}\int \operatorname {arcsec}(x)\,\mathrm {d} x&{}=x\,\operatorname {arcsec}(x)-\operatorname {arcosh} (|x|)+C\\\int \operatorname {arccsc}(x)\,\mathrm {d} x&{}=x\,\operatorname {arccsc}(x)+\operatorname {arcosh} (|x|)+C\\\end{aligned}}}
The absolute value in the argument of the arcosh function creates a negative half of its graph, making it identical to the signum logarithmic function shown above.
All of these antiderivatives can be derived using integration by parts and the simple derivative forms shown above.
#### Example
Using ${\displaystyle \int u\,\mathrm {d} v=uv-\int v\,\mathrm {d} u}$, set
{\displaystyle {\begin{aligned}u&{}=&\arcsin(x)&\quad \quad \mathrm {d} v=\mathrm {d} x\\\mathrm {d} u&{}=&{\frac {\mathrm {d} x}{\sqrt {1-x^{2}}}}&\quad \quad {}v=x\end{aligned}}}
Then
${\displaystyle \int \arcsin(x)\,\mathrm {d} x=x\arcsin(x)-\int {\frac {x}{\sqrt {1-x^{2}}}}\,\mathrm {d} x}$
Substitute
${\displaystyle w=1-x^{2}\,.}$
Then
${\displaystyle \mathrm {d} w=-2x\,\mathrm {d} x}$
and
${\displaystyle \int {\frac {x}{\sqrt {1-x^{2}}}}\,\mathrm {d} x=-{\frac {1}{2}}\int {\frac {\mathrm {d} w}{\sqrt {w}}}=-{\sqrt {w}}}$
Back-substitute for x to yield
${\displaystyle \int \arcsin(x)\,\mathrm {d} x=x\arcsin(x)+{\sqrt {1-x^{2}}}+C}$
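A quick numerical confirmation (an added sketch, not from the article) that the antiderivative just found differentiates back to arcsin:

```
#include <cmath>
#include <cstdio>

double F(double x) { return x * std::asin(x) + std::sqrt(1 - x * x); }

int main() {
    double x = 0.3, h = 1e-6;
    std::printf("F'(x) ~ %.9f, arcsin(x) = %.9f\n",
                (F(x + h) - F(x - h)) / (2 * h), std::asin(x));
    return 0;
}
```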
## Extension to complex plane
Since the inverse trigonometric functions are analytic functions, they can be extended from the real line to the complex plane. This results in functions with multiple sheets and branch points. One possible way of defining the extensions is:
${\displaystyle \arctan(z)=\int _{0}^{z}{\frac {\mathrm {d} x}{1+x^{2}}}\quad z\neq -\mathrm {i} ,+\mathrm {i} \,}$
where the part of the imaginary axis which does not lie strictly between −i and +i is the cut between the principal sheet and other sheets;
${\displaystyle \arcsin(z)=\arctan \left({\frac {z}{\sqrt {1-z^{2}}}}\right)\quad z\neq -1,+1\,}$
where (the square-root function has its cut along the negative real axis and) the part of the real axis which does not lie strictly between −1 and +1 is the cut between the principal sheet of arcsin and other sheets;
${\displaystyle \arccos(z)={\frac {\mathrm {\pi } }{2}}-\arcsin(z)\quad z\neq -1,+1\,}$
which has the same cut as arcsin;
${\displaystyle \operatorname {arccot}(z)={\frac {\mathrm {\pi } }{2}}-\arctan(z)\quad z\neq \mathrm {-i,+i} \,}$
which has the same cut as arctan;
${\displaystyle \operatorname {arcsec}(z)=\arccos \left({\frac {1}{z}}\right)\quad z\neq -1,0,+1\,}$
where the part of the real axis between −1 and +1 inclusive is the cut between the principal sheet of arcsec and other sheets;
${\displaystyle \operatorname {arccsc}(z)=\arcsin \left({\frac {1}{z}}\right)\quad z\neq -1,0,+1\,}$
which has the same cut as arcsec.
### Logarithmic forms
These functions may also be expressed using complex logarithms. This extends in a natural fashion their domain to the complex plane.
{\displaystyle {\begin{aligned}\arcsin(z)&{}=\mathrm {-i} \ln \left(\mathrm {i} z+{\sqrt {1-z^{2}}}\right)&{}=\operatorname {arccsc} \left({\frac {1}{z}}\right)\\[10pt]\arccos(z)&{}=\mathrm {-i} \ln \left(z+{\sqrt {z^{2}-1}}\right)={\frac {\pi }{2}}\,+\mathrm {i} \ln \left(\mathrm {i} z+{\sqrt {1-z^{2}}}\right)={\frac {\pi }{2}}-\arcsin(z)&{}=\operatorname {arcsec} \left({\frac {1}{z}}\right)\\[10pt]\arctan(z)&{}={\tfrac {1}{2}}\mathrm {i} \left[\ln \left(1-\mathrm {i} z\right)-\ln \left(1+\mathrm {i} z\right)\right]&{}=\operatorname {arccot} \left({\frac {1}{z}}\right)\\[10pt]\operatorname {arccot}(z)&{}={\tfrac {1}{2}}\mathrm {i} \left[\ln \left(1-{\frac {\mathrm {i} }{z}}\right)-\ln \left(1+{\frac {\mathrm {i} }{z}}\right)\right]&{}=\arctan \left({\frac {1}{z}}\right)\\[10pt]\operatorname {arcsec}(z)&{}=\mathrm {-i} \,\ln \left({\sqrt {{\frac {1}{z^{2}}}-1}}+{\frac {1}{z}}\right)=\mathrm {i} \,\ln \left({\sqrt {1-{\frac {1}{z^{2}}}}}+{\frac {\mathrm {i} }{z}}\right)+{\frac {\pi }{2}}={\frac {\pi }{2}}-\operatorname {arccsc}(z)&{}=\arccos \left({\frac {1}{z}}\right)\\[10pt]\operatorname {arccsc}(z)&{}=\mathrm {-i} \ln \left({\sqrt {1-{\frac {1}{z^{2}}}}}+{\frac {\mathrm {i} }{z}}\right)&{}=\arcsin \left({\frac {1}{z}}\right)\end{aligned}}}
Elementary proofs of these relations proceed via expansion to exponential forms of the trigonometric functions.
#### Example proof
${\displaystyle \sin(\phi )=z}$
${\displaystyle \phi =\arcsin(z)}$
Using the exponential definition of sine, one obtains
${\displaystyle z={\frac {\mathrm {e} ^{\phi \mathrm {i} }-\mathrm {e} ^{-\phi \mathrm {i} }}{2\mathrm {i} }}}$
Let
${\displaystyle \xi =\mathrm {e} ^{\phi \mathrm {i} }\,}$
Solving for ${\displaystyle \phi }$
${\displaystyle z={\frac {\xi -{\frac {1}{\xi }}}{2\mathrm {i} }}}$
${\displaystyle 2\mathrm {i} z={\xi -{\frac {1}{\xi }}}}$
${\displaystyle {\xi -2\mathrm {i} z-{\frac {1}{\xi }}}=0}$
${\displaystyle \xi ^{2}-2\mathrm {i} \xi z-1\,=\,0}$
${\displaystyle \xi =\mathrm {i} z\pm {\sqrt {1-z^{2}}}=\mathrm {e} ^{\phi \mathrm {i} }}$
${\displaystyle \phi \mathrm {i} =\ln \left(\mathrm {i} z\pm {\sqrt {1-z^{2}}}\right)}$
${\displaystyle \phi =\mathrm {-i} \ln \left(\mathrm {i} z\pm {\sqrt {1-z^{2}}}\right)}$
(the positive branch is chosen)
${\displaystyle \phi =\arcsin(z)=\mathrm {-i} \ln \left(\mathrm {i} z+{\sqrt {1-z^{2}}}\right)}$
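The logarithmic form is straightforward to evaluate with complex arithmetic; an added sketch (not from the article):

```
#include <complex>
#include <cstdio>

// arcsin(z) = -i * log(i*z + sqrt(1 - z^2)), using std::complex throughout.
std::complex<double> arcsin_log(std::complex<double> z) {
    const std::complex<double> i(0.0, 1.0);
    return -i * std::log(i * z + std::sqrt(1.0 - z * z));
}

int main() {
    double x = 0.5;
    std::printf("log form: %.9f, std::asin: %.9f\n",
                arcsin_log(x).real(), std::asin(x));  // both pi/6
    return 0;
}
```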
Gallery: the principal branches of ${\displaystyle \arcsin(z)}$, ${\displaystyle \arccos(z)}$, ${\displaystyle \arctan(z)}$, ${\displaystyle \operatorname {arccot}(z)}$, ${\displaystyle \operatorname {arcsec}(z)}$ and ${\displaystyle \operatorname {arccsc}(z)}$ plotted in the complex plane.
## Applications
### General solutions
Each of the trigonometric functions is periodic in the real part of its argument, running through all its values twice in each interval of 2π. Sine and cosecant begin their period at 2πk − π/2 (where k is an integer), finish it at 2πk + π/2, and then reverse themselves over 2πk + π/2 to 2πk + 3π/2. Cosine and secant begin their period at 2πk, finish it at 2πk + π, and then reverse themselves over 2πk + π to 2πk + 2π. Tangent begins its period at 2πk − π/2, finishes it at 2πk + π/2, and then repeats it (forward) over 2πk + π/2 to 2πk + 3π/2. Cotangent begins its period at 2πk, finishes it at 2πk + π, and then repeats it (forward) over 2πk + π to 2πk + 2π.
This periodicity is reflected in the general inverses where k is some integer:
${\displaystyle \sin(y)=x\;\Leftrightarrow \;y=\arcsin(x)+2\mathrm {\pi } k\;{\text{ or }}\;y=\mathrm {\pi } -\arcsin(x)+2\mathrm {\pi } k}$
Which, written in one equation, is: ${\displaystyle \sin(y)=x\;\Leftrightarrow \;y=(-1)^{k}\arcsin(x)+\mathrm {\pi } k}$
${\displaystyle \cos(y)=x\;\Leftrightarrow \;y=\arccos(x)+2\mathrm {\pi } k\;{\text{ or }}\;y=2\mathrm {\pi } -\arccos(x)+2\mathrm {\pi } k}$
Which, written in one equation, is: ${\displaystyle \cos(y)=x\;\Leftrightarrow \;y=\pm \arccos(x)+2\mathrm {\pi } k}$
${\displaystyle \tan(y)=x\;\Leftrightarrow \;y=\arctan(x)+\mathrm {\pi } k}$
${\displaystyle \cot(y)=x\;\Leftrightarrow \;y=\operatorname {arccot}(x)+\mathrm {\pi } k}$
${\displaystyle \sec(y)=x\;\Leftrightarrow \;y=\operatorname {arcsec}(x)+2\mathrm {\pi } k{\text{ or }}y=2\mathrm {\pi } -\operatorname {arcsec}(x)+2\mathrm {\pi } k}$
${\displaystyle \csc(y)=x\;\Leftrightarrow \;y=\operatorname {arccsc}(x)+2\mathrm {\pi } k{\text{ or }}y=\mathrm {\pi } -\operatorname {arccsc}(x)+2\mathrm {\pi } k}$
#### Application: finding the angle of a right triangle
A right triangle.
Inverse trigonometric functions are useful when trying to determine the remaining two angles of a right triangle when the lengths of the sides of the triangle are known. Recalling the right-triangle definitions of sine, for example, it follows that
${\displaystyle \theta =\arcsin \left({\frac {\text{opposite}}{\text{hypotenuse}}}\right)\,.}$
Often, the hypotenuse is unknown and would need to be calculated before using arcsine or arccosine using the Pythagorean Theorem: ${\displaystyle a^{2}+b^{2}=h^{2}}$ where ${\displaystyle h}$ is the length of the hypotenuse. Arctangent comes in handy in this situation, as the length of the hypotenuse is not needed.
${\displaystyle \theta =\arctan \left({\frac {\text{opposite}}{\text{adjacent}}}\right)\,.}$
For example, suppose a roof drops 8 feet as it runs out 20 feet. The roof makes an angle θ with the horizontal, where θ may be computed as follows:
${\displaystyle \theta =\arctan \left({\frac {\text{opposite}}{\text{adjacent}}}\right)=\arctan \left({\frac {\text{rise}}{\text{run}}}\right)=\arctan \left({\frac {8}{20}}\right)\approx 21.8^{\circ }\,.}$
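The same computation as a tiny program (an added illustration, not from the article):

```
#include <cmath>
#include <cstdio>

int main() {
    const double pi = std::acos(-1.0);
    double rise = 8.0, run = 20.0;
    double theta = std::atan(rise / run) * 180.0 / pi;  // radians to degrees
    std::printf("roof angle: %.1f degrees\n", theta);   // 21.8
    return 0;
}
```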
### In computer science and engineering
#### Two-argument variant of arctangent
Main article: atan2
The two-argument atan2 function computes the arctangent of y / x given y and x, but with a range of (−π, π]. In other words, atan2(y, x) is the angle between the positive x-axis of a plane and the point (x, y) on it, with positive sign for counter-clockwise angles (upper half-plane, y > 0), and negative sign for clockwise angles (lower half-plane, y < 0). It was first introduced in many computer programming languages, but it is now also common in other fields of science and engineering.
In terms of the standard arctan function, that is with range of (−π/2, π/2), it can be expressed as follows:
${\displaystyle \operatorname {atan2} (y,x)={\begin{cases}\arctan({\frac {y}{x}})&\quad x>0\\\arctan({\frac {y}{x}})+\mathrm {\pi } &\quad y\geq 0\;,\;x<0\\\arctan({\frac {y}{x}})-\mathrm {\pi } &\quad y<0\;,\;x<0\\{\frac {\mathrm {\pi } }{2}}&\quad y>0\;,\;x=0\\-{\frac {\mathrm {\pi } }{2}}&\quad y<0\;,\;x=0\\{\text{undefined}}&\quad y=0\;,\;x=0\end{cases}}}$
It also equals the principal value of the argument of the complex number x + iy.
This function may also be defined using the tangent half-angle formulae as follows:
${\displaystyle \operatorname {atan2} (y,x)=2\arctan \left({\frac {y}{{\sqrt {x^{2}+y^{2}}}+x}}\right)}$
provided that either x > 0 or y ≠ 0. However this fails if given x ≤ 0 and y = 0 so the expression is unsuitable for computational use.
The above argument order (y, x) seems to be the most common, and in particular is used in ISO standards such as the C programming language, but a few authors may use the opposite convention (x, y) so some caution is warranted. These variations are detailed at atan2.
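An added illustration (not from the article) of the case table above; in practice std::atan2 should be used directly:

```
#include <cmath>
#include <cstdio>

const double pi = std::acos(-1.0);

// atan2 built from single-argument arctan, following the case table.
// The origin (x == 0 && y == 0) is undefined; this demo just returns 0.
double my_atan2(double y, double x) {
    if (x > 0)           return std::atan(y / x);
    if (x < 0 && y >= 0) return std::atan(y / x) + pi;
    if (x < 0 && y < 0)  return std::atan(y / x) - pi;
    if (y > 0)           return  pi / 2;   // x == 0
    if (y < 0)           return -pi / 2;   // x == 0
    return 0.0;
}

int main() {
    std::printf("%.6f vs %.6f\n", my_atan2(1.0, -1.0), std::atan2(1.0, -1.0));  // 3*pi/4
    return 0;
}
```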
#### Arctangent function with location parameter
In many applications the solution ${\displaystyle y}$ of the equation ${\displaystyle x=\tan y}$ is to come as close as possible to a given value ${\displaystyle -\infty <\eta <\infty }$. The adequate solution is produced by the parameter modified arctangent function
${\displaystyle y=\arctan _{\eta }(x):=\arctan(x)+\mathrm {\pi } \cdot \operatorname {rni} {\frac {\eta -\arctan(x)}{\mathrm {\pi } }}\,.}$
The function ${\displaystyle \operatorname {rni} }$ rounds to the nearest integer.
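An added sketch (not from the article) of the parameter-modified arctangent, with std::round playing the role of rni:

```
#include <cmath>
#include <cstdio>

const double pi = std::acos(-1.0);

// Picks the solution y of x = tan(y) that lies closest to eta.
double arctan_eta(double x, double eta) {
    double a = std::atan(x);
    return a + pi * std::round((eta - a) / pi);
}

int main() {
    // tan has period pi; ask for the solution of tan(y) = 1 nearest y = 10.
    double y = arctan_eta(1.0, 10.0);
    std::printf("y = %.6f, tan(y) = %.6f\n", y, std::tan(y));  // y ~ 10.21, tan(y) = 1
    return 0;
}
```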
#### Numerical accuracy
For angles near 0 and π, arccosine is ill-conditioned and will thus calculate the angle with reduced accuracy in a computer implementation (due to the limited number of digits).[8] Similarly, arcsine is inaccurate for angles near −π/2 and π/2. To achieve full accuracy for all angles, arctangent or atan2 should be used for the implementation.[8]
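A small illustration (added, not from the article) of this conditioning problem, recovering a tiny angle from its cosine versus from atan2:

```
#include <cmath>
#include <cstdio>

int main() {
    // cos(1e-8) rounds to exactly 1.0 in double precision, so acos
    // loses the angle entirely, while atan2 recovers it to full accuracy.
    double theta = 1e-8;
    double via_acos  = std::acos(std::cos(theta));
    double via_atan2 = std::atan2(std::sin(theta), std::cos(theta));
    std::printf("acos:  %.12e\natan2: %.12e\n", via_acos, via_atan2);
    return 0;
}
```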
http://qsex.ciclofucina.it/transformation-of-degrees-of-comparison-exercises.html | # Transformation Of Degrees Of Comparison Exercises
Degrees of comparison are used when we compare one person or one thing with another. The comparison of adjective or adverb words in this way is known as the degree of comparison, and interchanging degrees is a significant chapter of the transformation of sentences. Adjectives have three degrees of comparison:

1. Positive degree: makes no comparison; it just modifies or gives more information about a noun. Example: He is a good boy.
2. Comparative degree: the adjective compares the qualities of two nouns, describing a higher degree of the quality than the positive. Example: China is larger than India.
3. Superlative degree: expresses the extreme or highest degree of a quality among the things being compared. Example: Susie is the tallest girl in the class.

## Formation of the degrees

Many adjectives form the comparative by adding -er and the superlative by adding -est to the positive, e.g. cold → colder → coldest, small → smaller → smallest, big → bigger → biggest. If a one- or two-syllable adjective ends in -y, we change the y to an i before adding the ending. Disyllabic adjectives ending with -y, -er, -ow or -le are also compared with -er and -est. If the comparative is made with "more", the superlative takes "most" (beautiful → more beautiful → most beautiful), so a degree can be expressed either with a single word or with more than one word (periphrastically). A useful cue: if a sentence contains "the" before the degree word, it is in the superlative degree.

## Transformation (interchange) of degrees

A sentence can often be rewritten in a different degree of comparison without changing its meaning; one method of transforming a sentence is to interchange the degree of the adjective or adverb used in it. For example:

- Positive: I have never seen so beautiful a building as the Taj.
- Comparative: The Taj is more beautiful than any other building that I have ever seen.

Some further examples of the three degrees:

- Positive: He is as dull as an ass.
- Positive: Very few metals are as precious as gold.
- Comparative: The aeroplane flies faster than birds.
- Comparative: Some boys are not less brave than Ram.
- Superlative: Birbal was the wisest man in the court.
- Superlative (adverb): Clinton speaks most effectively.

## Exercises

Exercise 1. Change the degree of comparison without changing the meaning:

1. China is larger than India.
2. Air is lighter than water.
3. Peter is cleverer than any other boy in the class.
4. I have more books than you.
5. Susie is the tallest girl in the class.

Exercise 2. Fill in the correct form of the words in brackets (comparative or superlative):

1. In the last holidays I read a good book, but father gave me an even ______ (good) one last weekend.
2. Fact is ______ (strange) than fiction.

Exercise 3. For each question, complete the second sentence so that it means the same as the first sentence. Use a minimum of two and a maximum of five words for each space. A useful pattern: TOO + adjective in one sentence often becomes NOT + adjective + ENOUGH in the other. Example sentence to transform: Tom is more handsome than Mark.
A love for good health has served the. The inflectional suffix for superlative degree is est. Change the degrees of comparison in the sentences given below. Platinum is the most precious metal. Every word of her story is false. In other words, the positive degree is the normal form of an adjective or adverb. By Om Min Posted on April 8, 2016. The course provides students with a strong understanding of the processes and mechanisms underlying sport and exercise science, and with the knowledge and skills necessary to manage and plan sport and exercise activities in health, exercise science, sport, event and exercise therapy contexts. Displaying all worksheets related to - Transformation Of Sentences Degrees Of Comparison Grade 7. without the words. The third exercise offers to choose between thes. ID: 30022 Language: English School subject: English as a Second Language (ESL) Grade/level: Form 7 Age: 12-15 Main content: Degrees of Comparison of Adjectives Other contents: adjectives Add to my workbooks (41) Download file pdf Embed in my website or blog Add to Google Classroom. Comparative and superlative. multiply each y-coordinate by 1 C. The degree of comparison in English grammar are made with the adjective and adverb words to show how big or small, high or low, more or less, many or few etc. 88-01711-133548 [email protected] Trying to find fresh thoughts is one of the fun events however it can as well be exhausted whenever we can not have the wanted concept. Graph of my pulse rate vs. Further reading and exercises are provided at the end. This lower-abs exercise is a great way to wake up your core at the beginning of your workout or as a stand-alone exercise any time you want to squeeze in some extra ab work. There are different form of comparison such as 1. In this type of exercise, the practitioner uses his or her own body to provide the necessary resistance for opposing the movements. While a student is bound by the general guideline to be as close in meaning to the original sentence, the variations test a wide range of grammar topics. The first and most important role of our physicians is to provide you and your family with exceptional neurological care for a wide range of movement disorders. I would like a drink too. degrees of comparison sentences exercises high resolution. University of Reading. A message from our Academic Dean: Our online distance-learning MSc in ‘Consciousness, Spirituality & Transpersonal Psychology’ provides an intellectually-stimulating programme of study which focuses on diverse topics around the nature of consciousness, the dynamics between psyche and soma, the psychology of self and higher states of being, and the psychological basis of spiritual and. Transform the following as directed. Platinum is the most precious metal. Degree of comparison memiliki pengertiannya serta aplikasi atau contoh-contohnya pada kalimat, percakapan, maupun tes kemampuan bahasa Inggris semacam TOEFL ITP. Tom's brothers are not as noisy as Tom. This sentence can be changed into a sentence of comparative one. Gold is more precious than silver. JavaScript Basic: Exercise-11 with Solution. Source: UNISTATS, 2019. Displaying top 8 worksheets found for - Degrees Of Comparison Grade 3. Our aim is to teach the students to transform the degrees of Comparison (positive, comparative and superlative) so that they may do better in the examination. Karma Yoga For. Degree of Comparison: Comparatives and Superlatives Test Exercise | Comparison | 1278. 
Упражнение 1 на степени сравнения прилагательных и наречий. Here we are comparing the height of two people with a positive adjective. Fully Online Accredited University. There are different form of comparison such as 1. sweeter, sweetest) to show comparison. The Farlex Grammar Book > English Grammar > Parts of Speech > Adjectives > Degrees of Comparison Degrees of Comparison Definition Adjectives describe a quality or characteristic of a noun or pronoun. Выберите наиболее подходящий ответ. He opened the door and asked for my permission to come in. Batch Start on: 06th June 2019 From 2:00 PM - 04:00 PM 80 Hours classes for English Improvement Class Covering all type of difficulty level questions. Very few boys are as industrious as John. 80): 71 Key Ideas & Details: 21 of 21 Apply Raw Score: 52 of 63 Independent DRP A |. Covid-19 thus functions in my thoughts as. best (better) singer Exercise E. For example: slow, slower, slowest. Comparative - The Taj is more beautiful than any other building that I have ever seen. Comparison of disyllabic adjectives. nervous ^ than 6. Simple Definition with Examples. The synthetic way of comparison creation is carried out with the help of affixes, but differently in each language. Degree Rules of Degree At a Glance in English Grammar এখানে, Degree পরিবর্তনের উপর এই ৫ টি সূত্র অনুসরন করঃ 5 Rules of. Karma Yoga For. Textbook solution for Elementary Linear Algebra (MindTap Course List) 8th Edition Ron Larson Chapter 6. The course called for certain types of movements done in isometric fashion like push-ups, where you hold yourself in a push-up position for a given time. Keep the target muscle under tension long enough, so perform the partial reps in a "no-acceleration" style. A superlative adjective expresses the extreme or highest degree of a quality. KEILAR: -- About Mexicans. Adjective Examples in Sentences. John is as tall as Mike. The final part of the Use of English paper is Key Word Transformations. Degrees of Comparison synonyms, Degrees of Comparison pronunciation, Degrees of Comparison translation, English dictionary definition of Degrees of Comparison. On the other hand, precision shows the nearness of an individual measurement with those of the others. No insurance company has yet completed a digital transformation--one that fully harnesses the power of digital technology to rethink every aspect of the organization. Degrees Of Comparison Grade 5. These transformations and coordinate systems will be discussed below in more detail. Kidsfront has developed online study material of Class 5 English Degrees of Comparison lesson, available for free. Most exercises are completed in groups of 3-4 and then discussed as a class. Degrees of Comparison of Adjectives Worksheet-3. : He is a good boy. It is possible to change the degree of comparison of an adjective in a sentence, without changing the meaning of the sentence. It is used to denote the existing state of a person or thing and is used when no comparison is made. 05) indirect mediation paths through the performance-approach and avoidance goals. Karen graduated from the University of Maryland with her bachelor's degree in psychology then went on to the University of Connecticut to get her master's degree and Ph. Birbal was the wisest man in the court. Many graduate degree options in a particular subject offer either degree type, making the entire process quite confusing and difficult for prospective students. Too as an adverb meaning "also" goes at the end of the phrase it modifies. 
Simple past, past progressive, past perfect simple, past perfect progressive. Degree of Comparison: Changes of Degrees-Basic-Today we are going to discuss on Degree of Comparison, types of degrees, definition, examples, and exercises etc. ADJECTIVES: Degree of Comparison. She's six years old. As you will soon see, they are usually placed before the word they are modifying. Accessed by: 683 Students;. Comp: Gold is more precious than most other metals. Adverbs that end in -lyuse the words more and most to form their comparatives and superlatives. The comparative degree denotes a greater amount of a quality relative to something else. Production. The comparative degree is used to compare two actions: eg slower, more slowly, earlier (‘Sarah walked more slowly than Ben. Give the comparative degree of the given adjective: high; A. Let's say you did some work on your home to make it more energy efficient - air sealing, more attic insulation, and a duct system retrofit. Comparative degree • This is when we compare two people or things to each other. Which part of speech is it? When in doubt, use a dictionary!. The adverbs form their comparatives and superlatives using -er and -e st , and more and most. AACN works to establish quality standards for nursing education; assists schools in implementing those standards; influences the nursing profession to improve health care; and promotes public support for professional nursing education, research, and practice. More happy. Interchange of degrees of comparison. It can be classified into three kinds. The basic form of an adjective is sometimes known as the positive degree. You must use a minimum of TWO and a maximum of FIVE words for each space. She went home and sat on her comfortable old wooden bed. Explore individual degrees or programs and access learning objectives, degree worksheets, plans of study, career guides, and more. Home / Grammar / Adjectives and Adverbs / Comparative Adjectives Quiz About We are dedicated to creating and providing free, high-quality English language learning resources. Top Tip: Move more to stress less. To make a comparison between two or three subjects or objects, we use comparative and superlative degrees. All posts are authored by NFPT trainers and industry experts who share their experiences, education and steps for success with you. The calves can be a very stubborn muscle group, so it’s important to target them with plenty of different angles and a with a high training frequency. scaled <- rangeScale(pAsin) pLogit. Comparative adjectives represent the second highest degree within a comparison (such as the word "better" in English), and superlative adjectives represent the highest degree within a. The superlative is used to compare three or more. However, in recent decades, bariatric surgery has become more prevalent in the treatment of severely obese patients who. Tutorial: Comparison Analysis 3 g. Filed in English Grammar. 3 years full time. Explanation are given for understanding. Iron is more useful than any other metal. You will also be taken through enough exercises based on Transformation of Sentences. On the other hand, precision shows the nearness of an individual measurement with those of the others. As a result, individuals may not build their exercise-related. Lead is heavier than any other metal. This exercise is to help you practice forming the Comparison of adjectives in sentences. 
Positive Comparative Superlative Good Better Best Hot Hotter Hottest Sharp Sharper Sharpest Tall Taller Tallest Short Shorter Shortest Large Larger Largest Small Smaller Smallest Dry More dry (drier) Most dry (driest) Cold More cold (colder) Most cold (coldest) Proud More proud (prouder) Most proud (proudest)…. You may change this if you wish, select the degree of difficulty to be either Easy (Four Numbers and Three Operations) or Hard (Five Numbers and Four Operations). No problem! You whip up the graph in a couple of minutes. In September, KENNETH took a Degrees of Reading Power (DRP) Core Comprehension Test. Rewrite the sentences given below using different degrees of comparison. Chakras and Nadis For Beginners 3. The addition of 2-3 degrees of foot pronation lead to a 20-30% increase in pelvic alignment while standing and 50-75% increase in anterior pelvic tilt during walking (3). It can be done in a number of ways. The American Association of Colleges of Nursing (AACN) is the national voice for baccalaureate and graduate nursing education. For each question, complete the second sentence so that it means the same as the the first sentence. The giraffe is taller than any other animal. We can advise you on new and existing injuries, refer you to relevant NHS services and information, advise on load management, guide you as you perhaps embark on a new activity, or add useful strategies to complement your existing exercise. (superlative degree) The cow has the longest tail. Adverbs that end in -lyuse the words more and most to form their comparatives and superlatives. Specifically, if T: n m is a linear transformation, then there. Degrees Of Comparison Grade 8. This exercise is to help you practice forming the Comparison of adjectives in sentences. Class 5 students can learn & practice free online Degrees of Comparison exercise of English subject. Use this sketch to play around with rotation. Comparatives - comparison: worksheets pdf, handouts to print, printable exercises, Comparative and superlative. with at least one of the words. All figures with dilation symmetries are self-similar. Turkish Comparison of Adjectives Formation of Degree of Equality in Turkish. Positive degree 2. The following article shall explain to you the concept of adverbs, adjectives, and their degrees of comparison. Rule- add er to the adjective 1. Let's see how the Adjectives form the Comparative and Superlative: Rule 1: The following Adjectives form the Comparative by adding -"er" and Superlative by adding -"est" to the Positive. See if you can score a perfect 10. An intensifier is powerful but it has a very narrow usage. Malacca is the oldest town in Malaysia. The fields of health care management and health care administration may sound similar. I quite like sentence transformation athough it's very difficult. Adverbs of degree are usually placed before the adjective, adverb, or verb that they modify, although there are some exceptions. One degree of unsaturation is equivalent to 1 ring or 1 double bond (1 $$\pi$$ bond). Additionally, some courses may address topics in nutrition, coaching and other health-related fields. Deciding which is right for you depends, in part, on how you want to use your education and the standards in your professional field. How you do your partial reps is important. 3 Composite Transformation Matrix. My father has bought a new car. (Change into Negative) See Answer: 2. simple action in the past → simple past. 
Medical and Molecular Genetics Strong tradition in clinical genetics and research One of the first human genetics departments in the country, the Department of Medical and Molecular Genetics at IU School of Medicine has a rich history of training geneticists and genetic counselors and providing genetic consultation and counseling services. Positive degree denotes the quality of a person, thing or group. Comparison: comparisons of equality ( as tall as his father ) - English Grammar Today - a reference to written and spoken English grammar and usage - Cambridge Dictionary. These revision exercises will help you understand and practise working with determinants. Definition: Used to compare the means of two independent groups. Comparative degree • This is when we compare two people or things to each other. Build a definition of congruence from an understanding of rigid transformations. The exercises described in the course didn't use weights, rather they used bodyweight exercises and dynamic tension exercises. If f(x) = y, then we say y is the image of x. (slowly) 3. ^ the cheaper 2. We can express the same idea using different degrees of comparison. Tall is an adjective in the positive degree. Batch Start on: 06th June 2019 From 2:00 PM - 04:00 PM 80 Hours classes for English Improvement Class Covering all type of difficulty level questions. By kissnetothedit. Adjectives Degrees Of Comparison. Kinesiology and Exercise Science Major. If you’ve assigned an end-of-semester term paper, you may want to assign one or two activities from each of the four stages-brainstorming, organizing, drafting, editing-at strategic points throughout. JavaScript Basic: Exercise-11 with Solution. Degrees of Comparison Exercises The worksheet contains 3 exercises - the first one centers on the comparative form of adjectives, the second one - on the superlative degree. Conversion or transformation of a sentence implies changing grammatical form of a sentence from one to another without changing its meaning. Small = Smaller 5. Degrees Of Comparison Exercises. 1 Matrix Transformations ¶ permalink Objectives. For example, fitness studies degree programs prepare you for careers in corporate and community wellness, sports and recreation management, exercise supervision or rehabilitation. It is intended as a resource plan for those who are teaching English or those who are preparing for a demonstration teaching. I am as strong as he. This type burn causes the skin to blister and become extremely red and sore. Degrees of comparison (Adjectives) 1. The second sentence usually has a prompt. Using exercise to correct flat feet is not a new concept. English Comparison of adjectives exercises. The idea of iteration is to begin with a single motif, called the initiator and then repeatedly apply a transformation rule. 1 Multiplication Cross -. Degrees Offered ; African and African American Studies: Dec 15, 2019: Doctor of Philosophy (PhD) American Studies: Jan 2, 2020: Doctor of Philosophy (PhD) Anthropology: Dec 15, 2019: Master of Arts (AM) Doctor of Philosophy (PhD) Applied Mathematics SEAS: Dec 15, 2019: Doctor of Philosophy (PhD) Applied Physics SEAS. Adjective Examples in Sentences. Download Objective type questions of Transformation of Sentences PDF Visit our PDF store. Set up the degrees of freedom : transverse displacements and rotations at nodes. 
The alchemical process of transformation has been variously described, according to the text that is consulted, as being a six-stage process, 12 stage, 20, 22, 50, and even 75 stage process! However, it is possible to understand the alchemical process in terms of four basic stages, this. Use this page to learn how to convert between degrees and percent. Change the degrees as directed. There are three Degrees of Comparison in English. My father has bought a new car. Jogging requires more muscle than walking and can be done by anyone, where as running requires more. 1 we defined matrices by systems of linear equations, and in Section 3. But the exercise with an asterisk (*) is the exercise on the worksheet. Degrees of Comparison List. Additionally, some courses may address topics in nutrition, coaching and other health-related fields. Iron is more useful than any other metal. Saumya is the tallest of the three. 30 sentences DEGREES OF COMPARISON OF ADJECTIVES. Although physical exercise has been demonstrated to augment recovery of the post-stroke brain, the question of what level of exercise intensity optimizes neurological outcomes of post-stroke rehabilitation remains unsettled. To be blunt, there aren't many jobs available at the associate level. It is expressed by the short form of the adjective and the conjunctions так. Choose the most appropriate answer. Some adjectives are used in the comparative only. The Use of Glucose in Muscle Cells With Exercise. Kriging is a geostatistical interpolation technique that considers both the distance and the degree of variation between known data points when estimating values in unknown areas. Practise your English grammar in the English classroom. Please see our regulatory requirements during the coronavirus pandemic for information on current regulatory requirements. Bombay is one of the biggest cities in India. In other words, it gives information about the noun or a pronoun. In this section we ask the opposite question from the previous section. Glucose is a common fuel for the body, and all cells use it. Look at the word in bold. Degrees Of Comparison Grade 3. Degrees of comparison; Renjit P. Transformation of sentences Exercise & Practice with Explanation: Opening the door, he asked for my permission to come in. You cannot change the word in bold in ANY way. 1 The company cited applications and environments that weigh in at tens of millions of lines of code and require. The graph of arctanh is shown at the top of this article. Exercise on Comparison of Adverbs :: Learn English online - free exercises, explanations, games, teaching materials and plenty of information on English language. School is boring, but homework is than school. Options and concentrations are shown when available. 77778 percent. The inflectional suffix for superlative degree is est. Inline Exercise 10. The positive degree of an adjective makes no comparison and it just modifies or gives more information about a noun. MIT OpenCourseWare is a web-based publication of virtually all MIT course content. The Gamma distribution can be thought of as a generalization of the Chi-square distribution. Test: The hypotheses for the comparison of two independent groups are:. Initial and general ongoing conditions of registration Some of these conditions have been temporarily altered to support providers as they deal with the coronavirus (COVID-19) pandemic. Explanation are given for understanding. The third exercise offers to choose between thes. Degrees, Minutes and Seconds. 
Practice adjectives (degrees of comparison) septembrie (1) iunie (4) mai (14) aprilie (4) martie (17) februarie (27) ianuarie (9) 2011 (65) decembrie (11) noiembrie (21) octombrie (13) septembrie (20). The Use of Glucose in Muscle Cells With Exercise. Repeat this process using a 90 degree angle in the opposite direction. Θετικός βαθμός - Positive degree 2. Please see our regulatory requirements during the coronavirus pandemic for information on current regulatory requirements. Fact is —————- than fiction. Tom’s brothers are not as noisy as Tom. In English, there are three forms of adjectives, including two forms of comparative adjectives: positive (the initial form. She is the most graceful dancer in the play. cookie policy. Martin is not so honest as his brother. Positive degree 2. Degree Rules of Degree At a Glance in English Grammar এখানে, Degree পরিবর্তনের উপর এই ৫ টি সূত্র অনুসরন করঃ 5 Rules of. Change the degrees of comparison without changing the meaning of the sentence. We walked than the rest of the people. The challenges of transformation in higher education and training institutions in South Africa Page 4 Introduction This paper, commissioned by the Development Bank of Southern Africa, responds to the Bank’s request: • For ‘a diagnosis and analysis of the key issues on Challenges of Transformation in Higher. KENNETH's performance on this test is reported and interpreted in the following table and chart. A transformation that "slides" each point of a figure the same distance in the same direction. Comparative degree • This is when we compare two people or things to each other. Degrees of comparison are used when we describe a person or a thing. Take the quiz below and test your understanding and use of the degree of comparison. Comparative Degree: An adjective is said to be in the comparitive degree when it is used to compare two nouns/pronouns. Health Care Administration. Saumya is the tallest of the three. The adjective sweet is said to be in the Positive Degree. 1,878 Downloads. In both cases, the WBGT was around 80 degrees, just below the threshold for canceling. You can find out more about our cookie policy. No sign-up required. I would like a drink too. Superlative - Ram is not the bravest of all boys. The first and most important role of our physicians is to provide you and your family with exceptional neurological care for a wide range of movement disorders. As a member, you'll also get unlimited access to over 79,000 lessons in math, English, science, history, and more. 0 within an ’internet of things, services, data and people’ mean that manufacturing is set to undergo enormous changes in future. I use a subset of the following exercises each semester based on time available in class and the interest level of students in different topics. -- I am as strong as him. best (better) singer Exercise E. Researching Gender. The American Association of Colleges of Nursing (AACN) is the national voice for baccalaureate and graduate nursing education.
q02wgzxeemj psknaw4gvr4l 85zkmgani8 qjpk6nmzlisd 6z4bi2zp8qm5gx lvw1psruhhoq 1k5m8twhsfkx sat018tsueyyv rorve93040p zhqfd6a932ypia 61tdrsdwlw7r v5psppwzfq2wkj6 bmc2qn9jrl4ta40 2l0qmvj2q4wzu sixlzbg0rvid z8k1xnrxa51 o4oomifzfsoj2bz 0t0g99sxji20m 37ypx5qyffcux0 gg8vf1c8mrvq hzppm4xxyw0 9j7wmv1d9s dyrfb9g1s3svybj efwev61ov2jz00 mdgc06oyo8qe reoppeuba6qqk5n adhgh8g9v2rw7c r6cz1r31xrx3rk p4nbojbt2r | 2020-10-24 22:36:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2155839204788208, "perplexity": 3018.9724540080106}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107885059.50/warc/CC-MAIN-20201024223210-20201025013210-00442.warc.gz"} |
# From linear to non-linear models
Developed by Stephan Gouws, Avishkar Bhoopchand & Ulrich Paquet.
## Introduction
In this practical we will develop our intuition of machine learning models, and implement a linear and non-linear model from scratch that can classify a variant of the classic "swiss roll" dataset. We will then show how to do the same using the TensorFlow framework.
Learning objectives:
• Understand how the different ML components (data, model parameters, loss functions) work together.
• Understand the intuition of gradient-based optimization.
• Be able to derive gradients of the softmax loss wrt output dL/dOut, and wrt the model parameters dL/dW.
• Understand what is meant by a decision boundary & how it visualizes the models predictions.
• Understand the basics of TensorFlow, especially:
• the computation graph abstraction (define-and-run vs define-by-run),
• how TensorFlow can automatically derive gradients (relate it to the graph abstraction).
What is expected of you:
• Step through the cells and discuss the questions with your lab partner.
• Fill in the missing code sections ("# IMPLEMENT-ME") by pair-programming with your partner.
• Change the values for $w$ and $\alpha$, try out different regularization strengths, look at and discuss the decision boundaries.
• 5min before the end, pair up with someone else and explain the concepts from the Learning Objectives (above) to each other, and ask the tutors if you are stuck.
# Building Intuitions: Our First Classifier on Simple Toy Data
Run the code in the cell below, and look at the resulting plot. It should produce a simple 2D data set consisting of 2 classes of points, a weight vector in black, and a decision boundary in red. The goal is to obtain a decision boundary that best separates the different classes of points.
In [1]:
import numpy as np # Numpy is an efficient linear algebra library.
import matplotlib.pyplot as plt # Matplotlib is used to generate plots of data.
centre = 1.0
points_in_class = 20
# Setting a random seed allows us to recreate the same data each time we run the cell.
np.random.seed(0)
# Generate random points in the "positive" class
x_pos = np.random.normal(loc=centre, scale=1.0, size=[points_in_class, 2])
# Generate random points in the "negative" class
x_neg = np.random.normal(loc=-centre, scale=1.0, size=[points_in_class, 2])
# Put these together
x = np.concatenate((x_pos, x_neg), axis=0)
# The class (or "y") value is +1 or -1 for the two classes
y_pos = np.ones(points_in_class)
y_neg = - np.ones(points_in_class)
y = np.concatenate((y_pos, y_neg), axis=0)
# N is the total data set size
N = 2 * points_in_class
# Plot the data using Matplotlib
fig = plt.figure()
plt.scatter(x[:, 0], x[:, 1], c=y, s=40)
plt.axis('equal')
# Pick a weight vector. In the exercise below, you are going to change the
# values in this weight vector to see how the decision boundary changes.
w = [-1.5, 1.6] # CHANGE ME!!
# Add the weight vector to the plot.
plt.plot([0, w[0]], [0, w[1]], 'k-')
# Plot part of the decision boundary in red. It is orthogonal to the weight vector.
t = 2
plt.plot([-t * w[1], t * w[1]], [t * w[0], -t * w[0]], 'r-')
# Add some labels to the plot and display it
plt.xlabel('x0')
plt.ylabel('x1')
plt.show()
# What do parameters do?
• Change the line w = [-1.5, 1.6] to use different values, and re-run the above code.
• How does the decision boundary (the red line) change?
• For what values of $x_0$ and $x_1$ is the inner product between $w$ and $x$ positive? negative? zero?
• Can you relate this to the decision boundary?
• Can you use this to determine the class labels $y$?
• By changing $w$, can you manually find a weight vector that does a good job of discriminating between the two classes?
• How did you manually find it? What made you move from one choice to the next?
• Now that you've found it manually, chat with your friend next to you.
• How will you find it automatically?
• Can you devise a function, where minimizing it does the same thing as the way you manually searched?
• Pause here and think.
• Which values of $w$ give "bad solutions", according to you?
• Which ones give good solutions?
• Try to draw the function that you devised on a piece of paper. It should be small when $w$ gives a "good solution", and it should be big when $w$ gives a "bad solution". Be creative and think of your own function. Show your drawing to your neighbour.
DO NOT PROCEED ANY FURTHER UNTIL YOU'VE THOROUGHLY ATTEMPTED ALL THE ABOVE QUESTIONS. (Ask your tutors for help if you're stuck!)
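Once you have genuinely attempted the questions, here is a quick numerical sanity check, a small sketch we have added (it is not part of the original exercise), that classifies every point by the sign of the inner product $w^\top x$ and reports the fraction that matches the labels:

# Quick check of a candidate weight vector: classify each point by the sign
# of the inner product w'x and compare against the true labels y.
preds = np.sign(np.dot(x, w))   # +1 on one side of the red line, -1 on the other
print "fraction correctly classified:", np.mean(preds == y)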
# A loss function of w
• Below we will formulate the above as a loss function.
• Run the code below, and look at the plot of the loss function. Note that it is not in x-space (data) any more, but that our axis labels are w[0] and w[1] (parameters)!
• Are the weights $w$ that you found manually close to the minimum of the loss function?
In [2]:
def compute_loss(w0, w1, x, y, alpha):
    # Start with the regularization term alpha * (w0^2 + w1^2). You will change
    # the value of alpha below. What is its effect on the loss function? Does it
    # change the loss function's minimum?
    # Note: In this practical we won't see too much benefit from this as we don't
    # have a separate test set, but it is good practice to include one and will
    # become very important soon enough!
    loss = alpha * (w0 * w0 + w1 * w1)
    # Add the data point's contribution to the loss. We do this for every data
    # point. (We don't have to do it in a for-loop, but below, you can really see
    # what is happening...)
    for n in xrange(N):
        # Get the inner product x' * w for data point x.
        inner = w0 * x[n, 0] + w1 * x[n, 1]
        # The logistic loss term: small when y[n] and the inner product agree
        # in sign, and growing roughly linearly when they disagree.
        loss += np.log(1 + np.exp(- y[n] * inner))
    return loss
lim = 5
ind = np.linspace(-lim, lim, 50)
w0, w1 = np.meshgrid(ind, ind)
# You will change the value of alpha (below), to see how the loss function
# changes. It has to be alpha >= 0. No negative values (otherwise the loss
# function's minimum is at negative infinity)!
alpha = 0.1
loss = compute_loss(w0, w1, x, y, alpha)
fig = plt.figure()
plt.contourf(w0, w1, np.exp(-loss), 20, cmap=plt.cm.jet)
cbar = plt.colorbar()
# We plot exp(-loss) here, to let the colours show clearly in the plot. This is
# incidentally also proportional to the joint distribution
# p(y, w | x) = p(y | x, w) p(w), which you'll encounter on Wednesday in the
# Indaba, and can safely ignore for now.
plt.title('A plot of exp(-loss), as a function of weight vector [w0, w1]; '
+ 'alpha = ' + str(alpha))
plt.xlabel('w0')
plt.ylabel('w1')
plt.axis('equal')
plt.show()
# Trying different loss functions
• As a first exercise, look at the function that computes the loss. There is a for-loop, essentially a sum.
• Can you write down the loss function on a piece of paper? As a mathematical expression...
• On a piece of paper, can you draw
log(1 + exp(- y[n] * inner))
as a function of the inner product, the value of $y$, etc.
• When is it almost zero, and the contribution to the loss is negligible?
• Where does it become almost linear?
After you've done your drawings, explain to yourself why the function is equivalent to
inner = y[n] * (w0 * x[n, 0] + w1 * x[n, 1])
loss += np.log(1 + np.exp(- inner))
What is the effect of the class label $y$ on the weight vector? What happens if we multiply a weight vector by -1?
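To check your drawings numerically, the following small sketch (our addition) evaluates the per-point loss at a few values of the margin $m = y \cdot (w^\top x)$. Large positive margins (confidently correct points) contribute almost nothing, while large negative margins (confidently wrong points) contribute roughly linearly:

# Evaluate log(1 + exp(-margin)) at a few margins m = y * (w'x).
for margin in [-5.0, -1.0, 0.0, 1.0, 5.0]:
    print "margin = %5.1f -> loss contribution = %.4f" % (
        margin, np.log(1 + np.exp(-margin)))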
• Now change the setting of alpha. Make it bigger, and smaller. What happens to the minimum?
# Working with More Complex Data
Real-world data is unfortunately not as simple as our toy bimodal Gaussian example above. Real data (e.g. pixels from vision, or speech phonemes, or words of a language) can have very complex, high-dimensional distributions. Before we get our hands dirty with that, we'll move one step up and work with another toy dataset, but this time non-linear.
In [3]:
import numpy as np # Numpy is an efficient linear algebra library.
import matplotlib.pyplot as plt # Matplotlib is used to generate plots of data.
def reset_matplotlib():
    %matplotlib inline
    plt.rcParams['figure.figsize'] = (10.0, 8.0)  # set default size of plots
    plt.rcParams['image.interpolation'] = 'nearest'
    plt.rcParams['image.cmap'] = 'gray'

reset_matplotlib()
# Reload imported modules automatically when their source changes
# (if you're curious, see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython)
%load_ext autoreload
%autoreload 2
Let's generate a dataset with three spirals arranged in a swiss-roll type structure, and where each spiral forms a "class" that we want to be able to classify (i.e. the classes are very non-linearly distributed).
## Hyperparameters
First we define some hyperparameters that will be used by the next few cells. Try changing these values to see how the models work with different data.
In [4]:
num_classes = 3 # The number of classes (distinct groups) of data (these are our "y" values)
dimensions = 2 # The number of dimensions of our input or "X" values
points_per_class = 100 # number of X points to generate for each of the y values
In [5]:
# Setting a random seed allows us to get the exact same data each time we run
# the cell.
np.random.seed(0)
def generate_spiral_data(num_classes, dimensions, points_per_class):
    """Generate num_classes spirals with points_per_class points per spiral."""
    X = np.zeros((points_per_class*num_classes, dimensions), dtype='float32')  # Create an empty matrix to hold our X values
    y = np.zeros(points_per_class*num_classes, dtype='uint8')  # Create an empty vector to hold our y values

    for y_value in xrange(num_classes):  # Generate data for each class
        ix = range(points_per_class*y_value, points_per_class*(y_value+1))  # The indices in X and y where we will save this class of data
        radius = np.linspace(0.0, 1, points_per_class)  # Generate evenly spaced numbers in the interval 0 to 1
        theta = np.linspace(y_value*4, (y_value+1)*4, points_per_class) + np.random.randn(points_per_class) * 0.2  # Angle, with some noise
        X[ix] = np.column_stack([radius*np.sin(theta), radius*np.cos(theta)])  # Convert polar coordinates to standard Euclidean coordinates
        y[ix] = y_value
    return X, y

def plot_data(X, y):
    """Use Matplotlib to plot X, y data on a figure."""
    fig = plt.figure()
    plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
    plt.xlim([-1, 1])
    plt.ylim([-1, 1])
    return fig

X, y = generate_spiral_data(num_classes, dimensions, points_per_class)
fig = plot_data(X, y)
# fig.savefig('spiral_raw.png')  # Uncomment this line if you want to save your image to a file
Let's look quickly at some of the raw values to get a better sense of the data structure:
In [6]:
idx = np.random.choice(range(y.size), size=10, replace=False)
print "X values: \n", X[idx,]
print
print "Y values: \n", y[idx]
X values:
[[ 0.07453433 0.08240335]
[-0.02794273 -0.01172508]
[ 0.02408017 0.07713684]
[-0.33180022 -0.48284459]
[ 0.08547524 -0.08594394]
[-0.19078229 0.45670244]
[-0.67819595 0.66452307]
[ 0.11460172 -0.79991317]
[-0.31485459 0.23675902]
[ 0.41948399 -0.45131844]]
Y values:
[0 1 0 2 2 1 2 0 1 0]
# Implementing a Classifier from Scratch
Before experimenting with libraries like TensorFlow, we start by implementing a simple linear classifier and then a more complex nonlinear classifier from scratch in Numpy. This allows us to go through all the low-level details of how to make predictions and how to train our models, as these details are very important.
Later on, we will then reimplement these two classifiers using TensorFlow, and hopefully we will see that this makes things much easier (especially as the models get more complex!).
## Implementing a linear classifier
How do we define and train a model that learns to separate the num_classes different classes, based on their coordinates in X space? We start by using a linear classifier.
A classifier is a function that takes an object's characteristics (or features) as inputs and outputs a prediction of the class (or group) that the object belongs to. It may make a single prediction for each input or it may output some score (for example a probability) for each of the possible classes. A classifier is linear when the scores are derived through some linear combination of the input features. An input feature can, for example, be a projection of the raw data onto some basis functions.
The linear classifier we will implement computes the scores for a given input X using the formula $z = Wx + b$ where $W$ is a weight matrix of shape $[d, k]$ where $d$ is the dimensionality of the input and $k$ is the number of possible classes and $b$ is a bias vector of length $k$.
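As a quick shape check (a throwaway sketch with made-up values, not the classifier we build below), note that for a batch of inputs the scores are computed as $XW + b$:

# Illustrative shape check: d = 2 input dimensions, k = 3 classes, batch of 5.
X_demo = np.random.randn(5, 2)   # [batch, d]
W_demo = np.random.randn(2, 3)   # [d, k]
b_demo = np.zeros((1, 3))        # [1, k], broadcast across the batch
scores_demo = np.dot(X_demo, W_demo) + b_demo
print scores_demo.shape          # (5, 3): one row of k class scores per example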
By normalising the output score vector, using the so-called softmax function, we can derive a probability distribution:
$y'(j) = P(y=j\ |\ z) = \frac{e^{z_j}}{\sum_{i=1}^k e^{z_i}}$
Here j refers to the jth possible class and $z_j$ to the jth element of vector $z$.
In this probabilistic form, we can also describe the classifier as a linear discriminative classifier. This means that it directly models the probability distribution of the unobserved class conditioned on ("given") the input. In contrast, generative models, which we will encounter later in the Indaba, model the generative process of the data.
Finally, we need to define a loss function, such that minimising the loss function by gradient descent results in parameters $W$ and $b$ which make good predictions about the class of each input $X$. For this purpose we use the cross entropy loss (also called the negative log likelihood loss). The cross entropy loss compares two probability distributions, the true distribution $y$ and the predicted distribution from our classifier, $y'$, using the formula:
$H(y, y') = - \sum_{j=1}^k y(j)\log[y'(j)]$
Here the "probability distribution" $y$ is the so called "one-hot" encoding of the true class of the particular example. Earlier in the practical, you experimented with a loss function that looked like this (in code):
inner = y[n] * (w0 * x[n, 0] + w1 * x[n, 1])
loss += np.log(1 + np.exp(- inner))
Can you figure out how the cross entropy loss we defined here, combined with the softmax function, compares to this when k=2? And if we restrict our y values to be in the set {-1, 1}?
Let's implement these in Numpy!
(Remember that Numpy is a linear algebra library that works most efficiently with vector and matrix operations; try to define all your formulas in vector and matrix form, rather than using loops!)
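Before you do, here is a small worked example of the two formulas above (purely illustrative numbers; it also spoils the k=2 question, so think about that first):

# Softmax of logits z = [2, 1, 0]:
z_demo = np.array([2.0, 1.0, 0.0])
p_demo = np.exp(z_demo) / np.sum(np.exp(z_demo))
print p_demo               # ~[0.665, 0.245, 0.090], sums to 1
# If the true class is j = 0, y is one-hot [1, 0, 0] and the cross entropy
# reduces to -log p[0]:
print -np.log(p_demo[0])   # ~0.408
# Binary case (k = 2): softmax cross entropy with logits [z1, z2] and true
# class "1" equals log(1 + exp(-(z1 - z2))), the logistic loss from earlier.
z1, z2 = 1.5, -0.5
p1 = np.exp(z1) / (np.exp(z1) + np.exp(z2))
print -np.log(p1), np.log(1 + np.exp(-(z1 - z2)))   # the two values agree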
### Define the hyperparameters
Model hyperparameters include all choices about a model that are made prior to training. In a sense, these are the "meta parameters": parameters chosen by the practitioner rather than set by the optimization algorithm. This includes choices of learning rate, number of examples in a batch, etc.
In [7]:
learning_rate = 1e-0 # "Step-size": How far along the gradient do we want to
# travel when doing gradient descent?
reg_lambda = 1e-3 # Regularization strength.
# Define the initial random value of W here so we can re-use it later.
# There are various initialization schemes that exist, and we will see later
# how these can have a big influence on model training. For now we will just
# initialize our weights with random normal(0, 0.01) values.
W_init = 0.01 * np.random.randn(dimensions, num_classes)
### Defining helper functions
Begin by defining some re-usable helper functions. You may want to look ahead to the LinearModel class before implementing these to see how they get used.
In [8]:
def softmax(logits):
    """Convert un-normalised model scores (logits) into a probability distribution.

    Args:
      logits: The un-normalised scores assigned by the model.
    """
    # IMPLEMENT-ME: (1)
    # Hint: Have a look at the np.exp and np.sum functions, paying particular
    # attention to the axis and keepdims parameters of the sum function.
    probs = ...
    return probs

def cross_entropy(predictions, targets):
    """Calculate the cross entropy loss given some predictions and target (true) values.

    Args:
      predictions: The model predictions (of shape [num_examples, num_classes])
      targets: The correct labels for the data (of shape [num_examples])
    """
    num_examples = predictions.shape[0]
    # IMPLEMENT-ME: (2)
    # HINT: Think about the shapes of predictions and targets and what cross
    # entropy is measuring. You may want to use "numpy advanced indexing"
    # (but there are many other ways too!)
    correct_logprobs = ...
    # NOTE: When dealing with a batch of data, we compute the average cross
    # entropy over the batch (i.e. we want the average per-example loss).
    # QUESTION: Why do we use the average loss?
    crossentropy = np.sum(correct_logprobs) / num_examples
    return crossentropy

def l2_loss(parameters):
    """Calculate the L2 regularisation of a list of parameters."""
    reg = 0.0
    for param in parameters:
        # IMPLEMENT-ME: (3)
        # HINT: Remember to include reg_lambda, the hyper-parameter that
        # controls the degree of regularisation.
        reg += ...
    return reg
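If you get stuck, here is one possible completion of the three blanks (a reference sketch; the function names carry a _solution suffix so they don't clash with yours, and you should try your own version first):

def softmax_solution(logits):
    # For numerical stability you could first subtract
    # np.max(logits, axis=1, keepdims=True); omitted here for clarity.
    exp_logits = np.exp(logits)
    return exp_logits / np.sum(exp_logits, axis=1, keepdims=True)

def cross_entropy_solution(predictions, targets):
    num_examples = predictions.shape[0]
    # Advanced indexing: pick out predictions[n, targets[n]] for every row n.
    correct_logprobs = -np.log(predictions[range(num_examples), targets])
    return np.sum(correct_logprobs) / num_examples

def l2_loss_solution(parameters):
    reg = 0.0
    for param in parameters:
        # The 0.5 makes the gradient exactly reg_lambda * param, matching
        # derivative_regularisation in the model below.
        reg += 0.5 * reg_lambda * np.sum(param * param)
    return reg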
### Defining the linear model
Now we define the linear model itself. We put this in a class so that we can think of it as a "black box" that takes in our inputs and returns some predictions and a loss value that tells us how good our predictions are. We can also update the model using its update method. Doing it this way gives a clean separation between the model definition and training of the model and will be a useful pattern to use going forward.
Machine learning models assign a scalar cost/loss/error function $E(\theta)$ (these terms are largely interchangeable) to how well the model is doing, as a function of the model parameters $\theta$. We want to find a setting of the parameters $\theta$ that gives us the best model possible on our data. For this we use gradient-based optimization, and particularly stochastic gradient descent (we'll talk more about this in Practical 2). The SGD algorithm computes the gradient $\frac{\partial E}{\partial \theta}$ on a sample of the data, and then takes a small step/update in the negative direction of the gradient (which minimizes $E$). I.e.
$\theta^{t+1} = \theta^t - \eta \frac{\partial E}{\partial \theta}$
Here, $\eta$ is referred to as the "learning rate". There's a lot more to say about this, but this is all we need for now. To implement this, we need to compute the gradients of the loss function wrt the model parameters.
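Before deriving those gradients for our model, here is a toy illustration of the update rule in one dimension (illustrative numbers only): minimising $E(\theta) = (\theta - 3)^2$, whose gradient is $2(\theta - 3)$.

# Gradient descent on E(theta) = (theta - 3)^2.
theta = 0.0
eta = 0.1   # the learning rate
for step in xrange(25):
    grad = 2 * (theta - 3.0)
    theta = theta - eta * grad
print theta   # ~2.99, close to the minimum at theta = 3

Try a few values of eta: too small and progress is slow, too large and the updates overshoot and diverge.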
Recall the definition of the softmax above, written in terms of the logits (unnormalized scores) $z$:
$P(y=j\ |\ z) = \frac{e^{z_j}}{\sum_{i=1}^k e^{z_i}}$
Our loss (error function) is the cross-entropy function:
$E = - \sum_k (\log p_k) \cdot y_k$,
where $y_k$ is the "one-hot" encoding of the target class, so the terms of the sum are zero everywhere except for the observed class (i.e. we can ignore the sum and let $k$ be equal to the observed class). The above sum really just "picks out" one $\log p_k$ (compare this to the code for cross_entropy() above). We require the gradient of the loss with respect to the logits, i.e.
$\frac{\partial E}{\partial z_k} = - \frac{\partial}{\partial z_k} [ \log p_k ]$
By substituting the softmax equation for $p_k$, and expanding the log-quotient into two terms, we get (just the RHS for now)
$\frac{\partial}{\partial z_k} (\log(e^{z_k}) - \log \sum_j e^{z_j})$
where the first term reduces to 1, and the second is known as the "log-sum-exp". This arises frequently in normalized probabilistic models, and we can deal with that as follows:
$$\begin{aligned} \frac{\partial}{\partial z_k} \log \sum_j e^{z_j} &= \frac{1}{\sum_j e^{z_j}} \left [ \frac{\partial}{\partial z_k} \sum_j e^{z_j} \right ] && \vartriangleright \frac{d}{dx} \log f(x) = \frac{1}{f(x)} \frac{d}{dx}f(x)\\ &= \frac{e^{z_k}}{\sum_j e^{z_j}} && \vartriangleright \frac{d}{dx} e^{f(x)} = e^{f(x)} \frac{d}{dx}f(x) \\ &= p_k \end{aligned}$$
Putting it all together we get this elegant expression for the softmax gradient:
$\frac{\partial E}{\partial z_k} = - [1 - p_k ] = p_k - 1$.
#### What does this mean?
Look at the derivative. When is it big? When $p_k$ is far from one, and we mis-classified the point. The derivative is big and we have some more work to do! When is the derivative small? It is when $p_k \approx 1$. In that case, we correctly classified the point, and don't need to sweat and labour any more.
NOTE: The derivative is for each example, and in the full derivative, many data points play a role. The misclassified ones play a bigger role. So, we typically compute this for a batch of examples by computing the average per-example cross-entropy over the batch.
QUESTION: Why do we compute the average loss? Hint: What happens to the loss as batch-size changes without averaging? Also, what happens to the gradients as batch-size changes? How could these be a problem?
We now have $\frac{\partial E}{\partial z_k}$ (the derivative of the loss with respect to the logit for class $k$). Next we want the gradient on the weights $W$.
QUESTION: How can we compute $\frac{\partial E}{\partial W}$? Note: This is a derivative of a matrix (gradient).
To derive this, let's think about the shape of W, and then derive the gradient on each of its elements first. W is an [input_dim, output_dim] matrix. Let $w_{ij}$ be the weight at W[i,j] (connecting input element $i$ to output class $j$).
Now let's use the chain rule to derive this:
$\frac{\partial E}{\partial w_{ij}} = \frac{\partial E}{\partial z_j} \frac{\partial z_j}{\partial w_{ij}}$
This gradient is the product of two terms. Notice that we already computed the first term above! So we just need the second term:
To start, notice that $z_j = x^T W_j + b_j = \sum_i x_i w_{ij} + b_j$ ($W_j$ is the j-th column of matrix W). Use this to answer the following question:
QUESTIONs:
1. Derive $\frac{\partial z_j}{\partial w_{ij}}$.
2. Now put this together to get $\frac{\partial E}{\partial w_{ij}}$.
3. Do the same for $\frac{\partial E}{\partial b_{j}}$.
4. Put these together to arrive at $\frac{\partial E}{\partial W}$ (a matrix) and $\frac{\partial E}{\partial b}$ (a vector) HINT: Think about the ingredients: the vector of activations $x$ and the vector of logits $z$. What is the dimension of each? Now think about the dimensions of W. Which linear algebra operator can take two vectors and output a matrix? :)
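Once you have derived these yourself, a finite-difference check is a good way to verify the answer to question 4. The sketch below (our addition) compares the analytic gradient for a single example, the outer product of $x$ with $(p - \text{onehot}(c))$, against a numerical estimate:

# Gradient check for dE/dW on one example (d = 2 inputs, k = 3 classes).
np.random.seed(1)
x_chk = np.random.randn(2)
W_chk = np.random.randn(2, 3)
c = 1   # the true class

def loss_of(W):
    z = np.dot(x_chk, W)              # logits, shape [3]
    p = np.exp(z) / np.sum(np.exp(z))
    return -np.log(p[c])              # cross entropy for class c

z = np.dot(x_chk, W_chk)
p = np.exp(z) / np.sum(np.exp(z))
dlogits = p.copy()
dlogits[c] -= 1.0                     # p - onehot(c)
analytic_dW = np.outer(x_chk, dlogits)

eps = 1e-5
numeric_dW = np.zeros_like(W_chk)
for i in xrange(2):
    for j in xrange(3):
        W_plus = W_chk.copy()
        W_plus[i, j] += eps
        W_minus = W_chk.copy()
        W_minus[i, j] -= eps
        numeric_dW[i, j] = (loss_of(W_plus) - loss_of(W_minus)) / (2 * eps)

print np.max(np.abs(analytic_dW - numeric_dW))   # should be tiny, ~1e-10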
In [9]:
class LinearModel(object):

    def __init__(self):
        # Initialize the model parameters.
        self.W = np.copy(W_init)
        self.b = np.zeros((1, num_classes))

    def predictions(self, X):
        """Make predictions of classes (y values) given some inputs (X)."""
        # Evaluate class scores/"logits": [points_per_class*num_classes x num_classes].
        logits = self.get_logits(X)
        # Compute the class probabilities.
        probs = softmax(logits)
        return probs

    def loss(self, probs, y):
        """Calculate the loss given model predictions and true targets."""
        num_examples = probs.shape[0]
        data_loss = cross_entropy(probs, y)
        regulariser = l2_loss([self.W])
        return data_loss + regulariser

    def update(self, probs, X, y):
        """Update the model parameters using back-propagation and gradient descent."""
        # Calculate the gradient of the loss with respect to logits.
        dlogits = self.derivative_loss_logits(probs, y)
        # Gradient of the loss wrt W.
        dW = self.derivative_loss_W(X, dlogits)
        # Gradient of the loss wrt b.
        db = self.derivative_loss_b(dlogits)
        # Don't forget the gradient on the regularization term.
        dW += self.derivative_regularisation()
        # Perform a parameter update.
        self.W += -learning_rate * dW
        self.b += -learning_rate * db

    ##### Now we define some helper functions

    def get_logits(self, X):
        """Calculate the un-normalised model scores."""
        # IMPLEMENT-ME: (4)
        # HINT: We're trying to calculate WX + b, but X is a batch, so think
        # about the shapes!
        logits = ...
        return logits

    def derivative_loss_logits(self, probs, y):
        """Calculate the derivative of the loss with respect to logits."""
        num_examples = y.shape[0]
        # IMPLEMENT-ME: (5)
        dlogits = ...
        dlogits /= num_examples
        return dlogits

    def derivative_loss_W(self, X, dlogits):
        """Calculate the derivative of the loss wrt W."""
        # IMPLEMENT-ME: (6)
        dW = ...
        return dW

    def derivative_loss_b(self, dlogits):
        """Calculate the derivative of the loss wrt b."""
        # IMPLEMENT-ME: (7)
        # HINT: Have a look at np.sum, again paying attention to the axis and
        # keepdims parameters.
        db = ...
        return db

    def derivative_regularisation(self):
        return reg_lambda * self.W
### Training the linear model
Now that we've defined our "black box" linear model. We can train it on our dummy spiral dataset
In [10]:
# Define a function that trains a model for a given number of epochs
# (iterations through the data).
def train_model(model, epochs, report_every, render_fn=None, render_args={}):
frames = []
for i in xrange(epochs):
# Get the model predictions for our spiral dataset X.
probs = model.predictions(X)
# Compute the loss
loss = model.loss(probs, y)
# Print the loss value every report_every steps.
if i % report_every == 0:
print "iteration %d: loss %f" % (i, loss)
if render_fn:
frame = render_fn(**render_args)
frames.append(frame)
# Use back-propagation to update the model parameters:
model.update(probs, X, y)
if frames: return frames
In [11]:
# Create an instance of our LinearModel.
linear_model = LinearModel()
# Now we train the linear model for 200 epochs.
train_model(linear_model, 200, 10)
iteration 0: loss 1.100447
iteration 10: loss 0.918496
iteration 20: loss 0.852024
iteration 30: loss 0.822591
iteration 40: loss 0.807724
iteration 50: loss 0.799528
iteration 60: loss 0.794729
iteration 70: loss 0.791794
iteration 80: loss 0.789940
iteration 90: loss 0.788739
iteration 100: loss 0.787946
iteration 110: loss 0.787414
iteration 120: loss 0.787053
iteration 130: loss 0.786806
iteration 140: loss 0.786634
iteration 150: loss 0.786515
iteration 160: loss 0.786432
iteration 170: loss 0.786373
iteration 180: loss 0.786332
iteration 190: loss 0.786303
### Evaluating the model
The training of the model should have converged to a value around 0.786 if you used the default data parameters earlier. (Convergence means that the loss decreases to a point and then stops decreasing). But how do we interpret this? Is our model actually good at making predictions? Let's work out the accuracy of the model to see:
In [12]:
# Define a function that calculates and prints the accuracy of a model's predictions
def evaluate_model(model):
# Get the probabilites/scores that the model assigns to each class for each X datapoint.
scores = model.get_logits(X) # The shape of scores is [num_data_points, num_classes]
# The index of the maximum score along the 2nd dimension is the class that the model thinks is most likely (y^) for each datapoint.
predicted_class = np.argmax(scores, axis=1)
# What proportion of the class predictions made by the model (y^) agree with the true class values (y) ?
print 'Accuracy: %.2f' % (np.mean(predicted_class == y))
In [13]:
# Now evaluate the trained linear model
evaluate_model(linear_model)
Accuracy: 0.49
This is not a very good result (we are misclassifying around 50% of the data points, and these are data points that we've seen before!). Let's visualise the decision boundary to determine what's going on.
### Visualizing the linear model's decision boundary
Let's visualize the decision boundary of this linear classifier on the spiral dataset.
In [14]:
# Define a function that plots the decision boundary of a model
def plot_decision_boundary(X, model, render=True):
"""Overlays the classifier's decision boundary on the dataset [X, y].
Args:
X: 2-d matrix input data,
model: The model to evaluate
"""
step_size = 0.02 # Discretization step-size
# Get the boundaries of the dataset.
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
# Generate a grid of points, step_size apart, over the above region.
xx, yy = np.meshgrid(np.arange(x_min, x_max, step_size),
np.arange(y_min, y_max, step_size))
# Flatten the data and get the logits of the classifier (the "scores") for
# each point in the generated mesh-grid.
meshgrid_matrix = np.c_[xx.ravel(), yy.ravel()]
Z = model.get_logits(meshgrid_matrix)
# Get the class predictions for each point.
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
old_backend = plt.rcParams['backend'] # Save backend.
if not render:
plt.rcParams['backend'] = 'agg'
# Overlay both of these on one figure.
fig = plt.figure()
axes = plt.gca()
axes.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha=0.8)
axes.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
if not render:
# Now we can save it to a numpy array.
fig.canvas.draw()
data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='')
data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
# Restore old backend
plt.rcParams['backend'] = old_backend
return data
#fig.savefig('spiral_linear.png')
In [15]:
from matplotlib import animation
from IPython.display import display
from IPython.display import HTML
def display_frames_as_gif(frames):
"""
Displays a list of frames as a gif.
"""
plt.figure(figsize=(frames[0].shape[1] / 72.0, frames[0].shape[0] / 72.0), dpi = 72)
patch = plt.imshow(frames[0])
#plt.axis('off')
def animate(i):
patch.set_data(frames[i])
anim = animation.FuncAnimation(plt.gcf(), animate, frames = len(frames), interval=50)
##display(display_animation(anim, default_mode='loop'))
HTML(anim.to_html5_video())
# METHOD 2
#plt.rcParams['animation.html'] = 'html5'
#anim
return anim
#display_frames_as_gif(frames)
In [16]:
# Plot the decision boundary of our trained linear model on the dataset X
# plot_decision_boundary(X, linear_model)
# Create an instance of our LinearModel.
reset_matplotlib()
linear_model = LinearModel()
train_model(linear_model, 200, 10)
# For rendering animations.
#frames = train_model(linear_model, 200, 10,
# plot_decision_boundary,
# {'X':X, 'model':linear_model, 'render':False})
iteration 0: loss 1.100447
iteration 10: loss 0.918496
iteration 20: loss 0.852024
iteration 30: loss 0.822591
iteration 40: loss 0.807724
iteration 50: loss 0.799528
iteration 60: loss 0.794729
iteration 70: loss 0.791794
iteration 80: loss 0.789940
iteration 90: loss 0.788739
iteration 100: loss 0.787946
iteration 110: loss 0.787414
iteration 120: loss 0.787053
iteration 130: loss 0.786806
iteration 140: loss 0.786634
iteration 150: loss 0.786515
iteration 160: loss 0.786432
iteration 170: loss 0.786373
iteration 180: loss 0.786332
iteration 190: loss 0.786303
In [17]:
plot_decision_boundary(X, linear_model)
#reset_matplot_lib()
#anim = display_frames_as_gif(frames)
What can you deduce from this decision boundary? Does it help explain why the linear model makes bad predictions?
## Using a nonlinear model
The linear model resulted in decision boundaries that were straight lines in 2D space. Using these straight lines, it is impossible to accurately separate our spiral-shaped data. We need a more complex model that can represent more complex (nonlinear) decision boundaries. Intuitively, we want a "curved" decision boundary that can adapt to the curved shape of the data. Let's see how converting our model to be a non-linear function of its parameters could resolve this.
### Define the non-linear model
We define this as a class with the same interface as the LinearModel. This way we can reuse exactly the same training and evaluation functions we defined earlier!
Forward pass
So far the logits (scores) were a linear function of the inputs ($\Delta \textrm{logits} = W*\Delta \textrm{inputs}$; put another way, a small change in inputs leads to a proportionally small change in outputs). We can make the model more powerful by making logits a non-linear function of inputs, e.g. $\Delta\textrm{logits} = W_2 * \sigma(W * \Delta \textrm{inputs})$. Here, $\sigma$ is a nonlinear function (non-linearity / activation function). There are many different types, but a popular choice is the rectified linear unit (ReLU):
$\sigma_\textrm{ReLU}(x) = \max(0, x)$
In code:
In [18]:
def relu(value):
""" ReLU is the "Rectified Linear Unit activation function", defined as:
relu(x) = x if x > 0, and 0 if x <= 0
"""
return np.maximum(0, value)
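A quick usage example (arbitrary values):
print relu(np.array([-2.0, -0.5, 0.0, 3.0]))   # [ 0.  0.  0.  3.]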
QUESTION: Why does adding this "non-linearity" make the model more powerful? HINT: Think about the decision boundary of the linear model above, and convince yourself that adding a nonlinearity allows the model more freedom in how it structures its decision boundary.
Our model has now changed a little, but notice that $W_2$ (which maps from the hidden layer to logits) is now doing the same thing as $W$ in our linear model, just on a transformed version of the inputs $z_2 = W_2h$, with the hidden layer activations $h = \sigma_{ReLU}(z_1)$ and $z_1 = Wx + b$.
So the good news is that the mechanics of computing the gradient wrt $W_2$ will be similar to how we derived $\frac{\partial E}{\partial W}$ for the linear model above, but now we just need to replace 'input activations' $X$ with the 'hidden activations' $h$ (compare derivative_loss_W2() below with derivative_loss_W() in the linear model above).
So all that's left is to compute $\frac{\partial E}{\partial W}$, the gradients on $W$ (the input-to-hidden layer weights; omitting the biases for now). For this, we will again use the chain rule to derive
$\frac{\partial E}{\partial W} = \frac{\partial E}{\partial z_2} \frac{\partial z_2}{\partial z_1} \frac{\partial z_1}{\partial W}$
Convince yourself that this is again just an application of the chain rule for derivatives, but over a longer chain ($E\rightarrow z_2 \rightarrow z_1 \rightarrow W$)!
NOTE:
• The gradient on input weights W is a product of three terms.
• We already know the first term $\frac{\partial E}{\partial z_2}$.
QUESTIONS:
• Compute: $\frac{\partial z_2}{\partial z_1} = \frac{\partial W_2 h}{\partial z_1} = \frac{\partial W_2 \sigma_{ReLU}(z_1)}{\partial z_1} = \ldots$.
• Compute: $\frac{\partial z_1}{\partial W}$.
ASIDE: What is happening here:
So far we have been manually deriving the gradients of the loss wrt to all model parameters. Notice that a specific pattern is emerging:
• propagate activations forward through the network ("make a prediction"),
• compute an error delta ("see how far we're off") , and
• propagate errors backwards to update the weights ("update the weights to do better next time").
Derivatives of the loss with respect to the inputs of a layer (e.g. $\frac{\partial E}{\partial z}$) are referred to as (error) deltas. For now we just need the gradients calculated above, but we will use this insight in the next practical when we show how this all forms part of a more general algorithm for efficiently computing gradients in deep neural networks (called (error) back-propagation).
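As a compact sketch in this notation (for our two-layer network, with $\odot$ denoting element-wise multiplication and $\mathbb{1}[z_1 > 0]$ the element-wise ReLU derivative; for a batch, replace the single example $x$ with the matrix $X$):
$\delta_2 = \frac{\partial E}{\partial z_2}, \qquad \delta_1 = (\delta_2 W_2^T) \odot \mathbb{1}[z_1 > 0], \qquad \frac{\partial E}{\partial W_2} = h^T \delta_2, \qquad \frac{\partial E}{\partial W} = x^T \delta_1$
This is exactly the pattern the derivative_* helper functions below implement.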
Let's implement this:
In [19]:
learning_rate = 1e-0 # How far along the gradient do we want to travel in each gradient descent step.
reg_lambda = 1e-3 # Regularization strength
num_hidden = 100 # Size of hidden layer.
non_linear_W_init = 0.01 * np.random.randn(dimensions, num_hidden)
non_linear_W2_init = 0.01 * np.random.randn(num_hidden, num_classes)
In [20]:
class NonLinearModel(object):
def __init__(self):
# Initialize the model parameters.
self.W = non_linear_W_init
self.b = np.zeros((1, num_hidden))
self.W2 = non_linear_W2_init
self.b2 = np.zeros((1, num_classes))
def predictions(self, X):
"""Make predictions of classes (y values) given some inputs (X)."""
# Evaluate class scores/"logits": [points_per_class*num_classes x num_classes].
logits = self.get_logits(X)
# Compute the class probabilities.
probs = softmax(logits)
return probs
def loss(self, probs, y):
"""Calculate the loss given model predictions and true targets."""
data_loss = cross_entropy(probs, y)
regulariser = l2_loss([self.W, self.W2])
return data_loss + regulariser
def update(self, probs, X, y):
"""Update the model parameters using back-propagation and gradient descent."""
hidden_output = self.hidden_layer(X)
# Calculate the gradient of the loss with respect to logits
dlogits = self.derivative_loss_logits(probs, y)
# Backpropagate the gradient to the parameters.
# We first backprop into parameters W2 and b2.
dW2 = self.derivative_loss_W2(hidden_output, dlogits)
db2 = self.derivative_loss_b2(dlogits)
# Next, backprop into the hidden layer.
dhidden = self.derivative_hidden(hidden_output, dlogits)
# Finally, backprop into W,b.
dW = self.derivative_loss_W(X, dhidden)
db = self.derivative_loss_b(dhidden)
dW2 += self.derivative_regularisation_W2()
dW += self.derivative_regularisation_W()
# Perform a parameter update (one step of gradient descent).
self.W += -learning_rate * dW
self.b += -learning_rate * db
self.W2 += -learning_rate * dW2
self.b2 += -learning_rate * db2
## DEFINE THE MODEL HELPER FUNCTIONS
def hidden_layer(self, X):
"""Calculate the output of the hidden layer."""
# IMPLEMENT-ME: (8)
# HINT: Don't forget the activation function!
hidden = ...
return hidden
def get_logits(self, X):
"""Calculate the logits from the input data X."""
hidden_output = self.hidden_layer(X)
# IMPLEMENT-ME: (9)
# HINT: Make sure you're using the correct parameters.
logits = ...
return logits
def derivative_loss_logits(self, logits, y):
"""Calculate the derivative of the loss with respect to logits."""
num_examples = y.shape[0]
dlogits = logits
dlogits[range(num_examples),y] -= 1
dlogits /= num_examples
return dlogits
def derivative_loss_W2(self, hidden_output, dlogits):
"""Calculate the derivative of the loss wrt W2."""
# IMPLEMENT-ME: (10)
dW2 = ...
return dW2
def derivative_loss_b2(self, dlogits):
"""Calculate the derivative of the loss wrt b2."""
# IMPLEMENT-ME: (11)
db2 = ...
return db2
def derivative_hidden(self, hidden_output, dlogits):
"""Calculate the derivative of the loss wrt the hidden layer."""
# Calculate the gradient as if the hidden layer were a normal linear layer.
dhidden = np.dot(dlogits, self.W2.T)
# Now take the Relu non-linearity into account
dhidden[hidden_output <= 0] = 0
return dhidden
def derivative_loss_W(self, X, dhidden):
"""Calculate the derivative of the loss wrt W."""
# IMPLEMENT-ME: (12)
dW = ...
return dW
def derivative_loss_b(self, dhidden):
"""Calculate the derivative of the loss wrt b."""
# IMPLEMENT-ME: (13)
db = ...
return db
def derivative_regularisation_W(self):
"""Calculate the gradient of the L2 loss wrt W."""
return reg_lambda * self.W
def derivative_regularisation_W2(self):
"""Calculate the gradient of the L2 loss wrt W2."""
return reg_lambda * self.W2
### Train the non linear model
In [21]:
# Create an instance of our non-linear model.
non_linear_model = NonLinearModel()
# Train the model for 10000 epochs
train_model(non_linear_model, 10000, 1000)
iteration 0: loss 1.098706
iteration 1000: loss 0.310401
iteration 2000: loss 0.267694
iteration 3000: loss 0.251005
iteration 4000: loss 0.248199
iteration 5000: loss 0.247441
iteration 6000: loss 0.246914
iteration 7000: loss 0.246429
iteration 8000: loss 0.246247
iteration 9000: loss 0.246120
In [22]:
# evaluate training set accuracy of the non-linear model
evaluate_model(non_linear_model)
Accuracy: 0.98
Much better! Is it possible to get to 100%? What factors about the dataset would influence this?
### Visualize the nonlinear model's decision boundary
Lets see what the decision boundary of the non-linear model looks like
In [23]:
# Plot the decision boundary of the non-linear model on the dataset X
plot_decision_boundary(X, non_linear_model)
# Using TensorFlow
The above models worked well, but it was a little tedious to implement the gradients by hand. TensorFlow (TF) is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. Besides offering computational speedups due to efficient backends, one of TensorFlow's big advantages is that it can automatically derive the gradients of any mathematical expression. It achieves this through a process called "automatic differentiation".
TF builds a computational graph (a DAG structure where nodes represent mathematical operations and data flows on the edges connecting them). The graph chains together all the mathematical operations in order. Given such a graph for a neural network with a loss function, TF can then automatically "unroll" it backwards to compute the gradients that we had to compute by hand above!
NOTE: Chat to your neighbour and the tutors to make sure you understand (at least conceptually) how this graph formalism is different from e.g. Numpy's (imperative) formalism before moving on.
QUESTION: What are the advantages and disadvantages of such a computational graph approach?
Now let's re-implement the above models using TensorFlow and see how it compares.
### Create a session
TensorFlow makes use of "sessions" to encapsulate the environment and manage the resources within which graphs are executed. Sessions are also used to run the graph operations you are interested in. There are different ways to create sessions, but for now we will make use of the interactive session, which sets a default environment that will be shared across all code cells, and avoids the need to explicitly pass around a reference to the current session object.
In [24]:
# First we need to import TensorFlow.
import tensorflow as tf
tf.reset_default_graph() # Clear the graph between different invocations.
sess = tf.InteractiveSession() # Create and register the default Session.
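Before defining the model, here is a tiny sketch of the automatic differentiation mentioned earlier (toy values; running it adds a couple of throwaway nodes to the default graph, which is harmless here):
w_toy = tf.Variable(3.0)
loss_toy = tf.square(w_toy)                    # a toy "loss": w^2
grad_toy = tf.gradients(loss_toy, [w_toy])[0]  # TF derives d(w^2)/dw = 2w for us
sess.run(tf.global_variables_initializer())
print sess.run(grad_toy)                       # 6.0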
### Define hyperparameters
In [25]:
### HYPERPARAMETERS
learning_rate = 1e-0
reg_lambda = 1e-3
training_iterations = 200 # 'epochs'
batch_size = X.shape[0] # The whole dataset; i.e. batch gradient descent.
display_step = 10 # How often should we print our results
###
# Network Parameters
num_input = 2 # 2-dimensional input data
num_classes = 3 # red, yellow and blue!
### Graph input placeholders
Graphs provide an abstract definition of what computations we want to perform on our data. TensorFlow provides placeholders as a means for injecting data into our graph at execution time, through a process called "feeding".
In [26]:
# placeholders for the inputs and labels. We will 'feed' these to the graph.
# Note that using 'None' for a dimension means that TensorFlow will adapt to
# whatever the size is of the input we feed in.
x_tf = tf.placeholder(tf.float32, [None, num_input])
y_tf = tf.placeholder(tf.int32, [None])
### Define helper functions
Let's create some helper functions that we can re-use in the Tensorflow models. We need fewer than before because Tensorflow already provides lots of useful functions out the box!
Note: TensorFlow has a built-in cross entropy function that is more numerically stable than our implementation, and should be used under normal circumstances.
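For reference, that built-in looks roughly like this (a sketch; it expects unnormalised logits rather than probabilities, returns one loss value per example, and some_logits here is just a placeholder name for those scores):
per_example_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y_tf, logits=some_logits)
loss = tf.reduce_mean(per_example_loss)   # average over the batch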
In [27]:
def cross_entropy_tf(predictions, targets):
"""Calculate the cross entropy loss given some model predictions and target (true) values."""
targets = tf.one_hot(targets, num_classes)
# IMPLEMENT-ME: (14)
# HINT: Have a look at the TensorFlow functions tf.log, tf.reduce_sum and tf.reduce_mean
cross_entropy = ...
return cross_entropy
## Linear model
We construct a linear model with the same architecture as above. Notice how TensorFlow provides out-the-box many of the functions we had to previously define ourselves. There is another major difference between the TensorFlow code and the Numpy code that may not be immediately apparent. In TensorFlow, when we call functions like tf.nn.softmax() we are not performing a computation; rather, we are defining an operation in the computation graph that gets run later (when we feed in real data).
NOTE: This is called "define-and-run", and it is different from e.g. Numpy's "define-by-run" (imperative) approach.
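A quick illustration of the difference:
c = tf.add(tf.constant(1.0), tf.constant(2.0))   # "define": builds a graph node, computes nothing yet
print c             # a Tensor handle, not the number 3.0
print sess.run(c)   # "run": only now is the addition actually executed -> 3.0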
In [28]:
class TFLinearModel(object):
def __init__(self):
# Initialise the variables
# Tensorflow variables can be updated automatically by optimisers.
self.W = tf.Variable(W_init, dtype=tf.float32)
self.b = tf.Variable(tf.zeros([num_classes]), dtype=tf.float32)
def predictions(self, X):
"""Make predictions of classes (y values) given some inputs (X)."""
logits = self.get_logits(X)
# Compute the class probabilities.
probs = tf.nn.softmax(logits)
return probs
def loss(self, probs, y):
"""Calculate the loss given model predictions and true targets."""
data_loss = cross_entropy_tf(probs, y)
regulariser = reg_lambda * tf.nn.l2_loss(self.W)
return data_loss + regulariser
def get_logits(self, X):
# An affine function.
# IMPLEMENT-ME: (15)
# HINT: Have a look at the TensorFlow function tf.matmul
logits = ...
return logits
Remember, up until now we've been defining (constructing) a computation graph. Now we can run this graph multiple times, feeding in our data, in a training loop. The training loop is a bit more complex than before, but by wrapping up our models in classes, we can again benefit from re-using the same training loop on multiple models.
In [29]:
def train_tf_model(tf_model, epochs, report_every):
# Get the op which, when executed, will initialize the variables.
init = tf.global_variables_initializer()
# Get the model probabilities
probs = tf_model.predictions(x_tf)
# Get the model loss
loss = tf_model.loss(probs, y_tf)
# Create a Gradient Descent optimizer using our own learning rate.
# Now we create an op that computes the gradient of the loss with respect
# to the model parameters and performs one update to all of the parameters
# in the direction of their gradients.
# NOTE: TensorFlow uses "automatic differentiation" to automatically derive
# the gradients of the loss wrt all trainable parameters for us! This is
# a huge saving as models get deeper and loss functions get more complex.
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
optimizer_step = optimizer.minimize(loss)
# Actually initialize the variables (run the op).
sess.run(init)
# Training cycle.
for iteration in range(epochs):
avg_cost = 0.
total_batch = int(X.shape[0] / batch_size)
# Loop over all batches.
for i in range(total_batch):
batch_x = X[i * batch_size : (i + 1) * batch_size, :]
batch_y = y[i * batch_size : (i + 1) * batch_size]
# Run optimization op (backprop) and cost op (to get loss value).
_, c = sess.run([optimizer_step, loss], feed_dict={x_tf: batch_x,
y_tf: batch_y})
# Compute average loss.
avg_cost += c / total_batch
# Display logs per iteration/epoch step.
if iteration % report_every == 0:
print "Iteration:", '%04d' % (iteration + 1), "cost=", \
"{:.9f}".format(avg_cost)
print "Optimization Finished!"
In [30]:
# Intialize our TensorFlow Linear Model
tf_linear_model = TFLinearModel()
# Train the model for 200 epochs
train_tf_model(tf_linear_model, 200, 10)
Iteration: 0001 cost= 1.100447059
Iteration: 0011 cost= 0.918495715
Iteration: 0021 cost= 0.852024496
Iteration: 0031 cost= 0.822591126
Iteration: 0041 cost= 0.807723522
Iteration: 0051 cost= 0.799528003
Iteration: 0061 cost= 0.794729173
Iteration: 0071 cost= 0.791794062
Iteration: 0081 cost= 0.789939761
Iteration: 0091 cost= 0.788739026
Iteration: 0101 cost= 0.787945986
Iteration: 0111 cost= 0.787414253
Iteration: 0121 cost= 0.787053227
Iteration: 0131 cost= 0.786805689
Iteration: 0141 cost= 0.786634445
Iteration: 0151 cost= 0.786515296
Iteration: 0161 cost= 0.786431968
Iteration: 0171 cost= 0.786373317
Iteration: 0181 cost= 0.786331892
Iteration: 0191 cost= 0.786302507
Optimization Finished!
NOTE: Our final cost (0.786) matches the final cost reached in our Numpy implementation above. This is not by accident, but because we used exactly the same initial parameter values and the same optimizer with the same update rules on the same data. (The exact value may differ if you changed some of the hyperparameters.)
Notice how much less code we had to write for the TensorFlow model compared to the Numpy one earlier! TensorFlow is designed for deep learning and provides a number of common functions. In fact, it's possible to define a linear model like this in even fewer lines in TensorFlow, but we wanted to make this example as clear as possible!
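For instance, in TF 1.x the whole affine layer can be created in one call (a sketch, not used in this practical; tf.layers.dense creates and manages its own W and b internally):
logits = tf.layers.dense(x_tf, num_classes)
probs = tf.nn.softmax(logits)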
### Visualizing the linear model's decision boundary
In [31]:
# Here we wrap the TensorFlow model so that it behaves more like a Numpy model
# which the plot_decision_boundary function expects. Don't worry about
# the details, it's just so we can visualise the decision boundary!
class TFModelWrapper(object):
def __init__(self, model):
self._model = model
def get_logits(self, x):
return tf.get_default_session().run(self._model.get_logits(x_tf),
feed_dict={x_tf : x,
y_tf : np.zeros(x.shape[0])})
wrapper = TFModelWrapper(tf_linear_model)
plot_decision_boundary(X, wrapper)
## Using a nonlinear classifier
Let's replicate the nonlinear model in TensorFlow. TensorFlow saves us from writing more code as the model becomes more complex. In particular, this is because we don't have to implement the derivatives ourselves!
In [32]:
class TFNonLinearModel(object):
def __init__(self):
# Initialise the variables
# Tensorflow variables can be updated automatically by optimisers.
self.W = tf.Variable(non_linear_W_init, dtype=tf.float32)
self.b = tf.Variable(tf.zeros([num_hidden]), dtype=tf.float32)
self.W2 = tf.Variable(non_linear_W2_init, dtype=tf.float32)
self.b2 = tf.Variable(tf.zeros([num_classes]), dtype=tf.float32)
def predictions(self, X):
"""Make predictions of classes (y values) given some inputs (X)."""
logits = self.get_logits(X)
# Compute the class probabilities.
probs = tf.nn.softmax(logits)
return probs
def loss(self, probs, y):
"""Calculate the loss given model predictions and true targets."""
data_loss = cross_entropy_tf(probs, y)
regulariser = reg_lambda * tf.nn.l2_loss(self.W) + reg_lambda * tf.nn.l2_loss(self.W2)
return data_loss + regulariser
def get_logits(self, X):
hidden_output = self.hidden_layer(X)
# Affine map from the hidden layer to the logits.
logits = tf.matmul(hidden_output, self.W2) + self.b2
return logits
def hidden_layer(self, X):
# IMPLEMENT-ME: (16)
# HINT: Relu is available in TensorFlow using the function tf.nn.relu
linear = ...
hidden = ...
return hidden
In [33]:
# Create an instance of the TensorFlow Non Linear model
tf_non_linear_model = TFNonLinearModel()
# Train it for 10000 epochs
train_tf_model(tf_non_linear_model, 10000, 1000)
Iteration: 0001 cost= 3.430649281
Iteration: 1001 cost= 0.246243224
Iteration: 2001 cost= 0.246076643
Iteration: 3001 cost= 0.245949924
Iteration: 4001 cost= 0.245859504
Iteration: 5001 cost= 0.245793104
Iteration: 6001 cost= 0.245740667
Iteration: 7001 cost= 0.245695874
Iteration: 8001 cost= 0.245662034
Iteration: 9001 cost= 0.245632291
Optimization Finished!
### Visualizing the nonlinear model's decision boundary
In [34]:
wrapper = TFModelWrapper(tf_non_linear_model)
plot_decision_boundary(X, wrapper)
This decision boundary should look very similar to the one obtained earlier, which is of course the goal. Congrats!
## EXTRA: The effect of regularization on the decision boundary
Change the reg_lambda parameter and observe the effect it has on the decision boundary.
QUESTION: Can you explain why this happens?
In [35]:
### CHANGE ME:
### E.g. for reg_lambda = [0., 1e-4, 1e-3, 1e-1, 1.]:
for v in [0., 1e-4, 1e-3, 1e-1, 1.]:
print "Setting reg_lambda to ", v
reg_lambda = v
tf_non_linear_model = TFNonLinearModel()
train_tf_model(tf_non_linear_model, 10000, 1000)
wrapper = TFModelWrapper(tf_non_linear_model)
plot_decision_boundary(X, wrapper)
Setting reg_lambda to 0.0
Iteration: 0001 cost= 3.293793201
Iteration: 1001 cost= 0.045659613
Iteration: 2001 cost= 0.034318779
Iteration: 3001 cost= 0.029046485
Iteration: 4001 cost= 0.025907867
Iteration: 5001 cost= 0.023753971
Iteration: 6001 cost= 0.022159759
Iteration: 7001 cost= 0.020927856
Iteration: 8001 cost= 0.019980222
Iteration: 9001 cost= 0.019212436
Optimization Finished!
Setting reg_lambda to 0.0001
Iteration: 0001 cost= 3.307478905
Iteration: 1001 cost= 0.076687112
Iteration: 2001 cost= 0.071806177
Iteration: 3001 cost= 0.070404142
Iteration: 4001 cost= 0.069816492
Iteration: 5001 cost= 0.069518209
Iteration: 6001 cost= 0.069387846
Iteration: 7001 cost= 0.069318637
Iteration: 8001 cost= 0.069280162
Iteration: 9001 cost= 0.069252878
Optimization Finished!
Setting reg_lambda to 0.001
Iteration: 0001 cost= 3.430649281
Iteration: 1001 cost= 0.246243224
Iteration: 2001 cost= 0.246076643
Iteration: 3001 cost= 0.245949924
Iteration: 4001 cost= 0.245859504
Iteration: 5001 cost= 0.245793104
Iteration: 6001 cost= 0.245740667
Iteration: 7001 cost= 0.245695874
Iteration: 8001 cost= 0.245662034
Iteration: 9001 cost= 0.245632291
Optimization Finished!
Setting reg_lambda to 0.1
Iteration: 0001 cost= 16.979393005
Iteration: 1001 cost= 1.086608529
Iteration: 2001 cost= 1.086589098
Iteration: 3001 cost= 1.086582422
Iteration: 4001 cost= 1.086578965
Iteration: 5001 cost= 1.086575508
Iteration: 6001 cost= 1.086574912
Iteration: 7001 cost= 1.086574316
Iteration: 8001 cost= 1.086573124
Iteration: 9001 cost= 1.086572766
Optimization Finished!
Setting reg_lambda to 1.0
Iteration: 0001 cost= 140.149795532
Iteration: 1001 cost= 1.098611832
Iteration: 2001 cost= 1.098611832
Iteration: 3001 cost= 1.098611832
Iteration: 4001 cost= 1.098611832
Iteration: 5001 cost= 1.098611832
Iteration: 6001 cost= 1.098611832
Iteration: 7001 cost= 1.098611832
Iteration: 8001 cost= 1.098611832
Iteration: 9001 cost= 1.098611832
Optimization Finished!
# More resources
• TensorFlow has a great website where you can play with different toy datasets, different model and training choices, and see how that affects model performance: http://playground.tensorflow.org/
• deeplearn.js is a library for doing deep learning directly in the browser: https://pair-code.github.io/deeplearnjs/. Have a look at the "Model Builder" section and play around with training a few feedforward models on MNIST.
# NB: Before you go (5min)
Pair up with someone else and go through the questions in "Learning Objectives" at the top. Take turns explaining each of these to each other, and be sure to ask the tutors if you're both unsure!
https://nigerianscholars.com/past-questions/chemistry/question/452941/
# A gaseous metallic chloride MCl consists of 20.22% of M by mass. The formula of ...
### Question
A gaseous metallic chloride MCl consists of 20.22% of M by mass. The formula of the chloride is?
[ M = 27, Cl = 35.5]
### Options
A) MCl
B) MCl$$_2$$
C) MCl$$_3$$
D) M$$_2$$Cl$$_6$$
### Explanation:
| | M | Cl |
| --- | --- | --- |
| % composition | 20.22 | 79.78 |
| Atomic mass | 27 | 35.5 |
| Mole ratio | $$\frac{20.22}{27}$$ = 0.75 | $$\frac{79.78}{35.5}$$ = 2.25 |
| Divide by the smaller (0.75) | 1 | 3 |

(The percentage of chlorine is 100 - 20.22 = 79.78, since the compound contains only M and Cl.)
The formula of the Chloride = MCl$$_3$$
## Discussion (2)
• Okafor Chinaza
How do you get the composition percentage of chlorine
http://www.maplesoft.com/support/help/Maple/view.aspx?path=OreTools/Modular | Overview of the OreTools[Modular] Subpackage - Maple Programming Help
Overview of the OreTools[Modular] Subpackage
Calling Sequence
OreTools[Modular][command](arguments)
command(arguments)
Description
• The OreTools[Modular] subpackage provides basic arithmetic in an Ore polynomial ring whose constant field consists of p elements, where p is a prime. The operations in this subpackage are used to implement modular techniques for computing GCD's and LCM's.
• Each command in the OreTools[Modular] subpackage can be accessed by using either the long form or the short form of the command name in the command calling sequence.
List of OreTools[Modular] Subpackage Commands
The following is a list of available commands.
- Ring arithmetic: Add, Minus, ScalarMultiply, Multiply
- Polynomial operations: Content, ModularOrePoly, MonicAssociate, Primitive
- Euclidean algorithms: FractionFreeRightEuclidean, RightEuclidean
- GCRD and LCLM computation: GCRD, LCLM
To display the help page for a particular OreTools[Modular] command, see Getting Help with a Command in a Package.
https://ftp.aimsciences.org/article/doi/10.3934/jgm.2010.2.343
# Variational integrators for discrete Lagrange problems
• A discrete Lagrange problem is defined as a discrete Lagrangian system endowed with a constraint submanifold in the space of 1-jets of the discrete fibred manifold that configures the system. After defining the concepts of admissible section and infinitesimal admissible variation, the objective of these problems is to find admissible sections that are critical for the Lagrangian of the system with respect to the infinitesimal admissible variations. For admissible sections satisfying a certain regularity condition, we prove that critical sections are the solutions of an extended unconstrained discrete variational problem canonically associated to the problem of Lagrange (discrete Lagrange multiplier rule). Next, we define the concept of Cartan 1-form, establish a Noether theory for symmetries and introduce a notion of "constrained variational integrator" that we characterize through a Cartan equation ensuring its symplecticity. Under a certain regularity condition of the problem of Lagrange, we prove the existence and uniqueness of this kind of integrators in the neighborhood of a critical section, showing then that such integrators can be constructed from a generating function of the second class in the sense of symplectic geometry. Finally, the whole theory is illustrated with three elementary examples.
Mathematics Subject Classification: 37J60, 37M15, 49J15, 65P10.
https://jharkhandboardsolution.com/jac-class-9th-science-solutions-chapter-12/ | # JAC Class 9th Science Solutions Chapter 12 Sound
## JAC Board Class 9th Science Solutions Chapter 12 Sound
JAC Class 9th Science Sound InText Questions and Answers
Page 162
Question 1.
How does the sound produced by a vibrating object in a medium reach your ear?
When an object vibrates, it sets the particles of the medium around it in vibration. The particles in the medium in contact with the vibrating object are displaced from their equilibrium position. It then exerts a force on the adjacent particles. After displacing the adjacent particle, the first particle of the medium comes back to its original position. This process continues in the medium till the sound reaches our ear.
Page 163
Question 1.
Explain how sound is produced by your school bell.
When the school bell is struck, it vibrates forward and backward. As it vibrates, it creates a series of compressions and rarefactions in the surrounding air, resulting in the production of sound.
Question 2.
Why are sound waves called mechanical waves?
Sound waves need a material medium to propagate; therefore, they are called mechanical waves. Sound waves can propagate through a medium only because of the interaction of the particles present in that medium.
Question 3.
Suppose you and your friend are on the moon. Will you be able to hear any sound produced by your friend?
No, because a sound wave needs a medium through which it can propagate. Since there is no atmosphere on the moon, no material medium is available and we cannot hear any sound on the moon.
Page 166
Question 1.
Which wave property determines:
(a) loudness
(b) pitch?
(a) Amplitude
(b) Frequency
Question 2.
Guess which sound has a higher pitch: guitar or car horn?
Guitar has a higher pitch than car horn, because sound produced by the strings of guitar has a higher frequency than that of a car horn. The higher the frequency, the higher is the pitch.
Page 166
Question 1.
What are wavelength, frequency, time period and amplitude of a sound wave?
1. Wavelength: The distance between two consecutive compressions or two consecutive rarefactions is known as wavelength. Its SI unit is metre (m).
2. Frequency: The number of complete oscillations per second is known as the frequency of a sound wave. It is measured in hertz (Hz).
3. Time period: The time taken by two consecutive compressions or rarefactions to cross a fixed point is called the time period of the wave.
4. Amplitude: The magnitude of the maximum disturbance of the medium particles on either side of their mean value is called the amplitude of the sound wave.
Question 2.
How are the wavelength and frequency of a sound wave related to its speed?
Speed, wavelength and frequency of a sound wave are related by the following equation:
Speed (v) = Wavelength (λ) × Frequency (ν)
v = λ × ν
Question 3.
Calculate the wavelength of a sound wave whose frequency is 220 Hz and speed is 440 m/s in a given medium.
Frequency of the sound wave, ν = 220 Hz
Speed of the sound wave, v = 440 ms-1
For a sound wave,
Speed = Wavelength × Frequency
v = λ × ν
∴ λ = $$\frac{v}{\nu}$$ = $$\frac{440}{220}$$ = 2 m
Hence, the wavelength of the sound wave is 2 m.
Question 4.
A person is listening to a tone of 500 Hz sitting at a distance of 450 m from the source of the sound. What is the time interval between successive compressions from the source?
The time interval between two successive compressions is equal to the time period of the wave. This time period is reciprocal of the frequency of the wave and is given by the relation:
T = $$\frac{1}{\text { Frequency }}$$ = $$\frac{1}{500}$$ = 0.002 s
Page 166
Question 1.
Distinguish between loudness and intensity of sound.
Intensity of a sound wave is defined as the amount of sound energy passing through a unit area per second. Loudness is a measure of the response of the ear to the sound. The loudness of a sound is defined by its amplitude.
Page 167
Question 1.
In which of the three media, air, water or iron, does sound travel the fastest at a particular temperature?
The speed of sound depends on the nature of the medium. Sound travels the fastest in solids. Its speed decreases in liquids and it is the slowest in gases. Therefore, for a given temperature, sound travels fastest in iron.
Page 168
Question 1.
An echo is heard in 3s. What is the distance of the reflecting surface from the source, given that the speed of sound is 342 ms-1?
Speed of sound, v = 342 ms-1
Echo returns in time, t = 3s
Distance travelled by sound
= v × t = 342 × 3 = 1026 m
In the given time interval, sound has to travel a distance that is twice the distance between the reflecting surface and the source.
Hence, the distance of the reflecting surface from the source
= $$\frac{1026}{2}$$ m = 513 m.
Page 169
Question 1.
Why are the ceilings of concert halls curved?
The ceilings of concert halls are curved so that sound, after reflection from the ceiling, spreads uniformly in all directions and reaches all parts of the hall.
Page 170
Question 1.
What is the audible range of the average human ear?
The audible range of an average human ear is 20 Hz to 20,000 Hz.
Question 2.
What is the range of frequencies associated with
(a) Infrasound?
(b) Ultrasound?
(a) Infrasound has frequencies less than 20 Hz.
(b) Ultrasound has frequencies more than 20,000 Hz.
Page 172
Question 1.
A submarine emits a sonar pulse, which returns from an underwater cliff in 1.02s. If the speed of sound in salt water is 1531 m/s, how far away is the cliff?
Time taken by the sonar pulse to return, t = 1.02s
Speed of sound in salt water, v = 1531 ms-1
Distance travelled by the sonar pulse
= Speed of sound × Time taken
= 1.02 × 1531 = 1561.62 m
Distance travelled by the sonar pulse during its transmission and reception in water
= 2 × actual distance = 2d
Actual distance of the cliff from the submarine,
d = $$\frac{\text{Distance travelled by the sonar pulse}}{2}$$ = $$\frac{1561.62}{2}$$ = 780.81 m
JAC Class 9th Science Sound Textbook Questions and Answers
Question 1.
What is sound and how is it produced?
Sound is a form of energy which gives the sensation of hearing. It is produced by the vibrations caused in the medium by vibrating objects.
Question 2.
Describe with the help of a diagram, how compressions and rarefactions are produced in air near a source of sound.
When a vibrating body moves forward, it creates a region of high pressure in its vicinity. This region of high pressure is known as compression. When it moves backward, it creates a region of low pressure in its vicinity. This region is known as rarefaction. As the body continues to move forward and backward, it produces a series of compressions and rarefactions.
Question 3.
Cite an experiment to show’ that sound needs a material medium for its propagation.
Take an electric bell and an air tight glass bell jar connected to a vacuum pump. Suspend the bell inside the jar, and press the switch of the bell. You will be able to hear the bell ring. Now pump out the air from the glass jar. The sound of the bell will become progressively fainter and after some time, the sound will not be heard. This is so because almost all air has been pumped out. This shows that sound needs a material medium to travel.
Question 4.
Why is sound wave called a longitudinal wave?
Sound wave is called a longitudinal wave because it is produced by compressions and rarefactions in the air. The air particles vibrate parallel to the direction of propagation of sound.
Question 5.
Which characteristics of the sound help you to identify your friend by his voice while sitting with others in a dark room?
The quality or timbre of sound enables us to identify our friend by his voice.
Question 6.
Flash and thunder are produced simultaneously. But thunder is heard a few seconds after the flash is seen, why?
The speed of sound (344 m/s) is less than the speed of light (3 × 108 m/s). Sound of thunder takes more time to reach the earth as compared to light. Hence, a flash is seen before we hear a thunder.
Question 7.
A person has a hearing range from 20 Hz to 20 kHz. What are the typical wavelengths of sound waves in air corresponding to these two frequencies? Take the speed of sound in air as 344 ms-1.
For a sound wave,
Speed = Wavelength × Frequency, i.e., v = λ × ν
Speed of sound in air = 344 m/s (Given)
(a) For ν = 20 Hz,
λ1 = $$\frac{v}{\nu}$$ = $$\frac{344}{20}$$
= 17.2 m
(b) For ν = 20,000 Hz,
λ2 = $$\frac{v}{\nu}$$ = $$\frac{344}{20,000}$$
= 0.0172 m
Hence, for humans, the wavelength range for hearing is 0.0172 m to 17.2 m.
Question 8.
Two children are at opposite ends of an aluminium rod. One strikes the end of the rod with a stone. Find the ratio of times taken by the sound wave in air and in aluminium to reach the second child.
Velocity of sound in air = 346 m/s
Velocity of sound wave in aluminium = 6420 m/s
Let the length of the rod be l
Time taken by the sound wave in air,
t1 = $$\frac{l}{\text{Velocity in air}}$$
Time taken by the sound wave in aluminium,
t2 = $$\frac{l}{\text{Velocity in aluminium}}$$
Therefore, $$\frac{t_{1}}{t_{2}}$$ = $$\frac{\text{Velocity in aluminium}}{\text{Velocity in air}}$$ = $$\frac{6420}{346}$$
= 18.55
Question 9.
The frequency of a source of sound is 100 Hz. How many times does it vibrate in a minute?
Frequency = 100 Hz (given)
This means that the source of sound vibrates 100 times in one second. Therefore, number of vibrations in 1 minute, i.e., in 60 seconds = 100 × 60 = 6000 times.
Question 10.
Does sound follow the same laws of reflection as light does? Explain.
Sound follows the same laws of reflection as light does. The incident sound wave and the reflected sound wave make equal angles with the normal to the surface at the point of incidence. Also, the incident sound wave, the reflected sound wave and the normal to the point of incidence, all lie in the same plane.
Question 11.
When a sound is reflected from a distant object, an echo is produced. Let the distance between the reflecting surface and the source of sound production remains the same. Do you hear echo sound on a hotter day?
An echo is heard only if the reflected sound reaches the ear at least 0.1 s after the original sound.
Time taken = $$\frac{\text{Total distance}}{\text{Velocity}}$$
On a hotter day, the velocity of sound is higher, so the echo returns sooner. If the time taken by the echo falls below 0.1 s, the echo will not be heard.
Question 12.
Give two practical applications of reflection of sound waves.
Two practical applications of reflection of sound waves are:
1. Reflection of sound is used to measure the distance and speed of underwater objects. This technique is known as SONAR.
2. Working of a stethoscope is also based on reflection of sound. In a stethoscope, the sound of the patient’s heartbeat reaches the doctor’s ear by multiple reflections.
Question 13.
A stone is dropped from the top of a tower 500 m high into a pond of water at the base of the tower. When is the splash heard at the top? Given, g =10 ms-2 and speed of sound = 340 ms-1.
Height of the tower, s = 500m
Velocity of sound, v = 340 ms-1.
Acceleration due to gravity, g = 10 ms-2
Initial velocity of the stone, u = 0 (since the stone is initially at rest)
Let the time taken by the stone to fall to the base of the tower be t1
According to the second equation of motion:
s= ut1 + $$\frac{1}{2} \mathrm{gt}_{1}^{2}$$
500 = (0 × t1) + $$\left(\frac{1}{2} \times 10 \times t_{1}^{2}\right)$$
$$\mathrm{t}_{1}^{2}$$ = 100
t1 = 10s
Now, time taken by the sound to reach the top from the base of the tower, t2 = $$\frac{500}{340}$$ = 1.47s
Therefore, the splash is heard at the top after time, t.
Where, t = t1 + t2 = 10 + 1.47 = 11.47s.
Question 14.
A sound wave travels at a speed of 339 ms-1. If its wavelength is 1.5 cm, what is the frequency of the wave? Will it be audible?
Speed of sound, v = 339 ms-1
Wavelength of sound,
λ = 1.5 cm = 0.015 m
Speed of sound = Wavelength × Frequency
v = λ × ν
∴ ν = $$\frac{v}{λ}$$ = $$\frac{339}{0.015}$$ = 22600 Hz
The frequency range of audible sound for humans is between 20 Hz to 20,000 Hz. Since the frequency of the given sound is more than 20,000 Hz, it is not audible.
Question 15.
What is reverberation? How can it be reduced?
The repeated or multiple reflections of sound in a large enclosed space is known as reverberation. The reverberation can be reduced by covering the ceiling and walls of the enclosed space with sound absorbing materials, such as fibre board, loose woollens, etc.
Question 16.
What is loudness of sound? What factors does it depend on?
Loudness is a measure of sound energy reaching the ear per second. Loudness depends on the amplitude of vibrations. In fact, loudness is proportional to the square of the amplitude of vibrations.
Question 17.
Explain how bats use ultrasound to catch a prey.
Bats produce high – pitched ultrasonic squeaks. These high – pitched squeaks are reflected by objects and their preys and returned to the bats’ ears. This allows the bat to know the distance and direction of their prey.
Question 18.
How is ultrasound used for cleaning?
Objects to be cleaned are put in a cleaning solution and ultrasonic sound waves are passed through that solution. The high frequency of these ultrasonic waves detaches the dirt from the objects.
Question 19.
Explain the working and application of a sonar.
SONAR is an acronym for Sound Navigation And Ranging. It is an acoustic device used to measure the depth, direction and speed of underwater objects, such as submarines and shipwrecks, with the help of ultrasound. It is also used to measure the depth of seas and oceans.
A beam of ultrasonic sound is produced and transmitted by the transducer (a device that produces ultrasonic sound) of the SONAR, which travels through sea water.
The echo produced by the reflection of this ultrasonic sound is detected and recorded by the detector, which is converted into electrical signals. The distance (d) of the underwater object is calculated from the time (t) taken by the echo to return with speed (v) which is given by 2d = v × t.
This method of measuring distance is also known as ‘echo – ranging’.
Question 20.
A sonar device on a submarine sends out a signal and receives an echo 5s later. Calculate the speed of sound in water if the distance of the object from the submarine is 3625m.
Time taken to hear the echo, t = 5s
Distance of the object from the submarine, d = 3625m
Total distance travelled by the sonar waves during the transmission and reception in water = 2d
Velocity of sound in water,
v = $$\frac{2 \mathrm{~d}}{t}$$ = $$\frac{2 \times 3625}{5}$$ = 1450ms-1
Question 21.
Explain how defects in a metal block can be detected using ultrasound.
Ultrasonic waves are made to pass through one face of the metal block, and detectors are placed on the opposite face to receive the transmitted waves. If there is even a small defect, such as a crack, inside the block, the ultrasound is reflected back before reaching the detectors, indicating the presence of the flaw. Ordinary (audible) sound is not suitable for this, as its longer wavelengths bend around small defects.
http://cms.math.ca/cjm/kw/manifold?page=2 | location: Publications → journals
Search results
Search: All articles in the CJM digital archive with keyword manifold
Expand all Collapse all Results 26 - 28 of 28
26. CJM 2000 (vol 52 pp. 695)
Carey, A.; Farber, M.; Mathai, V.
Correspondences, von Neumann Algebras and Holomorphic $L^2$ Torsion Given a holomorphic Hilbertian bundle on a compact complex manifold, we introduce the notion of holomorphic $L^2$ torsion, which lies in the determinant line of the twisted $L^2$ Dolbeault cohomology and represents a volume element there. Here we utilise the theory of determinant lines of Hilbertian modules over finite von~Neumann algebras as developed in \cite{CFM}. This specialises to the Ray-Singer-Quillen holomorphic torsion in the finite dimensional case. We compute a metric variation formula for the holomorphic $L^2$ torsion, which shows that it is {\it not\/} in general independent of the choice of Hermitian metrics on the complex manifold and on the holomorphic Hilbertian bundle, which are needed to define it. We therefore initiate the theory of correspondences of determinant lines, that enables us to define a relative holomorphic $L^2$ torsion for a pair of flat Hilbertian bundles, which we prove is independent of the choice of Hermitian metrics on the complex manifold and on the flat Hilbertian bundles. Keywords:holomorphic $L^2$ torsion, correspondences, local index theorem, almost Kähler manifolds, von~Neumann algebras, determinant linesCategories:58J52, 58J35, 58J20
27. CJM 1999 (vol 51 pp. 1123)
Arnold, V. I.
First Steps of Local Contact Algebra We consider germs of mappings of a line to contact space and classify the first simple singularities up to the action of contactomorphisms in the target space and diffeomorphisms of the line. Even in these first cases there arises a new interesting interaction of local commutative algebra with contact structure. Keywords:contact manifolds, local contact algebra, Diracian, contactianCategories:53D10, 14B05
28. CJM 1999 (vol 51 pp. 585)
Mansfield, R.; Movahedi-Lankarani, H.; Wells, R.
Smooth Finite Dimensional Embeddings We give necessary and sufficient conditions for a norm-compact subset of a Hilbert space to admit a $C^1$ embedding into a finite dimensional Euclidean space. Using quasibundles, we prove a structure theorem saying that the stratum of $n$-dimensional points is contained in an $n$-dimensional $C^1$ submanifold of the ambient Hilbert space. This work sharpens and extends earlier results of G.~Glaeser on paratingents. As byproducts we obtain smoothing theorems for compact subsets of Hilbert space and disjunction theorems for locally compact subsets of Euclidean space. Keywords:tangent space, diffeomorphism, manifold, spherically compact, paratingent, quasibundle, embeddingCategories:57R99, 58A20
Page Previous 1 2 | 2014-09-03 07:12:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9098812341690063, "perplexity": 1328.3064616846298}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535925433.20/warc/CC-MAIN-20140901014525-00248-ip-10-180-136-8.ec2.internal.warc.gz"} |
https://quant.stackexchange.com/questions/43963/does-wacc-not-depend-on-the-cost-of-debt | # Does WACC not depend on the cost of debt?
According to chapter 17 of Ross's Corporate Finance (Brazilian translation of 2nd edition),
$$r_{WACC} = \frac{S}{S+B}r_S + \frac{B}{S+B}r_B(1 - T)$$
and
$$r_S = r_0 + \frac{B}{S}(1 - T)(r_0 - r_B)$$
where $$S$$ is equity, $$B$$ is debt, $$T$$ is tax rate, $$r_0$$ is the unlevered cost of equity, $$r_S$$ is the levered cost of equity, and $$r_B$$ is the cost of debt.
By replacing $$r_S$$ in the first formula and simplifying, I get
$$r_{WACC} = \Bigg{(}\frac{S + B(1 - T)}{S + B}\Bigg{)}r_0$$
which would mean the weighted average cost of capital does not depend on the cost of debt, $$r_B$$. This formula yields the same WACC as the one in the book, and I checked on some other examples as well.
Did I get this right? If so, is this because the higher tax shield compensates the additional risk from higher interest payments? If not, what am I doing wrong?
• The first equation is definitional and is always true. The second equation holds in the "pure Modigliani Miller case", where there is no "cost of financial distress". In this case the third equation also holds: the company pays less taxes to the government (the tax shield) and hence $WACC < r_0$ (also $r_s>r_0$ because of the higher risk to stockholders, and $r_d<r_0$ by assumption. The only loser is the government). Note however that if debt is very high the "cost of financial distress" cannot be neglected. – noob2 Apr 12 '20 at 13:41
As noted in the comments, you arrive at the correct conclusion, given your assumptions.
This result is usually referred to as the Modigliani-Miller theorem:
The basic theorem states that in the absence of taxes, bankruptcy costs, agency costs, and asymmetric information, and in an efficient market, the value of a firm is unaffected by how that firm is financed.
Since the value of the firm depends neither on its dividend policy nor its decision to raise capital by issuing stock or selling debt, the Modigliani–Miller theorem is often called the capital structure irrelevance principle.
In practice adding leverage to a firm has a number of benefits in good economic conditions (due to tax benefits, agency cost reduction, juicing up returns for equity holders) but is detrimental in bad economic conditions (bankruptcy costs, inability to access cheap capital in a market with asymmetrical information).
In response to ‘WACC may not depend on debt’. The weighted average (WACC) accounts for the minimum required return for a set capital structure. If a company has 100% debt financing that would affect the WACC. Similarly it can be seen that up till a point, increasing the level of debt reduces the WACC and more equity increases the WACC. The reason is that by increasing debt financing creates a tax deduction where more debt = less tax which is a cost advantage... that is until the interest expense on that tax rises higher than the tax deduction. As far as I know the formula is a basic weighted average WACC = Rp X weightRP + RE X weightRP + before tax cost of D X weight of D (1-tax rate)... The cost of debt is important to account for tax. If it was merely the cost of equity that was used when debt financing was used in the capital structure, the WACC would not account for the benefits of debt financing and the average cost would be higher
How are you deriving $$r_s$$? If you use a WACC calculator like this one WACC calculator, the cost of equity definitely matters. Based on your derivation, let's look at an extreme case where a company is financed 100% through debt. It would say that the cost of debt is irrelevant, but that doesn't make sense. | 2021-01-21 15:11:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 12, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46610715985298157, "perplexity": 1480.2746214008655}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703524858.74/warc/CC-MAIN-20210121132407-20210121162407-00673.warc.gz"} |
https://phenomenaldocs.com/dif-settrade-jvo/4390e7-what-is-center-frequency-and-bandwidth | Rayleigh bandwidth is the central concept in radar … The frequency response of a system is usually specified with a single frequency sinewave as input. Quasi-Static Approximation of the Doppler Spectrum, Chapter 5: The bandwidth is defined in terms of bits/second. While, 'Center Frequency' is the frequency of operation associated with the antenna. The term in question is obviously composed of “band” and “width.” This “band” refers to a band, or range, of frequencies, and “width” refers to the appearance of this band when viewed in the frequency domain. Channel bandwidth is the frequency range that constitutes the channel. Second, there is no fixed relationship between center frequency and bandwidth. Companies affiliated with GlobalSpec can contact me when I express interest in their product or service. (see graph below) Quality factor: This parameter is the ratio of the center frequency to the bandwidth. Bandwidth: Bandwidth refers to how narrow or wide your boost or cut is. Bandwidth, then, is applicable to systems such as filters and communications channels as well as the signals that are conveyed or processed. So when you ask "what should my settings be for bandwidth, and center frequency", you are asking for filter parameters to be given to you. This 5 GHz Wi-Fi band or to be more precise the 5.8 GHz band provides additional bandwidth, and being at a higher frequency, equipment costs are slightly higher, although usage, and hence interference is less.It can be used by 802.11a & n. The center frequency is mostly irrelevant… a 200 khz band pass filter will pass a range of frequencies that is 200 khz wide. The main difference between bandwidth and frequency is that frequency refers to the number of times that a component of a signal oscillates per second, whereas bandwidth refers to the range of frequencies that can be contained within a signal. One is what we usually call (sub 6 Ghz) and the other is what we usually call millimeter wave. First, we have the –3dB version of bandwidth. What is the center frequency of a filter with a Q of 15 and bandwidth of 1.0 kHz? The Nyquist formula gives the upper bound for the data rate of a transmission system by calculating the bit rate directly from the number of signal levels and the bandwidth of the system. The 'Bandwidth' of an antenna is considered for some given amount of return loss i.e -10dB or -15dB. Depending on the ranges, the maximum bandwidth and subcarrier spacing varies. The bandwidth is 64 Hz, and the half power points are ± 32 Hz of the center resonant frequency: BW = Δf = f h-f l = 355-291 = 64 f l = f c - Δf/2 = 323-32 = 291 f h = f c + Δf/2 = 323+32 = 355 . ω0= ωω12 (1.12) As we see from the plot on Figure 2 the bandwidth increases with increasing R. Equivalently the sharpness of the resonance increases with decreasing R. Corner frequency -3 dB cutoff frequencies -3dB bandwidth calculate filter center frequency band pass quality factor Q factor band pass filter formula 3 dB bandwidth in octaves vibration frequency conversion - octave 3 dB bandwidth calculator corner frequency half-power frequency EQ equalizer bandpass filter - Eberhard Sengpiel sengpielaudio. Specifically, in a noise-free channel, Nyquist tells us that we can transmit data at a rate of up to C=2Blog2MC=2Blog2M bits per second, where B is the bandwidth (in Hz) and Mis the number of signal levels. 3.7.3.1 Bandwidth and frequency response. 
Thus the spectrum bandwidth is another parameter that is desired. UNLIMITED That sort of vague information doesn’t belong anywhere near an engineering project, though, so let’s look more closely. It looks like they have defined the bandwidth to be frequencies where the response amplitude is greater 0.707x the peak amplitude response, which in this case is an amplitude response greater than 0.5. How is Bandwidth Measured? This allows high fidelity signal transmission. Pr cis of Waveform Analysis Techniques, Chapter 8: The design of RF systems involves extensive analysis of how signal frequencies change and interact, and references to bandwidth are by no means uncommon. The wavelet transforms provide a unified framework for getting around the Heisenberg Uncertainly Principle that the Fourier Transform suffers from. Problem 2 Bandwidth of a FM Signal (10 points) A 10 MHz carrier signal is frequency modulated by a sinusoidal signal of unity amplitude and with a FM frequency deviation constant k f = 10 Hz/V. Shahin Farahani, in ZigBee Wireless Networks and Transceivers, 2008. The bandwidth of each is what matches the input to the speaker with the speakers design criteria being catered to. A frequency responsive device, such as a tuned amplifier, filter, etc., is tested for center frequency and bandwidth. Many good points in this article, but some muddling occurs in trying to explain the meaning of bandwidth. Help Center Detailed answers to any questions you might have ... System A : Bandwidth = 1 Khz , Carrier frequency = 1 Ghz. Thus for the determination of the ground velocity, only the center frequency of the Doppler spectrum and its relation to the vehicle ground velocity are required. 3 dB … Bandwidth is defined as the total amount of data transmitted per unit time. Zin plot I get a bandwidth of 0 and a center frequency of 30 MHz with this syntax: center_freq(db(S11),3) bandwidth_func(db(S11),1) But I get a bandwidth of 327 kHz and a center frequency of 62.84 MHz when I use this syntax: center_freq(db(zin(S11)),3) bandwidth_func(db(zin(S11)),1) Why is that? Then the center frequency is midway between the frequencies where the response amplitude is 0.5. According to the center frequency, look up the table and initially determine C1=C2=C calculate resistance , that is , Calculate bandwidth based on upper and lower cutoff frequencies , Calculate the quality factor Calculate by Q and determine the resistances Rf and RF. For example, if we’re talking about a baseband signal, bandwidth might refer to a frequency range extending from 0 Hz to some (positive) frequency related to the baseband spectrum. This diagram conveys the general idea: Finally, there’s the issue of negative frequencies. subcarrierSpacing: Subcarrier spacing to be used in this BWP for all channels and reference signals unless explicitly configured elsewhere. In the next article, we’ll continue this discussion by exploring bandwidth in the context of digital signals, communication systems, and processors. To give some concrete examples of bandwidth, here is … The bandwidth is often specified in terms of its Fractional Bandwidth (FBW). System B : Bandwidth = 1 Khz , Carrier frequency = 1 Mhz. as well as subscriptions and other promotional notifications. 
If everyone understands the point of comparison, there shouldn’t be any confusion, but it’s good to remember that “wideband” and “narrowband” might mean very different things to, for example, a researcher working with ultra-wideband systems and an analog designer accustomed to low-noise op-amp circuits that don’t need to process frequencies greater than a few tens of kilohertz. In NR, there are roughly two large frequency range specified in 3GPP. Thus, the bandwidth of most hearing aid receivers is a compromise of current drain, size, and the desired frequency region where special attention is needed. You can have a 1 Hz bandwidth @ 10 GHz or a 100 MHz bandwidth @ 50 MHz. The term “bandwidth” arises in a wide variety of engineering discussions. If we used 16-QAM in both systems will baudrate be the same ? GlobalSpec collects only the personal information you have entered above, your device information, and location data. Thus the spectrum bandwidth is another parameter that is desired. Only the first few sidebands will contain the major share of the power (98% of the total power) and therefore only these few bands are considered to be significant sidebands.. As a rule of thumb, often termed as Carson’s Rule, 98% of the signal power in FM is contained within a bandwidth equal to the deviation frequency, plus the modulation frequency doubled. The point here is that performance will not be significantly degraded if channels are spaced such that only 1% of signal power is interfering with adjacent channels. The Doppler quality factor Q thus is a measure of the accuracy of the measurement of the spectrum center frequency. CENTER FREQUENCY AND BANDWIDTH OF THE DOPPLER SPECTRUM, Industrial Computers and Embedded Systems, Material Handling and Packaging Equipment, Electrical and Electronic Contract Manufacturing, Chapter 3: 4 Bandwidth: Bandwidth refers to how narrow or wide your boost or cut is. Cut off frequency 2: This is the higher frequency at which the transfer function equals of the maximum value: Bandwidth: This variable is the width of the pass band. These can also be commonly be found in computing. Fortunately, the exact shape of the spectrum is not always required. The bandwidth of a transmission system or a component is usually defined by the 3-dB bandwidth. The operational bandwidth is limited to 150 kHz, with 25 kHz on each side of that for gaurd bands. It is denoted by “B”. Thus the spectrum bandwidth is another parameter that is desired. Radio Frequency Bands. Frequency is defined as the total number of complete cycles per unit time. A low pass audio filter would pass bass sounds to a subwoofer and block any other frequency, and a high pass filter does the same for passing only applicable sounds to a tweeter. Also plotted is the classical rule of thumb that a critical band is 100 Hz wide for center frequencies below 500 Hz, and 20% of the center frequency above 500 Hz. i) Search the center frequency. Answer: Start with the expression: 3 The value of the field shall be interpreted as resource indicator value (RIV). As the word monochromatic means one color, a For a notch, or bandstop filter, the center frequency is also referred to as the null frequency or the notch frequency. The geometric center frequency corresponds to a mapping of the DC response of the prototype lowpass filter, which is a resonant frequency sometimes equal to the peak frequency … A decreasing sweep frequency signal is applied to the device and the output detected. 
What is the center frequency of a filter with a Q of 15 and bandwidth of 1.0 kHz? Presenting the author s exact theory for the spectrum of an airborne Doppler radar, this book is supported by graphic illustrations that assist the reader in understanding the theoretical predictions. I agree to receive commercial messages from GlobalSpec including product announcements and event invitations, It is denoted by “f”. The bandwidth is expressed in rad/TimeUnit, where TimeUnit is the TimeUnit property of sys. If the message bandwidth is m Hz, then channel bandwidth required to transmit AM is 2m Hz. Please try again in a few minutes. GlobalSpec may share your personal information and website activity with our clients for which you express explicit interest, or with vendors looking to reach people like you. Cut off frequency 2: This is the higher frequency at which the transfer function equals of the maximum value: Bandwidth: This variable is the width of the pass band. Whenever possible, I like to start with a definition that is based on a term’s constituent words, or on the etymology when constituent words are not readily recognizable. t. If in para “Modulated Signals and Channel Spacing”, term (-20 dB) is used along with “99% bandwidth” it will give better clarity. Its full width at half maximum bandwidth is 8.9 nm, corresponding to 3.9 THz. Whether a filter is low or high pass is determined by its center frequency. Next, we have bandwidth in the context of modulated signals and channel spacing. Aircraft Doppler Stabilization and Navigation, Chapter 4: With these data, we can determine the ratio of the spectrum center frequency to the spectrum bandwidth, which I call the Doppler spectrum quality factor Q. Here a few frequencies below and above its cutoff frequency are affected and the quality factor Q is specified as a high number. fb = bandwidth(sys) returns the bandwidth of the SISO dynamic system model sys.The bandwidth is the first frequency where the gain drops below 70.79% (-3 dB) of its DC value. A frequency responsive device, such as a tuned amplifier, filter, etc., is tested for center frequency and bandwidth. This brief analysis has already uncovered a problem. The filter has therefore a larger bandwidth and the so-called quality factor Q is specified as a low number. No, because small amounts of energy inevitably extend far beyond a spectrum’s center frequency. Center Frequency: The center frequency refers to the frequency which resides at the very center of the bell shaped boost or cut that you are making. The lowest frequency will be 100 khz below the center frequency and the upper limit will be 100 khz above the center frequency. The operational bandwidth is limited to 150 kHz, with 25 kHz on each side of that for gaurd bands. Don't have an AAC account? As a special case, the center frequency fo=1KHz is known, so C1=C2=C=0.01uF Likewise, if we describe a bandwidth as wide or narrow, we’re actually comparing the bandwidth to something else. What is Frequency. This is the frequency at which the transmission has decreased to 50% (or −3 dB) of its maximum value, which is usually at f = 0. Find the approximate bandwidth of the frequency modulated signal if the modulating frequency (single tone) is 10 kHz. Because of the division of the FM band for the transmission of FM stereo, the frequency limit for music transmission is at 15 kHz. 
Center Frequency Hz kHz MHz GHz THz Bandwidth Hz kHz MHz GHz THz Convert Bandwidth $\times10$0 m Click "Convert" Laser light has been described as monochromatic and in a sense this is true. Neat article. Another source of confusion, or at least uncertainty, is found among subtle details that we can sometimes ignore. A decreasing sweep frequency signal is applied to the device and the output detected. At Wavelength, we specify the 3 dB bandwidth of a laser diode driver as the sinusoidal frequency that is … It is usually defined as either the arithmetic mean or the geometric mean of the lower cutoff frequency and the upper cutoff frequency of a band-pass system or a band-stop system . Q = fc/BW = (312 Hz)/ (62 Hz) = 5. In case of a baseband channel or video signal, the bandwidth is equal to its upper cut-off frequency. First, you are confusing the layman meaning of “bandwidth” (used to measure data rates) with the technical meaning (which is measured in Hertz). Include me in third-party email campaigns and surveys that are relevant to me. The most common criterion is based on the –3dB frequency. Bandwidth and frequency are two concepts that are common for science and engineering majors around the world. Center Frequency: The center frequency refers to the frequency which resides at the very center of the bell shaped boost or cut that you are making. The center frequency and fractional bandwidth of the high frequency US transducer were evaluated by a two-way pulse echo measurement using the Panametrics 5900PR. Receivers Bandwidth Bandwidth. The basic difference between bandwidth and frequency is that bandwidth measures the amount of data transferred per second whereas the frequency measure the number of oscillation of the data signal per second. 4Fig. Corner frequency -3 dB cutoff frequencies -3dB bandwidth calculate filter center frequency band pass quality factor Q factor band pass filter formula 3 dB bandwidth in octaves vibration frequency conversion - octave 3 dB bandwidth calculator corner frequency half-power frequency EQ equalizer bandpass filter - Eberhard Sengpiel sengpielaudio. For instance, the light from a red laser pointer appears to be the single color red. 2.4 GHz 802.11 channels. Key Difference: Bandwidth has two major definitions – one in computing and the other in signal processing.On the other hand, frequency is the number of complete cycles per second in alternating current direction. The Doppler Spectrum for a Thin Gaussian Antenna Pattern and for b(x) = b0, Appendix B: Actually FM stereo covers 106 kHz of that. I like to think of bandwidth as meaning the width of the band of frequencies being discussed. Bandwidth and frequency are two concepts that are common for science and engineering majors around the world. With these data, we can determine the ratio of the spectrum center frequency to the spectrum bandwidth, which I call the Doppler spectrum quality factor Q. Optical bandwidth values may be specified in terms of frequency or wavelength. Those expressions allow the determination of the exact shape of the spectrum from knowledge of certain statistical properties of the terrain and the antenna pattern as projected on the terrain. This allows high fidelity signal transmission. The FBW is the ratio of the frequecny range (highest frequency minus lowest frequency) divided by the center frequency. For a passband filter, this lies close to the center frequency. 
In the last chapter, the general expressions for the power density spectrum of the echo from a continuous wave (CW) airborne Doppler radar were obtained. While, these may seem similar, but they differ each other in many ways. This means that if a portion of this signal spectrum is in deep fade, it is likely that the entire signal spectrum will be in deep fade. The bottom line here is that bandwidth is a fairly nebulous term, even in the limited context of amplifiers and filters. Nyquist is only an upper bound, and on the baseband signal bandwidth - the occupied transmission bandwidth for a wireless sig… This is my opinion, and as such has value only if it helps someone else better understand the subject. In many cases, it makes more sense to actually specify the bandwidth. Unfortunately, “bandwidth” is not a particularly straightforward term in the RF world. locationAndBandwidth: Frequency domain location and bandwidth of this bandwidth part. Bandwidth is usually controlled by a ‘Q’ setting, which stands for ‘quality factor’. A system’s rated frequency response occurs within 3 dB of the peak. Bandwidth Cutoff Frequency. Sure, it’s wide for the tadpole that’s trying to swim across it, but it wouldn’t be wide for an elephant. If someone hands you an amplifier module and says that it has a bandwidth of 200 kHz, what does that mean? Due to the inverse relationship of frequency and wavelength, the conversion factor between gigahertz and nanometers depends on the center wavelength or frequency. Maybe 10% or less in size compared to the long dimension of the dipole (which, again, will have to be resized to re-center the resonance frequency to the desired value.) There is, of course, no answer to this question. Fractional bandwidth is the bandwidth of a device, circuit or component divided by its center frequency. In electrical engineering and telecommunications, the center frequency of a filter or channel is a measure of a central frequency between the upper and lower cutoff frequencies. In a Radar receiver the bandwidth is mostly determined by the IF filter stages. The Doppler quality factor Q thus is a measure of the accuracy of the measurement of the spectrum center frequency. The fractional bandwidth varies between 0 and 2, and is often quoted as a percentage (between 0% and 200%). The bandwidth of an amplifier or filter does not specify the range of frequencies for which the circuit is functional, if “functional” means “able to produce some kind of output signal.” Rather, it specifies the range of frequencies for which the circuit meets some performance criterion. 5.5.5 Effect of Signal Spreading on Multipath Performance. Only the first few sidebands will contain the major share of the power (98% of the total power) and therefore only these few bands are considered to be significant sidebands.. As a rule of thumb, often termed as Carson’s Rule, 98% of the signal power in FM is contained within a bandwidth equal to the deviation frequency, plus the modulation frequency doubled. If a certain wireless standard uses channels that have a 1 MHz bandwidth, does this mean that the entire spectrum of one modulated signal is contained within a 1 MHz band? The fractional bandwidth varies between 0 and 2, and is often quoted as a percentage (between 0% and 200%). The fractional bandwidth of an antenna relates to how wideband it is. If the filter has steep slopes, its bandwidth is smaller. 
Bandwidth B, BW or Δf is the difference between the upper and lower cut-off frequencies of radar receiver, and is typically measured in hertz. Include me in professional surveys and promotional announcements from GlobalSpec. Make an LED Light Strip AHRS with Arduino and MPU-6050, Leveraging the LPC55S16-EVK for Industrial Applications, Passive, Active, and Electromechanical Components. a) determine the center frequency maximum gain, and bandwidth of the following filter (4pts) с. Presumably, some prominent aspect of the amplifier’s frequency response involves frequencies covering a range of 200 kHz. Bandwidth is the width of the passband around the peak, with rolloff frequencies at gain = |H max |/√2 on either side of the max. If you're using log paper (for the x axis), the two points w1 and w2 are equidistant from w 0. ω 0 is the geometric mean between ω 1 and ω 2. A reduction of 3 dB in magnitude corresponds to 50% reduction in power, and this has been chosen as a convenient way to identify the bandwidth. Since BW = fc/Q: Q = f c /BW = (323 Hz)/(64 Hz) = 5. 0.022 uF R w 47 ΚΩ R | 1.8k 0.022 4F R 150 kn b) ( 4pts) 1- Determine the following: T.TT.fr, duty cycle% 2- Show how to get 50% duty cyde 3-Show how to get V.C.O R, 1.4k RESET Voc DISCH 555 THRESH Vout R 3.3k OUT TRIG CONT Cent 0.047 F GND C 0.01 F H = When in doubt, ask for clarification. Derivation of Parseval Relations. The Doppler quality factor Q thus is a measure of the accuracy of the measurement of the spectrum center frequency. Bandwidth of FM Signal. In most modern signal analyzers, a third IF filtering stage is often implemented with a bank of filters, each with different bandwidths and centered at the same center frequency. This article explores the surprisingly complicated details associated with a word that we frequently use but perhaps don’t fully understand. There is a total of fourteen channels defined for use by Wi-Fi installations … I suppose the bandwidth of a high-pass filter could be the width of the band of frequencies that experience more than 50% power suppression, but I don’t think that people use the term this way. The pulse-echo signal and spectrum are shown in Fig. That is, if one needs to have an extended bandwidth in the high frequencies, one may need to sacrifice the low-frequency sensitivity of the hearing aid and vice versa. “Is that stream wide?” I ask. TO THE The bandwidth of a device divided by its center frequency is known as its fractional bandwidth. We have seen from our approximate analysis that the center frequency of the spectrum is what we call the Doppler frequency. By submitting your registration, you agree to our Privacy Policy.
Sharjah To Abu Dhabi Taxi Fare, North Schuylkill School, Chai Images With Quotes, Zwilling Scale Amazon, Sunset Painting Ideas Easy, How To Use Permethrin Lotion For Lice, American Standard Reliant Shower Parts, | 2021-05-09 22:58:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6501678228378296, "perplexity": 985.9374442943669}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989018.90/warc/CC-MAIN-20210509213453-20210510003453-00228.warc.gz"} |
http://openstudy.com/updates/507b4eebe4b07c5f7c1f23c0 | ## Caolco Group Title A person is watching a boat from the top of a lighthouse. The boat is approaching the lighthouse directly. When first noticed the angle of depression to the boat is 18°33'. When the boat stops, the angle of depression is 51°33'. The lighthouse is 200 feet tall. How far did the boat travel from when it was first noticed until it stopped? Round your answer to the hundredths place. Is the answer -706.44 ft? one year ago one year ago
1. JakeV8 Group Title
I don't know the answer yet, but this diagram should help. Also, when you calculate a distance traveled, it will always be expressed as a positive number. |dw:1350260299986:dw|
2. Caolco Group Title
I still don't get it.
3. JakeV8 Group Title
I messed up the labeling above... it is 51 deg 33 min and 18 deg 33 min (I put "sec" in for both)
4. swissgirl Group Title
5. JakeV8 Group Title
you could solve for the total distance from the boat's initial position to the lighthouse base using tangent(18 +33/60) = 200 ft / (entire base distance)
6. Caolco Group Title
do I divide 200 by tan(18+33/60)-tan(51+33/60)?
7. JakeV8 Group Title
and you could do the same for the other angle to get the distance from the point where the boat stopped back to the lighthouse. Then subtract to get the distance travelled
8. JakeV8 Group Title
I'm not sure... why would you divide it by the difference in those two tangents?
9. Caolco Group Title
10. swissgirl Group Title
$$\large {200 \over \tan(18+{33 \over 60}) }-{200 \over\tan(51+{33 \over 60})}$$
11. Caolco Group Title
so the answer is -706.44 ft
12. swissgirl Group Title
noooo did you solve this?
13. swissgirl Group Title
How can distance be negative?
14. Caolco Group Title
yes and that is what i keep getting. that is the exact way i did it the first time and that is the same answer im getting now.
15. swissgirl Group Title
Are you on radians or degrees?
16. swissgirl Group Title
You must be on radians switch the mode to degrees
17. Caolco Group Title
oh pellet. i changed it to degrees before doing this but must not have hit enter. damnit. is the answer 437.21?
18. swissgirl Group Title
Yesss :)
19. JakeV8 Group Title
@Caolco sorry, it took me awhile to even see how to solve this, then, since I apparently needed the practice, I was trying to solve it while @swissgirl was helping you. | 2014-08-30 20:29:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6392361521720886, "perplexity": 3249.036969468113}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500835699.86/warc/CC-MAIN-20140820021355-00030-ip-10-180-136-8.ec2.internal.warc.gz"} |
https://1library.net/document/y95m5drz-perturbative-corrections-approximate-inference-gaussian-latent-variable-models.html | # Perturbative Corrections for Approximate Inference in Gaussian Latent Variable Models
Manfred Opper [email protected]
Department of Computer Science, Technische Universität Berlin, D-10587 Berlin, Germany

Ulrich Paquet [email protected]
Microsoft Research Cambridge, Cambridge CB1 2FB, United Kingdom

Ole Winther [email protected]
Informatics and Mathematical Modelling, Technical University of Denmark, DK-2800 Lyngby, Denmark

Editor: Neil Lawrence
Abstract
Expectation Propagation (EP) provides a framework for approximate inference. When the model under consideration is over a latent Gaussian field, with the approximation being Gaussian, we show how these approximations can systematically be corrected. A perturbative expansion is made of the exact but intractable correction, and can be applied to the model's partition function and other moments of interest. The correction is expressed over the higher-order cumulants which are neglected by EP's local matching of moments. Through the expansion, we see that EP is correct to first order. By considering higher orders, corrections of increasing polynomial complexity can be applied to the approximation. The second order provides a correction in quadratic time, which we apply to an array of Gaussian process and Ising models. The corrections generalize to arbitrarily complex approximating families, which we illustrate on tree-structured Ising model approximations. Furthermore, they provide a polynomial-time assessment of the approximation error. We also provide both theoretical and practical insights on the exactness of the EP solution.
Keywords: expectation consistent inference, expectation propagation, perturbation correction, Wick expansions, Ising model, Gaussian process
1. Introduction
Expectation Propagation (EP) (Opper and Winther, 2000; Minka, 2001a,b) is part of a rich family of variational methods, which approximate the sums and integrals required for exact probabilistic inference by an optimization problem. Variational methods are perfectly amenable to probabilistic graphical models, as the nature of the optimization problem often allows it to be distributed across a graph. By relying on local computations on a graph, inference in very large probabilistic models becomes feasible.
The corrections developed here build on earlier work by the authors (Opper et al., 2009). The error that arises when the free energy (the negative logarithm of the partition function or normalizer of the distribution) is approximated may, for instance, be written as a Taylor expansion (Opper et al., 2009; Paquet et al., 2009). A pleasing property of EP is that, at its stationary point, the first order term of such an expansion is zero. Furthermore, the quality of the approximation can then be ascertained in polynomial time by including corrections beyond the first order, or beyond the standard EP solution. In general, the corrections improve the approximation when they are comparatively small, but can also leave a question mark on the quality of approximation when the lower-order terms are large.
The approach outlined here is by no means unique in correcting the approximation, as is evinced by cluster-based expansions (Paquet et al., 2009), marginal corrections for EP (Cseke and Heskes, 2011) and the Laplace approximation (Rue et al., 2009), and corrections to Loopy Belief Propagation (Chertkov and Chernyak, 2006; Sudderth et al., 2008; Welling et al., 2012).
1.1 Overview
EP is introduced in a general way in Section 3, making it clear how various degrees of complexity can be included in its approximating structure. The partition function will be used throughout the paper to explain the necessary machinery for correcting any moments of interest. In the experiments, corrections to the marginal and predictive means and variances are also shown, although the technical details for correcting moments beyond the partition function are relegated to Appendix D. The Ising model, which is cast as a Gaussian latent variable model in Section 2, will furthermore be used as a running example throughout the paper.
The key to obtaining a correction lies in isolating the "intractable quantity" from the "tractable part" (or EP solution) in the true problem. This is done by considering the cumulants of both: as EP locally matches lower-order cumulants like means and variances, the "intractable part" exists as an expression over the higher-order cumulants which are neglected by EP. This process is outlined in Section 4, which concludes with two useful results: a shift of the "intractable part" to be an average over complex Gaussian variables with zero diagonal relation matrix, and Wick's theorem, which allows us to evaluate the expectations of polynomials under centered Gaussian measures. As a last stage, the "intractable part" is expanded in Sections 5 and 7 to obtain corrections to various orders. In Section 6, we provide a theoretical analysis of the radius of convergence of these expansions.
Experimental evidence is presented in Section 8 on Gaussian process (GP) classification and (non-Gaussian) GP regression models. An insightful counterexample, where EP diverges under increasing data, is also presented.
Numerous additional examples, derivations, and material are provided in the appendices. Details on different EP approximations can be found in Appendix A, while corrections to tree-structured approximations are provided in Appendix B. In Appendix C we analytically show that the correction to a tractable example is zero. The main body of the paper deals with corrections to the partition function, while corrections to marginal moments are left to Appendix D. Finally, useful calculations of certain cumulants appear in Appendix E.
2. Gaussian Latent Variable Models
Let $x = (x_1, \ldots, x_N)$ be an unobserved random variable with an intractable distribution $p(x)$. In the models considered here, $p(x)$ takes the form

$$p(x) = \frac{1}{Z} \prod_{n=1}^{N} t_n(x_n)\, f_0(x) \tag{1}$$

with partition function (normalizer)

$$Z = \int \prod_{n=1}^{N} t_n(x_n)\, f_0(x)\, dx .$$
This model encapsulates many important methods used in statistical inference. As an example, $f_0$ can encode the covariance matrix of a Gaussian process (GP) prior on latent function observations $x_n$. In the case of GP classification with a class label $y_n \in \{-1, +1\}$ on a latent function evaluation $x_n$, the terms are typically probit link functions, for example

$$p(x) = \frac{1}{Z} \prod_{n=1}^{N} \Phi(y_n x_n)\, \mathcal{N}(x; 0, K). \tag{2}$$

The probit function is the standard cumulative Gaussian density $\Phi(x) = \int_{-\infty}^{x} \mathcal{N}(z; 0, 1)\, dz$. In this example, the partition function is not analytically tractable except for the one-dimensional case $N = 1$.
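As a hedged illustration (the kernel matrix and labels below are arbitrary choices, not taken from the paper), the partition function of Equation (2) can be estimated by plain Monte Carlo, and checked against the tractable $N = 1$ case, where the standard Gaussian-probit identity gives $\int \Phi(y x)\, \mathcal{N}(x; 0, k)\, dx = \Phi(0) = 1/2$ for any $k$:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Arbitrary toy problem: a positive-definite kernel matrix and labels.
K = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.5],
              [0.2, 0.5, 1.0]])
y = np.array([1.0, -1.0, 1.0])

# Plain Monte-Carlo estimate of Z in Equation (2).
x = rng.multivariate_normal(np.zeros(3), K, size=500_000)
Z_mc = norm.cdf(y * x).prod(axis=1).mean()
print(f"Z (Monte Carlo) ~ {Z_mc:.4f}")

# Tractable N = 1 case: the exact value is Phi(0) = 1/2.
x1 = rng.normal(0.0, 1.0, size=500_000)
print(f"N = 1 check: {norm.cdf(x1).mean():.4f}  (exact 0.5)")
```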
An Ising model can be constructed by letting the terms $t_n$ restrict $x_n$ to $\pm 1$ (through Dirac delta functions). By introducing the symmetric coupling matrix $J$ and field $\theta$ into $f_0$, an Ising model can be written as

$$p(x) = \frac{1}{Z} \prod_{n=1}^{N} \left[ \frac{1}{2}\,\delta(x_n + 1) + \frac{1}{2}\,\delta(x_n - 1) \right] \exp\left\{ \frac{1}{2} x^T J x + \theta^T x \right\}. \tag{3}$$
In the Ising model, the partition function $Z$ is intractable, as it sums $f_0(x)$ over $2^N$ binary values of $x$. In the variational approaches, the intractability is addressed by allowing approximations to $Z$ and other marginal distributions, decreasing the computational complexity from exponential to polynomial in $N$, which is typically cubic for EP.
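The exponential cost is easy to make concrete. The brute-force sketch below (an added illustration; the helper name `ising_log_Z_exact` is our own) evaluates $\log Z$ of Equation (3) by enumerating all $2^N$ configurations, and doubles as ground truth for small $N$; the bivariate closed form $\log Z = \log\cosh(J)$ used for the check is derived in Section 3.1.2.

```python
import itertools
import numpy as np
from scipy.special import logsumexp

def ising_log_Z_exact(J, theta):
    """Exact log Z of Equation (3) by enumerating all 2**N spin
    configurations; only feasible for small N."""
    N = len(theta)
    energies = []
    for s in itertools.product([-1.0, 1.0], repeat=N):
        x = np.array(s)
        energies.append(0.5 * x @ J @ x + theta @ x)
    # each factor 0.5*delta(x_n + 1) + 0.5*delta(x_n - 1) carries weight 1/2
    return -N * np.log(2.0) + logsumexp(energies)

# Bivariate sanity check against log cosh(J) (see Section 3.1.2):
J12 = 0.7
J = np.array([[0.0, J12], [J12, 0.0]])
print(ising_log_Z_exact(J, np.zeros(2)), np.log(np.cosh(J12)))
```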
3. Expectation Propagation
An approximation to $Z$ can be made by allowing $p(x)$ in Equation (1) to factorize into a product of factors $f_a$. This factorization is not unique, and the structure of the factorization of $p(x)$ defines the complexity of the resulting approximation, resulting in different structures in the approximating distribution. Where GLVMs are concerned, a natural and computationally convenient choice is to use Gaussian factors $g_a$, and as such, the approximating distribution $q(x)$ in this paper will be Gaussian. Appendix A summarizes a number of factorizations for Gaussian approximations.

The tractability of the resulting inference method imposes a pragmatic constraint on the choice of factorization; in the extreme case $p(x)$ could be chosen as a single factor and inference would be exact. For the model in Equation (1), a three-term product may be factorized as $(t_1)(t_2)(t_3)$, which gives the typical GP setup. When a division is introduced and the term product factorizes as $(t_1 t_2)(t_2 t_3)/(t_2)$, the resulting free energy will be that of the tree-structured EC approximation (Opper and Winther, 2005). To therefore allow for regrouping, combining, splitting, and dividing terms, a power $D_a$ is associated with each $f_a$, such that
$$p(x) = \frac{1}{Z} \prod_a f_a(x)^{D_a} \tag{4}$$

with intractable normalization (or partition function) $Z = \int \prod_a f_a(x)^{D_a}\, dx$. Appendix A shows how the introduction of $D_a$ lends itself to a clear definition of tree-structured and more complex approximations.
To define an approximation to $p$, terms $g_a$, which typically take an exponential family form, are chosen such that

$$q(x) = \frac{1}{Z_q} \prod_a g_a(x)^{D_a} \tag{5}$$
has the same structure as $p$'s factorization. Although not shown explicitly, $f_a$ and $g_a$ have a dependence on the same subset of variables $x_a$. The optimal parameters of the $g_a$-term approximations are found through a set of auxiliary tilted distributions, defined by

$$q_a(x) = \frac{1}{Z_a} \frac{q(x)\, f_a(x)}{g_a(x)} . \tag{6}$$
Here a single approximating term $g_a$ is replaced by an original term $f_a$. Assuming that this replacement leaves $q_a$ still tractable, the parameters in $g_a$ are determined by the condition that $q(x)$ and all $q_a(x)$ should be made as similar as possible. This is usually achieved by requiring that these distributions share a set of generalised moments, which usually coincide with the sufficient statistics of the exponential family. For example, with sufficient statistics $\phi(x)$ we require that

$$\langle \phi(x) \rangle_{q_a} = \langle \phi(x) \rangle_{q} \quad \text{for all } a. \tag{7}$$
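As a concrete instance of Equation (7) (our own illustration), consider a single probit term $t(x) = \Phi(x)$ paired with a Gaussian cavity $\mathcal{N}(x; \mu, \sigma^2)$. The tilted normalizer, mean, and variance then have standard closed forms (they appear, e.g., in Rasmussen and Williams' treatment of GP classification, and are not derived in this paper); the sketch below, with arbitrary cavity parameters, checks them against quadrature:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

mu, s2 = 0.3, 1.5                       # arbitrary cavity mean and variance
z = mu / np.sqrt(1.0 + s2)
Zt = norm.cdf(z)                        # tilted normalizer
m_t = mu + s2 * norm.pdf(z) / (Zt * np.sqrt(1.0 + s2))
v_t = s2 - s2**2 * norm.pdf(z) / ((1.0 + s2) * Zt) * (z + norm.pdf(z) / Zt)

# Check against direct numerical integration of the tilted density.
f = lambda x: norm.cdf(x) * norm.pdf(x, mu, np.sqrt(s2))
Z0 = quad(f, -20, 20)[0]
m_q = quad(lambda x: x * f(x), -20, 20)[0] / Z0
v_q = quad(lambda x: x * x * f(x), -20, 20)[0] / Z0 - m_q**2
print(m_t, m_q)                         # tilted means agree
print(v_t, v_q)                         # tilted variances agree
```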
Note that those factors $f_a$ in $p(x)$ which are already in the exponential family, such as the Gaussian terms in the examples above, can trivially be solved for by setting $g_a = f_a$. The partition function associated with this approximation is
$$Z_{EP} = Z_q \prod_a Z_a^{D_a} . \tag{8}$$
Appendix A.2 shows that the moment-matching conditions must hold at a stationary point of $\log Z_{EP}$. The EP algorithm iteratively updates the $g_a$-terms by enforcing $q$ to share moments with each of the tilted distributions $q_a$; on reaching a fixed point all moments match according to Equation (7) (Minka, 2001a,b). Although $Z_{EP}$ is defined in the terminology of EP, other algorithms may be required to solve for the fixed point, and $Z_{EP}$, as a free energy, can be derived from the saddle point of a set of self-consistent (moment-matching) equations (Opper and Winther, 2005; van Gerven et al., 2010; Seeger and Nickisch, 2010). We next make EP concrete by applying it to the Ising model, which will serve as a running example in the paper. The section is finally concluded with a discussion of the interpretation of EP.
3.1 EP for Ising Models
The Ising model in Equation (3) will be used as a running example throughout this paper. To make the technical developments more concrete, we will consider both the $N$-variate and bivariate cases. The bivariate case can be solved analytically, and thus allows for a direct comparison to be made between the exact and approximate solutions.
We use the factorized approximation as a running example, dividing $p(x)$ in Equation (3) into $N + 1$ factors with $f_0(x) = \exp\{\frac{1}{2} x^T J x + \theta^T x\}$ and $f_n(x_n) = t_n(x_n) = \frac{1}{2}\delta(x_n + 1) + \frac{1}{2}\delta(x_n - 1)$, for $n = 1, \ldots, N$ (see Appendix A for generalizations). We consider the Gaussian exponential family such that $g_n(x_n) = \exp\{\lambda_{n1} x_n - \frac{1}{2}\lambda_{n2} x_n^2\}$ and $g_0(x) = f_0(x)$. The approximating distribution from Equation (5), $q(x) \propto f_0(x) \prod_{n=1}^{N} g_n(x_n)$, is thus a full multivariate Gaussian density, which we write as $q(x) = \mathcal{N}(x; \mu, \Sigma)$.
3.1.1 MOMENT MATCHING
The moment matching condition in Equation (7) involves only the mean and variance if $q(x)$ fully factorizes according to $p(x)$'s terms. We therefore only need to match the means and variances of the marginals of $q(x)$ and the tilted distribution $q_n(x)$ in Equation (6). The tilted distribution may be decomposed into a Gaussian and a discrete part as $q_n(x) = q_n(x_{\setminus n} \mid x_n)\, q_n(x_n)$, where the vector $x_{\setminus n}$ consists of all variables apart from $x_n$. We may marginalize out $x_{\setminus n}$ and write $q_n(x_n)$ in terms of two factors:
$$q_n(x_n) \propto \underbrace{\frac{1}{2}\Big[\delta(x_n + 1) + \delta(x_n - 1)\Big]}_{f_n(x)\, =\, t_n(x_n)} \;\; \underbrace{\exp\Big\{\gamma x_n - \frac{1}{2}\Lambda x_n^2\Big\}}_{\propto\, \int dx_{\setminus n}\, q(x)/g_n(x)} , \tag{9}$$
where we dropped the dependency of $\gamma$ and $\Lambda$ on $n$ for notational simplicity. Through some manipulation, the tilted distribution is equivalent to
$$q_n(x_n) = \frac{1 + m_n}{2}\, \delta(x_n - 1) + \frac{1 - m_n}{2}\, \delta(x_n + 1), \qquad m_n = \tanh(\gamma) = \frac{e^{\gamma} - e^{-\gamma}}{e^{\gamma} + e^{-\gamma}} . \tag{10}$$

This discrete distribution has mean $m_n$ and variance $1 - m_n^2$. By adapting the parameters of $g_n(x_n)$ using for example the EP algorithm, we aim to match the mean and variance of the marginal $q(x_n)$ (of $q(x)$) to the mean and variance of $q_n(x_n)$. The reader is referred to Section 9 for benchmarked results for the Ising model.
3.1.2 ANALYTIC BIVARIATE CASE
Here we shall compare the exact result with EP and the correction for the simplest non-trivial model, the $N = 2$ Ising model with no external field,

$$p(x) = \frac{1}{4}\Big[\delta(x_1 - 1) + \delta(x_1 + 1)\Big]\Big[\delta(x_2 - 1) + \delta(x_2 + 1)\Big]\, e^{J x_1 x_2} .$$
In order to solve the moment matching conditions we observe that the mean values must be zero because the distribution is symmetric around zero. Likewise, the linear term in the approximating factors disappears and we can write $g_n(x_n) = \exp\{-\lambda x_n^2 / 2\}$ and $q(x) = \mathcal{N}(x; 0, \Sigma)$ with $\Sigma = \begin{pmatrix} \lambda & -J \\ -J & \lambda \end{pmatrix}^{-1}$. The moment matching condition for the variances, $1 = \Sigma_{nn}$, turns into a second order equation with solution $\lambda = \frac{1}{2}\big(1 + \sqrt{1 + 4J^2}\big)$ (so that $\lambda^2 - \lambda = J^2$). We can now insert this solution into the expression for the EP partition function in Equation (8). By expanding the result to second order in $J^2$, we find that
$$\log Z_{EP} = -\frac{1}{2} + \frac{1}{2}\sqrt{1 + 4J^2} - \frac{1}{2}\log\Big(\frac{1}{2}\big(1 + \sqrt{1 + 4J^2}\big)\Big) = \frac{J^2}{2} - \frac{J^4}{4} + \ldots .$$
Comparing with the exact expression

$$\log Z = \log\cosh(J) = \frac{J^2}{2} - \frac{J^4}{12} + \ldots ,$$
we see that EP gives the correct $J^2$ coefficient, but the $J^4$ coefficient comes out wrong. In Section 4 we investigate how cumulant corrections can correct for this discrepancy.
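A short numerical confirmation of this mismatch (our own check, using the two closed forms just derived):

```python
import numpy as np

for J in [0.05, 0.1, 0.2]:
    u = np.sqrt(1.0 + 4.0 * J**2)
    logZ_ep = -0.5 + 0.5 * u - 0.5 * np.log(0.5 * (1.0 + u))
    logZ = np.log(np.cosh(J))
    # both are J^2/2 to leading order; the gap is (1/4 - 1/12) J^4 = J^4/6
    print(J, (logZ - logZ_ep) / J**4)   # ratio approaches 1/6 as J -> 0
```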
3.2 Two Explanations Why Gaussian EP is Often Very Accurate
EP, as introduced above, is an algorithm. The justification for the algorithm put forward by Minka and adopted by others (see for example recent textbooks by Bishop 2006, Barber 2012 and Murphy 2012) is useful for explaining the steps in the algorithm, but may be misleading as an explanation of why EP often provides excellent accuracy in the estimation of marginal moments and $Z$.
The general justification for EP (Minka, 2001a,b) is based upon a minimization of Kullback-Leibler (KL) divergences. Ideally, one would determine the approximating distribution $q(x)$ as the minimizer of $\mathrm{KL}(p \,\|\, q)$ in an exponential family of (in our case, Gaussian) densities. Since this is not possible (it would require the computation of exact moments), we instead iteratively minimize "local" KL-divergences $\mathrm{KL}(q_a \,\|\, q)$, between the tilted distribution $q_a$ and $q$, with respect to $g_a$ (appearing in $q$). This leads to the moment matching conditions in Equation (7). The argument for this procedure is essentially that this will ensure that the approximation $q$ will capture high density regions of the intractable posterior $p$. Obviously, this argument cannot be applied to Ising models because the exact and approximate distributions are very different, with the former being discrete due to the Dirac $\delta$-functions that constrain $x_n = \pm 1$ to be binary variables. Even though the optimization still implies moment matching, this discrete-continuous discrepancy makes the local KL-divergences $\mathrm{KL}(q_a \,\|\, q)$ infinite!
In order to justify the usefulness of EP for Ising models we therefore need an alternative argument. Our argument is entirely restricted to Gaussian EP for our extended definition of GLVMs and does not extend to approximations with other exponential families. In the following, we will discuss these assumptions in inference approximations that preceded the formulation of EP, in order to provide a possibly more relevant justification of the method. Although this justification is not strictly necessary for practically using EP or its corrections, it nevertheless provides a good starting point for understanding both.
The argument goes back to the mathematical analysis of the Sherrington-Kirkpatrick (SK) model for a disordered magnet (a so-called spin glass) (Sherrington and Kirkpatrick, 1975). For this Ising model, the couplings $J$ are drawn at random from a Gaussian distribution. An important contribution in the context of inference for this model (the computation of partition functions and average magnetizations) was the work of Thouless et al. (1977), who derived self-consistency equations which are assumed to be valid with a probability (with respect to the drawing of random couplings) approaching one as the number of variables $x_n$ grows to infinity. These so-called Thouless-Anderson-Palmer (TAP) equations are closely related to the EP moment matching conditions of Equation (7), but they differ by partly relying on the specific assumption of the randomness of the couplings. Self-consistency equations equivalent to the EP moment matching conditions which avoided such assumptions on the statistics of the random couplings were first derived by Opper and Winther (2000) by using a so-called cavity argument (Mézard et al., 1987). A new important contribution of Minka (2001a) was to provide an efficient algorithmic recipe for solving these equations.
We will now sketch the main idea of the cavity argument for the GLVM. Let $x_{\setminus n}$ ("$x$ without $n$") denote the variables $x$ with $x_n$ excluded. Then the exact marginal distribution of $x_n$ may be written as
$$p_n(x_n) = \frac{1}{Z}\, t_n(x_n) \int \exp\left\{ -\frac{1}{2} x^T J x \right\} \prod_{n' \neq n} t_{n'}(x_{n'})\, dx_{\setminus n}$$
$$= \frac{t_n(x_n)}{Z}\, e^{-J_{nn} x_n^2 / 2} \int \exp\left\{ -x_n \sum_{n' \neq n} J_{nn'} x_{n'} - \frac{1}{2} x_{\setminus n}^T J_{\setminus n} x_{\setminus n} \right\} \prod_{n' \neq n} t_{n'}(x_{n'})\, dx_{\setminus n} .$$
It is clear that $p_n(x_n)$ depends entirely on the statistics of the random variable $h_n \equiv \sum_{n' \neq n} J_{nn'} x_{n'}$. This is the total 'field' created by all other 'magnetic moments' $x_{n'}$ in the 'cavity' opened once $x_n$ has been removed from the system. In the context of densely connected models with weak couplings, we can appeal to the central limit theorem to approximate $h_n$ by a Gaussian random variable with mean $\gamma_n$ and variance $V_n$.
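A crude numerical illustration of this point (our own; it sidesteps the true spin correlations by drawing independent uniform spins, and assumes couplings of size $O(1/\sqrt{N})$, so it only illustrates the mechanism):

```python
import numpy as np
from scipy.stats import kurtosis, skew

rng = np.random.default_rng(0)
for N in [10, 100, 1000]:
    Jrow = rng.normal(0.0, 1.0 / np.sqrt(N), size=N - 1)   # one fixed row of J
    x = rng.choice([-1.0, 1.0], size=(200_000, N - 1))     # independent spins
    h = x @ Jrow                                           # cavity field samples
    print(N, skew(h), kurtosis(h))  # skewness and excess kurtosis shrink to 0
```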
When looking at the influence of the remaining variables $x_{\setminus n}$ on $x_n$, the non-Gaussian details of their distribution have been washed out in the marginalization. Integrating out the Gaussian random variable $h_n$ gives the Gaussian cavity field approximation to the marginal distribution:

$$p_n(x_n) \approx \text{const} \cdot t_n(x_n)\, e^{-J_{nn} x_n^2 / 2} \int e^{-x_n h}\, \mathcal{N}(h; \gamma_n, V_n)\, dh = \text{const} \cdot t_n(x_n) \exp\left\{ -x_n \gamma_n - \frac{1}{2}(J_{nn} - V_n)\, x_n^2 \right\} .$$
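For completeness (a step the text leaves implicit), the Gaussian integral above is simply the moment generating function of $\mathcal{N}(h; \gamma_n, V_n)$ evaluated at $-x_n$,

$$\int e^{-x_n h}\, \mathcal{N}(h; \gamma_n, V_n)\, dh = \exp\left\{ -\gamma_n x_n + \frac{1}{2} V_n x_n^2 \right\},$$

and multiplying by $e^{-J_{nn} x_n^2 / 2}$ collects the quadratic terms into $-\frac{1}{2}(J_{nn} - V_n) x_n^2$.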
This is precisely of the form of the marginal tilted distribution $q_n(x_n)$ of Equation (9) as given by Gaussian EP. In the cavity formulation, $q(x)$ is simply a placeholder for the sufficient statistics of the individual Gaussian cavity fields. So we may observe cases, with the Ising model or bounded support factors being the prime examples, where EP gives essentially correct results for the marginal distributions of the $x_n$ and of the partition function $Z$, while $q(x)$ gives a poor or even meaningless (in the sense of KL divergences) approximation to the multivariate posterior. Note, however, that the entire covariance matrix of the $x_n$ can be computed simply from a derivative of the free energy (Opper and Winther, 2005), resulting in an approximation of this covariance by that of $q(x)$. This may indicate that a good EP approximation of the free energy may also result in a good approximation to the full covariance. The near exactness of EP (as compared to exhaustive summation) in Section 9 therefore shows the central limit theorem at work. Conversely, mediocre accuracy or even failure of Gaussian EP, as also observed in our simulations in Sections 8.3 and 9, may be attributed to a breakdown of the Gaussian cavity field assumption. Exact inference on the strongest couplings, as considered for the Ising model in Section 9, is one way to alleviate the shortcoming of the Gaussian cavity field assumption.
4. Corrections to EP
The $Z_{EP}$ approximation can be corrected in a principled approach, which traces the following outline:
1. The exact partition function $Z$ is re-written in terms of $Z_{EP}$, scaled by a correction factor $R = Z / Z_{EP}$. This correction factor $R$ encapsulates the intractability in the model, and contains a "local marginal" contribution by each $f_a$ (see Section 4.1).
(8)
2. A “handle” onRis obtained by writing it in terms of the cumulants (to be defined in Section 4.2) ofq(x) andqa(x) from Equations (5) and (6). Asqa(x) andq(x) share their two first cumulants, the mean and covariance from the moment matching condition in Equation (7), a cumulant expansion ofRwill be in terms ofhigher-ordercumulants (see Section 4.2).
3. R, defined in terms of cumulant differences, is written as a complex Gaussian average. Each factor facontributes a complex random variablekain this average (see Section 4.3).
4. Finally, the cumulant differences are used as “small quantities” in a Taylor series expansion ofR, and the leading terms are kept (see Sections 5 and 7).
The series expansion is in terms of a complex expectation with azero“self-relation” matrix, and this has two important consequences. Firstly, it causes all first order terms in the Taylor expansion to disappear, showing thatZEP is correct to first order. Secondly, due to Wick’s theorem (introduced in Section 4.4), these zeros will contract the expansion by making many other terms vanish.
The strategy that is presented here can be re-used to correct other quantities of interest, like marginal distributions or the predictive density of new data when p(x) is a Bayesian probabilistic model. These corrections are outlined in Appendix D.
4.1 Exact Expression for Correction
We define the (intractable) correction $R$ as $Z = R\, Z_{EP}$. We can derive a useful expression for $R$ in a few steps as follows: First we solve for $f_a$ in Equation (6), and substitute this into Equation (4) to obtain
$$\prod_a f_a(x)^{D_a} = \prod_a \left[\frac{Z_a\, q_a(x)\, g_a(x)}{q(x)}\right]^{D_a} = Z_{EP}\, q(x) \prod_a \left[\frac{q_a(x)}{q(x)}\right]^{D_a}. \tag{11}$$
We introduce $F(x)$,
$$F(x) \equiv \prod_a \left[\frac{q_a(x)}{q(x)}\right]^{D_a},$$
to derive the expression for the correction $R = Z/Z_{EP}$ by integrating Equation (11):
$$R = \int q(x)\, F(x)\, dx, \tag{12}$$
where we have used $Z = \int \prod_a f_a(x)^{D_a}\, dx$. Similarly we can write:
$$p(x) = \frac{1}{Z}\prod_a f_a(x)^{D_a} = \frac{Z_{EP}}{Z}\, q(x)\, F(x) = \frac{1}{R}\, q(x)\, F(x). \tag{13}$$
Corrections to the marginal and predictive densities of $p(x)$ can be computed from this formulation. This expression will become especially useful because the terms in $F(x)$ turn out to be "local", that is, they only depend on the marginals of the variables associated with factor $a$. Let $f_a(x)$ depend on the subset $x_a$ of $x$, and let $x_{\setminus a}$ ("$x$ without $a$") denote the remaining variables. The distributions in Equations (5) and (6) differ only with respect to their marginals on $x_a$, $q_a(x_a)$ and $q(x_a)$, and therefore
$$\frac{q_a(x)}{q(x)} = \frac{q(x_{\setminus a}|x_a)\, q_a(x_a)}{q(x_{\setminus a}|x_a)\, q(x_a)} = \frac{q_a(x_a)}{q(x_a)}.$$
Now we can rewrite $F(x)$ in terms of marginals:
$$F(x) = \prod_a \left[\frac{q_a(x_a)}{q(x_a)}\right]^{D_a}. \tag{14}$$
The key quantity, then, is $F$, after which the key operation is to compute its expected value. The rest of this section is devoted to the task of obtaining a "handle" on $F$.
4.2 Characteristic Functions and Cumulants
The distributions present in each of the ratios in $F(x)$ in Equation (14) share their first two cumulants, mean and covariance. Cumulants and cumulant differences are formally defined in the next paragraph. This simple observation has a crucial consequence: As the $q(x_a)$'s are Gaussian and do not contain any higher order cumulants (three and above), $F$ can be expressed in terms of the higher cumulants of the marginals $q_a(x_a)$. When the term-product approximation is fully factorized, these are simply cumulants of one-dimensional distributions.
Let $N_a$ be the number of variables in subvector $x_a$. In the examples presented in this work, $N_a$ is one or two. Furthermore, let $k_a$ be an $N_a$-dimensional vector $k_a = (k_1,\ldots,k_{N_a})_a$. The characteristic function of $q_a$ is
$$\chi_a(k_a) = \int e^{i k_a^T x_a}\, q_a(x_a)\, dx_a = \left\langle e^{i k_a^T x_a}\right\rangle_{q_a}, \tag{15}$$
and is obtained through the Fourier transform of the density. Inversely,
$$q_a(x_a) = \frac{1}{(2\pi)^{N_a}} \int e^{-i k_a^T x_a}\, \chi_a(k_a)\, dk_a. \tag{16}$$
The cumulants $c_{\alpha a}$ of $q_a$ are the coefficients that appear in the Taylor expansion of $\log\chi_a(k_a)$ around the zero vector,
$$c_{\alpha a} = (-i)^{l} \left(\frac{\partial}{\partial k_a}\right)^{\alpha} \log\chi_a(k_a)\,\bigg|_{k_a=0}, \qquad l = |\alpha|.$$
By this definition of $c_{\alpha a}$, the Taylor expansion of $\log\chi_a(k_a)$ is
$$\log\chi_a(k_a) = \sum_{l=1}^{\infty} i^l \sum_{|\alpha|=l} \frac{c_{\alpha a}}{\alpha!}\, k_a^{\alpha}.$$
Some notation was introduced in the above two equations to facilitate manipulating a multivariate series. The vector $\alpha = (\alpha_1,\ldots,\alpha_{N_a})$, with $\alpha_j \in \mathbb{N}_0$, denotes a multi-index on the elements of $k_a$. Other notational conventions that employ $\alpha$ (writing $k_j$ instead of $k_{aj}$) are:
$$|\alpha| = \sum_j \alpha_j, \qquad k_a^{\alpha} = \prod_j k_j^{\alpha_j}, \qquad \alpha! = \prod_j \alpha_j!, \qquad \left(\frac{\partial}{\partial k_a}\right)^{\alpha} = \prod_j \frac{\partial^{\alpha_j}}{\partial k_j^{\alpha_j}}.$$
For example, when $N_a = 2$, say for the edge-factors in a spanning tree, the set of multi-indices $\alpha$ where $|\alpha| = 3$ are $(3,0)$, $(2,1)$, $(1,2)$, and $(0,3)$.
We will also need the characteristic function of the EP marginal $q(x_a)$, defined as $\chi(k_a) = \langle e^{i k_a^T x_a}\rangle_q$. By virtue of matching the first two moments, and $q(x_a)$ being Gaussian with cumulants $c'_{\alpha a}$,
$$r_a(k_a) = \log\chi_a(k_a) - \log\chi(k_a) = \sum_{l\geq 1} i^l \sum_{|\alpha|=l} \frac{c_{\alpha a} - c'_{\alpha a}}{\alpha!}\, k_a^{\alpha} = \sum_{l\geq 3} i^l \sum_{|\alpha|=l} \frac{c_{\alpha a}}{\alpha!}\, k_a^{\alpha} \tag{17}$$
contains the remaining higher-order cumulants where the tilted and approximate distributions differ. All our subsequent derivations rest upon moment matching being attained. This especially means that one cannot use the derived corrections if EP has not converged.
4.2.1 Ising Model Example

The cumulant expansion for the discrete distribution in Equation (10) becomes
$$\log\chi_n(k_n) = \log\int dx_n\, e^{i k_n x_n}\, q_n(x_n) = \log\left[\frac{1+m}{2}\, e^{i k_n} + \frac{1-m}{2}\, e^{-i k_n}\right]$$
$$= i m k_n - \frac{1}{2!}(1-m^2)\,k_n^2 - \frac{i}{3!}(-2m+2m^3)\,k_n^3 + \frac{1}{4!}(-2+8m^2-6m^4)\,k_n^4 + \cdots$$
(we're compactly writing $m$ for $m_n$), from which the cumulants are obtained as
$$c_{1n} = m, \qquad c_{2n} = 1-m^2, \qquad c_{3n} = -2m+2m^3,$$
$$c_{4n} = -2+8m^2-6m^4, \qquad c_{5n} = 16m-40m^3+24m^5, \qquad c_{6n} = 16-136m^2+240m^4-120m^6.$$
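These expressions are tedious to derive by hand, but they are quick to check symbolically. A minimal sketch (assuming sympy is available; this check is ours, not part of the original derivation):

```python
# Check the Ising cumulants: c_l = l! * (coefficient of k^l in log chi) / i^l.
import sympy as sp

k, m = sp.symbols('k m', real=True)
log_chi = sp.log((1 + m)/2 * sp.exp(sp.I*k) + (1 - m)/2 * sp.exp(-sp.I*k))
poly = sp.expand(sp.series(log_chi, k, 0, 7).removeO())

for l in range(1, 7):
    c_l = sp.simplify(poly.coeff(k, l) * sp.factorial(l) / sp.I**l)
    print(f'c_{l}n =', sp.expand(c_l))
# Up to term ordering this prints m, 1 - m**2, 2*m**3 - 2*m,
# -6*m**4 + 8*m**2 - 2, 24*m**5 - 40*m**3 + 16*m,
# -120*m**6 + 240*m**4 - 136*m**2 + 16, matching the list above.
```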
4.3 The Correction as a Complex Expectation
The expected value of $F$, which is required for the correction, has a dependence on a product of ratios of distributions $q_a(x_a)/q(x_a)$. In the preceding section it was shown that the contributing distributions share lower-order statistics, allowing a twofold simplification. Firstly, the ratio $q_a/q$ will be written as a single quantity that depends on $r_a$, which was introduced above in Equation (17). Secondly, we will show that it is natural to shift integration variables into the complex plane, and rely on complex Gaussian random variables (meaning that both real and imaginary parts are jointly Gaussian). These complex random variables that define the $r_a$'s have a peculiar property: they have a zero self-relation matrix! This property has important consequences in the resulting expansion.
4.3.1 Complex Expectations
Assume that $q(x_a) = \mathcal{N}(x_a;\mu_a,\Sigma_a)$ and $q_a(x_a)$ share the same mean and covariance, and substitute $\log\chi_a(k_a) = r_a(k_a) + \log\chi(k_a)$ in the definition of $q_a$ in Equation (16) to give
$$\frac{q_a(x_a)}{q(x_a)} = \frac{\int e^{-i k_a^T x_a + r_a(k_a)}\, \chi(k_a)\, dk_a}{\int e^{-i k_a^T x_a}\, \chi(k_a)\, dk_a}. \tag{18}$$
Although the $k_a$ variables have not been introduced as random variables, we find it natural to treat them as such; this view proves extremely helpful in developing the subsequent expansions.
[Figure 1: Equation (20) shifts $k_a$ to the complex plane. In the simplest case the joint density $p(k|x)q(x)$ has $x \sim \mathcal{N}(\mu,\sigma^2)$, $\Re(k) \sim \mathcal{N}(0,\sigma^{-2})$, and, deterministically, $\Im(k) = \sigma^{-2}(x-\mu)$. Notice that $\Re(k)$'s variance is the inverse of that of $x$. The joint density is a two-dimensional flat ellipsoidal pancake that lives in three dimensions: $x$ and the complex $k$ plane (tilted ellipsoid). Integrating over $x$ gives the marginal over a complex $k$, which is still a two-dimensional random variable (upright ellipsoid). The marginal has $\Im(k) \sim \mathcal{N}(0,\sigma^{-2})$, and hence $k$ has relation $\langle(\Re(k)+i\Im(k))^2\rangle = \sigma^{-2}-\sigma^{-2} = 0$ and variance $\langle k\bar{k}\rangle = 2\sigma^{-2}$.]
We will therefore write $q_a(x_a)/q(x_a)$ as an expectation of $\exp r_a(k_a)$ over a density $p(k_a|x_a) \propto e^{-i k_a^T x_a}\chi(k_a)$:
$$\frac{q_a(x_a)}{q(x_a)} = \Big\langle \exp r_a(k_a) \Big\rangle_{k_a|x_a}. \tag{19}$$
By substituting $\log\chi(k_a) = i\mu_a^T k_a - k_a^T\Sigma_a k_a/2$ into Equation (18), we see that $p(k_a|x_a)$ can be viewed as Gaussian, but not for real random variables! We have to consider $k_a$ as Gaussian random variables with a real and an imaginary part with
$$\Re(k_a) \sim \mathcal{N}\big(\Re(k_a);\, 0,\, \Sigma_a^{-1}\big), \qquad \Im(k_a) = \Sigma_a^{-1}(x_a-\mu_a).$$
For the purpose of computing the expectation in Equation (19), $k_a|x_a$ is a degenerate complex Gaussian that shifts the coefficients $k_a$ into the complex plane. The expectation of $\exp r_a(k_a)$ is therefore taken over Gaussian random variables that have $q(x_a)$'s inverse covariance matrix as their (real) covariance! As shorthand, we write
$$p(k_a|x_a) = \mathcal{N}\big(k_a;\, -i\Sigma_a^{-1}(x_a-\mu_a),\, \Sigma_a^{-1}\big). \tag{20}$$
Figure 1 illustrates a simple density $p(k_a|x_a)$, showing that the imaginary component is a deterministic function of $x_a$. Once $x_a$ is averaged out of the joint density $p(k_a|x_a)q(x_a)$, a circularly symmetric complex Gaussian distribution over $k_a$ remains. It is circularly symmetric as $\langle k_a\rangle = 0$, with relation matrix $\langle k_a k_a^T\rangle = 0$ and covariance matrix $\langle k_a \bar{k}_a^T\rangle = 2\Sigma_a^{-1}$ (the notation $\bar{k}$ indicates the complex conjugate of $k$). For the purpose of computing the expected values with Wick's theorem (following in Section 4.4 below), we only need the relations $\langle k_a k_b^T\rangle$ for pairs of factors $a$ and $b$. All of these will be derived next.
According to Equation (12), a further expectation over $q(x)$ is needed, after integrating over $k_a$, to determine $R$. These variables will be combined into complex random variables to make the averages in the expectation easier to derive. By substituting Equation (19) into Equation (12), $R$ is equal to
$$R = \big\langle F(x)\big\rangle_{x\sim q(x)} = \Bigg\langle \prod_a \Big\langle \exp r_a(k_a)\Big\rangle_{k_a|x_a}^{D_a} \Bigg\rangle_x. \tag{21}$$
When $x$ is given, the $k_a$-variables are independent. However, when they are averaged over $q(x)$, the $k_a$-variables become coupled. They are zero-mean complex Gaussians,
$$\langle k_a\rangle = \Big\langle \langle k_a\rangle_{k_a|x_a}\Big\rangle_x = \Big\langle -i\Sigma_a^{-1}(x_a-\mu_a)\Big\rangle_x = 0,$$
and are coupled with a zero self-relation matrix! In other words, if $\Sigma_{ab} = \mathrm{cov}(x_a,x_b)$, the expected values $\langle k_a k_b^T\rangle$ between the variables in the set $\{k_a\}$ are
$$\langle k_a k_b^T\rangle = \Big\langle \langle k_a k_b^T\rangle_{k_{a,b}|x}\Big\rangle_x + i^2\, \Sigma_a^{-1}\Big\langle (x_a-\mu_a)(x_b-\mu_b)^T\Big\rangle\, \Sigma_b^{-1} = \begin{cases} 0 & \text{if } a=b \\ -\Sigma_a^{-1}\Sigma_{ab}\Sigma_b^{-1} & \text{if } a\neq b \end{cases}. \tag{22}$$
Complex Gaussian random variables are additionally characterized by $\langle k_a \bar{k}_b^T\rangle$. However, these expectations are not required for computing and simplifying the expansion of $\log R$ in Section 5, and are not needed for the remainder of this paper. Figure 2 illustrates the structure of the resulting relation matrix $\langle k_a k_b^T\rangle$ for two different factorizations of the same distribution. Each factor $f_a$ contributes a $k_a$ variable, such that the tree-structured approximation's relation matrix will be larger than that of the fully factorized one.

Section 5 shows that when $D_a = 1$, the above expectation can be written directly over $\{k_a\}$ and expanded. In the general case, discussed in Section 7, the inner expectation is first expanded (to treat the $D_a$ powers) before computing an expectation over $\{k_a\}$. In both cases the expectation will involve polynomials in $k$-variables. The expected values of Gaussian polynomials can be evaluated with Wick's theorem.
4.4 Wick’s Theorem
Wick’s theorem provides a useful formula for mixed central moments of Gaussian variables. Let
kn1, . . . ,knℓ be real or complex centered jointly Gaussian variables, noting that they do not have to
be different. Then
hkn1···knℓi=
## ∑∏
η
kiηkjη
(13)
Figure 2: The relation matrices betweenkafor two factorizations of∏4n=1tn(xn): the top illustration is fort1t2t3t4, while the bottom illustration is of a tree structure(t1t2)(t2t3)(t3t4)/t2/t3. The white squares indicate a zero relation matrixkakTb
, with thediagonalbeing zero. From the properties of Equation (22) there are additional zeros in the tree structure’s relation matrix, where edge and node factors share variables. The factor f0=g0 is shadowed in grey in the left-hand figures, and can makeq(x)densely connected.
where the sum is over all partitions of {n1, . . . ,nℓ} into disjoint pairs{iη,jη}. Ifℓ=2mis even,
then there are(2m)!/(2mm!) = (2m1)!! such partitions.3 Ifis odd, then there are none, and the expectation in Equation (23) is zero.
Consider the one-dimensional variablek
### N
(k; 0,σ2). Wick’s theorem states thathki= (
1)!!σℓifis even, andhki=0 ifis odd. In other words,hk3i=0,hk4i=3(σ2)2,hk6i=15(σ2)3, and so forth.
5. Factorized Approximations
In the fully factorized approximation, with $f_n(x_n) = t_n(x_n)$, the exact distribution in Equation (13) depends on the single node marginals through $F(x) = \prod_n q_n(x_n)/q(x_n)$. Following Equation (21), the correction to the free energy
$$R = \Bigg\langle \prod_n \Big\langle \exp r_n(k_n)\Big\rangle_{k_n|x_n} \Bigg\rangle_x = \Bigg\langle \exp\Big(\sum_n r_n(k_n)\Big) \Bigg\rangle_k \tag{24}$$
is taken directly over the centered complex-valued Gaussian random variables $k = (k_1,\ldots,k_N)$, which have relations
$$\langle k_m k_n\rangle = \begin{cases} 0 & \text{if } m=n \\ -\Sigma_{mn}/(\Sigma_{mm}\Sigma_{nn}) & \text{if } m\neq n \end{cases}. \tag{25}$$
In the section to follow, all expectations shall be with respect to $k$, which will be dropped where it is clear from the context.

Thus far, $R$ is re-expressed in terms of site contributions. The expression in Equation (24) is exact, albeit still intractable, and will be treated through a power series expansion. Other quantities of interest, like marginal distributions or moments, can similarly be expressed exactly, and then expanded (see Appendix D).
5.1 Second Order Correction to $\log R$

Assuming that the $r_n$'s are small on average with respect to $k$, Equation (24) is expanded and the lower order terms kept:
$$\log R = \log\Bigg\langle \exp\Big(\sum_n r_n(k_n)\Big)\Bigg\rangle = \sum_n \langle r_n\rangle + \frac{1}{2}\Bigg\langle \Big(\sum_n r_n\Big)^2\Bigg\rangle - \frac{1}{2}\Big(\sum_n \langle r_n\rangle\Big)^2 + \cdots = \frac{1}{2}\sum_{m\neq n} \langle r_m r_n\rangle + \cdots \tag{26}$$
The simplification in the second line is a result of the variance terms being zero from Equation (25). The single marginal terms also vanish (and hence EP is correct to first order) because both $\langle k_n\rangle = 0$ and $\langle k_n^2\rangle = 0$.
This result can give us a hint in which situations the corrections are expected to be small:

• Firstly, the $r_n$ could be small for values of $k_n$ where the density of $k$ is not small. For example, under a zero noise Gaussian process classification model, $q_n(x_n)$ equals a step function $t_n(x_n)$ times a Gaussian, where the latter often has small variance compared to the mean. Hence, $q_n(x_n)$ should be very close to a Gaussian.

• Secondly, for systems with weakly (posterior) dependent variables $x_n$ we might expect that the log partition function $\log Z$ would scale approximately linearly with $N$, the number of variables. Since terms with $m=n$ vanish in the computation of $\log R$, there are no corrections that are proportional to $N$ when $\Sigma_{mn}$ is sufficiently small as $N\to\infty$. Hence, the dominant contributions to $\log Z$ should already be included in the EP approximation. However, Section 8.3 illustrates an example where this need not be the case.
The expectation $\langle r_m r_n\rangle$, as it appears in Equation (26), is treated by substituting $r_n$ with its cumulant expansion $r_n(k_n) = \sum_{l\geq 3} i^l c_{ln} k_n^l/l!$ from Equation (17). Wick's theorem now plays a pivotal role in evaluating the expectations that appear in the expansion:
$$\langle r_m(k_m)\, r_n(k_n)\rangle = \sum_{l,s\geq 3} \frac{i^{l+s}\, c_{ln} c_{sm}}{l!\, s!}\, \langle k_m^s k_n^l\rangle = \sum_{l\geq 3} \frac{i^{2l}\, l!\, c_{ln} c_{lm}}{(l!)^2}\, \langle k_m k_n\rangle^l = \sum_{l\geq 3} \frac{c_{lm} c_{ln}}{l!} \left(\frac{\Sigma_{mn}}{\Sigma_{mm}\Sigma_{nn}}\right)^l. \tag{27}$$
Recall from Equation (25) that $\langle k_n^2\rangle = 0$. To therefore get a non-zero result for $\langle k_m^s k_n^l\rangle$, using Equation (23), each factor $k_n$ has to be paired with some factor $k_m$, and this is possible only when $l=s$. Wick's theorem sums over all pairings, and there are $l!$ ways of pairing a $k_n$ with a $k_m$, giving the result in Equation (27). Finally, plugging Equation (27) into Equation (26) gives the second order correction
$$\log R = \frac{1}{2}\sum_{m\neq n} \sum_{l\geq 3} \frac{c_{lm} c_{ln}}{l!} \left(\frac{\Sigma_{mn}}{\Sigma_{mm}\Sigma_{nn}}\right)^l + \cdots. \tag{28}$$
5.1.1 Ising Example Continued

We can now compute the second order $\log R$ correction for the $N=2$ Ising model example of Section 3.1. The covariance matrix has $\Sigma_{nn} = 1$ from moment matching and $\Sigma_{12} = J/(\lambda^2 - J^2)$ with $\lambda = \frac{1}{2}\big[J^2 + \sqrt{J^4+4}\big]$. The uneven terms in the cumulant expansion derived in Section 4.2.1 disappear because $m=0$. The first nontrivial term is therefore $l=4$, which gives a contribution of
$$\frac{1}{2}\times 2\times \frac{c_4^2}{4!}\,\Sigma_{12}^4 = \frac{(-2)^2}{4!}\,\Sigma_{12}^4 = \frac{1}{6}\,\Sigma_{12}^4.$$
In Section 3.1, we saw that $\log Z - \log Z_{EP} = \frac{J^4}{6}$ plus terms of order $J^6$ and higher. To lowest order in $J$ we have $\Sigma_{12} = J$, and thus $\log R = \frac{J^4}{6}$, which exactly cancels the lowest order error of EP.
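Equation (28) is straightforward to evaluate once the cumulants and the EP covariance are in hand. The sketch below (assuming numpy; the function and variable names are ours, not the paper's) implements the fully factorized correction and reproduces the $N=2$ Ising numbers above:

```python
# Second order correction of Eq. (28) for a fully factorized approximation.
import math
import numpy as np

def log_R_second_order(c, Sigma, l_max):
    """c[l] holds the l-th cumulants (one per site) of the tilted marginals;
    Sigma is the EP covariance.  Returns (1/2) * sum_{m != n} sum_l of terms."""
    d = np.diag(Sigma)
    rho = Sigma / np.outer(d, d)                 # Sigma_mn / (Sigma_mm Sigma_nn)
    total = 0.0
    for l in range(3, l_max + 1):
        cl = np.asarray(c[l], dtype=float)
        M = np.outer(cl, cl) * rho**l / math.factorial(l)
        total += 0.5 * (M.sum() - np.trace(M))   # exclude the m == n diagonal
    return total

# N = 2 Ising example with m = 0: c3 = 0 and c4 = -2 at every site.
J = 0.2
lam = 0.5 * (J**2 + np.sqrt(J**4 + 4))
S12 = J / (lam**2 - J**2)
Sigma = np.array([[1.0, S12], [S12, 1.0]])
c = {3: [0.0, 0.0], 4: [-2.0, -2.0]}
print(log_R_second_order(c, Sigma, l_max=4), J**4 / 6)
# ~2.66e-4 versus ~2.67e-4: equal to lowest order in J, as derived above.
```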
5.2 Corrections to Other Quantities
The schema given here is applicable to any other quantity of interest, be it marginal or predictive distributions, or the marginal moments of $p(x)$. The cumulant corrections for the marginal moments are derived in Appendix D; for example, the correction to the marginal mean $\mu_i$ of an approximation $q(x) = \mathcal{N}(x;\mu,\Sigma)$ is
$$\langle x_i\rangle_{p(x)} - \mu_i = \sum_{l\geq 3}\sum_{j\neq n} \frac{\Sigma_{ij}}{\Sigma_{jj}}\, \frac{c_{l+1,j}\, c_{ln}}{l!} \left(\frac{\Sigma_{jn}}{\Sigma_{jj}\Sigma_{nn}}\right)^l + \cdots, \tag{29}$$
while the correction to the marginal covariance is
$$\langle (x_i-\mu_i)(x_{i'}-\mu_{i'})\rangle_{p(x)} - \Sigma_{ii'} = \sum_{l\geq 3}\sum_{j\neq n} \frac{\Sigma_{ij}\Sigma_{i'j}}{\Sigma_{jj}^2}\, \frac{c_{l+2,j}\, c_{ln}}{l!} \left(\frac{\Sigma_{jn}}{\Sigma_{jj}\Sigma_{nn}}\right)^l + \sum_{l\geq 3}\sum_{j\neq n} \frac{\Sigma_{ij}}{\Sigma_{jj}}\, \frac{\Sigma_{i'n}}{\Sigma_{nn}}\, \frac{c_{lj}\, c_{ln}}{l!} \left(\frac{\Sigma_{jn}}{\Sigma_{jj}\Sigma_{nn}}\right)^{l-1} + \cdots. \tag{30}$$
5.3 Edgeworth-Type Expansions
To simplify the expansion of Equation (24), we integrated (combined) degenerate complex Gaussians $k_n|x_n$ over $q(x)$ to obtain fully complex Gaussian random variables $\{k_n\}$. We've then relied on $\langle k_n^2\rangle = 0$ to simplify the expansion of $\log R$. The expectations $\langle k_n^2\rangle = 0$ play a role analogous to the orthogonality relations of the Hermite polynomials obtained from a Gaussian density. This line of derivation gives an Edgeworth expansion for each factor's tilted distribution.

As a second step, Equation (24) couples the product of separate Edgeworth expansions (one for each factor) together by requiring an outer average over $q(x)$. The orthogonality of Hermite polynomials under $q(x)$ now comes into play: it allows products of orthogonal polynomials under $q(x)$ to integrate to zero. This is similar to contractions in Wick's theorem, where $\langle k_n^2\rangle = 0$ allows us to simplify Equation (27). Although it is not the focus of this work, an example of such a derivation appears in Appendix C.1.
6. Convergence of the Expansion

We may hope that in practice the low order terms in the cumulant expansions will account already for the dominant contributions. But will such an expansion actually converge when extended to arbitrary orders? While we will leave a more general answer to future research, we can at least give a partial result for the example of the Ising model. Let $D = \mathrm{diag}(\Sigma)$, the diagonal of the covariance matrix of the EP approximation $q(x)$. We prove here that a cumulant expansion for $R$ will converge when the eigenvalues of $D^{-1/2}\Sigma D^{-1/2}$ (which has diagonal values of one) are bounded between zero and two.

In practice we've found that even if the largest of these eigenvalues grows with $N$, the second-order correction gives a remarkable improvement. This, together with the results in Figure 6, leads us to believe that the power series expansion is often divergent. It may well be that our expansions are only of an asymptotic type (Boyd, 1999), for which the summation of only a certain number of terms might give an improvement, whereas further terms would lead to worse results. It leads to a paradoxical situation, which seems common when interesting functions are computed: On the one hand we may have a series which does not converge, but in many ways is more practical; on the other hand one might obtain an expansion that converges, but only impractically. Quoting George F. Carrier's rule from Boyd (1999):

Divergent series converge faster than convergent series because they don't have to converge.

For this, we do not yet have a clear-cut answer.
6.1 A Formal Expression for the Cumulant Expansion to All Orders
To discuss the question when our expansion will converge when extended to arbitrary orders, we introduce a single extra parameter $\lambda$ into $R$, which controls the strength of the contribution of cumulants. Expanded into a series in powers of $\lambda$, contributions of cumulants of total order $l$ are multiplied by a factor $\lambda^l$, for example $\lambda^l c_{nl}$ or $\lambda^{k+l} c_{nk} c_{nl}$. Of course, at the end of the calculation, we set $\lambda = 1$. This approach is obviously achieved by replacing
$$r_n(k_n) \to r_n(\lambda k_n)$$
in Equation (24). Hence, we define
$$R(\lambda) = \Bigg\langle \exp\Big(\sum_n r_n(\lambda k_n)\Big)\Bigg\rangle_k = \Bigg\langle \exp\Big(\sum_n r_n(k'_n)\Big)\Bigg\rangle_{k'},$$
where
$$\langle k'_m k'_n\rangle = \begin{cases} 0 & \text{if } m=n \\ -\lambda^2\,\Sigma_{mn}/(\Sigma_{mm}\Sigma_{nn}) & \text{if } m\neq n \end{cases}.$$
By working backwards, and expressing everything by the original densities over $x_n$, the correction can be written as
$$R(\lambda) = \Bigg\langle \prod_n \frac{q_n(x_n)}{q(x_n)} \Bigg\rangle_{q_\lambda(x)}, \tag{31}$$
where the density $q_\lambda(x)$ is a multivariate Gaussian with mean $\mu$ and covariance given by
$$\Sigma_\lambda = D + z(\Sigma - D),$$
where $D = \mathrm{diag}(\Sigma)$ and $z = \lambda^2$. Hence, we see that the expansion in powers of $\lambda$ is actually equivalent to an expansion in products of nondiagonal elements of $\Sigma$.
Noticing that $R(\lambda)$ depends on $\lambda$ through the density $q_\lambda(x) \propto |\Sigma_\lambda|^{-1/2} e^{-\frac{1}{2}x^T\Sigma_\lambda^{-1}x}$, we can see by expressing $\Sigma_\lambda^{-1}$ in terms of eigenvalues and eigenvectors that for any fixed $x$, $q_\lambda(x)$ is an analytic function of the complex variable $z$ as long as $\Sigma_\lambda$ is positive definite. Since
$$\Sigma_\lambda = D^{1/2}\Big\{ I + z\big(D^{-1/2}\Sigma D^{-1/2} - I\big)\Big\} D^{1/2},$$
this is equivalent to the condition that the matrix $I + z(D^{-1/2}\Sigma D^{-1/2} - I)$ is positive definite. Introducing $\gamma_i$, the eigenvalues of $D^{-1/2}\Sigma D^{-1/2}$, positive definiteness fails when for the first time $1 + z(\gamma_i - 1) = 0$. Thus the series for $q_\lambda(x)$ is convergent for
$$|z| < \min_i \frac{1}{|1-\gamma_i|}.$$
Setting $z = 1$, this is equivalent to the condition
$$1 < \min_i \frac{1}{|1-\gamma_i|}.$$
This means that the eigenvalues have to fulfil $0 < \gamma_i < 2$. Unfortunately, we can not conclude from this condition that pointwise convergence of $q_\lambda(x)$ for each $x$ leads to convergence of $R(\lambda)$ (which is an integral of $q_\lambda(x)$ over all $x$!). However, in cases where the integral eventually becomes a finite sum, such as the Ising model, pointwise convergence in $x$ leads to convergence of $R(\lambda)$.
6.1.1 Ising Model Example

From Section 4.2.1 the tilted distribution for the running example Ising model is $q_n(x_n) = \frac{1}{2}[\delta(x_n+1) + \delta(x_n-1)]$, and hence $q(x_n) = (2\pi)^{-1/2} e^{-x_n^2/2}$. As each $q(x_n)$ is a unit-variance Gaussian, $D = \mathrm{diag}(\Sigma) = I$. Hence $D^{-1/2}\Sigma D^{-1/2} = \Sigma$ and
$$R(\lambda) = \frac{1}{\sqrt{\big|(1-\lambda^2)I + \lambda^2\Sigma\big|}}\; \frac{e^{N/2}}{2^N} \sum_{x\in\{-1,1\}^N} \exp\Big( -\frac{1}{2}\, x^T \big[(1-\lambda^2)I + \lambda^2\Sigma\big]^{-1} x \Big)$$
follows from Equation (31). The arguments of the previous section show that the radius of convergence of $R(\lambda)$ is determined by the condition that the matrix $I + \lambda^2(\Sigma - I)$ is positive definite, that is, that the eigenvalues of $\Sigma$ lie strictly between zero and two.
In the $N=2$ case, $\Sigma = \begin{pmatrix} 1 & c \\ c & 1 \end{pmatrix}$ with $c = c(J) \in\; ]{-1},1[$, which has eigenvalues $1-c$ and $1+c$, meaning that the cumulant expansion for $R(\lambda)$ is convergent for the $N=2$ Ising model. For $N>2$, it is easy to show that this is not necessarily true. Consider the 'isotropic' Ising model with $J_{ij} = J$ and zero external field; then $\Sigma_{ii} = 1$ and $\Sigma_{ij} = c$ for $i\neq j$ with $c = c(J) \in\; ]{-1/(N-1)},1[$. The eigenvalues are now $1+(N-1)c$ and $1-c$ (the latter with degeneracy $N-1$). For finite $c$, the largest eigenvalue will scale with $N$ and thus be larger than the upper value of two that would be required for convergence. Scaling with $N$ for the largest eigenvalue of $D^{-1/2}\Sigma D^{-1/2}$ is also observed in the Ising model simulations of Section 9.
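The scaling of the largest eigenvalue is immediate to verify; a short sketch (assuming numpy; the value of $c$ is illustrative):

```python
# Eigenvalues of the isotropic matrix Sigma_ii = 1, Sigma_ij = c are
# 1 + (N-1)c and 1 - c, so the largest one grows linearly with N.
import numpy as np

c = 0.3
for N in (2, 4, 16, 64):
    Sigma = (1 - c) * np.eye(N) + c * np.ones((N, N))
    gamma_max = np.linalg.eigvalsh(Sigma).max()
    print(N, gamma_max, 'within (0, 2)' if gamma_max < 2 else 'outside (0, 2)')
# For c = 0.3 only N = 2 and N = 4 satisfy the convergence condition.
```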
We conjecture that convergence of the cumulant series for $R(\lambda)$ also implies convergence of the series for $\log R(\lambda)$, but leave an investigation of this point to future research. We only illustrate this point for the $N=2$ Ising model case, where we have the explicit formula
$$\log R(\lambda) = 1 - \frac{1}{1-\lambda^4 c^2} - \frac{1}{2}\log\big(1-\lambda^4 c^2\big) + \log\cosh\left(\frac{\lambda^2 c}{1-\lambda^4 c^2}\right).$$
As can be easily seen, an expansion in $\lambda$ converges for $c^2\lambda^4 < 1$, which gives the same radius of convergence $|c| < 1$ as for the expansion of $R$.
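The closed form can be checked against evaluating $R(\lambda)$ directly from the four-term sum of Section 6.1.1; a sketch (assuming numpy; the values of $c$ and $\lambda$ are illustrative):

```python
# Compare the closed-form log R(lambda) for N = 2 with the direct sum.
import numpy as np

c, lam = 0.4, 0.9
S_lam = np.array([[1.0, lam**2 * c], [lam**2 * c, 1.0]])  # (1-l^2)I + l^2 Sigma

direct = sum(np.exp(-0.5 * np.array([x1, x2]) @ np.linalg.solve(S_lam, np.array([x1, x2])))
             for x1 in (-1, 1) for x2 in (-1, 1))
direct *= np.exp(1.0) / 4.0 / np.sqrt(np.linalg.det(S_lam))  # e^{N/2}/2^N, N = 2

u = lam**2 * c
closed = 1 - 1/(1 - u**2) - 0.5*np.log(1 - u**2) + np.log(np.cosh(u/(1 - u**2)))
print(np.log(direct), closed)    # the two numbers agree
```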
7. General Approximations
The general approximations differ from the factorized approximation in that an expansion in terms of expectations under $\{k_a\}$ doesn't immediately arise. Consider $R$ in Equation (21): Its inner expectations are over $k_a|x$, and outer expectations are over $x$. First take the binomial expansion of the inner expectation, and keep it to second order in $r_a$:
$$\Big\langle e^{r_a(k_a)}\Big\rangle_{k_a|x}^{D_a} = \Big[1 + \langle r_a\rangle + \tfrac{1}{2}\langle r_a^2\rangle + \cdots\Big]^{D_a} = 1 + D_a\Big[\langle r_a\rangle + \tfrac{1}{2}\langle r_a^2\rangle + \cdots\Big] + \frac{D_a(D_a-1)}{2}\Big[\langle r_a\rangle + \tfrac{1}{2}\langle r_a^2\rangle + \cdots\Big]^2 + \cdots$$
$$= 1 + D_a\langle r_a\rangle + \frac{D_a}{2}\langle r_a^2\rangle + \frac{D_a(D_a-1)}{2}\langle r_a\rangle^2 + \cdots.$$
Notice that $r_a(k_a)$ can be complex, but $\langle r_a(k_a)\rangle_{k_a|x}$, as it appears in the above expansion, is real-valued. Using this result, again expand $\big\langle \prod_a \langle e^{r_a}\rangle_{k_a|x}^{D_a}\big\rangle_x$. The correction to $\log R$, up to second order, is
$$\log R = \frac{1}{2}\sum_{a\neq b} \Big\langle \langle r_a(k_a)\rangle_{k_a|x}\, \langle r_b(k_b)\rangle_{k_b|x}\Big\rangle_x + \frac{1}{2}\sum_a D_a(D_a-1)\, \Big\langle \langle r_a(k_a)\rangle_{k_a|x}^2\Big\rangle_x + \cdots. \tag{32}$$
8. Gaussian Process Results
One of the most important applications of EP is to statistical models with Gaussian process (GP) priors, where $x$ is a latent variable with a Gaussian prior distribution with a kernel matrix $K$ as covariance, $\mathbb{E}[xx^T] = K$.
It is well known that for many models, like GP classification, inference with EP is on par with MCMC ground truth (Kuss and Rasmussen, 2005). Section 8.1 underlines this case, and shows corrections to the partition function on the USPS data set over a range of kernel hyperparameter settings.
A common inference task is to predict the output for previously unseen data. Under a GP regression model, a key quantity is the predictive mean function. The predictive mean is analytically tractable when the latent function is corrupted with Gaussian noise to produce observations $y_n$. This need not be the case; in Section 8.2 we examine the problem of quantized regression, where the noise model is non-Gaussian with sharp discontinuities. We show practically how the corrections transfer to other moments, like the predictive mean. Through it, we arrive at a hypothetical rule of thumb: if the data isn't "sensible" under the (probabilistic) model of interest, there is no guarantee of EP giving satisfactory inference.
Armed with the rule of thumb, Section 8.3 constructs an insightful counterexample where the EP estimate diverges or is far from ground truth with more data. Divergence in the partition function is manifested in the initial correction terms, giving a test for the approximation accuracy that doesn’t rely on any Monte Carlo ground truth.
8.1 Gaussian Process Classification
The GP classification model arises when we observe $N$ data points $s_n$ with class labels $y_n \in \{-1,1\}$, and model $y$ through a latent function $x$ with a GP prior. The likelihood terms for $y_n$ are assumed to be $t_n(x_n) = \Phi(y_n x_n)$, where $\Phi(\cdot)$ denotes the cumulative Normal density.
An extensive MCMC evaluation of EP for GP classification on various data sets was given by Kuss and Rasmussen (2005), showing that the log marginal likelihood of the data can be approximated remarkably well. As shown by Opper et al. (2009), an even more accurate estimation of the approximation error is given by considering the second order correction in Equation (28). For GPC we generally found that the $l=3$ term dominates $l=4$, and we do not include any higher cumulants here.
Figure 3 illustrates the correction to $\log R$, with $l=3,4$, on the binary subproblem of the USPS 3's vs. 5's digits data set, with $N=767$. This is the same set-up as Kuss and Rasmussen (2005) and Opper et al. (2009), using the kernel $k(s,s') = \sigma^2\exp(-\frac{1}{2}\|s-s'\|^2/\ell^2)$, and we refer the reader to both papers for additional and complementary figures and results. We evaluated Equation (28) on a similar grid of $\log\ell$ and $\log\sigma$ values. For the same grid values we obtained Monte Carlo estimates of $\log Z$, and hence $\log R$. The correction, compared to the magnitude of the $\log Z$ grids by Kuss and Rasmussen (2005), is remarkably small, and underlines their findings on the accuracy of EP for GPC.

The correction from Equation (28), as computed here, is $\mathcal{O}(N^2)$, and compares favorably to …
[Figure 3: A comparison of $\log R$ using a perturbation expansion of Equation (28) against Monte Carlo estimates of $\log R$, using the USPS data set from Kuss and Rasmussen (2005). The second order correction to $\log R$, with $l=3,4$, is used on the left; the right plot uses a Monte Carlo estimate of $\log R$. Both panels are plotted over a grid of log lengthscale $\log(\ell)$ and log magnitude $\log(\sigma)$.]
8.2 Uniform Noise Regression
We turn our attention to a regression problem, that of learning a latent function $x(s)$ from inputs $\{s_n\}$ and matching real-valued observations $\{y_n\}$. A frequent nonparametric treatment assumes that $x(s)$ is a priori drawn from a GP prior with covariance function $k(s,s')$, from which a corrupted version $y$ is observed. Analytically tractable inference is no longer possible in this model when the observation noise is non-Gaussian. Some scenarios include that of quantized regression, where $y_n$ is formed by rounding $x(s_n)$ to, say, the nearest integer, or where $x(s)$ indicates a robot's path in a control problem, with conditions to stay within certain "wall" bounds. In these scenarios the latent function $x(s_n)$ can be reconstructed from $y_n$ by adding sharply discontinuous uniformly random $[-a,a]$ noise,
$$p(x) = \frac{1}{Z}\prod_n \mathbb{I}\big[|x_n - y_n| < a\big]\; \mathcal{N}(x;0,K).$$
We now assume an EP approximation $q(x) = \mathcal{N}(x;\mu,\Sigma)$, which can be obtained by using the moment calculations in Appendix E.2. To simplify the exposition of the predictive marginal, we follow the notation of Rasmussen and Williams (2005, Chapter 3) and let $\lambda_n = (\tau_n,\nu_n)$, so that the final EP approximation multiplies $g_n$ terms $\prod_n \exp\{-\frac{1}{2}\tau_n x_n^2 + \nu_n x_n\}$ into a joint Gaussian $\mathcal{N}(x;0,K)$.
8.2.1 Making Predictions for New Data

The predictive marginal at a new input $s_*$ follows from the EP approximation $q(x) = \mathcal{N}(x;\mu,\Sigma)$. However, the correction to its mean, as was given in Equation (29), requires covariances $\Sigma_{*n}$, which are derived here.
Let $\kappa_* = k(s_*,s_*)$, and $k_*$ be a vector containing the covariance function evaluations $k(s_*,s_n)$. Again following Rasmussen and Williams (2005)'s notation, let $\tilde{\Sigma}$ be the diagonal matrix containing $1/\tau_n$ along its diagonal. The EP covariance, on the inclusion of $x_*$, is
$$\Sigma_* = \left[\begin{pmatrix} K & k_* \\ k_*^T & \kappa_* \end{pmatrix}^{-1} + \begin{pmatrix} \tilde{\Sigma}^{-1} & 0 \\ 0^T & 0 \end{pmatrix}\right]^{-1} = \begin{pmatrix} \Sigma & k_* - K(K+\tilde{\Sigma})^{-1}k_* \\ k_*^T - k_*^T(K+\tilde{\Sigma})^{-1}K & \kappa_* - k_*^T(K+\tilde{\Sigma})^{-1}k_* \end{pmatrix}, \tag{33}$$
with $\Sigma = K - K(K+\tilde{\Sigma})^{-1}K$. There is no observation associated with $s_*$, hence $\tau_* = 0$ in the first line above, and its inclusion has $c_{l*} = 0$ for $l\geq 3$. The second line follows by computing matrix partitioned inverses twice on $\Sigma_*$.
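The partitioned-inverse step behind Equation (33) can be spot-checked numerically; a sketch (assuming numpy; the test matrices are random, not data from the paper):

```python
# Check Eq. (33): appending a site-free point x_* and inverting agrees
# with the explicit block expression.
import numpy as np

rng = np.random.default_rng(1)
N = 5
A = rng.normal(size=(N + 1, N + 1))
Kfull = A @ A.T + (N + 1) * np.eye(N + 1)   # PSD kernel over (x, x_*)
K, k_star, kappa = Kfull[:N, :N], Kfull[:N, N], Kfull[N, N]
tau = rng.uniform(0.5, 2.0, size=N)
S_tilde = np.diag(1.0 / tau)

P = np.linalg.inv(Kfull)                    # first line of Eq. (33):
P[:N, :N] += np.diag(tau)                   # add site precisions, tau_* = 0
lhs = np.linalg.inv(P)

B = np.linalg.inv(K + S_tilde)              # second line of Eq. (33)
v = (k_star - K @ B @ k_star).reshape(N, 1)
s = np.array([[kappa - k_star @ B @ k_star]])
rhs = np.block([[K - K @ B @ K, v], [v.T, s]])
print(np.allclose(lhs, rhs))                # True
```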
The joint EP approximation for any new input point $s_*$ is directly obtained as
$$q(x,x_*) = \mathcal{N}\left( \begin{pmatrix} x \\ x_* \end{pmatrix};\; \begin{pmatrix} \mu \\ k_*^T K^{-1}\mu \end{pmatrix},\; \Sigma_* \right),$$
with the marginal $q(x_*)$ being
$$q(x_*) = \mathcal{N}\big(x_*;\; k_*^T K^{-1}\mu,\; \kappa_* - k_*^T(K+\tilde{\Sigma})^{-1}k_*\big) = \mathcal{N}(x_*;\mu_*,\sigma_*^2). \tag{34}$$
According to Equation (29), one needs the covariances $\Sigma_{*j}$ to correct the marginal's mean; they appear in the last column of $\Sigma_*$ in Equation (33). The correction is
$$\langle x_*\rangle_{p(x,x_*)} - \mu_* = \sum_{l\geq 3}\sum_{j\neq n} \frac{\Sigma_{*j}}{\Sigma_{jj}}\, \frac{c_{l+1,j}\, c_{ln}}{l!} \left(\frac{\Sigma_{jn}}{\Sigma_{jj}\Sigma_{nn}}\right)^l + \cdots.$$
The sum over pairs $j\neq n$ includes the added dimension, and thus pairs $(j,*)$ and $(*,n)$. The cumulants for this problem, used both for EP and correcting it, are derived in Appendix E.2.
8.2.2 Predictive Corrections

In Figure 4 we investigate the predictive mean correction for two cases, one where the data cannot realistically be expected to appear under the prior, and the other where the prior is reasonable. For $s_* \in \mathbb{R}$, the values of $x(s_*)$ are predicted using a GP with squared exponential covariance function $k(s,s') = \theta\exp(-\frac{1}{2}(s-s')^2/\ell)$.

In the first instance, the prior amplitude $\theta$ and lengthscale $\ell$ are deliberately set to values that are too big; in other words, a typical sample from the prior would not match the observed data. We illustrate the posterior marginal $q(x_*)$, and using Equations (29) and (30), show visible corrections to its mean and variance. For comparison, Figure 4 additionally shows what the predictive mean would have been were $\{y_n\}$ observed under Gaussian noise with the same mean and variance as $\mathcal{U}[-a,a]$: it is substantially different.

In the second instance, $\log Z_{EP}$ is maximized with respect to the covariance function hyperparameters $\theta$ and $\ell$ to get a kernel function that more reasonably describes the data. The correction …
[Figure 4: Predicting $x(s)$ with a GP. The "boxed" bars indicate the permissible $x(s_n)$ values; they are linked to observations $y_n$ through the uniform likelihood $\mathbb{I}[|x_n-y_n|<a]$. Due to the $\mathcal{U}[-a,a]$ noise model, $q(x)$ is ambivalent to where in the "box" $x(s)$ is placed. A second order correction to the mean of $q(x_*)$ is shown in a dotted line. The lightly shaded function plots $p(x_*)$ if the likelihood were also Gaussian, with variance matching that of the "box". In the top figure both the prior amplitude $\theta$ and lengthscale $\ell$ are overestimated. In the bottom figure, $\theta$ and $\ell$ were chosen by maximizing $\log Z_{EP}$ with respect to their values. Notice the smaller EP approximation error. Legend: $\mathbb{E}[x_*]$ under $\mathcal{U}[-a,a]$ noise for EP, MCMC, and EP with second order correction, each with two-standard-deviation bands, plus the exact $\mathbb{E}[x_*]$ under $\mathcal{N}(0,a^2/3)$ noise.]
[Figure 5: Predicting $x(s)$ with a GP with $k(s,s') = \exp\{-|s-s'|/2\ell\}$ and $\ell=1$. In the left figure $\log R_{MCMC} = 0.41$, while the second order correction estimates it as $\log R \approx 0.64$. On the right, the correction to the variance is not as accurate as that on the left. The right correction is $\log R_{MCMC} = 0.28$, and its discrepancy with $\log R \approx 0.45$ (EP+corr) is much bigger. Legend: $\mathbb{E}[x_*]$ and two-standard-deviation bands for EP, MCMC, and EP with correction.]
8.2.3 Underestimating the Truth

Under closer inspection, the variance in Figure 4 is slightly underestimated in regions where there are many close box constraints $|x_n - y_n| < a$. However, under sparser constraints relative to the kernel width, EP accurately estimates the predictive mean and variance. In Figure 5 this is taken further: for $N=100$ uniformly spaced inputs $s\in[0,1]$, it is clear that $q(x)$ becomes too narrow. The second order correction, on the other hand, provides a much closer estimate to the ground truth.

One might inquire about the behavior of the EP estimate as $N\to\infty$ in Figure 5. In the next section, this will be used as a basis for illustrating a special case where $\log Z_{EP}$ diverges.
8.3 Gaussian Process in a Box
In the following insightful example (a special case of uniform noise regression) $\log Z_{EP}$ diverges from the ground truth with more data. Consider the fraction of functions $x(s)$ over $[0,1]$, drawn from a GP prior with kernel $k(s,s')$, such that $x(s)$ lies within the $[-a,a]$ box. Figure 6 illustrates three random draws from a GP prior, two of which are not contained in the $[-a,a]$ interval. The fraction of functions contained in the interval is equal to the normalizing constant of
$$p(x) = \frac{1}{Z}\prod_n \mathbb{I}\big[|x_n| < a\big]\; \mathcal{N}(x;0,K). \tag{35}$$
The fraction of samples from the GP prior that lie inside $[-a,a]$ shouldn't change as the GP is sampled at increasing granularity of inputs $s$. As Figure 6 illustrates, the MCMC estimate of $\log Z$ …
[Figure 6: Samples from a GP prior with kernel $k(s,s') = \exp\{-|s-s'|/2\ell\}$ with $\ell=1$, two of which are not contained in the $[-a,a]$ interval, are shown top left. As $N$ increases in Equation (35), with $s_n\in[0,1]$, $\log Z_{EP}$ diverges, while $\log Z$ converges to a constant. This is shown top right. The +'s and ×'s indicate the inclusion of the fourth (+) and fourth and sixth (×) cumulants from the 2nd order in Equation (28) (an arrangement by total order would include 3rd order $c_4$–$c_4$–$c_4$ in ×). Bottom left and right show the growth of the 2nd order $c_4$ correction relative to the exact correction.]
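The constancy of $Z$ under finer sampling is easy to see by direct simulation; a sketch (assuming numpy; the kernel and box width follow Figure 6):

```python
# Z in Eq. (35) is the fraction of GP prior samples inside [-a, a]; it
# stabilizes as the same interval is sampled at finer granularity N.
import numpy as np

rng = np.random.default_rng(2)
a, ell = 1.0, 1.0
for N in (10, 50, 200):
    s = np.linspace(0.0, 1.0, N)
    K = np.exp(-np.abs(s[:, None] - s[None, :]) / (2 * ell))
    L = np.linalg.cholesky(K + 1e-10 * np.eye(N))
    x = L @ rng.normal(size=(N, 200_000))       # samples from N(0, K)
    print(N, np.log(np.mean(np.all(np.abs(x) < a, axis=0))))
# log Z levels off with N, whereas log Z_EP keeps decreasing (Figure 6).
```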
An intuitive explanation, due to Philipp Hennig, takes a one-dimensional model $p(x) \propto \mathbb{I}[|x|<a]^N\, \mathcal{N}(x;0,1)$. A fully-factorized approximation therefore has $N-1$ redundant factors, as removing them doesn't change $p(x)$. However, each additional $\mathbb{I}[|x|<a]$ truncates the estimate, forcing EP to further reduce the variance of $q(x)$. The EP estimate using $N$ factors $\mathbb{I}[|x|<a]^{1/N}$ is correct (see Appendix C for a similar example and analysis), even though the original problem remains unchanged. Even though this immediate solution cannot be applied to Equation (35), the redundancy …
[Figure 7: The accuracy of $\log Z_{EP}$ depends on the size of the $[-a,a]$ box relative to $\ell$, with the estimation being exact as $a\to 0$ and $a\to\infty$. The second order correction for Figure 6's kernel is illustrated here over varying $a$'s, plotting $\log R$ using $c_4$ (+) and $c_4 + c_6$ (×) in Equation (28). Of these, the top pair of lines are for $N=100$, and the bottom pair for $N=50$.]
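The one-dimensional intuition is simple to reproduce with a small EP loop. The sketch below (assuming numpy and scipy; by the symmetry of the box all means are zero, so only variances need to be tracked) shows the EP variance collapsing as redundant factors are added, while the true posterior never changes:

```python
# 1-D EP for p(x) ∝ I[|x| < a]^N N(x; 0, 1): every redundant box factor
# shrinks the EP variance, although the true posterior is independent of N.
import numpy as np
from scipy.stats import norm, truncnorm

a = 1.0
true_var = truncnorm(-a, a).var()        # exact posterior variance, any N

for N in (1, 5, 25, 100):
    tau = np.zeros(N)                    # site precisions; site means stay 0
    for _ in range(200):
        for i in range(N):
            v_cav = 1.0 / (1.0 + tau.sum() - tau[i])   # cavity variance
            beta = a / np.sqrt(v_cav)
            z = 2.0 * norm.cdf(beta) - 1.0             # tilted normalizer
            v_tilt = v_cav * (1.0 - 2.0 * beta * norm.pdf(beta) / z)
            tau[i] = 1.0 / v_tilt - 1.0 / v_cav        # moment matching
    print(N, 1.0 / (1.0 + tau.sum()), true_var)
# The EP variance 1/(1 + sum(tau)) shrinks with N; the truth stays at ~0.291.
```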
9. Ising Model Results
This section discusses various aspects of corrections to EP as applied to the Ising model, a Bayesian network with binary variables and pairwise potentials, in Equation (3).

We consider the set-up proposed by Wainwright and Jordan (2006), in which $N=16$ nodes are either fully connected or connected to their nearest neighbors in a 4-by-4 grid. The external field (observation) strengths $\theta_i$ are drawn from a uniform distribution $\theta_i \sim \mathcal{U}[-d_{obs}, d_{obs}]$ with $d_{obs} = 0.25$. Three types of coupling strength statistics are considered: repulsive (anti-ferromagnetic) $J_{ij} \sim \mathcal{U}[-2d_{coup}, 0]$, mixed $J_{ij} \sim \mathcal{U}[-d_{coup}, +d_{coup}]$, and attractive (ferromagnetic) $J_{ij} \sim \mathcal{U}[0, +2d_{coup}]$. Previously we have shown (Opper and Winther, 2005) that EP/EC gives very competitive results compared to several standard methods. In Section 9.1 we are interested in investigating whether a further improvement is obtained with the cumulant expansion. In Section 9.2, we revisit the correction approach proposed in Paquet et al. (2009) and make an empirical comparison with the cumulant approach.
9.1 Cumulant Expansion
For the factorized approximation we use Equations (26) and (29) for the $\log Z$ and marginal corrections, respectively. The expression for the cumulants of the Ising model is given in Section 4.2.1. The derivation of the corresponding tree expressions may be found in Appendices B and E.4.

Table 1 gives the average absolute deviation (AAD) of marginals,
$$\frac{1}{N}\sum_i \Big| p(x_i=1) - p(x_i=1|\text{method})\Big| = \frac{1}{2N}\sum_i \big| m_i - m_i^{est}\big|,$$
while Table 2 gives the absolute deviation of $\log Z$ averaged over 100 repetitions.
| Graph | Coupling   | $d_{coup}$ | LBP  | LD   | EC   | EC c  | EC t  |
|-------|------------|------------|------|------|------|-------|-------|
| Full  | Repulsive  | 0.25       | .037 | .020 | .003 | .0006 | .0017 |
| Full  | Repulsive  | 0.50       | .071 | .018 | .031 | .0157 | .0143 |
| Full  | Mixed      | 0.25       | .004 | .020 | .002 | .0004 | .0013 |
| Full  | Mixed      | 0.50       | .055 | .021 | .022 | .0159 | .0151 |
| Full  | Attractive | 0.06       | .024 | .027 | .004 | .0023 | .0025 |
| Full  | Attractive | 0.12       | .435 | .033 | .117 | .1066 | .0211 |
| Grid  | Repulsive  | 1.0        | .294 | .047 | .153 | .1693 | .0031 |
| Grid  | Repulsive  | 2.0        | .342 | .041 | .198 | .4244 | .0021 |
| Grid  | Mixed      | 1.0        | .014 | .016 | .011 | .0122 | .0018 |
| Grid  | Mixed      | 2.0        | .095 | .038 | .082 | .0984 | .0068 |
| Grid  | Attractive | 1.0        | .440 | .047 | .125 | .1759 | .0028 |
| Grid  | Attractive | 2.0        | .520 | .042 | .177 | .4730 | .0002 |

Table 1: Average absolute deviation (AAD) of marginals in a Wainwright-Jordan set-up, comparing loopy belief propagation (LBP), log-determinant relaxation (LD), EC, EC with $l=4$ second order correction (EC c), and an EC tree (EC t). In the original table, bold face highlighted the best results, while italics indicated where the cumulant expression is less accurate than the original approximation.
In two cases (Grid, …) we experienced convergence problems with the tree solver. It might be that in some cases a solution does not exist, but we ascribe numerical instabilities in our implementation as the main cause of these problems. It is currently out of the scope of this work to come up with a better solver. We chose to report the average performance for those runs that could attain a high degree of expectation consistency: $\sum_{i=1}^N (\langle x_i\rangle_{q_i} - \langle x_i\rangle_q)^2 \leq 10^{-20}$. This was 69 out of 100 in the mentioned cases and 100 of 100 in the remaining.

We observe that for the Grid simulations, the corrected marginals in the factorized approximation are less accurate than the original approximation. In Figure 8 we vary the coupling strength for a specific set-up (Grid Mixed) and observe a cross-over between the correction and the original for the error on marginals as the coupling strength increases. We conjecture that when the error of the original solution is high, the number of terms needed in the cumulant correction increases. The estimation of the marginal seems more sensitive to this than the $\log Z$ estimate. The tree approximation is very precise for the whole coupling strength interval considered, and the fourth order cumulant in the second order expansion is therefore sufficient to get often quite large improvements over the original tree approximation.
9.2 The $\varepsilon$-Expansion

In Paquet et al. (2009) we introduced an alternative expansion for $R$ and applied it to Gaussian processes and mixture models. It is obtained from Equation (12) using a finite series expansion, where the normalized deviation
$$\varepsilon_n(x_n) = \frac{q_n(x_n)}{q(x_n)} - 1$$
is treated as the small quantity instead of higher order cumulants.
| Graph | Coupling   | $d_{coup}$ | EC     | EC c   | EC $\varepsilon$c | EC t  | EC tc |
|-------|------------|------------|--------|--------|-------------------|-------|-------|
| Full  | Repulsive  | 0.25       | .0310  | .0018  | .0061             | .0104 | .0010 |
| Full  | Repulsive  | 0.50       | .3358  | .0639  | .0697             | .1412 | .0440 |
| Full  | Mixed      | 0.25       | .0235  | .0013  | .0046             | .0129 | .0009 |
| Full  | Mixed      | 0.50       | .3362  | .0655  | .0671             | .1798 | .0620 |
| Full  | Attractive | 0.06       | .0236  | .0028  | .0048             | .0166 | .0006 |
| Full  | Attractive | 0.12       | .8297  | .1882  | .2281             | .2672 | .2094 |
| Grid  | Repulsive  | 1.0        | 1.7776 | .8461  | .8124             | .0279 | .0115 |
| Grid  | Repulsive  | 2.0        | 4.3555 | 2.9239 | 3.4741            | .0086 | .0077 |
| Grid  | Mixed      | 1.0        | .3539  | .1443  | .0321             | .0133 | .0039 |
| Grid  | Mixed      | 2.0        | 1.2960 | .7057  | .4460             | .0566 | .0179 |
| Grid  | Attractive | 1.0        | 1.6114 | .7916  | .7546             | .0282 | .0111 |
| Grid  | Attractive | 2.0        | 4.2861 | 2.9350 | 3.4638            | .0441 | .0433 |

Table 2: Absolute deviation of the log partition function in a Wainwright-Jordan set-up, comparing EC, EC with $l=4$ second order correction (EC c), EC with a full second order $\varepsilon$ expansion (EC $\varepsilon$c), EC tree (EC t), and EC tree with $l=4$ second order correction (EC tc). In the original table, bold face highlighted the best results. The cumulant expression is consistently more accurate than the original approximation.
[Figure 8: Error on marginals (left) and $\log Z$ (right) for grid and mixed couplings as a function of coupling strength. Legend: EC, EC4, EC-tree (and EC-tree4 on the right).]
$R$ has an exact representation with $2^N$ terms that we may truncate at lowest non-trivial order:
$$R = \Bigg\langle \prod_n \big(1+\varepsilon_n(x_n)\big)\Bigg\rangle_{q(x)} \approx 1 + \sum_{m<n} \big\langle \varepsilon_m(x_m)\, \varepsilon_n(x_n)\big\rangle_{q(x)} + \cdots
http://mathhelpforum.com/calculus/132855-limit-definition.html | # Math Help - limit definition
1. ## limit definition
Let $f(x,y)=sin|x|cos|y|$.
(a) Show from the limit definition that $f_y(0,0)$ exists, and find its value.
(b) Show from the limit definition that $f_y(0,0)$ does not exist.
My Attempt:
(a)
$\lim_{\Delta y \to 0} \frac{f(x_0, y_0 + \Delta y)-f(x_0,y_0)}{\Delta x}$
$\lim_{\Delta y \to 0} \frac{sin|0+|cos|0+\Delta x|-sin|0|xos|0|}{\Delta y}$
$\lim_{\Delta y \to 0} \frac{0}{\Delta y} = 0$
Is the value zero?
(b)
I can't prove that the limit doesn't exits because it turns out it exists:
$\lim_{\Delta x \to 0} \frac{f(x_0 + \Delta x, y_0)-f(x_0,y_0)}{\Delta x}$
$\lim_{\Delta x \to 0} \frac{sin|0+ \Delta x|cos|0|-sin|0|cos|0|}{\Delta x}$
$\lim_{\Delta x \to 0} \frac{sin|\Delta x|}{\Delta x} = 1$
(since limit as x ->0 sinx/x=1)
I'm confused can anyone help?
2. Originally Posted by demode

Let $f(x,y)=\sin|x|\cos|y|$.

(a) Show from the limit definition that $f_y(0,0)$ exists, and find its value.

(b) Show from the limit definition that $f_x(0,0)$ does not exist.

My Attempt:

(a)

$\lim_{\Delta y \to 0} \frac{f(x_0, y_0 + \Delta y)-f(x_0,y_0)}{\Delta y} = \lim_{\Delta y \to 0} \frac{\sin|0|\cos|0+\Delta y|-\sin|0|\cos|0|}{\Delta y} = \lim_{\Delta y \to 0} \frac{0}{\Delta y} = 0$

Is the value zero?

Yes, that's correct.

(b)

I can't prove that the limit doesn't exist because it turns out it exists:

$\lim_{\Delta x \to 0} \frac{f(x_0 + \Delta x, y_0)-f(x_0,y_0)}{\Delta x} = \lim_{\Delta x \to 0} \frac{\sin|0+ \Delta x|\cos|0|-\sin|0|\cos|0|}{\Delta x} = \lim_{\Delta x \to 0} \frac{\sin|\Delta x|}{\Delta x} = 1$

(since the limit as $x \to 0$ of $\sin x/x$ is 1)

I'm confused, can anyone help?
But you do not have $\lim_{x\to 0}\frac{\sin x}{x}$, you have $\lim_{x\to 0}\frac{\sin|x|}{x}$.

Look at the two one-sided limits, as $x$ goes to 0 from above and as $x$ goes to 0 from below, remembering that for $x<0$, $\sin(|x|)= \sin(-x)= -\sin(x)$.
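The two one-sided limits can also be checked symbolically; for instance (a sketch, not part of the original thread):

```python
# The difference quotient sin|x| / x has one-sided limits +1 and -1 at 0,
# so the limit (and hence f_x(0,0)) does not exist.
import sympy as sp

x = sp.symbols('x', real=True)
expr = sp.sin(sp.Abs(x)) / x
print(sp.limit(expr, x, 0, dir='+'))   # 1
print(sp.limit(expr, x, 0, dir='-'))   # -1
```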
3. But you do not have $\lim_{x\to 0}\frac{\sin x}{x}$, you have $\lim_{x\to 0}\frac{\sin|x|}{x}$.

Look at the two one-sided limits, as $x$ goes to 0 from above and as $x$ goes to 0 from below, remembering that for $x<0$, $\sin(|x|)= \sin(-x)= -\sin(x)$.
Could you explain a little bit more please, because I'm not sure I really understand this. We are not concerned with $x<0$; we only care about $(0,0)$, and as $x$ goes to 0, $\sin|0|=0$ and that makes the limit zero!
https://www.ideals.illinois.edu/handle/2142/72050 | ## Files in this item
## Description
Title: Understanding Storage System Problems and Diagnosing Them Through Log Analysis
Author(s): Jiang, Weihang
Doctoral Committee Chair(s): Zhou, Yuanyuan
Department / Program: Computer Science
Discipline: Computer Science
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): Computer Science

Abstract: Nowadays, over 90% of new information produced is stored on hard disk drives. The explosion of data is making storage systems a strategic investment priority in the enterprise world. The revenue created by the storage system industry steadily increased from $14.2 billion in 2004 to over $18.4 billion in 2007. As a key component of enterprise systems, reliable storage systems are critical. However, despite the efforts put into building robust storage systems, as the size and complexity of storage systems have grown to an unprecedented level, storage system problems are common. Unfortunately, many aspects of storage system problems are still not well understood, and most previous studies only focus on one component: disk drives.

To better understand storage system problems, we analyzed the failure characteristics of the core part of the storage system, the storage subsystem, which contains disks and all components providing connectivity and usage of disks to the entire storage system. More specifically, we analyzed the storage system logs collected from about 39,000 storage systems commercially deployed at various customer sites. The data set covers a period of 44 months and includes about 1,800,000 disks hosted in about 155,000 storage shelf enclosures. Our study reveals many interesting findings, providing useful guidelines for designing reliable storage systems. Some of the major findings include: (1) In addition to disk failures, which contribute 20–55% of storage subsystem failures, other components such as physical interconnects and protocol stacks also account for significant percentages of storage subsystem failures. (2) Each individual storage subsystem failure type, and storage subsystem failure as a whole, exhibits strong self-correlations. In addition, these failures exhibit bursty patterns. (3) Storage subsystems configured with dual-path interconnects experience 30–40% lower failure rates than those with a single interconnect. (4) Spanning disks of a RAID group across multiple shelves provides a more resilient solution for storage subsystems than within a single shelf.

As we found out that storage subsystem problems go far beyond disk failures, we extended the scope of study to various storage system problems, and studied the characteristics of storage system problem troubleshooting from various dimensions. Using a large set (636,108) of real-world customer problem cases reported from 100,000 commercially deployed storage systems in the last two years, the analysis shows that while some problems are either benign or resolved automatically, many others can take hours or days of manual diagnosis to fix. For modern storage systems, hardware failures and misconfigurations dominate customer cases, but software failures take longer to resolve. Interestingly, a relatively significant percentage of cases arise because customers lack sufficient knowledge about the system. We also evaluate the potential of using storage system logs to resolve these problems. Our analysis shows that a failure message alone is a poor indicator of root cause, and that combining failure messages with multiple log events can improve problem root cause prediction by a factor of three.

One key finding is that storage system logs contain useful information for narrowing down the root cause, while they are challenging to analyze manually because they are noisy and the useful log events are often separated by hundreds of irrelevant log events. Motivated by this finding, we designed and implemented an automatic tool, called Log Analyzer, to improve the problem troubleshooting process. By applying statistical analysis techniques, the Log Analyzer can automatically infer the dependency relationships between log events, and identify the key log events that capture the essential system states related to storage system problems. By combining a classic unsupervised classification technique, hierarchical clustering, with the event ranking techniques, the Log Analyzer can also identify recurrent storage system problems based on similar log patterns, so that previous diagnosis efforts can be systematically retrieved and leveraged. We trained the Log Analyzer with 18,878 week-long storage system logs and evaluated it with 164 real-world problem cases. The evaluation indicates that the Log Analyzer can effectively reduce the log event number to 3.4%. For most of the 16 real-world problem cases manually annotated with 1–3 key log events, the Log Analyzer accurately ranked the key log events within the top 3 without a priori knowledge of how important the events are. For the other 148 problem cases with diagnosis and root cause information, the Log Analyzer effectively grouped problem cases with the same root cause together with 63–93% accuracy, significantly outperforming three alternative solutions which only achieve 30–46% accuracy.

Issue Date: 2009
Type: Text
Description: 99 p. Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 2009.
URI: http://hdl.handle.net/2142/72050
Other Identifier(s): (UMI)AAI3362928
Date Available in IDEALS: 2014-12-17
Date Deposited: 2009
https://socratic.org/questions/can-someone-help-explain-the-reasoning-behind-this-question | # Can someone help explain the reasoning behind this question?
Apr 19, 2016
10.12 m
#### Explanation:
You know the initial velocity ($u$) $= 3.2\ m s^{-1}$

and the final velocity ($v$) $= 9.7\ m s^{-1}$.

You also know the acceleration: $a = g \sin \theta = 9.8 \times \sin 25°\ m s^{-2} = 4.14\ m s^{-2}$.

Applying the constant-acceleration kinematic relation (which also follows from the work–energy theorem), you have:
${v}^{2} = {u}^{2} + 2 a s$, where s is the distance travelled.
hence, $s = \frac{1}{2 a} \left({v}^{2} - {u}^{2}\right) = \frac{1}{2 \cdot 4.14} \cdot \left({9.7}^{2} - {3.2}^{2}\right)\ m = 10.12\ m$
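A quick numerical check of the arithmetic (a Python sketch using the numbers above):

```python
# v^2 = u^2 + 2 a s  =>  s = (v^2 - u^2) / (2 a)
import math

u, v = 3.2, 9.7                       # m/s
a = 9.8 * math.sin(math.radians(25))  # 4.14 m/s^2 along the incline
s = (v**2 - u**2) / (2 * a)
print(round(a, 2), round(s, 2))       # 4.14, 10.12
```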
https://docs.astropy.org/en/stable/api/astropy.nddata.block_reduce.html | # block_reduce¶
astropy.nddata.block_reduce(data, block_size, func=np.sum)
Downsample a data array by applying a function to local blocks.
If data is not perfectly divisible by block_size along a given axis then the data will be trimmed (from the end) along that axis.
Parameters
dataarray_like
The data to be resampled.
block_sizeint or array_like (int)
The integer block size along each axis. If block_size is a scalar and data has more than one dimension, then block_size will be used for every axis.
funccallable, optional
The method to use to downsample the data. Must be a callable that takes in an ndarray along with an axis keyword, which defines the axis along which the function is applied. The default is np.sum, which provides block summation (and conserves the data sum).
Returns
outputarray_like
The resampled data.
Examples
>>> import numpy as np
>>> from astropy.nddata.utils import block_reduce
>>> data = np.arange(16).reshape(4, 4)
>>> block_reduce(data, 2)
array([[10, 18],
[42, 50]])
>>> block_reduce(data, 2, func=np.mean)
array([[ 2.5, 4.5],
[ 10.5, 12.5]]) | 2020-02-22 00:20:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34354478120803833, "perplexity": 5451.088789195666}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145621.28/warc/CC-MAIN-20200221233354-20200222023354-00288.warc.gz"} |
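When data is not evenly divisible by block_size, the trailing elements are trimmed before reduction, as described above. For example (an illustrative continuation in the same doctest style):

>>> data = np.arange(12).reshape(3, 4)
>>> block_reduce(data, 2)  # last row is trimmed before 2x2 summation
array([[10, 18]])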
https://stacks.math.columbia.edu/tag/03T3 | ## 63.6 Derived categories
To set up notation, let $\mathcal{A}$ be an abelian category. Let $\text{Comp}(\mathcal{A})$ be the abelian category of complexes in $\mathcal{A}$. Let $K(\mathcal{A})$ be the category of complexes up to homotopy, with objects equal to complexes in $\mathcal{A}$ and morphisms equal to homotopy classes of morphisms of complexes. This is not an abelian category. Loosely speaking, $D(\mathcal{A})$ is defined to be the category obtained by inverting all quasi-isomorphisms in $\text{Comp}(\mathcal{A})$ or, equivalently, in $K(\mathcal{A})$. Moreover, we can define $\text{Comp}^+(\mathcal{A}), K^+(\mathcal{A}), D^+(\mathcal{A})$ analogously using only bounded below complexes. Similarly, we can define $\text{Comp}^-(\mathcal{A}), K^-(\mathcal{A}), D^-(\mathcal{A})$ using bounded above complexes, and we can define $\text{Comp}^ b(\mathcal{A}), K^ b(\mathcal{A}), D^ b(\mathcal{A})$ using bounded complexes.
Remark 63.6.1. Notes on derived categories.
1. There are some set-theoretical problems when $\mathcal{A}$ is somewhat arbitrary, which we will happily disregard.
2. The categories $K(\mathcal{A})$ and $D(\mathcal{A})$ are endowed with the structure of a triangulated category.
3. The categories $\text{Comp}(\mathcal{A})$ and $K(\mathcal{A})$ can also be defined when $\mathcal{A}$ is an additive category.
The homology functor $H^ i : \text{Comp}(\mathcal{A}) \to \mathcal{A}$ taking a complex $K^\bullet \mapsto H^ i(K^\bullet )$ extends to functors $H^ i : K(\mathcal{A}) \to \mathcal{A}$ and $H^ i : D(\mathcal{A}) \to \mathcal{A}$.
Lemma 63.6.2. An object $E$ of $D(\mathcal{A})$ is contained in $D^+(\mathcal{A})$ if and only if $H^ i(E) =0$ for all $i \ll 0$. Similar statements hold for $D^-(\mathcal{A})$ and $D^ b(\mathcal{A})$.
Proof. Hint: use truncation functors. See Derived Categories, Lemma 13.11.5. $\square$
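For the reader following the hint, recall the standard truncation in degrees $\geq n$:

$\tau _{\geq n}K^\bullet = (\ldots \to 0 \to \mathop{\mathrm{Coker}}(d^{n - 1}) \to K^{n + 1} \to K^{n + 2} \to \ldots )$

It satisfies $H^ i(\tau _{\geq n}K^\bullet ) = H^ i(K^\bullet )$ for $i \geq n$ and $H^ i(\tau _{\geq n}K^\bullet ) = 0$ for $i < n$. Hence if $H^ i(E) = 0$ for all $i \ll 0$, the natural map $E \to \tau _{\geq n}E$ is an isomorphism in $D(\mathcal{A})$ for $n$ small enough, and $\tau _{\geq n}E$ lies in $D^+(\mathcal{A})$.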
Lemma 63.6.3. Morphisms between objects in the derived category.
1. Let $I^\bullet \in \text{Comp}^+(\mathcal{A})$ with $I^ n$ injective for all $n \in \mathbf{Z}$. Then
$\mathop{\mathrm{Hom}}\nolimits _{D(\mathcal{A})}(K^\bullet , I^\bullet ) = \mathop{\mathrm{Hom}}\nolimits _{K(\mathcal{A})}(K^\bullet , I^\bullet ).$
2. Let $P^\bullet \in \text{Comp}^-(\mathcal{A})$ with $P^ n$ is projective for all $n \in \mathbf{Z}$. Then
$\mathop{\mathrm{Hom}}\nolimits _{D(\mathcal{A})}(P^\bullet , K^\bullet ) = \mathop{\mathrm{Hom}}\nolimits _{K(\mathcal{A})}(P^\bullet , K^\bullet ).$
3. If $\mathcal{A}$ has enough injectives and $\mathcal{I} \subset \mathcal{A}$ is the additive subcategory of injectives, then $D^+(\mathcal{A})\cong K^+(\mathcal{I})$ (as triangulated categories).
4. If $\mathcal{A}$ has enough projectives and $\mathcal{P} \subset \mathcal{A}$ is the additive subcategory of projectives, then $D^-(\mathcal{A}) \cong K^-(\mathcal{P}).$
Proof. Omitted. $\square$
Definition 63.6.4. Let $F: \mathcal{A} \to \mathcal{B}$ be a left exact functor and assume that $\mathcal{A}$ has enough injectives. We define the total right derived functor of $F$ as the functor $RF: D^+(\mathcal{A}) \to D^+(\mathcal{B})$ fitting into the diagram
$\xymatrix{ D^+(\mathcal{A}) \ar[r]^{RF} & D^+(\mathcal{B}) \\ K^+(\mathcal I) \ar[u] \ar[r]^ F & K^+(\mathcal{B}). \ar[u] }$
This is possible since the left vertical arrow is invertible by the previous lemma. Similarly, let $G: \mathcal{A} \to \mathcal{B}$ be a right exact functor and assume that $\mathcal{A}$ has enough projectives. We define the total left derived functor of $G$ as the functor $LG: D^-(\mathcal{A}) \to D^-(\mathcal{B})$ fitting into the diagram
$\xymatrix{ D^-(\mathcal{A}) \ar[r]^{LG} & D^-(\mathcal{B}) \\ K^-(\mathcal{P}) \ar[u] \ar[r]^ G & K^-(\mathcal{B}). \ar[u] }$
This is possible since the left vertical arrow is invertible by the previous lemma.
Remark 63.6.5. In these cases, it is true that $R^ iF(K^\bullet ) = H^ i(RF(K^\bullet ))$, where the left hand side is defined to be $i$th homology of the complex $F(K^\bullet )$.
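For instance (a standard example, stated here for orientation): if $\mathcal{A}$ has enough injectives and $F = \mathop{\mathrm{Hom}}\nolimits _\mathcal {A}(A, -)$ for a fixed object $A$ of $\mathcal{A}$, then $R^ iF(B) = \mathop{\mathrm{Ext}}\nolimits ^ i_\mathcal {A}(A, B)$ for every object $B$ of $\mathcal{A}$.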
Comment #14 by Emmanuel Kowalski
The short "Notes on derived categories" (remarks-derived-categories) is duplicated in the next Tag 03T4.
Comment #21 by Johan
That is because we have tags for sections and lemmas, remarks, etc. And lemmas and remarks, etc are items inside sections. So there is some duplication in the material.
Comment #2167 by Alex
typo: In the definition of $K(\mathcal{A})$ "objects equal to homotopy classes..." should say "morphisms equal to..."
https://www.vedantu.com/question-answer/solve-1-+-sin-3x-+-cos-3x-frac32sin-2x-0-class-11-maths-cbse-5edb5e334d8add1324c975a9 |
# Solve $1 + {\sin ^3}x + {\cos ^3}x - \frac{3}{2}\sin 2x = 0$
Hint: Here, we will use the trigonometric formulas to simplify the given equation.
Given,
$1 + {\sin ^3}x + {\cos ^3}x - \frac{3}{2}\sin 2x = 0 \to (1)$
Now, let us simplify equation (1) by substituting the formula for $\sin 2x$, i.e., $2\sin x\cos x$. We get
$\begin{gathered} \Rightarrow 1 + {\sin ^3}x + {\cos ^3}x - \frac{3}{2}\sin 2x = 0 \\ \Rightarrow 1 + {\sin ^3}x + {\cos ^3}x - \frac{3}{2}(2\sin x\cos x) = 0 \\ \Rightarrow 1 + {\sin ^3}x + {\cos ^3}x - 3\sin x\cos x = 0 \\ \Rightarrow 1 + {\sin ^3}x + {\cos ^3}x - 3(\sin x)(\cos x)(1) = 0 \to (2) \\ \end{gathered}$
As we can see, equation (2) is of the form ${a^3} + {b^3} + {c^3} - 3abc = 0$ where $a = 1,\;b = \sin x,\;c = \cos x$
And we know that
$\begin{gathered} {a^3} + {b^3} + {c^3} - 3abc = (a + b + c)({a^2} + {b^2} + {c^2} - ab - bc - ca) \\ \therefore (a + b + c)({a^2} + {b^2} + {c^2} - ab - bc - ca) = 0 \\ \end{gathered}$
Here, we will consider the factor $a + b + c = 0$, since the other factor, ${a^2} + {b^2} + {c^2} - ab - bc - ca = \frac{1}{2}\left[ {{{(a - b)}^2} + {{(b - c)}^2} + {{(c - a)}^2}} \right]$, can vanish only when $a = b = c$, i.e., when $1 = \sin x = \cos x$, which is impossible. Hence, from equation (2), we can write
$1 + \sin x + \cos x = 0 \to (3)$
Now, let us simplify equation (3) to find the values of ‘x’
$\begin{gathered} \Rightarrow 1 + \sin x + \cos x = 0 \\ \Rightarrow \sin x + \cos x = - 1 \\ \end{gathered}$
Let us multiply the above equation by $\frac{1}{{\sqrt 2 }}$. We get,
$\begin{gathered} \Rightarrow \sin x + \cos x = - 1 \\ \Rightarrow (\frac{1}{{\sqrt 2 }})(\sin x + \cos x) = - \frac{1}{{\sqrt 2 }} \\ \Rightarrow \frac{1}{{\sqrt 2 }}(\sin x) + \frac{1}{{\sqrt 2 }}(\cos x) = - \frac{1}{{\sqrt 2 }} \\ \Rightarrow \sin x\sin (\frac{\pi }{4}) + (\cos x)\cos (\frac{\pi }{4}) = \cos (\frac{{3\pi }}{4}) \to (4)[\because \sin (\frac{\pi }{4}) = \frac{1}{{\sqrt 2 }},\cos (\frac{\pi }{4}) = \frac{1}{{\sqrt 2 }},\cos (\frac{{3\pi }}{4}) = - \frac{1}{{\sqrt 2 }}] \\ \end{gathered}$
As we can see, equation (4) is of the form $\sin A\sin B + \cos A\cos B = \cos (A - B)$ where $A = x$ and $B = \frac{\pi }{4}$. Now let us apply the formula for $\sin A\sin B + \cos A\cos B$. We get
$\begin{gathered} \Rightarrow \cos (x - \frac{\pi }{4}) = \cos (\frac{{3\pi }}{4}) \\ \Rightarrow x - \frac{\pi }{4} = 2n\pi \pm \frac{{3\pi }}{4} \to (5),['n'{\text{is integral number]}} \\ \end{gathered}$
Therefore, solving equation (5) we get,
$\Rightarrow x = 2n\pi + \pi$ and $x = 2n\pi - \frac{\pi }{2}$, where $n$ is an integer.
Hence, the values of ‘x’ satisfying $1 + {\sin ^3}x + {\cos ^3}x - \frac{3}{2}\sin 2x = 0$ are $x = 2n\pi + \pi$ and $x = 2n\pi - \frac{\pi }{2}$.
Note: Here, we have added $2n\pi$ to $\frac{{3\pi }}{4}$ after cancelling the cosine terms on both sides, as $2\pi$ is the period of the cosine function and $n$ is an integer.
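As a quick numerical sanity check (our own addition, not part of the original solution; it simply plugs both solution families back into the equation):

from math import sin, cos, pi

def f(x):
    # left-hand side of the given equation
    return 1 + sin(x) ** 3 + cos(x) ** 3 - 1.5 * sin(2 * x)

for n in (-1, 0, 1):
    for x in (2 * n * pi + pi, 2 * n * pi - pi / 2):
        assert abs(f(x)) < 1e-12  # both families satisfy the equation
print("all checks passed")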
https://pos.sissa.it/396/332/ | Volume 396 - The 38th International Symposium on Lattice Field Theory (LATTICE2021) - Oral presentation
Axial U(1) symmetry at high temperatures in $N_f=2+1$ lattice QCD with chiral fermions
S. Aoki, Y. Aoki, H. Fukaya, S. Hashimoto, I. Kanamori, T. Kaneko, Y. Nakamura, C. Rohrhofer, K. Suzuki* on behalf of JLQCD Collaboration
Full text: pdf
Pre-published on: May 16, 2022
Abstract
We study the $U(1)_A$ anomaly in the high-temperature phase of $N_f=2+1$ lattice QCD with chiral fermions.
Gauge ensembles are generated with Möbius domain-wall (MDW) fermions, and in the measurements the determinant is reweighted to that of overlap fermions.
We report the results for the overlap Dirac spectrum, $U(1)_A$ susceptibility, and topological susceptibility at $T=204$ and $175$ MeV.
DOI: https://doi.org/10.22323/1.396.0332
http://xianblog.wordpress.com/category/statistics/r-statistics/ | ## an ABC experiment
Posted in Books, pictures, R, Statistics, University life on November 24, 2014 by xi'an
In a cross-validated forum exchange, I used the code below to illustrate the working of an ABC algorithm:
#normal data with 100 observations
n=100
x=rnorm(n)
#observed summaries
obs=c(median(x),mad(x))
#normal x gamma prior
priori=function(N){
return(cbind(rnorm(N,sd=10),
1/sqrt(rgamma(N,shape=2,scale=5))))
}
ABC=function(N,alpha=.05){
prior=priori(N) #reference table
#pseudo-data
summ=matrix(0,N,2)
for (i in 1:N){
xi=rnorm(n)*prior[i,2]+prior[i,1]
summ[i,]=c(median(xi),mad(xi))
}
#normalisation factor for the distance
mads=c(mad(summ[,1]),mad(summ[,2]))
#distance (L1, rescaled by the mads of the simulated summaries)
dist=abs(obs[1]-summ[,1])/mads[1]+abs(obs[2]-summ[,2])/mads[2]
#selection
posterior=prior[dist<quantile(dist,alpha),]}
Hence I used the median and the mad as my summary statistics. And the outcome is rather surprising, for two reasons: the first one is that the posterior on the mean μ is much wider than when using the mean and the variance as summary statistics. This is not completely surprising in that the latter are sufficient, while the former are not. Still, the (-10,10) range on the mean is way larger… The second reason for surprise is that the true posterior distribution cannot be derived since the joint density of med and mad is unavailable.
After thinking about this for a while, I went back to my workbench to check the difference with using mean and variance. To my greater surprise, I found hardly any difference! Using the almost exact ABC with 10⁶ simulations and a 5% subsampling rate returns exactly the same outcome. (The first row above is for the sufficient statistics (mean,standard deviation) while the second row is for the (median,mad) pair.) Playing with the distance does not help. The genuine posterior output is quite different, as exposed on the last row of the above, using a basic Gibbs sampler since the posterior is not truly conjugate.
## Le Monde puzzle [#887bis]
Posted in Kids, R, Statistics, University life on November 16, 2014 by xi'an
As mentioned in the previous post, an alternative consists in finding the permutation of {1,…,N} by “adding” squares left and right until the permutation is complete or no solution is available. While this sounds like the dual of the initial solution, it brings a considerable improvement in computing time, as shown below. I thus redefined the construction of the solution by initialising the permutation at random (it could start at 1 just as well)
perfect=(1:trunc(sqrt(2*N)))^2
perm=friends=(1:N)
t=1
perm[t]=sample(friends,1)
friends=friends[friends!=perm[t]]
and then completing only with possible neighbours, left
out=outer(perfect-perm[t],friends,"==")
if (max(out)==1){
t=t+1
perm[t]=sample(rep(perfect[apply(out,1,max)==1],2),1)-perm[t-1]
friends=friends[friends!=perm[t]]}
or right
out=outer(perfect-perm[1],friends,"==")
if (max(out)==1){
t=t+1
perf=sample(rep(perfect[apply(out,1,max)==1],2),1)-perm[1]
perm[1:t]=c(perf,perm[1:(t-1)])
friends=friends[friends!=perf]}
(If you wonder about why the rep in the sample step, it is a trick I just found to avoid the insufferable feature that sample(n,1) is equivalent to sample(1:n,1)! It costs basically nothing but bypasses reprogramming sample() each time I use it… I am very glad I found this trick!) The gain in computing time is amazing:
> system.time(for (i in 1:50) pick(15))
utilisateur système écoulé
5.397 0.000 5.395
> system.time(for (i in 1:50) puck(15))
utilisateur système écoulé
0.285 0.000 0.287
An unrelated point is that a more interesting (?) alternative problem consists in adding a toroidal constraint, namely requiring the first and the last entries in the permutation to also sum up to a perfect square. Is it at all possible?
## Le Monde puzzle [#887]
Posted in Books, Kids, R, Statistics on November 15, 2014 by xi'an
A simple combinatorics Le Monde mathematical puzzle:
N is a golden number if the sequence {1,2,…,N} can be reordered so that the sum of any consecutive pair is a perfect square. What are the golden numbers between 1 and 25?
Indeed, from an R programming point of view, all I have to do is to go over all possible permutations of {1,2,..,N} until one works or until I have exhausted all possible permutations for a given N. However, 25!≈1.55×10²⁵ is a wee bit too large… Instead, I resorted once again to brute force simulation, by first introducing possible neighbours of the integers
perfect=(1:trunc(sqrt(2*N)))^2
friends=NULL
le=1:N
for (perm in 1:N){
amis=perfect[perfect>perm]-perm
amis=amis[amis<=N]
le[perm]=length(amis)
friends=c(friends,list(amis))
}
and then proceeding to construct the permutation one integer at a time by picking from its remaining potential neighbours until there are none left or the sequence is complete
orderin=0*(1:N)
t=1
orderin[1]=sample((1:N),1)
for (perm in 1:N){
friends[[perm]]=friends[[perm]][friends[[perm]]!=orderin[1]]
le[perm]=length(friends[[perm]])
}
while (t<N){
if (length(friends[[orderin[t]]])==0)
break()
if (length(friends[[orderin[t]]])>1){
orderin[t+1]=sample(friends[[orderin[t]]],1)}else{
orderin[t+1]=friends[[orderin[t]]]
}
for (perm in 1:N){
friends[[perm]]=friends[[perm]][friends[[perm]]!=orderin[t+1]]
le[perm]=length(friends[[perm]])
}
t=t+1}
and then repeating this attempt until a full sequence is produced or a certain number of failed attempts has been reached. I gained in efficiency by proposing a second completion on the left of the first integer once a break occurs:
while (t<N){
if (length(friends[[orderin[1]]])==0)
break()
orderin[2:(t+1)]=orderin[1:t]
if (length(friends[[orderin[2]]])>1){
orderin[1]=sample(friends[[orderin[2]]],1)}else{
orderin[1]=friends[[orderin[2]]]
}
for (perm in 1:N){
friends[[perm]]=friends[[perm]][friends[[perm]]!=orderin[1]]
le[perm]=length(friends[[perm]])
}
t=t+1}
(An alternative would have been to complete left and right by squared numbers taken at random…) The result of running this program showed there exist permutations with the above property for N=15,16,17,23,25,26,…,77. Here is the solution for N=49:
25 39 10 26 38 43 21 4 32 49 15 34 30 6 3 22 42 7 9 27 37 12 13 23 41 40 24 1 8 28 36 45 19 17 47 2 14 11 5 44 20 29 35 46 18 31 33 16 48
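For the sceptical reader, here is a quick one-off check of this sequence (our addition, in Python rather than in R):

from math import isqrt

seq = [25, 39, 10, 26, 38, 43, 21, 4, 32, 49, 15, 34, 30, 6, 3, 22, 42, 7,
       9, 27, 37, 12, 13, 23, 41, 40, 24, 1, 8, 28, 36, 45, 19, 17, 47, 2,
       14, 11, 5, 44, 20, 29, 35, 46, 18, 31, 33, 16, 48]
assert sorted(seq) == list(range(1, 50))  # a permutation of 1..49
assert all(isqrt(a + b) ** 2 == a + b for a, b in zip(seq, seq[1:]))  # all consecutive sums are perfect squares
print("valid")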
As an aside, the authors of Le Monde puzzle pretended (in Tuesday, Nov. 12, edition) that there was no solution for N=23, while this sequence
22 3 1 8 17 19 6 10 15 21 4 12 13 23 2 14 11 5 20 16 9 7 18
sounds fine enough to me… I more generally wonder at the general principle behind the existence of such sequences. It sounds quite probable that they exist for N>24. (The published solution does not bring any light on this issue, so I assume the authors have no mathematical analysis to provide.)
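In graph terms, the puzzle asks for a Hamiltonian path in the graph joining i and j whenever i+j is a perfect square, which suggests a plain backtracking search. Here is a compact sketch (our addition, independent of the R code above) that, for instance, confirms the N=23 claim:

from math import isqrt

def golden(N):
    squares = {k * k for k in range(2, isqrt(2 * N) + 1)}
    nbrs = {i: [j for j in range(1, N + 1) if j != i and i + j in squares]
            for i in range(1, N + 1)}
    def extend(path, free):
        if not free:
            return path                   # all integers placed
        for j in sorted(free, key=lambda v: len(nbrs[v])):  # low-degree vertices first
            if j in nbrs[path[-1]]:
                res = extend(path + [j], free - {j})
                if res:
                    return res
        return None
    for start in range(1, N + 1):
        res = extend([start], set(range(1, N + 1)) - {start})
        if res:
            return res
    return None

print(golden(23))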
## Rasmus’ socks fit perfectly!
Posted in Books, Kids, R, Statistics, University life on November 10, 2014 by xi'an
Following the previous post on Rasmus’ socks, I took the opportunity of a survey on ABC I am currently completing to compare the outcome of his R code with my analytical derivation. After one quick correction [by Rasmus] of a wrong representation of the Negative Binomial mean-variance parametrisation [by me], I achieved this nice fit…
## reliable ABC model choice via random forests
Posted in pictures, R, Statistics, University life on October 29, 2014 by xi'an
After a somewhat prolonged labour (!), we have at last completed our paper on ABC model choice with random forests and submitted it to PNAS for possible publication. While the paper is entirely methodological, the primary domain of application of ABC model choice methods remains population genetics and the diffusion of this new methodology to the users is thus more likely via a media like PNAS than via a machine learning or statistics journal.
When compared with our recent update of the arXived paper, there is not much different in contents, as it is mostly an issue of fitting the PNAS publication canons. (Which makes the paper less readable in the posted version [in my opinion!] as it needs to fit the main document within the compulsory six pages, relegated part of the experiments and of the explanations to the Supplementary Information section.)
## Feller’s shoes and Rasmus’ socks [well, Karl’s actually…]
Posted in Books, Kids, R, Statistics, University life on October 24, 2014 by xi'an
Yesterday, Rasmus Bååth [of puppies’ fame!] posted a very nice blog using ABC to derive the posterior distribution of the total number of socks in the laundry when only pulling out orphan socks and no pair at all in the first eleven draws. Maybe not the most pressing issue for Bayesian inference in the era of Big data but still a challenge of sorts!
Rasmus set a prior on the total number m of socks, a negative Binomial Neg(15,1/3) distribution, and another prior of the proportion of socks that come by pairs, a Beta B(15,2) distribution, then simulated pseudo-data by picking eleven socks at random, and at last applied ABC (in Rubin’s 1984 sense) by waiting for the observed event, i.e. only orphans and no pair [of socks]. Brilliant!
The overall simplicity of the problem set me wondering about an alternative solution using the likelihood. Cannot be that hard, can it?! After a few computations rejected by opposing them to experimental frequencies, I put the problem on hold until I was back home and with access to my Feller volume 1, one of the few [math] books I keep at home… As I was convinced one of the exercises in Chapter II would cover this case. After checking, I found a partial solution, namely Exercice 26:
A closet contains n pairs of shoes. If 2r shoes are chosen at random (with 2r<n), what is the probability that there will be (a) no complete pair, (b) exactly one complete pair, (c) exactly two complete pairs among them?
This is not exactly a solution, but rather a problem, however it leads to the value
$p_j=\binom{n}{j}2^{2r-2j}\binom{n-j}{2r-2j}\Big/\binom{2n}{2r}$
as the probability of obtaining j pairs among those 2r shoes. Which also works for an odd number t of shoes:
$p_j=2^{t-2j}\binom{n}{j}\binom{n-j}{t-2j}\Big/\binom{2n}{t}$
as I checked against my large simulations. So I solved Exercise 26 in Feller volume 1 (!), but not Rasmus’ problem, since there are those orphan socks on top of the pairs. If one draws 11 socks out of m socks made of f orphans and g pairs, with f+2g=m, the number k of socks from the orphan group is an hypergeometric H(11,m,f) rv and the probability to observe 11 orphan socks total (either from the orphan or from the paired groups) is thus the marginal over all possible values of k:
$\sum_{k=0}^{11} \dfrac{\binom{f}{k}\binom{2g}{11-k}}{\binom{m}{11}}\times\dfrac{2^{11-k}\binom{g}{11-k}}{\binom{2g}{11-k}}$
so it could be argued that we are facing a closed-form likelihood problem. Even though it presumably took me longer to achieve this formula than for Rasmus to run his exact ABC code!
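As a quick numeric companion to the closed-form likelihood above (our sketch; the values of f and g below are arbitrary illustrative choices, not quantities from the post):

from math import comb

def p_all_orphans(f, g, draws=11):
    # P(no complete pair among `draws` socks), with f orphan socks plus g pairs
    m = f + 2 * g
    total = 0.0
    for k in range(min(draws, f) + 1):
        rest = draws - k
        if rest > g:   # cannot draw that many pairwise-distinct paired socks
            continue
        total += (comb(f, k) * comb(2 * g, rest) / comb(m, draws)) \
                 * (2 ** rest * comb(g, rest) / comb(2 * g, rest))
    return total

print(p_all_orphans(f=3, g=21))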
## a bootstrap likelihood approach to Bayesian computation
Posted in Books, R, Statistics, University life on October 16, 2014 by xi'an
This paper by Weixuan Zhu, Juan Miguel Marín [from Carlos III in Madrid, not to be confused with Jean-Michel Marin, from Montpellier!], and Fabrizio Leisen proposes an alternative to our 2013 PNAS paper with Kerrie Mengersen and Pierre Pudlo on empirical likelihood ABC, or BCel. The alternative is based on Davison, Hinkley and Worton’s (1992) bootstrap likelihood, which relies on a double-bootstrap to produce a non-parametric estimate of the distribution of a given estimator of the parameter θ. Including a smooth curve-fitting algorithm step, for which not much description is available from the paper.
“…in contrast with the empirical likelihood method, the bootstrap likelihood doesn’t require any set of subjective constrains taking advantage from the bootstrap methodology. This makes the algorithm an automatic and reliable procedure where only a few parameters need to be specified.”
The spirit is indeed quite similar to ours in that a non-parametric substitute plays the role of the actual likelihood, with no correction for the substitution. Both approaches are convergent, with similar or identical convergence speeds. While the empirical likelihood relies on a choice of parameter identifying constraints, the bootstrap version starts directly from the [subjectively] chosen estimator of θ. For it indeed needs to be chosen. And computed.
“Another benefit of using the bootstrap likelihood (…) is that the construction of bootstrap likelihood could be done once and not at every iteration as the empirical likelihood. This leads to significant improvement in the computing time when different priors are compared.”
This is an improvement that could apply to the empirical likelihood approach, as well, once a large enough collection of likelihood values has been gathered, but only in small enough dimensions where smooth curve-fitting algorithms can operate. The same criticism applies to the derivation of a non-parametric density estimate for the distribution of the estimator of θ. Critically, the paper only processes examples with a few parameters.
In the comparisons between BCel and BCbl that are produced in the paper, the gain is indeed towards BCbl. Since this paper is mostly based on examples and illustrations, not unlike ours, I would like to see more details on the calibration of the non-parametric methods and of regular ABC, as well as on the computing time. And the variability of both methods on more than a single Monte Carlo experiment.
I am however uncertain as to how the authors process the population genetic example. They refer to the composite likelihood used in our paper to set the moment equations. Since this is not the true likelihood, how do the authors select their parameter estimates in the double-bootstrap experiment? The inclusion of Crackel’s and Flegal’s (2013) bivariate Beta is somewhat superfluous as this example sounds to me like an artificial setting.
In the case of the Ising model, maybe the pre-processing step in our paper with Matt Moores could be compared with the other algorithms. In terms of BCbl, how does the bootstrap operate on an Ising model, i.e. (a) how does one subsample pixels and (b) what are the validity guarantees?
A test that would be of interest is to start from a standard ABC solution, use this solution as the reference estimator of θ, and then proceed to apply BCbl for that estimator. Given that the reference table would have to be produced only once, this would not necessarily increase the computational cost by a large amount…
http://blog.homam.me/ | # Homam's Mind
Monday, January 27, 2014
## EC2 SSH from Cygwin
Sunday, January 12, 2014
## D3 Enter - Update - Exit Pattern in LiveScript
Since I first heard of LiveScript a couple of months ago, I've enjoyed programming my small and big JavaScript projects in LiveScript. Here I want to show how LiveScript makes D3's general update pattern easier to work with.
Here's the result showing enter - update - exit pattern in D3 and the gist.
In the above code I'm using LiveScript's property access cascades sugar. Note how it improves the readability of the code and makes it easy to identify enter!, transition! or exit! cascades.
See also how the sampling and shuffle functions are defined by a composition of some functions rather than a for loop, which IMHO is much prettier than the original.
Anyway, I enjoy D3 and LiveScript, and these two go together like pancakes and syrup!
Tuesday, September 27, 2011
## CSSMatrix 3D Transformations
Well I’ve spent a good weekend figuring out CSS3 3D transformations. In short, now I think it is intelligently designed to fit web design needs, however coming from Direct3D background I was looking for a camera in W3C API.
In this post I show how to make a touch sensitive, interactive 3D model of the iPad using matrix transformations.
Although the iPad is not a perfect cuboid, for simplicity we assume it is. Google and download pictures of the 6 sides of the iPad. Read this excellent Introduction to CSS 3-D Transforms on how to make 3D cuboids using CSS3. It is fairly easy and straightforward.
[Figure: iPad layers before transformation; six sides stacked on top of each other]
I want to rotate the model by touching the screen / moving the mouse. We’re not digging into the details of touch event handling here; check out Touching and Gesturing on the iPhone for a nice discussion on this subject.
Obviously we move our fingers on the phone’s screen, or the mouse on flat 2D surfaces, but we want to rotate our iPad in 3D space. Luckily any rotation in 3D space can be decomposed into 3 elemental rotations around the axes of a coordinate system (frame of reference), using Euler angles; meaning that a user can rotate the model to a desired state by no more than 3 gestures.
It works; you can freely rotate the model in any direction. But something’s not quite right: the UI response to gestures is not intuitive. Sometimes when you move your fingers to the left, the model rotates upward; another time it rotates down-right… The problem is that every time you rotate the model you change the orientation of its axes. You can leave your program as it is; it is sellable, and in fact I’ve bought programs with this bug before. The rest of this post explains a solution to this problem.
## Linear Transformations
All CSS3 transformations (rotate, scale, skew) are reversible, you can rotate an object 40 degrees clockwise, scale it to 2x bigger, then shrink it to half and rotate it 40 degrees counter clockwise and you end up with the object in its initial state. Another interesting feature of CSS3 transformations is that although an image can be distorted by a transformation, straight lines don’t curve or bend and remain straight under any transformation. Each point of the original image is always mapped to one and only one point of the transformed image. These are the characteristics of linear maps.
Any linear map can be represented by a transformation matrix. In our case, in a 3-dimensional space, it is a 3x3 matrix. I won’t dig into the technical details of matrices, simply because we don’t need to know those details. Check out your old analytic geometry textbook.
Given a transformation matrix M, any point P of our object will be transformed by this matrix product:
P’ = M * P
(Here we use the fact that points can be represented by their position vectors, hence column matrices)
There’s a little thing about translation transformation. In good old geometry, a translation can be represented by a vector (when you translate an object, you move it along a path that has a direction and length), and you find the translated coordinates of a point P by summing up its original coordinates with the translation vector: P’ = P + T. There’s a trick to combine translation with other forms of linear transformations using 4x4 matrices.
If M is a 4x4 transformation matrix and P is a 4x1 column vector representing the coordinate of a point in space (the last row of the vector is set to 1), then:
P’ = M * P
P’ (the matrix product of M and P) is the coordinate of our point after the object has been transformed by M.
M can represent any combination of scaling, rotation, translation, or skewness.
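For reference, the standard block structure of such a matrix looks like this (our sketch of the usual convention; the original post illustrated it with an image):

$M = \begin{pmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \\ 0 & 0 & 0 & 1 \end{pmatrix}, \qquad P = \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}$

where the upper-left 3x3 block encodes rotation/scale/skew and the last column carries the translation vector.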
## CSSMatrix
CSS gives us the option to define our desired transformation by a 4x4 transformation matrix. This method is virtually useless in the CSS declarative way; for a 3D transformation you have to calculate the matrix elements and pass 16 parameters to the matrix3d() property function (for an example of how obscure the code might become, check out the rotate3d() definition in W3C’s CSS 3d transforms draft). But it is easy and very convenient to use this matrix in JavaScript code, thanks to DOM’s CSSMatrix interface. Currently (Sep. 2011) WebKit implements this interface by the WebKitCSSMatrix type.
We initialize an instance of a WebKitCSSMatrix by passing a correct string value of the -webkit-transform CSS property. So one can construct it by something like this:
new WebKitCSSMatrix("scale3d(1,2,1)")
or
new WebKitCSSMatrix("scale3d(1,2,1) rotate3d(0,0,1, 45deg) translate3d(100px, 0, -20px)")
Here’s where a little useful function of the window object comes in handy: 'window.getComputedStyle()'. getComputedStyle() takes a DOM Element and returns an instance of CSSStyleDeclaration that is a representation of all the style properties currently set for the element. It is also a dictionary. You can get the current transform value by: window.getComputedStyle(element)["-webkit-transform"] or by the window.getComputedStyle(element).webkitTransform property. Its value is in the form of matrix() or matrix3d(). To get the current CSSMatrix that is applied to an element use:
m = new WebKitCSSMatrix(window.getComputedStyle(element).webkitTransform)
CSSMatrix is indeed a 4x4 matrix (its properties are named m11 to m44), its toString() method returns its CSS representation (in matrix() or matrix3d() form).
It also provides a handful of useful functions for matrix manipulation. These functions don’t mutate the object; they return a new instance of CSSMatrix:
• multiply
• inverse
• translate
• scale
• rotate
• rotateAxisAngle
• skewX
• skewY
Check out Apple’s documentation.
I learnt the hard way that the multiply() function doesn’t exactly work as I understand from the documentation. The text says, and I naturally expected, that given matrices A and B, A.multiply(B) must be equal to A * B in math notation. But it turned out that it is actually equal to B * A.
Back to our original problem, let’s differentiate between the model’s frame of reference and the world (device viewport) frame of reference. Your viewport (computer’s screen) has a static frame of reference (for our purpose). Viewport axes: Up (Y), Right (X) and Facing you (Z) are attached to the device; they don’t change with respect to the device. But the directions of your 3D model’s axes (X’,Y’,Z’) change as you rotate it inside the viewport. Our inconsistent UI response problem happened because when we move our fingers upward on the device, we expect the model to rotate around the device X axis, but rotate3d(1,0,0,#deg) actually rotates the model around its own X’ axis.
Luckily the rotate3d(x,y,z,#deg) function can rotate the model around any arbitrary axis (defined by vector [x,y,z] here). So the problem boils down to finding which axis of the rotated object is parallel to the device X and Y axes, after an arbitrary rotation.
We know that rotation can be represented by a linear transformation matrix. If V’ is an arbitrary axis on the model that was parallel to V (an axis in the device frame of reference) prior to the rotation, then we can find what axis on the object is now parallel to V after the rotation, by:
V’’ = M * V’
(where M is the transformation matrix that defines the rotation)
If the model is not transformed (when window.getComputedStyle(element).webkitTransform is the identity matrix) V’’ is parallel to V’ parallel to V.
Note that we represent an axis by a vector parallel to it, so the column vector $(1,0,0)^T$ represents the X axis, $(0,1,0)^T$ represents Y and $(0,0,1)^T$ represents Z. A rotation linear transformation can be represented by a 3x3 matrix
$M = \begin{pmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{pmatrix}$
There you go. You can extract CSSMatrix m11..m33 elements and write a little bit of JavaScript to produce V’’. Or use CSSMatrix.multiply() function that takes a CSSMatrix as its argument; then you have to construct a 4x4 representation of axes by just padding the column vector and setting all the other elements to 0.
In JavaScript:
var deviceXAxis = new WebKitCSSMatrix("matrix3d(1,0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0)");
var deviceYAxis = new WebKitCSSMatrix("matrix3d(0,0,0,0, 1,0,0,0, 0,0,0,0, 0,0,0,0)");
The result of deviceXAxis.multiply(transformation) is the object’s X’’ axis that is currently parallel to device X axis.
The following function rotates the model around device X and Y axes (resulting in a natural user experience):
function rotateModel (xRot, yRot) {
    // get the current transformation matrix:
    var m = new WebKitCSSMatrix(window.getComputedStyle(cube).webkitTransform);
    // Model Y’ axis that is now parallel to device Y axis:
    var yAxis = ipad.deviceYAxis.multiply(m);
    // Rotate around Y’:
    var m1 = m.rotateAxisAngle(yAxis.m11, yAxis.m21, yAxis.m31, yRot);
    // Model X’ axis that is now parallel to device X axis:
    var xAxis = ipad.deviceXAxis.multiply(m1);
    // Rotate around X’:
    var m2 = m1.rotateAxisAngle(xAxis.m11, xAxis.m21, xAxis.m31, xRot);
    // Apply the final rotation matrix to the model:
    cube.style.webkitTransform = m2.toString();
}
Check out the final product here.
Sunday, July 10, 2011
## HTML5 mobile games
For everybody who got here looking to sell their games to my company, you can catch me on twitter @homam, Google or Windows Live (whichever you like :)).
Thursday, March 11, 2010
## Mutually Dependent Systems
In this post I am discussing a problem that I have faced several times in the past year. Simplicity is always a goal in design as it saves resources during development and maintenance. But it's not always clear which design is simpler. Sometimes a seemingly complex design turns out to be simpler to develop, maintain and extend.
In a master-slave architecture, assume S1 is the master. It produces one or more tasks form a given job and transfers them to S2 (the slave); S2 does the tasks and return the results back to S1. S2, the slave, depends on S1, the master.
If the next time that S1 assigns a task to S2 it uses the information that exists in the result of a previous task that had been assigned to S2 then S1 also depends on S2 and we have a mutually dependent couple.
In our terminology the systems are mutually dependent if and only if S1 uses the information it gained as a result of a previous task that it had already assigned to S2. It doesn’t matter if S2 has completed the previous task or not, but it should have reported something to S1 that is useful for S1 for a next assignment of a task to S2.
If S1 is only using the fact that S2 is busy or free then we don't call it a mutual dependency. S1 must use the information that is generated by processing a task at S2. For example a MapReduce system is not a mutually dependent system.
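A minimal sketch of this definition (our own illustration; the class names and payload fields are hypothetical):

class Slave:
    def run(self, task):
        # the result carries information the master will reuse
        return {"result": task * 2, "hint": task + 1}

class Master:
    def __init__(self, slave):
        self.slave, self.hint = slave, 0
    def step(self):
        out = self.slave.run(self.hint)  # assign a task...
        self.hint = out["hint"]          # ...and depend on its output next time
        return out["result"]

m = Master(Slave())
print([m.step() for _ in range(3)])  # -> [0, 2, 4]

Here the master's next assignment uses information produced by the slave's previous task, which is exactly the mutual dependency described above.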
Why is it important? You should have already guessed that S2 is the name of a class of slave systems that work with S1. There could be many instances of S2. Let's define a homogenous mutually dependent system as a system in which all slaves of S1 are in the same class.
Two slaves are of the same class if they share a common interface for communicating with S1.
Now assume that S3 is also a slave for S1. S3 is in a different class other than S2 if either its input or its output interface is different from S2's.
When designing mutual dependent systems we have to always decide whether to keep the mutual dependencies or to break them by introducing new nodes. It's mainly a decision over complexity. The other factor that may affect your decision is the swiftness of the system. Introducing a new node will usually reduce the responsiveness.
[Figure: Breaking the mutual dependency by using the S4 node. Note that S3 is another class and uses a different interface to communicate with S4.]
For instance a new node must not be added if S1 waits for S2 to return. Generally you should try to keep the number of nodes as small as possible if the operations are not asynchronous.
Homogenous mutual dependency is OK (when the systems are simple and synchronous) but things get much dirtier as we introduce new classes to the system. On the other hand if extensibility is a goal you should try to avoid mutual dependencies.
In conclusion, use mutually dependent systems in live systems, when a rapid response is required, and try to avoid them by introducing middle nodes if you have many classes of slaves or if extensibility is a goal.
Thursday, February 18, 2010
## Canvas Intellisense in Visual Studio
I was playing with the HTML5 Canvas element to see how it could be useful in future web-based game development. I like that it is easier than GDI. I haven’t yet done much performance testing but it is definitely faster than making games by animating DOM elements.
Recently I had some free time so I decided to create vsdoc documentation for the Canvas element interface for Visual Studio. I added intellisense (auto completion) and some helps and tips.
It is tuned to work with VS2010, but we can make it work with VS2008 too.
canvas-vsdoc.js contains the intellisense documentation.
canvas-utils.js has a few utility functions (like detecting if the browser supports Canvas) and some enumeration types for things like Line Joins, Repetitions, Text Aligns, etc.
To use the intellisense you need to reference canvas-vsdoc.js in the beginning of your JavaScript file, like this:
/// <reference path="canvas-vsdoc.js" />
Note you can just drop the .js file and Visual Studio will write the reference.
Then use a utility method to get a reference to canvas element:
var canvas = Canvas.vsGet(document.getElementById("canvas1"));
Canvas.vsGet(element) receives an HTML element and returns the given element itself at runtime. But at design time it returns the Canvas.vsDoc.VSDocCanvasElement object that contains the documentation.
Then you can use the canvas element as usual:
var ctx = canvas.getContext("2d");
ctx.arc(50, 50, 25, 0, Math.PI, true);
…
Please note canvas-vsdoc.js must not be included in runtime but canvas-utils.js should be included (if you want to use Canvas.vsGet() and other utilities).
In VS2008 you should trick the environment by assigning the variable that refers to the 2D context to Canvas.vsDoc.Canvas2dContext, by something like this:
var ctx = canvas.getContext("2d");
if (typeof DESIGN_TIME != "undefined" && DESIGN_TIME)
    ctx = Canvas.vsDoc.Canvas2dContext;
The DESIGN_TIME global variable is defined inside canvas-vsdoc.js. At runtime it should be undefined or false.
Just a note: if you still want to work in IE, you will find this Google extension very interesting: http://code.google.com/chrome/chromeframe/
Update: Visual Studio 11 natively supports canvas intellisense.
Wednesday, February 10, 2010
Older Hyzonia games depend on session and authentication cookies. This dependency has been fixed in the newer games by storing the session ID in JavaScript variables. The cookie-independent services explicitly require a session ID to be sent by their clients.
In this post I am not going to dig into the details of session management in Hyzonia platform, I just want to highlight a series of problems in the old schema that led us to redesign the session management behavior.
Hyzonia games can be embedded in publishers’ websites using a piece of code we call Hyzobox. Hyzobox basically renders an iframe in the webpage. The internet domain where the actual game is hosted could be different from the publisher’s domain. If you have ever tried this before you know that we are going to have a lot of cross-site security issues.
To address cross-site scripting issues we developed the Hyzobox In/Out API. A publisher can control certain things in the game and be notified about the events that are occurring inside the game using In/Out. It is a JavaScript-based solution and, strangely, is widely supported in all major browsers. The In/Out API is not made public yet, but we are using it extensively in www.hyzogames.com. For instance, whenever you win in a game, Hyzogames.com will be notified about this event (winning) and may show you a message box.
But cookies are another issue. Different browsers have way different behaviors when it comes to handling cookies in iframes. For starters, for it to work in IE you need a P3P header like this:
CP="IDC DSP COR ADM DEVi TAIi PSA PSD IVAi IVDi CONi HIS OUR IND CNT"
There's a lot to say here. I have a long-standing view that P3P is generally useful, but this kind of usage is pointless. Anyway, for now just add it in your response and relax.
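What "adding it in your response" looks like in code, as a minimal sketch (ours, in Python; any server-side stack can set the same response header):

def app(environ, start_response):
    # plain WSGI app that sends the P3P compact policy with every response
    headers = [
        ("Content-Type", "text/html"),
        ("P3P", 'CP="IDC DSP COR ADM DEVi TAIi PSA PSD IVAi IVDi CONi HIS OUR IND CNT"'),
    ]
    start_response("200 OK", headers)
    return [b"<html>...</html>"]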
http://awildduck.com/?author=1&paged=2 | Who is Ellery? http://awildduck.com/?p=1
Bitcoin & blockchain credentials: http://awildduck.com/interests
Other Interests: http://awildduck.com/interests/blockchain_qualifications.htm
# Why would anyone attribute value to Bitcoin?
Oh, Cheez…We’re back to this question, again!
As a Bitcoin columnist, I get this question a lot. Today, an answer was requested at Quora.com, where I am a leading contributor on cryptocurrencies:
“Clearly, some people value Bitcoin. But how can this be? There is nothing there to give it value!”
Many individuals, like the one who asked this question, suspect that Bitcoin was pulled out of thin air—and that it is not backed by gold, a government, or an authoritative redemption guaranty. After all, it is just open source code. What stops me from creating an ElleryCoin using the same code?!
• Indeed, it was pulled out thin air
• It isn’t backed by an asset, government or promise
• You could easily clone Bitcoin (the entire mining ecosystem) and distribute it yourself. It would be exactly like Bitcoin. Yet, Bitcoin is clearly valued by everyone, and your new coin is unlikely to generate interest or adoption.
A More Complete Answer: What is value?
Bitcoin has more intrinsic value than a government-printed paper bill. The value arises from a combination of fundamental properties:
• It has a capped supply
• It is widely recognized, liquid, and resistant to legislation
• It has attained the robust supply-demand of a growing, 2-sided network.
• It is open and transparent. This elevates user trust
• Unlike cash and credit, Bitcoin requires no back-end settlement. That’s because it is not a payment instrument. Rather, it is money itself.
Finally, its value is likely to be durable, because it is not printed by a country that spends beyond its means and racks up debt. In fact, it can never be inflated.
Downside and Risks
But wait! What about the long transaction delay and high cost? There are sharp disagreements among miners, users and developers concerning block size, transaction malleability, and replay issues. Aren’t these deal killers? And what about wild volatility in the exchange rate? Doesn’t this retard adoption as a functional currency?
These are transient issues associated with a new technology. Bitcoin is weathering growth pains that arise from a new and distributed governance technology (democracy can be messy!). But, all of these issues have sound solutions. We have witnessed and tested the solutions with various forked coins. Think of these improved altcoins as beta tests. Even if current problems delay the day when you can spend bitcoin at every retail establishment—it is already sucking liquidity from national currencies and becoming the world’s de facto reserve currency.
Many individuals find all of this hard to accept. That is because we have been conditioned to think that ‘value’ arises from assets with ‘intrinsic’ value, the promise of redemption, or by edict. This is not true. In all things (including gold, a Picasso painting, or your labor), value arises from simple supply and demand.
Some individuals claim that all other factors are secondary. But, even this statement is false. All other factors are irrelevant. They may be related, but they are not the source of value.
I recognize that this answer may seem smug or definitive. So, allow me to suggest related questions with answers that are a bit more interesting, because they are subtle. Unlike the question of value, these two questions are open to analysis and opinion: (1) “Will people continue to value bitcoin in the future?” — And (2) “When will Bitcoin stop swinging wildly in value?” (measured by its exchange rate with other currencies).
This is fun! Let’s explore…
Ellery Davies co-chairs CRYPSA, publishes A Wild Duck and hosts the New York Bitcoin Event. Last month, he kicked off the Cryptocurrency Expo in Dubai. Click Here to inquire about a live presentation or consulting engagement.
# Should we ‘out’ Bitcoin creator, Satoshi?
Everyone likes a good mystery. After all, who isn’t fascinated with Sherlock Holmes or the Hardy Boys? The thirst to explore a mystery led us to the New World, to the ocean depths and into space.
One of the great mysteries of the past decade is the identity of Satoshi Nakamoto, the inventor of Bitcoin and the blockchain. Some have even stepped forward in an effort to usurp his identity for fame, infamy or fortune. But in this case, we have a mystery in which the subject does not wish to be fingered. He prefers anonymity.
This raises an interesting question. What could be achieved by discovering or revealing the identity of the elusive Satoshi Nakamoto?…
The blockchain and Bitcoin present radically transformative methodologies with far ranging, beneficial impact on business, transparency and social order.
How so? — The blockchain demonstrates that we can crowd-source trust, while Bitcoin is much more than a payment mechanism or even a reserve currency. It decouples governments from monetary policy. Ultimately, this will benefit consumers, businesses and even the governments that lose that control.
Why Has Satoshi Remained Anonymous?
I believe that Satoshi remains anonymous, because his identity, history, interests and politics would be a distraction to the fundamental gift that his research has bestowed. The world is still grappling with the challenge of education, adoption, scaling, governance, regulation and volatility.
Some people are still skeptical of Bitcoin’s potential or they fail to accept that it carries intrinsic value (far more than fiat currency, despite the absence of a redemption guaranty). Additionally, we are still witnessing hacks, failing exchanges and ICO scams. Ignorance is rampant. Some individuals wonder if Satoshi is an anarchist—or if his invention is criminal. (Of course, it is not!).
Outing him now is pointless. He is a bright inventor, but he is not the story. The concepts and coin that he gave us are still in their infancy. Our focus now must be to understand, scale and smooth out the kinks, so that adoption and utility can serve mankind.
Ellery Davies co-chairs CRYPSA, publishes A Wild Duck, hosts the New York Bitcoin Event and kicked off the Cryptocurrency Expo in Dubai. Click Here to inquire about a live presentation or consulting engagement.
Bitcoin has many characteristics of a currency. It is portable, fungible, divisible, resistant to forgery, and it clearly has value. Today, that value came close to $20,000 per coin. Whether it has ‘intrinsic value’ is somewhat of a moot question, because the US dollar hasn’t exhibited this trait since 1972. Today, economists don’t even recognize the intrinsic value of gold—beyond a robust, international, supply-demand network.

Lately, Bitcoin is failing as a viable currency, at least for everyday consumer transactions. The settlement of each transaction is bogged down with long delays and a very high cost. The situation has become critical because of squabbling between miners, users and developers over how to speed transactions or lower the cost of settlement. Bitcoin forks and altcoins such as Dash and Bitcoin Cash demonstrate that these technical issues have solutions. Since Bitcoin is adaptable, I believe that these issues are temporary.

But an interesting question is not whether Bitcoin will eventually become a consumer currency. It is whether Bitcoin can distinguish itself as a store of value, rather than just an instrument for payment or debt settlement. After all, a Visa credit card, a traveler’s check and an Amazon gift card can all be used in retail payments, but none of them have value unless backed by someone or something. US Dollars, on the other hand, are perceived as inherently valuable. They carry the clout and gravitas of institutions and populations, without users questioning from where value arises. (This is changing, but bear with me)…

What about Bitcoin? Does owning some bitcoin represent a store of value? Yes: It absolutely does!

Bitcoin is a rapidly maturing two-sided network. Despite a meteoric rise in exchange value and wild fluctuations during the ride, it is the epitome of a stored-value commodity. Regardless of government regulation, adoption as a consumer payment instrument, or the cost and speed of transactions, it has demonstrated stored value ever since May 22, 2010, when Laszlo, a Bitcoin code developer, persuaded a restaurant to accept 10,000 BTC for 2 pizzas.

The “currency” accepted by the pizza parlor wasn’t a gift card. It was not backed by a government, a prior deposit, dollars, gold, the promise of redemption, or by threat of force or blackmail. When a large community of individuals value, exchange, and can easily authenticate something that has none of those underpinnings, that thing clearly has stored value. In this case, value arises from its scarcity and a robust supply-demand network.

Because its value is not tied to a government or to other commodities, its exchange rate with other things will be bumpy, at first. But as it is recognized, traded and adopted as a stored-value token, the wild spikes will smooth out. A tipping point will precipitate rapid adoption…

• when some vendors begin to quote prices in Bitcoin (rather than national currency)
• when some of these vendors retain a fraction of their bitcoin revenue for future purchases, payments or debt settlements—rather than converting revenue to fiat/national currency with each sale

Bitcoin is clearly a store of value, and it is beginning to displace gold and the US dollar as the recognized reserve currency (it is gradually becoming the new gold standard). But before Bitcoin can serve as a widely adopted everyday currency (i.e. as a payment instrument—with or without the stored value of a currency unto itself), it must first incorporate technical improvements that speed transactions and lower cost. This is taking longer than many enthusiasts would have liked. But, that’s OK with anyone who keeps their eye on the big picture. Democracy is sometimes very sloppy.

Ellery Davies co-chairs CRYPSA, publishes A Wild Duck and hosts the New York Bitcoin Event. Last month, he kicked off the Cryptocurrency Expo in Dubai. Click Here to inquire about a live presentation or consulting engagement.

# Revisiting Bitcoin Fair Value Calculation

In an April 2014 article, I demonstrated how one might approach a fair Bitcoin valuation.

• Original Methodology: What fraction will Bitcoin capture of the float needed to support daily global commerce?

My methodology was based on the demand that Bitcoin would generate if it displaced a small fraction of cash and credit used for retail and commercial payments around the world. At the time, Bitcoin had a value of USD $450. I estimated that if it captured 5% of global payments, it would have a fair value of about $10,000/BTC. (I didn’t complete the calculation—I left that up to the reader. That’s because I was concerned that publishing such a prediction would cause me to lose credibility as an economist and blogger. For what it is worth, I also predicted that a rise to $10,000 would take 5~8 years.)
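To make the arithmetic concrete, here is a minimal sketch in Python; the payment-float figure below is an assumption chosen only to be consistent with the 5% share and the roughly $10,000 result quoted above, since the article left the inputs to the reader:

payment_float = 4.2e12   # assumed worldwide cash+credit float for payments (USD)
btc_share = 0.05         # the 5% capture rate quoted above
coin_supply = 21e6       # Bitcoin's capped supply
fair_value = payment_float * btc_share / coin_supply
print(round(fair_value)) # -> 10000 (USD per BTC)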
As you might imagine, my friends and family urged me to unload my BTC investment. The April 2014 price of $450/BTC seemed very high to most armchair analysts. After all, thirteen months earlier, it had been just $45.
Yet, now, just 2½ years later, Bitcoin has reached $18,000 per coin. Last week, on Dec 7, 2017, it climbed 40% in just 40 hours, and 120% in less than 2 months. Naturally, this leaves everyone asking if Bitcoin’s rapid rise in value represents an investment “bubble”.

…And so it is time to update the calculation of a fair value for Bitcoin. I can’t do better than point to a terrific prediction model described by Divyanth Jayaraj. His answer to a question at Quora presents a sound basis for valuation—much better than my original valuation method. How so?…

• Reserve Methodology: What fraction of int’l business will be settled with the transfer of Bitcoin instead of Gold or Dollars?

Bitcoin is rapidly demonstrating viability as a reserve rather than a daily transaction currency. Few people believe that Bitcoin will replace national currencies throughout the world, but it very well may replace gold for government and interbank settlement, and for large intercontinental purchases of commodities, such as oil, grain or airplanes. Sure! When developers and miners get a handle on transaction cost and delays, it may also become a de facto instrument for retail payments and debt settlement even among consumers. But, even if Bitcoin never achieves this status, Divyanth’s excellent analysis is still valid.

I won’t steal the author’s thunder. Click the link and learn what is very likely to be a fair future value for Bitcoin. Prepare to digest a very large number. I didn’t think of this valuation methodology, but I agree that it represents a realistic peek into the future. For a few other methods of determining Bitcoin’s inherent value, check out the links at the bottom of my original article. But that was then and this is now. Give extra weight to this newer analysis. The methodology is more accurate given what we know now.

Ellery Davies co-chairs CRYPSA, publishes A Wild Duck and hosts the Bitcoin Event. He was keynote at Cryptocurrency Expo in Dubai. Click Here to inquire about a presentation or consulting engagement.

# Bitcoin: up 120% in less than 2 months

At the end of October, I delivered a keynote speech at the Cryptocurrency Expo in Dubai. That was just 5 weeks ago. When I left for the conference, Bitcoin was trading at $6,300/BTC. But in the next few weeks, it reached $10,000. Last week, I liquidated part of my investment at just under $13,000/BTC. Now, Bitcoin is about to cross $16,000. (I began writing this 10 minutes ago… But it has risen another $1,600 since then. Now, it is $17,000.)

Dear Reader: I believe in Bitcoin. Yet, there is a “But” in the last paragraph below…

I believe in Bitcoin. Its rise is not fueled solely by investor hysteria. Rather, it is a product of delayed appreciation for a radical, transformative network technology.

In the mid 1970s, the microprocessor was spreading to every consumer gadget. It started a trend toward tools that added power and enjoyment to all facets of life. And they were quickly becoming faster, lower-power, lower-cost and more ubiquitous. If you understood the potential of the computer chip before mainstream investors, you couldn’t really invest directly in the microprocessor. After all, it is a platform improvement. But you could come very close—You might have invested in Intel, Fairchild or Texas Instruments.

Jump forward 20 years: In the mid-1990s, the Internet was spreading to every class of citizen and to all corners of the earth.
But just as with a computer chip, you could own a web site, but you couldn’t own a piece of the internet’s market potential. You can’t invest in an idea, unless you are the inventor and you hold a patent.

But 5, 6 and 7 years ago, many individuals saw the future. They understood that Bitcoin is transformative. They recognized that—contrary to popular misconception—Bitcoin is backed by something more tangible than dollars, euros and renminbi. More importantly, it exhibits the potential to become the global reserve currency. And it continues to do so, even as internal bickering threatens its utility as a consumer payment instrument. That’s because it diverts liquidity away from gold and national fiat. Ultimately, it forces governments to be transparent and accountable to their citizens. This is further reinforced by rampant inflation in countries around the world and a growing list of trading partners who seek alternatives to the US dollar.

But, just like real estate, the supply of Bitcoin is capped. No one can produce more. It’s the math, stupid! Even if you only realized this one year ago, you still would have reaped a 2000% return on your investment as of this morning. (I am cherry-picking here, but Bitcoin had just crossed $630 on October 20, 2016.)
Let’s be clear: This is not a dot-com bubble or a 17th century Dutch tulip bulb mania. It is far more comparable to the 19th century California gold rush. The only frenzy is to acquire a functional instrument that is still trading for far below par value—but with the strange caveat that hoarding retards liquidity and the ‘functional’ adoption that we need to sustain long-term value.
The Bottom Line
In the grand scheme of things, Bitcoin is still undervalued—even at $17,000/BTC. It will fall and it will rise, but it will certainly be valued higher years from now. …But I must admit that this sudden and urgent race into outer space is a bit unsettling. From an investor’s perspective, it is not rational to leave when I recognize that the exuberance itself is rational. Yet, here we are at $17,000. I am taking some Bitcoin off the table—a bit of Bitcoin.
# Dr. Steven Gundry says plant-based diets are the problem
Have you seen the clickbait campaign that focuses on the research of Dr. Steven Gundry? It employs a slimy, photo-tile lure that asks you to turn up your speakers and then hawks a product or service disguised as a breakthrough discovery. These scams force the viewer to stay on the page. Typically, there is no indication of how long the video is, or any way to skip forward.
But often, it is hard to tell if a photo tile is news or clickbait. Big companies like Yahoo and Outbrain intermingle genuine news with marketing scams, teasers and outright fake news in an array of little photos at the end of every feature. This particular clickbait may be the story of a dogged counter-cultural researcher with a genuinely relevant finding. It could be newsworthy… I’m just not sure. Dr. Gundry clearly believes that our health is adversely affected by many of the plant-based foods that we thought were healthy, because of a defense mechanism linked to lectins.
Steven Gundry Food Pyramid
Passing judgement on Dr. Gundry’s evolutionary claims and diet recommendations begs for independent clinical studies, or at least the analysis and commentary of scholars in nutrition, gastroenterology and evolution. But, like Robert Atkins and Dean Ornish, Dr. Gundry seems earnest in his research and motives. I don’t think that he is selling anything other than his opinion.
I found web sites and white papers that summarize his research and conclusions without a scammy video. If true, this would be an eye-opener—completely unexpected! While his points fascinate, I don’t have the tools to determine whether they are legit. This certainly merits vetting.
For example, Gundry claims that farmers have selectively reinforced a genetic mutation in cows, which appeared only two thousand years ago—and that this has resulted in a lectin-like protein in milk called Casein A1. (Normal cows make Casein A2, a safe protein). Apparently, the only herds of “normal” cows are on farms in southern Europe. Could this result in food poisoning for the rest of us? Dr. Gundry is pretty convincing that the answer could be “Yes”.
This article is a stub without a conclusion. Rather than passing judgement, I encourage further inquiry. Reader feedback is invited. What do you think about Dr. Gundry’s analysis and claims? Might there be adverse effects associated with many “healthy” vegetables and out-of-season fruits? Tell me, doctor: Must I give up sun-dried tomato and eggplant?!
# Will Futures Market Affect Bitcoin Value or Viability?
The Chicago Mercantile Exchange (CME) is likely to begin listing options contracts for Bitcoin futures. And the CBOE is very likely to follow suit. What impact will this have on the value of Bitcoin holdings around the world? And what impact on its use as a money transmission mechanism?
This Financial Times article* explains that an internationally accessible options market will create the first opportunity for betting against Bitcoin, other than unloading coins which were previously purchased. Some individuals feel that this could precipitate a crash in the Bitcoin exchange rate.
I disagree. Buying “puts” or selling “calls” against a commodity risks upward spikes, because individuals writing an uncovered option must make good on the contract by buying it, even if the price has recently run up. This pushes the commodity even higher into the stratosphere—until the buyer unloads to realize his gains. This magnifies short-term volatility (sometimes massively), but has no effect on everyday users or long-term buy-and-hold investors.
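To make the mechanics concrete, here is a minimal numeric sketch of the squeeze arithmetic described above. All prices and the premium are hypothetical, chosen only for illustration:

```python
# Hypothetical P&L for the writer of one uncovered ("naked") Bitcoin call.
# None of these numbers are real contract specifications.
strike = 15_000            # USD strike price of the call
premium = 800              # USD premium collected when the call was written
spot_at_exercise = 18_000  # USD spot price when the contract is exercised

# The uncovered writer must buy at spot in order to deliver at the strike.
net_loss = (spot_at_exercise - strike) - premium
print(f"Writer's net loss: ${net_loss:,} per coin")  # -> $2,200 per coin
```

That forced buying at spot, multiplied across many uncovered writers, is exactly the pressure that can push the price higher still.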
Contrary to conventional wisdom, it also has no effect on the utilitarian value of Bitcoin as an instrument of debit, payment and settlement. Volatility has no real effect on payment users or long-term investors. But adding sanctioned financial markets—even risky ones, like CME options—adds demand to a capped commodity. Like real estate, no one can make more Bitcoin. There will never be more than 21 million units. So, in this respect, the new market will push ever more early investors into millionaire territory.
* Disclosure: I had no role in this article, and I do not know the author. But the Financial Times was a sponsor for my keynote presentation to the Cryptocurrency Expo in Dubai 3 weeks ago (end of October 2017).
https://indico.desy.de/event/27991/timetable/?view=standard_numbered_inline_minutes
# 37th International Cosmic Ray Conference – The Astroparticle Conference –
The ICRC Conference Series has been organised biennially since 1947 under the auspices of the International Union of Pure and Applied Physics (IUPAP) and is the largest conference on Astroparticle Physics, bringing together the different topics of the field.
The main topics are Cosmic Ray Physics, Gamma-Ray Astronomy, Neutrino Astronomy & Neutrino Physics, Dark Matter Physics and Solar and Heliospheric Physics. Additional new topics in the 2021 conference are Multi-messenger Astronomy and Outreach & Education. Broad reviews and recent scientific results, related theory and modelling, experimental methods, techniques and instrumentation will be presented. The ICRC 2021 will be a prime forum to learn about news and developments in Astroparticle Physics and to appreciate the links between its topics.
Due to the COVID-19 pandemic, this year’s ICRC will be held entirely online. The format is therefore somewhat unusual. We hope to provide an experience up to the standards of past ICRCs. To make full use of the online format, all contributions will be made available one week before the official start of the conference (abstracts and proceedings papers for all contributions; presentation slides and pre-recorded presentation videos for the 12-min talks; posters and pre-recorded 2-min flash talks for poster contributions). This allows browsing through the contributions, commenting and asking questions, and coming prepared to the more discussion-like, topical parallel sessions.
Yours Sincerely,
Johannes Knapp
Chair of the Local Organising Committee
• Monday, July 12
• Plenary: Opening 01
• 1
Pre-Opening
Organizational Details
Speaker: Johannes Knapp (DESY, Zeuthen)
• 2
Opening
Speakers: Johannes Knapp (DESY, Zeuthen), Sunil K. Gupta (TIFR, Mumbai)
• 3:30 PM
Break
• Plenary: Highlight 01 01
Convener: Anna Franckowiak (DESY)
• 3
AMS Highlights
In nine years on the International Space Station, the Alpha Magnetic Spectrometer (AMS) has collected more than 170 billion cosmic rays, measuring with unprecedented precision different components of the charged cosmic rays up to a few TeV. This includes fluxes of positrons, electrons, antiprotons, protons, and nuclei from helium to silicon and beyond. A summary of the latest results will be shown. Results on the time variation of cosmic-ray fluxes associated with solar activity on different time scales will be presented.
Speaker: Javier Berdugo (CIEMAT)
• 4
Neutrino Telescope in Lake Baikal: Present and Nearest Future
The progress in the construction and operation of the Baikal Gigaton Volume Detector in Lake Baikal is reported. The detector is designed for the search for high-energy neutrinos whose sources are not yet reliably identified. It currently includes over 2000 optical modules arranged on 56 strings, providing an effective volume of 0.35 km³ for cascades with energy above 100 TeV. We review the scientific case for Baikal-GVD, the construction plan, and first results from the partially built experiment, which is currently the largest neutrino telescope in the Northern Hemisphere and still growing.
Speaker: Zhan-Arys Dzhilkibaev (Institute for nuclear research Moscow)
• 5
Fermi LAT and GBM collaboration results on GRB 200415A.
Magnetars are neutron stars with the strongest magnetic fields known in the Universe, with an intensity up to a thousand times higher than typical neutron stars. Rarely, magnetars can produce enormous eruptions, called Magnetar Giant Flares (MGFs), consisting of short-duration bursts of hard X-rays and soft gamma rays – a bright and variable initial spike lasting a few tenths of a second and a significantly dimmer pulsating tail lasting a few hundred seconds that can only be detected from MGFs within or close to our galaxy. On April 15, 2020, a short bright burst of MeV gamma rays triggered the Gamma-Ray Burst Monitor (GBM) aboard the Fermi spacecraft; named GRB 200415A, it was localized by the InterPlanetary Network (IPN) inside the disk of the nearby Sculptor galaxy. 19 seconds later, and for nearly 300 seconds, the Large Area Telescope (LAT) detected GeV photons in spatial coincidence with the signal at lower energies. In this talk we present the recently published results of the GBM and LAT analysis of GRB 200415A. Our detailed analysis shows that the low-energy emission has very peculiar properties typically observed in flares from nearby magnetars, while the GeV detection is consistent with the IPN localization and spatially associated with the Sculptor galaxy. Hence, we infer that the gamma rays likely originated with the MGF in Sculptor, and not from a cosmological gamma-ray burst, and we suggest that the GeV signal is generated by an ultra-relativistic outflow that first radiates the prompt MeV-band photons. This discovery represents the first detection of high-energy emission from an MGF and proves that extragalactic MGFs may indeed disguise themselves as short GRBs and constitute a small fraction of current short GRB samples.
Speaker: Niccolo di Lalla
• 5:30 PM
Break
• Discussion: 01 Magnetic Fields and CR Propagation | CRI 03
• 6
Extragalactic magnetic fields and directional correlations of ultra-high-energy cosmic rays with local galaxies and neutrinos
Deflections of ultra-high-energy cosmic rays (UHECRs) in extragalactic magnetic fields (EGMFs) decrease the expected directional correlations between UHECR arrival directions on the one hand and UHECR source positions and neutrino arrival directions on the other hand. We use the recently observed correlation between UHECRs and local star-forming galaxies by the Pierre Auger Observatory to put limits on the EGMFs between these galaxies and the Milky Way [1]. In addition, using the same methods, we investigate whether correlations between UHECR and neutrino arrival directions can be expected [2]. We take into account deflections in extragalactic and Galactic magnetic fields, energy-loss interactions with background photon fields and UHECR spectrum and composition measurements. For a source density of star-forming galaxies we show that strong EGMFs ($B > 10$ nG Mpc$^{1/2}$) are required to reproduce the level of anisotropy that Auger has observed. For more numerous sources, e.g. spiral galaxies, weaker EGMFs are allowed. However, this would suggest that UHECR acceleration occurs in many regular galaxies, which is rather difficult to motivate. We demonstrate that even for the weakest EGMFs the non-observation of neutrino multiplets strongly constrains the possibility to find neutrino-UHECR correlations. For star-forming galaxies, or more numerous sources, no neutrino multiplets or neutrino-UHECR correlations are currently expected.
--
[1] A. van Vliet, A. Palladino, A. Taylor and W. Winter, in preparation.
[2] A. Palladino, A. van Vliet, W. Winter and A. Franckowiak, Mon. Not. Roy. Astron. Soc. 494, 4255 (2020).
Speaker: Dr Arjen Rene van Vliet (Z_THAT (Theoretische Astroteilchenphysik))
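For scale, the rms deflection of a UHECR of charge $Z$ traversing a distance $D$ of turbulent field with rms strength $B$ and coherence length $\lambda_c$ is commonly estimated as (a textbook order-of-magnitude relation, quoted here for orientation rather than taken from the contribution):

$$\theta_{\rm rms} \simeq 2.5^{\circ}\, Z \left(\frac{E}{100\,\mathrm{EeV}}\right)^{-1} \left(\frac{B}{1\,\mathrm{nG}}\right) \left(\frac{D}{10\,\mathrm{Mpc}}\right)^{1/2} \left(\frac{\lambda_c}{1\,\mathrm{Mpc}}\right)^{1/2}$$

The $B\sqrt{D\lambda_c}$ scaling is why limits of this kind are naturally quoted in units of nG Mpc$^{1/2}$.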
• 7
Faraday rotation constraints on large scale Halo model
The global structure of the magnetic field inside the disk of our Galaxy is quite well described by dynamo action and constrained by Faraday rotation measurements. The Halo, on the other hand, is much more of an enigma. Other face-on spiral galaxies show spiral magnetic structures in their disks, like the Milky Way, showing that our magnetic field is a rather typical feature for such a class of galaxies. Furthermore, RM-synthesis of CHANG-ES observations shows an increasing number of edge-on spiral galaxies presenting X-shaped structures surrounding the disk and extending orderly to distances of up to tens of kpc. Although the 4-dimensional topology of those magnetized halos and their physical nature is still unclear, they hint at the strong possibility that our Galaxy also has a large and well-organized magnetized Halo. Current models for the Milky Way's magnetic field extend very little out of the Galactic plane and do not consider an extended, topologically well-organized field in the Halo. In this work, conceptually motivated by the possible existence of a Parker-type galactic outflow, we propose a simple Archimedean-like field for an extended Halo magnetic field. We add this component to a simple disk magnetic field in order to model the Faraday rotation signal of extragalactic sources as observed on Earth and compare the results to published maps of Faraday rotation. We show that an extended magnetic field in the Halo is not only compatible with the observed Faraday rotation measurements, but is actually favored by them.
Speaker: Dr Thomas Fitoussi (Karlsruhe Institute of Technology - IAP (IKP))
• 8
Magnetic field generation by the first cosmic rays
We recently proposed that cosmic rays were first accelerated at redshift z ~ 20 by supernova remnants of the first stars, without a large-scale magnetic field. In this talk, we discuss the generation of the large-scale magnetic field by these first cosmic rays. We show that even though the current and charge neutralities are initially satisfied, the current neutrality is eventually violated if there is an inhomogeneity, so that a magnetic field is generated. In addition, we propose a new driving mechanism for the Biermann battery in an inhomogeneous plasma with streaming cosmic rays. We demonstrate the new generation mechanisms of the magnetic field by conducting three-fluid plasma simulations and particle-in-cell simulations. We propose that the first cosmic rays generate a magnetic field on large scales at redshift z ~ 20.
Speaker: Yutaka Ohira (The University of Tokyo)
• 9
Magnetic field structure in halos of star-forming disk galaxies
The CHANG-ES (Continuum HAlos in Nearby Galaxies - an EVLA Survey) project has observed a sample of 35 edge-on spiral galaxies with the JVLA in C- and L-band. The observations in all Stokes parameters provide polarization information, and for 16 galaxies with extended emission it is possible to describe the large-scale magnetic field structure in their halos. We exemplify a few of these objects and demonstrate the properties of the mean large-scale magnetic field structure resulting from a stacking experiment. We briefly compare the results with the Milky Way and discuss implications for the transport of cosmic-ray electrons.
Speaker: Prof. Ralf-Jürgen Dettmar (Ruhr University Bochum)
• 10
Phenomenology of CR-scattering on pre-existing MHD modes
We present the phenomenological implications of the micro-physics of cosmic-ray (CR) diffusion as resulting from particle scattering onto the three modes into which magnetohydrodynamic (MHD) cascades are decomposed. We calculate the diffusion coefficients from first principles based on reasonable choices of the physical quantities characterizing the different environments of our Galaxy, namely the Halo and the Warm Ionized Medium, and implement for the first time these coefficients in the DRAGON2 numerical code. Remarkably, we obtain the correct propagated slope and normalization for all the charged species taken into account, without any ad-hoc tuning of the transport coefficients. We show that fast magnetosonic modes dominate CR confinement up to $\sim 100 \, \mathrm{TeV}$; Alfvénic modes are strongly subdominant due to the anisotropy of the cascade (in agreement with previous findings) up to rigidities in the sub-PeV domain, where their contribution may show up as a spectral feature, potentially observable in the upcoming years. We also find that such a framework cannot be responsible for CR confinement below $\sim 200 \, \mathrm{GeV}$, possibly leaving room for an additional confinement mechanism, and that the Kolmogorov-like scaling of the $B/C$ ratio cannot be reproduced. Therefore this scaling might not be the imprint of the pre-existing turbulence spectrum.
Speaker: Ottavio Fornieri (DESY Zeuthen)
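For readers unfamiliar with such transport setups, the quantity being derived is a rigidity-dependent diffusion coefficient. A schematic single-power-law stand-in (illustrative values only; not the coefficients computed from the MHD cascade in this talk) looks like:

```python
import numpy as np

def diffusion_coefficient(R_GV, D0=1e28, R0=4.0, delta=1/3):
    """Schematic isotropic diffusion coefficient D(R) = D0 * (R/R0)**delta.

    R_GV  : rigidity in GV
    D0    : normalization in cm^2/s at the reference rigidity R0 (in GV)
    delta : slope; 1/3 corresponds to a Kolmogorov-like scaling
    """
    return D0 * (np.asarray(R_GV) / R0) ** delta

# Compare a Kolmogorov-like slope with a steeper one at 100 GV
print(diffusion_coefficient(100.0, delta=1/3))
print(diffusion_coefficient(100.0, delta=0.5))
```

The point of the contribution is precisely that coefficients of this kind can be computed from the MHD modes themselves instead of being fit.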
• 11
CRPropa 3.2: a framework for high-energy astroparticle propagation
The landscape of high- and ultra-high-energy astrophysics has changed in the last decade, in large part owing to the inflow of data collected by cosmic-ray, gamma-ray, and neutrino observatories. At the dawn of the multimessenger era, the interpretation of these observations within a consistent framework is important to elucidate the open questions in this field. CRPropa 3.2 is a Monte Carlo code for simulating the propagation of high-energy particles in the Universe. This new version represents a step further towards a more complete simulation framework for multimessenger studies. Some of the new developments include: cosmic-ray acceleration, support for particle interactions within astrophysical sources, full Monte Carlo treatment of electromagnetic cascades, improved ensemble-averaged Galactic propagation, and a number of technical enhancements. Here we present some of these novel features and some applications to gamma- and cosmic-ray propagation.
Speaker: Rafael Alves Batista (Radboud University)
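For readers new to the framework, a minimal 1D propagation run in CRPropa's Python interface looks roughly like the sketch below. It is adapted from the pattern in the public CRPropa 3 documentation; exact class names and signatures can differ between versions, so treat it as a sketch rather than a verified recipe:

```python
from crpropa import *

# Propagation of protons in 1D with energy losses on the CMB
sim = ModuleList()
sim.add(SimplePropagation(1 * kpc, 10 * Mpc))
sim.add(PhotoPionProduction(CMB()))
sim.add(ElectronPairProduction(CMB()))
sim.add(MinimumEnergy(1 * EeV))

# Observer at the origin, writing detected events to a text file
obs = Observer()
obs.add(ObserverPoint())
output = TextOutput('events.txt', Output.Event1D)
obs.onDetection(output)
sim.add(obs)

# A single proton source at 100 Mpc with an E^-2 injection spectrum
source = Source()
source.add(SourcePosition(100 * Mpc))
source.add(SourceParticleType(nucleusId(1, 1)))
source.add(SourcePowerLawSpectrum(1 * EeV, 1000 * EeV, -2.0))

sim.run(source, 10000, True)
output.close()
```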
• Discussion: 19 SEP Acceleration and Propagation | SH 07
#### 07
• 12
Turbulent Reduction of Drifts for Solar Energetic Particles
Particle drifts perpendicular to the background magnetic field are proposed by some authors as an explanation for the very efficient perpendicular transport of solar energetic particles (SEPs). This process, however, competes with perpendicular diffusion caused by magnetic turbulence, which will also disrupt the drift patterns and reduce the efficiency of drift effects. The latter phenomenon is well known in cosmic ray studies, but not yet considered in SEP models. Additionally, SEP models which do not include drifts, especially for electrons, use turbulent drift reduction as a justification of this omission, without critically evaluating or testing this assumption. We present the first theoretical step for a theory of drift suppression in SEP transport. This is done by deriving the turbulence-dependent drift reduction function with a pitch-angle dependence, as applicable for anisotropic particle distributions, and by investigating to what extent drifts will be reduced in the inner heliosphere for realistic turbulence conditions and different pitch-angle dependencies of the perpendicular diffusion coefficient.
Speaker: Jabus van den Berg (Centre for Space Research, North-West University, Potchefstroom, South Africa)
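For orientation, drift effects are often parametrized with the weak-scattering drift coefficient scaled by a turbulence-dependent reduction factor $f_s \in [0,1]$. A common form from the heliospheric modulation literature (not necessarily the pitch-angle-dependent result derived in this talk) is

$$\kappa_A \;=\; f_s\,\frac{pv}{3qB}, \qquad f_s = \frac{(\omega\tau)^2}{1+(\omega\tau)^2},$$

where $\omega$ is the particle gyrofrequency and $\tau$ an effective scattering time; strong scattering ($\omega\tau \ll 1$) suppresses drifts entirely.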
• 13
Anomalous Transport and Acceleration of Energetic Particles
The theoretical description of energetic particle transport near interplanetary shocks in the inner and outer Heliosphere and in other astrophysical contexts usually follows a diffusive paradigm. By means of scattering of particles at magnetic irregularities upstream and downstream of the shock, particles can be moved back and forth across the shock discontinuity and gain energy, forming power-law energy spectra. In recent years, it has become clearer that this scattering does not necessarily adhere to a Gaussian diffusive picture, i.e. it can be an anomalous transport process, possibly caused by inhomogeneous structures in the plasma turbulence, such as small-scale flux tubes. This anomalous transport is, as a first approximation, often characterized by a non-linear behavior of the mean-square displacement of particles. Here we discuss the theory and implications of this assumption in the context of interplanetary shocks. In particular, we will address how this behaviour can be modeled with non-Gaussian probability distributions together with a stochastic differential equation scheme.
Speaker: Frederic Effenberger (Ruhr-University Bochum)
• 14
Electron acceleration parallel and perpendicular to overshoot magnetic field in quasi-perpendicular collisionless shock
Energetic, non-thermal electrons are commonly observed both upstream and immediately downstream from the Earth’s quasi-perpendicular bow shock (Gosling, 1989). Upstream the energetic electrons are generally field-aligned beams, whereas downstream the flux of them is generally most intense in the direction perpendicular to the magnetic field. However, the acceleration mechanism of these electrons remains unclear. Here, we show a new type of electron acceleration process at an overshoot downstream of a quasi-perpendicular collisionless shock, by performing a one-dimensional particle-in-cell (PIC) simulation. The shock parameters are as follows. The Alfven Mach number is 7.1, upstream plasma beta is 0.3, the shock angle is 70 degrees. The ion to electron mass ratio is 625, the ratio of electron plasma to cyclotron frequency is 10.
Kinetic energies of non-thermal electrons, averaged over several gyrations, were divided into those of the guiding-center motions parallel and perpendicular to the ambient field and that of the rotation about the guiding center. We then found the following electron acceleration process. An incoming electron is trapped in a thin structure of the time-varying, compressed overshoot magnetic field during a shock reformation process. Simultaneously, it gains kinetic energy perpendicular to the magnetic field via betatron acceleration, followed by an additional energy increase along the field. The energy conversion from the perpendicular to the parallel direction occurs due to a rapid decrease of the overshoot magnetic field; eventually, the electron is released upstream as a field-aligned beam. The results will be related to in-situ observations of the Earth's bow shock.
Speaker: Fumiko Otsuka (Kyushu Univ.)
• 15
Statistical Survey of Reservoir Phenomenon in Energetic Proton Events Observed by Multiple Spacecraft
In this work, the reservoir phenomenon in the decay phase of gradual solar energetic particle (SEP) events is investigated with the two Helios and the IMP 8 spacecraft from January 1976 to March 1980, and with the two STEREO and the SOHO spacecraft from January 2010 to September 2014. Using these data, sixty-two reservoir events of solar energetic protons were identified, and the effects of perpendicular diffusion and magnetic mirroring on the formation of the reservoir phenomenon have been studied. We find that the reservoir events could be observed at almost all longitudes in the ecliptic at 1 AU, and thus perpendicular diffusion in interplanetary space is an important mechanism to explain the uniform distribution of SEPs. Furthermore, in the 1976 April 30 event, the effects of a magnetic mirror associated with an interplanetary coronal mass ejection (ICME) were observed during the reservoir phenomenon. Therefore, the effects of magnetic mirroring could also help to form the reservoir phenomenon. This study could improve the understanding of the propagation of SEPs in interplanetary space.
Speaker: Yang Wang
• 16
Parker Solar Probe’s Measurements of the November 29, 2020 Large Solar Energetic Particle Event
On November 29, 2020 active region 12790 was located just beyond the east limb of the Sun as viewed by Earth. It erupted at 12:34UT with an M4.4 flare (as measured by GOES) and launched a coronal mass ejection (CME) traveling ~1700 km/s. Not surprisingly, this fast CME drove a shock that accelerated particles up to tens of MeV/nuc. More unusual was that these solar energetic particles (SEPs) quickly filled the inner heliosphere and the event was observed by spacecraft distributed around the Sun, including Parker Solar Probe (PSP), STEREO-A, Solar Orbiter, and those near Earth such as ACE and SOHO. This was the first large SEP event detected by the Integrated Science Investigation of the Sun (ISʘIS) suite on PSP and its first opportunity to make measurements of heavy ion spectra up to tens of MeV/nuc. Here we present an overview of event characteristics as determined by ISʘIS, including H, He, O, and Fe spectra, composition as a function of energy, and temporal variations of the energetic particle intensities throughout the event.
Speaker: Christina Cohen (Caltech)
• 17
Time Evolution of Parallel Shock Accelerated Particle Spectrum Bend-over Energy
Shock acceleration is an important mechanism for accelerating energetic particles. Using test-particle simulations we investigate the time evolution of the accelerated-particle energy spectrum downstream of a parallel shock with magnetic turbulence. From the simulation results we obtain power-law energy spectra with a bend-over energy. It is shown that the bend-over energy increases with time. With the particle mean acceleration time and mean momentum change during each cycle of shock crossing from the diffusive shock acceleration model, a time-dependent differential equation for the maximum energy of particles accelerated at the shock can be approximately obtained; we assume this model can be used to describe the time evolution of the bend-over energy. It is found that the bend-over energy from simulations agrees well with the theoretical model combined with nonlinear diffusion theory.
Speaker: Gang Qin
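For context, the mean acceleration time entering such a model is the classic diffusive-shock-acceleration result (Drury 1983), quoted here as the standard reference expression:

$$t_{\rm acc}(E) \;=\; \frac{3}{u_1-u_2}\left(\frac{\kappa_1(E)}{u_1}+\frac{\kappa_2(E)}{u_2}\right),$$

with $u_{1,2}$ and $\kappa_{1,2}$ the upstream/downstream flow speeds and diffusion coefficients. Equating the shock age with the acceleration time accumulated up to energy $E$ yields a bend-over energy that grows with time, as seen in the simulations.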
• 18
Observations and numerical simulations of impulsive SEP events with Ulysses and ACE observations
We study the latitudinal extent of the impulsive solar energetic particle (SEP) events of 2000 June 10 and 2001 December 26 using energetic electron observations from ACE and Ulysses. We investigate the effects of the particle source and transport on the profiles. We obtain the best-fit parameters for the simulations by comparing them with the observations from the two spacecraft. We show that perpendicular diffusion and adiabatic cooling can significantly affect the propagation of particles. In addition, it is found that the start and peak times of particle injection lie between the onset and peak times of the flare for the two events. Furthermore, we present theoretical models for the peak intensity of the particle source and for the time interval from the onset of the flare to the peak time of the particle source. We show that the theories agree well with the best-fit parameters.
Speaker: L.-L. Lian
• 19
Energy Balance at Interplanetary Shocks: In-situ Measurement of the Fraction in Supra-thermal and Energetic Ions with ACE and Wind
Energetic particles generated by interplanetary shocks can drain a non-negligible fraction of the upstream ram pressure. We have selected a sample of shocks observed in-situ at 1 AU by the ACE and Wind spacecraft from the CfA Interplanetary Shock Database, which provides high-resolution data on solar wind plasma, shock parameters, and the local magnetic field. Time series of the non-Maxwellian (supra-thermal and higher-energy) particle energy spectra were acquired for each event, averaged for one hour before and after the shock time, and integrated over velocity space to ascertain their partial pressure. Using the Rankine-Hugoniot MHD jump conditions, we find that the fraction of the total upstream energy flux density transferred to non-Maxwellian particles can reach about 15-35%. Notably, our sample shows that neither the Alfven Mach number nor the angle between the shock normal and the upstream magnetic field is correlated with the energy drained by the particles. The findings are also insensitive to the offset of the time interval used for the partial pressure estimate. We obtain similar results, although with larger error bars, using shock parameters from the IPShocks database.
Speaker: Liam David (Student, University of Arizona)
• 20
Imbalance acceleration/escape of energetic particles at interplanetary shocks: effect on spectral steepening
Growing multispacecraft networks are broadening the opportunity to measure energy spectra of energetic particles at interplanetary shocks over three decades or more in energy at the same distance (different from 1 AU) from the Sun. Energetic particle spectra at interplanetary shocks often exhibit a non-power-law shape, even within two energy decades. We have introduced a 1D transport equation accounting for particle acceleration and escape, both allowed at all particle energies. The diffusion is contributed by self-generated turbulence close to the shock and by pre-existing turbulence far upstream. The upstream particle intensity profile steepens within one diffusion length from the shock as compared with the diffusive shock acceleration rollover. The spectrum, controlled by macroscopic parameters such as shock compression, speed, far-upstream diffusion coefficient and escape time at the shock, can be reduced to a log-parabola, which has been shown to describe the escape in a probabilistic approach. In the case of a uniform upstream diffusion coefficient, the customarily used power-law/exponential-cutoff solution is retrieved.
Speaker: Federico Fraschetti (CfA | Harvard & Smithsonian / University of Arizona)
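For reference, the log-parabola referred to is the curved power law (standard form)

$$\frac{dN}{dE}\;\propto\;\left(\frac{E}{E_0}\right)^{-\alpha-\beta\ln(E/E_0)},$$

where $\beta>0$ controls the progressive steepening and $\beta\to 0$ recovers a pure power law.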
• Discussion: 25 Blazars, AGN | MM 06
• 21
A Two-zone Blazar Radiation Model for “Orphan” Neutrino Flares
In this work, we investigate the 2014–2015 neutrino flare associated with the blazar TXS 0506+056 and a recently discovered muon neutrino event, IceCube-200107A, in spatial coincidence with the blazar 4FGL J0955.1+3551, under the framework of a two-zone radiation model of blazars where an inner/outer blob close to/far from the supermassive black hole is invoked. An interesting feature that the two sources have in common is that no evidence of GeV gamma-ray activity is found during the neutrino detection period, probably implying a large opacity for GeV gamma rays in the neutrino production region. In our model, continuous particle acceleration/injection takes place in the inner blob at the jet base, where the hot X-ray corona of the supermassive black hole provides target photon fields for efficient neutrino production and strong GeV gamma-ray absorption. We show that this model can self-consistently interpret the neutrino emission from both blazars in a large parameter space. In the meantime, the dissipation processes in the outer blob are responsible for the simultaneous multiwavelength emission of both sources. In agreement with previous studies of TXS 0506+056, an intense MeV emission from the induced electromagnetic cascade in the inner blob is robustly expected to accompany the neutrino flare in our model and could be used to test the model with next-generation MeV gamma-ray detectors in the future.
Speaker: Rui Xue (Zhejiang Normal University)
• 22
Extrapolating FR-0 radio galaxy source properties from the propagation of multi-messenger ultra-high-energy cosmic rays
Recently, it has been shown that relatively low luminosity Fanaroff-Riley type 0 (FR-0) radio galaxies are a good candidate source class for a predominant fraction of cosmic rays (CR) accelerated to ultra-high energies (UHE, E>10^18 eV). FR-0s can potentially provide a significant fraction of the UHECR energy density as they are much more numerous in the local universe (up to a factor of ~5 with z<= 0.05) than more energetic radio galaxies such as FR-1s or FR-2s.
In the present work, UHECR mass composition and energy spectra at the FR-0 sources are estimated by fitting simulation results to the published Pierre Auger Observatory and Telescope Array data. This fitting is done using a simulated isotropic sky distribution extrapolated from the measured FR-0 galaxy properties and propagating CRs in plausible extragalactic magnetic field configurations using the CRPropa3 framework. In addition, we present estimates of the fluxes of secondary photons and neutrinos created in UHECR interactions with cosmic photon backgrounds during CR propagation. With this approach, we aim to investigate the properties of the sources with the help of observational multi-messenger data.
Speaker: Jon Paul Lundquist (University of Nova Gorica)
• 23
High-Energy Neutrinos from NGC 1068
IceCube has observed an excess of neutrino events over expectations from the isotropic background from the direction of NGC 1068. The excess is inconsistent with background expectations at the level of 2.9σ after accounting for statistical trials. Even though the excess is not statistically significant yet, it is interesting to entertain the possibility that it corresponds to a real signal. Assuming a single power-law spectrum, the IceCube Collaboration has reported a best-fit flux ∼ 3 × 10^{−11} (E/TeV)^{−3.2} (TeV cm^2 s)^{−1}, where E is the neutrino energy. Taking account of new physics and astronomy developments, we give a revised high-energy neutrino flux for the Stecker-Done-Salamon-Sommers AGN core model and show that it can accommodate the IceCube observations.
Speaker: Luis Anchordoqui (Lehman College, City University of New York)
• 24
Testing the AGN Radio and Neutrino correlation using the MOJAVE catalog and 10 years of IceCube Data
On 22 September 2017 IceCube reported a high-energy neutrino event which was found to be coincident with a flaring blazar, TXS 0506+056. This first multi-messenger observation hinted at blazars being sources of observed high-energy astrophysical neutrinos and raised a need for extensive correlation studies. Recent work shows that the internal absorption of gamma rays, and their interactions intrinsic to the source and with the extragalactic background, will cause a lack of energetic gamma-ray and neutrino correlation while hinting towards a correlation between neutrinos and lower photon energy observations in the X-ray and radio bands. Studies based on published IceCube alerts and radio observations, report a possible radio-neutrino correlation in both gamma-ray bright and gamma-ray dim active galactic nuclei (AGN). However, they have marginal statistical significance due to limited available data. We present a correlation analysis between 15 GHz radio observations of AGN reported in the MOJAVE XV catalog and 10 years of IceCube detector data and discuss the results derived from a time averaged stacking analysis.
Speaker: Abhishek Desai (University of Wisconsin Madison)
• 25
Fermi-LAT realtime follow-ups of high-energy neutrino alerts
The detection of the flaring gamma-ray blazar TXS 0506+056 in spatial and temporal coincidence with the high-energy neutrino IC-170922A represents a milestone for multi-messenger astronomy. The prompt multi-wavelength coverage from several ground- and space-based facilities of this special event was enabled thanks to the key role of the Fermi-Large Area Telescope (LAT), continuously monitoring the gamma-ray sky. Exceptional variable and transient events, such as bright gamma-ray flares of blazars, are regularly reported to the whole astronomical community to enable prompt multi-wavelength observations of the astrophysical sources. As soon as real-time IceCube high-energy neutrino event alerts are received, the relevant positions are searched, at multiple timescales, for gamma-ray activity from known sources and newly detected emitters positionally consistent with the neutrino localization.
In this contribution, we present an overview of follow-up activities and strategies for the real-time neutrino alerts with the Fermi-LAT, focusing on some interesting observed coincidences with gamma-ray sources. We will also discuss future plans and improvements in the strategies for the identification of gamma-ray counterparts of single high-energy neutrinos.
Speaker: Simone Garrappa (DESY Zeuthen)
• 26
The Astrophysical Multimessenger Observatory Network (AMON) has developed a real-time multi-messenger alert program. The system performs coincidence analyses of datasets from gamma-ray and neutrino detectors, forming the Neutrino-Electromagnetic (NuEM) alert channel. For these analyses, AMON takes advantage of sub-threshold events, i.e., events that by themselves are not significant in the individual detectors. The main purpose of this channel is to search for gamma-ray counterparts of neutrino events. We will describe the different analyses that make up this channel and present a selection of recent results.
Speaker: Dr Hugo Ayala (Pennsylvania State University)
• 27
Searching for VHE gamma-ray emission associated with IceCube neutrino alerts using FACT, H.E.S.S., MAGIC, and VERITAS
The real-time follow-up of high energy events from neutrino observatories is a promising approach to identify their astrophysical origin. So far, it has provided compelling evidence for a neutrino counterpart: the flaring gamma-ray blazar TXS 0506+056 observed in coincidence with the high-energy neutrino IC170922A detected by IceCube. The detection of very-high-energy (VHE, E > 100 GeV) gamma rays from this source supported the association and constrained the modeling of the blazar emission at the time of the IceCube event. The four imaging atmospheric Cherenkov telescope experiments (IACTs) - FACT, H.E.S.S., MAGIC, and VERITAS - operate an active follow-up program of target-of-opportunity observations of neutrino alerts sent by IceCube. This program has two main components: the follow-up of single high-energy neutrino candidate events of potential astrophysical origin, such as IC170922A, and the observation of known gamma-ray sources around which IceCube has identified a cluster of candidate neutrino events. IceCube recently upgraded this second gamma-ray follow-up (GFU) component in collaboration with the IACT groups. We present results from the IACT follow-up program of IceCube neutrino alerts and a description of the upgraded GFU system.
Speaker: Konstancja Satalecka (Z_MAGIC (Experiment MAGIC))
• 28
Electromagnetic and Neutrino Output from Magnetic Reconnection in Poynting Flux Dominated Jets.
Neutrino-emitting blazars may accelerate cosmic-ray (CR) protons at the inner regions of the jet, where most of the magnetic energy is likely to be dissipated. In this picture, the spectrum of neutrinos and gamma-rays that leave the source is shaped by the soft photon fields that the parent hadrons encounter before leaving the source. We build a lepto-hadronic emission model based on particle acceleration by magnetic reconnection. The emission is powered by magnetic dissipation in the jet, in the transition from a magnetically to a kinetically dominated flow. We employ the striped jet model to obtain the jet properties at three characteristic emission regions and derive the associated electromagnetic and neutrino output. We also perform Monte Carlo simulations of the propagation of CR protons as an alternative method for calculating the neutrino flux. We apply this emission model to interpret the 2017 multi-messenger event from the blazar TXS 0506+056 and we also discuss applications of the model in the context of flat-spectrum radio quasars and BL Lac objects.
Speaker: Dr Juan Carlos Rodríguez-Ramírez (Instituto de Astronomia, Geofisica, e Ciencias Atmosfericas - Universidade de Sao Paulo)
• 29
High-energy neutrinos and gamma-rays from the AGN-driven wind in NGC 1068
Various observations are revealing the widespread occurrence of fast and powerful winds in active galactic nuclei (AGN) that are distinct from relativistic jets, likely launched from accretion disks. Such winds can harbor collisionless shocks at different locations that may induce acceleration of protons and electrons and consequent nonthermal emission. We focus on the inner regions of the winds, where interactions of accelerated protons with the nuclear radiation field and/or ambient gas can induce emission of high-energy neutrinos and gamma-rays. In particular, we address the case of NGC 1068, a nearby Seyfert galaxy bearing a powerful wind, which is a known source of GeV gamma rays as well as a tentative source of sub-PeV neutrinos. Tests and further implications of this scenario are discussed.
Speaker: Susumu Inoue (Bunkyo Univ. / RIKEN)
• 30
Multi-wavelength and neutrino emission from blazar PKS 1502+106
In July of 2019, the IceCube experiment detected a high-energy neutrino from the direction of the powerful quasar PKS 1502+106. I discuss the results of multi-wavelength and multi-messenger modeling of this source, using a fully self-consistent one-zone model that includes the contribution of radiation fields external to the jet. Three distinct activity states of the blazar can be identified: one quiescent state and two flaring states with hard and soft gamma-ray spectra. All three states can be described by the same leptohadronic model, which also predicts a substantial neutrino flux. These results are compatible with the detection of a neutrino during the quiescent state, based on event rate statistics. The soft X-ray spectra observed during bright flares strongly suggest a hadronic contribution, which can be interpreted as additional evidence for cosmic-ray acceleration in the source, independently of neutrino observations.
Speaker: Xavier Rodrigues (DESY / Ruhr University Bochum)
• 31
Probing Neutrino Emission from X-ray Blazar Flares observed with Swift-XRT
Blazars are a subclass of active galaxies with jets closely aligned to the observer's line of sight. In addition, they are the most powerful persistent sources across the electromagnetic spectrum in the universe. The detection of a high-energy neutrino from the flaring blazar TXS 0506+056 and the subsequent discovery of a neutrino excess from the same direction have naturally strengthened the hypothesis that blazars are cosmic neutrino sources. The lack, however, of gamma-ray flaring activity during the latter period challenges the standard scenario of correlated gamma-ray and high-energy neutrino emission in blazars. Motivated by a novel theoretical scenario where neutrinos are produced by energetic protons interacting with their own X-ray synchrotron photons, we make neutrino predictions for X-ray flaring blazars. Our sample consists of all blazars observed with the X-ray Telescope (XRT) on board Swift more than 50 times from November 2004 to November 2020. To statistically identify an X-ray flaring state we apply the Bayesian Block algorithm to the 1 keV XRT light curves of frequently observed blazars. Using X-ray spectral information during the flaring states, we compute for each flare the 1-10 keV energy fluence, which is a good proxy for the all-flavor neutrino fluence in the adopted theoretical scenario. We present the expected number of muon neutrino events with IceCube for each source as well as the stacked signal from all X-ray flares of the selected sample. We discuss the implications of our results for IceCube and IceCube Gen-2.
Speaker: Mr Stamatios Ilias Stathopoulos (National and Kapodistrian University of Athens)
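The segmentation step can be reproduced with the Bayesian Blocks implementation shipped in astropy. A minimal sketch on synthetic data (the talk's actual flare-selection criteria are not reproduced here):

```python
import numpy as np
from astropy.stats import bayesian_blocks

# Synthetic 1 keV light curve: quiescent level plus one flare
rng = np.random.default_rng(42)
t = np.sort(rng.uniform(0, 100, 200))             # observation times (days)
flux = np.where((t > 40) & (t < 50), 5.0, 1.0)    # flare between day 40 and 50
flux += rng.normal(0, 0.3, t.size)                # measurement noise
sigma = 0.3 * np.ones_like(flux)

# Optimal piecewise-constant segmentation of point measurements
edges = bayesian_blocks(t, flux, sigma, fitness='measures')
print("Block edges (days):", np.round(edges, 1))
```

Blocks whose mean flux exceeds the quiescent level then mark the flaring states whose 1-10 keV fluence is integrated.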
• 32
Radio astronomy locates the neutrino origin in bright blazars
High-energy astrophysical neutrinos have been observed by multiple telescopes in the last decade, but their sources still remain unknown. We address the problem of locating the sources of astrophysical neutrinos in a statistical manner. We show that blazars positionally associated with IceCube neutrino detections have stronger parsec-scale radio cores than the rest of the sample. The probability of a chance coincidence is only 4×10^-5, corresponding to a significance of 4.1σ. We explicitly list five strong radio blazars as highly probable sources of neutrinos above 200 TeV: 3C 279, NRAO 530, TXS 1308+326, PKS 1741-038, and PKS 2145+067. It turns out that there are at least 70 more radio-bright blazars that emit neutrinos of lower energies, starting from TeVs. Moreover, we utilize continuous RATAN-600 monitoring of VLBI-selected blazars to find that radio flares at frequencies above 10 GHz coincide with neutrino arrival dates. The most pronounced example of such behavior is PKS 1502+106, which experienced a major flare in 2019. We conclude that the entire IceCube astrophysical neutrino flux derived from muon-track analyses may be explained by blazars, that is, AGNs with bright Doppler-boosted jets. High-energy neutrinos can be produced in photohadronic interactions within parsec-scale relativistic jets. Radio-bright blazars associated with neutrino detections have very diverse gamma-ray properties, which suggests that gamma-rays and neutrinos may be produced in different regions of blazars and are not directly related. A narrow jet viewing angle is, however, required to detect either of them.
Speaker: Alexander Plavin (Astro Space Center of Lebedev Physical Institute)
• 33
TELAMON: Monitoring of AGN with the Effelsberg 100-m Telescope in the Context of Astroparticle Physics
We introduce the TELAMON program, which uses the Effelsberg 100-m telescope to monitor the radio spectra of active galactic nuclei (AGN) under scrutiny in astroparticle physics, namely TeV blazars and neutrino-associated AGN. Thanks to its large dish aperture and sensitive instrumentation, the Effelsberg telescope can yield superior radio data over other programs in the low flux-density regime, down to several tens of mJy. This is a particular strength in the case of TeV-emitting blazars, which are often comparatively faint radio sources of the high-synchrotron-peaked type. We perform high-cadence high-frequency observations every 2-4 weeks at multiple frequencies up to 44 GHz. This setup is well suited to trace dynamical processes in the compact parsec-scale jets of blazars related to high-energy flares or neutrino detections. Our sample currently covers about 40 sources and puts its focus on the high-peaked BL Lac objects and extreme blazars most frequently observed by TeV telescopes. Here, we introduce the TELAMON program characteristics and present first results obtained since fall 2020.
• 34
Testing high energy neutrino emission from the Fermi Gamma-ray Space Telescope Large Area Telescope (4LAC) sources.
The detection of the high-energy neutrino IC-170922A in spatial (within the error region) and temporal correlation with flaring activity of the blazar TXS 0506+056 allowed these objects to be considered as progenitor sources of neutrinos. Besides this, no further detections of this kind have been reported. Some other neutrinos detected by IceCube show a spatial correlation (within the error region) with other Fermi-LAT detected sources. However, these objects did not show flaring activity like TXS 0506+056. Assuming a lepto-hadronic scenario through pγ interactions, this work describes the SED of some objects from the fourth catalog of active galactic nuclei (AGNs) detected by the Fermi Gamma-ray Space Telescope Large Area Telescope (4LAC), which are in spatial correlation with neutrinos detected by IceCube. Additionally, we estimate the corresponding neutrino flux counterpart from these sources.
Speaker: Mr Antonio Galván (Institute of Astronomy, UNAM.)
• 35
The Neutrino Contribution of Gamma-Ray Flares from Fermi Bright Blazars
High-energy neutrinos are expected to be produced during gamma-ray flares of blazars through the interaction of high-energy cosmic rays in the jet with photons. As a matter of fact, a high-energy neutrino event, IC-170922A, was detected at the time of a gamma-ray flare from blazar TXS 0506+056 at the level of 3 sigma significance. In this work, we present a statistical study of blazar gamma-ray flares aiming to constrain their contribution to the blazar neutrino output. We selected 145 gamma-ray bright blazars listed in the Fermi Large Area Telescope (LAT) monitored list and constructed their weekly binned light curves. Using a Bayesian Blocks algorithm to the light curves, we determined the fraction of time spent in the flaring state (flare duty cycle) and the fraction of energy released during each flare. Furthermore, we estimated the neutrino energy flux of each gamma-ray flare by using the general scaling relation $L_\nu \propto (L_\gamma)^\gamma$, $\gamma=1.5-2$, normalized to the quiescent X-ray flux of each blazar. Comparison of the estimated neutrino energy flux with the declination-dependent IceCube sensitivity enables us to constrain the standard neutrino emission models of gamma-ray flares. We also provide the upper-limit contribution of flares of gamma-ray bright blazars to the isotropic diffuse neutrino flux.
Speaker: Kenji Yoshida (Shibaura Institute of Technology)
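A back-of-the-envelope version of the adopted scaling, tying the flare neutrino energy flux to the gamma-ray enhancement and normalizing to the quiescent X-ray flux (an illustrative rendering of the stated relation, not the talk's exact prescription):

```python
def neutrino_energy_flux(F_gamma_flare, F_gamma_quiet, F_x_quiet, gamma=1.5):
    """Scale the quiescent X-ray flux by the gamma-ray flare enhancement,
    following L_nu ∝ L_gamma**gamma with gamma in [1.5, 2]."""
    return F_x_quiet * (F_gamma_flare / F_gamma_quiet) ** gamma

# Example: a flare ten times above the quiescent gamma-ray level
print(neutrino_energy_flux(1e-10, 1e-11, 5e-12, gamma=1.5))  # erg cm^-2 s^-1
```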
• Discussion: 31 Fundamental Physics with Neutrinos | NU 05
• 36
HE Neutrinos beyond Standard Model: steriles and secret interactions
Ultra-high-energy cosmogenic neutrinos may represent a unique opportunity to unveil possible new-physics interactions in the neutrino sector. In this regard, we have investigated the effects on high- and ultra-high-energy active neutrino fluxes due to active-sterile secret interactions mediated by a new pseudoscalar particle. These interactions become relevant at very different energy scales depending on the masses of the scalar mediator and of the sterile neutrino. As a consequence, we have found interesting phenomenological implications for the two benchmark fluxes we consider, namely an astrophysical power-law flux in the range below 100 PeV, and a cosmogenic flux in the ultra-high-energy range.
Speaker: Dr Ninetta Saviano (INFN)
• 37
Measuring neutrino cross-section with IceCube at intermediate energies (~100 GeV to a few TeV)
Whether one studies neutrinos for their own sake or as messenger particles, neutrino cross-sections are critically important for numerous analyses. On the low-energy side, measurements from accelerator experiments reach up to a few 100s of GeV. On the high-energy side, neutrino-Earth absorption measurements extend down to a few TeV. The intermediate energy range has yet to be measured experimentally. This work is made possible by the linear relationship between the event rate and the cross-section, and will utilize IceCube muon neutrino data collected between 2010 and 2018. An advanced energy reconstruction, tailored to the unique properties of the energy range and using the full description of photon propagation in ice, is applied to an event sample of neutrino-induced through-going muons to perform a forward-folding analysis.
Speaker: Sarah Nowicki (Michigan State University)
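The linearity being exploited is that of the expected event count in the cross section, schematically (a standard detector-rate expression, for orientation):

$$N_{\rm ev} \;=\; T \int dE\,\Phi_\nu(E)\,A_{\rm eff}(E), \qquad A_{\rm eff}(E)\,\propto\,\sigma_{\nu N}(E),$$

so that, at energies where Earth absorption is still negligible, rescaling the cross section rescales the event rate by the same factor; this is what the forward-folding fit measures.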
• 38
Measuring the Neutrino Cross Section Using 8 years of Upgoing Muon Neutrinos
The IceCube neutrino observatory detects neutrinos at energies orders of magnitude higher than those accessible to current neutrino accelerators. Above 40 TeV, neutrinos traveling through the Earth will be absorbed as they interact via charged-current interactions with nuclei, creating a deficit of Earth-crossing neutrinos detected at IceCube. In this analysis we use the Earth as a target to measure the neutrino cross section for muon neutrinos passing through IceCube. The previously published results of this analysis showed the cross section to be consistent with Standard Model predictions for 1 year of IceCube data. In this analysis we extend the studies to 8 years of data, increasing the statistics by an order of magnitude and improving the treatment of systematic uncertainties. We present the updated cross section measurement in three decade-wide bins, and compare to previous IceCube cross section results.
Speaker: Sally Robertson (Lawrence Berkeley National Lab)
• 39
Reaching the EeV frontier in neutrino-nucleon cross sections in upcoming neutrino telescopes
Measuring neutrino interactions with matter is arduous but rewarding. To date, experiments have measured the neutrino-nucleon cross section in the MeV-PeV range, using terrestrial and astrophysical neutrinos. We endeavor to push that measurement to the EeV scale, in order to test competing expectations of the deep structure of nucleons and possibly reveal new neutrino interactions. Cosmogenic neutrinos, long-sought but still undiscovered, provide the only feasible way forward. However, because their flux is low, they have evaded detection so far. Fortunately, upcoming in-ice radio-detection neutrino telescopes, like RNO-G and the radio component of IceCube-Gen2, have a real chance of discovering them in the next 10-20 years. In preparation, we perform the first detailed study of their sensitivity to the deep-inelastic-scattering neutrino-nucleon cross section at EeV energies, extracted from the attenuation of the cosmogenic neutrino flux as it traverses the Earth across different directions. We use up-to-date predictions and tools at every step: in the flux of cosmogenic neutrinos---predicted using recent ultra-high-energy cosmic-ray measurements---in their propagation inside the Earth---computed using leading and sub-leading neutrino interactions---and in their detection in radio-based neutrino telescopes---based on advanced simulated detector responses.
Speaker: Victor Valera (Niels Bohr Institute)
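The observable behind such a measurement is the direction-dependent transmission probability through the Earth. A minimal sketch with a constant mock cross section and hand-picked column depths (illustrative only; a real analysis integrates the density profile along each chord and includes regeneration effects):

```python
import numpy as np

N_A = 6.022e23  # Avogadro's number, nucleons per gram

def transmission(sigma_cm2, column_depth_g_cm2):
    """Survival probability P = exp(-sigma * N_A * X), X in g/cm^2."""
    return np.exp(-sigma_cm2 * N_A * column_depth_g_cm2)

# A ~1e-32 cm^2 cross section (EeV scale) through increasing column depths:
# Earth-skimming paths stay transparent, diameter-crossing ones are opaque.
for X in (1e7, 1e9, 1e10):
    print(f"X = {X:.0e} g/cm^2 -> P = {transmission(1e-32, X):.3e}")
```

Comparing the measured angular distribution of events with this attenuation pattern is what turns the observed flux into a cross-section measurement.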
• 40
Rigorous predictions for prompt neutrino fluxes in view of VLVnT upgrades
The existence of a flux of prompt atmospheric neutrinos from the decay of heavy hadrons resulting from the interaction of cosmic rays with the atmospheric nuclei is predicted by theory. Very Large Volume Neutrino Telescopes, like Icecube, KM3NeT and Baikal-GVD, should be sensitive to this neutrino component, that represents a background for the neutrinos from far astrophysical sources. However, no clear experimental evidence of prompt neutrino fluxes has been found, at least so far. In particular, the prompt neutrino component well fits to zero even in the most recent analysis of High Energy Starting Events by the IceCube collaboration, published last autumn. On the other hand, the analysis of through-going muon tracks, more sensitive to prompt neutrinos than the previous one, has established an upper limit on prompt neutrino fluxes.
Our collaboration has been active in providing accurate predictions for prompt neutrino fluxes in the last few years, on the basis of rigorous QCD calculations, and in assessing many of the uncertainties related to these predictions. We discuss our most recent results and their uncertainties, which we believe constitute the most accurate and comprehensive prediction of prompt neutrino fluxes available at present, and show how they challenge the present experimental limits. We are confident that increased experimental capabilities and statistics, as possible through e.g. the IceCube-Gen2 upgrade, will help shed further light on the prompt neutrino question.
Speaker: Maria Vittoria Garzelli (UNI/TH (Uni Hamburg, Institut fuer Theoretische Physik))
• 41
Studying neutrinos at the LHC-FASER ~ its impact to the cosmic-ray physics
Studies of high energy proton interactions have been basic inputs for understanding the cosmic-ray spectra observed on Earth. Yet, the experimental knowledge from controlled beams has been limited: uncertainties on forward hadron production are very large due to the lack of experimental data. The FASER experiment is proposed to measure particles, such as neutrinos and hypothetical dark-sector particles, at the forward location of the 14 TeV proton-proton collisions at the LHC. As this corresponds to 100-PeV proton interactions in fixed-target mode, a precise measurement by FASER would provide information relevant for PeV-scale cosmic rays. By studying the three neutrino flavors with the dedicated neutrino detector (FASERnu), FASER will lead to a quantitative understanding of prompt neutrinos, which are an important background for astrophysical neutrino observations by neutrino telescopes such as IceCube. In particular, the electron and tau neutrinos have strong links with charmed hadron production, and the FASER measurements may also shed light on the unresolved muon excess at high energies. FASER is going to start taking data in 2022. We expect about 8000 $\nu_\mu$, 1300 $\nu_e$ and 20 $\nu_\tau$ CC interactions at the TeV energy scale during Run 3 of the LHC operation (2022-2024) with a 1.1-ton emulsion-based neutrino detector. We report here the overview and prospects of the FASER experiment in relation to cosmic-ray physics, together with the first LHC neutrino candidates caught in the pilot run held in 2018.
Speaker: Akitaka Ariga (Chiba University)
• 42
The Future of High-Energy Astrophysical Neutrino Flavor Measurements
The next generation of neutrino telescopes, including Baikal-GVD, KM3NeT, P-ONE, TAMBO, and IceCube-Gen2, will be able to determine the flavor of high-energy astrophysical neutrinos with 10% uncertainties. With the aid of future neutrino oscillation experiments --- in particular JUNO, DUNE, and Hyper-Kamiokande --- the regions of flavor composition at Earth that are allowed by neutrino oscillations will shrink by a factor of ten between 2020 and 2040. We critically examine the ability of future experiments and show how these improvements will help us pin down the source of high-energy astrophysical neutrinos and a sub-dominant neutrino production mechanism with and without unitarity assumed. As an illustration of beyond-the-Standard-Model physics, we also show that the future neutrino measurements will constrain the decay rate of heavy neutrinos to be below $2\times 10^{-5}\,(m/\mathrm{eV})~\mathrm{s}^{-1}$, assuming they decay into invisible particles.
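For readers unfamiliar with how source flavor ratios map onto flavor ratios at Earth, the sketch below applies the standard oscillation-averaged conversion $P_{\alpha\beta}=\sum_i |U_{\alpha i}|^2 |U_{\beta i}|^2$; the mixing angles are illustrative best-fit-like values and $\delta_{CP}=0$ is assumed, so the numbers are not those used in the talk.

```python
# Minimal sketch: oscillation-averaged flavor conversion between source and
# Earth.  Mixing angles are illustrative values, delta_CP = 0 for simplicity.
import numpy as np

th12, th23, th13 = np.radians([33.4, 49.0, 8.6])
s12, c12 = np.sin(th12), np.cos(th12)
s23, c23 = np.sin(th23), np.cos(th23)
s13, c13 = np.sin(th13), np.cos(th13)

# PMNS matrix (standard parameterization, no CP phase)
U = np.array([
    [ c12*c13,                s12*c13,                s13],
    [-s12*c23 - c12*s23*s13,  c12*c23 - s12*s23*s13,  s23*c13],
    [ s12*s23 - c12*c23*s13, -c12*s23 - s12*c23*s13,  c23*c13],
])

P = np.abs(U)**2 @ (np.abs(U)**2).T   # averaged oscillation probabilities

source = np.array([1/3, 2/3, 0.0])    # pion-decay composition (e : mu : tau)
earth = source @ P
print("flavor ratio at Earth:", np.round(earth, 3))  # close to (1 : 1 : 1)/3
```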
Speaker: Ningqiang Song (Queen's University and Perimeter Institute)
• 43
IceCube constraints on Violation of Equivalence Principle
Among the information provided by high energy neutrinos, a promising possibility is to analyze the effects of a Violation of Equivalence Principle (VEP) on neutrino oscillations. We analyze the IceCube data on atmospheric neutrino fluxes under the assumption of a VEP and obtain updated constraints on the parameter space with the benchmark choice that neutrinos with different masses couple with different strengths to the gravitational field. In this case we find that the VEP parameters times the local gravitational potential at Earth can be constrained at the level of $10^{-27}$. We show that the constraints from atmospheric neutrinos strongly depend on the assumption that the neutrino eigenstates interacting diagonally with the gravitational field coincide with the mass eigenstates, which is not a priori justified: this is particularly clear in the case that the basis of diagonal gravitational interaction coincide with the flavor basis, which cannot be constrained by the observation of atmospheric neutrinos. Finally, we quantitatively study the effect of a VEP on the flavor composition of the astrophysical neutrinos, stressing again the interplay with the basis in which the VEP is diagonal: we find that for some choices of such basis the flavor ratio measured by IceCube can significantly change.
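As a rough picture of why high-energy atmospheric neutrinos provide the lever arm for such a constraint, consider a two-flavor toy model in which the VEP adds a phase that grows linearly with energy; the choice of basis (VEP diagonal in the mass basis, sharing the atmospheric mixing angle) and the benchmark $\phi\,\Delta\gamma = 10^{-27}$ are assumptions of this sketch, not the analysis itself.

```python
# Two-flavor toy model of a VEP (illustration only): the standard vacuum
# phase 1.27 dm^2 L / E is joined by a VEP phase 2 E (phi*dgamma) L that
# grows with energy instead of falling.
import numpy as np

def p_survival(E_gev, L_km, dm2_ev2=2.5e-3, s2_2th=0.99, phi_dgamma=0.0):
    """nu_mu survival; assumes the VEP-diagonal basis coincides with the
    mass basis and shares the atmospheric mixing (strong simplification)."""
    phase_vac = 1.27 * dm2_ev2 * L_km / E_gev
    # 2 E (phi*dgamma) L in natural units: 2 x (1 GeV) x (1 km)/(hbar c) ~ 1.01e19
    phase_vep = 1.01e19 * phi_dgamma * E_gev * L_km
    return 1.0 - s2_2th * np.sin(phase_vac + phase_vep) ** 2

# the VEP term becomes visible only at high energies and long baselines
for E in (10.0, 1e2, 1e3, 1e4):  # GeV, Earth-diameter baseline
    print(f"E = {E:8.0f} GeV -> P = {p_survival(E, L_km=1.27e4, phi_dgamma=1e-27):.4f}")
```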
Speaker: Damiano Francesco Giuseppe Fiorillo (University of Naples "Federico II")
• 44
Scalar Non Standard Interactions at long baseline experiments
The discovery of neutrino oscillation confirms that neutrinos have mass and that the Standard Model (SM) of particle physics is not complete. It needs an extension in order to accommodate the masses and mixing of neutrinos, which essentially leads to beyond-SM (BSM) physics. The unknown couplings involving neutrinos, the so-called Non-Standard Interactions (NSIs) [1], may appear as ’new physics’ in different neutrino experiments. Neutrino NSI can have a sizable impact on neutrino oscillation and can affect the measurements of the mixing parameters in various neutrino experiments. Recent work on scalar NSI [2] has shown great potential for probing it further. Unlike vector NSI, scalar NSI appears as a correction to the neutrino mass matrix rather than acting as a matter potential. This may lead to significantly different phenomenological consequences in different neutrino experiments. Moreover, as scalar NSI affects the mass matrix, it also offers the possibility of probing different neutrino mass models.
In this work, we explored the effect of scalar NSI in different long-baseline experiments (DUNE, T2HK, etc.). We point out that scalar NSI can considerably affect neutrino oscillation in long-baseline (LBL) experiments and can complicate the measurement of the CP phase. Since it appears as a correction to the neutrino mass matrix, its effect is energy independent, unlike that of vector NSI. We also studied the sensitivity of different LBL experiments to the effects of scalar NSI, and we explore the possibility of probing various neutrino mass models further.
References:
[1] O.G.Miranda and H.Nunokawa, New Journal of Physics, 2015, 17, 095002.
[2] S.F. Ge and S.J. Parke, Phys. Rev. Lett., 2019, 122, 211801.
Speaker: Abinash Medhi (Tezpur University, Assam, India)
• 45
Search for Magnetic Monopoles with ten years of ANTARES data
The present study is an updated search for magnetic monopoles using data taken with the ANTARES neutrino telescope over a period of 10 years (January 2008 to December 2017). In accordance with some grand unification theories, magnetic monopoles could have been created during the phase of symmetry breaking in the early Universe, and accelerated by galactic magnetic fields. As a consequence of their high energy, they could cross the Earth and emit a significant signal in a Cherenkov-based telescope like ANTARES, for appropriate mass and velocity ranges. This analysis uses a run-by-run simulation strategy, as well as a new simulation of magnetic monopoles taking into account the Kasama, Yang and Goldhaber cross section. The results obtained for relativistic magnetic monopoles with velocity v ≥ 0.57c will be presented.
• 46
Search for nuclearites with the KM3NeT detector
Strange quark matter (SQM) is a hypothetical type of matter composed of almost equal quantities of up, down and strange quarks. Massive SQM particles are called nuclearites. Nuclearites with masses greater than $10^{13}$ GeV and velocities of about 250 km/s (typical galactic velocities) could reach the Earth and interact with atoms and molecules of sea water within the sensitive volume of the deep-sea neutrino telescopes. The SQM particles can be detected with the KM3NeT telescope (whose first lines are already installed in the Mediterranean Sea and taking data) through the visible blackbody radiation generated along their path inside or near the instrumented area. In this work the results of a study using Monte Carlo simulations of down-going nuclearites are discussed. Preliminary sensitivities of the KM3NeT experiment for a flux of nuclearites are also presented.
Speaker: Ms Alice Paun (Institute of Space Science (ISS), Atomistilor 409, Magurele, RO-077125 Romania)
• 47
Search for STaus in IceCube
The tau lepton’s supersymmetric partner, the stau, appears in some models as the next-to-lightest supersymmetric particle, which also makes it long-lived. In this scenario, its signature is a long, dim, minimally ionizing track when traveling through the IceCube detector. Independent of their primary energy, stau tracks appear like low-energy muons in the detector. A potential signal of staus would thus be an excess over the muon tracks induced by atmospheric muon neutrinos. Our analysis focuses on the region around the horizon, where the ratio between stau signal and atmospheric background is largest. We will present the sensitivity to constrain the stau mass using IceCube and demonstrate this analysis’s potential with future improvements.
Speaker: Jan-Henrik Schmidt-Dencker
• 48
Sensitivity of the KM3NeT/ORCA detector to the neutrino mass ordering and beyond
The KM3NeT collaboration is currently building a new generation of large-volume water-Cherenkov neutrino telescopes in the Mediterranean sea. Two detectors, ARCA and ORCA, are under construction. They feature different neutrino energy thresholds: TeV range for ARCA and GeV range for ORCA. The main research goal of ORCA is the measurement of the neutrino mass ordering and atmospheric neutrino oscillation parameters, while the detector is also sensitive to a wide variety of other physics topics, including non-standard interactions, sterile neutrinos and Earth tomography, as well as low-energy neutrino astronomy.
This contribution will present an overview of the updated ORCA sensitivity projections for its main science objectives, including - but not limited to - the measurement of the neutrino mass ordering and oscillation parameters. Future perspectives for ORCA to serve as the far detector for a long-baseline neutrino experiment with a neutrino beam from the U70 accelerator complex at Protvino in Russia will also be discussed.
Speaker: Mathieu Perrin-Terrin (Aix Marseille Univ, CNRS/IN2P3, CPPM, Marseille, France)
• 49
Search for exotic neutrino interactions by XMASS-I detector
XMASS is a multi-purpose experiment using liquid xenon, located at the Kamioka Observatory in Japan. The detector consists of a single-phase liquid-xenon volume with an 832 kg active mass, and features a low energy threshold, low backgrounds and a large target mass. XMASS makes it possible to test topics in low-energy neutrino physics that could give hints of models beyond the SM. We have searched for exotic neutrino-electron interactions that could be produced by a neutrino millicharge, by a neutrino magnetic moment, or by dark photons, using solar neutrinos in XMASS. We analyzed the data taken between November 2013 and March 2016, a 711-day dataset. No significant signal was observed above the predicted detector backgrounds. We obtained an upper limit on the neutrino millicharge of $5.4\times10^{-11}e$ for all flavors of neutrino. We also set limits on individual flavors: $7.3 \times 10^{-12} e$ for $\nu_e$, $1.1 \times 10^{-11} e$ for $\nu_{\mu}$, and $1.1 \times 10^{-11} e$ for $\nu_{\tau}$. The limits for $\nu_{\mu}$ and $\nu_{\tau}$ are the best direct experimental limits. We also obtain an upper limit on the neutrino magnetic moment of 1.8$\times$10$^{-10}\mu_{B}$. In addition, we obtain upper limits on the coupling constant of dark photons in the $U(1)_{B-L}$ model of 1.3$\times$10$^{-6}$ if the dark photon mass is 1$\times 10^{-3}$ MeV$/c^{2}$, and 8.8$\times$10$^{-5}$ if it is 10 MeV$/c^{2}$. In particular, we almost exclude the possibility of explaining the muon $g-2$ anomaly with dark photons.
Speaker: Hiroshi Ogawa (CST Nihon University, Japan)
• Discussion: 51 The Census of Gamma-Ray Sources | GAD-GAI 04
#### 04
• 50
Exploring the population of Galactic very-high-energy gamma-ray sources
At very high energies (VHE), the emission of gamma rays is dominated by discrete sources. Due to the limited resolution and sensitivity of current-generation instruments, only a small fraction of the total Galactic population of VHE gamma-ray sources has been significantly detected. The larger part of the population can be expected to contribute as a diffuse signal alongside emission originating from propagating cosmic rays. Without quantifying the source population, it is not possible to disentangle these two components. Based on the H.E.S.S. Galactic Plane Scan, a numerical approach has been taken to develop a model of the population of Galactic VHE gamma-ray sources, which is shown to accurately account for the observational bias. We present estimates of the absolute number of sources in the Galactic Plane and their contribution to the total VHE gamma-ray emission for five different spatial source distributions. Prospects for CTA and its ability to constrain the model are discussed. Finally, first results of an extension of our modelling approach using machine learning to extract more information from the available data set are presented.
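A toy version of the population-synthesis-with-detection-bias idea described above is sketched below; the spatial model, luminosity function and sensitivity are placeholder assumptions for illustration, not the model developed from the H.E.S.S. Galactic Plane Scan.

```python
# Schematic population-synthesis step (assumed numbers, not the H.E.S.S.
# model): draw sources in a toy Galactic disc, convert luminosity and
# distance to flux, and keep those above a sensitivity to mimic the bias.
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# toy spatial model: exponential disc in galactocentric radius [kpc]
r = rng.exponential(scale=3.0, size=N)
phi = rng.uniform(0, 2 * np.pi, size=N)
x, y = r * np.cos(phi), r * np.sin(phi)
d = np.hypot(x - 8.2, y)                 # distance from the Sun at R0 = 8.2 kpc

# toy luminosity function: power law dN/dL ~ L^-alpha between Lmin and Lmax
Lmin, Lmax, alpha = 1e32, 1e36, 1.8      # erg/s and slope (assumed)
u = rng.uniform(size=N)
L = (Lmin**(1 - alpha) + u * (Lmax**(1 - alpha) - Lmin**(1 - alpha)))**(1 / (1 - alpha))

flux = L / (4 * np.pi * (d * 3.086e21)**2)   # erg/cm^2/s (kpc -> cm)
detected = flux > 1e-12                      # survey sensitivity (assumed)
print(f"resolved fraction of sources: {detected.mean():.3%}")
print(f"resolved fraction of total flux: {flux[detected].sum() / flux.sum():.1%}")
```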
Speaker: Constantin Steppa (University of Potsdam)
• 51
Galactic Science with the Southern Wide-field Gamma-ray Observatory
The Southern Wide-field Gamma-ray Observatory is a proposed ground-based gamma-ray detector that will be located in the Southern Hemisphere and is currently in its design phase. In this contribution, we will outline the prospects for Galactic science with this Observatory. Particular focus will be given to the detectability of extended sources, such as gamma-ray halos around pulsars; optimisation of the angular resolution to mitigate source confusion between known TeV sources; and studies of the energy resolution and sensitivity required to study the spectral features of PeVatrons at the highest energies. Such a facility will ideally complement contemporaneous observatories in studies of high energy astrophysical processes in our Galaxy.
• 52
Source classification at GeV energies using neural networks with time variability and locations
The Fermi LAT point source catalog contains 10 years of observational data between 50 MeV and 1 TeV. It contains 5064 point sources, mostly consisting of BLLs (1131) and FSRQs (694), while pulsars (239) are the most numerous Galactic population. However, a quarter of the detected sources remain unclassified and might hide new source classes. The classification is difficult due to bright, diffuse emission from our own Galaxy. Recently, machine learning methods were developed for the first time to localize and classify point sources in the catalog, with performance comparable to that of traditional techniques. Synthetic yearly catalogs are simulated to produce 10 yearly $\gamma$-ray images of the sources, from 2008 to 2018, in 6 energy bins. The yearly images provide the network with time-variability information on the point sources. These time-variable images are fed to the new neural network together with the sky location of the point source.
The network then separates the sources into distinct classes. The addition of time dependency and location data should increase the number of source classes the network can distinguish from 3 to 5 (BL Lacs, FSRQs, PSRs, PWN+SNR+SPPs, and fakes), as well as improve the classification accuracy.
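A bare-bones stand-in for this classification idea (entirely synthetic data, not the network, features or labels from the talk) might look as follows: flatten the per-year, per-energy-bin fluxes into a feature vector, append the sky location, and train a small neural network on labelled sources.

```python
# Minimal sketch of time-variability + location classification on fake data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_src, n_years, n_ebins = 2000, 10, 6
classes = ["BLL", "FSRQ", "PSR", "PWN+SNR+SPP", "Fake"]

# synthetic light curves: FSRQ-like sources get extra year-to-year variability
y = rng.integers(len(classes), size=n_src)
sigma = 0.3 + 0.4 * (y == 1)[:, None, None]
lightcurves = rng.lognormal(0.0, sigma, size=(n_src, n_years, n_ebins))

# synthetic locations: Galactic classes (indices >= 2) cluster near the plane
glon = rng.uniform(-180, 180, n_src)
glat = rng.uniform(-90, 90, n_src) * np.where(y >= 2, 0.1, 1.0)

X = np.hstack([lightcurves.reshape(n_src, -1), glon[:, None], glat[:, None]])
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))
```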
Speaker: Chris van den Oetelaar (Radboud university)
• 53
Survey of the Galactic Plane with the Cherenkov Telescope Array
Observations with the current generation of very-high-energy gamma-ray telescopes have revealed an astonishing variety of particle accelerators in the Milky Way, such as supernova remnants, pulsar wind nebulae, and binary systems. The upcoming Cherenkov Telescope Array (CTA) will be the first instrument to enable a survey of the entire Galactic plane in the energy range from a few tens of GeV to 300 TeV with unprecedented sensitivity and improved angular resolution. In this contribution we will revisit the scientific motivations for the survey, proposed as a Key Science Project for CTA. We will highlight recent progress, including improved physically-motivated models for Galactic source populations and interstellar emission, advances in the optimization of the survey strategy, and the development of pipelines to derive source catalogues tested on simulated data. Based on this, we will provide a new forecast of the properties of the sources that CTA will detect and discuss the expected scientific return from the study of gamma-ray source populations.
Speaker: Quentin Remy
• 54
The First Catalog of Extragalactic Fermi-LAT Transient Sources
The first Fermi Large Area Telescope (LAT) catalog of gamma-ray transient sources (1FLT) comprises sources that were detected on monthly time intervals during the first decade of Fermi-LAT operations. The monthly time scale allows us to identify transient and variable sources that may not have been reported in the Fermi-LAT general catalogs.
The analysis was performed for photon energies between 0.1 and 300 GeV using the Pass-8 event-level selection. We considered only photons with |b| > 10° to exclude the Galactic plane and thus avoid confusion with low-latitude diffuse emission. We analyzed 120 months, and also applied a 15-day shift to each month in order not to lose any flare at the edges of the time bins. The monthly datasets were analyzed using a wavelet-based source detection algorithm that provided the candidate new transient sources. The transient candidates were then analyzed using the standard Fermi-LAT maximum likelihood analysis method. The resulting catalog lists 142 different sources detected with a statistical significance above 4 sigma in at least one monthly bin. About 70% are associated with spectrally soft AGN-type counterparts, principally blazar candidates of uncertain type and flat-spectrum radio quasars, while about 30% of 1FLT sources remain unassociated. This is similar to the fraction of unassociated sources found in the Fermi-LAT general catalogs. The median gamma-ray spectral index of the 1FLT-AGN sources is softer than the median index reported in the latest Fermi-LAT AGN general catalog (4LAC). Sources associated with a 4FGL-DR2 object are not reported in the 1FLT catalog, while 6 sources that are also listed in previous general catalogs (1FGL-3FGL) are included.
Speaker: Dr Isabella Mereu (INFN Perugia)
• 55
The TeV gamma-ray source population of the Milky Way
In this work we perform a population study of the H.E.S.S. Galactic Plane Survey (HGPS) catalogue. Namely, we analyze the flux, latitude and longitude distributions of the gamma-ray sources detected by H.E.S.S., with the goal of inferring the main properties of the Galactic TeV source population.
We show that the total Milky Way luminosity in the 1-100 TeV energy range is relatively well constrained by H.E.S.S. data, obtaining $L_{\rm MW} = 1.7^{+0.5}_{-0.4}\times 10^{37} {\rm erg}\,{\rm s}^{-1}$, and that the total Galactic flux in the H.E.S.S. observational window is $\Phi_{\rm tot} = 3.8^{+1.0}_{-1.0}\times 10^{-10} {\rm cm}^{-2}\, {\rm s}^{-1}$.
The above results allow us to estimate the flux produced by sources not resolved by H.E.S.S. These sources, which are too faint (or too extended) to be detected by H.E.S.S., contribute to the large-scale diffuse signal observed in the TeV range. We show that the unresolved source contribution is not negligible (about $60\%$ of the resolved signal measured by H.E.S.S.) and is potentially responsible for a large fraction of the large-scale diffuse gamma-ray signal observed by H.E.S.S. and other experiments in the TeV domain.
Finally, under the hypothesis that the majority of bright sources detected by H.E.S.S. are powered by pulsar activity, e.g. Pulsar Wind Nebulae or TeV halos, we estimate the main properties of the pulsar population: we obtain constraints on the fading time $\tau$, the initial period $P_{0}$ and the magnetic field $B$.
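The flavor of the unresolved-flux estimate above can be mimicked with a one-line integral over an assumed source-count distribution; the slope, flux range and threshold below are placeholders for illustration, not the HGPS values.

```python
# Back-of-the-envelope unresolved-flux fraction for a power-law logN-logS:
# dN/dS ~ S^-beta, so the flux below threshold is the integral of S * dN/dS.
from scipy.integrate import quad

beta = 1.8                      # logN-logS slope (assumption)
S_min, S_max = 1e-3, 1.0        # flux range in threshold units (assumption)
S_thr = 0.1                     # survey detection threshold, same units

def flux_density(S):
    return S * S**(-beta)       # S * dN/dS, unnormalised

below, _ = quad(flux_density, S_min, S_thr)
total, _ = quad(flux_density, S_min, S_max)
print(f"unresolved fraction of total flux: {below / total:.1%}")
```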
Speaker: Vittoria Vecchiotti (GSSI)
• 56
Understanding the origin of the extended gamma-ray emission and the physical nature of HESS J1841-055 using observations at TeV energies with the MAGIC telescopes
With improved sensitivity with respect to the previous generation, current space-borne and ground-based gamma-ray telescopes have made the number of gamma-ray sources detected at GeV-TeV energies increase manyfold over the last decade. Many of the detected extended gamma-ray sources are not associated with any known sources at other wavelengths. Understanding the nature of these sources and the origin of the observed high energy gamma-ray emission remains a great challenge. Using the MAGIC telescopes, we have observed one such unassociated gamma-ray source, HESS J1841-055, at TeV energies. In this talk, we present our detailed investigation of this source using MAGIC data and multi-waveband information on nearby sources. We discuss the interpretation of this source as a cosmic-ray accelerator.
Speaker: Dr David Green (Max-Planck-Institut for Physics)
• 57
Assessing the signatures imprinted by star-forming galaxies in the cosmic gamma-ray background
In recent years, high-energy gamma-ray emission has been detected from star-forming galaxies (SFGs) in the local universe, including M82, NGC 253, Arp 220 and M33. The bulk of this emission is thought to be of hadronic origin, arising from the interactions of cosmic rays (CRs) with the interstellar medium of their host galaxy. More distant star-forming galaxies would also presumably be bright in gamma-rays, but these would not be resolved as point sources. Instead, they contribute gamma-rays as unresolved sources to the extra-galactic gamma-ray background (EGB). However, despite the wealth of high-quality all-sky EGB data from the Fermi-LAT gamma-ray space telescope collected over more than a decade of operation, the exact contribution of SFGs to the EGB and the signatures their emission would imprint on the gamma-ray sky remain unsettled. In this talk, I will discuss how this can be assessed by modelling the gamma-ray emission from SFG populations above 1 GeV. I will demonstrate that such emission can be characterised by just a small number of key physically-motivated parameters, and outline how source populations would leave anisotropic signatures in the EGB. I will consider model signatures that may be imprinted by different population classes and discuss how such imprints could yield information about the underlying properties and evolution of SFGs over cosmic time.
Speaker: Ellis Owen (National Tsing Hua University)
• 58
Bridging the Gap - The first sensitive 20-200 MeV catalog
The under-explored MeV band has extremely rich scientific potential. While awaiting an all-sky MeV mission, now is the prime time to take full advantage of the capabilities of the Fermi Large Area Telescope to explore this regime. With more than 12 years of the best available dataset (Pass 8), we have developed an all-sky analysis to build a sensitive catalog of sources from 20 to 200 MeV. This work will allow us to cover the SED peak of most gamma-ray sources, which is fundamental to understanding their nature, and possibly to discover a whole new population of MeV sources. Importantly, this program will start bridging the gap between the MeV and GeV energy bands, strongly supporting the scientific case for a future all-sky MeV mission and enhancing the legacy of the Fermi mission. In this talk I will present the preliminary results of this analysis, highlighting its scientific potential. I will also discuss the differences with respect to the first catalog of low-energy sources (1FLE, Principe et al. 2018).
Speaker: Lea Marcotulli (Clemson University)
• 59
Dissecting the inner Galaxy with gamma-ray pixel count statistics
The nature of the GeV gamma-ray Galactic center excess (GCE) in the Fermi-LAT data is still under investigation. Different techniques, such as template fitting and photon-count statistical methods, have been applied in the past few years in order to determine whether the GCE comes from sub-threshold point sources or from diffuse emission, such as dark matter annihilation in the Galactic halo.
A major limit to all these studies is the modeling of the Galactic diffuse foreground, and the impact of residual mis-modeled emission on the results' robustness.
In Ref. [1], we combine for the first time adaptive template fitting and pixel-count statistical methods in order to assess the contribution of sub-threshold point sources to the GCE, while minimizing the mis-modelling of diffuse emission components.
We reconstruct the flux distribution of point sources in the inner Galaxy well below the Fermi-LAT detection threshold, and measure their radial and longitudinal profiles. We find that point sources and diffuse emission from the Galactic bulge each contribute about 10% of the total emission therein, disclosing a sub-threshold point-source contribution to the GCE.
[1] arXiv:2102.12497
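The qualitative signature that pixel-count methods exploit can be reproduced in a few lines: a population of faint sources makes the photon-count histogram over-dispersed relative to a purely diffuse sky of the same mean. All numbers below are arbitrary illustrations, not the analysis of Ref. [1].

```python
# Toy pixel-count statistics: same mean flux, different variance once part
# of the emission is concentrated in faint point sources.
import numpy as np

rng = np.random.default_rng(3)
n_pix = 50_000
diffuse_mu = 5.0                                # mean counts per pixel

# purely diffuse sky: simple Poisson noise
counts_diffuse = rng.poisson(diffuse_mu, n_pix)

# same mean, but 20% of the flux in faint sources in random pixels
src_flux = rng.pareto(1.5, 2_000) + 1.0         # assumed source flux law
src_flux *= 0.2 * diffuse_mu * n_pix / src_flux.sum()
sky = np.full(n_pix, 0.8 * diffuse_mu)
np.add.at(sky, rng.integers(n_pix, size=src_flux.size), src_flux)
counts_mixed = rng.poisson(sky)

for name, c in [("diffuse only", counts_diffuse), ("with sources", counts_mixed)]:
    print(f"{name}: mean = {c.mean():.2f}, variance = {c.var():.2f}")
```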
Speaker: Dr Silvia Manconi (Institute for Theoretical Particle Physics and Cosmology, RWTH Aachen)
• 60
Population Studies of Fermi LAT sources
The Fermi Large Area Telescope (LAT) has detected hundreds of Galactic sources, most of which are pulsars. Many Galactic sources are still undetected or unresolved, either because their flux is below the Fermi LAT sensitivity or because of foreground and source confusion. Moreover, a large fraction of the many unassociated sources, which make up one third of the detected sources, may be of Galactic origin.
We present our method of source population synthesis studies for characterizing the general properties of Fermi LAT Galactic gamma-ray sources and for estimating the number of Galactic sources below the Fermi LAT flux sensitivity threshold.
The source density distribution and luminosity function of our Monte Carlo simulation are constrained by the Galactic sources detected by Fermi LAT. The number of unresolved sources and their contribution to the diffuse emission are then estimated with our best-fit model.
This is a long-term project analyzing the point source catalog and performing theoretical studies of gamma-ray sources. Apart from being interesting in its own right, characterizing the general properties of detected sources will also allow us to estimate the contribution to the diffuse emission from undetected and unresolved sources. In turn, this will help their detection, also impacting other studies of diffuse gamma rays, including studies of the interstellar emission and dark matter. Finally, it will also help in the characterization of unassociated sources.
Speaker: Elena Orlando
• 61
The future look at the Galaxy with the Galactic Explorer with a Coded Aperture Mask Compton Telescope (GECCO)
In the past 15 years, high-energy observations of the Galaxy by Fermi-LAT, AGILE, INTEGRAL and, very recently, by NuSTAR and eROSITA have proven very exciting, enabling the discovery of a variety of objects and unexpected breakthroughs. However, from a few hundred keV to several tens of MeV, the Galaxy remains poorly explored. In this energy range the lack of sufficiently sensitive instruments limits potential discoveries and challenges our understanding of Galactic high-energy processes and sources.
To solve this issue, GECCO is a new mission concept that will allow high-sensitivity observations of the sky from ~50 keV to ~10 MeV. It combines a coded aperture mask technique, which provides high angular resolution for source detection, with a Compton telescope, which provides high-sensitivity measurements of diffuse emission. Such a combination enables efficient separation between sources and diffuse emission.
A GECCO-like mission has the potential to answer open questions and lead to new discoveries. Among the most recent challenges regarding the Galaxy, sensitive observations at MeV energies with unprecedentedly high resolution will open a new window on complicated regions such as the inner Galaxy, the origin of the Fermi Bubbles and the origin of the 511 keV line, and will provide new insights on element formation in dynamical environments, on possible Galactic winds, and on the propagation mechanisms of low-energy cosmic rays, their sources, and their role in the evolution of the Galaxy.
Speaker: Elena Orlando
• 62
The new release of the fourth Fermi LAT source catalog
The third release of the Fourth Catalog of Fermi-LAT Sources (4FGL-DR3), based on 12 years of data between 50 MeV and 1 TeV, is presented. Improvements in the analysis method relative to the original 4FGL catalog and new features are reviewed. The 4FGL-DR3 includes about 750 more sources than the previous release (4FGL-DR2, obtained with 10 years of data) and about 1500 more sources than 4FGL. About 40% of the new sources are associated with counterparts at other wavelengths, which are mostly blazar candidates. The properties of the global set of unassociated sources reported in the catalog are discussed, with particular attention to those lying close to the Galactic plane. A population of unassociated sources that do not fit in with already known classes of gamma-ray emitters is emphasized.
Speaker: Benoit Lott (CENBG)
• Tuesday, July 13
• Discussion: 02 Constraining UHECR sources | CRI 03
#### 03
• 63
FR-0 jetted active galaxies: extending the zoo of candidate sites for UHECR acceleration
Fanaroff-Riley (FR) 0 radio galaxies form a low-luminosity extension of the well-established ultra-high-energy cosmic ray (UHECR) candidate accelerators, the FR-1 and FR-2 galaxies. Their much higher number density – they are up to a factor of 5 more numerous than FR-1 galaxies at $z \leq 0.05$ – makes them good candidate sources for an isotropic contribution to the observed UHECR flux. Here, the acceleration and survival of UHECR in the prevailing conditions of the FR-0 environment are discussed.
First, an average spectral energy distribution (SED) is compiled based on the FR0CAT. These photon fields, composed of a jet and a host-galaxy component, form a minimal target field for the UHECR, which suffer electromagnetic pair-production, photodisintegration and photomeson-production losses, as well as synchrotron radiation. The two most promising acceleration scenarios, based on first-order Fermi and gradual shear acceleration, are discussed, as well as different escape scenarios.
When gradual shear acceleration is preceded by an efficient acceleration mechanism, e.g., first-order Fermi or others, FR-0 galaxies are likely UHECR accelerators. This scenario requires a jet Lorentz factor of $\gamma>1.6$ so that gradual shear acceleration is faster than the corresponding escape. In less optimistic models, a contribution to the cosmic-ray flux between the knee and the ankle is expected, relatively independent of the realized turbulence and acceleration; a quick consistency check via the Hillas criterion is sketched below.
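Whether a source region can confine particles up to UHECR energies at all is bounded by the standard Hillas estimate $E_{\max} \approx Z e B R \beta c$, i.e. $E_{\max}[\mathrm{EeV}] \approx 0.9\,Z\,\beta\,B[\mu\mathrm{G}]\,R[\mathrm{kpc}]$; the FR-0 jet numbers below are illustrative assumptions, not values from the talk.

```python
# Quick Hillas-criterion check (standard estimate, not from the talk).
def hillas_emax_eev(Z, B_uG, R_kpc, beta=1.0):
    """Maximum confined energy in EeV: E_max ~ 0.9 Z beta B[uG] R[kpc]."""
    return 0.9 * Z * beta * B_uG * R_kpc

# illustrative compact-jet numbers (assumed): B ~ 100 uG, R ~ 0.1 kpc
for Z, name in [(1, "p"), (26, "Fe")]:
    print(name, f"{hillas_emax_eev(Z, B_uG=100.0, R_kpc=0.1):.0f} EeV")
```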
Speaker: Lukas Merten (University of Innsbruck)
• 64
UHECR from high- and low-luminosity GRBs
We discuss the production of multiple messengers including UHECR, EM radiation and neutrinos in Gamma-Ray Bursts in models with multiple interaction regions.
We demonstrate that standard high-luminosity bursts can explain the UHECR spectrum as measured by the Pierre Auger Observatory, and derive the required source injection composition for different engine realisations. We discuss how multi-messenger observations can be used to discriminate between models by explicitly calculating the expected source and cosmogenic neutrino fluxes as well as the photon light curves. In addition, a separate population of LL-GRBs may exist, for which we show that different nuclei can indeed reach UHECR energies. For this purpose, we self-consistently model the radiation fields in prototypes inspired by real GRBs. We connect the maximal energies attainable for cosmic-ray nuclei to a possible VHE and HE component in the SED.
Speaker: Annika Rudolph (Z_THAT (Theoretische Astroteilchenphysik))
• 65
Ultrahigh-energy cosmic-ray interactions as the origin of VHE gamma-rays from BL Lacs
We explain the observed multiwavelength photon spectrum of a number of BL Lac objects detected at very high energy (VHE, $E > 30$ GeV), using a lepto-hadronic emission model. The one-zone leptonic emission is employed to fit the synchrotron peak. Subsequently, the SSC spectrum is calculated, such that it extends up to the highest energy possible for the jet parameters considered. The data points beyond this energy, and also in the entire VHE range, are well explained using a hadronic emission model. The ultrahigh-energy cosmic rays (UHECRs, $E> 0.1$ EeV) escaping from the source interact with the extragalactic background light (EBL) during propagation over cosmological distances to initiate electromagnetic cascades down to $\sim1$ GeV energies. The resulting photon spectrum peaks at $\sim1$ TeV energies. We consider a random turbulent extragalactic magnetic field (EGMF) with a Kolmogorov power spectrum to find the survival rate of UHECRs within 0.1 degrees of the direction of propagation in which the observer is situated. We restrict ourselves to an RMS value of the EGMF, $B_{\rm rms}\sim 10^{-5}$ nG, for a significant contribution to the photon spectral energy distribution (SED) from UHECR interactions. We find that UHECR interactions on the EBL and the secondary cascade emission can fit the gamma-ray data from the BL Lacs we considered at the highest energies. The required luminosity in UHECRs and corresponding jet power are below the Eddington luminosities of the super-massive black holes in these BL Lacs.
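To see why such a weak EGMF keeps the cascade aligned with the source, one can use the standard small-angle deflection estimate for a turbulent field (the 0.8° coefficient varies somewhat in the literature; this is an order-of-magnitude sketch, not the calculation from the talk).

```python
# Order-of-magnitude UHECR deflection in a turbulent field:
# theta_rms ~ 0.8 deg * Z * (E/100 EeV)^-1 * (B/nG) * sqrt(D/10 Mpc) * sqrt(lc/1 Mpc)
import numpy as np

def theta_rms_deg(E_eev, D_mpc, B_ng, lc_mpc=1.0, Z=1):
    return 0.8 * Z * (100.0 / E_eev) * B_ng * np.sqrt(D_mpc / 10.0) * np.sqrt(lc_mpc)

# with B_rms ~ 1e-5 nG as above, even 10 EeV protons stay far inside 0.1 deg
print(f"{theta_rms_deg(E_eev=10.0, D_mpc=100.0, B_ng=1e-5):.1e} deg")
```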
Speaker: Saikat Das (Raman Research Institute, India)
• 66
Cosmographic model of the astroparticle skies
Modeling the extragalactic astroparticle skies involves reconstructing the 3D distribution of the most extreme sources in the Universe. Full-sky tomographic surveys at near-infrared wavelengths have already enabled the astroparticle community to bound the density of sources of astrophysical neutrinos and ultra-high-energy cosmic rays (UHECRs), constrain the distribution of binary black-hole mergers and identify some of the components of the extragalactic gamma-ray background. This contribution will present efforts to clean and complement the stellar-mass catalogs developed by the gravitational-wave and near-infrared communities, in order to obtain a cosmographic view of stellar mass ($M_*$) and star formation rate (SFR). Unprecedented cosmography is offered by a sample of about 400,000 galaxies within 350 Mpc, with a 50-50 ratio of spectroscopic and photometric distances, $M_*$, SFR and corrections for incompleteness with increasing distance and decreasing Galactic latitude. The inferred 3D distribution of $M_*$ and SFR is consistent with cosmic flows. The $M_*$ and SFR densities converge towards values compatible with deep-field observations beyond 100 Mpc, suggesting a close-to-isotropic distribution of more distant sources. In addition to discussing relevant applications for the four astroparticle communities, this contribution will highlight the distribution of magnetic fields at Mpc scales deduced from the 3D distribution of matter, which is believed to be crucial in shaping the ultra-high-energy sky. These efforts provide a new basis for modeling UHECR anisotropies, which bodes well for the identification of their long-sought sources.
Speaker: Jonathan Biteau (Université Paris-Saclay, CNRS/IN2P3, IJCLab, 91405 Orsay, France)
• 67
Active galactic nuclei as neutrino sources in the PeV and EeV regimes
Active galactic nuclei (AGNs) are amongst the most promising neutrino source candidates, due to their potential to accelerate cosmic rays in their relativistic jets. The IceCube observatory has already detected several events from the direction of known gamma-ray blazar AGNs, like TXS 0506+056 and, more recently, PKS 1502+106. Through numerical modeling, we can show that neutrino emission is compatible with the available multi-wavelength observations of these sources. By generalizing these models, we can show that the diffuse IceCube flux can, under certain conditions, be fully explained by low-luminosity BL Lacs, while the contribution from bright gamma-ray quasars is severely constrained by the IceCube limits. On the other hand, it is also possible that AGNs accelerate cosmic rays up to ultra-high energies. In that scenario, detailed modeling shows that the AGN population can produce large fluxes of EeV neutrinos, while still obeying the current IceCube stacking limits in the PeV regime. I will also argue that the flux of EeV neutrinos produced inside AGN jets can outshine the cosmogenic contribution, which has important implications for the search strategy of future radio neutrino telescopes.
Speaker: Xavier Rodrigues (DESY / Ruhr University Bochum)
• 68
Constraining the origin of UHECRs and astrophysical neutrinos
We constrain properties of ultrahigh energy cosmic ray source environments (and potentially astrophysical neutrino sources), including their photon temperature, gas density, size, magnetic field strength and coherence length, using UHECR and neutrino spectra and composition. Our analysis provides a new type of information on UHECR sources, independent of the mechanism responsible for the UHECR acceleration. We also explore the possibility of a common origin of UHECRs and astrophysical neutrinos and further constrain sources which are consistent with this possibility. We show that the common origin hypothesis can only be satisfied for certain hadronic interaction models, demonstrating that multimessenger analyses also have the power to constrain hadronic physics beyond LHC energies.
Speaker: Marco Muzio (New York University)
• 69
Thermal-to-nonthermal element abundances in different Galactic environments
The nonthermal source abundances of elements play a crucial role in the understanding of cosmic-ray phenomena from a few GeV up to several tens of EeV. In this presentation, a first systematic approach is shown that describes the change of the abundances from the thermal to the nonthermal state via diffusive shock acceleration by a temporally evolving shock. Here, not only the time-dependent ionization states of elements contained in the ambient gas are considered, but also elements condensed on solid, charged dust grains, which can likewise be injected into the acceleration process. This generic, parametrized model is then applied to the case of particle acceleration by supernova remnants in various ISM phases as well as Wolf-Rayet wind environments. The resulting predictions for low-energy cosmic ray (LECR) source abundances are compared with the data obtained by various experiments, revealing the importance of dust grains as well as the possible contribution of different ISM environments to the observed LECR flux.
Speaker: Björn Eichmann (Ruhr-Universität Bochum, Theoretische Physik IV)
• 70
The problematic connection between low-luminosity gamma-ray bursts and ultra-high-energy cosmic rays
Ultra-high-energy cosmic rays (UHECR) are the most energetic particles ever observed. What astrophysical sources are responsible for their immense acceleration remains unknown despite decades of research. In this talk, I will investigate whether low-luminosity gamma-ray bursts (llGRBs), short-lived cosmic explosions currently seen as one of the most promising acceleration candidates, can be the main sources of UHECR. Our study focuses on the radiation from the less energetic electrons, which are inevitably accelerated in the same region. This radiation can be characterized and compared to observations of llGRBs. We find that the radiation from these electrons would be much too luminous, showing that llGRBs would have to be orders of magnitude brighter if they hosted significant UHECR acceleration. This result challenges llGRBs as accelerators of UHECR.
Speaker: Filip Samuelsson (KTH Royal Institute of Technology)
• 71
A combined fit of energy spectrum, shower depth distribution and arrival directions to constrain astrophysical models of UHECR sources
The combined fit of the measured energy spectrum and shower depth distribution of ultra-high-energy cosmic rays is known to constrain the parameters of astrophysical scenarios with homogeneous source distributions. Further measurements show that the cosmic-ray arrival directions agree better with the directions and fluxes of catalogs of starburst galaxies and active galactic nuclei than with isotropy.
Here, we present a novel combination of both analyses. For that, a three-dimensional universe model containing a nearby source population and a homogeneous background source distribution is built, and its parameters are adapted using a combined fit of energy spectrum, shower depth distribution and energy-dependent arrival directions. The model takes into account a symmetric magnetic field blurring, source evolution and interactions during propagation.
We use simulated data, which resemble measurements of the Pierre Auger Observatory, to evaluate the method’s sensitivity. In this way, we verify that the source parameters, as well as the fraction of events from the nearby source population and the size of the magnetic field blurring, are determined correctly, and that the data are described by the fitted model including the catalog sources with their respective fluxes and three-dimensional positions. We demonstrate that by combining all three measurements we reach the sensitivity necessary to discriminate between the catalogs of starburst galaxies and active galactic nuclei.
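One simple way such a "symmetric magnetic field blurring" could be coded (our assumption of a possible implementation, not necessarily the authors') is a Fisher, i.e. von Mises-Fisher, smearing of arrival directions around each source.

```python
# Sketch: angular offsets from a source for a Fisher (von Mises-Fisher)
# blurring of concentration kappa, sampled by inverse-CDF on cos(theta).
import numpy as np

rng = np.random.default_rng(7)

def fisher_offsets(kappa, n):
    """Offsets (radians) for f(cos t) ~ exp(kappa * cos t) on the sphere."""
    u = rng.uniform(size=n)
    cos_t = 1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * kappa)) / kappa
    return np.arccos(np.clip(cos_t, -1.0, 1.0))

kappa = 1.0 / np.radians(5.0) ** 2       # ~5 deg blurring scale (assumption)
offsets = np.degrees(fisher_offsets(kappa, 100_000))
print(f"median offset: {np.median(offsets):.2f} deg")
```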
Speaker: Teresa Bister (RWTH Aachen University)
• 72
Excited isomer photons and the VHE emission from Centaurus A
The very-high-energy (VHE) emission from Centaurus A (Cen A) observed by the H.E.S.S. telescopes cannot be explained by simple synchrotron-self-Compton (SSC) models. Motivated by the reported UHECR hotspot in the direction of Cen A, we investigate a scenario in which excited isomer photons of heavy nuclei can account for these VHE photons.
Our fully self-consistent model includes a leptonic SSC scenario with a hadronic high-energy component from the pc-scale core region, which explains the SED below TeV energies. As expected, the core of the jet is optically thick to the gamma-rays above TeV energies that are produced in nuclear disintegrations. However, a fraction of the excited isomers produced in photodisintegration interactions of cosmic-ray nuclei is long-lived enough to escape the core region. We consider the isomeric emission produced in the decay of these isomers in a larger volume surrounding the core and show that it can explain the H.E.S.S. flux while being in agreement with the spatially extended emission region recently reported.
Speaker: Leonel Raul Morejon (Z_THAT (Theoretische Astroteilchenphysik))
• 73
Features of a single source describing the very end of the energy spectrum of cosmic rays
The energy spectrum of cosmic rays extends over many orders of magnitude, with a steep suppression of the flux at the highest energies. The energy spectrum of ultra-high energy cosmic rays (UHECR) is measured with great precision by the Pierre Auger Observatory (Auger) and the Telescope Array. However, the two measured spectra show different slopes of the decrease at the highest energies. This disagreement could arise because the two experiments see different parts of the sky and, therefore, in principle, different sources of UHECR as well. In our study, we investigate the possibility that the energy spectrum measured by Auger at energies $\log(E/\mathrm{eV})\geq 19.5$ could be explained by a single dominant strong source. We explore the space of possible features of such a source, including its distance, spectral index and mass composition, and compare the resulting flux after propagation, simulated within CRPropa 3, with the data measured by Auger. Due to the large uncertainties at the highest energies, no restrictions are placed on the measured depth of shower maximum, which is tightly connected with the mass composition. We show the possible parameters of such a source and explore possible mass composition mixes that could explain the data well.
Speaker: Alena Bakalova (FZU - Institute of Physics of the Czech Academy of Sciences)
• 74
Transient Source for the Highest Energy Galactic Cosmic Rays
We analyze the Auger dipole anisotropy measurements below 8 EeV, to expose the existence of an individual source of the Galactic cosmic rays above $10^{17}$ eV. The source is incompatible with being in the direction of the Galactic center by a $\chi^2$/dof > 6. Interpreting the amplitude and direction of the Galactic HE Dipole in terms of a transient, we find:
a) The amplitude of the Galactic VHE dipole constrains the ratio of source distance and time since the transient event occurred.
b) The Galactic VHE dipole is compatible with production in a transient event in the Galactic plane which occurred about 30 kyr ago at a distance of about 1 kpc. A SN remnant and pulsar consistent with being the relics of this event are identified.
c) The peak rigidity of these VHE Galactic CRs is about 0.1 EV.
d) For reasonable estimates of the diffusion coefficient of the GMF, the energy emitted in CRs above 100 PeV by the transient Galactic source is about $10^{44-45}$ erg, compatible with acceleration in the converging-flow shock of a core-collapse supernova exploding into the wind of a massive binary companion.
The estimated rate of such events in the Galaxy as a whole is compatible with the inferred space-time separation of this event. Comparable transient events in galaxies throughout the Universe may be an important source of astrophysical neutrinos. Implications and tests of this hypothesis for the origin of the highest energy Galactic cosmic rays will be discussed.
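Point (b) can be sanity-checked with the textbook result that, for a Gaussian diffusion profile $n\propto e^{-r^2/4Dt}$, the dipole amplitude is $3r/(2ct)$ independently of the diffusion coefficient; the Gaussian profile itself is an assumption of this sketch, which simply prints the implied amplitude for the quoted distance and age.

```python
# Dipole amplitude 3r/(2ct) for a transient source under Gaussian diffusion.
KPC_KM = 3.086e16          # km per kpc
C_KM_S = 3.0e5             # speed of light [km/s]
YR_S = 3.156e7             # seconds per year

def dipole_amplitude(r_kpc, t_kyr):
    return 3.0 * r_kpc * KPC_KM / (2.0 * C_KM_S * t_kyr * 1e3 * YR_S)

print(f"dipole for r = 1 kpc, t = 30 kyr: {dipole_amplitude(1.0, 30.0):.2f}")
```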
Speaker: Glennys Farrar (New York University)
• Discussion: 15 Future instrumentation | CRD-MM 06
#### 06
• 75
The Roadmap to the POEMMA Mission
The Probe Of Extreme Multi-Messenger Astrophysics (POEMMA) is designed to observe ultrahigh-energy cosmic rays (UHECRs) and cosmic neutrinos from space with sensitivity over the full celestial sky. Developed as a NASA Astrophysics Probe-class mission, POEMMA consists of two identical telescopes orbiting the Earth in a loose formation that observe extensive air showers (EAS) via air fluorescence and Cherenkov emissions. UHECRs and UHE neutrinos above 20 EeV are observed with the stereo fluorescence technique, while tau neutrinos above 20 PeV are observed via the optical Cherenkov signals produced by up-going EAS produced by the decay of Earth-emerging tau-leptons. The POEMMA satellites are designed to quickly re-orientate to follow up transient cosmic neutrino sources and obtain unparalleled neutrino flux sensitivity.
Both observation techniques and the instrument design are being validated by current and upcoming missions, such as Mini-EUSO and EUSO-SPB as part of the JEM-EUSO program, and the Terzina SmallSat mission. We will discuss the POEMMA science performance and the current roadmap to the POEMMA mission.
Speaker: Prof. Angela V. Olinto (The University of Chicago)
• 76
Cosmic-ray isotope measurements with HELIX
Recent discoveries of new features in Galactic cosmic-ray fluxes emphasize the importance of understanding the propagation of cosmic rays. HELIX (High Energy Light Isotope eXperiment) is designed to improve the measurements of light cosmic-ray isotopes, including the propagation clock isotope $^{10}\mathrm{Be}$ and stable secondary isotope $^{9}\mathrm{Be}$, which will be essential to study the propagation of the cosmic rays. The magnetic spectrometer of HELIX consists of a 1 Tesla superconducting magnet containing a high-resolution gas drift chamber as a tracking detector and two velocity measuring detectors: a time-of-flight detector and a ring-imaging Cherenkov detector. While the HELIX instrument can measure the fluxes of the light isotopes from protons (Z=1) up to neon (Z=10), it is optimized to study the flux of beryllium isotopes from 0.2 GeV/n to beyond 3 GeV/n with a sufficient mass resolution to discriminate between $^{10}\mathrm{Be}$ and $^{9}\mathrm{Be}$. In this talk, I will review the scientific goals and the design of the instrument and report its current status and project plans.
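The reason a magnetic spectrometer combined with velocity detectors separates isotopes is the generic relation $mc^2 = ZeR/(\beta\gamma)$: at fixed velocity, $^{9}$Be and $^{10}$Be differ in rigidity by about 10%. The sketch below uses approximate beryllium masses for illustration; it says nothing about HELIX's actual resolution.

```python
# Generic isotope-mass relation for a spectrometer: m = Z * R / (beta * gamma),
# with R in GV and m in GeV.  Approximate Be masses, illustration only.
import numpy as np

def mass_gev(R_gv, beta, Z=4):                # Z = 4 for beryllium
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return Z * R_gv / (beta * gamma)

m9, m10 = 8.39, 9.33                          # ~9Be and ~10Be masses [GeV]
beta = 0.95
gamma = 1.0 / np.sqrt(1.0 - beta**2)
for m in (m9, m10):
    R = m * beta * gamma / 4.0                # rigidity [GV] at this velocity
    print(f"m = {m:.2f} GeV -> R = {R:.3f} GV -> recovered m = {mass_gev(R, beta):.2f} GeV")
```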
Speaker: Nahee Park (Queen's University)
• 77
The TIGERISS instrument
TIGERISS (Trans-Iron Galactic Element Recorder for the International Space Station) is a natural evolution to space of the balloon-borne TIGER and SuperTIGER instruments discussed elsewhere at this conference. TIGERISS will be proposed to the next NASA Pioneers opportunity, anticipated in September 2021, as an ISS-attached mission to extend measurements of the relative abundances of galactic cosmic-ray nuclei into the Pt-Pb region with individual-element resolution and excellent statistical precision. TIGERISS is designed to accurately determine the atomic number of incident nuclei up to and beyond the end of the periodic table and to begin measurements at a low-Z trigger threshold, planned to lie between He and C, in order to measure the velocity distributions of the more common species using its Cherenkov detectors. TIGERISS measures the atomic number of incident nuclei by both the differential ionization energy loss (dE/dX) vs. Cherenkov (velocity) technique and the Cherenkov vs. Cherenkov technique, using acrylic and silica-aerogel Cherenkov detectors as in TIGER and SuperTIGER. However, it utilizes silicon strip detectors for dE/dX and trajectory measurements, replacing the plastic scintillators and scintillating fiber hodoscopes of TIGER and SuperTIGER. The scientific goals and anticipated results of TIGERISS are discussed in an accompanying paper at this conference. Here we give the details of the TIGERISS measurement technique and its technical implementation.
Speaker: John Mitchell (NASA Goddard Space Flight Center)
• 78
The High Energy Particle Detector (HEPD-02) for the second China Seismo-Electromagnetic Satellite (CSES-02)
The CSES (China Seismo-Electromagnetic Satellite) is a multi-instrument scientific space program whose objectives are to investigate the near-Earth electromagnetic, plasma and particle environment, to study seismo-associated disturbances in the ionosphere-magnetosphere transition zone, and to study anthropogenic electromagnetic noise as well as natural non-seismic electromagnetic emissions, mainly due to tropospheric activity. In particular, the mission aims at confirming the existence of possible temporal correlations between the occurrence of earthquakes of medium and strong magnitude and the observation in space of electromagnetic perturbations, plasma variations and precipitation of bursts of high-energy charged particles from the inner Van Allen belt.
The first satellite (CSES-01) was launched in 2018, while a second one (CSES-02) is currently under development and its launch is expected by 2022. As in CSES-01, the suite of instruments on board CSES-02 will comprise a particle detector (HEPD-02, High-Energy Particle Detector) to measure the increase of the electron and proton fluxes due to short-time perturbations of the radiation belts induced by solar, terrestrial, or anthropogenic phenomena, in the energy range 3-100 MeV for electrons and 30-200 MeV for protons.
HEPD-02 comprises a tracker made of CMOS Monolithic Active Pixel Sensors (MAPS), a double layer of crossed plastic scintillators for trigger and a calorimeter, made of a tower of plastic scintillators and a matrix of inorganic crystals, surrounded by plastic scintillator veto planes. We present the main characteristics and performance of HEPD-02, highlighting the architectural choices made to meet the scientific objectives of the mission.
Speaker: Dr Cristian De Santis (INFN Sezione di Roma Tor Vergata)
• Discussion: 29 Outreach online | O&E 07
#### 07
• 79
Armagh Observatory and Planetarium's Outreach Programme for the Cherenkov Telescope Array
We describe an outreach programme being undertaken at the Armagh Observatory and Planetarium (AOP) for the Cherenkov Telescope Array (CTA). Founded in 1790 and with a rich astronomical heritage, AOP today combines the research and education arms of our organisation to bring a research-informed outreach programme to the public, most often through our planetarium-related activities.
We have developed and written, in-house, a short (10 minute) Full Dome planetarium show ("Exploring the High-Energy Universe") that describes the science of gamma-ray astronomy and introduces the CTA as the first ground-based gamma-ray observatory open to scientific communities. This dome show will be made freely and publicly available through the Digistar cloud to other planetaria. It may be rendered into other formats for other planetarium projector systems. We will explain how we undertook this project and consider how it might be extended to provide outreach material for other science facilities.
In parallel, we are engaged in developing a series of short videos to introduce the scientists and the science of the UK CTA consortium, again designed for public audiences. These videos can be accessed through our social media channels. Delivery of such an outreach programme in byte-sized pieces is an essential element in attracting and engaging audiences. We explain how we have developed the skill set to do this within our Education Team at AOP while our facility has been closed for the past year as a result of the Covid pandemic.
Speaker: Michael Burton (Armagh Observatory and Planetarium)
• 80
Multi-messenger Astroparticle Physics for the Public via the astroparticle.online Project
Many projects want to share knowledge of particle and astroparticle physics (in particular, cosmic-ray physics); however, multi-messenger astroparticle physics is still a young research field and is hardly covered in educational curricula or in outreach. The astroparticle.online project, founded in 2018 within the framework of the German-Russian Astroparticle Data Life Cycle Initiative (GRADLCI), is an endeavor to address this issue.
Within the project, scientists from Karlsruhe Institute of Technology (KIT), Irkutsk State University (ISU) and Moscow State University (MSU) developed a range of educational materials: articles, video lectures, tests, problems to solve, laboratory exercises and pre-trained neural networks for particle recognition. The project is supported by the KASCADE Cosmic-ray Data Center (KCDC) and the GRADLCI data aggregation platform, where one can retrieve and analyze open scientific data from various experiments.
The main audience of the project’s activities are high-school and undergraduate students. All the educational materials are available online at the project’s web portal https://www.astroparticle.online/; they are used in both online and offline masterclasses organized by the project members, and also as supplementary content by educational organizations - for example, in the ISU course 'Introduction to experimental methods in high energy astrophysics'. Over the time that the project has been operating, more than 120 students have taken part in its activities.
This contribution will cover the experience gained while running the project for more than 3 years now, our challenges, developments and future plans.
Speaker: Victoria Tokareva (KIT)
• 81
Outreach activities at the Pierre Auger Observatory
The Pierre Auger Observatory, sited in Malargüe, Argentina, is the largest observatory available for measuring ultra-high-energy cosmic rays (UHECR). The Auger Collaboration has measured and analysed an unprecedented number of UHECRs. Along with making important scientific discoveries, for example, the demonstration that cosmic rays above 8 EeV are of extragalactic origin and the observation of a new feature in the energy spectrum at around 13 EeV, outreach work has been carried out across the 17 participating countries and online. This program ranges from talks to varied audiences, to the creation of a local Visitor Center, which attracts ~8000 visitors annually, to initiating masterclasses. Permanent and temporary exhibitions have been prepared both physically and virtually. Science fairs for elementary- and high-school students have been organised, together with activities associated with interesting phenomena such as eclipses. In addition, we participate in international events such as the International Cosmic Day, Frontiers from H2020, and the International Day of Women and Girls in Science. Part of the Collaboration website is aimed at the general public; here, the most recently published articles are summarised. Thus the Collaboration informs people about work in our field, which may seem remote from everyday life. Furthermore, the Auger Observatory has been a seed for scientific and technological activities in and around Malargüe. Different outreach ventures that have already been implemented and others which are foreseen will be described.
Speaker: Karen Salomé Caballero Mora (Universidad Autónoma de Chiapas)
• 82
Virtual tours to the KATRIN experiment
The KArlsruhe TRItium Neutrino (KATRIN) experiment performs a model-independent measurement of the electron neutrino mass with a design sensitivity of 0.2 eV (90% CL) after three full years of measurement time. KATRIN measures near the endpoint of the tritium beta spectrum, using the MAC-E filter principle by virtue of its 70 m long beamline. Its technological challenges include the high-luminosity tritium source, the cryogenic pumping section and the 20 m long ultra-high vacuum vessel of the main spectrometer.
Guided tours to the KATRIN beamline with supporting presentations are frequently offered to make the experiment, astroparticle physics and scientific research in general accessible to the public and students in particular. However, the on-site access is limited by the operation of high voltage and magnets, safety regulations for the tritium laboratory and the ongoing pandemic. This fuelled the development of three virtual presentation tools:
a 40-minute-long video tour with live commentary via zoom was created using cellphone-made footage of the beamline and archive footage of the transport and commissioning of its key components;
a 3D panorama of five locations at the beamline for virtual reality headsets or browsers providing a live-action guide or free exploration was developed with the NaWik (National Institute for Science Communication);
and a browser interface for a low-poly model of the full beamline is work-in-progress.
In this talk, we will present all three tools and their making, including first results of the NaWik-research on the knowledge transfer potential of the 3D panorama.
Supported by BMBF (05A20VK3), the Helmholtz Association, the Klaus Tschira Foundation, the KIT centre KCETA, and the Excellence Strategy of the German Federal and State Governments.
Speaker: Dr Manuel Klein (KIT)
• 83
The online laboratories for OCRA - Outreach Cosmic Ray Activities INFN project
OCRA – Outreach Cosmic Ray Activities was born in 2018 as a national outreach project of INFN with the aim of collecting, within a national framework, the numerous public engagement activities in the field of cosmic ray physics already present at a local level in the divisions and laboratories. Since spring 2020, OCRA has also offered a series of online laboratories on its website https://web.infn.it/OCRA/, designed to be used not only by students individually but also in the classroom by teachers.
The cosmic-ray path on the website will be presented together with the online laboratories on muon measurements, ranging from the zenith-angle dependence measured during the International Cosmic Day to measurements of the flux dependence on altitude in the atmosphere and depth in water. Also, a laboratory allowing users to analyze public data of the Pierre Auger Observatory will be presented. In addition, some teaching methods included in the "Teachers' area" of the OCRA website will be described.
The developed cosmic ray path was also used to organize an online course for teachers of Italian high schools with the purpose of accompanying teachers when approaching the subject for the first time. About 70 teachers participated for a total of 9 lessons.
Speaker: Dr Carla Aramo (INFN Napoli)
• 84
Neutrino Education, Outreach and Communications Activities: Captivating Examples from IceCube
The IceCube Neutrino Observatory at the South Pole has tremendous emotional appeal: the extreme Antarctic environment coupled with the aura of a pioneering experiment that explores the universe in a new way. However, as with most cutting-edge experiments, it is still challenging to translate the exotic, demanding science into accessible language. We present three examples of recent successful education, outreach, and communication activities that demonstrate how we leverage efforts and sustain connections to produce engaging results. We describe our participation in the PolarTREC program that pairs researchers with educators to provide deployments in the Antarctic and how we have sustained relationships with these educators to produce high quality experiences to reach target audiences even during a pandemic. We focus on three examples from the last year: a summer enrichment program for high school students that was also modified for a 10-week IceCube after school program, a virtual visit to the South Pole for the ScienceWriters 2020 conference, and a series of short videos in English and Spanish suitable for all ages that explain traveling, living, and working at the South Pole.
• 85
The Fermi Masterclass Online Edition 2020
The Fermi Masterclass is an international outreach event designed to give high-school students the unique opportunity to discover the world of High-Energy Astrophysics. Since 2017, various Italian universities and research institutes, guided by the National Institute for Nuclear Physics (INFN), organized a "full immersion" day of dedicated lectures and exercises in which students analysed real data collected by the LAT experiment aboard the Fermi satellite. Over the years, foreign institutes from Slovenia, Sweden and the U.S. also joined the effort, giving the students the unique opportunity to interact with each other as in real international collaborations.
The 4th edition of the Fermi Masterclass was scheduled to take place in April 2020. However, due to the pandemic emergency, the Masterclass was initially postponed, and finally took place as an online edition on December 10th, 2020.
Here we present the structure and organization of this first virtual event, including an interactive part of exercises accessible to the students through dedicated web platforms.
Speaker: Silvia Raino (Dipartimento Interateneo di Fisica "M.Merlin", Università di Bari and INFN-Bari (ITALY))
• 86
#meetTheMAGICians: Science communication and visibility of young researchers
Among the many activities organized by the Outreach working group of the MAGIC (Major Atmospheric Gamma-ray Imaging Cherenkov) Collaboration, we would like to present the ongoing project #meetTheMAGICians. Under this hashtag, used on our social media pages (@MAGICtelescopes), we collect live streaming events on astroparticle physics topics, series of social media posts, videos and other contents. In addition to increasing the visibility of the MAGIC collaboration, a central goal of #meetTheMAGICians is to strongly connect the communication of our science to the individual achievements of our researchers. It is a community-wide challenge to increase the individual recognition of early career scientists in large international science collaborations. In this project, we give young members of the MAGIC collaboration the chance to increase their visibility in the astroparticle community by highlighting their individual contributions to our research. At the same time, we aim to communicate to the general public how exciting and diverse astroparticle physics can be, and to stimulate in young students the curiosity towards the extreme Universe. We will present an overview on the present status of the project, and an analysis of both successes and remaining challenges.
Speaker: Juliane van Scherpenberg (Max-Planck-Institut for Physics)
• 87
2’ science: A Science Communication Project for Astrophysics
Two-minute science (2'science) is a science communication project supported by early-career Greek astrophysicists. With this endeavor, which started in December 2020, we try to bridge the gap between the scientific community and the public. This project is based on the simple idea of writing short articles with an approximate reading time of two minutes. These articles cover several topics and their difficulty scales to cover a broad audience range, from young students to experienced adults. We support the idea of “ask an expert” in astrophysics in Greece, where any reader can pose a question. We offer the appropriate answer either by writing it ourselves or by contacting the field experts from the Greek astronomical society. Furthermore, our previous science communication experience leads us to design educational activities for students and/or adults based on pedagogical means. A successful one was an “escape-zoom” titled “Escape to Other Worlds”, a digital version of an escape room. Further activities are astronomy workshops for teenagers, online talks to schools, and our participation in a scientific podcast to trigger the public interest in astrophysics. We communicate this work through social media, where several thousands of people already follow our work.
Speaker: Dimitrios Kantzas (UvA)
• 88
Astronomy Outreach and Education in Namibia: H.E.S.S. and beyond
Astronomy plays a major role in the scientific landscape of Namibia. Because of its excellent sky conditions, Namibia is not only frequently visited by astrophotographers but is also home to ground-based observatories like the High Energy Spectroscopic System (H.E.S.S.), in operation since 2002. Located near the Gamsberg mountain, H.E.S.S. performs groundbreaking science by detecting very-high-energy gamma-rays from many different objects. The fascinating stories behind many of them are featured regularly in the “Source of the Month”, a blog-like format intended for the general public with more than 170 features so far. Together with this digital format, H.E.S.S. outreach activities have always been covered locally, e.g. via ‘open days’ and guided tours on the H.E.S.S. site itself. An overview of the H.E.S.S. outreach activities will be presented in this contribution, along with discussions relating to the current landscape of astronomy outreach and education in Namibia. We will also touch on some of the significant activity in the country in recent months, which aims to use astronomy as a means for capacity-building and sustainable development. Finally, as we take into account the future prospects of radio astronomy in the country, momentum for a wider range of astrophysics research is clearly building – this presents a great opportunity for the astronomy community to come together to capitalise on this movement and further support astronomy outreach and education in Namibia.
Speaker: Dr Hannah Dalgleish (University of Oxford; University of Namibia)
• Discussion: 32 Cherenkov Media & Detector Calibration | NU 05
#### 05
• 89
A calibration study of local ice and optical sensor properties in IceCube
The optical sensors of the IceCube Neutrino Observatory are attached to vertical strings of cables. They were frozen into the ice in deployment holes made by a hot-water drill. This hole ice, to the best of our knowledge, consists of a bubbly central column, with the remainder of the re-frozen volume being optically clear. The bubbly ice often blocks one or several of the calibration LEDs in every optical sensor and significantly distorts the angular profile of the calibration light pulses. It also affects the sensors' response to incoming photons at different locations and directions. We present our modeling of the hole-ice optical properties as well as of the optical sensor location and orientation within the hole ice. The shadowing effects of the cable string and a possible optical sensor tilt away from the nominal vertical alignment are also discussed.
• 90
Deployment of the IceCube Upgrade Camera System in the SPICEcore hole
IceCube is a cubic-kilometer scale neutrino telescope located at the geographic South Pole. The detector utilizes the extremely transparent Antarctic ice as a medium for detecting Cherenkov radiation from neutrino interactions. While the optical properties of the glacial ice are generally well modeled and understood, the uncertainties which remain are still the dominant source of systematic uncertainties for many IceCube analyses. A camera and LED system is being built for the IceCube Upgrade that will enable the observation of optical properties throughout the Upgrade array. The SPICEcore hole, a 1.7 km deep ice-core hole located near the IceCube detector, has given the opportunity to test the performance of the camera system ahead of the Upgrade construction. In this contribution, we present the results of the camera and LED system deployment during the 2019/2020 austral summer season as part of a SPICEcore luminescence logger system.
Speaker: Mr Danim Kim (Sungkyunkwan University)
• 91
The IceCube Neutrino Observatory at the geographic South Pole instruments a gigaton of glacial Antarctic ice with over 5000 photosensors. The detector, by now running for over a decade, will be upgraded with seven new densely instrumented strings. The project focuses on the improvement of low-energy and oscillation physics sensitivities as well as re-calibration of the existing detector. Over the last few years we developed a precision optical calibration module (POCAM) providing self-monitored isotropic nanosecond light pulses for optical calibration of large-volume detectors. Over 20 next-generation POCAMs will be calibrated and deployed in the IceCube Upgrade in order to reduce existing detector systematics. We report a general overview of the POCAM instrument, its performance and calibration procedures, as well as simulation studies to estimate its anticipated physics impact.
Speaker: Nikhita Khera (Technical University of Munich)
• 92
Design, performance, and analysis of a measurement of optical properties of Antarctic ice below 400 nm
The IceCube Neutrino Observatory, located at the geographic South Pole, is the world's largest neutrino telescope, instrumenting 1 km³ of Antarctic ice with 5160 photosensors to detect Cherenkov light. For the IceCube Upgrade, to be deployed during the 2022-23 polar field season, and the enlarged detector IceCube-Gen2, several new optical sensor designs are under development. One of these optical sensors, the Wavelength-shifting Optical Module (WOM), uses wavelength-shifting and light-guiding techniques to measure Cherenkov photons in the UV range from 250 to 380 nm. In order to understand the potential gains from this new technology, a measurement of the scattering and absorption lengths of UV light was performed in the SPICEcore borehole at the South Pole during the winter seasons of 2018/2019 and 2019/2020. For this purpose, a calibration device with a UV light source and a detector using the wavelength-shifting technology was developed. We present the design of the developed calibration device, its performance during the measurement campaigns, and the best fit comparing the data to a Monte Carlo simulation.
Speaker: Jannes Brostean-Kaiser (Z_ICE (IceCube+NG))
• 93
The Acoustic Module for the IceCube Upgrade
The IceCube Neutrino Observatory will be upgraded with more than 700 additional optical sensor modules and new calibration devices. Improved calibration will enhance IceCube’s physics capabilities both at low and high neutrino energies. An important ingredient for good angular resolution of the observatory is precise calibration of the positions of optical sensors. Ten acoustic modules, which are capable of receiving and transmitting acoustic signals, will be attached to the strings. These signals can additionally be detected by compact acoustic sensors inside some of the optical sensor modules. With this system we aim for calibration of the detectors’ geometry with a precision better than 10 cm by means of trilateration of the arrival times of acoustic signals. This new method will allow for an improved and complementary geometry calibration with respect to previously used methods based on optical flashers and drill logging data. The longer attenuation length of sound compared to light makes the acoustic module a promising candidate for IceCube-Gen2, which may have optical sensors on strings with twice the current spacing. We present an overview of the technical design and tests of the system as well as analytical methods for determining the propagation times of the acoustic signals.
Speaker: Mr Christoph Günther (III. Physikalisches Institut B, RWTH Aachen University)
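As an illustration of the trilateration principle described in the contribution above, the following minimal Python sketch reconstructs a receiver position from acoustic arrival times by least squares. The emitter coordinates, sound speed and noise level are invented for the example and are not IceCube Upgrade values.

```python
# Minimal sketch of position calibration by trilateration of acoustic
# arrival times. All numbers are illustrative assumptions.
import numpy as np
from scipy.optimize import least_squares

C_ICE = 3900.0  # assumed speed of sound in deep ice, m/s (approximate)

# Hypothetical known positions of acoustic emitters (x, y, z) in metres.
emitters = np.array([
    [0.0, 0.0, -1450.0],
    [125.0, 0.0, -1460.0],
    [0.0, 125.0, -1440.0],
    [125.0, 125.0, -1455.0],
])

def predicted_times(receiver_xyz, t0):
    """Arrival times = emission-time offset + distance / speed of sound."""
    d = np.linalg.norm(emitters - receiver_xyz, axis=1)
    return t0 + d / C_ICE

def residuals(params, measured):
    xyz, t0 = params[:3], params[3]
    return predicted_times(xyz, t0) - measured

# Simulated measurement for a "true" receiver position, plus timing noise.
true_xyz = np.array([60.0, 40.0, -1452.0])
measured = predicted_times(true_xyz, t0=0.0) + np.random.normal(0, 5e-6, 4)

fit = least_squares(residuals, x0=[50.0, 50.0, -1450.0, 0.0], args=(measured,))
print("reconstructed position:", fit.x[:3])
```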
• 94
Monitoring of optical properties of deep lake water
We present the results of one year of monitoring of the absorption and scattering lengths of light with wavelengths of 375–532 nm within the effective volume of the deep underwater neutrino telescope Baikal-GVD, measured with the device «BAIKAL-5D». The «BAIKAL-5D» was installed during the 2020 winter expedition at a depth of 1250 m. The device has a shaded point-like isotropic light source with a spectral resolution of about 3 nm. A wide-angle light receiver is moved by a stepper motor so that the distance between the receiver and the light source changes between 0.9 and 7.4 m. Absorption and scattering lengths were measured every week at 6 spectral points. The short-term variation of the absorption and scattering lengths was estimated.
Speaker: Evgenii Ryabov (Baikal-collaboration)
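As a hedged illustration of how an attenuation length can be extracted from intensity measured at several source-receiver distances, here is a toy fit assuming a point-like isotropic source and a simple exp(-r/λ)/r² law. The numbers are invented; the real analysis separates absorption and scattering, which this toy model does not.

```python
# Toy extraction of an attenuation length from intensity-vs-distance data,
# assuming an isotropic point source in a homogeneous medium.
import numpy as np
from scipy.optimize import curve_fit

def intensity(r, i0, lam):
    # 1/r^2 geometric dilution times exponential attenuation.
    return i0 * np.exp(-r / lam) / r**2

# Hypothetical receiver distances (m) and measured intensities (arb. units),
# spanning the 0.9-7.4 m range quoted in the abstract.
r = np.linspace(0.9, 7.4, 12)
data = intensity(r, i0=1.0, lam=20.0) * np.random.normal(1.0, 0.02, r.size)

popt, pcov = curve_fit(intensity, r, data, p0=[1.0, 15.0])
print(f"fitted attenuation length: {popt[1]:.1f} m")
```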
• 95
KM3NeT Detection Unit Line Fit reconstruction using positioning sensors data
KM3NeT is constructing two large neutrino detectors in the Mediterranean Sea: KM3NeT/ARCA, located near Sicily and aiming at neutrino astronomy, and KM3NeT/ORCA, located near Toulon and designed for neutrino oscillation studies.
The two detectors, together, will have hundreds of Detection Units (DUs), each with 18 Digital Optical Modules (DOMs) held vertical by buoyancy, forming a large 3D optical array for detecting the Cherenkov light produced after neutrino interactions. To properly reconstruct the direction of the incoming neutrino, the position of the DOMs must be known with an accuracy of better than 10 cm and, since the DUs are displaced by sea currents, the positions are measured every 10 minutes.
For this purpose, there are acoustic and orientation sensors inside the DOMs. An Attitude Heading Reference System (AHRS) chip provides the components of the acceleration and magnetic field in the DOM, from which it is possible to calculate the yaw, pitch and roll of each floor of the line. A piezo sensor detects the signals from fixed acoustic emitters on the sea floor, so as to position it by trilateration.
Data from these sensors are used as an input to reconstruct the shape of the entire line based on a DU Line Fit mechanical model. This poster presents an overview of the KM3NeT monitoring system, as well as the line fit model and its results.
Speaker: Chiara Poirè (Universitat Politécnica de Valéncia)
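As a hedged sketch of the attitude computation mentioned above, the snippet below derives yaw, pitch and roll from the acceleration and magnetic-field vectors of an AHRS chip using the standard tilt-compensated compass formulas. The axis conventions are assumptions; the actual KM3NeT calibration also applies per-board corrections not shown here.

```python
# Yaw/pitch/roll from accelerometer + magnetometer readings, using the
# common tilt-compensated compass formulas (axis conventions assumed).
import numpy as np

def attitude(acc, mag):
    ax, ay, az = acc / np.linalg.norm(acc)
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    # Rotate the magnetic field into the horizontal plane (tilt compensation).
    mx, my, mz = mag / np.linalg.norm(mag)
    mx2 = mx * np.cos(pitch) + mz * np.sin(pitch)
    my2 = (mx * np.sin(roll) * np.sin(pitch) + my * np.cos(roll)
           - mz * np.sin(roll) * np.cos(pitch))
    yaw = np.arctan2(-my2, mx2)
    return np.degrees([yaw, pitch, roll])

# Example reading: nearly level module, arbitrary field direction.
print(attitude(np.array([0.02, -0.01, 9.81]), np.array([0.2, 0.05, -0.4])))
```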
• 96
Camera Calibration for the IceCube Upgrade and Gen2
An upgrade to the IceCube Neutrino Telescope is currently under construction. For the Upgrade, seven new strings will be deployed in the central region of the 86 string IceCube detector to enhance the capability to detect neutrinos in the GeV range. One of the main science objectives of the Upgrade is an improved calibration of the IceCube detector to reduce systematic uncertainties related to the optical properties of the ice. We have developed a novel optical camera and illumination system that will be part of 700 newly developed optical modules to be deployed with the Upgrade. A combination of transmission and reflection photographic measurements will be used to measure the optical properties of bulk ice between strings and refrozen ice in the drill hole, to determine module positions, and to survey the local ice environments surrounding the sensor module. In this contribution, we present the production design, acceptance testing, and plan for post-deployment calibration measurements with the camera system.
Speaker: Woosik Kang (Sungkyunkwan University)
• 97
Development of an in-situ calibration device of firn properties for Askaryan neutrino detectors
High energy neutrinos (E>10$^{17}$ eV) are detected cost-efficiently via the Askaryan effect in ice, where a particle cascade induced by the neutrino interaction produces coherent radio emission that can be picked up by antennas installed below the surface. Good knowledge of the firn properties is required to reconstruct the neutrino properties. In particular, continuous monitoring of the snow accumulation (which changes the depth of the antennas) and of the index-of-refraction profile is crucial for an accurate determination of the neutrino's direction and energy. We present an in-situ calibration system that extends the radio detector station with a radio emitter to continuously monitor the firn properties by measuring the time differences between direct and surface-reflected signals (D'n'R). We optimized the station layout in a simulation study and quantified the achievable precision. We present 14 months of data from the ARIANNA detector on the Ross Ice Shelf, Antarctica, where a prototype of this calibration system was successfully used to monitor the snow accumulation with an unprecedented precision of 1 mm. We explore and test several algorithms to extract the D'n'R time difference from noisy data (including deep learning). This constitutes an in-situ test of the neutrino vertex-distance reconstruction using the D'n'R technique, which is needed to determine the neutrino energy.
Speaker: Mr Jakob Beise (Uppsala Universitet)
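The toy model below illustrates why the D'n'R delay tracks the antenna depth and hence the snow accumulation: it computes the delay between the direct ray and the surface-reflected ray assuming straight rays in a uniform medium. The real firn has a depth-dependent index of refraction, so rays bend; all numbers here are assumptions for illustration only.

```python
# Direct-vs-reflected (D'n'R) delay in a uniform-index toy firn.
import numpy as np

C = 0.299792458  # m/ns, vacuum speed of light
N_FIRN = 1.4     # assumed uniform index of refraction

def dnr_delay(depth_tx, depth_rx, horizontal_dist):
    direct = np.hypot(horizontal_dist, depth_tx - depth_rx)
    # Reflection off the surface = straight line to the mirror image
    # of the receiver above the surface.
    reflected = np.hypot(horizontal_dist, depth_tx + depth_rx)
    return N_FIRN * (reflected - direct) / C  # delay in ns

for depth in (10.0, 10.1):  # 10 cm of extra snow above the antennas
    print(f"depth {depth:5.1f} m -> delay {dnr_delay(depth, depth, 30.0):.2f} ns")
```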
• 98
Development of calibration system for a project of a new Baksan Large Neutrino Telescope
We present results of the development of a calibration system for the project of the new Baksan Large Neutrino Telescope. The calibration system is based on fast blue and UV InGaN and AlGaN ultra-bright and high-power light-emitting diodes (LEDs), a diffusing ball and fiber optics. Special fast electronic drivers for such LEDs were developed; the drivers are based on fast complementary and avalanche transistors. The diffusing ball is designed to provide uniform isotropic illumination of all photomultipliers of the detector. Thorough studies of the timing and light-yield parameters have been performed. Special emphasis is placed on careful studies of the compatibility of the calibration system parts with liquid scintillator and ultra-pure water.
Speaker: Mr Nikita Ushakov (Institute for Nuclear Research of the Russian Academy of Science, Prospekt 60-letiya Oktyabrya 7a, Moscow 117312, Russia)
• 99
In-situ gain calibration based on single byte PMT signals
Bouke Jung$^1$, Maarten de Jong$^2$, Paolo Fermani$^3$
on behalf of the KM3NeT collaboration
$^1$) University of Amsterdam, Nikhef
[email protected]
$^2$) Leiden University, Nikhef
[email protected]
$^3$) Sapienza Università di Roma
[email protected]
Present and foreseen neutrino observatories, such as IceCube, P-ONE, GVD, ANTARES and KM3NeT, have to operate in challenging environments, where high count rates go hand in hand with limited bandwidths.
To keep the data rates in these experiments within the allowed range, rigorous data reduction is essential.
At the same time, sufficient information needs to be recorded to accurately measure the neutrino properties.
The KM3NeT collaboration has developed a novel data acquisition procedure, in which each PMT signal is reduced to a data packet of 6 bytes, containing the PMT identifier (1 B), the hit time (4 B) and the duration over which the associated PMT pulse exceeded the threshold (1 B).
This talk highlights an analytical pulse-shape model which is used to perform in-situ calibrations of the gain and its spread, using only the time-over-threshold statistics associated with single photon hits.
Speaker: Bouke Jung (Nikhef and University of Amsterdam)
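As a minimal sketch of the 6-byte hit format described above (PMT identifier, hit time, time over threshold), the snippet below packs and unpacks such a packet. The field order and endianness are assumptions for illustration, not the KM3NeT wire format.

```python
# Pack/unpack a 6-byte hit: PMT id (1 B), hit time (4 B), ToT (1 B).
import struct

HIT = struct.Struct("<BIB")  # little-endian, no padding: 1 + 4 + 1 = 6 bytes
assert HIT.size == 6

packet = HIT.pack(17, 123456789, 26)   # pmt_id, hit time, time over threshold
pmt_id, hit_time, tot = HIT.unpack(packet)
print(pmt_id, hit_time, tot)
```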
• 100
Luminescence of ice as a new detection channel for neutrino telescopes
Natural water and ice are currently used as optical detection media in large scale neutrino telescopes, such as IceCube, KM3NeT/ANTARES and GVD. When charged particles, such as those produced by high energy neutrino interactions, pass through ice or water at relativistic speeds they induce Cherenkov light emission. This is detected by the optical modules of neutrino telescopes. However, slower moving particles, including potential exotic matter such as Magnetic Monopoles or Q-balls, cannot be detected using this channel.
A new kind of signature can be detected by using light emission from luminescence in water or ice. This detection channel enables searches for exotic particles which are too slow to emit Cherenkov light and currently cannot be probed by the largest particle detectors in the world, i.e. neutrino telescopes. Luminescence light is highly dependent on the ice structure, impurities, pressure and temperature which demands a comprehensive study.
Luminescence light is induced by highly ionizing particles passing through a medium and exciting the surrounding matter. To utilise this new detection channel in neutrino telescopes, laboratory measurements using water and ice as well as an in-situ measurement in Antarctic ice were performed. The experiments as well as the measurement results will be presented covering light yields, spectra and decay times. The impact on searches for new physics with neutrino telescopes will be discussed.
Speaker: Dr Anna Pollmann (Universität Wuppertal)
• 101
Method and device for tests of the laser optical calibration system for the Baikal-GVD underwater neutrino Cherenkov telescope
The large-scale deep underwater Cherenkov neutrino telescopes like Baikal-GVD, ANTARES or KM3NeT require methods for calibrating and testing their optical modules. These methods usually include laser-based systems, which allow checking the telescope response to light and real-time monitoring of the optical parameters of the water, such as the absorption and scattering lengths, which show seasonal changes in natural reservoirs. We will present a testing method for a laser calibration system and a set of dedicated tools developed for Baikal-GVD, which includes a specially designed and built, compact, portable and reconfigurable scanning station. This station is adapted to perform fast quality tests of the underwater laser sets just before their deployment in the telescope structure. The testing procedure includes an energy stability test of the laser device, a 3D scan of the light emission from the diffuser and an attenuation test of the optical elements of the laser calibration system. The test bench consists primarily of an automatic mechanical scanner with a movable Si detector, a beam splitter with a reference Si detector and, optionally, a Q-switched diode-pumped solid-state laser used for laboratory scans of the diffusers. The test bench enables a three-dimensional scan of the light emission from the diffusers, which are designed to obtain an isotropic distribution of photons around the point of emission. The results of the measurement can be shown on a 3D plot immediately after the test and may also be fed into a dedicated program simulating photon propagation in water, which allows checking the quality of the diffuser at the scale of the Baikal-GVD telescope geometry.
Speaker: Mr Konrad Kopański (The H. Niewodniczański Institute of Nuclear Physics Polish Academy of Sciences)
• 102
Positioning system for Baikal-GVD
Baikal-GVD is a kilometre scale neutrino telescope currently under construction in Lake Baikal. Due to water currents in Lake Baikal, individual photomultiplier housings are mobile and can drift away from their initial position. In order to accurately determine the coordinates of the photomultipliers, the telescope is equipped with an acoustic positioning system. The system consists of a network of acoustic modems, installed along the telescope strings and uses acoustic trilateration to determine the coordinates of individual modems. This contribution discusses the current state of the positioning in Baikal-GVD, including the recent upgrade to the acoustic modem polling algorithm.
Speaker: Mr Alexander Avrorin (INR RAS)
• 103
The Calibration Units of KM3NeT : multi-purpose calibration devices
KM3NeT is a deep-sea infrastructure composed of two neutrino telescopes being deployed in the Mediterranean Sea : ARCA, near Sicily in Italy, designed for neutrino astronomy and ORCA, near Toulon in France, designed for neutrino oscillations. These two telescopes are 3D arrays of optical modules used to detect the Cherenkov radiation, which is a signature of charged particles created in the neutrino interaction and propagating faster than light in the sea water.
To achieve the best performance for the event reconstruction in the telescopes, the exact location of the optical modules, affected by the sea current, must be known at any time and the timing resolution between optical modules must reach the sub-nanosecond level. Moreover, the properties of the environment, in which the telescopes are deployed, such as temperature and salinity, are continuously monitored to allow best modelling of the acoustic signal propagation in the water.
KM3NeT is going to deploy several dedicated Calibration Units hosting instruments dedicated to meet these calibration goals. The Calibration Base will host a Laser Beacon for time calibration and a long-baseline acoustic emitter and a hydrophone, which are part of the positioning system for the optical modules. Some of these Calibration Units will also be equipped with an Instrumentation Unit hosting environmental monitoring instruments.
This poster describes all the devices, features and purposes of the Calibration Units, with a special emphasis on the first such unit that will be deployed on the ORCA site in 2021.
Speaker: Rémy Le Breton (APC)
• Discussion: 52 Analysis, Methods, Catalogues, Community Tools, Machine Learning... | GAD-GAI 04
#### 04
• 104
Classification of Fermi-LAT sources with deep learning
Machine learning techniques are powerful tools for the classification of unidentified gamma-ray sources. We present a new approach based on dense and recurrent deep neural networks to classify unidentified or unassociated gamma-ray sources in the last release of the Fermi-LAT catalog (4FGL-DR2). Our method uses the actual measurements of the photon energy spectrum and time series as input for the classification, instead of specific, hand-crafted features. We focus on different classification tasks: the separation between extragalactic sources, i.e. Active Galactic Nuclei (AGN), and Galactic pulsars, the further classification of pulsars into young and millisecond pulsars and the sub-classification of AGN into different types. Since our method is very flexible, we generalize it to include multiwavelength data on the energy and time spectra coming from different observatories, as well as to account for uncertainties in the measurements and in the predicted classes. Our list of high-confidence candidate sources labelled by the neural networks provides targets for further multiwavelength observations to identify their nature, as well as for population studies.
Speaker: Dr Silvia Manconi (Institute for Theoretical Particle Physics and Cosmology, RWTH Aachen)
• 105
Detection methods for the Cherenkov Telescope Array at very-short exposure times
The Cherenkov Telescope Array (CTA) will be the next generation ground-based observatory for very-high-energy gamma-ray astronomy, with the deployment of tens of highly sensitive and fast-reacting Cherenkov telescopes. It will cover a wide energy range (20 GeV - 300 TeV) with unprecedented sensitivity. Our study is focused on real-time detection at very short timescales (from 1 to 100 seconds). We built and characterised an analysis and detection pipeline and tested it by verifying Wilks' theorem for false positives. The performance was evaluated in terms of sky localisation accuracy, detection significance and detection efficiency for different observing and analysis configurations. Our goal is to determine the feasibility of the analysis methods at very short exposure times. We also investigated the sensitivity degradation which is expected in a real-time analysis context and compared it to the requirement of being better than half of the CTA sensitivity. In this work, we present a general overview of the pipeline and the performance obtained for the use case of a blind search and detection following an external alert, such as from a gamma-ray burst or a gravitational wave event.
Speaker: Ambra Di Piano (INAF/OAS Bologna)
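As a hedged sketch of the Wilks'-theorem check mentioned above: under the null hypothesis, the test statistic TS of a nested fit with one extra free parameter should follow a chi-squared distribution with one degree of freedom. Here fake TS values stand in for background-only pipeline runs; the real verification uses the actual pipeline output.

```python
# Compare an empirical null TS distribution against the chi2(1) expectation.
import numpy as np
from scipy.stats import chi2, kstest

rng = np.random.default_rng(42)
ts_null = rng.chisquare(df=1, size=100_000)  # placeholder for empty-field TS

# Kolmogorov-Smirnov test of the empirical TS sample vs chi2(1).
stat, pvalue = kstest(ts_null, chi2(df=1).cdf)
print(f"KS p-value: {pvalue:.3f}")

# If Wilks' theorem holds, detection significance is sqrt(TS)
# for one extra parameter, e.g. TS = 25 corresponds to 5 sigma.
print("TS=25 ->", np.sqrt(25.0), "sigma")
```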
• 106
Application of Pattern Spectra and Convolutional Neural Networks to the Analysis of Simulated Cherenkov Telescope Array Data
The Cherenkov Telescope Array (CTA) will be the next generation gamma-ray observatory with more than 100 telescopes located in the northern and southern hemispheres. It will be the major global instrument for very high energy astronomy over the next decade, offering one order of magnitude better flux sensitivity than current generation ground-based gamma-ray telescopes. Each telescope will provide a snapshot of gamma-ray induced particle showers by capturing their Cherenkov emission at ground level. The simulation of such events provides images that can be used as training data for Convolutional Neural Networks (CNNs) to determine the energy and direction of the initial gamma rays. Compared to other state-of-the-art algorithms, analyses based on CNNs promise to further enhance the performance to be achieved by CTA.
Pattern spectra are commonly used tools for image classification and provide the distributions of the shapes and sizes of various objects comprising an image. The use of relatively shallow CNNs on pattern spectra would automatically select relevant combinations of features within an image, taking advantage of the 2D nature of pattern spectra. In this work, we will generate pattern spectra from simulated gamma-ray events instead of using the raw images themselves in order to train our CNN for energy and arrival direction reconstruction. This is different from other relevant learning and feature selection methods that have been tried in the past. Thereby, we aim to reduce the depth of our neural network to obtain a significantly faster and less computationally intensive algorithm, with minimal loss of performance.
Speaker: Jann Aschersleben
• 107
Source-morphology-independent background estimation for extended gamma-ray sources
We present a new background estimation method for a search for largely extended TeV gamma-ray sources with instruments using the imaging atmospheric Cherenkov technique. This novel method does not rely on the assumption of source morphology and uses the cosmic-ray-like events (events that fail gamma-hadron-separation cuts using shower-shape parameters) collected from the given field to estimate the gamma-ray-like background of the same field. We show that the use of cosmic-ray-like events allows an effective reduction of the systematic error on background subtraction. This report explains the methodology, presents the validation of the background method using the gamma-ray-free VERITAS (Very Energetic Radiation Imaging Telescope Array System) dark field data, and includes comparisons with conventional background methods. This new method is suitable for largely extended gamma-ray sources whose angular sizes exceed the capacity of the conventional background methods.
Speaker: Ruo Yu Shang (University of California, Los Angeles)
• 108
Analysis of the Cherenkov Telescope Array first Large Size Telescope real data using convolutional neural networks
The Cherenkov Telescope Array (CTA) is the future ground-based gamma-ray observatory and will be composed of two arrays of imaging atmospheric Cherenkov telescopes (IACTs) located in the Northern and Southern hemispheres respectively. The first CTA prototype telescope built on-site, the Large Size Telescope (LST-1), is under commissioning in La Palma and has already taken data on numerous known sources.
IACTs detect the faint flash of Cherenkov light indirectly produced after a very energetic gamma-ray photon has interacted with the atmosphere and generated an atmospheric shower. Reconstruction of the characteristics of the primary photons is usually done using a parameterization up to the third order of the light distribution of the images.
In order to go beyond this classical method, new approaches are being developed using state-of-the-art methods based on convolutional neural networks (CNN) to reconstruct the properties of each event (incoming direction, energy and particle type) directly from the telescope images. While promising, these methods are notoriously difficult to apply to real data due to differences (such as different levels of night sky background) between Monte Carlo (MC) data used to train the network and real data.
The GammaLearn project, based on these CNN approaches, has already shown an increase in sensitivity on MC simulations for LST-1 as well as a lower energy threshold. In this work, we apply the GammaLearn network to real data acquired by LST-1 and compare the results to the classical approach that uses random forests trained on extracted image parameters. The improvements on the background rejection, event direction, and energy reconstruction are discussed in this contribution.
Speaker: Thomas Vuillaume (Laboratoire d’Annecy de Physique des Particules, Univ. Grenoble Alpes, Univ. Savoie MontBlanc, CNRS, LAPP)
• 109
Analysis optimisation for more than 10 TeV gamma-ray astronomy with IACTs
The High Energy Stereoscopic System (H.E.S.S.) is one of the currently operating Imaging Atmospheric Cherenkov Telescopes. H.E.S.S. operates in the broad energy range from a few tens of GeV to more than 50 TeV, reaching its best sensitivity around 1 TeV. In this contribution, we present an analysis technique optimised for detection at the highest energies accessible to H.E.S.S. and aimed at improving the sensitivity above 10 TeV. It includes improved event direction reconstruction and gamma-hadron separation. For the first time, extensive air showers with event offsets up to 4.5 degrees from the camera center are also considered in the analysis, thereby increasing the effective field of view of H.E.S.S. from 5 to 9 degrees. Key performance parameters of the new high-energy analysis are presented and its applicability is demonstrated for representative hard-spectrum sources in the Milky Way.
Speaker: Iryna Lypova
• 110
A 3D likelihood analysis for KM2A data
The square kilometer array (KM2A) is the main array of the Large High Altitude Air Shower Observatory (LHAASO) and the most sensitive gamma-ray detector for energies above a few tens of TeV. We are developing a software pipeline based on the experimental data, Monte-Carlo simulations and the pointing track of the arrays. The pipeline is able to perform 3D fits (sky images at different energies) of KM2A data, similar to those used for Fermi-LAT and DAMPE gamma-ray analysis. This 3D likelihood analysis can fit source models of arbitrary morphology to the sky images and obtain energy spectra and detection significances simultaneously. The analysis with this software gives results consistent with those obtained using the traditional method.
Speaker: Xiaoyuan Huang
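As a minimal sketch of the Poisson likelihood underlying binned 3D analyses of this kind: counts cubes (sky positions times energy bins) are compared with model predictions, and nested models are compared via a likelihood-ratio test statistic. The model values below are placeholders, not the KM2A instrument model.

```python
# Cash statistic for binned Poisson data: C = 2 * sum(mu - n*ln(mu)),
# up to a model-independent constant.
import numpy as np

def cash(n, mu):
    mu = np.clip(mu, 1e-12, None)  # protect the log in empty bins
    return 2.0 * np.sum(mu - n * np.log(mu))

def ts(n, mu_bkg, mu_src_plus_bkg):
    # Likelihood-ratio test statistic between two nested models.
    return cash(n, mu_bkg) - cash(n, mu_src_plus_bkg)

# Toy counts cube: 40x40 sky pixels times 8 energy bins.
n = np.random.poisson(lam=5.0, size=(40, 40, 8))
print(cash(n, np.full(n.shape, 5.0)))
```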
• 111
A maximum-likelihood-based technique for detecting extended gamma-ray sources with VERITAS
Gamma-ray observations ranging from hundreds of MeV to tens of TeV are a valuable tool for studying particle acceleration and diffusion within our galaxy. Supernova remnants, pulsar wind nebulae, and star-forming regions are the main particle accelerators in our local Galaxy. Constructing a coherent physical picture of these astrophysical objects requires the ability to distinguish extended regions of gamma-ray emission, the ability to analyze small-scale spatial variation within these regions, and methods to synthesize data from multiple observatories across multiple wavebands. Imaging Atmospheric Cherenkov Telescopes (IACTs) provide fine angular resolution (<0.1 degree) for gamma-rays above 100 GeV. Typical data reduction methods rely on source-free regions in the field of view to estimate cosmic-ray background. This presents difficulties for sources with unknown extent or those which encompass a large portion of the IACT field of view (3.5 degrees for VERITAS). Maximum-likelihood-based techniques are well-suited for analysis of fields with multiple overlapping sources, diffuse background components, and combining data from multiple observatories. Such methods also offer an alternative approach to estimating the IACT cosmic-ray background and consequently an enhanced sensitivity to largely extended sources. In this proceeding, we report on the current status and performance of a maximum likelihood technique for the IACT VERITAS. In particular, we focus on how our method’s framework employs a dimension for gamma-hadron separation parameters in order to improve sensitivity on extended sources.
Speaker: Alisha Chromey (VERITAS Collaboration)
• 112
Bayesian Deep Learning for Shower Parameter Reconstruction in Water Cherenkov Detectors
Deep Learning methods are among the state-of-art of several computer vision tasks, intelligent control systems, fast and reliable signal processing and inference in big data regimes. It is also a promising tool for scientific analysis such as gamma/hadron discrimination.
We present an approach based on Deep Learning for the regression of shower parameters, namely the core position and the energy at the ground, using water Cherenkov detectors. We design our method using simulations. In this contribution, we explore the recovery of the shower's center coordinates. We evaluate the limits of such estimation near the borders of the arrays, including cases when the center is outside the detector's range. We also address the feasibility of recovering other parameters, such as ground energy. We used Bayesian Neural Networks to derive and quantify the systematic errors arising from Deep Learning models and to optimize the network design. The method could be easily adapted to estimate other parameters.
Speaker: Clecio R. Bom (Centro Brasileiro de Pesquisas Físicas)
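As a hedged illustration of how uncertainties can be attached to regressed shower parameters, the sketch below uses Monte Carlo dropout, a common practical approximation to a Bayesian neural network: dropout stays active at inference time, and the spread of repeated forward passes estimates the model uncertainty. The architecture and sizes are placeholders, not the network of the contribution above.

```python
# Monte Carlo dropout regression with uncertainty estimate (PyTorch).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(128, 128), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(128, 2),  # regressed shower core position (x, y)
)

def mc_dropout_predict(model, x, n_samples=100):
    model.train()  # keep dropout active during inference
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    # Mean = prediction, standard deviation = uncertainty estimate.
    return samples.mean(dim=0), samples.std(dim=0)

x = torch.randn(1, 64)  # placeholder detector-signal features
mean, std = mc_dropout_predict(model, x)
print(mean, std)
```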
• 113
Convolutional Neural Networks for Low Energy Gamma-Ray Air Shower Identification with HAWC
A major task in ground-based gamma-ray astrophysics analyses is to separate events caused by gamma rays from the overwhelming hadronic cosmic-ray background. In this talk we are interested in improving the gamma-ray regime below 1 TeV, where the gamma and cosmic-ray separation becomes more difficult. Traditionally, the separation has been done in particle sampling arrays by selections on summary variables which distinguish features between the gamma and cosmic-ray air showers, though the distributions become more similar at lower energies. The structure of the HAWC observatory, however, makes it natural to interpret the charge deposition collected by the detectors as pixels in an image, which makes it an ideal case for the use of modern deep learning techniques, allowing good-performance classifiers to be produced directly from low-level detector information.
Speaker: Ian Watson (University of Seoul)
• 114
Deep Learning Transient Detection with VERITAS
Ground-based gamma-ray observatories such as the VERITAS array of imaging atmospheric Cherenkov telescopes provide insight into very-high-energy (VHE, E>100 GeV) astrophysical transient events. Examples include the evaporation of primordial black holes and gamma-ray bursts. Identifying such an event with a serendipitous location and time of occurrence is difficult. Thus, employing a robust search method becomes crucial. An implementation of a transient detection method based on deep learning techniques for VERITAS will be presented. This data-driven approach significantly reduces the dependency on the characterization of the instrument response and the modelling of the expected transient signal. The response of the instrument is affected by various factors, such as the elevation of the source and the night sky background. The study of these effects allows enhancing the deep learning method with additional parameters to infer their influences on the data. This improves the performance and stability for a wide range of observational conditions. We use our method to investigate archival VERITAS data from 2012 to 2020 for second- to minute-scale VHE transients.
Speaker: Konstantin Johannes Pfrang (DESY)
• 115
Deep-learning applications to the multi-objective optimisation of IACT array layouts.
The relative disposition of individual telescopes on the ground is one of the important factors in optimising the performance of a stereoscopic array of imaging atmospheric Cherenkov telescopes (IACTs). Following previous attempts at an automated survey of the broad parameter space involved using evolutionary algorithms, in this paper we present a novel approach to optimising the array geometry based on deep learning techniques. The focus of this initial work is to test the algorithmic approach, based on a simplified toy model of the array. Despite being simplified, the model heuristics aim to capture the principal array-performance features relevant for the layout optimisation. Our final goal is to create an algorithm capable of scanning the large parameter space involved in the design of a large stereoscopic array of IACTs to assist optimisation of the array geometry (in the face of external constraints and multiple performance objectives). The use of simple heuristics precludes direct comparison to existing real-world experiments, but the analysis is internally consistent and gives insight into the potential of the technique. Deep learning techniques are being increasingly applied to tackle a number of problems in the field of gamma-ray astronomy, and this work represents a novel, original application of this modern computational technique to the field.
Speaker: Dr Bernardo Fraga (Centro Brasileiro de Pesquisas Físicas)
• 116
Deep-learning-driven event reconstruction applied to simulated data from a single Large-Sized Telescope of CTA
When very-high-energy gamma rays interact high in the Earth’s atmosphere, they produce cascades of particles that induce flashes of Cherenkov light. Imaging atmospheric Cherenkov telescopes (IACTs) detect these flashes and convert them into shower images that can be analyzed to extract the properties of the primary gamma ray. The dominant background for IACTs is comprised of images produced by cosmic hadrons, with typical noise-to-signal ratios of several orders of magnitude. The standard technique adopted to differentiate between images initiated by gamma rays and those initiated by hadrons is based on classical machine learning algorithms, such as Random Forests, that operate on a set of handcrafted parameters extracted from the images. Likewise, the inference of the energy and the arrival direction of the primary gamma ray is performed using those parameters. State-of-the-art deep learning techniques based on convolutional neural networks (CNNs) have the potential to enhance the event reconstruction performance, since they are able to autonomously extract features from raw images, exploiting the pixel-wise information washed out during the parametrization process.
Here we present the results obtained by applying deep learning techniques to the reconstruction of simulated events from a single, next-generation IACT, the Large-Sized Telescope (LST) of the Cherenkov Telescope Array. We use CNNs to separate gamma-ray-induced events from hadronic events and to reconstruct the properties of the former, showing that they perform better than the standard reconstruction technique. Three independent implementations of CNN-based event reconstruction models have been utilized in this work, producing consistent results.
Speaker: Pietro Grespan (INFN Padova Division)
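As a minimal sketch of the gamma/hadron separation step described above, here is a small convolutional classifier over a square camera image. Real LST analyses map the hexagonal pixel layout onto a grid and use considerably deeper architectures; all sizes here are placeholders.

```python
# Tiny CNN for binary gamma/hadron classification (PyTorch).
import torch
import torch.nn as nn

class GammaHadronCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 14 * 14, 64), nn.ReLU(),
            nn.Linear(64, 1),  # logit: gamma vs hadron
        )

    def forward(self, x):
        return self.head(self.features(x))

x = torch.randn(8, 1, 56, 56)  # batch of interpolated camera images
print(GammaHadronCNN()(x).shape)  # torch.Size([8, 1])
```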
• 117
Development of hybrid reconstruction techniques for TAIGA
The TAIGA-experiment aims to implement a hybrid detection technique of Extensive Air Showers (EAS) at TeV to PeV energies, combining the wide angle Cherenkov timing array HiSCORE with Imaging Air Cherenkov Telescopes (IACTs). The detector currently consists of 89 HiSCORE stations and two IACTs, distributed over an area of about 1 km².
Our goal is to introduce a new reconstruction technique, combining the good angular and shower core resolution of HiSCORE with the gamma-hadron separation power of the imaging telescopes. With the second IACT in operation, three different event types can be explored: IACT stereo, full hybrid (IACT stereo + stations) and mono hybrid (IACT mono + HiSCORE), the latter being the operational goal of TAIGA.
The status of the development of the full hybrid reconstruction and its verification using real data and simulation are presented.
Speaker: Michael Blank (UNI/EXP (Uni Hamburg, Institut fur Experimentalphysik))
• 118
Fast simulation of gamma/proton event images for the TAIGA-IACT experiment using generative adversarial networks
High energy cosmic rays and gamma rays interacting with the atmosphere produce extensive air showers (EAS) of secondary particles emitting Cherenkov light. Detected with a telescope, this light forms "images" of the air shower. In the TAIGA project, in addition to images obtained experimentally, model data are widely used. The difficulty is that the computational models of the underlying physical processes are very resource intensive, since they track the type, energy, position and direction of all secondary particles born in the EAS. This can lead to a lack of model data for future experiments. To address this challenge, we applied a machine learning technique called Generative Adversarial Networks (GAN) to quickly generate images of two types: gamma and proton events. As a training set, we used a sample of 2D images obtained with the TAIGA Monte Carlo simulation software, containing about 50,000 events. It was established experimentally that the generated images best fit the training set when two different networks are created for the two event types and trained separately. For gamma events, a discriminator with a minimum number of convolutional layers was required, while for proton events, more stable and higher-quality results are obtained if two additional fully connected layers are added to the discriminator. Testing the generators of both networks using third-party software showed that more than 90% of the generated images were found to be correct. Thus, the use of GANs provides reasonably fast and accurate simulations for the TAIGA project.
Speaker: Julia Dubenskaya (Lomonosov Moscow State University)
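As a hedged sketch of the adversarial training scheme described above, here is a minimal GAN loop in PyTorch. Flattened toy "images" and tiny multilayer perceptrons stand in for the TAIGA camera geometry and the tuned per-class architectures of the contribution; in the described setup, one such generator/discriminator pair would be trained per event class.

```python
# Minimal GAN training loop (PyTorch); all sizes are placeholders.
import torch
import torch.nn as nn

IMG, LATENT = 64, 16  # flattened toy image size and noise dimension

G = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(),
                  nn.Linear(128, IMG), nn.Tanh())
D = nn.Sequential(nn.Linear(IMG, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=32):
    # Placeholder for a batch of Monte Carlo shower images.
    return torch.randn(n, IMG).tanh()

for step in range(1000):
    real = real_batch()
    fake = G(torch.randn(real.size(0), LATENT))

    # Discriminator: push real images towards 1, generated images towards 0.
    loss_d = (bce(D(real), torch.ones(real.size(0), 1))
              + bce(D(fake.detach()), torch.zeros(real.size(0), 1)))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator: try to make the discriminator output 1 for generated images.
    loss_g = bce(D(fake), torch.ones(real.size(0), 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```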
• 119
Gammapy: a Python Package for Gamma-Ray Astronomy
Gammapy is a community-developed, open-source Python package for gamma-ray astronomy, built on the scientific Python ecosystem (NumPy, SciPy and Astropy). It provides methods for the analysis of gamma-ray data from many instruments, including imaging atmospheric Cherenkov telescopes and water Cherenkov detectors, as well as space-based observatories.
Starting from event lists and a description of the instrument-specific response functions (IRFs) stored in open FITS-based data formats, Gammapy implements the reduction of the input data and instrument response to binned WCS, HEALPix or region-based data structures. Thereby it handles the dependence of the IRFs on time, energy, and position on the sky. It offers a variety of background estimation methods for spectral, spatial and spectro-morphological analysis. Counts, background and IRF data are bundled in datasets and can be serialised, rebinned and stacked.
Gammapy supports modelling binned data using Poisson maximum likelihood fitting. It comes with built-in spectral, spatial and temporal models as well as support for custom user models, to model e.g. the energy-dependent morphology of gamma-ray sources. Multiple datasets can be combined in a joint-likelihood approach to handle time-dependent IRFs, different classes of events or the combination of data from multiple instruments. Gammapy also implements methods to estimate flux points, including likelihood profiles per energy bin, light curves, as well as flux and significance maps in energy bins.
In this contribution we present an overview of the most recent features and user interface of Gammapy along with example analyses using H.E.S.S, Fermi-LAT and simulated CTA data.
Speaker: Axel Donath
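As a hedged sketch of the fitting workflow described above, the snippet below attaches a sky model to a pre-reduced map dataset and runs a Poisson maximum likelihood fit. The file name "dataset.fits.gz" is a placeholder, and exact signatures may differ between Gammapy versions.

```python
# Minimal Gammapy fitting sketch (API details may vary by version).
from gammapy.datasets import MapDataset
from gammapy.modeling import Fit
from gammapy.modeling.models import (
    PointSpatialModel, PowerLawSpectralModel, SkyModel,
)

# A pre-reduced dataset bundling counts, background and IRFs.
dataset = MapDataset.read("dataset.fits.gz")

model = SkyModel(
    spatial_model=PointSpatialModel(lon_0="83.63 deg", lat_0="22.01 deg",
                                    frame="icrs"),
    spectral_model=PowerLawSpectralModel(index=2.5,
                                         amplitude="1e-12 cm-2 s-1 TeV-1",
                                         reference="1 TeV"),
    name="source",
)
dataset.models = [model]

fit = Fit()
result = fit.run(datasets=[dataset])  # Poisson maximum likelihood fit
print(result)
```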
• 120
Identifying muon rings in VERITAS data using convolutional neural networks trained on Muon Hunters 2-classified images
Muons from extensive air showers appear as rings in images taken with Cherenkov telescopes, such as VERITAS. These muon ring images are used for the calibration of the VERITAS telescopes; however, this calibration process can be improved with a more efficient muon-identification algorithm. Convolutional neural networks (CNNs) are used in many state-of-the-art image-recognition systems and are ideal for this purpose. However, by training a CNN on a dataset labelled by existing algorithms, the performance of the CNN would be limited by the suboptimal muon-identification efficiency of the original algorithms. Muon Hunters 2 is a citizen science project that asks users to label grids of VERITAS telescope images, stating which images contain muon rings. Each image is labelled 10 times by independent volunteers, and the votes are aggregated and used to assign a 'muon' or 'non-muon' label to the corresponding image. An analysis was performed using an expert-labelled dataset in order to determine the optimal vote-fraction cut-offs for assigning labels to each image for CNN training, optimised so as to identify as many muon images as possible while avoiding false positives. The performance of this model will be presented and compared to existing muon-identification algorithms employed in the VERITAS data analysis software. Using any extra images identified for calibration may require improvements to the light-distribution correction algorithm for muon rings with non-zero impact parameters.
Speaker: Mr Kevin Flanagan (University College Dublin)
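As a minimal sketch of the vote-aggregation step described above: each image receives 10 independent volunteer votes, and a vote-fraction cut assigns the training label. The cut values below are placeholders, not the optimised cut-offs of the contribution.

```python
# Aggregate citizen-science votes into training labels via fraction cuts.
import numpy as np

def label_images(votes, muon_cut=0.8, non_muon_cut=0.2):
    """votes: (n_images, 10) array of 0/1 volunteer classifications."""
    frac = votes.mean(axis=1)
    labels = np.full(votes.shape[0], "ambiguous", dtype=object)
    labels[frac >= muon_cut] = "muon"          # confident muon rings
    labels[frac <= non_muon_cut] = "non-muon"  # confident non-muons
    return labels  # "ambiguous" images would be excluded from CNN training

votes = np.random.binomial(1, 0.5, size=(1000, 10))  # toy vote matrix
print(dict(zip(*np.unique(label_images(votes), return_counts=True))))
```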
• 121
Matched Runs Method to Study Extended Regions of Gamma-ray Emission
Imaging atmospheric Cherenkov telescopes, such as the Very Energetic Radiation Imaging Telescope Array System (VERITAS), are uniquely suited to resolve the detailed morphology of extended regions of gamma-ray emission. However, standard VERITAS data analysis techniques have insufficient sensitivity to gamma-ray sources spanning the VERITAS field of view (3.5°), due to difficulties with background estimation. For analysis of such spatially extended sources with 0.5° to greater than 2° radius, we developed the Matched Runs Method. This method derives background estimations for observations of extended sources using matched separate observations of known point sources taken under similar observing conditions. Our technique has been validated by application to archival VERITAS data. Here we present a summary of the Matched Runs Method and multiple validation studies on different gamma-ray sources using VERITAS data.
Speaker: Binita Hona (University of Utah)
• 122
New methods to reconstruct Xmax and the energy of gamma-ray air showers with high accuracy in large wide-field observatories
New methods to reconstruct the slant depth of the maximum of the longitudinal profile (Xmax) of high-energy showers initiated by gamma rays, as well as their energy (E0), are presented. The methods were developed for gamma rays with energies ranging from a few hundred GeV to around 10 TeV. An estimator of Xmax is obtained, event by event, from its correlation with the distribution of the particles' arrival times at the ground, or with the signal at the ground for lower energies. An estimator of E0 is obtained, event by event, using a parametrization that has as inputs the total measured energy at the ground, the amount of energy contained in a region near the shower core, and the estimated Xmax. Resolutions of about 40 (20) g/cm² and about 30% (20%) for Xmax and E0, respectively, are obtained at 1 (10) TeV, considering vertical showers. The results are promising and can open new physics avenues for large wide-field-of-view gamma-ray observatories. The dependence of the resolutions on experimental conditions is discussed.
Speaker: Ruben Conceição (LIP - Laboratório de Instrumentação e Física Experimental de Partículas)
• 123
Prototype Open Event Reconstruction Pipeline for the Cherenkov Telescope Array
The Cherenkov Telescope Array (CTA) is the next-generation gamma-ray observatory currently under construction. It will improve over the current generation of imaging atmospheric Cherenkov telescopes (IACTs) by at least one order of magnitude in sensitivity and be able to observe the whole sky from a northern site in La Palma, Spain, and a southern one in Paranal, Chile. CTA will also be the first open gamma-ray observatory. Accordingly, the data analysis pipeline is developed as open-source software. The event reconstruction pipeline accepts raw data of the telescopes and processes it to produce suitable input for the higher-level science tools. Its primary tasks include estimating the physical properties of each recorded shower and providing the corresponding instrument response functions. Ctapipe is a framework providing algorithms and tools to facilitate raw data calibration, image extraction, image parameterization and event reconstruction. Its main focus is currently the analysis of simulated data, but it has also been successfully applied to data obtained with the first CTA prototype telescopes, such as the Large Size Telescope 1. PyIRF is a library to calculate IACT instrument response functions, needed to obtain physics results like spectra and light curves, from the reconstructed event lists. Building on these two, protopipe is a prototype for the event reconstruction pipeline for CTA. Recent developments in these software packages will be presented.
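For orientation, a minimal ctapipe processing loop might look like the sketch below. The class names follow recent ctapipe releases (EventSource, CameraCalibrator, ImageProcessor, ShowerProcessor), but the exact API varies between versions, and the input file name is a placeholder.

```python
from ctapipe.io import EventSource
from ctapipe.calib import CameraCalibrator
from ctapipe.image import ImageProcessor
from ctapipe.reco import ShowerProcessor

# "gamma_test.simtel.gz" is a placeholder simulation file
with EventSource("gamma_test.simtel.gz") as source:
    calibrate = CameraCalibrator(subarray=source.subarray)
    process_images = ImageProcessor(subarray=source.subarray)
    process_shower = ShowerProcessor(subarray=source.subarray)
    for event in source:
        calibrate(event)        # raw waveforms -> calibrated images
        process_images(event)   # image cleaning and Hillas parameterization
        process_shower(event)   # stereo geometry reconstruction
        print(event.index.event_id)
```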
Speaker: Maximilian Nöthe (TU Dortmund)
• 124
Reconstruction of extensive air shower images of the Large Size Telescope prototype of CTA using a novel likelihood technique
Ground-based gamma-ray astronomy requires reconstructing extensive air showers initiated by gamma rays impinging on the atmosphere. Imaging atmospheric Cherenkov telescopes collect the Cherenkov light induced by secondary charged particles in extensive air showers, creating an image of the shower in a camera. This image is parametrized and used to evaluate the type, energy and arrival direction of the primary particle that initiated the shower. This contribution shows the results of a novel reconstruction method based on likelihood maximization. The method is applied to observations of the Crab Nebula acquired with the Large Size Telescope prototype (LST-1) deployed at the northern site of the Cherenkov Telescope Array. The novelty with respect to previous likelihood reconstruction methods lies in the definition of a likelihood per single camera pixel, accounting not only for the total measured charge but also for its development over time: the waveform acquired by each pixel involved in the reconstruction of the shower is taken into account. This reconstruction, which also considers the response characteristics of the sensor in the camera pixel, leads to improved reconstruction of shower images and consequently allows the properties of the primary particle to be recovered with improved accuracy.
Speaker: Dr Gabriel Emery (University of Geneva - DPNC)
• 125
Reconstruction of stereoscopic CTA events using deep learning with CTLearn
The Cherenkov Telescope Array (CTA), conceived as an array of tens of imaging atmospheric Cherenkov telescopes (IACTs), is an international project for a next-generation ground-based gamma-ray observatory, aiming to improve on the sensitivity of current-generation instruments by an order of magnitude and provide energy coverage from 20 GeV to more than 300 TeV. Arrays of IACTs probe the very-high-energy gamma-ray sky. Their working principle consists of the simultaneous observation of air showers initiated by the interaction of very-high-energy gamma rays and cosmic rays with the atmosphere. Cherenkov photons induced by a given shower are focused onto the camera plane of the telescopes in the array, producing a multi-stereoscopic record of the event. This image contains the longitudinal development of the air shower, together with its spatial, temporal, and calorimetric information. The properties of the originating very-high-energy particle (type, energy and incoming direction) can be inferred from those images by reconstructing the full event using machine learning techniques. In this contribution, we present a purely deep-learning driven, full-event reconstruction of simulated, stereoscopic IACT events using CTLearn, a package for loading and manipulating IACT data and for running deep learning models, using pixel-wise camera data as input.
Speaker: Tjark Miener (IPARCOS, UCM)
• 126
Studies of Gamma Ray Shower Reconstruction Using Deep Learning
The ALTO project aims to build a particle detector array for very-high-energy gamma-ray observations, optimized for soft-spectrum sources. The accurate reconstruction of gamma-ray events, in particular their energies, using a surface array is an especially challenging problem at the low energies ALTO aims to optimize for. In this contribution, we leverage Convolutional Neural Networks (CNNs) to improve reconstruction performance at lower energies (< 1 TeV) compared to the SEMLA analysis procedure, a more traditional method using mainly manually derived features.
We present performance figures using different network architectures and training settings, both in terms of accuracy and training time, as well as the impact of various data augmentation techniques.
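As a generic illustration of the CNN-based regression described above (not ALTO's actual architecture, input shape, or training configuration), a compact Keras model could look like:

```python
import tensorflow as tf

# Placeholder input shape; ALTO's detector images differ.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1),  # regress e.g. log10(E/TeV)
])
model.compile(optimizer="adam", loss="mse")
```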
Speaker: Tomas Bylund (Linnaeus University)
• 127
The identification of proton and gamma components in cosmic-rays based on deep learning algorithm
The Large High Altitude Air Shower Observatory (LHAASO) is a multi-component experiment located at Daocheng (4410 m a.s.l.), Sichuan province, P.R. China. The identification of gamma rays against the proton background is an important foundation and premise for gamma-ray research. In this paper, we use deep learning algorithms to extract the key features of events directly from a large amount of raw information, and explore the power of the LHAASO experiment to discriminate gamma rays from protons. Convolutional Neural Networks (CNN), Deep Neural Networks (DNN) and Graph Neural Networks (GNN) are each trained and tested on a large number of simulated events. Compared with traditional methods, we find that the trained CNN, DNN and GNN models all improve proton-gamma discrimination.
Speaker: F Zhang (Southwest Jiaotong University)
• 128
The use of convolutional neural networks for processing images from multiple IACTs in the TAIGA experiment
The TAIGA experiment uses a hybrid detection system for cosmic and gamma rays that currently includes three imaging atmospheric Cherenkov telescopes (IACTs). Previously we used convolutional neural networks to select gamma-ray events and estimate the energy of the gamma rays based on an image from a single telescope. Subsequently we adapted these techniques to use data from multiple telescopes, increasing the quality of selection and the accuracy of the estimates. All results were obtained with simulated data from the TAIGA Monte Carlo software.
Speaker: Stanislav Polyakov (SINP MSU)
• 129
Using Machine Learning for gamma/hadron separation with HAWC
Background showers triggered by hadrons represent over 99.9% of all particles arriving at ground-based gamma-ray observatories. An important stage in the data analysis of these observatories, therefore, is the removal of hadron-triggered showers from gamma showers. Currently, the High-Altitude Water Cherenkov (HAWC) gamma-ray observatory employs an algorithm that is a single cut in two variables, unlike other ground-based gamma-ray observatories (e.g. HESS, VERITAS) which employ a large number of variables to separate the primary particles. In this work, we explore machine learning techniques (Boosted Decision Trees and Neural Networks) to identify the primary particles that were detected by HAWC. Our new gamma/hadron separation techniques were tested on data from the Crab nebula, the standard reference in Very High Energy astronomy, showing an improvement compared to the standard HAWC background rejection method.
Speaker: Tomás Capistrán (Instituto de Astronomía, UNAM)
• 1:30 PM
Break
• Plenary: Review 01 01
#### 01
Convener: Jim Hinton (MPIK)
• 130
Constraining Magnetic Fields at Galactic Scales
Magnetic fields are ubiquitous in the Universe, from compact objects to cosmic scales, and they play a central role in a variety of astrophysical processes. Surprisingly, even the Galactic magnetic field (GMF) in our own Milky Way remains poorly understood because of the challenges of observing it and the complexity of the phenomena we use to study it. Though we still have too many models that might fit the data, this is not to say that the field has not developed in the last few years. Radio observations have been used since the 1970s to study the GMF and remain one of the most useful tracers. More recently, surveys of polarized dust have given us a new observable that is complementary to the more traditional radio tracers. A variety of other new tracers and related measurements are becoming available to improve current understanding. In this talk, I will summarize: the tracers available; the models that have been studied; what has been learned so far; what the caveats and outstanding issues are; and one opinion of where the most promising future avenues of exploration lie.
Speaker: Tess Jaffe
• 131
Gamma-Ray Bursts detected at Very High Energies
Very-high-energy (VHE, >100 GeV) radiation from GRBs eluded all attempts at detection by Cherenkov telescopes for several years, until the recent detection of strong VHE emission from the long GRB 190114C, located at redshift z=0.42.
The inclusion of TeV data in the modeling of afterglow multi-wavelength (from radio to X-rays) observations allows us to estimate physical properties that are usually unconstrained, such as the density of the external medium, the energy of the emitting particles, and the strength of the shock-amplified magnetic field. Since the first announcement of a VHE detection from a GRB, three additional GRBs have been firmly detected by Cherenkov telescopes. In this talk I review the present status of observations and interpretation of VHE emission from GRBs. Prospects for future detections with the ASTRI Mini-Array and with CTA, revised in light of these recent observations, show that the VHE band is a very promising energy window for advancing our knowledge of GRB physics.
Speaker: Lara Nava
• 3:30 PM
Break
• Plenary: Highlight 02 01
#### 01
Convener: Stefan Funk (ECAP)
• 132
A tidal disruption event coincident with a high-energy neutrino
IceCube discovered a diffuse flux of high-energy neutrinos in 2013, and recently identified the flaring gamma-ray blazar TXS 0506+056 as a likely neutrino source. However, a combined analysis of the entire resolved gamma-ray blazar population limited the contribution of such objects to no more than 27% of the total neutrino flux, leaving the vast majority of the neutrino flux unexplained. Here we present the identification of a second probable neutrino source, the Tidal Disruption Event (TDE) AT2019dsg, found as part of a systematic search for optical counterparts to high-energy neutrinos using the Zwicky Transient Facility. The probability of finding such a TDE with our follow-up program by chance is just 0.2%. Multi-wavelength observations reveal the presence of a central engine powering particle acceleration in AT2019dsg, and confirm that this object can satisfy necessary conditions for PeV neutrino production.
Speaker: Robert Stein (Z_ICE (IceCube+NG))
• 133
Transition from Galactic to Extragalactic Cosmic Rays
Understanding the nature of the transition from Galactic to extragalactic cosmic rays (GCRs and EGCRs) has become a challenge in light of recent spectral and composition data. Galactic contributions appear to be disfavoured at energies beyond $10^{17} \, {\rm eV}$, where the composition becomes lighter, and extragalactic sources appear to inject mixed compositions, complicating the description of the EGCR contribution below "ankle" energies. As a result, the measured flux in the transition region cannot easily be accounted for. With the model-dependence of proposed extensions to both the Galactic and extragalactic contributions, a deeper understanding of CR propagation is in order, particularly within the Galactic magnetic field (GMF), as propagation herein shifts from diffusive to ballistic at these energies, which is expected to lead to a range of effects on CRs.
Using CRPropa3, we study these effects for rigidities between $10^{16-20} \, {\rm V}$. We identify various features at rigidities where the gyroradius equals typical length scales of the Galaxy, suggesting causes related to changes in the propagation regime. We further quantify modifications in the spectrum, composition and arrival direction of GCRs and EGCRs. We find that the GMF naturally induces a flux suppression of GCRs towards higher rigidities. This, in consequence, would lead to an increase in the mean mass of GCR primaries up to energies around the "ankle" in the cosmic-ray spectrum. The distribution of GCR arrival directions is also shown to be correlated with the Galactic plane for rigidities above $10^{17}\, {\rm V}$. EGCRs experience no flux modification in the GMF if injected isotropically. Injection of pure dipoles, as well as single-source scenarios, indicates that the GMF isotropises injected anisotropies below $10^{18} \, {\rm V}$, but can still cause flux modifications depending on the direction of the anisotropy. Overall consequences for the transition from GCRs to EGCRs will be discussed.
Speaker: Alex Kääpä (Bergische Universität Wuppertal)
• 134
Physics of gamma-ray burst afterglow: implications of H.E.S.S. observations
Recently, the observational study of gamma-ray bursts (GRBs) in the very-high-energy (VHE) regime has quickly advanced with three successful detections. Currently, the list of published VHE GRBs contains GRB 180720B, GRB 190114C, and GRB 190829A. The fortunate proximity of the last event observed with H.E.S.S. (GRB 190829A occurred at z~0.08) allowed an unexpectedly long signal detection, up to 56 hours after the trigger, and an accurate spectral determination in a broad energy interval, spanning between 0.18 and 3.3 TeV. The obtained temporal and spectral properties of the VHE emission appeared to be remarkably similar to those seen in the X-ray band with Swift-XRT. However, in the framework of the standard synchrotron self-Compton (SSC) scenario, such coherent behavior is expected only during the early period of the afterglow phase, when the forward shock propagates with a large bulk Lorentz factor ($\Gamma>100$). SSC models are able to render VHE spectra compatible with the H.E.S.S. measurements only under extreme assumptions on the properties of the circumburst medium. We discuss the implications of the GRB 190829A detection for afterglow modeling and GRB physics.
Speaker: Dmitry Khangulyan (Rikkyo University)
• 5:30 PM
Break
• Discussion: 05 CR Mass composition | CRI 03
#### 03
• 135
Mass composition anisotropy with the TA SD data
Mass composition anisotropy is predicted by a number of theories describing sources of ultra-high-energy cosmic rays.
Event-by-event determination of the type of the primary cosmic-ray particle is impossible due to large shower-to-shower fluctuations, so the mass composition is usually obtained by averaging some composition-sensitive observable, determined independently for each extensive air shower (EAS), over a large number of events.
In the present study we propose to employ the observable $\xi$ used in the mass composition analysis of the Telescope Array surface detector (TA SD) data for the mass composition anisotropy analysis.
The $\xi$ variable is determined with the use of Boosted Decision Trees (BDT) technique trained with the Monte-Carlo sets, and the $\xi$ value is assigned for each event, where $\xi=1$ corresponds to an event initiated by the primary iron nuclei and $\xi=-1$ corresponds to a proton event.
Use of $\xi$ distributions obtained for the Monte-Carlo sets allows us to separate proton and iron candidate events from a data set with some given accuracy and study its distributions over the observed part of the sky.
Results on the mass composition anisotropy with the 12-year TA SD data set will be presented, and possible applications to cosmic-ray source models will be discussed. This presentation contains results we would like to include in a TA highlight talk.
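A minimal sketch of how such a $\xi$-like observable can be built, using scikit-learn's gradient-boosted trees as a stand-in for the actual TA SD BDT; the observables and toy labels below are placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X_mc = rng.normal(size=(2000, 5))  # placeholder composition-sensitive observables
y_mc = (X_mc[:, 0] + 0.5 * X_mc[:, 1] > 0).astype(int)  # toy labels: 1 = iron, 0 = proton

clf = GradientBoostingClassifier(n_estimators=300, max_depth=3).fit(X_mc, y_mc)

def xi(features):
    """Map the classifier output to [-1, +1]: -1 proton-like, +1 iron-like."""
    return 2.0 * clf.predict_proba(features)[:, 1] - 1.0
```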
Speaker: Yana Zhezher (ICRR, University of Tokyo & INR RAS, Moscow)
• 136
Cosmic-ray mass composition with the TA SD 12-year data
Telescope Array (TA) is the largest ultra-high-energy cosmic-ray (UHECR) observatory in the Northern Hemisphere. It is dedicated to detecting extensive air showers (EAS) in hybrid mode, both by measuring the shower's longitudinal profile with fluorescence telescopes and by measuring the particle footprint on the ground with the surface detector (SD) array. While fluorescence telescopes can measure the most composition-sensitive characteristic of an EAS, the depth of the shower maximum (Xmax), they have the drawback of a small duty cycle. This work aims to study the UHECR composition based solely on the surface detector data. For this task, a set of composition-sensitive observables obtained from the SD data is used in a machine-learning method – the Boosted Decision Tree. We will present the results on the UHECR mass composition based on the 12-year TA SD data using this technique, and we will discuss the possible systematics imposed by the hadronic interaction models.
Speaker: Yana Zhezher (ICRR, University of Tokyo & INR RAS, Moscow)
• 137
The measurements of the cosmic ray energy spectrum and the depth of maximum shower development of Telescope Array Hybrid trigger events
The Telescope Array experiment is an ultra-high-energy cosmic-ray observatory located in Millard County, Utah, USA. The observatory consists of 3 fluorescence detector (FD) stations and 507 surface detectors (SD) that cover an area of ~700 km². The hybrid trigger is an external trigger system for the SD array that prompts the SD to perform data acquisition when an FD detects a shower-like event. In comparison with the SD autonomous trigger, the hybrid trigger allows the SD to collect data on air showers with primary energy below 10^18.5 eV, where the efficiency of the SD autonomous trigger decreases rapidly. We present measurements of the cosmic-ray energy spectrum and the depth of maximum shower development of hybrid trigger events observed from October 2010 to June 2019.
Speaker: Mr Heungsu Shin (ICRR, University of Tokyo)
• 138
Combined fit of the energy spectrum and mass composition across the ankle with the data measured at the Pierre Auger Observatory
The combined fit of the energy spectrum and mass composition data above $5\cdot10^{18}\:\mathrm{eV}$ suggested the presence of extragalactic sources ejecting ultra-high-energy cosmic rays with relatively low maximum energies, hard spectral indices and mixed chemical compositions, dominated by the contribution of intermediate mass groups. Here we present an extension of the fit to lower energies, to include the feature observed near $5\cdot10^{18}\:\mathrm{eV}$ in the all-particle energy spectrum, the so-called ankle.
We show that it is possible to generate such a change of slope assuming that the flux below the ankle is provided by the superposition of some additional contributions. The simplest extension of this sort consists of introducing a supplemental extragalactic component at low energy, characterised by different physical parameters with respect to the one contributing above the ankle: such a component may originate from a different population of sources or be provided by interactions occurring in the acceleration sites. In this framework we also explore the possibility of including the end of a Galactic contribution at low energies.
The fit suggests that these scenarios provide a reasonable description of the measurements across the ankle, without affecting the results obtained for the above-ankle region.
In order to evaluate our capability to constrain the source models, we finally discuss the impact of the main experimental systematic uncertainties and of the theoretical models choice on the fit results.
Speaker: Eleonora Guido (Università degli Studi di Torino)
• 139
Results from the KASCADE-Grande data analysis
Speaker: Donghwa Kang (KIT)
• 140
New insights from old cosmic rays: A novel analysis of archival KASCADE data
Cosmic-ray data collected by the KASCADE air shower experiment are competitive in terms of quality and statistics with those of modern observatories. We present a novel mass-composition analysis based on archival data acquired from 1998 to 2013, provided by the KASCADE Cosmic ray Data Center (KCDC). The analysis is based on modern machine-learning techniques trained on simulation data provided by KCDC. We present spectra for individual groups of primary nuclei, the results of a search for anisotropies in the event arrival directions taking mass composition into account, and a search for gamma-ray candidates in the PeV energy domain.
Speaker: Dmitriy Kostiunin (Z_HESS (High Energy Steroscopic System))
• 141
Cosmic Ray Composition between 2 PeV and 2 EeV measured by the TALE Fluorescence Detector
The Telescope Array (TA) cosmic-ray detector located in the State of Utah in the United States is the largest ultra-high-energy cosmic-ray detector in the northern hemisphere. The Telescope Array Low Energy Extension (TALE) fluorescence detector (FD) was added to TA in order to lower the detector's energy threshold, and has succeeded in measuring the cosmic-ray energy spectrum down to PeV energies by making use of the direct Cherenkov light produced by air showers. In this contribution we present the results of a measurement of the cosmic-ray composition using TALE FD data collected over a period of ~4 years. TALE FD data are used to measure the $X_{max}$ distributions of showers seen in the energy range of $10^{15.3}$ - $10^{18.3}$ eV. The data distributions are fit to Monte Carlo distributions of {H, He, N, Fe} cosmic-ray primaries for energies up to $10^{18}$ eV. Mean $X_{max}$ values are measured for the full energy range. TALE observes a light composition at the "Knee", which gets gradually heavier as energy increases toward the "Second Knee". An increase in the $X_{max}$ elongation rate is observed at energies just above $10^{17.3}$ eV, indicating a change in the cosmic-ray composition from a heavier to a lighter mix of primaries.
Speaker: Tareq AbuZayyad (Loyola University Chicago; University of Utah)
• 142
Cosmic Ray Composition in the Second Knee Region as Measured by the TALE Hybrid Detector
The Telescope Array Low-energy Extension (TALE) experiment is a hybrid air shower detector for the observation of air showers induced by cosmic rays with energy above 10$^{16}$ eV. The TALE detector consists of a Fluorescence Detector (FD) station with 10 FD telescopes located at the TA Middle Drum FD Station (itself made up of 14 FD telescopes), and a Surface Detector (SD) array made up of 80 scintillation counters, including 40 with 400 m spacing and 40 with 600 m spacing. A triggering system for the TALE-SD using an external trigger from the TALE-FD, a so-called hybrid trigger, allows for a lower energy threshold. The TALE hybrid trigger system has been working since 2018. Here we present an estimate of the performance of hybrid detection using a Monte Carlo simulation, and a first measurement of the cosmic ray composition using the TALE-Hybrid detector.
Speaker: Keitaro Fujita (Graduate School of Science, Osaka City University)
• 143
Cosmic-Ray Studies with the Surface Instrumentation of IceCube
IceCube is a cubic-kilometer Cherenkov detector installed in deep ice at the geographic South Pole. IceCube's surface array, IceTop, measures the electromagnetic signal and mainly low-energy muons from extensive air showers above several 100 TeV primary energy, with shower bundles and high-energy muons detected by the in-ice detectors. In combination, IceCube and IceTop provide unique opportunities to study cosmic rays in detail with large statistics. This contribution summarizes recent results from these studies. In addition, the IceCube-Upgrade will include a considerable enhancement of the surface detector through the installation of scintillation detectors and radio antennas and possibly small air-Cherenkov telescopes. We will discuss the results of the prototype detectors installed at the South Pole and the prospects of this enhancement as well as the surface array planned for IceCube-Gen2.
Speaker: Andreas Haungs (Karlsruhe Institute of Technology - KIT)
• 144
HAWC measurements of the energy spectra of cosmic ray protons, helium and heavy nuclei in the TeV range
Current knowledge of the relative abundances and the energy spectra of the elemental mass groups of cosmic rays in the 10 TeV - 1 PeV interval are uncertain. This situation prevents carrying out precision tests that may lead to distinguish among the existing hypotheses on the origin and propagation of TeV cosmic rays in the galaxy. In order to learn more about the mass composition of these particles, we have employed HAWC data from hadron induced air showers in order to determine the spectra of three mass groups of cosmic rays: protons, helium and heavy nuclei with Z > 2. The energy spectra were estimated by using the Gold unfolding technique on the 2D distribution of the lateral shower age against the estimated primary energy of events with arrival zenith angles smaller than 45 degrees. The study was carried out based on simulations using the QGSJET-II-04 model. Results are presented for primary cosmic-ray energies from 8 TeV to 400 TeV. They reveal that the aforementioned cosmic ray spectra exhibit fine structures within the above primary energy range.
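For reference, Gold's unfolding is an iterative, positivity-preserving multiplicative update; a bare-bones version is sketched below (the binning, regularization, and uncertainty treatment of the actual HAWC analysis are omitted, and the response matrix is a placeholder).

```python
import numpy as np

def gold_unfold(A, y, n_iter=100):
    """A: response matrix mapping true bins -> measured bins; y: measured counts."""
    x = np.full(A.shape[1], y.sum() / A.shape[1])  # flat, positive starting spectrum
    Aty = A.T @ y
    AtA = A.T @ A
    for _ in range(n_iter):
        x = x * Aty / np.maximum(AtA @ x, 1e-12)   # multiplicative update keeps x >= 0
    return x
```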
Speaker: Juan Carlos Arteaga Velazquez (Universidad Michoacana de San Nicolas de Hidalgo)
• 145
Indication of a mass-dependent anisotropy above $10^{18.7}\,$eV in the hybrid data of the Pierre Auger Observatory
We test the hypothesis of an anisotropy in the mass of cosmic-ray primaries as a function of galactic latitude. The mass estimate is made using the depth of shower maximum, $X_{\text{max}}$, from hybrid events measured at the Pierre Auger Observatory. The 14 years of available data are split into on- and off-plane regions using the galactic latitude of each event to form two distributions in $X_{\text{max}}$, which are compared using the two-sample Anderson-Darling test. A scan over a subset of the data is used to select an optimal threshold energy of $10^{18.7}\,$eV and an angular split of the data into equally sized on- and off-plane subsamples. Applied to all events, the distribution from the on-plane region is found to have a mean $X_{\text{max}}$ which is $9.3 \pm 1.7^{+2.6}_{-2.2}\,\text{g}\,\text{cm}^{-2}$ shallower and a width which is $6.3\pm2.9^{+3.8}_{-2.8}\,\text{g}\,\text{cm}^{-2}$ narrower than that of the off-plane region. These differences are such as to indicate that the mean mass of the primary particles arriving from the on-plane region is higher than the mean mass of those coming from the off-plane region.
Monte-Carlo studies yield a preliminary $5.0^{+1.4}_{-1.5}\,\sigma$ post-trial statistical significance, where the uncertainties are of systematic origin. Penalizing for systematic uncertainties leads to an indication for anisotropy in mass composition above $10^{18.7}\,$eV at a preliminary confidence level of $3.5\,\sigma$. The anisotropy is observed independently at each of the four fluorescence telescope sites. Interpretations of possible causes of the observed effect will be discussed.
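The comparison step can be reproduced schematically with SciPy's k-sample Anderson-Darling test; the $X_{\text{max}}$ arrays below are synthetic stand-ins, not Auger data.

```python
import numpy as np
from scipy.stats import anderson_ksamp

rng = np.random.default_rng(1)
xmax_on = rng.normal(750.0, 55.0, size=400)   # toy on-plane sample (shallower, narrower)
xmax_off = rng.normal(759.0, 61.0, size=400)  # toy off-plane sample

result = anderson_ksamp([xmax_on, xmax_off])
print(result.statistic, result.significance_level)
```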
Speaker: Dr Eric Mayotte (Bergische Universtät Wuppertal)
• 146
Recent measurements of the cosmic ray energy spectrum and composition from the GRAPES-3 experiment
The GRAPES-3 experiment is located at Ooty in India. It consists of a densely packed array of 400 plastic scintillator detectors (1 $m^{2}$ area each) with 8 m inter-detector separation and a large-area (560 $m^{2}$) muon telescope. It measures cosmic rays from a few TeV to over 10 PeV, thereby providing a substantial overlap with direct experiments as well as covering the knee region. The shower parameters are reconstructed by fitting the observed particle densities with the NKG lateral distribution function. The relation between the shower size and the energy of the primary cosmic rays is derived using simulations with the SIBYLL-2.3c and QGSJET-II-04 hadronic interaction models. The Bayesian unfolding method is used to obtain the energy spectrum. Measurements of the nuclear composition are obtained by comparing muon multiplicity distributions (MMDs) for proton, helium, nitrogen, aluminium, and iron primaries obtained from the simulations with the MMDs measured by the muon telescope. The details of the analysis method and the extracted energy spectrum and composition from a few TeV to 10 PeV will be presented.
Speaker: Mr Fahim Varsi (Indian Institute of Technology, Kanpur)
• 147
Results on mass composition of cosmic rays as measured with LOFAR
We present an updated analysis of the mass composition of cosmic rays in the $10^{17}$ to $10^{18}$ eV energy range. It is based on measurements with the LOFAR telescope of the depth of shower maximum, $X_{\mathrm{max}}$.
We review the improvements to the simulation-based reconstruction setup, as well as the selection method to obtain a minimally biased $X_\mathrm{max}$-dataset. Systematic uncertainties on $X_\mathrm{max}$ have been lowered to an estimated 7 to 9 $\mathrm{g/cm^2}$, at a resolution of about 20 $\mathrm{g/cm^2}$ per shower.
Results include estimates of the mean and standard deviation of the $X_\mathrm{max}$-distribution. A statistical analysis at distribution level has been done as well, using a 4-component model of light to heavy nuclei.
It confirms our previous results showing a significant low-mass fraction in this energy range.
We discuss consistency with existing results on $X_\mathrm{max}$ and mass composition.
Speaker: Arthur Corstanje (Free University Brussels)
• 148
Telescope Array Combined Fit to Cosmic Ray Spectrum and Composition
The cosmic rays observed at Earth have propagated through the universe over cosmological distances. This propagation should affect both the observed spectrum of cosmic rays and the abundance of the different nuclear species observed at each energy. By performing a combined fit of Telescope Array spectrum and composition measurements to a simple source model consisting of a universal power law with a rigidity-dependent cutoff and variable five-component composition fractions, one can constrain the possible sources of cosmic rays. We will present the results of such a fit using the Telescope Array surface array spectral measurements and the Telescope Array hybrid and stereo composition measurements.
Speaker: Douglas Bergman (University of Utah)
• 149
The depth of the shower maximum of air showers measured with AERA
The Auger Engineering Radio Array (AERA) is currently the largest array of radio antennas for the detection of cosmic rays, spanning an area of $17$ km$^2$ with 153 radio antennas, measuring in the energy range around the transition from galactic to extra-galactic origin. It measures the radio emission of extensive air showers produced by cosmic rays, in the $30-80$ MHz band. The cosmic-ray mass composition is a crucial piece of information in determining the sources of cosmic rays and their acceleration mechanisms. The composition can be determined with a likelihood analysis that compares the measured radio-emission footprint on the ground to an ensemble of footprints from CORSIKA/CoREAS Monte-Carlo air shower simulations. These simulations are also used to determine the resolution of the method and to validate the reconstruction by identifying and correcting for systematics. We will present the method for the reconstruction of the depth of the shower maximum, compare our results to the independent fluorescence detector reconstruction measured on an event-by-event basis, and show the results of the cosmic-ray mass composition reconstruction with AERA in the energy range from $10^{17.5}$ to $10^{19}$ eV for data taken over the past seven years.
Speaker: Bjarni Pont
• Discussion: 08 Radio Observations of Cosmic Rays | CRI-NU 06
#### 06
• 150
Self-trigger radio prototype array for GRAND
The GRANDProto300 (GP300) array is a pathfinder of the Giant Radio Array for Neutrino Detection (GRAND) project. The deployment of the array, consisting of 300 antennas, will start in 2021 in a radio-quiet area of ~200 km² near Lenghu (~3000 m a.s.l.) in China.
Serving as a test bench, the GP300 array is expected to realise techniques of autonomous radio detection, such as the identification and reconstruction of nearly horizontal cosmic-ray (CR) air showers. In addition, the GP300 array is in a privileged position to study the transition between Galactic and extragalactic origins of cosmic rays, thanks to its large effective area and precise measurements of both energy and mass composition for CRs with energies ranging from 30 PeV to 1 EeV. Using the GP300 array we will also investigate the potential sensitivity to radio transients such as Giant Radio Pulses and Fast Radio Bursts in the 100-200 MHz range.
Speaker: Dr Yi Zhang (Purple Mountain Observatory, Chinese Academy of Sciences)
• 151
Modeling and Validating RF-Only Interferometric Triggering with Cosmic Rays for BEACON
The Beamforming Elevated Array for COsmic Neutrinos (BEACON) is a novel detector concept that utilizes a radio interferometer atop a mountain to search for the radio emission from extensive air showers created by Earth-skimming tau neutrinos. The prototype, located at the White Mountain Research Station in California, consists of 4 crossed-dipole antennas operating in the 30-80 MHz range and uses a directional interferometric trigger for reduced thresholds and background rejection. The prototype will first be used to detect down-going cosmic rays to validate the detector model. Here, we present the methodology and results of a Monte-Carlo simulation developed to predict the acceptance of the prototype to cosmic rays. In this simulation, cosmic ray induced air showers are generated in an area around the prototype array. It is then determined if a given shower triggers the array using radio emission simulations from ZHAireS and antenna modelling from XFdtd. The time-domain waveforms, event rates, and angular distributions predicted by this simulation can then be compared with experimental data to validate the detector model.
Speaker: Andrew Zeolla (Pennsylvania State University)
• 152
TAROGE-M: Radio Observatory on Antarctic High Mountain for Detecting Near-Horizon Ultra-High Energy Air Showers
The TAROGE-M observatory is an autonomous antenna array on the top of Mt. Melbourne (~2700 m altitude) in Antarctica, designed to detect radio pulses from ultra-high-energy (over $10^{17}$ eV) air showers coming from near-horizon directions. The targeted sources include cosmic rays, Earth-skimming tau neutrinos, and most of all, the anomalous near-horizon upward-going events of yet unknown origin discovered by the ANITA experiments. The detection concept follows that of ANITA: monitoring a large area of ice from high altitude and taking advantage of the strong geomagnetic field and quiet radio background in Antarctica, while offering significantly greater livetime and scalability.
The TAROGE-M station, upgraded from its 2019 prototype, was deployed in January 2020 and consists of 6 log-periodic dipole antennas pointing horizontally, with a bandwidth of 180-450 MHz. The station was calibrated with a drone-borne transmitter, achieving an angular resolution of ~0.3° in event reconstruction. It then operated smoothly for the following month, with a livetime of ~30 days, before being interrupted by a power problem; its online filtering identified several candidate cosmic-ray events, which were sent out via satellite communication. In this paper, the instrumentation of the station for the polar, high-altitude environment, its radio-locating performance, preliminary results on cosmic-ray detection, and the future extension plan are presented.
Speaker: Shih-Hao Wang (National Taiwan University)
• 153
The NuMoon Experiment: Lunar Detection of Cosmic Rays and Neutrinos with LOFAR
The low flux of ultra-high-energy cosmic rays (UHECRs) makes it challenging to understand their origin and nature. A very large effective aperture is provided by the lunar Askaryan technique. Particle cascades in a dielectric medium produce radio emission through the Askaryan effect. Ground-based radio telescopes are used to search for the nanosecond radio pulses that are produced when cosmic rays or neutrinos interact with the Moon's surface. The LOw Frequency ARray (LOFAR) is currently the largest radio array operating at frequencies between 110 and 190 MHz, the optimum frequency range for lunar signal searches, and at 30-80 MHz for the radio detection of air showers. One minute of observation has been carried out with six LOFAR stations beam-formed towards the Moon. In this contribution, we present preliminary results of the analysis of these data and a complete description of the analysis steps.
Speaker: Godwin Komla Krampah (Vrije University of Brussels)
• 154
Radio-Morphing: a fast, efficient and accurate tool to compute the radio signals from air-showers
The preparation of next generation large-scale radio detectors such as GRAND requires to run massive air-shower simulations to evaluate the radio signal at each antenna position. Radio-Morphing was developed for this purpose. It is a semi-analytical tool that enables a fast computation of the radio signal emitted by any air-shower at any location, from the simulation data of one single reference shower at given positions. Radio-Morphing was demonstrated to generate the electric field time traces with amplitudes in good agreement (<30% difference for two thirds of signals) with microscopic simulations, while reducing the computation time by several orders of magnitude. However, several features still needed to be addressed for the tool to be fully efficient and accurate. We present here major improvements on the Radio-Morphing method that have been implemented recently. The upgraded version is based on revised and refined scaling laws, derived from physical principles. It also includes a new spatial interpolation technique, thanks to which an excellent signal timing accuracy can be reached. We will present the methodology, performances and possible applications of this universal tool.
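A cartoon of the two Radio-Morphing ingredients, scaling of a reference trace and spatial interpolation to the target antenna; the linear energy scaling and inverse-distance weighting below are illustrative simplifications, not the refined laws of the method.

```python
import numpy as np

def scale_trace(ref_trace, e_ref, e_target):
    """To first order, the electric-field amplitude scales with primary energy."""
    return ref_trace * (e_target / e_ref)

def interpolate_trace(traces, positions, target_xy):
    """Inverse-distance-weighted interpolation between simulated antenna positions."""
    d = np.linalg.norm(positions - target_xy, axis=1)
    w = 1.0 / np.maximum(d, 1e-3)
    return np.tensordot(w / w.sum(), traces, axes=1)
```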
Speaker: Simon Chiche (Institut d'Astrophysique de Paris)
• 155
The Zettavolt Askaryan Polarimeter (ZAP) mission concept: radio detection of ultra-high energy cosmic rays in low lunar orbit.
Probing the ultra-high-energy cosmic ray (UHECR) spectrum beyond the cutoff at ~40 EeV requires an observatory with an acceptance that is impractical to achieve with ground arrays. We present a concept, designated the Zettavolt Askaryan Polarimeter (ZAP), for radio detection of UHECRs impacting the Moon's regolith from low lunar orbit. ZAP would observe several thousand events above the cutoff (~40 EeV) with a full-sky field of view to test whether UHECRs originate from Starburst Galaxies, Active Galactic Nuclei, or other sources associated with the matter distribution of the local universe at a distance > 1 Mpc. The unprecedented sensitivity of ZAP to energies beyond 100 EeV would enable a test of source acceleration mechanisms. At higher energies, ZAP would produce the most stringent limits on super-heavy dark matter (SHDM) via limits on neutrinos and gamma rays resulting from self-annihilation or decay.
Speaker: Andres Romero-Wolf (Jet Propulsion Laboratory, California Institute of Technology)
• 156
Reconstructing inclined extensive air showers from radio measurements
We present a reconstruction algorithm for extensive air showers with zenith angles between 65° and 85° measured with radio antennas in the 30-80 MHz band. Our algorithm is based on a signal model derived from CoREAS simulations which explicitly takes into account the asymmetries introduced by the superposition of charge-excess and geomagnetic radiation as well as by early-late effects. We exploit correlations among fit parameters to reduce the dimensionality and thus ensure stability of the fit procedure. Our approach reaches a reconstruction efficiency near 100% with an intrinsic resolution for the reconstruction of the electromagnetic energy of well below 5%. It can be employed in upcoming large-scale radio detection arrays using the 30-80 MHz band, in particular the AugerPrime Radio detector of the Pierre Auger Observatory, and can likely be adapted to experiments such as GRAND operating at higher frequencies.
Speaker: Tim Huege (Karlsruhe Institute of Technology and Vrije Universiteit Brussel)
• 157
Classification and Denoising of Cosmic-Ray Radio Signals using Deep Learning
Speaker: Abdul Rehman (University of Delaware)
• 158
Cross-calibrating the energy scales of cosmic-ray experiments using a portable radio array
Different experiments use different techniques to detect and reconstruct cosmic-ray events, yielding different energy scales. Having a method to compare the energy scales of different experiments with minimal uncertainty is necessary in order to make meaningful comparisons of their spectra and composition measurements, which are used to create global models of cosmic-ray sources, acceleration and propagation. Comparing energy scales has proven to be difficult, given that uncertainties on energy measurements depend on the location, technique and equipment used. In this contribution we introduce a new radio-based technique which will be used to build a universal cosmic-ray energy scale. Radio detection provides a measure of the radiation energy in air showers, which scales quadratically with the electromagnetic energy. Once the local magnetic field strength is taken into account, radiation energy can be directly compared at different locations. A portable array of antennas will be built and deployed at various experiments, measuring radiation energy in conjunction with the host experiment’s traditional air shower measurements. The energy measured at each location can then be directly compared via the contemporaneous radiation energy measurements. Using radiation energy to compare the energy scales eliminates uncertainties due to measurements being made at different locations, and using the same array at each site eliminates the uncertainties associated with the equipment and calibration. This will allow for a cross-calibration of the energy scales of different experiments with minimal uncertainty. Here we present the technique and report on the status of a prototype array that began taking data in January 2021.
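The quadratic scaling underlying the technique can be written down directly; in the sketch below, the reference constants are placeholders of roughly the right order of magnitude (loosely following published AERA-style parametrizations) rather than a calibration of any experiment.

```python
import numpy as np

def em_energy(e_rad_eV, sin_alpha, b_local_uT, b_ref_uT=24.0,
              s_ref_eV=1.6e7, e_ref_eV=1e18):
    """Invert E_rad ~ S_ref * (E_em / E_ref)^2 after correcting the measured
    radiation energy for the geomagnetic angle and local field strength."""
    corrected = e_rad_eV / (sin_alpha**2 * (b_local_uT / b_ref_uT)**2)
    return e_ref_eV * np.sqrt(corrected / s_ref_eV)
```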
Speaker: Katharine Mulrey (Vrije Universiteit Brussel)
• 159
Expected performance of interferometric air-shower measurements with radio antennas
Interferometric measurements of the radio emission of extensive air showers allow reconstructing cosmic-ray properties. A recent simulation study with an idealised detector promised measurements of the depth of the shower maximum $X_\mathrm{max}$ with an accuracy better than 10$\,$g$\,$cm$^{-2}$.
In this contribution, we evaluate the potential of interferometric $X_\mathrm{max}$ measurements of (simulated) inclined air showers with realistically dimensioned, sparse antenna arrays. We account for imperfect time synchronisation between individual antennas and study its inter-dependency with the antenna density in detail. We find a strong correlation between the antenna multiplicity (per event) and the maximum acceptable inaccuracy in the time synchronisation of individual antennas. From this result, prerequisites for the design of antenna arrays for the application of interferometric measurements can be concluded. For data recorded with a time synchronisation accurate to 1$\,$ns within the commonly used frequency band of 30 to 80$\,$MHz, an antenna multiplicity of $\geq 50$ is needed to achieve an $X_\mathrm{max}$ reconstruction with an accuracy of 20$\,$g$\,$cm$^{-2}$. This multiplicity is achieved measuring inclined air showers with zenith angles $\theta \geq 77.5^\circ$ with 1$\,$km spaced antenna arrays, while vertical air showers with zenith angles $\theta \leq 40^\circ$ require an antenna spacing below 100$\,$m. Furthermore, we find no improvement in $X_\mathrm{max}$ resolution applying the interferometric reconstruction to measurements at higher frequencies, i.e., up to several hundred MHz.
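Schematically, the interferometric reconstruction beamforms the antenna traces towards trial emission points and scores each point by the coherent power; the toy implementation below ignores the antenna response and uses simple sample shifts.

```python
import numpy as np

def coherent_power(waveforms, dt_ns, ant_pos, trial_point, c=0.299792458):
    """waveforms: (n_ant, n_samples); ant_pos, trial_point in metres; c in m/ns.
    Shift each trace by the light travel time from the trial point, sum all
    traces, and return the peak power of the summed trace."""
    total = np.zeros(waveforms.shape[1])
    for trace, pos in zip(waveforms, ant_pos):
        delay = np.linalg.norm(pos - trial_point) / c
        total += np.roll(trace, -int(round(delay / dt_ns)))  # wrap-around ignored in this toy
    return np.max(total**2)
```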
Speaker: Felix Schlüter (Karlsruhe Institute of Technology - Institute for Astroparticle Physics)
• 160
First results from the AugerPrime Radio Detector
The Pierre Auger Observatory investigates the properties of the highest-energy cosmic rays with unprecedented precision. The aim of the AugerPrime upgrade is to improve the sensitivity to the primary particle type. The improved mass sensitivity is the key to exploring the origin of the highest-energy particles in the Universe. The purpose of the Radio Detector (as part of AugerPrime) is to extend the sensitivity of the mass measurements to zenith angles above 60°. A radio antenna, sensitive in two polarization directions and covering a bandwidth from 30 to 80 MHz, will be added to each of the 1661 surface detector stations over the full 3000 km² area, forming the world's largest radio array for the detection of cosmic particles. Since November 2018, an engineering array comprised of ten stations has been installed in the field.
The radio antennas are calibrated using the Galactic (diffuse) emission. The sidereal modulation of this signal is monitored continuously and is used to obtain an end-to-end calibration from the receiving antenna to the ADC in the read-out electronics. The calibration method and first results will be presented.
The engineering array is also fully integrated in the data acquisition of the Observatory and records air showers regularly. The first air showers detected simultaneously with the water-Cherenkov detectors and the Radio Detectors will be presented. Simulations of the detected showers, based on the reconstructed quantities, have been conducted with CORSIKA/CoREAS. A comparison of the measured radio signals with those predicted by simulations shows satisfactory agreement.
• 161
Performance of SKA as an air shower observatory
The low frequency segment of SKA in Australia will have an extremely dense antenna array spanning an area of roughly 0.5 km$^2$. It offers unique possibilities for high-resolution observations of air showers. Compared to LOFAR, it will have a much more homogeneous ground coverage, an increased frequency bandwidth (50-350 MHz), and the possibility to continuously observe with nearly 100% duty cycle.
SKA will observe air showers in the range 10$^{16}$ eV - 10$^{18}$ eV with a reconstruction resolution on Xmax of around 10 g/cm$^2$. This allows for a high-precision study of mass composition in the energy regime where a transition is expected from Galactic to extragalactic origin. In addition, SKA will be able to put constraints on hadronic interaction models, which is crucial for interpreting the data in this complex energy range.
In this talk, we will show the results of a full detector simulation and demonstrate the capabilities of SKA, including energy and Xmax reconstruction, as well as more advanced methods to constrain the shape of the longitudinal development of air showers.
Speaker: Stijn Buitink (Vrije Universiteit Brussel (VUB))
• 162
Simulation and Optimisation for the Radar Echo Telescope for Cosmic Rays
The Radar Echo Telescope for Cosmic Rays (RET-CR) will use the radar echo technique to detect the in-ice continuation of an ultra high energy cosmic ray (UHECR) air shower. When a UHECR particle cascade propagates into a high-elevation ice sheet, it produces a dense in-ice cascade of charged particles which can reflect incoming radio waves. Through the detection of transmitted radio waves, the energy and direction of the UHECR can be reconstructed. RET-CR will consist of a transmitter array, receiver antennas and a surface scintillator plate array.
In this poster we present the simulation efforts for RET-CR performed to optimise the surface array layout and triggering system, leading to a prediction of the expected event rate. Showers are generated using the CORSIKA Monte Carlo code. The energy deposits in the scintillators are then found by propagating the particle output from CORSIKA through the scintillating material in Geant4. Thresholds are applied to the energy deposits to determine which showers trigger, providing the surface detector efficiency. Additionally, CoREAS is used to generate the radio emission that will be used to reconstruct events with the surface array. For the prediction of the event rate seen by the in-ice radar system, we use a simulation chain of existing and new tools: UHECR showers generated with CORSIKA are propagated through a realistic ice layer using Geant4, and the resulting energy depositions are used in RadioScatter to calculate the radar scatter amplitude that triggers the in-ice system, leading to a prediction of the expected event rates for the RET-CR detector.
Speaker: Rose Stanley (IIHE - VUB)
• 163
Simulation Study of the Observed Radio Emission of Air Showers by the IceTop Surface Extension
Multi-detector observations of individual air showers are critical for making significant progress in precisely determining cosmic-ray quantities such as the mass and energy of individual events, and thus bring us a step forward in answering the open questions in cosmic-ray physics. An enhancement of IceTop, the surface array of the IceCube Neutrino Observatory, is currently underway and includes adding antennas and scintillators to the existing array of ice-Cherenkov tanks. The radio component will improve the characterization of the primary particles by providing an estimation of Xmax and a direct sampling of the electromagnetic cascade, both important for per-event mass classification. A prototype station has been operated at the South Pole and has observed showers simultaneously with the three detector types. The observed radio signals of these events are unique, as they are measured in the 100-350 MHz band, higher than in many other cosmic-ray experiments. We present a comparison of the detected events with waveforms from CoREAS simulations, convoluted with the end-to-end electronics response, as a verification of the analysis chain. Using the detector response and the measurements of the prototype station as input, we update a Monte-Carlo-based study on the potential of the enhanced surface array for the hybrid detection of air showers by scintillators and radio antennas.
Speaker: Alan Coleman (University of Delaware)
• 164
Simulations of radio emission from air showers with CORSIKA 8
CORSIKA 8 is a new framework for air shower simulations implemented in modern C++17, based on past experience with existing codes like CORSIKA 7. The flexibility of this framework allows for the inclusion of radio-emission calculations as an integral part of the program. Our design makes radio simulations general and gives the user the freedom to choose between different formalisms, such as the "Endpoints" and "ZHS" formalisms. In addition, it takes advantage of the flexibility of the CORSIKA 8 environment and geometry design, allowing future extensions to more complex scenarios such as showers crossing from air into dense media. Our first results, along with comparisons with other simulation programs such as CoREAS in CORSIKA 7 and ZHAireS, will be presented. In the future, based on our design, radio simulations can achieve a significant boost in performance by deploying parallel computing techniques, in particular employing GPUs, and hence perform more sophisticated radio-emission studies.
Speaker: Nikolaos Karastathis (Institute for Astroparticle Physics, Karlsruhe Institute of Technology)
• 165
TAROGE experiment and reconstruction technique for near-horizon impulsive radio signals induced by Ultra-high energy cosmic rays
The Taiwan Astroparticle Radiowave Observatory for Geo-synchrotron Emissions (TAROGE) consists of antenna arrays sitting on the high coastal mountains of Taiwan, pointing to the Pacific Ocean for the detection of near-horizon extensive air showers (EAS) induced by ultra-high-energy cosmic rays and Earth-skimming tau neutrinos. TAROGE improves the detection capability by collecting both the direct emission and the ocean-reflected signals over the vast area of ocean visible from Taiwan's high mountains. Four TAROGE stations have been deployed in Taiwan in the past few years. Except for the first station, which was a prototype for radio surveying and the optimization of instrument parameters, the other three stations are still operating.
We developed a new angular reconstruction method based on a deconvolution of the radio reflection on the ground, which is an important systematic effect for near-horizon events. The response of the ground reflection is measured with a drone-borne calibration pulser. We achieved sub-degree angular resolution for near-horizon events. In this paper, we discuss the details of the method and the results. A brief status report of the TAROGE project will also be given.
Speaker: Mr Yaocheng Chen (Dept. of Physics, Grad. Inst. of Physics & Leung Center for Cosmology and Particle Astrophysics, National Taiwan University, Taipei, Taiwan)
• Discussion: 33 Photodetection in Cherenkov Detectors | NU 05
#### 05
• 166
The Wavelength-shifting Optical Module (WOM) for the IceCube Upgrade
The Wavelength-shifting Optical Module, or WOM, is a novel optical sensor that uses wavelength shifting and light guiding to substantially enhance the photosensitive area of UV optical modules. It has been designed for the IceCube Upgrade, a seven-string extension of the IceCube detector planned for the 2023/2024 South Pole deployment season, but its design can be applied to any large particle detector based on the detection of Cherenkov light. The WOM consists of a hollow quartz cylinder (detection area) coated in wavelength shifting paint with two PMTs attached to the end faces of the cylinder. The light-collecting quartz increases the effective photocathode area of the light sensors without producing additional dark current, making it suitable for low-signal, low-noise applications. For larger event distances where UV absorption shifts the spectrum to longer wavelengths, the design can be augmented with PMTs. We will report on the design and performance of the WOM with a focus on the 12 modules in production for deployment in the IceCube Upgrade.
Speaker: John Rack-Helleis (JGU Mainz)
• 167
Performance studies for a next-generation optical sensor for IceCube-Gen2
We present performance studies of a segmented optical module for the IceCube-Gen2 detector. Based on the experience gained in sensor development for the IceCube Upgrade, the new sensor will consist of up to 18 4-inch PMTs housed in a transparent pressure vessel, providing homogeneous 4π coverage. The use of custom-moulded optical gel 'pads' around the PMTs enhances the photon capture rate via total internal reflection at the gel-air interface. This contribution presents simulation studies of various sensor, PMT, and gel-pad geometries aimed at optimizing the sensitivity of the optical module in the face of confined space and harsh environmental conditions.
Speaker: Dr Nobuhiro Shimizu (Chiba Univeristy)
• 168
Performance of the D-Egg optical sensor for the IceCube-Upgrade
New optical sensors called "D-Eggs" have been developed for cost-effective instrumentation of the IceCube Upgrade. With two 8-inch high-QE photomultipliers, they offer an increased effective photocathode area while retaining as much of the successful IceCube Digital Optical Module (DOM) design as possible. Mass production of D-Eggs started in 2020. By the end of 2021, 310 D-Eggs will have been produced, with 288 deployed in the IceCube Upgrade. The D-Egg readout system uses advanced technologies in electronics and computing power. Each of the two PMT signals is digitized using ultra-low-power 14-bit ADCs with a sampling frequency of 250 MSPS, enabling seamless and lossless event recording from single-photon signals to signals exceeding 200 pe within 10 ns, as well as flexible event triggering. In this paper, we report the single-photon detection performance as well as the multiple-photon recording capability of D-Eggs from the mass production line, which have been evaluated with the built-in DAQ system.
Speaker: Colton Hill (ICEHAP, Chiba University)
• 169
Design of an Efficient, High-Throughput Photomultiplier Tube Testing Facility for the IceCube Upgrade
The IceCube Upgrade is an extension of the IceCube detector at the geographic South Pole. It consists of seven new strings with novel instrumentation. More than 430 multi-PMT optical modules called "mDOMs", housing 24 3-inch PMTs each, will be produced for the Upgrade. This will require testing and pre-calibration of more than 10,000 PMTs on a short timescale prior to assembly and deployment. We present the design of a PMT testing facility that enables simultaneous testing of roughly 100 PMTs per day at temperatures down to -20°C. The design is implemented at RWTH Aachen University and TU Dortmund University in parallel to achieve a throughput of up to 1,000 PMTs per week. This will enable a steady supply of tested PMTs to the production sites, which is critical for the Upgrade, as well as for the future IceCube-Gen2 project.
Speaker: Lasse Halve (RWTH Aachen University)
• 170
Evaluation of large area photomultipliers for use in a new Baksan Large Neutrino Telescope project
We present results of advanced studies of large-area photomultipliers (PMTs) of different types from several manufacturers for use in a new Baksan Large Neutrino Telescope. First, requirements for the photodetectors to be used in the telescope were formulated. The parameters of 8-inch, 10-inch and 20-inch PMTs were then thoroughly studied. The 8-inch PMTs under study were the ET9350 from ET Enterprises and the R5912 and R5912-100 from Hamamatsu Photonics; the 10-inch PMTs were the R7081, R7081-100 and R7081-100-WA from Hamamatsu Photonics; and the 20-inch PMTs were the R12860 from Hamamatsu Photonics and the MCP-PMT from NNVT. Particular emphasis was placed on measurements of photocathode sensitivity, single-photoelectron response, TTS, dark count rate and afterpulse rate.
Speaker: Mr Nikita Ushakov (Institute for Nuclear Research of the Russian Academy of Science, Prospekt 60-letiya Oktyabrya 7a, Moscow 117312, Russia)
• 171
Large area photodetectors in photon detection for large-scale neutrino physics experiments: single large area PMTs and multi small PMTs approaches.
More than 40 years ago, the start of work on deep underwater high-energy neutrino telescope projects (DUMAND and Baikal) inspired the development of new photon detectors: large-area photomultipliers (PMTs), multi-small-PMT optical modules, small PMTs equipped with wavelength-shifting plates and rods, and even small-area solid-state photon detectors for this kind of application. Nowadays we witness a rebirth of the multi-small-PMT approach, which has started to compete quite successfully with the single large-area photon detector approach that has reigned supreme for almost half a century. Recent developments in astroparticle physics experiments have demonstrated the competitiveness of the "multi small PMTs" idea; the KM3NeT project and the coming JUNO experiment serve as good examples. We present the pros and cons of both approaches.
Speaker: Sultim Lubsandorzhiev (Institute for Nuclear Research of the Russian Academy of Sciences)
• 172
Enhanced photon detection efficiency for next-generation neutrino telescopes using photon traps
We propose a photon trap designed to improve photon detection efficiency in a cost-efficient way. Wavelength-shifting plastic sheets (WLS) are deployed at the bottom of a PMT, surrounded by dichroic film, by which photons are efficiently trapped and guided to the PMT. We measured the wavelength-dependent transmittance in water of a commercially available dichroic film, a key variable determining the photon trapping efficiency. We ran a Geant4-based simulation with the properties of the commercially available dichroic film as a realistic case. We also ran a simulation with a hypothetical dichroic film whose bandpass is optimized to the absorption and re-emission spectra of the WLS and the quantum efficiency of the PMT, as an ideal case. Preliminary results for the photon collection and detection efficiency enhancements are computed, as well as the timing distribution of the photons. We discuss how this new conceptual design can be applied to next-generation neutrino telescopes.
Speaker: Koun Choi (SKKU)
• 173
A next-generation optical sensor for IceCube-Gen2
For the in-ice component of the next-generation neutrino observatory at the South Pole, IceCube-Gen2, a new sensor module is being developed, which is an evolution of the D-Egg and mDOM sensors developed for the IceCube Upgrade. The sensor design features up to 18 4-inch PMTs distributed homogeneously in a borosilicate glass pressure vessel. Challenges arise for the mechanical design from the tight constraints on the borehole diameter (which will be 2 inches smaller than for the IceCube Upgrade) and from the close packing of the PMTs. The electronics design must meet the space constraints posed by the mechanical design as well as the power consumption and cost considerations arising from over 10,000 optical modules being deployed. This contribution presents forward-looking solutions to these design considerations. Prototype modules will be installed and integrated in the IceCube Upgrade.
• 174
Data Quality Monitoring system of the Baikal-GVD experiment
The main purpose of the Baikal-GVD Data Quality Monitoring (DQM) system is to monitor the status of the detector and the collected data. The system estimates the quality of the recorded signals and performs data validation. The DQM system is integrated with Baikal-GVD's unified software framework ("BARS") and operates in a quasi-online manner. This allows us to react promptly and effectively to changes in the telescope conditions.
Speaker: Maksim Sorokovikov (JINR)
• 175
Design and performance of the multi-PMT optical module for IceCube Upgrade
The IceCube Upgrade is the first step towards the next-generation neutrino observatory at the South Pole, IceCube-Gen2, and will be installed in the central region of the existing array. The Upgrade will consist of 693 newly developed, densely spaced optical sensors and 50 standalone calibration devices, which will enhance IceCube's capabilities both at low and high neutrino energies. 402 of the new sensors will be multi-PMT Digital Optical Modules (mDOMs). Consisting of 24 small photomultipliers arranged inside a pressure vessel, the mDOM features a large sensitive area distributed nearly homogeneously over the full solid angle. The use of multiple, individually read-out PMTs allows directional information to be obtained for the registered photons and enables the use of multiplicity triggering within a single module, e.g., for background suppression. The challenges driving the mDOM development included tight restrictions on module size, data-transfer rate, and power consumption as well as the harsh environment in the deep ice at the South Pole. In this contribution we present the final mDOM design that meets these challenges.
Speaker: Lew Classen (Westfälische Wilhelms-Universität Münster)
• 176
Experimental string with fiber optic data acquisition for Baikal-GVD
The first stage of the construction of the deep underwater neutrino telescope Baikal-GVD is planned to be completed in 2024. The second stage of the detector deployment is planned to be carried out using a data acquisition system based on fiber optic technologies, which will allow for an increased data throughput and looser, more flexible trigger conditions, thus maximizing the neutrino detection efficiency. A dedicated experimental string has been built and deployed at the Baikal-GVD site to test the new technological solutions. We present the principle of operation and the results of in-situ tests of the experimental string.
• 177
Exploring a PMT+SiPM hybrid optical module for next generation neutrino telescopes
Cosmic neutrinos are unique probes of the high-energy universe. IceCube has observed a diffuse astrophysical neutrino flux since 2013, but its origin remains elusive. The potential sources include, for example, active galactic nuclei, gamma-ray bursts and starburst galaxies. To distinguish among these scenarios, higher statistics and better angular resolution of astrophysical neutrinos are needed. An optical module with a larger photon collection area and more precise timing resolution in a next-generation neutrino telescope could help. Silicon photomultipliers (SiPMs), with their high quantum efficiency and fast response time, combined with traditional PMTs, could boost photon detection efficiency and pointing capability. We will present a study exploring the benefits of combining multiple PMTs and SiPMs in an optical module.
Speaker: Fan Hu (Peking University)
• 178
Light concentrators for large-volume detector at the Baksan Neutrino Observatory
At the Baksan Neutrino Observatory, located in the Caucasus mountains, it is proposed to create, at a depth corresponding to about 4700 mwe, a large-volume neutrino detector based on a liquid scintillator with a target mass of 10 kt. The main physics goals of the detector are low-energy neutrino physics, astrophysics and geophysics.
The highest possible light yield is crucial for such detectors. To improve light yield and energy resolution in large-volume neutrino detectors, light concentrators are often mounted on photomultiplier tubes to increase the detection efficiency of optical photons from scintillation or Cherenkov light induced by charged particles. We present the results of recent R&D work aimed to develop light concentrators for the Baksan large-volume liquid scintillation neutrino detector.
Speaker: Mr Almaz Fazliakhmetov (Institute for Nuclear Research of the Russian Academy of Science, Prospekt 60-letiya Oktyabrya 7a, Moscow 117312, Russia)
• 179
P-ONE second pathfinder mission: STRAW-b
The P-ONE (Pacific Ocean Neutrino Experiment) collaboration was formed with the aim of building a new large-scale neutrino telescope in the Pacific Ocean, at 2600 m b.s.l. in the Cascadia Basin, off Vancouver Island.
The first steps, aimed at a feasibility study and the characterization of the optical properties of the site, were taken with a first pathfinder project named STRAW (STRing for Absorption length in Water), deployed in 2018.
During the last two years a second pathfinder project has been developed: STRAW-b.
The main goal of STRAW-b is to validate the attenuation length already measured by STRAW and to add new information on the background characterization with the study of the deep sea diffused light spectrum. It consists of a 500 m mooring (electrical-optical cable communication) equipped with three Standard Modules for environmental monitoring and seven Specialised Modules for background analysis and attenuation length measurements. All the modules are hosted in spherical 13′′ high pressure resistant glass housings.
Its design started at the end of 2018, and after about two years it was successfully deployed in summer 2020 at the Cascadia Basin site, connected to the underwater Ocean Networks Canada infrastructure about 40 meters away from STRAW.
We present all the steps from the design to the realization of the mooring, with a special focus on the adopted technologies and on preliminary results of data taking.
Speaker: Immacolata Carmen Rea (TUM)
• 180
Time synchronization of Baikal-GVD clusters
Currently, the Baikal-GVD neutrino telescope consists of 7 clusters of 288 photodetectors each. Each cluster is a functionally complete detector which can register events in stand-alone mode and jointly with other clusters. Joint operation of the clusters requires time synchronization with nanosecond accuracy. This paper presents the methods of time synchronization of the clusters, the results of a study of the synchronization accuracy using laser beacons, and first results of combining events from several clusters.
• Discussion: 42 Direct Dark Matter: Present and Future | DM 07
#### 07
• 181
DARWIN – a next-generation liquid xenon observatory for dark matter and neutrino physics
Benefiting from more than a decade of experience in WIMP searches with dual-phase xenon time projection chambers, the DARWIN (DARk matter WImp search with liquid xenoN) collaboration intends to build a next-generation detector involving 50 tonnes (40 tonnes active) of xenon. The primary goal of the observatory is to explore the entire experimentally accessible parameter space for WIMP masses above 5 GeV/c$^2$ down to the irreducible neutrino floor. With its low energy threshold and ultra-low background level, DARWIN will be an excellent platform to search for various other rare interactions. These include the neutrinoless double beta decay of $^{136}$Xe, a high-precision measurement of the low-energy solar neutrino flux, as well as searches for solar axions and axion-like-particles. In this talk, we will present the detector concept, the sensitivity to the various science channels, and ongoing R&D efforts.
Speaker: Mr Kevin Thieme (University of Zurich)
• 182
The DEAP-3600 experiment
The DEAP-3600 experiment searches for dark matter via the interactions of WIMPs with a liquid argon target. The experiment is located at SNOLAB in Sudbury, Ontario, 2 km underground to shield the detector from cosmic rays. The detector consists of an acrylic sphere with an inner diameter of ~170 cm containing ~3300 kg of liquid argon. Liquid argon is chosen as a target due to its ability to reject electromagnetic backgrounds by examining the scintillation pulse shape. The argon volume is instrumented with 255 PMTs which are connected to the vessel via acrylic light guides. As liquid argon scintillates at a wavelength of 128 nm, its scintillation light needs to be shifted to a wavelength region where the PMTs are more sensitive; this is done by coating the inside of the acrylic vessel with TPB wavelength shifter, which re-emits the argon scintillation light at a wavelength of 420 nm.
This talk will describe the current status of the experiment and some recent analyses performed by the collaboration. The status of planned upgrades to the detector and the plans for the future of the experiment will also be detailed.
Speaker: Mark Stringer (Queen's University)
• 183
Simulations and background estimates for the DAMIC-M experiment
DAMIC-M (Dark Matter in CCDs at Modane) is a near-future experiment aiming to search for low-mass dark matter particles through their interactions with silicon atoms in the bulk of charge-coupled devices (CCDs). This technique was pioneered by the DAMIC experiment at SNOLAB. Its successor DAMIC-M will have a 25 times larger detector mass and will employ a novel CCD technology (skipper amplifiers), which makes it possible to achieve a readout noise of 0.07 e-. With these novelties, DAMIC-M will reach unprecedented sensitivities to dark matter candidates of the so-called hidden sector. A challenging requirement is the control of the radiogenic background at the level of a fraction of an event per keV per kg-day of target exposure. Accurate Geant4 simulations are being employed to optimise the detector design and drive the material selection and handling. This poster provides a comprehensive overview of the explored detector designs, the estimated background, and the strategies for its mitigation.
Speaker: Claudia De Dominicis
• 184
DIMS Experiment for Dark Matter and Interstellar Meteoroid Study
DIMS (Dark matter and Interstellar Meteoroid Study) is a new experiment aiming to search for macroscopic dark matter and interstellar meteoroids. Nuclearites, nuggets of stable strange quark matter (SQM) that are neutral in charge, are hypothetical super-heavy macroscopic particles (macros) and may be an important component of the dark matter in our Universe. Nuclearites of galactic origin would have an expected typical velocity of about 220 km/s in the galactic frame, whereas interstellar meteoroids, which exceed the escape velocity of the solar system, would in a head-on collision with the Earth orbiting the Sun have geocentric velocities larger than 72 km/s. We study the possibility of searching for such fast-moving particles by using very high-sensitivity CMOS cameras with a wide field of view.
Based on observational data of meteor events using such stereo camera systems at some locations, we estimate the observable mass ranges for the moving nuclearites and the interstellar meteoroids. Observable flux limits are also estimated for these mass ranges.
We designed the DIMS experiment to search for such particles. In its first stage, the DIMS system consists of 4 high-sensitivity CMOS camera stations with a wide field of view. The system is going to be constructed at the Telescope Array cosmic-ray-experiment site in Utah, USA.
Details of the project science, plans and present status with preliminary test results will be reported in this paper.
Speaker: Prof. Fumiyoshi Kajino (Department of Physics – Konan University, Japan)
• 185
Sub-GeV dark matter and neutrino searches with Skipper-CCDs: status and prospects.
High-resistivity silicon has made possible the fabrication of thick, fully-depleted charge-coupled devices (CCDs) that have found a wide range of scientific applications, from particle detection to astronomical imaging. Their low noise and high charge collection efficiency allow us to reach unprecedented sensitivity to physical processes with low energy transfers. The newly-developed Skipper-CCD enhances this sensitivity by reducing the read-out noise, reaching sub-electron resolution. In this work, we introduce the fundamentals of Skipper-CCD operation and the prospects for both sub-GeV dark matter searches and the detection of coherent elastic neutrino-nucleus scattering. A discussion of the challenges associated with the construction of the foreseen detectors with multi-kilogram target mass is also presented.
Speaker: Ana Martina Botti (IFIBA - UBA)
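As background to the skipper readout described above: averaging $N$ non-destructive measurements of the same pixel charge reduces the read noise statistically, so the effective noise scales as

$$\sigma_N = \frac{\sigma_1}{\sqrt{N}}.$$

The numbers here are illustrative, not DAMIC-M specifications: a single-sample noise of $\sigma_1 \approx 3\,e^-$ would reach the quoted $0.07\,e^-$ after $N \approx 1800$ averages.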
• 186
Results on low-mass weakly interacting massive particles from a 11 kg d target exposure of DAMIC at SNOLAB
Experimental efforts of the last decades have been unsuccessful in detecting WIMPs (Weakly Interacting Massive Particles) in the $10$-to-$10^4$ GeV/$c^2$ range, thus motivating the search for lighter dark matter. The DAMIC (DArk Matter In CCDs) at SNOLAB experiment aims for direct detection of light dark matter particles ($m_\chi < 10$ GeV/$c^2$) by means of CCDs (Charge-Coupled Devices). Fully-depleted, 675 $\mu$m-thick CCDs are used to this end. The optimized readout noise and operation at cryogenic temperatures allow for a detection threshold of 50 eV$_{ee}$ electron-equivalent energy. Focusing on nuclear and electronic scattering as potential detection processes, DAMIC has so far set competitive constraints on the detection of low-mass WIMPs and hidden-sector particles.
In this work, a 11 kg$\cdot$d exposure dataset is exploited to search for light WIMPs by building the first comprehensive radioactive background model for CCDs. Different background sources are discriminated making conjoint use of the spatial distribution and energy of ionization events, thereby constraining the amount of contaminants such as tritium from silicon cosmogenic activation and surface lead-210 from radon plate-out.
Despite a conspicuous, statistically-significant excess of events below 200 eV$_{ee}$, this analysis places the strongest exclusion limit on the WIMP-nucleon scattering cross section with a silicon target for $m_\chi < 9~$GeV/$c^2$.
Speaker: Michelangelo Traina (LPNHE, Paris, France)
• 187
Neutrinoless double beta decay search with XENON1T and XENONnT
With the lowest background level ever reached by detectors searching for rare events, XENON1T proved to be the most sensitive dark matter direct detection experiment on Earth. The unprecedentedly low level of radioactivity reached made the XENON1T experiment suitable also for other interesting rare-event searches, including the neutrinoless double beta decay of $^{136}$Xe. In this talk I will report on the current status of the search for the neutrinoless double beta decay of $^{136}$Xe in XENON1T.
Furthermore, in the context of the advancement of the XENON program, the next-generation experiment XENONnT, designed with a higher level of background reduction to increase its predecessor's sensitivity in rare-event searches, is currently in its commissioning phase at the underground Gran Sasso National Laboratory (LNGS); it will host 5.9 tonnes of liquid xenon as a target mass. I will also discuss the discovery potential of XENONnT in the search for neutrinoless double beta decay events and its general physics program.
Speaker: Maxime Pierre (Subatech)
• 188
Probing sterile neutrinos and axion-like particles from the Galactic halo with eROSITA
The nature of dark matter remains an open question, and it could be in the form of warm dark matter. Sterile neutrinos are well-motivated warm dark matter candidates and can decay into photons through mixing, which are consequently detectable by X-ray telescopes for sterile neutrino masses in the keV range. Moreover, axion-like particles are compelling warm dark matter candidates too; they can couple to Standard Model particles and decay into photons in the keV range. Both particles could explain the observed unidentified 3.5 keV line, and, interestingly, XENON1T observed an excess at a few keV that can originate from axion-like particles, which is not yet excluded by X-ray constraints for a suppressed coupling to photons with respect to the coupling to electrons.
We study the diffuse emission coming from the Galactic halo and test the sensitivity of the all-sky X-ray survey eROSITA to identify a sterile neutrino or axion-like particle. Using a Monte Carlo method, we set bounds on the mixing angle of the sterile neutrinos and the coupling strength of the axion-like particles. I will show that with eROSITA we will be able to set stringent constraints; in particular, we will be able to firmly probe the best fit of the unidentified 3.5 keV line, where we reach an order of magnitude better sensitivity. Moreover, eROSITA is able to confirm an axion-like particle origin of the XENON1T excess for an excess greater than $\sim 3.5$ keV.
Speaker: Ariane Dekker
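For reference, the textbook relations behind such keV-line searches (standard forms, not taken from this contribution): a sterile neutrino of mass $m_s$ decays radiatively through mixing, $\nu_s \to \nu\gamma$, producing a nearly monochromatic photon at $E_\gamma \simeq m_s/2$ with a rate

$$\Gamma_{\nu_s \to \nu\gamma} \simeq \frac{9\,\alpha\,G_F^2}{1024\,\pi^4}\,\sin^2(2\theta)\,m_s^5,$$

so the unidentified 3.5 keV line corresponds to $m_s \simeq 7$ keV.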
• Discussion: 48 Modelling AGN's spectral energy distribution | GAD-GAI-MM 04
#### 04
• 189
Gamma Rays from Fast Black-Hole Winds
Massive black holes at the centers of galaxies can launch powerful wide-angle winds which, if sustained over time, can unbind the gas from the stellar bulges of galaxies. These winds, also known as ultra-fast outflows (UFOs), may be responsible for the observed scaling relation between the masses of the central black holes and the velocity dispersions of stars in galactic bulges. Propagating through the galaxy, the wind should interact with the interstellar medium, creating a strong shock, similar to those observed in supernova explosions, which is able to accelerate charged particles to high energies. In this talk I will present the Fermi Large Area Telescope detection of gamma-ray emission from these shocks in a small sample of galaxies exhibiting energetic winds. The detection implies that energetic black-hole winds transfer ~0.04% of their mechanical power to gamma rays and that the gamma-ray emission represents the onset of the wind-host interaction.
Speaker: Chris Karwin (Clemson University)
• 190
Gamma-ray emission from young radio galaxies and quasars
According to radiative models, radio galaxies are predicted to produce gamma rays from the earliest stages of their evolution onwards. The study of the high-energy emission from young radio sources is crucial for providing information on the most energetic processes associated with these sources, the actual region responsible for this emission, as well as the structure of the newly born radio jets.
Despite systematic searches for young radio sources at gamma-ray energies, only a handful of detections have been reported so far. Taking advantage of more than 11 years of Fermi-LAT data, we investigate the gamma-ray emission of 162 young radio sources (103 galaxies and 59 quasars), the largest sample of young radio sources used so far for a gamma-ray study. We analyse the Fermi-LAT data of each individual source separately to search for a significant detection. In addition, we perform the first stacking analysis of this class of sources in order to investigate the gamma-ray emission of the young radio sources that are undetected at high energies.
We report the detection of significant gamma-ray emission from 11 young radio sources, including the discovery of significant gamma-ray emission from the compact radio galaxy PKS 1007+142.
Although the stacking analysis of below-threshold young radio sources does not result in a significant detection, it provides stringent upper limits to constrain the gamma-ray emission from these objects.
In this talk we present the results of our study and we discuss their implications for the predictions of gamma-ray emission from this class of sources.
Speaker: Giacomo Principe (INFN / University of Trieste)
• 191
A two-zone emission model for Blazars and the role of Accretion Disk MHD winds
Blazars are a sub-category of radio-loud active galactic nuclei with relativistic jets pointing towards the observer. They exhibit non-thermal variable emission, which practically extends over the whole electromagnetic spectrum. Despite the plethora of multi-wavelength observations, the origin of the emission in blazar jets remains an open question. In this work, we construct a two-zone leptonic model: particles accelerate in a small region and lose energy through synchrotron radiation and inverse Compton Scattering. Consequently, the relativistic electrons escape to a larger area where the ambient photon field, which is related to Accretion Disk MHD Winds, could play a central role in the gamma-ray emission. This model explains the Blazar Sequence and the broader properties of blazars, as determined by Fermi observations, by varying only one parameter, the mass accretion rate onto the central black hole. Flat Spectrum Radio Quasars have a strong ambient photon field and their gamma-ray emission is dominated by the more extensive zone, while in the case of BL Lac objects, the negligible ambient photons make the smaller (acceleration) zone dominant.
Speaker: Stella Boula (National and Kapodistrian University of Athens)
• 192
Building a robust sample of Fermi-LAT blazars that exhibit periodic gamma-ray emission
Blazars can show variability on a wide range of timescales. However, the search for periodicity in the gamma-ray emission of blazars remains an on-going challenge. This contribution will show the results obtained when a systematic pipeline is used to implement ten well-established methods for searching for periodicity. We analyze the most promising candidates selected from our previous work, extending the Fermi-LAT light curves over three more years, for a total telescope time of twelve years. These improvements have allowed us to build the first sample of blazars that display a periodicity detected at a significance >5σ. Finally, we will discuss the potential origins for the periodic behavior observed in blazars.
Speaker: Pablo Peñil (Clemson University)
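The ten periodicity-search methods are not enumerated in the abstract; as an illustrative sketch only, one standard technique for unevenly sampled gamma-ray light curves is the Lomb-Scargle periodogram, shown here on hypothetical data with astropy (not the collaboration's actual pipeline):

```python
# Illustrative only: Lomb-Scargle periodogram on a synthetic, unevenly
# sampled light curve. The talk's actual pipeline and method list differ.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 12 * 365, 300))   # observation times [days], ~12 yr
period_true = 2.0 * 365                      # hypothetical ~2-year cycle
flux = (1.0 + 0.3 * np.sin(2 * np.pi * t / period_true)
        + 0.2 * rng.normal(size=t.size))
flux_err = 0.2 * np.ones_like(t)

frequency, power = LombScargle(t, flux, flux_err).autopower()
best_period = 1.0 / frequency[np.argmax(power)]
print(f"best-fit period: {best_period / 365:.2f} yr")
```

A real significance claim such as the >5σ quoted above would be assessed against red-noise simulations rather than by taking the periodogram peak at face value.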
• 193
VHE gamma-ray spectral hint of two-zone emitting region in Mrk 501
Markarian 501 (Mrk 501) is one of the brightest very high energy (VHE, E > 100 GeV) gamma-ray blazars. It is located in our neighborhood, at redshift z = 0.034. During a multi-wavelength campaign in July 2014, Mrk 501 displayed the highest X-ray activity observed by the Neil Gehrels Swift X-ray telescope (XRT) since its launch. The X-ray spectra displayed during this flaring episode were very hard, and showed variability on nightly timescales. On 2014 July 19, in coincidence with the peak of the X-ray activity, a hint of a narrow feature at ~3 TeV was observed with the MAGIC telescopes. Such a feature makes the VHE spectrum inconsistent with the classical analytic functions used to describe the measured VHE spectra (power law, log-parabola, and log-parabola with exponential cutoff) at more than 3σ. A double-log-parabola fit is preferred over a single one at more than 4σ confidence level. Three different scenarios that could produce such a feature are discussed: (a) a pileup in the electron energy distribution; (b) a two-zone Synchrotron Self-Compton (SSC) emission model; and (c) a pair cascade model. In this contribution we will present the observational details and a general overview of the possible physical mechanisms of this unprecedented observation.
Speaker: Josefa Becerra González (Insituto de Astrofísica de Canarias & Universidad de La Laguna)
• 194
Multiwavelength observations in 2019-2020 of a new very-high-energy gamma-ray emitter: the flat spectrum radio quasar QSO B1420+326
The flat-spectrum radio quasar QSO B1420+326 underwent an enhanced gamma-ray flux state seen by Fermi-LAT at the turn of 2019/2020. Compared to the low state, both the position and luminosity of the two spectral energy distribution peaks changed by at least two orders of magnitude. The high state resulted in the discovery of very-high-energy (>100 GeV) gamma-ray emission from the source by the MAGIC telescopes. The organized multiwavelength campaign allows us to trace the broadband emission of the source through different phases of the flaring activity. The source was observed by 20 instruments in the radio, near-infrared, optical, ultra-violet, X-ray and gamma-ray bands.
We use dedicated optical spectroscopy results to estimate the accretion disk and the dust torus luminosity. The optical spectroscopy shows a prominent FeII bump with flux evolving together with the continuum emission, and a MgII line with varying equivalent width. The gamma-ray flare was accompanied by a rotation of the optical polarization vector and the emission of a new superluminal radio knot. We model the spectral energy distributions in different flare phases in the framework of a combined synchrotron self-Compton and external Compton scenario in which the shape of the electron energy distribution is determined by cooling processes.
Speaker: Filippo D'Ammando (INAF-IRA Bologna)
• 195
Beams of ultra-relativistic electrons in blazar jets develop pair cascades interacting with ambient soft photons. Employing coupled kinetic equations with escape terms, we model the unsaturated pair cascade spectrum. We assume that the gamma rays predominantly scatter off recombination-line photons from clouds photoionised by the irradiation from the accretion disk and the jet. The cascade spectrum is rather insensitive to the injection of hard electron spectra associated with the short-time variability of blazars. Adopting physical parameters representative of Markarian 501 and 3C 279, respectively, we numerically obtain spectral energy distributions showing distinct features imprinted by the recombination-line photons. The hints for a peculiar feature at 3 TeV in the spectrum of Markarian 501, detected with the MAGIC telescopes during a strong X-ray flux activity in 2014 July, can be explained in this scenario as a result of up-scattering of line photons by beam electrons and the low pair-creation optical depth. Inspecting a high-fidelity Fermi-LAT spectrum of 3C 279 reveals troughs in the spectrum that coincide with the threshold energies for gamma rays producing pairs in collisions with recombination-line photons and the absence of exponential attenuation. Our finding implies that the gamma rays in 3C 279 escape from the edge of the broad emission line region.
Speaker: Christoph Wendel (JMU Würzburg)
• 196
Detection of new Extreme BL Lac objects with H.E.S.S. and SWIFT
Extreme high-synchrotron-peaked blazars (EHBLs) are amongst the most powerful accelerators found in nature. Usually the synchrotron peak frequency of an EHBL is above 10^17 Hz, i.e., it lies in the range of medium to hard X-rays, making them ideal sources to study particle acceleration and radiative processes. EHBL objects are commonly observed at energies beyond several TeV, making them powerful probes of gamma-ray absorption in the intergalactic medium. During the last decade, several attempts have been made to increase the number of EHBLs detected at TeV energies and probe their spectral characteristics.
Here we report new detections of EHBLs in the TeV energy regime, each at a redshift of less than 0.25, by the High Energy Stereoscopic System (H.E.S.S.). We also report on X-ray observations of these EHBL candidates with Swift-XRT. In conjunction with the very-high-energy observations, this allows us to probe the radiation mechanisms and the underlying particle acceleration processes.
Speaker: Ms Angel Priyana Noel (Astronomical Observatory of Jagiellonian University)
• 197
Discovery of TXS 1515-273 at VHE gamma-rays and modelling of its Spectral Energy Distribution
In February 2019, a flaring state of the extreme blazar candidate TXS 1515-273 was registered by the Fermi-LAT, which triggered observations with the MAGIC telescopes and the X-ray satellites Swift, XMM-Newton and NuSTAR. The observations led to the discovery of the source at VHE gamma-rays and the detection of short variability timescales (~1 h) in several X-ray bands.
The analysis of the observed variability helped us to constrain the emission region’s physical parameters. Thanks to the high-quality X-ray data, the synchrotron peak location was determined. The source was classified as a high synchrotron peaked source during the flaring activity. We constructed the broadband spectral energy distribution from radio to TeV. We interpreted it assuming leptonic emission and taking into account the constraints from the X-ray variability. We tested two scenarios: a simple one-zone model and a two-component model. Both models were found to describe well the data from X-rays to VHE gamma rays, but the two-zone model allows for a more accurate modelling of the emission at radio and optical energies.
Speaker: Serena Loporchio (University and INFN - Bari)
• 198
Explaining the TeV detection of blazar AP Librae: constraints from ALMA and HST
Powerful jets hosted by accreting super-massive black holes have long been candidates for the acceleration sites of high-energy extra-galactic cosmic rays, a picture supported by the recent association of neutrinos with the blazar TXS 0506+056. In the highly-aligned jets known as blazars, the X-ray to TeV radiation is usually attributed to inverse Compton scattering processes, but the mechanism has not been clearly identified in most cases due to degeneracies in physical models. AP Librae, a blazar detected at TeV energies, has an extremely broad high-energy spectrum, covering ∼9 decades in energy. Using new ALMA and Hubble imaging of the kpc-scale jet and over 11 years of Fermi/LAT observations, we rule out previously proposed leptonic models attributing the high-energy emission to synchrotron self-Compton from the jet base and IC/CMB in the kpc-scale jet. In contrast, "lepto-hadronic" models remain viable, though underconstrained given the number of free parameters. We find that the origin of the TeV photons from this source remains debatable and show that leptonic and hadronic models can be further tested with deep and high-dynamic-range imaging in the sub-mm and far infrared and/or continued monitoring of the source at TeV energies to test for variability. Unmasking the origin of the extragalactic TeV emission from the blazar AP Librae would unlock vital clues for our understanding of particle acceleration and the origin of extra-galactic cosmic rays.
Speaker: Agniva Roychowdhury (University of Maryland Baltimore County)
• 199
Exploring the High-Energy Gamma-Ray Spectra of TeV Blazars
The highest-energy blazars exhibit non-thermal radiation extending beyond 1 TeV with high luminosities and strong variability, indicating extreme particle acceleration in their relativistic jets. The gamma-ray spectra of blazars contain information about the distribution and cooling processes of high-energy particles in jets, the extragalactic background light between the source and the observer, and potentially the environment of the gamma-ray emitting region and exotic physics that modifies the opacity of the universe to gamma rays. We use data from Fermi-LAT and VERITAS to study the variability and spectra of a sample of TeV blazars across a wide range of gamma-ray energies, taking advantage of more than ten years of data from both instruments. The variability in both the GeV and TeV gamma-ray bands is investigated using a Bayesian blocks method to identify periods with a steady flux, during which the average gamma-ray spectra, after correcting for the pair absorption effect from propagation, can be parameterized without the risk of mixing different flux states. We report on the search for intrinsic spectral curvature and spectral variability in these blazars, in an effort to understand the physical mechanisms behind the high-energy gamma-ray spectra of TeV blazars.
Speaker: Dr Qi Feng (Barnard College / Columbia University)
• 200
Extreme blazars under the eyes of MAGIC
Extreme high-frequency-peaked BL Lac objects (EHBLs) are the most energetic persistent sources in the universe. This contribution reports on long-term observing campaigns of tens of EHBLs that have been organized by the MAGIC collaboration to enlarge their population at VHE and understand the origin of their extreme properties. EHBLs are characterized by a spectral energy distribution (SED) featuring a synchrotron peak energy above 1 keV. Several EHBLs display a hard spectral index at very high energies (VHE; E>100 GeV), suggesting a gamma-ray SED component peaking significantly above 1 TeV. Such extreme properties challenge current standard emission and acceleration mechanisms. Recent studies have also unveiled intriguing disparities in the temporal characteristics of EHBLs. Some sources seem to display a persistent EHBL behaviour, while others belong to the EHBL family only temporarily.
We will focus on the recent results of the first hard-TeV EHBL catalog. The MAGIC observations are accompanied by an extensive multi-wavelength coverage to obtain an optimal determination of the SED. This allows us to investigate leptonic and hadronic scenarios for the emission. We will also present the recent detection of the EHBL 1RXS0812.0+0237 in the VHE band by MAGIC. Finally, we will discuss a broad multi-wavelength campaign on the BL Lac type object 1ES2344+514, which showed intermittent EHBL characteristics in August 2016.
Speaker: Axel Arbet-Engels (ETH Zürich, Switzerland)
• 201
MAGIC and H.E.S.S. detect VHE gamma rays from the blazar OT081 for the first time: a deep multiwavelength study
OT081 is a luminous blazar well known for its variability in many energy bands.
The very-high-energy (VHE, E > 100 GeV) gamma-ray emission from the source was discovered by MAGIC and H.E.S.S. during flaring activity in July 2016, after a trigger from the LAT onboard the Fermi satellite.
From the analysis of the multiwavelength (MWL) light curves and of the broadband spectral energy distribution (SED), we study the activity of the source, in particular during four identified activity states in the window MJD 57575 to MJD 57600. The intrinsic gamma-ray spectrum can be described by a power law with spectral indices of 3.27 ± 0.44 (MAGIC) and 3.39 ± 0.58 (H.E.S.S.) for the energy ranges 60-300 GeV and 120-500 GeV, respectively.
The combined contemporaneous HE (E > 100 MeV) through VHE SED shows curvature and can be described by a log-parabola shape.
VLBI analysis of the flare reveals the ejection of a superluminal knot and its subsequent passage through a stationary feature as a possible cause of the HE gamma-ray activity.
A simple one-zone synchrotron self-Compton (SSC) model is not sufficient to describe the broadband SED, and external Compton is required to explain the high Compton dominance displayed by the source.
The presence of broad emission lines in the optical spectrum of the source challenges the categorization of OT081 as a BL Lac and, together with the emission scenarios tested, points to the possibility that the source is transitional in nature between a BL Lac and a flat spectrum radio quasar (FSRQ).
Speaker: Marina Manganaro (University of Rijeka, Department of Physics)
• 202
Modeling the non-flaring VHE emission from M87 as detected by the HAWC gamma ray observatory
M87 is a giant radio galaxy located in the Virgo Cluster, known to be a very high energy (VHE) gamma-ray source. As radio galaxies are considered the misaligned low-redshift counterparts of blazars, they are excellent laboratories for testing AGN emission models. M87 has been detected and monitored by Fermi-LAT and several atmospheric Cherenkov telescopes. Recently, the HAWC Collaboration has reported weak evidence of long-term TeV gamma-ray emission from this source. However, HAWC data have the potential to constrain the average VHE emission of sources with complex behavior, like M87, for which the physical origin of the VHE gamma-ray emission is still uncertain. We fitted a lepto-hadronic scenario to the broadband spectral energy distribution of M87 to model its non-flaring VHE emission using HAWC data.
Speaker: Fernando Ureña Mena (Instituto Nacional de Astrofísica, Óptica y Electrónica, Tonantzintla, Puebla, Mexico)
• 203
TeV emission from FSRQs: The first systematic and unbiased survey
Flat spectrum radio quasars (FSRQs) have been detected at TeV energies by ground-based atmospheric Cherenkov telescopes, mainly during flaring states. VERITAS is carrying out the first systematic and unbiased search for TeV emission from a set of FSRQs. Fermi-LAT-detected FSRQs with positive declinations and extrapolated fluxes from the 3FHL catalog exceeding 1% Crab at >200 GeV after correcting for EBL absorption were selected for this survey, resulting in eight targets. Additionally, four FSRQs that were already detected at TeV energies are also included in this survey. In an unbiased fashion, the observations of twelve FSRQs, even without detection, provide the first constraints on the duty cycle of TeV emission from these FSRQs. Constraints on the TeV fluxes from these sources are used to probe the origin of the GeV-to-TeV spectral breaks. From this ongoing survey, the results for the sources observed during the 2020-21 season are discussed in this work.
Speaker: Sonal Ramesh Patel (ZEU-CTA (CTA))
• 204
The luminosity function of TeV-emitting BL Lacs: observations of an HBL sample with VERITAS
High-frequency-peaked BL Lacs (HBLs) dominate the extragalactic TeV sky, with more than 50 objects detected by the current generation of TeV observatories. Still, the properties of TeV-emitting HBLs as a population are poorly understood due to biases introduced by the observing strategies of Cherenkov Telescopes, limiting our ability to estimate the potential contribution of TeV blazars to the diffuse neutrino, gamma-ray, and cosmic-ray backgrounds as well as their role in the late-stage evolution of active galactic nuclei. The VERITAS Collaboration has designed a program to quantify and minimize observational biases by selecting a sample of 36 HBLs and measuring their TeV fluxes at times that are not motivated by high-flux states. First results from this survey, which is the basis for a measurement of the luminosity function of TeV-emitting HBLs, will be presented at the conference.
Speaker: Manel Errando (Washington Uhniversity in St Louis)
• Wednesday, July 14
• Discussion: 04 CR Energy Spectrum | CRI 03
#### 03
• 205
Energy spectrum of cosmic rays measured using the Pierre Auger Observatory
We present the energy spectrum of cosmic rays measured at the Pierre Auger Observatory from $6 \times 10^{15}$ eV up to the most extreme energies, where the accumulated exposure reaches about 80 000 km$^2$ sr yr. The wide energy range is covered with five different measurements, namely: the events detected by the surface detector with zenith angles below 60 degrees; those above 60 degrees, reconstructed with a different method; those collected by a denser array; the hybrid events simultaneously recorded by the surface and fluorescence detectors; and those events in which the signal is dominated by Cherenkov light registered by the high-elevation telescopes. In this contribution, we report updates of the analysis techniques and present the spectrum obtained by combining the five different measurements. Spectral features occurring in the wide energy range covered by the Observatory are discussed.
Speaker: Vladimír Novotný (IPNP, Charles University, Prague)
• 206
Energy spectrum and the shower maxima of cosmic rays above the knee region measured with the NICHE detectors at the TA site
The Non-Imaging CHErenkov Array (NICHE) is a low-energy extension to the Telescope Array (TA) using an array of closely spaced (~100 m) light collectors covering an area of ~2 km$^2$. It is being deployed in the field of view of the fluorescence detector (FD) of the TA Low Energy Extension (TALE) and overlaps with the TALE FD in the energy range above 2 PeV. Cosmic ray air showers with energies of 1-100 PeV will be reconstructed using the lateral distribution of Cherenkov light from the air showers. This method allows the shower energy and the depth of shower maximum (Xmax) to be determined. A prototype of the array, j-NICHE, has been making routine observations with 14 detectors since May 2019. We will present the latest results of NICHE, including the energy spectrum and the shower maximum distribution around the cosmic ray knee.
Speaker: Yugo Omura (Osaka City University)
• 207
The all-particle cosmic ray energy spectrum measured with HAWC
Thanks to recent technological developments, a new generation of experiments with more sensitivity in the energy interval from 10 TeV to 1 PeV, such as HAWC, has been developed. Due to its design and high altitude, the HAWC air shower observatory can provide a bridge between the data from direct and indirect cosmic ray detectors. In 2017 the HAWC collaboration published its first results on the energy spectrum of cosmic rays, in the range from 10 to 500 TeV. This work updates those results by extending the energy interval of the measured all-particle cosmic-ray energy spectrum up to 1 PeV. The energy spectrum was obtained from the analysis of two years of HAWC data using an unfolding method. We employed the QGSJET-II-04 model for the energy calibration and the spectrum reconstruction. The results confirm the presence of a knee-like feature around 45 TeV, which was reported by the HAWC collaboration in 2017.
Speaker: Mr Jorge Antonio Morales-Soto (Instituto de Física y Matemáticas, Universidad Michoacana de San Nicolás de Hidalgo)
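The abstract does not specify the unfolding technique used; as a minimal sketch under that caveat, iterative Bayesian (D'Agostini-style) unfolding with a hypothetical response matrix looks like this:

```python
# Minimal sketch of iterative Bayesian (D'Agostini-style) unfolding.
# R[i, j] = P(measured bin i | true bin j) is a toy response matrix;
# HAWC's actual response and chosen unfolding method are not reproduced here.
import numpy as np

def bayesian_unfold(measured, response, n_iter=4):
    """Unfold a measured spectrum given a column-normalized response."""
    n_true = response.shape[1]
    prior = np.full(n_true, measured.sum() / n_true)          # flat starting prior
    for _ in range(n_iter):
        joint = response * prior                              # (n_meas, n_true)
        posterior = joint / joint.sum(axis=1, keepdims=True)  # P(true j | meas i)
        unfolded = posterior.T @ measured                     # fold counts to truth
        prior = unfolded / response.sum(axis=0)               # efficiency correction
    return prior

R = np.array([[0.8, 0.1, 0.0, 0.0, 0.0],
              [0.2, 0.7, 0.1, 0.0, 0.0],
              [0.0, 0.2, 0.7, 0.1, 0.0],
              [0.0, 0.0, 0.2, 0.7, 0.1],
              [0.0, 0.0, 0.0, 0.2, 0.9]])
truth = np.array([1000.0, 600.0, 300.0, 120.0, 40.0])
print(bayesian_unfold(R @ truth, R).round(1))                 # recovers ~truth
```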
• 208
Protons Spectrum from MAGIC Telescopes data
Imaging Atmospheric Cherenkov Telescopes (IACTs) are designed to detect cosmic gamma rays. As a by-product, IACTs detect the Cherenkov flashes generated by millions of hadronic air showers every night. We present the proton energy spectrum from several hundred GeV to several hundred TeV, retrieved from the hadron-induced showers detected by the MAGIC telescopes. The protons are discriminated from He and heavier nuclei with machine-learning classification. The energy estimation is based on a specially developed deep neural network regressor. In the last decade, deep learning methods have gained much interest in the scientific community for their ability to extract complex relations in data and to process vast quantities of data in a short time. The proton energy spectrum obtained in this work is compared with the spectra obtained by modern cosmic ray experiments.
Speaker: Petar Temnikov (Institute for Nuclear Research and Nuclear Energy Sofia)
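The architecture of the deep regressor is not given in the abstract; as a hedged sketch of the general idea (a neural network regressing energy from shower-image parameters), here is a toy example on synthetic data; the actual MAGIC network, inputs, and training set differ:

```python
# Toy neural-network energy regressor on synthetic Cherenkov-image
# parameters. Illustrates the regression idea only, not MAGIC's model.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 5000
log_e = rng.uniform(2.5, 5.5, n)                 # log10(E/GeV): ~0.3-300 TeV
# image parameters loosely correlated with energy (purely synthetic)
size = log_e + 0.1 * rng.normal(size=n)
length = 0.5 * log_e + 0.2 * rng.normal(size=n)
width = 0.3 * log_e + 0.2 * rng.normal(size=n)
X = np.column_stack([size, length, width])

X_tr, X_te, y_tr, y_te = train_test_split(X, log_e, random_state=0)
reg = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
reg.fit(X_tr, y_tr)
resid = reg.predict(X_te) - y_te
print(f"log10-energy resolution (RMS): {resid.std():.3f}")
```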
• 209
Joint analysis of the energy spectrum of ultra-high-energy cosmic rays as measured at the Pierre Auger Observatory and the Telescope Array
The measurement of the energy spectrum of ultra-high-energy cosmic rays (UHECRs) is of crucial importance to clarify their origin and acceleration mechanisms. The Pierre Auger Observatory in Argentina and the Telescope Array (TA) in the US reported their measurements of UHECR energy spectra observed in the southern and northern hemisphere, respectively. The region of the sky accessible to both Observatories ([-15,+24] degrees in declination) can be used to cross-calibrate the two spectra.
The Auger-TA energy spectrum working group was organized in 2012 and has been working to understand the uncertainties in the energy scale of both experiments, their systematic differences, and differences in the shape of the spectra. In previous works, we reported an overall agreement of the energy spectra measured by the two observatories below 10 EeV, while at higher energies a significant difference remained in the common declination band. We revisit this issue to understand its origin by examining the systematic uncertainties, statistical effects, and other possibilities. We will also discuss the differences in the spectra in different declination bands and a new feature in the spectrum recently reported by the Auger Collaboration.
Speaker: Yoshiki Tsunesada (Osaka City University)
• 210
TA Monocular Spectrum Measurement
The Telescope Array (TA) Cosmic Ray Observatory is the largest cosmic ray detector in the northern hemisphere. TA was built to study ultra-high-energy cosmic rays (UHECRs), cosmic rays with energies above 1 EeV. TA is a hybrid detector, employing both a surface detector array and fluorescence telescopes. We present a measurement of the cosmic ray energy spectrum for energies above $10^{17.5}$ eV using only the fluorescence telescopes. A new, machine-learning-based weather classification scheme was used to select data periods with good weather and ensure the quality of the fluorescence data. The data from the Black Rock Mesa (BR) and Long Ridge (LR) fluorescence telescope stations were analyzed separately in monocular mode, with the calculated fluxes combined into a single spectrum. We present fits of the combined spectrum to a series of broken power law models. A three-times-broken power law gives the best fit. The three breaks suggest an additional feature of the spectrum between the previously observed ankle and the GZK suppression.
Speaker: Douglas Bergman (University of Utah)
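To make the fitted model concrete: a thrice-broken power law with break energies $E_1 < E_2 < E_3$ can be written generically as (generic form only; the fitted break positions and spectral indices are given in the talk)

$$J(E) \propto
\begin{cases}
E^{-\gamma_0}, & E < E_1,\\
E_1^{\gamma_1-\gamma_0}\,E^{-\gamma_1}, & E_1 \le E < E_2,\\
E_1^{\gamma_1-\gamma_0}E_2^{\gamma_2-\gamma_1}\,E^{-\gamma_2}, & E_2 \le E < E_3,\\
E_1^{\gamma_1-\gamma_0}E_2^{\gamma_2-\gamma_1}E_3^{\gamma_3-\gamma_2}\,E^{-\gamma_3}, & E \ge E_3,
\end{cases}$$

with the prefactors fixing $J$ to be continuous across the breaks.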
• 211
Cosmic ray energy spectrum in the 2nd knee region measured by the TALE-SD array
The Telescope Array Low Energy Extension (TALE) experiment in Utah, USA, consists of 10 atmospheric fluorescence telescopes and 80 surface detectors (SDs) spread over an area of 21 $km^2$. The SD array consists of 40 SDs at 400 m spacing and 40 SDs at 600 m spacing. The TALE-SD array was completed in February 2018 and has been in steady operation since then, triggering at a rate of about 30 air shower events in 10 minutes. We have developed software to measure the energy spectrum of cosmic rays from the data obtained by the TALE-SD array. The performance of the software was evaluated using air shower events generated by Monte Carlo simulation. We estimate that, for a primary cosmic ray energy of $10^{18.0}$ eV, the accuracy of the energy determination is 15%, the accuracy of the arrival direction determination is 1.5°, and the aperture is 15 $km^2$ sr. Furthermore, we obtained the energy spectrum of cosmic rays from the data collected by the TALE-SD array from October to the end of November 2019. In this presentation, I will report these results.
Speaker: Koki Sato (Osaka city university)
• 212
Cosmic Ray Energy Spectrum measured by the TALE Fluorescence Detector
The Telescope Array (TA) cosmic rays detector located in the State of Utah in the United States is the largest ultra high energy cosmic rays detector in the northern hemisphere. The Telescope Array Low Energy Extension (TALE) fluorescence detector (FD) was added to TA in order to lower the detector's energy threshold, and has succeeded in measuring the cosmic rays energy spectrum and mass composition down to PeV energies. In this contribution we describe the measurement of the cosmic ray energy spectrum using $\sim4$ years of TALE FD data. The energy spectrum shows features consistent with the "knee" and the "second knee".
Speaker: Tareq AbuZayyad (Loyola University Chicago; University of Utah)
• 213
Preliminary Cosmic Ray Results from the HAWC's Eye Telescopes
The compact imaging air-Cherenkov telescope HAWC’s Eye was developed to operate together with the High-Altitude Water Cherenkov Gamma-Ray Observatory (HAWC). The combination of both detection techniques in a hybrid setup provides a significant improvement in energy and angular resolution, aiming for improved measurements of the cosmic ray composition above 10 TeV and contributing to the physics program of the observatory. Preliminary results of the first hybrid measurements of the cosmic ray spectrum are presented. A second HAWC's Eye telescope was successfully commissioned at the HAWC site in 2019. Two measurement nights since then recorded the data used in this analysis. The HAWC's Eye events were successfully synchronized with HAWC and further used to characterize the hybrid system. A complete simulation of the hybrid configuration was used to develop algorithms to reconstruct the energy and arrival direction of proton-induced air showers. Those algorithms were successfully applied to the measured cosmic ray events to analyze the improved performance of the hybrid detection. The spectrum reconstructed with HAWC's Eye is compatible with the spectrum reconstructed solely from the coincident HAWC data.
Speaker: Florian Rehbein (RWTH Aachen University)
• 214
Recent measurement of the Telescope Array energy spectrum and observation of the shoulder feature in the Northern Hemisphere
The Telescope Array (TA) is a hybrid cosmic ray detector deployed in 2007 in Millard County, Utah, USA, which consists of a surface detector of 507 plastic scintillation counters spanning a 700 km$^2$ area on the ground, overlooked by three fluorescence detector stations. The High Resolution Fly's Eye (HiRes) experiment is a predecessor of TA, which consisted of two fluorescence detector stations operating from 1997 until 2006 at Dugway Proving Ground, Utah, USA, and which was the first cosmic ray experiment with sufficient resolution and exposure to successfully observe the Greisen–Zatsepin–Kuzmin (GZK) suppression at 10$^{19.75}$ eV. In this work, we present an updated TA energy spectrum result and a joint fit of independent spectrum measurements by the TA surface detector, TA fluorescence detector, and HiRes fluorescence detector to a broken power law function, which exhibits the ankle, the GZK suppression, and the new shoulder feature initially seen by the Pierre Auger Observatory in the Southern Hemisphere. HiRes and TA observe the shoulder feature in the Northern Hemisphere at 10$^{19.25}$ eV, with a statistical significance of 5.3 standard deviations.
Speaker: Dmitri Ivanov (University of Utah)
• 215
Study of Energy Measurement of Cosmic Ray Nuclei with LHAASO
The Large High Altitude Air Shower Observatory (LHAASO) is a hybrid extensive air shower (EAS) array with an area of about 1 km$^2$ at an altitude of 4410 m a.s.l. in Sichuan province, China. It contains three sub-detectors: a 1 km$^2$ array (LHAASO-KM2A) composed of electromagnetic particle detectors (ED) and muon detectors (MD); a water Cherenkov detector array (LHAASO-WCDA); and 18 wide field-of-view air Cherenkov telescopes (LHAASO-WFCTA). One of the main scientific goals is measuring the individual energy spectra of cosmic rays from ~30 TeV to a couple of EeV. Up to now, the whole WCDA, three quarters of KM2A, and 16 telescopes have been in operation. In this paper, the energy reconstruction method and results for cosmic ray nuclei based on KM2A and WFCTA simulated events will be shown; the reconstructed energy difference between KM2A and WFCTA is also compared between data and MC.
Speaker: hu liu (Southwest Jiaotong University, China)
• 216
The Energy Spectrum of Cosmic Ray Proton and Helium above 100TeV Measured by LHAASO Experiment
The determination of the energy spectra of different species above 100 TeV is still one of the main challenges in cosmic ray physics. The energy spectrum of each individual component is an important tool to investigate cosmic ray production and propagation mechanisms. Preliminary results for the combined proton and helium energy spectrum, obtained with the combined data of six Cherenkov telescopes, one 150 m × 150 m water Cherenkov detector (WCDA-1), and half of the muon detector and scintillator detector array of the LHAASO experiment, will be reported. The analysis uses the combined data obtained between October 2020 and February 2021. By means of a multiparameter technique, the resolution of the reconstructed energy, shower direction, and shower core location, as well as the composition identification, are improved.
Speaker: Zhiyong You (The Institute of High Energy Physics of the Chinese Academy of Sciences)
• Discussion: 20 GCR long-term modulation | SH 07
#### 07
• 217
Combined heliospheric modulation of galactic protons and helium nuclei from solar minimum to maximum activity related to observations by PAMELA.
The global features of the modulation of galactic cosmic ray protons and helium nuclei are studied in the heliosphere from solar minimum to maximum activity with a comprehensive, three-dimensional drift model and compared to proton and helium observations measured by PAMELA from 2006 to 2014. Combined with accurate very local interstellar spectra (VLIS) for protons and helium nuclei, this provides the opportunity to study in detail how differently the proton-to-helium ratio behaves, over a wide range of rigidities, with increasing solar activity. In particular, the effects at Earth of the differences in their VLIS and mass-to-charge ratio (A/Z), and those caused by the main modulation mechanisms, will be illustrated from solar minimum to maximum activity.
Speaker: Dr Donald Ngobeni (1. Centre for Space Research, North-West University, Potchefstroom, South Africa; 2. School of Physical & Chemical Sciences, North-West University, Mmabatho, South Africa)
• 218
Spectral parameterization of GCR observations and reconstruction of solar modulation parameters derived from the Convection-Diffusion approximation
Galactic cosmic rays (GCRs) entering the heliosphere and propagating towards Earth are subject to various modulation processes including drifts, convection, adiabatic energy changes, and diffusion as a result of the turbulent solar wind. This transport can be described by the Parker equation (Parker, 1965). A widely used first-order approximation of the Parker equation is the Force-Field approximation (FFA), while a similar approximation, the Convection-Diffusion approximation (CDA), is rarely applied. Using PAMELA and AMS-02 observations, the validity of the FFA and the CDA in the energy range 1 MeV to 20 GeV was investigated. The resulting modulation parameters and the effective diffusion coefficient, derived from both approximations over a complete 11-year solar cycle, were compared. Our results show that the CDA appears to be significantly more accurate than the FFA in reproducing the measurements, while the resulting transport parameters are highly dependent on the choice of the local interstellar spectrum and the assumed diffusion coefficient parameters. Based on these findings, we therefore propose to use the CDA as a more suitable approximation than the widely used FFA for space weather applications, especially for dosimetric studies where an accurate GCR parametrization is essential.
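For orientation, the FFA compresses all modulation physics into a single modulation potential φ. A minimal sketch in Python, assuming a user-supplied local interstellar spectrum function `j_lis` (a hypothetical placeholder, not part of this contribution), could look like:

```python
import numpy as np

M_P = 0.938  # proton rest energy [GeV]

def force_field(E_kin, phi, j_lis):
    """Force-Field approximation for protons (Z = 1, A = 1).

    E_kin : kinetic energy at Earth [GeV]
    phi   : modulation potential [GV]; numerically equal to the mean
            energy loss [GeV] for protons
    j_lis : callable returning the local interstellar spectrum
    """
    E_lis = E_kin + phi  # energy before entering the heliosphere
    # momentum-squared ratio, using E*(E + 2m) = (pc)^2 for kinetic energy E
    factor = (E_kin * (E_kin + 2 * M_P)) / (E_lis * (E_lis + 2 * M_P))
    return j_lis(E_lis) * factor
```

The CDA, roughly speaking, instead scales the interstellar intensity by an exponential factor exp(−M), with M the radial integral of the ratio of solar wind speed to diffusion coefficient, which is one reason its fitted parameters depend strongly on the assumed diffusion coefficient.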
Speaker: Moshe Godfrey Mosotho (Center for Space Research, North-West University, South Africa)
• 219
Solar Modulation During the Descending Phase of Solar Cycle 24 Observed with CALET on the International Space Station
The CALorimetric Electron Telescope (CALET) installed on the International Space Station has multiple event trigger modes for measuring cosmic-ray (CR) particles and gamma rays, and the observations of the low-energy CRs have been successfully performed by a Low-Energy Electron (LEE) shower trigger mode that is active only at high geomagnetic latitude. Continuous measurements of low-energy CRs with LEE trigger of the CALET have detected the charge sign dependence of the solar modulation. In this talk, we present the latest results of the low-energy electron fluxes observed by CALET during the descending phase of the solar cycle 24. We also present the long-term variations of count rates of the CR electrons and protons, discussing the charge sign dependence of the solar modulation.
Speaker: Shoko Miyake
• 220
On the transition from 3D to 2D transport equations for a study of long-term cosmic-ray intensity variations in the heliosphere
We consider in our study the exact two-dimensional (2D) transport equation (TPE) for galactic cosmic ray (GCR) intensity in the heliosphere, averaged over longitude, and derived by averaging the full three-dimensional (3D) steady-state TPE over longitude. As we showed before, this exact 2D TPE is equal to that with the averaged 3D TPE coefficients but with a "source term" $Q_{2D}$ due to 3D modulation effects. In particular, $Q_{2D}$ is equal to the longitude convolution of the longitudinal variances of the coefficients as used in the 3D TPE and as applicable to the modulation of GCR intensity. In our previous work we also suggested an expression ($\check{Q}_{2D}$) for $Q_{2D}$ when estimated without solving the 3D TPE, for the simplest case in which the only characteristic heliospheric feature depending on helio-longitude is the polarity of the solar magnetic field.
This study is focused on calculating the term $\tilde{Q}_{2D}$, equal to the same longitude convolution as $Q_{2D}$, when solving numerically the steady-state 3D TPE for the above-mentioned simplest case. For cases of close similarity between $\tilde{Q}_{2D}$ and $\check{Q}_{2D}$, we come to the conclusion that the 2D approach with $\check{Q}_{2D}$ can be used with confidence in the study of the long-term modulation of GCRs instead of the complex way of solving the full 3D TPE for this simplest case. However, if the calculated ($\tilde{Q}_{2D}$) and estimated ($\check{Q}_{2D}$) terms are found to be different, the application of the complex way seems inevitable.
This work is supported in part by RU-SA NRF-RFBR grant No. 19-52-60003 SA-t.
Speaker: Mikhail Krainev (Lebedev Physical Institute, Moscow, Russia)
• 221
Galactic cosmic-ray hydrogen spectra in the 40-300 MeV range measured by the High-Energy Particle Detector (HEPD) on board the CSES-01 satellite during the current solar minimum
The High-Energy Particle Detector (HEPD) onboard the China Seismo-Electromagnetic Satellite (CSES-01) - launched in February 2018 - is a light and compact payload suitable for measuring electrons (3-100 MeV), protons (30-300 MeV), and light nuclei (up to a few hundreds of MeV) with a high energy resolution and a wide angular acceptance. The very good capabilities in particle detection and separation, together with the Sun-synchronous orbit, make HEPD well suited for galactic particle and solar modulation studies. We report here some insights on the data-analysis techniques employed for this kind of study; as a result, semiannual galactic hydrogen differential energy spectra between 40 and 250 MeV are presented for the period between the end of the 24th and the start of the 25th solar activity cycle. Moreover, a brief discussion of the comparison with theoretical spectra obtained from the HelMod 2D Monte Carlo model is also presented.
Speaker: Matteo Martucci (University of Rome Tor Vergata)
• 222
A simulation study of galactic proton modulation from solar minimum to maximum conditions
The observation of various cosmic ray particles at the Earth was carried out with the PAMELA space detector for almost 10 years, from June 2006 to January 2016. The AMS-02 space experiment provides similar cosmic ray data. The purpose of this work is to utilize an available state-of-the-art numerical modulation model for the transport of cosmic rays in the heliosphere to compute the modulation of galactic protons from minimum to maximum solar activity. These modeling results, which simulate realistic heliospheric conditions, are compared to proton observations from PAMELA taken between 2006 and 2014 and to similar AMS-02 observations after 2011. It will be shown how differently modulation mechanisms influence the time evolution of the proton spectra when modulation conditions change from minimum to maximum.
Speaker: Dr Dzivhuluwani Ndiitwani (Centre for Space Research, North-West University, Potchefstroom, South Africa; School of Physical & Chemical Sciences, North-West University, Mmabatho, South Africa)
• 223
A full solar cycle of proton and helium measurements
Time-dependent energy spectra of galactic cosmic rays (GCRs) carry fundamental information regarding their origin and propagation. When observed at the Earth, these spectra are significantly affected by the solar wind and the embedded solar magnetic field that permeates the heliosphere, changing significantly over an 11-year solar cycle. Energy spectra of GCRs measured during different epochs of solar activity provide crucial information for a thorough understanding of solar and heliospheric phenomena. The PAMELA experiment collected data for almost ten years (15 June 2006 - 23 January 2016), including the minimum phase of solar cycle 23 and the maximum phase of solar cycle 24. Here, we present spectra for protons and helium nuclei measured by the PAMELA instrument from 2006 to 2014. Time profiles of the proton-to-helium flux ratio at various rigidities are also presented, allowing the study of all characteristic features resulting from their different mass-to-charge ratio and the difference in the shape of their respective local interstellar spectra.
Speaker: Nadir Marcelli (INFN sezione di Roma)
• 224
Galactic Cosmic-Ray Intensities During three Solar Minima
The Cosmic Ray Isotope Spectrometer (CRIS) and Solar Isotope Spectrometer (SIS) on the Advanced Composition Explorer (ACE) have measured energy spectra of cosmic-ray elements and isotopes since launch in 1997. We report energy spectra of abundant elements from C to Ni during solar minimum conditions from the 1997, 2009, and 2019-2020 solar minima and compare peak intensities with solar-wind conditions in these three minima. In 2010 we reported that peak intensities from the 2009 solar minimum were the highest of the space era (coinciding with the weakest interplanetary magnetic field of the space era). During November 2019 - January 2020, ACE data show that 200 MeV/nuc intensities of C-Fe reached, and in some cases exceeded, those in 2009. This talk reports GCR intensities from 1997-2021 and discusses their dependence on solar-wind properties.
Speaker: Dr Richard Mewaldt (California Institute of Technology)
• 225
Solar Modulation of Galactic Cosmic-Ray Antiprotons
In recent years, several new measurements of the antiproton component of the cosmic radiation have become available. These measurements have significantly improved the existing statistics, extending the explored energy region from a few tens of MeV up to hundreds of GeV. These measurements are particularly relevant to understand the propagation of cosmic rays in the Galaxy and in the investigation of the nature of Dark Matter. However, an unambiguous interpretation of the experimental data requires a proper reconstruction of the very Local Interstellar Spectrum (LIS) of cosmic-ray antiprotons. Since these measurements are performed deep inside the heliosphere, solar modulation, as a highly time- and space-dependent process which follows the 11-year solar activity cycle, has to be taken into account appropriately. In this work, using a state-of-the-art 3D solar modulation model, a new LIS for cosmic-ray antiprotons and its related uncertainties are presented. This LIS is derived to match, when modulated, the data sets from AMS-02, PAMELA and BESS.
Speaker: Riccardo Munini
• 226
Study Galactic Cosmic Ray Modulation with AMS-02 observation
The accurate measurements of the galactic cosmic ray (GCR) fluxes as functions of time and energy by the Alpha Magnetic Spectrometer (AMS) give us unique information to search for dark matter, to study the dynamics of solar modulation, to constrain the parameters in modulation models, and to improve the precision of radiation dose prediction in ongoing deep space exploration.
The transport of low-rigidity GCRs (<30 GV) in the heliosphere is described by the Parker equation. This equation is solved by a stochastic differential equation approach in the numerical model. The input parameters in the model (solar wind speed, tilt angle, magnetic intensity and polarity) are obtained from observations near the Earth. The time-varying parameters (diffusion coefficient, drift coefficient) are usually tuned manually. This method only gives results that look good, but cannot give the uncertainty of the parameters.
In this study, the Markov chain Monte Carlo (MCMC) technique is used to determine the time varying posterior probability distribution of parameters related to the GCR transport equation. In Bayesian statistics, MCMC is a class of samplers in which we can simulate draws that are slightly dependent and are approximately from a posterior distribution. The Metropolis-Hastings algorithm is used to implement the MCMC sampler. Compared to the traditional method where the likelihood function is evaluated on the grid of points in parameter space, the MCMC sampler is low resource consumption as it is insensitive to the dimensionality of the parameter space.
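To make the sampling step concrete, a minimal Metropolis-Hastings loop can be sketched as below, assuming a hypothetical `log_post` function that wraps the numerical modulation model and the data likelihood:

```python
import numpy as np

def metropolis_hastings(log_post, theta0, step, n_samples, rng=None):
    """Minimal Metropolis-Hastings sampler with a Gaussian proposal.

    log_post : callable returning the log posterior density
    theta0   : initial parameter vector
    step     : proposal standard deviations (per parameter)
    """
    rng = rng or np.random.default_rng()
    theta = np.asarray(theta0, float)
    lp = log_post(theta)
    chain = np.empty((n_samples, theta.size))
    for i in range(n_samples):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        # accept with probability min(1, posterior ratio)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain
```

Each call to `log_post` hides one forward run of the transport model, which is why the insensitivity of MCMC to parameter-space dimensionality matters in practice.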
Speaker: xiaojian song (Shandong Institute of advanced technology)
• Discussion: 34 Radio Detection of Neutrinos | NU 05
#### 05
• 227
Sensitivity of a radio array embedded in a deep Gen2-like optical array
Speaker: Abby Bishop
• 228
The Askaryan Radio Array (ARA) is a ground-based radio detector at the South Pole designed to capture Askaryan emission from ultra-high energy neutrinos interacting within the Antarctic ice. The newest ARA station has been equipped with a phased array trigger, in which radio signals in multiple antennas are summed in predetermined directions prior to the trigger. In this way, impulsive signals add coherently, while noise likely does not, allowing the trigger threshold to be lower than a traditional ARA station. In this talk, I will discuss our ability to analyze these low-threshold events, using data from the 2019 season to illustrate new analysis techniques that yield high efficiency for low-SNR signals. I will also discuss how these analysis techniques could be applied to next-generation radio detectors.
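The coherent-sum idea behind the phased array trigger can be sketched in a few lines; the per-beam integer sample delays below are placeholders that would in practice come from the assumed plane-wave arrival direction and the antenna geometry:

```python
import numpy as np

def delay_and_sum(waveforms, delays):
    """Coherently sum antenna waveforms for one beam direction.

    waveforms : (n_antennas, n_samples) array of digitized traces
    delays    : per-antenna integer sample delays for this beam
    """
    n_ant, n_samp = waveforms.shape
    beam = np.zeros(n_samp)
    for trace, d in zip(waveforms, delays):
        # align the expected plane-wave arrival; np.roll wraps around,
        # a simplification of the buffered delays used in hardware
        beam += np.roll(trace, d)
    return beam

# A plane-wave pulse adds coherently (amplitude ~ n_ant), while thermal
# noise adds incoherently (~ sqrt(n_ant)), so the effective SNR of the
# beam grows as sqrt(n_ant), allowing a lower trigger threshold.
```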
Speaker: Kaeli Hughes (The University of Chicago)
• 229
Hardware Development for the Radio Neutrino Observatory in Greenland (RNO-G)
The Radio Neutrino Observatory in Greenland (RNO-G) is designed to make the first observations of ultra-high energy neutrinos at energies above 10 PeV, playing a unique role in multi-messenger astrophysics as the world's largest in-ice Askaryan radio detection array. The experiment will be composed of 35 autonomous stations deployed over a 5 x 6 km grid near NSF Summit Station in Greenland. The electronics chain of each station is optimized for sensitivity and low power, incorporating 100 - 600 MHz RF antennas at both the surface and in ice boreholes, low-noise amplifiers, custom RF-over-fiber systems, and an FPGA-based phased array trigger. Each station will operate at 25 W, allowing for a live time of ~70% from a solar power system. The communications system is composed of a high-bandwidth LTE network and an ultra-low power LoRaWAN network. I will also present the calibration and DAQ systems, as well as the status of the first deployment of 10 stations in Summer 2021.
Speaker: Daniel Smith (University of Chicago)
• 230
Improving Radio Frequency Detectors using High Performance Programmable Logic Devices
An increasing number of experiments are targeting GHz-bandwidth impulsive radiation induced by high energy neutrinos in ice or by high energy cosmic ray air showers. Beamforming triggers improve detection prospects at low signal-to-noise ratio (SNR), since the effective SNR scales as the square root of the number of phased array antennas in a coherent sum. However, this also brings high technological requirements, with an increasing number of narrower beams required, while sub-nanosecond synchronisation must be maintained across the antennas summed in each beam. A prototype digital beamforming trigger is developed using Radio Frequency Systems-on-Chip (RFSoCs), an adaptable radio platform leveraging the advantages of Field Programmable Gate Arrays (FPGAs). Findings are presented including power consumption, the number of beams that can be formed per chip, trade-offs between resource usage and trigger efficiency, and the use of programmable logic for flexible digital filtering capabilities.
Speaker: Cheng Xie (University College London)
• 231
Effects of raytracing on neutrino simulations using RadioPropa
The in-ice radio detection of signals caused by the interaction of high energy neutrinos in vast natural media like polar ice is a promising technique to detect neutrinos at energies beyond the ones measured thus far. Because of the large attenuation length of radio signals in ice, O(1 km), sparse arrays can be built, implying large effective volumes.
The simulations of effective volume calculations and reconstructions of the waveforms depend strongly on the ice modelling. Thus far, for simplification, mainly analytically solvable exponential models of the ice are used. This allows for computationally fast raytracing. More elaborate methods, like FDTD (solving the Maxwell equations on a full grid), can incorporate all ice properties, in particular allowing for rays to reflect within the ice due to density discontinuities, or allowing rays to travel horizontally through the firn (upper 200 m). However, due to its heavy computing load, this method is impractical for large-scale simulations and reconstructions.
RadioPropa is a numerical ray-tracer that was started to accommodate more complex ice models with acceptable speed. It is forked from the cosmic ray propagation code CRPropa. Presented here are waveform simulations and reconstructions (with NuRadioMC and NuRadioReco, respectively) using RadioPropa. This contribution shows the effects of a non-exponential ice model on the radio waveforms and the implications for reconstruction. Also, the implementation of horizontal propagation due to a non-smooth ice model and its effect on the neutrino waveforms is shown.
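For reference, the analytically solvable case mentioned above is a horizontally stratified exponential index-of-refraction profile. The sketch below (parameter values are indicative only, not those of any specific site) steps a ray through such a profile using the Snell invariant:

```python
import numpy as np

def n_exp(z, n_deep=1.78, delta_n=0.43, z0=71.0):
    """Exponential index-of-refraction profile; z <= 0 below the
    surface. Parameter values are illustrative placeholders."""
    return n_deep - delta_n * np.exp(z / z0)

def trace_ray(z_start, theta_start, dz=0.5, z_stop=0.0):
    """Step a ray upward through a horizontally stratified medium
    using the Snell invariant n(z) * sin(theta) = const, where theta
    is measured from the vertical."""
    invariant = n_exp(z_start) * np.sin(theta_start)
    z, x = z_start, 0.0
    path = [(x, z)]
    while z < z_stop:
        s = invariant / n_exp(z)
        if s >= 1.0:             # turning point: the ray bends back down
            break
        theta = np.arcsin(s)
        x += dz * np.tan(theta)  # horizontal advance per vertical step
        z += dz
        path.append((x, z))
    return np.array(path)
```

Because n decreases toward the surface, rays curve downward and can turn over before reaching the surface, which is what creates the "shadow zones" that more realistic, non-smooth ice models modify.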
Speaker: Mr Bob Oeyen (Ghent University)
• 232
Broadband RF Phased Array Design for UHE neutrino detection
Phased array radio-frequency (RF) systems have a wide variety of applications in engineering and physics research. Phased array designs are proposed as a trigger system for Askaryan-class in-situ ultra-high energy (UHE) neutrino detectors. Located in Antarctica, these detectors will record RF pulses generated by UHE neutrinos via the Askaryan effect. Modelling the response of phased arrays is straightforward in an environment with a uniform index of refraction. However, some detector designs call for phased array deployment at depths where the index of refraction is changing. One solution for computing the response of phased arrays in such an environment is computational electromagnetics with the finite-difference time-domain method (FDTD). Using the open-source MIT Electromagnetic Equation Propagation (MEEP) package, a set of phased array designs is presented and compared to theoretical expectations. Precise matches between MEEP simulations and radiation pattern predictions at different frequencies and beam angles are demonstrated. Given that the computations match the theory, the effect of embedding a phased array within a medium of varying index of refraction is then studied. Understanding the effect of a varying index on phased arrays is critical for proposed UHE neutrino observatories which rely on phased arrays embedded in natural ice. Future work will develop phased array concepts with parallel MEEP for speed and complexity enhancements that account for the 3D shape of the dipole antennas proposed as the physical RF elements for in-situ detectors.
Speaker: Jordan Hanson (Whittier College)
• 233
High-energy neutrinos with energies above a few $10^{16}~$eV can be measured efficiently with in-ice radio detectors, which complement optical detectors such as IceCube at higher energies. Several pilot arrays have successfully explored the radio technology in Antarctica. Because of the low flux and interaction cross-section of neutrinos, it is vital to increase the sensitivity of the radio detector as much as possible. In this manuscript, different approaches to trigger on high-energy neutrinos are systematically studied and optimized. We find that the sensitivity can be improved substantially (by more than 50% between $10^{17}~$eV and $10^{18}~$eV) by simply restricting the bandwidth in the trigger to frequencies between 80 and 200 MHz instead of the currently used 80 MHz to $1~$GHz bandwidth. We also compare different trigger schemes that are currently being used (a simple amplitude threshold, a high/low threshold trigger and a power-integration trigger) and find that the scheme that performs best depends on the dispersion of the detector. These findings inform the detector design of future Askaryan detectors and can be used to increase the sensitivity to high-energy neutrinos significantly without any additional costs. The findings also apply to the phased array trigger concept.
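A power-integration trigger of the kind compared here can be sketched as band-limiting the trace and thresholding the power summed in a sliding window; the band, window length, and threshold below are illustrative placeholders, not the values studied in this contribution:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def power_integration_trigger(trace, fs, window_ns=10.0,
                              band=(80e6, 200e6), threshold=3.5):
    """Power-integration trigger sketch: band-limit the trace, then
    compare the power summed in a sliding window against a threshold
    expressed in units of the typical noise power in that window."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfilt(sos, trace)
    n_win = max(1, int(window_ns * 1e-9 * fs))
    power = np.convolve(filtered**2, np.ones(n_win), mode="valid")
    noise_level = np.median(power)  # robust estimate of noise power
    return np.any(power > threshold * noise_level)
```

Restricting `band` to the lower frequencies, as proposed in the abstract, concentrates the trigger on the part of the spectrum where the Askaryan signal is strongest relative to the noise.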
Speaker: Christian Glaser (Uppsala University, Sweden)
• 234
A novel trigger based on neural networks for radio neutrino detectors
The ARIANNA experiment is a proposed Askaryan detector designed to record radio signals induced by neutrino interactions in the Antarctic ice. Because of the low neutrino flux at high energies, the physics output is limited by statistics. Hence, an increase in sensitivity will significantly improve the interpretation of data and will allow us to probe new parameter spaces. The trigger thresholds are limited by the rate of triggering on unavoidable thermal noise fluctuations. Here, we present a real-time thermal noise rejection algorithm that will allow us to lower the thresholds substantially and increase the sensitivity by up to a factor of two compared to the current ARIANNA capabilities. A deep learning discriminator, based on a Convolutional Neural Network (CNN), was implemented to identify and remove a high percentage of thermal events in real time while retaining most of the neutrino signal. We describe a CNN that runs efficiently on the current ARIANNA microcomputer and retains 94% of the neutrino signal at a thermal rejection factor of $10^5$. Finally, we report on the experimental verification from lab measurements.
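A minimal sketch of such a waveform classifier, written in PyTorch with placeholder layer sizes (not the actual ARIANNA network), is:

```python
import torch
import torch.nn as nn

class WaveformCNN(nn.Module):
    """Illustrative 1D CNN separating impulsive signals from thermal
    noise; the architecture here is a placeholder sketch."""
    def __init__(self, n_samples=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(8, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * (n_samples // 16), 1),
            nn.Sigmoid(),  # P(signal); the cut value fixes the
                           # thermal-noise rejection factor
        )

    def forward(self, x):  # x: (batch, 1, n_samples)
        return self.classifier(self.features(x))
```

The operating point is then a trade-off: moving the cut on the output probability trades signal efficiency (the 94% quoted above) against the thermal rejection factor (the $10^5$ quoted above).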
Speaker: Astrid Anker
• 235
The Askaryan Radio Array (ARA) is a gigaton-size neutrino radio telescope located near the geographic South Pole. ARA has five independent stations designed to detect Askaryan emission coming from the interaction between ultra-high energy neutrinos (> 10 PeV) and the Antarctic ice. Each station consists of 16 antenna clusters deployed in a matrix shape at ~200 m depth in the ice. A simulated neutrino template, including the detector response model, was implemented as a new search technique for reducing background noise and increasing the vertex reconstruction resolution. The template is designed to scan through the data by the matched filter method, inspired by LIGO, looking for low-SNR neutrino signatures and ultimately aiming to lower the detector's energy threshold. I will present the estimated sensitivity improvements to ARA analyses through the application of the template technique, with results from simulation and data.
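The core of a matched-filter template search reduces to a normalized cross-correlation; a minimal NumPy sketch (ignoring the frequency-domain noise weighting that a full LIGO-style filter would use) is:

```python
import numpy as np

def matched_filter_snr(data, template, noise_rms):
    """Slide a unit-energy template over the data; the peak of the
    correlation divided by the noise RMS gives the matched-filter SNR."""
    t = template / np.sqrt(np.sum(template**2))  # unit-energy template
    corr = np.correlate(data, t, mode="valid")
    return np.max(np.abs(corr)) / noise_rms
```

Because the template concentrates the full pulse energy into a single correlation peak, signals too weak for a sample-by-sample amplitude cut can still stand out above the noise, which is what lowers the effective energy threshold.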
Speaker: Myoungchul Kim (Chiba University)
• 236
Application of parabolic equation methods to in-ice radiowave propagation for ultra high energy neutrino detection experiments
Many ultra high energy neutrino detection experiments seek radiowave signals from neutrino interactions deep within polar ice, and an understanding of in-ice radiowave propagation is therefore of critical importance. The parabolic equation (PE) method for modeling the propagation of radio waves is a suitable intermediate between ray tracing and finite-difference time domain (FDTD) methods in terms of accuracy and computation time. The RET collaboration has developed the first modification of the PE method for use in modeling in-ice radiowave propagation for ultra high energy cosmic ray and neutrino detection experiments. In this presentation we will detail the motivation for the development of this technique, the process by which it was modified for in-ice use, and showcase the accuracy of its results by comparing to FDTD and ray tracing.
Speaker: Cade Sbrocco (The Ohio State University)
• 237
Capabilities of the ARIANNA Neutrino Pointing Resolution, with Implications for Future Ultra-high Energy Neutrino Astronomy
We describe a radio-frequency polarization measurement by the ARIANNA surface station using a residual hole from the South Pole Ice Core (SPICEcore) Project. Radio pulses were emitted from a transmitter located down to 1.7 km below the snow surface. After deconvolving the raw signals for the detector response and attenuation from propagation through the ice, the signal pulses show no significant distortion and agree with a reference measurement of the emitter made in an anechoic chamber. The direction to the transmitted radio pulse was measured with an angular resolution of 0.37 degrees [statistical error]. For polarization, the statistical error of the polarization vector is depth dependent and below 1 degree. In addition, a slow systematic error as a function of depth is 2.7 degrees. Neither the direction nor the polarization measurement shows a significant offset as a function of depth relative to expectation.
We also report on the results of a simulation study of the ARIANNA neutrino direction and energy resolution. The software tool NuRadioMC was used to reconstruct the polarization and viewing angle to determine the neutrino direction. Multiple models of Askaryan radiation and detector sites, along with a range of neutrino energies, were tested. The neutrino space angle resolution was determined to be below 3 degrees, which is comparable to the systematic polarization uncertainty. Therefore it is expected that the polarization resolution, which is the dominant contribution to the neutrino space angle resolution, will be improved in future studies by determining and eliminating systematic effects. Finally, the fractional neutrino energy resolution is reported at 0.25, which is below the inelasticity limit.
Speaker: Steven Barwick (University of California Irvine)
• 238
Deep learning reconstruction of the neutrino energy with a shallow Askaryan detector
Cost-effective in-ice radio detection of neutrinos above a few $10^{16}~$eV has been explored successfully in pilot arrays. A large radio detector is currently being constructed in Greenland with the potential to measure the first cosmogenic neutrino, and an order-of-magnitude more sensitive detector is being planned with IceCube-Gen2. We present the first end-to-end reconstruction of the neutrino energy from radio detector data. NuRadioMC was used to create a large data set of 40 million events of expected radio signals that are generated via the Askaryan effect following a neutrino interaction in the ice for a broad range of neutrino energies between 100 PeV and 10 EeV. We simulated the voltage traces that would be measured by the five antennas of a shallow detector station in the presence of noise. We trained a deep neural network to determine the shower energy directly from the simulated experimental data and achieve a resolution better than a factor of two (STD < 0.3 in $\log_{10}(E)$), which is below the irreducible uncertainty from inelasticity fluctuations. We present the model architecture and discuss the generalizability of the model in the presence of systematic uncertainties in the simulation code. This method will enable Askaryan detectors to measure the neutrino energy.
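The quoted figure of merit can be computed directly from the network outputs; a minimal sketch:

```python
import numpy as np

def log_energy_resolution(E_true, E_pred):
    """Spread of the log10 ratio between predicted and true shower
    energy: the mean gives the bias, the standard deviation the
    resolution (an STD of 0.3 corresponds to a factor ~2 in energy)."""
    delta = np.log10(E_pred) - np.log10(E_true)
    return np.mean(delta), np.std(delta)
```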
Speaker: Christian Glaser (Uppsala University, Sweden)
• 239
Direction reconstruction for the Radio Neutrino Observatory Greenland
The Radio Neutrino Observatory Greenland (RNO-G) is planned to be the first large-scale implementation of the in-ice radio detection technique. It targets astrophysical as well as cosmogenic neutrinos with energies above 10 PeV. The deep component of a single RNO-G station consists of three strings with antennas to capture horizontal as well as vertical polarization. This contribution shows a model-based approach to reconstruct the direction of the neutrinos with an RNO-G station. The timing of the waveforms is used to reconstruct the vertex position, and the shape and amplitude of the waveform are used to reconstruct the viewing angle as well as the polarization, which together yield the zenith and azimuth direction of the neutrino. We present the achieved angular resolution and discuss implications for the science of RNO-G.
Speaker: Ms Ilse Plaisier (DESY, Zeuthen)
• 240
Discovering the Highest Energy Neutrinos with the Payload for Ultrahigh Energy Observations (PUEO)
The Payload for Ultrahigh Energy Observations (PUEO) is a NASA Long-Duration Balloon mission that has been selected for concept development. PUEO will have unprecedented sensitivity to ultra-high energy neutrinos above 10^18 eV. PUEO will be sensitive both to Askaryan emission from neutrino-induced cascades in Antarctic ice and to geomagnetic emission from upward-going air showers that result from tau neutrino interactions. PUEO is also especially well-suited for point source and transient searches. Compared to its predecessor ANITA, PUEO achieves better than an order-of-magnitude improvement in sensitivity and lowers the energy threshold for detection by implementing a coherent phased array trigger, adding more channels, optimizing the detection bandwidth, and implementing real-time filtering. I will discuss the science reach and plans for PUEO, leading up to a 2024 launch.
Speaker: Abigail Vieregg (University of Chicago)
• 241
Evolving Antennas for Ultra-High Energy Neutrino Detection
Evolutionary algorithms are a type of artificial intelligence that utilize principles of evolution to efficiently determine solutions to defined problems. These algorithms are particularly powerful at finding solutions that are too complex to solve with traditional techniques and at improving solutions found with simplified methods. The GENETIS collaboration is developing genetic algorithms (GAs) to design antennas that are more sensitive to ultra-high energy neutrino-induced radio pulses than current detectors. Improving antenna sensitivity is critical because UHE neutrinos are extremely rare and require massive detector volumes with stations dispersed over hundreds of km$^2$. The GENETIS algorithm evolves antenna designs using simulated neutrino sensitivity as a measure of fitness by integrating with XFdtd, a finite-difference time-domain modeling program, and with simulations of neutrino experiments. The best antennas will then be deployed at the RNO-G experiment in Greenland for initial testing. The GA is predicted to create antennas that improve on the designs used in the existing ARA experiment by more than a factor of 2 in neutrino sensitivity. This research could improve antenna sensitivities in future experiments and thus accelerate the discovery of UHE neutrinos. This is the first time that antennas have been designed using GAs with a fitness score based on a physics outcome, which will motivate the continued use of GA-designed instrumentation in astrophysics and beyond. This proceeding will report on advancements to the algorithm, steps taken to improve GA performance, the latest results from our evolutions, and the manufacturing roadmap.
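A generic GA loop of the kind described, with a hypothetical `evaluate_fitness` standing in for the XFdtd-plus-neutrino-simulation scoring chain, might be sketched as:

```python
import numpy as np

rng = np.random.default_rng(42)

def evolve(pop, evaluate_fitness, n_gen=50, mut_sigma=0.1, elite=2):
    """Generic GA loop; `evaluate_fitness` is a stand-in for scoring
    each antenna genome with full electromagnetic + neutrino simulation.
    `pop` is a (n_individuals, genome_length) array."""
    for _ in range(n_gen):
        scores = np.array([evaluate_fitness(g) for g in pop])
        order = np.argsort(scores)[::-1]          # best first
        parents = pop[order[: len(pop) // 2]]
        children = []
        while len(children) < len(pop) - elite:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, a.size)         # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child += mut_sigma * rng.standard_normal(child.size)  # mutate
            children.append(child)
        pop = np.vstack([pop[order[:elite]], children])  # keep the elite
    return pop
```

The expensive step is the fitness evaluation: each genome requires a full detector simulation, which is why the population-based, embarrassingly parallel structure of a GA suits this problem.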
Speaker: Julie Rolla (Ohio State Univerisity)
• 242
Neutrino direction and flavor-id reconstruction from radio detector data using deep learning
With the construction of RNO-G and plans for IceCube-Gen2, neutrino astronomy at EeV energies is at the horizon for the next years. Here, we determine the neutrino pointing capabilities and explore the sensitivity to the neutrino flavor for an array of shallow radio detector stations. The usage of deep learning for event reconstruction is enabled through recent advances in simulation codes that allow the simulation of realistic training data sets. A large data set of expected radio signals for a broad range of neutrino energies between 100 PeV and 10 EeV is simulated using NuRadioMC. A deep neural network is trained on this low-level data and we find a direction resolution of a few degrees for all triggered events. We present the model architecture, how we optimized the model, and how robust the model is against systematic uncertainties. Furthermore, we explore the capabilities of a radio neutrino detector to determine the flavor id.
Speaker: Mr Sigfrid Stjärnholm (Uppsala University, Sweden)
• 243
Polarization Reconstruction of Cosmic Rays with the ARIANNA Neutrino Radio Detector
The ARIANNA detector is designed to detect neutrinos with energies above $10^{16}$ eV. Due to the similarity in the generated radio signals, cosmic rays are often used as test beams for neutrino detectors. Some ARIANNA detector stations are equipped with antennas capable of detecting air showers. The radio emission properties of air showers are well understood, and the polarization of the radio signal can be predicted from the arrival direction with high precision. For this reason, cosmic rays can be used as a proxy to assess the reconstruction capabilities of the ARIANNA neutrino detector. We report on dedicated efforts to reconstruct the polarization of cosmic-ray radio pulses. A total of 245 cosmic rays were identified from over 90,000 triggered events collected between Dec 1, 2018 and Mar 15, 2019. A cut was placed on these events requiring a signal-to-noise ratio (SNR) of at least 5 in all upward-facing channels. The polarization of these cosmic rays was reconstructed with a resolution of 4 degrees (68% containment), which agrees with the expected value obtained from simulation.
Speaker: Mr Leshan Zhao
• 244
Science case and detector concept for ARIANNA high energy neutrino telescope at Moore's Bay, Antarctica
The proposed ARIANNA neutrino detector, located at sea level on the Ross Ice Shelf, Antarctica, consists of 200 autonomous and independent detector stations separated by 1 kilometer in a uniform triangular mesh. The primary science mission of ARIANNA is to search for sources of neutrinos with energies greater than 100 PeV, complementing the reach of IceCube. An ARIANNA observation of a neutrino source would provide strong insight into the enigmatic sources of cosmic rays. ARIANNA observes the radio emission from high energy neutrino interactions in the Antarctic ice. Among radio-based concepts under current investigation, ARIANNA would uniquely survey the vast majority of the southern sky at any instant in time, and an important region of the northern sky, by virtue of its location on the surface of the Ross Ice Shelf in Antarctica. The broad sky coverage is specific to the Moore's Bay site, and makes the ARIANNA surface-based technology ideally suited to contribute to the multi-messenger thrust by the US National Science Foundation, Windows on the Universe – Multi-Messenger Astrophysics, providing capabilities to observe sources that vary strongly over time. The ARIANNA architecture is designed to measure the angular direction to 3 degrees and the shower energy to 25% for every neutrino candidate. These high quality neutrino events are expected to play an important role in the pursuit of multi-messenger observations of astrophysical sources. The surface-based architecture serves to inform future projects of much larger scale, such as the IceCube-Gen2 project.
Speaker: Steven Barwick (University of California Irvine)
• 245
Sensitivity studies for the IceCube-Gen2 radio array
The IceCube Neutrino Observatory at the South Pole has measured the diffuse astrophysical neutrino flux up to ~PeV energies and is starting to identify first point source candidates.
The next generation facility, IceCube-Gen2, aims at extending the accessible energy range to EeV in order to measure the continuation of the measured astrophysical spectrum, to identify neutrino sources, and to search for a cosmogenic neutrino flux. As part of IceCube-Gen2, a radio array is foreseen that is sensitive to the Askaryan emission of neutrinos beyond ~5 PeV. Surface and deep antenna stations have different benefits in terms of effective area, resolution, and the capability to reject backgrounds from cosmic-ray air showers, and may be combined to reach the best sensitivity. The optimal detector configuration is still to be identified.
This contribution presents the full-array simulation efforts for a combination of deep and surface antennas, and compares different design options with respect to their sensitivity to fulfill the science goals of IceCube-Gen2.
• 246
The Calibration of the Geometry and Antenna Delay in Askaryan Radio Array Station 4 and 5
The Askaryan Radio Array (ARA) at the South Pole is designed to detect the radio signals produced by ultra high-energy cosmic neutrino interactions in the ice. There are 5 independent ARA stations, one of which (ARA5) includes a low-threshold phased array trigger string. The Data Acquisition System in all ARA stations is equipped with the Ice Ray Sampler second generation (IRS2) chip, a custom-made, application-specific integrated circuit (ASIC) for high-speed sampling and digitisation. In this contribution, we describe the methodology used to calibrate the IRS2 chip and the geometry, namely the relative timing between antennas and their geometrical positions, for ARA stations 4 and 5. Our calibration allows for proper timing correlations between incoming signals, which is crucial for radio vertex reconstruction and thus detection of ultra high-energy neutrinos. With this methodology, we achieve a signal timing precision at the sub-nanosecond level and an antenna position precision within 10 cm.
Speaker: Dr Paramita Dasgupta (Post Doctoral Fellow at the Université libre de Bruxelles, Brussels)
• 247
The Giant Radio Array for Neutrino Detection (GRAND) project
The GRAND project aims to detect ultra-high-energy neutrinos, cosmic rays and gamma rays with an array of 200,000 radio antennas over 200,000 km$^2$, split into ~20 sub-arrays of ~10,000 km$^2$ deployed worldwide. The strategy of GRAND is to detect air showers above 10^17 eV that are induced by the interaction of ultra-high-energy particles in the atmosphere or in the Earth's crust, through the associated coherent radio emission in the 50-200 MHz range. In its final configuration, GRAND plans to reach a neutrino sensitivity of ~10^{-10} GeV cm^-2 s^-1 sr^-1 above 5x10^{17} eV combined with a sub-degree angular resolution. GRANDProto300, the 300-antenna pathfinder array, is planned to start data taking in 2021. It aims at demonstrating autonomous radio detection of inclined air showers, and at studying cosmic rays around the transition between Galactic and extra-Galactic sources. We present preliminary designs and simulation results, plans for the ongoing, staged approach to construction, and the rich research program made possible by the proposed sensitivity and angular resolution.
Speaker: Kumiko Kotera (Institut d'Astrophysique de Paris)
• Discussion: 55 Ultra-High-Energy Gamma-Ray Sources and PeVatrons | GAI 04
#### 04
• 248
Discovery of 100 TeV gamma-rays from HESS J1702-420: a new PeVatron candidate
The identification of active PeVatrons, hadronic particle accelerators reaching the knee of the cosmic-ray spectrum (at an energy of a few PeV), is crucial to understand the origin of cosmic rays in the Galaxy. In this context, we report on new H.E.S.S. observations of the PeVatron candidate HESS J1702-420, which reveal the presence of gamma-rays up to 100 TeV. This is the first time in the history of H.E.S.S. that photons with such high energy are clearly detected. Remarkably, the new deep observations allowed the discovery of a new gamma-ray source component, called HESS J1702-420A, that was previously hidden under the bulk emission traditionally associated with HESS J1702-420. This new object has a power-law spectral slope < 2 and a gamma-ray spectrum that, extending with no sign of curvature up to 100 TeV, makes it an excellent candidate site for the presence of PeV-energy cosmic rays. This discovery brings new information to the ongoing debate on the nature of the unidentified source HESS J1702-420, one of the most compelling PeVatron candidates in the gamma-ray sky, and on the origin of Galactic cosmic rays.
Speaker: Luca Giunti
• 249
Resolving the origin of very-high-energy gamma-ray emission from the PeVatron candidate SNR G106.3+2.7 using MAGIC telescopes
The supernova remnant (SNR) G106.3+2.7 is associated with a 100 TeV gamma-ray source reported by HAWC and is thus a promising PeVatron candidate. However, because of the poor angular resolution of HAWC, it is difficult to pinpoint the origin of the 100 TeV source. Because the SNR contains an energetic pulsar wind nebula (PWN) dubbed Boomerang and powered by the pulsar PSR J2229+6114, it is unclear whether the gamma-ray emission originates from the SNR or PWN complex and whether it is caused by hadronic or leptonic processes. We observed gamma rays above 200 GeV in the vicinity of the SNR G106.3+2.7 using the MAGIC telescopes for ~120 hours in total between May 2017 and August 2019, with an angular resolution of 0.07 – 0.1 degrees, which is unprecedented for this object at these energies. An extended gamma-ray emission spatially correlated with the radio continuum emission at the head and tail of SNR G106.3+2.7 was detected using the MAGIC telescopes. We find a hint of gamma-ray emission above 10 TeV only from the SNR tail region, while no significant emission above 5 TeV is found at the SNR head region containing the Boomerang PWN. Therefore, the gamma rays above 35 TeV detected with the air shower experiments are, likely, mainly emitted from the SNR tail region. In this presentation we discuss the morphology of the gamma-ray emission from this complex region and attempt self-consistent multiwavelength modeling of the energy spectrum from the different sources inside it.
Speaker: Tomohiko Oka (Kyoto University)
• 250
Predictions for gamma-rays from clouds associated with supernova remnant PeVatrons
Interstellar clouds can act as target material for hadronic cosmic rays; gamma-rays produced through inelastic proton-proton collisions and spatially associated with the clouds can provide a key indicator of efficient particle acceleration.
However, even for PeVatron sources reaching PeV energies, the system of cloud and accelerator must fulfil several conditions in order to produce a detectable gamma-ray flux.
In this contribution, we characterise the necessary properties of both cloud and accelerator.
Using available Supernova Remnant (SNR) and interstellar cloud catalogues, and assuming particle acceleration to PeV energies in a nearby SNR, we produce a ranked shortlist of the most promising target systems, for which a detectable gamma-ray flux is predicted.
We discuss detection prospects for future facilities including CTA, LHAASO and SWGO; and compare our predictions with known gamma-ray sources.
A range of model scenarios are tested, including variation in the diffusion coefficient and particle spectrum, under which the best candidate clouds in our shortlist are consistently bright.
On average, a detectable gamma-ray flux is more likely for more massive clouds; systems with lower separation distance between the SNR and cloud; and for slightly older SNRs.
Speaker: Alison Mitchell (ETH Zurich)
• 251
Carpet-2 observation of E>300 TeV photons accompanying a 150-TeV neutrino from the Cygnus Cocoon
We report on the observation of an excess of E>300 TeV gamma-ray candidate events in temporal and spatial coincidence with the IceCube high-energy neutrino alert consistent with the origin in the Cygnus Cocoon. The Cygnus Cocoon is a prospective Galactic source of high-energy neutrinos and photons. The observations have been performed with Carpet-2, a surface air-shower detector equipped with a large-area muon detector at the Baksan Neutrino Observatory in the Northern Caucasus.
Speaker: Mr Viktor Romanenko (Institute for Nuclear Research of the Russian Academy of Sciences)
• 252
HAWC J2227+610: a potential PeVatron candidate for the CTA in the northern hemisphere
Recent observations of VER J2227+608 and the associated supernova remnant G106.3+2.7 by the High Altitude Water Cherenkov (HAWC) observatory confirm the special interest of this source as a Galactic PeVatron candidate in the northern hemisphere. HAWC J2227+610 emits VHE gamma-ray emission, above 100 TeV, from a region coincident with molecular clouds and shows a hard energy spectrum without a clear cutoff. This has induced several authors to suggest or claim a potential hadronic origin for its gamma-ray emission. CTA could play a crucial role in understanding the particle acceleration mechanisms behind this source thanks to its improved sensitivity with respect to the present IACT generation. The purpose of this work is to investigate the potential of CTA to observe HAWC J2227+610 and to disentangle the different suggested scenarios of hadronic and leptonic emission. In particular, we study the capability of resolving the morphology of this source and its eventual energy dependence, taking advantage of the unprecedented angular resolution. The study is based on simulations; the CTA prototype science tool gammapy is employed.
Speaker: Gaia Verna
• 253
Gamma-ray Observation of SNR G106.3+2.7 with the Tibet Air Shower Array
We have been observing cosmic rays and gamma rays above TeV energies with an air shower (AS) array located in Tibet, China at an altitude of 4,300 m and in operation since 1990. In 2014 we added to the air shower array an underground muon detector (MD) array that enables us to observe gamma-ray-induced air showers with far better sensitivity than before, suppressing background cosmic-ray events by counting the number of muons contained in air showers. The background rejection power is typically estimated at 99.9% above 100 TeV. In this presentation, we report the observation of very-high-energy gamma-ray emissions from supernova remnant G106.3+2.7 using the data taken by the Tibet AS array and the MD array.
Speaker: Munehiro OHNISHI (ICRR, University of Tokyo)
• 1:30 PM
Break
• Plenary: Review 02 01
#### 01
Convener: Manfred Lindner (Max-Planck-Institut für Kernphysik)
• 254
Dark Matter: Knowns and Unknowns
I will give an overview of the landscape of possible scenarios for dark matter, including a discussion of current constraints and some future directions for the field. I will comment on the status of several claimed anomalies, their possible relationships to dark matter physics, and alternative explanations.
Speaker: Tracy Slatyer
• 255
Probing particle acceleration through gamma-ray Solar flare observations
High-energy solar flares have shown to have at least two distinct phases: prompt-impulsive and delayed-gradual. Identifying the mechanism responsible for accelerating the electrons and ions and the site at which it occurs during these two phases is one of the outstanding questions in solar physics. Many advances have been made over the past decade thanks to new observational data and refined simulations that together help to shed light on this topic. For example, the detection by Fermi Large Area Telescope (LAT) of GeV emission from solar flares originating from behind the visible solar limb and >100 MeV emission lasting for more than 20 hours have suggested the need for a spatially extended source of acceleration during the delayed emission phase. In this talk I will review some of the major results from Fermi LAT observations of the 24th solar cycle and how this new observational channel combined with observations from across the electromagnetic spectrum can provide a unique opportunity to diagnose the mechanisms of high-energy emission and particle acceleration in solar flares.
Speaker: Melissa Pesce-Rollins
• 3:30 PM
Break
• Plenary: Highlight 03 01
#### 01
Convener: Dr Markus Roth (KIT)
• 256
Highlights from direct dark matter detection
Direct detection experiments search for dark matter-induced signals in Earth-based detectors. I will present a short review on the current status and future of the field and will concentrate on selected results on the direct search for WIMPs, axions and beyond
Speaker: Marc Schumann (Univertity of Freiburg)
• 257
Highlights from the Telescope Array experiment
The Telescope Array (TA) is the largest cosmic ray observatory in the Northern Hemisphere. It is designed to measure the properties of cosmic rays over a wide range of energies. TA with its low-energy extension (TALE) observes cosmic ray induced extensive air showers between 2x10^15 and 2x10^20 eV in hybrid mode using multiple instruments, including an array of scintillator detectors at the Earth's surface and telescopes to measure the fluorescence and Cherenkov light. The statistics at the highest energies are being enhanced with the ongoing construction of the TAx4 experiment, which will quadruple the surface area of the detector. We review the present status of the experiments and the most recent physics results on cosmic ray anisotropy, chemical composition and energy spectrum. Notable highlights include a new feature in the energy spectrum at about 10^19.2 eV, and a new clustering of events in their arrival directions above this energy. We also report on new spectrum and composition results in the lower energy range from the TALE extension.
Speaker: Grigory Rubtsov (Institute for Nuclear Research of the Russian Academy of Sciences)
• 258
Highlights from the Pierre Auger Observatory
tba
Speaker: Ralph Engel (Karlsruhe Institute of Technology (KIT))
• 5:30 PM
Break
• Discussion: 03 Muon Puzzle and EAS modeling | CRI 03
#### 03
• 259
Estimation of depth of maximum by relative muon content in air showers with energy greater than 5 EeV measured by the Yakutsk array
Characteristics of muons with a threshold $\varepsilon_{thr} \geq$ 1 GeV based on air shower data from the Yakutsk array were analyzed. Quantitative estimates of muons at different distances from the shower axis and the ratio of muons to charged particles at a distance of 600 m are obtained. An empirical relationship between the fraction of muons and the longitudinal development - the depth of maximum $X_{max}$ - is found. Calculations of the muon fraction are performed using QGSjetII-04 for different primary nuclei and compared with experiment. The mass composition of the primary particles inducing air showers of the highest energies is estimated from the muon component.
Speaker: Mr Igor Petrov (Yu.G. Shafer Institute of Cosmophysical Research and Aeronomy)
• 260
A simulation study for one-pion exchange contribution on very forward neutron productions in ATLAS-LHCf common events
The mass composition is one of the key pieces of information needed to understand the origin of ultra-high energy cosmic rays. The interpretation of the mass composition from results of air shower experiments depends on the hadronic interaction models used for the simulation. The uncertainties due to interaction models are reduced using recent experimental results at the LHC.
However, due to the absence of experimental results on pion-proton or pion-nucleus collisions at high energy, uncertainties remain in these collisions, and they affect predictions of muon production in air showers.
Recent results for very forward neutrons in pseudo-rapidity larger than 10.76 by the LHCf experiment show large differences from predictions by interaction models.
As a fundamental process of forward neutron production, the contribution of one pion exchange is proposed.
Though LHC can not circulate the pion beam, a virtual pion emitted from a proton in a proton beam can collide with a proton in the other proton beam.
In this work, we discuss the possibility of measuring contributions from one-pion exchange to very forward neutrons using the ATLAS and LHCf detectors in LHC Run 3.
Expected energy resolution for neutrons and statistics in Run 3 are taken into account in the discussion. The prospect of measurements of one-pion exchange contributions is also presented.
Speaker: Ken Ohashi (Institute for Space-Earth Environmental Research, Nagoya Univ.)
• 261
Measurement of the Proton-Air Cross Section with Telescope Array's Black Rock, Long Ridge, and Surface Array in Hybrid Mode
Ultra High Energy Cosmic Ray (UHECR) detectors have been reporting on the proton-air cross section measurement beyond the capability of particle accelerators since 1984. The knowledge of this fundamental particle property is vital for our understanding of high energy particle interactions and could possibly hold the key to new physics. The data used in this work was collected over eight years using the hybrid events of the Black Rock (BR) and Long Ridge (LR) fluorescence detectors as well as the Telescope Array Surface Detector (TASD). The proton-air cross section is determined at $\sqrt{s} = 73$ TeV by fitting the exponential tail of the $X_{max}$ distribution of these events. The proton-air cross section is then inferred from the exponential tail fit and from the most updated high energy interaction models. $\sigma^{\mathrm{inel}}_{p\text{-air}}$ is observed to be $520.1 \pm 35.8\,[\mathrm{stat.}]\,^{+25.3}_{-42.9}\,[\mathrm{sys.}]$ mb. This is the second proton-air cross section work reported by the Telescope Array collaboration.
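The tail-fit idea can be illustrated in a few lines: for an exponential tail $dN/dX_{max} \propto \exp(-X_{max}/\Lambda)$, the maximum-likelihood estimate of $\Lambda$ is the mean excess above the fit start. The conversion from $\Lambda$ to a cross section goes through a model-dependent factor and is omitted in this sketch:

```python
import numpy as np

def fit_attenuation_length(xmax, tail_start):
    """Unbinned maximum-likelihood fit of the exponential tail
    dN/dXmax ~ exp(-Xmax/Lambda): for an exponential distribution,
    the MLE of Lambda is the mean of (Xmax - tail_start) over the
    events in the tail."""
    tail = xmax[xmax > tail_start]
    lam = np.mean(tail - tail_start)
    err = lam / np.sqrt(len(tail))  # statistical uncertainty
    return lam, err

# Lambda is proportional to the proton interaction length in air,
# lambda_int = K * Lambda with a model-dependent factor K taken from
# simulations, and the cross section follows from the inverse
# relation between interaction length and cross section.
```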
Speaker: Rasha Abbasi (Loyola University Chicago-Physics Department)
• 262
Status and Prospects of the LHCf and RHICf experiments
Precise understanding of hadronic interactions at high energies is key to improving chemical composition measurements of very high energy cosmic rays and to solving the muon excess issue observed by high energy cosmic-ray experiments using the air-shower technique. The LHCf and RHICf experiments measure the differential production cross sections of very forward neutral particles such as photons, neutral pions and neutrons at the LHC and RHIC, respectively. These data are critically important to test and tune hadronic interaction models used for air-shower simulations.
In this presentation, we introduce the recent results of both experiments as well as our future operation plans. LHCf published an updated result on forward neutron measurements at $pp$, $\sqrt{s}$ = 13 TeV. From the observed neutron energy spectra, we also obtained the average inelasticity, which is one of the key parameters for air shower development, as $0.536^{+0.031}_{-0.037}$. In addition, several analyses are ongoing: neutral pion measurement at $pp$, $\sqrt{s}$ = 13 TeV; central-forward correlation analysis with LHCf+ATLAS; and photon measurement by RHICf.
LHCf plans to have operations at $pp$ and $p$O during the LHC Run 3 period. At $pp$ collisions, a new silicon readout system will be introduced to improve the read-out speed, and 10 times the statistics of the previous operation in 2015 will be obtained. Thanks to the high statistics, rare particles such as $\eta$, $K^0_s$ and $\Lambda$ will also be addressed. We also plan another operation at RHIC in 2024 with a new detector. The detector, a calorimeter composed of tungsten, Si pad and pixel layers, will have a much wider acceptance and higher sensitivity for $K^0_s$ measurement than the current detector.
Speaker: Hiroaki Menjo (ISEE, Nagoya University)
• 263
Collective flow in ultra high energy cosmic rays within CORSIKA
In heavy ion collisions, the main goal is to create the quark-gluon plasma (QGP) and then study its properties in order to understand quantum chromodynamics under extreme conditions. Collective flow serves as an important probe to study the production and characterize the properties of the QGP. In ultra high energy cosmic rays (UHECR), the collision energies are an order of magnitude higher than at current ion colliders, so it is natural to expect the QGP to be created in UHECR collisions. In this work, collective flow is studied within the CORSIKA model, with EPOS-LHC for the high energy hadronic interactions. The collision energy dependence of collective flow will also be presented. These results will help the understanding of UHECR behavior and can be tested at China's Large High Altitude Air Shower Observatory (LHAASO).
Speaker: Maowu Nie (Shandong University)
• 264
Muon number rescaling in simulations of air showers
The number of muons in extensive air showers predicted using LHC-tuned hadronic interaction models, such as EPOS-LHC and QGSJetII-04, is smaller than observed in showers recorded by leading cosmic-ray experiments. In this paper, we present a new method to derive muon rescaling factors by analyzing reconstructions of simulated showers. The z-variable used (the difference between the initially simulated and the reconstructed total signal in the detectors) is connected to the muon signal and is roughly independent of the zenith angle, but depends on the mass of the primary cosmic ray. The performance of the method is tested using Monte Carlo shower simulations for the hybrid detector of the Pierre Auger Observatory. Having an individual z-value for each simulated hybrid event, the corresponding signal at 1000 m, and using a parametrization of the muon fraction in simulated showers, we can calculate the multiplicative rescaling parameters of the muon signals in the ground detector even for an individual event, and study their dependence on zenith angle and the mass of the primary cosmic ray. This gives a possibility not only to test/calibrate the hadronic interaction models, but also to derive the beta exponent, describing the increase of the number of muons as a function of primary energy and cosmic-ray mass. Detailed simulations show a dependence of beta on hadronic interaction properties, thus the determination of this parameter is important for understanding the muon deficit problem.
Speaker: Dr Dariusz Gora (Institute of Nuclear Physics Polish Academy of Science)
• 265
Adjustments to Model Predictions of Depth of Shower Maximum and Signals at Ground Level using Hybrid Events of the Pierre Auger
We present a new method to explore simple ad-hoc adjustments to the predictions of hadronic interaction models to improve their consistency with observed two-dimensional distributions of the depth of shower maximum, Xmax, and the signal at ground level, as a function of zenith angle. The method relies on the assumption that the mass composition is the same at all zenith angles, while the atmospheric shower development and attenuation depend on composition in a correlated way. In the present work, for each of the three leading LHC-tuned hadronic interaction models, we allow a global shift ΔXmax of the predicted shower maximum, which is the same for every mass and energy, and a rescaling R_Had of the hadronic component at ground level which depends on the zenith angle.
We apply the analysis to 2297 events reconstructed by both fluorescence and surface detectors at the Pierre Auger Observatory with energies $10^{18.5}$–$10^{19.0}$ eV. Given the modeling assumptions made in this analysis, the best fit reaches its optimum value when shifting the $X_\mathrm{max}$ predictions of hadronic interaction models to deeper values and increasing the hadronic signal at both extreme zenith angles. The resulting change in the composition towards heavier primaries alleviates the previously identified model deficit in the hadronic signal (commonly called the muon deficit), but does not remove it. Because of the size of the required corrections $\Delta X_\mathrm{max}$ and $R_\mathrm{Had}$ and the large number of events in the sample, the statistical significance of the corrections is large, greater than $5\sigma_\mathrm{stat}$ even for the combination of experimental systematic shifts within $1\sigma_\mathrm{sys}$ that is the most favorable for the models.
Speaker: Dr Jakub Vícha (Institute of Physics of Czech Academy of Sciences)
• 266
Air shower genealogy for muon production
Measurements of the muon content of extensive air showers at the highest energies show discrepancies compared to simulations as large as the differences between proton and iron. This so-called muon puzzle is commonly attributed to a lack of understanding of the hadronic interactions in the shower development. Furthermore, measurements of the fluctuations of muon numbers suggest that the discrepancy is likely a cumulative effect of interactions of all energies in the cascade.
A novel feature of the air shower simulation code CORSIKA 8 allows us to access all previous generations of final-state muons up to the first interaction. With this technique, we study quantitatively the influence of interactions happening at any intermediate stage of the cascade on the muons, as a function of their energy and lateral distance. We further relate our findings to recent and upcoming accelerator measurements and comment on the prospects of the proposed proton-oxygen run of the LHC.
Speaker: Maximilian Reininghaus (KIT / IAP)
• 267
Density of GeV Muons Measured with IceTop
We present a measurement of the density of GeV muons in near-vertical air showers using three years of data recorded by the IceTop array at the South Pole. We derive the muon densities as functions of energy at reference distances of 600 m and 800 m for primary energies between 2.5 PeV and 40 PeV and between 9 PeV and 120 PeV, respectively. The measurements are consistent with the muon densities predicted by Sibyll 2.1 assuming any physically reasonable cosmic ray flux model. However, comparison to the post-LHC models QGSJet-II.04 and EPOS-LHC shows that these models predict a higher muon density than Sibyll 2.1. Therefore, based on these models, the measured data yield lower average masses, which are in tension with flux models obtained by fitting experimental data.
Speaker: Dennis Soldin (University of Delaware)
• 268
Estimations of the muon content of cosmic ray air showers between 10 PeV and 1 EeV from KASCADE-Grande data
Measurements by KASCADE-Grande of the muon size in high energy extensive air showers (EAS) have provided evidence that the actual attenuation length of shower muons in the atmosphere is larger than expected from the hadronic interaction models QGSJET-II-04, EPOS-LHC and SIBYLL 2.3. This discrepancy is related to a deficient description by MC models of the shower muon content as a function of atmospheric depth. To further explore the origin of the above anomaly, we have investigated the muon size as a function of the primary energy at different zenith angles using data from the KASCADE-Grande experiment. The procedure consisted of comparing the measured muon number flux against the predictions of a reference cosmic ray energy spectrum and, from the observed difference, estimating the data/MC muon ratio that best describes the measurements. The ratio is then applied to the MC simulations and, from this, we estimate the muon content versus the primary energy. As the reference model, we employed the energy spectrum measured by the Pierre Auger Observatory, and, for the relative cosmic ray abundances, the GSF model. Results are presented using the QGSJET-II-04, EPOS-LHC, SIBYLL 2.3 and SIBYLL 2.3c models in the analysis procedure.
Speaker: Juan Carlos Arteaga Velazquez (Universidad Michoacana de San Nicolas de Hidalgo)
• 269
We present characteristics of hadronic cascades from interactions of cosmic rays in the atmosphere, simulated by the novel CORSIKA 8 framework. The simulated spectra of secondaries, such as pions, kaons, baryons and muons, are compared with the cascade equation solvers CONEX and MCEq in air shower mode, and with full 3D air shower Monte Carlo simulations using the legacy CORSIKA 7 and AIRES. A novel capability of CORSIKA 8 is the simulation of cascades in media other than air, widening the scope of potential simulation applications. We demonstrate this capability by simulating cosmic ray showers in the Martian atmosphere. The CORSIKA 8 framework demonstrates good accuracy and robustness compared to previous results, in particular in those relevant for the production of muons in air showers. Furthermore, hyperons are studied as a messenger from high-density QCD and as an important precursor of high-energy secondaries, including neutrinos. It was also found that interactions of strange baryons can be of non-negligible importance for cascade development, requiring extra care when using any such model in all contexts.
Speaker: Ralf Ulrich (Karlsruhe Institute of Technology)
• 270
LHCf plan for proton-oxygen collisions at LHC
During LHC Runs 1-2 the LHCf experiment measured neutral particles in the forward region of proton+proton and proton+lead ion collisions. These measurements allow the testing and fine tuning of hadronic interaction models in a phase space region relevant for studying the development of cosmic-ray air showers. One of the limitations in using the results obtained so far by LHCf is that the interactions of cosmic rays in the atmosphere involve low mass nuclei, mainly nitrogen and oxygen. Expectations for proton+nitrogen or proton+oxygen collisions can be obtained by interpolating the results obtained with proton+proton and proton+lead collisions, but large uncertainties arise due to the Ultra Peripheral Collisions occurring frequently in heavy ion interactions.
A new opportunity is under evaluation at the LHC, concerning the injection of oxygen ions into the LHC collider, as suggested in the past by the LHCf collaboration. Proton+oxygen collisions at the LHC energy scale would allow a direct study of atmospheric showers under controlled conditions. LHCf needs an integrated luminosity of 2 nb$^{-1}$ at low pile-up ($\mu<0.02$) to complete a measurement at pseudorapidity larger than 8.4, for a total acquisition time of less than two days.
At the end of 2020 the cosmic-ray community supported the LHCf proposal by signing a letter to the LHC Committee to express its interest in the implementation of proton-oxygen collisions in LHC Run 3 and in the LHCf data taking.
We will present the LHCf plan and point of view in connection with this important opportunity at the LHC.
Speaker: Eugenio Berti (University of Florence)
• 271
Measurement of muon contents in cosmic ray shower with LHAASO-KM2A around knee region
The number of muons observed at the ground from air showers is sensitive to the mass composition of cosmic rays. The Large High Altitude Air Shower Observatory (LHAASO) is a hybrid extensive air shower array; its KM2A sub-array, covering an area of 1 km$^2$ and consisting of electromagnetic detectors and muon detectors, can measure the muon content and shower size of an air shower simultaneously with high precision for cosmic rays in the knee region. The muon detector of KM2A is the most powerful muon detector at any current ground-based cosmic ray observatory. In this paper, we analyze experimental data recorded by KM2A in 2020. The mean number of muons in air showers is measured from the signals of the muon detectors for cosmic rays from hundreds of TeV to tens of PeV, where the energy is reconstructed from the shower size and muon number and is only weakly dependent on the composition of the cosmic rays. We investigate the ability to identify cosmic ray components using the muon content. Based on the constant intensity cut method, the muon attenuation length is derived by fitting the muon numbers at the same flux at different zenith angles. The relation between the attenuation length and the muon number in the shower is also studied. In addition, the experimental muon abundance is compared with simulation results for proton and iron. The mean logarithmic mass of cosmic rays derived from the mean number of muons in the same energy interval, together with the mean mass of the assumed spectra, is presented with systematic errors from the energy scale and the hadronic model.
Speaker: Dr Hengying Zhang (Shandong University)
• 272
Measurements of the average muon energy in inclined muon bundles in the NEVOD-DECOR experiment
The NEVOD-DECOR complex was one of the first setups at which an excess of muons in comparison with expectations (the "muon puzzle") was detected and its dependence on the primary energy was measured. Since various mechanisms for the appearance of an excess of multi-muon events (of cosmophysical or nuclear-physical nature) should have different effects on the muon energy, one possible approach to the problem is to study the energy characteristics of the EAS muon component and their changes with the energy of the primary cosmic ray particles. The average energy loss of muons in matter depends almost linearly on the muon energy. If an excess of high energy muons appears, this should be reflected in the dependence of the muon energy deposit on the primary energy. At present, such an experiment is being carried out at the NEVOD-DECOR setup. The installation includes a Cherenkov water calorimeter and a precise coordinate-tracking detector. The energy deposit of muon bundles is measured from the response of the NEVOD calorimeter, and the coordinate-tracking detector DECOR allows one to determine the number of muons in the bundles. For the first time, experimental estimates of the average muon energy in the bundles and its dependence on zenith angle and primary energy in the range from 10 PeV to 1000 PeV have been obtained and compared with the results of calculations performed with CORSIKA-based simulations using modern models of hadronic interactions.
Speaker: E.A. Yurina (MEPhI)
• 273
Measurements of the charge ratio and polarization of cosmic ray muons with the Super-Kamiokande detector
Cosmic ray muons arise from the showers of secondary particles produced in the interactions of primary cosmic particles with air nuclei at the top of the atmosphere. The pions and kaons composing the showers mostly decay to muons, which reflect the details of the hadronic interactions depending on their energy. Measurements of the charge ratio and polarization of cosmic ray muons can therefore be used to constrain high energy hadronic interaction models in the atmosphere. Previous measurements have been performed by various experiments. Kamiokande measured the charge ratio and polarization as 1.37+/-0.06(stat)+/-0.01(syst) and 0.26+/-0.04(stat)+/-0.05(syst), respectively, at a sea level momentum of 1.2 TeV/c. In this presentation, we will report the current status of the measurement of the charge ratio and polarization using data collected by the Super-Kamiokande detector, located at a depth of 2700 m of water equivalent.
Speaker: Hussain Kitagawa (Okayama University)
• 274
The development of hadronic cascades in extensive air showers is modeled by hadronic interaction models based on extrapolations of collider data. The models' predictions at the highest energies are in known tension with measurements of the muonic component if the mass composition derived from the fluorescence technique is assumed. We apply an ad-hoc modification to the CORSIKA Monte Carlo generator that allows for adjustment of features of hadronic interactions such as multiplicity, elasticity and cross-section. Compared to similar previous studies, we are now able to obtain not only information related to the longitudinal development of the shower, such as the mean depth of shower maximum, but also information about the lateral distribution of particles. Moreover, we generate a scan across the various possible combined modifications of the Sibyll 2.3d model using both protons and iron nuclei, quantify their effects on both the lateral and longitudinal features of a cosmic-ray shower, and identify regions of the modification phase space which explain, within the stated systematics, both the ground-based and fluorescence-based measurements of cosmic rays at the highest energies.
Speaker: Jiri Blazek (FZU Prague)
• 275
Muon deficit in simulations of air showers inferred from AGASA data
Multiple experiments have reported evidence of a muon deficit in air-shower simulations with respect to data, which increases with the primary energy. In this work, we study the muon deficit using measurements of the muon density at $1000\,$m from the shower axis obtained by the Akeno Giant Air Shower Array (AGASA). The selected events have reconstructed energies in the range $18.83\,\leq\,\log_{10}(E_{R}/\textrm{eV})\,\leq\,19.46$ and zenith angles $\theta\leq 36^\circ$. We compare these muon density measurements to proton, iron, and mixed composition scenarios, obtained by using the high-energy hadronic interaction models EPOS-LHC, QGSJetII-04, and Sibyll2.3c. We find that the AGASA data are compatible with a heavier composition, lying above the predictions of the mixed composition scenarios. The average muon density divided by the energy in the AGASA data is greater than in the mixed composition scenarios by a factor of $1.49\pm0.11\,\textrm{(stat)}\pm0.18\,\textrm{(syst)}$, $1.54\pm0.12\,\textrm{(stat)}\pm0.18\,\textrm{(syst)}$, and $1.66\pm0.13\,\textrm{(stat)}\pm0.20\,\textrm{(syst)}$ for EPOS-LHC, Sibyll2.3c, and QGSJetII-04, respectively. We interpret this as further evidence of a muon deficit in air-shower simulations at the highest energies.
Speaker: Flavia Gesualdi (Instituto de Tecnologías en Detección y Astropartículas (CNEA, CONICET, UNSAM), and Karlsruhe Institute of Technology, Institute for Astroparticle Physics)
• 276
Muon excess in ultra-high energy inclined EAS according to the NEVOD-DECOR data
Data from the NEVOD-DECOR experiment on inclined cosmic ray muon bundles over a long time period (May 2012 – March 2021) are presented. Their comparison with the results of calculations based on simulations of the EAS hadron and muon components allows one to study the behavior of the energy spectrum and mass composition of primary cosmic rays and/or to check the validity of hadron interaction models in a wide energy range from about $10^{16}$ to more than $10^{18}$ eV. The analysis showed that the observed intensity of muon bundles at primary particle energies of about $10^{18}$ eV and higher can be compatible with the expectation only under the assumption of an extremely heavy mass composition of cosmic rays. This conclusion is consistent with data from a number of other experiments investigating the muon component of air showers at ultra-high energies. On the contrary, measurements of the depth of the shower maximum in the atmosphere ($X_\mathrm{max}$) in experiments using the air fluorescence technique favor a light mass composition of primary cosmic rays at these energies. This contradiction (the so-called "muon puzzle") cannot be resolved without serious changes to the existing hadron interaction models.
Speaker: R.P. Kokoulin (MEPhI)
• 277
On the muon scale of air showers and its application to the AGASA data
Recently, several experiments reported a muon deficit in air-shower simulations with respect to the data. This problem can be studied using an estimator that quantifies the relative muon content of the data with respect to those of proton and iron Monte Carlo air-shower simulations. We analyze two estimators. The first one, based on the logarithm of the mean of the muon content, is built from experimental considerations. It is ideal for comparing results from different experiments as it is independent of the detector resolution. The second estimator is based on the mean of the logarithm of the muon content, which implies that it depends on shower-to-shower fluctuations. It is linked to the mean logarithmic mass $\langle \ln A \rangle$ through the Heitler-Matthews model. We study the properties of the estimators and their biases considering the knowns and unknowns of typical experiments. Furthermore, we study these effects in measurements of the muon density at $1000$ m from the shower axis obtained by the Akeno Giant Air Shower Array (AGASA). Finally, we report the estimates of the relative muon content of the AGASA data, which support a muon deficit in simulations. These estimates constitute valuable additional information about the muon content of air showers at the highest energies.
Speaker: Flavia Gesualdi (Instituto de Tecnologías en Detección y Astropartículas (CNEA, CONICET, UNSAM), San Martín, Argentina, and Karlsruhe Institute of Technology, Institute for Astroparticle Physics (IAP), Karlsruhe, Germany)
• 278
https://forum.math.toronto.edu/index.php?PHPSESSID=6mj58lhrqto3slktmuls8qesl0&action=profile;u=647;area=showposts;start=15 | ### Show Posts
### Messages - Shaghayegh A
Pages: 1 [2]
16
##### Chapter 2 / HA #4, problem 3
« on: October 11, 2016, 04:59:07 PM »
In problem 3 of HA #4, are the functions u(x,t) and v(x,t) separate functions? Or is $v=\frac{x}{t}$?
17
##### Chapter 2 / HA #4, problem 1
« on: October 10, 2016, 01:25:42 PM »
I am having trouble with problem 1 of home assignment 4; it asks to find u(x,t) for:
\begin{align*} & u_{tt}-c^2u_{xx}=0, &&t>0, x>0, \\ &u|_{t=0}= \phi(x), &&x>0, \\ &u_t|_{t=0}= c\phi'(x), &&x>0, \\ &u|_{x=0}=\chi(t), &&t>0. \end{align*}
My solution: $u=f(x+ct)+g(x-ct)$ where f and g are some functions. By the initial conditions,
\begin{align*} & f(x)+g(x)=\phi(x), \\ & f'(x)-g'(x)=\phi'(x) \implies f(x)-g(x)=\phi(x). \end{align*} So $f(x)=\phi(x)$ and $g(x)=0$, so $f(x+ct)=\phi(x+ct)$, but is this true for all x>0? Because it seems that the argument $x-ct$ can be negative here and we must say $$f(x+ct)=\phi(x+ct), \quad x>ct$$
Thank you
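A minimal sketch of how the region $0<x<ct$ is usually handled in this half-line setup (an illustration under the ansatz above, not necessarily the intended official solution). For $x>ct$ both arguments are positive, so $u=\phi(x+ct)$. For $0<x<ct$ the argument of $g$ is negative, and the boundary condition fixes $g$ there:
$$u(0,t)=f(ct)+g(-ct)=\chi(t) \implies g(s)=\chi(-s/c)-\phi(-s) \quad \text{for } s<0,$$
so that
$$u(x,t)=\phi(x+ct)+\chi(t-x/c)-\phi(ct-x), \qquad 0<x<ct.$$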
18
##### Chapter 2 / derivation of a PDE describing traffic flow
« on: September 25, 2016, 03:41:49 PM »
In example 8 of chapter 2.1, where we derive a PDE describing traffic flow, how do we derive $\rho_t+v\rho_x=0\;(6)$ from $\rho_t+q_x=0\;(3)$?
It seems that $q_x$ somehow equals $v\rho_x=[c(\rho)+ c'(\rho)\rho] \,\rho_x=c(\rho) \frac{\partial \rho}{\partial x}+\frac{d c(\rho)}{d\rho}\, \rho \,\frac{\partial \rho}{\partial x}$? Can someone please explain how we get equation (6)? Thanks
19
##### Chapter 2 / Deriving equation 7 of section 2.1
« on: September 24, 2016, 03:20:01 PM »
In the section variable coefficients of section 2.1, we have
$$au_t+bu_x=f\tag{6}$$
Then we have
$$\frac{\partial u}{\partial t}dt+ \color{orange}{\frac{\partial x}{\partial t}}dt \frac{\partial u}{\partial x}=du \tag{*}$$
No, $\frac{d x}{d t}$
I assume the $dt$ cancels with the $\partial t$ in the $\frac{\partial x}{\partial t}dt \frac{\partial u}{\partial x}$ part because the textbook says we get
$$u_t dt+dx u_x =du$$
Wrong conclusion due to your error in (*)
Why doesn't the $dt$ cancel the $\partial t$ in $\frac{\partial u}{\partial t}dt$ to give us $du+dxu_x =du$?
Calculus II
Also, to derive $$\frac{dt}{a}=\frac{dx}{b}=\frac{du}{f}\tag{7}$$ from (6) why don't we just compare (6) to
$\frac{du}{dt}=\frac{\partial u}{\partial t}+\color{orange}{\frac{\partial x}{\partial t}}\frac{\partial u}{\partial x}$ (chain rule) and conclude that $\frac{\partial t}{a}=\frac{\partial x}{b}$ and $\frac{dt}{a}=\frac{du}{f} \implies \frac{dt}{a}=\frac{dx}{b}=\frac{du}{f}$ (7) instead of doing all that work?
The same mistake; also there should be $\frac{d t}{a}=\frac{d x}{b}$, and if corrected it would be exactly what we do.
Thanks
20
##### Chapter 2 / Solving the Burgers equation
« on: September 23, 2016, 07:29:07 PM »
In example 7 of chapter 2.1, we wish to solve $$u_{t}+uu_{x}=0.$$
The textbook says
$$\frac{dt}{1}=\frac{dx}{u}=\frac{du}{0}.$$
So far correct. The rest here are just your fantasies. V.I.
We know $$\frac{\partial x}{\partial t}=u \quad\text{and}\quad \frac{du}{dt}=0, \ \text{so} \ du=0\implies\frac{du}{0}=1$$
Why is $$\frac{dx}{u}=1?$$
21
##### Chapter 2 / question from 2.1 of textbook
« on: September 18, 2016, 07:45:39 PM »
Section 2.1 of the textbook states that $$u_t a+u_x b$$ is the directional derivative of u in the direction l=(a,b). But there's an extra factor of $$\frac{1}{\sqrt{a^2+b^2}}$$, right? (which disappears if we set $$u_t a+u_x b$$ to 0). As in:
$$\nabla u \cdot \frac{\bar{l}}{|\bar{l}|}=\left(\frac{\partial u}{\partial t}\,\hat{t}+\frac{\partial u}{\partial x}\,\hat{x}\right)\cdot\frac{a\hat{t}+b\hat{x}}{\sqrt{a^2+b^2}}=(u_t a+u_x b)\frac{1}{\sqrt{a^2+b^2}}\,?$$
Pages: 1 [2]
http://www.physicsforums.com/showthread.php?p=4168758 | # Relative humidity calculations.
by maistral
Tags: calculations, humidity, relative
P: 60 This is totally pissing me off, I don't know what the heck I am doing wrong. Alright, so I was given a temperature of 30°C at 30% relative humidity. I have to get the absolute humidity. So I used Antoine: log(P) = 7.96681 − 1668.21/(228 + 30); P = 31.6869 mmHg. 0.3 × 31.6869 = 9.50607; 9.50607/(760 − 9.50607) = 1.27×10^-2. Apparently the correct answer is 7.86×10^-3; and an air-water psychrometric chart says 0.008. What on earth am I doing wrong? EDIT: Nevermind, I forgot that I have to multiply by 18/29.
PF Patron, Thanks, P: 2,966
Quote by maistral This is totally pissing me off, I don't know what the heck I am doing wrong. Alright, so I was given a temperature of 30°C at 30% relative humidity. I have to get the absolute humidity. So I used Antoine: log(P) = 7.96681 − 1668.21/(228 + 30); P = 31.6869 mmHg. 0.3 × 31.6869 = 9.50607; 9.50607/(760 − 9.50607) = 1.27×10^-2. Apparently the correct answer is 7.86×10^-3; and an air-water psychrometric chart says 0.008. What on earth am I doing wrong? EDIT: Nevermind, I forgot that I have to multiply by 18/29.
What you calculated was the mole ratio of water vapor to dry air. The absolute humidity is defined as the density of water vapor in the air, in units of g/m3. You need to use the ideal gas law to calculate the absolute humidity: pM/RT
HW Helper Thanks P: 4,309 The Antoine constants you used are valid from 60°C to 150°C. For temperatures of 0°C to 60°C, the following constants are used: A = 8.10765, B = 1750.286, C = 235.0
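For anyone who wants to check the numbers, here is the same recipe as a short script (a sketch: the constants are the 0-60 °C Antoine set quoted above, and 18/29 is the water-to-air molar mass ratio):

```python
T_C, RH = 30.0, 0.30                  # temperature in Celsius, relative humidity
A, B, C = 8.10765, 1750.286, 235.0    # Antoine constants for water, 0-60 C

p_sat = 10 ** (A - B / (C + T_C))     # saturation vapour pressure, mmHg
p_w = RH * p_sat                      # partial pressure of water vapour

# Humidity ratio: mole ratio times the molar mass ratio (18 g/mol over 29 g/mol)
Y = p_w / (760.0 - p_w) * 18.0 / 29.0
print(f"p_sat = {p_sat:.2f} mmHg, humidity ratio = {Y:.5f} kg water / kg dry air")
```

This gives about 0.0079, in line with the psychrometric chart's 0.008.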
https://homework.zookal.com/questions-and-answers/evaluate-the-delta-function-integral-please-note-the-derivative-of-406719108 | # Question: evaluate the delta function integral please note the derivative of...
Evaluate the delta function integral (Please note the derivative of sigma function):
http://myriverside.sd43.bc.ca/shelbyc2016/2018/09/10/ | # Week 1 – My Arithmetic Sequence
13, 26, 39, 52, 65…
$t_n$ = $t_1$ + d(n – 1)
($t_{50}$) = (13) + (13)[(50) – 1]
$t_{50}$ = 13 + 13 · 49
$t_{50}$ = 13 + 637
$t_{50}$ = 650
$S_n$ = $\frac{n}{2}$($t_1$ + $t_n$)
($S_{50}$) = $\frac{(50)}{2}$[(13) + (650)]
$S_{50}$ = 25 · 663
$S_{50}$ = 16575
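A quick way to double-check both formulas (a small script using the numbers from this post):

```python
t1, d, n = 13, 13, 50

tn = t1 + d * (n - 1)        # explicit formula for the nth term
sn = n * (t1 + tn) // 2      # sum of the first n terms

assert tn == 650 and sn == 16575
print(tn, sn)
```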
https://cplberry.com/tag/non-detection/ | # First low frequency all-sky search for continuous gravitational wave signals
It is the time of year for applying for academic jobs and so I have been polishing up my CV. In doing so I spotted that I had missed the publication of one of the LIGO Scientific–Virgo Collaboration papers. In my defence, it was published the week of 8–14 February, which saw the publication of one or two other papers [bonus note]. The paper I was missing is on a search for continuous gravitational waves.
Continuous gravitational waves are near constant hums. Unlike the chirps of coalescing binaries, continuous signals are always on. We think that they could be generated by rotating neutron stars, assuming that they are not perfectly smooth. This is the first search to look for continuous waves from anywhere on the sky with frequencies below 50 Hz. The gravitational-wave frequency is twice the rotational frequency of the neutron star, so this is the first time we’ve looked for neutron stars spinning slower than 25 times per second (which is still pretty fast, I’d certainly feel more than a little queasy). The search uses data from the second and fourth Virgo Science Runs (VSR2 and VSR4): the detector didn’t behave as well in VSR3, which is why that data isn’t used.
The frequency of a rotating neutron star isn't quite constant for two reasons. First, as the Earth orbits around the Sun it'll move towards and away from the source. This leads to the signal being Doppler shifted. For a given position on the sky, this can be corrected for, and this is done in the search. Second, the neutron star will slow down (a process known as spin-down) because it loses energy and angular momentum. There are various processes that could slow a neutron star: emitting gravitational waves is one, some form of internal sloshing around is another (which could also cause things to speed up), or perhaps there is some braking from its magnetic field. We're not too sure exactly how quickly spin-down will happen, so we search over a range of possible values from $-1.0\times10^{-10}~\mathrm{Hz\,s^{-1}}$ to $+1.5\times10^{-11}~\mathrm{Hz\,s^{-1}}$. A toy model of the resulting frequency track is sketched below.
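Here's a minimal sketch of that frequency evolution (a toy model of my own, not the collaboration's code): the intrinsic frequency drifts linearly with the spin-down rate, and the Earth's orbital motion adds an annual Doppler modulation whose size depends on where the source sits on the sky.

```python
import numpy as np

C = 2.998e8      # speed of light, m/s
V_ORB = 2.978e4  # Earth's mean orbital speed, m/s
YEAR = 3.156e7   # one year, s

def observed_frequency(t, f0, fdot, cos_beta=1.0):
    """Toy observed frequency of a spinning-down source.

    t        -- time since the start of the observation, s
    f0       -- initial gravitational-wave frequency, Hz
    fdot     -- spin-down rate, Hz/s
    cos_beta -- cosine of the source's ecliptic latitude, which sets
                the size of the annual Doppler modulation
    """
    f_intrinsic = f0 + fdot * t
    doppler = 1.0 + (V_ORB / C) * cos_beta * np.cos(2 * np.pi * t / YEAR)
    return f_intrinsic * doppler

t = np.linspace(0.0, YEAR, 1000)
f = observed_frequency(t, f0=40.0, fdot=-1.0e-10)
print(f"frequency wanders between {f.min():.6f} Hz and {f.max():.6f} Hz")
```

For a 40 Hz signal the Doppler term alone moves the frequency by a few millihertz over the year, which is enormous compared to the frequency resolution of a months-long observation; this is why the correction has to be redone for every sky position.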
The particular search technique used is called FrequencyHough. This chops the detector output into different chunks of time. In each we calculate how much power is at each frequency. We then look for a pattern, where we can spot a signal across different times, allowing for some change from spin-down. Recognising the track of a signal with a consistent frequency evolution is done using a Hough transform, a technique from image processing that is good at spotting lines.
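To give a flavour of the Hough idea, here's a toy version (nothing like the real FrequencyHough code, but it shows the mechanics): threshold the spectrogram, then let every loud pixel vote for all the straight frequency tracks that pass through it; the pixels of a real signal all vote for the same track.

```python
import numpy as np

rng = np.random.default_rng(42)
n_t, n_f = 64, 256                              # time chunks, frequency bins
spec = rng.exponential(size=(n_t, n_f))         # noise-only spectrogram

# Inject a weak track: the frequency bin drifts slowly downwards with time.
f0_true, slope_true = 180, -0.25                # start bin, bins per chunk
for i in range(n_t):
    spec[i, int(round(f0_true + slope_true * i))] += 4.0

peaks = np.argwhere(spec > 5.0)                 # pixels above threshold

# Hough transform: accumulate votes over candidate (start bin, slope) tracks.
slopes = np.linspace(-0.5, 0.5, 41)
votes = np.zeros((n_f, slopes.size), dtype=int)
for i, j in peaks:
    for k, s in enumerate(slopes):
        f0 = int(round(j - s * i))              # track through (i, j), slope s
        if 0 <= f0 < n_f:
            votes[f0, k] += 1

best = np.unravel_index(np.argmax(votes), votes.shape)
print(f"recovered start bin {best[0]}, slope {slopes[best[1]]:.3f}")
```

The loudest cell of the vote map lands on the injected start bin and slope, even though no single time chunk contains an obvious signal.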
The search didn’t find any signals. This is not too surprising. Therefore, we did the usual thing of setting some upper limits. The plot below shows 90% confidence limits (that is where we’d expect to detect 9/10 signals) on the signal amplitude at different frequencies.
90% confidence upper limits on the gravitational-wave strain at different frequencies. Each dot is for a different 1 Hz band. Some bands are noisy and feature instrumental artefacts which have to be excluded from the analysis, these are noted as the filled (magenta) circles. In this case, the upper limit only applies to the part of the band away from the disturbance. Figure 12 of Aasi et al. (2016).
Given that the paper only reports a non-detection, it is rather lengthy. The opening sections do give a nice introduction to continuous waves and how we hunt for them, so this might be a good paper if you're new to the area but want to learn some of the details. Be warned that it does use $\jmath = \sqrt{-1}$ for some reason. After the introduction, it does get technical, so it's probably only for insomniacs. However, if you like a good conspiracy and think we might be hiding something, the appendices go through all the details of removing instrumental noise and checking outliers found by the search.
In summary, this was the first low-frequency search for continuous gravitational waves. We didn’t find anything in the best data from the initial detector era, but the advanced detectors will be much more sensitive to this frequency range. Slowly rotating neutron stars can’t hide forever.
arXiv: 1510.03621 [astro-ph.IM]
Journal: Physical Review D; 93(4):042007(25); 2016
Science summary: First search for low frequency continuous gravitational waves emitted by unseen neutron stars
Greatest regret: I didn't convince the authors to avoid using "air quotes" around jargon.
### Bonus note
#### Better late than never
I feel less guilty about writing a late blog post about this paper as I know that it has been a long time in the making. As a collaboration, we are careful in reviewing our results; this can sometimes lead to delays in announcing results, but hopefully means that we get the right answer. This paper took over three years to review, a process which included over 85 telecons!
# Comprehensive all-sky search for periodic gravitational waves in the sixth science run LIGO data
The most recent, and most sensitive, all-sky search for continuous gravitational waves shows no signs of a detection. These signals from rotating neutron stars remain elusive. New data from the advanced detectors may change this, but we will have to wait a while to find out. This at least gives us time to try to figure out what to do with a detection, should one be made.
### New years and new limits
The start of the new academic year is a good time to make resolutions—much better than wet and windy January. I’m trying to be tidier and neater in my organisation. Amid cleaning up my desk, which is covered in about an inch of papers, I uncovered this recent Collaboration paper, which I had lost track of.
The paper is the latest in the continuous stream of non-detections of continuous gravitational waves. These signals could come from rotating neutron stars which are deformed or excited in some way, and the hope is that from such an observation we could learn something about the structure of neutron stars.
The search uses old data from initial LIGO's sixth science run. Searches for continuous waves require lots of computational power, so they can take longer than even our analyses of binary neutron star coalescences. This is a semi-coherent search, like the recent search of the Orion spur—somewhere between an incoherent search, which looks for signal power of any form in the detectors, and a fully coherent search, which looks for signals which exactly match the way a template wave evolves [bonus note]. The big difference compared to the Orion spur search is that this one looks at the entire sky. This makes it less sensitive in those narrow directions, but means we are not excluding the possibility of sources from other locations.
Artist’s impression of the local part of the Milky Way. The yellow cones mark the extent of the Orion Spur spotlight search, and the pink circle shows the equivalent sensitivity of this all-sky search. Green stars indicate known pulsars. Original image: NASA/JPL-Caltech/ESO/R. Hurt.
The search identified 16 outliers, but an examination of all of these showed they could be explained either as an injected signal or as detector noise. Since no signals were found, we can instead place some upper limits on the strength of signals.
The plot below translates the calculated upper limits (above which there would have been a ~75%–95% chance of us detecting the signal) into the size of neutron star deformations. Each curve shows the limits on detectable signals at different distances, depending upon their frequency and the rate of change of their frequency. The dotted lines show limits on ellipticity $\varepsilon$, a measure of how bumpy the neutron star is. Larger deformations mean quicker changes of frequency and produce louder signals, therefore they can be detected further away.
Range of the PowerFlux search for rotating neutron stars assuming that spin-down is entirely due to gravitational waves. The solid lines show the upper limits as a function of the gravitational-wave frequency and its rate of change; the dashed lines are the corresponding limits on ellipticity, and the dotted line marks the maximum searched spin-down. Figure 6 of Abbott et al. (2016).
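For a rough sense of the conversion (assuming the standard quadrupole formula and a fiducial moment of inertia $I_{zz} = 10^{38}~\mathrm{kg\,m^2}$), a neutron star with ellipticity $\varepsilon$ and gravitational-wave frequency $f$ at distance $d$ produces a strain amplitude

$h_0 = \frac{4\pi^2 G}{c^4}\frac{I_{zz}\,\varepsilon\, f^2}{d} \approx 1.1\times10^{-26}\left(\frac{\varepsilon}{10^{-6}}\right)\left(\frac{f}{100~\mathrm{Hz}}\right)^2\left(\frac{1~\mathrm{kpc}}{d}\right),$

which is how the strain upper limits translate into the dashed ellipticity curves in the plot.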
Neutron stars are something like giant atomic nuclei. Figuring out the properties of the strange matter that makes up neutron stars is an extremely difficult problem. We'll never be able to recreate such exotic matter in the laboratory. Gravitational waves give us a rare means of gathering experimental data on how this matter behaves. However, exactly how we convert a measurement of a signal into constraints on the behaviour of the matter is still uncertain. I think that making a detection might only be the first step in understanding the sources of continuous gravitational waves.
arXiv: 1605.03233 [gr-qc]
Journal: Physical Review D; 94(4):042002(14); 2016
To attempt to grow a beard. Beard stroking helps you think, right?
### Bonus note
#### The semi-coherent search
As the first step of this search, the PowerFlux algorithm looks for power that changes in frequency as expected for a rotating neutron star: it factors in Doppler shifting due to the motion of the Earth and a plausible spin-down (slowing of the rotation) of the neutron star. As a follow-up, the Loosely Coherent algorithm is used, which checks for signals which match short stretches of similar templates. Any candidates that make it through all stages of refinement are then examined in more detail. This search strategy is described in detail for the S5 all-sky search.
# Search for transient gravitational waves in coincidence with short-duration radio transients during 2007–2013
Gravitational waves give us a new way of observing the Universe. This raises the possibility of multimessenger astronomy, where we study the same system using different methods: gravitational waves, light or neutrinos. Each messenger carries different information, so by using them together we can build up a more complete picture of what’s going on. This paper looks for gravitational waves that coincide with radio bursts. None are found, but we now have a template for how to search in the future.
On a dark night, there are two things which almost everyone will have done: wondered at the beauty of the starry sky and wondered exactly what was it that just went bump… Astronomers do both. Transient astronomy is about figuring out what are the things which go bang in the night—not the things which make suspicious noises, but objects which appear (and usually disappear) suddenly in the sky.
Most processes in astrophysics take a looooong time (our Sun is four-and-a-half billion years old and is just approaching middle age). Therefore, when something happens suddenly, flaring perhaps over just a few seconds, you know that something drastic must be happening! We think that most transients must be tied up with a violent event such as an explosion. However, because transients are so short, it can be difficult to figure out exactly where they come from (both because they might have faded by the time you look, and because there's little information to learn from a blip in the first place).
Radio transients are bursts of radio emission of uncertain origin. We’ve managed to figure out that some come from microwave ovens, but the rest do seem to come from space. This paper looks at two types: rotating radio transients (RRATs) and fast radio bursts (FRBs). RRATs look like the signals from pulsars, except that they don’t have the characteristic period pattern of pulsars. It may be that RRATs come from dying pulsars, flickering before they finally switch off, or it may be that they come from neutron stars which are not normally pulsars, but have been excited by a fracturing of their crust (a starquake). FRBs last a few milliseconds, they could be generated when two neutron stars merge and collapse to form a black hole, or perhaps from a highly-magnetised neutron star. Normally, when astronomers start talking about magnetic fields, it means that we really don’t know what’s going on [bonus note]. That is the case here. We don’t know what causes radio transients, but we are excited to try figuring it out.
This paper searches old LIGO, Virgo and GEO data for any gravitational-wave signals that coincide with observed radio transients. We use a catalogue of RRATs and FRBs from the Green Bank Telescope and the Parkes Observatory, and search around these times. We use a burst search, which doesn't restrict itself to any particular form of gravitational wave; however, the search was tuned for damped sinusoids and sine–Gaussians (generic wibbles), cosmic strings (which may give an indication of how uncertain we are of where radio transients could come from), and coalescences of binary neutron stars or neutron star–black hole binaries. Hopefully the search covers all plausible options. Discovering a gravitational wave coincident with a radio transient would give us much-welcomed information about the source, and perhaps pin down the origin of these transients.
Search results for gravitational waves coincident with radio transients. The probabilities for each time containing just noise (blue) match the expected background distribution (dashed). This is consistent with a non-detection.
The search discovered nothing. Results match what we would expect from just noise in the detectors. This is not too surprising since we are using data from the first-generation detectors. We’ll be repeating the analysis with the upgraded detectors, which can find signals from larger distances. If we are lucky, multimessenger astronomy will allow us to figure out exactly what needs to go bump to create a radio transient.
arXiv: 1605.01707 [astro-ph.HE]
Journal: Physical Review D; 93(12):122008(14); 2016
Science summary: Searching for gravitational wave bursts in coincidence with short duration radio bursts
Favourite thing that goes bump in the night: Heffalumps and Woozles [probably not the cause of radio transients]
### Bonus note
#### Magnetism and astrophysics
Magnetic fields complicate calculations. They make things more difficult to model and are therefore often left out. However, we know that magnetic fields are everywhere and that they do play important roles in many situations. Therefore, they are often invoked as an explanation of why models can’t explain what’s going on. I learnt early in my PhD that you could ask “What about magnetic fields?” at the end of almost any astrophysics seminar (it might not work for some observational talks, but then you could usually ask “What about dust?” instead). Handy if ever you fall asleep…
# Search of the Orion spur for continuous gravitational waves using a loosely coherent algorithm on data from LIGO interferometers
A cloudy bank holiday Monday is a good time to catch up on blogging. Following the splurge of GW150914 papers, I’ve rather fallen behind. Published back in February, this paper is a search for continuous-wave signals: the almost-constant hum produced by rapidly rotating neutron stars.
Continuous-wave searches are extremely computationally expensive. The searches take a while to do, which can lead to a delay before results are published [bonus note]. This is the result of a search using data from LIGO’s sixth science run (March–October 2010).
To detect a continuous wave, you need to sift the data to find a signal that is present throughout all the data. Rotating neutron stars produce a gravitational-wave signal with a frequency twice their rotational frequency. This frequency is almost constant, but could change as the observation goes on because (i) the neutron star slows down as energy is lost (from gravitational waves, magnetic fields or some form of internal sloshing around); (ii) there is some Doppler shifting because of the Earth's orbit around the Sun, and, possibly, (iii) there could be some Doppler shifting because the neutron star is orbiting another object. How do you check for something that is always there?
There are two basic strategies for spotting continuous waves. First, we could look for excess power in a particular frequency bin. If we measure something in addition to what we expect from the detector noise, this could be a signal. Looking at the power is simple, and so not too expensive. However, we’re not using any information about what a real signal should look like, and so it must be really loud for us to be sure that it’s not just noise. Second, we could coherently search for signals using templates for the expected signals. This is much more work, but gives much better sensitivity. Is there a way to compromise between the two strategies to balance cost and sensitivity?
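As a cartoon of the trade-off (a toy demo, not a real search pipeline), compare the two strategies on a weak simulated signal:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, T = 256, 64.0                    # sample rate (Hz) and duration (s)
t = np.arange(0, T, 1 / fs)
f0 = 50.25                           # true signal frequency (on a bin)
data = 0.1 * np.sin(2 * np.pi * f0 * t) + rng.normal(size=t.size)

# Strategy 1: excess power -- just ask which frequency bin is loudest.
power = np.abs(np.fft.rfft(data)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)
print(f"loudest bin at {freqs[np.argmax(power[1:]) + 1]:.3f} Hz")

# Strategy 2: fully coherent -- match against the exact template.
template = np.sin(2 * np.pi * f0 * t)
snr = data @ template / np.sqrt(template @ template)
print(f"coherent signal-to-noise ratio: {snr:.1f}")
```

The coherent statistic knows the signal's phase evolution and so digs much deeper into the noise, but a real search has to try this against a vast bank of templates, which is where the computational cost explodes. Averaging over a family of nearby templates is the compromise.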
This paper reports results of a loosely coherent search. Instead of checking how well the data match particular frequencies and frequency evolutions, we average over a family of similar signals. This is less sensitive, as we get a bit more wiggle room in what would be identified as a candidate, but it is also less expensive than checking against a huge number of templates.
We could only detect continuous waves from nearby sources: neutron stars in our own Galaxy. (Perhaps 0.01% of the distance of GW150914). It therefore makes sense to check nearby locations which could be home to neutron stars. This search narrows its range to two directions in the Orion spur, our local band with a high concentration of stars. By focussing in on these spotlight regions, we increase the sensitivity of the search for a given computational cost. This search could possibly dig out signals from twice as far away as if we were considering all possible directions.
Artist’s impression of the local part of the Milky Way. The Orion spur connects the Perseus and Sagittarius arms. The yellow cones mark the extent of the search (the pink circle shows the equivalent all-sky sensitivity). Green stars indicate known pulsars. Original image: NASA/JPL-Caltech/ESO/R. Hurt.
The search found 70 interesting candidates. Follow-up study showed that most were due to instrumental effects. There were three interesting candidates left after these checks, none significant enough to be a detection, but still worth looking at in detail. A full coherent analysis was done for these three candidates. This showed that they were probably caused by noise. We have no detections.
arXiv: 1510.03474 [gr-qc]
Journal: Physical Review D; 93(4):042006(14); 2016
Science summary: Scouting our Galactic neighborhood
Other bank holiday activities: Scrabble
Bank holiday family Scrabble game. When thinking about your next turn, you could try seeing if your letters match a particular word (a coherent search which would get you the best score, but take ages), or just if your letters jumble together to make something word-like (an incoherent search, that is quick, but may result in lots of things that aren’t really words).
### Bonus note
#### Niceness
The Continuous Wave teams are polite enough to wait until we’re finished searching for transient gravitational-wave signals (which are more time sensitive) before taking up the LIGO computing clusters. They won’t have any proper results from O1 just yet.
# All-sky search for long-duration gravitational wave transients with LIGO
It’s now about 7 weeks since the announcement, and the madness is starting to subside. Although, that doesn’t mean things aren’t busy—we’re now enjoying completely new forms of craziness. In mid March we had our LIGO–Virgo Collaboration Meeting. This was part celebration, part talking about finishing our O1 analysis and part thinking ahead to O2, which is shockingly close. It was fun, there was cake.
Celebratory cake from the March LIGO–Virgo Meeting. It was delicious and had a fruity (strawberry?) filling. The image is February 11th’s Astronomy Picture of the Day. There was a second cake without a picture, that was equally delicious, but the queue was shorter.
All the business means that I’ve fallen behind with my posts, and I’ve rather neglected the final paper published the week starting 8 February. This is perhaps rather apt as this paper has the misfortune to be the first non-detection published in the post-detection world. It is also about a neglected class of signals.
### Long-duration transients
We look for several types of signals with LIGO (and hopefully soon Virgo and KAGRA):
• Compact binary coalescences (like two merging black holes), for which we have templates for the signal. High mass systems might only last a fraction of a second within the detector’s frequency range, but low mass systems could last for a minute (which is a huge pain for us to analyse).
• Continuous waves from rotating neutron stars which are almost constant throughout our observations.
• Bursts, which are transient signals where we don’t have a good model. The classic burst source is from a supernova explosion.
We have some effective search pipelines for finding short bursts—signals of about a second or less. Coherent Waveburst, which was the first code to spot GW150914, is perhaps the best known example. This paper looks at finding longer burst signals, a few seconds to a few hundred seconds in length.
There aren’t too many well studied models for these long bursts. Most of the potential sources are related to the collapse of massive stars. There can be a large amount of matter moving around quickly in these situations, which is what you want for gravitational waves.
Massive stars may end their life in a core collapse supernova. Having used up its nuclear fuel, the star no longer has the energy to keep itself fluffy, and its core collapses under its own gravity. The collapse leads to an explosion as material condenses to form a neutron star, blasting off the outer layers of the star. Gravitational waves could be generated by the sloshing of the outer layers as some is shot outwards and some falls back, hitting the surface of the new neutron star. The new neutron star itself will start life puffed up and perhaps rapidly spinning, and can generate gravitational waves as it settles down to a stable state—a similar thing could happen if an older neutron star is disturbed by a glitch (where we think the crust readjusts itself in something like an earthquake, but more cataclysmic), or if a neutron star accretes a large blob of material.
For the most massive stars, the core continues to collapse through being a neutron star to become a black hole. The collapse would just produce a short burst, so it's not what we're looking for here. However, once we have a black hole, we might build a disc out of material swirling into the black hole (perhaps remnants of the outer parts of the star, or maybe from a companion star). The disc may be clumpy, perhaps because of eddies or magnetic fields (the usual suspects when astrophysicists don't know exactly what's going on), and the rapidly inspiralling blobs could emit a gravitational wave signal.
The potential sources don’t involve as much mass as a compact binary coalescence, so these signals wouldn’t be as loud. Therefore we couldn’t see them quite as far way, but they could give us some insight into these messy processes.
### The search
The paper looks at results using old LIGO data from the fifth and sixth science runs (S5 and S6). Virgo was running at this time, but the data wasn’t included as it vastly increases the computational cost while only increasing the search sensitivity by a few percent (although it would have helped with locating a source if there were one). The data is analysed with the Stochastic Transient Analysis Multi-detector Pipeline (STAMP); we’ll be doing a similar thing with O1 data too.
STAMP searches for signals by building a spectrogram: a plot of how much power there is at a particular gravitational wave frequency at a particular time. If there is just noise, you wouldn't expect the power at one frequency and time to be correlated with that at another frequency and time. Therefore, the search looks for clusters, grouping together times or frequencies closer to one another where there is more power than you might expect.
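Something like the following toy sketch (my illustration, not the STAMP code itself): build a spectrogram, keep the pixels that are suspiciously loud, and group neighbouring loud pixels into clusters.

```python
import numpy as np
from scipy.ndimage import label
from scipy.signal import spectrogram

rng = np.random.default_rng(7)
fs = 512
t = np.arange(0, 120, 1 / fs)                  # two minutes of toy data
data = rng.normal(size=t.size)
band = (t > 40) & (t < 70)                     # a 30 s long transient
data[band] += 0.5 * np.sin(2 * np.pi * 100 * t[band])

f, times, Sxx = spectrogram(data, fs=fs, nperseg=fs)   # ~1 s x 1 Hz pixels
loud = Sxx > np.percentile(Sxx, 99.5)          # suspiciously loud pixels
clusters, n = label(loud)                      # group neighbouring pixels
sizes = np.bincount(clusters.ravel())[1:]
biggest = np.argmax(sizes) + 1
span = np.ptp(times[np.where(clusters == biggest)[1]])
print(f"{n} clusters found; the biggest spans about {span:.0f} s")
```

A long-duration signal shows up as one big connected cluster stretching across many seconds, while noise mostly produces isolated specks.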
The analysis is cunning, as it coherently analyses data from both detectors together when constructing the spectrogram, folding in the extra distance a gravitational wave must travel between the detectors for a given sky position.
The significance of events is calculated in a similar way to how we search for binary black holes. The pipeline ranks candidates using a detection statistic, a signal-to-noise ratio for the cluster of interesting time–frequency pixels $\mathrm{SNR}_\Gamma$ (something like the amount of power measured divided by the amount you'd expect randomly). We work out how frequently you'd expect a particular value of $\mathrm{SNR}_\Gamma$ by analysing time-shifted data: where we've shifted the data from one of the detectors in time relative to data from the other, so that we know there can't be the same signal found in both.
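The time-shifting trick itself is easy to sketch (again a toy, with a made-up statistic standing in for $\mathrm{SNR}_\Gamma$):

```python
import numpy as np

rng = np.random.default_rng(1)
hanford = rng.normal(size=4096)      # toy detector streams (pure noise)
livingston = rng.normal(size=4096)

def coherence_stat(a, b):
    """Toy coherent detection statistic for two data streams."""
    return abs(a @ b) / np.sqrt(a.size)

zero_lag = coherence_stat(hanford, livingston)

# Background: shift one stream so that no real signal could line up.
background = np.array([coherence_stat(hanford, np.roll(livingston, shift))
                       for shift in range(1, 500)])
fap = np.mean(background >= zero_lag)
print(f"false alarm probability of the zero-lag value: {fap:.2f}")
```

With two streams of pure noise, the zero-lag value sits comfortably inside the background distribution, with a false alarm probability of around a half, which is essentially what the real search found.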
The distribution of $\mathrm{SNR}_\Gamma$ is shown below from the search (dots) and from the noise background (lines). You can see that things are entirely consistent with our expectations for just noise. The most significant event has a false alarm probability of 54%, so you’re better off betting it’s just noise. There are no detections here.
False alarm rate (FAR) distribution of triggers from S5 (black circles) and S6 (red triangles) as a function of the signal-to-noise ratio. The background S5 and S6 noise distributions are shown by the solid black and dashed red lines respectively. An idealised Gaussian noise background is shown in cyan. There are no triggers significantly above the expected background level. Fig. 5 from Abbott et al. (2016).
Since the detectors are now much more sensitive, perhaps there's something lurking in our new data. I still think this is unlikely since we can't see sources from a significant distance, but I guess we'll have to wait for the results of the analysis.
arXiv: 1511.04398 [gr-qc]
Journal: Physical Review D; 93(4):042005(19); 2016
Science summary: Stuck in the middle: an all-sky search for gravitational waves of intermediate duration
Favourite (neglected) middle child: Lisa Simpson
Sunset over the Grand Canyon. One of the perks of academia is the travel. A group of us from Birmingham went on a small adventure after the LIGO–Virgo Meeting. This is another reason why I’ve not been updating my blog.
# Searches for continuous gravitational waves from nine young supernova remnants
The LIGO Scientific Collaboration is busy analysing the data we're currently taking with Advanced LIGO. However, the Collaboration is still publishing results from initial LIGO too. The most recent paper is a search for continuous waves—signals that are an almost constant hum throughout the observations. (I expect they'd be quite annoying for the detectors). Searching for continuous waves takes a lot of computing power (you can help by signing up for Einstein@Home), and is not particularly urgent since the sources don't do much, hence it can take a while for results to appear.
### Supernova remnants
Massive stars end their lives with an explosion, a supernova. Their core collapses down and their outer layers are blasted off. The aftermath of the explosion can be beautiful, with the thrown-off debris forming a bubble expanding out into the interstellar medium (the diffuse gas, plasma and dust between stars). This structure is known as a supernova remnant.
The youngest known supernova remnant, G1.9+0.3 (it’s just 150 years old), observed in X-ray and optical light. The ejected material forms a shock wave as it pushes the interstellar material out of the way. Credit: NASA/CXC/NCSU/DSS/Borkowski et al.
At the centre of the supernova remnant may be what is left following the collapse of the core of the star. Depending upon the mass of the star, this could be a black hole or a neutron star (or it could be nothing). We’re interested in the case it is a neutron star.
### Neutron stars
Neutron stars are incredibly dense. One teaspoon’s worth would have about as much mass as 300 million elephants. Neutron stars are like giant atomic nuclei. We’re not sure how matter behaves in such extreme conditions as they are impossible to replicate here on Earth.
If a neutron star rotates rapidly (we know many do) and has an uneven surface, or if there are waves in the neutron star that move lots of material around (like Rossby waves on Earth), then it can emit continuous gravitational waves. Measuring these gravitational waves would tell you how bumpy the neutron star is or how big the waves are, and therefore something about what the neutron star is made from.
Neutron stars are most likely to emit loud gravitational waves when they are young. This is for two reasons. First, the supernova explosion is likely to give the neutron star a big whack; this could ruffle up its surface and set off lots of waves, giving rise to the sort of bumps and wobbles that emit gravitational waves. As the neutron star ages, things can quiet down, the neutron star relaxes, bumps smooth out and waves dissipate. This leaves us with smaller gravitational waves. Second, gravitational waves carry away energy, slowing the rotation of the neutron star. This also means that the signal gets quieter (and harder to detect) as the neutron star ages.
Since young neutron stars are the best potential sources, this study looked at nine young supernova remnants in the hopes of finding continuous gravitational waves. Searching for gravitational waves from particular sources is less computationally expensive than searching the entire sky. The search included Cassiopeia A, which had been previously searched in LIGO’s fifth science run, and G1.9+0.3, which is only 150 years old, as discovered by Dave Green. The positions of the searched supernova remnants are shown in the map of the Galaxy below.
The nine young supernova remnants searched for continuous gravitational waves. The yellow dot marks the position of the Solar System. The green markers show the supernova remnants, which are close to the Galactic plane. Two possible positions for Vela Jr (G266.2−1.2) were used, since we are uncertain of its distance. Original image: NASA/JPL-Caltech/ESO/R. Hurt.
### Gravitational-wave limits
No gravitational waves were found. The search checks how well template waveforms match up with the data. We tested that this works by injecting some fake signals into the data. Since we didn’t detect anything, we can place upper limits on how loud any gravitational waves could be. These limits were double-checked by injecting some more fake signals at the limit, to see if we could detect them. We quoted 95% upper limits, that is where we expect that if a signal was present we could see it 95% of the time. The results actually have a small safety margin built in, so the injected signals were typically found 96%–97% of the time. In any case, we are fairly sure that there aren’t gravitational waves at or above the upper limits.
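Schematically, the injection-based limit setting works like this (an illustrative sketch with invented efficiencies, not the collaboration’s code): measure the fraction of injected signals recovered at each amplitude, then read off where the recovery rate crosses 95%.

import numpy as np

amplitudes = np.array([1.0, 1.5, 2.0, 2.5, 3.0]) * 1e-24  # injected amplitudes (made up)
recovered = np.array([0.40, 0.70, 0.90, 0.96, 0.99])      # fraction recovered at each one

h0_95 = np.interp(0.95, recovered, amplitudes)            # 95% efficiency amplitude
print(f"95% upper limit ~ {h0_95:.2e}")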
These upper limits are starting to tell us interesting things about the size of neutron-star bumps and waves. Hopefully, with data from Advanced LIGO and Advanced Virgo, we’ll actually be able to make a detection. Then we’ll not only be able to say that these bumps and waves are smaller than a particular size, but they are this size. Then we might be able to figure out the recipe for making the stuff of neutron stars (I think it might be more interesting than just flour and water).
arXiv: 1412.5942 [astro-ph.HE]
Journal: Astrophysical Journal; 813(1):39(16); 2015
Science summary: Searching for the youngest neutron stars in the Galaxy
Favourite supernova remnant:
Cassiopeia A
# Directed search for gravitational waves from Scorpius X-1 with initial LIGO
A new paper from the LIGO Scientific Collaboration has snuck out. It was actually published back in March, but I didn’t notice it, nearly breaking my New Year’s resolution. This is another paper on continuous waves from rotating neutron stars, so it’s a little outside my area of expertise. However, there is an official science summary written by people who do know what they’re talking about.
The paper looks at detecting gravitational waves from a spinning neutron star. We didn’t find any. However, we have slightly improved our limit for how loud they need to be before we would have detected them, which is nice.
Neutron stars can rotate rapidly. They can be spun up if they accrete material from a disc orbiting them. If the neutron star has an asymmetry, if it has a little bump, as it rotates it emits gravitational waves. The gravitational waves carry away angular momentum, which should spin down the neutron star. This becomes more effective as the angular velocity increases. At some point you expect that the spin-up effect from accretion balances the spin-down effect of gravitational waves and you are left with a neutron star spinning at a pretty constant velocity. We have some evidence that this might happen, as low-mass X-ray binaries seem to have their spins clustered in a small range of frequencies. Assuming we do have this balance, we are looking for a continuous gravitational wave with constant frequency, a rather dull humming.
Scorpius X-1 is the brightest X-ray source in the sky. It contains a neutron star, so it’s a good place to check for gravitational waves from neutron stars. In this case, we’re using data from initial LIGO’s fifth science run (4 November 2005–1 October 2007). This has been done before, but this paper implements some new techniques. I expect that the idea is to test things out ahead of getting data with Advanced LIGO.
Swift X-ray Telescope image of Scorpius X-1 and the X-ray nova J1745-26 (a stellar-mass black hole), along with the scale of moon, as they would appear in the field of view from Earth. Credit: NASA/Goddard Space Flight Center/S. Immler and H. Krimm.
A limit of 10 days’ worth of data is used, as this should be safely within the time taken for the rotational frequency to fluctuate by a noticeable amount due to variation in the amount of accretion. In human terms, that would be the time between lunch and dinner, where your energy levels change because of how much you’ve eaten. They picked data from 21–31 August 2007, as their favourite (it has the best noise performance over the frequency range of interest), and used two other segments to double-check their findings. We’d be able to use more data if we knew how the spin wandered with time.
We already know a lot about Scorpius X-1 from electromagnetic observations (like where it is and its orbital parameters). We don’t know its spin frequency, but we might have an idea about the orientation of its spin if this coincides with radio jets. The paper considers two cases: one where we don’t know anything about the spin orientation, and one where we use information from the jets. The results are similar in both cases.
As the neutron star orbits in its binary system, it moves back and forth which Doppler shifts the gravitational waves. This adds a little interest to the hum, spreading it out over a range of frequencies. The search looks for gravitational waves over this type of frequency range, which they refer to as sidebands.
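For a sense of scale (a back-of-the-envelope sketch using approximate published values for Scorpius X-1’s orbit and a made-up signal frequency):

import math

f0 = 550.0       # hypothetical gravitational-wave frequency [Hz]
a_sini = 1.44    # projected semi-major axis [light seconds], roughly Sco X-1
P = 68023.7      # orbital period [s], roughly Sco X-1

v_over_c = 2 * math.pi * a_sini / P    # orbital velocity as a fraction of c
delta_f = f0 * v_over_c                # one-sided Doppler shift
n_sidebands = int(2 * delta_f * P)     # sidebands spaced 1/P across the band
print(f"spread of ±{delta_f * 1e3:.0f} mHz, about {n_sidebands} sidebands")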
There are a few events where it looks like there is something, but after carefully checking, these look like they are entirely consistent with noise. I guess this isn’t too surprising. Since they didn’t detect anything, they can only impose an upper limit. This is stronger than the previous upper limit, but only by a factor of about 1.4. This might not sound too great, but the previous analysis used a year of data, whereas this only used 10 days. This method therefore saves a lot on computational time.
The result of the paper is quite nice, but not too exciting. If it were a biscuit, it’d probably be a rich tea. It’s nice to have, but it’s not a custard cream.
arXiv: 1412.0605 [gr-qc]
Journal: Physical Review D; 91(6):062008(20); 2015
Science summary: Combing Initial LIGO Data for the Potentially Strong Continuous Wave Emitter Scorpius X-1
Biscuit rating:
Rich tea
# Narrow-band search of continuous gravitational-wave signals from Crab and Vela pulsars in Virgo VSR4 data
## Collaboration papers
I’ve been a member of the LIGO Scientific Collaboration for just over a year now. It turns out that designing, building and operating a network of gravitational-wave detectors is rather tricky, maybe even harder than completing Super Mario Bros. 3, so it takes a lot of work. There are over 900 collaboration members, all working on different aspects of the project. Since so much of the research is inter-related, certain papers (such as those that use data from the instruments) written by collaboration members have to include the name of everyone who works (at least half the time) on LIGO-related things. After a year in the collaboration, I have now levelled up to be included in the full author list (if there was an initiation ritual, I’ve suppressed the memory). This is weird: papers appear with my name on that I’ve not actually done any work for. It seems sort of like having to bring cake into your office on your birthday: you do have to share your (delicious) cupcakes with everyone else, but in return you get cake even when your birthday is nowhere near. Perhaps all those motivational posters were right about the value of teamwork? I do feel a little guilty about all the extra trees that will die because of people printing out these papers.
My New Year’s resolution was to write a post about every paper I have published. I am going to try to do the LIGO papers too. This should at least make sure that I actually read them all. There are official science summaries written by the people who did actually do the work, which may be better if you actually want an accurate explanation. My first collaboration paper is a joint publication of the LIGO and Virgo collaborations (even more sharing).
## Searching for gravitational waves from pulsars
Neutron stars are formed from the cores of dead stars. When a star’s nuclear fuel starts to run out, their core collapses. The most massive form black holes, the lightest (like our Sun) form white dwarfs, and the ones in the middle form neutron stars. These are really dense, they have about the same mass as our entire Sun (perhaps twice the Sun’s mass), but are just a few kilometres across. Pulsars are a type of neutron star, they emit a beam of radiation that sweeps across the sky as they rotate, sort of like a light-house. If one of these beams hits the Earth, we see a radio pulse. The pulses come regularly, so you can work out how fast the pulsar is spinning (and do some other cool things too).
The mandatory cartoon of a pulsar that everyone uses. The top part shows the pulsar and its beams rotating, and the bottom part shows the signal measured on Earth. We’re not really sure where the beams come from; it’ll be something to do with magnetic fields. Credit: M. Kramer
Because pulsars rotate really quickly, if they have a little bump on their surface, they can emit (potentially detectable) gravitational waves. This paper searches for these signals from the Crab and Vela pulsars. We know where these pulsars are, and how quickly they are rotating, so it’s possible to do a targeted search for gravitational waves (only checking the data for signals that are close to what we expect). Importantly, some wiggle room in the frequency is allowed just in case different parts of the pulsar slosh around at slightly different rates and so the gravitational-wave frequency doesn’t perfectly match what we’d expect from the frequency of pulses; the search is done in a narrow band of frequencies around the expected one. The data used is from Virgo’s fourth science run (VSR4). That was taken back in 2011 (around the time that Captain America was released). The search technique is new (Astone et al., 2014), it’s the first one that incorporates this searching in a narrow band of frequencies; I think the point was to test their search technique on real data before the advanced detectors start producing new data.
Composite image of Hubble (red) optical observations and Chandra (blue) X-ray observations of the Crab pulsar. The pulsar has a mass of 1.4 solar masses and rotates every 30 ms. Credit: Hester et al.
The pulsars emit gravitational waves continuously, they just keep humming as they rotate. The frequency will slow gradually as the pulsar loses energy. As the Earth rotates, the humming gets louder and quieter because the sensitivity of gravitational-wave detectors depends upon where the source is in the sky. Putting this all together gives you a good template for what the signal should look like, and you can see how well it fits the data. It’s kind of like trying to find the right jigsaw piece by searching for the one that interlocks best with those around it. Of course, there is a lot of noise in our detectors, so it’s like if the jigsaw was actually made out of jelly: you could get many pieces to fit if you squeeze them the right way, but then people wouldn’t believe that you’ve actually found the right one. Some detection statistics (which I don’t particularly like, but probably give a sensible answer) are used to quantify how likely it is that they’ve found a piece that fits (that there is a signal). The whole pipeline is tested by analysing some injected signals (artificial signals, made both by adding signals digitally to the data and by actually jiggling the mirrors of the interferometer, to check that things work). It seems to do OK here.
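The jigsaw-fitting step is basically matched filtering; here is a toy version in white noise (my own sketch — real searches work in the frequency domain, weighting by the detector’s noise spectrum):

import numpy as np

rng = np.random.default_rng(1)
t = np.arange(4096) / 1024.0                       # 4 s sampled at 1024 Hz
template = np.sin(2 * np.pi * 30 * t) * np.exp(-((t - 2.0) ** 2))
data = 0.5 * template + rng.normal(size=t.size)    # weak signal buried in noise

snr = np.correlate(data, template, mode="same") / np.sqrt(template @ template)
print("peak SNR:", snr.max().round(2))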
Turning to the actual data, they very carefully show that they don’t think they’ve detected anything for either Vela or Crab. Of course, all the cool kids don’t detect gravitational waves, so that’s not too surprising.
This paper doesn’t claim a detection of gravitational waves, but it doesn’t stink like Zoidberg.
Having not detected anything, you can place an upper limit on the amplitude of any waves that are emitted (because if they were larger, you would’ve detected them). This amplitude can then be compared with what’s expected from the spin-down limit: the amplitude that would be required to explain the slowing of the pulsar. We know how the pulsars are slowing, but not why; it could be because of energy being lost to magnetic fields (the energy for the beams has to come from somewhere), it could be through energy lost as gravitational waves, it could be because of some internal damping, it could all be gnomes. The spin-down limit assumes that it’s all because of gravitational waves, you couldn’t have bigger amplitude waves than this unless something else (that would have to be gnomes) was pumping energy into the pulsar to keep it spinning. The upper limit for the Vela pulsar is about the same as the spin-down limit, so we’ve not learnt anything new. For the Crab pulsar, the upper limit is about half the spin-down limit, which is something, but not really exciting. Hopefully, doing the same sort of searches with data from the advanced detectors will be more interesting.
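For reference, the spin-down limit is usually written as the standard textbook expression (assuming all the spin-down energy goes into gravitational waves; $I$ is the moment of inertia, $\nu$ the rotation frequency, $\dot{\nu}$ the spin-down rate and $d$ the distance):

$h_0^{\mathrm{sd}} = \frac{1}{d}\sqrt{\frac{5}{2}\frac{G I |\dot{\nu}|}{c^{3}\nu}}$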
In conclusion, the contents of this paper are well described by its title:
• Narrow-band search: It uses a new search technique that is not restricted to the frequency assumed from timing pulses
• of continuous gravitational-wave signals: It’s looking for signals from rotating neutron stars (that just keep going) and so are always in the data
• from Crab and Vela pulsars: It considers two particular sources, so we know where in parameter space to look for signals
• in Virgo VSR4 data: It uses real data, but from the first generation detectors, so it’s not surprising it doesn’t see anything
It’s probably less fun that eating a jigsaw-shaped jelly, but it might be more useful in the future.
arXiv: 1410.8310 [gr-qc]
Journal: Physical Review D; 91(2):022004(15); 2015
Science summary: An Extended Search for Gravitational Waves from the Crab and Vela Pulsars
Percentage of paper that is author list: ~30% | 2017-06-23 08:34:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6331848502159119, "perplexity": 941.204470545632}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320040.36/warc/CC-MAIN-20170623082050-20170623102050-00596.warc.gz"} |
https://github.com/KhronosGroup/OpenVX-api-docs | # KhronosGroup/OpenVX-api-docs
OpenVX API and extension specification documents
# OpenVX™ Specification Build Instructions and Notes
Note: This is based on the Vulkan README, and has not been fully updated for OpenVX-specific changes to the build process. The most useful parts are the Introduction, Building The Spec, and notes on installing Software Dependencies.
## Introduction
This README describes important stuff for getting the OpenVX API specification and reference pages building properly.
## Building The Spec
Once you have all the right tools installed (see Software Dependencies below), go to …path-to-git-repo/docs/specification and run

$ make

or make the individual targets html and pdf. These targets generate a variety of output documents in the directory specified by the Makefile variable $(OUTDIR) (by default, out). The checked-in file ../../../out/1.0/index.html links to all these targets, or they can individually be found as follows:
• API spec:
• html - Single-file HTML5 in $(OUTDIR)/html/vkspec.html
• pdf - PDF in $(OUTDIR)/pdf/vkspec.pdf
## Building Extensions
All the extensions (complete or otherwise) in the tree were converted to asciidoc markup and can be built. The source for the API specification is in 'OpenVX_Specification.txt' while each extension is in 'vx_extension_name.txt'. Build an extension by passing SPECBASE=vx_extension_name to make, e.g.
make SPECBASE=vx_khr_nn html
A helper script, makeAllSpecs, can be called as
makeAllSpecs html (or pdf, or both)
### Alternate and Test Builds
If you are just testing asciidoc formatting, macros, stylesheets, etc., you may want to edit OpenVX_Specification.txt to just include your test code. The asciidoctor HTML build is very fast, even for the whole Specification, but PDF builds take several minutes.
### Rebuilding The Generated Images
There are some images in the images/ directory which are maintained in one format but need to be converted to another format for corresponding types of output. Most are SVG converted to PDF, some are PPT converted to PDF converted to SVG. SVG are needed by all builds.
These files are not automatically converted by the Makefile. Instead, all output forms required are checked into images/ . On the rare occasions that someone changes a source document and needs to regenerate the other forms:
cd images ; make
## Our stylesheets
We use a modified version of the Asciidoctor 'colony' theme, altered to more closely resemble the Doxygen stylesheet.
## Imbedding Equations
Where possible, equations should be written in straight asciidoc markup with the eq role. This covers many common equations and is faster than the alternatives.
For more complex equations, such as multi-case statements, matrices, and complex fractions, equations should be written using the latexmath: inline and block macros. The contents of the latexmath: blocks should be LaTeX math notation. LaTeX math markup delimiters are now inserted by the asciidoctor toolchain.
LaTeX math is passed through unmodified to all HTML output forms, which is subsequently rendered with the KaTeX engine when the html is loaded. A local copy of the KaTeX release is kept in doc/specs/vulkan/katex and copied to the HTML output directory during spec generation. Math is processed into SVGs via asciidoctor-mathematical for PDF output.
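For example (a hypothetical equation, purely to illustrate the markup), an inline equation can be written as latexmath:[a^2 + b^2 = c^2], while a display equation uses the block form:

[latexmath]
++++
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
++++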
The following caveats apply:
• The special characters < , > , and & can currently be used only in [latexmath] block macros, not in latexmath:[] inline macros. Instead use \lt, \leq, \gt, and \geq for <, <=, >, and >= respectively. & is an alignment construct for multiline equations, and should only appear in block macros anyway.
• AMSmath environments (e.g. \begin{equation*}, {align*}, etc.) cannot be used in KaTeX at present, and have been replaced with constructs supported by KaTeX such as {aligned}.
• Arbitrary LaTeX constructs cannot be used. KaTeX and asciidoctor-mathematical are only equation renderers, not full LaTeX engines. Imbedding LaTeX like \Large or \hbox{\tt\small VK\_FOO} may not work in any of the backends, and should be avoided.
See the “OpenVX Documentation and Extensions” document for more details of supported LaTeX math constructs.
## Asciidoc Anchors And Xrefs
In the API spec, sections can have anchors (labels) applied with the following syntax. In general the anchor should immediately precede the chapter or section title and should use the form '[[chapter-section-label]]'.
For example, from the Vulkan specification we have:
[[synchronization-primitives]]
Synchronization Primitives
Cross-references to those anchors can then be generated with, for example,
See the <<synchronization-primitives>> section for discussion of fences,
semaphores, and events.
You can also add anchors on arbitrary paragraphs, using a similar naming scheme.
Anything whose definition comes from one of the autogenerated API include files (.txt files in the directories api/basetypes, api/enums, api/flags, api/funcpointers, api/handles, api/protos, and api/structs) has a corresponding anchor whose name is the name of the function, struct, etc. being defined. Therefore you can say something like:
Fences are used with the +++<<vkQueueSubmit>>+++ command...
## Software Dependencies
This section describes the software components used by the OpenVX spec toolchain.
Before building the OpenVX spec, you must install the following tools:
• GNU make (make version: 4.0.8-1; older versions probably OK)
• Python 3 (python, version: 3.4.2)
• Ruby (ruby, version: 2.3.3)
• The Ruby development package (ruby-dev) may also be required in some environments.
• Git command-line client (git, version: 2.1.4). The build can progress without a git client, but branch/commit information will be omitted from the build. Any version supporting the following operations should work:
• git symbolic-ref --short HEAD
• git log -1 --format="%H"
• Ghostscript (ghostscript, version: 9.10). This is needed only for the PDF build, which can still proceed without it. Ghostscript is used to optimize the size of the PDF, so the output will be a lot smaller if it is included.
The following Ruby Gems and platform package dependencies must also be installed. Versions known to work are listed for each gem. Earlier versions probably will not work properly in some respects.
Installing gems and package dependencies is described in more detail for individual platforms and environment managers below. Please read the remainder of this document (other than platform-specific parts you don’t use) completely before trying to install.
Only the asciidoctor and coderay gems are needed if you don’t intend to build PDF versions of the spec and supporting documents.
json-schema is only required in order to validate the output of the valid usage extraction scripts to a JSON file. If not installed, validation will be skipped when the JSON is built.
Note: While it’s easier to install just the toolchain components for HTML builds, people submitting MRs with substantial changes to the Specification are responsible for verifying that their branches build both html and pdf targets.
Platform-specific toolchain instructions follow:
### Windows (General)
Most of the dependencies on Linux packages are light enough that it’s possible to build the spec natively in Windows, but it means bypassing the makefile and calling functions directly. This might be solved in future. For now, there are three options for Windows users: Ubuntu / Windows 10, MinGW, or Cygwin.
#### Ubuntu / Windows 10
When using the “Ubuntu Subsystem” for Windows 10, most dependencies can be installed via apt-get:
sudo apt-get -qq -y install build-essential python3 git cmake bison flex \
libffi-dev libgmp-dev libxml2-dev libgdk-pixbuf2.0-dev libcairo2-dev \
libpango1.0-dev ttf-lyx gtk-doc-tools ghostscript
The default ruby packages on Ubuntu are fairly out of date. Ubuntu only provides ruby and ruby2.0 - the latter is multiple revisions behind the current stable branch, and would require wrangling to get the makefile working with it.
Luckily, there are better options; either rvm or rbenv is recommended to install a more recent version.
Note Note If you are new to Ruby, you should completely remove (through the package manager, e.g. sudo apt-get remove packagename) all existing Ruby and asciidoctor infrastructure on your machine before trying to use rvm or rbenv for the first time. dpkg -l | egrep 'asciidoctor|ruby|rbenv|rvm' will give you a list of candidate package names to remove. If you already have a favorite Ruby package manager, ignore this advice, and just install the required OS packages and gems. In addition, rvm and rbenv are mutually incompatible. They both rely on inserting shims and $PATH modifications in your bash shell. If you already have one of these installed and are familiar with it, it’s probably best to stay with that one. One of the editors, who is new to Ruby, found rbenv far more comprehensible than rvm. The other editor likes rvm better. Neither rvm nor rbenv work, out of the box, when invoked from non-Bash shells like tcsh. This can be hacked up by setting the right environment variables and PATH additions based on a bash environment. Most of the tools on Bash for Windows are quite happy with Windows line endings (CR LF), but bash scripts expect Unix line endings (LF). The file .gitattributes at the top of the vulkan tree in the 1.0 branch forces such scripts to be checked out with the proper line endings on non-Linux platforms. If you add new scripts whose names don’t end in .sh, they should be included in .gitattributes as well. ##### Ubuntu/Windows 10 Using Rbenv Rbenv is a lighter-weight Ruby environment manager with less functionality than rvm. Its primary task is to manage different Ruby versions, while rvm has additional functionality such as managing “gemsets” that is irrelevant to our needs. A complete installation script for the toolchain on Ubuntu for Windows, developed on an essentially out-of-the-box environment, follows. If you try this, don’t try to execute the entire thing at once. Do each step separately in case of errors we didn’t encounter. # Install packages needed by ruby_build and by toolchain components. # See https://github.com/rbenv/ruby-build/wiki and # https://github.com/asciidoctor/asciidoctor-mathematical#dependencies sudo apt-get install autoconf bison build-essential libssl-dev \ libyaml-dev libreadline6-dev zlib1g-dev libncurses5-dev \ libffi-dev libgdbm3 libgdbm-dev cmake libgmp-dev libxml2 \ libxml2-dev flex pkg-config libglib2.0-dev \ libcairo-dev libpango1.0-dev libgdk-pixbuf2.0-dev \ libpangocairo-1.0 # Install rbenv from https://github.com/rbenv/rbenv git clone https://github.com/rbenv/rbenv.git ~/.rbenv # Set path to shim layers in .bashrc echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> .bashrc ~/.rbenv/bin/rbenv init # Set .rbenv environment variables in .bashrc echo 'eval "$(rbenv init -)"' >> .bashrc
# Restart your shell (e.g. open a new terminal window). Note that
# you do not need to use the -l option, since the modifications
# were made to .bashrc rather than .bash_profile. If successful,
# type rbenv should print 'rbenv is a function' followed by code.
# Install ruby_build plugin from https://github.com/rbenv/ruby-build
git clone https://github.com/rbenv/ruby-build.git
~/.rbenv/plugins/ruby-build
# Install Ruby 2.3.3
# This takes in excess of 20 min. to build!
# https://github.com/rbenv/ruby-build/issues/1054#issuecomment-276934761
# suggests:
# "You can speed up Ruby installs by avoiding generating ri/RDoc
# documentation for them:
# RUBY_CONFIGURE_OPTS=--disable-install-doc rbenv install 2.3.3
# We have not tried this.
rbenv install 2.3.3
# Configure rbenv globally to always use Ruby 2.3.3.
echo "2.3.3" > ~/.rbenv/version
# Finally, install toolchain components.
# asciidoctor-mathematical also takes in excess of 20 min. to build!
# The same RUBY_CONFIGURE_OPTS advice above may apply here as well.
gem install asciidoctor coderay json-schema
gem install --pre asciidoctor-pdf
MATHEMATICAL_SKIP_STRDUP=1 gem install asciidoctor-mathematical
##### Ubuntu/Windows 10 Using RVM
Here are (sparser) instructions for using rvm to set up version 2.3.x:
gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
\curl -sSL https://get.rvm.io | bash -s stable --ruby
source ~/.rvm/scripts/rvm
rvm install ruby-2.3
rvm use ruby-2.3
Note: Windows 10 Bash will need to be launched with the "-l" option appended, so that it runs a login shell; otherwise RVM won’t function correctly on future launches.
##### Ubuntu 16.04 using system Ruby
The Ubuntu 16.04.1 default Ruby install (version 2.3.1) seems to be up-to-date enough to run all the required gems, but also needs the ruby-dev package installed through the package manager.
In addition, the library /var/lib/gems/2.3.0/gems/mathematical-1.6.7/ext/mathematical/lib/liblasem.so has to be copied or linked into a directory where the loader can find it. This requirement appears to be due to a problem with the asciidoctor-mathematical build process.
#### MinGW
MinGW can be obtained here: http://www.mingw.org/
Once the installer has run its initial setup, following the instructions on the website, you should install the mingw-developer-tools, mingw-base and msys-base packages. The msys-base package allows you to use a bash terminal from windows with whatever is normally in your path on Windows, as well as the unix tools installed by MinGW.
In the native Windows environment, you should also install the following native packages:
Once this is set up, and the necessary Ruby Gems are installed, launch the msys bash shell, and navigate to the spec Makefile. From there, you’ll need to set PYTHON= to the location of your python executable for version 3.x before your make command - but otherwise everything other than pdf builds should just work.
Note: Building the PDF spec via this path has not yet been tested but may be possible - liblasem is the main issue and it looks like there is now a mingw32 build of it available.
#### Cygwin
When installing Cygwin, you should install the following packages via setup:
// "curl" is only used to download fonts, can be done in another way
autoconf
bison
cmake
curl
flex
gcc-core
gcc-g++
ghostscript
git
libbz2-devel
libcairo-devel
libcairo2
libffi-devel
libgdk_pixbuf2.0-devel
libgmp-devel
libiconv
libiconv-devel
liblasem0.4-devel
libpango1.0-devel
libpango1.0_0
libxml2
libxml2-devel
make
python3
ruby
ruby-devel
Note: Native versions of some of these packages are usable, but care should be taken for incompatibilities with various parts of cygwin - e.g. paths. Ruby in particular is unable to resolve Windows paths correctly via the native version. Python and Git for Windows can be used, though for Python you’ll need to set the path to it via the PYTHON environment variable, before calling make.
When it comes to installing the mathematical ruby gem, there are two things that will require tweaking to get it working. Firstly, instead of:
MATHEMATICAL_SKIP_STRDUP=1 gem install asciidoctor-mathematical
You should use
MATHEMATICAL_USE_SYSTEM_LASEM=1 gem install asciidoctor-mathematical
The latter causes it to use the lasem package already installed, rather than trying to build a fresh one.
The mathematical gem also looks for "liblasem" rather than "liblasem0.4" as installed by the lasem0.4-devel package, so it is necessary to add a symlink to your /lib directory using:
ln -s /lib/liblasem-0.4.dll.a /lib/liblasem.dll.a
Ruby Gems are not installed to a location that is in your path normally. Gems are installed to ~/bin/ - you should add this to your path before calling make:
export PATH=~/bin:\$PATH
Finally, you’ll need to manually install fonts for lasem via the following commands:
mkdir /usr/share/fonts/truetype
cd /usr/share/fonts/truetype
curl -LO http://mirrors.ctan.org/fonts/cm/ps-type1/bakoma/ttf/cmex10.ttf \
-LO http://mirrors.ctan.org/fonts/cm/ps-type1/bakoma/ttf/cmmi10.ttf \
-LO http://mirrors.ctan.org/fonts/cm/ps-type1/bakoma/ttf/cmr10.ttf \
-LO http://mirrors.ctan.org/fonts/cm/ps-type1/bakoma/ttf/cmsy10.ttf \
-LO http://mirrors.ctan.org/fonts/cm/ps-type1/bakoma/ttf/esint10.ttf \
-LO http://mirrors.ctan.org/fonts/cm/ps-type1/bakoma/ttf/eufm10.ttf \
-LO http://mirrors.ctan.org/fonts/cm/ps-type1/bakoma/ttf/msam10.ttf \
-LO http://mirrors.ctan.org/fonts/cm/ps-type1/bakoma/ttf/msbm10.ttf
### Mac OS X
Mac OS X should work in the same way as for Ubuntu by using the Homebrew package manager, with the exception that you can simply install the ruby package via brew rather than using a ruby-specific version manager.
You’ll likely also need to install additional fonts for the PDF build via mathematical, which you can do with:
cd ~/Library/Fonts
curl -LO http://mirrors.ctan.org/fonts/cm/ps-type1/bakoma/ttf/cmex10.ttf \
-LO http://mirrors.ctan.org/fonts/cm/ps-type1/bakoma/ttf/cmmi10.ttf \
-LO http://mirrors.ctan.org/fonts/cm/ps-type1/bakoma/ttf/cmr10.ttf \
-LO http://mirrors.ctan.org/fonts/cm/ps-type1/bakoma/ttf/cmsy10.ttf \
-LO http://mirrors.ctan.org/fonts/cm/ps-type1/bakoma/ttf/esint10.ttf \
-LO http://mirrors.ctan.org/fonts/cm/ps-type1/bakoma/ttf/eufm10.ttf \
-LO http://mirrors.ctan.org/fonts/cm/ps-type1/bakoma/ttf/msam10.ttf \
-LO http://mirrors.ctan.org/fonts/cm/ps-type1/bakoma/ttf/msbm10.ttf
Then install the required Ruby Gems.
### Linux (Debian, Ubuntu, etc.)
The instructions for the Ubuntu / Windows 10 installation are generally applicable to native Linux environments using Debian packages, such as Debian and Ubuntu, although the exact list of packages to install may differ. Other distributions using different package managers, such as RPM (Fedora) and Yum (SuSE) will have different requirements.
Using rbenv or rvm is necessary, since the system Ruby packages are often well out of date.
Once the environment manager, Ruby, and ruby_build have been installed, install the required Ruby Gems.
### Ruby Gems
The following ruby gems can be installed directly via the gem install command, once the platform is set up:
gem install rake asciidoctor coderay json-schema
# Required only for pdf builds
MATHEMATICAL_SKIP_STRDUP=1 gem install asciidoctor-mathematical
gem install --pre asciidoctor-pdf
gem install --pre asciidoctor-diagram
To make sure you have the latest versions of installed gems, periodically execute
gem update
## Revision History
• 2018-11-01 - Update required gem versions
• 2018-02-05 - Retarget document from Vulkan repository for OpenVX asciidoctor spec builds.
https://crypto.stackexchange.com/questions/59460/reveal-all-information-up-to-n-th-message | # Reveal all information up to n-th message
Is there a way to generate a sequence of keys that have the property that exposing any of those keys will reveal all previous keys, but no future key?
I'm thinking about an application that sends an encrypted message every day, and sometimes I want to disclose to some other party the content of all past messages at any point in time, but without revealing any future message.
A possible way would be to hash some data x times, and then use the final hash as the key for the first message, the (x−1)-th hash as the key for the second message, the (x−2)-th hash as the key for the third message, and so on. Revealing the (x−n)-th key to any party allows them to just hash it repeatedly to obtain all the keys between x−n and x, with no access to any later keys. But that limits the application to a fixed total number of messages and requires a lot of work to be done beforehand. Maybe there is a much better way?
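For concreteness, a minimal Python sketch of that construction (SHA-256 picked arbitrarily):

import hashlib

def make_keys(seed: bytes, x: int) -> list:
    # Hash the seed x times and hand out the chain in reverse order:
    # keys[0] (the x-th hash) encrypts message 1; revealing keys[n]
    # lets anyone re-hash forward to recover keys[0]..keys[n-1].
    chain = [seed]
    for _ in range(x):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain[1:][::-1]  # drop the secret seed, newest hash first

keys = make_keys(b"secret seed", 365)                 # one key per day for a year
assert hashlib.sha256(keys[10]).digest() == keys[9]   # key n reveals key n-1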
• Encrypt the old key under the new key and publish the ciphertext (for every key update) and the newest key you want to leak? – SEJPM May 22 '18 at 20:00
Is there a way to generate a sequence of keys that have the property that exposing any of those keys will reveal all previous keys, but no future key?
One possibility: select a hard to factor value $n = pq$; publish the value $n$ (or include it with each day's "key"), and keep the factorization secret. It'll make things easier if you select $p \equiv q \equiv 3 \pmod 4$
Then, select a random quadratic residue $r_0$ (which you can do by selecting a random value $t$ and computing $r_0 = t^2 \bmod n$)
Then, for each day's secret $r_i$, you can compute the next day's secret $r_{i+1} = \sqrt{r_i} \bmod n$ (that is, a modular square root); there are four such square roots; you'll want the one which is a quadratic residue.
That's actually easier than it sounds; if you took my advice about $p \equiv q \equiv 3 \pmod 4$, then all you need to do is compute:
$$r_{i+1} \mod p = r_i^{(p+1)/4} \mod p$$
$$r_{i+1} \mod q = r_i^{(q+1)/4} \mod q$$
And combine $r_{i+1} \mod p$ and $r_{i+1} \mod q$ using CRT to reconstruct $r_{i+1}$
• Computing next keys is as difficult as factoring $n$
• Computing previous keys is easy, as $r_{i-1} = r_i^2 \bmod n$ | 2021-04-13 17:04:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32848793268203735, "perplexity": 779.2664328072406}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038073437.35/warc/CC-MAIN-20210413152520-20210413182520-00133.warc.gz"} |
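A small Python sketch of one key update under this scheme (toy primes for illustration only; real use needs large secret primes, and pow(x, -1, m) needs Python 3.8+):

def next_secret(r, p, q):
    # Quadratic-residue square root of r mod n = p*q, for p ≡ q ≡ 3 (mod 4).
    rp = pow(r, (p + 1) // 4, p)   # square root of r mod p (itself a QR)
    rq = pow(r, (q + 1) // 4, q)   # square root of r mod q
    n = p * q
    # CRT: combine the residues mod p and mod q into a value mod n
    return (rp * q * pow(q, -1, p) + rq * p * pow(p, -1, q)) % n

p, q = 7, 11                      # toy primes, both 3 mod 4
r0 = pow(3, 2, p * q)             # a quadratic residue: 9
r1 = next_secret(r0, p, q)        # the next day's secret: 25
assert pow(r1, 2, p * q) == r0    # going backwards is just squaring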
http://cvgmt.sns.it/paper/3842/ | # On the isoperimetric problem with double density
created by saracco on 09 Apr 2018
modified on 01 Oct 2018
[BibTeX]
Online first
Inserted: 9 apr 2018
Last Updated: 1 oct 2018
Journal: Nonlinear Analysis
Year: 2018
Doi: 10.1016/j.na.2018.04.009
ArXiv: 1804.02966 PDF
Notes:
A subscript $r$ is missing in the hypothesis of Theorem A and related Lemmas in the published version. This version contains the correct statements.
Abstract:
In this paper we consider the isoperimetric problem with double density in a Euclidean space, that is, we study the minimisation of the perimeter among subsets of $\mathbb{R}^n$ with fixed volume, where volume and perimeter are relative to two different densities. The case of a single density, or equivalently, when the two densities coincide, has been well studied in recent years; nonetheless, the problem with two different densities is an important generalisation, also in view of applications. We will prove the existence of isoperimetric sets in this context, extending the known results for the case of a single density.
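For reference, a sketch of the problem in symbols (my notation, not necessarily the authors'): given a volume density $f$ and a perimeter density $h$ on $\mathbb{R}^n$, one studies

$\min\big\{ P_h(E) = \int_{\partial^* E} h \, d\mathcal{H}^{n-1} \;:\; |E|_f = \int_E f \, dx = V \big\}.$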
https://socratic.org/questions/a-man-invests-some-money-at-5-and-49-000-more-than-three-times-the-amount-at-11- | # A man invests some money at 5%, and $49,000 more than three times the amount at 11%. The total annual interest earned from the investment is$51,370. How much did he invest at each amount?
Dec 20, 2017
$121,000 was invested at 5%
$412,000 was invested at 11%
#### Explanation:
There is a given relationship between the two amounts, so you can define them using one variable.
Let the amount at 5% be $x$.
Therefore the larger amount at 11% is $3 x + 49 , 000$
Now form an equation - you know the total amount of interest is $51 , 370$
Interest at 5% + Interest at 11% = 51,370
$\frac{5}{100}x + \frac{11}{100}(3x + 49,000) = 51,370 \qquad \leftarrow \times 100$
$5x + 11(3x + 49,000) = 5,137,000$
$5x + 33x + 539,000 = 5,137,000$
$38x = 5,137,000 - 539,000$
$38x = 4,598,000$
$x = 121,000$
$3 \times 121,000 + 49,000 = 412,000$
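Check the answer against the given total interest:

$0.05 \times 121,000 + 0.11 \times 412,000 = 6,050 + 45,320 = 51,370$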
https://aiida-tutorials.readthedocs.io/en/latest/pages/2018_PRACE_MaX/sections/bands.html | 7. A real-world WorkChain: computing a band structure¶
Note: If you still have enough time, you might want to first check Appendix [sec:convpressure] before continuing with this section.
As a final demonstration of the power of WorkChains in AiiDA, we want to give a demonstration of a WorkChain that we have written that will take a structure as its only input and will compute its band structure. All of the steps that would normally have to be done manually by the researcher, choosing appropriate pseudopotentials, energy cutoffs, k-points meshes, high-symmetry k-point paths and performing the various calculation steps, are performed automatically by the WorkChain.
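For orientation, submitting such a workchain by hand looks roughly like this (a sketch assuming a recent aiida-quantumespresso installation, where the entry point 'quantumespresso.pw.bands' and get_builder_from_protocol exist; the tutorial's version may differ, and the structure pk and code label below are made up):

from aiida import load_profile, orm
from aiida.engine import submit
from aiida.plugins import WorkflowFactory

load_profile()

PwBandsWorkChain = WorkflowFactory('quantumespresso.pw.bands')
structure = orm.load_node(1234)        # hypothetical StructureData pk
code = orm.load_code('pw@localhost')   # hypothetical code label

builder = PwBandsWorkChain.get_builder_from_protocol(code=code, structure=structure)
node = submit(builder)
print(f'Submitted PwBandsWorkChain<{node.pk}>')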
The demonstration of the workchain will be performed in a Jupyter notebook. To run it, follow the instructions that were given for the querybuilder notebook in section [sec:querybuilder]. The only difference is that instead of selecting the notebook in the querybuilder directory, go to pw/bandstructure and choose the bandstructure.ipynb notebook. There you will find some example structures that are loaded from COD, through the importer integrated within AiiDA. Note that the required time to calculate the bandstructure for these example structures ranges from 3 minutes to almost half an hour, given that the virtual machine is running on a single core with minimal computational power. It is not necessary to run these examples as they may take too long to complete. For reference, the expected output band structures are plotted in Fig. [fig:workchainbandstructures].
Electronic band structures of four different crystal structures computed with AiiDA’s PwBandsWorkChain
The following appendices consist of optional exercises, and are mentioned in earlier parts of the tutorial. Go through them only if you have time. | 2021-12-05 16:36:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6272947788238525, "perplexity": 923.6721933844094}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363215.8/warc/CC-MAIN-20211205160950-20211205190950-00415.warc.gz"} |
https://www.physicsforums.com/threads/problem-on-block-sliding-on-a-wedge.51344/ | # Problem on block sliding on a wedge
1. Nov 4, 2004
### gauravkukreja
Consider a block of mass 'm' kept on the hypotenuse of a right triangular wedge of mass 'M'. Calculate the accelaration of the wedge and the block.
Hence find the force that should be applied to 'M' so that 'm' does not move?
2. Nov 4, 2004
### airbuzz
is there no friction?
in this case the interaction force between the block and the wedge is the normal constraint reaction (the normal force), which is equal to the normal component of the block's weight, that is:
$$N=mg\cos\alpha$$
where alpha is the lower angle of the wedge. So the horizontal force between the wedge and the block is
$$F=N\sin\alpha=mg\cos\alpha\,\sin\alpha$$
Considering the wedge it receives a force equal to F so it moves with an acceleration equal to
$$a=\frac{F}{M}=\frac{m}{M}g\cos\alpha\,\sin\alpha$$
Last edited: Nov 4, 2004
3. Nov 4, 2004
### airbuzz
For the second question, I don't understand whether m must not move with respect to a fixed coordinate system or with respect to the wedge...
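If the question means that m must not move relative to the wedge (the usual textbook reading), here is a sketch of the answer, assuming a horizontal applied force and no friction anywhere. For the block, which must accelerate horizontally together with the wedge,

$$N\cos\alpha=mg\,\text{ and }\,N\sin\alpha=ma$$

so the common acceleration is $$a=g\tan\alpha$$ and the force applied to the whole system must be

$$F=(M+m)a=(M+m)g\tan\alpha$$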
https://mathematica.stackexchange.com/questions/216610/minimization-problem-and-voronoi-mesh | # Minimization problem and Voronoi Mesh
I have a random convex mesh $$Q$$, made of $$n$$ polygons, and I want to test how close it is to a Voronoi tessellation. In other words, I'm looking for generators (seeds) $$\{(x_i,y_i)\}_{1\leq i\leq n}$$.
This requires a bit of math, therefore I'll present some of the ideas in Chapter 2.6 of Spatial Tessellations, by Okabe, Boots, Sugihara and Chiu, so that you have a bit of context.
In order to check if the mesh forms a Voronoi Tessellation, we need to guarantee that
1. Each generator is in the associated Voronoi region.
2. An edge in $$Q$$ should be on the perpendicular bisector of the two side generators.
Regarding 1, let $$e$$ be an edge of $$Q$$ shared by two polygons $$q_i$$ and $$q_j$$. Then, for some $$a,b\in\mathbb{R}$$, the line containing $$e$$ can be expressed by the equation $$ax+by=1.$$ Suppose that $$q_i$$ and the origin lie on the same side of $$e$$. Then we get $$ax_i+by_i<1\,\text{ and }\,ax_j+by_j>1,$$ that is, $$1-(ax_i+by_i)>0$$ and $$(ax_j+by_j)-1>0$$. Collecting the inequalities for all edges we get a system of linear inequalities, denoted (with a slight abuse of notation, absorbing the constant terms) by $$A\mathbf{x}>0.$$

For 2, the line containing $$e$$ should contain the midpoint of $$(x_i,y_i)$$ and $$(x_j,y_j)$$. Hence we get $$a\frac{x_i+x_j}{2}+b\frac{y_i+y_j}{2}=1.$$ Furthermore, since the line connecting $$(x_i, y_i)$$ and $$(x_j, y_j)$$ should be perpendicular to $$e$$, we get $$a(y_i-y_j)-b(x_i-x_j)=0.$$ We get similar equations for each edge. Collecting them all, we obtain a system of linear equations, which we denote by $$B\mathbf{x}=\mathbf{c}.$$

Now, instead of searching for the exact solution of the previous equation, I merely want to introduce a certain error factor, in order to characterise the "closeness" to a Voronoi tessellation. Therefore, together with $$A\mathbf{x}>0$$, I want to use Mathematica to solve the problem $$\min_{\mathbf{x}}\| B\mathbf{x}-\mathbf{c} \|^2.$$ How do I do this?
My main problem is in defining the equations for the corresponding seeds. Regarding $$A\mathbf{x}>0$$, I could simply use RegionIntersection with each polygon to force the points to be inside them. But how do I define $$B$$ and $$\mathbf{c}$$? It doesn't seem obvious.
In the end, I want something like
Minimize[{Dot[B, {Join[Table[x[i], {i, n}], Table[y[i], {i, n}]]}] - c,
Dot[A, {Join[Table[x[i], {i, n}], Table[y[i], {i, n}]]}] > 0},
{Join[Table[x[i], {i, n}], Table[y[i], {i, n}]]}]
where the condition $$A\mathbf{x}>0$$ could be replaced in the following manner
Minimize[{Dot[B, {Join[Table[x[i], {i, n}], Table[y[i], {i, n}]]}] - c,
AllTrue[
Table[Not[RegionEqual[
RegionIntersection[MeshPrimitives[Q, 2][[i]],
Point[{x[[i]], y[[i]]}]], EmptyRegion[2]]], {i, n}], TrueQ]},
{Join[Table[x[i], {i, n}], Table[y[i], {i, n}]]}]
For $$B$$ and $$\mathbf{c}$$, I have access to the edges shared by pairs of polygons, but how do I make the correct association with each $$(x_i,y_i)$$?
Any ideas?
This is my solution
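(* abf: given a Line[{p1, p2}], solve a x + b y == 1 at both endpoints for the line coefficients {a, b} *)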
abf = Function[l, Module[{x1, y1, x2, y2},
x1 = l[[1, 1, 1]];
y1 = l[[1, 1, 2]];
x2 = l[[1, 2, 1]];
y2 = l[[1, 2, 2]];
Solve[as x1 + bs y1 == 1 && as x2 + bs y2 == 1, {as, bs}]
]];
n = MeshPrimitives[mesh, 2] // Length;
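(* interior edges: all mesh edges minus the boundary edges, i.e. those shared by two polygons *)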
shre0 = Complement[MeshPrimitives[mesh, 1],
MeshPrimitives[BoundaryMesh[mesh], 1]];
edgn = Length[shre0];
cents = RegionCentroid[MeshPrimitives[mesh, 2]];
shre = Table[Line[SortBy[shre0[[i, 1]], Norm]], {i, edgn}];
pol = Table[
Append[MeshPrimitives[mesh, 2][[i]][[1]],
MeshPrimitives[mesh, 2][[i]][[1, 1]]], {i, n}];
polin0 = Table[
Table[Line[{pol[[j, i]], pol[[j, i + 1]]}], {i,
Length[pol[[j]]] - 1}], {j, n}];
polin = Table[
Table[Line[SortBy[polin0[[j, i, 1]], Norm]], {i,
Length[polin0[[j]]]}], {j, n}];
lsp = Table[Intersection[shre, polin[[i]]], {i, Length[polin]}];
lor = {};
For[i = 1, i <= n, i++,
For[j = i + 1, j <= n, j++,
regg = Intersection[lsp[[i]], lsp[[j]]];
If[regg =!= {},
lor =
Append[lor, {{i, j}, {abf[regg[[1]]][[1, 1, 2]],
abf[regg[[1]]][[1, 2, 2]]}, regg[[1]]}]]
]];
abi = Transpose[lor][[1]];
ab = Transpose[lor][[2]];
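(* matB: two rows per interior edge in the unknowns (x1, y1, ..., xn, yn) — the midpoint condition a (xi + xj) + b (yi + yj) == 2 and the perpendicularity condition a (yi - yj) - b (xi - xj) == 0 *)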
matB = Flatten[
Table[{ReplacePart[
ConstantArray[0, 2 n], {2*abi[[i, 1]] - 1 -> ab[[i, 1]],
2*abi[[i, 1]] -> ab[[i, 2]], 2*abi[[i, 2]] - 1 -> ab[[i, 1]],
2*abi[[i, 2]] -> ab[[i, 2]]}],
ReplacePart[
ConstantArray[0, 2 n], {2*abi[[i, 1]] - 1 -> -ab[[i, 2]],
2*abi[[i, 1]] -> ab[[i, 1]], 2*abi[[i, 2]] - 1 -> ab[[i, 2]],
2*abi[[i, 2]] -> -ab[[i, 1]]}]},
{i, 1, edgn}], 1];
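(* cB: right-hand side, {2, 0} for each edge's (midpoint, perpendicularity) pair of rows *)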
cB = Flatten[ConstantArray[{2, 0}, edgn]];
crns = Transpose[
Table[MeshPrimitives[mesh, 0][[i, 1]], {i,
Length[MeshPrimitives[mesh, 0]]}]];
cr00 = Min[crns[[1]]];
cr10 = Max[crns[[1]]];
cr01 = Min[crns[[2]]];
cr11 = Max[crns[[2]]];
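(* minimise ||B.x - c||^2 subject to each seed lying in the mesh bounding box and inside its own polygon *)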
minV = FindMinimum[{Norm[
Dot[matB, Flatten[Table[{xs[i], ys[i]}, {i, n}]]] - cB]^2,
Table[cr00 <= xs[i] <= cr10 && cr01 <= ys[i] <= cr11 &&
RegionMember[MeshPrimitives[mesh, 2][[i]], {xs[i], ys[i]}], {i,
n}]
},
Flatten[Table[{xs[i], ys[i]}, {i, n}]]];
error = minV[[1]];
ptt = Table[{minV[[2, 2 i - 1, 2]], minV[[2, 2 i, 2]]}, {i, n}];
Then, testing for a Voronoi mesh we get
n = 5; mesh = VoronoiMesh[RandomReal[1, {n, 2}]];
Just as an example, for a random convex mesh, we get
Any comments or questions are welcome. | 2020-08-04 14:28:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 33, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17567482590675354, "perplexity": 4184.790939460468}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735867.94/warc/CC-MAIN-20200804131928-20200804161928-00231.warc.gz"} |
http://zymne.org/bpxsnc/infosys-level-7-salary-e66552 | 8. Which of the following is a species with 12 protons and 10 electrons? Answer #2 | 05/12 2014 01:35 I can't say specifically because it depends on which isotope of calcium you are working with Positive: 50 %. All of the above statements (A–D) are true. COVID-19 is an emerging, rapidly evolving situation. Calcium is an important component of a healthy diet. How many neutrons are found in an atom of aluminum 27? Calcium is a chemical element with the symbol Ca and atomic number 20. Since calcium lost two electrons, it has 20 protons, but only 18 electrons. That's the number of protons + neutrons. In this video we will write the electron configuration for Ca2+, the Calcium ion. Calcium is a chemical element with atomic number 20 which means there are 20 protons in its nucleus. Positive: 50 %. It is the fifth most abundant element in Earthâs crust and the third most abundant metal, after iron and aluminium. What does a Cherokee Purple tomato taste like? However, this assumes the atom to exhibit a spherical shape, which is only obeyed for atoms in vacuum or free space. Authored by: Jessica Garber. Total number of protons in the nucleus is called the atomic number of the atom and is given the symbol Z. An electrons weight is almost nothing 0.0005 so if you simply take away the number of protons (20) away form its mass number (neutrons+protons, 40) you arrive with the answer 20 neutrons. Therefore, the number of electrons in neutral atom of Calcium is 20. How many electrons does a calcium ion have? 4) Chlorine has 17 electrons, so Chlorine ion has 17+1 = 18 electrons. Oppositely charged ions attract each other, forming an ionic bond. Calcium is a chemical element with symbol Ca and atomic mass is 40.. The atomic radius of Calcium atom is 176pm (covalent radius). Oxygen and sulfur are in the same group (16) in the periodic table. This makes calcium a positive ion with a charge of 2+. Due large atomic mass of calcium it is bigger in size than sodium. 1 Answer. The mention of names of specific companies or products does not imply any intention to infringe their proprietary rights. Calcium is unique among metals because its ions have a very large concentration gradient across the plasma membrane of all cells, from 10−3 M Ca2+ outside, to 10−7 M Ca2+ inside. 2) calcium has 20 electrons, so calcium ion has 20-2 = 18 electrons. It reacts with water displacing hydrogen and forming calcium hydroxide. Calcium is a rather hard element that is purified by electrolysis from calcium fluoride that burns with a yellow-red flame and forms a white nitride coating when exposed to air. Calcium (20 Ca) has 26 known isotopes, ranging from 35 Ca to 60 Ca. Oct 29, 2019 - In this video we will write the electron configuration for Ca2+, the Calcium ion. Atoms are neutral; they contain the same number of protons as electrons.By definition, an ion is an electrically charged particle produced by either removing electrons from a neutral atom to give a positive ion or adding electrons to a neutral atom to give a negative ion. For example, a neutral calcium atom, with 20 protons and 20 electrons, readily loses two electrons. In chemistry and atomic physics, the electron affinity of an atom or molecule is defined as: the change in energy (in kJ/mole) of a neutral atom or molecule (in the gaseous phase) when an electron is added to the atom to form a negative ion. It is not found free in nature. 
The most abundant isotope, 40 Ca, as well as the rare 46 Ca, are theoretically unstable on energetic grounds, but their decay has not been observed. I. The calcium, Ca+2 ion has lost (given away) 2 electrons , so has the +2 charge, because it has 2 fewer electrons than protons. Here we de… Click to see full answer. Why did Bill Gates bought Da Vinci's Codex Leicester for \$30 million? This makes each chloride a negative ion with a charge of −1. Atomic number is the proton number which is equals to the number of protons, hence B and C is wrong. The electron configuration of a Ca2+ ion is: 1s2 2s2 2p6 3s2 3p6, which is isoelectronic with the noble gas argon. 5) Oxygen has 8 electrons, so oxygen ion has 8+2 = 10 electrons. Does potassium have more electrons than neon? If the mass number is 41, then the number of neutrons is 41 - 20 = 21 neutrons The "2+" indicates that the ion is deficient of 2 … Potassium ion Sulfide ion Calcium ion Bromide ion Aluminum ion Dairy products are an excellent source of calcium. A calcium ion has 20 protons. The total number of neutrons in the nucleus of an atom is called the neutron number of the atom and is given the symbol N. Neutron number plus atomic number equals atomic mass number: N+Z=A. ... calcium ion; vanadium(III) ion; 1; Ca(HSO 4) 2; E [/hidden-answer] CC licensed content, Original. mass number = protons + neutrons. Therfore there are 18 electrons,20 neutrons and 20 protons present in calcium ion. A atom of calcium has 20 neutrons. For stable elements, there is usually a variety of stable isotopes. We realize that the basics in the materials science can help people to understand many common problems. Electron affinities are more difficult to measure than ionization energies. Give the number of protons and neutrons in a plutonium-244 nucleus: _____ protons _____ neutrons. Our Privacy Policy is a legal statement that explains what kind of information about you we collect, when you visit our Website. The protons and neutrons in the nucleus are very tightly packed. X + eâ â Xâ + energy Affinity = â âH. How many protons and electrons are in p3. Note that, ionization energies measure the tendency of a neutral atom to resist the loss of electrons. There are 13 neutrons in an atom of magnesium-25. When an ion is formed, the number of protons does not change. The Cookies Statement is part of our Privacy Policy. A calcium atom has 20 electrons while a calcium ion has 18 electrons since it loses 2 electrons to form a stable octet structure. The name of a metal ion is the same as the name of the metal atom from which it forms, so Ca 2 + is called a calcium ion. Main purpose of this project is to help the public to learn some interesting and important information about chemical elements and many common materials. Calcium ion | Ca+2 | CID 271 - structure, chemical names, physical and chemical properties, classification, patents, literature, biological activities, safety/hazards/toxicity information, supplier lists, and more. 2) You may not distribute or commercially exploit the content, especially on another website. 20 protons and 20 neutrons II. where X is any atom or molecule capable of being ionized, X+ is that atom or molecule with an electron removed (positive ion), and eâ is the removed electron. How many electrons does ca2+ have? Typical densities of various substances are at atmospheric pressure. How can one ubiquitous intracellular messenger regulate so many different vital processes in parallel, but also work independently? 
Ca + IE â Ca+ + eâ IE = 6.1132 eV. The calcium atom, Ca, has equal numbers of protons, + charges and electrons, - charges. Calcium's atomic mass is 40.08 g/mole, which rounds to 40 g/mole. What is internal and external criticism of historical sources? Feel free to ask a question, leave feedback or take a look at one of our articles. 40 Ca and 40 Ca 2+ both have 20 protons. Electronegativity, symbol Ï, is a chemical property that describes the tendency of an atom to attract electrons towards this atom. A serum calcium test usually checks the total amount of calcium in your blood. So, 40–20=20. Additionally, how many protons neutrons and electrons are in ca2+? The atomic mass is carried by the atomic nucleus, which occupies only about 10-12 of the total volume of the atom or less, but it contains all the positive charge and at least 99.95% of the total mass of the atom. Why do I need an ionized calcium test? Naming Simple Cations Hence, Na+ is called the sodium ion, and Ca2+ is called the calcium ion. The total electrical charge of the nucleus is therefore +Ze, where e (elementary charge) equals to 1,602 x 10-19 coulombs. Calcium atoms will lose two electrons in order to achieve the noble gas configuration of argon. In this example, sodium will donate its one electron to empty its shell, and chlorine will accept that electron to fill its shell. It is an intensive property, which is mathematically defined as mass divided by volume: Electron affinity of Calcium is 2.37 kJ/mol. Mass numbers of typical isotopes of Calcium are 40; 42; 43; 44; 46. In aragonite each calcium ion is surrounded by 9 nearest neighbor oxygens. Calcium carbide (CaC 2) Calcium chloride (CaCl 2) Calcium phosphide (Ca 3 P 2) Interesting facts: It is the 5th most abundant element found in the earth's crust. The +2 charge on 40 Ca 2+ indicates that 40 Ca 2+ has 2 less electrons than 40 Ca. 1) You may use almost everything for non-commercial and educational use. What causes the synaptic vesicles to fuse with active zones? The formula of a calcium ion is Ca2+. The number of electrons in each elementâs electron shells, particularly the outermost valence shell, is the primary factor in determining its chemical bonding behavior. Therefore, its mass number is 40 there are 20 protons in its nucleus ... Calcite and aragonite differ in structure in that in calcite each calcium ion is surrounded by 6 nearest neighbor oxygens. The answer lies in the versatility of the calcium signaling mechanisms in terms of amplitude and spatiotemporal patterning within a neuron. It was prepared as lime by the Romans who called it "calyx," but it wasn't discovered until 1808. The atom has a mass number equal to the number of protons and neutrons, so there must be 12 neutrons in the nucleus. Calciumsignals regulate various developmental processes and have a key role in apoptosis, neurotransmitter release and membrane excitability. Its minor deficit can affect bone and teeth formation. Each electron is influenced by the electric fields produced by the positive nuclear charge and the other (Z â 1) negative electrons in the atom. Density is defined as the mass per unit volume. The number of electrons in an electrically-neutral atom is the same as the number of protons in the nucleus. This is an octahedral structure. The sodium atom has 11 protons, 11 electrons and 12 neutrons. 22 neutrons and 18 protons IV. 
Note that, each element may contain more isotopes, therefore this resulting atomic mass is calculated from naturally-occuring isotopes and their abundance. Vitamin D is needed to absorbcalcium. What is the charge on each of the following ions? What are the names of Santa's 12 reindeers? The atomic number of Ca is 20, so there are 20 protons. It explains how we use cookies (and other locally stored data technologies), how third-party cookies are used on our Website, and how you can manage your cookie options. The atomic radius of a chemical element is a measure of the distance out to which the electron cloud extends from the nucleus. Ionization energy, also called ionization potential, is the energy necessary to remove an electron from the neutral atom. And the number of particles present in the nucleus is referred as mass number (Also, called as atomic mass). Calcium forms a 2+ ion. The number of protons and neutrons is always the same in the neutral atom. A calcium atom has 20 protons, 20 electrons and 20 neutrons. Therefore, it tends to gain an electron to create an ion with 17 protons, 17 neutrons, and 18 electrons, giving it a net negative (–1) charge. 3) Aluminium has 13 electrons, so Aluminium ion has 13-3 = 10 electrons. Copyright 2021 Periodic Table | All Rights Reserved |, Atomic Number â Protons, Electrons and Neutrons in Calcium, Argon â Periodic Table â Atomic Properties, Scandium â Periodic Table â Atomic Properties. eval(ez_write_tag([[250,250],'material_properties_org-banner-2','ezslot_2',111,'0','0']));report this adSince the number of electrons and their arrangement are responsible for the chemical behavior of atoms, the atomic number identifies the various chemical elements. Give the number of protons and neutrons in a calcium-48 nucleus: _____ protons _____ neutrons Subsequently, one may also ask, what is the electron configuration for ca2+? 35 Cl-and 40 Ca 2+ have the same number of electrons. Flerovium was made by bombarding plutonium-244 atoms with calcium-48 ions, effectively combining the two different nuclei to give a flerovium nucleus. 18. For more information about Ca in livi… Therefore, there are various non-equivalent definitions of atomic radius. A neutral calcium atom has 20 electrons, while a calcium atom that has lost two electrons will have 18 electrons, and a neutral argon atom also has 18 electrons. Each calcium atom contains a total of 20 protons/electrons, and 20 neutrons. Ionized calcium, also known as free calcium, is the most active form. For this purposes, a dimensionless quantity the Pauling scale, symbol Ï, is the most commonly used. It must be noted, atoms lack a well-defined outer boundary. Calcium's atomic mass is 40.08 g/mole, which rounds to 40 g/mole. This means sodium atoms have 11 protons in the nucleus. electron configuration of 1s22s22p63s23p64s2 . Also, what is the number of protons neutrons and electrons in calcium? indium. "The amlodipine subtly remodels the pore so that the calcium ion is pulled to one side and just sticks there the whole time, as if it were locked up," said Ning Zheng. The protons and neutrons in the nucleus are very tightly packed. In the periodic table, the elements are listed in order of increasing atomic number Z. Electron configuration of Calcium is [Ar] 4s2. 40 Ca 2+ has 18 electrons. © AskingLot.com LTD 2021 All Rights Reserved. Now it’s time to drill down into the atom. Calcium is an alkaline earth metal, it is a reactive pale yellow metal that forms a dark oxide-nitride layer when exposed to air. 
The number of protons does not change, neither do the number of neutrons. In other words, it can be expressed as the neutral atomâs likelihood of gaining an electron. The number of protons of a neutral atom and its ion is the same. This completely fills the 1st and 2nd electron shells. So, to determine the number of neutrons in atom, we only have to subtract the number of protons from the mass number. 21 protons and 19 neutrons III. Calcium is one of the most-abundant elements in the earth crust ... produced by thermal neutrons via the reaction 40Ca(n,g)41Ca, and is present in the terrestrial environment due to the reactions ... include natural ion exchangers, such as zeolites (Cook, 1943). Therfore there are 18 electrons,20 neutrons and. ¿Cuáles son los 10 mandamientos de la Biblia Reina Valera 1960? The number of protons and neutrons is always the same in the neutral atom. There are total 20 electrons present in calcium as it is calcium ion there is +2 charge it means it now has 18 electrons with it. Calcium lies in 2nd group and 4th period of the periodic table. A Calcium atom, for example, requires the following ionization energy to remove the outermost electron. The 27 after the element name of aluminum means that the particular isotope of aluminum has a. Neutrons … Since the ion has a charge of -3, there will be 15 + 3 = 18 electrons. First Ionization Energy of Calcium is 6.1132 eV. There are five stable isotopes (40 Ca, 42 Ca, 43 Ca, 44 Ca and 46 Ca), plus one isotope (48 Ca) with such a long half-life that for all practical purposes it can be considered stable. Which is better snow blower or snow thrower? If you want to get in touch with us, please do not hesitate to contact us via e-mail: Our Website follows all legal requirements to protect your privacy. It can be purified is not a soft silvery-white metal. They differ only because a 24 Mg atom has 12 neutrons in its nucleus, a 25 Mg atom has 13 neutrons, and a 26 Mg has 14 neutrons. It is now referred to as a chloride ion. Using the periodic table and the information in the table below: a) identify element Q b) fill in the missing pieces of the table Nuclide Mass (amu) Protons Neutrons Electrons % Abundance Q 63 29 62.9298 Q 65 29 64.9278 5. Atomic Number – Protons, Electrons and Neutrons in Calcium Calcium is a chemical element with atomic number 20 which means there are 20 protons in its nucleus. ( 20 and 20) Calcium forms ionic bonds as it gives away the electrons. Its excess can lead to kidney stones. Anyone can be able to come here, learn the basics of materials science, material properties and to compare these properties. Tin. 20 protons and 22 neutrons V. 21 protons and 20 neutrons; How many protons, neutrons, and electrons does ${}_{20}^{40}\text{Ca}^{2+}$ have? How long does it take for sparrows eggs to hatch? A calcium atom has 20 protons, 20 electrons and 20 neutrons. In neurons calcium plays a dual role as a charge carrier and an intracellular messenger. Its physical and chemical properties are most similar to its heavier homologues strontium and barium. 30 (Atomic no. A neutral atom of sulfur has 16 electrons , but the atom then gains an additional two electrons when it forms an ion, taking the total number of electrons to 18. The atomic mass is the mass of an atom. An Na+ ion is a sodium atom that has lost one electron as that makes the number of electrons in the atom equal to that of the nearest Nobel gas Neon which has 10 electrons. 
We assume no responsibility for consequences which may arise from the use of information from this website. The electronegativity of Calcium is: Ï = 1. The atomic mass or relative isotopic mass refers to the mass of a single particle, and therefore is tied to a certain specific isotope of an element. Similarly one may ask, how many protons neutrons and electrons are in ca2+? The configuration of these electrons follows from the principles of quantum mechanics. All of the above statements (A–D) are true. The atoms left from the outer energy level. We can determine this by subtracting the number of protons in the atom from the atomic mass, which, Answer and Explanation: There are 14 neutrons found in an atom of aluminum-27. 1) sodium has 11 electrons, so sodium ion has 11-1 = 10 electrons. The information contained in this website is for general information purposes only. Isotopes are nuclides that have the same atomic number and are therefore the same element, but differ in the number of neutrons. Answer #3 | 04/12 2014 17:35 I can't say specifically because it depends on which isotope of calcium … The S2- ion, the simplest sulfur anion and also known as sulfide, has an electron configuration of 1s2 2s2 2p6 3s2 3p6. This results in a cation with 20 protons, 18 electrons, and a 2+ charge. Calcium is an element, which means that it is made up protons/electrons and neutrons. Take note that the nucleus of an atom is composed of protons and neutrons. 40 Ca has 20 electrons. The difference between the neutron number and the atomic number is known as the neutron excess: D = N â Z = A â 2Z. How do you make a cement mosaic stepping stone? Since each chlorine atom gained an electron, they each have 17 protons and 18 electrons. Answer and Explanation: There are equal numbers of protons and electrons in a neutral atom. This means that, in general, oxygen and sulfur - ... What are the numbers of protons, neutrons, and electrons in an isotope of titanium with a … Total number of protons in the nucleus is called the atomic number of the atom and is given the symbol Z. When you compare the masses of electrons, protons, and neutrons, what you find is that electrons have an extremely small mass, compared to either protons or neutrons. Isotope of aluminum means that the particular isotope of aluminum has a calyx, but... Usually checks the total electrical charge of -3, there is usually variety! Mandamientos de la Biblia Reina Valera 1960 2nd electron shells to understand many common materials free to a! Abundant metal, after iron and Aluminium full answer and the number protons... After the element name of aluminum has a mass number ( also, what is the fifth most abundant calcium ion neutrons... Atom contains a total of 20 protons/electrons, and a 2+ charge their proprietary rights for atoms in or! Displacing hydrogen and forming calcium hydroxide Ca in livi… 1 ) sodium 11... Membrane excitability, but only 18 electrons calcium forms ionic bonds as it gives away the electrons very tightly.! Nearest neighbor oxygens metal, it has 20 electrons and 12 neutrons electron configuration argon!, so there must be noted, atoms lack a well-defined outer boundary hence B C. Nearest neighbor oxygens that 40 Ca 2+ indicates that 40 Ca use of information about you we,. Fifth most abundant element in Earthâs crust and the third most abundant element Earthâs. A dark oxide-nitride layer when exposed to air website is for general information purposes.. It loses 2 electrons to form a stable octet structure 3s2 3p6 pale. 
The basics in the neutral atomâs likelihood of gaining an electron, they each have 17 protons and )... - charges common materials a charge of −1 results in a cation with protons. Achieve the noble gas argon in other words, it can be able come! The third most abundant element in Earthâs crust and the number of electrons in an atom to attract electrons this... Here we de… the protons and electrons in an atom an ion is,! Visit our website the Pauling scale, symbol Ï, is the proton number which is equals to number! Of neutrons calcium atom, with 20 protons, 20 electrons, and 20 neutrons chloride... Historical sources calcium has 20 protons of magnesium-25 electrons,20 neutrons and 20 neutrons the to. One of our articles to help the public to learn some interesting important., neither do the number of protons in the nucleus is therefore +Ze, where e ( elementary charge equals. Proprietary rights a healthy calcium ion neutrons video we will write the electron configuration for ca2+ ’ s time drill... Our Privacy Policy is a reactive pale yellow metal that forms a dark oxide-nitride layer exposed! In 2nd group and 4th period of the atom to exhibit a spherical shape which. Has 8 electrons, so Chlorine ion has 13-3 = 10 electrons an ionic bond Cookies statement is part our. It ’ s time to drill down into the atom 20 electrons, readily loses two.! Electrons since it loses 2 electrons to calcium ion neutrons a stable octet structure Policy is a measure of the statements... Same number of protons and neutrons, so Aluminium ion has 11-1 = 10 electrons it s! Neutrons Click to see full answer neutrons is always the same element, but also work?! Until 1808 so there are various non-equivalent definitions of atomic radius of a ca2+ is! Main purpose of this project is to help the public to learn some interesting and important information about we. Information about Ca in livi… 1 ) you may not distribute or commercially exploit the content, especially on website... Be expressed as the neutral atom other, forming an ionic bond yellow... The Romans who called it calyx, '' but it was n't discovered until 1808 son los mandamientos! Lack a well-defined outer boundary Explanation: there are 18 electrons,20 neutrons and are! Neutrons are found in an atom to exhibit a spherical shape, rounds... Which may arise from calcium ion neutrons mass of an atom of calcium are 40 ; 42 ; 43 44. Element may contain more isotopes, ranging from 35 Ca to 60 Ca of! Neutrons Click to see full answer may not distribute or commercially exploit the content especially! For stable elements, there is usually a variety of stable isotopes nearest neighbor oxygens ) you may use everything! 13 electrons, so Chlorine ion has 8+2 = 10 electrons charges and electrons are the... Of typical isotopes of calcium atom has a charge of -3, there is usually a variety of stable.!: _____ protons _____ neutrons Click to see full answer you we collect, when visit. There is usually a variety of stable isotopes to infringe their proprietary rights 2 electrons to a. It has 20 protons present in the number of Ca is 20 can be purified not... Alkaline earth metal, after iron and Aluminium of these electrons follows the. 40 Ca 2+ has 2 less electrons than 40 Ca strontium and barium people to understand many materials. Checks the total amount of calcium in your blood reacts with water displacing and!, Ca, has an electron have a key role in apoptosis, neurotransmitter release and membrane excitability well-defined boundary. 
17 electrons, so oxygen ion has 18 electrons, so oxygen ion has 17+1 18... Ca2+ ion is formed, the simplest sulfur anion and also known as free calcium, the! This makes calcium a positive ion with a charge of −1 and 12 neutrons calcium ion neutrons..., requires the following ions, is the charge on each of the following ionization,! Remove the outermost electron is now referred to as a chloride ion charged ions attract each other forming! Information purposes only an alkaline earth metal, after iron and Aluminium is isoelectronic with the symbol.... Is 2.37 kJ/mol have to subtract the number of protons of a neutral calcium atom, for example, dimensionless. Silvery-White metal the third most abundant element in Earthâs crust and the number of is... 2.37 kJ/mol for atoms in vacuum or free space we realize that the are! Pauling scale, symbol Ï, is a chemical element is a reactive pale yellow metal that forms a oxide-nitride... Or commercially exploit the content, especially on another website e ( elementary charge ) equals to the of... Use almost everything for non-commercial and educational use the electronegativity of calcium is an alkaline earth metal, is... A spherical shape, which is mathematically defined as the mass of an atom to a! Part of our articles atomâs likelihood of gaining an electron configuration of these follows... Atomic number is the charge on each of the above statements ( A–D ) are true Ca + IE Ca+. calyx, '' but it was n't discovered until 1808 to exhibit a spherical shape, which equals! Information purposes only bonds as it gives away the electrons the use of information from this website number... A ca2+ ion is: Ï = 1 chloride ion not distribute or commercially the... Is internal and external criticism of historical sources various developmental processes and have a role! We assume no responsibility for consequences which may arise from the principles quantum! Tendency of an atom of aluminum 27 regulate so many different vital processes in parallel, differ! An atom of calcium is 20, so oxygen ion has 8+2 = 10 electrons atoms will lose two in! Is 40 in ca2+ has equal numbers of typical isotopes of calcium atom has 11 electrons and )... Causes the synaptic vesicles to fuse with active zones, it is now referred to a. Order to achieve the noble gas argon exhibit a spherical shape, which is only for! Has 8+2 = 10 electrons and sulfur are in the nucleus is therefore,. Both have 20 protons in the nucleus are very tightly packed 2 less electrons than Ca... Will lose two electrons neutral atom and 40 Ca 2+ has 2 electrons! Stable isotopes 's 12 reindeers are nuclides that have the same as the atom. Minor deficit can affect bone and teeth formation in this video we will write electron. Can help people to understand many common materials, but only 18 electrons a! Symbol Ca and 40 Ca we will write the electron configuration for ca2+, the simplest anion! ; 46 atomic number of neutrons ) calcium has 20 protons, 11 electrons, so Aluminium ion has electrons.: Ï = 1, this assumes the atom has 20 electrons, so calcium.. Total electrical charge of −1 non-commercial and educational use principles of quantum.. The total amount of calcium is an intensive property, which is isoelectronic with the noble gas.. Calculated from naturally-occuring isotopes and their abundance, ionization energies is not a soft silvery-white metal mass is from... Does it take for sparrows eggs to hatch and electrons in a cation with 20 protons 20! 
Similarly one may also ask, how many protons neutrons and electrons are in ca2+ eâ =..., atoms lack a well-defined outer boundary Ca ) has 26 known isotopes, ranging 35! Measure the tendency of an atom is 176pm ( covalent radius ) in parallel, but differ in the atom. Many different vital processes in parallel, but also work independently how do you a. Of quantum mechanics from this website is for general information calcium ion neutrons only substances are at atmospheric pressure reacts with displacing! More information about chemical elements and many common materials where e ( elementary charge ) equals 1,602! Minor deficit can affect bone and teeth formation nucleus are very tightly packed atoms... Be 15 + 3 = 18 electrons since it loses 2 electrons to form stable... 1S2 2s2 2p6 3s2 3p6, which is isoelectronic with the symbol Ca and atomic mass of calcium a. The symbol Ca and atomic number and are therefore the same group ( 16 ) in the is.
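All of this particle bookkeeping is simple arithmetic (neutrons = A − Z, electrons = Z − charge), so it can be checked in a few lines of code. A minimal sketch in Python; the lookup table and function name are illustrative, not from any particular library:

```python
# Protons = atomic number Z; neutrons = mass number A - Z;
# electrons = Z - charge (a 2+ cation has two fewer electrons than protons).
ATOMIC_NUMBER = {"O": 8, "Na": 11, "Mg": 12, "Al": 13, "Cl": 17, "Ca": 20}

def particle_counts(symbol, mass_number, charge=0):
    z = ATOMIC_NUMBER[symbol]
    return {"protons": z, "neutrons": mass_number - z, "electrons": z - charge}

print(particle_counts("Ca", 40, charge=+2))  # {'protons': 20, 'neutrons': 20, 'electrons': 18}
print(particle_counts("Cl", 35, charge=-1))  # {'protons': 17, 'neutrons': 18, 'electrons': 18}
```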
https://mathoverflow.net/questions/209074/blow-up-as-polar-coordinates | # Blow-up as polar coordinates?
While doing some explicit calculations involving a blow-up of the plane at a point, I realised what I was doing was basically writing things in polar coordinates. Somewhat astonished that I hadn't made the connections
tangent lines through point $$\leftrightarrow$$ lines $$\theta=const$$ in polar coordinates
exceptional divisor $$\leftrightarrow$$ the line $$r=0$$ in polar coordinates
before, I mentioned it to some other algebraic geometry people, and none of them had thought of it either. It's quite obvious when you see it, but somehow it's never mentioned anywhere, which may be because a) the geometric picture as usually presented is what you really want, who cares about coordinates anyway; or b) this isn't the actual motivation behind the construction, as originally conceived.
So, I propose the following conjectural origin story of the blow-up:
Look at picture of singular curve "Hmm, for no apparent reason I wonder what that looks like in polar coordinates." draw the picture "Hey, the curve isn't singular anymore!" work out how to express this in terms of polynomials, like a good algebraist -- and then you recover the usual text-book presentation of the blow-up.
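For concreteness, here is the kind of computation the story gestures at, worked for one standard example; the choice of the cuspidal cubic is mine, not part of the original post. In the chart $y = tx$ of the blow-up (the algebraic analogue of sweeping the lines $\theta = \mathrm{const}$):

```latex
% Blow-up chart y = tx applied to the cuspidal cubic y^2 = x^3:
\[
  y^2 = x^3, \qquad y = tx
  \;\Longrightarrow\; t^2 x^2 = x^3
  \;\Longrightarrow\; x^2\,\bigl(t^2 - x\bigr) = 0 .
\]
% The factor x^2 is the exceptional divisor (the analogue of r = 0),
% appearing twice because the cusp has multiplicity 2; the strict
% transform t^2 = x is a smooth parabola, so one substitution has
% resolved the singularity.
```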
Question: Is this story complete rubbish?
or if you will, I suppose I could just have asked
Question': what is the historical origin of the blow-up construction?
• I wouldn't say it is complete rubbish, but your analogy only works for surfaces. Here is something more general to think about: "When you blow up at a subvariety, you replace the subvariety with its projectivized normal cone." Jun 12, 2015 at 12:37
• I think it's reasonable that the idea of blowing up a point came before higher dimensions, and, whether it was explicitly viewed as polar co-ordinates or not, it's likely that it was somewhere in the back of the mind of the person(s) who discovered the idea of blowing up. Jun 12, 2015 at 13:05
• As evidence for the story not being completely rubbish: Pierre Milman (my advisor in grad school) always (at least from 2005 - that's when I started talking with him) used to introduce blow-ups as an algebraic-geometric version of polar coordinates. Aug 23, 2018 at 1:57
The starting point for constructing such an analytic triangle (what is nowadays called the Newton polygon) is (in Coolidge's words): Suppose that the point in which we are interested is the origin. We put $y=vx^{\mu}$ and seek out those terms ...
https://www.jobilize.com/precalculus/section/investigating-limacons-polar-coordinates-graphs-by-openstax?qcr=www.quizover.com | # 10.4 Polar coordinates: graphs (Page 4/16)
## Formulas for a cardioid
The formulas that produce the graphs of a cardioid are given by $r=a\pm b\cos\theta$ and $r=a\pm b\sin\theta$ where $a>0$, $b>0$, and $\frac{a}{b}=1$. The cardioid graph passes through the pole, as we can see in [link].
Given the polar equation of a cardioid, sketch its graph.
1. Check equation for the three types of symmetry.
2. Find the zeros. Set $r=0$.
3. Find the maximum value of the equation according to the maximum value of the trigonometric expression.
4. Make a table of values for $r$ and $\theta$.
5. Plot the points and sketch the graph.
## Sketching the graph of a cardioid
Sketch the graph of $r=2+2\cos\theta$.

First, testing the equation for symmetry, we find that the graph of this equation will be symmetric about the polar axis. Next, we find the zeros and maximums. Setting $r=0$, we have $\theta=\pi+2k\pi$. The zero of the equation is located at $(0,\pi)$. The graph passes through this point.

The maximum value of $r=2+2\cos\theta$ occurs when $\cos\theta$ is a maximum, which is when $\cos\theta=1$ or when $\theta=0$. Substitute $\theta=0$ into the equation, and solve for $r$:

$$r = 2+2\cos(0) = 2+2(1) = 4$$

The point $(4,0)$ is the maximum value on the graph.

We found that the polar equation is symmetric with respect to the polar axis, but as it extends to all four quadrants, we need to plot values over the interval $[0,\pi]$. The upper portion of the graph is then reflected over the polar axis. Next, we make a table of values, as in [link], and then we plot the points and draw the graph. See [link].

| $\theta$ | $0$ | $\frac{\pi}{4}$ | $\frac{\pi}{2}$ | $\frac{2\pi}{3}$ | $\pi$ |
|---|---|---|---|---|---|
| $r$ | $4$ | $3.41$ | $2$ | $1$ | $0$ |
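The zeros/maximum/table routine above is easy to automate. Here is a minimal Python sketch (matplotlib's polar projection handles the plotting; the variable names are my own):

```python
import numpy as np
import matplotlib.pyplot as plt

# r = 2 + 2 cos(theta): reproduce the table values, then sketch the cardioid.
for t in [0, np.pi / 4, np.pi / 2, 2 * np.pi / 3, np.pi]:
    print(f"theta = {t:.2f}, r = {2 + 2 * np.cos(t):.2f}")  # 4.00, 3.41, 2.00, 1.00, 0.00

theta = np.linspace(0, 2 * np.pi, 400)
ax = plt.subplot(projection="polar")
ax.plot(theta, 2 + 2 * np.cos(theta))
plt.show()
```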
## Investigating limaçons
The word limaçon is Old French for "snail," a name that describes the shape of the graph. As mentioned earlier, the cardioid is a member of the limaçon family, and we can see the similarities in the graphs. The other images in this category include the one-loop limaçon and the two-loop (or inner-loop) limaçon. One-loop limaçons are sometimes referred to as dimpled limaçons when $1<\frac{a}{b}<2$ and convex limaçons when $\frac{a}{b}\ge 2$.
## Formulas for one-loop limaçons
The formulas that produce the graph of a dimpled one-loop limaçon are given by $r=a\pm b\cos\theta$ and $r=a\pm b\sin\theta$ where $a>0$, $b>0$, and $1<\frac{a}{b}<2$. All four graphs are shown in [link].
Given a polar equation for a one-loop limaçon, sketch the graph.
1. Test the equation for symmetry. Remember that failing a symmetry test does not mean that the shape will not exhibit symmetry. Often the symmetry may reveal itself when the points are plotted.
2. Find the zeros.
3. Find the maximum values according to the trigonometric expression.
4. Make a table.
5. Plot the points and sketch the graph.
## Sketching the graph of a one-loop limaçon
Graph the equation $r=4-3\sin\theta$.

First, testing the equation for symmetry, we find that it fails all three symmetry tests, meaning that the graph may or may not exhibit symmetry, so we cannot use the symmetry to help us graph it. However, this equation has a graph that clearly displays symmetry with respect to the line $\theta=\frac{\pi}{2}$, yet it fails all three symmetry tests. A graphing calculator will immediately illustrate the graph's reflective quality.

Next, we find the zeros and maximum, and plot the reflecting points to verify any symmetry. Setting $r=0$ gives $\sin\theta=\frac{4}{3}$, which has no solution: the sine of an angle never exceeds 1, so $\theta$ is undefined here. Consequently, the graph does not pass through the pole. Perhaps the graph does cross the polar axis, but not at the pole. We can investigate other intercepts by calculating $r$ when $\theta=0$:

$$r(0) = 4-3\sin(0) = 4-3\cdot 0 = 4$$

So, there is at least one polar axis intercept, at $(4,0)$.

Next, as the maximum value of the sine function is 1 when $\theta=\frac{\pi}{2}$, we will substitute $\theta=\frac{\pi}{2}$ into the equation and solve for $r$. Thus, $r=1$.
Make a table of the coordinates similar to [link] .
| $\theta$ | $0$ | $\frac{\pi}{6}$ | $\frac{\pi}{3}$ | $\frac{\pi}{2}$ | $\frac{2\pi}{3}$ | $\frac{5\pi}{6}$ | $\pi$ | $\frac{7\pi}{6}$ | $\frac{4\pi}{3}$ | $\frac{3\pi}{2}$ | $\frac{5\pi}{3}$ | $\frac{11\pi}{6}$ | $2\pi$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| $r$ | $4$ | $2.5$ | $1.4$ | $1$ | $1.4$ | $2.5$ | $4$ | $5.5$ | $6.6$ | $7$ | $6.6$ | $5.5$ | $4$ |
The graph is shown in [link] .
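Continuing the Python sketches, the table for $r=4-3\sin\theta$ can be checked numerically (plain math module this time):

```python
import math

# r = 4 - 3 sin(theta) at theta = 0, pi/6, 2*pi/6, ..., 2*pi.
values = [round(4 - 3 * math.sin(k * math.pi / 6), 1) for k in range(13)]
print(values)
# [4.0, 2.5, 1.4, 1.0, 1.4, 2.5, 4.0, 5.5, 6.6, 7.0, 6.6, 5.5, 4.0]
# The minimum is r = 1 at theta = pi/2, and r never reaches 0, which
# confirms that the curve does not pass through the pole.
```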