URL
stringlengths 15
1.68k
| text_list
sequencelengths 1
199
| image_list
sequencelengths 1
199
| metadata
stringlengths 1.19k
3.08k
|
---|---|---|---|
https://www.shaalaa.com/question-bank-solutions/representation-images-formed-spherical-mirrors-using-ray-diagrams-concave-mirror-we-wish-obtain-erect-image-object-using-concave-mirror-focal-length-15-cm-what-should-be-range-distance-object-mirror_6179 | [
"Share\nNotifications\n\nView all notifications\nBooks Shortlist\nYour shortlist is empty\n\n# We Wish to Obtain an Erect Image of an Object, Using a Concave Mirror of Focal Length 15 Cm. What Should Be the Range of Distance of the Object from the Mirror? - CBSE Class 10 - Science\n\nLogin\nCreate free account\n\nForgot password?\nConceptRepresentation of Images Formed by Spherical Mirrors Using Ray Diagrams - Concave Mirror\n\n#### Question\n\nWe wish to obtain an erect image of an object, using a concave mirror of focal length 15 cm. What should be the range of distance of the object from the mirror? What is the nature of the image? Is the image larger or smaller than the object? Draw a ray diagram to show the image formation in this case.\n\n#### Solution 1\n\nRange of the distance of the object = 0 to 15 cm from the pole of the mirror.\nNature of the image = virtual, erect and larger than the object.",
null,
"#### Solution 2\n\nRange of object distance = 0 cm to15 cm\nA concave mirror gives an erect image when an object is placed between its pole (P) and the principal focus (F).\nHence, to obtain an erect image of an object from a concave mirror of focal length 15 cm, the object must be placed anywhere between the pole and the focus. The image formed will be virtual, erect, and magnified in nature, as shown in the given figure.",
null,
"Is there an error in this question or solution?\n\n#### APPEARS IN\n\nNCERT Solution for Science Textbook for Class 10 (2019 to Current)\nChapter 10: Light – Reflection and Refraction\nQ: 7 | Page no. 186\nSolution We Wish to Obtain an Erect Image of an Object, Using a Concave Mirror of Focal Length 15 Cm. What Should Be the Range of Distance of the Object from the Mirror? Concept: Representation of Images Formed by Spherical Mirrors Using Ray Diagrams - Concave Mirror.\nS"
] | [
null,
"https://www.shaalaa.com/images/_4:ac5a12585d0349fc91f62f6334fd8924.png",
null,
"https://www.shaalaa.com/images/_4:f1db2eb3a5264e2696270ea4958760a5.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.90484565,"math_prob":0.6441245,"size":882,"snap":"2019-35-2019-39","text_gpt3_token_len":209,"char_repetition_ratio":0.17995444,"word_repetition_ratio":0.071428575,"special_character_ratio":0.23469388,"punctuation_ratio":0.09139785,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9504005,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-20T03:49:25Z\",\"WARC-Record-ID\":\"<urn:uuid:bd049ace-186f-4798-a5ea-9916524b0fbb>\",\"Content-Length\":\"47546\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4136fc12-b318-4b7d-998f-f966d4b7936e>\",\"WARC-Concurrent-To\":\"<urn:uuid:dbbfc90b-de21-4ef8-8c57-86df20e0f866>\",\"WARC-IP-Address\":\"139.162.10.45\",\"WARC-Target-URI\":\"https://www.shaalaa.com/question-bank-solutions/representation-images-formed-spherical-mirrors-using-ray-diagrams-concave-mirror-we-wish-obtain-erect-image-object-using-concave-mirror-focal-length-15-cm-what-should-be-range-distance-object-mirror_6179\",\"WARC-Payload-Digest\":\"sha1:WQDZ2NMLSVUVAZ2P6MOSFEE7VQ27RRDI\",\"WARC-Block-Digest\":\"sha1:3JGZTRHXSJQWI2VKKRZVDYHLEHN5WFV2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027315222.14_warc_CC-MAIN-20190820024110-20190820050110-00393.warc.gz\"}"} |
https://eprint.iacr.org/2001/094 | [
"### Slope packings and coverings, and generic algorithms for the discrete logarithm problem\n\nM. Chateauneuf, A. C. H. Ling, and D. R. Stinson\n\n##### Abstract\n\nWe consider the set of slopes of lines formed by joining all pairs of points in some subset S of a Desarguesian affine plane of prime order, p. If all the slopes are distinct and non-infinite, we have a slope packing; if every possible non-infinite slope occurs, then we have a slope covering. We review and unify some results on these problems that can be derived from the study of Sidon sets and sum covers. Then we report some computational results we have obtained for small values of p. Finally, we point out some connections between slope packings and coverings and generic algorithms for the discrete logarithm problem in prime order (sub)groups. Our results provide a combinatorial characterization of such algorithms, in the sense that any generic algorithm implies the existence of a certain slope packing or covering, and conversely.\n\nAvailable format(s)\nCategory\nFoundations\nPublication info\nPublished elsewhere. preprint\nKeywords\ndiscrete logarithm problemcombinatorial cryptography\nContact author(s)\ndstinson @ uwaterloo ca\nHistory\nShort URL\nhttps://ia.cr/2001/094",
null,
"CC BY\n\nBibTeX\n\n@misc{cryptoeprint:2001/094,\nauthor = {M. Chateauneuf and A. C. H. Ling and D. R. Stinson},\ntitle = {Slope packings and coverings, and generic algorithms for the discrete logarithm problem},\nhowpublished = {Cryptology ePrint Archive, Paper 2001/094},\nyear = {2001},\nnote = {\\url{https://eprint.iacr.org/2001/094}},\nurl = {https://eprint.iacr.org/2001/094}\n}",
null,
"Note: In order to protect the privacy of readers, eprint.iacr.org does not use cookies or embedded third party content."
] | [
null,
"https://eprint.iacr.org/img/license/CC_BY.svg",
null,
"https://eprint.iacr.org/img/iacrlogo_small.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.83990467,"math_prob":0.64632624,"size":1682,"snap":"2023-14-2023-23","text_gpt3_token_len":432,"char_repetition_ratio":0.116805725,"word_repetition_ratio":0.06692913,"special_character_ratio":0.25208086,"punctuation_ratio":0.1525974,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95302504,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-03T15:31:36Z\",\"WARC-Record-ID\":\"<urn:uuid:ff97cfe7-9f22-4641-a5bb-5ff8c61bf889>\",\"Content-Length\":\"12216\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:950d1b08-6871-4831-94a4-a65258080d47>\",\"WARC-Concurrent-To\":\"<urn:uuid:295c2890-df48-4b81-8cf4-965708e8219f>\",\"WARC-IP-Address\":\"64.227.104.178\",\"WARC-Target-URI\":\"https://eprint.iacr.org/2001/094\",\"WARC-Payload-Digest\":\"sha1:LOQA3D475LFY6W6XYK7BY66RZICOC6FS\",\"WARC-Block-Digest\":\"sha1:I522NT5C5WKRWTQK7HM4Z3CFDVT4Z5JY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224649293.44_warc_CC-MAIN-20230603133129-20230603163129-00567.warc.gz\"}"} |
https://math.libretexts.org/Bookshelves/Applied_Mathematics/Book%3A_Introduction_to_the_Modeling_and_Analysis_of_Complex_Systems_(Sayama)/12%3A_Cellular_Automata_II_-_Analysis | [
"$$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$\n\n# 12: Cellular Automata II - Analysis\n\n$$\\newcommand{\\vecs}{\\overset { \\rightharpoonup} {\\mathbf{#1}} }$$ $$\\newcommand{\\vecd}{\\overset{-\\!-\\!\\rightharpoonup}{\\vphantom{a}\\smash {#1}}}$$$$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$\n\n• 12.1: Sizes of Rule Space and Phase Space\nOne of the unique features of typical CA models is that time, space, and states of cells are all discrete. Because of such discreteness, the number of all possible state-transition functions is finite, i.e., there are only a finite number of “universes” possible in a given CA setting. Moreover, if the space is finite, all possible configurations of the entire system are also enumerable. This means that, for reasonably small CA settings, one can conduct an exhaustive search of the entire rule space o\n• 12.2: Phase Space Visualization\nIf the phase space of a CA model is not too large, you can visualize it using the technique we discussed in Section 5.4. Such visualizations are helpful for understanding the overall dynamics of the system, especially by measuring the number of separate basins of attraction, their sizes, and the properties of the attractors.\n• 12.3: Mean-Field Approximation\nBehaviors of CA models are complex and highly nonlinear, so it isn’t easy to analyze their dynamics in a mathematically elegant way. But still, there are some analytical methods available. Mean-field approximation is one such analytical method. It is a powerful analytical method to make a rough prediction of the macroscopic behavior of a complex system.\n• 12.4: Renormalization Group Analysis to Predict Percolation Thresholds\nThe next analytical method is for studying critical thresholds for percolation to occur in spatial contact processes, like those in the epidemic/forest fire CA model discussed in Section 11.5. The percolation threshold may be estimated analytically by a method called renormalization group analysis."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.69766074,"math_prob":0.99886215,"size":2728,"snap":"2020-10-2020-16","text_gpt3_token_len":750,"char_repetition_ratio":0.20778267,"word_repetition_ratio":0.11377245,"special_character_ratio":0.28225806,"punctuation_ratio":0.11136891,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97099847,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-07T01:33:58Z\",\"WARC-Record-ID\":\"<urn:uuid:3328dd44-0d16-4df3-a429-b285484445d2>\",\"Content-Length\":\"96730\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1357dd01-0f83-420d-b755-36dcb704beee>\",\"WARC-Concurrent-To\":\"<urn:uuid:2ad9c541-8db5-4b55-bb88-42744e57e764>\",\"WARC-IP-Address\":\"13.249.44.60\",\"WARC-Target-URI\":\"https://math.libretexts.org/Bookshelves/Applied_Mathematics/Book%3A_Introduction_to_the_Modeling_and_Analysis_of_Complex_Systems_(Sayama)/12%3A_Cellular_Automata_II_-_Analysis\",\"WARC-Payload-Digest\":\"sha1:SWU7SE3EZXAVOOISX465GOWKXCVYQL76\",\"WARC-Block-Digest\":\"sha1:VUGYJMTN4GXSSC6JHZUHTGGHICCHME4I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585371662966.69_warc_CC-MAIN-20200406231617-20200407022117-00217.warc.gz\"}"} |
https://www.math.net/octahedron | [
"# Octahedron\n\nAn octahedron is a space figure with 8 faces that are polygons. In the figure below are 3 different types of octahedrons. The prefix \"octa\" means eight.\n\nAn octahedron can be formed by two pyramids with bases in the shape of a quadrilateral, as shown in the figure below.\n\n## Properties of a regular octahedron\n\nA regular octahedron is an octahedron whose faces are all congruent, regular polygons. Otherwise, it is irregular. Regular octahedrons are studied more often.\n\nA regular octahedron, such as the one shown above, is one of the 5 Platonic solids, which are a type of regular polyhedron. A regular octahedron has 8 congruent faces that are congruent equilateral triangles, 12 congruent edges, and 6 vertices; an edge is a line segment formed by the intersection of two adjacent faces; a vertex for a regular octahedron is a point where 4 edges meet.\n\n## Surface area of a regular octahedron\n\nWe can find the area of one of the faces and multiply it by eight to find the total surface area of a regular octahedron. An equilateral triangle with side length, e (also the length of the edges of a regular octahedron), has an area, A, of\n\nThe total surface area, S, of a regular octahedron in terms of its edges, e, is,\n\n## Volume of a regular octahedron\n\nThe volume, V, of a regular octahedron is\n\nwhere e is the length of the edge.\n\nExample:\n\nIf the total surface area of a regular octahedron is",
null,
", what is its volume?\n\nWe can find e by substituting the given value in for the total surface area to get\n\n36 = e2\n\ne = 6\n\nSubstituting the length of the edge into the volume formula:"
] | [
null,
"https://dfrrh0itwp1ti.cloudfront.net/mj/NzIgXHNxcnR7M30=_100.svg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9387575,"math_prob":0.979551,"size":1570,"snap":"2022-40-2023-06","text_gpt3_token_len":410,"char_repetition_ratio":0.23116219,"word_repetition_ratio":0.035587188,"special_character_ratio":0.21528663,"punctuation_ratio":0.11320755,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99878365,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-06T19:41:25Z\",\"WARC-Record-ID\":\"<urn:uuid:92093594-3ed9-41d4-815b-39a5f3f96378>\",\"Content-Length\":\"9661\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b8dbe895-d018-4f87-ab74-d11b92125fac>\",\"WARC-Concurrent-To\":\"<urn:uuid:b89ba3d2-46b8-4c28-b1b4-611e602db9e6>\",\"WARC-IP-Address\":\"69.10.42.198\",\"WARC-Target-URI\":\"https://www.math.net/octahedron\",\"WARC-Payload-Digest\":\"sha1:3RR2WJIS4S2LMFG444DVW2XJH6ZDHEET\",\"WARC-Block-Digest\":\"sha1:G4B64SIRBVZXWA7J5MWMQF6AP7Y3DDNB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500357.3_warc_CC-MAIN-20230206181343-20230206211343-00828.warc.gz\"}"} |
http://algebra2014.wikidot.com/theorem-8-16 | [
"Theorem 8.16\n\nLet $G$ be a group. We show that $G$ is isomorphic to a subgroup of $S_G$. By Lemma 8.15, we need only to define a one-to-one function $\\phi : G \\rightarrow S_G$ such that $\\phi (xy) = \\phi (x) \\phi (y)$ for all $x, y \\in G$. For $x \\in G$, let $\\lambda _x : G \\rightarrow G$ be defined by $\\lambda _x (g) = xg$ for all $g \\in G$. (We think of $\\lambda _x$ as performing left multiplication by $x$.) The equation $\\lambda _x (x^{-1} c) = x(x^{-1} c) =c$ for all $c \\in G$ shows that $\\lambda _x$ maps $G$ onto $G$. If $\\lambda _x (a) = \\lambda _x (b)$, then $xa = xb$ so $a = b$ by cancellation. Thus $\\lambda _x$ is also one-to-one, and is a permutation of $G$. We now define $\\phi : G \\rightarrow S_G$ by defining $\\phi (x) = \\lambda _x$ for all $x \\in G$.\nTo show that $\\phi$ is one-to-one, suppose that $\\phi (x) = \\phi (y)$. Then $\\lambda _x = \\lambda _y$ as functions mapping $G$ into $G$. In particular $\\lambda _x (e) = \\lambda _y (e)$, so $xe = ye$ and $x=y$. Thus $\\phi$ is one-to-one. It only remains to show that $\\phi (xy) = \\phi (x) \\phi (y)$, that is, that $\\lambda _{xy} = \\lambda _x \\lambda _y$. Now for any $g \\in G$, we have $\\lambda _{xy} (g) = (xy)g$. Permutation multiplication is function composition, so $(\\lambda _x \\lambda _y )(g) = \\lambda _x (\\lambda _y(g)) = \\lambda _x (yg) =x(yg)$. Thus by associativity, $\\lambda _{xy} = \\lambda _x \\lambda _y$. $\\blacksquare$"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7719419,"math_prob":1.0000097,"size":1510,"snap":"2019-43-2019-47","text_gpt3_token_len":521,"char_repetition_ratio":0.21779549,"word_repetition_ratio":0.05,"special_character_ratio":0.3807947,"punctuation_ratio":0.115625,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.00001,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-21T12:27:27Z\",\"WARC-Record-ID\":\"<urn:uuid:aee0751e-3c99-4902-b224-c530d8808b6c>\",\"Content-Length\":\"30478\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:637f7054-a3fd-442b-9974-845c2c5913bd>\",\"WARC-Concurrent-To\":\"<urn:uuid:485423fb-1923-4e81-aff3-2c2453630d93>\",\"WARC-IP-Address\":\"107.20.139.176\",\"WARC-Target-URI\":\"http://algebra2014.wikidot.com/theorem-8-16\",\"WARC-Payload-Digest\":\"sha1:WLZD2OEIK4RHVZ5BXDIUB5K6H3FKJJDN\",\"WARC-Block-Digest\":\"sha1:FYMMF254J2I6JEUQ2MNBLM32HN2IZEFS\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987773711.75_warc_CC-MAIN-20191021120639-20191021144139-00500.warc.gz\"}"} |
https://math.libretexts.org/Bookshelves/Analysis/Book%3A_Real_Analysis_(Boman_and_Rogers)/05%3A_Convergence_of_the_Taylor_Series-_A_%E2%80%9CTayl%E2%80%9D_of_Three_Remainders/5.01%3A_The_Integral_Form_of_the_Remainder | [
"$$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$\n\n# 5.1: The Integral Form of the Remainder\n\n$$\\newcommand{\\vecs}{\\overset { \\rightharpoonup} {\\mathbf{#1}} }$$ $$\\newcommand{\\vecd}{\\overset{-\\!-\\!\\rightharpoonup}{\\vphantom{a}\\smash {#1}}}$$$$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$\n\n##### Learning Objectives\n• Explain the integral form of the remainder\n\nNow that we have a rigorous definition of the convergence of a sequence, let’s apply this to Taylor series. Recall that the Taylor series of a function $$f(x)$$ expanded about the point $$a$$ is given by\n\n$\\sum_{n=0}^{\\infty }\\frac{f^{(n)}(a)}{n!}(x-a)^n = f(a) + \\frac{f'(a)}{1!}(x-a) + \\frac{f''(a)}{2!}(x-a)^2 + \\cdots$\n\nWhen we say that $$\\sum_{n=0}^{\\infty }\\frac{f^{(n)}(a)}{n!}(x-a)^n$$ for a particular value of $$x$$,what we mean is that the sequence of partial sums\n\n$\\left (\\sum_{j=0}^{n}\\frac{f^{(j)}(a)}{j!}(x-a)^j \\right )_{n=0}^{\\infty } = \\left ( f(a), f(a) + \\frac{f'(a)}{1!}(x-a), f(a) + \\frac{f'(a)}{1!}(x-a) + \\frac{f''(a)}{2!}(x-a)^2 + \\cdots \\right )$\n\nconverges to the number $$f(x)$$. Note that the index in the summation was changed to $$j$$ to allow $$n$$ to represent the index of the sequence of partial sums. As intimidating as this may look, bear in mind that for a fixed real number $$x$$, this is still a sequence of real numbers so, that saying $$f(x) = \\sum_{n=0}^{\\infty }\\frac{f^{(n)}(a)}{n!}(x-a)^n$$ means that $$\\lim_{n \\to \\infty }\\left (\\sum_{j=0}^{n}\\frac{f^{(j)}(a)}{j!}(x-a)^j \\right ) = f(x)$$ and in the previous chapter we developed some tools to examine this phenomenon. In particular, we know that $$\\lim_{n \\to \\infty }\\left (\\sum_{j=0}^{n}\\frac{f^{(j)}(a)}{j!}(x-a)^j \\right ) = f(x)$$ is equivalent to\n\n$\\lim_{n \\to \\infty }\\left [f(x) - \\left (\\sum_{j=0}^{n}\\frac{f^{(j)}(a)}{j!}(x-a)^j \\right ) \\right ] = 0$\n\nWe saw an example of this in the last chapter with the geometric series $$1 + x + x^2 + x^3+\\cdots$$. Problem Q4 of the last chapter basically had you show that this series converges to $$\\frac{1}{1-x}$$, for $$|x| < 1$$ by showing that $$\\lim_{n \\to \\infty }\\left [\\frac{1}{1-x} - \\left (\\sum_{j=0}^{n}x^j \\right ) \\right ] = 0$$.\n\nThere is generally not a readily recognizable closed form for the partial sum for a Taylor series. The geometric series is a special case. Fortunately, for the issue at hand (convergence of a Taylor series), we don’t need to analyze the series itself. What we need to show is that the difference between the function and the $$n^{th}$$ partial sum converges to zero. This difference is called the remainder (of the Taylor series). (Why?)\n\nWhile it is true that the remainder is simply\n\n$f(x) - \\left (\\sum_{j=0}^{n}\\frac{f^{(j)}(a)}{j!}(x-a)^j \\right )$\n\nthis form is not easy to work with. Fortunately, a number of alternate versions of this remainder are available. We will explore these in this chapter. Recall the result from Theorem 3.1.2 from Chapter 3,\n\n$f(x) = f(a) + \\frac{f'(a)}{1!}(x-a) + \\frac{f''(a)}{2!}(x-a)^2 + \\cdots + \\frac{f^{(n)}(a)}{n!}(x-a)^n + \\frac{1}{n!}\\int_{t=a}^{x}f^{(n+1)}(t)(x-t)^n dt$\n\nWe can use this by rewriting it as\n\n$f(x) - \\left (\\sum_{j=0}^{n}\\frac{f^{(j)}(a)}{j!}(x-a)^j \\right ) = \\frac{1}{n!}\\int_{t=a}^{x}f^{(n+1)}(t)(x-t)^n dt \\label{50}$\n\nThe left hand side of Equation \\ref{50} is called the integral form of the remainder for the Taylor series of $$f(x)$$, and the Taylor series will converge to $$f(x)$$ exactly when the sequence $$\\lim_{n \\to \\infty }\\left (\\frac{1}{n!}\\int_{t=a}^{x}f^{(n+1)}(t)(x-t)^n dt \\right )$$ converges to zero. It turns out that this form of the remainder is often easier to handle than the original $$f(x) - \\left (\\sum_{j=0}^{n}\\frac{f^{(j)}(a)}{j!}(x-a)^j \\right )$$ and we can use it to obtain some general results.\n\n##### Theorem $$\\PageIndex{1}$$: Taylor’s Series\n\nIf there exists a real number $$B$$ such that $$|f^{(n+1)}(t)|≤ B$$ for all nonnegative integers $$n$$ and for all $$t$$ on an interval containing $$a$$ and $$x$$, then\n\n$\\lim_{n \\to \\infty }\\left (\\frac{1}{n!}\\int_{t=a}^{x}f^{(n+1)}(t)(x-t)^n dt \\right ) = 0$\n\nand so\n\n$f(x) = \\sum_{n=0}^{\\infty }\\frac{f^{(n)}(a)}{n!}(x-a)^n$\n\nIn order to prove this, it might help to first prove the following Lemma.\n\n##### Lemma $$\\PageIndex{1}$$: Triangle Inequality for Integrals\n\nIf $$f$$ and $$|f|$$ are integrable functions and $$a ≤ b$$, then\n\n$\\left | \\int_{t=a}^{b} f(t)dt\\right | \\leq \\int_{t=a}^{b} \\left |f(t) \\right |dt$\n\n##### Exercise $$\\PageIndex{1}$$\n\nProve Lemma $$\\PageIndex{1}$$.\n\nHint\n\n$$-|f(t)|≤ f(t) ≤|f(t)|$$.\n\n##### Exercise $$\\PageIndex{2}$$\n\nProve Theorem $$\\PageIndex{1}$$.\n\nHint\n\nYou might want to use Problem Q8 of Chapter 4. Also there are two cases to consider: $$a < x$$ and $$x < a$$ (the case $$x = a$$ is trivial). You will find that this is true in general. This is why we will often indicate that $$t$$ is between $$a$$ and $$x$$ as in the theorem. In the case $$x < a$$, notice that \\begin{align*} \\left | \\int_{t=a}^{x}f^{(n+1)}(t)(x-t)^n dt \\right | &= \\left | (-1)^{n+1}\\int_{t=a}^{x}f^{(n+1)}(t)(t-x)^n dt \\right |\\\\ &= \\left | \\int_{t=a}^{x}f^{(n+1)}(t)(t-x)^n dt \\right | \\end{align*}\n\n##### Exercise $$\\PageIndex{3}$$\n\nUse Theorem $$\\PageIndex{1}$$ to prove that for any real number $$x$$\n\n1. $$\\displaystyle \\sin x = \\sum_{n=0}^{\\infty }\\frac{(-1)^n x^{2n+1}}{(2n+1)!}$$\n2. $$\\displaystyle \\cos x = \\sum_{n=0}^{\\infty }\\frac{(-1)^n x^{2n}}{(2n)!}$$\n3. $$\\displaystyle e^x = \\sum_{n=0}^{\\infty }\\frac{x^n}{n!}$$\n\nPart c of exercise $$\\PageIndex{3}$$ shows that the Taylor series of $$e^x$$ expanded at zero converges to $$e^x$$ for any real number $$x$$. Theorem $$\\PageIndex{1}$$ can be used in a similar fashion to show that\n\n$e^x = \\sum_{n=0}^{\\infty }\\frac{e^a(x-a)^n}{n!}$\n\nfor any real numbers $$a$$ and $$x$$.\n\nRecall that in section 2.1 we showed that if we define the function $$E(x)$$ by the power series $$\\sum_{n=0}^{\\infty }\\frac{x^n}{n!}$$ then $$E(x + y) = E(x)E(y)$$. This, of course, is just the familiar addition property of integer coefficients extended to any real number. In Chapter 2 we had to assume that defining $$E(x)$$ as a series was meaningful because we did not address the convergence of the series in that chapter. Now that we know the series converges for any real number we see that the definition\n\n$f(x) = e^x = \\sum_{n=0}^{\\infty }\\frac{x^n}{n!}$\n\nis in fact valid.\n\nAssuming that we can differentiate this series term-by-term it is straightforward to show that $$f'(x) = f(x)$$. Along with Taylor’s formula this can then be used to show that $$e^{a+b} = e^ae^b$$ more elegantly than the rather cumbersome proof in section 2.1, as the following problem shows.\n\n##### Exercise $$\\PageIndex{4}$$\n\nRecall that if $$f(x) = e^x$$ then $$f'(x) = e^x$$. Use this along with the Taylor series expansion of $$e^x$$ about $$a$$ to show that $e^{a+b} = e^ae^b \\nonumber$\n\nTheorem $$\\PageIndex{1}$$ is a nice “first step” toward a rigorous theory of the convergence of Taylor series, but it is not applicable in all cases. For example, consider the function $$f(x) = \\sqrt{1+x}$$. As we saw in Chapter 2, Exercise 2.2.9, this function’s Maclaurin series (the binomial series for $$(1 + x)^{1/2}$$)appears to be converging to the function for $$x ∈ (-1,1)$$. While this is, in fact, true, the above proposition does not apply. If we consider the derivatives of $$f(t) = (1 + t)^{1/2}$$, we obtain:\n\n$f'(t) = \\frac{1}{2}(1+t)^{\\frac{1}{2}-1}$\n\n$f''(t) = \\frac{1}{2}\\left ( \\frac{1}{2}-1 \\right )(1+t)^{\\frac{1}{2}-2}$\n\n$f'''(t) = \\frac{1}{2}\\left ( \\frac{1}{2}-1 \\right )\\left ( \\frac{1}{2}-2 \\right )(1+t)^{\\frac{1}{2}-2}$\n\n$\\vdots$\n\n$f^{n+1}(t) = \\frac{1}{2}\\left ( \\frac{1}{2}-1 \\right )\\left ( \\frac{1}{2}-2 \\right )\\cdots \\left ( \\frac{1}{2}-n \\right )(1+t)^{\\frac{1}{2}-(n+1)}$\n\nNotice that\n\n$\\left |f^{n+1}(0) \\right | = \\frac{1}{2}\\left ( 1 - \\frac{1}{2} \\right )\\left ( 2 - \\frac{1}{2} \\right )\\cdots \\left ( n - \\frac{1}{2} \\right )$\n\nSince this sequence grows without bound as $$n →∞$$, then there is no chance for us to find a number $$B$$ to act as a bound for all of the derviatives of $$f$$ on any interval containing $$0$$ and $$x$$, and so the hypothesis of Theorem $$\\PageIndex{1}$$ will never be satisfied. We need a more delicate argument to prove that\n\n$\\sqrt{1+x} = 1 + \\frac{1}{2}x + \\frac{\\frac{1}{2}\\left ( \\frac{1}{2}-1 \\right )}{2!}x^2 + \\frac{\\frac{1}{2}\\left ( \\frac{1}{2}-1 \\right )\\left ( \\frac{1}{2}-2 \\right )}{3!}x^3 + \\cdots$\n\nis valid for $$x ∈ (-1,1)$$. To accomplish this task, we will need to express the remainder of the Taylor series differently. Fortunately, there are at least two such alternate forms.\n\nThis page titled 5.1: The Integral Form of the Remainder is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Eugene Boman and Robert Rogers (OpenSUNY) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.768199,"math_prob":1.0000058,"size":8092,"snap":"2022-27-2022-33","text_gpt3_token_len":2943,"char_repetition_ratio":0.15838279,"word_repetition_ratio":0.045379538,"special_character_ratio":0.39977756,"punctuation_ratio":0.06852497,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000051,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-15T22:22:26Z\",\"WARC-Record-ID\":\"<urn:uuid:f67f99c3-b009-408d-acae-fba292e6b312>\",\"Content-Length\":\"120399\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6a2dc27b-4181-4a3a-9ff1-769bb91a1adb>\",\"WARC-Concurrent-To\":\"<urn:uuid:b992017c-b814-47e6-a488-0da044c85816>\",\"WARC-IP-Address\":\"13.249.39.24\",\"WARC-Target-URI\":\"https://math.libretexts.org/Bookshelves/Analysis/Book%3A_Real_Analysis_(Boman_and_Rogers)/05%3A_Convergence_of_the_Taylor_Series-_A_%E2%80%9CTayl%E2%80%9D_of_Three_Remainders/5.01%3A_The_Integral_Form_of_the_Remainder\",\"WARC-Payload-Digest\":\"sha1:E5HSXDYSI3OJ3CW6VZWZMHFAFTKJJG6M\",\"WARC-Block-Digest\":\"sha1:JEAWWZZR2FDK75XSNAB7MCC2F5ONGQF5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572212.96_warc_CC-MAIN-20220815205848-20220815235848-00246.warc.gz\"}"} |
https://arizona.pure.elsevier.com/en/publications/remarks-on-the-fourier-coefficients-of-modular-forms | [
"# Remarks on the Fourier coefficients of modular forms\n\nResearch output: Contribution to journalArticle\n\n### Abstract\n\nWe consider a variant of a question of N. Koblitz. For an elliptic curve E/Q which is not Q-isogenous to an elliptic curve with torsion, Koblitz has conjectured that there exists infinitely many primes p such that Np(E)=#E(Fp)=p+1-ap(E) is also a prime. We consider a variant of this question. For a newform f, without CM, of weight k≥4, on Γ 0(M) with trivial Nebentypus χ 0 and with integer Fourier coefficients, let N p(f)=χ 0(p)p k-1+1-a p(f) (here a p(f) is the p-th-Fourier coefficient of f). We show under GRH and Artin's Holomorphy Conjecture that there are infinitely many p such that N p(f) has at most [5k+1+log(k)] distinct prime factors. We give examples of about hundred forms to which our theorem applies. We also show, on GRH, that the number of distinct prime factors of N p(f) is of normal order log(log(p)) and that the distribution of these values is asymptotically a Gaussian distribution (\"Erdo{double acute}s-Kac type theorem\").\n\nOriginal language English (US) 1314-1336 23 Journal of Number Theory 132 6 https://doi.org/10.1016/j.jnt.2011.10.004 Published - Jun 1 2012\n\n### Keywords\n\n• Hecke eigenvalues\n• Koblitz conjecture\n• Modular forms\n• Normal orders\n\n### ASJC Scopus subject areas\n\n• Algebra and Number Theory"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.88579714,"math_prob":0.8301621,"size":1280,"snap":"2020-34-2020-40","text_gpt3_token_len":360,"char_repetition_ratio":0.08934169,"word_repetition_ratio":0.009708738,"special_character_ratio":0.26015624,"punctuation_ratio":0.05928854,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9973166,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-05T16:55:05Z\",\"WARC-Record-ID\":\"<urn:uuid:1eff1bdd-d86d-415b-932d-ac7b9766c727>\",\"Content-Length\":\"41403\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:684ccc0e-e503-4e6b-a005-174ac283c394>\",\"WARC-Concurrent-To\":\"<urn:uuid:e37a46b2-3bdd-4903-9ef9-4c713c2085b6>\",\"WARC-IP-Address\":\"52.72.50.98\",\"WARC-Target-URI\":\"https://arizona.pure.elsevier.com/en/publications/remarks-on-the-fourier-coefficients-of-modular-forms\",\"WARC-Payload-Digest\":\"sha1:7CJGQ3CQRAXJWQC6QPWVWWHGXHF4JKND\",\"WARC-Block-Digest\":\"sha1:IFRYXRDWW745NIKDO7H74IZVAIRP4LHP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439735963.64_warc_CC-MAIN-20200805153603-20200805183603-00349.warc.gz\"}"} |
https://matheducators.stackexchange.com/questions/4165/is-the-reciprocal-function-continuous | [
"# Is the reciprocal function continuous?\n\nI'm curious the views of those who teach calculus.\n\nAs you know the continuity of a function at a point is defined in terms of the limit in the typical course. I'd like to ask a pair of questions:\n\n1. Consider $f(x)=1/x$. Is $f$ continuous ?\n2. Let $\\text{dom}(g) = (-\\infty, 0 ) \\cup (0, \\infty)$ where $g(x)=1/x$ for each $x \\in \\text{dom}(g)$. Is $g$ continuous?\n\nI don't want to say much more initially as I'd rather not influence the answers at the outset. The audience I have in mind is a mixed math majors / science majors calculus course at the university and time is not a large issue.\n\nI think the point of the question is that although there is no mathematical ambiguity, in different circumstances one would give different answers. So:\n\nI. From the perspective of a freshman calculus textbook: well, I know my enemy. The textbook answer to (1) is \"A function is 'continuous' if it is continuous at $c$ for every real number $c$. A function is continuous at $c$ if it is defined at $c$, if $\\lim_{x \\rightarrow c} f(x) = L$ exists and $L = f(c)$. Since $f(x)$ is not defined at $0$, it is not continuous there.\" Yikes. To be fair, most calculus textbooks would classify the discontinuity at $0$ as not being removable.\n\nI feel ever so slightly worried that I've unfairly caricatured this addle-pated view of continuity. I know that freshman calculus textbooks would give the answer \"No, because it is not defined at $0$\" to (1). As written, it sounds like they would give a similar answer to the continuity of, say, $f(x) = \\sqrt{x-1}$, and I think that they may not be as legalistic as I implied above that a function can only be called continuous if it is defined at all real numbers: sometimes being defined on a closed interval is regarded as sufficient. I think question (2) outfoxes the freshman calculus text: even by asking it you show understanding that the continuity of a function depends upon what domain you take. The calculus texts are not clear on this.\n\nII. From the perspective of a mathematician: (1) I would ask, \"Well, what is the domain and codomain of the function? If the function is meant to take real or complex values, then it is not defined at $0$. As a function from $\\mathbb{R} \\setminus \\{0\\}$ to $\\mathbb{R}$ or from $\\mathbb{C} \\setminus \\{0\\}$ to $\\mathbb{C}$, it is continuous (and more...). The function cannot be extended continuously to $0$, e.g. because it is unbounded in every neighborhood of $0$. However, it extends continuously [and more...] to a function on the Riemann sphere: $f(0) = \\infty$.\" (2) Yes, of course $g$ is continuous [and more...].\n\nIII. If a freshman calculus student asked me this in class:\n\n(1) \"There are some tricky issues in continuity for functions which are not defined at isolated points. Our intuitive notion of continuity is that the graph is a nice unbroken curve. [Draws the graph of f] As you can see, in this case the graph is not nice at $0$. Now in fact the function is not even defined at $0$. Whether a function is continuous at a point where it is not defined is a bit legalistic! More pertinently, sometimes functions which are not defined at an isolated point can be defined at the point so as to be continuous there. This happens exactly when the limit as you approach the point exists and is called a 'removable discontinuity'. However, in this case, $\\lim_{x \\rightarrow 0^{-}} f(x) = -\\infty$ and $\\lim_{x \\rightarrow 0^{+}} f(x) = \\infty$ so it's pretty bad: each one-sided limit is infinite, but the signs are different, so we can't even say that the overall limit is $\\pm \\infty$ [even if we could, since $\\pm \\infty$ are not real numbers, the limit would not exist]. The upshot is that there is no \"good\" way to define $f(x) = \\frac{1}{x}$ at $0$.\"\n\n(2) \"Yes, the function is definitely continuous on the given domain, as you can see from the graph. Beware though: the given domain is not an interval but rather a union of two non-overlapping intervals. Be very careful with this: most of the big theorems in calculus concern continuous functions defined on an interval. When the domain is 'more than one interval', some funny things can happen. [Depending on where we are in freshman calculus...] For instance the intermediate value theorem fails here: the function takes negative values and positive values but never takes the value $0$. Also one antiderivative of the function is $\\log |x|$, but not every antiderivative is of the form $\\log |x| + C$! [Maybe say more about this if someone seems interested; it is a tricky point.] It may be more useful to think of the function $g$ as really being 'two different functions', one defined for negative values and one defined for positive values.\"\n\nAdded: The most general antiderivative of $g$ seems like a tricky point indeed, since more than one person has asked about it in the comments. It may be worth explaining the answer. The uniqueness of antiderivatives up to a constant is equivalent to the Zero Velocity Theorem: if $F$ is differentiable and $F' = 0$ (identically) then $F = C$ is constant. This in turn is a consequence of the Mean Value Theorem, and the hypothesis that the domain be an interval is critical: more generally any locally constant function has derivative zero, and if the domain has $N$ connected components, the vector space of locally constant functions has dimension $N$. In this case the domain of $g$ consists of two disjoint intervals, so the most general antiderivative is $G(x) =$ $\\log(x) + C_1$, $x > 0$\n$\\log(-x) + C_2$, $x <0$\nfor any two $C_1,C_2 \\in \\mathbb{R}$.\n\n• +1 especially for how a mathematician would salvage the first question. Aug 9, 2014 at 22:27\n• @metacompactness By the fundamental theorem of calculus if $F(x)$ is the antiderivative of $f(x)$ then any other antiderivative can only differ by a constant. So no, there's no other antiderivatives. Aug 10, 2014 at 9:02\n• @Darksonn Of course. Aug 10, 2014 at 9:04\n• @Darksonn: The Fundamental Theorem of Calculus applies separately on each interval. Thus the most general antiderivative of $g$ is $F(x) = \\log x + C_1$ for $x > 0$ and $\\log (-x) + C_2$ for $x < 0$, where $C_1,C_2$ are two arbitrary constants. (metacompactness and I had some previous exchanges about this which we both deleted.) Aug 10, 2014 at 16:20\n• Every freshman calculus text that I've seen explicitly restricts the definition of a continuous function to considering only points in the domain, such that the answer is \"yes\", $f(x) = 1/x$ is a continuous function. E.g., in Stein/Barcellos Sec. 2.8 that is the very first example. Jul 5, 2017 at 16:36\n\nIn this situation, I'd claim there's the \"prior error\" of seemingly requiring a boolean answer, yes-or-no. Surely we're not so much interested in pranking the students by giving them a function whose domain is not topologically connected, which is continuous on each connected component, but not \"continuous\" in an obvious, intuitive sense \"across the gap\". The situation is clear, because everyone in Calc I knows the graph of this function. What is not clear is the semantics of a formal definition, whose pitfalls will not have caught the students' attention... and maybe don't merit it, besides.\n\nThat is, \"$f(x)=1/x$ has a problem at $x=0$, but otherwise is fine\".\n\n(\"What about $1/x^2$?\")\n\nThat is, I'd propose not over-burdening a boolean \"continuity-or-not\" with this situation in Calc I. It's not so much a mathematical issue as a semantic one, which oughtn't be a primary point, in my opinion.\n\n• I tend to agree with your viewpoint about not trapping students. So, if this was a major point in the course and it was my goal to \"get\" them on it I could see it. But, the reason I discuss this issue (when I do) is to bring attention to the limitations of the definitions. In the same course, at other points, I am quite careful about the $\\epsilon \\delta$ proofs, so, it seems lopsided to just abandon logical clarity for the sake of intuition at this point. I mean, intuitively who needs the $\\epsilon \\delta$ proofs? Of course, the issue can be avoided by focusing on intervals. Aug 12, 2014 at 4:08\n\nI thought it might be interesting to look at a variety of different Calculus textbooks to see how they handle the terminology. Here is an unscientific survey of the half-dozen different Calc textbooks I have on my bookshelf right now:\n\nStewart, Single Variable Calculus, 4e (both Early Transcendentals and Late Transcendentals versions) and 5e:\n\nStewart defines the phrase continuous at a number a in the usual way; the text also defines continuous from the right at a and continuous from the left at a. Armed with these definitions, the text defines continuous on an interval, with one-sided limits used at the endpoints of the interval if appropriate in context.\n\nWith respect to the specific question of the OP, Stewart's Theorem 5 states that \"Any rational function is continuous wherever it is defined; that is, it is continuous on its domain.\"\n\nThe phrase \"a continuous function\" (without qualifiers like \"at a point\" or \"on an interval\") is used in the exposition, but never in the definitions, theorems, proofs, or exercises. The phrase is used solely as a kind of informal shorthand that makes it possible to refer to several different types of continuity at once, for example in statements like \"a sum or product of continuous functions is continuous\". A question like \"Is the reciprocal function continuous?\" would not be found in this text; however, \"Is the reciprocal function continuous on its domain?\" and \"At what points is the reciprocal function discontinuous?\" would be legitimate.\n\nGillett, Introduction to Calculus and Analytic Geometry, 4e\n\nLike Stewart, Gillett defines \"continuous at $x_0$\". This text also defines \"continuous in S\" (where S is a subset of the domain). (Interestingly the definition does not impose any restrictions on what types of subsets may be considered, but every example seems to be a countable union of intervals.) The phrase \"continuous in its domain\" comes next, and Gillett has as Example 6 that \"The reciprocal function is continuous in its domain.\"\n\nRegarding the terminology \"a continuous function\", Gillett has the following remark immediately following Example 6:\n\nWhen a function is continuous in its domain, many writers simply call it continuous, using the global term (unqualified)… This is not unreasonable… but it can be misleading. To say that f(x) = 1/x is continuous (without any qualification) is to risk overlooking the infinite jump at x = 0. It also sounds paradoxical to say \"f(x) is continuous\" in one breath and \"f(x) is discontinuous at x=0\" in the next. That is why we prefer the language \"continuous in its domain.\" It leaves the door open for further remarks.\n\nSo Gillett explicitly rejects the phrase \"a continuous function\" without specifying a point or a subset of its domain, and justifies this using the reciprocal function as its example of why that usage would be poor.\n\nMoise, Calculus: Part I.\n\nThis text (from 1966), like the more modern examples, starts by defining \"continuous at $x_0$\". Unlike Stewart and Gillett, Moise continues by defining \"If $f$ is continuous at every point $x_0$ of its domain, then we say that $f$ is continuous.\" So it uses precisely the terminology that Gillett critiques.\n\nAs an example of how continuous functions can be combined to produce more continuous functions, Moise considers the example of $f(x) = x^2 + 1$ and $h(x) = x^2 - 1$. The text states that $f+h$ and $fh$ are both \"continuous\" (note the absence of any qualifier) and then says that $f/h$ is continuous except at $1$ and $-1$. The example continues, \"Of course, at $x=1$ and $x = -1$ it is not just continuity that breaks down: the quotient function is not even defined at these points, because the denominator of $f/h$ becomes $0$.\" Oddly enough, Moise stops short of saying that $f/h$ is a \"continuous function\", despite the fact that it would be completely in keeping with his own usage conventions to do so. Presumably that is because, as Gillett noted in his remark quoted above, it would seem weird to say that \"$f/h$ is continuous\" and then also say that \"$f/h$ is discontinuous at $1$ and $-1$\".\n\nZill (1993), Calculus (3e)\n\nLike all the others, Zill begins by defining \"continuity at a point\". Zill states that that rational functions are discontinuous at points where they are not defined. The text defines continuity on an interval (with \"continuous from the right/left\" used at end points as appropriate).\n\nWith respect to the usage \"continuous function\", Zill says:\n\nFunctions that are continuous on $(-\\infty, \\infty)$ are said to be \"continuous everywhere\" or simply \"continuous\".\n\nSo Zill reserves the phrase \"continuous function\" for use only with functions whose domain is all of $\\mathbb{R}$. On the other hand (and somewhat in contradiction to this explicit definition), \"continuous function\" is used throughout the exposition (but never in the theorems!) as a shorthand, much in the same way that Stewart does.\n\nSo, to summarize, I'm going to go out on a limb here and, based on the single example of the 1966 textbook, suggest that the use of the phrase \"continuous function\" to mean \"function that is continuous at each point of its domain\" might be an artifact of an earlier generation of textbook authors. More recent texts either avoid giving the phrase any technical meaning at all, using it instead as an informal shorthand, or (in the case of Zill) define it to mean \"continuous on all of $\\mathbb{R}$\".\n\nThis is my experience as a calculus TA for the last 4 years, and I have had no active role in setting the curriculums I'm talking about. The \"continuity of a function\" may be defined and explained thoroughly in an advanced honors rigorous calculus class, but in the trenches that are the first year calculus at most first year universities it does not play a large role. There are essentially 3 options for defining continuity for these students.\n\n1. A function is continuous if its graph can be drawn without picking up the pen. This is the most popular method (in what I've observed) , and it leads to the fallacy you are hinting at. Still, this definition (whether flawed or not) is understandable to calculus students and is of the kind of thinking that mathematicians use.\n\n2. $\\lim_{x\\rightarrow a} f(x)=f(a)$. This definition is in every calculus text and is of course entirely correct. The issue is that this definiton has the giant black box of a \"limit\", which many students simply will not understand unless they are forced to (through HW or lectures). At least in my university the pressure to skip over these more difficult and harder to test aspects of calculus in favor of a larger more impressive-looking curriculum guarantees that they will not ever get experience in things like limits.\n\n3. $\\forall\\epsilon\\ \\exists\\delta$ s.t $|f(x)-f(x_0)|<\\epsilon\\ \\forall|x-x_0|<\\delta$. Again if an effort is made to teach this kind of a definition, it can be taught to first years. There simply isn't time for that in the curriculums I have seen.\n\nThe goal of most calculus curriculums I have been party to is to get the students calculating integrals and derivatives of elementary functions. What is done after that is not necessairly constant, but almost all of these classes are in a sprint to get to this material. Generally what I think the very best students in such a class understand that continuity of a function allows us to apply the fundamental theorem of calculus to that function and perhaps the first definition I've given. There is a huge expanse of material to cover in a couple of semesters and foundations generally get short shrift. Saying that, without some power to change the curriculum, I cannot expect students to try and understand difficult concepts when they will be given tests that completely ignore these concepts.\n\nEDIT: If this seems as an indirect answer it somewhat is. I understand what continuity means if it were said in a research seminar, but I choose to reinforce definition 1 to calculus students then the others as I think it is much more likely to be understood.\n\n• The definitions you give are silent on endpoints. Is $\\sqrt{x}$ continuous at $x=0$? But, I understand, it's hard to cover all the cases when we're mostly teaching algebra which has yet to be mastered. A compromise I've found is to write the two-sided limit definition and mutter in class that we replace with appropriated one-sided limits for points in the domain. The graph-drawing technique is hard to violate if you add the critera \"on a connected domain\" aka interval. Aug 9, 2014 at 14:17\n\nI cover three different concepts of continuity in my high-school calculus class (separate from advanced concepts such as uniform continuity). This vocabulary is based on the AP textbook \"Calculus: Graphical, Numerical, Algebraic\" by Ross L. Finney et al., pages 75 and 77.\n\n1. A function $y=f(x)$ is continuous at an interior point $c$ of its domain if $$\\mathop {\\lim }\\limits_{x \\to c} f(x) = f(c)$$ And similarly for an endpoint of an interval, using a one-sided limit. A function is not continuous at any point not in its domain. Hence your reciprocal function is continuous at every value of $x$ other than $x=0$, where it is discontinuous.\n\n2. A function is continuous on an interval if and only if it is continuous at every point of the interval. Your reciprocal function is continuous on every interval not containing $x=0$.\n\n3. A continuous function is one that is continuous at every point in its domain. Hence, the answer to your first question is: for $f(x)=\\frac1x$, $f$ is a continuous function. And so is your function $g$.\n\nThese definitions are not perfect but they are indeed helpful in my teaching. I can use theorems about continuous functions without being concerned about points not in the domain.\n\n• See, this to be is logical, but it makes some of my colleagues uneasy as it disagrees with precalculus discussions where the pen-drawing definition makes $y=1/x$ a discontinuous graph. I was discussing this with brother and we agreed to disagree. For him, the nineteenth century conflation of the function with its formula and the intuitive appeal is not worth abandoning for the sake of matching the technical defn. of continuity we give in later courses. I'm generally a fan of doing things right the first time, so I like your approach. But, I see @Pete L. Clark shares my brothers view. Aug 9, 2014 at 14:21\n• To be clear, I don't think it is wrong to teach $1/x$ is discontinuous in a class where domains are not emphasized. However, if serious discussion is made of domains of functions then it seems out of place to me to say $1/x$ is discontinuous at $x=0$. Unless, you give a definition of discontinuity which explicitly defines discontinuity for points outside the domain. Aug 9, 2014 at 14:23\n• @JamesS.Cook: In my class I emphasize that the graph of a continuous function can be drawn with a pen kept on the paper through any point on the graph but not necessarily for the entire graph. This has worked best for me as a compromise in the various, somewhat conflicting ideas about continuity. I do see the point of what you say. Aug 9, 2014 at 15:13\n• I remember that in primary school (ages 5 - 10 I think) our maths textbooks emphasised the importance of the domain and codomain of a function. The massive, heavy, expensive College Calculus textbooks unfortunately throw all that progress away. This particular issue (about the continuity of 1/x) has caused much confusion and consternation among students and colleagues during my time as a University teacher - because they are smart, and can smell a rat. @Pete L. Clark's characterisation of those books as \"Legalistic\" is spot-on, if erring on the side of charitable ! Sep 16, 2020 at 23:51\n\nIt might bear pointing out that there is a rather wild kaleidoscope of different definitions for continuous functions seen in basic calculus and analysis texts (which might possibly inform different answers to this question). For example, this issue is highlighted in Steven Krantz's How to Teach Mathematics (Sec. 2.12; cites \"some calculus books\" that conclude the reciprocal function is discontinuous at $$x = 0$$). And a short paper on the issue was written by J.F. Harper:\n\n• Harper, J. F. \"What Really is a Continuous Function?.\" (2007).\n\nOne can say $$x\\mapsto1/x$$ is continuous on $$\\mathbb R\\cup\\{\\infty\\}$$ where this is the $$\\text{“}\\infty\\text{”}$$ that is approached if one goes in either direction (so that the domain is topologically a circle) and the value of this function at $$0$$ is $$\\infty$$ (not $$\\text{“}{+\\infty}\\text{”}$$ nor $$\\text{“}{-\\infty}{”}$$), and the limit of the function as the argument approached $$0$$ is also $$\\infty,$$ and the value of the function at $$\\infty$$ is $$0.$$\n\nThis way of looking at things makes the tangent function continuous on $$\\mathbb R$$ (but not on $$\\mathbb R\\cup\\{\\infty\\}.$$ If you want a discontinuity that fewer people will argue about, look at the discontinuity of the tangent function at $$\\infty.$$ And if it is objected that $$\\infty$$ is not in the domain so the function cannot have a discontinuity there, I will claim that it sometimes makes sense to speak of continuity on the closure of the domain."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.945162,"math_prob":0.9864772,"size":5400,"snap":"2023-40-2023-50","text_gpt3_token_len":1227,"char_repetition_ratio":0.18254262,"word_repetition_ratio":0.002244669,"special_character_ratio":0.22574075,"punctuation_ratio":0.09281437,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99752367,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-04T23:46:50Z\",\"WARC-Record-ID\":\"<urn:uuid:e19cf73c-2822-4d37-b79c-5a84f4cad05f>\",\"Content-Length\":\"251321\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f6d8aa98-9d65-433f-a052-14bebc1dc00f>\",\"WARC-Concurrent-To\":\"<urn:uuid:b694f48d-5b9f-49cb-9138-f13a5f876209>\",\"WARC-IP-Address\":\"104.18.11.86\",\"WARC-Target-URI\":\"https://matheducators.stackexchange.com/questions/4165/is-the-reciprocal-function-continuous\",\"WARC-Payload-Digest\":\"sha1:OBEBONOT4454ZJXOX33UGZTMFXPXODI2\",\"WARC-Block-Digest\":\"sha1:EBW2Q6WTIUCBNXPCLSJZBJFOSGJTTXBG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511424.48_warc_CC-MAIN-20231004220037-20231005010037-00622.warc.gz\"}"} |
https://www.colorhexa.com/784580 | [
"# #784580 Color Information\n\nIn a RGB color space, hex #784580 is composed of 47.1% red, 27.1% green and 50.2% blue. Whereas in a CMYK color space, it is composed of 6.3% cyan, 46.1% magenta, 0% yellow and 49.8% black. It has a hue angle of 291.9 degrees, a saturation of 29.9% and a lightness of 38.6%. #784580 color hex could be obtained by blending #f08aff with #000001. Closest websafe color is: #663399.\n\n• R 47\n• G 27\n• B 50\nRGB color chart\n• C 6\n• M 46\n• Y 0\n• K 50\nCMYK color chart\n\n#784580 color description : Dark moderate magenta.\n\n# #784580 Color Conversion\n\nThe hexadecimal color #784580 has RGB values of R:120, G:69, B:128 and CMYK values of C:0.06, M:0.46, Y:0, K:0.5. Its decimal value is 7882112.\n\nHex triplet RGB Decimal 784580 `#784580` 120, 69, 128 `rgb(120,69,128)` 47.1, 27.1, 50.2 `rgb(47.1%,27.1%,50.2%)` 6, 46, 0, 50 291.9°, 29.9, 38.6 `hsl(291.9,29.9%,38.6%)` 291.9°, 46.1, 50.2 663399 `#663399`\nCIE-LAB 37.496, 32.015, -24.389 13.77, 9.808, 21.589 0.305, 0.217, 9.808 37.496, 40.247, 322.701 37.496, 22.539, -37.608 31.318, 23.674, -18.948 01111000, 01000101, 10000000\n\n# Color Schemes with #784580\n\n• #784580\n``#784580` `rgb(120,69,128)``\n• #4d8045\n``#4d8045` `rgb(77,128,69)``\nComplementary Color\n• #5b4580\n``#5b4580` `rgb(91,69,128)``\n• #784580\n``#784580` `rgb(120,69,128)``\n• #80456b\n``#80456b` `rgb(128,69,107)``\nAnalogous Color\n• #45805b\n``#45805b` `rgb(69,128,91)``\n• #784580\n``#784580` `rgb(120,69,128)``\n• #6b8045\n``#6b8045` `rgb(107,128,69)``\nSplit Complementary Color\n• #458078\n``#458078` `rgb(69,128,120)``\n• #784580\n``#784580` `rgb(120,69,128)``\n• #807845\n``#807845` `rgb(128,120,69)``\n• #454d80\n``#454d80` `rgb(69,77,128)``\n• #784580\n``#784580` `rgb(120,69,128)``\n• #807845\n``#807845` `rgb(128,120,69)``\n• #4d8045\n``#4d8045` `rgb(77,128,69)``\n• #492a4e\n``#492a4e` `rgb(73,42,78)``\n• #59335f\n``#59335f` `rgb(89,51,95)``\n• #683c6f\n``#683c6f` `rgb(104,60,111)``\n• #784580\n``#784580` `rgb(120,69,128)``\n• #884e91\n``#884e91` `rgb(136,78,145)``\n• #9757a1\n``#9757a1` `rgb(151,87,161)``\n• #a365ac\n``#a365ac` `rgb(163,101,172)``\nMonochromatic Color\n\n# Alternatives to #784580\n\nBelow, you can see some colors close to #784580. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #694580\n``#694580` `rgb(105,69,128)``\n• #6e4580\n``#6e4580` `rgb(110,69,128)``\n• #734580\n``#734580` `rgb(115,69,128)``\n• #784580\n``#784580` `rgb(120,69,128)``\n• #7d4580\n``#7d4580` `rgb(125,69,128)``\n• #80457e\n``#80457e` `rgb(128,69,126)``\n• #804579\n``#804579` `rgb(128,69,121)``\nSimilar Colors\n\n# #784580 Preview\n\nThis text has a font color of #784580.\n\n``<span style=\"color:#784580;\">Text here</span>``\n#784580 background color\n\nThis paragraph has a background color of #784580.\n\n``<p style=\"background-color:#784580;\">Content here</p>``\n#784580 border color\n\nThis element has a border color of #784580.\n\n``<div style=\"border:1px solid #784580;\">Content here</div>``\nCSS codes\n``.text {color:#784580;}``\n``.background {background-color:#784580;}``\n``.border {border:1px solid #784580;}``\n\n# Shades and Tints of #784580\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. In this example, #010001 is the darkest color, while #f8f3f8 is the lightest one.\n\n• #010001\n``#010001` `rgb(1,0,1)``\n• #0c070d\n``#0c070d` `rgb(12,7,13)``\n• #180e1a\n``#180e1a` `rgb(24,14,26)``\n• #241527\n``#241527` `rgb(36,21,39)``\n• #301c34\n``#301c34` `rgb(48,28,52)``\n• #3c2340\n``#3c2340` `rgb(60,35,64)``\n• #482a4d\n``#482a4d` `rgb(72,42,77)``\n• #54305a\n``#54305a` `rgb(84,48,90)``\n• #603767\n``#603767` `rgb(96,55,103)``\n• #6c3e73\n``#6c3e73` `rgb(108,62,115)``\n• #784580\n``#784580` `rgb(120,69,128)``\n• #844c8d\n``#844c8d` `rgb(132,76,141)``\n• #905399\n``#905399` `rgb(144,83,153)``\n• #9c5aa6\n``#9c5aa6` `rgb(156,90,166)``\n``#a367ad` `rgb(163,103,173)``\n• #ab73b4\n``#ab73b4` `rgb(171,115,180)``\n• #b380bb\n``#b380bb` `rgb(179,128,187)``\n• #ba8dc1\n``#ba8dc1` `rgb(186,141,193)``\n• #c29ac8\n``#c29ac8` `rgb(194,154,200)``\n• #caa6cf\n``#caa6cf` `rgb(202,166,207)``\n• #d1b3d6\n``#d1b3d6` `rgb(209,179,214)``\n• #d9c0dd\n``#d9c0dd` `rgb(217,192,221)``\n• #e1cde4\n``#e1cde4` `rgb(225,205,228)``\n• #e8d9eb\n``#e8d9eb` `rgb(232,217,235)``\n• #f0e6f2\n``#f0e6f2` `rgb(240,230,242)``\n• #f8f3f8\n``#f8f3f8` `rgb(248,243,248)``\nTint Color Variation\n\n# Tones of #784580\n\nA tone is produced by adding gray to any pure hue. In this case, #675c69 is the less saturated color, while #aa01c4 is the most saturated one.\n\n• #675c69\n``#675c69` `rgb(103,92,105)``\n• #6d5471\n``#6d5471` `rgb(109,84,113)``\n• #724d78\n``#724d78` `rgb(114,77,120)``\n• #784580\n``#784580` `rgb(120,69,128)``\n• #7e3d88\n``#7e3d88` `rgb(126,61,136)``\n• #83368f\n``#83368f` `rgb(131,54,143)``\n• #892e97\n``#892e97` `rgb(137,46,151)``\n• #8e279e\n``#8e279e` `rgb(142,39,158)``\n• #941fa6\n``#941fa6` `rgb(148,31,166)``\n``#9918ad` `rgb(153,24,173)``\n• #9f10b5\n``#9f10b5` `rgb(159,16,181)``\n• #a408bd\n``#a408bd` `rgb(164,8,189)``\n• #aa01c4\n``#aa01c4` `rgb(170,1,196)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #784580 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.52767384,"math_prob":0.60604835,"size":3715,"snap":"2020-10-2020-16","text_gpt3_token_len":1619,"char_repetition_ratio":0.12449475,"word_repetition_ratio":0.011090573,"special_character_ratio":0.57523555,"punctuation_ratio":0.23783186,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9913363,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-09T08:34:58Z\",\"WARC-Record-ID\":\"<urn:uuid:88aa6e25-aa0a-47f8-bb3e-a8533611e0b0>\",\"Content-Length\":\"36312\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:10d8b287-f4ef-44e1-a993-c71cd2ace52c>\",\"WARC-Concurrent-To\":\"<urn:uuid:15be0df8-b2c9-43d8-94ed-0eeeeb89a98e>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/784580\",\"WARC-Payload-Digest\":\"sha1:FDFROKV3MLI32PU4RLNMI56OU4ABDJMR\",\"WARC-Block-Digest\":\"sha1:DJOGVPALSERL2YVNJUGNJDQ2KCZ73V5T\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585371830894.88_warc_CC-MAIN-20200409055849-20200409090349-00084.warc.gz\"}"} |
http://mvphip.org/9-2-practice-measuring-angles-and-arcs-answers/ | [
"# 9-2 Practice Measuring Angles And Arcs Answers\n\nPosted on\n\n9-2 Practice Measuring Angles And Arcs Answers. › 9 2 skills practice answers. This online notice lesson 10 2 angles and arcs answers can be one of the options to accompany you considering having additional time. Aanndsawrcesrsskill practice answers can be. › practice and homework lesson 9.2. Lesson 9.2 measuring angles and arcsdone.notebook february 23, 2016 9.2 arc measure and arc length a central angle of a circle is an angle with a vertex in the center of the. An angle that intersects a circle in two points and has a vertex at the center of the circle. › lesson 9.2 prime factorization answers. The sum of the measures of the central angles of a circle with no interior points in common is answer: This is just one of the. This video is intended for sophomores and juniors in a geometry college prep course. Identify each arc as a major arc, minor arc, or semicircle. 10.2 find arc measures homework 409k: 150 ccss precision and are diameters of. Angles and arcs skill practice. Yeah, reviewing a book measuring angles and arcs skill practice answers could build up your close contacts listings.\n\n9-2 Practice Measuring Angles And Arcs Answers Indeed lately is being hunted by consumers around us, maybe one of you. People now are accustomed to using the net in gadgets to see video and image information for inspiration, and according to the name of the post I will discuss about 9-2 Practice Measuring Angles And Arcs Answers.\n\n• Notes And Practice – Student Copy – Lesson 12 – Geometry … : Two Pages Of Notes Do Now:\n• 7.2 Use The Converse Of The Pythagorean Theorem , Feb 1, 2013, 7:41 Am:\n• Ch 9.Pdf – Lesson 9.1 Skills Practice Name Date Three … – Tangents, Secants, Arcs And Angles.\n• 9-2 Angle Relationships Practice Wkst – Youtube . Lintel Angle Iron Span Tables.\n• Geometry Archives – Ms. Hall – Kenowa Hills High School … . Feb 1, 2013, 7:41 Am:\n• Trig Exam.pdf – Math 1316 Test 2Review Sheet Short Answer … : Try Your Best To Answer The Questions Above.\n• Warm-Up_-_Student_Copy_-_Lesson_6_-_Geometryhonors_-_Crm_1 … : Then Tell Whether It Is A Minor Arc, Major Arc, Or A Semicircle.\n• Kami Export – Angles Homework (5).Pdf – Section 2 Angles … , The Not So Safe Category 27.\n• Chapter 5 Practice Test – Geometry Connections Chapter 5 … , Two Pages Of Notes Do Now:\n• Solved: The Numerical Aperture (Na) Of An Optical Fiber Os … , Students Find Positive And Negative Measure Coterminals With Given Angles.\n\nFind, Read, And Discover 9-2 Practice Measuring Angles And Arcs Answers, Such Us:",
null,
"• 9 Practice Using Visual Cues Use The Triangle Below To … . Shopping The Graph Shows The Results Of A.",
null,
"• 7.2 Use The Converse Of The Pythagorean Theorem . Try Your Best To Answer The Questions Above.",
null,
"• Section 2 Angles (Workbook).Pdf – Section 2 Angles Use The … . §9.2 Angles & Arcs Definitions.",
null,
"• 29 Kuta Software Infinite Geometry All Transformations … : We Welcome Your Feedback, Comments And Questions.",
null,
"• Inscribed Angles In Circles ( Read ) | Geometry | Ck-12 … : Measure Angles Using A Protractor.",
null,
"• Geometry Archives – Ms. Hall – Kenowa Hills High School … – This Is Just One Of The.",
null,
"• Ch 1 P. Test Trig (1).Pdf – Chapter 1 Practice Exam Name … , Students Find Positive And Negative Measure Coterminals With Given Angles.",
null,
"• 有名な 6 1 Practice Angles Of Polygons Answer Key – おそ松 さん … . Therefore, The Red Arc In The Picture Below Is Not Used In This Formula.",
null,
"• Sec. 9.2 . Identify Each Arc As A Major Arc, Minor Arc, Or Semicircle.",
null,
"• Sb_Activity_14.Pdf – Angles And Angle Measure Activity 14 … – Therefore, The Red Arc In The Picture Below Is Not Used In This Formula.\n\n## 9-2 Practice Measuring Angles And Arcs Answers – Triangles On Act Math: Geometry Guide And Practice Problems\n\nNotes and Practice – Student Copy – Lesson 12 – Geometry …. › practice and homework lesson 9.2. Aanndsawrcesrsskill practice answers can be. 10.2 find arc measures homework 409k: This is just one of the. Angles and arcs skill practice. The sum of the measures of the central angles of a circle with no interior points in common is answer: Yeah, reviewing a book measuring angles and arcs skill practice answers could build up your close contacts listings. › lesson 9.2 prime factorization answers. Lesson 9.2 measuring angles and arcsdone.notebook february 23, 2016 9.2 arc measure and arc length a central angle of a circle is an angle with a vertex in the center of the. Identify each arc as a major arc, minor arc, or semicircle. This video is intended for sophomores and juniors in a geometry college prep course. This online notice lesson 10 2 angles and arcs answers can be one of the options to accompany you considering having additional time. An angle that intersects a circle in two points and has a vertex at the center of the circle. › 9 2 skills practice answers. 150 ccss precision and are diameters of.",
null,
"HSG-SRT.8 Right Triangles and the Pythagorean Theorem … from s3.amazonaws.com\n\nThe two angles formed at b are 50° and 30° because they are corresponding angles for parallel lines. The sides opposite those angles are congruent. Angles are measured with something called a protractor. Then tell whether it is a minor arc, major arc, or a semicircle. Use arc addition to find measures of arcs arc addition postulate = mlh + mhi = 58 + 32 = 90 answer: Lily crossing chapter 13 questions. What is the measure of \\$\\$ \\angle x \\$\\$?\n\n## An angle that intersects a circle in two points and has a vertex at the center of the circle.\n\nThere are two commonly used units of measurement for angles. In the exam, write your answers in capital letters on tho separate answer sheet. I used it last year during distance learning for my 4th students to practice measuring angles. The not so safe category 27. Angles and arcs skill practice. This is the currently selected item. Remember how one side of the angle traces out a circular arc? Lintel angle iron span tables. Students find positive and negative measure coterminals with given angles. Name the shaded arc in each picture practice: Arc a minor arc is the shortest arc connecting two endpoints on a circle. Lily crossing chapter 13 questions. It is given that zw bisects angle. 10.2 find arc measures homework 409k: The measure of the arc is equal to the measure of the central angle. This video is intended for sophomores and juniors in a geometry college prep course. To identify and use parts of circles and to solve problems involving the circumference of a circle and to recognize major arcs, minor arcs, semicircles, and central angles to find measures of practice problems (cont) determine the circumference of a circle with a radius of 2.5 inches. Lesson 9.2 measuring angles and arcsdone.notebook february 23, 2016 9.2 arc measure and arc length a central angle of a circle is an angle with a vertex in the center of the. For this angles and angle measure worksheet, learners draw given angles and they rewrite degrees in radians. The two angles formed at b are 50° and 30° because they are corresponding angles for parallel lines. Type your answers into the boxes provided leaving no spaces. New vocabulary central angle arc minor arc major arc semicircle congruent arcs adjacent arcs arcs and arc measure. › practice and homework lesson 9.2. Tangents, secants, arcs and their angles. › lesson 9.2 prime factorization answers. Measure angles using a protractor. This online notice lesson 10 2 angles and arcs answers can be one of the options to accompany you considering having additional time. There is an example at the beginning (0). This is just one of the. We welcome your feedback, comments and questions. › 9 2 skills practice answers.\n\n## 9-2 Practice Measuring Angles And Arcs Answers , New Vocabulary Central Angle Arc Minor Arc Major Arc Semicircle Congruent Arcs Adjacent Arcs Arcs And Arc Measure.",
null,
"## 9-2 Practice Measuring Angles And Arcs Answers , Lintel Angle Iron Span Tables.",
null,
""
] | [
null,
"https://files.spazioweb.it/e2/f6/e2f64f65-23e1-4fd7-b13d-35ac01b094ed.png",
null,
"https://files.spazioweb.it/e2/f6/e2f64f65-23e1-4fd7-b13d-35ac01b094ed.png",
null,
"https://files.spazioweb.it/e2/f6/e2f64f65-23e1-4fd7-b13d-35ac01b094ed.png",
null,
"https://files.spazioweb.it/e2/f6/e2f64f65-23e1-4fd7-b13d-35ac01b094ed.png",
null,
"https://files.spazioweb.it/e2/f6/e2f64f65-23e1-4fd7-b13d-35ac01b094ed.png",
null,
"https://files.spazioweb.it/e2/f6/e2f64f65-23e1-4fd7-b13d-35ac01b094ed.png",
null,
"https://files.spazioweb.it/e2/f6/e2f64f65-23e1-4fd7-b13d-35ac01b094ed.png",
null,
"https://files.spazioweb.it/e2/f6/e2f64f65-23e1-4fd7-b13d-35ac01b094ed.png",
null,
"https://files.spazioweb.it/e2/f6/e2f64f65-23e1-4fd7-b13d-35ac01b094ed.png",
null,
"https://files.spazioweb.it/e2/f6/e2f64f65-23e1-4fd7-b13d-35ac01b094ed.png",
null,
"https://s3.amazonaws.com/atlas-production.goalbookapp.com/resource-607b985b-50e1-4703-6bd9-2c40d4a4bd47/G.SRT.C.8_QA_FullPDF+%28dragged%291.png",
null,
"https://files.spazioweb.it/e2/f6/e2f64f65-23e1-4fd7-b13d-35ac01b094ed.png",
null,
"https://files.spazioweb.it/e2/f6/e2f64f65-23e1-4fd7-b13d-35ac01b094ed.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.82408756,"math_prob":0.7052921,"size":7518,"snap":"2021-04-2021-17","text_gpt3_token_len":1806,"char_repetition_ratio":0.16156508,"word_repetition_ratio":0.35795027,"special_character_ratio":0.24208567,"punctuation_ratio":0.12722299,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98663765,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,1,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-15T16:16:50Z\",\"WARC-Record-ID\":\"<urn:uuid:36257410-cba7-425c-ba2d-fd5292df4ee8>\",\"Content-Length\":\"63169\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2d10daac-5c63-4401-9113-8b9dd6dea482>\",\"WARC-Concurrent-To\":\"<urn:uuid:83f3e43c-2dfe-4664-ba4e-16e94f40a1ed>\",\"WARC-IP-Address\":\"172.67.147.130\",\"WARC-Target-URI\":\"http://mvphip.org/9-2-practice-measuring-angles-and-arcs-answers/\",\"WARC-Payload-Digest\":\"sha1:WPT2PJOL75UXX3Z5I4U4S6ZMNU2RT2MO\",\"WARC-Block-Digest\":\"sha1:TOVYQCHNDMCL6ZI3YGWWGKH7TKKHMTZY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038087714.38_warc_CC-MAIN-20210415160727-20210415190727-00184.warc.gz\"}"} |
https://www.fxsolver.com/browse/?like=1587&p=3 | [
"# Search results\n\nFound 1213 matches\nArea of rhombus (circumscribed)\n\nRhombus is a simple (non-self-intersecting) quadrilateral whose four sides all have the same length. The can be calculated by the semi perimeter and the ... more\n\nEuler's quadrilateral theorem\n\nIn any convex quadrilateral the sum of the squares of the four sides is equal to the sum of the squares of the two diagonals plus four times the square of ... more\n\nArea of a Square\n\nSquare is a regular quadrilateral, which means that it has four equal sides and four equal angles (90-degree angles, or right angles). It can also be ... more\n\nRhombus' side\n\nThe length of the side can computed by the length of the diagonals\n\n... more\n\nPythagorean theorem (arbitrary triangle - acute angle)\n\nGeneralization of the Pythagorean theorem for the side opposite of the acute angle of an arbitrary triangle\n\n... more\n\nSemiperimeter of a triangle\n\nThe semi sum of the length of a triangle’s sides\n\n... more\n\nPerimeter of a rhombus\n\nA rhombus is a simple (non-self-intersecting) quadrilateral all of whose four sides have the same length. A perimeter of a rhombus is a path that surrounds ... more\n\nRelation between the consecutive sides and the diagonals of a trapezoid\n\nTrapezoid is a convex quadrilateral with only one pair of parallel sides. The parallel sides are called the bases of the trapezoid and the other two sides ... more\n\nRelation between inradius,exradii and sides of a right triangle\n\nRight triangle or right-angled triangle is a triangle in which one angle is a right angle (that is, a 90-degree angle). The incircle or inscribed circle of ... more\n\nVarignon's theorem (Varignon parallelogram)\n\nThe Varigons theorem states that :\nThe midpoints of the sides of an arbitrary quadrangle form a parallelogram. If the quadrangle is convex or ... more\n\n...can't find what you're looking for?\n\nCreate a new formula"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8975175,"math_prob":0.9967972,"size":1455,"snap":"2019-35-2019-39","text_gpt3_token_len":336,"char_repetition_ratio":0.17298415,"word_repetition_ratio":0.031620555,"special_character_ratio":0.22130585,"punctuation_ratio":0.147651,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9980228,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-19T05:23:51Z\",\"WARC-Record-ID\":\"<urn:uuid:84188aa1-ac98-4d8b-a6b8-0a3790866308>\",\"Content-Length\":\"119524\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:570cad01-f3c3-4f69-9e87-07f7a5372c46>\",\"WARC-Concurrent-To\":\"<urn:uuid:227b1f23-3fdd-44f4-b410-12b7b792297f>\",\"WARC-IP-Address\":\"178.254.54.75\",\"WARC-Target-URI\":\"https://www.fxsolver.com/browse/?like=1587&p=3\",\"WARC-Payload-Digest\":\"sha1:FOIFDGLM3FZBMPI5QGWD63L3FCMWBUD4\",\"WARC-Block-Digest\":\"sha1:6FEOAU5PIW4VASLZLEBHXO6DZ5T3POBF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027314667.60_warc_CC-MAIN-20190819052133-20190819074133-00447.warc.gz\"}"} |
https://developpaper.com/understand-pythons-iterators-iteratable-objects-and-generators/ | [
"# Understand Python’s iterators, iteratable objects, and generators\n\nTime:2020-1-10\n\nMany partners are a little confused about the concepts of Python iterator, iteratable object and generator. Let me talk about my understanding and hope to help the friends in need.\n\n# 1 iterator protocol\n\nIterator protocol is the core. If you understand this, the above concepts will be well understood.\n\nThe so-called iterator protocol requires an iterator to implement the following two methods\n\n`iterator.__iter__()`\nReturn the iterator object itself.\n\n`iterator.__next__()`\nReturn the next item from the container.\n\nIn other words, as long as an object supports the above two methods, it is an iterator.`__iter__()`You need to return the iterator itself, and`__next__()`The next element needs to be returned.\n\n# 2 iteratable objects\n\nKnowing the concept of iterator, what is an iterative object?\n\nThis is simpler, as long as the object is implemented`__iter__()`Method and returns an iterator, which is an iteratable object.\n\nFor example, our common list is an iterative object\n\n``````>>> l = [1, 3, 5]\n>>> iter(l)``````\n\nUsing iter() will call the corresponding`__iter__()`Method, which returns a list iterator, so a list is an iteratable object.\n\n# 3 handwriting an iterator\n\nThere are different ways to implement iterators. I believe the first thing you can think of is custom classes. Let’s start from this.\n\nFor illustration, we write an iterator to generate an odd sequence.\n\nAccording to the iterator protocol, we implement the above two methods.\n\n``````class Odd:\ndef __init__(self, start=1):\nself.cur = start\n\ndef __iter__(self):\nreturn self\n\ndef __next__(self):\nret_val = self.cur\nself.cur += 2\nreturn ret_val``````\n\nIn the terminal, we instantiate an odd class to get an object odd\n\n``````>>> odd = Odd()\n>>> odd``````\n\nUsing the ITER () method will call the`__iter__`Method, get it\n\n``>>> iter(odd)``\n\nUsing the next () method will call the corresponding`__next__()`Method to get the next element\n\n``````>>> next(odd)\n1\n>>> next(odd)\n3\n>>> next(odd)\n5``````\n\nIn fact, the odd object is an iterator.\n\nWe can traverse it with for\n\n``````odd = Odd()\nfor v in odd:\nprint(v)``````\n\nCareful partners may find that this will be printed infinitely, so how to solve it?\n\nLet’s experiment with a list and get its iterator object first\n\n``````>>> l = [1, 3, 5]\n>>> li = iter(l)\n>>> li``````\n\nThen get the next element manually until there is no next element, and see what happens\n\n``````>>> next(li)\n1\n>>> next(li)\n3\n>>> next(li)\n5\n>>> next(li)\nTraceback (most recent call last):\nFile \"\", line 1, in\nStopIteration``````\n\nThe original list iterator will throw a stopiteration exception when there is no next element. It is estimated that the for statement is based on this exception to determine whether to end.\n\nLet’s modify the original code to generate odd numbers within the specified range\n\n``````class Odd:\ndef __init__(self, start=1, end=10):\nself.cur = start\nself.end = end\n\ndef __iter__(self):\nreturn self\n\ndef __next__(self):\nif self.cur > self.end:\nraise StopIteration\nret_val = self.cur\nself.cur += 2\nreturn ret_val``````\n\nLet’s try it with for\n\n``````>>> odd = Odd(1, 10)\n>>> for v in odd:\n... print(v)\n...\n1\n3\n5\n7\n9``````\n\nAs expected, it is consistent with the expectation.\n\nWe use the while loop to simulate the execution of for\n\nTarget code\n\n``````for v in iterable:\nprint(v)``````\n\nTranslated code\n\n``````iterator = iter(iterable)\nwhile True:\ntry:\nv = next(iterator)\nprint(v)\nexcept StopIteration:\nbreak``````\n\nIn fact, Python’s for statement principle is just like this. You can understand for as a syntax sugar.\n\n# 4 other ways to create iterators\n\nGenerators are also iterators, so you can use the way generators are created to create iterators.\n\n## 4.1 generator functions\n\nUnlike the return of a normal function, the generator function uses yield.\n\n``````>>> def odd_func(start=1, end=10):\n... for val in range(start, end + 1):\n... if val % 2 == 1:\n... yield val\n...\n>>> of = odd_func(1, 5)\n>>> of\n\n>>> iter(of)\n\n>>> next(of)\n1\n>>> next(of)\n3\n>>> next(of)\n5\n>>> next(of)\nTraceback (most recent call last):\nFile \"\", line 1, in\nStopIteration``````\n\n## 4.2 generator expression\n\n``````>>> g = (v for v in range(1, 5 + 1) if v % 2 == 1)\n>>> g\nat 0x101a142b0>\n>>> iter(g)\nat 0x101a142b0>\n>>> next(g)\n1\n>>> next(g)\n3\n>>> next(g)\n5\n>>> next(g)\nTraceback (most recent call last):\nFile \"\", line 1, in\nStopIteration``````\n\n## 4.3 how to choose\n\nSo far, we know three ways to create iterators, so how to choose?\n\nNeedless to say, the simplest is the generator expression. If the expression can meet the needs, it is it; if you need to add more complex logic, you can choose the generator function; if the first two cannot meet the needs, then you can customize the class implementation. In short, choose the simplest way.\n\n# Characteristics of 5 iterators\n\n## 5.1 inertia\n\nIterators don’t compute all elements in advance, but return when they need to.\n\n## 5.2 support for unlimited elements\n\nFor example, the first odd class we created above, whose instance odd is greater than all the odd numbers of start, and the list and other containers can’t hold infinite elements.\n\n## 5.3 provincial space\n\nLike 10000 elements\n\n``````>>> from sys import getsizeof\n>>> a = * 10000\n>>> getsizeof(a)\n80064``````\n\n``````>>> from itertools import repeat\n>>> b = repeat(1, times=10000)\n>>> getsizeof(b)\n56``````\n\nIt only takes 56 bytes.\n\nBecause of the inertia of the iterator, it has this advantage.\n\n# 6.1 iterators are also iterative objects\n\nBecause of the`__iter__()`Method returns itself, which is an iterator, so iterators are also iteratable objects.\n\n# 6.2 iterator can’t start from scratch after traversing once\n\nLook at a strange example\n\n``````>>> l = [1, 3, 5]\n>>> li = iter(l)\n>>> li\n\n>>> 3 in li\nTrue\n>>> 3 in li\nFalse``````\n\nBecause Li is a list iterator, it was found the first time it looked for 3, so it returned true. However, since the first iteration has skipped the 3 element, it cannot be found the second time, so it will appear false.\n\nTherefore, remember that iterators are “disposable.”.\n\nOf course, lists are iterative objects, and it’s normal to look up them several times. (if it’s hard to understand, think about the execution principle of the for statement above. Every time, a new iterator will be obtained from the iteratable object through the ITER () method.)\n\n``````>>> 3 in l\nTrue\n>>> 3 in l\nTrue``````\n\n# 7 stanzas\n\n• All objects that implement the iterator protocol are iterators\n• Realized`__iter__()`Method and return iterators are iteratable objects\n• Generator is also an iterator\n• There are three ways to create an iterator: generator expression, generator function, and custom class. You can choose the simplest one according to the situation\n• Iterators are also iterative objects\n• Iterators are “disposable”\n\nThe first three small items are the key points. These three points have been understood and others can be understood. It is no problem to understand the concepts of the nouns in the title.\n\n# 8 reference\n\n• https://docs.python.org/3/library/stdtypes.html#iterator-types\n• https://opensource.com/article/18/3/loop-better-deeper-look-iteration-python\n• http://treyhunner.com/2018/06/how-to-make-an-iterator-in-python"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7766273,"math_prob":0.89679015,"size":7067,"snap":"2020-24-2020-29","text_gpt3_token_len":1759,"char_repetition_ratio":0.15220161,"word_repetition_ratio":0.04599659,"special_character_ratio":0.27564737,"punctuation_ratio":0.13874911,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9748783,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-05T04:12:47Z\",\"WARC-Record-ID\":\"<urn:uuid:8a08d29e-0f2c-4578-96e3-43928e6c61d1>\",\"Content-Length\":\"50034\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4e805144-b4f5-4aed-991e-35a585ec862d>\",\"WARC-Concurrent-To\":\"<urn:uuid:31bf1837-cdb7-4d10-9fd5-b5d84dec5db6>\",\"WARC-IP-Address\":\"104.28.3.139\",\"WARC-Target-URI\":\"https://developpaper.com/understand-pythons-iterators-iteratable-objects-and-generators/\",\"WARC-Payload-Digest\":\"sha1:C7HMD62LISD2SXFX7OBGVGVBFWK4KM7I\",\"WARC-Block-Digest\":\"sha1:EQTR4BQIX5LQI66U3ONTXWYQ3YDWESL5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655886865.30_warc_CC-MAIN-20200705023910-20200705053910-00584.warc.gz\"}"} |
https://jp.mathworks.com/matlabcentral/cody/problems/12-fibonacci-sequence/solutions/140439 | [
"Cody\n\nProblem 12. Fibonacci sequence\n\nSolution 140439\n\nSubmitted on 19 Sep 2012 by Robert\nThis solution is locked. To view this solution, you need to provide a solution of the same size or smaller.\n\nTest Suite\n\nTest Status Code Input and Output\n1 Pass\n%% n = 1; f = 1; assert(isequal(fib(n),f))\n\n2 Pass\n%% n = 6; f = 8; assert(isequal(fib(n),f))\n\n3 Pass\n%% n = 10; f = 55; assert(isequal(fib(n),f))\n\n4 Pass\n%% n = 20; f = 6765; assert(isequal(fib(n),f))"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.51680964,"math_prob":0.9990038,"size":442,"snap":"2019-43-2019-47","text_gpt3_token_len":173,"char_repetition_ratio":0.16438356,"word_repetition_ratio":0.0,"special_character_ratio":0.4638009,"punctuation_ratio":0.16666667,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99567574,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-24T05:12:45Z\",\"WARC-Record-ID\":\"<urn:uuid:51e3f853-1e51-4971-a13d-528081933376>\",\"Content-Length\":\"73199\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0d6598c7-449a-4abe-8af2-61d9852861a1>\",\"WARC-Concurrent-To\":\"<urn:uuid:adb4c02b-1337-4d16-85e5-b525cfadc2f9>\",\"WARC-IP-Address\":\"104.110.193.39\",\"WARC-Target-URI\":\"https://jp.mathworks.com/matlabcentral/cody/problems/12-fibonacci-sequence/solutions/140439\",\"WARC-Payload-Digest\":\"sha1:M5EMS5VC43LOD5JANHG24GDSQ52FXFZI\",\"WARC-Block-Digest\":\"sha1:NI6WMZO4ZA6NT2ZIYLLFQJC3EP33UCQA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987841291.79_warc_CC-MAIN-20191024040131-20191024063631-00459.warc.gz\"}"} |
https://artofproblemsolving.com/wiki/index.php/2003_AIME_I_Problems/Problem_13 | [
"# 2003 AIME I Problems/Problem 13\n\n## Problem\n\nLet",
null,
"$N$ be the number of positive integers that are less than or equal to",
null,
"$2003$ and whose base-",
null,
"$2$ representation has more",
null,
"$1$'s than",
null,
"$0$'s. Find the remainder when",
null,
"$N$ is divided by",
null,
"$1000$.\n\n## Solution 1\n\nIn base-",
null,
"$2$ representation, all positive numbers have a leftmost digit of",
null,
"$1$. Thus there are",
null,
"${n \\choose k}$ numbers that have",
null,
"$n+1$ digits in base",
null,
"$2$ notation, with",
null,
"$k+1$ of the digits being",
null,
"$1$'s.\n\nIn order for there to be more",
null,
"$1$'s than",
null,
"$0$'s, we must have",
null,
"$k+1 > \\frac{d+1}{2} \\Longrightarrow k > \\frac{d-1}{2} \\Longrightarrow k \\ge \\frac{d}{2}$. Therefore, the number of such numbers corresponds to the sum of all numbers on or to the right of the vertical line of symmetry in Pascal's Triangle, from rows",
null,
"$0$ to",
null,
"$10$ (as",
null,
"$2003 < 2^{11}-1$). Since the sum of the elements of the",
null,
"$r$th row is",
null,
"$2^r$, it follows that the sum of all elements in rows",
null,
"$0$ through",
null,
"$10$ is",
null,
"$2^0 + 2^1 + \\cdots + 2^{10} = 2^{11}-1 = 2047$. The center elements are in the form",
null,
"${2i \\choose i}$, so the sum of these elements is",
null,
"$\\sum_{i=0}^{5} {2i \\choose i} = 1 + 2 +6 + 20 + 70 + 252 = 351$.\n\nThe sum of the elements on or to the right of the line of symmetry is thus",
null,
"$\\frac{2047 + 351}{2} = 1199$. However, we also counted the",
null,
"$44$ numbers from",
null,
"$2004$ to",
null,
"$2^{11}-1 = 2047$. Indeed, all of these numbers have at least",
null,
"$6$",
null,
"$1$'s in their base-",
null,
"$2$ representation, as all of them are greater than",
null,
"$1984 = 11111000000_2$, which has",
null,
"$5$",
null,
"$1$'s. Therefore, our answer is",
null,
"$1199 - 44 = 1155$, and the remainder is",
null,
"$\\boxed{155}$.\n\n## Solution 2\n\nWe seek the number of allowed numbers which have",
null,
"$k$ 1's, not including the leading 1, for",
null,
"$k=0, 1, 2, \\ldots , 10$.\n\nFor",
null,
"$k=0,\\ldots , 4$, this number is",
null,
"$\\binom{k}{k}+\\binom{k+1}{k}+\\cdots+\\binom{2k}{k}$.\n\nBy the Hockey Stick Identity, this is equal to",
null,
"$\\binom{2k+1}{k+1}$. So we get",
null,
"$\\binom{1}{1}+\\binom{3}{2}+\\binom{5}{3}+\\binom{7}{4}+\\binom{9}{5}=175$.\n\nFor",
null,
"$k=5,\\ldots , 10$, we end on",
null,
"$\\binom{10}{k}$ - we don't want to consider numbers with more than 11 digits. So for each",
null,
"$k$ we get",
null,
"$\\binom{k}{k}+\\binom{k+1}{k}+\\ldots+\\binom{10}{k}=\\binom{11}{k+1}$\n\nagain by the Hockey Stick Identity. So we get",
null,
"$\\binom{11}{6}+\\binom{11}{7}+\\binom{11}{8}+\\binom{11}{9}+\\binom{11}{10}+\\binom{11}{11}=\\frac{2^{11}}{2}=2^{10}=1024$.\n\nThe total is",
null,
"$1024+175=1199$. Subtracting out the",
null,
"$44$ numbers between",
null,
"$2003$ and",
null,
"$2048$ gives",
null,
"$1155$. Thus the answer is",
null,
"$155$.\n\n## Solution 3\n\nWe will count the number of it",
null,
"$< 2^{11}=2048$ instead of",
null,
"$2003$ (In other words, the length of the base-2 representation is at most",
null,
"$11$. If there are even digits,",
null,
"$2n$, then the leftmost digit is",
null,
"$1$, the rest,",
null,
"$2n-1$, has odd number of digits. In order for the base-2 representation to have more",
null,
"$1$'s, we will need more",
null,
"$1$ in the remaining",
null,
"$2n-1$ than",
null,
"$0$'s. Using symmetry, this is equal to",
null,
"$\\frac{2^9+2^7+..+2^1}{2}$ Using similar argument where there are odd amount of digits. The remaining even amount of digit must contains the number of",
null,
"$1$'s at least as the number of",
null,
"$0$'s. So it's equal to",
null,
"$\\frac{\\binom{10}{5}+2^{10}+\\binom{8}{4}+2^8+\\binom{6}{3}+2^6+...+\\binom{0}{0}+2^0}{2}$ Summing both cases, we have",
null,
"$\\frac{2^0+2^1+..+2^{10}+\\binom{0}{0}+..+\\binom{10}{5}}{2} = 1199$. There are",
null,
"$44$ numbers between",
null,
"$2004$ and",
null,
"$2047$ inclusive that satisfy it. So the answer is",
null,
"$1199-44=\\boxed{155}$\n\nThe problems on this page are copyrighted by the Mathematical Association of America's American Mathematics Competitions.",
null,
""
] | [
null,
"https://latex.artofproblemsolving.com/f/c/9/fc97ef67268cd4e91bacdf12b8901d7036c9a056.png ",
null,
"https://latex.artofproblemsolving.com/b/9/1/b910f4dafca80b54c6e7bedf63b6485eebff0764.png ",
null,
"https://latex.artofproblemsolving.com/4/1/c/41c544263a265ff15498ee45f7392c5f86c6d151.png ",
null,
"https://latex.artofproblemsolving.com/d/c/e/dce34f4dfb2406144304ad0d6106c5382ddd1446.png ",
null,
"https://latex.artofproblemsolving.com/b/c/1/bc1f9d9bf8a1b606a4188b5ce9a2af1809e27a89.png ",
null,
"https://latex.artofproblemsolving.com/f/c/9/fc97ef67268cd4e91bacdf12b8901d7036c9a056.png ",
null,
"https://latex.artofproblemsolving.com/6/e/e/6ee927e1332358c96c62c277441c907c4f51057f.png ",
null,
"https://latex.artofproblemsolving.com/4/1/c/41c544263a265ff15498ee45f7392c5f86c6d151.png ",
null,
"https://latex.artofproblemsolving.com/d/c/e/dce34f4dfb2406144304ad0d6106c5382ddd1446.png ",
null,
"https://latex.artofproblemsolving.com/4/4/d/44d91e7f8ccfc801e16fd929ae7af0811898b738.png ",
null,
"https://latex.artofproblemsolving.com/0/e/3/0e3efd9b14723a92c2ae891fe27780d5f8e2b215.png ",
null,
"https://latex.artofproblemsolving.com/4/1/c/41c544263a265ff15498ee45f7392c5f86c6d151.png ",
null,
"https://latex.artofproblemsolving.com/f/0/5/f05f085f774fc8ab50676b778b86a1f1d1114fc8.png ",
null,
"https://latex.artofproblemsolving.com/d/c/e/dce34f4dfb2406144304ad0d6106c5382ddd1446.png ",
null,
"https://latex.artofproblemsolving.com/d/c/e/dce34f4dfb2406144304ad0d6106c5382ddd1446.png ",
null,
"https://latex.artofproblemsolving.com/b/c/1/bc1f9d9bf8a1b606a4188b5ce9a2af1809e27a89.png ",
null,
"https://latex.artofproblemsolving.com/9/f/4/9f4584e628041cccb22d1e8d3e4be221849afeea.png ",
null,
"https://latex.artofproblemsolving.com/b/c/1/bc1f9d9bf8a1b606a4188b5ce9a2af1809e27a89.png ",
null,
"https://latex.artofproblemsolving.com/f/c/6/fc606f7f1e530731ab4f1cc364c01dc64a4455ee.png ",
null,
"https://latex.artofproblemsolving.com/c/5/4/c54bc62adccf5855a75ef957b448d0e8aa3177d9.png ",
null,
"https://latex.artofproblemsolving.com/b/5/5/b55ca7a0aa88ab7d58f4fc035317fdac39b17861.png ",
null,
"https://latex.artofproblemsolving.com/a/2/d/a2d230fae6558a1a3ada3c4452aeb2561e9a0e71.png ",
null,
"https://latex.artofproblemsolving.com/b/c/1/bc1f9d9bf8a1b606a4188b5ce9a2af1809e27a89.png ",
null,
"https://latex.artofproblemsolving.com/f/c/6/fc606f7f1e530731ab4f1cc364c01dc64a4455ee.png ",
null,
"https://latex.artofproblemsolving.com/0/9/e/09ef2001322bbb3335a2b95362035b4cb855ec7c.png ",
null,
"https://latex.artofproblemsolving.com/b/1/d/b1d528cb805b953a8cf895efea13b9dcedc132cc.png ",
null,
"https://latex.artofproblemsolving.com/a/3/c/a3c0952ff11bc396bb60b5284244957d28bdadf6.png ",
null,
"https://latex.artofproblemsolving.com/0/6/3/06381c93e7032eb97a5a4cef52c26b4830ce5d04.png ",
null,
"https://latex.artofproblemsolving.com/5/b/4/5b45828f292e161290be6ee62bc4c9df081ee1e1.png ",
null,
"https://latex.artofproblemsolving.com/9/0/a/90ae760d1ea10eb7a90d22087b3c7e2393d02117.png ",
null,
"https://latex.artofproblemsolving.com/4/2/2/42241e8a2f1aee13bc5d363eb1c89938c082e89e.png ",
null,
"https://latex.artofproblemsolving.com/6/0/1/601a7806cbfad68196c43a4665871f8c3186802e.png ",
null,
"https://latex.artofproblemsolving.com/d/c/e/dce34f4dfb2406144304ad0d6106c5382ddd1446.png ",
null,
"https://latex.artofproblemsolving.com/4/1/c/41c544263a265ff15498ee45f7392c5f86c6d151.png ",
null,
"https://latex.artofproblemsolving.com/1/e/7/1e74c90ce8e12595828f065e98ffbe75ff7e5a05.png ",
null,
"https://latex.artofproblemsolving.com/7/9/0/79069377f91364c2f87a64e5f9f562a091c8a6c1.png ",
null,
"https://latex.artofproblemsolving.com/d/c/e/dce34f4dfb2406144304ad0d6106c5382ddd1446.png ",
null,
"https://latex.artofproblemsolving.com/8/0/1/801e3af109174c91e77dd805df96496dd92ebeb3.png ",
null,
"https://latex.artofproblemsolving.com/c/7/f/c7f8bfe45411faee0a56234d13f58522bad5a40c.png ",
null,
"https://latex.artofproblemsolving.com/8/c/3/8c325612684d41304b9751c175df7bcc0f61f64f.png ",
null,
"https://latex.artofproblemsolving.com/9/6/a/96a735b5b58ff6b92524844fc66f0653c202df63.png ",
null,
"https://latex.artofproblemsolving.com/e/4/9/e498a3afc834ddaad84b0f5dfc40f9e5bed2c726.png ",
null,
"https://latex.artofproblemsolving.com/f/8/1/f8132c2f546b3dc7ea5b120d492e1c23515269b1.png ",
null,
"https://latex.artofproblemsolving.com/3/1/9/319c308238c03d24de9360eab2738b7bd60f6c74.png ",
null,
"https://latex.artofproblemsolving.com/6/9/8/698af33d8af3d9748a90bafd1ff32929edcd9d82.png ",
null,
"https://latex.artofproblemsolving.com/5/e/f/5efcc1e0ce6c0c5826b0c2baf277f47be502c888.png ",
null,
"https://latex.artofproblemsolving.com/8/3/4/83486625d80d357893d890c0c813e9b2ad449506.png ",
null,
"https://latex.artofproblemsolving.com/8/c/3/8c325612684d41304b9751c175df7bcc0f61f64f.png ",
null,
"https://latex.artofproblemsolving.com/0/3/9/0392628d6feab6a7b4e78095bb0f8eb937131e9b.png ",
null,
"https://latex.artofproblemsolving.com/8/a/6/8a6695f4ec75641ac6c1926ec5c738ac5728b9fe.png ",
null,
"https://latex.artofproblemsolving.com/8/3/b/83b1d28f1c5145aed8d16497e86f0aeefbe28e0b.png ",
null,
"https://latex.artofproblemsolving.com/5/b/4/5b45828f292e161290be6ee62bc4c9df081ee1e1.png ",
null,
"https://latex.artofproblemsolving.com/b/9/1/b910f4dafca80b54c6e7bedf63b6485eebff0764.png ",
null,
"https://latex.artofproblemsolving.com/a/d/e/ade582b767f93cb8ade6938238a0d0e79ec5c62f.png ",
null,
"https://latex.artofproblemsolving.com/4/8/a/48a931729e108dfcfc4d6642c9a8dd0269e8f616.png ",
null,
"https://latex.artofproblemsolving.com/2/7/e/27e46896b395361121e899e0ff08341c136bb443.png ",
null,
"https://latex.artofproblemsolving.com/0/4/e/04eee21ba3d50cb71e3711e852b1bd762ae28e34.png ",
null,
"https://latex.artofproblemsolving.com/b/9/1/b910f4dafca80b54c6e7bedf63b6485eebff0764.png ",
null,
"https://latex.artofproblemsolving.com/c/6/8/c6878713578626763c38433b3f4c8c2205ad0c15.png ",
null,
"https://latex.artofproblemsolving.com/f/9/3/f93f5c51ea4b04b4992b003b39479f29018f6bda.png ",
null,
"https://latex.artofproblemsolving.com/d/c/e/dce34f4dfb2406144304ad0d6106c5382ddd1446.png ",
null,
"https://latex.artofproblemsolving.com/2/f/9/2f9a96c63183be91b9fe70e6e16dfa4bfec6f3da.png ",
null,
"https://latex.artofproblemsolving.com/d/c/e/dce34f4dfb2406144304ad0d6106c5382ddd1446.png ",
null,
"https://latex.artofproblemsolving.com/d/c/e/dce34f4dfb2406144304ad0d6106c5382ddd1446.png ",
null,
"https://latex.artofproblemsolving.com/2/f/9/2f9a96c63183be91b9fe70e6e16dfa4bfec6f3da.png ",
null,
"https://latex.artofproblemsolving.com/b/c/1/bc1f9d9bf8a1b606a4188b5ce9a2af1809e27a89.png ",
null,
"https://latex.artofproblemsolving.com/4/0/c/40cad808eccc3a789cec21d0f99e6ff9c97178c6.png ",
null,
"https://latex.artofproblemsolving.com/d/c/e/dce34f4dfb2406144304ad0d6106c5382ddd1446.png ",
null,
"https://latex.artofproblemsolving.com/b/c/1/bc1f9d9bf8a1b606a4188b5ce9a2af1809e27a89.png ",
null,
"https://latex.artofproblemsolving.com/2/f/b/2fbbd7657c5b6ab9f1b3cd74e181df9b3e87e049.png ",
null,
"https://latex.artofproblemsolving.com/9/0/4/9041344ec5089fb4fd08c772496eff4c772f8492.png ",
null,
"https://latex.artofproblemsolving.com/5/b/4/5b45828f292e161290be6ee62bc4c9df081ee1e1.png ",
null,
"https://latex.artofproblemsolving.com/9/0/a/90ae760d1ea10eb7a90d22087b3c7e2393d02117.png ",
null,
"https://latex.artofproblemsolving.com/0/3/f/03fe4350c9a712f76115cd345150d3cc8274c117.png ",
null,
"https://latex.artofproblemsolving.com/3/8/7/387f267935c41dbb29130b721d1e1cd24b7201a8.png ",
null,
"https://wiki-images.artofproblemsolving.com//8/8b/AMC_logo.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9283068,"math_prob":1.0000031,"size":2448,"snap":"2021-21-2021-25","text_gpt3_token_len":606,"char_repetition_ratio":0.13993454,"word_repetition_ratio":0.012269938,"special_character_ratio":0.2622549,"punctuation_ratio":0.11264822,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000033,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,5,null,null,null,null,null,9,null,null,null,null,null,null,null,null,null,9,null,null,null,9,null,9,null,null,null,null,null,null,null,null,null,null,null,null,null,9,null,null,null,null,null,9,null,9,null,null,null,9,null,9,null,9,null,9,null,9,null,9,null,10,null,null,null,9,null,9,null,9,null,null,null,null,null,null,null,9,null,null,null,8,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,6,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-16T12:53:54Z\",\"WARC-Record-ID\":\"<urn:uuid:ba39ab3e-1833-49f2-83cf-cdd2ce2345aa>\",\"Content-Length\":\"52430\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:df0606f4-e56c-4e66-826d-d372a36f2c3f>\",\"WARC-Concurrent-To\":\"<urn:uuid:75362721-8859-4439-9d50-1f8f4e8e9d65>\",\"WARC-IP-Address\":\"104.26.11.229\",\"WARC-Target-URI\":\"https://artofproblemsolving.com/wiki/index.php/2003_AIME_I_Problems/Problem_13\",\"WARC-Payload-Digest\":\"sha1:4B2IPEYT62KJ6YDLTRIEE6YQI42PTX5E\",\"WARC-Block-Digest\":\"sha1:TQDP3H4FM4JFWCENCXSTE3HTAXVM6DRT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991269.57_warc_CC-MAIN-20210516105746-20210516135746-00489.warc.gz\"}"} |
https://quuxplusone.github.io/blog/2018/08/05/mathjax-in-jekyll/ | [
"# MathJax in Jekyll\n\nI spent most of today figuring out how to write the mathematics in my previous post on quantum circuits. It turned out to be way easier than I was making it.\n\n(This post assumes that you are already familiar enough with TeX, or should I say, $\\rm\\TeX$. I have found the TeX StackExchange to be super useful.)\n\nDason Kurkiewicz’s blog post from October 2012 is still surprisingly accurate. Step one is to follow the most basic possible instructions from MathJax’s own Getting Started guide: place the snippet\n\n<script type=\"text/javascript\" async\nsrc=\"https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/MathJax.js?config=TeX-MML-AM_CHTML\">\n</script>\n\n\nsomewhere that Jekyll will pick it up. (You don’t even need the suggested config=TeX-MML-AM_CHTML parameter; we’re going to specify our own config.)\n\nStep two is to specify the config, right above (not below!) the script tag that fetches MathJax.js.\n\n<script type=\"text/x-mathjax-config\">\nMathJax.Hub.Config({\nextensions: [\n\"MathZoom.js\",\n\"AssistiveMML.js\",\n],\njax: [\"input/TeX\", \"output/CommonHTML\"],\nTeX: {\nextensions: [\n\"AMSmath.js\",\n\"AMSsymbols.js\",\n\"noErrors.js\",\n\"noUndefined.js\",\n]\n}\n});\n</script>\n\n\nStep three is to realize that the Markdown processor used by Jekyll — its name is “Kramdown” — has a built-in feature called “math blocks” which can be hooked up to MathJax! This means that the rest of the integration has already been done for you. Having followed steps 1 and 2 above, you can now just write TeX code with double-dollar-sign escapes:\n\n... a given wire happens to be carrying \"$$\\lvert 0\\rangle$$.\"\nBy that we mean that it's carrying the linear combination\n$$\\begin{psmallmatrix} 1 \\\\ 0 \\end{psmallmatrix}$$ ...\n\n\nThis renders as:\n\n… a given wire happens to be carrying “$\\lvert 0\\rangle$.” By that we mean that it’s carrying the linear combination $\\begin{psmallmatrix} 1 \\\\ 0 \\end{psmallmatrix}$\n\nAnd if you make a paragraph that contains nothing but a $$-escaped chunk of math, it will be rendered using MathJax’s mode=display, i.e., TeX display mode. To see how to enable MathJax for your own Jekyll blog, click through to the relevant commit in this blog’s GitHub repository. The rabbit-hole that I went down by accident was that I didn’t realize until very late that Kramdown already supported “math blocks.” So I spent some time trying to use MathJax’s “tex2jax.js” preprocessor — which is very easy to add to the config block above, by the way. //... extensions: [ \"tex2jax.js\", // HERE \"MathMenu.js\", \"MathZoom.js\", \"AssistiveMML.js\", \"a11y/accessibility-menu.js\" ], tex2jax: { // AND HERE inlineMath: [['', '']], displayMath: [['$$', '$$']] }, jax: [\"input/TeX\", \"output/CommonHTML\"], //... But “tex2jax.js” runs on the text of the page after Kramdown gets done with it; which is to say, after Kramdown has already processed out all of the $$s and replaced them with <script type=\"math/tex\"> tags. What’s more, Kramdown overloads $$ to mean both “inline” and “display,” depending on the surrounding linebreaks. So the upshot of that interaction was that I kept writing $$ (with no surrounding linebreaks) expecting display math, and what I got on the rendered page was inline math. It took me forever to figure out that this was due to Kramdown, and not a bug either in my config or in “tex2jax.js”!\n\nThe fix for this issue, of course, was to find out that Kramdown math blocks were a better solution than “tex2jax.js”.\n\nPosted 2018-08-05"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.89041454,"math_prob":0.648711,"size":3385,"snap":"2019-13-2019-22","text_gpt3_token_len":883,"char_repetition_ratio":0.09730849,"word_repetition_ratio":0.019193858,"special_character_ratio":0.2579025,"punctuation_ratio":0.16739447,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9690793,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-23T00:48:53Z\",\"WARC-Record-ID\":\"<urn:uuid:eae956c1-c997-4adc-bd68-50c0d5b99401>\",\"Content-Length\":\"17643\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:03c807e3-42cf-43da-867b-57795dd29af9>\",\"WARC-Concurrent-To\":\"<urn:uuid:99e4d8f6-fc9f-4534-a02f-31152a0894c1>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"https://quuxplusone.github.io/blog/2018/08/05/mathjax-in-jekyll/\",\"WARC-Payload-Digest\":\"sha1:NG5LQIRPDMOCERZMKIFAT6VZTP5MLJCZ\",\"WARC-Block-Digest\":\"sha1:UZLJO5H24RQ4SFU3BIQGMOMVXWZY5GCE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202704.58_warc_CC-MAIN-20190323000443-20190323022443-00361.warc.gz\"}"} |
https://affairscloud.com/basic-concepts-of-vedic-maths/?amp | [
"",
null,
"# Basic Concepts of Vedic Maths\n\nIt is very important to learn and understand the two basic concepts to make any calculation easy. These two concepts are the pillars of Vedic Maths. They are\n\n1) Base\n\n2) Complements\n\nWhat is a Base?\n\nBases are the numbers which start with 1 and ends with 0’s. For example 10, 100, 1000, 10000, 100000, and so on are called bases. The first 2-digit number is 10 and the first 3-digit number is 100 and so on. Only these kind of numbers are considered as bases. Numbers such as 200, 300, 450, 1500 are not considered as bases.\n\nWhat is a Complement?\n\nWhen two numbers are added with each other and it results in the next nearest base, then they are called Complements of each other.\n\nFor Example,\n\n1) Consider the number 73. The next nearest base of 73 is 100. By adding 27 we will get 100. So 73 and 27 are complements to each other.\n\n2) Consider the number 156. The next nearest base of 156 is 1000. By adding 844 we get 1000. So they are Complements.\n\n3) Similarly,\n\nComplement of 6 is 4 (Base is 10).\n\nComplement of 25 is 75 (Base is 100).\n\nComplement of 6545 is 3455 (Base is 10000).\n\nHow to find the Complement of a number in a easier way?\n\nComplement of a number can be calculated by subtracting all numbers from 9 and the last number by 10.\n\neg: Complement of 358732?\n\nSolution: 9 9 9 9 9 10 3 5 8 7 3 2 = 6 4 1 3 6 8\n\nThe Formula is “All from 9 and the last from 10“.\n\nComplement of 5183?\n\nSolution :\n\n9 9 9 10 – 5 1 8 3 = 4 8 1 7\n\nWhat if the last digit is “Zero”??\n\nIn that case treat the last non-zero number as the last digit and use the same above formula.\n\nEg: Complement of 69560670??\n\nSolution: 9 9 9 9 9 9 10\n\n– 6 9 5 6 0 6 7 0\n\n3 0 4 3 9 3 3 0\n\nExercises:\n\nFind the Complement of\n\n1) 234 2) 3457 3) 12967 4) 2463751 5) 10920"
] | [
null,
"data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjI4MiIgd2lkdGg9IjMzNiIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.877625,"math_prob":0.99638,"size":1767,"snap":"2021-43-2021-49","text_gpt3_token_len":609,"char_repetition_ratio":0.17073171,"word_repetition_ratio":0.038043477,"special_character_ratio":0.39388794,"punctuation_ratio":0.11694511,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9981025,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-30T16:01:07Z\",\"WARC-Record-ID\":\"<urn:uuid:6656ca8d-1dd8-4aad-9b41-7d5948ba4f28>\",\"Content-Length\":\"102651\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c7cd2cc2-52a2-412e-8e52-44fbbf949c7f>\",\"WARC-Concurrent-To\":\"<urn:uuid:9c4a079d-337a-489c-bb7e-16582ef7e126>\",\"WARC-IP-Address\":\"104.26.5.172\",\"WARC-Target-URI\":\"https://affairscloud.com/basic-concepts-of-vedic-maths/?amp\",\"WARC-Payload-Digest\":\"sha1:WDEYF754ATZK3IWBLIHRYRHNZIBLQRGA\",\"WARC-Block-Digest\":\"sha1:VP2LSDHM4A52FDEEFAXJFHVSF3M37LAZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964359037.96_warc_CC-MAIN-20211130141247-20211130171247-00187.warc.gz\"}"} |
https://www.shaalaa.com/question-bank-solutions/convert-0010-0100-0010-1101-base-2-decimal-convert-134-base-10-hexadecimal-write-steps-conversion-turing-model_58121 | [
"# Convert 0010 0100 0010 1101 from Base 2 to Decimal. Convert 134 from Base 10 to Hexadecimal. Write Steps for Conversion. - Structured Programming Approach\n\nConvert 0010 0100 0010 1101 from base 2 to decimal. Convert 134 from base 10 to hexadecimal. Write steps for conversion.\n\n#### Solution\n\nBinary to decimal: The steps to be followed are:\n\n1. Multiply the binary digits (bits) with powers of 2 according to their positionalweight.\n2. The position of the first bit (going from right to left) is 0 and it keeps on incrementing as you go towards left for every bit.\n3.Given: (0010 0100 0010 1101)2 = (?)10\n4.0*215+0*214+1*213+0*212+0*211+1*210+0*29+0*28+0*27+0*26+1*25+0*24+1*23+1*22+0*21+1*20\n=0+0+8192+0+0+1024+0+0+0+0+32+0+8+4+1\n=9261\n(0010 0100 0010 1101)2 = (9261)10\n\nDecimal to hexadecimal: The steps to be followed are:\n1.Divide the decimal number by 16. Treat the division as an integer division.\n2.Write down the remainder (in hexadecimal).\n3.Divide the result again by 16. Treat the division as an integer division.\n4.Repeat step 2 and 3 until result is 0.\n5.The hexadecimal value is digit sequence of the reminders from last to first. 6.Given: (134)10 = (?)16",
null,
"(134)10 = (86)16\n\nConcept: Turing Model\nIs there an error in this question or solution?\n2016-2017 (June) CBCGS\n\nShare"
] | [
null,
"https://www.shaalaa.com/images/_4:be792f4a939645d9af57e02bce80180d.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.685843,"math_prob":0.9785089,"size":1022,"snap":"2023-40-2023-50","text_gpt3_token_len":355,"char_repetition_ratio":0.11689588,"word_repetition_ratio":0.103896104,"special_character_ratio":0.4295499,"punctuation_ratio":0.12345679,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.998466,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-09T08:55:41Z\",\"WARC-Record-ID\":\"<urn:uuid:14f9c961-edf4-4fcc-8e80-4dabce41477b>\",\"Content-Length\":\"69996\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2e33af4c-2df7-481b-b28c-d36ab5c841e6>\",\"WARC-Concurrent-To\":\"<urn:uuid:e97462bd-2e29-4ba7-ada0-4e3255f95ba9>\",\"WARC-IP-Address\":\"104.26.12.31\",\"WARC-Target-URI\":\"https://www.shaalaa.com/question-bank-solutions/convert-0010-0100-0010-1101-base-2-decimal-convert-134-base-10-hexadecimal-write-steps-conversion-turing-model_58121\",\"WARC-Payload-Digest\":\"sha1:BRXQAC54SFKIV4IPDN2AZTVPP42PJN7Y\",\"WARC-Block-Digest\":\"sha1:QP65I47HPH5FKYGFQUUYAE42RTEZO3DI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100873.6_warc_CC-MAIN-20231209071722-20231209101722-00447.warc.gz\"}"} |
https://quantumcomputing.stackexchange.com/questions/9287/can-you-use-rz-to-flip-from-rangle-to-rangle | [
"# Can you use Rz to flip from $|+\\rangle$ to $|-\\rangle$?\n\nHere's the Rz matrix:\n\n$$Rz(\\theta) = \\begin{bmatrix} e^{-i\\theta/2} & 0 \\\\ 0 & e^{i\\theta/2} \\end{bmatrix}$$\n\nAs I understand it, Rz rotates around the Z axis on the Bloch sphere. Since $$|+\\rangle$$ and $$|-\\rangle$$ are both on the Bloch sphere X-Y plane, it seems you should be able to rotate between them; however, I can't figure out a value of $$\\theta$$ to do that.\n\nAm I misunderstanding how the rotation gates work, or is there just a solution I'm not seeing?\n\nThe more elegant approach is to view $$R_z(\\phi)$$ as a linear transformation acting on the basis $$\\{|0\\rangle, |1\\rangle\\}$$. $$|0\\rangle$$ is mapped to $$e^{i\\phi/2}|0\\rangle$$ and $$|1\\rangle$$ is mapped to $$e^{-i\\phi/2}|1\\rangle$$ by the transformation. As, $$|+\\rangle = \\frac{1}{\\sqrt 2}(|0\\rangle +|1\\rangle)$$ it'd be mapped to $$R_z(\\phi)|+\\rangle = \\frac{1}{\\sqrt 2}(e^{i\\phi/2}|0\\rangle + e^{-i\\phi/2}|1\\rangle)$$. Now the question is, for what value of $$\\phi$$ (if at all), $$R_z|+\\rangle$$ coincides with $$|-\\rangle = \\frac{1}{\\sqrt 2}(|0\\rangle - |1\\rangle)$$. As global phase factors are irrelevant in quantum mechanics, you just need to check for what value of $$\\phi$$ (if at all) the coefficients of the basis vectors are in proportion i.e.,\n\n$$\\frac{e^{i\\phi/2}}{e^{-i\\phi/2}} = \\frac{1}{-1} \\implies e^{i\\phi} = -1 \\implies e^{i\\phi} = e^{i\\pi} \\implies \\phi = \\pi$$\n\nassuming $$\\phi \\in [0, 2\\pi)$$.\n\nObviously, the Bloch sphere visualization (as in @kludg's answer) makes it even more evident. You can clearly see that a anti-clockwise rotation of $$\\phi = \\pi$$ (180 degrees) about the $$z$$-axis ($$|0\\rangle$$ - $$|1\\rangle$$ axis) takes the $$|+\\rangle$$ state to the $$|-\\rangle$$ state.\n\nP.S: I used $$R_z(\\phi)$$ instead of $$R_z(\\theta)$$ in order to match the convention in the above diagram.\n\nIf you use $$\\theta = \\pi$$, you get the following:\n\n$$Rz(\\pi)\\begin{bmatrix} \\frac{1}{\\sqrt{2}} \\\\ \\frac{1}{\\sqrt{2}} \\end{bmatrix} = \\begin{bmatrix} e^{-i \\pi/2} & 0 \\\\ 0 & e^{i \\pi/2} \\end{bmatrix} \\begin{bmatrix} \\frac{1}{\\sqrt{2}} \\\\ \\frac{1}{\\sqrt{2}} \\end{bmatrix} = \\begin{bmatrix} -i & 0 \\\\ 0 & i \\end{bmatrix} \\begin{bmatrix} \\frac{1}{\\sqrt{2}} \\\\ \\frac{1}{\\sqrt{2}} \\end{bmatrix} = -i\\begin{bmatrix} \\frac{1}{\\sqrt{2}} \\\\ \\frac{-1}{\\sqrt{2}} \\end{bmatrix}$$\n\nWhich is equal to $$|-\\rangle$$ under the global phase $$-i = e^{-i\\pi/2}$$.\n\n• to visualise the rotations in the Bloch sphere it might be more natural to work directly with density matrices. In this case, you can see quite nicely how $R_z(\\theta)$ acts as a rotation when you consider its action (via conjugation) on states: $\\rho\\mapsto R_z(\\theta)\\rho R_z(\\theta)^\\dagger$.\n– glS\nDec 22, 2019 at 16:41\n\nJust think where $$|+\\rangle$$ and $$|-\\rangle$$ states on the Bloch sphere ($$x$$ axis):",
null,
"On IBM Q the Rz gate is defined as\n\n$$\\begin{equation} Rz(\\theta) = \\begin{pmatrix} 1 & 0 \\\\ 0 & \\mathrm{e}^{i\\theta} \\end{pmatrix} \\end{equation}$$\n\nIf you put $$\\theta = \\pi$$ then the matrix turns to\n\n$$\\begin{equation} Rz(\\pi) = \\begin{pmatrix} 1 & 0 \\\\ 0 & \\mathrm{e}^{i\\pi} \\end{pmatrix} = \\begin{pmatrix} 1 & 0 \\\\ 0 & -1 \\end{pmatrix} =Z \\end{equation}$$ Hence $$Rz(\\pi)|+\\rangle = |-\\rangle$$.\n\nYou can rewrite your matrix as\n\n$$\\begin{equation} Rz(\\theta) = \\begin{pmatrix} \\mathrm{e}^{-i\\frac{\\theta}{2}} & 0 \\\\ 0 & \\mathrm{e}^{i\\frac{\\theta}{2}} \\end{pmatrix} = \\mathrm{e}^{-i\\frac{\\theta}{2}} \\begin{pmatrix} 1 & 0 \\\\ 0 & \\mathrm{e}^{i\\theta} \\end{pmatrix} \\end{equation}$$\n\nHence the only difference is a global phase.\n\nOverall, Rz gate matrix as defined on IBM Q seems to more convinient as you do not have to deal with a global phase coeficient.\n\n• I disagree with IBM Q conventions; in QM, spin 1/2 rotation matrices are related to Pauli matrices as $R_i(\\theta)=\\exp(i \\theta/2\\cdot \\sigma_i)$ and this defines $R_z(\\theta)$; and IBM for the sake of petty simplification is making a mess of QM conventions. Dec 22, 2019 at 7:50\n• @kludg: I understand your objection but in the end the result is the same regardless which matrix (either QM approach or IBM approach) is used as the only difference is omission of global phase. Quantum states which differ only in global phase are considered to be same, so both approaches work. Or do I understand anything in wrong way? Dec 22, 2019 at 8:49\n• This is the question of using common conventions. For example, it is possible to define Pauli matrices differently; fortunately physicists are wise to avoid a mess and everybody uses the form proposed by Pauli; IMO IBM Q designers were not wise when defined $R_z(\\theta)$ matrix. Dec 22, 2019 at 9:43\n• @kludg: OK, thanks for explanation. Dec 22, 2019 at 9:53"
] | [
null,
"https://i.stack.imgur.com/mX4eD.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.90182316,"math_prob":0.9999709,"size":461,"snap":"2023-40-2023-50","text_gpt3_token_len":140,"char_repetition_ratio":0.10940919,"word_repetition_ratio":0.0,"special_character_ratio":0.30368763,"punctuation_ratio":0.094736844,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000038,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-24T06:06:21Z\",\"WARC-Record-ID\":\"<urn:uuid:35e7be82-33f8-4a71-868c-6272eb48d6e2>\",\"Content-Length\":\"195692\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7f3cff37-0111-4015-b542-d1bce105f535>\",\"WARC-Concurrent-To\":\"<urn:uuid:636add0b-384c-4786-9cb5-2ed26984480f>\",\"WARC-IP-Address\":\"104.18.10.86\",\"WARC-Target-URI\":\"https://quantumcomputing.stackexchange.com/questions/9287/can-you-use-rz-to-flip-from-rangle-to-rangle\",\"WARC-Payload-Digest\":\"sha1:CB6FPPKPP3JKWQUJKA7PYI2ZWNUSRSCA\",\"WARC-Block-Digest\":\"sha1:RFPYBIK4CWKRQZJAZ7TGQ3ZZIZG4MLNQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506623.27_warc_CC-MAIN-20230924055210-20230924085210-00051.warc.gz\"}"} |
https://offbeat.cc/blog/our-trip-to-integer-partitions.html | [
"# Our Next Trip to Integer Partitions\n\nPublished on 18 Sep 2021 by Susam Pal\n\n## Introduction\n\nAfter 114 meetings and 75 hours of studying together, our analytic number theory book club has finally reached the final chapter of the book Introduction to Analytic Number Theory (Apostol, 1976). We have less than 18 pages to read in order to complete reading this book. Considering that we read about 2-3 pages in every meeting, it appears that we will complete reading this book in another 2 weeks.\n\nReading this book has been quite a journey! The previous three blog posts on this blog provide an account of how this journey has been. It has been fun, of course. The best part of hosting a book club like this has been the number of extremely smart people I got an opportunity to meet and interact with. The insights and comments on the study material that some of our book club participants shared during the meetings were very helpful. Special thanks to Andrey who goes by the Libera IRC nick halyavin for joining most of our meetings and sharing his explanations on various steps of the proofs we came across in the book.\n\nThe meeting log shows that our book club started really small with only 4 participants in the first meeting in March 2021 and then it gradually grew to about 10-12 regular members within a month. Then a few months later, the number of participants began dwindling a little. This happened because some members of the book club had to drop out as they got busy with other personal or professional engagements. However, six months later, we still have about 4-5 regular participants meeting consistently. I think it is pretty good that we have made it this far.\n\n## Unrestricted Partitions\n\nThe final chapter on integer partitions is very unlike all the previous 12 chapters. While the previous chapters dealt with multiplicative number theory, this final chapter deals with additive number theory. For example, the first theorem talks about an interesting property of unrestricted partitions. We study the number of ways a positive integer can be expressed as a sum of positive integers. The number of summands is unrestricted, repetition of summands is allowed, and the order of the summands is not taken into account. For example, the number 3 has 3 partitions: 3, 2 + 1, and 1 + 1 + 1. Similarly, the number 4 has 5 partitions: 4, 3 + 1, 2 + 2, 2 + 1 + 1, and 1 + 1 + 1 + 1.\n\nI have always wanted to learn about partitions more deeply, so I am quite happy that this book ends with a chapter on partitions. The subject of partitions is rich with very interesting results obtained by various accomplished mathematicians. In the book, the first theorem about partitions is a very simple one that follows from the geometric representation of partitions. Let us see an illustration first.\n\nHow many partitions of 6 are there? There are 11 partitions of 6. They are 6, 5 + 1, 4 + 2, 4 + 1 + 1, 3 + 3, 3 + 2 + 1, 3 + 1 + 1 + 1, 2 + 2 + 2, 2 + 2 + 1 + 1, 2 + 1 + 1 + 1 + 1, and 1 + 1 + 1 + 1 + 1 + 1. Now how many of these partitions are made up of 5 parts? Each summand is called a part. The answer is 2. There are 2 partitions of 6 that are made up of 5 parts. They are 3 + 1 + 1 + 1 and 2 + 2 + 1 + 1. Let us represent both these partitions as arrangements of lattice points. Here is the representation of the partition 3 + 1 + 1 + 1:\n\n• • •\n•\n•\n•\n\n\nNow if we read this arrangement from left-to-right, column-by-column, we get another partition of 6, i.e., 4 + 1 + 1. Note that the number of parts in 3 + 1 + 1 + 1 (i.e., 4) appears as the largest part in 4 + 1 + 1. Similarly, the number of parts in 4 + 1 + 1 (i.e., 3) appears as the largest part in 3 + 1 + 1 + 1. Let us see one more example of this relationship. Here is the geometric representation of 2 + 2 + 1 + 1:\n\n• •\n• •\n•\n•\n\n\nOnce again, reading this representation from left-to-right, we get 4 + 2, another partition of 6. Once again, we can see that the number of partitions in 2 + 2 + 1 + 1 (i.e., 4) appears as the largest part in 4 + 2, and vice versa. These observations lead to the first theorem in the chapter on partitions:\n\nTheorem 14.1 The number ofpartitions of $$n$$ into $$m$$ parts is equal to the number of partitions of $$n$$ into parts, the largest of which is $$m$$.\n\nThat was a brief introduction to the chapter on partitions. In the next two or so weeks, we will dive deeper into the theory of partitions.\n\n## Next Meeting\n\nIf this blog post was fun for you, consider joining our next meeting. Our next meeting is on Tue, 21 Sep 2021 at 17:00 UTC. Since we are at the beginning of a new chapter, it is a good time for new participants to join us. It is also a good time for members of the club who have been away for a while to join us back. Since this chapter does not depend much on the previous chapters, new participants should be able to join our reading sessions for this chapter and follow along easily without too much effort.\n\nTo join our book club, see our channel details in the home page here. To get the meeting link for the next meeting, visit the analytic number theory book club page.\n\nIt is worth mentioning here that lurking is absolutely fine in our meetings. In fact, most members of the club join in and stay silent throughout the meeting. Only a few members talk via audio/video or chat. This is considered absolutely normal in our meetings, so please do not hesitate to join our meetings!"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9577121,"math_prob":0.95631325,"size":5266,"snap":"2021-43-2021-49","text_gpt3_token_len":1342,"char_repetition_ratio":0.1622957,"word_repetition_ratio":0.114754096,"special_character_ratio":0.27041396,"punctuation_ratio":0.11212397,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9530624,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-21T09:37:02Z\",\"WARC-Record-ID\":\"<urn:uuid:3b9263d9-e53d-48af-b1b2-f482fc36420f>\",\"Content-Length\":\"8027\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:52a40224-d61c-46d6-8a72-b4b79d2b0102>\",\"WARC-Concurrent-To\":\"<urn:uuid:c51c6c13-8e41-40e1-b9a3-961606987982>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"https://offbeat.cc/blog/our-trip-to-integer-partitions.html\",\"WARC-Payload-Digest\":\"sha1:PC3Z47IS2MY4U3RRINYZE4W3MPJDP6AN\",\"WARC-Block-Digest\":\"sha1:TDNHTPERHWDPVCIPNURL62HVDIPNEPMJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585382.32_warc_CC-MAIN-20211021071407-20211021101407-00015.warc.gz\"}"} |
https://softwareengineering.stackexchange.com/questions/164221/understanding-binary-numbers-in-terms-of-real-world-objects | [
"# Understanding binary numbers in terms of real world objects [closed]\n\nWhen I represent a number in the decimal system, I have an intuitive knowledge of what it amounts to. For example take the number '10': I understand that it means 10 apples or 10 people... i.e I can count in the real world.\n\nBut as soon as the number is converted to any other system, this understanding no longer applies. For example 10 when converted to binary will be 1010...now what does this represent? Is there a way to understand this number 1010 in terms of counting objects in the real world?\n\n• For me, 10 apples means \"one zero\" apples. What do you mean by one zero apples? – Francisco Presencia Oct 22 '13 at 11:03\n\n\"Io sono italiano\" is the italian sentence for \"I am italian\". Barring issues with translation, the two sequence of letters mean to (a reader who is able to speak both) the same concept.\n\nSo a number expressed in the decimal system may, under some condition, represent a counting device (you can count with integer, but then think that zero is a strange number, negative numbers are even stranger, and rational and irrational and imaginary numbers also are interesting).\n\nBut the decimal system is just a notation, which happens to have gained some predominance (well, in the computing world the case could be made for binary, octal and hexadecimal). It is by no means special: a given number can be written in whatever notation you decide to use.\n\nSo 1010 (in binary notation) is precisely the same as 0*2^0 + 1*2^1 + 0*2^2 + 1*2^3 = 10 in decimal notation. You can say that humans usually have 10 fingers on their two hands (decimal notation) or 1010 finger on their 10 hands (binary notation).\n\n• in any base (in any notation) the number written as 10 is special\n• there are a lot (to use an understatement) of numbers which are properly expressed implicitly: think about e and pi which are surely very famous numbers whose representation we prefer to keep \"in letters\" rather than using a sequence of digits (which should be infinite to be precise).\n• It all comes down to practice. You can read and think fluently in any language or radix as long as you practice it. People who know morse code don't hear .-- --- .-. -.. ..., they hear \"words\". – Hand-E-Food Sep 9 '12 at 5:55\n• Morse code is a nice example of a specific notation (albeit not positional :-): you go with the flow of it rather than consciously thinking \"ok, now the sequence for S was...?\". For this reason it could happen that one is better able to \"write it\" than to \"read it\" or vice versa. – Francesco Sep 9 '12 at 5:59\n• Fun fact: You can count to 1023 with your fingers. – phant0m Sep 9 '12 at 11:28\n• Well, octal doesn't fit so well since we just about standardized on the 8-bit-byte, and it's on the decline. It's most notorious for gotchas by now. – Deduplicator Oct 16 '15 at 21:33\n\nStudents of The Hitch Hiker's Guide to the Galaxy will recall that the Ultimate Question (Answer: 42) was \"What do you get when you multiply 6 by 9\"\n\nSo, whilst the rebuilt Earth uses Base 10 (decimal) the original Earth used base 13.\n\nThis seemingly off-topic prologue is there to illustrate that our concept of numbers is because we use Base 10 (decimal) as a matter of routine. And whilst decimalisation reinforces the 10-ness, this doesn't have to be the case.\n\nAnd since this is programmers.SE, then binary (true/false) and hexadecimal (byte packing) should be equally widely understood... by that I mean, to me, A(hex) is (nearly) as understandable a concept as 10(dec)\n\nOr more esoterically, in days gone by, before grams and kilograms, we used oz/lb/st and people conceptually \"got it\" - in fact some of these other measures were more intuitive.\n\nA binary number denotes a sum of powers of two. 10102 means (from right to left) no one, a two, no four, an eight.\n\nI'll try to give a more expansive explanation of the concept of bases here.\n\nI would argue that a number is an abstract concept. When you read `10`, it is not the real thing, it is merely one possible representation of an idea/concept that you cannot truly grasp.\n\n`10` (that is, `1` followed by a `0`), is an encoding for this idea of \"ten\". Although we have may have developed some sort of intuition for this way of encoding numbers, there is a more fundamental system at work here.\n\n`143` for instance is `100 + 40 + 3`, or: 3 ones, 4 tens and 1 hundred (ten tens). We can decompose any such number by their digits. The position of a digit determines its significance, i.e. what power of ten it counts:",
null,
"As you can see, the digits denote the coefficients for a sum of powers of ten. We call this number (ten) the base.\n\nTen (as a concept) is not a special number. It just seems natural because we are so used to it.\n\nWe can freely choose virtually any basis we like. The meaning of each digit always depends on the base and is obtained as follows:",
null,
"Thus, `143` (the encoding) can have a different meaning, depending on which base is implied (usually ten of course) but the base could just as well be 11:",
null,
"With binary numbers, it works the exact same way, but the base is two of course.",
null,
"For any encoding in a certain base, you need base-many distinct symbols to encode numbers. That is, for base ten, you need ten distinct symbols, for a base-two system, you need two distinct symbols, or for hexadecimal, you need sixteen distinct symbols. Also, you need to have the idea of a \"successor\", i.e. in the decimal system, the symbol \"2\" is the successor of the symbol \"1\".\n\nIn a system that uses the the special symbols `0` and `1` (with their common meaning), `10` is always the encoding for its base.\n\"common meaning\": 0*a = 0, 1*a = a\n\nThis system can be used to denote any objects that are enumerable.\n\n## In real world terms - visually\n\nTry to imagine a series of objects starting at some point, continuing indefinitely.\n\nFor the purposes of this document, I'll use dots:\n\n`Zero . . . . . . . . .`\n\nEvery dot represents a number. naturally, each dot is the successor of the dot to its left.\n\nOne way to refer to a dot would be to assign a unique symbol to each dot. As you probably can imagine, without any system, this is going to be extremely cumbersome. Just to be able to count to one thousand, you would have to remember one thousand arbitrary symbols (the Chinese for example prove it's possible, but that's beside the point).\n\nLet's just go with the assumption that it's cumbersome, and provide some shortcut: I'll just replace every fourth dot with a pipe:\n\n`Zero . . . | . . . | . . . | .`\n\nNow, we have simplified the problem a bit, assuming we can reliably name which pipe we mean, we can name dots relatively to a pipe. For instance, I can say: The second point after pipe X. Or the second point after the third pipe to denote the number fourteen.\n\nHowever, as numbers get bigger (or speaking visually: we zoom out, we are left with a series of pipes), it's extremely cumbersome to simply count the pipes and then refer to a dot relatively of one such pipe. We apply the same trick. We can replace every fourth pipe with an ampersand:\n\n`Zero . . . | . . . | . . . | . . . & . . . | . . . | . . . | . . . & . . .`\n\nNow, it becomes easier to specify a certain pipe, we simply say which ampersand we mean. Going from that, we relatively specify the pipe, and going from that, we relatively specify the dot.\n\nThis is exactly what the base-notation does. It places these markers on every n-th object, different markers on every n-th marker, etc Thus, there are never n or more of the same markers in sequence.\n\nIn this case, we have encode numbers with the base four: 1234 refers to the first ampersand, the second pipe after that, and the third dot after that: The number twenty-seven.\n\nApplying the aforementioned rule, we never need more than four distinct symbols to denote an arbitrary number in base four.\n\nProof: Let's assume we need more than four distinct symbols. This would imply, that I need to count four or more markers of the same kind from a given position. However, that is never necessary, since I replaced every fourth of these very markers with a different kind, I must be able to skip ahead. Thus, the premise is incorrect: Four distinct symbols are always enough.\n\n• Good explaination of the concept. Too often we (as in the set of programmers) make big assumptions, because we (as individual programmers) know (and understand) what is meant to be said, and therefore leave it unsaid – Andrew Sep 9 '12 at 10:46\n• @Andrew Thanks. I have added a second part trying to illustrate the concept of bases visually. – phant0m Sep 9 '12 at 10:59\n• Even better... because this just about illustrates how the Roman's (What have they done for us?) came up with their numbering... One is a I... five Is became an V, two Vs an X Five Xs became an L, two Ls a C etc So in effect, they had a binary, pentary (?) system – Andrew Sep 9 '12 at 11:07\n• @Andrew A good observation, they even put these \"markers\" as I called them (I have no idea whether there is any \"official\" terminology on this matter) explicitly into their numbers, whereas we have the implicitly by position. – phant0m Sep 9 '12 at 11:26\n\nBasically, the decimal system is based on the powers of 10 (10⁰, 10¹, 10²...), whilst the binary number system is based off of the powers of two (Place values count up by two, 2⁰, 2¹, 2²...). So here's 101 converted from binary to decimal. [(1x2³)+(0x2²)+(1x2¹)]÷2 which equals to 5. Do you notice the pattern (No, not illuminati). The exponents count down from how many numbers the binary term is, while 0 represents.. well.. a zero in the binary term, when a 1 represents a 1. 1x2 is a one in the term, and 0x2 is a zero."
] | [
null,
"https://i.stack.imgur.com/v67iK.png",
null,
"https://i.stack.imgur.com/KB0DF.png",
null,
"https://i.stack.imgur.com/2GdVJ.png",
null,
"https://i.stack.imgur.com/oNhrt.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.93926936,"math_prob":0.94578904,"size":4465,"snap":"2020-34-2020-40","text_gpt3_token_len":1057,"char_repetition_ratio":0.11723829,"word_repetition_ratio":0.06549708,"special_character_ratio":0.25352743,"punctuation_ratio":0.1714876,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9751421,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-15T03:06:44Z\",\"WARC-Record-ID\":\"<urn:uuid:2fb0b44f-1921-4435-be3b-4651bdb067df>\",\"Content-Length\":\"168563\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fed8ba5b-57c4-4cd0-838a-707793a22dbd>\",\"WARC-Concurrent-To\":\"<urn:uuid:1b904307-c542-433d-9abc-4893360d5c90>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://softwareengineering.stackexchange.com/questions/164221/understanding-binary-numbers-in-terms-of-real-world-objects\",\"WARC-Payload-Digest\":\"sha1:CU3N63HZM3XIGDTA7CUHABGBAN4W2M27\",\"WARC-Block-Digest\":\"sha1:ZZD25ZP6DJ7TPD5WEV3DEZ6DI46HBXG3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439740423.36_warc_CC-MAIN-20200815005453-20200815035453-00112.warc.gz\"}"} |
https://www.fxsolver.com/browse/?like=2679&p=150 | [
"'\n\n# Search results\n\nFound 1529 matches\nStandard Equation of an Ellipse\n\nEllipse is a curve on a plane surrounding two focal points such that a straight line drawn from one of the focal points to any point on the curve and then ... more\n\nLift-to-Drag Ratio - with wetted aspect ratio\n\nIn aerodynamics, the lift-to-drag ratio, or L/D ratio, is the amount of lift generated by a wing or vehicle, divided by the drag it creates by moving ... more\n\nHorizontal curve - Sight obstraction distance (S<L)\n\nHorizontal curve – Sight Distance Properties (S<L)\n\nHorizontal Curves are one of the two important transition elements in geometric ... more\n\nDiffusion Coefficient for two different gases (related to Fick's laws)\n\nDiffusion is the net movement of a substance (e.g., an atom, ion or molecule) from a region of high concentration to a region of low concentration. For two ... more\n\nMenelaus' theorem ( transversal line passes inside triangle )\n\nMenelaus’ theorem, named for Menelaus of Alexandria, is a theorem about triangles in plane geometry. Given a triangle ABC, ... more\n\nCeva's theorem (lines from vertices to the opposite sides of a triangle)\n\nCeva’s theorem is a theorem about triangles in Euclidean plane geometry. Given a triangle ABC, let the lines AO, BO and CO ... more\n\nBall Screw - Preload Drag Torque\n\nA ball screw is a mechanical linear actuator that translates rotational motion to linear motion with little friction. A threaded shaft provides a helical ... more\n\nWorksheet 980\n\nPPI can be calculated from knowing the diagonal size of the screen in inches and the resolution in pixels (width and height). This can be done in two steps\n\nUsing the Pythagorean theorem, for 3 different screen resolutions:\n\nDiagonal Resolution - Pixels\n\nUsing the Diagonal Resolution from the previous formula we calculate the PPI for 3 corresponding screen sizes :\n\nPixels Per Inch (PPI)\n\nResults:\n\n10.1 inch tablet screen of resolution 1024×600 : 117.5PPI\n21.5 inch PC monitor of 1080p resolution : 102.46PPI\n27 inch PC monitor of 1440p resolution : 108.78PPI\n\nHorizontal curve - Sight obstraction distance (S>L)\n\nHorizontal Curves are one of the two important transition elements in geometric design for highways (along with Vertical Curves). A horizontal curve ... more\n\nKnoop hardness test\n\nThe Knoop hardness test /kəˈnuːp/ is a microhardness test – a test for mechanical hardness used particularly for very brittle materials or thin sheets, ... more\n\n...can't find what you're looking for?\n\nCreate a new formula\n\n### Search criteria:\n\nSimilar to formula\nCategory"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8732817,"math_prob":0.9599776,"size":1953,"snap":"2023-40-2023-50","text_gpt3_token_len":444,"char_repetition_ratio":0.11185223,"word_repetition_ratio":0.073846154,"special_character_ratio":0.23195085,"punctuation_ratio":0.16153847,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9746103,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-09T12:41:26Z\",\"WARC-Record-ID\":\"<urn:uuid:40190a40-ec40-4aba-9402-08e3c9e1bf1f>\",\"Content-Length\":\"206483\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1892e958-2d2c-4939-ad0f-670d6becb4c4>\",\"WARC-Concurrent-To\":\"<urn:uuid:b6ac0c90-1aa6-41e7-84cf-7c557ecc25c9>\",\"WARC-IP-Address\":\"178.254.54.75\",\"WARC-Target-URI\":\"https://www.fxsolver.com/browse/?like=2679&p=150\",\"WARC-Payload-Digest\":\"sha1:MV6YPV22WYML7ETKAZO3FCNS2GPGOHAA\",\"WARC-Block-Digest\":\"sha1:Z46WHHBD5AAPZ74QZPGCJQKO7LJIIITI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100909.82_warc_CC-MAIN-20231209103523-20231209133523-00303.warc.gz\"}"} |
https://www.cbseguess.com/papers/question_papers/xii/2004/2004_economics_outside_delhi_set1.php | [
"Wednesday 26th February 2020\n\nCBSE Guess > Papers > Question Papers > Class XII > 2004 > Economics > Outside Delhi Set-I\n\nECONOMICS (Set I—Outside Delhi)\n\nSECTION – A\n\nQ. 1. Answer the following questions: 1x4\n(i) What is meant by price elasticity of demand?\n(ii) In which market form are the products homogeneous?\n(iii) Define marginal revenue.\n(iv) State the law of supply.\n\nQ. 2. Mention any three factors that affect the price elasticity of demand of a commodity. 3\n\nQ. 3. Distinguish between ‘change in demand’ and ‘change in quantity demanded’ of a commodity. 3\n\nQ. 4. List any three determinants of supply of a commodity. 3\n\nQ. 5. State any three main features of monopolistic competition. Describe any one. 3\n\nQ. 6. The quantity supplied of a commodity at a price of Rs. 8 per unit is 400 units. Its price elasticity of supply is 2. Calculate the price at which its quantity supplied will be 600 units. 4\n\nQ. 7. Complete the following table: 4\n\n Output (unit) Price (Rs.) Total Revenue (Rs.) Margial Revenue (Rs.) 1 2 3 4 7 6 4 2 — — — — — — — —\n\nQ. 8. Explain the relationship between marginal cost and average cost with the help of a cost schedule. 4\nOr\nDistinguish between fixed costs and variable costs. Give two examples of each.\n\nQ. 9. What are the three central problems of an economy? Why do they arise? 4\n\nQ. 10. Explain with the help of diagrams the effect of the following changes on the demand of a commodity: 6\n(i) A fall in the price of complementary good\n(ii) A rise in the income of its buyer\n\nQ. 11. Explain the law of variable proportions with the help of total product and marginal product curves. 6\n\nQ. 12. If at a given price of a commodity there is, excess supply, how will the equilibrium price be reached? Explain with the help of a diagram. 6\nOr\nExplain the effect of a leftward shift of demand curve of a commodity on its equilibrium price and quantity, with the help of a diagram.\n\nSECTION - B\n\nQ. 13. Answer the following questions: 1x4\n(i) What is macro-economics?\n(ii) Give an example of a micro-economic study.\n(iii) What is meant by fiscal deficit?\n(iv) When is there a deficit in the balance of trade?\n\nQ. 14. What is meant by revenue deficit? What are the implications of this deficit? 3\n\nQ. 15. Calculate Net National Disposable Income from the following data: 3\n\n Rs. (Crores) (i) Gross national product at factor cost (ii) Net current transfers from rest of the world (iii) Net indirect tax (iv) Consumption of fixed capital (v) Net factor income from abroad 800 50 70 60 (-) 10\n\nQ. 16. Give the meaning of marginal propensity to save and aver age propensity to save. Can the value of average propensity to save benegative? If yes, when?3\n\nQ. 17. In an economy, the marginal propensity to consume is 0.75. Investment is increased by Rs. 200 crores. Calculate the total increase in income and consumption expenditures. 3\n\nQ. 18. How does a central bank control the availability of credit by open market operations? Explain. 4\n\nQ. 19. Explain briefly any two objectives of a government budget. 4\n\nQ. 20. State the four functions of money. Describe any one. 4\n\nQ. 21. Distinguish between current account and capital account of balance of payments account. Mention any two transactions of capital account. 4\nOr\nHow is the foreign exchange market rate determined? Explain with the help of a diagram.\n\nQ. 22. From the following data calculate National Income by (i) income method and (ii) expenditure method: 3, 3\n\n Rs. (Crores) (i) Compensation of employees (ii) Net factor income from abroad (iii) Net indirect tax (iv) Profits (v) Private final consumption expenditure (vi) Net domestic capital formation (vii) Consumption of fixed capital (viii) Rent (ix) Interest (x) Mixed income of self-employed (xi) Net exports (xii) Government final consumption expenditure 1,200 (-) 20 120 800 2,000 770 130 400 620 700 (-) 30 1, 100\n\nQ. 23. Will the following be included in domestic factor Income of India? Give reasons for your answer. 6\n(i) Profits earned by a foreign bank from its branches Ii India.\n(ii) Scholarships given by Government of India.\n(iii) Profits earned by a resident of India from his company In Singapore.\n(iv) Salaries received by Indians working In American Embassy in India.\n\nQ. 24. Explain the concept of under-employment equilibrium with the help of a diagram. Show on the same diagram the additional in vestment expenditure required to reach full employment equilibrium. 6\nOr\nExplain the equilibrium level of income with the help of Consumption + Investment (C + I) curve If planned saving is greater than planned Investment, what adjustments will bring about equality between the two?\n\n Economics 2004 Question Papers Class XII Delhi Outside Delhi Compartment Delhi Compartment Outside Delhi",
null,
"Set I",
null,
"Set I",
null,
"Set I",
null,
"Set I",
null,
"Set II",
null,
"Set II",
null,
"Set II",
null,
"Set II",
null,
"Set III",
null,
"Set III\n\n CBSE 2004 Question Papers Class XII",
null,
"English",
null,
"Sociology",
null,
"Functional English",
null,
"Psychology",
null,
"Mathematics",
null,
"Philosopy",
null,
"Physics",
null,
"Computer Science",
null,
"Chemistry",
null,
"Entrepreneurship",
null,
"Biology",
null,
"Informatics Practices",
null,
"Geography",
null,
"Multimedia & Web Technology",
null,
"Economics",
null,
"Biotechnology",
null,
"Business Studies",
null,
"Physical Education",
null,
"Accountancy",
null,
"Fine Arts",
null,
"Political Science",
null,
"History",
null,
"Agriculture"
] | [
null,
"https://www.cbseguess.com/images/arrow1.gif",
null,
"https://www.cbseguess.com/images/arrow1.gif",
null,
"https://www.cbseguess.com/images/arrow1.gif",
null,
"https://www.cbseguess.com/images/arrow1.gif",
null,
"https://www.cbseguess.com/images/arrow1.gif",
null,
"https://www.cbseguess.com/images/arrow1.gif",
null,
"https://www.cbseguess.com/images/arrow1.gif",
null,
"https://www.cbseguess.com/images/arrow1.gif",
null,
"https://www.cbseguess.com/images/arrow1.gif",
null,
"https://www.cbseguess.com/images/arrow1.gif",
null,
"https://www.cbseguess.com/images/bullet1.gif",
null,
"https://www.cbseguess.com/images/bullet1.gif",
null,
"https://www.cbseguess.com/images/bullet1.gif",
null,
"https://www.cbseguess.com/images/bullet1.gif",
null,
"https://www.cbseguess.com/images/bullet1.gif",
null,
"https://www.cbseguess.com/images/bullet1.gif",
null,
"https://www.cbseguess.com/images/bullet1.gif",
null,
"https://www.cbseguess.com/images/bullet1.gif",
null,
"https://www.cbseguess.com/images/bullet1.gif",
null,
"https://www.cbseguess.com/images/bullet1.gif",
null,
"https://www.cbseguess.com/images/bullet1.gif",
null,
"https://www.cbseguess.com/images/bullet1.gif",
null,
"https://www.cbseguess.com/images/bullet1.gif",
null,
"https://www.cbseguess.com/images/bullet1.gif",
null,
"https://www.cbseguess.com/images/bullet1.gif",
null,
"https://www.cbseguess.com/images/bullet1.gif",
null,
"https://www.cbseguess.com/images/bullet1.gif",
null,
"https://www.cbseguess.com/images/bullet1.gif",
null,
"https://www.cbseguess.com/images/bullet1.gif",
null,
"https://www.cbseguess.com/images/bullet1.gif",
null,
"https://www.cbseguess.com/images/bullet1.gif",
null,
"https://www.cbseguess.com/images/bullet1.gif",
null,
"https://www.cbseguess.com/images/bullet1.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.88583916,"math_prob":0.7540444,"size":2653,"snap":"2020-10-2020-16","text_gpt3_token_len":681,"char_repetition_ratio":0.1208003,"word_repetition_ratio":0.029978586,"special_character_ratio":0.25329816,"punctuation_ratio":0.16934046,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9826251,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-26T08:01:00Z\",\"WARC-Record-ID\":\"<urn:uuid:26d1e845-5368-4526-bec8-e920154ea5e2>\",\"Content-Length\":\"29750\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5056a738-fe96-4f10-851f-3bcaf3b2a3fc>\",\"WARC-Concurrent-To\":\"<urn:uuid:530489a7-0a3f-4d8c-a9fa-f36a04f69ea9>\",\"WARC-IP-Address\":\"104.28.8.45\",\"WARC-Target-URI\":\"https://www.cbseguess.com/papers/question_papers/xii/2004/2004_economics_outside_delhi_set1.php\",\"WARC-Payload-Digest\":\"sha1:HLJ44VYIPS2RGROFELOAVCT2U6EFOLG6\",\"WARC-Block-Digest\":\"sha1:6KSRLBVTHG553XKAL4Y5Y57NY5TC7W3O\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875146187.93_warc_CC-MAIN-20200226054316-20200226084316-00232.warc.gz\"}"} |
https://www.colorhexa.com/03839f | [
"# #03839f Color Information\n\nIn a RGB color space, hex #03839f is composed of 1.2% red, 51.4% green and 62.4% blue. Whereas in a CMYK color space, it is composed of 98.1% cyan, 17.6% magenta, 0% yellow and 37.6% black. It has a hue angle of 190.8 degrees, a saturation of 96.3% and a lightness of 31.8%. #03839f color hex could be obtained by blending #06ffff with #00073f. Closest websafe color is: #009999.\n\n• R 1\n• G 51\n• B 62\nRGB color chart\n• C 98\n• M 18\n• Y 0\n• K 38\nCMYK color chart\n\n#03839f color description : Dark cyan.\n\n# #03839f Color Conversion\n\nThe hexadecimal color #03839f has RGB values of R:3, G:131, B:159 and CMYK values of C:0.98, M:0.18, Y:0, K:0.38. Its decimal value is 230303.\n\nHex triplet RGB Decimal 03839f `#03839f` 3, 131, 159 `rgb(3,131,159)` 1.2, 51.4, 62.4 `rgb(1.2%,51.4%,62.4%)` 98, 18, 0, 38 190.8°, 96.3, 31.8 `hsl(190.8,96.3%,31.8%)` 190.8°, 98.1, 62.4 009999 `#009999`\nCIE-LAB 50.398, -19.585, -23.38 14.41, 18.754, 35.659 0.209, 0.272, 18.754 50.398, 30.499, 230.047 50.398, -35.839, -32.235 43.306, -16.388, -18.508 00000011, 10000011, 10011111\n\n# Color Schemes with #03839f\n\n• #03839f\n``#03839f` `rgb(3,131,159)``\n• #9f1f03\n``#9f1f03` `rgb(159,31,3)``\nComplementary Color\n• #039f6d\n``#039f6d` `rgb(3,159,109)``\n• #03839f\n``#03839f` `rgb(3,131,159)``\n• #03359f\n``#03359f` `rgb(3,53,159)``\nAnalogous Color\n• #9f6d03\n``#9f6d03` `rgb(159,109,3)``\n• #03839f\n``#03839f` `rgb(3,131,159)``\n• #9f0335\n``#9f0335` `rgb(159,3,53)``\nSplit Complementary Color\n• #839f03\n``#839f03` `rgb(131,159,3)``\n• #03839f\n``#03839f` `rgb(3,131,159)``\n• #9f0383\n``#9f0383` `rgb(159,3,131)``\n• #039f1f\n``#039f1f` `rgb(3,159,31)``\n• #03839f\n``#03839f` `rgb(3,131,159)``\n• #9f0383\n``#9f0383` `rgb(159,3,131)``\n• #9f1f03\n``#9f1f03` `rgb(159,31,3)``\n• #024554\n``#024554` `rgb(2,69,84)``\n• #025a6d\n``#025a6d` `rgb(2,90,109)``\n• #036e86\n``#036e86` `rgb(3,110,134)``\n• #03839f\n``#03839f` `rgb(3,131,159)``\n• #0398b8\n``#0398b8` `rgb(3,152,184)``\n• #04acd1\n``#04acd1` `rgb(4,172,209)``\n• #04c1ea\n``#04c1ea` `rgb(4,193,234)``\nMonochromatic Color\n\n# Alternatives to #03839f\n\nBelow, you can see some colors close to #03839f. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #039f94\n``#039f94` `rgb(3,159,148)``\n• #039d9f\n``#039d9f` `rgb(3,157,159)``\n• #03909f\n``#03909f` `rgb(3,144,159)``\n• #03839f\n``#03839f` `rgb(3,131,159)``\n• #03769f\n``#03769f` `rgb(3,118,159)``\n• #03699f\n``#03699f` `rgb(3,105,159)``\n• #035c9f\n``#035c9f` `rgb(3,92,159)``\nSimilar Colors\n\n# #03839f Preview\n\nThis text has a font color of #03839f.\n\n``<span style=\"color:#03839f;\">Text here</span>``\n#03839f background color\n\nThis paragraph has a background color of #03839f.\n\n``<p style=\"background-color:#03839f;\">Content here</p>``\n#03839f border color\n\nThis element has a border color of #03839f.\n\n``<div style=\"border:1px solid #03839f;\">Content here</div>``\nCSS codes\n``.text {color:#03839f;}``\n``.background {background-color:#03839f;}``\n``.border {border:1px solid #03839f;}``\n\n# Shades and Tints of #03839f\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. In this example, #000405 is the darkest color, while #f1fcff is the lightest one.\n\n• #000405\n``#000405` `rgb(0,4,5)``\n• #001418\n``#001418` `rgb(0,20,24)``\n• #01242b\n``#01242b` `rgb(1,36,43)``\n• #01343f\n``#01343f` `rgb(1,52,63)``\n• #024452\n``#024452` `rgb(2,68,82)``\n• #025365\n``#025365` `rgb(2,83,101)``\n• #026378\n``#026378` `rgb(2,99,120)``\n• #03738c\n``#03738c` `rgb(3,115,140)``\n• #03839f\n``#03839f` `rgb(3,131,159)``\n• #0393b2\n``#0393b2` `rgb(3,147,178)``\n• #04a3c6\n``#04a3c6` `rgb(4,163,198)``\n• #04b3d9\n``#04b3d9` `rgb(4,179,217)``\n• #04c2ec\n``#04c2ec` `rgb(4,194,236)``\n• #0acffa\n``#0acffa` `rgb(10,207,250)``\n• #1dd3fb\n``#1dd3fb` `rgb(29,211,251)``\n• #30d7fb\n``#30d7fb` `rgb(48,215,251)``\n• #43dafb\n``#43dafb` `rgb(67,218,251)``\n• #57defc\n``#57defc` `rgb(87,222,252)``\n• #6ae2fc\n``#6ae2fc` `rgb(106,226,252)``\n• #7de6fd\n``#7de6fd` `rgb(125,230,253)``\n• #90e9fd\n``#90e9fd` `rgb(144,233,253)``\n• #a4edfd\n``#a4edfd` `rgb(164,237,253)``\n• #b7f1fe\n``#b7f1fe` `rgb(183,241,254)``\n• #caf5fe\n``#caf5fe` `rgb(202,245,254)``\n• #ddf8fe\n``#ddf8fe` `rgb(221,248,254)``\n• #f1fcff\n``#f1fcff` `rgb(241,252,255)``\nTint Color Variation\n\n# Tones of #03839f\n\nA tone is produced by adding gray to any pure hue. In this case, #4e5354 is the less saturated color, while #03839f is the most saturated one.\n\n• #4e5354\n``#4e5354` `rgb(78,83,84)``\n• #48575a\n``#48575a` `rgb(72,87,90)``\n• #415b61\n``#415b61` `rgb(65,91,97)``\n• #3b5f67\n``#3b5f67` `rgb(59,95,103)``\n• #35636d\n``#35636d` `rgb(53,99,109)``\n• #2f6773\n``#2f6773` `rgb(47,103,115)``\n• #286b7a\n``#286b7a` `rgb(40,107,122)``\n• #226f80\n``#226f80` `rgb(34,111,128)``\n• #1c7386\n``#1c7386` `rgb(28,115,134)``\n• #16778c\n``#16778c` `rgb(22,119,140)``\n• #0f7b93\n``#0f7b93` `rgb(15,123,147)``\n• #097f99\n``#097f99` `rgb(9,127,153)``\n• #03839f\n``#03839f` `rgb(3,131,159)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #03839f is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5506238,"math_prob":0.81108797,"size":3685,"snap":"2019-43-2019-47","text_gpt3_token_len":1626,"char_repetition_ratio":0.1252377,"word_repetition_ratio":0.011111111,"special_character_ratio":0.56580734,"punctuation_ratio":0.23809524,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9920629,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-12T01:58:42Z\",\"WARC-Record-ID\":\"<urn:uuid:1b927c1e-f7be-45c7-9914-15319a03a164>\",\"Content-Length\":\"36252\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b49fefee-2e29-43b1-891e-e0065bfd2d32>\",\"WARC-Concurrent-To\":\"<urn:uuid:b985d6f3-a584-45fa-80ff-9440178d39fc>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/03839f\",\"WARC-Payload-Digest\":\"sha1:2BTZXFUPJMDMZT64MOJIDZGNZ4ZE2WRK\",\"WARC-Block-Digest\":\"sha1:PTCACY7WO3QHN5B5SEFVMSXOSLZBOHCB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496664469.42_warc_CC-MAIN-20191112001515-20191112025515-00309.warc.gz\"}"} |
https://hackaday.io/project/11732-code | [
"0%\n0%\n\n# µδ code\n\nA binary maths hack, for use in DSP code and lossless compression\n\nSimilar projects worth following\n172 views\nFor your DSP/DCT/FFT needs, µδ codes are the next enhancement, beyond the lifting scheme and dyadic multiplies.\nAs far as I know, it has not yet been documented openly or used because most of the research is focused on lossy compression (think: MPEG).\nµδ codes make DSP algos more bit-efficient and will be used in my lossless video and audio CODECs, along with the 3R compaction algorithm ( https://hackaday.io/project/7385-recursive-range-reduction-3r-hwsw-codec )\nMore details will appear here soon :-)\n\nA µδ CODEC adds only one XOR gate to the critical datapath of a circuit that computes the simultaneous sum and difference of two integer numbers. The transform is perfectly bit-reversible and the reverse transform is almost identical to the classic version.\n\nThe two results (µ and δ) are as large as the original numbers, whereas a classical sum&difference expands the data by 2 bits (yet the result can only use 1/4 of the coding space).\n\nThe results are distorted : the MSB of µ gets disturbed by the sign of δ, which is simply truncated. The bet is that this distorsion is not critical for certain types of lossless compression CODECs, while reducing the size and consumption of hardware circuits.\n\nYann Guidon / YGDES05/24/2017 at 18:12 0 comments\n\nLook at these new logs:\n\nThe have some similarities with µδ but they differ in critical ways:\n\n• µδ works with median and difference while the new transform creates a sum (σ) and one of the operands (let's call it α). The new transform shall then called σα.\n• µδ works inside a ring (a closed set with wrap-around) but σα \"escapes\" this (the output is not constrained inside the original boundaries), which allows actual compression for some combinations of values.\n• actually σα has an initial value range that is a ring, however each operand can have its own original ring, they both are combined and the boundaries remain as auxiliary data in the background.\n• σα is used with VLC (variable length codes) while µδ uses fixed-length words.\n\nσα already has a fantastic application in the color decomposition of RGB pictures and will make 3R a truly entropy coder... Follow the developments there !\n\n• ### X+X = 2X\n\nYann Guidon / YGDES07/09/2016 at 16:01 0 comments\n\nBefore we start, let's just remember and examine one of the fundamental rules of arithmetics. If you take 2 non-negative numbers and add them, the sum is equal or greater than any of the addends. This is the rule that makes the #Recursive Range Reduction (3R) HW&SW CODEC work.\n\nIn a more general way, if you have two numbers of N bits, you need N+1 bits to uniquely represent the sum. Since X+X=2X, 2^N + 2^N = 2^(N+1). This +1 is the carry bit and is required to restore the original addends if you ever want to subtract it back from the other.\n\nSame goes for the subtraction : the difference requires a \"borrow\" bit. You can't avoid this, even though there are some cases where you can work modulo 2^N.\n\nWe are used to dealing with the carry and borrow bits but things can quickly get out of hand! Imagine you want to compute a 1024-tap integer FFT: each result will be the sum of 2^10 numbers, adding 10 bits to the original sample size. If you're dealing with CD quality, 16+10=26 bits so it fits in the 32-bits registers of common CPUs or DSPs.\n\nNow if you want to use 24-bits samples, you're screwed. 34 bits don't easily fit in 32-bits registers. Will you resort to slower 32-bits floating points ? 40-bits integers ? 64-bits integers ?\n\nTake the classical 8×8 DCT square now. The original 8-bits samples of a picture get fatter during each of the 3+3=6 passes, resulting in 14 bits for the results. The integer units have almost doubled the precision of the source data and this considerably increases gate count, latency, power consumption...\n\nNow you start to see where I'm getting to : the classical filters and time-frequency transforms have a fundamental problem of size.\n\n• ### First publication\n\nYann Guidon / YGDES07/08/2016 at 08:11 8 comments\n\nOpenSilicium#19 is out !\n\nI'll publish the code soon.\n\n• 1\nStep 1\n\n# Step 1: encoding\n\n• Take A and B, two signed numbers made of N bits\n• Compute the sum µ = A + B (with N+1 resolution)\n• Compute the difference δ = A - B (with N+1 bits of resolution)\n• Adjust the sign bit : µ = µ xor MSB(δ)\n• Remove the LSB of µ : µ = µ shr 1 (µ now uses N bits)\n• Remove the MSB of δ : δ = δ mod 2^N (δ now uses N bits)\n• 2\nStep 2\n\n# Step 2: decoding\n\n• Take the coded values µ and δ, with N bits each\n• Restore the LSB of µ by copying it from δ: µ = (µ shl 1) + (δ mod 2)\n• Restore A but drop the MSB: A = (µ + δ) mod 2^N\n• Restore B but drop the MSB: B = (µ - δ) mod 2^N\n\nShare\n\n## Discussions\n\nYann Guidon / YGDES wrote 06/30/2016 at 16:31 point\n\nThe founding article will be published in french kiosks in a few days now :-) I can't wait !\n\nAre you sure? yes | no\n\n# Does this project spark your interest?\n\nBecome a member to follow this project and never miss any updates",
null,
""
] | [
null,
"https://analytics.supplyframe.com/trackingservlet/impression",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91293204,"math_prob":0.90107185,"size":867,"snap":"2020-34-2020-40","text_gpt3_token_len":194,"char_repetition_ratio":0.09733488,"word_repetition_ratio":0.0,"special_character_ratio":0.21453287,"punctuation_ratio":0.08928572,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98446065,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-24T05:25:31Z\",\"WARC-Record-ID\":\"<urn:uuid:173756c6-0275-4145-9a45-53d9924da217>\",\"Content-Length\":\"93836\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1c31cb8e-2dae-4650-bee5-2222dbef97c1>\",\"WARC-Concurrent-To\":\"<urn:uuid:83d1c239-8e2e-48a4-814d-35e5d4eb97de>\",\"WARC-IP-Address\":\"198.54.96.98\",\"WARC-Target-URI\":\"https://hackaday.io/project/11732-code\",\"WARC-Payload-Digest\":\"sha1:OR3AQLCZ5CPT7BMZEM6LWZ3MTBXZCSLT\",\"WARC-Block-Digest\":\"sha1:WTRWO3H6LAEXFWGQYHTBWOHBRRYY6AKA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400213454.52_warc_CC-MAIN-20200924034208-20200924064208-00609.warc.gz\"}"} |
https://learn.microsoft.com/en-us/cpp/cpp/tile-static-keyword?view=msvc-170 | [
"# tile_static Keyword\n\nThe tile_static keyword is used to declare a variable that can be accessed by all threads in a tile of threads. The lifetime of the variable starts when execution reaches the point of declaration and ends when the kernel function returns. For more information on using tiles, see Using Tiles.\n\nThe tile_static keyword has the following limitations:\n\n• It can be used only on variables that are in a function that has the `restrict(amp)` modifier.\n\n• It cannot be used on variables that are pointer or reference types.\n\n• A tile_static variable cannot have an initializer. Default constructors and destructors are not invoked automatically.\n\n• The value of an uninitialized tile_static variable is undefined.\n\n• If a tile_static variable is declared in a call graph that is rooted by a non-tiled call to `parallel_for_each`, a warning is generated and the behavior of the variable is undefined.\n\n## Example\n\nThe following example shows how a tile_static variable can be used to accumulate data across several threads in a tile.\n\n``````// Sample data:\nint sampledata[] = {\n2, 2, 9, 7, 1, 4,\n4, 4, 8, 8, 3, 4,\n1, 5, 1, 2, 5, 2,\n6, 8, 3, 2, 7, 2};\n\n// The tiles:\n// 2 2 9 7 1 4\n// 4 4 8 8 3 4\n//\n// 1 5 1 2 5 2\n// 6 8 3 2 7 2\n\n// Averages:\nint averagedata[] = {\n0, 0, 0, 0, 0, 0,\n0, 0, 0, 0, 0, 0,\n0, 0, 0, 0, 0, 0,\n0, 0, 0, 0, 0, 0,\n};\n\narray_view<int, 2> sample(4, 6, sampledata);\narray_view<int, 2> average(4, 6, averagedata);\n\nparallel_for_each(\n// Create threads for sample.extent and divide the extent into 2 x 2 tiles.\nsample.extent.tile<2,2>(),\n[=](tiled_index<2,2> idx) restrict(amp)\n{\n// Create a 2 x 2 array to hold the values in this tile.\ntile_static int nums;\n// Copy the values for the tile into the 2 x 2 array.\nnums[idx.local][idx.local] = sample[idx.global];\n// When all the threads have executed and the 2 x 2 array is complete, find the average.\nidx.barrier.wait();\nint sum = nums + nums + nums + nums;\n// Copy the average into the array_view.\naverage[idx.global] = sum / 4;\n}\n);\n\nfor (int i = 0; i < 4; i++) {\nfor (int j = 0; j < 6; j++) {\nstd::cout << average(i,j) << \" \";\n}\nstd::cout << \"\\n\";\n}\n\n// Output:\n// 3 3 8 8 3 3\n// 3 3 8 8 3 3\n// 5 5 2 2 4 4\n// 5 5 2 2 4 4\n// Sample data.\nint sampledata[] = {\n2, 2, 9, 7, 1, 4,\n4, 4, 8, 8, 3, 4,\n1, 5, 1, 2, 5, 2,\n6, 8, 3, 2, 7, 2};\n\n// The tiles are:\n// 2 2 9 7 1 4\n// 4 4 8 8 3 4\n//\n// 1 5 1 2 5 2\n// 6 8 3 2 7 2\n\n// Averages.\nint averagedata[] = {\n0, 0, 0, 0, 0, 0,\n0, 0, 0, 0, 0, 0,\n0, 0, 0, 0, 0, 0,\n0, 0, 0, 0, 0, 0,\n};\n\narray_view<int, 2> sample(4, 6, sampledata);\narray_view<int, 2> average(4, 6, averagedata);\n\nparallel_for_each(\n// Create threads for sample.grid and divide the grid into 2 x 2 tiles.\nsample.extent.tile<2,2>(),\n[=](tiled_index<2,2> idx) restrict(amp)\n{\n// Create a 2 x 2 array to hold the values in this tile.\ntile_static int nums;\n// Copy the values for the tile into the 2 x 2 array.\nnums[idx.local][idx.local] = sample[idx.global];\n// When all the threads have executed and the 2 x 2 array is complete, find the average.\nidx.barrier.wait();\nint sum = nums + nums + nums + nums;\n// Copy the average into the array_view.\naverage[idx.global] = sum / 4;\n}\n);\n\nfor (int i = 0; i < 4; i++) {\nfor (int j = 0; j < 6; j++) {\nstd::cout << average(i,j) << \" \";\n}\nstd::cout << \"\\n\";\n}\n\n// Output.\n// 3 3 8 8 3 3\n// 3 3 8 8 3 3\n// 5 5 2 2 4 4\n// 5 5 2 2 4 4\n``````"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.555115,"math_prob":0.9971537,"size":3549,"snap":"2023-40-2023-50","text_gpt3_token_len":1328,"char_repetition_ratio":0.13822284,"word_repetition_ratio":0.6566951,"special_character_ratio":0.41983658,"punctuation_ratio":0.22888888,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9610016,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-22T09:46:32Z\",\"WARC-Record-ID\":\"<urn:uuid:9a7f8a4a-067f-43e8-aae3-e4f2cc28e886>\",\"Content-Length\":\"45744\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:27164cf6-bf8e-473f-a7b5-418c1f482b05>\",\"WARC-Concurrent-To\":\"<urn:uuid:748cdf89-9827-498d-9384-231a49d73ce4>\",\"WARC-IP-Address\":\"23.50.126.168\",\"WARC-Target-URI\":\"https://learn.microsoft.com/en-us/cpp/cpp/tile-static-keyword?view=msvc-170\",\"WARC-Payload-Digest\":\"sha1:KHHBZT2RB2B6FL54CVYGPTQB3NPCOBP5\",\"WARC-Block-Digest\":\"sha1:UVBBVSAHI3FRX3SX4REEJJYSORA5F4HY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506339.10_warc_CC-MAIN-20230922070214-20230922100214-00749.warc.gz\"}"} |
https://www.brightstorm.com/science/chemistry/kinetic-molecular-theory/ideal-gas-law/ | [
"",
null,
"###### Kendal Orenstein\n\nRutger's University\nM.Ed., Columbia Teachers College\n\nKendal founded an academic coaching company in Washington D.C. and teaches in local area schools. In her spare time she loves to explore new places.\n\n##### Thank you for watching the video.\n\nTo unlock all 5,300 videos, start your free trial.\n\n# Ideal Gas Law - Concept\n\nKendal Orenstein",
null,
"###### Kendal Orenstein\n\nRutger's University\nM.Ed., Columbia Teachers College\n\nKendal founded an academic coaching company in Washington D.C. and teaches in local area schools. In her spare time she loves to explore new places.\n\nShare\n\nThe Ideal Gas Law mathematically relates the pressure, volume, amount and temperature of a gas with the equation pressure x volume = moles x ideal gas constant x temperature; PV=nRT. The Ideal Gas Law is ideal because it ignores interactions between the gas particles in order to simplify the equation. There is also a Real Gas Law which is much more complicated and produces a result which, under most circumstances, is almost identical to that predicted by the Ideal Gas Law.\n\nAlright. So we're going to talk about the ideal gas law and one word that might stick out to you is the word ideal. So we're going to talk about ideal gases versus real gases and gases in real life.\n\nSo ideally, when thinking about this and the kinetic molcular theory, there are two things postulates within that kinetic molcular theory that were kind of iffy or untrue. One of them was that gas particles have virtually no volume and we're going to basically have calculations which they do volume of the gas particles does not count. They're negligible. We know that's not true. Gas particles do have some sort of volume. We know it's really small and we know that a ga- real gas particles actually do contain some sort of volume. And also ideally in an ideal world, we're saying that gas particles have no intermolcular forces meaning that they don't attract or repel from each other and we know that's not true. All gas particles and all molcules actually have some sort of way that they are attracted or repelling from each other called IMFs. So really that's not really true. But in most cases, real gases actually behave extremely similarly to ideal gases and so we can actually use this ideal gas conditions in when we're making our calculations, they're pretty accurate. However, the only time that they are not accurate is when we're dealing with high pressure situations or low temperature situations. The reason high pressure situations are different is because most when you have high pressure, those gas particles they're being pushed together. And these intermolcular forces are going to start playing a major role. Also when you're dealing with low temperatures, those gas particles will start slowing down and they will start containing some sort of volume that is negligible and these IMFs will start playing a part too. So in these two scenarios, we can't use the ideal conditions. Otherwise, we can use them all the time which we are going to start doing.\n\nAlright. So let's compare, let's say you have a gas. Using our combined gas law, let's say you have a gas and you don't have anything to compare it to. We can compare, we can always compare the gas law, any gas to its conditions at STP or standard temperature and pressure. And don't forget these conditions are one atmosphere or 101.3 kilo pascals or 760 milimetres of mercury for your pressure. Our molar volume will always be no matter what gas we're talking about 22.4 litres and our temperature will always be 273 kelvin or 0 degrees celsius. But we like things in kelvin because it's always positive that way.\n\nSo if we were to replace one of these guys, one of these pressure times volume over temperature with our conditions at STP, we get a certain number. And depending on our pressure, whatever unit of pressure we're talking about, we get different numbers. So we get, if we're using atmospheres we get 0.0821, if we're dealing with kilo pascals we get 8.314, if we're dealing with milimetres of mercury we get 62.4. And we're going to, this is always, will always be the case. We're just going ot make it a constant. And we're going to use the letter r to denote that constant. So, when we're dealing with, I'm going to grab a pen. When we're dealing with the combined gas law we want to actually compare something to a situation in a at STP, I can just replace this pv over t with the letter r. So in this case I'm going to say p1 times v1 over t1 equals r. Okay. Great. But let's say we're talking about one mol of gas. We're talking about one mol of gas, this all works fine because at one mol of gas, our volume is 22.4 litres.\n\nGreat. But what if we're talking about two mols of gas? We're going to have to multiply this number by two because we're multiplying that 22.4 litres by 2. Let's say we're talking about 1000 litres. 1000 mols. We have to multiply this whole thing by 1000. If we're talking about 0.55 mols, multiply this whole thing by 0.55. So multiply by the number of mols that we have in our sample.\n\nSo we're going to say the letter n denotes the number of mols we have in our sample. Okay, so if I rearrange this to make it much more easy to, easy to write down or remember, I'm going to rearrange this and bring the t over. So I'm going to say pv=nrt. Some people call it pivnert to remember the ideal gas law. So this combination of things is the ideal gas law. It's basically just the combined gas law we've rearranged using conditions at STP for r. Okay. So this actually uses, the ideal gas law uses all four variables. We have volume, pressure, number of mols and temperature. This number of mols, it's the first time it's been introduced. The other gas laws don't have the number of mols within them. So this actually is very very useful. Let's go over here and do a problem with them.\n\nAlright, so our problem is what is the pressure in atmospheres of a 0.108 mol sample of helium gas at a temperature of 20 degrees celsius if its volume is 0.505 litres. Okay. I know right away this is an ideal gas law problem. how do I know that? Well, my problem had a number of mols in it. Now, remember I told you ideal gas laws, the only gas laws that actually can contain the number of mols within it, within it. So, I know when I'm given a mol sample or I'm asked about the number of mols, I know I'm going to always be using the ideal gas law.\n\nOkay. So let's pull everything out. We have ideal gas law just to rewrite is pv=nrt. So our pressure in this case, we're looking for. We're looking for the pressure. So I'm going to say pressure is our variable. Our volume in this case is 0.505 litres. The number of mols we found is 0.108 mols and the r that we're going to use three r's to choose from. We're going to use r dealing with atmospheres. So pressure is wanted in the unit of atmospheres. So where r unit of atmosphere is this point 0821. And then our temperature in this case is 20 degrees celsius. Don't forget we always want things in kelvin. So we're going to add 273 to make it 293 kelvin. So if we multiplied all these together and divide by 5, sorry, 0.505 to isolate our pressure value, we're actually going to get 5.14 atmospheres. So we've just found out using ideal gas law that our new pressure or that our pressure in this scenario is 5.14 atmospheres. Awesome. Great. So this actually can be used, all you have to do is that make, you have to make sure that you get all your variables out and just plug this within this gas, within this ideal gas law.\n\nBut there's also lots of fun things you can do with the ideal gas law and one of them is to find the density of the gas. So we know density is mass over volume or grams over litres. And actually, if you rearranged the ideal gas law you're going to get this to find density. Let's actually do that together. Let's derive this. Alright. So we know what density is. So we know our mol grams per mol is equal to our molar mss. And I'm going to just substitute m m as molar mass because it's not milimetres. I'm just going to say, shorten it instead of having to write molar mass. Okay.\n\nI'm going to isolate mols because in my ideal gas law, I have mols. So I'm going to isolate mols to just figure out what it solves for. So I'm going to multiply both sides by mol. So we know our grams equals molar mass times mol. But I want to isolate that mol so I'm going to divide by molar mass on both sides to cancel. So when I, our mol equals grams per molar mass. So in our ideal gas law, I'm going to substitute this n for grams per molar mass. And I have pressure times volume equals grams per molar mass times rt. Okay.\n\nI want to get it so it's grams per litre, because that's our density value. So I'm going to mul- I'm going to divide both sides by rt to isolate the grams. So those cross out and I'll have pv over rt equals grams per molar mass. Okay. So now I want to get grams by itself so I'm going to multiply both sides by molar mass. So now my new formula is molar mass times pressure times volume over rt is equal to grams. And then I want to bring out my volume. Please don't forget my volume is measured in litres. So I want to divide both sides by volume, divide both sides by volume and I get molar mass. I can cross out my volume. Molar mass times pressure over rt equals grams over volume which is what I have written here which in other words is grams over litres. The new volume's in litres, which is our density. Yay. We found density using pv=nrt or ideal gas law.\n\nSo, there's lots of fun things you can do to find the ideal gas with the ideal gas law. Lots of fun different things you can do density being one of them and the ideal gas law is used in a lot of different ways."
] | [
null,
"https://d3a0jx1tkrpybf.cloudfront.net/img/teachers/teacher-6.png",
null,
"https://d3a0jx1tkrpybf.cloudfront.net/img/teachers/teacher-6.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9531713,"math_prob":0.96058345,"size":9602,"snap":"2021-43-2021-49","text_gpt3_token_len":2262,"char_repetition_ratio":0.14336321,"word_repetition_ratio":0.051454138,"special_character_ratio":0.2299521,"punctuation_ratio":0.10893033,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98398244,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-21T06:40:47Z\",\"WARC-Record-ID\":\"<urn:uuid:dae435da-b963-4a45-a5c0-774dcf1bee9b>\",\"Content-Length\":\"91755\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a60078a6-5fe6-47c5-8f65-6b3ab64251fb>\",\"WARC-Concurrent-To\":\"<urn:uuid:b7c4f2a0-0772-49a5-aed8-45e8171ebd88>\",\"WARC-IP-Address\":\"23.23.139.119\",\"WARC-Target-URI\":\"https://www.brightstorm.com/science/chemistry/kinetic-molecular-theory/ideal-gas-law/\",\"WARC-Payload-Digest\":\"sha1:VCJHPXP7BBTSDPCHP3MVO6SCNNAZ6PFF\",\"WARC-Block-Digest\":\"sha1:AJFLFHLZH5KDDP5UMMA3U75N35B3IAGN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585381.88_warc_CC-MAIN-20211021040342-20211021070342-00312.warc.gz\"}"} |
https://mathgoodies.com/lessons/vol4/fractions_to_percents | [
"# Writing Fractions as Percents\n\n## Learn About Writing Fractions as Percents With The Following Examples And Interactive Exercises",
null,
"Problem: Last marking period, Ms. Jones gave an A grade to 15 out of every 100 students and Mr. McNeil gave an A grade to 3 out of every 20 students. What percent of each teacher's students received an A?\n\n Solution Teacher Ratio Fraction Percent Ms. Jones 15 to 100",
null,
"15% Mr. McNeil 3 to 20",
null,
"15%",
null,
"Solution: Both teachers gave 15% of their students an A last marking period.\n\nIn the problem above, the fraction for Ms. Jones was easily converted to a percent. This is because It is easy to convert a fraction to a percent when the denominator is 100. If a fraction does not have a denominator of 100, you can convert it to an equivalent fraction with a denominator of 100, and then write the equivalent fraction as a percent. This is what was done in the problem above for Mr. McNeil. Let's look at some problems in which we use equivalent fractions to help us convert a fraction to a percent.\n\nExample 1: Write each fraction as a percent:",
null,
"Solution Fraction Equivalent Fraction Percent",
null,
"",
null,
"50%",
null,
"",
null,
"90%",
null,
"",
null,
"80%\n\nExample 2: One team won 19 out of every 20 games played, and a second team won 7 out of every 8 games played. Which team has a higher percentage of wins?\n\n Solution Team Fraction Equivalent Fraction Percent 1",
null,
"",
null,
"95% 2",
null,
"",
null,
"87.5%\n\nSolution: The first team has a higher percentage of wins.\n\nIn Examples 1 and 2, we used equivalent fractions to help us convert each fraction to a percent. Another way to do this is to convert each fraction to a decimal, and then convert each decimal to a percent. To convert a fraction to a decimal, divide its numerator by its denominatorLook at Example 3 below to see how this is done.\n\nExample 3: Write each fraction as a percent:",
null,
"Solution Fraction Decimal Percent",
null,
"",
null,
"87.5%",
null,
"",
null,
"95%",
null,
"",
null,
"1.5%\n\nNow that you are familiar with writing fractions as percents, do you see a pattern in the problem below?\n\n Problem: If 165% equals",
null,
", and 16.5% equals",
null,
", then what fraction is equal to 1.65%?\n Solution Percent Fraction 165%",
null,
"16.5%",
null,
"1.65%",
null,
"Summary: To write a fraction as a percent, we can convert it to an equivalent fraction with a denominator of 100. Another way to write a fraction as a percent is to divide its numerator by its denominator, then convert the resulting decimal to a percent.\n\n### Exercises\n\nDirections: Read each question below. Select your answer by clicking on its button. Feedback to your answer is provided in the RESULTS BOX. If you make a mistake, choose a different button.\n\n 1. Which of the following is equal to 36%",
null,
"",
null,
"",
null,
"None of the above. RESULTS BOX:\n 2. Which of the following is equal to 62.5%?",
null,
"",
null,
"",
null,
"None of the above. RESULTS BOX:\n 3. Which of the following is equal to",
null,
"? .583% 5.83% 58.3% None of the above. RESULTS BOX:\n 4. Which of the following is equal to",
null,
"? 11% 5.5% 200% None of the above. RESULTS BOX:\n 5. What fraction is equal to .42%?",
null,
"",
null,
"",
null,
"All of the above. RESULTS BOX:"
] | [
null,
"https://mathgoodies.com/sites/default/files/lesson_images/per_centum.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/fifteen_100ths.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/equiv_3_20ths.gif",
null,
"https://mathgoodies.com/sites/default/files/lesson_images/gradebook.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/fractions_example1.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/one_half.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/equiv_one_half.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/eighteen_20ths.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/equiv_18_20ths.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/four_fifths.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/equiv_4_5ths.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/nineteen_20ths.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/equiv_19_20ths.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/seven_8ths.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/equiv_7_8ths.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/fractions_example2.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/seven_8ths.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/seven_div8.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/nineteen_20ths.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/nineteen_div20.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/three_200ths.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/three_div200.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/thirty3_20ths.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/thirty3_200ths.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/thirty3_20ths.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/thirty3_200ths.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/thirty3_2000ths.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/nine_25ths.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/five_8ths.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/six_25ths.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/four_fifths.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/five_8ths.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/eighteen_20ths.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/fifty8_point3_100ths.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/eleven_200ths.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/four_point2_1000ths.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/forty_two_10000ths.gif",
null,
"https://mathgoodies.com/sites/all/modules/custom/lessons/images/percent/twenty1_5000ths.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9199037,"math_prob":0.98213714,"size":2723,"snap":"2022-27-2022-33","text_gpt3_token_len":696,"char_repetition_ratio":0.1636631,"word_repetition_ratio":0.11340206,"special_character_ratio":0.26625046,"punctuation_ratio":0.12743363,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9986824,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76],"im_url_duplicate_count":[null,9,null,3,null,3,null,3,null,3,null,3,null,3,null,6,null,3,null,6,null,3,null,6,null,3,null,6,null,3,null,3,null,6,null,3,null,6,null,3,null,3,null,3,null,6,null,6,null,6,null,6,null,3,null,3,null,6,null,3,null,6,null,6,null,6,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-17T19:12:08Z\",\"WARC-Record-ID\":\"<urn:uuid:0021c461-7a77-494e-b10e-e0dbb7b4b7e4>\",\"Content-Length\":\"82756\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8f18676f-706d-451b-b291-fb0e05a05f58>\",\"WARC-Concurrent-To\":\"<urn:uuid:26b3cecd-a262-4e3f-b5f9-600b80a9f48c>\",\"WARC-IP-Address\":\"172.64.197.2\",\"WARC-Target-URI\":\"https://mathgoodies.com/lessons/vol4/fractions_to_percents\",\"WARC-Payload-Digest\":\"sha1:RGFRK3HDRKDUEVO5T67L4W5MYTUY733K\",\"WARC-Block-Digest\":\"sha1:HAZZH7LEVUJQZLJ3M6ZBF6FPFFXEMDMH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882573104.24_warc_CC-MAIN-20220817183340-20220817213340-00420.warc.gz\"}"} |
https://www.mbeckler.org/blog/?tag=cpp | [
"# cpp\n\n## Pretty printing a number of bytes in C/C++\n\nI spend a lot of time write and debugging command line scripts and programs. As much as I like looking at large numbers (millions, billions, trillions, etc) it can be difficult to read a big number and quickly parse how large it is, i.e. “is that 12 megabytes or 1.2 gigabytes?”.\n\nA long time ago I wrote a small function that does pretty printing of a number of bytes. It can handle from bytes to exabytes, and properly handles integer numbers of a unit by printing it as an integer instead of float. Should be easy enough to adjust to your specific needs or style desires.\n\n```// Prints to the provided buffer a nice number of bytes (KB, MB, GB, etc)\nvoid pretty_bytes(char* buf, uint bytes)\n{\nconst char* suffixes;\nsuffixes = \"B\";\nsuffixes = \"KB\";\nsuffixes = \"MB\";\nsuffixes = \"GB\";\nsuffixes = \"TB\";\nsuffixes = \"PB\";\nsuffixes = \"EB\";\nuint s = 0; // which suffix to use\ndouble count = bytes;\nwhile (count >= 1024 && s < 7)\n{\ns++;\ncount /= 1024;\n}\nif (count - floor(count) == 0.0)\nsprintf(buf, \"%d %s\", (int)count, suffixes[s]);\nelse\nsprintf(buf, \"%.1f %s\", count, suffixes[s]);\n}```"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8087208,"math_prob":0.9739436,"size":1099,"snap":"2020-45-2020-50","text_gpt3_token_len":321,"char_repetition_ratio":0.14977169,"word_repetition_ratio":0.0,"special_character_ratio":0.322111,"punctuation_ratio":0.17167382,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9557192,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-26T03:41:39Z\",\"WARC-Record-ID\":\"<urn:uuid:74219561-02de-45fd-b787-0a3aa58b8c1d>\",\"Content-Length\":\"16070\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3cf35ad2-bdcc-46f5-9f29-8dd5a9fda5d2>\",\"WARC-Concurrent-To\":\"<urn:uuid:f815ac19-acd8-4183-a04f-7e288143b463>\",\"WARC-IP-Address\":\"66.33.210.224\",\"WARC-Target-URI\":\"https://www.mbeckler.org/blog/?tag=cpp\",\"WARC-Payload-Digest\":\"sha1:JFDZVL3QMDJ4KCRUPJK4KMT33YJVDEYE\",\"WARC-Block-Digest\":\"sha1:ICBXTCDO3GO6YVEMHN4KWIMHVOYFQWTB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107890273.42_warc_CC-MAIN-20201026031408-20201026061408-00557.warc.gz\"}"} |
https://simplyans.com/mathematics/what-is-the-positive-square-root-of-12656457 | [
"",
null,
", 14.05.2019 20:39 cesarcastellan9\n\n# What is the positive square root of 0.81enter your answer in the box.",
null,
"",
null,
"",
null,
"### Another question on Mathematics",
null,
"Mathematics, 04.02.2019 19:52\nPart a: select all of the ordered pairs that are located on the graph of the equation. part b: does the graph of the equation represent a function? select all correct answers for part a and one answer for part b.",
null,
"Mathematics, 04.02.2019 06:30\nYou are given a convex polygon. you are asked to draw a new polygon by increasing the sum of the interior angle measures by 540°. how many sides does your new polygon have?",
null,
"Mathematics, 04.02.2019 05:14\n43lbs of tomatos cost \\$387. how much would 41lbs cost",
null,
"Mathematics, 03.02.2019 17:10\nAgarden consists of an apple tree, a pear tree, cauliflowers, and heads of cabbage. there are 40 vegetables in the garden. 24 of them are cauliflowers. what is the ratio of the number of cauliflowers to the number of heads of cabbage?\nWhat is the positive square root of 0.81\n...\nQuestions",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"Questions on the website: 6616171"
] | [
null,
"https://simplyans.com/tpl/images/cats/mat.png",
null,
"https://simplyans.com/tpl/images/cats/User.png",
null,
"https://simplyans.com/tpl/images/ask_question.png",
null,
"https://simplyans.com/tpl/images/ask_question_mob.png",
null,
"https://simplyans.com/tpl/images/cats/mat.png",
null,
"https://simplyans.com/tpl/images/cats/mat.png",
null,
"https://simplyans.com/tpl/images/cats/mat.png",
null,
"https://simplyans.com/tpl/images/cats/mat.png",
null,
"https://simplyans.com/tpl/images/cats/mir.png",
null,
"https://simplyans.com/tpl/images/cats/health.png",
null,
"https://simplyans.com/tpl/images/cats/en.png",
null,
"https://simplyans.com/tpl/images/cats/en.png",
null,
"https://simplyans.com/tpl/images/cats/ekonomika.png",
null,
"https://simplyans.com/tpl/images/cats/fizika.png",
null,
"https://simplyans.com/tpl/images/cats/mat.png",
null,
"https://simplyans.com/tpl/images/cats/istoriya.png",
null,
"https://simplyans.com/tpl/images/cats/obshestvoznanie.png",
null,
"https://simplyans.com/tpl/images/cats/istoriya.png",
null,
"https://simplyans.com/tpl/images/cats/health.png",
null,
"https://simplyans.com/tpl/images/cats/istoriya.png",
null,
"https://simplyans.com/tpl/images/cats/fr.png",
null,
"https://simplyans.com/tpl/images/cats/biologiya.png",
null,
"https://simplyans.com/tpl/images/cats/fizika.png",
null,
"https://simplyans.com/tpl/images/cats/ekonomika.png",
null,
"https://simplyans.com/tpl/images/cats/en.png",
null,
"https://simplyans.com/tpl/images/cats/mat.png",
null,
"https://simplyans.com/tpl/images/cats/obshestvoznanie.png",
null,
"https://simplyans.com/tpl/images/cats/mat.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.81871957,"math_prob":0.9283398,"size":1291,"snap":"2020-45-2020-50","text_gpt3_token_len":425,"char_repetition_ratio":0.13519813,"word_repetition_ratio":0.115384616,"special_character_ratio":0.36948103,"punctuation_ratio":0.248503,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9674881,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-24T05:52:27Z\",\"WARC-Record-ID\":\"<urn:uuid:3f73f90a-91f1-4ee3-ae3d-bd6a37e8ef57>\",\"Content-Length\":\"129330\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:017eb1d3-e8d1-493f-ace6-565e5554f95f>\",\"WARC-Concurrent-To\":\"<urn:uuid:d4f0f5fa-7c47-4ba8-8175-384ee174da26>\",\"WARC-IP-Address\":\"64.74.160.240\",\"WARC-Target-URI\":\"https://simplyans.com/mathematics/what-is-the-positive-square-root-of-12656457\",\"WARC-Payload-Digest\":\"sha1:2QC2TX3CNQR4QMT743DXVDWUOBJGFGWT\",\"WARC-Block-Digest\":\"sha1:WSPK5AZJ5HNXB62PAUW6P36Z6AHOTKH7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107882102.31_warc_CC-MAIN-20201024051926-20201024081926-00466.warc.gz\"}"} |
https://schoolgraders.com/evaluate-the-function-at-the-indicated-value-of-x-round-your-result-to-three-decimal-places-function-fx-0-5x-value-x-1-7/ | [
"# Evaluate the function at the indicated value of x. Round your result to three decimal places. Function: f(x) = 0.5x Value: x = 1.7\n\nQUESTION 1\n1. Evaluate the function at the indicated value of x. Round your result to three decimal places.\nFunction: f(x) = 0.5x Value: x = 1.7\n\n-0.308\n\n1.7\n\n0.308\n\n0.5\n\n-1.7\nQUESTION 2\n1. Match the graph with its exponential function.\n\ny = 2-x – 3\n\ny = -2x + 3\n\ny = 2x + 3\n\ny = 2x – 3\n\ny = -2x – 3\n\nQUESTION 3\n1. Select the graph of the function.\nf(x) = 5x-1\n\nQUESTION 4\n1. Evaluate the function at the indicated value of x. Round your result to three decimal places.\nFunction: f(x) = 500e0.05x Value: x=17\n\n1169.823\n\n1369.823\n\n1569.823\n\n1269.823\n\n1469.823\n\nQUESTION 5\n1. Use the One-to-One property to solve the equation for x.\ne3x+5 = e6\n\nx = -1/3\n\nx2 = 6\n\nx = -3\n\nx = 1/3\n\nx = 3\n\nQUESTION 6\n1. Write the logarithmic equation in exponential form.\nlog8 64 = 2\n\n648 = 2\n\n82 = 16\n\n82 = 88\n\n82 = 64\n\n864 = 2\n\nQUESTION 7\n1. Write the logarithmic equation in exponential form.\nlog7 343 = 3\n\n7343 = 2\n\n73 = 77\n\n73 = 343\n\n73 = 14\n\n3437 = 2\n\nQUESTION 8\n1. Write the exponential equation in logarithmic form.\n43 = 64\n\nlog64 4 = 3\n\nlog4 64 = 3\n\nlog4 64 = -3\n\nlog4 3 = 64\n\nlog4 64 = 1/3\n\nQUESTION 9\n1. Use the properties of logarithms to simplify the expression.\nlog20 209\n\n0\n\n-1/9\n\n1/9\n\n-9\n\n9\n\nQUESTION 10\n1. Use the One-to-One property to solve the equation for x.\nlog2(x+4) = log2 20\n\n19\n\n17\n\n18\n\n16\n\n20\n\nQUESTION 11\n1. Find the exact value of the logarithmic expression.\nlog6 36\n\n2\n\n6\n\n36\n\n-2\n\nnone of these\n\nQUESTION 12\n1. Use the properties of logarithms to expand the expression as a sum, difference, and/or constant multiple of logarithms. (Assume all variables are positive.)\nlog3 9x\n\nlog3 9 x log3 x\n\nlog3 9 + log3 x\n\nlog3 9 log3\n\nnone of these\n\nQUESTION 13\n1. Condense the expression to a logarithm of a single quantity.\nlogx – 2logy + 3logz\n\nQUESTION 14\n1. Evaluate the logarithm using the change-of-base formula. Round your result to three decimal places.\nlog4 9\n\n1.585\n\n5.585\n\n3.585\n\n4.585\n\n2.585\nQUESTION 15\n1. Determine whether the given x-value is a solution (or an approximate solution) of the equation.\n42x-7 = 16\nx = 5\n\nno\n\nyes\nQUESTION 16\n1. Solve for x.\n3x = 81\n\n7\n\n3\n\n4\n\n-4\n\n-3\nQUESTION 17\n1. Solve the exponential equation algebraically. Approximate the resulte to three decimal places.\ne5x = ex2-14\n\n-7, -2\n\n7, -2\n\n5, -14\n\n7, 2\n\n-7, 2\nQUESTION 18\n1. Solve the logarithmic equation algebraically. Approximate the result to three decimal places.\nlog3(6x-8) = log3(5x + 10)\n\n18\n\n20\n\n17\n\n19\n\n-2\nQUESTION 19\n1. Find the magnitude R of each earthquake of intensity I (let I0=1).\nI = 19000\n\n3.28\n\n5.28\n\n4.28\n\n2.38\n\n6.28\nQUESTION 20\n1. \\$2500 is invested in an account at interest rate r, compounded continuously. Find the time required for the amount to double. (Approximate the result to two decimal places.)\nr = 0.0570\n\n13.16 years\n\n10.16 years\n\n11.16 years\n\n12.16 years\n\n14.16 years"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7656984,"math_prob":0.99945027,"size":2696,"snap":"2020-45-2020-50","text_gpt3_token_len":1005,"char_repetition_ratio":0.15156017,"word_repetition_ratio":0.13108614,"special_character_ratio":0.41357568,"punctuation_ratio":0.14037855,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99986434,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-29T03:49:17Z\",\"WARC-Record-ID\":\"<urn:uuid:2551aa22-36e8-4019-8f62-7fcdb0f5f4c0>\",\"Content-Length\":\"122503\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7df15222-3b29-46d9-913f-6ebcd8c7bf82>\",\"WARC-Concurrent-To\":\"<urn:uuid:b34288b8-f73e-456f-97d7-1ce145aac5c9>\",\"WARC-IP-Address\":\"172.67.152.131\",\"WARC-Target-URI\":\"https://schoolgraders.com/evaluate-the-function-at-the-indicated-value-of-x-round-your-result-to-three-decimal-places-function-fx-0-5x-value-x-1-7/\",\"WARC-Payload-Digest\":\"sha1:BHA3SNYVVLYIY2KMRNAVTL5ZS4GXUPXV\",\"WARC-Block-Digest\":\"sha1:PNLEIV7MRNAFLB6CKHAGKJ5WZWXDRGNR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141196324.38_warc_CC-MAIN-20201129034021-20201129064021-00223.warc.gz\"}"} |
http://www.neuralnoise.com/2016/gaussian-fields/ | [
"# neuralnoise.com\n\nHomepage of Dr. Pasquale Minervini, Ph.D.\nResearcher at University College London\nLondon, United Kingdom\n\nIn several occasions, we find ourselves in need of propagating information among nodes in an undirected graph.\n\nFor instance, consider graph-based Semi-Supervised Learning (SSL): here, labeled and unlabeled examples are represented by an undirected graph, referred to as the similarity graph.\n\nThe task consists in finding a label assignment to all examples, such that:\n\n1. The final labeling is consistent with training data (e.g. positive training examples are still classified as positive at the end of the learning process), and\n2. Similar examples are assigned similar labels: this is referred to as the semi-supervised smoothness assumption.\n\nSimilarly, in networked data such as social networks, we might assume that related entities (such as friends) are associated to similar attributes (such as political and religious views, musical tastes and so on): in social network analysis, this phenomenon is commonly referred to as homophily (love of the same).\n\nIn both cases, propagating information from a limited set of nodes in a graph to all nodes provides a method for predicting the attributes of such nodes, when this information is missing.\n\nIn the following, we introduce a really clever method for efficiently propagating information about nodes in undirected graphs, known as the Gaussian Fields method.\n\n### Propagation as a Cost Minimization Problem\n\nWe now cast the propagation problem as a binary classification task. Let $X = \\{ x_{1}, x_{2}, \\ldots, x_{n} \\}$ be a set of $n$ instances, of which only $l$ are labeled: $X^{+}$ are positive examples, while $X^{-}$ are negative examples\n\nSimilarity relations between instances can be represented by means of an undirected similarity graph having adjacency matrix $\\mathbf{W} \\in \\mathbb{R}^{n \\times n}$: if two instances are connected in the similarity graph, it means that they are considered similar, and should be assigned the same label. Specifically, $\\mathbf{W}_{ij} > 0$ iff the instances $x_{i}, x_{j} \\in X$ are connected by an edge in the similarity graph, and $\\mathbf{W}_{ij} = 0$ otherwise.\n\nLet $y_{i} \\in \\{ \\pm 1 \\}$ be the label assigned to the $i$-th instance $x_{i} \\in X$. We can encode our assumption that similar instances should be assigned similar labels by defining a quadratic cost function over labeling functions in the form $f : X \\mapsto \\{ \\pm 1 \\}$:\n\nGiven an input labeling function $f$, the cost function $E(\\cdot)$ associates, for each pair of instances $x_{i}, x_{j} \\in X$, a non-negative cost $\\mathbf{W}_{ij} \\left[ f(x_{i}) - f(x_{j}) \\right]$: this quantity is $0$ when $\\mathbf{W}_{ij} = 0$ (i.e. $x_{i}$ and $X_{j}$ are not linked in the similarity graph), or when $f(x_{i}) = f(x_{j})$ (i.e. they are assigned the same label).\n\nFor such a reason, the cost function $E(\\cdot)$ favors labeling functions that are more likely to assign the same labels to instances that are linked by an edge in the similarity graph.\n\nNow, the problem of finding a labeling function that is both consistent with training labels, and assigns similar labels to similar instances, can be cast as a cost minimization problem. Let’s represent a labeling function $f$ by a vector $\\mathbf{f} \\in \\mathbb{R}^{n}$, $L \\subset X$ denote labeled instances, and $\\mathbf{y}_{i} \\in \\{ \\pm 1 \\}$ denote the label of the $x_{i}$-th instance. The optimization problem can be defined as follows:\n\nThe constraint $\\forall x \\in L : \\mathbf{f}_{i} = \\mathbf{y}_{i}$ enforces the label of each labeled example $x_{i} \\in L$ to $\\mathbf{f}_{i} = +1$ if the instance has a positive label, and to $\\mathbf{f}_{i} = -1$ if the instance has a negative label, so to achieve consistency with training labels.\n\nHowever, constraining labeling functions $f$ to only take discrete values has two main drawbacks:\n\n• Each function $f$ can only provide hard classifications, without yielding any measure of confidence in the provided classification.\n• The cost term $E(\\cdot)$ can be hard to optimize in a multi-label classification setting.\n\nFor overcoming such limitations, Zhu et al. propose a continuous relaxation of the previous optimization problem:\n\nwhere the term $\\sum_{x_{i} \\in X} \\mathbf{f}_{i}^{2} = \\mathbf{f}^{T} \\mathbf{f}$ is a $L_{2}$ regularizer over $\\mathbf{f}$, weighted by a parameter $\\epsilon > 0$ which ensures that the optimization problem has a unique global solution.\n\nThe parameter $\\epsilon$ can be interpreted as the decay of the propagation process: as the distance from a labeled instance within the similarity graph increases, the confidence in the classification (as measured by the continuous label) gets closer to zero.\n\nThis optimization problem has a unique, global solution that can be calculated in closed-form. Specifically, the optimal (relaxed) discriminant function $f : X \\mapsto \\mathbb{R}$ is given by $\\mathbf{\\hat{f}} = \\left[ \\mathbf{f}_{L}, \\mathbf{f}_{U} \\right]^{T}$, where $\\mathbf{\\hat{f}}_{L} = \\mathbf{y}_{L}$ (i.e. labels for labeled examples in $L$ coincide with training labels), while $\\mathbf{\\hat{f}}_{U}$ is given by:\n\nwhere $\\mathbf{L} = \\mathbf{D} - \\mathbf{W}$ is the graph Laplacian of the similarity graph with adjacency matrix $\\mathbf{W}$, and $\\mathbf{D}$ is a diagonal matrix such that $\\mathbf{D}_{ii} = \\sum_{j} \\mathbf{W}_{ij}$."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8657781,"math_prob":0.9989867,"size":5242,"snap":"2021-21-2021-25","text_gpt3_token_len":1332,"char_repetition_ratio":0.14852998,"word_repetition_ratio":0.010012516,"special_character_ratio":0.25982448,"punctuation_ratio":0.095918365,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999528,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-16T21:12:34Z\",\"WARC-Record-ID\":\"<urn:uuid:90a7ca33-73fd-40d6-a75c-bce08213fcad>\",\"Content-Length\":\"17596\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7322c4f2-cf05-441b-b9f4-08b3c1641535>\",\"WARC-Concurrent-To\":\"<urn:uuid:8b1226cd-0c2a-4817-a152-f4e7a056bd22>\",\"WARC-IP-Address\":\"148.251.85.242\",\"WARC-Target-URI\":\"http://www.neuralnoise.com/2016/gaussian-fields/\",\"WARC-Payload-Digest\":\"sha1:E5FZWZHKVCCMHGS7J4S5WSXXIDLFMG2K\",\"WARC-Block-Digest\":\"sha1:V7EISJDUXPXGMGK25V3LALZKXXJW2SCI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243989914.60_warc_CC-MAIN-20210516201947-20210516231947-00519.warc.gz\"}"} |
https://data.epo.org/publication-server/html-document?PN=EP3550726%20EP%203550726&iDocId=6391744&disclaimer=true&citations=true | [
"(19)",
null,
"(11)EP 3 550 726 B1\n\n (12) EUROPEAN PATENT SPECIFICATION\n\n (45) Mention of the grant of the patent: 04.11.2020 Bulletin 2020/45\n\n (21) Application number: 19172900.3\n\n (22) Date of filing: 20.05.2011\n(51)International Patent Classification (IPC):\n H03M 7/40(2006.01)\n\n (54) METHODS AND DEVICES FOR REDUCING SOURCES IN BINARY ENTROPY CODING AND DECODINGVERFAHREN UND VORRICHTUNGEN ZUR REDUZIERUNG VON QUELLEN FÜR BINÄRE ENTROPIECODIERUNG UND -DECODIERUNGPROCÉDÉS ET DISPOSITIFS DE RÉDUCTION DE SOURCES DANS LE CODAGE ET LE DÉCODAGE D'ENTROPIE BINAIRE\n\n (84) Designated Contracting States: AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR\n\n (30) Priority: 21.05.2010 US 34702710 P\n\n (43) Date of publication of application: 09.10.2019 Bulletin 2019/41\n\n (62) Application number of the earlier application in accordance with Art. 76 EPC: 11782837.6 / 2572455\n\n (73) Proprietor: BlackBerry Limited Waterloo, Ontario N2K 0A7 (CA)\n\n (72) Inventors: HE, DakeWaterloo, Ontario N2K 0A7 (CA)KORODI, Gergely FerencWaterloo, Ontario N2K 0A7 (CA)\n\n (74) Representative: Hanna Moore + Curley Garryard House 25/26 Earlsfort TerraceDublin 2, D02 PX51Dublin 2, D02 PX51 (IE)\n\n(56)References cited: :\n EP-A1- 2 124 343\n\n• KARWOWSKI DAMIAN ET AL: \"Improved context-adaptive arithmetic coding in H.264/AVC\", 2009 17TH EUROPEAN SIGNAL PROCESSING CONFERENCE, IEEE, 24 August 2009 (2009-08-24), pages 2216-2220, XP032758765, ISBN: 978-1-61738-876-7 [retrieved on 2015-04-01]\n• MRAK M ET AL: \"Comparison of context-based adaptive binary arithmetic coders in video compression\", VIDEO/IMAGE PROCESSING AND MULTIMEDIA COMMUNICATIONS, 2003. 4TH EURASI P CONFERENCE FOCUSED ON 2-5 JULY 2003, PISCATAWAY, NJ, USA,IEEE, vol. 1, 2 July 2003 (2003-07-02), pages 277-286, XP010650143, ISBN: 978-953-184-054-5\n• KORODI G ET AL: \"Source selection for V2V entropy coding in HEVC\", 2. JCT-VC MEETING; 21-7-2010 - 28-7-2010; GENEVA; (JOINT COLLABORATIVETEAM ON VIDEO CODING OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ); URL:HTTP://WFTP3.ITU.INT/AV-ARCH/JCTVC-SIT E/,, no. JCTVC-B034, 18 July 2010 (2010-07-18), XP030007614, ISSN: 0000-0048\n• KARWOWSKI DAMIAN: \"Improved arithmetic coding in H.264/AVC using Context-Tree Weighting and prediction by Partial Matching\", 2007 15TH EUROPEAN SIGNAL PROCESSING CONFERENCE, IEEE, 3 September 2007 (2007-09-03), pages 1270-1274, XP032772888, ISBN: 978-83-921340-4-6 [retrieved on 2015-04-30]\n\n Note: Within nine months from the publication of the mention of the grant of the European patent, any person may give notice to the European Patent Office of opposition to the European patent granted. Notice of opposition shall be filed in a written reasoned statement. It shall not be deemed to have been filed until the opposition fee has been paid. (Art. 99(1) European Patent Convention).\n\nCROSS-REFERENCE TO RELATED APPLICATIONS\n\nFIELD\n\n The present application generally relates to data compression and, in particular, to an encoder, a decoder and methods for reducing sources in binary entropy coding and decoding.\n\nBACKGROUND\n\n A number of coding schemes have been developed to encode binary data. For example, JPEG images may be encoded using Huffman codes. The H.264 standard allows for two possible entropy coding processes: Context Adaptive Variable Length Coding (CAVLC) or Context Adaptive Binary Arithmetic Coding (CABAC). CABAC results in greater compression than CAVLC, but CABAC is more computationally demanding.\n\n Recent advances in binary entropy coding have made it increasingly attractive as an encoding scheme. For example, it is possible to engage in parallel encoding, where a source input sequence is separated into parallel subsequences based on context modeling, where each subsequence contains the symbols associated with a particular probability from the context model. In another example, variable-to-variable coding with a primary codeset and a secondary codeset can improve coding efficiency. The secondary codeset is used only during \"end\" or \"flush\" events, where a sequence or subsequence of symbols does not complete a primary codeword. The subsequences of symbols associated with particular probabilities defined in the context model may be considered as being produced by different \"sources\".\n\n Nevertheless, there are still certain drawbacks to binary entropy encoding that can arise in certain circumstances. For example, in the context of parallel encoding the output bitstream may include overhead information about the length of each of the encoded subsequences. For a context model having a large number of \"sources\", i.e. probabilities, this can mean a sizable overhead. In the context of two-codeset variable-to-variable encoding, the use of the less-efficient secondary codeset can be costly, particular where there are a large number of \"sources\" and, thus, a large number of partial subsequences that need to be encoded using secondary codewords after a flush event.\n\n Karwowski Damian et al, \"Improved context-adaptive arithmetic coding in H.264/AVC\", 17th European Signal Processing Conference, 24-28 August 2009, page 2216 - 2220, discloses an improved Context-based Adaptive Binary Arithmetic Coding (CABAC), for applications in video compression, having a higher coding efficiency obtained by application of more exact data statistics estimation based on the Context-Tree Weighting (CTW) data modeling algorithm. Finite-state machines in the context modeler are replaced by binary context trees. The improved context modeler generates values of conditional probabilities in a significantly greater set of numbers as compare to the original CABAC. The core of the fast binary arithmetic codec (M-codec) is adopted to work properly with a limited set of only 128 predefined probabilities, therefore each probability calculated with the improved context modeler has to be mapped to a value from the smaller set of 128 predefined probabilities.\n\n Mrak M et al, \"Comparison of context-based adaptive binary arithmetic coders in video compression\", Video/Image Processing and Multimedia Communications, 2-5 July 2003, pages 277-286, discloses a comparison two context-based modeling methods, CABCA and Growth by Reordering and Selection by Pruning (GRASP). GRASP is a two-step approach. In the first step, a full balanced tree of a pre-defined depth is grown and in the second step, the resulting full balanced tree is pruned; branches are pruned that result in a higher cost than that of their corresponding root nodes.\n\nBRIEF SUMMARY\n\n The present application describes architectures, methods and processes for decoding data.\n\n In one aspect, the present application describes a method for decoding an encoded bitstream as specified by claim 1. Further detailed embodiments are specified in the dependent claims.\nIn yet a further aspect, the present application describes a decoder having a memory, processor, and decoding application executable by the processor that, when executed, configures the processor to perform one of the decoding processes described herein.\nIn another aspect, the present application describes a tangible computer-readable medium storing computer-executable instructions\nwhich, when executed by one or more processors, implement the steps of the decoding processes described herein.\nIt will be understood that the reference herein to a processor is not limited to a single processor computer architecture and can include multi-core processors, multiple processors, and distributed\narchitectures, in some embodiments. Only the embodiments for the method for decoding, decoder and associated computer-readable medium storing computer-executable instructions implementing the steps of the method for decoding are to be considered embodiments of the invention. The other so called embodiments are to be considered only examples.\n\n Other aspects and features of the present application will be understood by those of ordinary skill in the art from a review of the following description of examples in conjunction with the accompanying figures.\n\nBRIEF DESCRIPTION OF THE DRAWINGS\n\n Reference will now be made, by way of example, to the accompanying drawings which show example embodiments of the present application, and in which:\n\nFigure 1 shows, in block diagram form, an encoder for encoding video;\n\nFigure 2 shows, in block diagram form, a decoder for decoding video;\n\nFigure 3 shows a block diagram of an encoding process;\n\nFigure 4 shows, in flowchart form, a method of encoding input symbols using a new set of probabilities and a mapping;\n\nFigure 5 shows, in flowchart form, a method of decoding an encoded bitstream based upon a new set of probabilities and a mapping;\n\nFigure 6 shows a simplified block diagram of an example embodiment of an encoder; and\n\nFigure 7 shows a simplified block diagram of an example embodiment of a decoder.\n\n Similar reference numerals may have been used in different figures to denote similar components.\n\nDESCRIPTION OF EXAMPLE EMBODIMENTS\n\n The following description relates to data compression in general and, in particular, to the encoding of finite alphabet sources, such as a binary source. In many of the examples given below, particular applications of such an encoding and decoding scheme are given. For example, many of the illustrations below make reference to video coding. It will be appreciated that the present application is not limited to video coding or image coding.\n\n In the description that follows, example embodiments are described with reference to the H.264 standard. Those ordinarily skilled in the art will understand that the present application is not limited to H.264 but may be applicable to other video coding/decoding standards. It will also be appreciated that the present application is not necessarily limited to video coding/decoding and may be applicable to coding/decoding of any binary sources.\n\n In the description that follows, in the context of video applications the terms frame and slice are used somewhat interchangeably. Those of skill in the art will appreciate that, in the case of the H.264 standard, a frame may contain one or more slices. It will also be appreciated that certain encoding/decoding operations arc performed on a frame-by-frame basis and some arc performed on a slice-by-slice basis, depending on the particular requirements of the applicable video coding standard. In any particular embodiment, the applicable video coding standard may determine whether the operations described below are performed in connection with frames and/or slices, as the case may be. Accordingly, those ordinarily skilled in the art will understand, in light of the present disclosure, whether particular operations or processes described herein and particular references to frames, slices, or both are applicable to frames, slices, or both for a given embodiment.\n\n Reference is now made to Figure 1, which shows, in block diagram form, an encoder 10 for encoding video. Reference is also made to Figure 2, which shows a block diagram of a decoder 50 for decoding video. It will be appreciated that the encoder 10 and decoder 50 described herein may each be implemented on an application-specific or general purpose computing device, containing one or more processing elements and memory. The operations performed by the encoder 10 or decoder 50, as the case may be, may be implemented by way of application-specific integrated circuit, for example, or by way of stored program instructions executable by a general purpose processor. The device may include additional software, including, for example, an operating system for controlling basic device functions. The range of devices and platforms within which the encoder 10 or decoder 50 may be implemented will be appreciated by those ordinarily skilled in the art having regard to the following description.\n\n The encoder 10 receives a video source 12 and produces an encoded bitstream 14. The decoder 50 receives the encoded bitstream 14 and outputs a decoded video frame 16. The encoder 10 and decoder 50 may be configured to operate in conformance with a number of video compression standards. For example, the encoder 10 and decoder 50 may be H.264/AVC compliant. In other embodiments, the encoder 10 and decoder 50 may conform to other video compression standards, including evolutions of the H.264/AVC standard.\n\n The encoder 10 includes a spatial predictor 21, a coding mode selector 20, transform processor 22, quantizer 24, and entropy coder 26. As will be appreciated by those ordinarily skilled in the art, the coding mode selector 20 determines the appropriate coding mode for the video source, for example whether the subject frame/slice is of I, P, or B type, and whether particular macroblocks within the frame/slice are inter or intra coded. The transform processor 22 performs a transform upon the spatial domain data. In particular, the transform processor 22 applies a block-based transform to convert spatial domain data to spectral components. For example, in many embodiments a discrete cosine transform (DCT) is used. Other transforms, such as a discrete sine transform or others may be used in some instances. Applying the block-based transform to a block of pixel data results in a set of transform domain coefficients. The set of transform domain coefficients is quantized by the quantizer 24. The quantized coefficients and associated information, such as motion vectors, quantization parameters, etc., are then encoded by the entropy coder 26.\n\n Intra-coded frames/slices (i.e. type I) are encoded without reference to other frames/slices. In other words, they do not employ temporal prediction. However intra-coded frames do rely upon spatial prediction within the frame/slice, as illustrated in Figure 1 by the spatial predictor 21. That is, when encoding a particular block the data in the block may be compared to the data of nearby pixels within blocks already encoded for that frame/slice. Using a prediction algorithm, the source data of the block may be converted to residual data. The transform processor 22 then encodes the residual data. H.264, for example, prescribes nine spatial prediction modes for 4x4 transform blocks. In some embodiments, each of the nine modes may be used to independently process a block, and then rate-distortion optimization is used to select the best mode.\n\n The H.264 standard also prescribes the use of motion prediction/compensation to take advantage of temporal prediction. Accordingly, the encoder 10 has a feedback loop that includes a de-quantizer 28, inverse transform processor 30, and deblocking processor 32. These elements mirror the decoding process implemented by the decoder 50 to reproduce the frame/slice. A frame store 34 is used to store the reproduced frames. In this manner, the motion prediction is based on what will be the reconstructed frames at the decoder 50 and not on the original frames, which may differ from the reconstructed frames due to the lossy compression involved in encoding/decoding. A motion predictor 36 uses the frames/slices stored in the frame store 34 as source frames/slices for comparison to a current frame for the purpose of identifying similar blocks. Accordingly, for macroblocks to which motion prediction is applied, the \"source data\" which the transform processor 22 encodes is the residual data that comes out of the motion prediction process. The residual data is pixel data that represents the differences (if any) between the reference block and the current block. Information regarding the reference frame and/or motion vector may not be processed by the transform processor 22 and/or quantizer 24, but instead may be supplied to the entropy coder 26 for encoding as part of the bitstream along with the quantized coefficients.\n\n Those ordinarily skilled in the art will appreciate the details and possible variations for implementing H.264 encoders.\n\n The decoder 50 includes an entropy decoder 52, dequantizer 54, inverse transform processor 56, spatial compensator 57, and deblocking processor 60. A frame buffer 58 supplies reconstructed frames for use by a motion compensator 62 in applying motion compensation. The spatial compensator 57 represents the operation of recovering the video data for a particular intra-codcd block from a previously decoded block.\n\n The bitstream 14 is received and decoded by the entropy decoder 52 to recover the quantized coefficients. Side information may also be recovered during the entropy decoding process, some of which may be supplied to the motion compensation loop for use in motion compensation, if applicable. For example, the entropy decoder 52 may recover motion vectors and/or reference frame information for inter-coded macroblocks.\n\n The quantized coefficients are then dequantized by the dequantizer 54 to produce the transform domain coefficients, which arc then subjected to an inverse transform by the inverse transform processor 56 to recreate the \"video data\". It will be appreciated that, in some cases, such as with an intra-coded macroblock, the recreated \"video data\" is the residual data for use in spatial compensation relative to a previously decoded block within the frame. The spatial compensator 57 generates the video data from the residual data and pixel data from a previously decoded block. In other cases, such as inter-coded macroblocks, the recreated \"video data\" from the inverse transform processor 56 is the residual data for use in motion compensation relative to a reference block from a different frame. Both spatial and motion compensation may be referred to herein as \"prediction operations\".\n\n The motion compensator 62 locates a reference block within the frame buffer 58 specified for a particular inter-coded macroblock. It does so based on the reference frame information and motion vector specified for the inter-coded macroblock. It then supplies the reference block pixel data for combination with the residual data to arrive at the recreated video data for that macroblock.\n\n A deblocking process may then be applied to a reconstructed frame/slice, as indicated by the deblocking processor 60. After deblocking, the frame/slice is output as the decoded video frame 16, for example for display on a display device. It will be understood that the video playback machine, such as a computer, set-top box, DVD or Blu-Ray player, and/or mobile handheld device, may buffer decoded frames in a memory prior to display on an output device.\n\n Entropy coding is a fundamental part of all lossless and lossy compression schemes, including the video compression described above. The purpose of entropy coding is to represent a presumably decorrelated signal, often modeled by an independent, but not identically distributed process, as a sequence of bits. The technique used to achieve this must not depend on how the decorrelated signal was generated, but may rely upon relevant probability estimations for each upcoming symbol.\n\n There arc two common approaches for entropy coding used in practice: the first one is variable-length coding, which identifies input symbols or input sequences by codewords, and the second one is range (or arithmetic) coding, which encapsulates a sequence of subintervals of the [0, 1) interval, to arrive at a single interval, from which the original sequence can be reconstructed using the probability distributions that defined those intervals. Typically, range coding methods tend to offer better compression, while VLC methods have the potential to be faster. In either case, the symbols of the input sequence are from a finite alphabet.\n\n A special case of entropy coding is when the input alphabet is restricted to binary symbols. Here VLC schemes must group input symbols together to have any potential for compression, but since the probability distribution can change after each bit, efficient code construction is difficult. Accordingly, range encoding is considered to have greater compression due to its greater flexibility, but practical applications are hindered by the higher computational requirements of arithmetic codes.\n\n One of the techniques used in some entropy coding schemes, such as CAVLC and CABAC, both of which are used in H.264/AVC, is context modeling. With context modeling, each bit of the input sequence has a context, where the context is given by the bits that preceded it. In a first-order context model, the context may depend entirely upon the previous bit (symbol). In many cases, the context models may be adaptive, such that the probabilities associated with symbols for a given context may change as further bits of the sequence are processed. In yet other cases, such as those described herein, the context model is based on i.i.d. (independent and identically distributed) binary sources, wherein the context model defines a set of probabilities of producing a least probable symbol (LPS), and each bit of the input sequence is assigned one of the probabilities from the set. The bits associated with a given one of the probabilities are considered to be a sequence of symbols produced by an i.i.d. source.\n\n Reference is made to Figure 3, which shows a block diagram of an encoding process 100. The encoding process 100 includes a context modeling component 104 and an entropy coder 108. The context modeling component 104 receives the input sequence x 102, which in this example is a bit sequence (b0, b1, ..., bn). The context modeling component 104 determines a context for each bit bi based on one or more previous bits in the sequence, and determines, based on the adaptive context model, a probability pi associated with that bit bi, where the probability is the probability that the bit will be the Least Probable Symbol (LPS). The LPS may be \"0\" or \"1\" in a binary embodiment, depending on the convention or application. The context modeling component outputs the input sequence, i.e. the bits (b0, b1, ..., bn) along with their respective probabilities (p0, p1, ..., pn). The probabilities arc an estimated probability determined by the context model. This data is then input to the entropy coder 106, which encodes the input sequence using the probability information. For example, the entropy coder 106 may be a binary arithmetic coder. The entropy coder 106 outputs a bitstream 108 of encoded data.\n\n It will be appreciated each bit of the input sequence is processed serially to update the context model, and the serial bits and probability information are supplied to the entropy coder 106, which then entropy codes the bits to create the bitstream 108. Those ordinarily skilled in the art will appreciate that, in some embodiments, explicit probability information may not be passed from the context modeling component 104 to the entropy coder 106; rather, in some instances, for each bit the context modeling component 104 may send the entropy coder 106 an index or other indicator that reflects the probability estimation made be the context modeling component 104 based on the context model and the current context of the input sequence 102. The index or other indicator is indicative of the probability estimate associated with its corresponding bit.\n\n In US patent application 12/713,613, filed February 26, 2010, and owned in common herewith, a process and devices were described for encoding an input sequence using dual codesets. The primary codeset efficiently encoded the input sequence in a buffer-based encoding process. When a flush event occurred, for example due to end-of-frame, end of group-of-pictures, etc., secondary codewords from a secondary codeset were used to encode the partial subsequences associated with each probability defined by the context model.\n\n In US patent application 12/707,797, filed February 18, 2010, and owned in common herewith, methods and devices for parallel entropy coding are described. In one example, an input sequence is de-multiplexed into subsequences based on the associated probability of each input symbol (e.g. bit). The subsequences are each encoded by a respective one of the parallel entropy encoders. In similar manner at the decoder, the encoded bitstream is separated into encoded subsequences, which are each decoded by a respective one of the parallel decoders. The encoded bitstream may include overhead information regarding the length of each of the encoded subsequences to allow the decoder to separate them for parallel decoding.\n\n In both of these applications, the input sequence is first evaluated using the applicable context model to assign a probability to each input symbol, where the assigned probability is one of a predefined set of probabilities specified by the context model. In some instances, the number of probabilities can result in inefficiencies, either in the use of a large number of secondary codewords due to a flush event, or in the need to send a large amount of overhead specifying the length of each encoded subsequence for each probability value (or index). Each subsequence associated with each of the probabilities may be considered as originating from a \"source\" as that term is used herein.\n\n In accordance with one aspect, the present application proposes an encoder and decoder in which a new probability set is selected for the purposes of entropy encoding and decoding. The context modeling occurs as before using the predefined probability set associated with the context model: however, prior to entropy encoding and decoding, a new probability set is selected. A mapping defines the relationship between the predefined probability set and the new probability set. Each symbol that has been assigned a probability from the predefined probability set by the context model is then re-assigned or mapped to a new probability selected from the new probability set based on the mapping.\n\n The symbols are then binary entropy encoded using the new probabilities. In many embodiments, there are fewer new probabilities than predefined probabilities. In other words, the new probability set is smaller than the predefined probability set. In some instances, the new probability set is a subset of the probabilities in the predefined probability set, but not necessarily. With fewer probabilities, there are fewer subsequences, and thus less overhead and/or fewer secondary codewords. Accordingly, conceptually by using fewer probabilities we are allowing for fewer \"sources\". Each of the \"sources\" is still considered to be producing a random sequence of symbols as an i.i.d. binary source.\n\n Provided the loss of efficiency in merging symbols from more than one of the predefined probabilities into a single new probability is offset by the gain in reduced overhead or fewer secondary codewords, then the mapping improves the efficiency of the encoder and/or decoder.\n\n To illustrate one example embodiment, consider a finite number of probability sets P0, P1, ..., PK-1, where P0=P. For each Pk, we define a mapping Tk: P→Pk by the following rule:",
null,
"We denote R(p,q) = plog(p/q) + (1-p)log[(1-p)/(1-q)]. It will be understood that the quantity R(p,q) is a measure of relative entropy. It will be noted that the mapping expression for Tk(p) results in mapping each probability p of the probability set P to a probability q of the new probability set Pk, on the basis that the selection of probability q minimizes the relative entropy expression.\n\n In one embodiment, the Tk mappings are computed by the following algorithm, which for any probability set Pk runs in O(N) time, provided that the sets P and Pk are sorted according to decreasing probability:\n1. 1. Let i=0, j=0, p=P.\n2. 2. If i==N, the algorithm terminates.\n3. 3. While j<|Pk|-1 and Pk[j] >= p, set j=j+1\n4. 4. If j==0 or R(p, Pk[j]) < R(p, Pk[j-1]), set Tk(p) = Pk[j]. Otherwise, set Tk(p) = Pk[j-1].\n5. 5. Let i=i+1, p=P[i] and go to 2.\n\n In the off-line setting, the probability sets and mappings are known to both the encoder and the decoder. In the on-line setting, the sets and the mappings between them are updated on the run, optionally adaptively based on the input data.\n\n In the description below various example embodiments are described. The examples vary in how the probability set is selected. As a general rule, each variation introduces some overhead to the encoded sequences. This overhead may be taken into account when finding the best probability set for encoding. It will be appreciated that the underlying entropy coding method may incur some overhead on its own. For the purposes of this description the encoded sequences may be regarded as inseparable, their overhead included, and we discuss only the overhead for this framework. As a consequence for this discussion we may not use the entropy to compute the sequence lengths, but use the actual encoded length, or at least a reliable upper bound.\n\nOffline Solution\n\n In one exemplary application, the probability sets and mappings may be determined off-line (prior to both the encoding and decoding process, thereby not affecting encoding and decoding times). In one embodiment, the encoding process can be described as follows:\n\n For each k = 0, ..., K-1, the entropy coder encodes the N sources using the probability mappings Tk(pj) for pj ∈ P. That is, instead of using the probability pj for source j, the output of source j will be merged into the output of the source with probability Tk(pj), and the interleaved output of all such sources are encoded with probability Tk(pj). For index k this results in |Pk| encoded sequences of total length Lk. The encoder then picks index m that minimizes Lk (k=0, ..., K-1), writes C(m) into the output, followed by the encoded sequences for mapping m (the definition of the prefix code C() is described in US patent application 12/707,797). The overhead in this example is |C(m)|.\n\n If the number K of candidate sets is low, choosing the best one can be done by an exhaustive search, that is, trying out each of P0, ..., PK-1. This increases encoding computational complexity, but it does not affect the decoder. If the number of candidate sets is too high for an exhaustive linear search, we may use a heuristic search, such as a search based on gradients evaluated from neighbouring encoded length values.\n\n The decoder reads out the value of m from the decoded file, after which the probability value Tm(pj) is used instead of pj ∈ P for the sources Pm instead of P. That is, whenever the decoder needs to determine a symbol from source j, it looks up the source j' to which j is mapped based on Pm and Tm, then decodes the next bit from the encoded data for j' using probability Tm(pj).\n\n One embodiment of the present application may be illustrated by way of the following example. Let P be the probability values defined for CABAC: pk = 0.5αk for k=0, ..., 63, α = (2*0.01875)1/63. Let P0=P={pk|k=0, ..., 63}, and for k = 0, ..., 9:",
null,
"This creates ten probability sets, each of them the subset of P, and with decreasing size, evenly distributed probability values, but retaining the original range (p0 to p63). Since the size of Pk is smaller than the size of P0, the overhead associated to encoding with Pk is also smaller, but the encoded length may be higher. The proposed encoder encodes the input symbols with each of P0, ..., P9, and chooses the one that gives the shortest (overhead + payload) length.\n\nOnline Solution with Fixed Source Probabilities and Dynamic Probability Sets\n\n In another exemplary application, the probability sets and/or the mappings may be determined online, i.e. dynamically. Thus for video encoding the sets and mappings may be different for different slices or frames or groups of frames (GOPs). As opposed to the offline solution outlined above, here we have to specify how to determine the sets P1, ..., PK-1 and how to inform the decoder about the mappings Tm(pj).\n\n Choosing the candidate sets P1, ..., PK-1 may be done based on probabilities, so that the most frequent values are collected in each set (see the example below). Once these sets are known, the mappings Tk are determined with the algorithm above, and the best candidate m is selected using the methods described above in the offline example. After this, the encoder may write C(|Pm|), C(Pm(0)), C(Pm(1)), ..., C(Pm(|Pm|-1)) into the encoded file, followed by the encoded sequences. We note that a probability is encoded by its index in P. For example, if Pm(0)=0.4, and P={0.5, 0.4, 0.3, ...}, then C(Pm(0)) is analogous to C(1). The overhead of this variation is |C(|Pm|)| + |C(Pm(0))| + ... + |C(Pm(|Pm|-1))|. This also means that for this exemplary application the set Pk is a subset of P.\n\n The decoder reads the number of merged sources from the file, followed by their probability indices, and from this it reconstructs Pm. Then it computes the mapping Tm.\n\n One example embodiment may be illustrated by way of the following example. Let P be the probability values defined for CABAC: pk = 0.5αk for k=0, ..., 63, α = (2*0.01875)1/63. For each slice or each frame or each GOP, let Pk denote the subset of P consisting of the (64-k) most frequently used probabilities in P for the current slice or frame, where k=0, 1, ..., 63. The mappings Tk: P→Pk are determined by the algorithm above.\n\n In this example, we have 64 probability sets. Assuming that a fixed length code is used to communicate Pk to the decoder, we find the index number m at which the overhead of using Pm is the smallest among the 64 candidate choices k=0, 1, ..., 63.\n\nOnline Solution with Adaptive Source Probabilities and Static Probability Sets\n\n In another exemplary application, the probability values of P are not known in advance, but rather calculated empirically from the observed data. The context model separates the input symbols into different source buffers, and the source probability is estimated as (number of ones in buffer) / (number of symbols in buffer) for each source. We denote the empirical probabilities by Q = {q0, ..., qN-1}. Then, for the fixed P0, ..., PK-1 probability sets, the encoder first determines the mappings between Pk and Q, then finds the one Pm that gives the smallest encoded length Lm, based on cither an exhaustive or heuristical search, as explained above.\n\n Since Q is in general different from P. the actual mappings Tk(qj) may differ from the static mappings Tk(pj), which are known to both the encoder and decoder. Since the decoder needs Tk(qj), these must be transmitted. In one application, the encoder may just include C(Tm(q0)), C(Tm(q1)), ..., C(Tm(qN-1)) in the overhead, as was described above. However, we point out that the correlation between the two mappings is usually high. Therefore, in another embodiment, the encoder transmits Tm(qj) with a run-length scheme based on the known mappings Tm(pj). In one example, this may work as follows: first, set M to 0. Let j be the first index from M for which Tm(qM+j) ≠ Tm(pM+j), or j=N-M if all of them are equal. Then encode C(j), and if j < N-M, encode C(Tm(qM+j)) as well. Then set M to j+1, and if M<N, repeat the procedure, otherwise stop. The overhead of this variation is |C(m)| + |C(jl)| + |C(Tm(qj1))| + |C(j2)| + |C(Tm(qj2))| + ... + |C(jr)|.\n\n To illustrate by way of an example, let P={0.5, 0.4, 0.3, 0.25, 0.2, 0.15, 0.12, 0.10}, and Pm = {0.5, 0.32, 0.18, 0.12}. Then the mapping algorithm gives Tm(0.5)=0.5, Tm(0.4)=0.32, Tm(0.3)=0.32, Tm(0.25)=0.32, Tm(0.2)=0.18, Tm(0.15)=0.18, Tm(0.12)=0.12 and Tm(0.10)=0.12. Suppose that the empirical probabilities are Q={0.48, 0.45, 0.31, 0.20, 0.18, 0.26, 0.08, 0.12}. Then the actual mapping, by the same algorithm, is Tm(0.48)=0.5, Tm(0.45)=0.5, Tm(0.31)=0.32, Tm(0.20)=0.18, Tm(0.18)=0.18, Tm(0.26)=0.32, Tm(0.08)=0.12 and Tm(0.12)=0.12. Then the run-length sequence is (1, 0.5), (1, 0.18), (1, 0.32), (2). For prefix encoding the probability values are represented by their indices in Pm: 0.5 is 0, 0.32 is 1, 0.18 is 2, 0.12 is 3. The encoded header is therefore C(m), C(1), C(0), C(1), C(2), C(1), C(1), C(2).\n\nOnline Solution with Adaptive Source Probabilities and Dynamic Probability Sets\n\n In another exemplary application, the probability values Q are calculated from the observed data, as described immediately above, and the probability sets P1, ..., PK-1 are determined dynamically. Hence, first the empirical probabilities Q are determined from the (number of ones in buffer) / (number of symbols in buffer) for each source. From this point, this variation follows the one given above: first the candidate sets P0, ..., PK-1 are determined (where P0 is the predetermined probability set and P1,...PK-1 are the dynamically determined probability sets), the mappings Tk are computed, and the best candidate m is selected using the methods described above. After this, the encoder writes C(|Pm|), C(Pm(0)), C(Pm(1)), ..., C(Pm(|Pm|-1)) into the encoded file, followed by the actual mapping Tm(qj), followed by the encoded sequences. The overhead of this variation is |C(|Pm|)| + |C(Pm(0))| + ... + |C(Pm(|Pm|-1))| + |C(j1)| + |C(Tm(qj1))| + |C(j2)| + |C(Tm(qj2))| + ... + |C(jr)|. In this specific example, it is presumed that Pk is a subset of P. The actual mapping may be written into the overhead in the same manner as described above. For example, run-length coding is an option.\n\n The decoder first reads the number of merged sources from the encoded file, then the merged source probabilities indexed by elements of P, as given by C(Pm(0)), ..., C(Pm(|Pm|-1)). After this, the decoder can compute the mapping Tm(pj) between P and Pm. Using this mapping and the run-length coded actual mapping, it then proceeds to compute the actual mapping Tm(qj). Once ready, it can start decoding the sequences, decoding from the buffer of Tm(qj) any time a request for a new symbol is made from source j.\n\n The selection of the probability set to use from amongst a number of possible candidate probability sets may be application dependent. In some instances, the mapping for each of the candidate sets may be predefined and known. In some instances, the mapping may be determined dynamically, particularly if there are a large number of candidate probability sets.\n\n To select the probability set to use for a given video, frame, etc., in one embodiment each of the candidates may be tested by using that set to encode the data and the optimal probability set selected based on the testing. This is the brute force approach.\n\n In another embodiment, the selection may be based on the data type or characteristics. For example, one or more particular candidate probabilities sets may be designated for I or P frames, whereas a different candidate probability set or sets may be designated for B frames. Yet other characteristics of the video data may used as the basis for determining the candidate set or sets.\n\n In yet another embodiment, as described above, the selection may be based on an actual distribution of bits amongst the predefined probabilities, i.e. the statistics for the actual data, for example for the frame, etc.. For example, any probabilities that have no bits assigned may dropped; or any probabilities having few bits may be merged with neighbouring probabilities.\n\n In one embodiment, the dynamic merging or consolidation of predetermined probabilities based on actual data statistics may be based upon obtaining approximately the same number of bits for each probability, i.e. balancing.\n\n In yet another embodiment, empirical probability data may be used to dynamically determine the mapping as opposed to the predefined probabilities, since in some instances (particularly at higher order indices) the actual probability statistics may vary considerably from the probabilities predicted by the context model.\n\n Reference is now made to Figure 4, which shows, in flowchart form, one example process 200 for encoding an input sequence by reducing sources. The process 200 begins in operation 202 with application of the context model. In particular, each symbol of the input sequence is processed and assigned an estimated probability. The estimated probability is selected from a defined probability set using the context of the symbol and the context model. As described above, the defined probability set may be predefined and associated with the context model. An example is the 64 states defined in CABAC.\n\n In operation 204, a new probability set is selected. In a simple embodiment, this may mean selecting the sole candidate new probability set. In other embodiments, this operation may involve testing candidate probability sets to see which of them results in the shortest overall length of encoded sequences. In some instances, this operation may involve dynamically generating a new candidate probability set based on actual statistics from the input sequence, as described above.\n\n Operation 206 involves defining or generating the mapping between the predefined probability set and the new probability set. It will be understood that the mapping may be predetermined and stored if the candidate probability sets and the predetermined probability set are predetermined. It will also be understood that the mapping operation may form part of operation 204, i.e. the selection of the new probability set from amongst candidate sets, since a mapping is defined for each candidate set. It will also be appreciated that the mapping means that cach symbol that was assigned a probability from the predefined probability set in operation 202 may be assigned a probability from the new probability set based upon the mapping.\n\n The symbols are entropy encoded in operation 208 to produce an encoded bitstream. The entropy encoding is based upon the new probabilities assigned in operation 206. In other words, the entropy encoding is based upon the new, and in many embodiments reduced-size, probability set.\n\n Figure 5 shows, in flowchart form, a process 300 for decoding an encoded bitstream in accordance with the present application. The process 300 begins in operation 302 with reading information from the bitstream regarding the new probability set. As described above, in some embodiments, this information may simply indicate the selection of one of a plurality of predefined candidate probability sets already known to the decoder. In some embodiments, this information may identify particular probabilities from the predefined probability set that are included in the new probability set, i.e. the information may define or identify a subset of the predefined probability set. In yet other embodiments, this information may provide explicit probability information. In all embodiments, through operation 302 the decoder obtains the new probability set.\n\n Operation 304 includes determining the mapping between the new probability set and the predetermined probability set. The mapping may be predefined such that operation 304 involves retrieving the predefined mapping from memory or other storage. In some embodiments, the mapping may be specified in the bitstream and the decoder may read the mapping from the bitstream. In yet other embodiments, the mapping may be generated by the decoder, for example using the relative entropy algorithm described herein.\n\n In operation 306, the encoded bitstream is entropy decoded. Entropy decoding includes obtaining decoded subsequences, where each decoded subsequence is associated with a probability from the new set of probabilities. The entropy decoding in operation 306 is based upon the context model, which provides a probability estimate for each symbol, and the mapping, which maps the probability estimate to its corresponding probability from the new set of probabilities and, thus, to one of the decoded subsequenes.\n\n Reference now made to Figure 6, which shows a simplified block diagram of an example embodiment of an encoder 900. The encoder 900 includes a processor 902, memory 904, and an encoding application 906. The encoding application 906 may include a computer program or application stored in memory 904 and containing instructions for configuring the processor 902 to perform steps or operations such as those described herein. For example, the encoding application 906 may encode and output video bitstreams encoded in accordance with the probability mapping encoding process described herein. The encoding application 906 may include an entropy encoder 26 configured to entropy encode input sequences and output a bitstream using one or more of the processes described herein. It will be understood that the encoding application 906 may be stored in on a computer readable medium, such as a compact disc, flash memory device, random access memory, hard drive, etc.\n\n In some embodiments, the processor 902 in the encoder 900 may be a single processing unit configured to implement the instructions of the encoding application 906. In some other embodiments, the processor 902 may include more than one processing unit capable of executing instructions in parallel. The multiple processing units may be logically or physically separate processing units. In some instances, the encoder 900 may include N or more processing units, wherein N of the processing units are configured by the encoding application 906 to operate as parallel entropy coders for implementing the methods described herein. It will further be appreciated that in some instances, some or all operations of the encoding application 906 and one or more processing units may be implemented by way of application-specific integrated circuit (ASIC), etc.\n\n Reference is now also made to Figure 7, which shows a simplified block diagram of an example embodiment of a decoder 1000. The decoder 1000 includes a processor 1002, a memory 1004, and a decoding application 1006. The decoding application 1006 may include a computer program or application stored in memory 1004 and containing instructions for configuring the processor 1002 to perform steps or operations such as those described herein. The decoding application 1006 may include an entropy decoder 1008 configured to receive a bitstream encoded in accordance with the probability mapping entropy encoding process described herein, and to extract encoded subsequences from the bitstream and decode them in using mapped probabilities. The decoding application 1006 may configure the processor to decode the encoded subsequences in parallel to produce parallel decoded sequences and to interleave the symbols of the decode sequences to produce a reconstructed sequences. It will be understood that the decoding application 1006 may be stored in on a computer readable medium, such as a compact disc, flash memory device, random access memory, hard drive, etc.\n\n In some embodiments, the processor 1002 in the decoder 1000 may be a single processing unit configured to implement the instructions of the decoding application 1006. In some other embodiments, the processor 1002 may include more than one processing unit capable of executing instructions in parallel. The multiple processing units may be logically or physically separate processing units. In some instances, the decoder 1000 may include d, N or more or fewer processing units, wherein the processing units are configured by the decoding application 1006 to operate as parallel entropy decoders for implementing the methods described herein. It will further be appreciated that in some instances, some or all operations of the decoding application 1006 and one or more processing units may be implemented by way of application-specific integrated circuit (ASIC), etc.\n\n It will be appreciated that the decoder and/or encoder according to the present application may be implemented in a number of computing devices, including, without limitation, servers, suitably programmed general purpose computers, set-top television boxes, television broadcast equipment, and mobile devices. The decoder or encoder may be implemented by way of software containing instructions for configuring a processor to carry out the functions described herein. The software instructions may be stored on any suitable tangible computer-readable medium or memory, including CDs, RAM, ROM, Flash memory, etc.\n\n It will be understood that the encoder and decoder described herein and the module, routine, process, thread, or other software component implementing the described method/process for configuring the encoder may be realized using standard computer programming techniques and languages. The present application is not limited to particular processors, computer languages, computer programming conventions, data structures, other such implementation details. Those skilled in the art will recognize that the described processes may be implemented as a part of computer-executable code stored in volatile or non-volatile memory, as part of an application-specific integrated chip (ASIC), etc.\n\n Certain adaptations and modifications of the described embodiments can be made. Therefore, the above discussed embodiments are considered to be illustrative and not restrictive.\n1. A method (300) for decoding an encoded bitstream to obtain a sequence of symbols, the symbols belonging to a finite alphabet, wherein a context model specifies a predefined probability set, and wherein each symbol of the sequence of symbols is assigned a probability from the predefined probability set on the basis of the context model (202), the method comprising:\n\nreading, from the bitstream, information identifying a new probability set (302), wherein the new probability set is not identical to the predefined probability set and is a subset of the predefined probability set;\n\nassigning, to each of the symbols of the sequence of symbols, a respective probability from the new probability set based upon a mapping (304), wherein the mapping maps each of the probabilities of the predefined probability set to a respective one of the probabilities from the new probability set, wherein assigning to each of the symbols a respective probability, includes defining the mapping by determining, for each probability of the predefined probability set, to which one of the probabilities from the new probability set it is mapped, and wherein determining is based on selecting the one of the probabilities from the new probability set that minimizes a relative entropy expression; and\n\nentropy decoding the encoded bitstream on the basis of their respective assigned probabilities from the new probability set (306).\n\n2. The method claimed in any one of claim 1, wherein the information identifying the new probability set comprises information identifying one of a predefined set of candidate probability sets.\n\n3. The method claimed in any one of claim 1, wherein the information identifying the new probability set specifies the probabilities from the predefined probability set that are included in the new probability set.\n\n4. The method claimed in claim 3, wherein reading further comprises dynamically generating the mapping by determining, for each probability of the predefined probability set, to which one of the probabilities from the new probability set it is mapped based on minimizing a relative entropy expression.\n\n5. A decoder (1000) for decoding an encoded bitstream to obtain a sequence of symbols, the symbols belonging to a finite alphabet, wherein a context model specifies a predefined probability set, and wherein each symbol of the sequence of symbols assigned with a probability from the predefined probability set on the basis of the context model, the decoder comprising:\n\na memory (1004);\n\na processor (1002);\n\na decoding application (1006) executable by the processor and which, when executed, configures the processor to perform the method claimed in any one of claims 1 to 4.\n\n6. A tangible computer-readable medium storing computer-executable instructions which, when executed by one or more processors, implement the steps of the method claimed in any one of claims 1 to 4.\n\n1. Verfahren (300) zum Decodieren für einen codierten Bitstrom, um eine Folge von Symbolen zu erhalten, wobei die Symbole zu einem endlichen Alphabet gehören, wobei ein Kontextmodell einen vordefinierten Wahrscheinlichkeitssatz spezifiziert, und wobei jedem Symbol der Folge von Symbolen basierend auf dem Kontextmodell (202) eine Wahrscheinlichkeit aus dem vordefinierten Wahrscheinlichkeitssatz zugewiesen wird, wobei das Verfahren umfasst:\n\nLesen, aus dem Bitstrom, von Informationen die einen neuen Wahrscheinlichkeitssatz (302) identifizieren, wobei der neue Wahrscheinlichkeitssatz nicht mit dem vordefinierten Wahrscheinlichkeitssatz identisch ist und eine Teilmenge des vordefinierten Wahrscheinlichkeitssatzes ist;\n\nZuweisen, zu jedem der Symbole der Folge von Symbolen, einer jeweiligen Wahrscheinlichkeit aus dem neuen Wahrscheinlichkeitssatz basierend auf einer Abbildung (304), wobei die Abbildung jede der Wahrscheinlichkeiten des vordefinierten Wahrscheinlichkeitssatzes auf eine jeweilige eine der Wahrscheinlichkeiten aus dem neuen Wahrscheinlichkeitssatz abbildet, wobei das Zuweisen einer jeweiligen Wahrscheinlichkeit zu jedem der Symbole ein Definieren der Abbildung durch Bestimmen, für jede Wahrscheinlichkeit des vordefinierten Wahrscheinlichkeitssatzes, in der eine der Wahrscheinlichkeiten aus dem neuen Wahrscheinlichkeitssatz diese abbildet, beinhaltet, und wobei das Bestimmen auf einem Auswählen der einen der Wahrscheinlichkeiten aus dem neuen Wahrscheinlichkeitssatz, das einen relativen Entropieausdruck minimiert, basiert; und\n\nEntropiedecodieren für den codierten Bitstrom basierend auf seinen jeweiligen zugewiesenen Wahrscheinlichkeiten aus dem neuen Wahrscheinlichkeitssatz (306).\n\n2. Verfahren nach einem der Ansprüche 1, wobei die Informationen, die den neuen Wahrscheinlichkeitssatz identifizieren, Informationen umfassen, die einen von einem vordefinierten Satz von Kandidatenwahrscheinlichkeitssätzen identifizieren.\n\n3. Verfahren nach einem der Ansprüche 1, wobei die Informationen, die den neuen Wahrscheinlichkeitssatz identifizieren, die Wahrscheinlichkeiten aus dem vordefinierten Wahrscheinlichkeitssatz spezifizieren, die in dem neuen Wahrscheinlichkeitssatz beinhaltet sind.\n\n4. Verfahren nach Anspruch 3, wobei das Lesen ferner ein dynamisches Erzeugen der Abbildung durch Bestimmen für jede Wahrscheinlichkeit des vordefinierten Wahrscheinlichkeitssatzes, in der eine der Wahrscheinlichkeiten aus dem neuen Wahrscheinlichkeitssatz diese basierend auf einem Minimieren eines relativen Entropieausdrucks abbildet, umfasst.\n\n5. Decodierer (1000) zum Decodieren für einen codierten Bitstrom, um eine Folge von Symbolen zu erhalten, wobei die Symbole zu einem endlichen Alphabet gehören, wobei ein Kontextmodell einen vordefinierten Wahrscheinlichkeitssatz spezifiziert, und wobei jedem Symbol der Folge von Symbolen basierend auf dem Kontextmodell eine Wahrscheinlichkeit aus dem vordefinierten Wahrscheinlichkeitssatz zugewiesen ist, wobei der Decoder umfasst:\n\neinen Speicher (1004);\n\neinen Prozessor (1002);\n\neine Decodierungsanwendung (1006), die durch den Prozessor ausführbar ist und die, wenn sie ausgeführt wird, den Prozessor konfiguriert, um das beanspruchte Verfahren nach einem der Ansprüche 1 bis 4 durchzuführen.\n\n6. Greifbares computerlesbares Medium, das computerausführbare Anweisungen speichert, die, wenn sie durch einen oder mehrere Prozessoren aufgeführt werden, die Schritte des beanspruchten Verfahrens nach einem der Ansprüche 1 bis 4 implementieren.\n\n1. Procédé (300) de décodage d'un train de bits codé pour obtenir une séquence de symboles, les symboles appartenant à un alphabet fini, un modèle de contexte spécifiant un ensemble prédéfini de probabilités, et chaque symbole de la séquence de symboles se voyant attribuer une probabilité à partir de l'ensemble prédéfini de probabilités sur la base du modèle de contexte (202), le procédé comprenant :\n\nla lecture, à partir du train de bits, d'informations identifiant un nouvel ensemble de probabilités (302), le nouvel ensemble de probabilités n'étant pas identique à l'ensemble prédéfini de probabilités et étant un sous-ensemble de l'ensemble prédéfini de probabilités ;\n\nl'attribution, à chacun des symboles de la séquence de symboles, d'une probabilité respective à partir du nouvel ensemble de probabilités sur la base d'un mappage (304), le mappage mappant chacune des probabilités de l'ensemble prédéfini de probabilités à une probabilité respective des probabilités du nouvel ensemble de probabilités, l'attribution à chacun des symboles d'une probabilité respective comprenant la définition du mappage en déterminant, pour chaque probabilité de l'ensemble prédéfini de probabilités, à laquelle des probabilités du nouvel ensemble de probabilités il est mappé, et la détermination étant basée sur la sélection de la probabilité, parmi les probabilités du nouvel ensemble de probabilités, qui réduit au minimum l'expression de l'entropie relative ; et\n\nle décodage entropique du train de bits codé sur la base de leurs probabilités attribuées respectives à partir du nouvel ensemble de probabilités (306).\n\n2. Procédé selon l'une quelconque des revendications 1, dans lequel les informations identifiant le nouvel ensemble de probabilités comprennent des informations identifiant un ensemble candidat de probabilités dans un ensemble prédéfini d'ensembles candidats de probabilités.\n\n3. Procédé selon l'une quelconque des revendications 1, dans lequel les informations identifiant le nouvel ensemble de probabilités spécifient les probabilités, à partir de l'ensemble prédéfini de probabilités, qui font partie du nouvel ensemble de probabilités.\n\n4. Procédé selon la revendication 3, dans lequel la lecture comprend en outre la génération dynamique du mappage en déterminant, pour chaque probabilité de l'ensemble prédéfini de probabilités, à laquelle des probabilités du nouvel ensemble de probabilités il est mappé sur la base de la réduction au minimum d'une expression de l'entropie relative.\n\n5. Décodeur (1000) servant à décoder un train de bits codé pour obtenir une séquence de symboles, les symboles appartenant à un alphabet fini, un modèle de contexte spécifiant un ensemble prédéfini de probabilités, et chaque symbole de la séquence de symboles se voyant attribuer une probabilité à partir de l'ensemble prédéfini de probabilités sur la base du modèle de contexte, le décodeur comprenant :\n\nune mémoire (1004) ;\n\nun processeur (1002) ;\n\nune application de décodage (1006) exécutable par le processeur et qui, lorsqu'elle est exécutée, configure le processeur de façon à ce qu'il réalise le procédé selon l'une quelconque des revendications 1 à 4.\n\n6. Support tangible lisible par ordinateur stockant des instructions exécutables par ordinateur qui, lorsqu'elles sont exécutées par un ou plusieurs processeurs, mettent en oeuvre les étapes du procédé selon l'une quelconque des revendications 1 à 4.",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"REFERENCES CITED IN THE DESCRIPTION\n\nThis list of references cited by the applicant is for the reader's convenience only. It does not form part of the European patent document. Even though great care has been taken in compiling the references, errors or omissions cannot be excluded and the EPO disclaims all liability in this regard.\n\nPatent documents cited in the description\n\nNon-patent literature cited in the description\n\n• KARWOWSKI DAMIAN et al.Improved context-adaptive arithmetic coding in H.264/AVC17th European Signal Processing Conference, 2009, 2216-2220 \n• MRAK M et al.Comparison of context-based adaptive binary arithmetic coders in video compressionVideo/Image Processing and Multimedia Communications, 2003, 277-286"
] | [
null,
"https://data.epo.org/publication-server/img/EPO_BL_WORD.jpg",
null,
"https://data.epo.org/publication-server/image",
null,
"https://data.epo.org/publication-server/image",
null,
"https://data.epo.org/publication-server/image",
null,
"https://data.epo.org/publication-server/image",
null,
"https://data.epo.org/publication-server/image",
null,
"https://data.epo.org/publication-server/image",
null,
"https://data.epo.org/publication-server/image",
null,
"https://data.epo.org/publication-server/image",
null,
"https://data.epo.org/publication-server/image",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8948221,"math_prob":0.8830393,"size":46175,"snap":"2022-05-2022-21","text_gpt3_token_len":9968,"char_repetition_ratio":0.18227892,"word_repetition_ratio":0.104211666,"special_character_ratio":0.22551164,"punctuation_ratio":0.12274327,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97343856,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-28T07:43:16Z\",\"WARC-Record-ID\":\"<urn:uuid:496757dc-37b2-46cc-aae8-1bf4c6829f69>\",\"Content-Length\":\"87609\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c9cde661-a913-4d0b-aff9-33d8a98ba4d9>\",\"WARC-Concurrent-To\":\"<urn:uuid:5c8ed537-fdae-4519-87d8-f4158608bc1a>\",\"WARC-IP-Address\":\"195.6.57.68\",\"WARC-Target-URI\":\"https://data.epo.org/publication-server/html-document?PN=EP3550726%20EP%203550726&iDocId=6391744&disclaimer=true&citations=true\",\"WARC-Payload-Digest\":\"sha1:URPC6MZBO73EEDX6DXA244GZRE3LSXD7\",\"WARC-Block-Digest\":\"sha1:ZJYEPF6T5SUDCJSZ67DZYRYB2MDVW55P\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320305423.58_warc_CC-MAIN-20220128074016-20220128104016-00375.warc.gz\"}"} |
https://arc2.nesa.nsw.edu.au/view/byarea/course/15240/question/1414 | [
"# Question 8\n\nSummary: Calculate values of constants in equation modelling exponential population growth. Find the probability of a single-stage event and of a multi-stage event. Determine the maximum and minimum values of the rate of change of a quantity. Sketch the graph of the quantity as a function of time and identify any points on the graph where the concavity changes."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8270815,"math_prob":0.9947029,"size":439,"snap":"2022-40-2023-06","text_gpt3_token_len":87,"char_repetition_ratio":0.121839084,"word_repetition_ratio":0.0,"special_character_ratio":0.19134396,"punctuation_ratio":0.06329114,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99140143,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-08T00:05:46Z\",\"WARC-Record-ID\":\"<urn:uuid:30c0d948-b173-4831-a86d-a52414e3b35f>\",\"Content-Length\":\"29740\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c67b6dc5-8608-4b68-892a-ebdd043a0a93>\",\"WARC-Concurrent-To\":\"<urn:uuid:9f0d65aa-e736-44b1-ae45-c05236fd46fd>\",\"WARC-IP-Address\":\"18.154.227.124\",\"WARC-Target-URI\":\"https://arc2.nesa.nsw.edu.au/view/byarea/course/15240/question/1414\",\"WARC-Payload-Digest\":\"sha1:AJRKK7TWCJ2CV67DHOQYMMHVWEQ6PCVV\",\"WARC-Block-Digest\":\"sha1:75GB2RBIXV7W6BZMSQ3NXWXZRKFUKU6C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500664.85_warc_CC-MAIN-20230207233330-20230208023330-00595.warc.gz\"}"} |
https://www.colorhexa.com/3a7474 | [
"# #3a7474 Color Information\n\nIn a RGB color space, hex #3a7474 is composed of 22.7% red, 45.5% green and 45.5% blue. Whereas in a CMYK color space, it is composed of 50% cyan, 0% magenta, 0% yellow and 54.5% black. It has a hue angle of 180 degrees, a saturation of 33.3% and a lightness of 34.1%. #3a7474 color hex could be obtained by blending #74e8e8 with #000000. Closest websafe color is: #336666.\n\n• R 23\n• G 45\n• B 45\nRGB color chart\n• C 50\n• M 0\n• Y 0\n• K 55\nCMYK color chart\n\n#3a7474 color description : Dark moderate cyan.\n\n# #3a7474 Color Conversion\n\nThe hexadecimal color #3a7474 has RGB values of R:58, G:116, B:116 and CMYK values of C:0.5, M:0, Y:0, K:0.55. Its decimal value is 3830900.\n\nHex triplet RGB Decimal 3a7474 `#3a7474` 58, 116, 116 `rgb(58,116,116)` 22.7, 45.5, 45.5 `rgb(22.7%,45.5%,45.5%)` 50, 0, 0, 55 180°, 33.3, 34.1 `hsl(180,33.3%,34.1%)` 180°, 50, 45.5 336666 `#336666`\nCIE-LAB 45.152, -18.881, -5.861 11.142, 14.651, 18.763 0.25, 0.329, 14.651 45.152, 19.769, 197.245 45.152, -25.037, -5.408 38.276, -15.023, -2.27 00111010, 01110100, 01110100\n\n# Color Schemes with #3a7474\n\n• #3a7474\n``#3a7474` `rgb(58,116,116)``\n• #743a3a\n``#743a3a` `rgb(116,58,58)``\nComplementary Color\n• #3a7457\n``#3a7457` `rgb(58,116,87)``\n• #3a7474\n``#3a7474` `rgb(58,116,116)``\n• #3a5774\n``#3a5774` `rgb(58,87,116)``\nAnalogous Color\n• #74573a\n``#74573a` `rgb(116,87,58)``\n• #3a7474\n``#3a7474` `rgb(58,116,116)``\n• #743a57\n``#743a57` `rgb(116,58,87)``\nSplit Complementary Color\n• #74743a\n``#74743a` `rgb(116,116,58)``\n• #3a7474\n``#3a7474` `rgb(58,116,116)``\n• #743a74\n``#743a74` `rgb(116,58,116)``\n• #3a743a\n``#3a743a` `rgb(58,116,58)``\n• #3a7474\n``#3a7474` `rgb(58,116,116)``\n• #743a74\n``#743a74` `rgb(116,58,116)``\n• #743a3a\n``#743a3a` `rgb(116,58,58)``\n• #214141\n``#214141` `rgb(33,65,65)``\n• #295252\n``#295252` `rgb(41,82,82)``\n• #326363\n``#326363` `rgb(50,99,99)``\n• #3a7474\n``#3a7474` `rgb(58,116,116)``\n• #438585\n``#438585` `rgb(67,133,133)``\n• #4b9696\n``#4b9696` `rgb(75,150,150)``\n• #54a7a7\n``#54a7a7` `rgb(84,167,167)``\nMonochromatic Color\n\n# Alternatives to #3a7474\n\nBelow, you can see some colors close to #3a7474. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #3a7466\n``#3a7466` `rgb(58,116,102)``\n• #3a746a\n``#3a746a` `rgb(58,116,106)``\n• #3a746f\n``#3a746f` `rgb(58,116,111)``\n• #3a7474\n``#3a7474` `rgb(58,116,116)``\n• #3a6f74\n``#3a6f74` `rgb(58,111,116)``\n• #3a6a74\n``#3a6a74` `rgb(58,106,116)``\n• #3a6674\n``#3a6674` `rgb(58,102,116)``\nSimilar Colors\n\n# #3a7474 Preview\n\nThis text has a font color of #3a7474.\n\n``<span style=\"color:#3a7474;\">Text here</span>``\n#3a7474 background color\n\nThis paragraph has a background color of #3a7474.\n\n``<p style=\"background-color:#3a7474;\">Content here</p>``\n#3a7474 border color\n\nThis element has a border color of #3a7474.\n\n``<div style=\"border:1px solid #3a7474;\">Content here</div>``\nCSS codes\n``.text {color:#3a7474;}``\n``.background {background-color:#3a7474;}``\n``.border {border:1px solid #3a7474;}``\n\n# Shades and Tints of #3a7474\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. In this example, #060b0b is the darkest color, while #fdfefe is the lightest one.\n\n• #060b0b\n``#060b0b` `rgb(6,11,11)``\n• #0c1818\n``#0c1818` `rgb(12,24,24)``\n• #132626\n``#132626` `rgb(19,38,38)``\n• #193333\n``#193333` `rgb(25,51,51)``\n• #204040\n``#204040` `rgb(32,64,64)``\n• #264d4d\n``#264d4d` `rgb(38,77,77)``\n• #2d5a5a\n``#2d5a5a` `rgb(45,90,90)``\n• #336767\n``#336767` `rgb(51,103,103)``\n• #3a7474\n``#3a7474` `rgb(58,116,116)``\n• #418181\n``#418181` `rgb(65,129,129)``\n• #478e8e\n``#478e8e` `rgb(71,142,142)``\n• #4e9b9b\n``#4e9b9b` `rgb(78,155,155)``\n• #54a8a8\n``#54a8a8` `rgb(84,168,168)``\n• #60b0b0\n``#60b0b0` `rgb(96,176,176)``\n• #6db6b6\n``#6db6b6` `rgb(109,182,182)``\n• #7bbdbd\n``#7bbdbd` `rgb(123,189,189)``\n• #88c3c3\n``#88c3c3` `rgb(136,195,195)``\n• #95caca\n``#95caca` `rgb(149,202,202)``\n• #a2d0d0\n``#a2d0d0` `rgb(162,208,208)``\n• #afd7d7\n``#afd7d7` `rgb(175,215,215)``\n• #bcdddd\n``#bcdddd` `rgb(188,221,221)``\n• #c9e4e4\n``#c9e4e4` `rgb(201,228,228)``\n• #d6ebeb\n``#d6ebeb` `rgb(214,235,235)``\n• #e3f1f1\n``#e3f1f1` `rgb(227,241,241)``\n• #f0f8f8\n``#f0f8f8` `rgb(240,248,248)``\n• #fdfefe\n``#fdfefe` `rgb(253,254,254)``\nTint Color Variation\n\n# Tones of #3a7474\n\nA tone is produced by adding gray to any pure hue. In this case, #555959 is the less saturated color, while #04aaaa is the most saturated one.\n\n• #555959\n``#555959` `rgb(85,89,89)``\n• #4e6060\n``#4e6060` `rgb(78,96,96)``\n• #476767\n``#476767` `rgb(71,103,103)``\n• #416d6d\n``#416d6d` `rgb(65,109,109)``\n• #3a7474\n``#3a7474` `rgb(58,116,116)``\n• #337b7b\n``#337b7b` `rgb(51,123,123)``\n• #2d8181\n``#2d8181` `rgb(45,129,129)``\n• #268888\n``#268888` `rgb(38,136,136)``\n• #1f8f8f\n``#1f8f8f` `rgb(31,143,143)``\n• #199595\n``#199595` `rgb(25,149,149)``\n• #129c9c\n``#129c9c` `rgb(18,156,156)``\n• #0ba3a3\n``#0ba3a3` `rgb(11,163,163)``\n• #04aaaa\n``#04aaaa` `rgb(4,170,170)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #3a7474 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.590495,"math_prob":0.59792835,"size":3699,"snap":"2021-04-2021-17","text_gpt3_token_len":1663,"char_repetition_ratio":0.12016238,"word_repetition_ratio":0.011090573,"special_character_ratio":0.56420654,"punctuation_ratio":0.2331081,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99185365,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-23T03:30:06Z\",\"WARC-Record-ID\":\"<urn:uuid:75b40cf4-3698-47f2-8cc7-0c84c08fce53>\",\"Content-Length\":\"36291\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e7c6ffd0-da31-4a34-bafc-f56f9ab2addc>\",\"WARC-Concurrent-To\":\"<urn:uuid:68b41931-db8b-4635-bff6-efc8d2eb5fff>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/3a7474\",\"WARC-Payload-Digest\":\"sha1:HNRL5KOYT6BBZZKDLE32PYDX3M5ONQQN\",\"WARC-Block-Digest\":\"sha1:ZVZCUY25C7CTGXL2T25GIDTUI7P35AC4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703533863.67_warc_CC-MAIN-20210123032629-20210123062629-00706.warc.gz\"}"} |
https://www.journalduparanormal.com/id/cos-graph-equation-90d75d | [
"For example. Like with sine graphs, the domain of cosine is all real numbers, and its range is Calculate the graph’s x- intercepts. This means that it repeats itself every 360 . Thus, the two graphs are shifts of 1/4 of the period from each other. Even though the input sign changed, the output sign for cosine stayed the same, and it always does for any theta value and its opposite. The graph of y = cos θ The graph of $$y = \\cos{\\theta}$$ has a maximum value of 1 and a minimum value of -1. This article has been viewed 13,641 times. Here are the steps: Like with sine graphs, the domain of cosine is all real numbers, and its range is, Referring to the unit circle, find where the graph f(x)=cos x crosses the x-axis by finding the angles on the unit circle where the cosine is 0. Unlike the sine function, which has 180-degree symmetry, cosine has y-axis symmetry. Connection between period of graph, equation and formula Table of contents top Formula Practice What is the period of a sine cosine curve? It’s symmetrical about the y-axis (in mathematical dialect, it’s an even function). For each y-value, move the y-value up by D if D is positive, or move the y-value down if D is negative. In other words, you can fold the graph in half at the y-axis and it matches exactly. {\"smallUrl\":\"https:\\/\\/www.wikihow.com\\/images\\/thumb\\/c\\/c7\\/Graphingtemplate.jpg\\/460px-Graphingtemplate.jpg\",\"bigUrl\":\"\\/images\\/thumb\\/c\\/c7\\/Graphingtemplate.jpg\\/552px-Graphingtemplate.jpg\",\"smallWidth\":460,\"smallHeight\":329,\"bigWidth\":\"552\",\"bigHeight\":\"395\",\"licensing\":\""
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5397814,"math_prob":0.99533665,"size":4788,"snap":"2021-04-2021-17","text_gpt3_token_len":1495,"char_repetition_ratio":0.25167224,"word_repetition_ratio":0.87763715,"special_character_ratio":0.3565163,"punctuation_ratio":0.29895714,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9992165,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-15T15:18:07Z\",\"WARC-Record-ID\":\"<urn:uuid:b871e347-84fe-458d-aca5-18c5e4edf40c>\",\"Content-Length\":\"17101\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c4e6a3a0-5fb3-4c9c-9ffb-16718f040fc1>\",\"WARC-Concurrent-To\":\"<urn:uuid:615cafd2-5e9f-4693-a4c0-d5d84c7ca051>\",\"WARC-IP-Address\":\"213.186.33.16\",\"WARC-Target-URI\":\"https://www.journalduparanormal.com/id/cos-graph-equation-90d75d\",\"WARC-Payload-Digest\":\"sha1:OUHG7CAD66AHAKLFEG26YMWODC3YU5CX\",\"WARC-Block-Digest\":\"sha1:76FTP7GZEJFU65STBYJTYJJZHCFINZ7C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703495901.0_warc_CC-MAIN-20210115134101-20210115164101-00549.warc.gz\"}"} |
https://mycbseguide.com/blog/ncert-solutions-class-10-maths-exercise-14-4/ | [
"# NCERT Solutions for Class 10 Maths Exercise 14.4",
null,
"## myCBSEguide App\n\nComplete Guide for CBSE Students\n\nNCERT Solutions, NCERT Exemplars, Revison Notes, Free Videos, CBSE Papers, MCQ Tests & more.\n\nNCERT solutions for Maths Statistics",
null,
"## NCERT Solutions for Class 10 Maths Statistics\n\n1. The following distribution gives the daily income of 50 workers of a factory:",
null,
"Convert the distribution above to a less than type cumulative frequency distribution and draw its ogive.\n\nAns.",
null,
"Now, by drawing the points on the graph,\n\ni.e., (120, 12); (140, 26); 160, 34); (180, 40); (200, 50)\n\nScale: On",
null,
"axis 10 units = Rs. 10 and on",
null,
"axis 10 units = 5 workers",
null,
"NCERT Solutions for Class 10 Maths Exercise 14.4\n\n2. During the medical checkup of 35 students of a class, their weights were recorded as follows:",
null,
"Draw a less than type ogive for the given data. Hence obtain the median weight from the graph and verify the result by using the formula.\n\nAns.",
null,
"Hence, the points for graph are:\n\n(38, 0), (40, 3), (42, 5), (44, 9), (46, 14), (48, 28), (50, 32), (52, 35)\n\nScale: On",
null,
"axis, 10 units = 2 kg and on",
null,
"axis, 10 units = 5 students",
null,
"From the above graph, Median = 46.5 kg, which lies in class interval 46 – 48.\n\nHere,",
null,
", then",
null,
", which lies in interval 46 – 48.",
null,
"Median class = 46 – 48\n\nSo,",
null,
"and",
null,
"Now, Median =",
null,
"=",
null,
"=",
null,
"= 46 + 0.5\n\n= 46.5\n\nHence median weight of students is 46.5 kg.\n\n3. The following table gives production yield per hectare of wheat of 100 farms of a village.",
null,
"Change the distribution to a more than type distribution and draw its ogive.\n\nAns.",
null,
"The points for the graph are:\n\n(50, 100), (55, 98), (60, 90), (65, 78), (70, 54), (75, 16)\n\nScale: On",
null,
"axis, 10 units = 5 kg/ha and on",
null,
"axis, 10 units = 10 forms.",
null,
"## NCERT Solutions for Class 10 Maths Exercise 14.4\n\nNCERT Solutions Class 10 Maths PDF (Download) Free from myCBSEguide app and myCBSEguide website. Ncert solution class 10 Maths includes text book solutions from Mathematics Book. NCERT Solutions for CBSE Class 10 Maths have total 15 chapters. 10 Maths NCERT Solutions in PDF for free Download on our website. Ncert Maths class 10 solutions PDF and Maths ncert class 10 PDF solutions with latest modifications and as per the latest CBSE syllabus are only available in myCBSEguide.\n\n## CBSE app for Class 10\n\nTo download NCERT Solutions for Class 10 Maths, Computer Science, Home Science,Hindi ,English, Social Science do check myCBSEguide app or website. myCBSEguide provides sample papers with solution, test papers for chapter-wise practice, NCERT solutions, NCERT Exemplar solutions, quick revision notes for ready reference, CBSE guess papers and CBSE important question papers. Sample Paper all are made available through the best app for CBSE",
null,
"## Test Generator\n\nCreate Papers with your Name & Logo\n\n### 8 thoughts on “NCERT Solutions for Class 10 Maths Exercise 14.4”\n\n1. My cbse guide is the best\n\n2. Every thing is great but graphs are wrong. We can’t put origin’s value anything except zero."
] | [
null,
"https://media-mycbseguide.s3.amazonaws.com/images/logo.png",
null,
"https://media-mycbseguide.s3.ap-south-1.amazonaws.com/images/blog/10%20Maths%20Book.jpg",
null,
"http://media.mycbseguide.com/images/static/ncert/10/mathematics/ch14/Ex14.4/image001.png",
null,
"http://media.mycbseguide.com/images/static/ncert/10/mathematics/ch14/Ex14.4/image002.png",
null,
"http://media.mycbseguide.com/images/static/ncert/10/mathematics/ch14/Ex14.4/image003.png",
null,
"http://media.mycbseguide.com/images/static/ncert/10/mathematics/ch14/Ex14.4/image004.png",
null,
"http://media.mycbseguide.com/images/static/ncert/10/mathematics/ch14/Ex14.4/image005.jpg",
null,
"http://media.mycbseguide.com/images/static/ncert/10/mathematics/ch14/Ex14.4/image006.png",
null,
"http://media.mycbseguide.com/images/static/ncert/10/mathematics/ch14/Ex14.4/image007.png",
null,
"http://media.mycbseguide.com/images/static/ncert/10/mathematics/ch14/Ex14.4/image003.png",
null,
"http://media.mycbseguide.com/images/static/ncert/10/mathematics/ch14/Ex14.4/image004.png",
null,
"http://media.mycbseguide.com/images/static/ncert/10/mathematics/ch14/Ex14.4/image008.jpg",
null,
"http://media.mycbseguide.com/images/static/ncert/10/mathematics/ch14/Ex14.4/image009.png",
null,
"http://media.mycbseguide.com/images/static/ncert/10/mathematics/ch14/Ex14.4/image010.png",
null,
"http://media.mycbseguide.com/images/static/ncert/10/mathematics/ch14/Ex14.4/image011.png",
null,
"http://media.mycbseguide.com/images/static/ncert/10/mathematics/ch14/Ex14.4/image012.png",
null,
"http://media.mycbseguide.com/images/static/ncert/10/mathematics/ch14/Ex14.4/image013.png",
null,
"http://media.mycbseguide.com/images/static/ncert/10/mathematics/ch14/Ex14.4/image014.png",
null,
"http://media.mycbseguide.com/images/static/ncert/10/mathematics/ch14/Ex14.4/image015.png",
null,
"http://media.mycbseguide.com/images/static/ncert/10/mathematics/ch14/Ex14.4/image016.png",
null,
"http://media.mycbseguide.com/images/static/ncert/10/mathematics/ch14/Ex14.4/image017.png",
null,
"http://media.mycbseguide.com/images/static/ncert/10/mathematics/ch14/Ex14.4/image018.png",
null,
"http://media.mycbseguide.com/images/static/ncert/10/mathematics/ch14/Ex14.4/image003.png",
null,
"http://media.mycbseguide.com/images/static/ncert/10/mathematics/ch14/Ex14.4/image004.png",
null,
"http://media.mycbseguide.com/images/static/ncert/10/mathematics/ch14/Ex14.4/image019.jpg",
null,
"https://media-mycbseguide.s3.amazonaws.com/images/category/flat64/tgnr1.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.864051,"math_prob":0.9178808,"size":3466,"snap":"2020-45-2020-50","text_gpt3_token_len":925,"char_repetition_ratio":0.16262276,"word_repetition_ratio":0.05098684,"special_character_ratio":0.2798615,"punctuation_ratio":0.15694444,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98676366,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52],"im_url_duplicate_count":[null,null,null,null,null,1,null,2,null,3,null,3,null,1,null,1,null,1,null,3,null,3,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,3,null,3,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-21T23:59:39Z\",\"WARC-Record-ID\":\"<urn:uuid:53c3e672-cc31-49df-b779-592358fc61e5>\",\"Content-Length\":\"266562\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:71c5a5d3-bcbe-46d1-af72-cd176b7ec9ca>\",\"WARC-Concurrent-To\":\"<urn:uuid:a4895831-6783-41ca-874d-96a73ec4a9bb>\",\"WARC-IP-Address\":\"164.52.201.197\",\"WARC-Target-URI\":\"https://mycbseguide.com/blog/ncert-solutions-class-10-maths-exercise-14-4/\",\"WARC-Payload-Digest\":\"sha1:JIYLGWQJ3UIHZJYVIXXI2PH4X6O2GP2J\",\"WARC-Block-Digest\":\"sha1:KEQTWWAAEE2A7WJERRVVPNHAPAT7RHVJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107878662.15_warc_CC-MAIN-20201021235030-20201022025030-00045.warc.gz\"}"} |
https://www.math-only-math.com/high-school-math-puzzles.html | [
"# High School Math Puzzles\n\nIn high school math puzzles and games we will solve different types of math logical questions. Numerous collections of high school and college grade math puzzles ranging over algebra, geometry, probability, number theory etc. These printable math puzzles are great to get prepared for exams.\n\n1. Which one of the answer figures should come after the problem figure if the sequence is continued?\n\n2. 5 + 3 = 16, 7 + 3 = 40 then 9 + 3 = ……….. .\n\n(a) 70\n\n(b) 72\n\n(c) 75\n\n(d) 80\n\n[Hints: 5² – 3² = 5 × 5 – 3 × 3 = 25 – 9 = 16]\n\n3. If a = 11 (242) 121, b = 14 (392) 196 then, find c = 13 (?) 169.\n\n(a) 338\n\n(b) 225\n\n(c) 337\n\n(d) 119\n\n4. Which one of the answer figures should come after the problem figure if the sequence is continued?\n\n5. How many triangles are there in the given figure?\n\n(a) 15\n\n(b) 14\n\n(c) 12\n\n(d) 13\n\n6. If GUN = 42 and ME = 18 then, HOME =?\n\n(a) 51\n\n(b) 31\n\n(c) 41\n\n(d) 21\n\n7. If X = 15 (104) 11 and Y = 13 (88) 9 then, Z = 14 (?) 10; find the number in place of the question mark (?).\n\n(a) 96\n\n(b) 95\n\n(c) 85\n\n(d) 75\n\n[Hints: 15² – 11² = 104]\n\n8. If A = 1, PAT = 37 then find TAP = …………… .\n\n(a) 73\n\n(b) 37\n\n(c) 36\n\n(d) 38\n\n9. 5, 16, 49, 148 then, find the next number.\n\n(a) 445\n\n(b) 449\n\n(c) 440\n\n(d) 442\n\n[Hints: Multiply 3 with each number and then add 1]\n\n10. If E = 5 and AMENDMENT = 89 then find SECRETARY =?\n\n(a) 114\n\n(b) 113\n\n(c) 123\n\n(d) 115\n\nFree printable math puzzles are available for everyone and even parents and teachers can encourage and suggest the child to practice the cool math puzzles for increasing the joy of thinking."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8045364,"math_prob":0.9991868,"size":2116,"snap":"2021-31-2021-39","text_gpt3_token_len":705,"char_repetition_ratio":0.12357955,"word_repetition_ratio":0.062921345,"special_character_ratio":0.38657844,"punctuation_ratio":0.12200436,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99421567,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-23T06:03:36Z\",\"WARC-Record-ID\":\"<urn:uuid:b936c61e-3afa-4a2b-93db-a8c704b43d79>\",\"Content-Length\":\"39398\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f6ea8c08-9e55-46b7-aee5-e8d9632fbd7a>\",\"WARC-Concurrent-To\":\"<urn:uuid:c45751a4-86c0-4892-a015-d33f6ab0ee32>\",\"WARC-IP-Address\":\"173.247.219.53\",\"WARC-Target-URI\":\"https://www.math-only-math.com/high-school-math-puzzles.html\",\"WARC-Payload-Digest\":\"sha1:6GRPVO3MLZ25VHOTQLJF23SRXL3MQAX7\",\"WARC-Block-Digest\":\"sha1:CWUIJYNLBEMTTJVSNH6DC5GVCLGOAZE6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057417.10_warc_CC-MAIN-20210923044248-20210923074248-00154.warc.gz\"}"} |
https://www.studystack.com/flashcard-173159 | [
"",
null,
"or",
null,
"or",
null,
"taken",
null,
"why\n\nMake sure to remember your password. If you forget it there is no way for StudyStack to send you a reset link. You would need to create a new account.\n\nDon't know\nKnow\nremaining cards\nSave\n0:01\nEmbed Code - If you would like this activity on your web page, copy the script below and paste it into your web page.\n\nNormal Size Small Size show me how\n\n# QDM Exam #2\n\n### Ch. 8\n\nQuantitative Forecasting Methods Forecast is based on mathematical modeling.\nExecutive Opinion Forecasting method in which a group of managers collectively develop a forecast.\nMarket Research Approach to forecasting that relies on surveys and interviews to determine customer preferences.\nDelphi Method Approach to forecasting in which a forecast is the product of a consensus among a group of experts.\nTime Series Models Based on the assumption that a forecast can be generated from the information contained in a time series of data.\nTime Series A series of observations taken over time.\nCausal Models Based on the assumption that the variable being forecast is related to other variables in the environment.\nLevel or Horizontal Pattern Pattern in which data values fluctuate around a constant mean.\nTrend Pattern in which data exhibit increasing or decreasing values over time.\nSeasonality Any pattern that regularly repeats itself and is constant in length.\nCycles Data patterns created by economic fluctuations.\nRandom Variation Unexplained variation that cannot be predicted.\nNaive Method Forecasting method that assumes next period's forecast is equal to the current period's actual value.\nSimple Mean or Average The average of a set of data.\nSimple Moving Average A forecasting method in which only n of the most recent observations are averaged.\nWeighted Moving Average A forecasting method in which n of the most recent observations are averaged and past observations may be weighted differently.\nExponential Smoothing Model Uses a sophisticated weight average procedure to generate a forecast.\nTrend-Adjusted Exponential Smoothing Exponential smoothing model that is suited to data that exhibit a trend.\nSeasonal Index Percentage amount by which data for each season are above or below the mean.\nLinear Regression Procedure that models a straight-line relationship between two variables.\nCorrelation Coefficient Statistic that measures the direction and strength of the linear relationship between two variables.\nForecast Error Difference between forecast and actual value for a given period.\nMean Absolute Deviation (MAD) Measure of forecast error that computes error as the average of the sum of the absolute errors.\nMean Squared Error (MSE) Measure of forecast error that computes error as the average of the squared error.\nForecast Bias A persistent tendency for a forecast to be over or under the actual value of the data.\nTracking Signal Tool used to monitor the quality of a forecast."
] | [
null,
"https://sstk.biz/images/studystacklogo.svg",
null,
"https://sstk.biz/images/blackeye.png",
null,
"https://sstk.biz/images/greenCheckMark.svg",
null,
"https://sstk.biz/images/blackeye.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8608583,"math_prob":0.87912637,"size":2855,"snap":"2020-45-2020-50","text_gpt3_token_len":580,"char_repetition_ratio":0.13188355,"word_repetition_ratio":0.08695652,"special_character_ratio":0.19124344,"punctuation_ratio":0.06410257,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97460407,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-24T21:15:23Z\",\"WARC-Record-ID\":\"<urn:uuid:cd292e0b-c4e0-48a7-8c2a-d45c29d577d0>\",\"Content-Length\":\"58988\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:942e12c3-bb46-432a-9d71-3a77a00601a5>\",\"WARC-Concurrent-To\":\"<urn:uuid:49ba8eb4-81f5-430c-9258-9e939b80d217>\",\"WARC-IP-Address\":\"69.89.15.53\",\"WARC-Target-URI\":\"https://www.studystack.com/flashcard-173159\",\"WARC-Payload-Digest\":\"sha1:CIEZCADQFJUVTIDI5KR5V2KZPLM4OMAH\",\"WARC-Block-Digest\":\"sha1:RZE3Z5LGUXFLUIVSTIKMKFU5CDPD424L\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107884755.46_warc_CC-MAIN-20201024194049-20201024224049-00155.warc.gz\"}"} |
https://researchportal.uc3m.es/display/act532471 | [
"# A dirichlet process prior approach for covariate selection Articles",
null,
"• August 2020\n\n• 948\n\n• 9\n\n• 22\n\n• 1099-4300\n\n### abstract\n\n• The variable selection problem in general, and specifically for the ordinary linear regression model, is considered in the setup in which the number of covariates is large enough to prevent the exploration of all possible models. In this context, Gibbs-sampling is needed to perform stochastic model exploration to estimate, for instance, the model inclusion probability. We show that under a Bayesian non-parametric prior model for analyzing Gibbs-sampling output, the usual empirical estimator is just the asymptotic version of the expected posterior inclusion probability given the simulation output from Gibbs-sampling. Other posterior conditional estimators of inclusion probabilities can also be considered as related to the latent probabilities distributions on the model space which can be sampled given the observed Gibbs-sampling output. This paper will also compare, in this large model space setup the conventional prior approach against the non-local prior approach used to define the Bayes Factors for model selection. The approach is exposed along with simulation samples and also an application of modeling the Travel and Tourism factors all over the world.\n\n### keywords\n\n• conventional priors; covariate inclusion probability; dirichlet process prior; non-local prior; ordinary linear regression; variable selection"
] | [
null,
"https://researchportal.uc3m.es/images/individual/uriIcon.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.837677,"math_prob":0.9079986,"size":1614,"snap":"2021-43-2021-49","text_gpt3_token_len":305,"char_repetition_ratio":0.1173913,"word_repetition_ratio":0.0,"special_character_ratio":0.17905824,"punctuation_ratio":0.071428575,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9511032,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-29T21:43:54Z\",\"WARC-Record-ID\":\"<urn:uuid:8da520c0-926e-45e0-8772-2be289699644>\",\"Content-Length\":\"17517\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:19c8ad26-b0f6-4c48-bcec-2845da89fb42>\",\"WARC-Concurrent-To\":\"<urn:uuid:7313c58b-d885-4fcf-93c9-99116bea00dc>\",\"WARC-IP-Address\":\"163.117.98.14\",\"WARC-Target-URI\":\"https://researchportal.uc3m.es/display/act532471\",\"WARC-Payload-Digest\":\"sha1:OKURM7T6R4CE2GUTKZCOZ7GNZNSZ5UJ6\",\"WARC-Block-Digest\":\"sha1:7CGAOYVETTWMYQ6QF6XOO5QKXGWFLU4I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358842.4_warc_CC-MAIN-20211129194957-20211129224957-00141.warc.gz\"}"} |
https://www.arxiv-vanity.com/papers/hep-ph/0006158/ | [
"# Space-time description of the hadron interaction at high energies.\n\nV.N.Gribov\n###### Abstract\n\nIn this lecture we consider the strong and electromagnetic interactions of hadrons in a unified way. It is assumed that there exist point-like particles (partons) in the sense of quantum field theory and that a hadron with large momentum consists of partons which have restricted transverse momenta, and longitudinal momenta which range from to zero. The density of partons increases with the increase of the coupling constant. Since the probability of their recombination also increases, an equilibrium may be reached. In the lecture we will consider consequences of the hypothesis that the equilibrium really occurs. We demonstrate that it leads to constant total cross sections at high energies, and to the Bjorken scaling in the deep inelastic ep-scattering. The asymptotic value of the total cross sections of hadron-hadron scattering turns out to be universal, while the cross sections of quasi-elastic scattering processes at zero angle tend to zero.\n\nThe multiplicity of the outgoing hadrons and their distributions in longitudinal momenta (rapidities) are also discussed.\n\n## Introduction\n\nIn this lecture we will try to describe electromagnetic and strong interactions of hadrons in the same framework which follows from general quantum field theory considerations without the introduction of quarks or other exotic objects.\n\nWe will assume that there exist point-like constituents in the sense of quantum field theory which are, however, strongly interacting. It is convenient to refer to these particles as partons. We will not be interested in the quantum numbers of these partons, or the symmetry properties of their interactions. We will assume that, contrary to the perturbation theory, the integrals over the transverse momenta of virtual particles converge like in the theory. It turns out that within this picture a common cause exists for two seemingly very different phenomena: the Bjorken scaling in deep inelastic scattering, and the recent theoretical observation that all hadronic cross sections should approach the same limit (provided that the Pomeranchuk pole exists). The lecture is organized as follows. In the first part we discuss the propagation of the hadrons in the space as a process of creation and absorption of the virtual particles (partons) and formulate the notion of the parton wave function of the hadron. The second part describes momentum and coordinate parton distributions in hadrons. In the third part we consider the process of deep inelastic scattering. It is shown that from the point of view of our approach the deep inelastic scattering satisfies the Bjorken scaling, and, in contrast to the quark model, the multiplicity of the produced hadrons is of the order of . The fourth part is devoted to the strong interactions of hadrons and it is shown that in the same framework the total hadron cross sections have to approach asymptotically the same limiting value. In the last part of the lecture we discuss the processes of elastic and quasi-elastic scattering at high energies. It is demonstrated that the cross sections of the quasi-elastic scattering processes at zero angle tend to zero at asymptotically high energies.\n\nLet us discuss, how one can think of the space-time propagation of a physical particle in terms of virtual particles which are involved in the interaction with photon and other hadrons. It is well known that the propagation of a real particle is described by its Green function, which corresponds to a series of Feynman diagrams of the type\n\n(for simplicity, we will consider identical scalar particles). The Feynman diagrams, having many remarkable properties, have, nevertheless, a disadvantage compared to the old-fashioned perturbation theory. Indeed, they do not show how a system evolves with time in a given coordinate reference frame. For example, depending on the relations between the time coordinates , , and , the graph in Fig.1b corresponds to different processes:\n\nSimilarly, the diagram Fig.1c corresponds to the processes\n\nIn quantum electrodynamics, where explicit calculations can be carried out, this complicated correspondence is of little interest. However, for strong interactions, where explicit calculations are impossible, distinguishing between different space developments will be useful.\n\nObviously, if the interaction is strong (the coupling constant is large), many diagrams are relevant. The first question which arises is which configurations dominate: the ones which correspond to the subsequent decays of the particles - the diagrams Fig.2a and Fig3.a, or those which correspond to the interaction of the initial particle with virtual ”pairs” created in the vacuum. It is clear that if the coupling constant is large and the momentum of the incoming particle is small (see below), configurations with ”pairs” dominate (at least if the theory does not contain infinities). Indeed, if is small, then in the case of configurations without ”pairs” the integration regions corresponding to each correction will tend to zero with an increase of the number of corrections. At the same time, for the configurations containing ”pairs” the region of integration over time will remain infinite. Hence, if the retarded Green function does not have a strong singularity at , the contribution of the configurations without ”pairs” will be relatively small if the coupling constant is large. Even the graphs of the type Fig.1d are determined mainly by configurations like\n\nThis means that if we observe a low energy particle at any particular moment of time (the cut in the diagram in Fig. 4), we will see few partons which are decay products of the particle, and a large number of virtual ”pairs” which will interact with these partons in the future.\n\nWhat happens if a particle has a large momentum in our coordinate reference frame? To analyze the space-time evolution of a fast particle we have to consider a space-time interval such that , and . Here is the mass of the particle, its energy. In this case, , . For such intervals the relation between the configurations with and without ”pairs” changes. Configurations corresponding to a decay of one parton into many others start to dominate, while the role of configurations with ”pairs” decreases.\n\nThe physical origin of this phenomenon is evident. A fast parton can decay, for example, into two fast partons which, due to the energy-time uncertainty relation, will exist for a long time (of the order of ), since\n\n ΔE=√μ2+→p2−√μ2+→p21−√μ2+(→p−→p1)2∼μ22|→p|−μ22|→p1|−μ22|→p−→p1|.\n\nEach of these two partons can again decay into two partons and this will continue up to the point when slow particles, living for a time of the order of , are created. After that the fluctuations must evolve in the reverse direction, i.e. the recombination of the particles begins.\n\nOn the other hand, due to the same uncertainty relation, creation of virtual ”pairs” with large momenta in vacuum is possible only for short time intervals of the order of . Hence, it affects only the region of small momentum partons. The way in which this phenomenon manifests itself can be seen using the simplest graph in Fig.5. as an example. We will observe that it is possible to place here many emissions in spite of the fact that the interval is of the order of unity (), and the Green function depends only on the invariants.\n\nFor the sake of simplicity, let us verify this for one space dimension (). Suppose that and , . Then . Let us choose the variables , in the same way: , , and consider the following region of integration in the integral, corresponding to the diagram in Fig.5:\n\n 1\n 1\n zi∼z′i,ti=zi+y2i2zi,t′i=z′i+y′2i2z′i.\n\nThe integrations over can be substituted by integrations over , and\n\n d2yi=12dy2idzizi,d2y′i=12dy′2idz′iz′i\n\nIt is easy to see that in this region of integration the arguments of all Green functions: , , , are of the order of unity, and the integrals do not contain any small factors. All these conditions for can be satisfied simultaneously for a large number of emissions: . Indeed, if we write in the form , all conditions will be fulfilled for\n\n n∼lntlnC,C≥1.\n\nObviously, one can consider a more complicated diagram than Fig.5 by including interactions of the virtual particles. On the other hand, configurations containing vacuum ”pairs” play a minor role. Moving backwards in time is possible only for short time intervals (Fig.6):\n\nHence, we reach the following picture. A real particle with a large momentum can be described as an ensemble of an indefinite number of partons of the order of with momenta in the range from to zero, and several vacuum pairs with small momenta which in the future can interact with the target.\n\nThe observation of a slow particle during an interval of the order of does not tell us anything about the structure of the particle since we cannot distinguish it from the background of the vacuum fluctuations, and we can speak only about the interaction of particles or about the spectrum of states. On the contrary, in the case of a fast particle we can speak about its structure, i.e. about the fast partons which do not mix with the vacuum fluctuations. As a result, in a certain sense a fast particle becomes an isolated system which is only weakly coupled to the vacuum fluctuations. Hence, it can be described using a quantum mechanical wave function or an ensemble of wave functions, which determine probabilities of finding certain numbers of partons and their momentum distribution. Such a description is not invariant , since the number of partons depends on the momentum of the particle, but it can be considered as covariant. Moreover, it may be even invariant, if the momentum distribution of the partons is homogeneous in the region of momenta much smaller than the maximal one, and much larger than .\n\nIndeed, under the transformation from one reference frame to another in which the particle has, for example, a larger momentum, a new region emerges in the distribution of partons; in the old region, however, the parton distribution remains unchanged. One usually describes hadrons in terms of the quantum mechanics of partons in the reference frame which moves with an infinite momentum, because in this case all partons corresponding to vacuum fluctuations have zero momenta, and such a description is exact. Such a reference frame is convenient for the description of the deep inelastic scattering. However, it is not as good for describing strong interactions, where the slow partons are important. In any case, it appears useful to preserve the freedom in choosing the reference frame and to use the covariant description. This allows a more effective analysis of the accuracy of the derivations.\n\n## 1 Wave function of the hadron. Orthogonality and normalization\n\nThe previous considerations allow us to introduce the hadron wave function in the following way. Let us assume, as usual, that at the hadron can be represented as a bare particle (the parton). After a sufficiently long time the parton will decay into other partons and form a stationary state which we call a hadron. Diagrams corresponding to this process are shown in Fig.7.\n\nLet us exclude from the Feynman diagrams those configurations (in the sense of integrations over intermediate times) which correspond to vacuum pair creation.\n\nFor the theory such a separation of vacuum fluctuations corresponds to decomposing into positive and negative frequency parts and substituting by . The previous discussion shows that the ignored term would mix only partons with small momenta.\n\nIt is natural to consider the set of all possible diagrams with a given number of partons at the given moment of time as a component of the hadron wave function . Similarly, we can determine the wave functions of several hadrons with large momenta provided the energy of their relative motion is small compared to their momenta. The latter condition is necessary to ensure that slow partons are not important in the interaction. The Lagrangian of the interaction remains Hermitian even after the terms corresponding to the vacuum fluctuations are omitted. As a result, the wave functions will be orthogonal, and will be normalized in the usual way:\n\n ∑n∫Ψb∗n(→y1…,→yn,pb)i↔∂Ψan(→y1…,→yn,pa)d3y1…d3ynn!=(2π)3δ(→pa−→pb)δab, (1)\n\nor similarly in the momentum space, after separating\n\n ∑n1n!∫Ψb∗n(→k1…,→kn,→p)Ψan(→k1…,→kn,→p)d3k1…d3kn2k10…2kn0δ(p−∑ki)(2π)3n−1=δab. (2)\n\nFor the momentum range , the wave functions coincide with those calculated in the infinite momentum frame. In this reference frame they do not depend on the momentum of the system (except for a trivial factor). This can be easily proven by expanding the parton momenta\n\n →ki=βi→p+ki⊥, (3)\n\nand writing the parton energy in the form\n\n εi=√→k2i+m2=βip+m2+k2i⊥2pβi. (4)\n\nNote now that the integrals which determine , corresponding to Fig.7, can be represented in the form of the old-fashioned perturbation theory where only the differences between the energies of the intermediate states and the initial state enter, and the momentum is conserved. Hence, the terms linear in cancel in these differences, and concequently\n\n Ek−E=12p(∑im2+k2i⊥βi−m2) (5)\n\nEach consequent intermediate state in Fig.7 in the model differs from the previous one by the appearance or disappearance of one particle. The factor\n\n ∑n1n!∫Ψb∗n(ki⊥,βi)Ψan(ki⊥,βi)∏d2ki⊥2(2π)2dβiβi(2π)3δ(1−∑βi)=δab. (6)\n\nFor slow partons, where the expansion (4) is not correct, the dependence on momentum does not disappear, and contrary to the case of the system moving with , this dependence cuts off the sum over the number of partons.\n\n## 2 Distribution of the partons in space and momentum\n\nThe distribution of partons in longitudinal momenta can be characterized by the rapidity:\n\n ηi=12lnεi+kizεi−kiz, (7)\n\nwhere is the component of the parton momentum along the hadron momentum.\n\n ηi≈ln2βip√m2+k2i⊥. (8)\n\nAs it is well known, this quantity is convenient since it simply transforms under the Lorentz transformations along the direction: , where is the rapidity of the coordinate system.\n\nThe determination of the parton distribution over is based on the observation that in each decay process shown in Fig.7 the momenta and are, in the average, of the same order. This means that in the process of subsequent parton emission and absorption the rapidities of the partons change by a factor of the order of unity. At the same time the overall range of parton rapidities is large, of the order of . This implies that in the rapidity space we have short range forces.\n\nLet us consider the density of the distribution in rapidity\n\n φ(η,k⊥,p)= (9) ∑n1n!∫|Ψ(k⊥,η,k⊥1,η1,…,k⊥n,ηn,)|2(2π)3δ(→p−→k−∑→ki)∏dkidηi2(2π)3\n\nin the interval (see Fig.8).\n\nThe independence of on for these values of means that depends only on the differences . If decreases with the increase of , this corresponds to a weak coupling, i.e. to a small probability of the decay of the initial parton. If the coupling constant grows, the number of partons increases and at a certain value of the coupling constant an equilibrium is reached, since the probability of recombination also increases. The value of this critical coupling constant has to be such that the recombination probability due to the interaction should be larger than the recombination probability related to the uncertainty principle.\n\nThe basic hypothesis is that such an equilibrium does occur and that due to the short-range character of interaction it is local. This is equivalent to the hypothesis of the constant total cross sections of interaction at . Hence we assume that the equilibrium is determined by the vicinity of the point of the order of unity and it does not depend on . Obviously, this can be satisfied only if does not depend on and at . According to the idea of Feynman, this situation resembles the case of a sufficiently long one-dimensional matter in which, due to the homogeneity of the space, far from the boundaries the density is either constant or oscillating (for a crystal). In our case the analogue of the homogeneity of space is the relativistic invariance (the shift in the space of rapidities). For the time being we will not consider the case of the crystal. According to (9), the integral of over and has the meaning of the average parton density which is, obviously, of the order of .\n\nGenerally speaking, we cannot say anything about the parton distribution in the transverse momenta except for one statement: it is absolutely crucial for the whole concept that it must be restricted to the region of the order of parton masses, like in the theory.\n\nConsider now the spatial distribution of the partons. First, let us discuss parton distribution in the plane perpendicular to the momentum . For that purpose it is convenient to transform from to the impact parameter representation :\n\n Ψn(→ρn,ηn)=∫ei∑ki⊥ρiΨ(ki⊥,ηi)δ(∑k⊥i)(2π)2∏d2ki(2π)2. (10)\n\nLet us rank the partons in the order of their decreasing rapidities. Consider a parton with the rapidity and let us follow its history from the initial parton. Initially, we will assume that it was produced solely via parton emissions (Fig.9).\n\nIn this case it is clear that if the transversal momenta of all partons are of the order of , than each parton emission leads to a change of the impact parameter by . If emissions are necessary to reduce the rapidity from to , and they are independent and random, . If every emission changes the rapidity of the parton by about one unit, then\n\n ¯¯¯¯¯¯¯¯¯¯¯¯¯¯(Δρ)2=γ(ηp−η). (11)\n\nHence, the process of the subsequent parton emissions results in a kind of diffusion in the impact parameter plane. The parton distribution in for the rapidity has the Gaussian form:\n\n φ(ρ,η)=C(η)πγ(ηp−η)e−ρ2γ(ηp−η), (12)\n\nif the impact parameter of the initial parton is considered as the origin. Consequently, the partons with have the broadest distribution, and, hence, the fast hadron is of the size\n\n R=√γηp≈√γln2pm. (13)\n\nThe account of the recombination and the scattering of the partons affects only densities of partons and fluctuations, but does not change the radius of the distribution which can be viewed as the front of the diffusion wave.\n\nLet us discuss the parton distribution over the longitudinal coordinate. A relativistic particle with a momentum is commonly considered as a disk of the thickness . In fact, this is true only in the first approximation of the perturbation theory. Really, a hadron is a disk with radius and the thickness of the order of . Indeed, each parton with a longitudinal momentum is distributed in the longitudinal direction in an interval . Since the parton spectrum exists in the range of momenta from down to , the longitudinal projection of the hadron wave function has the structure depicted in Fig.11.\n\nFinally, let us consider what is the lifetime of a particular parton. As we have discussed in the Introduction, in a theory which is not singular at short distances, the intervals between two events represented by a Feynman diagram are of the order of unity. For a fast particle moving along the axis, and . Consequently, the lifetime of a fast parton with a momentum is of the order of . The presented arguments were based on the theory which is the only theory which provides a cutoff in transverse momenta. Still, the argument should hold for other theories and for particles with spins, if one assumes that in these theories the cutoff of transverse momenta occures in some way. On the other hand, the theory cannot be considered as a self-consistent example. Indeed, due to the absence of a vacuum state, the series of perturbation theory do not make sense (series with positive coefficients are increasing as factorials). Hence, the picture we have presented here does not correspond literally to any particular field theory. At the same time, it corresponds fully to the main ideas of the quantum field theory and to its basic space-time relations.\n\n## 3 Deep inelastic scattering\n\nIt is convenient to consider the deep inelastic scattering of electrons in the frame where the time component of the virtual photon momentum is . In this reference frame the momentum of the photon is equal to (), while the momentum of the hadron is (). Suppose that is large and . According to our previous considerations, a fast hadron can be viewed as an ensemble of partons. In this system a photon looks as a static field with the wavelength .\n\nThe main question is, with which partons can the photon interact. We can consider the static field of a photon as a packet with a longitudinal size of the order of . The interaction time between a hadron with the size and such a packet is of the order of . However, due to the big difference between the parton and photon wave lengths, the interaction with a slow parton is small. Hence, the photon interacts with partons which have momenta of the order of . Partons with such momenta are distributed in the longitudinal direction in the region . Because of this, the time of the hadron-photon interaction is in fact of the order of , i.e. much shorter than the lifetime of a parton. This means that the photon interacts with a parton as with a free particle, and so not only the momentum but also the energy is conserved. As a result, the energy-momentum conservation laws select the parton with momentum , which can absorb a photon\n\n kiz−qz=k′iz,|kiz−qz|=kiz.\n\nThis gives\n\n kiz=qz2,k′iz=−qz2.\n\nThe cross section of such a process is, obviously, equal to the cross section of the absorption of a photon by a free particle, multiplied by the probability to find a parton with a longitudinal momentum inside the hadron, i.e. by the value (9), integrated over . (The necessary accuracy of fulfilment of the conservation laws allows any ).\n\nAs it was already discussed, . Hence, using the known cross section for the interaction of the photon with a charged spinless particle, we obtain for the cross section of the deep inelastic scattering\n\n d2σdq2dω=4πα2q4(1−pqppe)φ(ω), (14)\n\nwhere is the electron momentum. If the partons have spins, the situation becomes more complicated, since the cross sections of the interactions between photons and partons with different spins are different. The parton distributions in rapidities for different spins may also be different, leading to the form:\n\n d2σdq2dω=4πα2q4{(1−(pq)(ppe))φ0(ω)+[1−pqppe+12(pqppe)2]φ12(ω)}. (15)\n\nLet us discuss now a very important question, namely: what physical processes take place in deep inelastic scatterings. To clarify this, we go back to Fig.7 determining the hadron wave function. We will neglect the parton recombinations in the process of their creation from the initial parton, i.e. we consider fluctuations of the type shown in Fig.9. Suppose that the photon was absorbed by a parton with a large momentum . As a result, this parton obtained a momentum and moves in the opposite direction with momentum . The process is depicted in Fig.12. What will now happen to this parton and to the remaining partons? Within the framework we are using it is highly unlikely that the parton with the momentum will have time to interact with the other partons. The probability to interact directly with residual partons will be small, because the relative momentum of the parton with and the rest of the partons is large. It could interact with other partons after many subsequent decays which, in the end, could create a slow parton. However, the time needed for these decays is large, and during this time the parton and its decay products will move far away from the remaining partons, thus the interaction will not take place.\n\nHence, we come to the conclusion that one free parton is moving in the direction . What will we observe experimentally, if we investigate particles moving in this direction? To answer this question, it is sufficient to note that, in average, a hadron with a momentum consists of partons; at .\n\nIn a sense there should exist an uncertainty relation between the number of partons in a hadron () and the number of hadrons in a parton ()\n\n npn>∼clnkzμ, (16)\n\nwhere is the momentum of the state.\n\nWe came to the conclusion that the parton decays into a large number of hadrons i.e. in fact the parton is very short-lived, highly virtual. Hence, we have to discuss whether this conclusion is consistent with the assumption that the photon-parton interaction satisfies the energy conservation. To answer this question, let us calculate the mass of a virtual parton with momentum , decaying into hadrons with momenta and masses .\n\n M2=(∑√m2i+k2i)2−k2z=(kz+∑im2i+k2i⊥2kiz)2−k2z≈kz∑im2i+k2i⊥kiz.\n\nIf the hadrons are distributed almost homogeneously in rapidities, their longitudinal momenta decrease exponentially with their number, and in the sum only a few terms, corresponding to slow hadrons, are relevant. As a result, , i.e. the time of the existence of the parton is of the order of , much larger than the time of interaction with a photon .\n\nLet us discuss now, what happens to the remaining partons. Little can be determined using only the uncertainty relation eq.(16). This is because the number of partons before the photon absorption was , after the photon absorption it became and, consequently, according to the uncertainty relation, the number of hadrons corresponding to this state can range from 1 to . Hence, everything depends on the real perturbation of the hadron wave function due to the photon absorption.\n\nConsider now the fluctuation shown in Fig.12. The photon absorption will not have any influence on partons created after the parton ”” which absorbed the photon was produced, and and which have momenta smaller than “b”. These fluctuations will continue, and the partons can, in particular, recombine back into the parton ””. The situation is different for partons which occured earlier and have large momenta (””, ””). In this case the fluctuation cannot evolve further the same way, since the parton ”” has moved in the opposite direction. As a result, it is highly probable that partons ”” and ”” will move apart and lose coherence. On the other hand, slow partons which were emitted by ”” and ”” earlier and which are not connected with the parton ””, will be correlated, as before, with each of them. Thus ”” and ”” will move in space together with their slow partons, i.e. in the form of hadrons. Hence, it appears that partons flying in the initial direction lead to the production of the order of\n\nContrary to the case of rapidities of partons, we will count the rapidity of the hole not from zero rapidity but from the rapidity . In this case the rapidity of the hole is . If we now represent the parton hole with rapidity as a superposition of the hadron states, this superposition will contain hadron states.\n\nLet us represent the whole process by a diagram describing rapidity distributions of partons and hadrons. Before the photon absorption the partons in the hadrons are distributed at rapidities between zero and , while after the photon absorption a parton distribution is produced which is shown in Fig. 13.\n\nThis parton distribution leads to the hadron distribution shown in Fig. 14. The total multiplicity corresponding to this distribution is\n\n ¯n=clnqzμ+clnω=clnνμ√−q2.\n\nThis hadron distribution in rapidities in the deep inelastic scattering differs qualitatively from those previously discussed in the literature. It corresponds to hadrons moving in the photon momentum direction, and hadrons are moving in the nucleon momentum direction, with a gap in rapidity between these distributions. The hadron distribution which was obtained in the framework of perturbation theory for superconverging theories like (Drell, Yan) differs qualitatively from the distribution in Fig. 14.\n\nIn conclusion of this part, it is necessary to point out that the problem of spin properties of the partons exist in this picture even if the partons do not have quark quantum numbers. If, as the experiment shows, the cross section for the interaction of the transversal photons is larger than the cross section for the interaction of the longitudinal photons, , the charged partons have predominantly spin . This means that at least one fermion, for example a nucleon, has to move in the direction of the photon momentum. In other words, in deep inelastic scattering the distribution of the created hadrons in quantum numbers as the function of their rapidities differs essentially from what we are used to in the strong interactions. Perhaps this is one of the key prediction of the non-quark parton picture for .\n\n## 4 Strong interactions of hadrons\n\nLet us discuss now the strong interactions of hadrons. First, we consider a collision of two hadrons in the laboratory frame. Suppose that a hadron ”1” with momentum hits hadron ”2” which is at rest. Obviously, the parton wave function makes no sense for the hadron at rest, since for the latter the vacuum fluctuations are absolutely essential. However, the hadron at rest can also be understood as an ensemble of slow partons distributed in a volume of the order of , independent of the origin of the partons. Indeed, it does not matter whether these partons are decay products of the initial parton or the result of the vacuum fluctuations. How can a fast hadron, consisting of partons with rapidities from to zero, interact with the target which consists of slow partons? Obviously, the cross section of the interaction of two point-like particles with a large relative energy is not larger than (where is the wave length in the c.m. frame, is the relative rapidity). That is why only slow partons of the incident hadron can interact with the target with a cross section which is not too small. This process is shown in Fig. 15.\n\nIf the slow parton which initiated the interaction was absorbed in this interaction, the fluctuation which lead to its creation from a fast parton was interrupted. Hence, all partons which were emitted by the fast parton in the process of fluctuation cannot recombine any more. They disperse in space and ultimately decay into hadrons leading to the creation of hadrons with rapidities from zero to . The interaction between the partons is short-range in rapidities. Hence, the hadron distribution in rapidities will reproduce the parton distribution in rapidities. In particular, the inclusive spectrum of hadrons will have the form shown in Fig. 8, with an unknown distribution near the boundaries. The total hadron multiplicity will be of the order of . If the probability of finding a slow parton in the hadron does not depend on the hadron momentum (this would be quite natural, since with the increase of the momentum the life-time of the fluctuation is also growing), the total cross section of the interaction will not depend on the energy at high energies.\n\nBefore continuing the analysis of inelastic processes, let us discuss, how to reconcile the energy independence of the total interaction cross section at high energies with the observation discussed above that the transverse hadron sizes increase with the increase of the energy as . The answer is that slow partons are distributed almost homogeneously over the disk of the radius (Eq.(11)) , while their overall multiplicity during the time of is of order of unity.\n\nLet us see now how the same process will look, for example, in the c.m. frame. In this reference frame the interaction will have the form as shown in Fig. 16.\n\nEach of the hadrons consists of partons with rapidities ranging from to zero and from zero to , respectively. The slow partons interact with cross sections which are not small. As a result, the fluctuations will be interrupted in both hadrons, and the partons will fly away in the opposite directions, leading to the creation of hadrons with rapidities from to . From the point of view of this reference frame the inclusive spectrum must have the form shown in Fig. 17, with unknown distributions not only at the boundaries but also in the centre, since the distribution of the slow partons in the hadrons and in vacuum fluctuations is unknown. The hadron inclusive spectrum, however, should not depend on the reference frame. Thus the inclusive spectrum in Fig. 17 should coincide with the inclusive spectrum in Fig. 8, and they should differ only by a trivial shift along the rapidity axis, i.e. due to relativistic invariance we know something about the spectra of slow partons and vacuum fluctuations.\n\nLet us demonstrate that this comparison of processes in two reference frames leads to a very important statement, namely that at ultra-high energies the total cross sections for the interactions of arbitrary hadrons should be equal. Indeed, we have assumed that the distribution of hadrons reproduces the parton distribution.\n\nFrom the point of view of the laboratory frame the distribution of partons and, consequently, distribution of hadrons in the central region of the spectrum is completely determined by the properties (quantum numbers, mass, etc.) of particle 1, and does not depend on the properties of particle 2. On the other hand, from the point of view of the antilaboratory frame (where the particle 1 is at rest) everything is determined by the properties of particle 2. This is possible only if the distribution of partons in the hadrons with rapidities much smaller than the hadron rapidity does not depend on the quantum numbers and the mass of the hadron, that is the parton distribution with should be universal. From the point of view of the c.m. system the same region is determined by slow partons of both hadrons and by vacuum fluctuations (which are universal), and, consequently, the distribution of slow partons is also universal.\n\nIt is natural to assume that the probability of finding a hadron in a sterile state without slow partons tends to zero with the increase of its momentum, in other words assume that slow partons are always present in a hadron (compare to the decrease of the cross section of the elastic electron scattering at large ). In this case considering the process in the c.m. system, we see that the total cross section of the hadron interaction is determined by the cross section of the interaction of slow partons and by their transverse distribution which is universal. Consequently, the total hadron interaction cross section is also universal, i.e. equal for any hadrons.\n\nThis statement looks rather strange if we regard it, for instance, from the following point of view. Let us consider the scattering of a complicated system with a large radius, for example, deuteron-nucleon scattering. As we know, the cross section of the deuteron-nucleon interaction equals the sum of the nucleon-nucleon cross sections, thus it is twice as large as the nucleon-nucleon cross section. How and at what energies can the deuteron-nucleon cross section become equal to the nucleon-nucleon cross section? How is it possible that the density of slow partons in the deuteron turns out to be equal to the density of slow partons in the nucleon? To answer this question, let us discuss the parton structure of two hadrons which are separated in the plane transverse to their longitudinal momenta by a distance much larger than their Compton wave length . Suppose that at the initial moment they were point-like particles. Next, independently of each other, they begin to emit partons with decreasing longitudinal momenta. At the same time the diffusion takes place in the transverse plane so that the partons will be distributed in a growing region. The basic observation which we shall prove and which answers our question is that if the momenta of the initial partons are sufficiently large, then during one fluctuation the partons coming from different initial partons will inevitably meet in space (Fig. 18) in the region of the order of . They will have similar large rapidities and, hence, will be able to interact with a probability of the order of unity. If such “meetings” take place sufficiently frequently, the probability of the parton interaction will be unity. Consequently, the further evolution and the density of the slow partons which are created after the meeting may not depend on the fact that initially the transverse distance between two partons was large.\n\nIn terms of the diffusion in the impact parameter plane this statement corresponds to the following picture. Suppose that initial partons were placed at points and in Fig. 19 and that their longitudinal momenta are of the same order of magnitude, i.e. difference of their rapidities is of the order of unity, while each of the rapidities is large. We will follow the parton starting from point , which decelerates via emission of other partons. As we have seen, its propagation in the perpendicular plane corresponds to diffusion. The difference of rapidities at the initial and the considered moments serves the role of time in this diffusion process.\n\nThe diffusion character of the process means that the probability density of finding a parton with rapidity at the point if it started from the point with rapidity is\n\n ω(→ρ,→ρ1,ηp−η)=1πγ(ηp−η)e−(→ρ−→ρ1))2γ(ηp−η). (17)\n\nThe situation is exactly the same for a decelerating parton which started from the point . Thus, the probability of finding both partons at the same point with equal rapidities is proportional to\n\n ω(ρ12,ηp−η)= (18) ∫ω(→ρ,→ρ1,ηp−η)ω(→ρ,→ρ2,ηp−η)d2ρ= 12πγ(ηp−η)exp[−(→ρ1−→ρ2))22γ(ηp−η)]\n\nIf we now integrate this expression over , i.e. estimate the probability for the partons to meet at some rapidities, we obtain\n\n ∫ηp0ω(ρ12,ηp−η)dη≈1πlog2γηpρ212|ηp→∞⟶∞. (19)\n\nThis means that if , the partons will inevitably meet. According to (19) we get a probability much larger than unity. The reason is that under these conditions the meeting of partons at different values of are not independent events and therefore it does not make sense to add the probabilities. It is easy to prove this statement directly, for example with the help of the diffusion equation. We will not do this, however. According to a nice analogy suggested by A. Larkin, this theorem is equivalent to the statement that if you are in an infinite forest in which there is a house on a finite distance from you, then, randomly wandering in the forest, you sooner or later arrive at this house. Essentially, the reason is that in the two-dimensional space the region inside of which the diffusion takes place and the length of the path travelled during the diffusion increase with time in the same way. ¿From the point of view of the reference frame in which the deuteron is at rest and is hit by a nucleon in the form of a disk, the radius of which is much larger than that of the deuteron, the statement of the equality of cross sections means that the parton states inside the disk are highly coherent.\n\nIt is clear from above that the cross sections of two hadrons can become equal only when the radius of parton distribution which is increasing with the energy becomes much larger than the size of both hadrons. Substituting for the value of ( is the proton mass) 111It will be demonstrated below that , where is the slope of the Pomeron trajectory. The current data give . we see that the deuteron-nucleon cross section will practically never coincide with the nucleon-nucleon cross section, while the tendency for convergence of cross sections for pion-nucleon, kaon-nucleon and nucleon-nucleon scatterings may be manifested already starting at the incident energies GeV.\n\n## 5 Elastic and quasi-elastic processes\n\nSo far we focused on the implications of the considered picture for inelastic processes with multiplicities, growing logarithmically with the energy. However, with a certain probability it can happen that slow partons scatter at very small angles and the fluctuations will not be interrupted in either of the hadrons (for example, if we discuss the process in the c.m. frame). In this case small angle elastic or quasi-elastic scattering will take place (Fig. 20).\n\nFirst, let us calculate the elastic scattering amplitude. It is well known that the imaginary part of the elastic scattering amplitude can be written in the form\n\n A1(s12)=s12∫d2ρ12ei→q→ρ12σ(ρ12,s12), (20)\n\nwhere is the energy squared in the c.m. system, is the relative impact parameter, - the total interaction cross section of particles being at the distance and is the momentum transferred. In order to calculate it is sufficient to notice that, according to (12), the probability of finding a slow parton with rapidity at the impact parameter which originated from the first hadron with an impact parameter is\n\n φ1(→ρ1,→ρ′1,η1,ηpc)C(η1)πγηpcexp[−(→ρ1−→ρ′1)2γηpc]. (21)\n\nThe probaility a parton originating from the second hadron at impact parameter is\n\n φ2(→ρ2,→ρ′2,η2,ηpc)C(η2)πγηpcexp[−(→ρ2−→ρ′2)2γηpc]. (22)\n\nThe total cross section of the hadron interaction which is due to the interaction of slow partons is equal to\n\n σ(ρ12,s12)=∫dη1dη2d2ρ′12C(η1)C(η2)\n ×∫d2ρ(πγηpc)2exp[−(→ρ−→ρ1)2γηpc−(→ρ−→ρ2)2γηpc].\n\nWe have taken into account that , , and that the dependence on can be neglected in the exponential factor.\n\nAfter carrying out the integration over , we obtain\n\n σ(ρ12,s12)=e−(→ρ1−→ρ2)22γηpcσ02πγηpc (23)\n\nInserting (22) into (20), we get\n\n A1=s12σ0e−γ4q2ξ,\n ξ=2ηpc=logs12μ2. (24)\n\nWe obtained the scattering amplitude corresponding to the exchange by the Pomeranchuk pole with the slope , where is the universal coupling constant of the Pomeron and hadron. The amplitude (24) is usually represented by diagram in Fig. 21\n\nwhere a propagator of the form corresponds to the Pomeron. In the impact parameter space this propagator has the form (22).\n\nLet us discuss the physical meaning of in more detail. For this purpose, let us calculate the zero angle scattering amplitude at (), without using the impact parameter representation. The probability of finding a parton with rapidity and a transverse momentum is described by (9). This expression at corresponds to the diagram in Fig.22. The wavy line represents integration over parton rapidities from to zero.\n\nThis figure reflects the hypothesis that the calculation of for suffiently large and leads to an expression for which is factorized in the same way as the Pomeron contribution to the scattering amplitude. This is because the parton distribution in this region is independent of the properties of the hadron as well as the values of . Compared to the diagram in Fig. 7, Fig. 22 indicates that the calculation of is similar to the calculation of the inclusive cross section due to the Pomeron exchange. The only difference is that the coupling of the hadron with the Pomeron should be substituted by unity, since a hadron always exists in a Pomeron state. If , corresponds to the diagram in Fig.22a, which shows that depends on . Similarly, it is possible to determine the probability of finding several slow partons (Fig. 22b), and even the density matrix of slow partons. In this case the amplitude of elastic hadron-hadron scattering in the center of mass frame is determined by the diagram of fig.23 and the value of is determined solely by the interaction of slow partons.\n\nNow let us consider the quasi-elastic scattering, corresponding to the Pomeron exchange (Figs.24,25) at zero transverse momentum. While the probability to find the parton in hadron ”a” is determined in eq.(8) by the integral of the wave function squared, the analogous quantity for the amplitude of the inelastic diffractive process (Fig.25) will lead to the integral of the product of the parton functions of different hadrons. They are orthogonal to each other and it is almost obvious that amplitude for inelastic diffractive process at zero angle should vanish for this reason. Indeed the orthogonality condition of eq.(6) has the same structure as the imaginary part of the amplitude. Thus, if at high energies the amplitude factorizes (as it should be for the Pomeron exchange), than the orthonormality condition should also have factorized form in the sense that the integral over parton rapidities with factors out, and only constants depend on the properties of specific hadrons (see Fig.26).\n\nOrthogonality of the wave functions of different hadrons implies that at . In fact the reason, why the amplitude of inelastic diffractive process vanishes at zero angle is the same as the reason why all cross sections should approach the same value at high energies. Both phenomena are due to the fact that properties of slow partons do not depend on the properties of hadrons to which they belong. We can illustrate this again using the example of quasi-elastic dissociation of the composite system — e.g. deuteron. Let us consider the interaction of a fast nucleon with a deuteron. As we discussed in the previous section, at very large energies partons from different nucleons will always interact with each other independent of the distance between nucleons. This will lead to production of the spectrum of slow partons which does not depend on the relative distance between nucleons in deuteron. This means that the amplitude of the nucleon-nucleon interaction will not depend on the internucleon distance as well. Thus, if nucleons inside the deuteron will remain intact after the interaction, than the deuteron will not dissociate as well, since if the amplitude does not depend on the internucleon distance, the wave function of the deuteron will not change after the interaction.\n\n## References\n\n• R.Feynman, What neutrinos can tell about partons, Neutrino-72, v.II, p.75, Proceedings Conference, Hungary, June 1972. .\n• J.D.Bjorken, Phys.Rev., 179 (1969), p.1547.\n• S.D.Drell and T.M.Yan, Annals of Phys., 66(1971), p.555\n• V.N.Gribov, Proceedings of Batavia Conference, 1972."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91924936,"math_prob":0.9694566,"size":44614,"snap":"2023-14-2023-23","text_gpt3_token_len":9149,"char_repetition_ratio":0.20076217,"word_repetition_ratio":0.030396773,"special_character_ratio":0.19590263,"punctuation_ratio":0.09835866,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.98393965,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-02T08:30:52Z\",\"WARC-Record-ID\":\"<urn:uuid:819cb864-e4ab-4bc5-9bc8-41d2c1252415>\",\"Content-Length\":\"800020\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e42546f9-84c2-4c1c-9bae-b753a5fb0ec5>\",\"WARC-Concurrent-To\":\"<urn:uuid:90403bab-8c27-42bd-9d4b-0c61cc4a2163>\",\"WARC-IP-Address\":\"172.67.158.169\",\"WARC-Target-URI\":\"https://www.arxiv-vanity.com/papers/hep-ph/0006158/\",\"WARC-Payload-Digest\":\"sha1:MDRUQXNV63SBZCWCY4W6GVC7GLMPGENU\",\"WARC-Block-Digest\":\"sha1:BVIRZCNH3E3JZFYOSYVIASLCNOVBTHLD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224648465.70_warc_CC-MAIN-20230602072202-20230602102202-00103.warc.gz\"}"} |
https://fxsolver.com/browse/formulas/Area+of+rhombus+%28by+diagonals%29 | [
"'\n\n# Area of rhombus (by diagonals)\n\n## Description\n\nRhombus is a simple (non-self-intersecting) quadrilateral whose four sides all have the same length. Every rhombus is a parallelogram, and a rhombus with right angles is a square. Every rhombus has two diagonals connecting pairs of opposite vertices, and two pairs of parallel sides. any rhombus has the following properties: Opposite angles of a rhombus have equal measure. The two diagonals of a rhombus are perpendicular; that is, a rhombus is an orthodiagonal quadrilateral. Its diagonals bisect opposite angles.\nThe Area of Rhombus can be calculated by the half product of its two diagonals.\n\nRelated formulas\n\n## Variables\n\n A Area of rhombus (m2) p Diagonal (m) q Diagonal (m)"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.90588707,"math_prob":0.9537296,"size":708,"snap":"2023-14-2023-23","text_gpt3_token_len":187,"char_repetition_ratio":0.1846591,"word_repetition_ratio":0.0,"special_character_ratio":0.20056497,"punctuation_ratio":0.0952381,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99809843,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-06T09:00:10Z\",\"WARC-Record-ID\":\"<urn:uuid:ce3b62a9-ccf2-4365-9aea-aa9ffe5a7d0a>\",\"Content-Length\":\"15269\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f1c71b7a-78b1-443d-a4b3-d9b4560a2355>\",\"WARC-Concurrent-To\":\"<urn:uuid:e963ab04-8f49-4381-9338-56f71f4a2cc6>\",\"WARC-IP-Address\":\"178.254.54.75\",\"WARC-Target-URI\":\"https://fxsolver.com/browse/formulas/Area+of+rhombus+%28by+diagonals%29\",\"WARC-Payload-Digest\":\"sha1:HGGBQ2HEJEAEEUNVK2LBSEIV4JM6DZHH\",\"WARC-Block-Digest\":\"sha1:B45E65SHMZQQXWXEO4FXLNEIB22H2RKK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224652494.25_warc_CC-MAIN-20230606082037-20230606112037-00048.warc.gz\"}"} |
https://ru.scribd.com/document/306915471/eBk-TPC-4-F-pdf | [
"Вы находитесь на странице: 1из 16\n\n# 1\n\nModules 4, 5, 6 / Topic 4\nEARTH PRESSURES\n\n4.1 Introduction\nA retaining structure, such s a retaining wall, retains (holds in place) soil on one side\n(Fig. 4.1). The lateral pressure exerted by the retained soil on the wall is called earth\npressure. It is necessary for us to quantitatively determine these pressures as they\nconstitute the loading on the wall for which it must be designed both geotechnically and\nstructurally, the former ensuring the various aspects of stability of the wall (stability\nanalysis) and the latter, catering to the structural action induced in the wall by the forces\n(Kurian, 2005: Sec.1.1.2). Since we deal with the limiting values of these pressures, earth\npressures are ultimate problems in Soil Mechanics. This means, at this stage the soil is\nno longer in a state of elastic equilibrium, but has reached the stage of plastic equilibrium.\nIn situations such as the one shown in Fig. 4.2, which involves grading (removal) and\nfilling, the in-situ soil itself may be used as the fill. The soil which thus stays in contact\nwith the wall is called backfill in the sense of being the fill at the back of the wall or a fill\nwhich is put back. However, where we have the choice for a fresh backfill material, we\nwould go in for cohesionless soils of high internal friction (), and permeability (k) to aid\nfast drainage (Kurian, 2005: Sec.6.1).\nA retaining wall permits a backfill with a vertical face. The alternative to a retaining wall\nto secure the sides is to provide a wide slope (Fig. 4.3) as, for example, in road\nembankments, but this needs the availability of adequate land to ensure the desired level\nof stability of slopes, which may not be available in all instances. It may be noted in this\nconnection that sometimes backfills may themselves be laid in slopes to reduce the\nheights of the wall (Fig. 4.4).\nIt is ideal that the water table (free water level in soil) stays below the base of the wall\nwithout allowing it to rise into the backfill, adding to the load on the wall with water\npressure adding to the earth pressure from the submerged soil below the water table\neventualities. However, in situations such as water-front structures (Fig. 4.5), we have to\nreckon with water, since water table will eventually rise in the backfill and attain the same\nlevel as the free water in the front.\nWe shall first consider earth pressure due to dry backfills.\n\n## 4.2 The limiting values of earth pressures\n\n2\nEarth pressures attain their ultimate or limiting values depending on the relative\nmovement of the wall with respect to the backfill.\nThus from a stationary position, if the wall starts moving away from the soil, the\npressure exerted by the soil on the wall starts decreasing until a stage is reached when\nthe pressure reaches its lowest value (Fig.4.6). This means that there will be no further\nreduction in the pressure, if the wall moves further away from the soil. This limiting\npressure is called active earth pressure.\nOn the other hand, if the wall is made to move towards the soil, i.e. the wall pushes\nthe soil, the pressure exerted by the soil on the wall starts increasing until a stage is\nreached when the pressure attains its maximum value (Fig.4.6) and as before, there will\nbe no further increase in the pressure if the wall moves further towards the soil. This\nlimiting pressure is called the passive earth pressure. In the initial at-rest state, the soil\nis in a state of elastic equilibrium. From this state it reaches the states of plastic\nequilibrium at the limiting active and passive states. The initial value of the earth pressure\nmay be called neutral earth pressure or earth pressure at-rest.\nFig. 4.7 shows quantitatively typical at-rest, active and passive earth pressure\ndistributions on the retaining wall. While the active pressure is about 2/3 of the at-rest\nvalue, the passive pressure is nearly 6 times the at-rest value, or 9 times the active value\nin a cohesionless soil with = 300 . Further, in a similar manner, the passive state is\nmobilised at a much higher value of wall movement than the active state. Quantitatively,\nthe lateral wall movements are typically 0.25 % and 3.5 % of the wall height for the active\nand passive pressure conditions to get fully mobilised, respectively (Venkatramaiah,\n2006: Sec. 13.3).\nA word of explanation is due with regard to the names active and passive. In the active\ncase, soil is the actuating element the movement of which leading to the active condition.\nIn the passive case, the actuating element is the wall leading the soil to a passive state\nof resistance against the approaching wall (Venkatramaiah, 2006: Sec. 13.3).\n\n## 4.3 Determination of earth pressures by earth pressure theories\n\nEarth pressures are determined by earth pressure theories. The two basic theories\navailable for this purpose are the Rankines theory and the Coulombs theory. Of the two,\nthe Coulombs theory is the older one; we shall, however, take up the Rankines theory\nfirst because of its theoretical form.\nHowever, before setting out on the above theories of limiting earth pressures, it is\nnecessary for us to look at earth pressure at-rest which should be treated as a starting\ncase. The soil being in elastic equilibrium at this stage, we should be able to proceed\nwith it based on theory of elasticity considerations.\n\n## 4.4 Earth pressure at rest\n\nFig. 4.8 shows an element of soil at a depth z in a semi-infinite soil mass. (Semiinfinite means the mass extends in the +x, -x, +y, -y directions, but only in the +z\n(downward) directions, all to infinity. If it extends equally also in the z direction (upwards)\nit would have made a fully infinite space.) The vertical and horizontal stresses in the\nelement are shown. The element can deform (undergo strain) in the vertical direction\nonly since the soil extends to infinity in the horizontal directions. Let the modulus of\nelasticity and Poissons ratio of the soil be and respectively. Setting the lateral strain,\nobtained from theory of elasticity to 0,\n\n-(\n\n)=0\n\n(4.1)\n\nMultiplying by\n\n- ( + ) = 0\n(1- ) =\n\n= 1\n\n(4.2)\n\n= .\nTherefore\nIf (\n\n= (\n\n) is denoted as\n\n) .\n\n## 0 and named coefficient of earth pressure at-rest, we can\n\nset\n\n= 0 .\n\n(4.3)\n\n0 being a constant, it is noted that also increases linearly with depth as itself,\nstarting with 0 at the surface (z = 0)\nIf we now revert to Topic 1, it is seen that the 0 - relationship is of the same form as\nthe e-n relationship. Hence 0 will plot against as in Fig. 1.3.\n\n4\nWe note that 0 = 0 when = 0, a condition giving rise to 0 horizontal pressure.\nFurther setting 0 = 1,\n\n=1\n\n= 1-\n2 = 1\n\n1\n2\n\nAt this value of , = = .\nor in other words, and will plot identically with depth. When varies from 0 to 0.5,\nwill vary as increasing fractions of , as can be noted from Fig. 4.9.\nBecause of the difficulty in determining of a soil reliably, various empirical formulae\nhave been suggested among which the one attributed to Jaky (1944) is an early favourite.\nIt states:\n\n0 = 1\n\n(4.4)\n\nFig 4.10 plots 0 against . It bears comparison with Fig. 16 (Kurian, 2005: Sec. 6.4.1)\nfor which it is plotted till = 900 . It is noted from Figs. 4.9 and 4.10 that 0 increases with\n, but decreases with . At = 0, applying to water, 0 = 1, following which = .\nOn the other hand, at = 900 , applying to rock, 0 = = 0\n\n4.5 Rankines theory for active and passive earth pressures (1857)\nBefore we take up Rankines theory of earth pressure, we shall try to establish\nanalytically the relationship between 1 3 , the principal stresses, based on the\nMohr-Coulomb failure theory (Fig.4.11).\nIn the figure,\nCA = CD =\n\n1 3\n2\n\nOC = OA+AC = 3\nEO = c cot\nCD = EC sin\n\n1\n2\n\n3\n2\n\n3 +1\n2\n\n5\n1 3\n\ni.e.,\n\n=(\n\n1 +3\n2\n\n+ )\n\nMultiplying by 2\n\n1 3 = (1 + 3 + 2 )\n= (1 + 3 + 2\n1 (1 ) = 3 (1 + ) + 2\n1 = 3 (\nSimilarly,\n\n3 = 1 (\n\nTrignometrically\n\n1\n1+\n1+\n1\n\n1+\n\n1\n\n1+\n\n) + 2\n1\n\n1\n1+\n\n) 2\n\n= 2 (45\n\n1\n\n1+\n\n= 2 (45 + )\n2\n\n= (45 + )\n2\n\n= (45 + )\n2\n\n## Hence we can state\n\n3 = 1 2 (45 ) 2 (45 )\n2\n2\n1 = 3 2 (45 + ) + 2 (45 + )\n2\n2\n\n(4.5)\n(4.6)\n\nReferring to Fig. 4.11, one may look upon the c- case as the -case with the origin\nshifting from E to O.\n\n## 4.5.1 Rankines expressions for active and passive earth pressures\n\nIn Fig. 4.12 let OA represent the vertical (principal) stress. Mohrs circles I and II are\ndrawn on either side of A without gap. In case I the soil is laterally relieved leading to\nreduction in reaching the limiting active value at failure. In case II the soil is\npushed into itself and reaches the limiting passive value of at failure.\n\n6\nUsing Eqs. (4.5) and (4.6) we can now state:\n\n= 2 (45 ) 2 (45 )\n2\n2\n= 2 (45 + ) + 2 (45 + )\n2\n2\n\n(4.7)\n(4.8)\n\n2\n\n## 2 (45 + ) = ,the coefficient of passive earth pressure,\n\n2\nand further = , and, therefore, =\n\n## and being reciprocals of each other,\n\nwe can further state,\n\n= .\n\n= . + 2\n\n(4.9)\n(4.10)\n\n## In the case of a cohesionless soil, with c = 0,\n\n(4.11)\n(4.12)\n\nOn the other hand, in the case of an ideally cohesive soil for which is 0,\n\n= = 1, we have\n= 2\n\n(4.13)\n\n= + 2\n\n(4.14)\n\n## It may be noted that if c also = 0 (case of water) we get = = = .\n\nNote further that, since the plane of failure is inclined at = (45 + ), it follows\n2\n\nthat = , =\n\nThe above results pertaining to and c-soils can be directly obtained from the respective\nMohrs circles as shown in Figs. 4.13 and 4.14.\n\n## 4.5.2 Failure planes\n\nThe failure plane in the active state is inclined at = (45 + ) to the horizontal. If the\n2\n\nfull Mohrs circle is drawn, the potential failure planes are as shown in Fig. 4.15\n(Venkatramaiah, 2006: Sec.13.6.1). In the passive case, like in the active case, the failure\nplane should be reckoned from the point . It can be identified that the failure planes at\n\npassive failure are inclined at (45 ) to the horizontal. (The arcs of the Mohrs circles\n2\n\nsubtending these angles are highlighted in Fig. 4.12.) The picture is the same for -soil.\nIn the case of the c-soil, these planes are inclined at 450 to the horizontal.\n\n## 4.5.3 Variation of active and passive earth pressure coefficients\n\nIt follows from the Rankines theory that the higher the , the higher the shear strength,\nthe lesser the active pressure and the higher the passive pressure.\nIt is interesting to note that the Rankines theory for earth pressure developed for soil\ncan be extended to water ( = 0) on the one hand and rock ( =900 ), on the other.\nWhen\nTherefore,\n\n= 0, = = = 1\n= = 0 = = =\n\n## This is the hydrostatic pressure condition, applying to water.\n\nIf increases, decreases and increases. The latter increases much faster than the\nformer decreases, until we reach = 900 at which = 0 and =\n\n. As a result, =\n\n, = 0 and = .The variations of and , and also their square roots, with are\nshown in Fig. 4.16.\n\n## 4.5.4 Plots of and\n\nc- case\n\n8\nSince c and are constants, the first part of and , as per Eqs. (4.9) and (4.10)\nplot linearly like , but the second parts are constants. Fig 4.17 shows the sum of these\neffects. (Note that when two plots are to be added they should be drawn on opposite\nsides, whereas if one is to be subtracted from the other they should be drawn on the same\nside.)\nIt is observed from Eqs. (4.9) and (4.10) that is decreased and is increased on\naccount of the contribution of c. As a result of the subtraction, Fig 4.16a shows a tensile\nzone to a depth z which can be determined by setting,\n\n. Therefore =\n\n(4.15)\n\nSince soil cannot exist in a state of tension, it is likely that it breaks contact with the\nsupport over this depth (Kurian, 2005: Sec. 8.8).\nc-case\nFig. 4.18 shows the active and passive pressure variations in the c-case.\nTo obtain the depth z at which the net pressure is 0,\nSetting\n\n## 4.5.5 Effect of surcharge on the backfill\n\nThere are instances such as in port and harbour structures where the backfill is\nsubjected to heavy surcharges such as due to supporting roads, railway tracks and heavy\nstationary equipments. Like any other vertical load such as the self weight of the backfill,\nthese surcharges add to the lateral pressure on the wall the effect of which must be taken\ninto account in its design.\nIn order to consider the influence of the surcharge, its effect is reduced to an equivalent\ndownward pressure, q per unit area, Fig. 4.19.\nThe lateral active pressure due to surcharge is q which is uniform with depth, q and\nbeing constants. To this will be added the active earth pressure as shown in the figure.\nThe same figure can be obtained by converting the surcharge pressure q as an\nequivalent additional height (h) of the backfill which is obtained by setting\n\nh = q, from which h = ( )\n\n(4.16)\n\n9\nThe pressure diagram on the wall alone for the full height of the backfill including the\nadditional height is the same as the earlier pressure diagram as shown in the figure.\n\n## 4.5.6 Earth pressure due to layered backfills\n\nRankines theory can easily accommodate layered backfills, if the layers concerned\nare horizontal.\nLet us consider the example shown in Fig. 4.20.\n\n= 1 1\nB\n\n(4.17)\n\nB\nat +\n= 1 1\n\n1\n2\n\nat C = (1 1 + 2 2 )\n\n(4.18)\n\n2\n1\n\n(4.19)\n\nThe above means that there is an immediate transition at B thanks to the difference in the\n\nshear strength parameters of layers I and II. As a result it is seen that at + undergoes\na sudden decrease thanks to the presence of c, being the same in the present case.\n\n## applies to a point in layer I lying infinitesimally above\n\nTheoretically speaking, at B\nB\n\npoint B, whereas at point + applies to a point in layer II lying infinitesimally below point\nB. If one asks what is its value exactly at point B, the theoretical answer is, it is not the\n\naverage of B\n+\n\n## 4.5.7 Earth pressure due to submerged backfills\n\nIf the backfill is submerged fully or partially, i.e. to full height or partial height, there is\na continuous body of water running through the pore space in the soil below the water\ntable. The water over this height will exert full hydrostatic pressure on the wall. To this\nwill be added the pressure due to submerged soil over this depth and the dry soil above\n(Fig. 4.21).\nWhile submergence causes a reduction in the unit weight of the soil, the shear strength\nparameters c and remain unchanged.\nSubmerged unit weight (Kurian, 2005: Sec.2.7.1)\nThe submerged weight of a continuous (i.e. non-porous) body is its weight in air\nsubtracted by the weight of water displaced by the body. In other words,\n\n10\nsubmerged weight of an object = weight in air of the object - weight of a body of water\nhaving the same volume as the object.\nThat is to say, ws = w v.\n\n(4.20)\n\nUnit weight is the weight per unit volume. In submerged unit weight sub of the soil we\nare concerned with is the weight of the solid particles in the soil in a unit volume which\nare in a state of submergence.\nFig. 4.22 represents a unit volume in which\n\n## sub = weight of solids weight of an equal volume of water.\n\nIn order to simplify calculation, we add to both the parts on the R.H.S. a constant which\nis the water to fill the pore space. The constant being the same, the result is, we have\nsaturated unit weight as the first term and unit weight of water as the second term on the\nR.H.S. The final result is the familiar result,\n\n= - .\n\n(4.21)\n\n## Interestingly enough, this follows Eq.(4.20) with in place of w, as it should,\n\nrepresenting the whole body.\n\n## 4.5.8 Combined pressures (Kurian, 2005: Sec.6.4.2)\n\nWhat follows is an important matter which every student/geotechnical engineer should\nclearly understand, appreciate and assimilate.\nIf we take the unit weight of dry soil as 15 kN/m3 and = 300 (c = 0), Ka =\n\n1\n3\n\nand\n\ntherefore the active earth pressure at any depth h m = 5h kN/m2 . being 10 kN/m3 ,\nthe water pressure at the same depth = 10 h kN/m2 , which is twice the value of the active\nearth pressure. (This is important since many, at least among the lay public, may tend to\nassume that water being thinner, the corresponding pressure is also lower!)\nWhen the backfill is saturated,\n\n= + = ( + ) = .\nbut ( ). ,\nbut = ( ) + .\n\n11\nThe importance of the above result is illustrated in Fig. 4.23. For a case where =\nand = 300 , it is seen that, while the active pressure intensity is only 1/3 of the water\npressure, the passive pressure intensity is 3 times the water pressure or 9 times the active\nsoil pressure. (The first of the above statements means that water pressure is three times\nthe active pressure due to submerged soil, which we noted above as twice the active\n\npressure due to dry soil. Further if = , it follows that the active pressure due to dry\n2\n\nsoil is twice the same due to submerged soil.) It is obvious from the figure that walls\ndesigned for active soil pressure are unsafe if the soil is allowed to get saturated!!\n\n## 4.5.9 Need for retention (Kurian, 2005: Sec.6.4.1)\n\nFig. 4.24 draws attention to the need for retention in water, soil and intact rock.\nSince water has no shear strength, its surface must always remain horizontal;\ntherefore water must be fully retained.\nOn the other extreme, if we treat intact rock as a medium with = 900 , its sides can\nremain vertical, calling for no support since Ka = 0.\nBecause of its shear strength, soil can remain in a slope. This means that only the fill\nplaced over this slope, which is needed to maintain a horizontal surface, requires support,\nwhich therefore may be described as partial. This is, however, a qualitative statement as\nthe next section will show that the active pressure on the retaining wall is not exactly due\nto such a wedge.\n\n## 4.6 The Coulombs theory of earth pressure (1776)\n\nThe earth pressure theory propounded by Coulomb involves the consideration of a\ncritical wedge in the backfill adjoining the retaining wall the failure of which by shear at\nthe interface with the intact backfill and the wall gives rise to the active and passive failure\nconditions. It involves the mechanical analysis of trial wedges for equilibrium at the stage\nof incipient or imminent failure by shear in the above manner (Fig4.25). It involves a\ngeometrical trial and error approach and therefore more tedious than the theoretical\napproach followed by Rankine.\n\n## 4.6.1 Coulombs method for the determination of active earth pressure\n\nLet us take the general case of a retaining structure with an inclined back face,\nsupporting and inclined backfill in a c - soil (Fig. 4.26a). At the wall-soil interface we\nassume an angle of wall friction . The analysis is per unit length of the wall which makes\nit a purely 2D_case.\n\n12\nLet us start with a trial wedge of the soil ABC defined by the angle at which rises\nthe soil face of the wedge. The wedge slides downwards because of the lateral yield of\nthe retaining wall away from the backfill. We will now examine the forces keeping this\nwedge in equilibrium at the time failure just starts, or what we call the stage of incipient\nfailure. For this we need the forces acting on the wedge at this stage which include its\nown weight and the reactive forces from the wall and the intact backfill.\nFirst let us take the self weight of the wedge W acting through its centre of gravity. (W\n= ABC x 1 x , where is the unit weight of the backfill soil. (Note that weight is an\nexternal force, being the force with which earth attracts the mass of the wedge.) On the\nface AC let us take a small elemental width. This elemental width multiplied by the unit\nlength is the elemental area we are considering. Since the wedge has failed in shear\nalong AC, shear strength is fully mobilised given by s = c+ tan, acting on the wedge\nin the direction AC. C acting on the elemental area will add up as C over the full width\nAC and unit length as C = c x AC x 1. The same applies to which adds up as N =\nx AC x 1. The component of shear strength contributed by N is N tan . Thus on AC we\nhave three forces acting on the wedge which are C, N and N tan. We have to find their\nresultant for which we keep C separately and find the resultant of N and N tan . If we\nexamine the resolution of forces, we can easily identify that the resultant of N and N tan,\nwhich we shall call R, will be inclined at to N. (Note that c and are pressures\n(intensity terms acting per unit area) which do not resolve. Only forces resolve and hence\nwe have to necessarily multiply the pressures by areas to obtain the forces.)\nOn the face AB, since the wedge moves downwards, we have over the length AB\nreactive forces with a normal component N and tangential component N tan, where\nis the angle of wall friction. (It is typically taken as\n\n2\n3\n\n## .) Their resultant P is inclined at\n\nto N. P is our unknown of which we want to determine the value by drawing the polygon\nof forces.\nOf the forces mentioned above, we know the magnitude and direction of W and C.\nHowever, the values of N and N and hence also their shear components N tan and\nN tan are not known. But we know the direction of their resultants R and P, which is\nsufficient for us to proceed with the force polygon. (Note that for drawing the force\npolygon we do not have to know the point of application of the forces.)\nTo draw the force polygon, we start with W (ab in Fig. 4.26b). At b we draw lines\nparallel to AC and mark C at bc. At c we draw a line parallel to R and at a, a line parallel\nto P. They intersect at d. ad gives the value of P which is our unknown. (Note that in the\nsame way cd gives the value of R, which we, however, do not need.) All the forces we\ndealt with in the above are forces acting on the wedge. Our concern is essentially the\n\n13\naction of the wedge on the wall and this is equal in magnitude and opposite in direction\nto P obtained as above.\nNow for different values of , we have to repeat the above work and determine the\ncorresponding values of P. We now make a plot of the values of P so obtained against\n(Fig. 4.26c). Join the points so obtained by a smooth curve and by drawing a horizontal\nline (i.e. parallel to the - axis) touching the curve tangentially we determine the highest\nvalue of P which is the active thrust . We can note the corresponding value of which\ngives us the critical wedge causing the active thrust . (Note that we cannot go by the\nhighest value of P from among the individual results obtained. A curve must necessarily\nbe drawn because the peak may generally lie between two values and need not coincide\nwith any single value.). The Pa that we have determined is the reaction of the wall on the\nwedge. The action of the wedge on the wall which we are investigating is a force of the\nsame magnitude of Pa, acting exactly in the opposite direction. It is this action on the wall\nthat we need for the design of the wall.\nReverting to the trial wedges, we can look upon the picture as the weight of the wedge\nacting downwards, which we have noted as the force with which earth attracts the mass\nof the wedge, being held back by the forces C, R and P.\nIf we want to make the picture more general by adding an adhesion component a at\nthe wall-backfill interface, a total tangential force A is generated in the direction AB, which\nmust be entered at point c at the end of which is to be drawn the line parallel to R. (Note\nthat adhesion at the wall-soil interface is similar to c at the backfill-backfill interface, i.e.\nwithin the soil. In other words, a and at the wall-soil interface correspond to c and\nwithin the soil. And just like in the case of c and , shear strength at the interface can be\nwritten as,\n\ns = a + tan\n\n(4.22)\n\nIn the case of the -soil (c = 0) the only difference is that C (and A) do not appear. In\nthe c-case ( = 0), on the other hand, we have to deal with only W, C (and A), N and P.\nAs regards the influence of the parameters, the higher the values of c, , a and ,\nthe lesser the value of Pa as can be identified from the force polygon. This picture will\nreverse sign when we come to the passive case.\n\n## 4.6.2 Coulombs method for the determination of passive earth pressure\n\nThe point of departure in the passive case is, since the wall pushes the soil, the wedge\nmoves upwards causing shear failure along AB and AC. This causes reversal in the\ndirection of the forces C, N tan and P tan (Fig. 4.27a).\n\n14\nThe polygon of forces (Fig. 4.27b) starts with W marked as ab. bc represents C. At c\na line is drawn parallel to R and at a, a line parallel to P. They intersect at d; ad gives the\nvalue of P corresponding to this trial wedge. The P values so obtained from several trial\nwedges are plotted against (Fig 4.27c) and the minimum value so obtained by drawing\na horizontal line (i.e. parallel to the - axis) tangential to the curve, gives Pp.\nIt is important to note that Ka which gives the minimum value of earth pressure is\nobtained as a maximum in Fig. 4.26c, and Kp which gives the maximum value of earth\npressure is obtained as the minimum value in Fig.4.27c, both being optimum values.\n\n## 4.7 Comparison between Rankines and Coulombs theories of earth\n\npressure\nA fundamental difference between the two theories is that, while Rankines theory\ngives pressure distribution, Coulombs theory gives only total thrust. One can of course\nobtain distribution from the latter, by assuming the nature of variation, such as lineartriangular.\nRankines theory, though theoretically elegant, has several limitations as it goes by the\nconcept of principal stresses, without even recognising the presence of the wall. Hence\nit cannot take adhesion and wall friction into account, leading to conservative values of\nthe active earth pressure. It can be extended to backfills with single slopes, but an\ninclined wall-backfill interface is difficult to accommodate.\nCoulombs theory, on the other hand, is more versatile as it can accommodate wall\nwith inclined interface, sloped backfills with even more slopes than one, with practically\nthe same ease as vertical wall with horizontal backfill. It can also account for wall friction\nand adhesion leading to more realistic results. However, being a geometrical trial and\nerror approach, it is certainly more tedious and time consuming unlike Rankines theory,\nthe expressions from which can be programmed and put on computer to yield fast results.\n(While on this issue, it may be added that, trial wedge approach can also be programmed\non the computer for obtaining faster results.)\nIt is now time to show that both Rankines theory and Coulombs theory give the same\nresults for the basic case, as shown below.\nLet us consider the basic active case of a retaining structure with a vertical face and\nno wall friction, supporting a -soil with a horizontal surface. Consider a trial wedge which\nmakes an angle with the horizontal (Fig.4.28). Angle BAC is therefore (90-) =, say.\nThe force polygon (triangle of forces) consists of P, W and R.\nFrom the triangle of forces,\n\n15\n\n= 90 ( + )\n\nTherefore,\n\n(+)\n\nW = . . = 2\nRewriting,\n\nFor P to a maximum,\ni.e.,\ni.e.,\n\ni.e.,\n\nP=\n\n1 2\n\n2\n\n(4.23)\n\n(+)\n\n=0\n\ntan( + ) x 2 . 2 ( + ) = 0\n(+)\n(+)x 2\n\n.2 (+)\n\n(+) (+)\n2 (+)2\n\n=0\n\n=0\n\n2 ( + ) 2 = 0\nsin2( + ) = 2\n2 = 90\n= 45\nTherefore\n\n= 45 + ,\n2\n\n(4.24)\n\nwhich defines the critical failure plane. It is noted that the result is the same as obtained\nfrom the Rankines theory.\nSubstituting so obtained in the expression for P (Eq.4.23), we get,\n1\n\n= 2\n2\n\n=\n2\n\n(+)\n\n(45 2 )\n\n(45+ 2 )\n\n16\n\n=\n2\n\n=\n=\n\n(45 2 )\n\n(90(45+ 2 ))\n\n## 2 (45 ) x (90 (45 + ))\n\n2\n2\n2\n\n2 (45 ) x (45 )\n2\n2\n2\n1\n\n= 2 2 (45 )\n2\n2\n1\n\n1+\n\n= 2 (\n\n(4.25)\n\nwhich is the same result as obtained from Rankines theory, as per Eq. (4.11).\nIt is indeed interesting to observe how both the theories converge to the same result\nin this case which establishes the soundness of both the approaches.\n\n4.8 Conclusion\nWe may close with a significant question in relation to active earth pressure which has\na bearing on design. The question pertains to the assumption of active pressure which\nhas the potential of making the design based on it unsafe!\nActive pressure being the lowest value of pressure, it is logical to ask, have we taken\nany steps in the design to ensure active conditions to develop? The significance of the\nquestion is that, if active conditions are not developing, the pressures are higher, and the\nretaining wall designed for active pressure will be inadequate, and in the limit, even\nunsafe!\nThe answer to this question is in fact simple. The design of the wall implies an\nassumption that if the wall is designed for active pressure, the same being the lowest\npressure, the design of the wall will be the thinnest, and since it is thin, it will deflect\nenough (Fig. 4.29) resulting in active conditions and the corresponding active pressures\nto develop, considering especially the low value of deformation needed to mobilise the\nactive condition (Kurian, 2005: Sec.12.1)."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9192125,"math_prob":0.9850224,"size":18918,"snap":"2019-51-2020-05","text_gpt3_token_len":4970,"char_repetition_ratio":0.15766099,"word_repetition_ratio":0.025085518,"special_character_ratio":0.2639814,"punctuation_ratio":0.12449393,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9907764,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-22T04:32:25Z\",\"WARC-Record-ID\":\"<urn:uuid:bc232681-b575-4176-ba27-b9aab8163eee>\",\"Content-Length\":\"412916\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3edac0a4-1985-4c33-b82a-fc9c4f5bfe76>\",\"WARC-Concurrent-To\":\"<urn:uuid:639f8493-6ef0-471a-a888-619df0f8ecc9>\",\"WARC-IP-Address\":\"151.101.250.152\",\"WARC-Target-URI\":\"https://ru.scribd.com/document/306915471/eBk-TPC-4-F-pdf\",\"WARC-Payload-Digest\":\"sha1:FV2XFHADALGPN3HWVE3UTEM2FRTKEO3G\",\"WARC-Block-Digest\":\"sha1:X22B2PNEH42RGPSYONPM3M2NNNUNYWGU\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250606696.26_warc_CC-MAIN-20200122042145-20200122071145-00508.warc.gz\"}"} |
https://patents.justia.com/patent/7843972 | [
"# OFDM signal transmission method, and OFDM signal transmitter/receiver\n\n- Panasonic\n\nIn an OFDM transmission method, in order to compensate any frequency response variation time wise resulted from any distortion in a transmission path, out-of-synchronization with passage of time, frequency drift, and phase shift, and to improve a demodulation characteristic, a PS detector in a receiver receiving an OFDM signal detects a pilot symbol. A PS1 TPFR calculator calculates a frequency response of the transmission path for a first pilot symbol, while a PS2 TPFR calculator calculates a frequency response of the transmission path for a second pilot symbol. Thereafter, a compensation vector calculator calculates compensation vectors from the frequency responses of the transmission path for both the first and second pilot symbols by linear approximation. A frequency response compensator compensates the frequency response variation of subcarriers in a data symbol based on the calculated compensation vectors.\n\n## Latest Panasonic Patents:\n\nDescription\n\nThis application is a divisional application of Ser. No. 09/627,781, filed Jul. 28, 2000 now U.S. Pat. No. 7,027,464.\n\nBACKGROUND OF THE INVENTION\n\n1. Field of the Invention\n\nThe present invention relates to an OFDM (Orthogonal Frequency Division Multiplexing) transmission method, and more specifically to a scheme for transmitting data via a wired or wireless transmission path by using an OFDM signal, and a transmitter/receiver therefor.\n\n2. Description of the Background Art\n\nIn an OFDM signal transmission scheme, a demodulation characteristic deteriorates due to any distortion observed in a transmission path, out-of-synchronization after passage of time, frequency drift between a transmission side and a reception side, amplitude and phase errors resulted from phase noise in a local oscillator provided in a receiver, and the like. Such error factors provoking the demodulation characteristic to deteriorate are hereinafter referred to as frequency response variation.\n\nIn the OFDM signal transmission scheme, for synchronization with a receiver, a transmitter often inserts one or more preambles into a signal before data transmission. The preamble is longer than one symbol time wise, and therewith, a frequency response of the transmission path can be correctly estimated. The more preambles lead to the higher accuracy in estimating the frequency response, but the transmission speed shows a considerable drop.\n\nTherefore, as is disclosed in Japanese Patent Laying-Open No. 8-265293 (96-265293), interposing one or more pilot carriers in between data carriers in a data symbol is conventionally popular.\n\nThe OFDM signal is structured by a plurality of equal-length symbols, each of which includes several subcarriers. The above-mentioned data carrier and pilot carrier are both subcarriers. In the above prior art, a phase error of the pilot carriers included in the data symbol is detected for every data symbol for compensation.\n\nSuch prior art, however, bares a problem in an environment where any higher-level noise is observed in the transmission path or a multi-path fading environment. Accordingly, the fewer number of pilot carriers per symbol, the lower the accuracy of phase error detection becomes. Although the more number of pilot carriers surely achieve the higher accuracy thereof, the occupied frequency bandwidth becomes wider, and the transmission speed considerably drops. Furthermore, it is also difficult to compensate for the amplitude error caused by any distortion observed in the transmission path.\n\nSUMMARY OF THE INVENTION\n\nTherefore, an object of the present invention is to provide an OFDM signal transmission scheme (method) and a transmitter/receiver therefor. With the OFDM signal transmission scheme, even in an environment where any higher-level noise is observed in the transmission path or a multipath fading environment, any frequency response variation of the transmission path, which is caused by any one or more of distortion observed in the transmission path, out-of-synchronization after passage of time, frequency drift between transmission and reception sides, and a residual phase error, is accurately compensated for with respect to every subcarrier included in a symbol without dropping the transmission speed. Further, with such scheme, the OFDM signal is transmitted with a lower error rate.\n\nThe present invention has the following features to attain the object above.\n\nA first aspect of the present invention is directed to a scheme for transmitting an OFDM signal from a transmission side to a reception side, wherein\n\nthe OFDM signal includes both a data symbol having data therein, and a pilot symbol having a frequency component predetermined in amplitude and phase,\n\non the transmission side, the pilot symbol is inserted before or after one or more the data symbols, and is transmitted together with one or more the data symbols, and\n\non the reception side, the received pilot symbol is utilized for compensating for a frequency response variation of a transmission path resulted from any one or more of distortion observed in the transmission path, out-of-synchronization with passage of time, frequency drift, and residual phase error.\n\nAs described above, in the first aspect, a pilot symbol having a predetermined frequency component predetermined in amplitude and phase is inserted between data symbols with a predetermined interval on the transmission side. On the reception side, a frequency response of a transmission path is accurately estimated by using those pilot symbols. By utilizing the estimated frequency response and a difference in frequency response between any two pilot symbols away from each other for a predetermined number of data symbols, a frequency response variation of the data symbols between the pilot symbols is compensated. In this manner, the data symbols can be correctly demodulated even in the multipath fading environment or high-level-noise environment.\n\nAccording to a second aspect, in the first aspect,\n\nevery subcarrier included in the pilot symbol is a pilot carrier predetermined in amplitude and phase.\n\nAs described above, in the second aspect, the symbol length remains the same regardless of the number of subcarriers included therein. Accordingly, transmission speed does not drop even if one symbol wholly includes the subcarriers, and thus an OFDM signal transmission scheme in which the phase error is corrected with higher accuracy can be implemented.\n\nAccording to a third aspect, in the first aspect,\n\nthe pilot symbol is plurally and sequentially inserted before or after one or more of the data symbols.\n\nAs described above, in the third aspect, with the pilot symbol inserted plurally in a row, the frequency response of the transmission path can be estimated with higher accuracy on the reception side. Therefore, the data symbols can be correctly demodulated even in the multipath fading environment or high-level-noise environment.\n\nAccording to a fourth aspect, in the first aspect,\n\nthe pilot symbol is periodically inserted before or after one or more of the data symbols.\n\nAs described above, in the fourth aspect, with the pilot symbol periodically inserted, the temporal location of the pilot symbol can be easily detected when received.\n\nAccording to a fifth aspect, in the first aspect,\n\nthe pilot symbol is non-periodically inserted before or after one or more of the data symbols.\n\nAs described above, in the fifth aspect, when the pilot symbol is inserted non-periodically or with an irregular interval, the insertion interval is determined depending on how speedy the transmission path changes in state.\n\nAccording to a sixth aspect, in the fifth aspect,\n\non the transmission side, the pilot symbol is adaptively changed in frequency and number for insertion depending on a state of the transmission path.\n\nAs described above, in the sixth aspect, by adaptively changing the pilot symbol in frequency and number for insertion depending on what state the transmission path is in, the transmission efficiency can be improved.\n\nAccording to a seventh aspect, in the fifth aspect,\n\non the transmission side, the OFDM signal is provided with control information indicating how often and how many pilot symbols are inserted.\n\nAs described above, in the seventh aspect, by providing the transmission signal with the control information telling how often and how many pilot symbols are to be inserted in between the data symbols, the pilot symbol and the data symbol are discriminated from each other based on the control information at demodulation on the reception side.\n\nAccording to an eighth aspect, in the first aspect,\n\nthe frequency response variation of the transmission path is compensated for by using a compensation vector calculated, as a time series linear approximation, from a difference in frequency response between any two pilot symbols closest to each other.\n\nAs described above, in the eighth aspect, the frequency response variation of the data symbols between the pilot symbols is compensated for by linear approximation. In this manner, the phase shift between the pilot symbols caused by the frequency drift becomes linear with time. Therefore, compensation can be linearly done with accuracy. Further, with a proper interval of inserting the pilot symbols, the frequency response of the transmission path also becomes linear, allowing compensation to be correctly and linearly done.\n\nAccording to a ninth aspect, in the first aspect,\n\nthe frequency response variation of the transmission path resulted from either one or both of the frequency drift and the residual phase error is compensated for by using a value calculated, as a time series linear approximation, from a difference in phase between any two pilot symbols closest to each other.\n\nAs described above, in the ninth aspect, the phase error of the data symbols between the pilot symbols is compensated for by linear approximation. In this manner, the phase shift caused by the frequency drift becomes linear with time, and thus compensation can be linearly done with accuracy.\n\nAccording to a tenth aspect, in the first aspect,\n\nthe frequency response variation of the transmission path is compensated for by using an average value taken for a phase error among pilot carriers in the pilot symbol.\n\nAs described above, in the tenth aspect, by averaging the phase error of the received pilot carriers, such OFDM signal transmission scheme that the phase error is corrected with higher accuracy can be implemented.\n\nAccording to an eleventh aspect, in the tenth aspect,\n\nthe average value is calculated by weighing each amplitude value for the pilot carriers.\n\nAs described above, in the eleventh aspect, by calculating an average value after weighing each carrier in the received pilot symbol according to its amplitude, such OFDM transmission scheme that the phase error can be corrected with higher accuracy can be implemented even if the received signal distorts in the transmission path and by noise.\n\nA twelfth aspect of the present invention is directed to an OFDM signal transmitter for transmitting an OFDM signal towards a reception side, comprising:\n\na data symbol generator for generating an OFDM data symbol after inputting data for transmission;\n\na pilot symbol generator for generating an OFDM pilot symbol; and\n\na symbol selector for switching between signals provided by the data symbol generator and the pilot symbol generator so that the pilot symbol is inserted before or after one or more of the data symbols.\n\nAs described above, in the twelfth aspect, a transmitter inserts a pilot symbol having a predetermined frequency component predetermined in amplitude and phase between data symbols with a predetermined interval. On the reception side, a frequency response variation of the data symbols is accurately estimated by using those pilot symbols. In this manner, the data symbols can be correctly demodulated even in the multipath fading environment or high-level-noise environment.\n\nAccording to a thirteenth aspect, in the twelfth aspect,\n\n• the data symbol generator comprises:\n• a frequency-domain data symbol generator for generating a frequency-domain data symbol after inputting data for transmission; and\n• an inverse Fourier transformer for subjecting a signal provided by the frequency-domain data symbol generator to inverse Fourier transform, and\n• the pilot symbol generator comprises:\n• a frequency-domain pilot symbol generator for generating a frequency-domain pilot symbol; and\n• an inverse Fourier transformer for subjecting a signal provided by the frequency-domain pilot symbol generator to inverse Fourier transform.\n\nAs described above, in the thirteenth aspect, the transmitter generates a signal having a predetermined frequency component predetermined in amplitude and phase and data symbols as a frequency-domain signal, and then subjects the signal to inverse Fourier transform. In this manner, it becomes possible to generate an OFDM signal in a simplified structure, and accordingly, the data symbols can be transmitted with such simplified structure even in the multipath fading environment or the high-level-noise environment.\n\nA fourteenth aspect of the present invention is directed to an OFDM signal receiver for receiving, from a transmission side, an OFDM signal including a data symbol having data therein, and a pilot symbol having a frequency component predetermined in amplitude and phase and being inserted before or after one or more of the data symbols, the receiver comprising:\n\na Fourier transformer for subjecting the received OFDM signal to Fourier transform;\n\na transmission path frequency response compensator for detecting the pilot symbol from a signal provided by the Fourier transformer, and with respect to the signal, compensating for a frequency response variation of a transmission path; and\n\na demodulator for receiving the signal compensated with the frequency response variation of the transmission path, and demodulating the signal for output as demodulated data.\n\nAs described above, in the fourteenth aspect, on the transmission side, a pilot symbol having a predetermined frequency component predetermined in amplitude and phase is inserted between data symbols with a predetermined interval. On the reception side, a frequency response variation is accurately estimated by using those pilot symbols. In this manner, the data symbols can be correctly demodulated even in the multipath fading environment or high-level-noise environment.\n\nAccording to a fifteenth aspect, in the fourteenth aspect,\n\nthe transmission path frequency response compensator calculates a compensation vector for compensation, by referring to a frequency response of a pilot symbol, a frequency response of another pilot symbol closest thereto, and a frequency response of a reference pilot symbol provided on a reception side, so that a frequency response of the received data symbol corresponds to that of the reference pilot symbol.\n\nAs described above, in the fifteenth aspect, on the transmission side, a pilot symbol having a predetermined frequency component predetermined in amplitude and phase is inserted between data symbols with a predetermined interval. On the reception side, a frequency response variation of the transmission path is accurately estimated by using those pilot symbols. The estimated frequency response is compared with a difference in frequency response between any two pilot symbols away from each other for a predetermined number of data symbols. By referring to the difference, a frequency response variation of the data symbols interposed between the pilot symbols is compensated, allowing the data symbols to be demodulated correctly even in the multipath fading environment or the high-level-noise environment,\n\nAccording to a sixteenth aspect, in the fifteenth aspect,\n\nthe compensation vector is calculated for every subcarrier included in the received data symbol by using every pilot carrier included in each of the pilot symbols.\n\nAs described above, in the sixteenth aspect, the compensation vector is calculated for each of the subcarriers. Therefore, even if the receiver is used for a case where distortion level of the transmission path varies or out-of-synchronization is observed with time, for example, in mobile communications, the frequency response variation is compensated for and the data symbols can be correctly demodulated.\n\nAccording to a seventeenth aspect, in the fifteenth aspect,\n\nthe compensation vector is calculated as a time series linear approximation from the frequency response variation between any two pilot symbols closest to each other.\n\nAs described above, in the seventeenth aspect, the frequency response variation of the data symbols between the pilot symbols is compensated for by linear approximation. In this manner, when the transmission path seems to linearly change in state between the pilot symbols, compensation can be linearly and correctly done. Further, the phase shift caused by the frequency drift is linear with time, allowing linear compensation with accuracy.\n\nAccording to an eighteenth aspect, in the fourteenth aspect,\n\nthe transmission path frequency response compensator comprises:\n\na pilot symbol detector for detecting both a first pilot symbol being an arbitrary pilot symbol and a second pilot symbol transmitted after the first pilot symbol;\n\na first pilot symbol transmission path frequency response calculator for calculating a first pilot symbol transmission path frequency response by dividing a frequency response of the first pilot symbol by that of a reference pilot symbol provided on a reception side;\n\na second pilot symbol transmission path frequency response calculator for calculating a second pilot symbol transmission path frequency response by dividing a frequency response of the second pilot symbol by that of the reference pilot symbol;\n\na compensation vector calculator for calculating, after inputting the first and second pilot symbol transmission path frequency responses, a compensation vector for compensating for the frequency response variation of the transmission path; and\n\na frequency response compensator for compensating for the frequency response of one or more the data symbols after inputting the compensation vector.\n\nAs described above, in the eighteenth aspect, on the transmission side, a pilot symbol having a predetermined frequency component predetermined in amplitude and phase is inserted between data symbols with a predetermined interval. On the reception side, the first and second pilot symbol transmission path frequency response are each calculated by dividing the first and second pilot symbols detected from the received signal by a predetermined reference pilot symbol, respectively. A difference therebetween is then obtained. By using the difference, the compensation vector for the data symbols is each calculated. Therefore, in this manner, any distortion observed in the transmission path, out-of-synchronization after passage of time, frequency drift, and residual phase error for the data symbols can be correctly compensated.\n\nA nineteenth aspect of the present invention is directed to an OFDM signal receiver for receiving, from a transmission side, an OFDM signal including a data symbol having data therein, and a pilot symbol having a frequency component predetermined in amplitude and phase and being inserted before or after one or more of the data symbols, the receiver comprising:\n\na Fourier transformer for subjecting the received OFDM signal to Fourier transform;\n\na phase compensator for detecting the pilot symbol from a signal provided by the Fourier transformer, and compensating for the signal for either one or both of frequency drift and residual phase error; and\n\na demodulator for receiving the signal compensated with either or both of the frequency drift and the residual phase error, and demodulating the signal output demodulated data.\n\nAs described above, in the nineteenth aspect, on the transmission side, a pilot symbol having a predetermined frequency component predetermined in amplitude and phase is inserted between data symbols with a predetermined interval. On the reception side, a frequency response variation of the data symbols is accurately estimated by using those pilot symbols. In this manner, the data symbols can be correctly demodulated even in the multipath fading environment or high-level-noise environment.\n\nAccording to a twentieth aspect, in the nineteenth aspect,\n\nthe phase compensator calculates a compensation value for compensation, by referring to a first difference between a phase of a pilot symbol and a predetermined phase, and a second difference in phase between any two pilot symbols closest to each other, so that a phase of the received data symbol corresponds to the predetermined phase.\n\nAs described above, in the twentieth aspect, a pilot symbol having a predetermined frequency component predetermined in amplitude and phase is inserted between data symbols with a predetermined interval on the transmission side. On the reception side, a frequency response of a transmission path is accurately estimated by using those pilot symbols. By utilizing the estimated frequency response and a difference in frequency response between any two pilot symbols away from each other for a predetermined number of data symbols, a frequency response variation of the data symbols between the pilot symbols is compensated. In this manner, the data symbols can be correctly demodulated even in the multipath fading environment or high-level-noise environment.\n\nAccording to a twenty-first aspect, in the twentieth aspect,\n\nthe first and second differences are each calculated by using a phase average value calculated for every pilot carrier included in each of the pilot symbols.\n\nAs described above, in the twenty-first aspect, such OFDM transmission scheme that the phase error can be corrected with higher accuracy can be implemented by averaging the phase of the received pilot carriers.\n\nAccording to a twenty-second aspect, in the twenty-first aspect,\n\nthe phase average value is calculated by weighing each amplitude value for the pilot carriers.\n\nAs described above, in the twenty-second aspect, by calculating an average value after weighing each carrier in the received pilot symbol according to its amplitude, such OFDM transmission scheme that the average value can be calculated with higher accuracy can be implemented even if the received signal distorts in the transmission path and by noise.\n\nAccording to a twenty-third aspect, in the twentieth aspect,\n\nthe phase compensation value is calculated as a time series linear approximation from a difference in phase between any two pilot symbols closest to each other.\n\nAs described above, in the twenty-third aspect, the phase error of the data symbols between the pilot symbols is compensated for by linear approximation. In this manner, the phase shift caused by the frequency drift becomes linear with time, and thus compensation can be linearly done with accuracy.\n\nAccording to a twenty-fourth aspect, in the nineteenth aspect,\n\nthe phase compensator comprises:\n\na pilot symbol detector for detecting both a first pilot symbol being an arbitrary pilot symbol and a second pilot symbol transmitted after the first pilot symbol;\n\na first pilot symbol phase difference calculator for calculating a difference between a phase of the first pilot symbol and a predetermined phase;\n\na pilot symbol phase difference calculator for calculating a difference in phase between the first pilot symbol and the second pilot symbol;\n\na phase compensation value calculator for calculating, after inputting the phase difference value calculated by the first pilot symbol phase difference calculator and the phase difference calculated by the pilot symbol phase difference calculator, a phase compensation value for compensating for the frequency drift and the residual phase error; and\n\na phase rotator for rotating, in response to the phase compensation value, the phase of the one or more data symbols.\n\nAs described above, in the twenty-fourth aspect, on the transmission side, a pilot symbol having a predetermined frequency component predetermined in amplitude and phase is inserted between data symbols with a predetermined interval. On the reception side, a difference in phase between the first pilot symbol to be first detected from the received signal and a reference pilot symbol provided on the reception side is found. Then, a difference in phase between the first and second pilot symbols is found. By utilizing the difference in phase, a phase compensation value for the data symbols can be obtained, allowing the frequency drift and residual phase error of the data symbols to be correctly compensated.\n\nThese and other objects, features, aspects and advantages of the present invention will become more apparent from the following detailed description of the present invention when taken in conjunction with the accompanying drawings.\n\nBRIEF DESCRIPTION OF THE DRAWINGS\n\nFIGS. 1a to 1d are diagrams each showing the structure of an OFDM signal in a transmission scheme according to a first embodiment of the present invention;\n\nFIG. 2 is a block diagram showing the structure of a transmitter of the first embodiment;\n\nFIGS. 3a and 3b are block diagrams each showing the structure of a DSofdm generator and a PSofdm generator, respectively, in the transmitter of the first embodiment;\n\nFIG. 4 is a block diagram showing the structure of a receiver of the first embodiment;\n\nFIG. 5 is a block diagram showing the structure of a TPFR compensator 6 in the receiver of the first embodiment;\n\nFIGS. 6a and 6b are schematic views each for explaining subcarriers of a first pilot symbol and those of a reference symbol, respectively;\n\nFIGS. 7a and 7b are schematic views each for explaining subcarriers of a second pilot symbol and those of the reference symbol, respectively;\n\nFIG. 8 is a graph showing that a compensation vector can be calculated, through linear approximation, from a difference between first and second transmission path frequency responses;\n\nFIG. 9 is a schematic view for explaining how a frequency response variation of subcarriers in a data symbol is compensated;\n\nFIG. 10 is a block diagram showing the structure of a receiver according to a second embodiment of the present invention;\n\nFIG. 11 is a block diagram showing the structure of a phase compensator 26 in the receiver of the second embodiment;\n\nFIGS. 12a and 12b are schematic views each for explaining the subcarriers in the first pilot symbol and those in the reference symbol, respectively;\n\nFIGS. 13a and 13b are schematic views each for explaining the subcarriers in the second pilot symbol and those in the reference symbol, respectively;\n\nFIG. 14 is a diagram showing that a phase compensating value can be calculated, through linear approximation, from a phase difference between pilot symbols; and\n\nFIG. 15 is a schematic view for explaining how phase compensation is carried out with respect to the subcarriers in the data symbol.\n\nDESCRIPTION OF THE PREFERRED EMBODIMENTS First Embodiment\n\nFirst, described below is a transmission scheme according to a first embodiment of the present invention. FIGS. 1a to 1d are diagrams each showing the structure of an OFDM signal to be transmitted under the transmission scheme of the first embodiment. As shown in FIGS. 1a to 1d, a pilot symbol is accompanied by a plurality of data symbols. The pilot symbol has a frequency component predetermined in amplitude and phase. After the data symbols, another pilot symbol follows. As such, the OFDM signal under the transmission scheme of the first embodiment takes such structure that the pilot symbol is inserted before and after one or more data symbols. Herein, the number of pilot symbols to be inserted is not limited, but in a row if plurally provided.\n\nThe OFDM signal includes subcarriers, and the symbol length remains the same regardless of the number of subcarriers included therein. Accordingly, the subcarriers may be predetermined in amplitude and phase either wholly or partially. In order to compensate for the frequency response variation with higher accuracy, every subcarrier is preferably predetermined in amplitude and phase.\n\nAs already described above, under the OFDM signal transmission scheme, a signal to be transmitted is often provided with a preamble(s) in a transmitter for synchronization with a receiver. Such preamble is longer than one symbol time wise, and may be inserted before or during transmission with an adaptively determined interval. Although the more preambles surely lead to the higher accuracy in compensating for the frequency response variation, the transmission speed shows a considerable drop. Therefore, according to the transmission scheme of the first embodiment, the signal is preferably provided with the preamble before transmission or less often during transmission.\n\nFurther, the preamble may include control information telling how often the pilot symbol is to be inserted in between the data symbols and how many. If so, the control information is analyzed on the reception side so that the pilot symbol and the data symbol are discriminated from each other.\n\nStill further, the control information may be inserted after an initial pilot symbol as a data symbol or a signaling symbol for transmitting information such as a cue. If so, the control information as the OFDM signal can be correctly demodulated.\n\nAs such, the OFDM signal having pilot symbols inserted before and after one or more data symbols is transmitted from the transmission side. Those pilot symbols are utilized, on the reception side, to estimate the frequency response of the transmission path with accuracy.\n\nThe estimated frequency response of the transmission path is compared with a frequency response of the transmission path between any two pilot symbols away from each other for a predetermined number of data symbols for a difference therebetween. By referring to the difference, a frequency response variation of the data symbol(s) interposed between the pilot symbols is compensated. With such transmission scheme, even in the multipath fading environment or the high-level-noise environment, the data symbols can be correctly demodulated.\n\nHerein, as shown in FIG. 1a, the pilot symbol before the data symbol(s) is referred to as a first pilot symbol, and the one after as a second pilot symbol. The temporal interval therebetween is made longer if the transmission path does not change in state that much, but otherwise made shorter as much to make the change linearly between the pilot symbols. As such, by adaptively changing the interval of inserting the pilot symbol depending on in what state the transmission path is, the transmission efficiency is improved.\n\nNote herein that the state of the transmission path may be measured and estimated on the transmission side, or measured on the reception side and then fed back to the transmission side for determination.\n\nThe interval of inserting the pilot symbol may be periodic or non-periodic. When the pilot symbol is periodically inserted, the temporal location thereof can be easily detected when received. In the case that the pilot symbol is inserted non-periodically or with an irregular interval, the insertion interval is determined depending on how quickly the transmission path changes in state. Herein, the expression of the pilot signal being inserted non-periodically or with an irregular interval indicates that the pilot signal is not periodically inserted for the entire duration of signal transmission. Thus, the expression does not exclude a case where the pilot symbol is periodically inserted for a short duration of signal transmission.\n\nAs a case for non-periodic insertion, as shown in FIG. 1b, the first pilot symbol is accompanied by a signaling symbol including the control information telling how often and how many pilot symbols are inserted. With such arrangement, the control information can be demodulated based on the frequency response of the transmission path estimated by using the first pilot symbol. In this manner, the demodulation is carried out more accurately than a case where the control information is included in the preamble.\n\nNote herein that, as shown in FIG. 1c, the first pilot symbol may be doubly provided so as to improve the accuracy in estimating the frequency response of the transmission path by the pilot symbols. Further, as shown in FIG. 1d, the number of pilot symbols for one insertion may be two or plural in a row. In such case, for correct estimation, the frequency response of the transmission path is averaged for the pilot symbols.\n\nThe OFDM signal having such structure can be generated by a transmitter as described next below. FIG. 2 is a schematic view structurally showing such transmitter according to the first embodiment of the present invention. Hereinafter, presumably, the number of data symbols is M, and the number of subcarriers per symbol is N.\n\nIn FIG. 2, the transmitter includes a DSofdm generator 1 for generating a data symbol from transmitted data, a PSofdm generator 2 for generating a pilot symbol having such frequency component, as described above, predetermined in amplitude and phase, a symbol selector 3 for receiving two signals each from the DSofdm generator 1 and the PSofdm generator 2, and selecting either one of the signals for output, and a D/A converter 4 for subjecting data provided by the symbol selector 3 to D/A conversion, and outputting a transmission signal. Herein, as to the DSofdm generator 1 and PSofdm generator 2, “DS” denotes “Data Symbol”, and “PS” “Pilot Symbol”. Further, “ofdm” accompanying DS and PS denotes that the symbol is an OFDM signal.\n\nDepicted in FIGS. 3a and 3b are block diagrams respectively showing the detailed structure of the DSofdm generator 1 and the PSofdm generator 2 in the transmitter of the first embodiment. In FIG. 3a, the DSofdm generator 1 includes a DSf generator 11 and an inverse Fourier transformer 12. Herein, as to the DSf generator 11, “DS” denotes “Data Symbol”, and “f” accompanying DS denotes that the symbol is a frequency-domain signal. In FIG. 3b, the PSofdm generator 2 includes a PSf generator 21 and an inverse Fourier transformer 22. Herein, the above description for “PS” and “f” is also applicable to the PSf generator 21.\n\nReferring back to FIG. 2, data for transmission is provided to the DSofdm generator 1. The data is then converted into a data symbol, and is outputted to the symbol selector 3.\n\nMore specifically, referring to FIG. 3a, the data for transmission is first provided to the DSf generator 11. The DSf generator 11 outputs a frequency-domain data symbol, which includes many data carriers arranged on a frequency axis with a predetermined interval. This frequency-domain data symbol is subjected to inverse Fourier transform by the inverse Fourier transformer 12, and then is converted into a time-domain OFDM data symbol. After the conversion, the OFDM data symbol is provided to the symbol selector 3.\n\nThe above-described pilot symbol having the frequency component predetermined in amplitude and phase is generated in the PSofdm generator 2, and is outputted to the symbol selector 3.\n\nIn detail, referring to FIG. 3b, the PSf generator 21 outputs the frequency-domain pilot symbol including many pilot carriers arranged on the frequency axis with a predetermined interval. The frequency-domain pilot symbol is subjected to inverse Fourier transform by the inverse Fourier transformer 22, and is converted into a time-domain OFDM pilot symbol. After the conversion, the OFDM pilot symbol is provided to the symbol selector 3.\n\nThe symbol selector 3 selects either one of those two signals for output. Herein, the symbol selector 3 is assumed to output such signal as shown in FIG. 1a, in which pilot symbol insertion is made for every three data symbols.\n\nIf this is the case, the symbol selector 3 first selects the PSofdm, generator 2 for its signal. After the pilot symbol is outputted, the symbol selector 3 then selects the DSofdm generator 1 for its signal. When three data symbols are outputted, the symbol selector 2 selects the PSofdm generator 2 for its signal. Thereafter, in the same manner, the symbol selector 3 selects the DSofdm generator 1 for its signal when another pilot symbol is outputted. As such, by switching among the two signals, the symbol selector 3 becomes capable of successively outputting such OFDM signal as shown in FIG. 1a.\n\nSuch signal outputted from the symbol selector 3 is provided to the D/A converter 4. The D/A converter 4 subjects the signal to D/A conversion, and outputs the D/A converted signal as a transmission signal.\n\nAs is known from the above, the transmitter of the first embodiment inserts a pilot symbol having a frequency component predetermined in amplitude and phase in between data symbols with a predetermined interval. With such transmitter, as long as the frequency response variation of the data symbols is accurately compensated for on the reception side by using those pilot symbols, data symbol transmission can be correctly done even in the multipath fading environment or the high-level-noise environment.\n\nDepicted in FIG. 4 is a schematic view structurally showing a receiver according to the first embodiment of the present invention. In FIG. 4, the receiver includes a Fourier transformer 5 for subjecting a received signal to Fourier transform, a TPFR compensator 6 for compensating for a frequency response variation of a signal provided by the Fourier transformer 5, and a demodulator 7 for demodulating a signal provided by the TPFR compensator 6. Herein, as to the TPFR compensator 6, “TPFR” denotes a transmission path frequency response.\n\nThe Fourier transformer 5 subjects every symbol to Fourier transform, and then outputs frequency-domain data. The outputted data has the frequency response variation of the transmission path eliminated therefrom in the TPFR compensator 6. Thereafter, the data freed from the frequency response variation is demodulated as the data symbol in the demodulator 7.\n\nFIG. 5 is a schematic view showing the structure of the TPFR compensator 6 in the receiver of the first embodiment. In FIG. 5, the TPFR compensator 6 includes: a PS detector 61 for detecting the pilot symbol from the signal provided by the Fourier transformer 5; a PS1 TPFR calculator 62 for dividing the first pilot symbol provided by the PS detector 61 by a reference pilot symbol; a PS2 TPFR calculator 63 for dividing the second pilot symbol provided by the PS detector 61 by the reference pilot symbol; a compensation vector calculator 64 for calculating a compensation vector after receiving outputs from the PS1 and PS2 TPFR calculators 62 and 63; and a frequency response compensator 65 for compensating for a frequency response of a signal provided by the PS detector 61 on the basis of an output from the compensation vector calculator 64. Herein, for the above components, “PS1” denotes “first pilot symbol”, and “PS2” “second pilot symbol”.\n\nThe PS detector 61 detects the pilot symbol from the Fourier-transformed frequency-domain data. The PS1 TPFR calculator 62 divides the subcarriers in the first pilot symbol by those in the reference pilot symbol stored in memory (not shown) in the receiver, thereby estimating the frequency response of the transmission path.\n\nThe reference pilot symbol stored in the memory is considered ideal having no frequency response variation error at the time of reception. Accordingly, the frequency response of the transmission path can be accurately calculated by dividing the frequency response of the subcarriers in the first pilot symbol by those in the reference pilot symbol.\n\nFIGS. 6a and 6b are schematic views respectively showing the first pilot symbol having subcarriers with a complex amplitude of P1, and the reference pilot symbol having those with a complex amplitude of Pr. The PS1 TPFR calculator 62 divides the complex amplitude P1 as shown in FIG. 6a by the complex amplitude Pr as shown in FIG. 6b so as to calculate a frequency response of the transmission path Pa. An equation (1) therefor is as follows:\nPa(i)=P1(iPr(i) (1)\nwhere i is an arbitrary integer between 1 and N.\n\nAs described in the foregoing, when the pilot symbol is sequentially and plurally inserted, the frequency response of the transmission path is averaged for the pilot symbols. In this manner, the frequency response of the transmission path for the pilot symbols can be estimated with higher accuracy.\n\nWith reference to FIG. 5 again, the PS2 TPFR calculator 63 divides the subcarriers in the second pilot symbol by those in the reference pilot symbol exemplarily stored in the memory in the receiver, thereby estimating the frequency response of the transmission path for the second pilot symbol.\n\nFIGS. 7a and 7b are schematic views respectively showing the second pilot symbol having the subcarriers with a complex amplitude of P2 and the reference pilot symbol having those with the complex amplitude of Pr. The PS2 TPFR calculator 63 divides the complex amplitude P2 as shown in FIG. 7a by the complex amplitude Pr as shown in FIG. 7b so as to calculate a frequency response of the transmission path Pb. An equation (2) therefor is as follows:\nPb(i)=P2(iPr(i) (2)\nwhere i is an arbitrary integer between 1 and N.\n\nAs already described in the foregoing, when the second pilot symbol is sequentially and plurally inserted, the frequency response of the transmission path is averaged for the pilot symbols. In this manner, the frequency response of the transmission path for the second pilot symbols can be estimated with higher accuracy.\n\nThe compensation vector calculator 64 calculates a compensation vector Vk for each of the data symbols between the first and second pilot symbols. This is done by linear approximation between the first and second pilot symbol transmission path frequency responses Pa and Pb. The linear approximation is applicable herein since the pilot symbols are inserted with a shorter interval to make the transmission path linearly change in state, and the phase shift caused by frequency drift becomes linear with time. Therefore, by utilizing linear approximation, compensation can be linearly done with accuracy.\n\nFIG. 8 is a graph having a longitudinal axis indicating the compensation vector Vk for each of the data symbols between the first and the second pilot symbols and a lateral axis indicating the symbols by number, i.e., time, and the graph shows the relationship therebetween. As is known from FIG. 8, the compensation vector Vk for each of the data symbols can be calculated, through linear approximation, from a difference between the pilot symbols in frequency response of the transmission path.\n\nIt is now assumed that the number of data symbols between the first and second pilot symbols is M, and a certain data symbol therebetween is k, where k is an arbitrary integer between 1 and M. With such assumption and by using an equation (3) below, the compensation vector Vk is calculated for each of the data symbols through linear approximation.\n\n$Vk ( ⅈ ) = Pa ( ⅈ ) + Pb ( ⅈ ) - Pa ( ⅈ ) M + 1 × k ( 3 )$\nwhere k is an arbitrary integer between 1 and M.\n\nBy using the compensation vectors calculated in such manner, the frequency response compensator 65 then compensates for the frequency response variation of the subcarriers included in each of the data symbols between the first and second pilot symbols.\n\nFIG. 9 is a schematic view showing how the frequency response variation of the k-th data symbol is compensated. With the compensation vectors, the frequency response variation of the subcarriers in each of the data symbols is compensated for as follows with an equation (4).\nC′k(i)=Ck(i)/Vk(i) (4)\n\nSuch compensation is carried out with respect to the k data symbols between the first and second pilot symbols. Therefore, practically, these data symbols are once stored in a data symbol storage (not shown) provided in the receiver, for example. After the compensation vectors are calculated, the data symbols stored in the storage are read, and then the frequency response variation is compensated for with respect to the data symbols.\n\nTypically, such data symbol storage is provided preceding to or in the frequency response compensator 65. With respect to the data symbols stored therein, the compensation vector calculator 64 calculates the compensation vectors Vk respectively, and then the frequency response compensator 65 compensates for the frequency response variation thereof.\n\nIn this manner, however, demodulation cannot be done for the period after the first pilot symbol is received and before the second pilot symbol is received, rendering the receiver of the first embodiment to take a certain length of time for its processing. By taking this into consideration, the receiver of the first embodiment is more suitable for image transmission in which an image is not required so soon to be retransmitted or under a broadcast transmission system.\n\nIn the above-described manner, the compensation vector can be calculated to compensate for the frequency response variation resulted from the change in state of the transmission path for each of the subcarriers in the pilot symbol. Therefore, in the OFDM transmission scheme of the first embodiment, the compensation vector can be calculated with higher accuracy for every subcarrier compared with the conventional scheme for interposing the pilot carriers between the data carriers. Since the pilot carriers for insertion in the conventional scheme are quite fewer in number than the subcarriers, it is rather difficult to calculate the frequency response variation of the transmission path, with accuracy, for the entire frequency band.\n\nAs such, the frequency response compensator 65 can free the received data from the frequency response variation of the transmission path. Especially when the transmission path changes approximately linearly in state between the pilot symbols, the data symbols can be correctly demodulated even in the multipath fading environment or the high-level-noise environment. This is enabled by compensating, through linear approximation, for the frequency response of the data symbols between the pilot symbols. Further, the phase shift caused by the frequency drift is linear with time, allowing linear compensation with accuracy.\n\nIn a case where the transmission path does not change in state that much, the frequency response variation of the transmission path for the data symbols may be compensated for by using only one pilot symbol preceding thereto. Accordingly, the frequency response variation of the transmission path for the data symbols can be compensated for without receiving another pilot symbol subsequent thereto.\n\nSecond Embodiment\n\nA transmission scheme according to a second embodiment of the present invention is quite similar to the one described in the first embodiment. Further, a transmitter of the second embodiment is structurally identical to the one in the first embodiment, and is not described twice. A receiver of the second embodiment, however, is partially different in structure from the one in the first embodiment, and thus a description is made focusing on the difference.\n\nFIG. 10 is a schematic view showing the structure of the receiver of the second embodiment. The receiver is provided with the Fourier transformer 5, a phase compensator 26 for compensating for a phase of a signal provided by the Fourier transformer 5, and the demodulator 7 for demodulating a signal outputted from the phase compensator 26. As such, the receiver of the second embodiment includes the phase compensator 26 as an alternative to the TPFR compensator 6 in the receiver in FIG. 4.\n\nThe phase compensator 26 frees data provided by the Fourier transformer 5 from frequency drift and residual phase error. The phase compensator 26 is later structurally described in detail. The data freed from error is demodulated by the demodulator 7.\n\nFIG. 11 is a schematic view showing the detailed structure of the phase compensator 26 in the receiver of the second embodiment. The phase compensator 26 includes: a PS detector 261 for detecting a pilot symbol from a signal provided by the Fourier transformer 5; a PS1 phase difference calculator 262 for calculating a difference in phase between the first pilot symbol provided by the PS detector 261 and a predetermined reference pilot symbol; a PS1-PS2 phase difference calculator 263 for calculating a difference in phase between the pilot symbols provided by the PS detector 261; a phase compensation value calculator 264 for calculating a phase compensation value after receiving outputs from the PS1 phase difference calculator 262 and the PS1-PS2 phase difference calculator 263; and a phase rotator 265 for rotating a phase of a signal provided by the PS detector 261 on the basis of an output from the phase compensation value calculator 264. The above description for “PS”, “PS1”, “PS2” is herein also applicable to the above components.\n\nThe PS detector 261 detects, in a similar manner to the PS detector 61 in FIG. 5, a pilot symbol from the Fourier-transformed frequency-domain data. The PS1 phase difference calculator 262 calculates a difference in phase between the subcarriers in the first pilot symbol and those of the reference pilot symbol stored in memory (not shown) in the receiver.\n\nThe reference pilot symbol stored in the memory is also an ideal pilot symbol as is the one in the receiver of the first embodiment. Accordingly, by calculating the difference in phase between the subcarriers in the first pilot symbol and those in the reference pilot symbol, a phase error caused by transmission can be obtained.\n\nFIGS. 12a and 12b are schematic views respectively showing the first pilot symbol having the subcarriers with a phase of φ1 and the reference pilot symbol having those with a phase of φr. The PS1 phase difference calculator 262 calculates a difference φps between the phase φ1 as shown in FIG. 12a and the phase φr as shown in FIG. 12b so as to calculate the phase error of the first pilot symbol. An equation (5) therefor is as follows:\nφps(i)=φ1(i)−φr(i) (5)\nwhere i is an arbitrary integer between 1 and N.\n\nThe PS1 phase difference calculator 262 averages the phase difference for the number of subcarriers. Assuming that the averaged value is φp, an equation (6) therefor is as follows:\n\n$ϕ p = 1 N ∑ i = 1 N ϕ ps ( ⅈ ) ( 6 )$\n\nThe received signal distorts in the transmission path and by noise. Therefore, in order to obtain φp, each carrier in the received pilot symbol is weighed according to its amplitude before calculating the average value. As thereto, it is described how below.\n\nPresumably, a complex signal of the i-th subcarrier in the received first pilot symbol is A1(i), that of the i-th subcarrier in the received second pilot symbol is A2(i), and an amplitude of the i-th subcarrier in the reference pilot symbol is R(i) With such assumption, the average value φp can be calculated by the following equation (7).\n\n$ϕ p = - angle [ ∑ i = 1 N ( R ( ⅈ ) A 1 ( ⅈ ) A 1 ( ⅈ ) 2 ) ] = - angle [ ∑ i = 1 N ( R ( ⅈ ) × A 1 ( ⅈ ) * ) ] ( 7 )$\nwhere an asterisk * indicates complex conjugate, and a term “angle” indicates a phase angle of a complex number.\n\nWith such equation of calculating the average value, each component is weighed by the power level of the complex signal A1(i). Consequently, any carrier larger in amplitude may contribute more with respect to the average value, and vice versa. In such manner, even if the received signal distorts in the transmission path and by noise, the average value can be calculated with higher accuracy.\n\nThe PS1-PS2 phase difference calculator 263 then calculates a difference in phase between the subcarriers in the first pilot symbol and those in the second pilot symbol.\n\nFIGS. 13a and 13b are schematic diagrams respectively showing the first pilot symbol having the subcarriers with the phase of φ1 and the second pilot symbol having the subcarriers with a phase of φ2. The PS1-PS2 phase difference calculator 263 calculates a phase difference φ between the phase φ1 as shown in FIG. 13a and the phase φ2 as shown in FIG. 13b. An equation (8) therefor is as follows:\nφ(i)=φ1(i)−φ2(i) (8)\nwhere i is an arbitrary integer between 1 and N.\n\nThe PS1-PS2 phase difference calculator 263 averages the phase difference for the number of subcarriers. Assuming that the averaged value is φa, an equation (9) therefor is as follows:\n\n$ϕ a = 1 N ∑ i = 1 N ϕ ( ⅈ ) ( 9 )$\n\nIn the above-described manner, the phase error can be correctly calculated for the entire frequency covering every subcarrier by averaging the phase error for the number of subcarriers in the pilot symbol. Therefore, in the OFDM transmission scheme of the second embodiment, the phase error can be calculated with higher accuracy compared with the conventional scheme for interposing the pilot carriers between the data carriers. Since the pilot carriers for insertion in the conventional scheme are quite fewer in number than the subcarriers, it is rather difficult for the conventional scheme to correctly calculate the phase error for the entire frequency band.\n\nIn order to calculate such average value more accurately, in a similar manner to the above, each carrier in the received pilot symbol is weighed according to its amplitude before calculating the average value. As thereto, assuming also that the average value is φa, an equation (10) therefor is as follows:\n\n$ϕ a = angle [ ∑ i = 1 N ( A 2 ( ⅈ ) A 1 ( ⅈ ) A 1 ( ⅈ ) 2 ) ] = angle [ ∑ i = 1 N ( A 2 ( ⅈ ) × A 1 ( ⅈ ) * ) ] ( 10 )$\n\nWith such equation of calculating the average value, each component is weighed by the power level of the complex signal A1(i) Consequently, any carrier larger in amplitude may contribute more with respect to the average value, and vice versa. In such manner, even if the received signal distorts in the transmission path and by noise, the average value can be calculated with higher accuracy.\n\nIn such manner, however, unlike the receiver of the first embodiment, the compensation value cannot be calculated for each of the subcarriers. From a different point of view, on the other hand, accuracy of compensation may be degraded by calculating the compensation value for each of the subcarriers in such a case that some of the subcarriers in the pilot symbol are suppressed or vanished. By taking this into consideration, the receiver of the second embodiment works effective especially for frequency drift and phase shift distorting every carrier to almost the same extent. More specifically, the receiver of the second embodiment is suitable for communications carried out in a static transmission path with a smaller distortion. Conversely, the receiver of the first embodiment is suitable for mobile communications where distortion level of the transmission path varies or out-of-synchronization is observed with time.\n\nThe phase compensation value calculator 264 calculates a phase compensation value φd for each of the data symbols between the first and second pilot symbols. This is done through linear approximation by the phase difference φa between the pilot symbols. The linear approximation is applicable herein since the phase shift caused by the frequency drift becomes linear with time. Therefore, by utilizing linear approximation, compensation can be linearly done with accuracy.\n\nFIG. 14 is a graph having a longitudinal axis indicating the phase compensation value φd for each of the data symbols between the first and the second pilot symbols and a lateral axis indicating the symbols by number, i.e., time, and the graph shows the relationship therebetween. As is known from FIG. 14, the phase compensation value φd for each of the data symbols can be calculated, through linear approximation, from the phase compensation value φd between the pilot symbols.\n\nIt is now assumed that the number of data symbols between the first and second pilot symbols is M, and a certain data symbol therebetween is k, where k is an arbitrary integer between 1 and M. With such assumption and by using an equation (11) next below, the phase compensation value φd is calculated for each of the data symbols through linear approximation.\n\n$ϕ d ( k ) = ϕ p + ϕ a M + 1 × k ( 11 )$\n\nWith the phase compensation values calculated in such manner, the phase rotator 265 then compensates the phase of the subcarriers in each of the data symbols between the first and second pilot symbols. FIG. 15 is a schematic view showing how the k-th data symbol is compensated in phase. The subcarriers in each of the data symbols are compensated in phase by utilizing the calculated phase compensation value. An equation (12) therefor is as follows:\nC′k(i)=Ck(i)×exp(j·φd(k)) (12)\nwhere i and k are, respectively, an arbitrary integer between 1 and N.\n\nSuch phase compensation is done with respect to the M data symbols between the first and second pilot symbols. Therefore, as in the receiver of the first embodiment, such data symbols are practically once stored in a data symbol storage (not shown). After the phase compensation values are calculated, the data symbols stored in the data symbol storage are read, and then phase compensation is carried out with respect thereto. By taking this into consideration, like the receiver of the first embodiment, the receiver of the second embodiment is more suitable for image transmission in which an image is not required so soon to be retransmitted or under a broadcast transmission system.\n\nAs such, the phase compensator 26 frees the received data from frequency drift and residual phase error. The data symbols can be correctly demodulated even in the multipath fading environment or the high-level-noise environment by compensating, through linear approximation, for the phase error of the data symbols between the pilot symbols. Further, the phase shift caused by the frequency drift is linear with time, allowing linear compensation with accuracy. Therefore, the receiver and reception method of the second embodiment are effective especially for linear phase error such as frequency drift.\n\nAs shown in the above equation (12), in the receiver of the second embodiment, every subcarrier included in one data symbol is subjected to phase compensation with a single phase compensation value. Therefore, compared with the TPFR compensator 6 in the receiver of the first embodiment in which the frequency response variation is compensated for by using compensation values calculated for each subcarrier, the phase compensator 26 of this receiver is simplified in structure.\n\nTo be more specific, the frequency response compensator 65 in the TPFR compensator 6 is internally provided with memory for storing the compensation values for each of the subcarriers, and performing control and calculation with respect to each of the subcarriers by using the compensation values therefor. On the other hand, the phase rotator 265 in the phase compensator 26 is provided with memory for storing only one compensating value, and performing control and calculation therewith, rendering the structure thereof more simplified.\n\nWhile the invention has been described in detail, the foregoing description is in all aspects illustrative and not restrictive. It is understood that numerous other modifications and variations can be devised without departing from the scope of the invention.\n\n## Claims\n\n1. A method for transmitting an OFDM signal from a transmission side to a reception side, wherein\n\nthe OFDM signal comprises at least one data symbol including a plurality of subcarriers having data therein, and at least one pilot symbol including a plurality of subcarriers, at least one of the subcarriers having a frequency component predetermined in amplitude and phase, said method comprising:\ninserting, on the transmission side, the pilot symbol before or after one or more data symbols, and transmitting the pilot symbol together with one or more data symbols; and\nutilizing, on the reception side, the received pilot symbol for compensating for a frequency response variation of a transmission path resulted from at least one of distortion in the transmission path, out-of-synchronization with passage of time, frequency drift, and residual phase error, wherein\nthe frequency response variation of the transmission path is compensated for by using a compensation vector calculated, as a time series linear approximation, from a difference in frequency response between at least any two pilot symbols closest to each other.\n\n2. The OFDM signal transmission method according to claim 1, wherein every subcarrier included in the pilot symbol is a pilot carrier predetermined in amplitude and phase.\n\n3. The OFDM signal transmission method according to claim 1, wherein said inserting comprises sequentially inserting the pilot symbol and at least one additional pilot symbol before or after one or more data symbols.\n\n4. The OFDM signal transmission method according to claim 1, wherein said inserting comprises periodically inserting a pilot symbol before or after one or more data symbols.\n\n5. The OFDM signal transmission method according to claim 1, wherein said inserting comprises non-periodically inserting a pilot symbol before or after one or more data symbols.\n\n6. The OFDM signal transmission method according to claim 1, further comprising adaptively changing, on the transmission side, a pilot symbol in frequency and number for insertion depending on a state of the transmission path.\n\n7. The OFDM signal transmission method according to claim 1, further comprising providing, on the transmission side, the OFDM signal with control information indicating how often and how many pilot symbols are inserted.\n\n8. The OFDM signal transmission method according to claim 1, wherein the frequency response variation of the transmission path is compensated for by using an average value taken for a phase error among pilot carriers in the pilot symbol.\n\n9. The OFDM signal transmission method according to claim 8, wherein the average value is calculated by weighing each amplitude value for the pilot carriers.\n\n10. An OFDM signal transmitter for transmitting an OFDM signal towards a reception side, said OFDM signal transmitter comprising:\n\na data symbol generator for generating an OFDM data symbol including a plurality of subcarriers for data for transmission;\na pilot symbol generator for generating an OFDM pilot symbol including a plurality of subcarriers; and\na symbol selector for switching between signals provided by said data symbol generator and said pilot symbol generator so that the pilot symbol is inserted before or after one or more data symbols, wherein\nthe OFDM signal comprises at least one data symbol including a plurality of subcarriers having data therein, and at least one pilot symbol including a plurality of subcarriers, at least one of the subcarriers having a frequency component predetermined in amplitude and phase,\non the reception side, the pilot symbol is received and utilized for compensating for a frequency response variation of a transmission path resulted from at least one of distortion in the transmission path, out-of-synchronization with passage of time, frequency drift, and residual phase error, and\nthe frequency response variation of the transmission path is compensated for by using a compensation vector calculated, as a time series linear approximation, from a difference in frequency response between at least any two pilot symbols closest to each other.\n\n11. The OFDM signal transmitter according to claim 10, wherein\n\nsaid data symbol generator comprises: a frequency-domain data symbol generator for generating a frequency-domain data symbol after inputting the data for transmission; and an inverse Fourier transformer for subjecting a signal provided by said frequency-domain data symbol generator to inverse Fourier transform, and\nsaid pilot symbol generator comprises: a frequency-domain pilot symbol generator for generating a frequency-domain pilot symbol; and an inverse Fourier transformer for subjecting a signal provided by said frequency-domain pilot symbol generator to inverse Fourier transform.\n\n12. The OFDM signal transmitter according to claim 10, wherein\n\nsaid data symbol generator comprises: a frequency-domain data symbol generator for generating a frequency-domain data symbol after inputting the data for transmission; and a time-domain data symbol converter for converting the frequency-domain data symbol into a time-domain data symbol, and\nsaid pilot symbol generator comprises: a frequency-domain pilot symbol generator for generating a frequency-domain pilot symbol; and a time-domain pilot symbol converter for converting the frequency-domain pilot symbol into a time-domain pilot symbol.\n\n13. A method for transmitting an OFDM signal, the method comprising:\n\ngenerating the OFDM signal; and\ntransmitting the OFDM signal, wherein\nthe OFDM signal comprises: at least one data symbol including a plurality of subcarriers having data therein; and at least one pilot symbol having a plurality of subcarriers having a frequency component predetermined in amplitude and phase,\none or more pilot symbols are located before or after one or more data symbols in a time-domain,\non a reception side, the pilot symbol is received and utilized for compensating for a frequency response variation of a transmission path resulted from at least one of distortion in the transmission path, out-of-synchronization with passage of time, frequency drift, and residual phase error, and\nthe frequency response variation of the transmission path is compensated for by using a compensation vector calculated, as a time series linear approximation, from a difference in frequency response between at least any two pilot symbols closest to each other.\nPatent History\nPatent number: 7843972\nType: Grant\nFiled: Dec 21, 2005\nDate of Patent: Nov 30, 2010\nPatent Publication Number: 20060104195\nAssignee: Panasonic Corporation (Osaka)\nInventors: Hideki Nakahara (Takatsuki), Koichiro Tanaka (Takaraduka), Naganori Shirakata (Suita), Tomohiro Kimura (Hirakata), Yasuo Harada (Kobe)\nPrimary Examiner: Chi H. Pham\nAssistant Examiner: Warner Wong\nAttorney: Wenderoth, Lind & Ponack, L.L.P.\nApplication Number: 11/312,344\nClassifications\nCurrent U.S. Class: Synchronizing (370/503); Generalized Orthogonal Or Special Mathematical Techniques (370/203); Adaptive (370/465)\nInternational Classification: H04J 3/06 (20060101); H04J 11/00 (20060101); H04J 3/16 (20060101);"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91657406,"math_prob":0.9560103,"size":54185,"snap":"2023-14-2023-23","text_gpt3_token_len":10509,"char_repetition_ratio":0.24632251,"word_repetition_ratio":0.4524551,"special_character_ratio":0.18925902,"punctuation_ratio":0.0926282,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9702028,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-22T14:02:45Z\",\"WARC-Record-ID\":\"<urn:uuid:92812a38-2867-4edb-976a-a1015178ee4c>\",\"Content-Length\":\"158366\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9cb43e6d-3e45-4ab4-b082-0f8d98be4598>\",\"WARC-Concurrent-To\":\"<urn:uuid:52be7210-8c8b-4f90-830e-f0c11620d413>\",\"WARC-IP-Address\":\"44.209.188.110\",\"WARC-Target-URI\":\"https://patents.justia.com/patent/7843972\",\"WARC-Payload-Digest\":\"sha1:AAWNW44HH2GNBAP6AD2OMTJXUXRQOAHD\",\"WARC-Block-Digest\":\"sha1:OYZKNWERGYXC7HX257DX2U7S4FULM6KB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296943809.76_warc_CC-MAIN-20230322114226-20230322144226-00402.warc.gz\"}"} |
https://www.davidzeleny.net/anadat-r/doku.php/en:similarity | [
"#",
null,
"Analysis of community ecology data in R\n\nDavid Zelený\n\n### Others\n\nAuthor: David Zelený",
null,
"en:similarity\n\n# Ecological resemblance\n\nThe ecological resemblance including similarities and distances between samples, is the basic tool how to handle multivariate ecological data. Two samples sharing the same species in the same abundances have the highest similarity (and lowest distance), and the similarity decreases (and distance increases) with the differences in their species composition. All cluster and ordination methods operate with similarity or distance between samples. Even PCA and CA, even if not said explicitly, are based on Euclidean and chi-square distances, respectively.\n\n## Similarity, dissimilarity and distance\n\nIntuitively, one thinks about similarity among objects - the more are two objects similar in terms of their properties, the higher is their similarity. In the case of species composition data, the similarity is calculated using similarity indices, ranging from 0 (the samples do not share any species) to 1 (samples have identical species composition). Ordination techniques are usually based on distances, because they need to localize the samples in a multidimensional space; clustering methods could usually handle both similarities or distances. Distances are of two types, either dissimilarity, converted from analogous similarity indices, or specific distance measures, such as Euclidean, which doesn't have a counterpart in any similarity index. While all similarity indices can be converted into distances, not all distances could be converted into similarities (as is true e.g. for Euclidean distance).\n\nThere is a number of measures of similarities or distances (Legendre & Legendre 2012 list around 30 of them). The first decision one has to make is whether the aim is R- or Q-mode analysis (R-mode focuses on differences among species, Q-mode on differences among samples), since some of the measures differ between both modes (e.g. Pearson's r correlation coefficient makes sense for the association between species (R-mode), but not for the association between samples (Q-mode); in contrast, e.g. Sørensen index can be used in both Q- and R-mode analysis, and is called Dice index in R-mode analysis). Further, if focusing on differences between samples (Q-mode), the most relevant measures in ecology are asymmetric indices ignoring double zeros (more about double-zero problem below). Then, it also depends whether the data are qualitative (i.e. binary, presence-absence) or quantitative (species abundances). In the case of distance indices, an important criterium is whether they are metric (they can be displayed in Euclidean space) or not, since this influences the choice of the index for some ordination or clustering methods.\n\nLegendre & Legendre (2012) offers a key how to select an appropriate measure for given data and problem (check their Tables 7.4-7.6). Generally, as a rule of thumb, Bray-Curtis and Hellinger distances may be better choices than Euclidean or Chi-square distances.\n\n## Double-zero problem\n\n“Double zero” is a situation when certain species is missing in both compared community samples for which similarity/distance is calculated. Species missing simultaneously in two samples can mean the following: (1) samples are located outside of the species ecological niche, but one cannot say whether both samples are on the same side of the ecological gradient (i.e. they can be rather ecologically similar, samples A and B on Figure 1) or they are on the opposite sides (and hence very different, samples A and C). Alternatively, (2) samples are located inside species ecological niche (samples D and E), but the species in given samples does not occur since it didn’t get there (dispersal limitation), or it was present, but overlooked and not sampled (sampling bias). In both cases, the double zero represents missing information, which cannot offer an insight into the ecological similarity of compared samples.",
null,
"Fig. 1: Response curve of a single species along environmental gradient; A, B..., E are samples located within or outside the species niche.\n\nBoth similarity and distance indices differ in a way how they approach the double-zero problem. Symmetrical indices treat double zero (0-0) in the same way (symmetrically) as double presences (1-1), i.e. as a reason to consider samples similar. This is usually not meaningful for species composition data (as explained above) but could be meaningful e.g. for multivariate data containing chemical measurement (for example, the fact that heavy metals are missing in both samples could really indicate similarity between both samples). Asymmetrical indices treat double zeros and double presences asymmetrically - they ignore double zeros, and focus only on double presences when evaluating the similarity of samples; these indices are usually more meaningful for species composition data.",
null,
"Fig. 2: For details see the text.\n\nIn Figure 2 you can see ecological example of double zero problem. Samples 1 to 3 are sorted according to the wetness of their habitat – sample 1 is the wettest and sample 3 is the driest. In samples 1 and 3, no mesic species occur, since sample 1 is too wet and sample 3 too dry - these is the double zero. The fact that the mesic species is missing does not say anything about ecological similarity or difference between both samples; simply there is no information, and it is better to ignore it. In the case of symmetrical indices of similarity, the absence of mesic species in sample 1 and sample 3 (0-0, double zero) will increase the similarity of sample 1 and 3; in asymmetrical indices, double zeros will be ignored and only presences (1-1, 1-0, 0-1) will be considered.\n\n## Similarity indices\n\nCategories of similarity indices are summarized in Table 1. Symmetric indices, i.e. those which consider double zeros as relevant, are not further treated here since they are not useful for analysis of ecological data (although they may be useful e.g. for analysis of environmental variables). Here we will consider only asymmetric similarity indices, i.e. those ignoring double zeros. These split into two types according to the data which they are using: qualitative (binary) indices, applied on presence-absence data, and quantitative indices, applied on raw (or transformed) species abundances (note, however, that presence-absence (qualitative) and abundance (quantitative) species composition data carry different type of information, and their analysis may have a different meaning - see the section Presence-absence vs quantitative species composition data). Some of the similarity indices have also multi-sample alternatives (i.e. they could be calculated on more than two samples), which could be used for calculating beta diversity.\n\nSimilarity indices How they deal with double zero problem?\nsymmetrical (treat double zeros as important information) asymmetrial (ignore double zeros)\nWhich type of data indices use? qualitative (binary = presence absence data) not suitable for ecological data Jaccard similarity, Sørensen similarity, Simpson similarity\nquantitative (species abundances) not suitable for ecological data Percentage similarity1)\nTab. 1: Similarity indices classified according to their properties.\n\nJaccard similarity:",
null,
"Sørensen similarity:",
null,
"Simpson similarity:",
null,
"Qualitative (binary) asymmetrical similarity indices use information about the number of species shared by both samples, and numbers of species which are occurring in the first or the second sample only (see the schema at Table 2). Jaccard similarity index divides the number of species shared by both samples (fraction a) by the sum of all species occurring in both samples (a+b+c, where b and c are numbers of species occurring only in the first and only in the second sample, respectively). Sørensen similarity index considers the number of species shared among both samples as more important, so it counts it twice. Simpson similarity index is useful in a case that compared samples largely differ in species richness (i.e. one sample has considerably more species than the other). If Jaccard or Sørensen are used on such data, their values are generally very low, since the fraction of species occurring only in the rich sample will make the denominator too large and the overall value of the index too low; Simpson index, which was originally introduced for comparison of fossil data, eliminates this problem by taking only the smaller from the fractions b and c. (Note that there is yet another Simpson index, namely Simpson diversity index; each of the indices was named after different person surnamed Simpson; while the Simpson similarity index is calculating the similarity between pair of compositional samples, the Simpson diversity index is calculating diversity of a single community sample; you may find details in my blog post on this topic).\n\n The number of species which are in sample 1 in sample 2",
null,
"present absent present a b absent c d\nTab. 2: The meaning of fraction a, b, c and d used in qualitative indices calculating similarity among two samples. In assymetric indices, the fraction d (double zero) is ignored.\n\nPercentage similarity:",
null,
",\nwhere W is the sum of minimum abundances of various species; A and B each are sum of abundances of all species at each compared site:\n\nSpecies abundances A B W\nSite x1 7 3 0 5 0 1 16\nSite x2 2 4 7 6 0 3 22\nMinimum 2 3 0 5 0 1 11",
null,
"Quantitative similarity indices (applied on quantitative abundance data) include percentage similarity, which is a quantitative version of Sørensen similarity index (if calculated on presence-absence data, it gives the same results are Sørensen similarity index). Note that percentage difference, calculated as 1-percentage similarity, is called Bray-Curtis distance index (see below).\n\n## Distance indices\n\nWhile similarity indices return the highest value in the case that both compares samples are identical (maximally similar), distance indices are largest for two samples which do not share any species (are maximally dissimilar). There are two types of distance (or dissimilarity) indices2):\n\n1. those calculated from similarity indices, usually as D = 1 - S, where S is the similarity index; examples include Jaccard, Sørensen and Simpson dissimilarity for qualitative (binary) data, and percentage difference (known also as Bray-Curtis distance) for quantitative data;\n2. those distances which have no analogue in the similarity indices, e.g. Euclidean, chord, Hellinger or chi-square distance index.",
null,
"Fig. 3: Triangle inequality principle.\n\nAn important criterium is whether the distance index is metric or not (i.e. it is semi-metric or non-metric). The term “metric” refers to the distance indices that obey the following four metric properties: 1) minimum distance is zero, 2) distance is always positive (unless it is zero), 3) the distance between sample 1 and sample 2 is the same as distance between sample 2 and sample 1, and 4) triangle inequality (see explanation in Figure 3). Indices that obey the fourth, triangle-inequality principles, can be displayed in the orthogonal Euclidean space (and are sometimes called as having Euclidean property; note that Euclidean distance is just one of many distance indices having Euclidean property). Some distance indices calculated from similarities are metric (e.g. Jaccard dissimilarity), some are not (e.g. Sørensen dissimilarity and its quantitative version, called Bray-Curtis distance, are semimetric; some other distance may be nonmetric - they can reach negative values, which is nonsensible for ecological data). In the case of Sørensen and Bray-Curtis (and some others), this can be solved by calculating the dissimilarity as",
null,
"instead of the standard",
null,
"(where S is the similarity); resulting dissimilarity index is then metric. Indices which are not metric cause troubles in ordination methods relying on Euclidean space (PCoA or db-RDA) and numerical clustering algorithms which need to locate samples in the Euclidean space (such as Ward algorithm or K-means). For example, PCoA calculated using distances which are not metric creates axes with negative eigenvalues, and this e.g. in db-RDA may result in virtually higher variation explained by explanatory variables than would reflect the data.\n\nBray-Curtis dissimilarity or percentage difference3) is one complement of percentage similarity index described above. It is considered suitable for community composition data since it is asymmetrical (ignores double zeros), and it has a meaningful upper value equal to one (meaning complete mismatch between species composition of two samples, i.e. if one species in one sample is present and has some abundance, the same species in the other samples is zero, and vice versa). Bray-Curtis considers absolute species abundances in the samples, not only relative species abundances. The index is not metric, but the version calculated as",
null,
"(where PS is percentage similarity) is metric and can be used in PCoA.\n\nEuclidean distance:",
null,
"where y1j and y2j are abundances of species j in sample 1 and 2, respectively.\n\nEuclidean distance, although not suitable for ecological data, is frequently used in a multivariate analysis (mostly because it is the implicit distance for linear ordination methods like PCA, RDA and for some clustering algorithms). Euclidean distance has no upper limit and the maximum value depends on the data. The main reason why it is not suitable for compositional data is that it is a symmetrical index, i.e. it treats double zeros in the same way as double presences, and as a result, double zeros shrink the distance between two plots (the solution is to apply Euclidean distances on pre-transformed species composition data, e.g. using Hellinger, Chord or chi-square transformation; resulting distances are then asymmetrical). Another disadvantage of Euclidean distance is that it puts more emphasis on the absolute species abundances instead of species presences and absences in the samples; as the result, Euclidean distances between two samples not sharing any species may be smaller than between two samples sharing all species, but with the same species having large abundance differences between samples (Euclidean paradox). An example of calculating Euclidean distance between samples with only two species is on Figure 4.",
null,
"Fig. 4: Euclidean distance between two samples with only two species.\n\nChord distance is Euclidean distance calculated on normalized species data. Normalization means that species vector in multidimensional space is of unit length; to normalize the species vector, one needs to divide each species abundance in a given sample by the square-rooted sum of squared abundances of all species in that sample. Chord distance is then the Euclidean distance between samples with normalized species data. An advantage of chord distance compared to Euclidean distance is that it is asymmetrical (ignores double zeros) and has the upper limit (equal to",
null,
"), while Euclidean distance has no upper limit.\n\nHellinger distance is Euclidean distance calculated on Hellinger transformed species data (and is the distance used in tb-PCA and tb-RDA if the species data are pre-transformed by Hellinger transformation). Hellinger transformation consists of first relativizing the species abundances in the sample by standardizing them to sample total (sum of all abundances in the sample); then, each standardized value is square-rooted. This puts the species abundances on the relative scale, and square-rooting lowers the importance of the dominant species. Hellinger distance is asymmetrical (not influenced by double zeros) and has an upper limit of",
null,
", which makes it a suitable method for ecological data with many zeros.\n\nChi-square distance is an asymmetrical distance which is rarely calculated itself, but is important since it is implicit for CA and CCA ordination.\n\nWhen comparing two samples, Euclidean distance puts more weight on differences in species abundances than on difference in species presences. As a result, two samples not sharing any species could appear more similar (with lower Euclidean distance) than two samples which share species but the species largely differ in their abundances (see the example below).\n\nIn the species composition matrix below, samples 1 and 2 does not share any species, while samples 1 and 3 share all species but differ in abundances (e.g. species 3 has abundance 1 in sample 1 and abundance 8 in sample 3):\n\n Species 1 Species 2 Species 3 Sample 1 0 1 1 Sample 2 1 0 0 Sample 3 0 4 8",
null,
"",
null,
"Euclidean distance between sample 1 and 2 is lower than between sample 1 and 3, although samples 1 and 2 have no species in common, while sample 1 and 3 share all species. Distances, which are based on relative species abundances (i.e. those in which abundances of species in the sample are made relative e.g. by dividing each abundance of species in the sample by the sum of abundances for all species in that sample), do not have this problem (e.g. Hellinger distance, which is Euclidean distance applied on Hellinger-standardized data - the first step of Hellinger standardization converts absolute species abundances into relative ones).\n\n## Matrix of similarities/distances\n\nThe matrix of similarities or distances is squared (the same number of rows as columns), with the values on diagonal either zeros (distances) or ones (similarities), and is symmetric - the upper right triangle is a mirror of values in lower left one (since it does not matter whether you calculate similarit/distance from sample A to sample B or from sample B to sample A; Figure 5).",
null,
"Fig. 5: Matrix of Euclidean distances calculated between all pairs of samples (a subset of 10 samples from Ellenberg's Danube meadow dataset used). Diagonal values (yellow) are zeros since the distance of two identical samples is zero.\n\n## Presence-absence vs quantitative species composition data\n\nSpecies composition data (i.e. data about the occurrence of species in individual community samples) containing species quantities (abundances, covers, biomass, numbers of individuals) can always be transformed into species presences-absences. It is important to know that by transforming abundances into presences-absences you are not only reducing the amount of information (quantities are lost) but also likely changing what data are able to tell you about natural processes behind the community assembly.\n\nAbundance data carry two types of information: 1) whether a species occurs in this community (or not), and 2) how much of this species occurs here; presence-absence data contain only the first type of this information. At the same time, whether a species occurs in a given community (or not) is driven by different ecological processes than the abundance of species in the community if the species already occurs there. The occurrence of a species in the site is often dependent not only on environmental suitability, but also on dispersal limitation (whether the species can get there), random drift (species may simply go extinct due to stochastic processes) or the existence of biogeographical boundaries (two sites with similar environmental conditions may not share the same species simply because there is a river or mountain range between them). On the other side, if the species already occurs in the sampled community, then its abundance is often driven by the suitability of environmental conditions (more abundant species are those for which the environment is favourable), but also by biotic interaction (competition or mutualism with other species present in the community).\n\nThis also means that sometimes, analysing parallelly both abundance and presence-absence transformed species composition data may be meaningful if we are trying to uncover alternative processes. Note, however, that if the species composition data contain many zeros (meaning that they are very heterogeneous), then most of the information stored in them is related to the first type of information (see above), even if they contain abundances of all non-absent species. Some studies (e.g. Wilson 2012) show that when analysing the relationship of species composition to environmental variables, transforming species composition data into presences-absences may actually improve results (variance explained by the environment in constrained ordination, or fit of environmental variables to unconstrained ordination axes).\n\n1)\nPercentage similarity (PS) is quantitative analog of Sørensen index; 1-PS is Percentage dissimilarity, also known as Bray-Curtis distance.\n2)\nNote that the use of “distance” and “dissimilarity” is somewhat not systematic; some authors call distances only those indices which are metric (Euclidean), i.e. can be displayed in metric (Euclidean) geometric space, and the other indices are called dissimilarities; but sometimes these two terms are simply synonyms.\n3)\nNote that according to P. Legendre, Bray-Curtis index should not be called after Bray and Curtis, since they have not really published it, only used it.",
null,
""
] | [
null,
"https://www.davidzeleny.net/anadat-r/lib/exe/fetch.php/wiki:logo.png",
null,
"https://www.davidzeleny.net/anadat-r/lib/exe/fetch.php/obrazky:qr-anadat-r.png",
null,
"https://www.davidzeleny.net/anadat-r/lib/exe/fetch.php/obrazky:double-zero-illustration.png",
null,
"https://www.davidzeleny.net/anadat-r/lib/exe/fetch.php/obrazky:double-zero-table.jpg",
null,
"https://www.davidzeleny.net/anadat-r/lib/plugins/mathpublish/img.php",
null,
"https://www.davidzeleny.net/anadat-r/lib/plugins/mathpublish/img.php",
null,
"https://www.davidzeleny.net/anadat-r/lib/plugins/mathpublish/img.php",
null,
"https://www.davidzeleny.net/anadat-r/lib/exe/fetch.php/obrazky:similarity-venns-diagram-abc.jpg",
null,
"https://www.davidzeleny.net/anadat-r/lib/plugins/mathpublish/img.php",
null,
"https://www.davidzeleny.net/anadat-r/lib/plugins/mathpublish/img.php",
null,
"https://www.davidzeleny.net/anadat-r/lib/exe/fetch.php/obrazky:triangle_inequality_principle.jpg",
null,
"https://www.davidzeleny.net/anadat-r/lib/plugins/mathpublish/img.php",
null,
"https://www.davidzeleny.net/anadat-r/lib/plugins/mathpublish/img.php",
null,
"https://www.davidzeleny.net/anadat-r/lib/plugins/mathpublish/img.php",
null,
"https://www.davidzeleny.net/anadat-r/lib/plugins/mathpublish/img.php",
null,
"https://www.davidzeleny.net/anadat-r/lib/exe/fetch.php/obrazky:schema-calculating-eucl-distance.png",
null,
"https://www.davidzeleny.net/anadat-r/lib/plugins/mathpublish/img.php",
null,
"https://www.davidzeleny.net/anadat-r/lib/plugins/mathpublish/img.php",
null,
"https://www.davidzeleny.net/anadat-r/lib/plugins/mathpublish/img.php",
null,
"https://www.davidzeleny.net/anadat-r/lib/plugins/mathpublish/img.php",
null,
"https://www.davidzeleny.net/anadat-r/lib/exe/fetch.php/obrazky:eucl-dist-matrix-danube-data.jpg",
null,
"https://www.davidzeleny.net/anadat-r/lib/exe/taskrunner.php",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9174278,"math_prob":0.982522,"size":18533,"snap":"2020-34-2020-40","text_gpt3_token_len":3772,"char_repetition_ratio":0.18889308,"word_repetition_ratio":0.01497736,"special_character_ratio":0.19597475,"punctuation_ratio":0.09378844,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97471225,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44],"im_url_duplicate_count":[null,null,null,null,null,5,null,5,null,null,null,null,null,null,null,5,null,null,null,null,null,5,null,null,null,null,null,null,null,null,null,5,null,null,null,null,null,null,null,null,null,5,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-28T16:05:37Z\",\"WARC-Record-ID\":\"<urn:uuid:91253da3-276a-49c0-a9cd-887c46b02d2b>\",\"Content-Length\":\"56006\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3e40e553-eaa3-45ac-8181-85661702cb69>\",\"WARC-Concurrent-To\":\"<urn:uuid:45979529-8682-4f82-8730-2172200e04f5>\",\"WARC-IP-Address\":\"104.28.31.70\",\"WARC-Target-URI\":\"https://www.davidzeleny.net/anadat-r/doku.php/en:similarity\",\"WARC-Payload-Digest\":\"sha1:TWEXTXPJZKOZEFB64MRUP2BLIGCOXT3R\",\"WARC-Block-Digest\":\"sha1:SW4N53MIGXGU2HKF2JWWWSFRUTW5E4HG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600401601278.97_warc_CC-MAIN-20200928135709-20200928165709-00583.warc.gz\"}"} |
https://experttech.in/2021/01/02/how-to-use-the-excel-isnumber-and-isnumeric-vba-function/ | [
"## How to use the Excel ISNUMBER and ISNUMERIC VBA Function\n\n• Post author:\n• Post category:Uncategorized\n\nWe can use the Excel ISNUMBER and ISNUMERIC VBA functions to detect numeric values in Excel and VBA programming. Both functions return TRUE if the cell contains a number and FALSE, if not. However, the Excel ISNUMERIC function is a VBA function whereas the ISNUMBER function is a worksheet function. Both functions can also yield different results in similar circumstances.\n\nIn this tutorial, we will learn how to use the EXCEL ISNUMBER and the ISNUMERIC VBA functions.\n\n## Syntax for EXCEL ISNUMBER Function\n\n`=ISNUMBER(value)`\n\nWhere value is the specified cell, formula, function or value to test. The ISNUMBER function checks if a value is stored as a number.\n\n## Syntax for EXCEL ISNUMERIC function\n\n`IsNumeric(expression)`\n\nWhere expression is evaluated as a number. The Excel VBA ISNUMERIC checks if a value can be converted to a number.\n\nFigure 1 – Result of excel check if number is True or False\n\n## Setting up Data and VBA for ISNUMERIC and Excel ISNUMBER functions\n\n• We will set a data table as shown below\n\nFigure 2 – Setting Data for Excel ISNUMBER and ISNUMERIC function\n\n• We will click on Developer and select Visual Basic\n• Next, we will click on Insert and select Module\n• In the new VBA window, we will enter this Macro code below\n\n`Function IsNumericTest(TestCell As Variant)`\n\n`'Use VBA to test if a cell is numeric via a function`\n\n`If IsNumeric(TestCell) Then 'if TestCell is True`\n\n` IsNumericTest = True 'Cell is a number`\n\n`Else`\n\n` IsNumericTest = False 'Cell is not a number`\n\n`End If`\n\n`End Function`\n\nFigure 3 – Excel ISNUMERIC VBA\n\n## Testing Data using Excel VBA ISNUMERIC and ISNUMBER functions\n\nWe will test Column A with the Excel ISNUMBER function in Column B and Excel VBA IsNumeric() function in Column C.\n\n1. To check with the ISNUMBER function;\n\n• In Cell B4, we will enter the formula below and press the Enter key\n\n`=ISNUMBER(A4)`\n\nFigure 4 – Excel ISNUMBER\n\n• We will have this result\n\nFigure 5 – Using the Excel ISNUMBER function\n\n• We will click again on Cell B4 and using the fill handle tool, we will drag the formula down the column to get this result:\n\nFigure 6 – ISNUMBER function in Excel\n\n2. For the IsNumeric function test\n\n• We will click on Cell C4, enter the formula below and press the enter key\n\n`=IsNumericTest(A4)`\n\nFigure 7 – ISNUMERIC VBA\n\n• We will have this result\n\nFigure 8 – VBA ISNUMERIC\n\n• Now, we will click on Cell C4 and using the fill handle tool, we will drag the formula down the column to get this result:\n\nFigure 9 – Excel VBA ISNUMERIC\n\n## Explanation\n\nIn our example, this is the different results given by the two functions.\n\nFigure 10 – Using Conditional formatting for ISNUMBER versus ISNUMERIC in Excel\n\nBy Comparing results using both functions, we will find that\n\n• The Excel IsNumeric function considers empty numeric cells but the Excel ISNUMBER function does not\n• The Excel ISNUMBER function finds dates entered with texts and characters as numbers because it is stored as numbers whereas the Excel IsNumeric does not"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6853503,"math_prob":0.86656064,"size":2922,"snap":"2021-31-2021-39","text_gpt3_token_len":694,"char_repetition_ratio":0.2148732,"word_repetition_ratio":0.086105675,"special_character_ratio":0.20807666,"punctuation_ratio":0.056925997,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9994462,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-23T05:23:04Z\",\"WARC-Record-ID\":\"<urn:uuid:0ce4ca6b-c6c6-441a-b795-1fa6143bc282>\",\"Content-Length\":\"52568\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2fe890b8-e08a-4159-9267-5e3bf7013261>\",\"WARC-Concurrent-To\":\"<urn:uuid:fa3061bc-8513-4e4b-b78b-b51a8479b1b9>\",\"WARC-IP-Address\":\"104.131.109.233\",\"WARC-Target-URI\":\"https://experttech.in/2021/01/02/how-to-use-the-excel-isnumber-and-isnumeric-vba-function/\",\"WARC-Payload-Digest\":\"sha1:R2CSVNXBKLFZO5BIV6ALPR5EAA7SOXRH\",\"WARC-Block-Digest\":\"sha1:FSVMNCAHUCGPNB7ZUYQVFWLO76ACCULR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057417.10_warc_CC-MAIN-20210923044248-20210923074248-00317.warc.gz\"}"} |
http://docjar.org/docs/api/java/awt/geom/AffineTransform.html | [
"Home » openjdk-7 » java » awt » geom » [javadoc | source]\njava.awt.geom\npublic class: AffineTransform [javadoc | source]\n```java.lang.Object",
null,
"java.awt.geom.AffineTransform```\n\nAll Implemented Interfaces:\nCloneable, java\\$io\\$Serializable\n\nThe `AffineTransform` class represents a 2D affine transform that performs a linear mapping from 2D coordinates to other 2D coordinates that preserves the \"straightness\" and \"parallelness\" of lines. Affine transformations can be constructed using sequences of translations, scales, flips, rotations, and shears.\n\nSuch a coordinate transformation can be represented by a 3 row by 3 column matrix with an implied last row of [ 0 0 1 ]. This matrix transforms source coordinates {@code (x,y)} into destination coordinates {@code (x',y')} by considering them to be a column vector and multiplying the coordinate vector by the matrix according to the following process:\n\n``` [ x'] [ m00 m01 m02 ] [ x ] [ m00x + m01y + m02 ]\n[ y'] = [ m10 m11 m12 ] [ y ] = [ m10x + m11y + m12 ]\n[ 1 ] [ 0 0 1 ] [ 1 ] [ 1 ]\n```\n\n#### Handling 90-Degree Rotations\n\nIn some variations of the `rotate` methods in the `AffineTransform` class, a double-precision argument specifies the angle of rotation in radians. These methods have special handling for rotations of approximately 90 degrees (including multiples such as 180, 270, and 360 degrees), so that the common case of quadrant rotation is handled more efficiently. This special handling can cause angles very close to multiples of 90 degrees to be treated as if they were exact multiples of 90 degrees. For small multiples of 90 degrees the range of angles treated as a quadrant rotation is approximately 0.00000121 degrees wide. This section explains why such special care is needed and how it is implemented.\n\nSince 90 degrees is represented as `PI/2` in radians, and since PI is a transcendental (and therefore irrational) number, it is not possible to exactly represent a multiple of 90 degrees as an exact double precision value measured in radians. As a result it is theoretically impossible to describe quadrant rotations (90, 180, 270 or 360 degrees) using these values. Double precision floating point values can get very close to non-zero multiples of `PI/2` but never close enough for the sine or cosine to be exactly 0.0, 1.0 or -1.0. The implementations of `Math.sin()` and `Math.cos()` correspondingly never return 0.0 for any case other than `Math.sin(0.0)`. These same implementations do, however, return exactly 1.0 and -1.0 for some range of numbers around each multiple of 90 degrees since the correct answer is so close to 1.0 or -1.0 that the double precision significand cannot represent the difference as accurately as it can for numbers that are near 0.0.\n\nThe net result of these issues is that if the `Math.sin()` and `Math.cos()` methods are used to directly generate the values for the matrix modifications during these radian-based rotation operations then the resulting transform is never strictly classifiable as a quadrant rotation even for a simple case like `rotate(Math.PI/2.0)`, due to minor variations in the matrix caused by the non-0.0 values obtained for the sine and cosine. If these transforms are not classified as quadrant rotations then subsequent code which attempts to optimize further operations based upon the type of the transform will be relegated to its most general implementation.\n\nBecause quadrant rotations are fairly common, this class should handle these cases reasonably quickly, both in applying the rotations to the transform and in applying the resulting transform to the coordinates. To facilitate this optimal handling, the methods which take an angle of rotation measured in radians attempt to detect angles that are intended to be quadrant rotations and treat them as such. These methods therefore treat an angle theta as a quadrant rotation if either `Math.sin(theta)` or `Math.cos(theta)` returns exactly 1.0 or -1.0. As a rule of thumb, this property holds true for a range of approximately 0.0000000211 radians (or 0.00000121 degrees) around small multiples of `Math.PI/2.0`.\n\nauthor: `Jim` - Graham\nsince: `1.2` -\nField Summary\npublic static final int TYPE_IDENTITY This constant indicates that the transform defined by this object is an identity transform. An identity transform is one in which the output coordinates are always the same as the input coordinates. If this transform is anything other than the identity transform, the type will either be the constant GENERAL_TRANSFORM or a combination of the appropriate flag bits for the various coordinate conversions that this transform performs.\npublic static final int TYPE_TRANSLATION This flag bit indicates that the transform defined by this object performs a translation in addition to the conversions indicated by other flag bits. A translation moves the coordinates by a constant amount in x and y without changing the length or angle of vectors.\npublic static final int TYPE_UNIFORM_SCALE This flag bit indicates that the transform defined by this object performs a uniform scale in addition to the conversions indicated by other flag bits. A uniform scale multiplies the length of vectors by the same amount in both the x and y directions without changing the angle between vectors. This flag bit is mutually exclusive with the TYPE_GENERAL_SCALE flag.\npublic static final int TYPE_GENERAL_SCALE This flag bit indicates that the transform defined by this object performs a general scale in addition to the conversions indicated by other flag bits. A general scale multiplies the length of vectors by different amounts in the x and y directions without changing the angle between perpendicular vectors. This flag bit is mutually exclusive with the TYPE_UNIFORM_SCALE flag.\npublic static final int TYPE_MASK_SCALE This constant is a bit mask for any of the scale flag bits.\n\npublic static final int TYPE_FLIP This flag bit indicates that the transform defined by this object performs a mirror image flip about some axis which changes the normally right handed coordinate system into a left handed system in addition to the conversions indicated by other flag bits. A right handed coordinate system is one where the positive X axis rotates counterclockwise to overlay the positive Y axis similar to the direction that the fingers on your right hand curl when you stare end on at your thumb. A left handed coordinate system is one where the positive X axis rotates clockwise to overlay the positive Y axis similar to the direction that the fingers on your left hand curl. There is no mathematical way to determine the angle of the original flipping or mirroring transformation since all angles of flip are identical given an appropriate adjusting rotation.\npublic static final int TYPE_QUADRANT_ROTATION This flag bit indicates that the transform defined by this object performs a quadrant rotation by some multiple of 90 degrees in addition to the conversions indicated by other flag bits. A rotation changes the angles of vectors by the same amount regardless of the original direction of the vector and without changing the length of the vector. This flag bit is mutually exclusive with the TYPE_GENERAL_ROTATION flag.\npublic static final int TYPE_GENERAL_ROTATION This flag bit indicates that the transform defined by this object performs a rotation by an arbitrary angle in addition to the conversions indicated by other flag bits. A rotation changes the angles of vectors by the same amount regardless of the original direction of the vector and without changing the length of the vector. This flag bit is mutually exclusive with the TYPE_QUADRANT_ROTATION flag.\npublic static final int TYPE_MASK_ROTATION This constant is a bit mask for any of the rotation flag bits.\n\npublic static final int TYPE_GENERAL_TRANSFORM This constant indicates that the transform defined by this object performs an arbitrary conversion of the input coordinates. If this transform can be classified by any of the above constants, the type will either be the constant TYPE_IDENTITY or a combination of the appropriate flag bits for the various coordinate conversions that this transform performs.\nstatic final int APPLY_IDENTITY This constant is used for the internal state variable to indicate that no calculations need to be performed and that the source coordinates only need to be copied to their destinations to complete the transformation equation of this transform.\nstatic final int APPLY_TRANSLATE This constant is used for the internal state variable to indicate that the translation components of the matrix (m02 and m12) need to be added to complete the transformation equation of this transform.\nstatic final int APPLY_SCALE This constant is used for the internal state variable to indicate that the scaling components of the matrix (m00 and m11) need to be factored in to complete the transformation equation of this transform. If the APPLY_SHEAR bit is also set then it indicates that the scaling components are not both 0.0. If the APPLY_SHEAR bit is not also set then it indicates that the scaling components are not both 1.0. If neither the APPLY_SHEAR nor the APPLY_SCALE bits are set then the scaling components are both 1.0, which means that the x and y components contribute to the transformed coordinate, but they are not multiplied by any scaling factor.\nstatic final int APPLY_SHEAR This constant is used for the internal state variable to indicate that the shearing components of the matrix (m01 and m10) need to be factored in to complete the transformation equation of this transform. The presence of this bit in the state variable changes the interpretation of the APPLY_SCALE bit as indicated in its documentation.\ndouble m00 The X coordinate scaling element of the 3x3 affine transformation matrix.\nserial:\n\ndouble m10 The Y coordinate shearing element of the 3x3 affine transformation matrix.\nserial:\n\ndouble m01 The X coordinate shearing element of the 3x3 affine transformation matrix.\nserial:\n\ndouble m11 The Y coordinate scaling element of the 3x3 affine transformation matrix.\nserial:\n\ndouble m02 The X coordinate of the translation element of the 3x3 affine transformation matrix.\nserial:\n\ndouble m12 The Y coordinate of the translation element of the 3x3 affine transformation matrix.\nserial:\n\ntransient int state This field keeps track of which components of the matrix need to be applied when performing a transformation.\nConstructor:\n```",
null,
"public AffineTransform() {\nm00 = m11 = 1.0;\n// m01 = m10 = m02 = m12 = 0.0; /* Not needed. */\n// state = APPLY_IDENTITY; /* Not needed. */\n// type = TYPE_IDENTITY; /* Not needed. */\n}```\n```",
null,
"public AffineTransform(AffineTransform Tx) {\nthis.m00 = Tx.m00;\nthis.m10 = Tx.m10;\nthis.m01 = Tx.m01;\nthis.m11 = Tx.m11;\nthis.m02 = Tx.m02;\nthis.m12 = Tx.m12;\nthis.state = Tx.state;\nthis.type = Tx.type;\n}```\nConstructs a new `AffineTransform` that is a copy of the specified `AffineTransform` object.\nParameters:\n`Tx` - the `AffineTransform` object to copy\nsince: `1.2` -\n```",
null,
"public AffineTransform(float[] flatmatrix) {\nm00 = flatmatrix;\nm10 = flatmatrix;\nm01 = flatmatrix;\nm11 = flatmatrix;\nif (flatmatrix.length > 5) {\nm02 = flatmatrix;\nm12 = flatmatrix;\n}\n}```\n```",
null,
"public AffineTransform(double[] flatmatrix) {\nm00 = flatmatrix;\nm10 = flatmatrix;\nm01 = flatmatrix;\nm11 = flatmatrix;\nif (flatmatrix.length > 5) {\nm02 = flatmatrix;\nm12 = flatmatrix;\n}\n}```\n```",
null,
"public AffineTransform(float m00,\nfloat m10,\nfloat m01,\nfloat m11,\nfloat m02,\nfloat m12) {\nthis.m00 = m00;\nthis.m10 = m10;\nthis.m01 = m01;\nthis.m11 = m11;\nthis.m02 = m02;\nthis.m12 = m12;\n}```\n```",
null,
"public AffineTransform(double m00,\ndouble m10,\ndouble m01,\ndouble m11,\ndouble m02,\ndouble m12) {\nthis.m00 = m00;\nthis.m10 = m10;\nthis.m01 = m01;\nthis.m11 = m11;\nthis.m02 = m02;\nthis.m12 = m12;\n}```\nMethod from java.awt.geom.AffineTransform Summary:\nclone, concatenate, createInverse, createTransformedShape, deltaTransform, deltaTransform, equals, getDeterminant, getMatrix, getQuadrantRotateInstance, getQuadrantRotateInstance, getRotateInstance, getRotateInstance, getRotateInstance, getRotateInstance, getScaleInstance, getScaleX, getScaleY, getShearInstance, getShearX, getShearY, getTranslateInstance, getTranslateX, getTranslateY, getType, hashCode, inverseTransform, inverseTransform, invert, isIdentity, preConcatenate, quadrantRotate, quadrantRotate, rotate, rotate, rotate, rotate, scale, setToIdentity, setToQuadrantRotation, setToQuadrantRotation, setToRotation, setToRotation, setToRotation, setToRotation, setToScale, setToShear, setToTranslation, setTransform, setTransform, shear, toString, transform, transform, transform, transform, transform, transform, translate, updateState\nMethods from java.lang.Object:\nclone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait\nMethod from java.awt.geom.AffineTransform Detail:\n```",
null,
"public Object clone() {\ntry {\nreturn super.clone();\n} catch (CloneNotSupportedException e) {\n// this shouldn't happen, since we are Cloneable\nthrow new InternalError();\n}\n}```\nReturns a copy of this `AffineTransform` object.\n```",
null,
"public void concatenate(AffineTransform Tx) {\ndouble M0, M1;\ndouble T00, T01, T10, T11;\ndouble T02, T12;\nint mystate = state;\nint txstate = Tx.state;\nswitch ((txstate < < HI_SHIFT) | mystate) {\n/* ---------- Tx == IDENTITY cases ---------- */\ncase (HI_IDENTITY | APPLY_IDENTITY):\ncase (HI_IDENTITY | APPLY_TRANSLATE):\ncase (HI_IDENTITY | APPLY_SCALE):\ncase (HI_IDENTITY | APPLY_SCALE | APPLY_TRANSLATE):\ncase (HI_IDENTITY | APPLY_SHEAR):\ncase (HI_IDENTITY | APPLY_SHEAR | APPLY_TRANSLATE):\ncase (HI_IDENTITY | APPLY_SHEAR | APPLY_SCALE):\ncase (HI_IDENTITY | APPLY_SHEAR | APPLY_SCALE | APPLY_TRANSLATE):\nreturn;\n/* ---------- this == IDENTITY cases ---------- */\ncase (HI_SHEAR | HI_SCALE | HI_TRANSLATE | APPLY_IDENTITY):\nm01 = Tx.m01;\nm10 = Tx.m10;\n/* NOBREAK */\ncase (HI_SCALE | HI_TRANSLATE | APPLY_IDENTITY):\nm00 = Tx.m00;\nm11 = Tx.m11;\n/* NOBREAK */\ncase (HI_TRANSLATE | APPLY_IDENTITY):\nm02 = Tx.m02;\nm12 = Tx.m12;\nstate = txstate;\ntype = Tx.type;\nreturn;\ncase (HI_SHEAR | HI_SCALE | APPLY_IDENTITY):\nm01 = Tx.m01;\nm10 = Tx.m10;\n/* NOBREAK */\ncase (HI_SCALE | APPLY_IDENTITY):\nm00 = Tx.m00;\nm11 = Tx.m11;\nstate = txstate;\ntype = Tx.type;\nreturn;\ncase (HI_SHEAR | HI_TRANSLATE | APPLY_IDENTITY):\nm02 = Tx.m02;\nm12 = Tx.m12;\n/* NOBREAK */\ncase (HI_SHEAR | APPLY_IDENTITY):\nm01 = Tx.m01;\nm10 = Tx.m10;\nm00 = m11 = 0.0;\nstate = txstate;\ntype = Tx.type;\nreturn;\n/* ---------- Tx == TRANSLATE cases ---------- */\ncase (HI_TRANSLATE | APPLY_SHEAR | APPLY_SCALE | APPLY_TRANSLATE):\ncase (HI_TRANSLATE | APPLY_SHEAR | APPLY_SCALE):\ncase (HI_TRANSLATE | APPLY_SHEAR | APPLY_TRANSLATE):\ncase (HI_TRANSLATE | APPLY_SHEAR):\ncase (HI_TRANSLATE | APPLY_SCALE | APPLY_TRANSLATE):\ncase (HI_TRANSLATE | APPLY_SCALE):\ncase (HI_TRANSLATE | APPLY_TRANSLATE):\ntranslate(Tx.m02, Tx.m12);\nreturn;\n/* ---------- Tx == SCALE cases ---------- */\ncase (HI_SCALE | APPLY_SHEAR | APPLY_SCALE | APPLY_TRANSLATE):\ncase (HI_SCALE | APPLY_SHEAR | APPLY_SCALE):\ncase (HI_SCALE | APPLY_SHEAR | APPLY_TRANSLATE):\ncase (HI_SCALE | APPLY_SHEAR):\ncase (HI_SCALE | APPLY_SCALE | APPLY_TRANSLATE):\ncase (HI_SCALE | APPLY_SCALE):\ncase (HI_SCALE | APPLY_TRANSLATE):\nscale(Tx.m00, Tx.m11);\nreturn;\n/* ---------- Tx == SHEAR cases ---------- */\ncase (HI_SHEAR | APPLY_SHEAR | APPLY_SCALE | APPLY_TRANSLATE):\ncase (HI_SHEAR | APPLY_SHEAR | APPLY_SCALE):\nT01 = Tx.m01; T10 = Tx.m10;\nM0 = m00;\nm00 = m01 * T10;\nm01 = M0 * T01;\nM0 = m10;\nm10 = m11 * T10;\nm11 = M0 * T01;\ntype = TYPE_UNKNOWN;\nreturn;\ncase (HI_SHEAR | APPLY_SHEAR | APPLY_TRANSLATE):\ncase (HI_SHEAR | APPLY_SHEAR):\nm00 = m01 * Tx.m10;\nm01 = 0.0;\nm11 = m10 * Tx.m01;\nm10 = 0.0;\nstate = mystate ^ (APPLY_SHEAR | APPLY_SCALE);\ntype = TYPE_UNKNOWN;\nreturn;\ncase (HI_SHEAR | APPLY_SCALE | APPLY_TRANSLATE):\ncase (HI_SHEAR | APPLY_SCALE):\nm01 = m00 * Tx.m01;\nm00 = 0.0;\nm10 = m11 * Tx.m10;\nm11 = 0.0;\nstate = mystate ^ (APPLY_SHEAR | APPLY_SCALE);\ntype = TYPE_UNKNOWN;\nreturn;\ncase (HI_SHEAR | APPLY_TRANSLATE):\nm00 = 0.0;\nm01 = Tx.m01;\nm10 = Tx.m10;\nm11 = 0.0;\nstate = APPLY_TRANSLATE | APPLY_SHEAR;\ntype = TYPE_UNKNOWN;\nreturn;\n}\n// If Tx has more than one attribute, it is not worth optimizing\n// all of those cases...\nT00 = Tx.m00; T01 = Tx.m01; T02 = Tx.m02;\nT10 = Tx.m10; T11 = Tx.m11; T12 = Tx.m12;\nswitch (mystate) {\ndefault:\nstateError();\n/* NOTREACHED */\ncase (APPLY_SHEAR | APPLY_SCALE):\nstate = mystate | txstate;\n/* NOBREAK */\ncase (APPLY_SHEAR | APPLY_SCALE | APPLY_TRANSLATE):\nM0 = m00;\nM1 = m01;\nm00 = T00 * M0 + T10 * M1;\nm01 = T01 * M0 + T11 * M1;\nm02 += T02 * M0 + T12 * M1;\nM0 = m10;\nM1 = m11;\nm10 = T00 * M0 + T10 * M1;\nm11 = T01 * M0 + T11 * M1;\nm12 += T02 * M0 + T12 * M1;\ntype = TYPE_UNKNOWN;\nreturn;\ncase (APPLY_SHEAR | APPLY_TRANSLATE):\ncase (APPLY_SHEAR):\nM0 = m01;\nm00 = T10 * M0;\nm01 = T11 * M0;\nm02 += T12 * M0;\nM0 = m10;\nm10 = T00 * M0;\nm11 = T01 * M0;\nm12 += T02 * M0;\nbreak;\ncase (APPLY_SCALE | APPLY_TRANSLATE):\ncase (APPLY_SCALE):\nM0 = m00;\nm00 = T00 * M0;\nm01 = T01 * M0;\nm02 += T02 * M0;\nM0 = m11;\nm10 = T10 * M0;\nm11 = T11 * M0;\nm12 += T12 * M0;\nbreak;\ncase (APPLY_TRANSLATE):\nm00 = T00;\nm01 = T01;\nm02 += T02;\nm10 = T10;\nm11 = T11;\nm12 += T12;\nstate = txstate | APPLY_TRANSLATE;\ntype = TYPE_UNKNOWN;\nreturn;\n}\n}```\nConcatenates an `AffineTransform` `Tx` to this `AffineTransform` Cx in the most commonly useful way to provide a new user space that is mapped to the former user space by `Tx`. Cx is updated to perform the combined transformation. Transforming a point p by the updated transform Cx' is equivalent to first transforming p by `Tx` and then transforming the result by the original transform Cx like this: Cx'(p) = Cx(Tx(p)) In matrix notation, if this transform Cx is represented by the matrix [this] and `Tx` is represented by the matrix [Tx] then this method does the following:\n``` [this] = [this] x [Tx]\n```\n```",
null,
"public AffineTransform createInverse() throws NoninvertibleTransformException {\ndouble det;\nswitch (state) {\ndefault:\nstateError();\n/* NOTREACHED */\ncase (APPLY_SHEAR | APPLY_SCALE | APPLY_TRANSLATE):\ndet = m00 * m11 - m01 * m10;\nif (Math.abs(det) < = Double.MIN_VALUE) {\nthrow new NoninvertibleTransformException(\"Determinant is \"+\ndet);\n}\nreturn new AffineTransform( m11 / det, -m10 / det,\n-m01 / det, m00 / det,\n(m01 * m12 - m11 * m02) / det,\n(m10 * m02 - m00 * m12) / det,\n(APPLY_SHEAR |\nAPPLY_SCALE |\nAPPLY_TRANSLATE));\ncase (APPLY_SHEAR | APPLY_SCALE):\ndet = m00 * m11 - m01 * m10;\nif (Math.abs(det) < = Double.MIN_VALUE) {\nthrow new NoninvertibleTransformException(\"Determinant is \"+\ndet);\n}\nreturn new AffineTransform( m11 / det, -m10 / det,\n-m01 / det, m00 / det,\n0.0, 0.0,\n(APPLY_SHEAR | APPLY_SCALE));\ncase (APPLY_SHEAR | APPLY_TRANSLATE):\nif (m01 == 0.0 || m10 == 0.0) {\nthrow new NoninvertibleTransformException(\"Determinant is 0\");\n}\nreturn new AffineTransform( 0.0, 1.0 / m01,\n1.0 / m10, 0.0,\n-m12 / m10, -m02 / m01,\n(APPLY_SHEAR | APPLY_TRANSLATE));\ncase (APPLY_SHEAR):\nif (m01 == 0.0 || m10 == 0.0) {\nthrow new NoninvertibleTransformException(\"Determinant is 0\");\n}\nreturn new AffineTransform(0.0, 1.0 / m01,\n1.0 / m10, 0.0,\n0.0, 0.0,\n(APPLY_SHEAR));\ncase (APPLY_SCALE | APPLY_TRANSLATE):\nif (m00 == 0.0 || m11 == 0.0) {\nthrow new NoninvertibleTransformException(\"Determinant is 0\");\n}\nreturn new AffineTransform( 1.0 / m00, 0.0,\n0.0, 1.0 / m11,\n-m02 / m00, -m12 / m11,\n(APPLY_SCALE | APPLY_TRANSLATE));\ncase (APPLY_SCALE):\nif (m00 == 0.0 || m11 == 0.0) {\nthrow new NoninvertibleTransformException(\"Determinant is 0\");\n}\nreturn new AffineTransform(1.0 / m00, 0.0,\n0.0, 1.0 / m11,\n0.0, 0.0,\n(APPLY_SCALE));\ncase (APPLY_TRANSLATE):\nreturn new AffineTransform( 1.0, 0.0,\n0.0, 1.0,\n-m02, -m12,\n(APPLY_TRANSLATE));\ncase (APPLY_IDENTITY):\nreturn new AffineTransform();\n}\n/* NOTREACHED */\n}```\nReturns an `AffineTransform` object representing the inverse transformation. The inverse transform Tx' of this transform Tx maps coordinates transformed by Tx back to their original coordinates. In other words, Tx'(Tx(p)) = p = Tx(Tx'(p)).\n\nIf this transform maps all coordinates onto a point or a line then it will not have an inverse, since coordinates that do not lie on the destination point or line will not have an inverse mapping. The `getDeterminant` method can be used to determine if this transform has no inverse, in which case an exception will be thrown if the `createInverse` method is called.\n\n```",
null,
"public Shape createTransformedShape(Shape pSrc) {\nif (pSrc == null) {\nreturn null;\n}\nreturn new Path2D.Double(pSrc, this);\n}```\nReturns a new Shape object defined by the geometry of the specified `Shape` after it has been transformed by this transform.\n```",
null,
"public Point2D deltaTransform(Point2D ptSrc,\nPoint2D ptDst) {\nif (ptDst == null) {\nif (ptSrc instanceof Point2D.Double) {\nptDst = new Point2D.Double();\n} else {\nptDst = new Point2D.Float();\n}\n}\n// Copy source coords into local variables in case src == dst\ndouble x = ptSrc.getX();\ndouble y = ptSrc.getY();\nswitch (state) {\ndefault:\nstateError();\n/* NOTREACHED */\ncase (APPLY_SHEAR | APPLY_SCALE | APPLY_TRANSLATE):\ncase (APPLY_SHEAR | APPLY_SCALE):\nptDst.setLocation(x * m00 + y * m01, x * m10 + y * m11);\nreturn ptDst;\ncase (APPLY_SHEAR | APPLY_TRANSLATE):\ncase (APPLY_SHEAR):\nptDst.setLocation(y * m01, x * m10);\nreturn ptDst;\ncase (APPLY_SCALE | APPLY_TRANSLATE):\ncase (APPLY_SCALE):\nptDst.setLocation(x * m00, y * m11);\nreturn ptDst;\ncase (APPLY_TRANSLATE):\ncase (APPLY_IDENTITY):\nptDst.setLocation(x, y);\nreturn ptDst;\n}\n/* NOTREACHED */\n}```\nTransforms the relative distance vector specified by `ptSrc` and stores the result in `ptDst`. A relative distance vector is transformed without applying the translation components of the affine transformation matrix using the following equations:\n``` [ x' ] [ m00 m01 (m02) ] [ x ] [ m00x + m01y ]\n[ y' ] = [ m10 m11 (m12) ] [ y ] = [ m10x + m11y ]\n[ (1) ] [ (0) (0) ( 1 ) ] [ (1) ] [ (1) ]\n```\nIf `ptDst` is `null`, a new `Point2D` object is allocated and then the result of the transform is stored in this object. In either case, `ptDst`, which contains the transformed point, is returned for convenience. If `ptSrc` and `ptDst` are the same object, the input point is correctly overwritten with the transformed point.\n```",
null,
"public void deltaTransform(double[] srcPts,\nint srcOff,\ndouble[] dstPts,\nint dstOff,\nint numPts) {\ndouble M00, M01, M10, M11; // For caching\nif (dstPts == srcPts &&\ndstOff > srcOff && dstOff < srcOff + numPts * 2)\n{\n// If the arrays overlap partially with the destination higher\n// than the source and we transform the coordinates normally\n// we would overwrite some of the later source coordinates\n// with results of previous transformations.\n// To get around this we use arraycopy to copy the points\n// to their final destination with correct overwrite\n// handling and then transform them in place in the new\n// safer location.\nSystem.arraycopy(srcPts, srcOff, dstPts, dstOff, numPts * 2);\n// srcPts = dstPts; // They are known to be equal.\nsrcOff = dstOff;\n}\nswitch (state) {\ndefault:\nstateError();\n/* NOTREACHED */\ncase (APPLY_SHEAR | APPLY_SCALE | APPLY_TRANSLATE):\ncase (APPLY_SHEAR | APPLY_SCALE):\nM00 = m00; M01 = m01;\nM10 = m10; M11 = m11;\nwhile (--numPts >= 0) {\ndouble x = srcPts[srcOff++];\ndouble y = srcPts[srcOff++];\ndstPts[dstOff++] = x * M00 + y * M01;\ndstPts[dstOff++] = x * M10 + y * M11;\n}\nreturn;\ncase (APPLY_SHEAR | APPLY_TRANSLATE):\ncase (APPLY_SHEAR):\nM01 = m01; M10 = m10;\nwhile (--numPts >= 0) {\ndouble x = srcPts[srcOff++];\ndstPts[dstOff++] = srcPts[srcOff++] * M01;\ndstPts[dstOff++] = x * M10;\n}\nreturn;\ncase (APPLY_SCALE | APPLY_TRANSLATE):\ncase (APPLY_SCALE):\nM00 = m00; M11 = m11;\nwhile (--numPts >= 0) {\ndstPts[dstOff++] = srcPts[srcOff++] * M00;\ndstPts[dstOff++] = srcPts[srcOff++] * M11;\n}\nreturn;\ncase (APPLY_TRANSLATE):\ncase (APPLY_IDENTITY):\nif (srcPts != dstPts || srcOff != dstOff) {\nSystem.arraycopy(srcPts, srcOff, dstPts, dstOff,\nnumPts * 2);\n}\nreturn;\n}\n/* NOTREACHED */\n}```\nTransforms an array of relative distance vectors by this transform. A relative distance vector is transformed without applying the translation components of the affine transformation matrix using the following equations:\n``` [ x' ] [ m00 m01 (m02) ] [ x ] [ m00x + m01y ]\n[ y' ] = [ m10 m11 (m12) ] [ y ] = [ m10x + m11y ]\n[ (1) ] [ (0) (0) ( 1 ) ] [ (1) ] [ (1) ]\n```\nThe two coordinate array sections can be exactly the same or can be overlapping sections of the same array without affecting the validity of the results. This method ensures that no source coordinates are overwritten by a previous operation before they can be transformed. The coordinates are stored in the arrays starting at the indicated offset in the order `[x0, y0, x1, y1, ..., xn, yn]`.\n```",
null,
"public boolean equals(Object obj) {\nif (!(obj instanceof AffineTransform)) {\nreturn false;\n}\nAffineTransform a = (AffineTransform)obj;\nreturn ((m00 == a.m00) && (m01 == a.m01) && (m02 == a.m02) &&\n(m10 == a.m10) && (m11 == a.m11) && (m12 == a.m12));\n}```\nReturns `true` if this `AffineTransform` represents the same affine coordinate transform as the specified argument.\n```",
null,
"public double getDeterminant() {\nswitch (state) {\ndefault:\nstateError();\n/* NOTREACHED */\ncase (APPLY_SHEAR | APPLY_SCALE | APPLY_TRANSLATE):\ncase (APPLY_SHEAR | APPLY_SCALE):\nreturn m00 * m11 - m01 * m10;\ncase (APPLY_SHEAR | APPLY_TRANSLATE):\ncase (APPLY_SHEAR):\nreturn -(m01 * m10);\ncase (APPLY_SCALE | APPLY_TRANSLATE):\ncase (APPLY_SCALE):\nreturn m00 * m11;\ncase (APPLY_TRANSLATE):\ncase (APPLY_IDENTITY):\nreturn 1.0;\n}\n}```\nReturns the determinant of the matrix representation of the transform. The determinant is useful both to determine if the transform can be inverted and to get a single value representing the combined X and Y scaling of the transform.\n\nIf the determinant is non-zero, then this transform is invertible and the various methods that depend on the inverse transform do not need to throw a NoninvertibleTransformException . If the determinant is zero then this transform can not be inverted since the transform maps all input coordinates onto a line or a point. If the determinant is near enough to zero then inverse transform operations might not carry enough precision to produce meaningful results.\n\nIf this transform represents a uniform scale, as indicated by the `getType` method then the determinant also represents the square of the uniform scale factor by which all of the points are expanded from or contracted towards the origin. If this transform represents a non-uniform scale or more general transform then the determinant is not likely to represent a value useful for any purpose other than determining if inverse transforms are possible.\n\nMathematically, the determinant is calculated using the formula:\n\n``` | m00 m01 m02 |\n| m10 m11 m12 | = m00 * m11 - m01 * m10\n| 0 0 1 |\n```\n```",
null,
"public void getMatrix(double[] flatmatrix) {\nflatmatrix = m00;\nflatmatrix = m10;\nflatmatrix = m01;\nflatmatrix = m11;\nif (flatmatrix.length > 5) {\nflatmatrix = m02;\nflatmatrix = m12;\n}\n}```\nRetrieves the 6 specifiable values in the 3x3 affine transformation matrix and places them into an array of double precisions values. The values are stored in the array as { m00 m10 m01 m11 m02 m12 }. An array of 4 doubles can also be specified, in which case only the first four elements representing the non-transform parts of the array are retrieved and the values are stored into the array as { m00 m10 m01 m11 }\n```",
null,
"public static AffineTransform getQuadrantRotateInstance(int numquadrants) {\nAffineTransform Tx = new AffineTransform();\nreturn Tx;\n}```\nReturns a transform that rotates coordinates by the specified number of quadrants. This operation is equivalent to calling:\n``` AffineTransform.getRotateInstance(numquadrants * Math.PI / 2.0);\n```\nRotating by a positive number of quadrants rotates points on the positive X axis toward the positive Y axis.\n```",
null,
"public static AffineTransform getQuadrantRotateInstance(int numquadrants,\ndouble anchorx,\ndouble anchory) {\nAffineTransform Tx = new AffineTransform();\nreturn Tx;\n}```\nReturns a transform that rotates coordinates by the specified number of quadrants around the specified anchor point. This operation is equivalent to calling:\n``` AffineTransform.getRotateInstance(numquadrants * Math.PI / 2.0,\nanchorx, anchory);\n```\nRotating by a positive number of quadrants rotates points on the positive X axis toward the positive Y axis.\n```",
null,
"public static AffineTransform getRotateInstance(double theta) {\nAffineTransform Tx = new AffineTransform();\nTx.setToRotation(theta);\nreturn Tx;\n}```\nReturns a transform representing a rotation transformation. The matrix representing the returned transform is:\n``` [ cos(theta) -sin(theta) 0 ]\n[ sin(theta) cos(theta) 0 ]\n[ 0 0 1 ]\n```\nRotating by a positive angle theta rotates points on the positive X axis toward the positive Y axis. Note also the discussion of Handling 90-Degree Rotations above.\n```",
null,
"public static AffineTransform getRotateInstance(double vecx,\ndouble vecy) {\nAffineTransform Tx = new AffineTransform();\nTx.setToRotation(vecx, vecy);\nreturn Tx;\n}```\nReturns a transform that rotates coordinates according to a rotation vector. All coordinates rotate about the origin by the same amount. The amount of rotation is such that coordinates along the former positive X axis will subsequently align with the vector pointing from the origin to the specified vector coordinates. If both `vecx` and `vecy` are 0.0, an identity transform is returned. This operation is equivalent to calling:\n``` AffineTransform.getRotateInstance(Math.atan2(vecy, vecx));\n```\n```",
null,
"public static AffineTransform getRotateInstance(double theta,\ndouble anchorx,\ndouble anchory) {\nAffineTransform Tx = new AffineTransform();\nTx.setToRotation(theta, anchorx, anchory);\nreturn Tx;\n}```\nReturns a transform that rotates coordinates around an anchor point. This operation is equivalent to translating the coordinates so that the anchor point is at the origin (S1), then rotating them about the new origin (S2), and finally translating so that the intermediate origin is restored to the coordinates of the original anchor point (S3).\n\nThis operation is equivalent to the following sequence of calls:\n\n``` AffineTransform Tx = new AffineTransform();\nTx.translate(anchorx, anchory); // S3: final translation\nTx.rotate(theta); // S2: rotate around anchor\nTx.translate(-anchorx, -anchory); // S1: translate anchor to origin\n```\nThe matrix representing the returned transform is:\n``` [ cos(theta) -sin(theta) x-x*cos+y*sin ]\n[ sin(theta) cos(theta) y-x*sin-y*cos ]\n[ 0 0 1 ]\n```\nRotating by a positive angle theta rotates points on the positive X axis toward the positive Y axis. Note also the discussion of Handling 90-Degree Rotations above.\n```",
null,
"public static AffineTransform getRotateInstance(double vecx,\ndouble vecy,\ndouble anchorx,\ndouble anchory) {\nAffineTransform Tx = new AffineTransform();\nTx.setToRotation(vecx, vecy, anchorx, anchory);\nreturn Tx;\n}```\nReturns a transform that rotates coordinates around an anchor point accordinate to a rotation vector. All coordinates rotate about the specified anchor coordinates by the same amount. The amount of rotation is such that coordinates along the former positive X axis will subsequently align with the vector pointing from the origin to the specified vector coordinates. If both `vecx` and `vecy` are 0.0, an identity transform is returned. This operation is equivalent to calling:\n``` AffineTransform.getRotateInstance(Math.atan2(vecy, vecx),\nanchorx, anchory);\n```\n```",
null,
"public static AffineTransform getScaleInstance(double sx,\ndouble sy) {\nAffineTransform Tx = new AffineTransform();\nTx.setToScale(sx, sy);\nreturn Tx;\n}```\nReturns a transform representing a scaling transformation. The matrix representing the returned transform is:\n``` [ sx 0 0 ]\n[ 0 sy 0 ]\n[ 0 0 1 ]\n```\n```",
null,
"public double getScaleX() {\nreturn m00;\n}```\nReturns the X coordinate scaling element (m00) of the 3x3 affine transformation matrix.\n```",
null,
"public double getScaleY() {\nreturn m11;\n}```\nReturns the Y coordinate scaling element (m11) of the 3x3 affine transformation matrix.\n```",
null,
"public static AffineTransform getShearInstance(double shx,\ndouble shy) {\nAffineTransform Tx = new AffineTransform();\nTx.setToShear(shx, shy);\nreturn Tx;\n}```\nReturns a transform representing a shearing transformation. The matrix representing the returned transform is:\n``` [ 1 shx 0 ]\n[ shy 1 0 ]\n[ 0 0 1 ]\n```\n```",
null,
"public double getShearX() {\nreturn m01;\n}```\nReturns the X coordinate shearing element (m01) of the 3x3 affine transformation matrix.\n```",
null,
"public double getShearY() {\nreturn m10;\n}```\nReturns the Y coordinate shearing element (m10) of the 3x3 affine transformation matrix.\n```",
null,
"public static AffineTransform getTranslateInstance(double tx,\ndouble ty) {\nAffineTransform Tx = new AffineTransform();\nTx.setToTranslation(tx, ty);\nreturn Tx;\n}```\nReturns a transform representing a translation transformation. The matrix representing the returned transform is:\n``` [ 1 0 tx ]\n[ 0 1 ty ]\n[ 0 0 1 ]\n```\n```",
null,
"public double getTranslateX() {\nreturn m02;\n}```\nReturns the X coordinate of the translation element (m02) of the 3x3 affine transformation matrix.\n```",
null,
"public double getTranslateY() {\nreturn m12;\n}```\nReturns the Y coordinate of the translation element (m12) of the 3x3 affine transformation matrix.\n```",
null,
"public int getType() {\nif (type == TYPE_UNKNOWN) {\ncalculateType();\n}\nreturn type;\n}```\nRetrieves the flag bits describing the conversion properties of this transform. The return value is either one of the constants TYPE_IDENTITY or TYPE_GENERAL_TRANSFORM, or a combination of the appriopriate flag bits. A valid combination of flag bits is an exclusive OR operation that can combine the TYPE_TRANSLATION flag bit in addition to either of the TYPE_UNIFORM_SCALE or TYPE_GENERAL_SCALE flag bits as well as either of the TYPE_QUADRANT_ROTATION or TYPE_GENERAL_ROTATION flag bits.\n```",
null,
"public int hashCode() {\nlong bits = Double.doubleToLongBits(m00);\nbits = bits * 31 + Double.doubleToLongBits(m01);\nbits = bits * 31 + Double.doubleToLongBits(m02);\nbits = bits * 31 + Double.doubleToLongBits(m10);\nbits = bits * 31 + Double.doubleToLongBits(m11);\nbits = bits * 31 + Double.doubleToLongBits(m12);\nreturn (((int) bits) ^ ((int) (bits > > 32)));\n}```\nReturns the hashcode for this transform.\n```",
null,
"public Point2D inverseTransform(Point2D ptSrc,\nPoint2D ptDst) throws NoninvertibleTransformException {\nif (ptDst == null) {\nif (ptSrc instanceof Point2D.Double) {\nptDst = new Point2D.Double();\n} else {\nptDst = new Point2D.Float();\n}\n}\n// Copy source coords into local variables in case src == dst\ndouble x = ptSrc.getX();\ndouble y = ptSrc.getY();\nswitch (state) {\ndefault:\nstateError();\n/* NOTREACHED */\ncase (APPLY_SHEAR | APPLY_SCALE | APPLY_TRANSLATE):\nx -= m02;\ny -= m12;\n/* NOBREAK */\ncase (APPLY_SHEAR | APPLY_SCALE):\ndouble det = m00 * m11 - m01 * m10;\nif (Math.abs(det) < = Double.MIN_VALUE) {\nthrow new NoninvertibleTransformException(\"Determinant is \"+\ndet);\n}\nptDst.setLocation((x * m11 - y * m01) / det,\n(y * m00 - x * m10) / det);\nreturn ptDst;\ncase (APPLY_SHEAR | APPLY_TRANSLATE):\nx -= m02;\ny -= m12;\n/* NOBREAK */\ncase (APPLY_SHEAR):\nif (m01 == 0.0 || m10 == 0.0) {\nthrow new NoninvertibleTransformException(\"Determinant is 0\");\n}\nptDst.setLocation(y / m10, x / m01);\nreturn ptDst;\ncase (APPLY_SCALE | APPLY_TRANSLATE):\nx -= m02;\ny -= m12;\n/* NOBREAK */\ncase (APPLY_SCALE):\nif (m00 == 0.0 || m11 == 0.0) {\nthrow new NoninvertibleTransformException(\"Determinant is 0\");\n}\nptDst.setLocation(x / m00, y / m11);\nreturn ptDst;\ncase (APPLY_TRANSLATE):\nptDst.setLocation(x - m02, y - m12);\nreturn ptDst;\ncase (APPLY_IDENTITY):\nptDst.setLocation(x, y);\nreturn ptDst;\n}\n/* NOTREACHED */\n}```\nInverse transforms the specified `ptSrc` and stores the result in `ptDst`. If `ptDst` is `null`, a new `Point2D` object is allocated and then the result of the transform is stored in this object. In either case, `ptDst`, which contains the transformed point, is returned for convenience. If `ptSrc` and `ptDst` are the same object, the input point is correctly overwritten with the transformed point.\n```",
null,
"public void inverseTransform(double[] srcPts,\nint srcOff,\ndouble[] dstPts,\nint dstOff,\nint numPts) throws NoninvertibleTransformException {\ndouble M00, M01, M02, M10, M11, M12; // For caching\ndouble det;\nif (dstPts == srcPts &&\ndstOff > srcOff && dstOff < srcOff + numPts * 2)\n{\n// If the arrays overlap partially with the destination higher\n// than the source and we transform the coordinates normally\n// we would overwrite some of the later source coordinates\n// with results of previous transformations.\n// To get around this we use arraycopy to copy the points\n// to their final destination with correct overwrite\n// handling and then transform them in place in the new\n// safer location.\nSystem.arraycopy(srcPts, srcOff, dstPts, dstOff, numPts * 2);\n// srcPts = dstPts; // They are known to be equal.\nsrcOff = dstOff;\n}\nswitch (state) {\ndefault:\nstateError();\n/* NOTREACHED */\ncase (APPLY_SHEAR | APPLY_SCALE | APPLY_TRANSLATE):\nM00 = m00; M01 = m01; M02 = m02;\nM10 = m10; M11 = m11; M12 = m12;\ndet = M00 * M11 - M01 * M10;\nif (Math.abs(det) < = Double.MIN_VALUE) {\nthrow new NoninvertibleTransformException(\"Determinant is \"+\ndet);\n}\nwhile (--numPts >= 0) {\ndouble x = srcPts[srcOff++] - M02;\ndouble y = srcPts[srcOff++] - M12;\ndstPts[dstOff++] = (x * M11 - y * M01) / det;\ndstPts[dstOff++] = (y * M00 - x * M10) / det;\n}\nreturn;\ncase (APPLY_SHEAR | APPLY_SCALE):\nM00 = m00; M01 = m01;\nM10 = m10; M11 = m11;\ndet = M00 * M11 - M01 * M10;\nif (Math.abs(det) < = Double.MIN_VALUE) {\nthrow new NoninvertibleTransformException(\"Determinant is \"+\ndet);\n}\nwhile (--numPts >= 0) {\ndouble x = srcPts[srcOff++];\ndouble y = srcPts[srcOff++];\ndstPts[dstOff++] = (x * M11 - y * M01) / det;\ndstPts[dstOff++] = (y * M00 - x * M10) / det;\n}\nreturn;\ncase (APPLY_SHEAR | APPLY_TRANSLATE):\nM01 = m01; M02 = m02;\nM10 = m10; M12 = m12;\nif (M01 == 0.0 || M10 == 0.0) {\nthrow new NoninvertibleTransformException(\"Determinant is 0\");\n}\nwhile (--numPts >= 0) {\ndouble x = srcPts[srcOff++] - M02;\ndstPts[dstOff++] = (srcPts[srcOff++] - M12) / M10;\ndstPts[dstOff++] = x / M01;\n}\nreturn;\ncase (APPLY_SHEAR):\nM01 = m01; M10 = m10;\nif (M01 == 0.0 || M10 == 0.0) {\nthrow new NoninvertibleTransformException(\"Determinant is 0\");\n}\nwhile (--numPts >= 0) {\ndouble x = srcPts[srcOff++];\ndstPts[dstOff++] = srcPts[srcOff++] / M10;\ndstPts[dstOff++] = x / M01;\n}\nreturn;\ncase (APPLY_SCALE | APPLY_TRANSLATE):\nM00 = m00; M02 = m02;\nM11 = m11; M12 = m12;\nif (M00 == 0.0 || M11 == 0.0) {\nthrow new NoninvertibleTransformException(\"Determinant is 0\");\n}\nwhile (--numPts >= 0) {\ndstPts[dstOff++] = (srcPts[srcOff++] - M02) / M00;\ndstPts[dstOff++] = (srcPts[srcOff++] - M12) / M11;\n}\nreturn;\ncase (APPLY_SCALE):\nM00 = m00; M11 = m11;\nif (M00 == 0.0 || M11 == 0.0) {\nthrow new NoninvertibleTransformException(\"Determinant is 0\");\n}\nwhile (--numPts >= 0) {\ndstPts[dstOff++] = srcPts[srcOff++] / M00;\ndstPts[dstOff++] = srcPts[srcOff++] / M11;\n}\nreturn;\ncase (APPLY_TRANSLATE):\nM02 = m02; M12 = m12;\nwhile (--numPts >= 0) {\ndstPts[dstOff++] = srcPts[srcOff++] - M02;\ndstPts[dstOff++] = srcPts[srcOff++] - M12;\n}\nreturn;\ncase (APPLY_IDENTITY):\nif (srcPts != dstPts || srcOff != dstOff) {\nSystem.arraycopy(srcPts, srcOff, dstPts, dstOff,\nnumPts * 2);\n}\nreturn;\n}\n/* NOTREACHED */\n}```\nInverse transforms an array of double precision coordinates by this transform. The two coordinate array sections can be exactly the same or can be overlapping sections of the same array without affecting the validity of the results. This method ensures that no source coordinates are overwritten by a previous operation before they can be transformed. The coordinates are stored in the arrays starting at the specified offset in the order `[x0, y0, x1, y1, ..., xn, yn]`.\n```",
null,
"public void invert() throws NoninvertibleTransformException {\ndouble M00, M01, M02;\ndouble M10, M11, M12;\ndouble det;\nswitch (state) {\ndefault:\nstateError();\n/* NOTREACHED */\ncase (APPLY_SHEAR | APPLY_SCALE | APPLY_TRANSLATE):\nM00 = m00; M01 = m01; M02 = m02;\nM10 = m10; M11 = m11; M12 = m12;\ndet = M00 * M11 - M01 * M10;\nif (Math.abs(det) < = Double.MIN_VALUE) {\nthrow new NoninvertibleTransformException(\"Determinant is \"+\ndet);\n}\nm00 = M11 / det;\nm10 = -M10 / det;\nm01 = -M01 / det;\nm11 = M00 / det;\nm02 = (M01 * M12 - M11 * M02) / det;\nm12 = (M10 * M02 - M00 * M12) / det;\nbreak;\ncase (APPLY_SHEAR | APPLY_SCALE):\nM00 = m00; M01 = m01;\nM10 = m10; M11 = m11;\ndet = M00 * M11 - M01 * M10;\nif (Math.abs(det) < = Double.MIN_VALUE) {\nthrow new NoninvertibleTransformException(\"Determinant is \"+\ndet);\n}\nm00 = M11 / det;\nm10 = -M10 / det;\nm01 = -M01 / det;\nm11 = M00 / det;\n// m02 = 0.0;\n// m12 = 0.0;\nbreak;\ncase (APPLY_SHEAR | APPLY_TRANSLATE):\nM01 = m01; M02 = m02;\nM10 = m10; M12 = m12;\nif (M01 == 0.0 || M10 == 0.0) {\nthrow new NoninvertibleTransformException(\"Determinant is 0\");\n}\n// m00 = 0.0;\nm10 = 1.0 / M01;\nm01 = 1.0 / M10;\n// m11 = 0.0;\nm02 = -M12 / M10;\nm12 = -M02 / M01;\nbreak;\ncase (APPLY_SHEAR):\nM01 = m01;\nM10 = m10;\nif (M01 == 0.0 || M10 == 0.0) {\nthrow new NoninvertibleTransformException(\"Determinant is 0\");\n}\n// m00 = 0.0;\nm10 = 1.0 / M01;\nm01 = 1.0 / M10;\n// m11 = 0.0;\n// m02 = 0.0;\n// m12 = 0.0;\nbreak;\ncase (APPLY_SCALE | APPLY_TRANSLATE):\nM00 = m00; M02 = m02;\nM11 = m11; M12 = m12;\nif (M00 == 0.0 || M11 == 0.0) {\nthrow new NoninvertibleTransformException(\"Determinant is 0\");\n}\nm00 = 1.0 / M00;\n// m10 = 0.0;\n// m01 = 0.0;\nm11 = 1.0 / M11;\nm02 = -M02 / M00;\nm12 = -M12 / M11;\nbreak;\ncase (APPLY_SCALE):\nM00 = m00;\nM11 = m11;\nif (M00 == 0.0 || M11 == 0.0) {\nthrow new NoninvertibleTransformException(\"Determinant is 0\");\n}\nm00 = 1.0 / M00;\n// m10 = 0.0;\n// m01 = 0.0;\nm11 = 1.0 / M11;\n// m02 = 0.0;\n// m12 = 0.0;\nbreak;\ncase (APPLY_TRANSLATE):\n// m00 = 1.0;\n// m10 = 0.0;\n// m01 = 0.0;\n// m11 = 1.0;\nm02 = -m02;\nm12 = -m12;\nbreak;\ncase (APPLY_IDENTITY):\n// m00 = 1.0;\n// m10 = 0.0;\n// m01 = 0.0;\n// m11 = 1.0;\n// m02 = 0.0;\n// m12 = 0.0;\nbreak;\n}\n}```\nSets this transform to the inverse of itself. The inverse transform Tx' of this transform Tx maps coordinates transformed by Tx back to their original coordinates. In other words, Tx'(Tx(p)) = p = Tx(Tx'(p)).\n\nIf this transform maps all coordinates onto a point or a line then it will not have an inverse, since coordinates that do not lie on the destination point or line will not have an inverse mapping. The `getDeterminant` method can be used to determine if this transform has no inverse, in which case an exception will be thrown if the `invert` method is called.\n\n```",
null,
"public boolean isIdentity() {\nreturn (state == APPLY_IDENTITY || (getType() == TYPE_IDENTITY));\n}```\nReturns `true` if this `AffineTransform` is an identity transform.\n```",
null,
"public void preConcatenate(AffineTransform Tx) {\ndouble M0, M1;\ndouble T00, T01, T10, T11;\ndouble T02, T12;\nint mystate = state;\nint txstate = Tx.state;\nswitch ((txstate < < HI_SHIFT) | mystate) {\ncase (HI_IDENTITY | APPLY_IDENTITY):\ncase (HI_IDENTITY | APPLY_TRANSLATE):\ncase (HI_IDENTITY | APPLY_SCALE):\ncase (HI_IDENTITY | APPLY_SCALE | APPLY_TRANSLATE):\ncase (HI_IDENTITY | APPLY_SHEAR):\ncase (HI_IDENTITY | APPLY_SHEAR | APPLY_TRANSLATE):\ncase (HI_IDENTITY | APPLY_SHEAR | APPLY_SCALE):\ncase (HI_IDENTITY | APPLY_SHEAR | APPLY_SCALE | APPLY_TRANSLATE):\n// Tx is IDENTITY...\nreturn;\ncase (HI_TRANSLATE | APPLY_IDENTITY):\ncase (HI_TRANSLATE | APPLY_SCALE):\ncase (HI_TRANSLATE | APPLY_SHEAR):\ncase (HI_TRANSLATE | APPLY_SHEAR | APPLY_SCALE):\n// Tx is TRANSLATE, this has no TRANSLATE\nm02 = Tx.m02;\nm12 = Tx.m12;\nstate = mystate | APPLY_TRANSLATE;\ntype |= TYPE_TRANSLATION;\nreturn;\ncase (HI_TRANSLATE | APPLY_TRANSLATE):\ncase (HI_TRANSLATE | APPLY_SCALE | APPLY_TRANSLATE):\ncase (HI_TRANSLATE | APPLY_SHEAR | APPLY_TRANSLATE):\ncase (HI_TRANSLATE | APPLY_SHEAR | APPLY_SCALE | APPLY_TRANSLATE):\n// Tx is TRANSLATE, this has one too\nm02 = m02 + Tx.m02;\nm12 = m12 + Tx.m12;\nreturn;\ncase (HI_SCALE | APPLY_TRANSLATE):\ncase (HI_SCALE | APPLY_IDENTITY):\n// Only these two existing states need a new state\nstate = mystate | APPLY_SCALE;\n/* NOBREAK */\ncase (HI_SCALE | APPLY_SHEAR | APPLY_SCALE | APPLY_TRANSLATE):\ncase (HI_SCALE | APPLY_SHEAR | APPLY_SCALE):\ncase (HI_SCALE | APPLY_SHEAR | APPLY_TRANSLATE):\ncase (HI_SCALE | APPLY_SHEAR):\ncase (HI_SCALE | APPLY_SCALE | APPLY_TRANSLATE):\ncase (HI_SCALE | APPLY_SCALE):\n// Tx is SCALE, this is anything\nT00 = Tx.m00;\nT11 = Tx.m11;\nif ((mystate & APPLY_SHEAR) != 0) {\nm01 = m01 * T00;\nm10 = m10 * T11;\nif ((mystate & APPLY_SCALE) != 0) {\nm00 = m00 * T00;\nm11 = m11 * T11;\n}\n} else {\nm00 = m00 * T00;\nm11 = m11 * T11;\n}\nif ((mystate & APPLY_TRANSLATE) != 0) {\nm02 = m02 * T00;\nm12 = m12 * T11;\n}\ntype = TYPE_UNKNOWN;\nreturn;\ncase (HI_SHEAR | APPLY_SHEAR | APPLY_TRANSLATE):\ncase (HI_SHEAR | APPLY_SHEAR):\nmystate = mystate | APPLY_SCALE;\n/* NOBREAK */\ncase (HI_SHEAR | APPLY_TRANSLATE):\ncase (HI_SHEAR | APPLY_IDENTITY):\ncase (HI_SHEAR | APPLY_SCALE | APPLY_TRANSLATE):\ncase (HI_SHEAR | APPLY_SCALE):\nstate = mystate ^ APPLY_SHEAR;\n/* NOBREAK */\ncase (HI_SHEAR | APPLY_SHEAR | APPLY_SCALE | APPLY_TRANSLATE):\ncase (HI_SHEAR | APPLY_SHEAR | APPLY_SCALE):\n// Tx is SHEAR, this is anything\nT01 = Tx.m01;\nT10 = Tx.m10;\nM0 = m00;\nm00 = m10 * T01;\nm10 = M0 * T10;\nM0 = m01;\nm01 = m11 * T01;\nm11 = M0 * T10;\nM0 = m02;\nm02 = m12 * T01;\nm12 = M0 * T10;\ntype = TYPE_UNKNOWN;\nreturn;\n}\n// If Tx has more than one attribute, it is not worth optimizing\n// all of those cases...\nT00 = Tx.m00; T01 = Tx.m01; T02 = Tx.m02;\nT10 = Tx.m10; T11 = Tx.m11; T12 = Tx.m12;\nswitch (mystate) {\ndefault:\nstateError();\n/* NOTREACHED */\ncase (APPLY_SHEAR | APPLY_SCALE | APPLY_TRANSLATE):\nM0 = m02;\nM1 = m12;\nT02 += M0 * T00 + M1 * T01;\nT12 += M0 * T10 + M1 * T11;\n/* NOBREAK */\ncase (APPLY_SHEAR | APPLY_SCALE):\nm02 = T02;\nm12 = T12;\nM0 = m00;\nM1 = m10;\nm00 = M0 * T00 + M1 * T01;\nm10 = M0 * T10 + M1 * T11;\nM0 = m01;\nM1 = m11;\nm01 = M0 * T00 + M1 * T01;\nm11 = M0 * T10 + M1 * T11;\nbreak;\ncase (APPLY_SHEAR | APPLY_TRANSLATE):\nM0 = m02;\nM1 = m12;\nT02 += M0 * T00 + M1 * T01;\nT12 += M0 * T10 + M1 * T11;\n/* NOBREAK */\ncase (APPLY_SHEAR):\nm02 = T02;\nm12 = T12;\nM0 = m10;\nm00 = M0 * T01;\nm10 = M0 * T11;\nM0 = m01;\nm01 = M0 * T00;\nm11 = M0 * T10;\nbreak;\ncase (APPLY_SCALE | APPLY_TRANSLATE):\nM0 = m02;\nM1 = m12;\nT02 += M0 * T00 + M1 * T01;\nT12 += M0 * T10 + M1 * T11;\n/* NOBREAK */\ncase (APPLY_SCALE):\nm02 = T02;\nm12 = T12;\nM0 = m00;\nm00 = M0 * T00;\nm10 = M0 * T10;\nM0 = m11;\nm01 = M0 * T01;\nm11 = M0 * T11;\nbreak;\ncase (APPLY_TRANSLATE):\nM0 = m02;\nM1 = m12;\nT02 += M0 * T00 + M1 * T01;\nT12 += M0 * T10 + M1 * T11;\n/* NOBREAK */\ncase (APPLY_IDENTITY):\nm02 = T02;\nm12 = T12;\nm00 = T00;\nm10 = T10;\nm01 = T01;\nm11 = T11;\nstate = mystate | txstate;\ntype = TYPE_UNKNOWN;\nreturn;\n}\n}```\nConcatenates an `AffineTransform` `Tx` to this `AffineTransform` Cx in a less commonly used way such that `Tx` modifies the coordinate transformation relative to the absolute pixel space rather than relative to the existing user space. Cx is updated to perform the combined transformation. Transforming a point p by the updated transform Cx' is equivalent to first transforming p by the original transform Cx and then transforming the result by `Tx` like this: Cx'(p) = Tx(Cx(p)) In matrix notation, if this transform Cx is represented by the matrix [this] and `Tx` is represented by the matrix [Tx] then this method does the following:\n``` [this] = [Tx] x [this]\n```\n```",
null,
"public void quadrantRotate(int numquadrants) {\ncase 0:\nbreak;\ncase 1:\nrotate90();\nbreak;\ncase 2:\nrotate180();\nbreak;\ncase 3:\nrotate270();\nbreak;\n}\n}```\nConcatenates this transform with a transform that rotates coordinates by the specified number of quadrants. This is equivalent to calling:\n``` rotate(numquadrants * Math.PI / 2.0);\n```\nRotating by a positive number of quadrants rotates points on the positive X axis toward the positive Y axis.\n```",
null,
"public void quadrantRotate(int numquadrants,\ndouble anchorx,\ndouble anchory) {\ncase 0:\nreturn;\ncase 1:\nm02 += anchorx * (m00 - m01) + anchory * (m01 + m00);\nm12 += anchorx * (m10 - m11) + anchory * (m11 + m10);\nrotate90();\nbreak;\ncase 2:\nm02 += anchorx * (m00 + m00) + anchory * (m01 + m01);\nm12 += anchorx * (m10 + m10) + anchory * (m11 + m11);\nrotate180();\nbreak;\ncase 3:\nm02 += anchorx * (m00 + m01) + anchory * (m01 - m00);\nm12 += anchorx * (m10 + m11) + anchory * (m11 - m10);\nrotate270();\nbreak;\n}\nif (m02 == 0.0 && m12 == 0.0) {\nstate &= ~APPLY_TRANSLATE;\n} else {\nstate |= APPLY_TRANSLATE;\n}\n}```\nConcatenates this transform with a transform that rotates coordinates by the specified number of quadrants around the specified anchor point. This method is equivalent to calling:\n``` rotate(numquadrants * Math.PI / 2.0, anchorx, anchory);\n```\nRotating by a positive number of quadrants rotates points on the positive X axis toward the positive Y axis.\n```",
null,
"public void rotate(double theta) {\ndouble sin = Math.sin(theta);\nif (sin == 1.0) {\nrotate90();\n} else if (sin == -1.0) {\nrotate270();\n} else {\ndouble cos = Math.cos(theta);\nif (cos == -1.0) {\nrotate180();\n} else if (cos != 1.0) {\ndouble M0, M1;\nM0 = m00;\nM1 = m01;\nm00 = cos * M0 + sin * M1;\nm01 = -sin * M0 + cos * M1;\nM0 = m10;\nM1 = m11;\nm10 = cos * M0 + sin * M1;\nm11 = -sin * M0 + cos * M1;\n}\n}\n}```\nConcatenates this transform with a rotation transformation. This is equivalent to calling concatenate(R), where R is an `AffineTransform` represented by the following matrix:\n``` [ cos(theta) -sin(theta) 0 ]\n[ sin(theta) cos(theta) 0 ]\n[ 0 0 1 ]\n```\nRotating by a positive angle theta rotates points on the positive X axis toward the positive Y axis. Note also the discussion of Handling 90-Degree Rotations above.\n```",
null,
"public void rotate(double vecx,\ndouble vecy) {\nif (vecy == 0.0) {\nif (vecx < 0.0) {\nrotate180();\n}\n// If vecx > 0.0 - no rotation\n// If vecx == 0.0 - undefined rotation - treat as no rotation\n} else if (vecx == 0.0) {\nif (vecy > 0.0) {\nrotate90();\n} else { // vecy must be < 0.0\nrotate270();\n}\n} else {\ndouble len = Math.sqrt(vecx * vecx + vecy * vecy);\ndouble sin = vecy / len;\ndouble cos = vecx / len;\ndouble M0, M1;\nM0 = m00;\nM1 = m01;\nm00 = cos * M0 + sin * M1;\nm01 = -sin * M0 + cos * M1;\nM0 = m10;\nM1 = m11;\nm10 = cos * M0 + sin * M1;\nm11 = -sin * M0 + cos * M1;\n}\n}```\nConcatenates this transform with a transform that rotates coordinates according to a rotation vector. All coordinates rotate about the origin by the same amount. The amount of rotation is such that coordinates along the former positive X axis will subsequently align with the vector pointing from the origin to the specified vector coordinates. If both `vecx` and `vecy` are 0.0, no additional rotation is added to this transform. This operation is equivalent to calling:\n``` rotate(Math.atan2(vecy, vecx));\n```\n```",
null,
"public void rotate(double theta,\ndouble anchorx,\ndouble anchory) {\n// REMIND: Simple for now - optimize later\ntranslate(anchorx, anchory);\nrotate(theta);\ntranslate(-anchorx, -anchory);\n}```\nConcatenates this transform with a transform that rotates coordinates around an anchor point. This operation is equivalent to translating the coordinates so that the anchor point is at the origin (S1), then rotating them about the new origin (S2), and finally translating so that the intermediate origin is restored to the coordinates of the original anchor point (S3).\n\nThis operation is equivalent to the following sequence of calls:\n\n``` translate(anchorx, anchory); // S3: final translation\nrotate(theta); // S2: rotate around anchor\ntranslate(-anchorx, -anchory); // S1: translate anchor to origin\n```\nRotating by a positive angle theta rotates points on the positive X axis toward the positive Y axis. Note also the discussion of Handling 90-Degree Rotations above.\n```",
null,
"public void rotate(double vecx,\ndouble vecy,\ndouble anchorx,\ndouble anchory) {\n// REMIND: Simple for now - optimize later\ntranslate(anchorx, anchory);\nrotate(vecx, vecy);\ntranslate(-anchorx, -anchory);\n}```\nConcatenates this transform with a transform that rotates coordinates around an anchor point according to a rotation vector. All coordinates rotate about the specified anchor coordinates by the same amount. The amount of rotation is such that coordinates along the former positive X axis will subsequently align with the vector pointing from the origin to the specified vector coordinates. If both `vecx` and `vecy` are 0.0, the transform is not modified in any way. This method is equivalent to calling:\n``` rotate(Math.atan2(vecy, vecx), anchorx, anchory);\n```\n```",
null,
"public void scale(double sx,\ndouble sy) {\nint state = this.state;\nswitch (state) {\ndefault:\nstateError();\n/* NOTREACHED */\ncase (APPLY_SHEAR | APPLY_SCALE | APPLY_TRANSLATE):\ncase (APPLY_SHEAR | APPLY_SCALE):\nm00 *= sx;\nm11 *= sy;\n/* NOBREAK */\ncase (APPLY_SHEAR | APPLY_TRANSLATE):\ncase (APPLY_SHEAR):\nm01 *= sy;\nm10 *= sx;\nif (m01 == 0 && m10 == 0) {\nstate &= APPLY_TRANSLATE;\nif (m00 == 1.0 && m11 == 1.0) {\nthis.type = (state == APPLY_IDENTITY\n? TYPE_IDENTITY\n: TYPE_TRANSLATION);\n} else {\nstate |= APPLY_SCALE;\nthis.type = TYPE_UNKNOWN;\n}\nthis.state = state;\n}\nreturn;\ncase (APPLY_SCALE | APPLY_TRANSLATE):\ncase (APPLY_SCALE):\nm00 *= sx;\nm11 *= sy;\nif (m00 == 1.0 && m11 == 1.0) {\nthis.state = (state &= APPLY_TRANSLATE);\nthis.type = (state == APPLY_IDENTITY\n? TYPE_IDENTITY\n: TYPE_TRANSLATION);\n} else {\nthis.type = TYPE_UNKNOWN;\n}\nreturn;\ncase (APPLY_TRANSLATE):\ncase (APPLY_IDENTITY):\nm00 = sx;\nm11 = sy;\nif (sx != 1.0 || sy != 1.0) {\nthis.state = state | APPLY_SCALE;\nthis.type = TYPE_UNKNOWN;\n}\nreturn;\n}\n}```\nConcatenates this transform with a scaling transformation. This is equivalent to calling concatenate(S), where S is an `AffineTransform` represented by the following matrix:\n``` [ sx 0 0 ]\n[ 0 sy 0 ]\n[ 0 0 1 ]\n```\n```",
null,
"public void setToIdentity() {\nm00 = m11 = 1.0;\nm10 = m01 = m02 = m12 = 0.0;\nstate = APPLY_IDENTITY;\ntype = TYPE_IDENTITY;\n}```\nResets this transform to the Identity transform.\n```",
null,
"public void setToQuadrantRotation(int numquadrants) {\ncase 0:\nm00 = 1.0;\nm10 = 0.0;\nm01 = 0.0;\nm11 = 1.0;\nm02 = 0.0;\nm12 = 0.0;\nstate = APPLY_IDENTITY;\ntype = TYPE_IDENTITY;\nbreak;\ncase 1:\nm00 = 0.0;\nm10 = 1.0;\nm01 = -1.0;\nm11 = 0.0;\nm02 = 0.0;\nm12 = 0.0;\nstate = APPLY_SHEAR;\nbreak;\ncase 2:\nm00 = -1.0;\nm10 = 0.0;\nm01 = 0.0;\nm11 = -1.0;\nm02 = 0.0;\nm12 = 0.0;\nstate = APPLY_SCALE;\nbreak;\ncase 3:\nm00 = 0.0;\nm10 = -1.0;\nm01 = 1.0;\nm11 = 0.0;\nm02 = 0.0;\nm12 = 0.0;\nstate = APPLY_SHEAR;\nbreak;\n}\n}```\nSets this transform to a rotation transformation that rotates coordinates by the specified number of quadrants. This operation is equivalent to calling:\n``` setToRotation(numquadrants * Math.PI / 2.0);\n```\nRotating by a positive number of quadrants rotates points on the positive X axis toward the positive Y axis.\n```",
null,
"public void setToQuadrantRotation(int numquadrants,\ndouble anchorx,\ndouble anchory) {\ncase 0:\nm00 = 1.0;\nm10 = 0.0;\nm01 = 0.0;\nm11 = 1.0;\nm02 = 0.0;\nm12 = 0.0;\nstate = APPLY_IDENTITY;\ntype = TYPE_IDENTITY;\nbreak;\ncase 1:\nm00 = 0.0;\nm10 = 1.0;\nm01 = -1.0;\nm11 = 0.0;\nm02 = anchorx + anchory;\nm12 = anchory - anchorx;\nif (m02 == 0.0 && m12 == 0.0) {\nstate = APPLY_SHEAR;\n} else {\nstate = APPLY_SHEAR | APPLY_TRANSLATE;\n}\nbreak;\ncase 2:\nm00 = -1.0;\nm10 = 0.0;\nm01 = 0.0;\nm11 = -1.0;\nm02 = anchorx + anchorx;\nm12 = anchory + anchory;\nif (m02 == 0.0 && m12 == 0.0) {\nstate = APPLY_SCALE;\n} else {\nstate = APPLY_SCALE | APPLY_TRANSLATE;\n}\nbreak;\ncase 3:\nm00 = 0.0;\nm10 = -1.0;\nm01 = 1.0;\nm11 = 0.0;\nm02 = anchorx - anchory;\nm12 = anchory + anchorx;\nif (m02 == 0.0 && m12 == 0.0) {\nstate = APPLY_SHEAR;\n} else {\nstate = APPLY_SHEAR | APPLY_TRANSLATE;\n}\nbreak;\n}\n}```\nSets this transform to a translated rotation transformation that rotates coordinates by the specified number of quadrants around the specified anchor point. This operation is equivalent to calling:\n``` setToRotation(numquadrants * Math.PI / 2.0, anchorx, anchory);\n```\nRotating by a positive number of quadrants rotates points on the positive X axis toward the positive Y axis.\n```",
null,
"public void setToRotation(double theta) {\ndouble sin = Math.sin(theta);\ndouble cos;\nif (sin == 1.0 || sin == -1.0) {\ncos = 0.0;\nstate = APPLY_SHEAR;\n} else {\ncos = Math.cos(theta);\nif (cos == -1.0) {\nsin = 0.0;\nstate = APPLY_SCALE;\n} else if (cos == 1.0) {\nsin = 0.0;\nstate = APPLY_IDENTITY;\ntype = TYPE_IDENTITY;\n} else {\nstate = APPLY_SHEAR | APPLY_SCALE;\ntype = TYPE_GENERAL_ROTATION;\n}\n}\nm00 = cos;\nm10 = sin;\nm01 = -sin;\nm11 = cos;\nm02 = 0.0;\nm12 = 0.0;\n}```\nSets this transform to a rotation transformation. The matrix representing this transform becomes:\n``` [ cos(theta) -sin(theta) 0 ]\n[ sin(theta) cos(theta) 0 ]\n[ 0 0 1 ]\n```\nRotating by a positive angle theta rotates points on the positive X axis toward the positive Y axis. Note also the discussion of Handling 90-Degree Rotations above.\n```",
null,
"public void setToRotation(double vecx,\ndouble vecy) {\ndouble sin, cos;\nif (vecy == 0) {\nsin = 0.0;\nif (vecx < 0.0) {\ncos = -1.0;\nstate = APPLY_SCALE;\n} else {\ncos = 1.0;\nstate = APPLY_IDENTITY;\ntype = TYPE_IDENTITY;\n}\n} else if (vecx == 0) {\ncos = 0.0;\nsin = (vecy > 0.0) ? 1.0 : -1.0;\nstate = APPLY_SHEAR;\n} else {\ndouble len = Math.sqrt(vecx * vecx + vecy * vecy);\ncos = vecx / len;\nsin = vecy / len;\nstate = APPLY_SHEAR | APPLY_SCALE;\ntype = TYPE_GENERAL_ROTATION;\n}\nm00 = cos;\nm10 = sin;\nm01 = -sin;\nm11 = cos;\nm02 = 0.0;\nm12 = 0.0;\n}```\nSets this transform to a rotation transformation that rotates coordinates according to a rotation vector. All coordinates rotate about the origin by the same amount. The amount of rotation is such that coordinates along the former positive X axis will subsequently align with the vector pointing from the origin to the specified vector coordinates. If both `vecx` and `vecy` are 0.0, the transform is set to an identity transform. This operation is equivalent to calling:\n``` setToRotation(Math.atan2(vecy, vecx));\n```\n```",
null,
"public void setToRotation(double theta,\ndouble anchorx,\ndouble anchory) {\nsetToRotation(theta);\ndouble sin = m10;\ndouble oneMinusCos = 1.0 - m00;\nm02 = anchorx * oneMinusCos + anchory * sin;\nm12 = anchory * oneMinusCos - anchorx * sin;\nif (m02 != 0.0 || m12 != 0.0) {\nstate |= APPLY_TRANSLATE;\ntype |= TYPE_TRANSLATION;\n}\n}```\nSets this transform to a translated rotation transformation. This operation is equivalent to translating the coordinates so that the anchor point is at the origin (S1), then rotating them about the new origin (S2), and finally translating so that the intermediate origin is restored to the coordinates of the original anchor point (S3).\n\nThis operation is equivalent to the following sequence of calls:\n\n``` setToTranslation(anchorx, anchory); // S3: final translation\nrotate(theta); // S2: rotate around anchor\ntranslate(-anchorx, -anchory); // S1: translate anchor to origin\n```\nThe matrix representing this transform becomes:\n``` [ cos(theta) -sin(theta) x-x*cos+y*sin ]\n[ sin(theta) cos(theta) y-x*sin-y*cos ]\n[ 0 0 1 ]\n```\nRotating by a positive angle theta rotates points on the positive X axis toward the positive Y axis. Note also the discussion of Handling 90-Degree Rotations above.\n```",
null,
"public void setToRotation(double vecx,\ndouble vecy,\ndouble anchorx,\ndouble anchory) {\nsetToRotation(vecx, vecy);\ndouble sin = m10;\ndouble oneMinusCos = 1.0 - m00;\nm02 = anchorx * oneMinusCos + anchory * sin;\nm12 = anchory * oneMinusCos - anchorx * sin;\nif (m02 != 0.0 || m12 != 0.0) {\nstate |= APPLY_TRANSLATE;\ntype |= TYPE_TRANSLATION;\n}\n}```\nSets this transform to a rotation transformation that rotates coordinates around an anchor point according to a rotation vector. All coordinates rotate about the specified anchor coordinates by the same amount. The amount of rotation is such that coordinates along the former positive X axis will subsequently align with the vector pointing from the origin to the specified vector coordinates. If both `vecx` and `vecy` are 0.0, the transform is set to an identity transform. This operation is equivalent to calling:\n``` setToTranslation(Math.atan2(vecy, vecx), anchorx, anchory);\n```\n```",
null,
"public void setToScale(double sx,\ndouble sy) {\nm00 = sx;\nm10 = 0.0;\nm01 = 0.0;\nm11 = sy;\nm02 = 0.0;\nm12 = 0.0;\nif (sx != 1.0 || sy != 1.0) {\nstate = APPLY_SCALE;\ntype = TYPE_UNKNOWN;\n} else {\nstate = APPLY_IDENTITY;\ntype = TYPE_IDENTITY;\n}\n}```\nSets this transform to a scaling transformation. The matrix representing this transform becomes:\n``` [ sx 0 0 ]\n[ 0 sy 0 ]\n[ 0 0 1 ]\n```\n```",
null,
"public void setToShear(double shx,\ndouble shy) {\nm00 = 1.0;\nm01 = shx;\nm10 = shy;\nm11 = 1.0;\nm02 = 0.0;\nm12 = 0.0;\nif (shx != 0.0 || shy != 0.0) {\nstate = (APPLY_SHEAR | APPLY_SCALE);\ntype = TYPE_UNKNOWN;\n} else {\nstate = APPLY_IDENTITY;\ntype = TYPE_IDENTITY;\n}\n}```\nSets this transform to a shearing transformation. The matrix representing this transform becomes:\n``` [ 1 shx 0 ]\n[ shy 1 0 ]\n[ 0 0 1 ]\n```\n```",
null,
"public void setToTranslation(double tx,\ndouble ty) {\nm00 = 1.0;\nm10 = 0.0;\nm01 = 0.0;\nm11 = 1.0;\nm02 = tx;\nm12 = ty;\nif (tx != 0.0 || ty != 0.0) {\nstate = APPLY_TRANSLATE;\ntype = TYPE_TRANSLATION;\n} else {\nstate = APPLY_IDENTITY;\ntype = TYPE_IDENTITY;\n}\n}```\nSets this transform to a translation transformation. The matrix representing this transform becomes:\n``` [ 1 0 tx ]\n[ 0 1 ty ]\n[ 0 0 1 ]\n```\n```",
null,
"public void setTransform(AffineTransform Tx) {\nthis.m00 = Tx.m00;\nthis.m10 = Tx.m10;\nthis.m01 = Tx.m01;\nthis.m11 = Tx.m11;\nthis.m02 = Tx.m02;\nthis.m12 = Tx.m12;\nthis.state = Tx.state;\nthis.type = Tx.type;\n}```\nSets this transform to a copy of the transform in the specified `AffineTransform` object.\n```",
null,
"public void setTransform(double m00,\ndouble m10,\ndouble m01,\ndouble m11,\ndouble m02,\ndouble m12) {\nthis.m00 = m00;\nthis.m10 = m10;\nthis.m01 = m01;\nthis.m11 = m11;\nthis.m02 = m02;\nthis.m12 = m12;\n}```\nSets this transform to the matrix specified by the 6 double precision values.\n```",
null,
"public void shear(double shx,\ndouble shy) {\nint state = this.state;\nswitch (state) {\ndefault:\nstateError();\n/* NOTREACHED */\ncase (APPLY_SHEAR | APPLY_SCALE | APPLY_TRANSLATE):\ncase (APPLY_SHEAR | APPLY_SCALE):\ndouble M0, M1;\nM0 = m00;\nM1 = m01;\nm00 = M0 + M1 * shy;\nm01 = M0 * shx + M1;\nM0 = m10;\nM1 = m11;\nm10 = M0 + M1 * shy;\nm11 = M0 * shx + M1;\nreturn;\ncase (APPLY_SHEAR | APPLY_TRANSLATE):\ncase (APPLY_SHEAR):\nm00 = m01 * shy;\nm11 = m10 * shx;\nif (m00 != 0.0 || m11 != 0.0) {\nthis.state = state | APPLY_SCALE;\n}\nthis.type = TYPE_UNKNOWN;\nreturn;\ncase (APPLY_SCALE | APPLY_TRANSLATE):\ncase (APPLY_SCALE):\nm01 = m00 * shx;\nm10 = m11 * shy;\nif (m01 != 0.0 || m10 != 0.0) {\nthis.state = state | APPLY_SHEAR;\n}\nthis.type = TYPE_UNKNOWN;\nreturn;\ncase (APPLY_TRANSLATE):\ncase (APPLY_IDENTITY):\nm01 = shx;\nm10 = shy;\nif (m01 != 0.0 || m10 != 0.0) {\nthis.state = state | APPLY_SCALE | APPLY_SHEAR;\nthis.type = TYPE_UNKNOWN;\n}\nreturn;\n}\n}```\nConcatenates this transform with a shearing transformation. This is equivalent to calling concatenate(SH), where SH is an `AffineTransform` represented by the following matrix:\n``` [ 1 shx 0 ]\n[ shy 1 0 ]\n[ 0 0 1 ]\n```\n```",
null,
"public String toString() {\nreturn (\"AffineTransform[[\"\n+ _matround(m00) + \", \"\n+ _matround(m01) + \", \"\n+ _matround(m02) + \"], [\"\n+ _matround(m10) + \", \"\n+ _matround(m11) + \", \"\n+ _matround(m12) + \"]]\");\n}```\nReturns a `String` that represents the value of this Object .\n```",
null,
"public Point2D transform(Point2D ptSrc,\nPoint2D ptDst) {\nif (ptDst == null) {\nif (ptSrc instanceof Point2D.Double) {\nptDst = new Point2D.Double();\n} else {\nptDst = new Point2D.Float();\n}\n}\n// Copy source coords into local variables in case src == dst\ndouble x = ptSrc.getX();\ndouble y = ptSrc.getY();\nswitch (state) {\ndefault:\nstateError();\n/* NOTREACHED */\ncase (APPLY_SHEAR | APPLY_SCALE | APPLY_TRANSLATE):\nptDst.setLocation(x * m00 + y * m01 + m02,\nx * m10 + y * m11 + m12);\nreturn ptDst;\ncase (APPLY_SHEAR | APPLY_SCALE):\nptDst.setLocation(x * m00 + y * m01, x * m10 + y * m11);\nreturn ptDst;\ncase (APPLY_SHEAR | APPLY_TRANSLATE):\nptDst.setLocation(y * m01 + m02, x * m10 + m12);\nreturn ptDst;\ncase (APPLY_SHEAR):\nptDst.setLocation(y * m01, x * m10);\nreturn ptDst;\ncase (APPLY_SCALE | APPLY_TRANSLATE):\nptDst.setLocation(x * m00 + m02, y * m11 + m12);\nreturn ptDst;\ncase (APPLY_SCALE):\nptDst.setLocation(x * m00, y * m11);\nreturn ptDst;\ncase (APPLY_TRANSLATE):\nptDst.setLocation(x + m02, y + m12);\nreturn ptDst;\ncase (APPLY_IDENTITY):\nptDst.setLocation(x, y);\nreturn ptDst;\n}\n/* NOTREACHED */\n}```\nTransforms the specified `ptSrc` and stores the result in `ptDst`. If `ptDst` is `null`, a new Point2D object is allocated and then the result of the transformation is stored in this object. In either case, `ptDst`, which contains the transformed point, is returned for convenience. If `ptSrc` and `ptDst` are the same object, the input point is correctly overwritten with the transformed point.\n```",
null,
"public void transform(Point2D[] ptSrc,\nint srcOff,\nPoint2D[] ptDst,\nint dstOff,\nint numPts) {\nint state = this.state;\nwhile (--numPts >= 0) {\n// Copy source coords into local variables in case src == dst\nPoint2D src = ptSrc[srcOff++];\ndouble x = src.getX();\ndouble y = src.getY();\nPoint2D dst = ptDst[dstOff++];\nif (dst == null) {\nif (src instanceof Point2D.Double) {\ndst = new Point2D.Double();\n} else {\ndst = new Point2D.Float();\n}\nptDst[dstOff - 1] = dst;\n}\nswitch (state) {\ndefault:\nstateError();\n/* NOTREACHED */\ncase (APPLY_SHEAR | APPLY_SCALE | APPLY_TRANSLATE):\ndst.setLocation(x * m00 + y * m01 + m02,\nx * m10 + y * m11 + m12);\nbreak;\ncase (APPLY_SHEAR | APPLY_SCALE):\ndst.setLocation(x * m00 + y * m01, x * m10 + y * m11);\nbreak;\ncase (APPLY_SHEAR | APPLY_TRANSLATE):\ndst.setLocation(y * m01 + m02, x * m10 + m12);\nbreak;\ncase (APPLY_SHEAR):\ndst.setLocation(y * m01, x * m10);\nbreak;\ncase (APPLY_SCALE | APPLY_TRANSLATE):\ndst.setLocation(x * m00 + m02, y * m11 + m12);\nbreak;\ncase (APPLY_SCALE):\ndst.setLocation(x * m00, y * m11);\nbreak;\ncase (APPLY_TRANSLATE):\ndst.setLocation(x + m02, y + m12);\nbreak;\ncase (APPLY_IDENTITY):\ndst.setLocation(x, y);\nbreak;\n}\n}\n/* NOTREACHED */\n}```\nTransforms an array of point objects by this transform. If any element of the `ptDst` array is `null`, a new `Point2D` object is allocated and stored into that element before storing the results of the transformation.\n\nNote that this method does not take any precautions to avoid problems caused by storing results into `Point2D` objects that will be used as the source for calculations further down the source array. This method does guarantee that if a specified `Point2D` object is both the source and destination for the same single point transform operation then the results will not be stored until the calculations are complete to avoid storing the results on top of the operands. If, however, the destination `Point2D` object for one operation is the same object as the source `Point2D` object for another operation further down the source array then the original coordinates in that point are overwritten before they can be converted.\n\n```",
null,
"public void transform(float[] srcPts,\nint srcOff,\nfloat[] dstPts,\nint dstOff,\nint numPts) {\ndouble M00, M01, M02, M10, M11, M12; // For caching\nif (dstPts == srcPts &&\ndstOff > srcOff && dstOff < srcOff + numPts * 2)\n{\n// If the arrays overlap partially with the destination higher\n// than the source and we transform the coordinates normally\n// we would overwrite some of the later source coordinates\n// with results of previous transformations.\n// To get around this we use arraycopy to copy the points\n// to their final destination with correct overwrite\n// handling and then transform them in place in the new\n// safer location.\nSystem.arraycopy(srcPts, srcOff, dstPts, dstOff, numPts * 2);\n// srcPts = dstPts; // They are known to be equal.\nsrcOff = dstOff;\n}\nswitch (state) {\ndefault:\nstateError();\n/* NOTREACHED */\ncase (APPLY_SHEAR | APPLY_SCALE | APPLY_TRANSLATE):\nM00 = m00; M01 = m01; M02 = m02;\nM10 = m10; M11 = m11; M12 = m12;\nwhile (--numPts >= 0) {\ndouble x = srcPts[srcOff++];\ndouble y = srcPts[srcOff++];\ndstPts[dstOff++] = (float) (M00 * x + M01 * y + M02);\ndstPts[dstOff++] = (float) (M10 * x + M11 * y + M12);\n}\nreturn;\ncase (APPLY_SHEAR | APPLY_SCALE):\nM00 = m00; M01 = m01;\nM10 = m10; M11 = m11;\nwhile (--numPts >= 0) {\ndouble x = srcPts[srcOff++];\ndouble y = srcPts[srcOff++];\ndstPts[dstOff++] = (float) (M00 * x + M01 * y);\ndstPts[dstOff++] = (float) (M10 * x + M11 * y);\n}\nreturn;\ncase (APPLY_SHEAR | APPLY_TRANSLATE):\nM01 = m01; M02 = m02;\nM10 = m10; M12 = m12;\nwhile (--numPts >= 0) {\ndouble x = srcPts[srcOff++];\ndstPts[dstOff++] = (float) (M01 * srcPts[srcOff++] + M02);\ndstPts[dstOff++] = (float) (M10 * x + M12);\n}\nreturn;\ncase (APPLY_SHEAR):\nM01 = m01; M10 = m10;\nwhile (--numPts >= 0) {\ndouble x = srcPts[srcOff++];\ndstPts[dstOff++] = (float) (M01 * srcPts[srcOff++]);\ndstPts[dstOff++] = (float) (M10 * x);\n}\nreturn;\ncase (APPLY_SCALE | APPLY_TRANSLATE):\nM00 = m00; M02 = m02;\nM11 = m11; M12 = m12;\nwhile (--numPts >= 0) {\ndstPts[dstOff++] = (float) (M00 * srcPts[srcOff++] + M02);\ndstPts[dstOff++] = (float) (M11 * srcPts[srcOff++] + M12);\n}\nreturn;\ncase (APPLY_SCALE):\nM00 = m00; M11 = m11;\nwhile (--numPts >= 0) {\ndstPts[dstOff++] = (float) (M00 * srcPts[srcOff++]);\ndstPts[dstOff++] = (float) (M11 * srcPts[srcOff++]);\n}\nreturn;\ncase (APPLY_TRANSLATE):\nM02 = m02; M12 = m12;\nwhile (--numPts >= 0) {\ndstPts[dstOff++] = (float) (srcPts[srcOff++] + M02);\ndstPts[dstOff++] = (float) (srcPts[srcOff++] + M12);\n}\nreturn;\ncase (APPLY_IDENTITY):\nif (srcPts != dstPts || srcOff != dstOff) {\nSystem.arraycopy(srcPts, srcOff, dstPts, dstOff,\nnumPts * 2);\n}\nreturn;\n}\n/* NOTREACHED */\n}```\nTransforms an array of floating point coordinates by this transform. The two coordinate array sections can be exactly the same or can be overlapping sections of the same array without affecting the validity of the results. This method ensures that no source coordinates are overwritten by a previous operation before they can be transformed. The coordinates are stored in the arrays starting at the specified offset in the order `[x0, y0, x1, y1, ..., xn, yn]`.\n```",
null,
"public void transform(double[] srcPts,\nint srcOff,\ndouble[] dstPts,\nint dstOff,\nint numPts) {\ndouble M00, M01, M02, M10, M11, M12; // For caching\nif (dstPts == srcPts &&\ndstOff > srcOff && dstOff < srcOff + numPts * 2)\n{\n// If the arrays overlap partially with the destination higher\n// than the source and we transform the coordinates normally\n// we would overwrite some of the later source coordinates\n// with results of previous transformations.\n// To get around this we use arraycopy to copy the points\n// to their final destination with correct overwrite\n// handling and then transform them in place in the new\n// safer location.\nSystem.arraycopy(srcPts, srcOff, dstPts, dstOff, numPts * 2);\n// srcPts = dstPts; // They are known to be equal.\nsrcOff = dstOff;\n}\nswitch (state) {\ndefault:\nstateError();\n/* NOTREACHED */\ncase (APPLY_SHEAR | APPLY_SCALE | APPLY_TRANSLATE):\nM00 = m00; M01 = m01; M02 = m02;\nM10 = m10; M11 = m11; M12 = m12;\nwhile (--numPts >= 0) {\ndouble x = srcPts[srcOff++];\ndouble y = srcPts[srcOff++];\ndstPts[dstOff++] = M00 * x + M01 * y + M02;\ndstPts[dstOff++] = M10 * x + M11 * y + M12;\n}\nreturn;\ncase (APPLY_SHEAR | APPLY_SCALE):\nM00 = m00; M01 = m01;\nM10 = m10; M11 = m11;\nwhile (--numPts >= 0) {\ndouble x = srcPts[srcOff++];\ndouble y = srcPts[srcOff++];\ndstPts[dstOff++] = M00 * x + M01 * y;\ndstPts[dstOff++] = M10 * x + M11 * y;\n}\nreturn;\ncase (APPLY_SHEAR | APPLY_TRANSLATE):\nM01 = m01; M02 = m02;\nM10 = m10; M12 = m12;\nwhile (--numPts >= 0) {\ndouble x = srcPts[srcOff++];\ndstPts[dstOff++] = M01 * srcPts[srcOff++] + M02;\ndstPts[dstOff++] = M10 * x + M12;\n}\nreturn;\ncase (APPLY_SHEAR):\nM01 = m01; M10 = m10;\nwhile (--numPts >= 0) {\ndouble x = srcPts[srcOff++];\ndstPts[dstOff++] = M01 * srcPts[srcOff++];\ndstPts[dstOff++] = M10 * x;\n}\nreturn;\ncase (APPLY_SCALE | APPLY_TRANSLATE):\nM00 = m00; M02 = m02;\nM11 = m11; M12 = m12;\nwhile (--numPts >= 0) {\ndstPts[dstOff++] = M00 * srcPts[srcOff++] + M02;\ndstPts[dstOff++] = M11 * srcPts[srcOff++] + M12;\n}\nreturn;\ncase (APPLY_SCALE):\nM00 = m00; M11 = m11;\nwhile (--numPts >= 0) {\ndstPts[dstOff++] = M00 * srcPts[srcOff++];\ndstPts[dstOff++] = M11 * srcPts[srcOff++];\n}\nreturn;\ncase (APPLY_TRANSLATE):\nM02 = m02; M12 = m12;\nwhile (--numPts >= 0) {\ndstPts[dstOff++] = srcPts[srcOff++] + M02;\ndstPts[dstOff++] = srcPts[srcOff++] + M12;\n}\nreturn;\ncase (APPLY_IDENTITY):\nif (srcPts != dstPts || srcOff != dstOff) {\nSystem.arraycopy(srcPts, srcOff, dstPts, dstOff,\nnumPts * 2);\n}\nreturn;\n}\n/* NOTREACHED */\n}```\nTransforms an array of double precision coordinates by this transform. The two coordinate array sections can be exactly the same or can be overlapping sections of the same array without affecting the validity of the results. This method ensures that no source coordinates are overwritten by a previous operation before they can be transformed. The coordinates are stored in the arrays starting at the indicated offset in the order `[x0, y0, x1, y1, ..., xn, yn]`.\n```",
null,
"public void transform(float[] srcPts,\nint srcOff,\ndouble[] dstPts,\nint dstOff,\nint numPts) {\ndouble M00, M01, M02, M10, M11, M12; // For caching\nswitch (state) {\ndefault:\nstateError();\n/* NOTREACHED */\ncase (APPLY_SHEAR | APPLY_SCALE | APPLY_TRANSLATE):\nM00 = m00; M01 = m01; M02 = m02;\nM10 = m10; M11 = m11; M12 = m12;\nwhile (--numPts >= 0) {\ndouble x = srcPts[srcOff++];\ndouble y = srcPts[srcOff++];\ndstPts[dstOff++] = M00 * x + M01 * y + M02;\ndstPts[dstOff++] = M10 * x + M11 * y + M12;\n}\nreturn;\ncase (APPLY_SHEAR | APPLY_SCALE):\nM00 = m00; M01 = m01;\nM10 = m10; M11 = m11;\nwhile (--numPts >= 0) {\ndouble x = srcPts[srcOff++];\ndouble y = srcPts[srcOff++];\ndstPts[dstOff++] = M00 * x + M01 * y;\ndstPts[dstOff++] = M10 * x + M11 * y;\n}\nreturn;\ncase (APPLY_SHEAR | APPLY_TRANSLATE):\nM01 = m01; M02 = m02;\nM10 = m10; M12 = m12;\nwhile (--numPts >= 0) {\ndouble x = srcPts[srcOff++];\ndstPts[dstOff++] = M01 * srcPts[srcOff++] + M02;\ndstPts[dstOff++] = M10 * x + M12;\n}\nreturn;\ncase (APPLY_SHEAR):\nM01 = m01; M10 = m10;\nwhile (--numPts >= 0) {\ndouble x = srcPts[srcOff++];\ndstPts[dstOff++] = M01 * srcPts[srcOff++];\ndstPts[dstOff++] = M10 * x;\n}\nreturn;\ncase (APPLY_SCALE | APPLY_TRANSLATE):\nM00 = m00; M02 = m02;\nM11 = m11; M12 = m12;\nwhile (--numPts >= 0) {\ndstPts[dstOff++] = M00 * srcPts[srcOff++] + M02;\ndstPts[dstOff++] = M11 * srcPts[srcOff++] + M12;\n}\nreturn;\ncase (APPLY_SCALE):\nM00 = m00; M11 = m11;\nwhile (--numPts >= 0) {\ndstPts[dstOff++] = M00 * srcPts[srcOff++];\ndstPts[dstOff++] = M11 * srcPts[srcOff++];\n}\nreturn;\ncase (APPLY_TRANSLATE):\nM02 = m02; M12 = m12;\nwhile (--numPts >= 0) {\ndstPts[dstOff++] = srcPts[srcOff++] + M02;\ndstPts[dstOff++] = srcPts[srcOff++] + M12;\n}\nreturn;\ncase (APPLY_IDENTITY):\nwhile (--numPts >= 0) {\ndstPts[dstOff++] = srcPts[srcOff++];\ndstPts[dstOff++] = srcPts[srcOff++];\n}\nreturn;\n}\n/* NOTREACHED */\n}```\nTransforms an array of floating point coordinates by this transform and stores the results into an array of doubles. The coordinates are stored in the arrays starting at the specified offset in the order `[x0, y0, x1, y1, ..., xn, yn]`.\n```",
null,
"public void transform(double[] srcPts,\nint srcOff,\nfloat[] dstPts,\nint dstOff,\nint numPts) {\ndouble M00, M01, M02, M10, M11, M12; // For caching\nswitch (state) {\ndefault:\nstateError();\n/* NOTREACHED */\ncase (APPLY_SHEAR | APPLY_SCALE | APPLY_TRANSLATE):\nM00 = m00; M01 = m01; M02 = m02;\nM10 = m10; M11 = m11; M12 = m12;\nwhile (--numPts >= 0) {\ndouble x = srcPts[srcOff++];\ndouble y = srcPts[srcOff++];\ndstPts[dstOff++] = (float) (M00 * x + M01 * y + M02);\ndstPts[dstOff++] = (float) (M10 * x + M11 * y + M12);\n}\nreturn;\ncase (APPLY_SHEAR | APPLY_SCALE):\nM00 = m00; M01 = m01;\nM10 = m10; M11 = m11;\nwhile (--numPts >= 0) {\ndouble x = srcPts[srcOff++];\ndouble y = srcPts[srcOff++];\ndstPts[dstOff++] = (float) (M00 * x + M01 * y);\ndstPts[dstOff++] = (float) (M10 * x + M11 * y);\n}\nreturn;\ncase (APPLY_SHEAR | APPLY_TRANSLATE):\nM01 = m01; M02 = m02;\nM10 = m10; M12 = m12;\nwhile (--numPts >= 0) {\ndouble x = srcPts[srcOff++];\ndstPts[dstOff++] = (float) (M01 * srcPts[srcOff++] + M02);\ndstPts[dstOff++] = (float) (M10 * x + M12);\n}\nreturn;\ncase (APPLY_SHEAR):\nM01 = m01; M10 = m10;\nwhile (--numPts >= 0) {\ndouble x = srcPts[srcOff++];\ndstPts[dstOff++] = (float) (M01 * srcPts[srcOff++]);\ndstPts[dstOff++] = (float) (M10 * x);\n}\nreturn;\ncase (APPLY_SCALE | APPLY_TRANSLATE):\nM00 = m00; M02 = m02;\nM11 = m11; M12 = m12;\nwhile (--numPts >= 0) {\ndstPts[dstOff++] = (float) (M00 * srcPts[srcOff++] + M02);\ndstPts[dstOff++] = (float) (M11 * srcPts[srcOff++] + M12);\n}\nreturn;\ncase (APPLY_SCALE):\nM00 = m00; M11 = m11;\nwhile (--numPts >= 0) {\ndstPts[dstOff++] = (float) (M00 * srcPts[srcOff++]);\ndstPts[dstOff++] = (float) (M11 * srcPts[srcOff++]);\n}\nreturn;\ncase (APPLY_TRANSLATE):\nM02 = m02; M12 = m12;\nwhile (--numPts >= 0) {\ndstPts[dstOff++] = (float) (srcPts[srcOff++] + M02);\ndstPts[dstOff++] = (float) (srcPts[srcOff++] + M12);\n}\nreturn;\ncase (APPLY_IDENTITY):\nwhile (--numPts >= 0) {\ndstPts[dstOff++] = (float) (srcPts[srcOff++]);\ndstPts[dstOff++] = (float) (srcPts[srcOff++]);\n}\nreturn;\n}\n/* NOTREACHED */\n}```\nTransforms an array of double precision coordinates by this transform and stores the results into an array of floats. The coordinates are stored in the arrays starting at the specified offset in the order `[x0, y0, x1, y1, ..., xn, yn]`.\n```",
null,
"public void translate(double tx,\ndouble ty) {\nswitch (state) {\ndefault:\nstateError();\n/* NOTREACHED */\ncase (APPLY_SHEAR | APPLY_SCALE | APPLY_TRANSLATE):\nm02 = tx * m00 + ty * m01 + m02;\nm12 = tx * m10 + ty * m11 + m12;\nif (m02 == 0.0 && m12 == 0.0) {\nstate = APPLY_SHEAR | APPLY_SCALE;\nif (type != TYPE_UNKNOWN) {\ntype -= TYPE_TRANSLATION;\n}\n}\nreturn;\ncase (APPLY_SHEAR | APPLY_SCALE):\nm02 = tx * m00 + ty * m01;\nm12 = tx * m10 + ty * m11;\nif (m02 != 0.0 || m12 != 0.0) {\nstate = APPLY_SHEAR | APPLY_SCALE | APPLY_TRANSLATE;\ntype |= TYPE_TRANSLATION;\n}\nreturn;\ncase (APPLY_SHEAR | APPLY_TRANSLATE):\nm02 = ty * m01 + m02;\nm12 = tx * m10 + m12;\nif (m02 == 0.0 && m12 == 0.0) {\nstate = APPLY_SHEAR;\nif (type != TYPE_UNKNOWN) {\ntype -= TYPE_TRANSLATION;\n}\n}\nreturn;\ncase (APPLY_SHEAR):\nm02 = ty * m01;\nm12 = tx * m10;\nif (m02 != 0.0 || m12 != 0.0) {\nstate = APPLY_SHEAR | APPLY_TRANSLATE;\ntype |= TYPE_TRANSLATION;\n}\nreturn;\ncase (APPLY_SCALE | APPLY_TRANSLATE):\nm02 = tx * m00 + m02;\nm12 = ty * m11 + m12;\nif (m02 == 0.0 && m12 == 0.0) {\nstate = APPLY_SCALE;\nif (type != TYPE_UNKNOWN) {\ntype -= TYPE_TRANSLATION;\n}\n}\nreturn;\ncase (APPLY_SCALE):\nm02 = tx * m00;\nm12 = ty * m11;\nif (m02 != 0.0 || m12 != 0.0) {\nstate = APPLY_SCALE | APPLY_TRANSLATE;\ntype |= TYPE_TRANSLATION;\n}\nreturn;\ncase (APPLY_TRANSLATE):\nm02 = tx + m02;\nm12 = ty + m12;\nif (m02 == 0.0 && m12 == 0.0) {\nstate = APPLY_IDENTITY;\ntype = TYPE_IDENTITY;\n}\nreturn;\ncase (APPLY_IDENTITY):\nm02 = tx;\nm12 = ty;\nif (tx != 0.0 || ty != 0.0) {\nstate = APPLY_TRANSLATE;\ntype = TYPE_TRANSLATION;\n}\nreturn;\n}\n}```\nConcatenates this transform with a translation transformation. This is equivalent to calling concatenate(T), where T is an `AffineTransform` represented by the following matrix:\n``` [ 1 0 tx ]\n[ 0 1 ty ]\n[ 0 0 1 ]\n```\n```",
null,
"void updateState() {\nif (m01 == 0.0 && m10 == 0.0) {\nif (m00 == 1.0 && m11 == 1.0) {\nif (m02 == 0.0 && m12 == 0.0) {\nstate = APPLY_IDENTITY;\ntype = TYPE_IDENTITY;\n} else {\nstate = APPLY_TRANSLATE;\ntype = TYPE_TRANSLATION;\n}\n} else {\nif (m02 == 0.0 && m12 == 0.0) {\nstate = APPLY_SCALE;\ntype = TYPE_UNKNOWN;\n} else {\nstate = (APPLY_SCALE | APPLY_TRANSLATE);\ntype = TYPE_UNKNOWN;\n}\n}\n} else {\nif (m00 == 0.0 && m11 == 0.0) {\nif (m02 == 0.0 && m12 == 0.0) {\nstate = APPLY_SHEAR;\ntype = TYPE_UNKNOWN;\n} else {\nstate = (APPLY_SHEAR | APPLY_TRANSLATE);\ntype = TYPE_UNKNOWN;\n}\n} else {\nif (m02 == 0.0 && m12 == 0.0) {\nstate = (APPLY_SHEAR | APPLY_SCALE);\ntype = TYPE_UNKNOWN;\n} else {\nstate = (APPLY_SHEAR | APPLY_SCALE | APPLY_TRANSLATE);\ntype = TYPE_UNKNOWN;\n}\n}\n}\n}```\nManually recalculates the state of the transform when the matrix changes too much to predict the effects on the state. The following table specifies what the various settings of the state field say about the values of the corresponding matrix element fields. Note that the rules governing the SCALE fields are slightly different depending on whether the SHEAR flag is also set.\n``` SCALE SHEAR TRANSLATE\nm00/m11 m01/m10 m02/m12\n\nIDENTITY 1.0 0.0 0.0\nTRANSLATE (TR) 1.0 0.0 not both 0.0\nSCALE (SC) not both 1.0 0.0 0.0\nTR | SC not both 1.0 0.0 not both 0.0\nSHEAR (SH) 0.0 not both 0.0 0.0\nTR | SH 0.0 not both 0.0 not both 0.0\nSC | SH not both 0.0 not both 0.0 0.0\nTR | SC | SH not both 0.0 not both 0.0 not both 0.0\n```"
] | [
null,
"http://docjar.org/inherit.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null,
"http://docjar.org/plus.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.60206836,"math_prob":0.99659705,"size":66320,"snap":"2023-14-2023-23","text_gpt3_token_len":20924,"char_repetition_ratio":0.20403825,"word_repetition_ratio":0.66755015,"special_character_ratio":0.3672497,"punctuation_ratio":0.21257915,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99766403,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-09T01:55:41Z\",\"WARC-Record-ID\":\"<urn:uuid:39c26f3e-92dd-4ffb-9dc0-88181536b9a5>\",\"Content-Length\":\"147633\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7d73feff-2c42-4762-979b-9a1f68ebc530>\",\"WARC-Concurrent-To\":\"<urn:uuid:7016d698-adad-4561-abfd-f9917517abed>\",\"WARC-IP-Address\":\"74.208.90.88\",\"WARC-Target-URI\":\"http://docjar.org/docs/api/java/awt/geom/AffineTransform.html\",\"WARC-Payload-Digest\":\"sha1:H7P55NLWLTYNCMCFPJO5T2MGYKBPH3BY\",\"WARC-Block-Digest\":\"sha1:BD2X6K7XSZ62L2LIHKLY3OABK7JUBT6K\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224655244.74_warc_CC-MAIN-20230609000217-20230609030217-00741.warc.gz\"}"} |
https://es.mathworks.com/matlabcentral/answers/484288-how-to-calculate-determinant-of-matrices-without-loop | [
"# How to calculate determinant of matrices without loop?\n\n24 views (last 30 days)\nI am new to Matlab and this might seem very easy.\nI have 2 matrices:\na = [1 1 1; 2 2 2 ; 3 3 3 ; 4 4 4 ; 5 5 5];\nb = [4 4 4; 3 2 4 ; 1 5 7 ; 4 3 8 ; 2 4 7];\nI wanted to calculate the determinant of each row of the two matrices added by a row of ones (a 3*3 matrix), and put all the determinants in another array. For example, first determinant (d(1)) would be from this matrix:\n1 1 1\n4 4 4\n1 1 1\nand the second one (d(2)) would be from this matrix:\n2 2 2\n3 2 4\n1 1 1\nand so on...\nWhen I try this:\nm = size(a,1);\nons = ones(m,3);\nd = det([a(:,:) ; b(:,:) ; ons(:,:)]);\nI get this error:\nError using det\nMatrix must be square.\nHow can I calculate all the determinants at once without using loop?\n\nJames Tursa on 9 Oct 2019\nWell, for this particular example, the result is simply\nd = [0,0,0,0,0];\nbecause all the rows of a are multiples of [1 1 1]\nYou are right. But when I use:\nd = arrayfun(@(x)det([a(x,:);b(x,:);ones(1,3)]),1:length(a));\nthe result is:\n1.0e-15 *\n0 0 -0.3331 0 0.8327\nRik on 10 Oct 2019\nThere are 16 orders of magnitude between input and output. That is fairly close to eps, so this could be a float rounding error (and it is).\n\nBruno Luong on 10 Oct 2019\na(:,1).*b(:,2)-a(:,2).*b(:,1)-a(:,1).*b(:,3)+a(:,3).*b(:,1)+a(:,2).*b(:,3)-a(:,3).*b(:,2)\n\nBruno Luong on 10 Oct 2019\nSome timing\na=rand(1e6,3);\nb=rand(1e6,3);\ntic\nd=arrayfun(@(x)det([a(x,:);b(x,:);ones(1,3)]),(1:length(a))');\ntoc % 5.066323 seconds.\ntic\nA = reshape(a.',1,3,[]);\nB = reshape(b.',1,3,[]);\nABC = [A; B];\nABC(3,:,:) = 1;\nd = zeros(size(a,1),1);\nfor k=1:size(a,1)\nd(k) = det(ABC(:,:,k));\nend\ntoc % Elapsed time is 1.533522 seconds.\ntic\nd = a(:,1).*b(:,2)-a(:,2).*b(:,1)-a(:,1).*b(:,3)+a(:,3).*b(:,1)+a(:,2).*b(:,3)-a(:,3).*b(:,2);\ntoc % Elapsed time is 0.060121 seconds.\nI keep writing since day one that ARRAYFUN is mostly useless when speed is important.\n\nDavid Hill on 9 Oct 2019\nd=arrayfun(@(x)det([a(x,:);b(x,:);ones(1,3)]),1:length(a));\n\nRik on 9 Oct 2019\nThis is probably the solution OP is looking for, but I want to add that this still contains a loop. It just moves the loop internally, while adding a layer of complexity with the anonymous function.\nI know it is an internal loop, but this is what I was looking for and it is working well. Thank you.\nBruno Luong on 10 Oct 2019\nYou have accepted the worse answer in term of runing time, see the tic/toc I made below\n\nSteven Lord on 9 Oct 2019\nThis line of code:\nd = det([a(:,:) ; b(:,:) ; ons(:,:)]);\nStacks all of a on top of all of b, and stacks that on top of all of ons. It then tries to take the determinant of that array. Let's see the matrix you created.\nd = [a(:,:) ; b(:,:) ; ons(:,:)]\nd =\n1 1 1\n2 2 2\n3 3 3\n4 4 4\n5 5 5\n4 4 4\n3 2 4\n1 5 7\n4 3 8\n2 4 7\n1 1 1\n1 1 1\n1 1 1\n1 1 1\n1 1 1\nThe easiest way to accomplish what you want is a for loop that iterates through the rows of the a and b matrices. It's hard to give an example of the technique on your data that doesn't just give you the solution (which I'd prefer not to do, since this sounds like a homework assignment.) So I'll just point to the array indexing documentation. You want to access all of one of the rows of a (and all of the same row of b) and use that accessed data to create the d matrix for that iteration of the for loop. Each iteration will have a different d."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6091992,"math_prob":0.9961955,"size":2802,"snap":"2020-24-2020-29","text_gpt3_token_len":795,"char_repetition_ratio":0.16940671,"word_repetition_ratio":0.34188035,"special_character_ratio":0.2748037,"punctuation_ratio":0.15779468,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.995273,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-03T11:10:20Z\",\"WARC-Record-ID\":\"<urn:uuid:1cbe3c3e-9607-4f00-a835-d93c06563a7c>\",\"Content-Length\":\"213690\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:95bdcb9e-1b67-4b65-bca7-40ee1d61a572>\",\"WARC-Concurrent-To\":\"<urn:uuid:78a54097-8b12-44fc-ae05-02247af612ce>\",\"WARC-IP-Address\":\"23.66.56.59\",\"WARC-Target-URI\":\"https://es.mathworks.com/matlabcentral/answers/484288-how-to-calculate-determinant-of-matrices-without-loop\",\"WARC-Payload-Digest\":\"sha1:N3EBPVHBJ7ISXXR5NZY3WKBZV6KEZICX\",\"WARC-Block-Digest\":\"sha1:75VPY5SABOL2VBPCD54TT5LK4XFBVI23\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655881984.34_warc_CC-MAIN-20200703091148-20200703121148-00034.warc.gz\"}"} |
https://www.physicsforums.com/threads/light-wave-inteference.309193/ | [
"# Light wave inteference\n\nI feel so silly right now, because this problem is the 1st problem of my physics problem set(and easiest), and I've done all the rest of the harder problems, but I can't get this simple problem!\n\n## Homework Statement\n\ntwo in phase sources of waves are separated by a distance of 4.00m these sources produce identical waves that have a wave length of 5.00 m. on the line between them there are two places at which the same type of interference occurs. a) is it constructive or destructive interference and b) where are the places located\n\n## Homework Equations\n\nl2-l1= m lambda (wavelength)\nl2-l1= (m+1/2) lambda\n\n## The Attempt at a Solution\n\nl2-l1= 4 meters\nlambda = 5 meters\nm= 0.80 meters\n\nI'm so lost.\n\nRelated Introductory Physics Homework Help News on Phys.org\nAw man, can no one help me?! This sucks!\n\nWhy don't we do a drawing?\n\nPosition two points, left and right, 4 metres apart.\nDraw a horizontal line between them\n\nFrom the left point draw 5 metre sine wave heading right.\nFrom the right point draw 5 metre sine wave heading left.\n\nAre there two places where the two waves are equidistant from the horizontal line?\nCan you calculate the distances?\n\nI don't see any points parts of the waves that are equidistant from the horizontal line\n\nWhy don't you post your drawing?\n\nIt's attached. :)\n\n#### Attachments\n\n• 38.1 KB Views: 1,336\nI see my explanation was not clear to you. Check this diagram out.\n\n#### Attachments\n\n• 13.5 KB Views: 1,489\nRedbelly98\nStaff Emeritus\nHomework Helper\nl2-l1= 4 meters\nThis is wrong. Do you see the problem?\n\nif they are in phase\nthen you already know that the midpoint between them will be maximum constructive interference (since a peak from each wave will take the same amount of time to travel to the midpoint)\nNow, remember that the distance between 2 points of maximum constructive (or the distance between two points of perfect desctrutive) is wavelength/2\nand that a point of perfect descructive is halways between 2 points of maximum constructive, and vice-versa\nuse these 2 facts to draw all the spots of maximum constructive and perfect descrtuctive, and just count them to see how many there are , and where they are\n\nSorry guys, I'm just not getting it. Can someone explain it in numbers?\n\nRedbelly98\nStaff Emeritus\nHomework Helper\n\n## Homework Equations\n\nl2-l1= m lambda (wavelength)\nl2-l1= (m+1/2) lambda\n\n## The Attempt at a Solution\n\nl2-l1= 4 meters\nlambda = 5 meters\nSince the point lies in between the two sources, we have\n\nl2 + l1= 4 m\n\nwhere l1 and l2 are the distances to each source, and are both positive numbers.\n\nSo, what are l1 and l2 when m=0 and\n\nl2-l1= (m+1/2) lambda\n\nThen do the same for m = -1.\n\nl2-l1= 2.5m when m=0\nl2-l1 = -2.5m when m=-1\n\n??? still doesn't get me the answer\n\nRedbelly98\nStaff Emeritus\nHomework Helper\nl2-l1= 2.5m when m=0\nYes, okay. And we also know\nl1 + l2 = 4m\nYou have 2 equations with 2 unknowns here, so it is possible to find l1 and l2.\n\nl1=3.25 and l2=0.75\n\nBUUUT, how did you know l1+l2=4m\nhow do i know the wave have destructive interference\nand how was I suppose to know I was looking for l1 and l2\n\nRedbelly98\nStaff Emeritus\nHomework Helper\nWe are not finished.",
null,
"All we have done, so far, is to find one location, in between the sources, where destructive interference occurs. That location is 3.25m from one source, and 0.75m from the other source.\n\nTo see if there's another location, try the same procedure using m=-1. And remember, l1+l2=4m.\n\nBUUUT, how did you know l1+l2=4m\nl1 is the distance to one of the sources.\nl2 is the distance to the other source.\nSince we are considering locations directly in between the two sources, l1+l2 must equal the distance between the two sources. If you don't see that, draw a diagram.\n\nhow do i know the wave have destructive interference\nWe didn't know that. We're finding the locations where destructive interference occurs if they exist.\n\nand how was I suppose to know I was looking for l1 and l2\nHuh? You're trying to find locations where there is interference aren't you? Those locations can be specified by the distance to each source, l1 and l2.\n\nIf you haven't yet drawn a diagram indicating both sources, an unspecified location between the sources (where interference might occur), and also showing the distances l1 & l2 in the figure, I urge you to do that. Understanding what's going on starts with having that diagram. Without that diagram, this is pointless.\n\nI've drawn it. It just doesn't make sense to me.\n\nRedbelly98\nStaff Emeritus\nHomework Helper\nThe diagram looks something like this:\nCode:\n <--- l1 ---> <------ l2 ------>\n\no-----------x-----------------o\n: : :\n1st interference 2nd\nsource location source\nIs it clearer now why l1+l2 = 4m?\n\nIn the attached diagram the two wave trains meet and interfere constructively at the dotted line in the centre of the diagram. The other two vertical dotted lines which are one quarter of a wavelength away from the central line represent the locations of destructive interference.\n\nWavelength = 5m\nOne quarter wavelength = 1.25m\nCentral peak is at distance 2m\n2m + 1.25m = 3.25m\n2m - 1.25m = 0.75m\n\n#### Attachments\n\n• 54.4 KB Views: 943\nOkay I think I get it now! THANKS SO MUCH for sticking with me. You guys are great!"
] | [
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9131287,"math_prob":0.94753444,"size":1141,"snap":"2020-10-2020-16","text_gpt3_token_len":317,"char_repetition_ratio":0.113456465,"word_repetition_ratio":0.6372549,"special_character_ratio":0.2716915,"punctuation_ratio":0.06866953,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99312574,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-26T04:30:04Z\",\"WARC-Record-ID\":\"<urn:uuid:03d101eb-435c-4e46-b089-563cb90b53c9>\",\"Content-Length\":\"139394\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8beac0c9-3e00-4209-b5fe-0773a6af2ef8>\",\"WARC-Concurrent-To\":\"<urn:uuid:a6141fb4-f0cb-492b-966c-aa8a50b142ac>\",\"WARC-IP-Address\":\"23.111.143.85\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/light-wave-inteference.309193/\",\"WARC-Payload-Digest\":\"sha1:BVX4RBIJARCSNE2YUCVNZQEICPUMLPI7\",\"WARC-Block-Digest\":\"sha1:35B2OW6346EG3DFTL7XCSZWZUS5YEV2C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875146186.62_warc_CC-MAIN-20200226023658-20200226053658-00253.warc.gz\"}"} |
https://www.extrica.com/article/16639 | [
"# Structure stability evaluation of offshore heave compensator using multi-body dynamics analysis method\n\n## Gwi-Nam Kim1, Sun-Chul Huh2, Sung-Gu Hwang3, Yong-Gil Jung4, Jang-Hwan Hyun5, Hee-Sung Yoon6\n\n1, 2, 3, 4Gyeongsang National University, Tongyeong, Gyeongsangnam-do, Republic of Korea\n\n5, 6Khan Co., Ltd., Geoje, Gyeongsangnam-do, Republic of Korea\n\n2Corresponding author\n\nJournal of Vibroengineering, Vol. 17, Issue 8, 2015, p. 4134-4141.\nReceived 15 September 2015; received in revised form 10 November 2015; accepted 20 November 2015; published 30 December 2015\n\nCopyright © 2015 JVE International Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.\nViews 235\nAbstract.\n\nHeave compensator attenuate vessel heave motion during drilling operation of drillship. Heave compensator functions as damping form motion of drillship, such as principle spring of suspension system. The load transfers on the parts of heave compensator. Stress and deformation of all parts is evaluated to diagnose the stability of the compensator. This study makes a decision on the safety of structure. Results of analysis confirm the structure stability of heave compensator for simulation. This result can be used as data for structural analysis to determine safety of a structure.\n\nKeywords: heave compensation system, structure analysis, multi-body dynamics, ANSYS workbench.\n\n#### 1.1. Outline\n\nThe compensator located on the upper part of the derrick for compensation of heave motion of wave applied during a drilling work is a compensation device which uses an oil pressure cylinder, and is affected by the load of the drill string and the load from the external environment. Accordingly, it is important to determine safety of structure considering various loads. This system is consist of drill string compensator(DSC) and active heave compensator, put the oil and air pressure control to use. Drill string compensator serves to reduce the disturbance to 90 % for the external environment. The Active Heave Compensator is designed to be added to the Drill String Compensator to play the role of reducing disturbance of heave motion by about 5 % and to apply control force to the part supporting the heavy object, and can attenuate disturbance of heave motion by applying control force in positive or negative direction. An oil pressure system is used in general to apply the control force. Various motion control devices are used to compensate for the vertical motion of the vessel.\n\nJ. T. Hatleskog researched compensation system by the 5 body (platform, crown block, traveling block, upper drill string and lower drill string). The input data of platform’s behavior is period [1, 2]. But, in this study, the safety of the structure was not determining. Nam-Kug Ku researched dynamic behavior of vessel’s hosting system in order to analyze dynamic behavior of multi-body dynamics system using developed kernel . This study conducted dynamic analysis about heave compensation system. In addition, this studied load of drill string depending on compensation system following system mechanism. But, Safety of compensator structure having external loads is not studied. K. D. Do examined the operation of the drilling system using mathematical model . But, in this research, the stress state of the structure was not identifying. Therefore, this study confirms influence of AHC-cylinder and DSC-cylinder to identify the safety compensator structure of heave compensator. Heave compensator is modeled by the finite element. The stress distribution and safety was judged to the working compensator.\n\n#### 1.2. Wave load\n\nUnlike the case of the ground, the drillship operating on offshore drilling is influenced by live load, dead load and environmental load. Therefore, Environmental load considers wave height of target area influence. Once analysis, ocean waves are irregular and random in shape, height, length and speed of propagation. Regular travelling wave is propagating with permanent form. It has a distinct wave length, wave period and wave height. Therefore, analysis has to take them into consideration.\n\nFig. 1. Six degrees of freedom of vessel",
null,
"Analysis of ocean floating structure is import to take six degrees of freedom (surge, sway, heave, roll, pitch and yaw) into consideration. Fig. 1 shows the six degrees of freedom of vessel. For a floating structure the vertical displacement of the structure can be written as:\n\n(1)\n$z\\left(x,y,z\\right)={\\zeta }_{3}\\left(t\\right)-x\\mathrm{s}\\mathrm{i}\\mathrm{n}\\left[{\\zeta }_{4}\\left(t\\right)\\right]+y\\mathrm{s}\\mathrm{i}\\mathrm{n}\\left[{\\zeta }_{5}\\left(t\\right)\\right],$\n\nwhere $z$ is vertical displacement of platform, and it is meant value that the different coordinates add up. because it is affected on rolling, pitching and heaving motion. ${\\zeta }_{3}\\left(t\\right)$ is heave translational motion, ${\\zeta }_{4}\\left(t\\right)$ is roll rotational motion, ${\\zeta }_{5}\\left(t\\right)$ is pitch rotational motion.\n\nThe stress distribution the topside structure is governed by the structural design of the topside, local loads and ship accelerations and deformation. The total design force for topside/deck equipment is calculated as follow:\n\n(2)\n${F}_{v}=\\left({\\gamma }_{s}\\mathrm{g}+{\\gamma }_{w}{a}_{v}\\right)m,$\n(3)\n${F}_{t}=\\left(g\\mathrm{s}\\mathrm{i}\\mathrm{n}\\theta +{\\gamma }_{w}{a}_{t}\\right)m,$\n(4)\n${F}_{l}=\\left(g\\mathrm{s}\\mathrm{i}\\mathrm{n}\\mathrm{\\Phi }+{\\gamma }_{w}{a}_{l}\\right)m,$\n\nwhere ${F}_{v}$ is force on the structure due to gravity and vertical acceleration, ${F}_{t}$ is force on the structure due to transverse acceleration, ${F}_{l}$ is force on the structure due to longitudinal acceleration, ${\\gamma }_{s}$ is load factor for static loads, ${\\gamma }_{w}$ is environmental load factor, $g$ is acceleration of gravity (9.81m/${s}^{2}\\right)$, ${a}_{v}$ is vertical acceleration at the relevant C.O.G. calculated by hydrodynamic analysis with annual probability of exceedance, ${a}_{t}$ is transverse acceleration calculated at the relevant C.O.G. by hydrodynamic analysis, $\\theta$ is roll angle of ship, ${a}_{l}$ is longitudinal acceleration at the relevant C.O.G. by hydrodynamic analysis, $\\mathrm{\\Phi }$ is pitch angle, $m$ is mass of the structure/equipment .\n\nRoll, pitch and heave in the equations above are deducted by value for acceleration. When value of roll and pitch are 13 degrees and 8 degree, ${a}_{v}$, ${a}_{t}$ and ${a}_{l}$ are 3.6 m/s2, 6.4 m/s2 and 4.5 m/s2, respectively.\n\n#### 1.3. Multi-body dynamics\n\nDevelopment of the product is necessary that achieve the desired performance, the vibration analysis, a structure stress evaluation, a noise characteristics of fatigue life prediction in pursuance stability of the structure and vibration characteristics. Recently, Trend of mechanical structure is high speed and light weight ranging from high-precision machine to large machine. Performance of mechanical structure depends on static stiffness, dynamic stiffness and thermal characteristic. Behavior analysis of structure on dynamic load part is becoming more and more important [6-9].\n\nMulti-body dynamic is based on Lagrange Equation. Which is the calculation of motion and position, speed and acceleration through time-integrating, also reaction force is calculated in the joint. Therefore, it judges behavior of parts according to load and deformation, the design error between parts can be checked.\n\nThe multi-body dynamics solution from ANSYS offers extensive versatility in handling the degree of required complexity. Multi-body dynamics from ANSYS Workbench supports two methods. One is rigid dynamics the other is flexible multi-body dynamic. Multi-body dynamics analysis is used on joint to connect parts. Multi-body dynamics simulates motions and forces of parts interconnected to one another via sets of constraints modeled as joints.\n\n#### 2.1. Heave compensator modeling\n\nThe 3D model of the heave compensator was made using CATIA v5. Fig. 2 is the shape of 3D model using CATIA. 3D model composed with crown block as the center, frame connect derrick is fixed position of crown block. AHC-cylinder is on the crown block, Slanted DSC-Cylinder 4 sets are under crown block. The locker-arm is located flank the crown block. Fig. 3 shows the arrangement of heave compensator. In this research, dynamic analysis of the heave compensator apply to the flexible body. First, heave compensator is modeled by the finite element. And, the stiffness and damping of the cylinder was applied to a kinetic model. Second, Environmental loads used in the analysis were applied to the wave loads in sea. Third, the stress distribution and safety examined heave compensator.\n\nFig. 2. Heave compensator 3D model from CATIA",
null,
"Fig. 3. Compensator arrangement",
null,
"#### 2.2. Boundary conditions\n\nFig. 4 shows the joint condition of multi-body dynamics analysis from ANSYS program. This joint select between the connection to the parts necessary for the analysis. Interconnection of parts via joints is greatly simplified by considering the finite motion at the two nodes forming the joint element. Revolute joint is a two-node element that has only on primary degree of freedom, the relative rotation about the revolute axis. Translational joint element is a two-node element that has on relative displacement degree of freedom. Cylindrical joint element is a two-node element that has on free relative displacement degree freedom and one free relative rotational degree of freedom (around the cylindrical or revolute axis). Conditions appropriate for each part of the joint was applied as follows:\n\nA. Revolute condition was applied on pin for connection of each parts.\n\nB. Translational condition was applied on between Frame and crown block.\n\nC. Cylindrical condition was applied on DSC-Cylinder and AHC-Cylinder.\n\nD. Fixed support was applied while there is no movement between each part.\n\nE. DSC-Cylinder is applied a spring condition for damping role.\n\nFig. 4. Boundary conditions of ANSYS",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"Fig. 5. Mesh of compensator model",
null,
"Mesh is generated for finite-element analysis. Cylinder is applied to the hex-dominant. The other parts are applied to the tetrahedrons. Nodes number are 402,841 and elements number are 216,595. The size of mesh is 50 mm to 100 mm. Mesh number is a sufficient to indicate the stress distribution. Therefore, compensator is big and complex. Fig. 5 shows the finite-element analysis.\n\nThe material used in finite-element analysis is DH36 structural steel. Thick plate of DH36 with high strength is applied as material in the finite-element analysis. Table 1 shows material property of DH36. The yield strength of the DH36 steel agrees with the requirement of DNV. In addition, while comparing the yield strength in ASTM with DNV, DNV is superior to ASTM.\n\nTable 1. Material properties of DH 36\n\n ASTM A131 steel, DH 36 Density 7850 kg/m Yield strength 350 MPa Ultimate strength 490-630 MPa Young’s modulus 216 GPa Poisson’s ratio 0.26\n\n#### 2.3. Load condition\n\nSheave of crown block is applied 1040 ton of live load (standard is drill depth the 10000 m class). Heave motion is applied wave period (height 6 m, period 9 s) of target area to control of AHC-Cylinder. Acceleration is applied consider rolling, Pitching and heaving. Fig. 6 shows the load condition of Heave compensator. A live load of 908 ton was applied to the sheave of the crown block, and its own weight was taken into account. Fig. 7 shows the motion of AHC-Cylinder motion and heave motion.\n\nA sine wave with 3,000 mm amplitude and 9 second cycle was applied to the frame connected to the derrick to take into account the flow of the external fluid (heave motion) transferred to the hull, and a sine wave with 2,850 mm amplitude and 9 second cycle was applied in reverse waveform to take into account the control force of the AHC cylinder over such heave motion.\n\nFig. 6. Load condition of analysis for heave compensator",
null,
"Fig. 7. Motion of heave and AHC cylinder",
null,
"#### 3. Results\n\nFig. 8 shows the result of equivalent stress analysis. Maximum stress generate on the rod of AHC-Cylinder connect crown block. Where, Maximum stress value is 181.22 MPa. AHC-Cylinder rod stress value is 147.82 MPa, DSC-Cylinder rod stress value is 114.64 MPa. The rod of AHC-cylinder is show the maximum value. Because, AHC-cylinder lifts the loads, it acts as the rise of the crown block. Stress value of crown block top part and frame bottom part is 50-70 MPa. Safety factor means allowable stress/von-mises stress. This is using the von-mises method of finite element analysis. Safety factor of structure is 1.8.\n\nThis analysis is further applied to the motion of AHC-Cylinder. Stress concentration is generated on the rod of AHC-Cylinder and DSC-Cylinder influence of the live load and motion of the parts. This position is AHC-Cylinder rod. Fig. 9 shows result of equivalent elastic strain. Maximum value of equivalent elastic strain is 0.001122 mm. This position is AHC-Cylinder rod.\n\nFig. 8. Results of analysis of equivalent stress",
null,
"Fig. 9. Results of analysis of equivalent elastic strain",
null,
"Fig. 10 shows the result of displacements from the initial locations of the crown block and the frame obtained through MBD analysis. Fig. 11 shows the speeds of the DSC cylinder and the AHC cylinder by time band. When the compensation rate for heave movement of ±3,000 mm was 95 %, the maximum speeds of the DSC cylinder and the AHC cylinder were 1.750 m/s and 1.989 m/s respectively. This result can be used as data for structural analysis to determine safety of a structure.\n\nFig. 10. Position of crown block and frame",
null,
"Fig. 11. Velocity of AHC and DSC cylinder",
null,
"#### 4. Conclusions\n\nThis study is structure analysis of true size assembly for technical development of heave compensator. Safety assessment of structure is simulation of true environment by multi-body dynamics of ANSYS.\n\n1) Result of simulation is high stress concentration to AHC-Cylinder rod. But, Safety coefficient consequential stress distribution of structure is 1.8. Standard of a design is DNV rules. It standard safety coefficient is 1.2-1.3. Therefore, Heave compensator is safe taking DNV rule into consideration. Result of this study is only considers wave condition. Thus, safety decision of structure is to consider additional external load.\n\n2) The speeds of the AHC cylinder and the DSC cylinder were confirmed through MOD analysis when the attenuation is 95 %. The design speed of the 1/5 size cylinder model is 1.5 m/s. The AHC cylinder speed of 1.989 m/s and the DSC cylinder speed of 1.750 m/s obtained through a dynamics analysis are confirmed, and can be utilized for design of an actual size cylinder.\n\n#### Acknowledgements\n\nThis research is financially supported by Korea Institute of Original Technology Development Project of Ministry of Knowledge Economy (No. 10035350).\n\n1. Hatleskog J. T., Dummigan M. W. Passive compensator load variation for deep water contact operation. Conference of OCEANS, 2006, p. 18-21. [Search CrossRef]\n2. Hatleskog J. T., Dummigan M. W. Peer-reviewed technical communication-passive compensator load variation for deep-water drilling. IEEE Journal of Oceanic Engineering, Vol. 32, Issue 3, 2007, p. 593-602. [Search CrossRef]\n3. Ku N. K., Ha S., Roh M. I. Study on the applicability of a new multi-body dynamics program though the application to the heave compensation system. Journal of the Computational Structural Engineering Institute of Korea, Vol. 26, 2013, p. 247-254. [Search CrossRef]\n4. Do K. D., Pan J. Nonlinear control of an active heave compensation system. Ocean Engineering, Vol. 35, 2008, p. 558-571. [Search CrossRef]\n5. DNV Offshore Standards. DNV-RP-C102, Appendix B, 2002. [Search CrossRef]\n6. Jang J. S., Sohn J. H. Dynamic analysis of wave energy generation system by using multibody dynamics. The Korean Society of Mechanical Engineers, Vol. 35, Issue 12, 2011, p. 1579-1584. [Search CrossRef]\n7. Park K. P., Cha J. H., Lee K. Y. Analysis of dynamic response of a floating and a cargo with elastic booms based on flexible multibody system dynamics. Journal of the Society of Naval Architects of Korea, Vol. 47, Issue 1, 2010, p. 47-57. [Search CrossRef]\n8. Oh Y. S., Kim S. S. Development and implementation of real time multibody vehicle dynamics model. The Korean Society of Mechanical Engineers A, Vol. 25, Issue 5, 2001, p. 934-840. [Search CrossRef]\n9. Jo A. R., Ku N. K., Cha J. H., Park K. P., Lee K. Y. Simulation of contacts between wire rope and shell plate of a block for shipbuilding industry based on multibody dynamics. Transactions of the Society of CAD/CAM Engineers, Vol. 17, Issue 5, 2012, p. 324-332. [Search CrossRef]\n10. Andrzej Piotr Koszewnik, Zdzislaw Gosiewski Frequency domain identification of the active 3D mechanical structure for the vibration control system. Journal of Vibroengineering, Vol. 14, Issue 2, 2012, p. 451-457. [Search CrossRef]"
] | [
null,
"https://cdn.jvejournals.com/articles/16639/xml/img1.jpg",
null,
"https://cdn.jvejournals.com/articles/16639/xml/img2.jpg",
null,
"https://cdn.jvejournals.com/articles/16639/xml/img3.jpg",
null,
"https://cdn.jvejournals.com/articles/16639/xml/img4.jpg",
null,
"https://cdn.jvejournals.com/articles/16639/xml/img5.jpg",
null,
"https://cdn.jvejournals.com/articles/16639/xml/img6.jpg",
null,
"https://cdn.jvejournals.com/articles/16639/xml/img7.jpg",
null,
"https://cdn.jvejournals.com/articles/16639/xml/img8.jpg",
null,
"https://cdn.jvejournals.com/articles/16639/xml/img9.jpg",
null,
"https://cdn.jvejournals.com/articles/16639/xml/img10.jpg",
null,
"https://cdn.jvejournals.com/articles/16639/xml/img11.jpg",
null,
"https://cdn.jvejournals.com/articles/16639/xml/img12.jpg",
null,
"https://cdn.jvejournals.com/articles/16639/xml/img13.jpg",
null,
"https://cdn.jvejournals.com/articles/16639/xml/img14.jpg",
null,
"https://cdn.jvejournals.com/articles/16639/xml/img15.jpg",
null,
"https://cdn.jvejournals.com/articles/16639/xml/img16.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8862134,"math_prob":0.919963,"size":16011,"snap":"2022-40-2023-06","text_gpt3_token_len":3647,"char_repetition_ratio":0.155432,"word_repetition_ratio":0.040937997,"special_character_ratio":0.22072326,"punctuation_ratio":0.15006468,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96602315,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-26T18:58:48Z\",\"WARC-Record-ID\":\"<urn:uuid:b3faaf6f-48dd-4d0f-af8c-ab1238c9be73>\",\"Content-Length\":\"60428\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:584a674a-aaee-4cad-be3c-c1b28d4e18d2>\",\"WARC-Concurrent-To\":\"<urn:uuid:d5baa0ae-76e7-419f-b4ae-2a1b8b6768ee>\",\"WARC-IP-Address\":\"20.50.64.3\",\"WARC-Target-URI\":\"https://www.extrica.com/article/16639\",\"WARC-Payload-Digest\":\"sha1:GTHBYYSWZLJXP5H6I4JRFSTQZSOXP35Z\",\"WARC-Block-Digest\":\"sha1:UOY4NQIMTQAHMOIZC7L5ITBUSN4PBDKX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030334915.59_warc_CC-MAIN-20220926175816-20220926205816-00625.warc.gz\"}"} |
https://cefymaluvawukaso.jikishinkobudo.com/principles-of-applied-mathematics-book-27623tq.php | [
"",
null,
"# Principles of Applied Mathematics\n\nTransformation and Approximation (Advanced Book Program)\n• 603 Pages\n• 1.28 MB\n• English\nby\nPerseus Books Group\nApplied mathematics, Mathematics, Science/Mathematics, Applied, Logic, Asymptotic expansions, Transformations (Mathema\nThe Physical Object\nFormatHardcover\nID Numbers\nOpen LibraryOL7899272M\nISBN 100738201294\nISBN 139780738201290\n\nJames P. Keener is a professor of mathematics at the University of Utah. He received his Ph.D. from the California Institute of Technology in applied mathematics in In addition to teaching and research in applied mathematics, Professor Keener served as editor in chief of the SIAM Journal on Applied Mathematics and continues to serve as editor of several leading research s: 5.\n\nWhile other books, such as Strang's Intro to Applied Mathematics and Rudin's Real & Complex Analysis, provide you with one mathematical \"toy\" after another (Fourier series, Lebesgue integration, etc.), Keener's book tells you why you need the toy before giving it to you/5.\n\nApplied statistics is more than data analysis, but it is easy to lose sight of the big picture. David Cox and Christl Donnelly distil decades of scientific experience into usable principles for the successful application of statistics, showing how good statistical strategy shapes every stage of an investigation.3/5(5).\n\nPrinciples of Mathematics Book 1 lays a solid foundation—both academically and spiritually—as your student prepares for High School math. Students will study concepts of arithmetic and Principles of Applied Mathematics book, further develop their problem-solving skills, see how mathematical concepts are applied in a practical way to everyday life, and strengthen their 95%(22).\n\nDerivations of Applied Mathematics is a book of applied mathematical proofs. This book covers the following topics in applied mathematics: Classical Principles of Applied Mathematics book and geometry, Trigonometry, derivative, The complex exponential, Primes, roots and averages, Taylor series, Integration techniques, Matrices and vectors, Transforms and special functions.\n\nSummary Principles of Applied Mathematics provides a comprehensive look at how classical methods are used in many fields and contexts.\n\n### Details Principles of Applied Mathematics EPUB\n\nUpdated to reflect developments of the last twenty years, it shows how two areas of classical applied mathematics?spectral theory of operators and asymptotic analysis?are useful for solving a wide range of applied science problems. Principles of Applied Mathematics Book Preview Selected Pages TEXT BOOK.\n\nRelated. MATHEMATICS S5 S6 TEXT BOOK. Leave a Reply Cancel reply. Resource Categories. Principles of Applied Mathematics. Derivations of Applied Mathematics is a book of applied mathematical proofs.\n\nThis book covers the following topics in applied mathematics: Classical algebra and geometry, Trigonometry, derivative, The complex exponential, Primes, roots and averages, Taylor series, Integration techniques, Matrices and. Principles of Mathematics Student Textbook Book 1 Binder with Notebook Paper — Students will need to tear out the reference section from this book and put it in the binder, as well as add notes to.\n\nPrinciples of Applied Mathematics provides a comprehensive look at how classical methods are used in many fields and contexts. Updated to reflect developments of the last twenty years, it shows how two areas of classical applied mathematics?spectral theory of operators and asymptotic analysis?are useful for solving a wide range of applied.\n\nPrinciples of Applied Mathematics provides a comprehensive look at how classical methods are used in many fields and contexts. Updated to reflect developments of the last twenty years, it shows how two areas of classical applied mathematics spectral theory of operators and asymptotic analysis are useful for solving a wide range of applied science by: Principles of Mathematics We dont often think of math in terms of being presented with a Christian Worldview, but Master Books and author, Katherine A.\n\nLoop, have done just that.\n\nUsing a Biblical lens, this comprehensive, two-year, junior high math course will cover the basics of arithmetic and pre-algebra and thoroughly prepare your student.\n\nPrinciples of Mathematics Book 2 lays a solid foundation—both academically and spiritually—as your student prepares for high school algebra.\n\nStudents will study pre-algebra concepts, further develop their problem-solving skills, see how algebraic concepts are applied in a practical way to everyday life, and strengthen their faith. Principles and Techniques of Applied Mathematics book. Read reviews from world’s largest community for readers.\n\nStimulating, thought-provoking study show /5(5). This book is primarily about the principles that one uses to solve problems in applied mathematics. It is written for beginning graduate students in applied mathematics, science, and engineering, and is appropriate as a one-year course in applied mathematical by: Principles and Techniques of Applied Mathematics Paperback – Nov.\n\n16 by Bernard Friedman (Author) out of 5 stars 4 ratings. See all 5 formats and editions Hide other formats and editions. Amazon Price New from Used from Reviews: 4. This course is an introduction to discrete applied mathematics. Topics include probability, counting, linear programming, number-theoretic algorithms, sorting, data compression, and error-correcting codes.\n\nThis is a Communication Intensive in the Major (CI-M) course, and thus includes a writing component. Hints Solutions. The Principles of Mathematics Sets include a student text and the teacher guide. Supplies suggested for this course are the student textbook and workbook, binder with notebook paper, abacus, blank index cards, calculator, graph paper, compass, measuring tape with Metric and US measure, ruler with Metric and US measure, and a protractor.\n\nKak and Malcolm Slaney, Principles of Computerized Tomographic Imaging, Society of Industrial and Applied Mathematics, Electronic copy Each chapter of this book is available as an Adobe PDF File.\n\nFree readers for most computer platforms are available from Adobe. The table of contents for the book and the PDF files are available here. This book is primarily about the principles that one uses to solve problems in applied mathematics.\n\nIt is written for beginning graduate students in applied mathematics, science, and engineering, and is appropriate as a one-year course in applied mathematical techniques. James P. Keener, Principles Of Applied Mathematics: Transformation And Approximation, Second Edition, Westview Press Software I will use Mathematica throughout the course to demonstrate some key concepts.\n\nPrinciples of Applied Mathematics provides a comprehensive look at how classical methods are used in many fields and contexts. Updated to reflect developments of the last twenty years, it shows how two areas of classical applied mathematics—spectral theory of operators and asymptotic analysis—are useful for solving a wide range of applied science problems/5(12).\n\nThe Principles of Mathematics curriculum integrates a biblical worldview with mathematics. Year 1 of a 2-year course (year 2 sold-separately), this text is designed to give students an academic and spiritual mathematical foundation through building thinking and problem-solving skills while also teaching them how a biblical worldview affects their approach to mathematical concepts.\n\nThe Principles of Mathematics (PoM) is a book by Bertrand Russell, in which the author presented his famous paradox and argued his thesis that mathematics and logic are identical. The book presents a view of the foundations of mathematics and has become a classic reference.\n\nIt reported on developments by Giuseppe Peano, Mario Pieri, Richard Dedekind, Georg Cantor, and others. It will be an excellent way to just look, open, and check out the book Green's Functions And Linear Differential Equations: Theory, Applications, And Computation (Chapman & Hall/CRC Applied Mathematics & Nonlin while because time.\n\nAs understood, experience as well as skill do not constantly featured the much cash to obtain them. Principles of Mathematical Modeling (Computer Science and Applied Mathematics) by Clive Dym, Elizabeth Ivey and a great selection of related books, art and collectibles available now at Principles of Applied Mathematics provides a comprehensive look at how classical methods are used in many fields and contexts.\n\nUpdated to reflect developments of the last twenty years, it shows how two areas of classical applied mathematicsspectral theory of operators and asymptotic analysisare useful for solving a wide range of applied science problems/5.\n\n### Description Principles of Applied Mathematics PDF\n\nKak and Malcolm Slaney, Principles of Computerized Tomographic Imaging, IEEE Press, or if you prefer A. Kak and Malcolm Slaney, Principles of Computerized Tomographic Imaging, Society of Industrial and Applied Mathematics, Electronic copy Each chapter of this book is available as an Adobe PDF File.\n\nSection Applied Problems Madison College’s College Mathematics Textbook Page 2 of Chapter 1 Pre-Algebra.\n\nSection 1. 1 Calculator Use. Throughout most of human history computation has been a tedious task that was often postponed You order 15 DVD’s at \\$8 a DVD and 24 books at \\$5 a book.\n\nWhat is the cost of the order. Principles of Applied Mathematics provides a comprehensive look at how classical methods are used in many fields and contexts. Updated to reflect developments of the last twenty years, it shows how two areas of classical applied mathematics?spectral theory of operators and asymptotic analysis?are useful for solving a wide range of applied science problems/5(12).\n\nPrinciples of Mathematics, Book 1 {a Master Books review} A Christian math program. Katherine Loop's Principles of Mathematics Biblical Worldview Curriculum is a first of its kind. and there is no such thing as a separation between secular and sacred.\n\nGod is part of it all, and that applied to theology, biology, ethics, history.Your book, The Princeton Companion to Applied Mathematics, seems to have got some very high praise: a ‘tour de force,’ ‘The treasures [in this book] go on and on.’Is it a unique book, in terms of getting together lots of experts in the field?\n\nYes, I don’t think it’s been done before in applied mathematics."
] | [
null,
"https://covers.openlibrary.org/b/id/465076-M.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8753808,"math_prob":0.60087675,"size":12013,"snap":"2021-43-2021-49","text_gpt3_token_len":2496,"char_repetition_ratio":0.18685986,"word_repetition_ratio":0.27334467,"special_character_ratio":0.203696,"punctuation_ratio":0.12032994,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9537524,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-05T16:28:23Z\",\"WARC-Record-ID\":\"<urn:uuid:9dd79ff6-bc2a-4e32-9143-17a5fd6bbfa2>\",\"Content-Length\":\"23996\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d3e41f39-fcf8-49bb-957e-f0a1849406a5>\",\"WARC-Concurrent-To\":\"<urn:uuid:3e57a377-0037-47f6-a97d-b7494c1d3595>\",\"WARC-IP-Address\":\"172.67.135.246\",\"WARC-Target-URI\":\"https://cefymaluvawukaso.jikishinkobudo.com/principles-of-applied-mathematics-book-27623tq.php\",\"WARC-Payload-Digest\":\"sha1:FZV4LW6TW7UULYZ5FFWZSTA2XHDKTK73\",\"WARC-Block-Digest\":\"sha1:Z37D5HICG4BKUEW2CP6REJ76AIGO3OQ6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363215.8_warc_CC-MAIN-20211205160950-20211205190950-00494.warc.gz\"}"} |
https://www.colorhexa.com/108946 | [
"# #108946 Color Information\n\nIn a RGB color space, hex #108946 is composed of 6.3% red, 53.7% green and 27.5% blue. Whereas in a CMYK color space, it is composed of 88.3% cyan, 0% magenta, 48.9% yellow and 46.3% black. It has a hue angle of 146.8 degrees, a saturation of 79.1% and a lightness of 30%. #108946 color hex could be obtained by blending #20ff8c with #001300. Closest websafe color is: #009933.\n\n• R 6\n• G 54\n• B 27\nRGB color chart\n• C 88\n• M 0\n• Y 49\n• K 46\nCMYK color chart\n\n#108946 color description : Dark cyan - lime green.\n\n# #108946 Color Conversion\n\nThe hexadecimal color #108946 has RGB values of R:16, G:137, B:70 and CMYK values of C:0.88, M:0, Y:0.49, K:0.46. Its decimal value is 1083718.\n\nHex triplet RGB Decimal 108946 `#108946` 16, 137, 70 `rgb(16,137,70)` 6.3, 53.7, 27.5 `rgb(6.3%,53.7%,27.5%)` 88, 0, 49, 46 146.8°, 79.1, 30 `hsl(146.8,79.1%,30%)` 146.8°, 88.3, 53.7 009933 `#009933`\nCIE-LAB 50.029, -46.503, 27.329 10.264, 18.443, 8.813 0.274, 0.492, 18.443 50.029, 53.939, 149.558 50.029, -43.453, 39.922 42.945, -32.491, 17.894 00010000, 10001001, 01000110\n\n# Color Schemes with #108946\n\n• #108946\n``#108946` `rgb(16,137,70)``\n• #891053\n``#891053` `rgb(137,16,83)``\nComplementary Color\n• #178910\n``#178910` `rgb(23,137,16)``\n• #108946\n``#108946` `rgb(16,137,70)``\n• #108983\n``#108983` `rgb(16,137,131)``\nAnalogous Color\n• #891016\n``#891016` `rgb(137,16,22)``\n• #108946\n``#108946` `rgb(16,137,70)``\n• #831089\n``#831089` `rgb(131,16,137)``\nSplit Complementary Color\n• #894610\n``#894610` `rgb(137,70,16)``\n• #108946\n``#108946` `rgb(16,137,70)``\n• #461089\n``#461089` `rgb(70,16,137)``\n• #538910\n``#538910` `rgb(83,137,16)``\n• #108946\n``#108946` `rgb(16,137,70)``\n• #461089\n``#461089` `rgb(70,16,137)``\n• #891053\n``#891053` `rgb(137,16,83)``\n• #084523\n``#084523` `rgb(8,69,35)``\n• #0b5b2f\n``#0b5b2f` `rgb(11,91,47)``\n• #0d723a\n``#0d723a` `rgb(13,114,58)``\n• #108946\n``#108946` `rgb(16,137,70)``\n• #13a052\n``#13a052` `rgb(19,160,82)``\n• #15b75d\n``#15b75d` `rgb(21,183,93)``\n• #18ce69\n``#18ce69` `rgb(24,206,105)``\nMonochromatic Color\n\n# Alternatives to #108946\n\nBelow, you can see some colors close to #108946. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #108928\n``#108928` `rgb(16,137,40)``\n• #108932\n``#108932` `rgb(16,137,50)``\n• #10893c\n``#10893c` `rgb(16,137,60)``\n• #108946\n``#108946` `rgb(16,137,70)``\n• #108950\n``#108950` `rgb(16,137,80)``\n• #10895a\n``#10895a` `rgb(16,137,90)``\n• #108964\n``#108964` `rgb(16,137,100)``\nSimilar Colors\n\n# #108946 Preview\n\nThis text has a font color of #108946.\n\n``<span style=\"color:#108946;\">Text here</span>``\n#108946 background color\n\nThis paragraph has a background color of #108946.\n\n``<p style=\"background-color:#108946;\">Content here</p>``\n#108946 border color\n\nThis element has a border color of #108946.\n\n``<div style=\"border:1px solid #108946;\">Content here</div>``\nCSS codes\n``.text {color:#108946;}``\n``.background {background-color:#108946;}``\n``.border {border:1px solid #108946;}``\n\n# Shades and Tints of #108946\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. In this example, #020e07 is the darkest color, while #fbfffd is the lightest one.\n\n• #020e07\n``#020e07` `rgb(2,14,7)``\n• #042010\n``#042010` `rgb(4,32,16)``\n• #063119\n``#063119` `rgb(6,49,25)``\n• #084322\n``#084322` `rgb(8,67,34)``\n• #0a542b\n``#0a542b` `rgb(10,84,43)``\n• #0c6634\n``#0c6634` `rgb(12,102,52)``\n• #0e773d\n``#0e773d` `rgb(14,119,61)``\n• #108946\n``#108946` `rgb(16,137,70)``\n• #129b4f\n``#129b4f` `rgb(18,155,79)``\n• #14ac58\n``#14ac58` `rgb(20,172,88)``\n• #16be61\n``#16be61` `rgb(22,190,97)``\n• #18cf6a\n``#18cf6a` `rgb(24,207,106)``\n• #1ae173\n``#1ae173` `rgb(26,225,115)``\n• #29e67d\n``#29e67d` `rgb(41,230,125)``\n• #3ae888\n``#3ae888` `rgb(58,232,136)``\n• #4cea92\n``#4cea92` `rgb(76,234,146)``\n• #5dec9d\n``#5dec9d` `rgb(93,236,157)``\n• #6feea8\n``#6feea8` `rgb(111,238,168)``\n• #81f0b2\n``#81f0b2` `rgb(129,240,178)``\n• #92f2bd\n``#92f2bd` `rgb(146,242,189)``\n• #a4f4c8\n``#a4f4c8` `rgb(164,244,200)``\n• #b5f6d2\n``#b5f6d2` `rgb(181,246,210)``\n• #c7f8dd\n``#c7f8dd` `rgb(199,248,221)``\n• #d8fae8\n``#d8fae8` `rgb(216,250,232)``\n• #eafdf2\n``#eafdf2` `rgb(234,253,242)``\n• #fbfffd\n``#fbfffd` `rgb(251,255,253)``\nTint Color Variation\n\n# Tones of #108946\n\nA tone is produced by adding gray to any pure hue. In this case, #4b4e4c is the less saturated color, while #049545 is the most saturated one.\n\n• #4b4e4c\n``#4b4e4c` `rgb(75,78,76)``\n• #45544c\n``#45544c` `rgb(69,84,76)``\n• #3f5a4b\n``#3f5a4b` `rgb(63,90,75)``\n• #39604a\n``#39604a` `rgb(57,96,74)``\n• #33664a\n``#33664a` `rgb(51,102,74)``\n• #2d6c49\n``#2d6c49` `rgb(45,108,73)``\n• #287149\n``#287149` `rgb(40,113,73)``\n• #227748\n``#227748` `rgb(34,119,72)``\n• #1c7d47\n``#1c7d47` `rgb(28,125,71)``\n• #168347\n``#168347` `rgb(22,131,71)``\n• #108946\n``#108946` `rgb(16,137,70)``\n• #0a8f45\n``#0a8f45` `rgb(10,143,69)``\n• #049545\n``#049545` `rgb(4,149,69)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #108946 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.56508136,"math_prob":0.6293131,"size":3690,"snap":"2020-34-2020-40","text_gpt3_token_len":1614,"char_repetition_ratio":0.12398264,"word_repetition_ratio":0.011049724,"special_character_ratio":0.57344174,"punctuation_ratio":0.23581758,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9934388,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-22T05:16:01Z\",\"WARC-Record-ID\":\"<urn:uuid:c1c25399-f24b-4d9c-b61d-1073b27b2d02>\",\"Content-Length\":\"36267\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9b3d3c4b-d589-4015-b5f7-2120b166b9c9>\",\"WARC-Concurrent-To\":\"<urn:uuid:fd6acb3f-a778-44ec-883c-f2e59aa8ac01>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/108946\",\"WARC-Payload-Digest\":\"sha1:SYSI37KEHS2WQBBTFC3BBF2T7ALUVHX2\",\"WARC-Block-Digest\":\"sha1:5SUEUHW3XQ2NHAIUDDBCC5O5Z25Q6WZ7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400203096.42_warc_CC-MAIN-20200922031902-20200922061902-00380.warc.gz\"}"} |
https://www.nagwa.com/en/videos/298168761291/ | [
"# Question Video: Finding the Equation of a Straight Line Mathematics\n\nA line passing through (8, 2) makes an angle 𝜃 with the line 6𝑥 + 4𝑦 + 9 = 0, and tan 𝜃 = 15/13. What is the equation of this line?\n\n11:30\n\n### Video Transcript\n\nA line passing through eight, two makes an angle 𝜃 with the line six 𝑥 plus four 𝑦 plus nine equals zero, and tan of 𝜃 equals 15 over 13. What is the equation of this line?\n\nLet’s think about what we know. We know that line one passes through the point eight, two. We know that the equation for our second line is six 𝑥 plus four 𝑦 plus nine equals zero. We know that these two lines intersect at the angle 𝜃. And we should remember something about the tangent of this angle. It’s equal to the absolute value of the slope of line one minus the slope of line two over one plus the slope of line one times the slope of line two. And how many of these values do we already know? We know that tan of 𝜃 equals 15 over 13. We don’t know the slope of the second line, but we can find it using the information we were given.\n\nThis means we will be able to solve to find 𝑚 one, the slope of the first line. If we find the slope 𝑚 one, because we were already given a point, we’ll have the slope of line one and a point that line one passes through, which will allow us to find the equation of this line, line one. So let’s get started. The tan of 𝜃 equals the absolute value of 𝑚 one minus 𝑚 two over one plus 𝑚 one times 𝑚 two. The tan of 𝜃 equals 15 over 13. Before we can plug in any other information, we need to solve for the slope of line two.\n\nTo isolate the slope easily from this equation, we would prefer to rewrite it in slope intercept form, which is 𝑦 equals 𝑚𝑥 plus 𝑏. And in this form, 𝑚 is the slope. To find this form, we need to get 𝑦 by itself. And that means we need to subtract six 𝑥 and nine from both sides of the equation. When we do that, we’ll be left with four 𝑦 equals negative six 𝑥 minus nine. But we need 𝑦 completely by itself. Its coefficient needs to be one. And so we divide everything by four. Four 𝑦 over four equals 𝑦. Negative six over four 𝑥 can be simplified to negative three-halves 𝑥 minus nine-fourths. The slope of line two is then negative three-halves. And so we’ll say 𝑚 two equals negative three-halves.\n\nAnd so we plug in negative three-halves into our equation everywhere we see 𝑚 two. At this point, we can simplify our numerator. We’re taking 𝑚 one and we’re subtracting negative three-halves. We simplify that to say 𝑚 one plus three-halves. And then our denominator will be one minus three-halves times 𝑚 one. Because we’re dealing with absolute value, we will be dealing with two different solutions, a positive solution and a negative solution. The positive solution, 15 over 13, equals 𝑚 one plus three-halves over one minus three-halves 𝑚 one. And the negative solution, negative 15 over 13, equals 𝑚 one plus three-halves over one minus three-halves 𝑚 one.\n\nLet’s focus on the positive solution first. We first need to cross multiply the numerators and the denominators, which will be the numerator 15 multiplied by the denominator one minus three-halves 𝑚 one is equal to the denominator 13 multiplied by the numerator 𝑚 one plus three-halves. We distribute the multiplication across the brackets. 15 times one is 15. 15 times negative three-halves 𝑚 one is negative forty-five halves 𝑚 one, which is equal to 13 times 𝑚 one, 13 𝑚 one, plus 13 times three-halves, 39 over two. To solve for 𝑚 one, we need to get 𝑚 one on the same side of the equation. And so we subtract 13 𝑚 one from both sides of the equation. Negative 45 halves 𝑚 one minus 13 𝑚 one will equal negative seventy-one halves 𝑚 one.\n\nWe also want to move this positive 15 to the other side of the equation. And so we subtract 15 from both sides. On the left, 15 minus 15 equals zero and thirty-nine halves minus 15 equals nine-halves. Since we have a denominator of two on both sides of the equation, we can multiply both sides by two, which will leave us with negative 71 𝑚 one equals nine. To find 𝑚 one, we divide both sides of the equation by negative 71. And we see that 𝑚 one equals negative nine over 71. This is one possible option for the slope of line one. But because we had an equation with absolute value, we need to also consider the negative solution.\n\nTo deal with negative 15 13ths, we can say negative 15 over 13, and we’ll follow the same cross-multiplication procedure. Multiply the numerator, negative 15, by the denominator, one minus three-halves 𝑚 one. And then we multiplied the denominator, 13, by the numerator, 𝑚 one plus three-halves. First, we distribute the negative 15 across the brackets, which gives us negative 15. Negative 15 times negative three-halves 𝑚 one is positive forty-five halves 𝑚 one. We’ll distribute the 13 over the 𝑚 one and the three-halves, which is the same as the first time, 13𝑚 one plus thirty-nine halves.\n\nAgain, to solve for 𝑚 one, we’ll need to get them both on the same side of the equation. And so we subtract 13𝑚 one from both sides. Be careful here; this time, we’re subtracting 13𝑚 one from positive 45 over two 𝑚 one, instead of negative 45 over two 𝑚 one. Forty-five halves minus 13 equals nineteen-halves. We have nineteen-halves 𝑚 one. This time we have a negative 15 on the left. And so we add 15 to both sides. Negative 15 plus 15 equals zero. Thirty-nine halves plus 15 equals sixty-nine halves.\n\nAgain, we’ll multiply the entire equation by two, which leaves us with 19𝑚 one equals 69. And we divide both sides of the equation by 19 to see that 𝑚 one is also equal to 69 over 19. So far, we can say that line one has a slope of either negative nine over 71 or 69 over 19. But we’re looking for an equation for this line. Since we have the slope of line one and a point for line one, we can use the point-slope formula to find the equations for this line. This formula says 𝑦 minus 𝑦 one equals 𝑚 times 𝑥 minus 𝑥 one where 𝑥 one, 𝑦 one is your point. And of course, 𝑚 is your slope. Our point is eight, two. And we have two different slopes, which means we’ll have to use this formula twice, the first time for 𝑚 equals negative nine over 71 and the second time for 69 over 19.\n\nWith our point in our slope, we can say 𝑦 minus two equals negative nine over 71 times 𝑥 minus eight. We distribute our negative nine over 71. And we get 𝑦 minus two is equal to negative nine over 71𝑥 plus 72 over 71. From there, we add two to both sides. And we’ll get 𝑦 equals negative nine over 71𝑥 plus 214 over 71. This is one equation for line one written in slope intercept form. Sometimes it’s useful to not have a fraction in the denominator. And so we could rearrange this equation. By multiplying everything by 71, we would then have 71𝑦 equals negative nine 𝑥 plus 214. The equation for line two was given to us as an equation set equal to zero. And so we could also subtract 71𝑦 from both sides. And we could have the equation zero equals negative nine 𝑥 minus 71𝑦 plus 214.\n\nIn this case, we have negative values as our coefficients for 𝑥 and 𝑦. If we didn’t want that, we could rearrange it by multiplying the equation through by negative one to give us positive nine 𝑥 plus 71𝑦 minus 214. All four of these formats are valid ways of expressing the equation of this line. You don’t need all four, but it’s helpful to know how to rearrange them to find equivalent expressions of the same line. We now do this process one more time using the slope of 69 over 19. The point-slope formula gives us 𝑦 minus two equals 69 over 19 times 𝑥 minus eight. We distribute the 69 over 19 which gives us 𝑦 minus two equals 69 over 19𝑥 minus 552 over 19. And then we add two to both sides. And then we have 𝑦 equals 69 over 19𝑥 minus 514 over 19.\n\nThis is the slope-intercept form of the second option for line one. If we multiply that whole equation by 19, we would have the equivalent equation: 19𝑦 equals 69𝑥 minus 514. And if we subtracted 19𝑦 from both sides, we would have the equivalent equation zero equals 69𝑥 minus 19𝑦 minus 514. If we multiplied that equation by negative one, we would get zero equals negative 69𝑥 plus 19𝑦 plus 514. What all this information tells us is that there are two lines that pass through the point eight, two and make the angle tan of 𝜃 equals 15 over 13 with the line six 𝑥 plus four 𝑦 plus nine equals zero. So you just need to select one equation from each column and you’ll have an equation for the two lines that fit this criteria."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9056473,"math_prob":0.99911296,"size":8248,"snap":"2023-40-2023-50","text_gpt3_token_len":2212,"char_repetition_ratio":0.19347404,"word_repetition_ratio":0.08640406,"special_character_ratio":0.24599902,"punctuation_ratio":0.09607951,"nsfw_num_words":3,"has_unicode_error":false,"math_prob_llama3":0.9998535,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-28T01:47:17Z\",\"WARC-Record-ID\":\"<urn:uuid:83c4b1ac-f434-4a04-b494-242d6b6ed463>\",\"Content-Length\":\"50552\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:96fc03c2-bcf1-4895-96e4-b43401882e88>\",\"WARC-Concurrent-To\":\"<urn:uuid:55405730-12df-4cf1-a935-cb13aae630bb>\",\"WARC-IP-Address\":\"104.26.15.217\",\"WARC-Target-URI\":\"https://www.nagwa.com/en/videos/298168761291/\",\"WARC-Payload-Digest\":\"sha1:BF3XYW4OV74TV5L5AK56TPZA4LLXHGSQ\",\"WARC-Block-Digest\":\"sha1:RMQBZHU5GVUPSHGTGMUY3TYFVE3SAR6I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510334.9_warc_CC-MAIN-20230927235044-20230928025044-00295.warc.gz\"}"} |
https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Thermistor.html | [
"# Thermistor\n\nType",
null,
"Negative temperature coefficient (NTC) thermistor, bead type, insulated wires Passive Electric resistance",
null,
"Thermistor symbol\n\nA thermistor is a type of resistor whose resistance is dependent on temperature, more so than in standard resistors. The word is a portmanteau of thermal and resistor. Thermistors are widely used as inrush current limiter, temperature sensors (Negative Temperature Coefficient or NTC type typically), self-resetting overcurrent protectors, and self-regulating heating elements. (Positive Temperature Coefficient or PTC type typically).\n\nThermistors are of two opposite fundamental types:\n\n• With NTC, resistance decreases as temperature rises to protect against inrush overvoltage conditions. Commonly installed parallel in a circuit. As current sink.\n• With PTC, resistance increases as temperature rises to protect against overcurrent conditions. Commonly installed series in a circuit. As resetteable fuse.\n\nThermistors differ from resistance temperature detectors (RTDs) in that the material used in a thermistor is generally a ceramic or polymer, while RTDs use pure metals. The temperature response is also different; RTDs are useful over larger temperature ranges, while thermistors typically achieve a greater precision within a limited temperature range, typically −90 °C to 130 °C.\n\n## Basic operation\n\nAssuming, as a first-order approximation, that the relationship between resistance and temperature is linear, then:",
null,
"where",
null,
", change in resistance",
null,
", change in temperature",
null,
", first-order temperature coefficient of resistance\n\nThermistors can be classified into two types, depending on the classification of",
null,
". If",
null,
"is positive, the resistance increases with increasing temperature, and the device is called a positive temperature coefficient (PTC) thermistor, or posistor. If",
null,
"is negative, the resistance decreases with increasing temperature, and the device is called a negative temperature coefficient (NTC) thermistor. Resistors that are not thermistors are designed to have a",
null,
"as close to 0 as possible, so that their resistance remains nearly constant over a wide temperature range.\n\nInstead of the temperature coefficient k, sometimes the temperature coefficient of resistance",
null,
"(alpha sub T) is used. It is defined as",
null,
"This",
null,
"coefficient should not be confused with the",
null,
"parameter below.\n\n## Steinhart–Hart equation\n\nIn practice, the linear approximation (above) works only over a small temperature range. For accurate temperature measurements, the resistance/temperature curve of the device must be described in more detail. The Steinhart–Hart equation is a widely used third-order approximation:",
null,
"where a, b and c are called the Steinhart–Hart parameters, and must be specified for each device. T is the absolute temperature and R is the resistance. To give resistance as a function of temperature, the above can be rearranged into:",
null,
"where",
null,
"The error in the Steinhart–Hart equation is generally less than 0.02 °C in the measurement of temperature over a 200 °C range. As an example, typical values for a thermistor with a resistance of 3 kΩ at room temperature (25 °C = 298.15 K) are:",
null,
"## B or β parameter equation\n\nNTC thermistors can also be characterised with the B (or β) parameter equation, which is essentially the Steinhart–Hart equation with",
null,
",",
null,
"and",
null,
",",
null,
"Where the temperatures are in kelvin and R0 is the resistance at temperature T0 (25 °C = 298.15 K). Solving for R yields:",
null,
"or, alternatively,",
null,
"where",
null,
".\n\nThis can be solved for the temperature:",
null,
"The B-parameter equation can also be written as",
null,
". This can be used to convert the function of resistance vs. temperature of a thermistor into a linear function of",
null,
"vs.",
null,
". The average slope of this function will then yield an estimate of the value of the B parameter.\n\n## Conduction model\n\n### NTC\n\nMany NTC thermistors are made from a pressed disc, rod, plate, bead or cast chip of semiconducting material such as sintered metal oxides. They work because raising the temperature of a semiconductor increases the number of active charge carriers - it promotes them into the conduction band. The more charge carriers that are available, the more current a material can conduct. In certain materials like ferric oxide (Fe2O3) with titanium (Ti) doping an n-type semiconductor is formed and the charge carriers are electrons. In materials such as nickel oxide (NiO) with lithium (Li) doping a p-type semiconductor is created where holes are the charge carriers.\n\nThis is described in the formula:",
null,
"",
null,
"= electric current (amperes)",
null,
"= density of charge carriers (count/m³)",
null,
"= cross-sectional area of the material (m²)",
null,
"= drift velocity of electrons (m/s)",
null,
"= charge of an electron (",
null,
"coulomb)\n\nOver large changes in temperature, calibration is necessary. Over small changes in temperature, if the right semiconductor is used, the resistance of the material is linearly proportional to the temperature. There are many different semiconducting thermistors with a range from about 0.01 kelvin to 2,000 kelvins (−273.14 °C to 1,700 °C).\n\n### PTC\n\nMost PTC thermistors are made from doped polycrystalline ceramic (containing barium titanate (BaTiO3) and other compounds) which have the property that their resistance rises suddenly at a certain critical temperature. Barium titanate is ferroelectric and its dielectric constant varies with temperature. Below the Curie point temperature, the high dielectric constant prevents the formation of potential barriers between the crystal grains, leading to a low resistance. In this region the device has a small negative temperature coefficient. At the Curie point temperature, the dielectric constant drops sufficiently to allow the formation of potential barriers at the grain boundaries, and the resistance increases sharply with temperature. At even higher temperatures, the material reverts to NTC behaviour.\n\nAnother type of thermistor is a silistor, a thermally sensitive silicon resistor. Silistors employ silicon as the semiconductive component material. Unlike ceramic PTC thermistors, silistors have an almost linear resistance-temperature characteristic.\n\nBarium titanate thermistors can be used as self-controlled heaters; for a given voltage, the ceramic will heat to a certain temperature, but the power used will depend on the heat loss from the ceramic.\n\nThe dynamics of PTC thermistors being powered also is extremely useful. When first connected to a voltage source, a large current corresponding to the low, cold, resistance flows, but as the thermistor self-heats, the current is reduced until a limiting current (and corresponding peak device temperature) is reached. The current-limiting effect can replace fuses. They are also used in the degaussing circuits of many CRT monitors and televisions where the degaussing coil only has to be connected in series with an appropriately chosen thermistor; a particular advantage is that the current decrease is smooth, producing optimum degausing effect. Improved degaussing circuits have auxiliary heating elements to heat the thermistor further (and reduce the final current) or timed relays to disconnect the degaussing circuit entirely after it has operated.\n\nAnother type of PTC thermistor is the polymer PTC, which is sold under brand names such as \"Polyswitch\" \"Semifuse\", and \"Multifuse\". This consists of plastic with carbon grains embedded in it. When the plastic is cool, the carbon grains are all in contact with each other, forming a conductive path through the device. When the plastic heats up, it expands, forcing the carbon grains apart, and causing the resistance of the device to rise, which then causes increased heating and rapid resistance increase. Like the BaTiO3 thermistor, this device has a highly nonlinear resistance/temperature response useful for thermal or circuit control, not for temperature measurement. Besides circuit elements used to limit current, self-limiting heaters can be made in the form of wires or strips, useful for heat tracing. PTC thermistors 'latch' into a hot / low resistance state: once hot, they stay in that low resistance state, until cooled. In fact, Neil A Downie showed how you can use the effect as a simple latch/memory circuit, the effect being enhanced by using two PTC thermistors in series, with thermistor A cool, thermistor B hot, or vice versa.\n\n## Self-heating effects\n\nWhen a current flows through a thermistor, it will generate heat which will raise the temperature of the thermistor above that of its environment. If the thermistor is being used to measure the temperature of the environment, this electrical heating may introduce a significant error if a correction is not made. Alternatively, this effect itself can be exploited. It can, for example, make a sensitive air-flow device employed in a sailplane rate-of-climb instrument, the electronic variometer, or serve as a timer for a relay as was formerly done in telephone exchanges.\n\nThe electrical power input to the thermistor is just:",
null,
"where I is current and V is the voltage drop across the thermistor. This power is converted to heat, and this heat energy is transferred to the surrounding environment. The rate of transfer is well described by Newton's law of cooling:",
null,
"where T(R) is the temperature of the thermistor as a function of its resistance R,",
null,
"is the temperature of the surroundings, and K is the dissipation constant, usually expressed in units of milliwatts per degree Celsius. At equilibrium, the two rates must be equal.",
null,
"The current and voltage across the thermistor will depend on the particular circuit configuration. As a simple example, if the voltage across the thermistor is held fixed, then by Ohm's Law we have",
null,
"and the equilibrium equation can be solved for the ambient temperature as a function of the measured resistance of the thermistor:",
null,
"The dissipation constant is a measure of the thermal connection of the thermistor to its surroundings. It is generally given for the thermistor in still air, and in well-stirred oil. Typical values for a small glass bead thermistor are 1.5 mW/°C in still air and 6.0 mW/°C in stirred oil. If the temperature of the environment is known beforehand, then a thermistor may be used to measure the value of the dissipation constant. For example, the thermistor may be used as a flow rate sensor, since the dissipation constant increases with the rate of flow of a fluid past the thermistor.\n\nThe power dissipated in a thermistor is typically maintained at a very low level to ensure insignificant temperature measurement error due to self heating. However, some thermistor applications depend upon significant \"self heating\" to raise the body temperature of the thermistor well above the ambient temperature so the sensor then detects even subtle changes in the thermal conductivity of the environment. Some of these applications include liquid level detection, liquid flow measurement and air flow measurement.\n\n## Applications\n\n### PTC\n\n• As current-limiting devices for circuit protection, as replacements for fuses. Current through the device causes a small amount of resistive heating. If the current is large enough to generate more heat than the device can lose to its surroundings, the device heats up, causing its resistance to increase. This creates a self-reinforcing effect that drives the resistance upwards, therefore limiting the current.\n• As timers in the degaussing coil circuit of most CRT displays. When the display unit is initially switched on, current flows through the thermistor and degaussing coil. The coil and thermistor are intentionally sized so that the current flow will heat the thermistor to the point that the degaussing coil shuts off in under a second. For effective degaussing, it is necessary that the magnitude of the alternating magnetic field produced by the degaussing coil decreases smoothly and continuously, rather than sharply switching off or decreasing in steps; the PTC thermistor accomplishes this naturally as it heats up. A degaussing circuit using a PTC thermistor is simple, reliable (for its simplicity), and inexpensive.\n• As heater in automotive industry to provide additional heat inside cabin with diesel engine or to heat diesel in cold climatic conditions before engine injection.\n• In temperature compensated synthesizer voltage controlled oscillators.\n• In lithium battery protection circuits.\n• In an electrically actuated Wax motor to provide the heat necessary to expand the wax.\n\n### NTC\n\n• As a resistance thermometer for low-temperature measurements of the order of 10 K.\n• As an inrush current limiter device in power supply circuits, they present a higher resistance initially, which prevents large currents from flowing at turn-on, and then heat up and become much lower resistance to allow higher current flow during normal operation. These thermistors are usually much larger than measuring type thermistors, and are purposely designed for this application.\n• As sensors in automotive applications to monitor things like coolant or oil temperature inside the engine, and provide data to the ECU and to the dashboard.\n• To monitor the temperature of an incubator.\n• Thermistors are also commonly used in modern digital thermostats and to monitor the temperature of battery packs while charging.\n• Thermistors are often used in the hot ends of 3D printers; they monitor the heat produced and allow the printer's control circuitry to keep a constant temperature for melting the plastic filament.\n• In the food handling and processing industry, especially for food storage systems and food preparation. Maintaining the correct temperature is critical to prevent foodborne illness.\n• Throughout the consumer appliance industry for measuring temperature. Toasters, coffee makers, refrigerators, freezers, hair dryers, etc. all rely on thermistors for proper temperature control.\n• NTC thermistors come in bare and lugged forms, the former is for point sensing to achieve high accuracy for specific points, such as laser diode die, etc.\n\n## History\n\nThe first NTC thermistor was discovered in 1833 by Michael Faraday, who reported on the semiconducting behavior of silver sulfide. Faraday noticed that the resistance of silver sulfide decreased dramatically as temperature increased. (This was also the first documented observation of a semiconducting material.)\n\nBecause early thermistors were difficult to produce and applications for the technology were limited, commercial production of thermistors did not begin until the 1930s. A commercially viable thermistor was invented by Samuel Ruben in 1930.\n\n1. \"NTC Thermistors\". Micro-chip Technologies. 2010.\n2. Thermistor Terminology. U.S. Sensor\n3. \"Practical Temperature Measurements\". Agilent Application Note. Agilent Semiconductor.\n4. L. W Turner, ed. (1976). Electronics Engineer's Reference Book (4 ed.). Butterworths. pp. 6–29 to 6–41. ISBN 0408001682.\n5. \"PTC Thermistors and Silistors\" The Resistor Guide\n6. Downie, Neil A, The Ultimate Book of Saturday Science (Princeton 2012) ISBN 0-691-14966-6\n7. Temperature Compensated VCO\n8. Patent CN 1273423A (China)\n9. Inrush Current Limiting Power Thermistors. U.S. Sensor\n10. \"1833 - First Semiconductor Effect is Recorded\". Computer History Museum. Retrieved 24 June 2014.\n11. McGee, Thomas (1988). \"Chapter 9\". Principles and Methods of Temperature Measurement. John Wiley & Sons. p. 203.\n12. Jones, Deric P., ed. (2009). Biomedical Sensors. Momentum Press. p. 12.",
null,
"Wikimedia Commons has media related to Thermistors."
] | [
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/NTC_bead.jpg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/Thermistor.svg.png",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/27e3891e2d7b03335713bdf3afeb78e5cbe92192.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/8161ff064c8577af87c3a5465854d611f288a9a1.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/e61e7deb9c7c7b7dda762b0935e757add2acc559.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/c3c9a2c7b599b37105512c5d570edc034056dd40.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/c3c9a2c7b599b37105512c5d570edc034056dd40.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/c3c9a2c7b599b37105512c5d570edc034056dd40.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/c3c9a2c7b599b37105512c5d570edc034056dd40.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/c3c9a2c7b599b37105512c5d570edc034056dd40.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/1bae4c2d7a1f5097997a285df47dd01fb03ac1dc.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/150f502e863b6973091038a6d2224c91a0b30c6e.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/1bae4c2d7a1f5097997a285df47dd01fb03ac1dc.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/ffd2487510aa438433a2579450ab2b3d557e5edc.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/a511eed9413d7b4ad0e7180610774b108d9bc921.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/d5cda0d82543f5a1e22421a922830381a3195ad2.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/0f0b1cdd7671d57e48ddad3085c5d42c2f045d10.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/7b680b965692d70765c4836a68c89665fbf73ff2.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/cdbb655ac3692d5d0190942c2d7761d1da79c1ea.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/9272bf45b4c4e348559c0839f338ef9dc126e36d.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/d9ee918699d0cb4b8c633cc1f520a8a7a174f44a.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/a1147553d76289cd6f397d0d31d0c820a3a14933.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/140293e54049e507f60526fe91195158efce899a.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/638efe3b9e73bbd5d7ac4cda425f29d4cc314f9b.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/3e9985b1e92a6f9df1095ebf75cc19c74204d045.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/ef3b79fba6a8ecabfa017055dedd4bec0cdf71ff.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/2b8c0db000ad0b08e78e9497144fc1780e6682d2.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/e53eba7bc1a28a5968afd5afb32ded0bbd30b266.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/ee06bfe8f48b840ea1c11f78977a90f661f2375e.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/6dd21e45b232281e027f924a56fc1a6583be1179.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/535ea7fc4134a31cbe2251d9d3511374bc41be9f.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/a601995d55609f2d9f5e233e36fbe9ea26011b3b.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/7daff47fa58cdfd29dc333def748ff5fa4c923e3.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/e07b00e7fc0847fbd16391c778d65bc25c452597.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/cd253103f0876afc68ebead27a5aa9867d927467.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/2c64a0e802e887a57aa48b179b1ad89825a792c4.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/bf50e1392299dc8b5766c13df0cb7d93361d510d.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/92a13948a8fa384329875f99fabd3814d2cd08d4.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/55b9e7d7b96196b5a6a26f4349caa3ac82fd67e3.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/598f4c19c8555333a0d59c555b5c6a4b1cca451e.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/27e4c45df3312425dcea36c8a4e48a19d29bda39.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/e56a5bf450b3a04b2caeb98a20116807d86a4391.svg",
null,
"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/I/m/Commons-logo.svg.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.90378946,"math_prob":0.92073524,"size":15816,"snap":"2021-43-2021-49","text_gpt3_token_len":3361,"char_repetition_ratio":0.1811915,"word_repetition_ratio":0.00990099,"special_character_ratio":0.19859636,"punctuation_ratio":0.11176255,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9581996,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,6,null,null,null,null,null,null,null,null,null,null,null,2,null,1,null,2,null,null,null,1,null,1,null,1,null,1,null,1,null,1,null,8,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,null,null,null,null,null,null,null,null,null,null,1,null,1,null,1,null,null,null,1,null,1,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-03T15:50:47Z\",\"WARC-Record-ID\":\"<urn:uuid:2e210810-26b6-4f13-9ec5-81aa509eaebc>\",\"Content-Length\":\"89823\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a6d68092-8512-41c9-afb3-35ecf4b2cf20>\",\"WARC-Concurrent-To\":\"<urn:uuid:d37618d4-cc8b-443f-9d86-ef53fbaaf8b7>\",\"WARC-IP-Address\":\"209.94.90.1\",\"WARC-Target-URI\":\"https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Thermistor.html\",\"WARC-Payload-Digest\":\"sha1:EQOE3VHCZBJNOSHXG2MQ3X72GZVRW3QM\",\"WARC-Block-Digest\":\"sha1:OFUAUTY5WDYCEJOGZNCPNYP6TWDPKS4K\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964362891.54_warc_CC-MAIN-20211203151849-20211203181849-00314.warc.gz\"}"} |
https://www.easycalculation.com/factors-of-1348.html | [
"# Factors of 1348\n\nFactors of 1348 are 1, 2, 4, 337, 674, 1348. So, 1348 can be derived from smaller number in 3 possible ways using multiplication.\n\nFactors of 1348\n1, 2, 4, 337, 674, 1348\nFactor Pairs of 1348\n1 x 1348 = 1348\n2 x 674 = 1348\n4 x 337 = 1348\n\nFactors of 1348 are 1, 2, 4, 337, 674, 1348. So, 1348 can be derived from smaller number in 3 possible ways using multiplication."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7559391,"math_prob":0.9881144,"size":259,"snap":"2021-21-2021-25","text_gpt3_token_len":88,"char_repetition_ratio":0.11764706,"word_repetition_ratio":0.90909094,"special_character_ratio":0.41312742,"punctuation_ratio":0.25,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9925087,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-15T00:59:00Z\",\"WARC-Record-ID\":\"<urn:uuid:326f6ab2-1034-4ce1-a4eb-5d3b64623fbd>\",\"Content-Length\":\"27286\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0fbb0117-09aa-46b5-8951-d0ae1f1a51e3>\",\"WARC-Concurrent-To\":\"<urn:uuid:a77ad2f4-db59-4738-a781-abb6ea3d1bdc>\",\"WARC-IP-Address\":\"50.116.14.108\",\"WARC-Target-URI\":\"https://www.easycalculation.com/factors-of-1348.html\",\"WARC-Payload-Digest\":\"sha1:QLQHRGHXI2SHLLM3FWYQWP35SG2LFXAY\",\"WARC-Block-Digest\":\"sha1:NAZVCRE3H5CPCG3BFGWJMTLBZRK3EHF5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487614006.8_warc_CC-MAIN-20210614232115-20210615022115-00450.warc.gz\"}"} |
http://mathcentral.uregina.ca/QandQ/topics/x-intercepts | [
"",
null,
"",
null,
"Math Central - mathcentral.uregina.ca",
null,
"",
null,
"Quandaries & Queries",
null,
"",
null,
"",
null,
"",
null,
"Q & Q",
null,
"",
null,
"",
null,
"",
null,
"Topic:",
null,
"x-intercepts",
null,
"",
null,
"",
null,
"start over\n\nOne item is filed under this topic.",
null,
"",
null,
"",
null,
"",
null,
"Page1/1",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"x and y-intercepts 2002-03-10",
null,
"From A student:I have the problem- f(x)= X-5/X2+X-6 and I have to find the vertical asymtope, horizontal asymtope, x-intercept, and y-intercept. And graph the problem. I am having problems finding the y-intercept.Answered by Harley Weston.",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"Page1/1",
null,
"",
null,
"",
null,
"",
null,
"Math Central is supported by the University of Regina and The Pacific Institute for the Mathematical Sciences.",
null,
"",
null,
"",
null,
"",
null,
"about math central :: site map :: links :: notre site français"
] | [
null,
"http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif",
null,
"http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif",
null,
"http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif",
null,
"http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif",
null,
"http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif",
null,
"http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif",
null,
"http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif",
null,
"http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif",
null,
"http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif",
null,
"http://mathcentral.uregina.ca/lid/images/boxes/whiteonwhite/topleft.gif",
null,
"http://mathcentral.uregina.ca/lid/images/boxes/whiteonwhite/topright.gif",
null,
"http://mathcentral.uregina.ca/lid/QQ/images/topic.gif",
null,
"http://mathcentral.uregina.ca/images/transparent.gif",
null,
"http://mathcentral.uregina.ca/images/transparent.gif",
null,
"http://mathcentral.uregina.ca/lid/images/boxes/whiteonwhite/bottomleft.gif",
null,
"http://mathcentral.uregina.ca/lid/images/boxes/whiteonwhite/bottomright.gif",
null,
"http://mathcentral.uregina.ca/images/transparent.gif",
null,
"http://mathcentral.uregina.ca/images/transparent.gif",
null,
"http://mathcentral.uregina.ca/images/nav_but_inact_first.gif",
null,
"http://mathcentral.uregina.ca/images/nav_but_inact_previous.gif",
null,
"http://mathcentral.uregina.ca/images/nav_but_inact_next.gif",
null,
"http://mathcentral.uregina.ca/images/nav_but_inact_last.gif",
null,
"http://mathcentral.uregina.ca/images/transparent.gif",
null,
"http://mathcentral.uregina.ca/images/transparent.gif",
null,
"http://mathcentral.uregina.ca/images/transparent.gif",
null,
"http://mathcentral.uregina.ca/images/transparent.gif",
null,
"http://mathcentral.uregina.ca/images/transparent.gif",
null,
"http://mathcentral.uregina.ca/images/transparent.gif",
null,
"http://mathcentral.uregina.ca/images/transparent.gif",
null,
"http://mathcentral.uregina.ca/images/transparent.gif",
null,
"http://mathcentral.uregina.ca/images/transparent.gif",
null,
"http://mathcentral.uregina.ca/images/transparent.gif",
null,
"http://mathcentral.uregina.ca/images/transparent.gif",
null,
"http://mathcentral.uregina.ca/images/transparent.gif",
null,
"http://mathcentral.uregina.ca/images/transparent.gif",
null,
"http://mathcentral.uregina.ca/images/transparent.gif",
null,
"http://mathcentral.uregina.ca/images/transparent.gif",
null,
"http://mathcentral.uregina.ca/images/nav_but_inact_first.gif",
null,
"http://mathcentral.uregina.ca/images/nav_but_inact_previous.gif",
null,
"http://mathcentral.uregina.ca/images/nav_but_inact_next.gif",
null,
"http://mathcentral.uregina.ca/images/nav_but_inact_last.gif",
null,
"http://mathcentral.uregina.ca/images/transparent.gif",
null,
"http://mathcentral.uregina.ca/images/transparent.gif",
null,
"http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif",
null,
"http://mathcentral.uregina.ca/lid/styles/mathcentral/interior/cms.gif",
null,
"http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif",
null,
"http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.84513503,"math_prob":0.43097538,"size":334,"snap":"2019-13-2019-22","text_gpt3_token_len":101,"char_repetition_ratio":0.1969697,"word_repetition_ratio":0.071428575,"special_character_ratio":0.3233533,"punctuation_ratio":0.13235295,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95984817,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-20T02:18:59Z\",\"WARC-Record-ID\":\"<urn:uuid:a8e3d509-01aa-4688-9022-389ac24ec9b5>\",\"Content-Length\":\"13726\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aa1729b3-214b-46b9-b498-cf0983a05e5a>\",\"WARC-Concurrent-To\":\"<urn:uuid:1c31c91d-8268-4138-8dc8-9d6d48da7812>\",\"WARC-IP-Address\":\"142.3.156.43\",\"WARC-Target-URI\":\"http://mathcentral.uregina.ca/QandQ/topics/x-intercepts\",\"WARC-Payload-Digest\":\"sha1:63DENUMU4ZX62MRBUCLHA6LIZTF5BZHT\",\"WARC-Block-Digest\":\"sha1:RGBDZ3GPMLFL4TEJISELLOTJ2YRGOSH5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232255536.6_warc_CC-MAIN-20190520021654-20190520043654-00511.warc.gz\"}"} |
http://conceptmap.cfapps.io/wikipage?lang=en&name=Butler%E2%80%93Volmer_equation | [
"# Butler–Volmer equation\n\nIn electrochemistry, the Butler–Volmer equation (named after John Alfred Valentine Butler and Max Volmer), also known as Erdey-Grúz–Volmer equation, is one of the most fundamental relationships in electrochemical kinetics. It describes how the electrical current through an electrode depends on the voltage difference between the electrode and the bulk electrolyte for a simple, unimolecular redox reaction, considering that both a cathodic and an anodic reaction occur on the same electrode:\n\n## The Butler-Volmer equation\n\nThe upper graph shows the current density as function of the overpotential η . The anodic and cathodic current densities are shown as ja and jc, respectively for α=αac=0.5 and j0 =1mAcm−2 (close to values for platinum and palladium). The lower graph shows the logarithmic plot for different values of α (Tafel plot)\n\nThe Butler–Volmer equation is:\n\n$j=j_{0}\\cdot \\left\\{\\exp \\left[{\\frac {\\alpha _{\\rm {a}}zF}{RT}}(E-E_{\\rm {eq}})\\right]-\\exp \\left[-{\\frac {\\alpha _{\\rm {c}}zF}{RT}}(E-E_{\\rm {eq}})\\right]\\right\\}$\n\nor in a more compact form:\n\n$j=j_{0}\\cdot \\left\\{\\exp \\left[{\\frac {\\alpha _{a}zF\\eta }{RT}}\\right]-\\exp \\left[-{\\frac {\\alpha _{c}zF\\eta }{RT}}\\right]\\right\\}$\n\nwhere:\n\n• $j$ : electrode current density, A/m2 (defined as j = I/S)\n• $j_{0}$ : exchange current density, A/m2\n• $E$ : electrode potential, V\n• $E_{eq}$ : equilibrium potential, V\n• $T$ : absolute temperature, K\n• $z$ : number of electrons involved in the electrode reaction\n• $F$ : Faraday constant\n• $R$ : universal gas constant\n• $\\alpha _{\\rm {c}}$ : so-called cathodic charge transfer coefficient, dimensionless\n• $\\alpha _{\\rm {a}}$ : so-called anodic charge transfer coefficient, dimensionless\n• $\\eta$ : activation overpotential (defined as $\\eta =E-E_{eq}$ ).\n\nThe right hand figure shows plots valid for $\\alpha _{a}=1-\\alpha _{c}$ .\n\n### The limiting cases\n\nThere are two limiting cases of the Butler–Volmer equation:\n\n• the low overpotential region (called \"polarization resistance\", i.e., when E ≈ Eeq), where the Butler–Volmer equation simplifies to:\n$j=j_{0}{\\frac {zF}{RT}}(E-E_{\\rm {eq}})$ ;\n• the high overpotential region, where the Butler–Volmer equation simplifies to the Tafel equation. When $(E-E_{\\rm {eq}})>>0$ , the first term dominates, and when $(E-E_{\\rm {eq}})<<0$ , the second term dominates.\n$E-E_{\\rm {eq}}=a_{\\rm {c}}-b_{\\rm {c}}\\log j$ for a cathodic reaction, when E << Eeq, or\n$E-E_{\\rm {eq}}=a+b_{\\rm {a}}\\log j$ for an anodic reaction, when E >> Eeq\n\nwhere $a$ and $b$ are constants (for a given reaction and temperature) and are called the Tafel equation constants. The theoretical values of the Tafel equation constants are different for the cathodic and anodic processes. However, the Tafel slope $b$ can be defined as:\n\n$b=\\left({\\frac {\\partial E}{\\partial \\ln |I_{\\rm {F}}|}}\\right)_{c_{i},T,p}$\n\nwhere $I_{\\rm {F}}$ is the faradaic current, expressed as $I_{\\rm {F}}=I_{\\rm {c}}+I_{\\rm {a}}$ , being $I_{\\rm {c}}$ and $I_{\\rm {a}}$ the cathodic and anodic partial currents, respectively.\n\n## The extended Butler-Volmer equation\n\nThe more general form of the Butler–Volmer equation, applicable to the mass transfer-influenced conditions, can be written as:\n\n$j=j_{0}\\left\\{{\\frac {c_{\\rm {o}}(0,t)}{c_{\\rm {o}}^{*}}}\\exp \\left[{\\frac {\\alpha _{\\rm {a}}zF\\eta }{RT}}\\right]-{\\frac {c_{\\rm {r}}(0,t)}{c_{\\rm {r}}^{*}}}\\exp \\left[-{\\frac {\\alpha _{\\rm {c}}zF\\eta }{RT}}\\right]\\right\\}$\n\nwhere:\n\n• j is the current density, A/m2,\n• co and cr refer to the concentration of the species to be oxidized and to be reduced, respectively,\n• c(0,t) is the time-dependent concentration at the distance zero from the surface of the electrode.\n\nThe above form simplifies to the conventional one (shown at the top of the article) when the concentration of the electroactive species at the surface is equal to that in the bulk.\n\nThere are two rates which determine the current-voltage relationship for an electrode. First is the rate of the chemical reaction at the electrode, which consumes reactants and produces products. This is known as the charge transfer rate. The second is the rate at which reactants are provided, and products removed, from the electrode region by various processes including diffusion, migration, and convection. The latter is known as the mass-transfer rate[Note 1] . These two rates determine the concentrations of the reactants and products at the electrode, which are in turn determined by them. The slowest of these rates will determine the overall rate of the process.\n\nThe simple Butler–Volmer equation assumes that the concentrations at the electrode are practically equal to the concentrations in the bulk electrolyte, allowing the current to be expressed as a function of potential only. In other words, it assumes that the mass transfer rate is much greater than the reaction rate, and that the reaction is dominated by the slower chemical reaction rate. Despite this limitation, the utility of the Butler–Volmer equation in electrochemistry is wide, and it is often considered to be \"central in the phenomenological electrode kinetics\".\n\nThe extended Butler–Volmer equation does not make this assumption, but rather takes the concentrations at the electrode as given, yielding a relationship in which the current is expressed as a function not only of potential, but of the given concentrations as well. The mass-transfer rate may be relatively small, but its only effect on the chemical reaction is through the altered (given) concentrations. In effect, the concentrations are a function of the potential as well. A full treatment, which yields the current as a function of potential only, will be expressed by the extended Butler–Volmer equation, but will require explicit inclusion of mass transfer effects in order to express the concentrations as functions of the potential.\n\n### Derivation\n\n#### General expression\n\nA plot of various Gibbs energies as a function of reaction coordinate. The reaction will proceed towards the lower energy - reducing for the blue curve, oxidizing for the red curve curve. The green curve illustrates equilibrium.\n\nThe following derivation of the extended Butler–Volmer equation is adapted from that of Bard and Faulkner and Newman and Thomas-Alyea. For a simple unimolecular, one-step reaction of the form:\n\nO+ne → R\n\nThe forward and backward reaction rates (vf and vb) and, from Faraday's laws of electrolysis, the associated electrical current densities (j), may be written as:\n\n$v_{f}=k_{f}c_{o}=j_{f}/nF$\n$v_{b}=k_{b}c_{r}=j_{b}/nF$\n\nwhere kf and kb are the reaction rate constants, with units of frequency (1/time) and co and cr are the surface concentrations (mol/area) of the oxidized and reduced molecules, respectively (written as co(0,t) and cr(0,t) in the previous section). The net rate of reaction v and net current density j are then:[Note 2]\n\n$v=v_{b}-v_{f}={\\frac {j_{b}-j_{f}}{nF}}={\\frac {j}{nF}}$\n\nThe figure above plots various Gibbs energy curves as a function of the reaction coordinate ξ. The reaction coordinate is roughly a measure of distance, with the body of the electrode being on the left, the bulk solution being on the right. The blue energy curve shows the increase in Gibbs energy for an oxidized molecule as it moves closer to the surface of the electrode when no potential is applied. The black energy curve shows the increase in Gibbs energy as a reduced molecule moves closer to the electrode. The two energy curves intersect at $\\Delta G^{*}(0)$ . Applying a potential E to the electrode will move the energy curve downward[Note 3] (to the red curve) by nFE and the intersection point will move to $\\Delta G^{*}(E)$ . $\\Delta ^{\\ddagger }G_{c}$ and $\\Delta ^{\\ddagger }G_{a}$ are the activation energies (energy barriers) to be overcome by the oxidized and reduced species respectively for a general E, while $\\Delta ^{\\ddagger }G_{oc}$ and $\\Delta ^{\\ddagger }G_{oa}$ are the activation energies for E=0. [Note 4]\n\nAssume that the rate constants are well approximated by an Arrhenius equation,\n\n$k_{f}=A_{f}\\exp[-\\Delta ^{\\ddagger }G_{c}/RT]$\n$k_{b}=A_{b}\\exp[-\\Delta ^{\\ddagger }G_{a}/RT]$\n\nwhere the Af and Ab are constants such that Af co = Ab cr is the \"correctly oriented\" O-R collision frequency, and the exponential term (Boltzmann factor) is the fraction of those collisions with sufficient energy to overcome the barrier and react.\n\nAssuming that the energy curves are practically linear in the transition region, they may be represented there by:\n\n $\\Delta G=S_{c}\\xi +K_{c}$",
null,
"(blue curve) $\\Delta G=S_{c}\\xi +K_{c}-nFE$",
null,
"(red curve) $\\Delta G=-S_{a}\\xi +K_{a}$",
null,
"(black curve)\n\nThe charge transfer coefficient for this simple case is equivalent to the symmetry factor, and can be expressed in terms of the slopes of the energy curves:\n\n$\\alpha ={\\frac {S_{c}}{S_{a}+S_{c}}}$\n\nIt follows that:\n\n$\\Delta ^{\\ddagger }G_{c}=\\Delta ^{\\ddagger }G_{oc}+\\alpha nFE$\n$\\Delta ^{\\ddagger }G_{a}=\\Delta ^{\\ddagger }G_{oa}-(1-\\alpha )nFE$\n\nFor conciseness, define:\n\n$f_{\\alpha }=\\alpha nF/RT$\n$f_{\\beta }=(1-\\alpha )nF/RT$\n$f=f_{\\alpha }+f_{\\beta }=nF/RT$\n\nThe rate constants can now be expressed as:\n\n$k_{f}=k_{fo}e^{-f_{\\alpha }E}$\n$k_{b}=k_{bo}e^{f_{\\beta }E}$\n\nwhere the rate constants at zero potential are:\n\n$k_{fo}=A_{f}e^{-\\Delta ^{\\ddagger }G_{oc}/RT}$\n$k_{bo}=A_{b}e^{-\\Delta ^{\\ddagger }G_{oa}/RT}$\n\nThe current density j as a function of applied potential E may now be written::§ 8.3\n\n$j=nF(c_{r}k_{bo}e^{f_{\\beta }E}-c_{o}k_{fo}e^{-f_{\\alpha }E})$\n\n#### Expression in terms of the equilibrium potential\n\nAt a certain voltage Ee, equilibrium will attain and the forward and backward rates (vf and vb) will be equal. This is represented by the green curve in the above figure. The equilibrium rate constants will be written as kfe and kbe, and the equilibrium concentrations will be written coe and cre. The equilibrium currents (jce and jae) will be equal and are written as jo, which is known as the exchange current density.\n\n$v_{fe}=k_{fe}c_{oe}=j_{o}/nF$\n$v_{be}=k_{be}c_{re}=j_{o}/nF$\n\nNote that the net current density at equilibrium will be zero. The equilibrium rate constants are then:\n\n$k_{fe}=k_{fo}e^{-f_{\\alpha }E_{e}}$\n$k_{be}=k_{bo}e^{f_{\\beta }E_{e}}$\n\nSolving the above for kfo and kbo in terms of the equilibrium concentrations coe and cre and the exchange current density jo, the current density j as a function of applied potential E may now be written::§ 8.3\n\n$j=j_{o}\\left({\\frac {c_{r}}{c_{re}}}e^{f_{\\beta }(E-E_{e})}-{\\frac {c_{o}}{c_{oe}}}e^{-f_{\\alpha }(E-E_{e})}\\right)$\n\nAssuming that equilibrium holds in the bulk solution, with concentrations $c_{o}^{*}$ and $c_{r}^{*}$ , it follows that $c_{oe}=c_{o}^{*}$ and $c_{re}=c_{r}^{*}$ , and the above expression for the current density j is then the Butler–Volmer equation. Note that E-Ee is also known as η, the activation overpotential.\n\n#### Expression in terms of the formal potential\n\nFor the simple reaction, the change in Gibbs energy is:[Note 5]\n\n$\\Delta G=\\Delta G_{o}-\\Delta G_{r}=(\\Delta G_{o}^{o}-\\Delta G_{r}^{o})+RT\\ln \\left({\\frac {a_{oe}}{a_{re}}}\\right)$\n\nwhere aoe and are are the activities at equilibrium. The activities a are related to the concentrations c by a=γc where γ is the activity coefficient. The equilibrium potential is given by the Nernst equation:\n\n$E_{e}=-{\\frac {\\Delta G}{nF}}=E^{o}+{\\frac {RT}{nF}}\\ln \\left({\\frac {a_{oe}}{a_{re}}}\\right)$\n\nwhere $E^{o}$ is the standard potential\n\n$E^{o}=-(\\Delta G_{o}^{o}-\\Delta G_{r}^{o})/nF$\n\nDefining the formal potential::§ 2.1.6\n\n$E^{o'}=E^{o}+{\\frac {RT}{nF}}\\ln \\left({\\frac {\\gamma _{oe}}{\\gamma _{re}}}\\right)$\n\nthe equilibrium potential is then:\n\n$E_{e}=E^{o'}+{\\frac {RT}{nF}}\\ln \\left({\\frac {c_{oe}}{c_{re}}}\\right)$\n\nSubstituting this equilibrium potential into the Butler–Volmer equation yields:\n\n$j={\\frac {j_{o}}{c_{oe}^{1-\\alpha }c_{re}^{\\alpha }}}\\left(c_{r}e^{f_{\\beta }(E-E^{o'})}-c_{o}e^{-f_{\\alpha }(E-E^{o'})}\\right)$\n\nwhich may also be written in terms of the standard rate constant ko as::§ 3.3.2\n\n$j=nFk^{o}\\left(c_{r}e^{f_{\\beta }(E-E^{o'})}-c_{o}e^{-f_{\\alpha }(E-E^{o'})}\\right)$\n\nThe standard rate constant is an important descriptor of electrode behavior, independent of concentrations. It is a measure of the rate at which the system will approach equilibrium. In the limit as $k^{0}\\rightarrow 0$ , the electrode becomes an ideal polarizable electrode and will behave electrically as an open circuit (neglecting capacitance). For nearly ideal electrodes with small ko, large changes in the overpotential are required to generate a significant current. In the limit as $k^{0}\\rightarrow \\infty$ , the electrode becomes an ideal non-polarizable electrode and will behave as an electrical short. For a nearly ideal electrodes with large ko, small changes in the overpotential will generate large changes in current."
] | [
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/8ac92b9fc4fd93fbfc0d042d209fdc43b6f10b15",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/a5222e3001c1679eca7ea8b6b92f24835563a189",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/3b6281a0870c43d401bc48073d73e3eeea958c6d",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.89572376,"math_prob":0.99945337,"size":12053,"snap":"2021-04-2021-17","text_gpt3_token_len":2681,"char_repetition_ratio":0.1732094,"word_repetition_ratio":0.04255319,"special_character_ratio":0.21745624,"punctuation_ratio":0.12432675,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99997616,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-21T21:56:42Z\",\"WARC-Record-ID\":\"<urn:uuid:9850faf8-6061-44c1-b44e-fc5038a0289f>\",\"Content-Length\":\"198779\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c1c197a0-40d6-4168-b5e9-c2880fa94890>\",\"WARC-Concurrent-To\":\"<urn:uuid:1b8ba9e5-d78b-4984-9d2d-b4c2d520c435>\",\"WARC-IP-Address\":\"52.207.57.166\",\"WARC-Target-URI\":\"http://conceptmap.cfapps.io/wikipage?lang=en&name=Butler%E2%80%93Volmer_equation\",\"WARC-Payload-Digest\":\"sha1:J4WCNT6GSS36CRBRJTEPX4DAEBXRMNLP\",\"WARC-Block-Digest\":\"sha1:LW7CZC2GQKTRJ5XOVARTJF3R3BWK4L45\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703527850.55_warc_CC-MAIN-20210121194330-20210121224330-00357.warc.gz\"}"} |
https://www.binaryintellect.net/articles/925a24c7-7301-468c-b00a-a4ecb7134282.aspx | [
"",
null,
"अजपा जप आणि शांभवी मुद्रा ऑनलाईन कोर्स : श्वास, मंत्र, मुद्रा आणि ध्यान यांच्या सहाय्याने मनःशांती, एकाग्रता, चक्र संतुलन आणि कुंडलिनी जागृती. अधिक माहिती आणि आगामी तारखांसाठी येथे जा.\n\n# Understand TypeScript Data Types",
null,
"In the previous article and companion video of this series you prepared an ASP.NET Core project in Visual Studio to use TypeScript. You also developed a simple \"Hello World\" example. Now let's move ahead and learn something about the data types supported by TypeScript.\n\nOne of the strengths of TypeScript is its strongly typed nature. Unlike plain JavaScript where the data type of a variable is determined dynamically, TypeScript allows you to declare a variable with a specific data type. Some data types that are commonly used include string, number, boolean, any, and object. You can see the complete list of supported data types here. Rather than simply enumerating through the available data types I am going to develop a simple Temperature Convertor that allows you to convert temperature values between Celsius and Fahrenheit. While developing this example you will be introduced with a few basic data types supported by TypeScript along with the associated keywords and syntax.\n\nThe Temperature Convertor application that you build here looks like this:",
null,
"As you can see, there is a textbox for entering a temperature value. There are two buttons - Convert to Celsius and Convert to Fahrenheit that perform the respective conversion. The result of conversion is displayed below the buttons. A date-time stamp is also rendered at the bottom.\n\nBegin by opening the same project that you created in the previous part of this series. Then add two files to the TypeScript folder using Add New Item dialog - DataTypes.ts and DataTypes.html.",
null,
"Then open DataTypes.ts file and define an enumeration named ConversionType:\n\n```enum ConversionType {\nCelsiusToFahrenheit,\nFahrenheitToCelsius\n}```\n\nAs a C# developer you are already familiar with enumerations. TypeScript enumerations server the same purpose. The ConversionType enumeration is defined using enum keyword and contains two values - CelsiusToFahrenheit and FahrenheitToCelsius. Enumerations are basically numeric values and here they are 0 and 1 respectively. You could have also explicitly assigned numeric values to them.\n\nNext, you will write a function called convert(). The convert() function is intended to convert a temperature value from one measuring unit to another and its signature looks like this:\n\n```function convert(value: number,\nconversionType: ConversionType) {\n}```\n\nNotice the function signature carefully. The convert() function takes two parameters, value and type. You can indicate data type of the parameter using colon (:) syntax. So, value parameter is of type number and type parameter is of type ConversionType.\n\nInside, you will declare a few variables and constants. The following code shows them:\n\n```let tempInC: number;\nlet tempInF: number;\nlet message: string;\nconst timeStamp: Date = new Date();\nconst clearValue: boolean = true; ```\n\nIn order to declare a variable you use let keyword. If you used JavaScript before you are probably familiar with declaring variables using var keyword. Although you can still use var keyword in TypeScript, modern JavaScript has introduced the let keyword that does the job. The main difference between var and let is - a variable declared with var has scope of that function whereas a variable declared using let has scope of a block. You can read more about var and let here and here.\n\nSo, the above code declares tempInC, tempInF, and message. A variable's data type is specified after the colon (:). For example, data type of tempInC and tempInF is number. The code also declares two constants timeStamp and clearValue. The timeStamp constant holds the current date-time value and is used to display time-stamp on the page. The clearValue constant holds a boolean value of true and it indicates whether the textbox should be cleared upon converting a temperature value. Since both of them are constants, their values need to be assigned while declaring them.\n\nNext, you need to perform the conversion between Celsius and Fahrenheit depending on the ConversionType value passed in the convert() function. The code that does this conversion is discussed below:\n\n```if (conversionType == ConversionType.CelsiusToFahrenheit) {\ntempInC = value;\ntempInF = tempInC * 9 / 5 + 32;\nmessage = tempInC + \"\\xB0C = \" + tempInF + \" \\xB0F\";\n}\n\nif (conversionType == ConversionType.FahrenheitToCelsius) {\ntempInF = value;\ntempInC = (tempInF - 32) * 5 / 9;\nmessage = tempInF + \"\\xB0F = \" + tempInC + \" \\xB0C\";\n}```\n\nThe if statement that follows checks whether the temperature is to be converted from Celsius to Fahrenheit. If so, the temperature value is stored in tempInC variable. The next line converts the temperature into Fahrenheit equivalent using a mathematical formula and stores the result into tempInF variable. Then the message variable is assigned a message string by concatenating tempInC and tempInF values.\n\nThe second if statement is quite similar but converts a temperature value from Fahrenheit to Celsius.\n\nThe final piece of code outputs the values of message and timeStamp on the page. This code is shown below:\n\n```document.getElementById(\"msg\").innerHTML\n= \"<h2>Result : \" + message + \"</h2>\";\ndocument.getElementById(\"stamp\").innerHTML\n= \"<h3>Calculated on : \" +\ntimeStamp.toISOString() + \"</h3>\";\nif (clearValue) {\nlet tempValue: HTMLInputElement;\ntempValue = <HTMLInputElement>document\n.getElementById(\"tempValue\");\ntempValue.value = \"\";\n}```\n\nThe first line of this code grabs a DOM element whose ID is msg. and sets its innerHTML property to a message. The next line renders the date-time stamp in a DOM element with ID of stamp. The if statement that follows checks the clearValue constant and if it is true empties the textbox. Notice the type casting syntax of TypeScript used there. The getEmelemnyById() method returns HTMLElement whereas a textbox (<input> element) is represented by HTMLInputElement. So, type conversion is necessary. After obtaining a reference to HTMLInputElement, its value property is assigned an empty string.\n\nThis completes the convert() function. For the sake of clarity the complete convert() function is given below:\n\n```function convert(value: number,\nconversionType: ConversionType) {\nlet tempInC: number;\nlet tempInF: number;\nlet message: string;\nconst timeStamp: Date = new Date();\nconst clearValue: boolean = true;\n\nif (conversionType ==\nConversionType.CelsiusToFahrenheit) {\ntempInC = value;\ntempInF = tempInC * 9 / 5 + 32;\nmessage = tempInC + \"\\xB0C = \"\n+ tempInF + \" \\xB0F\";\n}\n\nif (conversionType ==\nConversionType.FahrenheitToCelsius) {\ntempInF = value;\ntempInC = (tempInF - 32) * 5 / 9;\nmessage = tempInF + \"\\xB0C = \"\n+ tempInC + \" \\xB0F\";\n}\n\ndocument.getElementById(\"msg\").innerHTML\n= \"<h2>Result : \" + message + \"</h2>\";\ndocument.getElementById(\"stamp\").innerHTML\n= \"<h3>Calculated on : \" + timeStamp.toISOString()\n+ \"</h3>\";\nif (clearValue) {\nlet tempValue: HTMLInputElement;\ntempValue = <HTMLInputElement>document.\ngetElementById(\"tempValue\");\ntempValue.value = \"\";\n}\n}```\n\nYou need to write two more functions that get called when the \"Convert to Celsius\" and \"Convert to Fahrenheit\" buttons are clicked. These functions are shown next:\n\n```function convertToCelsius() {\nlet value: number;\nlet tempValue: HTMLInputElement;\ntempValue = <HTMLInputElement>\ndocument.getElementById(\"tempValue\");\nvalue = parseInt(tempValue.value);\nconvert(value, ConversionType.FahrenheitToCelsius);\n}\n\nfunction convertToFahrenheit() {\nlet value: number;\nlet tempValue: HTMLInputElement;\ntempValue = <HTMLInputElement>\ndocument.getElementById(\"tempValue\");\nvalue = parseInt(tempValue.value);\nconvert(value, ConversionType.CelsiusToFahrenheit);\n}```\n\nThe convertToCelsius() function converts a value entered into the textbox into Celsius equivalent. Inside, the code grabs the value from the textbox and stores it in value variable. Notice the use of parseInt() to convert a string value into an integer. Then the code calls the convert() function created earlier. The value and ConversionType of FahrenheitToCelsius is passed to the convert() function.\n\nThe convertToFahrenheit() function is similar but calls the convert() function by passing ConversionType.CelsiusToFahrenheit.\n\nThis completes DataTypes.ts. If you save the file or build the project you should get DataTypes.js output file under the Output folder.",
null,
"Now let's proceed further and add some markup in the DataTypes.html file. The following markup shows the important pieces from the DataTypes.html.\n\n``` <h1>Temperature Converter</h1>\n<tr>\n<td>Temperature Value : </td>\n<td><input id=\"tempValue\" type=\"text\" /></td>\n</tr>\n<tr>\n<td colspan=\"2\">\n<button onclick=\"convertToCelsius()\">\nConvert to Celsius</button>\n<button onclick=\"convertToFahrenheit()\">\nConvert to Fahrenheit</button>\n</td>\n</tr>\n</table>\n<div id=\"msg\"></div>\n<div id=\"stamp\"></div>\n<script src=\"/TypeScript/Output/DataTypes.js\"></script>```\n\nAs you can see, there is an <input> element with ID tempValue. There are two <button> elements that trigger convertToCelsius() and convertToFahrenheit() functions when clicked.\n\nThe two <div> elements msg and stamp are used to display message and date-time stamp respectively (see earlier code).\n\nFinally, the <script> tag points to the DateTypes.js file.\n\nSave all your work and open DataTypes.html in the browser. Enter some value in the textbox and click on one of the buttons. Confirm whether the conversion is happening as expected.\n\nThat's it for now! Keep coding!!\n\nBipin Joshi is an independent software consultant and trainer by profession specializing in Microsoft web development technologies. Having embraced the Yoga way of life he is also a meditation teacher and spiritual guide to his students. He is a prolific author and writes regularly about software development and yoga on his websites. He is programming, meditating, writing, and teaching for over 27 years. To know more about his ASP.NET online courses go here. More details about his Ajapa Japa and Shambhavi Mudra online course are available here.\n\nPosted On : 06 April 2020"
] | [
null,
"https://www.binaryintellect.net/images/misc/leftcolumnspacer.png",
null,
"https://www.binaryintellect.net/articles/content/Images/T_TypeScript_DataTypes.jpg",
null,
"https://www.binaryintellect.net/articles/content/Images/T_TypeScript_DataTypes_01.png",
null,
"https://www.binaryintellect.net/articles/content/Images/T_TypeScript_DataTypes_02.png",
null,
"https://www.binaryintellect.net/articles/content/Images/T_TypeScript_DataTypes_03.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5776876,"math_prob":0.8685898,"size":9385,"snap":"2023-40-2023-50","text_gpt3_token_len":2102,"char_repetition_ratio":0.15883169,"word_repetition_ratio":0.15345454,"special_character_ratio":0.21694192,"punctuation_ratio":0.13855036,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9639911,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-28T00:09:48Z\",\"WARC-Record-ID\":\"<urn:uuid:4c544f6d-dd44-4570-9f25-25c1511a7b1e>\",\"Content-Length\":\"32555\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4ab62f93-dd51-45c5-bb67-7bce6ee6c181>\",\"WARC-Concurrent-To\":\"<urn:uuid:6b6a9bbd-9115-4ae8-ae1b-921e851aa8b5>\",\"WARC-IP-Address\":\"199.233.254.89\",\"WARC-Target-URI\":\"https://www.binaryintellect.net/articles/925a24c7-7301-468c-b00a-a4ecb7134282.aspx\",\"WARC-Payload-Digest\":\"sha1:5STFK6HLLRUI3VHFPWWBNB2KCAY5BSTM\",\"WARC-Block-Digest\":\"sha1:PJOIA3LKO7GRSI3ETV3IYXCDMDDEYBWG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510334.9_warc_CC-MAIN-20230927235044-20230928025044-00686.warc.gz\"}"} |
https://medium.com/@panuviljamaa/using-multi-arg-javascript-functions-with-map-48b896593fd6?source=---------8------------------ | [
"# Using multi-arg JavaScript functions with map()\n\n`function times (x, i, a, y=this){ return x * y;}let [ten, twenty, thirty]= [1, 2, 3].map (times, 10);let tenB = times.call(10, 1);`\n\nThe arguments ‘i’ and ‘a’ are never used, but need to be there because map() passes them in. The argument ‘y’ is not passed in by map() but map() makes its second argument to be the ‘this’ inside the function being mapped over. Since we are using ES6 default-argument-value “y=this”, the 2nd argument we pass to map, 10, will show up as the y inside the function.\n\nWhat if your function takes more than two “real”arguments, say x, y and z? You can use array destructuring to make such a function callable also by map, like this:\n\n`function manyArgs (x, i, a, [y, z]=this){ return x * y * z;}[1,2,3] .map (manyArgs, [2,3] ); // == [6, 12, 18]`"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.753393,"math_prob":0.9790472,"size":823,"snap":"2019-43-2019-47","text_gpt3_token_len":254,"char_repetition_ratio":0.13186814,"word_repetition_ratio":0.0,"special_character_ratio":0.3329283,"punctuation_ratio":0.2,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9720705,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-15T03:00:48Z\",\"WARC-Record-ID\":\"<urn:uuid:d3cdb313-a896-42c1-88c1-943d27ab6192>\",\"Content-Length\":\"84764\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:848ef366-bc05-41f8-b124-7fac815958d1>\",\"WARC-Concurrent-To\":\"<urn:uuid:cc6703d3-c5bd-488d-929b-6b30d3f041f7>\",\"WARC-IP-Address\":\"104.16.121.127\",\"WARC-Target-URI\":\"https://medium.com/@panuviljamaa/using-multi-arg-javascript-functions-with-map-48b896593fd6?source=---------8------------------\",\"WARC-Payload-Digest\":\"sha1:77VEVYTMLYT42OICCC6CAYHTWVS6RVYX\",\"WARC-Block-Digest\":\"sha1:LAZBI5PBECC4BHBCJL4ZZV6CZ2SZVMBW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668561.61_warc_CC-MAIN-20191115015509-20191115043509-00361.warc.gz\"}"} |
https://shirleydu.com/2017/07/30/a-brief-overview-of-machine-learning/ | [
"## A Brief Overview of Machine Learning\n\nThis blog post is going to be a compilation of most common machine learning concepts for those who are getting started on machine learning.\n\nFirst of all, the relationship between the AI, ML, and deep learning:",
null,
"In one sentence: machine learning is an approach to achieve artificial intelligence, and deep learning is a subfield of machine learning.\n\n# Machine Learning\n\nAccording to Arthur Samuel in 1959, machine learning gives “computers the ability to learn without being explicitly programmed”. Machine learning tasks are typically divided into:\n\n• Supervised learning: data is labelled — “I tell you: this is a cat and that is a dog. Now guess what this picture is?”\n• regression: predict continuous valued output (linear regression: `y = ax + b`)\n• classification: predict discrete valued output (models: KNN, mixture of Gaussians, logistic regression, threshold perceptron, logistic perception, etc)\n• Unsupervised learning: data is unlabelled — “I don’t know what those images are, but these look like one creature and those images look like another one”\n• clustering (e.g., k-means)\n• anomaly detection\n• Reinforcement learning: machine gets feedback while learning — just like how animals get trained\n\nAn algorithm is what is used to train a modelA model is something to which when you give an input, gives an output. Common ML models include:\n\n– Decision tree",
null,
"– Logistic Regression",
null,
"",
null,
"– Neural network",
null,
"– Bayesian network",
null,
"",
null,
"– Support vector machine",
null,
"– Nearest neighbor",
null,
"– k-means",
null,
"– Markov",
null,
"#### Model Comparisons\n\nKNN takes no time to train – memory-intensive (must compute the distances and sort all the training data at each prediction)\n– performance depends on the number of dimensions (curse of dimensionality)\n– test time efficiency low\nDeep NNs – compactly represent a significantly larger set of functions\n– eliminates the need for feature engineering\n– great for unstructured datasets such as images, audio, and video\n– require a long time to train\n– computationally expensive\nBayesian networks useful when data is scarce (e.g., medical diagnosis) – computationally expensive (could be NP-hard)\n– dependent on quality of prior beliefs\nSVM – regularization parameter (avoids overfitting)\n– uses kernel trick (defines the similarity function in terms of original space without even knowing what the transformation function K is)\nhigh algorithmic complexity and extensive memory requirements of the required quadratic programming in large-scale tasks (Horváth in Suykens et al. p 392)\nLogistic Regression Naive Bayes\nestimates the probability(y|x) from training data by minimizing error estimates a joint probability from the training data\nsplits feature space linearly; doesn’t matter if some features are correlated in case some features are correlated, the prediction might be poor because Naive Bayes assumes conditionally independent features\ngenerally needs a large training dataset in order to draw a good linear separator a small training dataset is fine\nLogistic Regression Decision Tree\nassumes there is one smooth linear decision boundary assumes that our decision boundaries are parallel to the axes (partitions the feature space into rectangles)\n\n#### How To choose a classifier based on training set size?\n\nFirst thing we need to be clear about: how do we know if a classifier is better than others? We calculate the validation error for these classifiers and try to find the one with minimum validation error. Now, what is the validation error and how do we find that?\n\nIn a typical machine learning application, we split data into 3 sets: 70% training set, 20% cross validation set, and 10% test set. The training set is what’s used for training the model. The cross validation set is what’s used to estimate how well the model has been trained to select the best performing model. The test set is finally used to estimate the accuracy of the selected model. While selecting a model, it’s useful to compare the cross validation error with the training error.\n\nIn linear regression, we have the cost functions as follows:\n\n```Jtrain(θ) = 1/2m * ∑(hθ(x(i)) - y(i))2 JCV(θ) = 1/2mcv * ∑(hθ(xcv(i)) - ycv(i))2 Jtest(θ) = 1/2mtest * ∑(hθ(xtest(i)) - ytest(i))2 where θ's are the parameters to the linear function that we define (hθ(x) = θ0 + θ1x + θ2x2 + ...), m is the size of dataset ```\n\nThe goal here is to find parameters θ that will minimize the cost functions (errors). As we have a more complex model (i.e., a higher degree of polynomial), we get a graph like this:",
null,
"What is the ideal fit? Low bias and low variance (in the middle of the graph). Unfortunately, it’s almost impossible in practice. Therefore, a bias-variance tradeoff must be made.\n\nSee this blog post for an illustration of bias-variance tradeoffs.\n\nBias: the ability of your model function to approximate the data.\n\n1) Using a linear regression model to model a quadratic relationship will cause a high bias (underfit) because you’ll not be able to approximate the relationship well regardless how you tune your parameters\n\n2) KNN has low bias because it doesn’t assume anything about the distribution of data\n\nHigh bias (underfit): validation error ≈ training error\n\n-> getting more training data will not help much\n\n-> if training set is small, high bias models tend to perform better because they are less likely to overfit (e.g., Naive Bayes)\n\nVariance: the stability of your model in response to new training example.\n\n1) KNN has high variance (overfit) because it can easily change its prediction if only a few points in the training dataset are changed\n\n2) Linear algorithms tend to have low variance because they have rigid underlying structure that will not change much in the face of new data\n\nHigh variance (overfit): validation error » training error\n\n-> getting more training data is likely to help (gap can become smaller)\n\n-> if training set is big, high variance models tend to perform better because they can reflect more complex relationships (e.g., logistic regression)\n\n#### Differences between model parameters and model hyperparameters\n\nModel parameters: properties of the training data that are learned by the model on its own. They differ for each experiment.\n\nE.g., weight coefficients (or slope) of a linear regression line and its bias (or y-axis intercept) term, the weights of convolution filters\n\nModel hyperparameters: tuning parameters of an algorithm. They are not to be learned by the model and are set beforehand. They are common for similar models. Running an algorithm over a training dataset with different hyperparameter settings will result in different models.\n\nE.g., regularization penalty, number of hidden layers, learning rate, dropout and gradient clipping threshold, a value for setting the maximum depth of a decision tree\n\n## Neural networks\n\nA neural network is a computing system that are inspired by human brains.\n\nNo hidden layer: if your data is linearly separable, then you don’t need any hidden layer\n\n1 hidden layer: can approximate any function that contains a continuous mapping from one finite space to another",
null,
"—- wait, why do we need an additional layer? Let’s take a look at the XOR classic example:\n\nSuppose we are to build a neural network that will produce the XOR truth table.\n\na b a XOR b\n1 1 0\n0 1 1\n1 0 1\n0 0 0\n\nCan you try to represent it with no hidden layer (threshold perceptron)?\n\nNow, can you try to represent it with one hidden layer?\n\n2 hidden layers: can represent an arbitrary decision boundary to arbitrary accuracy with activation functions",
null,
"*note: One hidden layer is sufficient for the majority of problems\n\n(source: Introduction to Neural Networks for Java, Second Edition)\n\nActivation functions produce a non-linear decision boundary of the weighted inputs.",
null,
"## Deep Learning\n\nDeep learning is the application of neural networks with many hidden layers to learning tasks.\n\nWhen Should You Use Deep Learning? When you’ve got a large training dataset and a good hardware (GPU).\n\nTake a look at why GPUs are necessary for deep learning here.\n\n### Feed-forward neural network:\n\nAn acyclic graph that connects neurons.",
null,
"### Convolutional neural networks (CNN):\n\n– made up of neurons that have learnable weights and biases\n\n– local connectivity in many layers -> deep locally connected network -> reduces number of connections",
null,
"Important feature of CNN: weight sharing.",
null,
"This contrasts with ordinary deep neural networks where weights are set arbitrarily:",
null,
"#### Differences between CNN and a fully-connected neural network:\n\nConvolutional neural network Fully connected neural network\nEach neuron is only connected to a few nearby (aka local) neurons in the previous layer Each neuron is connected to all neurons in the previous layer\nSame set of weights is used for every neuron Weights can vary in each connection\nCheap Expensive in terms of memory (weights) and computation (connections)\n\n#### Why CNN for Image Recognition and Object Detection?\n\nBecause CNNs have filters (“locally shared weight layers”) that mimic the human visual system (consider when you recognize a dog: you look at its eyes, its nose, its shape, its color, etc). Each layer of a CNN can be trained to recognize higher level features than the previous layer.\n\nE.g., 1st layer: recognize only edges, blobs and corners.\n\n2nd layer: combine these edges, blobs and corners to identify higher level shapes.\n\n3rd layer: recognize objects like eyes, mouth etc.\n\nlast layer: classify the specific object",
null,
"This is a good blog post about CNN that is very intuitive to understand.\n\n### Recurrent Neural Networks (RNN)\n\nOutputs are fed back to the network as inputs, as illustrated below:",
null,
"#### Differences between CNN and RNN\n\nCNN RNN\ntakes a fixed size input and generates fixed-size outputs takes arbitrary size of input and output\na type of feed-forward NN (acyclic) not a type of feed-forward NN (cyclic)\nconnectivity pattern between neurons are inspired by human visual system (e.g., I see a yellow waterbird with a broad blunt bill, short legs, webbed feet, and a waddling gait — I think it’s a duck) uses time-series information (e.g., It’s been sunny this entire past week, I think it might be sunny again tomorrow)\nideal for images and videos processing ideal for text and speech analysis\n\nCommon architectures of RNN: Long short-term memory (LSTM), Gated Recurrent Unit (GRU)\n\n## Unsupervised Learning\n\n### k-means\n\nK-means is an unsupervised machine learning algorithm that finds clusters of data. It starts with randomly initializing k cluster centroids. It keeps grouping data into clusters by closest distance to each centroid, and then it updates the centroids by calculating the new average of points assigned to cluster k. It repeats this process until the algorithm converges.\n\nIllustration:",
null,
"### Anomaly Detection\n\nAnomaly detection identifies events that are not expected to happen. A typical usage is fraud detection. Given an unlabeled test dataset, the detector assumes that the majority of the data are set to be “normal” and looks for data that seems to fit least to others.",
null,
"A discriminative model is one that discriminates between 2 classes of data (real data and fake data)\n\nA generative model doesn’t know anything about classes of data\n\n• Purpose: generate new data which fits the distribution of the training data.\n• Objective: be so good at producing fake data that it will fool the discriminator\n\nThe discriminator will be tasked with discriminating between samples from the true data X and the artificial data generated by g.\n\nWe train both in an alternating manner. Each of their objective can be expressed as a loss function that we can optimize via gradient descent.\n\nResult: both get better at their objectives in tandem. The generator is able to fool the most sophisticated discriminator. This method ends up with generative neural nets that are incredibly good at producing new data.",
null,
"Can you imagine? These beautiful shoes and handbags are completely generated by GAN on its own!",
null,
"source: “Learning to Discover Cross-Domain Relations with Generative Adversarial Networks” by Kim et al. (2017)\n\nThat’s it! This concludes a very brief overview of machine learning. I hope you liked this blog post and found it helpful. Please let me know if you have any questions or suggestions!\n\nSpecial thanks to Rudi Chen, Bai Li, and Michael Tu for comments and suggestions!"
] | [
null,
"https://shirleyyldu.files.wordpress.com/2017/07/machine-learning.png",
null,
"https://i0.wp.com/help.prognoz.com/en/mergedProjects/Lib/img/decisiontree.gif",
null,
"https://www.leansigmacorporation.com/wp/wp-content/uploads/2016/01/Logistic-Regression-EQ1.png",
null,
"https://media.licdn.com/mpr/mpr/AAEAAQAAAAAAAAkUAAAAJDJlMDNjMGM5LTlmZjktNDlhNy1iNmNmLTE5NTM1YjE3NzA0Yw.png",
null,
"https://upload.wikimedia.org/wikipedia/commons/thumb/4/46/Colored_neural_network.svg/300px-Colored_neural_network.svg.png",
null,
"https://shirleyyldu.files.wordpress.com/2017/07/bayes.png",
null,
"https://i0.wp.com/www.intechopen.com/source/html/19354/media/image3.jpeg",
null,
"https://shirleyyldu.files.wordpress.com/2017/07/svm1.png",
null,
"https://shirleyyldu.files.wordpress.com/2017/07/70bfc-p1.png",
null,
"https://i0.wp.com/stanford.edu/~cpiech/cs221/img/kmeansViz.png",
null,
"https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/HMMGraph.svg/1280px-HMMGraph.svg.png",
null,
"https://i0.wp.com/www.learnopencv.com/wp-content/uploads/2017/02/Bias-Variance-Tradeoff-In-Machine-Learning-1.png",
null,
"https://shirleyyldu.files.wordpress.com/2017/07/400px-artificial_neural_network-svg.png",
null,
"https://shirleyyldu.files.wordpress.com/2017/07/neural_net2.jpeg",
null,
"https://i.stack.imgur.com/Waz75.png",
null,
"https://shirleyyldu.files.wordpress.com/2017/07/multi-layer_neural_network-vector-blank-svg.png",
null,
"https://shirleyyldu.files.wordpress.com/2017/07/screen-shot-2015-11-07-at-7-26-20-am.png",
null,
"https://shirleyyldu.files.wordpress.com/2017/07/main-qimg-be1d825abc83695768003bacd39a3884.png",
null,
"https://i0.wp.com/imgur.com/yE88Ryt.png",
null,
"https://shirleyyldu.files.wordpress.com/2017/07/main-qimg-eaf93f1027f136b4d3fd9cbe7a452327.png",
null,
"https://shirleyyldu.files.wordpress.com/2017/07/rnn-unrolled.png",
null,
"https://i0.wp.com/www.learnbymarketing.com/wp-content/uploads/2015/01/method-k-means-steps-example.png",
null,
"https://shirleyyldu.files.wordpress.com/2017/07/anomaly.png",
null,
"https://image.slidesharecdn.com/dlcvd4l1generativemodelsandaversarialtraining-160803172437/95/deep-learning-for-computer-vision-generative-models-and-adversarial-training-upc-2016-5-638.jpg",
null,
"https://raw.githubusercontent.com/SKTBrain/DiscoGAN/master/assets/discogan.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.89447963,"math_prob":0.8947138,"size":12205,"snap":"2023-14-2023-23","text_gpt3_token_len":2661,"char_repetition_ratio":0.11277764,"word_repetition_ratio":0.014249364,"special_character_ratio":0.20704629,"punctuation_ratio":0.09291628,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98076487,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50],"im_url_duplicate_count":[null,6,null,2,null,null,null,6,null,null,null,6,null,2,null,6,null,6,null,1,null,6,null,1,null,6,null,6,null,null,null,6,null,6,null,6,null,2,null,null,null,6,null,1,null,6,null,6,null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-08T18:49:04Z\",\"WARC-Record-ID\":\"<urn:uuid:47f4d693-45e9-4b32-9217-6cda7440150d>\",\"Content-Length\":\"184103\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d6f2ec11-292b-418d-a3c0-29df37d5b444>\",\"WARC-Concurrent-To\":\"<urn:uuid:6efd3881-48ac-4374-abd2-326017ae49da>\",\"WARC-IP-Address\":\"192.0.78.24\",\"WARC-Target-URI\":\"https://shirleydu.com/2017/07/30/a-brief-overview-of-machine-learning/\",\"WARC-Payload-Digest\":\"sha1:PUUYXI5X6OWA3MPGRUF7FGPAINJRVGGF\",\"WARC-Block-Digest\":\"sha1:PMBTD3X4VPVHCAB5TXOZKVFA5LX6OAQP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224655092.36_warc_CC-MAIN-20230608172023-20230608202023-00526.warc.gz\"}"} |
http://damir.cavar.me/stat4ling-10/material/R/index.en.html | [
"Instructions for working with R\nby Damir Cavar (Feb. 2010)\n\nInstallation\nIf you are not using some common Linux distribution (Ubuntu, Debian, Suse, Fedora etc.), download the version of R for your OS. If you use Linux, your package manager will allow you to add R and its components, including a graphical interface (e.g. R-Commander).\nFor the following examples it is necessary that the module stats is loaded in R. For all examples we use the R Console, which might look different on different Operating systems:\n\nMac OS X:",
null,
"Ubuntu Linux 10.04 - Terminal and R-Commander:",
null,
"",
null,
"Microsoft Windows:\n\nBasic functions\nSome data for analyses can be added manually in R in the following way:\n\ndata1 = c(2.34, 4.32, 3.24, 4.34)\n\nThis means that a list of results or measures will be saved in memory under the name data1. The name data1 serves like a variable, when calling functions and setting up analyses of the data, avoiding the repetition of the same data from the scratch in all these functions.\n\nIf you look at the set of slides 1, the discussed analyses can be performed in R now. For example, to get the arithmetic mean, simply type the following command in the R shell:\n\nmean(data1)\n\nThis command is the same as:\n\nmean(c(2.34, 4.32, 3.24, 4.34))\n\nThe median can be calculated using the following command:\n\nmedian(data1)\n\nThe smallest and the largest result or measure can be found using the following commands:\n\nmin(data1)\nmax(data1)\n\nThe range can be found using this command (this means, the result will be the smallest and the largest measure or result in the data sample, and the range could be calculated by subtract the smaller from the larger value):\n\nrange(data1)\n\nYou can sum up all results or values in the data1 list using the following command:\n\nsum(data1)\n\nThe variance can be calculated using the following command:\n\nvar(data1)\n\nThe standard deviation for the data above we get using the following command:\n\nsd(data1)\n\nIf you collect your results and measures in a file, you can read it in from the file, without having to type it into R manually. The data might be collected in form of tables, as in:\n\ntoken frequency length\nmeštrom 1 7\npićan 1 5\nznamenite 2 9\nmanzonijeve 3 11\nsnime 1 5\niis 1 3\ndaržavom 1 8\nprofiliranu 1 11\nosmjehnu 2 8\nbraku 10 5\norane 1 5\n...\n\nThis table is generated from some randomly selected books from the Croatian Language Corpus. The complete table can be downloaded from the folder Files. Download the file sample.dat to your computer and in R use the following command to load the data into memory:\n\nIf you use Microsoft Windows and R does not read in the data from the file, try to specify the encoding. The file sample.dat is encoded in UTF-8 format. The Windows version of R should be able to open and read the ANSI encoded version of the data in the file samles-ANSI.dat, without specification of the encoding, as shown above. The ANSI encoded version of the data can be found in the Files section.\n\nSome versions of R-Commander on Linux do not open the file select box/windows with the command file=file.choose(). A quick solution is to specify the complete file name without the file select window, for example if the file sample.dat is located in the folder /tmp:\n\nThis command would open the file without a file select window, where you can choose the specific input file.\n\nThe command with the file.choose() component will open a file selection window. This is the result of the sub-command: file.choose(). The additional option “header=TRUE” informs R that the data in the file has a header in the first line. If you select the file sample.dat in the file selection windows, R will read in the data and store it in the variable words.\n\nThe content of the variable, that is all the tokens with their frequency and length, are added to the current R workspace using the command:\n\nattach(words)\n\nNow you can for example plot the relation between frequency and length of words with the following command:\n\nplot(length,frequency)\n\nIf everything went well, the result of the last command should generate a graph in a new windows that looks like the following one:",
null,
"The same graph can be found in the folder Files in PDF format (open it with Acrobat Reader), with the file name Word-Freq-Length.pdf."
] | [
null,
"http://damir.cavar.me/stat4ling-10/material/R/files/r-macosx-en.png",
null,
"http://damir.cavar.me/stat4ling-10/material/R/files/r-commander-ex-en.png",
null,
"http://damir.cavar.me/stat4ling-10/material/R/files/screenshot-terminal2.png",
null,
"http://damir.cavar.me/stat4ling-10/material/R/files/word-freq-length.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.84824467,"math_prob":0.58213717,"size":4189,"snap":"2021-21-2021-25","text_gpt3_token_len":986,"char_repetition_ratio":0.15794504,"word_repetition_ratio":0.022889843,"special_character_ratio":0.22941037,"punctuation_ratio":0.11494253,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96640646,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-18T18:01:57Z\",\"WARC-Record-ID\":\"<urn:uuid:b84acdf6-bd81-4488-91fb-d32168c90e37>\",\"Content-Length\":\"13434\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:16859c84-615c-4904-9d4b-01da2a07b574>\",\"WARC-Concurrent-To\":\"<urn:uuid:ab5b0cee-be44-4ae3-9ac8-589b2df58a61>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"http://damir.cavar.me/stat4ling-10/material/R/index.en.html\",\"WARC-Payload-Digest\":\"sha1:NOCAO77S7NCR2DERT5ENBYYB5LVXZH3I\",\"WARC-Block-Digest\":\"sha1:BXIUDZJBRXTQ75UKWDAPSVXWFPLBOYIQ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991288.0_warc_CC-MAIN-20210518160705-20210518190705-00156.warc.gz\"}"} |
https://math.stackexchange.com/questions/164811/explanation-for-why-1-neq-0-is-explicitly-mentioned-in-chapter-1-of-spivaks-c | [
"Explanation for why $1\\neq 0$ is explicitly mentioned in Chapter 1 of Spivak's Calculus for properties of numbers.\n\nDuring the first few pages of Spivak's Calculus (Third edition) in chapter 1 it mentions six properties about numbers.\n\n(P1) If $a,b,c$ are any numbers, then $a+(b+c)=(a+b)+c$\n\n(P2) If $a$ is any number then $a+0=0+a=a$\n\n(P3) For every number $a$, there is a number $-a$ such that $a+(-a)=(-a)+a=0$\n\n(P4) If $a$ and $b$ are any numbers, then $a+b=b+a$\n\n(P5) If $a,b$ and $c$ are any numbers, then $a\\cdot(b\\cdot c)=(a\\cdot b)\\cdot c$\n\n(P6) If $a$ is any number, then $a\\cdot 1=1\\cdot a=a$\n\nThen it further states that $1\\neq 0$. In the book it says that it was an important fact to list because there is no way that it could be proven on the basis of the $6$ properties listed above - these properties would all hold if there were only one number, namely $0$.\n\nQuestions:\n\n1) How does one rigorously prove that $1\\neq0$ cannot be proven from the $6$ properties listed?\n\n2) It says that \"these properties would all hold if there were only one number, namely $0$.\" Is a reason as to why this is explicitly mentioned is to avoid this trivial case where we only have the number $0$? Is there another deeper reason as to why this sentence was mentioned in relation to $1\\neq 0$?\n\nNB: Can someone please check if the tags are appropriate and edit if necessary? Thanks.\n\n• It's basically just to rule out the trivial case so that in all proofs, theorems and such we don't have to say \"except in this trivial case\". Jun 30 '12 at 5:27\n• If all properties $P_i$ $\\ (1\\leq i\\leq 6)$ hold in a system containing only one element $0$ then it's impossible to prove $1\\ne0$ from the $P_i$ alone. Jun 30 '12 at 8:43\n• I had a go at the tags, but I'm probably missing some, I don't know that many. Jun 30 '12 at 12:43\n• Jun 30 '12 at 16:43\n\nTo show that $1\\neq 0$ cannot be proven from the other six properties, consider the set that contains only one element, $\\{\\bullet\\}$. Define $+$ by $\\bullet+\\bullet = \\bullet$ and $\\cdot$ by $\\bullet\\cdot\\bullet = \\bullet$. Then letting $0=\\bullet$, $-\\bullet =\\bullet$, and $1=\\bullet$, all six axioms are satisfied, but $1=0$. Thus, $1\\neq 0$ cannot be proven from the first six axioms, since you have a model in which the first six axioms are true, but $1\\neq 0$ is not.\n\nYes: the reason we need to specify it is so that we don't just have the one-element \"field\". Basicallly, the condition that $1\\neq 0$ is formally undecidable from the first six properties, so it needs to be specified.\n\n• So based on your definition of addition and multiplication with the one element set, if we have ${0}$ then all 6 properties will be satisfied where in (P6) we replace $1$ with $0$? But then for ${1}$ how do we check (P3)? We haven't defined what $(-1)+1$ or $1+(-1)$ is? Jun 30 '12 at 7:16\n• @user22678: There is only one element in the set, namely $\\cdot$. \"$1$\", \"$0$\", \"$-0$\", \"$-1$\", \"$--0$\" and so on are all just different ways of referring to $\\cdot$. Jun 30 '12 at 8:20\n• @user22678: \"0\" and \"1\" are names; absent a prohibition (which is the new axiom $1\\neq 0$) they can be two different names of the same thing. In this particular object, every single name that you see in the axioms refers to the same object, $\\bullet$. So \"$1$\" refers to $\\bullet$, and \"$-1$\" also refers to $\\bullet$, and \"$0$\" refers to $\\bullet$. So \"$(-1)+1$\" has been defined: it's $\\bullet + \\bullet$, which we defined to be $\\bullet$. Same for \"$1+(-1)$\". Jun 30 '12 at 17:35\n• @user22678: yes; except that the set is not defined by the operations; the operations are defined on the set, which \"pre-exists\" the operations. Jul 1 '12 at 1:01\n• @user22678: It's important to keep clear what is what. An operation is a function on a set; you cannot define an operation before you have a set (just like you cannot define a function without first having a domain and a codomain). You don't get the set \"out of\" the operation. You start with a set, and the operation is defined on the set. Jul 1 '12 at 1:19\n\nTo prove that $1 \\neq 0$ can't be proven from those properties, one can just construct an example where $1 = 0$ and those properties hold, since this means that you have structures that satisfy the axioms where some have $1 = 0$ and others have $1 \\neq 0$, so this property is independent of the axioms.\n\nSpecifically, the zero ring (what you get if you have a single number) satisfies all of them. If you want, you can easily check manually since the only choice for any variable is $0$:\n\nP1) $0 + (0 + 0) = 0 = (0 + 0) + 0$\n\nP2) $0 + 0 = 0 = 0 + 0$\n\netc.\n\nP6 is the one we need to look at specifically, it can be written as \"There exists a number $x$ such that $a \\cdot x = x \\cdot a = a$, and we call this $x$ the $1$ of the ring\". In the zero ring, choose $x = 0$, so\n\n$0 \\cdot x = 0 \\cdot 0 = 0 = x \\cdot 0$\n\nThus $0$ satisfies the rule for the \"$1$\" in the ring, and so $0 = 1$.\n\n• I understand that if you have just the set ${0}$ then yes, $0$ satisfies the \"$1$\" in (P6). But what if you have ${1}$ and replace all the $0$'s in the six properties into $1$'s, namely (P2) and (P3)? For $(-0)+0=0+(-0)$ it seems quite comfortable to verify this, but when $(-1)+1=1+(-1)=1$ it seems like something isn't right? Jun 30 '12 at 9:58\n• In this context, \"1\" is just a label for the multiplicative identity (i.e. the number that satisfies $a\\cdot 1 = 1 \\cdot a = a$), and so it has different meanings in different rings, and you can't transfer the meaning from the integers to the meaning in the zero ring.\n– huon\nJun 30 '12 at 13:31\n\nThe important thing to realise here is that $1$, $0$, $+$, $\\cdot$, $-$ need not mean the things that you're used to them meaning. We can come up for any rule for combining any things, and if it behaves a bit like $+$ (i.e. follows rules (1)-(4)) we may call it $+$, and if it behaves a bit like $\\cdot$ we may call it $\\cdot$. Then if an object follows rule (2) we might call it $0$ and if an object follows rule (6) we might call it $1$. Given just these six rules, we can't even be sure that $+$ and $\\cdot$ are different operations1, so we can't be sure that the object we called $0$ and the object we called $1$ are different objects.\n\n1 Conspicuous by its omission from your list is the distributive law that defines how multiplication and addition interact: $a\\cdot(b + c)=a\\cdot b+a\\cdot c$. If that were included, we'd have a sort of way of telling the difference between addition and multiplication, but there'd still be the completely uninteresting case (only one object, and all operations just give you that object back again) where it worked but still $0 = 1$.\n\n(1) How does one rigorously prove that 1≠0 cannot be proven from the 6 properties listed?\n\n(2) It says that \"these properties would all hold if there were only one number, namely 0 .\" Is a reason as to why this is explicitly mentioned is to avoid this trivial case where we only have the number 0 ? Is there another deeper reason as to why this sentence was mentioned in relation to 1≠0 ?\n\nAny equational algebraic theory whose axioms are all universal, i.e. that assert equalities of terms composed of operations, variables, and constants, for all values of the variables, necessarily has a one element model. Indeed, defining all of the constants to be the one element (say $$0)$$ and defining all the operations to have value $$0$$ makes all axioms true, since they evaluate to $$\\,0 = 0.$$\n\nHence $$\\,1\\ne 0\\,$$ is not deducible from your axioms since it is not true in a one element model.\n\nThe reason that $$\\rm\\:1\\ne 0\\:$$ is adjoined as an axiom for fields (and domains) is simply a matter of convenience. For example, it proves a very convenient target for proofs by contradiction, which often conclude by deducing $$\\rm\\:1 = 0.\\:$$ Also, it avoids the inconvenience of needing to explicitly exclude in proofs motley degenerate cases that occur in one element rings, e.g. that $$\\rm\\:0\\:$$ is invertible, since $$\\rm\\:0\\cdot 0 = 1\\, (= 0).\\:$$ Much more so than proofs by contradiction, this confuses many students (and even some experienced mathematicians) as witnessed here in the past, e.g. see the long comment threads here and here (see esp. my comments in Hendrik's answer)."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9069389,"math_prob":0.99912554,"size":1251,"snap":"2022-05-2022-21","text_gpt3_token_len":370,"char_repetition_ratio":0.13632719,"word_repetition_ratio":0.07305936,"special_character_ratio":0.3013589,"punctuation_ratio":0.07636364,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99973124,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-24T12:17:09Z\",\"WARC-Record-ID\":\"<urn:uuid:24ca95ea-b63f-4106-b4d3-8ee7db3c951e>\",\"Content-Length\":\"185251\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c19a0c0e-d759-4e7b-b734-459ce115b197>\",\"WARC-Concurrent-To\":\"<urn:uuid:fda5bd52-d889-4d89-b30b-34e89dd6903f>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/164811/explanation-for-why-1-neq-0-is-explicitly-mentioned-in-chapter-1-of-spivaks-c\",\"WARC-Payload-Digest\":\"sha1:3YFTMSGWN3ZOL45FZOIDWUNTP5FBXICL\",\"WARC-Block-Digest\":\"sha1:MMCYE4TJJAUIVDWXE2RH62RP7RIAHMJO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320304528.78_warc_CC-MAIN-20220124094120-20220124124120-00080.warc.gz\"}"} |
https://www.datarails.com/excel-volatile-functions-performance/ | [
"",
null,
"By Dany Hoter, Datarails Solutions Architect\n\nIn this article, I’ll explain a category of functions in Excel that are referred to as volatile.\n\nVolatile functions are recalculated every time you make any change to the spreadsheet. Even worse, if you have multiple Excel files open, and you make a change in one that doesn’t have volatile functions, all volatile functions in all opened files will still be calculated. Whenever one of these functions is calculated, it triggers the calculation of any cell that is dependent on the function, as well as those dependent on the dependent. As a result, using volatile functions can eventually make your spreadsheet barely usable.\n\nVolatile built-in functions include:\n\n• Offset\n• Indirect\n• Rand\n• Randbetween\n• Now\n• Today\n• Info (in some cases)\n• Cell (in some cases)\n\nFor this article, I will focus on the popular Offset, which can be replaced in many cases by a non-volatile function.\n\n## The Offset Function\n\nOffset can be used in one of two forms:\n\n=OFFSET(Sheet1!\\$A\\$1,G5,H5)\n\n• This Offset function returns the value of a single cell. In this case, the two last arguments are optional. The returned cell in this example is G5 rows and H5 columns remote from A1.\n\n=OFFSET(Sheet1!\\$A\\$1,0,0,G5,H5)\n\n• This example returns a range of G5 rows by H5 columns starting from A1. The range cannot be returned to a cell, but instead it can be consumed by another function like Sum or Count.\n\nThe performance gain can be achieved by replacing the first example of Offset with the Index function.\n\n## The Effect of Offset on Excel Overall Performance\n\nThe Offset function does not take significant time to calculate.\n\nIn my example, I combined the Offset function with other cells that use Vlookup. The value returned by Offset is used by Vlookup. The Vlookup functions, rather than Offset, are what make the overall performance sluggish.\n\n## Two Examples\n\nYou can download two files that are essentially doing the same job: One using Offset and the other using Index for the same purpose.\n\nSo if the original formula was:\n\n=OFFSET(Sheet1!\\$A\\$1,B3,C3)\n\nOr\n\n=OFFSET(Sheet1!\\$A\\$1,B3,C3,1,1)\n\nWe can seamlessly replace it with\n\n=INDEX(indextable,B3+1,C3+1)\n\n## Differences\n\nThe Offset function receives a single cell as first argument and returns the cell based on the distance from this cell in rows and columns. The Index function, by contrast, expects a range as the first argument that includes all the cells that might be returned. This forces you to determine the maximum size of the range that can be imagined, allowing for data growth.\n\nNote that it is not expensive to exaggerate the size. Also, if the 1,1 values were used as the last arguments, they would need to be removed.\n\n## Difference in Performance\n\n1. Open the Offset file.\n1. Change the value in C3 from 2 to 1. It should take a few seconds for Excel to accept the value and calculate results. This is because all Offset functions are calculated on every change and as a result, all Vlookup functions are calculated as well.\n2. Now open the Index file. Apply the same change.\n\nWait—what happened? It took the same time! I managed to trick you. Remember that all volatile functions in all opened Excel files are calculated, so even when you change a cell in the index file, all the Offset functions are calculated.\n\n1. Close the Offset file and attempt the change in the Index file again. Now you will see that it doesn’t take any time at all."
] | [
null,
"https://www.datarails.com/wp-content/uploads/2018/03/960x640-spreadsheet.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9110249,"math_prob":0.90313405,"size":3667,"snap":"2022-27-2022-33","text_gpt3_token_len":826,"char_repetition_ratio":0.14332515,"word_repetition_ratio":0.006441224,"special_character_ratio":0.21625307,"punctuation_ratio":0.11491108,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97535163,"pos_list":[0,1,2],"im_url_duplicate_count":[null,8,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-18T04:28:18Z\",\"WARC-Record-ID\":\"<urn:uuid:8c99be15-e849-4de0-94db-aedbe9f04a49>\",\"Content-Length\":\"121069\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7e16e404-4162-455f-b468-5e2faa86b0b5>\",\"WARC-Concurrent-To\":\"<urn:uuid:0689d49f-f5fd-473f-b022-da288db66734>\",\"WARC-IP-Address\":\"141.193.213.10\",\"WARC-Target-URI\":\"https://www.datarails.com/excel-volatile-functions-performance/\",\"WARC-Payload-Digest\":\"sha1:SGADZQQY3MLSOVBUZYWXUFXEY6YJA55W\",\"WARC-Block-Digest\":\"sha1:YZE3OW6RPU5LWCQAEUC7CK3QLWGLFAMJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882573163.7_warc_CC-MAIN-20220818033705-20220818063705-00448.warc.gz\"}"} |
https://gtae6343.fandom.com/wiki/Work | [
"## FANDOM\n\nCreated by John Bennewitz\n\nThis article was created to educate a reader on the fundamental topic of work, by including various definitions of work (both thermodynamic & mechanical) and discussing the differences between reversible/irreversible/impossible processes. Also. certain engineering examples of work drawn from James Prescott Joule's experiments are also considered to provide further understanding of the topic.\n\n## Definition",
null,
"Edit\n\nThermodynamically, work is defined as a process in which energy is transferred across a system boundary. Work can be thought of as an effect of one system imposed onto another (e.g. a thermodynamic system acting on its surroundings). The presence of work performed by a system can be realized from the following statement, \"Work is done by a system on its surroundings if the sole effect of everything external to the system could have been produced by the raising of a weight\" . It should be noted that from this statement, it is not necessary for a system to physically raise a weight (or essentially have a force act across a distance) to have work present, but rather that the same effect on the system could have been replicated by raising a weight.\n\nMechanically, work is defined as an energy transfer process in which a force acts across a distance. This type of work is dictated by the following simple mathematical expression: $W = \\int\\bold{F} \\cdot \\mathrm{d}\\bold{s}$, where F is the applied force & dS is the displacement of the acting force. From this mathematical expression, it is apparent that if both the displacement and force act in the same direction, the work produced is positive; negative work on the other hand is created when the force and displacement act 180° opposite of each other. It should also be noted that in a cyclic process where a force is acted across a distance and then returned to its original position (causing dS = 0), there is no mechanical work performed.\n\nThe units of work are generally either a Joule (when working in S.I. units) or foot-pound (when working in English units). Some physical significance can be drawn from the units of work. While considering the foot-pound, it is readily apparent that work involves a force acting over a distance ([foot-pound] = distance*Force). For the S.I. standard unit, a bit more analysis needs to be performed to reach the same understanding. Essentially, a Joule is a Newton-meter ([J] = N*m). In the S.I. system, the unit of force is a Newton, so the same form can then be reached: [J] = N*m = Force*distance).\n\n## Joule's Experiments",
null,
"Edit\n\nTo further understand this process of energy transfer, it is worth examining a series of experiments performed by James Prescott Joule in the 1840's. In his experiments, he aimed to determine the mechanical energy (i.e. mechanical work) necessary to raise the temperature inside a water bath by 1 °F . For these experiments, there existed an insulated bath of water, which would be configured in a way to be effected by various sources of work. These experiments demonstrate a fundamental understanding of certain variations of work and how they are able to alter the state of the system through the First Law of Thermodynamics. The First Law of Thermodynamics states that energy is conserved throughout a system, and that when one form of energy (work, heat or internal energy) is altered, the others change correspondingly to permit dE = 0. Upon studying the First Law of Thermodynamics, it is important to understand that not only is energy conserved throughout a system, but that the various types of energy transfer (work, heat or internal energy) are independent of one another and have different characteristics. From this, it can be said that the ability for work to be an organized energy transfer process compared to heat transfer is an extremely important characteristic of work which is demonstrated by Joule's experiments. His tests are explained below in further detail:\n\n### First Experiment",
null,
"Edit",
null,
"Joule's First Experiment (Image created by Author)\n\nThe first set-up consisted of a propeller and weight apparatus to supply work to the water bath. Essentially, the work necessary to raise the temperature of the water bath for this experiment was able to be quantified by measuring the vertical displacement (dy) of the mass (of a known value). By applying a known mass value to the pulley, only the vertical displacement needed to be measured (simply using a length measuring device) in order to quantify the amount of work. It was determined that 773 ft*lbf was required to raise the temperature in the water bath by 1 °F ..\n\n### Second Experiment",
null,
"Edit",
null,
"Joule's Second Experiment (Image created by Author)\n\nThe second experiment consisted of two masses, one directly on top of another. Also, an outward force was applied to the top mass (creating heating of the water bath due to friction). Overall, the work necessary to raise the temperature of the water bath for this experiment was quantified by measuring the horizontal applied force (of a known value) across the displacement (dx). By applying a known force, again, only the displacement needed to be measured in order to quantify the amount of work. It was determined that 775 ft*lbf was required to raise the temperature in the water bath by 1 °F ..\n\n### Third Experiment",
null,
"Edit",
null,
"Joule's Third Experiment (Image created by Author)\n\nThe third experiment consisted of a piston cylinder sytem located inside the water bath. From this set-up, a force (of a known value) was applied to the piston to compress the cylinder (creating heating of the water bath due to simple compressive work). Essentially, the work necessary to raise the temperature of the water bath for this experiment was determined by quantifying the horizontal displacement (dx) of the force applied to the cylinder. It was determined that 793 ft*lbf was required to raise the temperature in the water bath by 1 °F ..\n\nFrom these experiments, it is apparent that by applying various types work to a system, it is possible to generate heat within the system. Essentially, due to all three of the systems being insulated, there is no heat transferred from the systems to the surroundings. Then, from the First Law of Thermodynamics, when work is applied to the system, the internal energy of the water bath must increase. Also, it should be noted that the work required to raise the bath temperature 1 °F for each of the various external work sources is essentially constant across the three experiments. Due to the constant volume nature of the systems in Joule's Experiments, the amount of heat generated in the system which raises the water bath 1 °F is now realized as the specific heat at constant volume for water (which Joule measured to be approximately 781 (ft*lbf)/(lbm*R) and is now accepted to be 778 (ft*lbf)/(lbm*R)) .\n\n## Simple Compressible Substance",
null,
"Edit\n\nWhen considering thermodynamics, it is often useful to apply the simple compressible substance assumption to a flow system. Under this assumption, the only type of work that can naturally occur in the flow system is fluid compression work. \"This term designates substances whose surface effects, magnetic effects, and electrical effects are insignificant when dealing with [simple compressible] substances\" . Fluid compression work, or \"pdV\" work as it is sometimes called, is a type of reversible work which is quantified by the following equation: $W = pdV$, where p is the pressure of the fluid & dV is a finite change in volume of the fluid. From this relation, it is apparent that when a fluid is expanded (positive dV value), work is released from the system, and while the fluid is compressed (negative dV value), work must be supplied to the system. Thus, this relation will adequately describe the compression and expansion of a simple compressible fluid. Once applying this assumption to the First Law of Thermodynamics, many thermodynamic systems (some of which are applicable to the field of aerospace engineering) can be simplified and solved. An example of one of these processes would be the Brayton Cycle, which is a thermodynamic cycle used in the analysis of turbine engines.\n\n## Reversible / Irreversible / Impossible Processes",
null,
"Edit\n\nUpon discussing the concept of work in thermodynamics, it is important to consider the types of processes a system can undergo. Thermodynamic processes can be divided into the three following designations: Reversible, Irreversible and Impossible. Differentiation between these three types is directly attributed to the Second Law of Thermodynamics and the production of entropy\n\n### Reversible Process",
null,
"Edit\n\nEssentially, a reversible process is an ideal thermodynamic process. In other words, when a process is reversible, no change on a system or its surroundings has taken place . Thus, the process that initially took place is able to be completely reversed with no net change in the entropy of the system (i,e. Entropy Production, Ps = 0). Due to the entropy staying constant, reversible processes are the most efficient as opposed to processes with entropy generation. One example of a reversible process is an extremely slow process (e.g. upon considering combustion, an example would be an ideal gas mixture being slightly altered and having enough time to reach equilibrium) .\n\nReversible Process: $\\Delta s = Ps = 0$\n\n### Irreversible Process",
null,
"Edit\n\nAn irreversible process is one that cannot be reversed and thus permanently changes the system and/or its surroundings . Due to the Second Law of Thermodynamics, for an irreversible process, entropy is produced and the overall efficiency of the process is decreased. As opposed to a reversible process, there is a net change in the entropy of the system. Almost all thermodynamic processes are irreversible, especially those of interest to aerospace propulsion engineers. One example of an irreversible process would be the mixing of two gas species (a fuel and oxidizer) during combustion.\n\nIrreversible Process: $Ps \\ge 0$\n\n### Impossible Process",
null,
"Edit\n\nAs the name suggests, an impossible process is one that cannot occur due to violations of the principles of thermodynamics. Essentially, the definition of an impossible process can be viewed as a direct extension of the Second Law of Thermodynamics. From the Second Law, it can be said that entropy of a system can only stay constant or be created during a thermodynamic process; thus, entropy can never be decreased across a process. For an impossible process, entropy generation is said to be negative (leading to an overall decrease in the entropy of the system). Although there are no actual impossible processes that exist, a theoretical example from propulsion would be a weak detonation. For a weak detonation, it would be necessary to accelerate a flow from subsonic conditions to supersonic through only heat addition (the violation is a result of heat addition, always driving flow conditions to M=1) .\n\nImpossible Process: $Ps < 0$"
] | [
null,
"data:image/gif;base64,R0lGODlhAQABAIABAAAAAP///yH5BAEAAAEALAAAAAABAAEAQAICTAEAOw%3D%3D",
null,
"data:image/gif;base64,R0lGODlhAQABAIABAAAAAP///yH5BAEAAAEALAAAAAABAAEAQAICTAEAOw%3D%3D",
null,
"data:image/gif;base64,R0lGODlhAQABAIABAAAAAP///yH5BAEAAAEALAAAAAABAAEAQAICTAEAOw%3D%3D",
null,
"https://vignette.wikia.nocookie.net/gtae6343/images/0/07/Experiment_1.jpg/revision/latest/scale-to-width-down/308",
null,
"data:image/gif;base64,R0lGODlhAQABAIABAAAAAP///yH5BAEAAAEALAAAAAABAAEAQAICTAEAOw%3D%3D",
null,
"https://vignette.wikia.nocookie.net/gtae6343/images/7/75/Experiment_2.jpg/revision/latest/scale-to-width-down/308",
null,
"data:image/gif;base64,R0lGODlhAQABAIABAAAAAP///yH5BAEAAAEALAAAAAABAAEAQAICTAEAOw%3D%3D",
null,
"https://vignette.wikia.nocookie.net/gtae6343/images/1/16/Experiment_3.jpg/revision/latest/scale-to-width-down/308",
null,
"data:image/gif;base64,R0lGODlhAQABAIABAAAAAP///yH5BAEAAAEALAAAAAABAAEAQAICTAEAOw%3D%3D",
null,
"data:image/gif;base64,R0lGODlhAQABAIABAAAAAP///yH5BAEAAAEALAAAAAABAAEAQAICTAEAOw%3D%3D",
null,
"data:image/gif;base64,R0lGODlhAQABAIABAAAAAP///yH5BAEAAAEALAAAAAABAAEAQAICTAEAOw%3D%3D",
null,
"data:image/gif;base64,R0lGODlhAQABAIABAAAAAP///yH5BAEAAAEALAAAAAABAAEAQAICTAEAOw%3D%3D",
null,
"data:image/gif;base64,R0lGODlhAQABAIABAAAAAP///yH5BAEAAAEALAAAAAABAAEAQAICTAEAOw%3D%3D",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9454041,"math_prob":0.88942385,"size":11305,"snap":"2019-13-2019-22","text_gpt3_token_len":2340,"char_repetition_ratio":0.13635962,"word_repetition_ratio":0.06626833,"special_character_ratio":0.20380363,"punctuation_ratio":0.10014585,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.986716,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,null,null,null,null,null,null,1,null,null,null,1,null,null,null,1,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-26T08:57:56Z\",\"WARC-Record-ID\":\"<urn:uuid:6f8d48d9-8280-44cc-b572-716007089414>\",\"Content-Length\":\"120280\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7fecd854-7190-4193-aa1f-d60ac5c1fd82>\",\"WARC-Concurrent-To\":\"<urn:uuid:aca95252-0b55-41fd-af2c-e59e11ac08e0>\",\"WARC-IP-Address\":\"151.101.64.194\",\"WARC-Target-URI\":\"https://gtae6343.fandom.com/wiki/Work\",\"WARC-Payload-Digest\":\"sha1:Y4QJEAEQ47MN5LXLPFJQEVX7M4KNBCBN\",\"WARC-Block-Digest\":\"sha1:Y2ATQHZFIDYDTNXA7456EYPMSRQCAMQS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232259015.92_warc_CC-MAIN-20190526085156-20190526111156-00212.warc.gz\"}"} |
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4171.html | [
"Document number: N4171 2014-10-05 Programming Language C++, Library Evolution Working Group Tomasz Kamiński\n\n# Parameter group placeholders for bind\n\n## Introduction\n\nThe aim of this proposal is to introduce a new class of placeholder that could be used with `std::bind`: a group placeholder that represents a set of (zero or more) call arguments.\n\n## Motivation and Scope\n\nBefore going thought this chapter the author recommends the reader to familiarize with Appendix: Why we need `bind` when we have a generic lambda?.\n\nIn the scope of this proposal we introduce a new type of parameter placeholder: group placeholders that are to be replaced by zero or more arguments in the invocation of the stored functor. The meaning of a single placeholder is independent from the context, so if the given placeholder is used twice, appropriate set of values will be passed twice, reproducing existing behaviour.\n\nThe following group placeholders are proposed in this paper:\n\nplaceholder range of arguments\n`_all` [1st, last]\n`_from<N>` [Nth, last]\n`_to<N>` [1st, Nth)\n`_between<N, K>`[Nth, Kth)\n\nTo complement set of group placeholders, the variable template representation of a single-argument placeholder named `_at<N>` is also proposed.\n\n### Binding object with member function\n\nThe most common use case for placeholder `_all` is to bind an object to a member function, effectively emulating the result of expressions `obj.*ptr`.\n\nFor example, given the following definitions:\n\n```struct Strategy { double process(std:string, std::string, double, double); };\nstd::unique_ptr<Strategy> createStrategy();```\n\nWe want to create a functor that will invoke method `process` on a given strategy. Compare the two solutions, one using lambda, the other using `bind`:\n\n`[s = createStrategy()] (auto&&... args) -> decltype(auto) { return s->process(std::forward<decltype(args)>(args)...); }`\n`std::bind(&Strategy::process, createStrategy(), _1, _2, _3, _4)`\n\nThe lambda approach allow us to write a functor that will be immune to the changes in the number of method arguments, but it requires an explicit specification of the return type and the use of perfect forwarding to accomplish such a simple task. The solution that uses `bind` is pretty straightforward, but it requires a modification every time the number of parameters of the method changes. The extension proposed in this paper allows us to avoid this problem by writing:\n\n`std::bind(&Strategy::process, createStrategy(), _all)`\n\nThe same problem is also addressed by paper N3702, by the introduction of a new version of function `mem_fn`:\n\n`std::mem_fn(&Strategy::process, createStrategy())`\n\nAlthough the solution presented in N3702 provides nice and concise syntax, it address only this specific use case, and can be easily emulated with `std::bind` and group placeholder:\n\n```template<typename Class, typename Member, typename Object>\nauto mem_fn(Member Class::* mem_ptr, Object&& obj)\n{ return std::bind(mem_ptr, std::forward<Object>(obj), _all); }\n```\n\n### Argument list manipulation\n\nWith the proposed set of the placeholders a programmer is able to perform various types of manipulation with the argument list, including, but not limited to:\n\n• insert new argument at Nth position: `bind(f{}, _to<N>, val, _from<N>)`\n• replace Nth argument with given value: `bind(f{}, _to<N>, val, _from<N+1>)`\n• change order of first and second argument: `bind(f{}, _2, _1, _from<3>)`\n• swap positions of Nth and Kth argument: `bind(f{}, _to<N>, _at<K>, _between<N+1, K>, _at<N>, _from<K+1>)`\n• forward only first N arguments: `bind(f{}, _to<N>)`\n• drop first N arguments: `bind(f{}, _from<N+1>)`\n\n### Defining custom placeholders\n\nThe existing design of `std::bind` allows the programmer to specify his own placeholder types via the specialization of trait `is_placeholder`. It is used both to check if a given type `P` represents placeholder (`is_placeholder<P>::value > 0`) and to define the index of the call argument that will be passed as a replacement of the placeholder (`is_placeholder<P>::value`).\n\nThe extension proposed in this paper, preserves this functionality, and also adds the ability to create user-defined group placeholders. To achieve this goal the previous responsibility of the `is_placeholder` is divided into two separate type functions:\n\n• `is_placeholder` that is used to determine if given type `P` represents a placeholder. To mark type as placeholder, the specialization `is_placeholder<P>` should be derived from `integral_constant<int, K>`, where `K > 0`.\n• `parameter_indices` that is used to determine indices of call arguments that should be passed as replacement of the placeholder. This type trait accepts the number of call arguments passed to functor as a second parameter. To signal that arguments with indices `i1, i2, ..., iK` should be used instead of placeholder, the specialization of the `parameter_indices<T, N>` should be derived from `integer_sequence<int, i1, i2, ..., iK>`.\n\nTo preserve backward compatibility with the existing single argument placeholders, the default implementation of `parameter_indices<P, N>` is derived from `integer_sequence<int, is_placeholder<P>::value>` for every placeholder type `P`.\n\nAn example implementation of `_args` placeholder that accepts indices of arguments that should be forwarded is provided bellow:\n\n```template<int... Is>\nstruct args_placeholder {};\n\ntemplate<int... Is>\nconstexpr args_placeholder _args{};\n\nnamespace std\n{\ntemplate<int... Is>\nstruct is_placeholder<args_placeholder<Is...>>\n: integral_constant<int, 1> {};\n\ntemplate<int... Is, int N>\nstruct parameter_indices<args_placeholder<Is...>, N>\n: integer_sequence<int, Is...> {};\n}```\n\n## Design Decisions\n\n### Use of variables templates to define placeholders\n\nThe variable templates are used to define placeholders instead of set of extern variables. This approach allows the programmer to compute positions of passed parameters at compile time, which is cumbersome in case of existing placeholders. In addition the author finds single definition of `_from`, instead list of `_1onwards, _2onwards, ..., _Nonwards`, being more elegant.\n\n### Naming of placeholders\n\nParameter group placeholder proposed in this paper has names that begins with underscore (`_all`, `_from`) instead of the most obvious `all`, `from`. These names was chosen to reduce risk name collision in code, that uses `bind` in combination with using directive for `std::placeholders` namespace. Example:\n\n``` std::vector<std::string> erase_empty(std::vector<std::string> v)\n{\nusing namespace std;\nusing namespace std::placeholders;\n\nauto from = remove_if(begin(v), end(v), bind(&string::empty, _1));\nv.erase(from, end(v));\nreturn v;\n}\n```\n\nFurthermore the author perceive this names as more consistent with the existing numbered placeholders (`_1`, `_2`, ...).\n\n### Number of parameters required by `_from<N>`\n\nThe addition of the `_from<N>` placeholder opens the question about its behaviour when the number of parameters passed to the forwarding call wrapper produced as a result of `bind(&foo, _from<N>)` is equal to `N-1`. There are two possible approaches:\n\n1. forward no arguments in place of `_from<N>` to target callable object,\n2. make such invocation ill-formed and require at least `N` arguments if the `_from<N>` is used.\n\nThe first behaviour was chosen by this proposal, because it is more general and allows to easily simulate the second one by passing `_N, _from<N+1>` instead of `_from<N>` as argument.\n\n### Non-type template argument of type `int`\n\nThe non-type template argument of `_at`, `_from`, `_to`, `_between` and `parameter_indices` has an `int` type, although they values are required to be non-negative. This decision was made to keep them consistent with existing `is_placeholder` trait, that uses `int` to represent index of forwarded parameter.\n\n## Impact On The Standard\n\nThis proposal has no dependencies beyond a C++14 compiler and Standard Library implementation. (It depends on perfect forwarding, varidatic templates, variable templates, `decltype` and trailing return types.)\n\nNothing depends on this proposal.\n\n## Proposed wording\n\nChange the section 20.10 [function.objects]/2.\n\n```// 20.10.9, bind:\ntemplate<class T> struct is_bind_expression;\ntemplate<class T> struct is_placeholder;\ntemplate<class T, int N> struct parameter_indices;\n\ntemplate<class F, class... BoundArgs>\nunspecified bind(F&&, BoundArgs&&...);\ntemplate<class R, class F, class... BoundArgs>\nunspecified bind(F&&, BoundArgs&&...);\n\nnamespace placeholders {\n// M is the implementation-defined number of placeholders\nextern unspecified _1;\nextern unspecified _2;\n.\n.\n.\nextern unspecified _M;\n\ntemplate<int N>\nunspecified _at;\n\ntemplate<int N>\nunspecified _from;\n\ntemplate<int N>\nunspecified _to;\n\ntemplate<int B, int E>\nunspecified _between;\n\nextern unspecified _all;\n}\n```\n\nChange the paragraph 20.10.9.1.2 Class template `is_placeholder` [func.bind.isplace].\n\n`is_placeholder` can be used to detect the standard placeholders `_all`, `_between<B, E>`, `_to<N>`, `_from<N>`, `_at<N>`, `_1`, `_2`, and so on. `bind` uses `is_placeholder` to detect placeholders.\n\nInstantiations of the `is_placeholder` template shall meet the UnaryTypeTrait requirements (20.11.1). The implementation shall provide a definition that has the BaseCharacteristic of `integral_constant<int, J>` if `T` is the type of `std::placeholders::_J`, otherwise it shall have a BaseCharacteristic of `integral_constant<int, 0>`.:\n\n• `integral_constant<int, 1>` if `T` is the type of `std::placeholders::_all`,\n• `integral_constant<int, 1>` if `T` is the type of `std::placeholders::_from<N>` and `N > 0`,\n• `integral_constant<int, 1>` if `T` is the type of `std::placeholders::_to<N>` and `N > 0`,\n• `integral_constant<int, 1>` if `T` is the type of `std::placeholders::_between<B, E>` and `B > 0` and `E >= B`,\n• `integral_constant<int, J>` if `T` is the type of `std::placeholders::_at<J>` or `std::placeholders::_J`,\n• `integral_constant<int, 0>` otherwise.\n\nA program may specialize this template for a user-defined type `T` to have a BaseCharacteristic of `integral_constant<int, N>` with `N > 0` to indicate that `T` should be treated as a placeholder type.\n\nAfter paragraph 20.10.9.1.2 Class template `is_placeholder`, insert a new paragraph. (Paragraph 20.10.9.1.3 Function template `bind` [func.bind.bind] becomes 20.10.9.1.?)\n\n#### 20.10.9.1.3 Class template `parameter_indices`[func.bind.paramidx]\n\n```namespace std {\ntemplate<class T, int N> struct parameter_indices; // see below\n}```\n\n`bind` uses `parameter_indices` to determine indices of parameters of the forwarding call wrapper to be forwarded to stored callable object as replacement for placeholder.\n\nThe implementation shall provide a definition of `parameter_indices<T, N>` that is publicly and unambiguously derived from:\n\n• `integer_sequence<int>` if `T` is the type of `std::placeholders::_all` and `N == 0`,\n• `integer_sequence<int, 1, 2, ..., N>` if `T` is the type of `std::placeholders::_all` and `N > 0`,\n• `integer_sequence<int>` if `T` is the type of `std::placeholders::_between<B,B>` and `N >= B-1`,\n• `integer_sequence<int, B, B+1, ..., E-1>` if `T` is the type of `std::placeholders::_between<B,E>` and `B < E` and `N >= E-1`,\n• `integer_sequence<int>` if `T` is the type of `std::placeholders::_to<1>` and `N >= 0`,\n• `integer_sequence<int, 1, 2, ..., K-1>` if `T` is the type of `std::placeholders::_to<K>` and `N >= K-1`,\n• `integer_sequence<int>` if `T` is the type of `std::placeholders::_form<K>` and `N == K-1`,\n• `integer_sequence<int, K, K+1, ..., N>` if `T` is the type of `std::placeholders::_form<K>` and `N >= K`,\n• `integer_sequence<int, j>` if `T` is not one of the types described in the previous items and the value `j` defined as `is_placeholder<T>::value` is positive and `N >= j`,\n\nA program may specialize or partially specialize `parameter_indices` template for a user-defined placeholder type to be publicly and unambiguously derived from `integer_sequence<int, i1, i2, ..., iN>` with values `i1, i2, ..., iN` greater than zero, to indicate indices of parameters of the forwarding call wrapper to be forwarded to stored callable object as replacement for placeholder.\n\nA program is ill-formed if it necessitates the instantiation of `parameter_indices<T, N>` that does not satisfy criteria of any of the bullets in paragraph 1 and does not match a specialization or a partial specialization of template `parameter_indices` defined in the program.\n\nChange the paragraph 20.10.9.1.3 Function template `bind` [func.bind.bind].\n\n```template<class F, class... BoundArgs>\nunspecified bind(F&& f, BoundArgs&&... bound_args);\n```\nRequires:\n\n`is_constructible<FD, F>::value` shall be true. For each `Ti` in `BoundArgs`, `is_constructible<TiD, Ti>::value` shall be true. `INVOKE (fd, w1, w2, ..., wN)` (20.10.2) shall be a valid expression for some values `w1, w2, ..., wN`, where `N == sizeof...(bound_args)`. `fd` shall be a callable object ([func.def] 20.10.1).\n\nReturns:\n\nA forwarding call wrapper `g` with a weak result type (20.10.2). The effect of `g(u1, u2, ..., uM)` shall be ```INVOKE (fd, std::forward<V1>(v1), std::forward<V2>(v2), ..., std::forward<VN>(vN), result_of<FD cv & (V1, V2, ..., VN)>::type)``` `INVOKE (fd, std::forward<P1>(p1)..., std::forward<P2>(p2)..., ..., std::forward<PN>(pN)...)`, where `cv` represents the cv-qualifiers of `g` and the values and types of the elements of each of bound arguments `v1, v2, ..., vN` packs `p1, p2, ..., pN` are determined as specified below. The copy constructor and move constructor of the forwarding call wrapper shall throw an exception if and only if the corresponding constructor of `FD` or of any of the types `TiD` throws an exception.\n\nThrows:\n\nNothing unless the construction of `fd` or of one of the values `tid` throws an exception.\n\nRemarks:\n\nThe return type shall satisfy the requirements of `MoveConstructible`. If all of `FD` and `TiD` satisfy the requirements of `CopyConstructible`, then the return type shall satisfy the requirements of `CopyConstructible`. [ Note: This implies that all of FD and TiD are MoveConstructible. — end note ]\n\n```template<class R, class F, class... BoundArgs>\nunspecified bind(F&& f, BoundArgs&&... bound_args);\n```\nRequires:\n\n`is_constructible<FD, F>::value` shall be true. For each `Ti` in `BoundArgs`, `is_constructible<TiD, Ti>::value` shall be true. `INVOKE (fd, w1, w2, ..., wN)` (20.10.2) shall be a valid expression for some values `w1, w2, ..., wN`, where `N == sizeof...(bound_args)`. `fd` shall be a callable object ([func.def] 20.10.1).\n\nReturns:\n\nA forwarding call wrapper `g` with a weak result type (20.10.2). The effect of `g(u1, u2, ..., uM)` shall be `INVOKE (fd, std::forward<V1>(v1), std::forward<V2>(v2), ..., std::forward<VN>(vN), R)` `INVOKE (fd, std::forward<P1>(p1)..., std::forward<P2>(p2)..., ..., std::forward<PN>(pN)..., R)` , where `cv` represents the cv-qualifiers of `g` and the values and types of the elements of each of bound arguments `v1, v2, ..., vN` packs `p1, p2, ..., pN` are determined as specified below. The copy constructor and move constructor of the forwarding call wrapper shall throw an exception if and only if the corresponding constructor of `FD` or of any of the types `TiD` throws an exception.\n\nThrows:\n\nNothing unless the construction of `fd` or of one of the values `tid` throws an exception.\n\nRemarks:\n\nThe return type shall satisfy the requirements of `MoveConstructible`. If all of `FD` and `TiD` satisfy the requirements of `CopyConstructible`, then the return type shall satisfy the requirements of `CopyConstructible`. [ Note: This implies that all of FD and TiD are MoveConstructible. — end note ]\n\nThe values of theelements of each bound arguments `v1, v2, ..., vN` pack `pi` and their corresponding types `V1, V2, ..., VN` depend on the types `TiD` derived from the call to `bind`, number of parameter passed to invocation forwarding call wrapper `M = sizeof...(UnBoundArgs)` and the cv-qualifiers `cv` of the call wrapper `g` as follows:\n\n• if `TiD` is `reference_wrapper<T>`, the argument isthe pack contains single element with value `tid.get()` and its type `Vi` isof type `T&`;\n• if the value of `is_bind_expression<TiD>::value` is true, the argument isthe pack contains single element with value `tid(std::forward<Uj>(uj)...)` and its type `Vi` isof type `result_of<TiD cv & (Uj&&...)>::type&&`;\n• if the value `j` of `is_placeholder<TiD>::value` is not zeropositive and `parameter_indices<TiD, M>` is derived from `integer_sequence<int, j1, j2, ..., jK>`, the argument is `std::forward<Uj>(uj)` and its type `Vi` is `Uj&&` the pack contains `K` elements with values `std::forward<Uj1>(uj1), std::forward<Uj2>(uj2), ..., std::forward<UjK>(ujK)` of types `Uj1&&, Uj2&&, ..., UjK&&` respectively;\n• otherwise, the value isthe pack contains single element with value `tid` and its type `Vi` isof type `TiD cv &`.\n\nChange the paragraph 20.10.9.1.4 Placeholders [func.bind.place].\n\n```namespace placeholders {\n// M is the implementation-defined number of placeholders\nextern unspecified _1;\nextern unspecified _2;\n.\n.\n.\nextern unspecified _M;\n\ntemplate<int N>\nunspecified _at;\n\ntemplate<int N>\nunspecified _from;\n\ntemplate<int N>\nunspecified _to;\n\ntemplate<int B, int E>\nunspecified _between;\n\nextern unspecified _all;\n}\n```\n\nAll placeholder types shall be `DefaultConstructible` and `CopyConstructible`, and their default constructors and copy/move constructors shall not throw exceptions. It is implementation-defined whether placeholder types are `CopyAssignable`. `CopyAssignable` placeholders’ copy assignment operators shall not throw exceptions.\n\nA program that necessitates the instantiation of `_at<N>`, `_from<N>` or `_to<N>` with `N <= 0` is ill-formed.\n\nA program that necessitates the instantiation of `_between<B, E>` with `B <= 0` or `E <= 0` or `B > E` is ill-formed.\n\n## Implementability\n\nProposed change can be implemented as pure library extension in C++14. Implementation of `bind` function that conforms proposed wording can be found https://github.com/tomaszkam/proposals/tree/master/bind.\n\n## Acknowledgements\n\nJonathan Wakely originally proposed idea of multi parameter placeholders in discussion group ISO C++ Standard - Future Proposals.\n\nAndrzej Krzemieński and Ville Voutilainen offered many useful suggestions and corrections to the proposal.\n\n## Appendix: Why we need `bind` when we have a generic lambda?\n\nAfter the introduction of generic lambda and extensions to lambda capture, a portion of the C++ community expresses an opinion that `std::bind` is no longer necessary and should no longer be recommend and even become deprecated. The author disagrees with this opinion, and in support of his position, a number of use cases is discussed here that illustrate the superiority of `std::bind` over lambdas.\n\nThe author wants to emphasise that the aim of this section is to demonstrate situations where `std::bind` leads to more readable and less error prone code than lambdas. It is not to prove that `bind` should be used instead of lambda in every context.\n\n### Specifying return type\n\nThe default return type deduction for a lambda will preform return by value, which is an optimal approach when a built-in type is returned; for example when we use lambda to write a predicate (function returning `bool`) for STL algorithms; but it introduces performance overhead if returning by reference would be preferred.\n\nLet's assume that we want to transform a vector of `Employee` (`ve`) in to a vector of full names.\n\n```std::transform(std::begin(ve), std::end(ve), std::back_inserter(vfn),\n[](const Employee& e) { return e.full_name(); });```\n\nIf the `full_name` function returns a `const std::string&`, then above code will lead to copy of the string being created for every iteration to return value form the lambda and then element of vector will get initialized from this temporary. In this case cheap move construction will be used, but for legacy classes copy constructor may be used twice. To avoid the above problem we may specify the return type for a lambda.\n\n```std::transform(std::begin(ve), std::end(ve), std::back_inserter(vfn),\n[](const Employee& e) -> const auto& { return e.full_name(); });```\n\nThis approach will fix above problems, but if the function `full_name` is changed to return by value, then the code will cause a dangling reference problem. To avoid such problems we may use the `decltype(auto)` deduction:\n\n```std::transform(std::begin(ve), std::end(ve), std::back_inserter(vfn),\n[](const Employee& e) -> decltype(auto) { return e.full_name(); });```\n\nIf we attempt to repeat the same exercise using standard function wrappers, none of the above tough reasoning is necessary, and additionally we will benefit from a single syntax for handling method pointers and member pointers.\n\n```std::transform(std::begin(ve), std::end(ve), std::back_inserter(vfn),\nstd::mem_fn(&Employee::full_name));```\n\n### Passing arguments\n\nOne of the decisions that programmer must make when writing a lambda (and any other functions) is to decide how to pass arguments to it. If we write a comparator or a predicate that only checks the state of the object, then we can use `const auto&`. The choice becomes less obvious if we want to write a wrapper around a function that accepts arguments by value, for example:\n\n`std::string concat_several_times(std::string val, std::size_t n);`\n\nWe want to create a function that will concatenate given string 3 times. Lets begin with:\n\n`[](std::string val) { return concat_serveral_times(std::move(val), 3); }`\n\nThe above solution will create a temporary `std::string` every time the lambda is invoked, even if the C-style string is passed to the function. Also, we will always have a second temporary being created from the rvalue reference. In case of the `std::string` this will end up with cheap move-construction, but it may as well introduce additional copies for legacy classes that define only custom copy-construction. To avoid these problems we will use perfect forwarding to pass a parameter.\n\n`[](auto&& val) { return concat_serveral_times(std::forward<some_type>(val), 3); }`\n\nWhat type should we use as `some_type`? If we use `decltype(val)` then above code would be equivalent to:\n\n`template<typename T> void foo(T&& t) { bar(std::forward<T&&>(t)); }`\n\n`template<typename T> void foo(T&& t) { bar(std::forward<T>(t)); }`\n\nAre you ok with this additional rvalue reference? We could get rid of it by using `std::remove_rvalue_reference_t<decltype(val)>`, but according definition of `std::forward`, the behaviour is same in both cases. So finally, we can safely stick to:\n\n`[](auto&& val) { return concat_serveral_times(std::forward<decltype(val)>(val), 3); }`\n\nThe `std::bind` creates functors that perfectly forwards all non-bound parameters, so we can equivalently use:\n\n`std:bind(&concat_serveral_times, _1, 3)`\n\n### Capturing variables\n\nIn most common cases, when the closure does not outlive the context in which it was created and is invoked in the same thread of the execution, like when it is passed to STL algorithm, it is safe and optimal to use \"capture all be reference\" (`[&]`) semantics. For the situation when we want to pass closure, probably wrapped into `std::function`, outside the current context, then it is save to use \"capture all by value\" (`[=]`) — but only if we assume that we do not use any unmanaged pointer inside. However if we want to pass our functor to other thread of execution we need to be sure, that it would not be causing any data races, and they still may occur if some handler with shallow copy semantics is captured by a lambda (e.g. `std::shared_ptr`).\n\nThe above reasoning leads us to conclusion that when lambda is passed outside current context (either when passing it to other thread, or when returning it from a function) it is safer to explicitly specify the variables that should be captured. For example, given the following definitions:\n\n```struct Widget { void process(std::string&) const };\nstruct WidgetFactory { std::unqiue_ptr<Widget> create() };\n\nvoid process_in_parallel(std::vector<std::string>& vs, WidgetFactory& factory)\n{\nstd::vector<std::future<void>> results;\nfor (std::size_t i = 0; i < vs.size(); ++i)\nresults.emplace_back(std::async(some_callback));\nfor (std::future<void>& fut : results)\nfut.get();\n};```\n\nWe want to create a callback `some_callback` that will process a given element with a concrete widget. Our first attempt would be:\n\n`[&vs, &factory, i] { factory.create()->process(vs[i]); }`\n\nWe have accidentally postponed the creation of `Widget` until the point when lambda is invoked, thus causing a concurrent invocation of `factory` method, which could cause a data race. To fix this we might try to create the widget and capture it from local context:\n\n```for (std::size_t i = 0; i < vs.size(); ++i)\n{\nauto widget = factory.create();\nresults.emplace_back(std::async([&vs, widget, i] { widget->process(vs[i]); }));\n}```\n\nThe above code will not compile because we want to copy a move-only type `std::unique_ptr<Widget>`. In addition, note that we are capturing the whole vector `vs`, although it is sufficient to use only one element in a single thread. Both of this issues may be fixed with a C++14 extended lambda capture:\n\n`[&elem = vs[i], widget = factory.create()] { widget->process(elem); }`\n\nWhat it effectively does is to bind (or 'fix') two parameters to a (member) function; the Standard Library already provides a component designed exactly for such purpose, named `std::bind`:\n\n`std::bind(&Widget::process, factory.create(), std::ref(vs[i]))`\n\nIn contrast to the problem with creating a `Widget`, it is worth noticing that sometimes it is desired to capture some precomputed values in lambda. Suppose we want to find an `Employee` with the given first and last name:\n\n`std::find_if(std::begin(ve), std::end(ve), [](const auto& e) { return e.full_name() == first + ' ' + last; });`\n\nThis innocent-looking code has a performance issue inside: the string `first + ' ' + last` is constant for every element, but a new instance is created in every iteration. To avoid such problems we should capture the value:\n\n`std::find_if(std::begin(ve), std::end(ve), [name = first + \" \" + last](const auto& e) { return e.full_name() == name; });`\n\nAlthough the use of `std::bind` would also eliminate the problem, the author recommends the use of lambda is such case, because nested `std::bind` (which is necessary in this situation) would render a less readable code:\n\n`std::find_if(std::begin(ve), std::end(ve), std::bind(std::equal_to<>, std::bind(&Employee::full_name, _1), first + ' ' + last));`\n\n### Summary\n\nThe original aim of the lambda functions was to simplify writing of ad-hoc functors for use with STL algorithms, and indeed its design makes writing such predicates simple and efficient. As a consequence of such design, lambda is not efficient when used to write function wrappers. For example, let us compare the following simple bind expression and its lambda equivalent:\n\n```std::bind(&foo, _1, expr, std::ref(a));\n[e = expr, &a] (auto&& arg) -> decltype(auto) { return foo(std::forward<decltype(arg)>(arg), e, a); };```\n\nThe use of `std::bind` to write simple function wrappers allows the programmer to avoid running into correctness and performance problems described in this appendix. The choice between using `bind` and a lambda can be directly compared to the between the use of STL algorithms and writing a raw loop that performs the same task: both of the solution are feasible, but using the Standard Library is simpler.\n\nOf course, `std::bind` has its limitations, and should be used only for the simple task of binding constant values to a specific set of function arguments. If we want to create a function that contains composition of the two functions or more complex expression we should use a lambda or even write a separate function if the expression is large enough. Another drawback of `bind` is that if we want to use it with overloaded function name we need resolve the ambiguity at the point of invocation of `std::bind` and casting to appropriate function pointer is required.\n\n## References\n\n1. Chris Jefferson, Ville Voutilainen, \"Bug 40 - variadic bind\" (LEWG Bug 40, https://issues.isocpp.org/show_bug.cgi?id=40)\n2. Mikhail Semenov, \"Introducing an optional parameter for mem_fn, which allows to bind an object to its member function\" (N3702, http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3702.htm)\n3. Tomasz Kamiński, Implementation of bind function (https://github.com/tomaszkam/proposals/tree/master/bind)"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.67440873,"math_prob":0.89449286,"size":26551,"snap":"2019-26-2019-30","text_gpt3_token_len":6450,"char_repetition_ratio":0.13817003,"word_repetition_ratio":0.16908714,"special_character_ratio":0.24921848,"punctuation_ratio":0.22149411,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9608147,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-17T08:22:56Z\",\"WARC-Record-ID\":\"<urn:uuid:1cc89910-a5d6-4874-9b4a-205b3c232102>\",\"Content-Length\":\"41623\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:124da6a5-b8bc-42f7-b689-758e5a17dfdb>\",\"WARC-Concurrent-To\":\"<urn:uuid:f2165c5c-a119-4e5f-8fd2-570375b24fd3>\",\"WARC-IP-Address\":\"93.90.116.65\",\"WARC-Target-URI\":\"http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4171.html\",\"WARC-Payload-Digest\":\"sha1:V2JLYVDET7FS2HZYGSZVGTTQQP2JS7SW\",\"WARC-Block-Digest\":\"sha1:ZREODSC6FWJOIGXKRDHILZVQOOEBFTAE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195525133.20_warc_CC-MAIN-20190717081450-20190717103450-00231.warc.gz\"}"} |
https://e-booksdirectory.com/details.php?ebook=2076 | [
"",
null,
"# Elementary Linear Algebra by Keith Matthews\n\nElementary Linear Algebra\nby\n\nPublisher: University of Queensland\nNumber of pages: 302\n\nDescription:\nThis an introduction to linear algebra with solutions to all exercises. It covers linear equations, matrices, subspaces, determinants, complex numbers, eigenvalues and eigenvectors, identifying second degree equations, and three–dimensional geometry.\n\n(multiple PDF files)\n\n## Similar books",
null,
"Linear Algebra\nby - UC Davis\nThis textbook is suitable for a sophomore level linear algebra course taught in about twenty-five lectures. It is designed both for engineering and science majors, but has enough abstraction to be useful for potential math majors.\n(8051 views)",
null,
"Linear Algebra: A Course for Physicists and Engineers\nby - De Gruyter Open\nThis textbook on linear algebra is written to be easy to digest by non-mathematicians. It introduces the concepts of vector spaces and mappings between them without dwelling on theorems and proofs too much. It is also designed to be self-contained.\n(6704 views)",
null,
"Computational and Algorithmic Linear Algebra and n-Dimensional Geometry\nby\nA sophomore level book on linear algebra and n-dimensional geometry with the aim of developing in college entering undergraduates skills in algorithms, computational methods, and mathematical modeling. Written in a simple style with lots of examples.\n(14213 views)",
null,
"Introduction to Applied Linear Algebra: Vectors, Matrices and Least Squares\nby - Cambridge University Press\nThis groundbreaking textbook covers the aspects of linear algebra - vectors, matrices, and least squares - that are needed for engineering applications, data science, machine learning, signal processing, tomography, navigation, control, etc.\n(5553 views)"
] | [
null,
"https://e-booksdirectory.com/img/ebd-logo.png",
null,
"https://e-booksdirectory.com/images/9475.jpg",
null,
"https://e-booksdirectory.com/images/11636.jpg",
null,
"https://e-booksdirectory.com/images/blank.gif",
null,
"https://e-booksdirectory.com/images/12049.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8626231,"math_prob":0.80385786,"size":3123,"snap":"2022-40-2023-06","text_gpt3_token_len":648,"char_repetition_ratio":0.10387945,"word_repetition_ratio":0.7379913,"special_character_ratio":0.1946846,"punctuation_ratio":0.12830189,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9952311,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,4,null,6,null,null,null,7,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-07T17:34:50Z\",\"WARC-Record-ID\":\"<urn:uuid:0b43f9e1-5ee8-4c26-b08b-778f0c42c1ca>\",\"Content-Length\":\"11114\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:69c81b57-ceea-421e-99b6-95505ef71818>\",\"WARC-Concurrent-To\":\"<urn:uuid:6fa14223-5390-45dc-898f-305775818f58>\",\"WARC-IP-Address\":\"209.59.191.64\",\"WARC-Target-URI\":\"https://e-booksdirectory.com/details.php?ebook=2076\",\"WARC-Payload-Digest\":\"sha1:UXBRA3PYF66RORM5SCTBLBT3FCRZNUNO\",\"WARC-Block-Digest\":\"sha1:3VIEPYF5TBDWLBTOE4HN4GMHSCS2K47S\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500628.77_warc_CC-MAIN-20230207170138-20230207200138-00096.warc.gz\"}"} |
https://metanumbers.com/67808 | [
"# 67808 (number)\n\n67,808 (sixty-seven thousand eight hundred eight) is an even five-digits composite number following 67807 and preceding 67809. In scientific notation, it is written as 6.7808 × 104. The sum of its digits is 29. It has a total of 7 prime factors and 24 positive divisors. There are 31,104 positive integers (up to 67808) that are relatively prime to 67808.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity Even\n• Number length 5\n• Sum of Digits 29\n• Digital Root 2\n\n## Name\n\nShort name 67 thousand 808 sixty-seven thousand eight hundred eight\n\n## Notation\n\nScientific notation 6.7808 × 104 67.808 × 103\n\n## Prime Factorization of 67808\n\nPrime Factorization 25 × 13 × 163\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 3 Total number of distinct prime factors Ω(n) 7 Total number of prime factors rad(n) 4238 Product of the distinct prime numbers λ(n) -1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) 0 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 67,808 is 25 × 13 × 163. Since it has a total of 7 prime factors, 67,808 is a composite number.\n\n## Divisors of 67808\n\n1, 2, 4, 8, 13, 16, 26, 32, 52, 104, 163, 208, 326, 416, 652, 1304, 2119, 2608, 4238, 5216, 8476, 16952, 33904, 67808\n\n24 divisors\n\n Even divisors 20 4 2 2\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 24 Total number of the positive divisors of n σ(n) 144648 Sum of all the positive divisors of n s(n) 76840 Sum of the proper positive divisors of n A(n) 6027 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 260.4 Returns the nth root of the product of n divisors H(n) 11.2507 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 67,808 can be divided by 24 positive divisors (out of which 20 are even, and 4 are odd). The sum of these divisors (counting 67,808) is 144,648, the average is 6,027.\n\n## Other Arithmetic Functions (n = 67808)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 31104 Total number of positive integers not greater than n that are coprime to n λ(n) 1296 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 6747 Total number of primes less than or equal to n r2(n) 0 The number of ways n can be represented as the sum of 2 squares\n\nThere are 31,104 positive integers (less than 67,808) that are coprime with 67,808. And there are approximately 6,747 prime numbers less than or equal to 67,808.\n\n## Divisibility of 67808\n\n m n mod m 2 3 4 5 6 7 8 9 0 2 0 3 2 6 0 2\n\nThe number 67,808 is divisible by 2, 4 and 8.\n\n• Arithmetic\n• Abundant\n\n• Polite\n• Practical\n\n## Base conversion (67808)\n\nBase System Value\n2 Binary 10000100011100000\n3 Ternary 10110000102\n4 Quaternary 100203200\n5 Quinary 4132213\n6 Senary 1241532\n8 Octal 204340\n10 Decimal 67808\n12 Duodecimal 332a8\n20 Vigesimal 89a8\n36 Base36 1gbk\n\n## Basic calculations (n = 67808)\n\n### Multiplication\n\nn×y\n n×2 135616 203424 271232 339040\n\n### Division\n\nn÷y\n n÷2 33904 22602.7 16952 13561.6\n\n### Exponentiation\n\nny\n n2 4597924864 311776089178112 21140913054989418496 1433523032432722489376768\n\n### Nth Root\n\ny√n\n 2√n 260.4 40.7781 16.1369 9.25244\n\n## 67808 as geometric shapes\n\n### Circle\n\n Diameter 135616 426050 1.44448e+10\n\n### Sphere\n\n Volume 1.30596e+15 5.77792e+10 426050\n\n### Square\n\nLength = n\n Perimeter 271232 4.59792e+09 95895\n\n### Cube\n\nLength = n\n Surface area 2.75875e+10 3.11776e+14 117447\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 203424 1.99096e+09 58723.5\n\n### Triangular Pyramid\n\nLength = n\n Surface area 7.96384e+09 3.67432e+13 55365\n\n## Cryptographic Hash Functions\n\nmd5 92df85ffff76a4150a03d3f1e4d4cb54 d4741333092dd6ecc22dd92f2293832893b830a7 cb86fa643a7e2e0bbff17e6e6ea7c4a2827c39e6951b908c4269144e18137dc2 2c71a64b4c5645c0847cac02fb998f8be9dba0b23cba39a031dcfe807cec3d00ed5b5e643c99430308f44ea1b861165decca98a508850e47bb9251a47f1412f4 9c354313af54647d725f9de4d7ad97bd33b62f5c"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.60755587,"math_prob":0.97982776,"size":4528,"snap":"2021-43-2021-49","text_gpt3_token_len":1608,"char_repetition_ratio":0.118700266,"word_repetition_ratio":0.025335321,"special_character_ratio":0.45649293,"punctuation_ratio":0.076326005,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9953802,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-08T12:54:30Z\",\"WARC-Record-ID\":\"<urn:uuid:ecd70943-c94d-414a-81fc-40b286acd7d5>\",\"Content-Length\":\"39729\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3acde07a-ffce-47eb-be76-60946a10d18b>\",\"WARC-Concurrent-To\":\"<urn:uuid:d310742f-7633-4cac-9afd-a1070a0a93bc>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/67808\",\"WARC-Payload-Digest\":\"sha1:QCB5WZRC7FKFVQIR36EHMXKE633BTQEK\",\"WARC-Block-Digest\":\"sha1:XLM2S2FSZV42GQHBLCQWD7M4MH6TG3ZH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363510.40_warc_CC-MAIN-20211208114112-20211208144112-00376.warc.gz\"}"} |
https://www.stumblingrobot.com/2015/12/03/evaluate-the-integral-of-1-x2-4x-4x2-4x-5/ | [
"Home » Blog » Evaluate the integral of 1 / ((x2 – 4x + 4)(x2 – 4x + 5))\n\n# Evaluate the integral of 1 / ((x2 – 4x + 4)(x2 – 4x + 5))\n\nCompute the following integral.",
null,
"In the denominator we have",
null,
"Then we use partial fractions,",
null,
"This gives us the equation",
null,
"We evaluate at",
null,
"to obtain a value for",
null,
",",
null,
"Then using this value of",
null,
"and evaluating at",
null,
",",
null,
"and",
null,
"to obtain",
null,
"Solving this system of equations we obtain",
null,
"Therefore, we have",
null,
"### One comment\n\n1.",
null,
"Anonymous says:\n\nhelp \\int (3x+2)/((x^(2)+4+4)(x^(2)-1))"
] | [
null,
"https://www.stumblingrobot.com/wp-content/ql-cache/quicklatex.com-3f34796443b5e3f77501c180469308fe_l3.png",
null,
"https://www.stumblingrobot.com/wp-content/ql-cache/quicklatex.com-6cb7efd4a13764851e60079d7d56aaef_l3.png",
null,
"https://www.stumblingrobot.com/wp-content/ql-cache/quicklatex.com-0cf940f01e7ed3151c38339ea604e2a5_l3.png",
null,
"https://www.stumblingrobot.com/wp-content/ql-cache/quicklatex.com-8a2d3eb20a42e58808ed2ac49d28369e_l3.png",
null,
"https://www.stumblingrobot.com/wp-content/ql-cache/quicklatex.com-f6e3f1407e4a2d0de07a05855a9a7e58_l3.png",
null,
"https://www.stumblingrobot.com/wp-content/ql-cache/quicklatex.com-1d8f94421de0c281a0f367e25d5dcdba_l3.png",
null,
"https://www.stumblingrobot.com/wp-content/ql-cache/quicklatex.com-b603ddadf872219c8d24f1ea919f54ee_l3.png",
null,
"https://www.stumblingrobot.com/wp-content/ql-cache/quicklatex.com-1d8f94421de0c281a0f367e25d5dcdba_l3.png",
null,
"https://www.stumblingrobot.com/wp-content/ql-cache/quicklatex.com-fca7c84c6ec7d519cf5098fc316425fd_l3.png",
null,
"https://www.stumblingrobot.com/wp-content/ql-cache/quicklatex.com-32a7152fae4e40558d94e917456c3928_l3.png",
null,
"https://www.stumblingrobot.com/wp-content/ql-cache/quicklatex.com-2ef1fe1da7e0372fa788416c4c9352ef_l3.png",
null,
"https://www.stumblingrobot.com/wp-content/ql-cache/quicklatex.com-e8c74756d7f823f06fa81fddb58bb66c_l3.png",
null,
"https://www.stumblingrobot.com/wp-content/ql-cache/quicklatex.com-da18463f0d387ea145a0dd3196b75754_l3.png",
null,
"https://www.stumblingrobot.com/wp-content/ql-cache/quicklatex.com-0bcdc65c7a6cbb6ff7a337bd14e639c2_l3.png",
null,
"https://secure.gravatar.com/avatar/",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.71698856,"math_prob":0.9999205,"size":328,"snap":"2023-40-2023-50","text_gpt3_token_len":86,"char_repetition_ratio":0.11111111,"word_repetition_ratio":0.0,"special_character_ratio":0.2621951,"punctuation_ratio":0.07575758,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9995023,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,null,null,null,null,2,null,null,null,null,null,null,null,null,null,2,null,2,null,2,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-11T05:36:28Z\",\"WARC-Record-ID\":\"<urn:uuid:01de1b1e-d121-4b7d-9065-c9b3d8c14327>\",\"Content-Length\":\"63060\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6e6104c4-0818-4a3b-a3ed-a9bdfe9877fe>\",\"WARC-Concurrent-To\":\"<urn:uuid:49ab4428-a45d-4d27-b901-0d38f9d7b0c9>\",\"WARC-IP-Address\":\"194.1.147.70\",\"WARC-Target-URI\":\"https://www.stumblingrobot.com/2015/12/03/evaluate-the-integral-of-1-x2-4x-4x2-4x-5/\",\"WARC-Payload-Digest\":\"sha1:7F6LIU6MY5S6A6EFZH4NMUNJ444PJINQ\",\"WARC-Block-Digest\":\"sha1:4M7HYYO34ZEP5GZW6LK4KWJJF7GFBV7Z\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679103558.93_warc_CC-MAIN-20231211045204-20231211075204-00262.warc.gz\"}"} |
https://rudiseitz.com/2013/03/15/janya-ragas-34776-or-26864/ | [
"Music\n\n# Janya Ragas: 34,776 or 26,864?\n\nPrasanna mentioned to me that there are 34,776 theoretically possible janya ragas in the South Indian melakarta system and he asked if I could see how that number arises. I looked into the question and thought I’d post the details here for anyone who’s interested.\n\nRecall that a janya raga is one that is somehow derived from one of the 72 melakarta ragas. The 34,776 figure refers to one specific class of janya ragas: those that are created by omitting notes from the parent. In forming such “varja” ragas, we can omit up to two notes from the arohanam (ascent), the avarohanam (descent), or both. (And it’s permissible to omit different notes in the ascent from those we omit in the descent.) We can’t apply other processes of derivation like reordering notes or borrowing notes from other ragas: these processes lead to many more possibilities!\n\nSo where does the number 34,776 come from? Well, if we decide to omit one note from the arohanam of a melakarta raga, there are 6 ways to do it: any note besides sa is fair game for omission. If we’re going to omit two notes, there are (6 choose 2) = 15 possibilities. And of course, if we omit no notes, there’s only 1 way to do that. This gives 6+15+1=22 options. The same 22 options exist for the avarohanam, giving 22*22=484 ways of omitting notes from a parent raga to create a janya. Except, we don’t want to count the case where no notes are omitted in the arohanam and the avarohaman both, because this leaves us with the original raga. So, the total number of janya possibilities for each melakarta raga is actually 484-1 = 483. Multiplying this by the total number of melakarta ragas, we get 483*72 = 34,776. But there’s a catch…\n\nThe process of omitting notes from two distinct parent ragas can give us two janya ragas with the same notes. For example, we can get S R2 G3 P D2 (Mohanam), by omitting M1 and N2 from S R2 G3 M1 P D2 N2 (Harikambhoji), or by omitting M2 and N3 from S R2 G3 M2 P D2 N3 (Kalyani). In Western terms, one would say you can get 1 2 3 5 6 (major pentatonic) by omitting 4 and b7 from 1 2 3 4 5 6 b7 (Mixolydian) or by omitting #4 and 7 from 1 2 3 #4 5 6 7 (Lydian). So, the 34,776 figure contains many janya ragas that actually have identical swaras. Now, if the parent raga (including its mood, characteristic phrases, ornamentation patterns, and additional swaras) is kept in mind when performing the derived raga, then two ragas derived from different parents might be perceived as distinct even though the derived ragas happen to have the same notes. In this way of looking at raga derviation, 34,776 is a plausible theoretical count. However, if we’re only interested in janya possibilities that are distinct in their notes, and we’re not considering attachments to parent ragas, then 34,776 is an overcount.\n\nEnumerating the janya possibilities that are note-wise distinct is more complicated. The rest of the post describes an approach I came up with when I first started thinking about the problem. After I published the post, I received a comment from Narayana Santhanam pointing out that the technique of generating functions from enumerative combinatorics is another, more compact approach to counting janyas; and, in searching further, I found a 2002 paper by K. Balasubramanian that uses this approach. Finally, I learned that P. Sriram and V. N. Jambunathan had investigated this question and published similar findings in a 1991 paper in the Journal of the Madras Music Academy. (See the notes and comments at the end of this post for more details.) Although the approach I present here is more verbose than some of the others, I hope it will be interesting to readers looking for insight into how all the possibilities arise.\n\nIn what follows, I assume R2=G1, R3=G2, D2=N1, and D3=N2 even though one could say those swaras would be performed with different ornamentation and/or intonation. In this count, for example, an arohanam of S R2 M1 P D2 S will be considered equivalent to S G1 M1 P D2 as since we treat R2=G1.\n\nTo organize our count, we’ll consider how many notes occur in the union of the ascent and the descent of a janya raga. This union might have 5, 6 or 7 notes.\n\nIf the union contains 5 notes, then the ascent and descent must contain those same 5 notes. To find the number of janya ragas of this kind, we need to consider all the ways of choosing 4 notes from 11 possibilities without violating the melakarta rules. Note that we’re choosing 4 notes out of 11, not 5 out of 12, because one note, sa, is mandated. Unfortunately, we can’t use a simple formula for (11 choose 4) because this would count “illegal” cases where there are two ma’s, or where there are more than two notes betwen sa and ma. One way to count only the legal possibilities is to divide the scale above sa into four sections. The first section contains ri and ga. The second section contains ma. The third section contains pa. The fourth section contains da and ni. Now there are 6 ways of filling the first section (i.e. ri and ga can be R1/G1, R1/G2, R1/G3, R2/G2, R2/G3, or R3/G3), two ways of filling the second section (i.e. ma can be M1 or M2), one way of filling third section (pa is always Pa), and six ways of filling the fourth section. (This is how we get 6*2*1*6 = 72 melakarta ragas.) If we omit some notes, then one or more of the scale sections will no longer be full. (In particular, if the first section has only one note in it, there are 4 possibilities for its identity: R1, R2=G1, R3=G2, or G3.) If we write 2|1|1|2 to indicate a scale with all sections full, then here are the possibilities distributions of sections sizes for a five-note derived scale where the sections are not all full: 2|1|1|0, 2|0|1|1, 2|0|0|2, 2|1|0|1, 1|1|1|1, 1|0|1|2, 1|1|0|2, 0|1|1|2. Taking this list and substituting the sections sizes with the number of ways of filling each section to the designated size, we get the following count: 6*2*1*1 + 6*1*1*4 + 6*1*1*6 + 6*2*1*4 + 4*2*1*4 + 4*1*1*6 + 4*2*1*6 + 1*2*1*6 = 236.\n\nNow if the union of the ascent and descent contains 6 notes, the respective sizes of the ascent/descent might be 6/6, 5/6, 6/5, or 5/5. The 6/6 case is similar to above: we know the ascent and descent use the same six notes, and we need to consider the number of ways of choosing 5 out of 11 notes without violating the melakarta rules. Following the logic above, the possible distributions of section sizes are 1|1|1|2, 2|0|1|2, 2|1|0|2, and 2|1|1|1. This gives 4*2*1*6 + 6*1*1*6 + 6*2*1*6 + 6*2*1*4 = 204 possibilities. In the 5/6 or 6/5 case, we can see that side containing 5 notes must be a proper subset of the side containing 6. The number of possibilities here can be calculated as (# ways of choosing 5 out of 11 notes legally)*(# ways of removing one of those 5 notes to create the smaller side) = 204*5 = 1020. Since we can make the smaller side the ascent or the descent, we multiply the last figure by two, giving 2040. Finally, in the 5/5 case, we can see that the ascent and descent must share precisely 4 notes including sa (each side contains a note that’s not in the other, creating a union of size 6). The number of possibilities can be calculated as (# ways of choosing 5 out of 11 notes legally, to create the 6-note union including Sa)*(# ways of picking 3 notes from those 5 to be shared by the ascent and descent)*(# ways of picking one of the remaining 2 notes to be in the ascent, while the other goes in the descent) = 204*(5 choose 3)*2 = 4080.\n\nThe last situation is where the union of the ascent and descent contains 7 notes. Here, the respective sizes of the ascent/descent might be 5/5, 6/5, 5/6, 6/6, 5/7, 7/5, 6/7, or 7/6. In the 5/5 case, the intersection must contain 3 notes. The number of possibilities is: (# of ways of choosing 6 out of 11 notes following the melakarta rules, to create a 7-note parent raga including Sa)*(# ways of choosing 2 out of those 6, to create a 3-note intersection including Sa)*(# ways of picking two from outside the intersection to be in the ascent, while the other 2 go in the descent) = 72*(6 choose 2)*(4 choose 2) = 72*15*6 = 6480. In the 5/6 or 6/5 case, the intersection is size 4, and we can choose 3 out of six notes to form it. By similar logic to the previous case, the number of possibilities is 72*(6 choose 3)*(3 choose 2) = 72*20*3 = 4320 for each case by itself. In the 6/6 case the intersection must be size 5, and the number of possibilities is 72*(6 choose 4)*2 = 72*15*2 = 2160. In the 5/7 and 7/5 cases, the smaller side of the scale must be a proper subset of the larger one, and we can form the smaller side by omitting two out of six notes from the larger. This gives 72*(6 choose 2) = 1080 possibilities for each case. And in the 6/7 and 7/6 cases, the smaller side again must be a proper subset of the larger one, so the number of possibilities is 72*(6 choose 1) = 432 for each case.\n\nHere’s a recap of the cases we considered and the possibilities that exist in each case:\n\nUnion size 5. Case 5/5: 236 possibilities.\n\nUnion size 6. Case 6/6: 204 possibilities. Cases 5/6 and 6/5: 2040 possibilities together. Case 5/5: 4080 possibilities.\n\nUnion size 7. Case 5/5: 6480 possibilities. Cases 5/6 and 6/5: 8640 possibilities together. Case 6/6: 2160 possibilities. Cases 5/7 and 7/5: 2160 possibilities together. Cases 6/7 and 7/6: 864 possibilities together.\n\nThe grand total is 236+204+2040+4080+6480+8640+2160+2160+864= 26864.\n\nNow, how can we be sure that 26864 correct? In fact, I made many calculation errors as I was working through this the first time, leaving me in a state of great suspicion. However, after I spotted and corrected my mistakes, I gained further confidence by writing a Groovy script that generates all distinct janya possibilities by brute force, and also yields… 26864! (Warning: this is script-style code, meant for one-time use and not designed to be maintainable or easy to read, but it gets the job done.)\n\n```// this script is written in Groovy 2.1.0\n\n// generate all melakarta ragas, representing each\n// raga as a list of 11 ones and zeros\nmelakartas = []\nfor (riGa in [1,1,0,0].permutations()) {\nfor (ma in [0,1].permutations()) {\nfor (daNi in [1,1,0,0].permutations()) {\nmelakartas.add(riGa + ma + + daNi)\n}\n}\n}\nassert melakartas.size()==72\n\n// generate all ways of deleting up to two notes from\n// the ascent and descent of a raga\ndeletions = ([0..5,0..5].combinations()).collect( {it as Set} ) as Set\ndeletionPairs = [deletions,deletions].combinations()\ndeletionPairs.remove([[],[]])\n\n// generate all possible janyas as ascending/descending scale pairs,\n// by applying all possible deletions to each melakarta raga;\n// keep the janyas in a set so that identical janyas will\n// not be double-counted\njanyas = [] as Set\nfor (mela in melakartas) {\ndef noteIndices = (0..(mela.size()-1)).findAll({mela[it]==1})\nfor (deletionPair in deletionPairs) {\ndef ascending = mela.clone()\ndef descending = mela.clone()\ndeletionPair.each { ascending[noteIndices[it]]=0 }\ndeletionPair.each { descending[noteIndices[it]]=0 }\n}\n}\n\nprintln \"Number of Distinct Janyas: \\${janyas.size()}\"\n```\n\nNotes:\n\nHow many Janya ragas have actually been named and used in Carnatic music? I came across Raga Pravaham, a wonderful reference by musician, composer and musicologist D. Pattammal—the work includes over 5000 named ragas. At the end of the text, there is an appendix that places the number of janya possibilities at 126936, without eliminating duplicates. This count is so much larger than 34776 because it uses a looser definition of permissible janyas in which up to five notes may be deleted from the arohanam, avarohanam, or both.\n\nSee the comments by Narayana Santhanam below for an approach to counting janya ragas using generating functions. Also see Combinatorial Enumeration of Ragas by K. Balasubramanian, published in Journal of Integer Sequences Vol. 5, 2002 and How Many Janya Ragas Are There? by P. Sriram and V. N. Jambunathan in the Journal of the Madras Music Academy, 1991, pp. 144-155. See also a discussion of this topic at rasikas.org.\n\n## 17 thoughts on “Janya Ragas: 34,776 or 26,864?”\n\n1.",
null,
"sillykannan says:\n\nWhy is there a 16200 above? I would point out where if your points had been enumerated 😉\n\nAnyway, this is good contribution. Some more readable formatting would be nice. This kind of stuff (mathematical calculation) doesn’t have to be essay-style.\n\nSomething like this for example:\n\nS(J5) = 236\n\nS(J6) = S(J6-5/5) + S(J6-5/6) + S(J6-6/5) + S(J6-6/6)\n= 4080 + 1020 + 1020 + 204 = 6324\n\nS(J7) = S(J7-5/5) + S(J7-5/6) + S(J7-5/7) + S(J7-6/5) + S(J7-6/6) + S(J7-6/7) + S(J7-7/5) + S(J7-7/6)\n= 6480 + 4320 +1080 + 4320 + 2160 + 1080 + 432 + 432 = 20304\n\nS(J) = S(J5) + S(J6) + S(J7) = 26864\n\nwheree S(Jn) stands for strength of the set of all unique Janya ragas with n notes total\nS(Jn-p/q) stands for the strength of the set of all unique Janya ragas with n notes total\nwhere p are used in the ascend and q are used in the descend\n\nExplanation for individual sets:\nS(J5): blah blah\nS(J6): blah blah blah\n\n2.",
null,
"rudiseitz says:\n\nThanks for the comment. The 16200 was a mistake left over from an earlier version of the calculation — it’s been fixed!\n\n3.",
null,
"Narayana Santhanam says:\n\ni got to this blog off prasanna’s facebook post. there is an elegant trick to calculating the 5/5 6/6 or 7/7 numbers, which can then be extended to get the number of janya ragas.\n\nthe number of ways you could obtain r/r notes (r notes going up, same r notes coming down) is just the coefficient of x^r in the product (1+4x+6x^2)^2 (1+2x)(1+x). You “automagically” get 5/5 to be 236, 6/6 to be 204 and 7/7 to be the well known 72.\n\nI sometimes give this as a problem to my students studying probability or combinatorics. bonus points for figuring out how to do the asymmetrics—5/6 5/7, etc by the same approach :). i didn’t know this was not very well known, but i suspect most, if not all my math-oriented friends know this.\n\n1.",
null,
"rudiseitz says:\n\nThanks for the comment. I’m wondering if we’re working with different definitions of acceptable janya ragas. You’re speaking of the r/r notes as “r notes going up, same r notes coming down.” In my count, I am also allowing janyas ragas that have different notes in the ascent and the descent. For example, taking S R1 G3 M1 P D1 N3 as the parent raga, I would allow [S R1 M1 P D1] [N3 P G3 R1 S] as a janya: the ascent and descent have 5 notes each, but those 5 are not identical, and in fact there are 7 notes in the union.\n\nI get 236 possibilities in the 5/5 case where same notes are used in the ascent and descent, but I also consider the 5/5 case where different notes are used on each side, leading to many additional possibilities. Does your approach consider this?\n\n1.",
null,
"Narayana Santhanam says:\n\nPatience, friend! 🙂 I just started with the symmetric case since it is the simplest. For this case the approach is perhaps transparent. But this exact approach can be used for any other constraint (as in 5 notes up 6 notes down, or 5 notes up 5 notes down but not the same—as you asked). In fact there is one line that simultaneously gives you all the answers above, obtained in exactly the way I stated above.\n\nDo you want to give it a shot? Hint: use a polynomial in two variables, x and y instead of just one in the symmetric case.\n\nBtw, if you want all janya ragas with 5 up and 5 down (not necessarily the same, but the union must be less than or equal to 7—the normal definition), the answer is 10796. The following give you numbers for other combinations: (all janya ragas, no simplification): 5 up and 6 down is 5340, while 5 up and 7 down is as you calculated to be 1080. 6 up and 6 down is 2364 while 6 up and 7 down is 432 (which I think you probably have counted).\n\nTotal is 26864 as you calculate. But you get it error free in one (albeit elaborate) line.\n\nPS: if you allow 4 notes up and 4 notes down, the number of such janya ragas is 10540 :). For the day when we expand our repertoire.\n\nPPS: These polynomials are called generators. Let me know and i can give you the generator for janya ragas in general :). But try it out first—they are a very effective and elegant way to count.\n\n4.",
null,
"Narayana Santhanam says:\n\nThe “Janya Polynomial” is [ (1+4y+6y^2) + 4x(1+4y+3y^2) + 6x^2(1+2y+y^2) ]^2 (1+2x+2y+2xy) (1+x+y+xy)\n\nIf you want the number of janya ragas that have r+1 notes going up and k+1 notes going down, it is given by the coefficient of x^r y^k in the polynomial above.\n\nBut if you want to figure out how and why it works, think about the symmetric case in my previous example first :). You can multiply out the polynomial by hand if you are careful, but most off-the-shelf scientific computing tools would multiply it out for you.\n\n1.",
null,
"rudiseitz says:\n\nNarayana,\n\nThank you very much for sharing this!\n\nIn my previous comment I did not mean to convey impatience, I only wanted to check that we were working with the same definition, which I see that we are. I see that generating functions are a more compact and general purpose way of approaching problems of this sort. On the other hand, I’ve left my original, verbose reasoning in place in this post as I’ve always found that comparing multiple pathways to a solution is a good way of building intuition into the dynamics of a problem. For what it’s worth, the numbers I calculated match those you gave in your previous comment. For example, you gave 10796 as the number of all pentatonic janyas (union size <= 7), and that can also be see in my calculations as 236 (case 5/5 where union size is 5) + 4080 (case 5/5 where union size is 6) + 6480 (case 5/5 where union size is 7).\n\nYou mentioned that you have sometimes used this problem in your university classes. Since I think the question of Janya ragas is of interest to people with all different backgrounds (some mathematical, some not), I would also invite you to mention any references that might be useful for less mathematical readers looking to interpret the \"Janya polynomial\" you provided, and build their understanding of how generating functions can be used in enumeration problems in general. (I would post such material myself, but that requires that I solidify my own knowledge first, which I might not be able to do in a satisfactory way in the next couple of days 🙂 So, I'd invite you to provide whatever tips and reference materials have been useful in your teaching experience. Thanks again,\n\nRudi\n\n1.",
null,
"Narayana Santhanam says:\n\nRudi, no worries at all. Didn’t mean “patience” in a accusatory tone—it is just the way I talk :). I would be glad to write something down with references.\n\nAnd don’t get me wrong—I enjoyed your writeup. It is very carefully done and well written, and most definitely must remain. Often understanding something is usually equivalent to seeing it in many ways, so the more ways we have at getting this, the better we will do—I was just contributing one more way to do it that I think is exciting :).\n\n5.",
null,
"Vignesh Subramanian says:\n\nDear Rudi:\nI came here from Prasanna’s FB post and I am really thrilled to see this explanation/ discussion. Though Narayanan mentioned it as a fairly well known problem/ solution in mathematical circles, musicians like me had no insight on how to eliminate the redundancies and arrive at unique number of Janyas. So this link should be preserved for posterity and perhaps popular carnatic forums across the web should be alerted of this. (I am doing my bit on this straight away).\n\nAlso the argument of Mohanam’s swaras sung as a derivative of Hari kambodhi can sound different when sung as a derivative of Kalyani is true in some cases. For example, When the note Ma is not sung at all and if I do an aalapana with just Sa Ri Ga Pa Dha Ni, One can still make out if I am singing Sankarabharanam (Over emphasize on Sa Ri Ga) of Kalyani (repeated gamakam on Ni touching the adjacent Sa). So there is surely some merit in this “Raaga Bhavam” argument, but surely it cannot be extrapolated as true for differentiating redundant janyas. Regardless, this is a subjective discussion and travels tangential to the world of mathematics.\n\nFor authoritarian discussions on unique janyas, now I can sleep with peace that what I (may have hyperbolically) considered as Riemann’s hypothesis of carnatic music is solved. Thanks again.\n\n~ Vicky\n\n1.",
null,
"rudiseitz says:\n\nDear Vicky,\n\nThank you so much for your comment. I’m glad that this post has been informative!\n\nActually, I have been meaning to credit you here, and I’m happy that you’ve given me this opportunity to do so. You see, when Prasanna first mentioned the 34,776 figure, I searched around the web to find references to it. I couldn’t find a definitive source for the figure, but I saw it mentioned on a number of Carnatic music blogs and information sites. One of the sites I stumbled upon was this blog where you posted a comment in 2005 asking about the problem of redundancies:\n\nIt was your comment there (assuming you’re the same Vicky) that got me thinking about how the redundancies could be removed from the count. (You also mentioned the question of Mohanam’s parentage, which I came across again in further investigation and incorporated into my post.) So, thank you for providing the inspiration to look more closely into this problem!\n\nRegarding the 34,776 figure, I’m interested to know the original source. You mention Sangeeta Sastra by A. S. Panchapakesa Iyer, which I haven’t been able to track down. Do you know if that’s where the figure was first presented? I also came across it at the following link, where it seems to be attributed to Shri S. Rajam:\n\nhttp://indiasvedas.blogspot.com/p/sama-veda.html\n\nThanks,\n-Rudi\n\n1.",
null,
"Vignesh Subramanian says:\n\nDear Rudi:\nWow, that was indeed me who posted those question on redundancies in Chinmayi’s blog. That was quite a while ago and its a small world !! I had tried in the meanwhile to find the answer to this question on my own with no significant success 🙂\n\nSangeeta Sastra that I mentioned is at least published in 1989 (May be a previous edition existed). Iyer himself was the principal of musical school in Mylapore, Chennai, as early as 1942. So while it is not that old/ authoritative in stating the figure of 34776, it is credible enough. I do have the abridged version of that book provided as an appendix to the “Ganamrudha Varna malika” book by the same author (One of the widely used text books for learning varnams). If you are interested I can scan those pages and post it here.\n\nAnd yes, Shri. Rajam is an authoritative figure in music theory as well. So the reference you provided on his source should be official enough as well.\n\nRegards\nVicky\n\n6.",
null,
"P. Sriram says:\n\nI worked on this issue some years ago and published my result (26864) in the Journal of the Madras Music Academy in 1991. Scanned copy of this issue can be found at http://issuu.com/themusicacademy/docs/1991. I had a nice camera ready manuscript, unfortunately the Academy was still typesetting, so my 26864 shows up as several different numbers. And all my nice tables (referred to in the text of the paper) disappeared; in the words of the Editor, tables were too expensive to typeset.\n\n1.",
null,
"rudiseitz says:\n\nProf. Sriram,\n\nThank you so much for sharing the link to your work from 1991! Your paper must have been the first time the 26864 figure was published.\n\nWow — tables too expensive to typeset — how times have changed! Another sign of changing times: you verified your result with a Fortran program that ran for a few minutes on a Macintosh II. For this post, I used a Groovy 2.1.0 script that ran on a MacBook Air in a little under 5 seconds. One thing that didn’t change in all those years: the number 26864 🙂\n\nFor the convenience of anyone who follows the issuu.com link above, the article “How Many Janya Ragas Are There?” begins on page 144 of the Journal, which is page 149 of the scanned document.\n\nThanks again,\n-Rudi\n\n7.",
null,
"ravikumarv says:\n\nWonderful explanation. I liked the way you explained the duplication (through a Audava Audava) raga like Mohanam and clearly explained the calculations and arrived at 26,864.\n\n1.",
null,
"ravikumarv says:\n\nI am impressed by the way you cross checked using a script 🙂\n\n1.",
null,
"rudiseitz says:\n\nThanks! 🙂\n\n8.",
null,
"Amal says:\n\nok….\nThe… Tetratonic scales and Tritonic scales are not included in the above 34776 or 26864\n\nThe maximum number of janya ragas that can be derrived from a particular major scale is 3428\nso the maximum no of possible janya ragas = 3428*72 = 246816,without excluding duplication"
] | [
null,
"https://2.gravatar.com/avatar/e072bb05583813364b3cb45034104ac3",
null,
"https://1.gravatar.com/avatar/101471d0464ffbcd99edfcf71238598b",
null,
"https://i2.wp.com/graph.facebook.com/3323218/picture",
null,
"https://1.gravatar.com/avatar/101471d0464ffbcd99edfcf71238598b",
null,
"https://i2.wp.com/graph.facebook.com/3323218/picture",
null,
"https://i2.wp.com/graph.facebook.com/3323218/picture",
null,
"https://1.gravatar.com/avatar/101471d0464ffbcd99edfcf71238598b",
null,
"https://i2.wp.com/graph.facebook.com/3323218/picture",
null,
"https://1.gravatar.com/avatar/44e257b581686136f3b708663f65c4b2",
null,
"https://1.gravatar.com/avatar/101471d0464ffbcd99edfcf71238598b",
null,
"https://1.gravatar.com/avatar/44e257b581686136f3b708663f65c4b2",
null,
"https://1.gravatar.com/avatar/d3fcb0fd05f761f6a7ca3b1739bdb706",
null,
"https://1.gravatar.com/avatar/101471d0464ffbcd99edfcf71238598b",
null,
"https://2.gravatar.com/avatar/8494da9fc77f81cbcc94d0c57828e290",
null,
"https://2.gravatar.com/avatar/8494da9fc77f81cbcc94d0c57828e290",
null,
"https://1.gravatar.com/avatar/101471d0464ffbcd99edfcf71238598b",
null,
"https://1.gravatar.com/avatar/194bbe95075504d673d2a8311faa52eb",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91449934,"math_prob":0.8618542,"size":24164,"snap":"2019-51-2020-05","text_gpt3_token_len":6557,"char_repetition_ratio":0.12975994,"word_repetition_ratio":0.033870593,"special_character_ratio":0.27267838,"punctuation_ratio":0.104211554,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95078826,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-25T18:53:44Z\",\"WARC-Record-ID\":\"<urn:uuid:7d223f56-0351-4b40-acb6-2987eb234378>\",\"Content-Length\":\"118172\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1aa05a19-d3be-492c-8456-a21d50bb0b4c>\",\"WARC-Concurrent-To\":\"<urn:uuid:4e689516-e84d-4437-b0a2-eaef1392dddf>\",\"WARC-IP-Address\":\"192.0.78.24\",\"WARC-Target-URI\":\"https://rudiseitz.com/2013/03/15/janya-ragas-34776-or-26864/\",\"WARC-Payload-Digest\":\"sha1:BOVVF4U57ZCPLUFXANGJXJ525PL4UI3V\",\"WARC-Block-Digest\":\"sha1:UFPY3DOCWDLIBDMKSWADJE33GEGEN3L2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251678287.60_warc_CC-MAIN-20200125161753-20200125190753-00247.warc.gz\"}"} |
https://percentagecalculator.guru/what-is-20-percent-of-12000/ | [
"Take the help of What is x percent of y calculator an online math tool that calculates 20% of 12000 easily along with a step by step solution detailing how the result 2400 arrived.\n\nWhat is\n% of\n\n## What is 20 Percent of 12000?\n\n20 percent *12000\n\n= (20/100)*12000\n\n= (20*12000)/100\n\n= 240000/100 = 2400\n\nNow we have: 20 percent of 12000 = 2400\n\nQuestion: What is 20 percent of 12000?\n\nWe need to determine 20% of 12000 now and the procedure explaining it as such\n\nStep 1: In the given case Output Value is 12000.\n\nStep 2: Let us consider the unknown value as x.\n\nStep 3: Consider the output value of 12000 = 100%.\n\nStep 4: In the Same way, x = 20%.\n\nStep 5: On dividing the pair of simple equations we got the equation as under\n\n12000 = 100% (1).\n\nx = 20% (2).\n\n(12000%)/(x%) = 100/20\n\nStep 6: Reciprocal of both the sides results in the following equation\n\nx%/12000% = 20/100\n\nStep 7: Simplifying the above obtained equation further will tell what is 20% of 12000\n\nx = 2400%\n\nTherefore, 20% of 12000 is 2400\n\n### Solution for What is 12000 Percent of 20\n\n12000 percent *20\n\n= (12000/100)*20\n\n= (12000*20)/100\n\n= 240000/100 = 2400\n\nNow we have: 12000 percent of 20 = 2400\n\nQuestion: Solution for What is 12000 percent of 20?\n\nWe need to determine 12000% of 20 now and the procedure explaining it as such\n\nStep 1: In the given case Output Value is 20.\n\nStep 2: Let us consider the unknown value as x.\n\nStep 3: Consider the output value of 20 = 100%.\n\nStep 4: In the Same way, x = 12000%.\n\nStep 5: On dividing the pair of simple equations we got the equation as under\n\n20 = 100% (1).\n\nx = 12000% (2).\n\n(20%)/(x%) = 100/12000\n\nStep 6: Reciprocal of both the sides results in the following equation\n\nx%/20% = 12000/100\n\nStep 7: Simplifying the above obtained equation further will tell what is 12000% of 20\n\nx = 2400%\n\nTherefore, 12000% of 20 is 2400\n\n### Nearby Results\n\n20% of Result\n12000 2400\n12000.01 2400.002\n12000.02 2400.004\n12000.03 2400.006\n12000.04 2400.008\n12000.05 2400.01\n12000.06 2400.012\n12000.07 2400.014\n12000.08 2400.016\n12000.09 2400.018\n12000.1 2400.02\n12000.11 2400.022\n12000.12 2400.024\n12000.13 2400.026\n12000.14 2400.028\n12000.15 2400.03\n12000.16 2400.032\n12000.17 2400.034\n12000.18 2400.036\n12000.19 2400.038\n12000.2 2400.04\n12000.21 2400.042\n12000.22 2400.044\n12000.23 2400.046\n12000.24 2400.048\n20% of Result\n12000.25 2400.05\n12000.26 2400.052\n12000.27 2400.054\n12000.28 2400.056\n12000.29 2400.058\n12000.3 2400.06\n12000.31 2400.062\n12000.32 2400.064\n12000.33 2400.066\n12000.34 2400.068\n12000.35 2400.07\n12000.36 2400.072\n12000.37 2400.074\n12000.38 2400.076\n12000.39 2400.078\n12000.4 2400.08\n12000.41 2400.082\n12000.42 2400.084\n12000.43 2400.086\n12000.44 2400.088\n12000.45 2400.09\n12000.46 2400.092\n12000.47 2400.094\n12000.48 2400.096\n12000.49 2400.098\n20% of Result\n12000.5 2400.1\n12000.51 2400.102\n12000.52 2400.104\n12000.53 2400.106\n12000.54 2400.108\n12000.55 2400.11\n12000.56 2400.112\n12000.57 2400.114\n12000.58 2400.116\n12000.59 2400.118\n12000.6 2400.12\n12000.61 2400.122\n12000.62 2400.124\n12000.63 2400.126\n12000.64 2400.128\n12000.65 2400.13\n12000.66 2400.132\n12000.67 2400.134\n12000.68 2400.136\n12000.69 2400.138\n12000.7 2400.14\n12000.71 2400.142\n12000.72 2400.144\n12000.73 2400.146\n12000.74 2400.148\n20% of Result\n12000.75 2400.15\n12000.76 2400.152\n12000.77 2400.154\n12000.78 2400.156\n12000.79 2400.158\n12000.8 2400.16\n12000.81 2400.162\n12000.82 2400.164\n12000.83 2400.166\n12000.84 2400.168\n12000.85 2400.17\n12000.86 2400.172\n12000.87 2400.174\n12000.88 2400.176\n12000.89 2400.178\n12000.9 2400.18\n12000.91 2400.182\n12000.92 2400.184\n12000.93 2400.186\n12000.94 2400.188\n12000.95 2400.19\n12000.96 2400.192\n12000.97 2400.194\n12000.98 2400.196\n12000.99 2400.198\n\n### Frequently Asked Questions on What is 20 percent of 12000?\n\n1. How do I calculate percentage of a total?\n\nTo calculate percentages, start by writing the number you want to turn into a percentage over the total value so you end up with a fraction. Then, turn the fraction into a decimal by dividing the top number by the bottom number. Finally, multiply the decimal by 100 to find the percentage.\n\n2. What is 20 percent of 12000?\n\n20 percent of 12000 is 2400.\n\n3. How to calculate 20 percent of 12000?\n\nMultiply 20/100 with 12000 = (20/100)*12000 = (20*12000)/100 = 2400."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6792107,"math_prob":0.9746583,"size":4481,"snap":"2023-40-2023-50","text_gpt3_token_len":1991,"char_repetition_ratio":0.2972973,"word_repetition_ratio":0.20689656,"special_character_ratio":0.66949344,"punctuation_ratio":0.2249104,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999057,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-07T04:50:09Z\",\"WARC-Record-ID\":\"<urn:uuid:f93285fc-0c95-4e08-9b2d-4ec0042ad76c>\",\"Content-Length\":\"51074\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3996225d-6a8b-4bc5-aecb-7fd194b3171c>\",\"WARC-Concurrent-To\":\"<urn:uuid:0b7e1677-3af0-4aef-aaa3-c5e2853373a1>\",\"WARC-IP-Address\":\"104.26.3.136\",\"WARC-Target-URI\":\"https://percentagecalculator.guru/what-is-20-percent-of-12000/\",\"WARC-Payload-Digest\":\"sha1:G6A6GJFPYQ4UXXIWHZ2S3V5CUY4NAHPT\",\"WARC-Block-Digest\":\"sha1:DL2THYUPNL5IH233CL6G3EC5YQVTCMU3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100632.0_warc_CC-MAIN-20231207022257-20231207052257-00305.warc.gz\"}"} |
https://mc-stan.org/docs/2_18/stan-users-guide/coding-an-ode-system.html | [
"## 13.2 Coding an ODE System\n\nA system of ODEs is coded directly in Stan as a function with a strictly specified signature. For example, the simple harmonic oscillator can be coded using the following function in Stan (see the user-defined functions chapter for more information on coding user-defined functions).\n\nreal[] sho(real t, // time\nreal[] y, // state\nreal[] theta, // parameters\nreal[] x_r, // data (real)\nint[] x_i) { // data (integer)\nreal dydt;\ndydt = y;\ndydt = -y - theta * y;\nreturn dydt;\n}\n\nThe function takes in a time t (a real value), a a system state y (real array), system parameters theta (a real array), along with real data in variable x_r (a real array) and integer data in variable x_i (an integer array). The system function returns the array of derivatives of the system state with respect to time, evaluated at time t and state y. The simple harmonic oscillator coded here does not have time-sensitive equations; that is, t does not show up in the definition of dydt. The simple harmonic oscillator does not use real or integer data, either. Nevertheless, these unused arguments must be included as arguments in the system function with exactly the signature shown above.\n\n### Strict Signature\n\nThe function defining the system must have exactly these argument types and return type. This may require passing in zero-length arrays for data or parameters if the system does not involve data or parameters. A full example for the simple harmonic oscillator, which does not depend on any constant data variables, is provided in the simple harmonic oscillator trajectory plot.\n\n### Discontinuous ODE System Function\n\nThe ODE integrator is able to integrate over discontinuities in the state function, although the accuracy of points near the discontinuity may be problematic (requiring many small steps). An example of such a discontinuity is a lag in a pharmacokinetic model, where a concentration is going to be zero for times $$0 < t < t'$$ for some lag-time $$t'$$, whereas it will be nonzero for times $$t \\geq t'$$. As an example, would involve code in the system such as\n\nif (t < t_lag)\nreturn 0;\nelse\n... return non-zero value...;\n\n### Varying Initial Time\n\nStan’s ODE solvers require the initial time argument to be a constant (i.e., a function of data or transformed data variables and constants). This means that, in general, there’s no way to use the integrate_ode function to accept a parameter for the initial time and thus no way in general to estimate the initial time of an ODE system from measurements."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7090786,"math_prob":0.9843696,"size":2543,"snap":"2019-13-2019-22","text_gpt3_token_len":596,"char_repetition_ratio":0.120913744,"word_repetition_ratio":0.0046838406,"special_character_ratio":0.23240267,"punctuation_ratio":0.10559006,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9961832,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-18T22:44:41Z\",\"WARC-Record-ID\":\"<urn:uuid:3c2b09c5-200c-4dcd-95f2-fd518c527678>\",\"Content-Length\":\"89443\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d1076c14-9b93-47a6-b429-d4df9a9a08b2>\",\"WARC-Concurrent-To\":\"<urn:uuid:0db3990a-6c3f-44f5-8603-37f5c7b985f7>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"https://mc-stan.org/docs/2_18/stan-users-guide/coding-an-ode-system.html\",\"WARC-Payload-Digest\":\"sha1:CCIWKXL5ZYBATV2Y4XVLL3ZX23BEFUU3\",\"WARC-Block-Digest\":\"sha1:2GNCPTUXAV2XK2QMAPLTX4HRDSEA4UOG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912201707.53_warc_CC-MAIN-20190318211849-20190318233849-00004.warc.gz\"}"} |
https://www.arxiv-vanity.com/papers/0708.3622/ | [
"# On cosmological observables in a swiss-cheese universe\n\nValerio Marra Dipartimento di Fisica “G. Galilei” Università di Padova, INFN Sezione di Padova, via Marzolo 8, Padova I-35131, Italy Department of Astronomy and Astrophysics, the University of Chicago, Chicago, IL 60637-1433 Edward W. Kolb Department of Astronomy and Astrophysics, Enrico Fermi Institute, and Kavli Institute for Cosmological Physics, the University of Chicago, Chicago, IL 60637-1433 Sabino Matarrese Dipartimento di Fisica “G. Galilei” Università di Padova, INFN Sezione di Padova, via Marzolo 8, Padova I-35131, Italy Antonio Riotto Département de Physique Théorique, Université de Gèneve, 24 Quai Ansermet, Gèneve, Switzerland, and INFN Sezione di Padova, via Marzolo 8, Padova I-35131, Italy\n###### Abstract\n\nPhoton geodesics are calculated in a swiss-cheese model, where the cheese is made of the usual Friedmann-Robertson-Walker solution and the holes are constructed from a Lemaître-Tolman-Bondi solution of Einstein’s equations. The observables on which we focus are the changes in the redshift, in the angular-diameter–distance relation, in the luminosity-distance–redshift relation, and in the corresponding distance modulus. We find that redshift effects are suppressed when the hole is small because of a compensation effect acting on the scale of half a hole resulting from the special case of spherical symmetry. However, we find interesting effects in the calculation of the angular distance: strong evolution of the inhomogeneities (as in the approach to caustic formation) causes the photon path to deviate from that of the FRW case. Therefore, the inhomogeneities are able to partly mimic the effects of a dark-energy component. Our results also suggest that the nonlinear effects of caustic formation in cold dark matter models may lead to interesting effects on photon trajectories.\n\n98.70.Cq\n\n## I Introduction\n\nIn this paper we explore a toy cosmological model in order to attempt to understand the role of large-scale non-linear cosmic inhomogeneities in the interpretation of observable data. The model is based on a swiss-cheese model, where the cheese consists of the usual Friedmann-Robertson-Walker (FRW) solution and the holes are constructed out of a Lemaître-Tolman-Bondi (LTB) solution. The advantage of this model is that it is a solvable model with strong nonlinearities, in particular the formation of caustics as expected in the cold dark matter (CDM) models.\n\nMost, if not all, observations are consistent with the cosmic concordance model according to which, today, one-fourth of the mass-energy of the universe is clustered and dominated by cold dark matter. The remaining three-quarters is uniform and dominated by a fluid with a negative pressure (dark energy, or ).\n\nWhile the standard CDM model seems capable of accounting for the observations, it does have the feature that approximately 95% of the mass-energy of the present universe is unknown. We are either presented with the opportunity of discovering the nature of dark matter and dark energy, or nature might be different than described by the CDM model. Regardless, until such time as dark matter and dark energy are completely understood, it is useful to look for alternative cosmological models that fit the data.\n\nOne non-standard possibility is that there are large effects on the observed expansion rate due to the back reaction of inhomogeneities in the universe. The basic idea is that all evidence for dark energy comes from observational determination of the expansion history of the universe. Anything that affects the observed expansion history of the universe alters the determination of the parameters of dark energy; in the extreme it may remove the need for dark energy.\n\nThis paper focuses on the effects of large-scale nonlinear inhomogeneities on observables such as the luminosity-distance–redshift relation. The ultimate goal is to find a realistic dust model that can explain observations (like the luminosity-distance–redshift relation) without the need of dark energy. The ultimate desire will be to have an exactly solvable realistic inhomogeneous model. Our model is but a first small step in this pragmatic and necessary direction.\n\nIf this first step is successful, we would show that inhomogeneities must be factored into the final solution. Even if we live in a CDM universe, inhomogeneities “renormalize” the theory adding an effective extra source to the dark energy. We have to be very careful in what we mean. The inhomogeneities renormalize the dust Einstein–de Sitter universe only from the observational point of view, that is luminosity and redshift of photons. Average dynamics is beyond this and we will not be concerned with this issue in this paper: if we find an effective cosmological constant, this will not mean that the universe is accelerating, but only that its luminosity distance-redshift relation will fit the observational data.\n\nHere we are not primarily interested in the backreaction effect that comes from the averaging procedure in General Relativity (see e.g. buchert_new ). Since we have an exact solution, we can directly calculate observables. Indeed, in this paper we are mainly interested in the effect of inhomogeneities on the dynamics of photons.\n\nWe can reformulate our present approach as follows: inhomogeneities renormalize the geodesics of photons. In the extreme case in which such a renormalization leads to a negative effective deceleration parameter in the luminosity distance-redshift relation, it might make us think that a dark-energy component exists.\n\nThe paper is organized as follows: In Sect. II we will specify the parameters of our swiss-cheese model. In Sect. III we study its dynamics. Then in Sect. IV we will discuss the geodesic equations for light propagation. We will apply them to see what an observer in the cheese (Sect. V) or in the hole (Sect. VI) would observe. The observables on which we will focus are the change in redshift , angular-diameter–distance , luminosity-distance–redshift relation , and the distance modulus .\n\nConclusions are given in Sect. VII. In two appendices we discuss the role of arbitrary functions in LTB models (Appendix A) and some technical issues in the solution of photon geodesics in our swiss-cheese model (Appendix B).\n\n## Ii The model\n\nWe study a swiss-cheese model where the cheese consists of the usual Friedmann–Robertson–Walker solution and the spherically symmetric holes are constructed from a Lemaître-Tolman-Bondi solution. The particular FRW solution we will choose is a matter-dominated, spatially-flat solution, i.e., the Einstein–de Sitter (EdS) model.\n\nIn this section we will describe the FRW and LTB model parameters we have chosen. But first, in Table 1 we list the units we will use for mass density, time, the radial coordinate, the expansion rate, and two quantities, and , that will appear in the metric.\n\nThe time appearing in Table 1 is not the usual time in FRW models. Rather, , where is the usual cosmological time and is the present age of the universe. Thus, is the present time and is the time of the big bang. Finally, the initial time of the LTB evolution is defined as .\n\nBoth the FRW and the LTB metrics can be written in the form\n\n ds2=−dt2+Y′2(r,t)W2(r)dr2+Y2(r,t)dΩ2, (1)\n\nwhere here and throughout, the “prime” superscript denotes and the “dot” superscript will denote . It is clear that the Robertson–Walker metric is recovered with the substitution and .\n\nThe above metric is expressed in the synchronous and comoving gauge.\n\n### ii.1 The cheese\n\nWe choose for the cheese model a spatially-flat, matter-dominated universe (the EdS model). So in the cheese there is no dependence to or . Furthermore, factors into a function of multiplying (), and in the EdS model . In this model , so in the cheese, the value of today, denoted as , is unity in the units of Table 1. In order to connect with the LTB solution, we can express the line element in the form\n\n ds2=−dt2+Y′2(r,t)dr2+Y2(r,t)dΩ2. (2)\n\nIn the cheese, the Friedman equation and its solution are (recall corresponds to the present time):\n\n H2(t) = 49ρ(t)=49(t+1)−2 (3) Y(r,t) = ra(t)=r(t+1)2/3(¯t+1)2/3, (4)\n\nwhere the scale factor is normalized so that at the beginning of the LTB evolution it is .\n\nFor the EdS model, . We also note that the comoving distance traveled by a photon since the big bang is .\n\n### ii.2 The holes\n\nThe holes are chosen to have an LTB metric lemaitre ; tolman ; bondi . The model is based on the assumptions that the system is spherically symmetric with purely radial motion and the motion is geodesic without shell crossing (otherwise we could not neglect the pressure).\n\nIt is useful to define an “Euclidean” mass and an “average” mass density , defined as\n\n M(r)=4π∫r0ρ(r,t)Y2Y′dr=4π3Y3(r,t)¯ρ(r,t). (5)\n\nIn spherically symmetric models, in general there are two expansion rates: an angular expansion rate, , and a radial expansion rate, . (Of course in the FRW model .) The angular expansion rate is given by\n\n H2⊥(r,t)=49¯ρ(r,t)+W2(r)−1Y2(r,t). (6)\n\nUnless specified otherwise, we will identify .\n\nTo specify the model we have to specify initial conditions, i.e., the position , the velocity and the density of each shell at time . In the absence of shell crossing it is possible to give the initial conditions at different times for different shells : let us call this time . The initial conditions fix the arbitrary curvature function :\n\n (7)\n\nwhere we can choose so that .\n\nIn a general LTB model there are therefore three arbitrary functions: , and . Their values for the particular LTB model we study are specified in the following subsection.\n\nIn Appendix A we provide a discussion about the number of independent arbitrary functions in a LTB model.\n\n#### ii.2.1 Our LTB model\n\nFirst of all, for simplicity we choose ; i.e., we specify the initial conditions for each shell at the same moment of time.\n\nWe now choose and in order to match the flat FRW model at the boundary of the hole: i.e., at the boundary of the hole has to match the FRW density and has to go to unity. A physical picture is that, given a FRW sphere, all the matter in the inner region is pushed to the border of the sphere while the quantity of matter inside the sphere does not change. With the density chosen in this way, an observer outside the hole will not feel the presence of the hole as far as local physics is concerned (this does not apply to global quantities, such the luminosity-distance–redshift relation for example). So the cheese is evolving as an FRW universe while the holes evolve differently. In this way we can imagine putting in the cheese as many holes as we want, even with different sizes and density profiles, and still have an exact solution of the Einstein equations (as long as there is no superposition among the holes and the correct matching is achieved). The limiting picture of this procedure is the Apollonian Gasket of Fig. 1, where all the possible holes are placed, and therefore the model has the strange property that it is FRW nowhere, but it behaves as an FRW model on the average. This idea was first proposed by Einstein and Straus einstein .\n\nTo be specific, we choose to be\n\n ρ(r,¯t)=Aexp[−(r−rM)2/2σ2]+ϵ(rrh), (8)\n\nwhere , , , , , and . In Fig. 2 we plot this chosen Gaussian density profile. The hole ends at which is111To get this number from Table 1 you need to multiply by . Mpc and roughly times smaller than . Note that this is not a very big bubble. But it is an almost empty region: in the interior the matter density is roughly times smaller than in the cheese. Our model consists of a sequence of up to five holes and the observer is looking through them. The idea, however, is that the universe is completely filled with these holes, which form a sort of lattice as shown in Fig. 3. In this way an observer at rest with respect to a comoving cheese-FRW observer will see an isotropic CMB along the two directions of sight shown in Fig. 3.",
null,
"Figure 2: The densities ρ(r,¯t) (solid curve) and ¯ρ(r,¯t) (dashed curve). Here, ¯t=−0.8 (recall tBB=−1). The hole ends at rh=0.042. The matching to the FRW solution is achieved as one can see from the plot of ¯ρ(r,¯t).",
null,
"Figure 3: Sketch of our swiss-cheese model. An observer at rest with respect to a comoving cheese-FRW observer will see an isotropic CMB along the two directions of sight marked with dotted red lines. Three possible positions for an observer are shown.\n\nIt is useful to consider the velocity of a shell relative to the FRW background. We define\n\n Δvsh(r,t)=˙aLTB(r,t)−˙aFRW(t), (9)\n\nwhere . To have a realistic evolution, we demand that there are no initial peculiar velocities at time , that is, to have an initial expansion independent of : . From Eq. (7) this implies\n\n E(r)=12H2FRW(¯t)r2−16πM(r)r. (10)\n\nThe graph of chosen in this way is shown in Fig. 4. As seen from the figure, the curvature is small compared with unity. Indeed, in many formulae appears, therefore one should compare with . In spite of its smallness, the curvature will play a crucial role to allow a realistic evolution of structures, as we will see in the next section.\n\nAlso in Fig. 4 we graph , which is the generalization of the factor in the usual FRW models. (It is not normalized to unity.) As one can see, is very nearly constant in the empty region inside the hole. This is another way to see the reason for our choice of the curvature function: we want to have in the center an empty bubble dominated by negative curvature.\n\nIt is important to note that the dynamics of the hole is scale-independent: small holes will evolve in the same way as big holes. To show this, we just have to express Eq. (6) with respect to a generic variable where fixes the scale. If we change , i.e., scale the density profile, we will find the same scaled shape for and the same time evolution. This property is again due to spherical symmetry which frees the inner shells from the influence of the outer ones: We can think of a shell as an infinitesimal FRW solution and its behavior is scale independent because it is a homogeneous and isotropic solution.",
null,
"Figure 4: Curvature E(r) and k(r) necessary for the initial conditions of no peculiar velocities.\n\n## Iii The dynamics\n\nNow we explore the dynamics of this swiss-cheese model. As we have said, the cheese evolves as in the standard FRW model. Of course, inside the holes the evolution is different. This will become clear from the plots given below.\n\nWe will discuss two illustrative cases: a flat case where , and a curved case where is given by Eq. (10). We are really interested only in the second case because the first will turn out to be unrealistic. But the flat case is useful to understand the dynamics.\n\n### iii.1 The flat case\n\nIn Fig. 5 we show the evolution of for the flat case, . In the figure is plotted for three times: (recall ), , and (corresponding to today).",
null,
"Figure 5: Behavior of Y(r,t) with respect to r, the peculiar velocities v(r,t) with respect to r, and the density profiles ρ(r,t) with respect to rFRW=Y(r,t)/a(t), for the flat case at times t=¯t=−0.8, t=−0.4 and t=t0=0. The straight lines for Y(r,t) are the FRW solutions while the dashed lines are the LTB solutions. For the peculiar velocities, matter is escaping from high density regions. The center has no peculiar velocity because of spherical symmetry, and the maximum of negative peculiar velocity is before the peak in density. Finally, the values of ρ(∞,t) are 1, 2.8, and 25, for t=0, −0.4, −0.8, respectively.\n\nFrom Fig. 5 it is clear that outside the hole, i.e., for , evolves as a FRW solution, . However, deep inside the hole where it is almost empty, there is no time evolution to : it is Minkowski space. Indeed, thanks to spherical symmetry, the outer shells do not influence the interior. If we place additional matter inside the empty space, it will start expanding as an FRW universe, but at a lower rate because of the lower density. It is interesting to point out that a photon passing the empty region will undergo no redshift: again, it is just Minkowski space.\n\nThis counterintuitive behavior (empty regions expanding slowly) is due to the fact that the spatial curvature vanishes. This corresponds to an unrealistic choice of initial peculiar velocities. To see this we plot the peculiar velocity that an observer following a shell has with respect to an FRW observer passing through that same spatial point. The result is also shown in Fig. 5 where it is seen that matter is escaping from the high density regions. This causes the evolution to be reversed as one can see in Fig. 5 from the density profile at different times: structures are not forming, but spreading out.\n\nRemember that is only a label for the shell whose Euclidean position at time is . In the plots of the energy density we have normalized using .\n\n### iii.2 The curved case\n\nNow we move to a more interesting and relevant case. We are going to use the given by Eq. (10); the other parameters will stay the same. Comparison with the flat case is useful to understand how the model behaves, and in particular the role of the curvature.\n\nIn Fig. 6 the results for in the curved case are plotted. Again time goes from to (recall that and is today).",
null,
"Figure 6: Behavior of Y(r,t) with respect to r, the peculiar velocities v(r,t) with respect to r, and the density profiles ρ(r,t) with respect to rFRW=Y(r,t)/a(t), for the curved case at times t=¯t=−0.8, t=−0.4 and t=t0=0. The straight lines for Y(r,t) are the FRW solutions while the dashed lines are the LTB solutions. For the peculiar velocities, the matter gradually starts to move toward high density regions. The solid vertical line marks the position of the peak in the density with respect to r. For the densities, note that the curve for ρ(r,0) has been divided by 10. Finally, the values of ρ(∞,t) are 1, 2.8, and 25, for t=0, −0.4, −0.8, respectively.\n\nAs one can see, now the inner almost empty region is expanding faster than the outer (cheese) region. This is shown clearly in Fig. 7, where also the evolution of the inner and outer sizes is shown. Now the density ratio between the cheese and the interior region of the hole increases by a factor of between and . Initially the density ratio was , but the model is not sensitive to this number since the evolution in the interior region is dominated by the curvature ( is much larger than the matter density).",
null,
"Figure 7: Evolution of the expansion rate and the size for the inner and outer regions. Here “inner” refers to a point deep inside the hole, and “outer” refers to a point in the cheese.\n\nThe peculiar velocities are now natural: as can be seen from Fig. 6, matter is falling towards the peak in the density. The evolution is now realistic, as one can see from Fig. 6, which shows the density profile at different times. Overdense regions start contracting and they become thin shells (mimicking structures), while underdense regions become larger (mimicking voids), and eventually they occupy most of the volume.\n\nLet us explain why the high density shell forms and the nature of the shell crossing. Because of the distribution of matter, the inner part of the hole is expanding faster than the cheese; between these two regions there is the initial overdensity. It is because of this that there is less matter in the interior part. (Remember that we matched the FRW density at the end of the hole.) Now we clearly see what is happening: the overdense region is squeezed by the interior and exterior regions which act as a clamp. Shell crossing eventually happens when more shells—each labeled by its own —are so squeezed that they occupy the same physical position , that is when . Nothing happens to the photons other than passing through more shells at the same time: this is the meaning of the metric coefficient going to zero.\n\nA remark is in order here: In the inner part of the hole there is almost no matter, it is empty. Therefore it has only negative curvature, which is largely dominant over the matter: it is close to a Milne universe.\n\n## Iv Photons\n\nWe are mostly interested in observables associated with the propagation of photons in our swiss-cheese model: indeed, our aim is to calculate the luminosity-distance–redshift relation in order to understand the effects of inhomogeneities on observables. Our setup is illustrated in Fig. 8, where there is a sketch of the model with only holes for the sake of clarity. Notice that photons are propagating through the centers.\n\nWe will discuss two categories of cases: 1) when the observer is just outside the last hole as in Fig. 8, and 2) when the observer is inside the hole. The observer in the hole will have two subcases: a) the observer located on a high-density shell, and b) the observer in the center of the hole. We are mostly interested in the first case: the observer is still a usual FRW observer, but looking through the holes in the swiss cheese.",
null,
"Figure 8: Sketch of our model in comoving coordinates. The shading mimics the initial density profile: darker shading implies larger denser. The uniform gray is the FRW cheese. The photons pass through the holes as shown by the arrow.\n\n### iv.1 Finding the photon path: an observer in the cheese\n\nWe will discuss now the equations we will use to find the path of a photon through the swiss cheese. The geodesic equations can be reduced to a set of four first-order differential equations (we are in the plane ):\n\n (11)\n\nwhere is an affine parameter that grows with time. The third equation is actually the null condition for the geodesic. Thanks to the initial conditions chosen we have . These equations describe the general path of a photon. To solve the equations we need to specify the constant , a sort of angular momentum density. A first observation is that setting allows us to recover the equations that describe a photon passing radially trough the centers: .\n\nWe are interested in photons that hit the observer at an angle and are passing trough all the holes as shown in Fig. 8. To do this we must compute the inner product of and , which are the normalized spatial vectors tangent to the radial axis and the geodesic as shown in Fig. 9. A similar approach was used in Ref. alnes0607 .",
null,
"Figure 9: A photon hitting the observer at an angle α.\n\nThe inner product of and is expressed through\n\n xi = −WY′(1,0,0)|λ=0 (12) yi = 1dt/dλ(ddλ,0,dϕdλ)∣∣∣λ=0=(drdλ,0,dϕdλ)∣∣∣λ=0 (13) xiyigij = Y′Wdrdλ∣∣∣λ=0=cosα (14) cϕ = Ysinα|λ=0. (15)\n\nThe vectors are anchored to the shell labeled by the value of the affine parameter , that is, to the border of the hole. Therefore, they are relative to the comoving observer located there. In the second equation we have used the initial conditions given in the previous set of equations, while to find the last equation we have used the null condition evaluated at .\n\nThe above calculations use coordinates relative to the center. However, the angle is a scalar in the hypersurface we have chosen: we are using the synchronous and comoving gauge. Therefore, is the same angle measured by a comoving observer of Fig. 9 located on the shell : it is a coordinate transformation within the same hypersurface.\n\nGiven an angle we can solve the equations. We have to change the sign in Eq. (11) when the photon is approaching the center with respect to the previous case where it is moving away. Also, we have to sew together the solutions between one hole and another, giving not only the right initial conditions, but also the appropriate constants (see Appendix B).\n\nEventually we end up with the solution , , and from which we can calculate the observables of interest.\n\n### iv.2 Finding the photon path: an observer in the hole\n\nFinding the solution in this case is the same as in the previous case with the only difference that in Eq. (11) the initial condition is now . But this observer has a peculiar velocity with respect to an FRW observer passing by. This, for example, will make the observer see an anisotropic cosmic microwave background as it is clear from Fig. 3. This Doppler effect, however, is already corrected in the solution we are going to find since we have chosen as initial condition.\n\nThere is however also the effect of light aberration which changes the angle seen by the comoving observer with respect to the angle seen by an FRW observer. The photon can be thought as coming from a source very close to the comoving observer: therefore there is no peculiar motion between them. The FRW observer is instead moving with respect to this reference frame as pictured in Fig. 10. The relation between and is given by the relativistic aberration formula:\n\n cosαFRW=cosα+v/c1+v/ccosα. (16)\n\nThe angle changes because the hypersurface has been changed. The velocity will be taken from the calculation (see Fig. 6 for the magnitude of the effect).",
null,
"Figure 10: A comoving observer and a FRW observer live in different frames, this results in a relative velocity vFRW between observers.\n\n### iv.3 Distances\n\nThe angular diameter distance is defined as:\n\n dA=DαFRW, (17)\n\nwhere is the proper diameter of the source and is the angle at which the source is seen by the observer. Using this definition to find we have\n\n dA=2Y(r(λ),t(λ))sinϕ(λ)2αFRW. (18)\n\nThe luminosity distance will then be:\n\n dL=(1+z)2dA. (19)\n\nThe formula we are going to use for is exact in the limit of zero curvature. However in our model is on average less than and never more than , as it can be seen from Fig. 4: therefore the approximation is good. Moreover, we are interested mainly in the case when the source is out of the last hole as pictured in Fig. 8, and in this case the curvature is exactly zero and the result is exact.\n\nWe have checked that the computation of is independent of for small angles and that the result using the usual FRW equation coincides with theoretical prediction for . We also checked that reduces to when the observer is in the center.\n\nFinally we checked our procedure in comparison with the formula () of Ref. notari : this is a rather different way to find the angular distance and therefore this agreement serves as a consistency check. We placed the observer in the same way and we found the same results provided that we use the angle uncorrected for the light-aberration effect.\n\n## V Results: observer in the cheese\n\nNow we will look through the swiss cheese comparing the results with respect to a FRW-EdS universe and a CDM case.\n\nWe will first analyze in detail the model with five holes, which is the one which we are most interested in. For comparison, we will study models with one big hole and one small hole. In the model with one big hole, the hole will be five-times bigger in size than in the model with five holes: i.e., they will cover the same piece of the universe.\n\nThe observables on which we will focus are the changes in redshift , angular-diameter distance , luminosity distance , and the corresponding distance modulus .\n\n### v.1 Redshift histories\n\nNow we will first compare the redshift undergone by photons that travel through the model with either five holes or one hole to the FRW solution of the cheese. In Fig. 11 the results are shown for a photon passing through the center with respect to the coordinate radius. As one can see, the effects of the inhomogeneities on the redshift are smaller in the five-hole case.",
null,
"Figure 11: Redshift histories for a photon that travels from one side of the one-hole chain (left) and five-hole chain (right) to the other where the observer will detect it at present time. The “regular” curve is for the FRW model. The vertical lines mark the edges of the holes. The plots are with respect to the coordinate radius r. Notice also that along the voids the redshift is increasing faster: indeed z′(r)=H(z) and the voids are expanding faster.\n\nIt is natural to expect a compensation, due to the spherical symmetry, between the ingoing path and the outgoing one inside the same hole. This compensation is evident in Fig. 11.\n\nHowever, there is a compensation already on the scale of half a hole as it is clear from the plots. This mechanism is due to the density profile chosen, that is one whose average matches the FRW density of the cheese: roughly speaking we know that . We chose the density profile in order to have , and therefore in its journey from the center to the border of the hole the photon will see a and therefore there will be compensation for .\n\nLet us see this analytically. We are interested in computing a line average of the expansion along the photon path in order to track what is going on. Therefore, we shall not use the complete expansion scalar:\n\n θ=Γk0k=2˙YY+˙Y′Y′, (20)\n\nbut, instead, only the part of it pertinent to a radial line average:\n\n θr=Γ101=˙Y′Y′≡Hr, (21)\n\nwhere are the Christoffel symbols and is the trace of the extrinsic curvature.\n\nUsing , we obtain:\n\n ⟨Hr⟩=∫rh0drHrY′/W∫rh0drY′/W≃˙YY∣∣∣r=rh=HFRW, (22)\n\nwhere the approximation comes from neglecting the (small) curvature and the last equality holds thanks to the density profile chosen. This is exactly the result we wanted to find. However, we have performed an average at constant time and therefore we did not let the hole and its structures evolve while the photon is passing: this effect will partially break the compensation. This sheds light on the fact that photon physics seems to be affected by the evolution of the inhomogeneities more than by the inhomogeneities themselves. We can argue that there should be perfect compensation if the hole will have a static metric such as the Schwarzschild one. In the end, this is a limitation of our assumption of spherical symmetry.\n\nThis compensation is almost perfect in the five-hole case, while it is not in the one-hole case: in the latter case the evolution has more time to change the hole while the photon is passing. Summarizing, the compensation is working on the scale of half a hole. These results are in agreement with Ref. notari07 .\n\nFrom the plot of the redshift one can see that the function is not monotonic. This happens at recent times when the high-density thin shell forms. This blueshift is due to the peculiar movement of the matter that is forming the shell. This feature is shown in Fig. 12 where the distance between the observer located just out of the hole at and two different shells is plotted. In the solid curve one can see the behavior with respect to a normal redshifted shell, while in dashed curve one can see the behavior with respect to a shell that will be blueshifted: initially the distance increases following the Hubble flow, but when the shell starts forming, the peculiar motion prevails on the Hubble flow and the distance decreases during the collapse.\n\nIt is finally interesting to interpret the redshift that a photon undergoes passing the inner void. The small amount of matter is subdominant with respect to the curvature which is governing the evolution, but still it is important to define the space: in the limit of zero matter in the interior of the hole, we recover a Milne universe, which is just (half of) Minkowski space in unusual coordinates. Before this limit the redshift was conceptually due to the expansion of the spacetime, after this limit it is instead due to the peculiar motion of the shells which now carry no matter: it is a Doppler effect.",
null,
"Figure 12: Distance between the observer and two different shells. In the solid curve r=0.55rh will be redshifted, while in the dashed curve, r=0.8rh will be blueshifted. The latter indeed will start to collapse toward the observer. Time goes from t=−0.8 to t=0. The observer is located just outside of the hole at r=rh.\n\n### v.2 Luminosity and Angular-Diameter Distances\n\n#### v.2.1 The five-hole model\n\nIn Fig. 13 the results for the luminosity distance and angular distance are shown. The solution is compared to the one of the CDM model with and . Therefore, we have an effective . In all the plots we will compare this CDM solution to our swiss-cheese solution. The strange features which appear near the contact region of the holes at recent times are due to the non-monotonic behavior of , which was explained in the previous section.",
null,
"Figure 13: On the bottom the luminosity distance dL(z) in the five-hole model (jagged curve) and the ΛCDM solution with ΩM=0.6 and ΩDE=0.4 (regular curve) are shown. In the middle is the change in the angular diameter distance, ΔdA(z), compared to a ΛCDM model with ΩM=0.6 and ΩDE=0.4. The top panel shows the distance modulus in various cosmological models. The jagged line is for the five-hole LTB model. The regular curves, from top to bottom, are a ΛCDM model with ΩM=0.3 and ΩDE=0.7, a ΛCDM model with ΩM=0.6 and ΩDE=0.4, the best smooth fit to the LTB model, and the EdS model. The vertical lines mark the edges of the five holes.\n\nThe distance modulus is plotted in the top panel of Fig. 13. The solution shows an oscillating behavior which is due to the simplification of this toy model in which all the voids are concentrated inside the holes and all the structures are in thin spherical shells. For this reason a fitting curve was plotted: it is passing through the points of the photon path that are in the cheese between the holes. Indeed, they are points of average behavior and represent well the coarse graining of this oscillating curve. The simplification of this model tells us also that the most interesting part of the plot is farthest from the observer, let us say at . In this region we can see the effect of the holes clearly: they move the curve from the EdS solution (in purple) to the CDM one with and (in blue). Of course, the model in not realistic enough to reach the “concordance” solution.\n\nHere we discuss a comparison of our results with those of Ref. notari07 . In that paper they do not find the large difference from FRW results that we do. First of all, we note that we are able to reproduce their results using our techniques. The difference between their results and ours is that our model has very strong nonlinear evolution, in particular, close to shell crossing where we have to stop the calculation. The authors of Ref. notari07 also used smaller holes with a different density/initial-velocity profile. This demonstrated that a large change in observables may require either non-spherical inhomogeneities, or evolution very close to shell crossing. (We remind the reader that caustics are certainly expected to form in cold dark matter models.)\n\nLet us return now to the reason for our results. As we have seen previously, due to spherical symmetry there are no significant redshift effects in the five-hole case. Therefore, these effects must be due to changes in the angular-diameter distance. Fig. 14 is useful to understand what is going on: the angle from the observer is plotted. Through the inner void and the cheese the photon is going straight: they are both solutions even if with different parameters. This is shown in the plot by constancy of the slope. The bending occurs near the peak in the density where the coefficient of the metric goes toward zero. Indeed the coordinate velocity of the photon can be split in an angular part: and a radial part . While behaves well near the peak, goes to infinity in the limit where shell crossing is reached: the photons are passing more and more matter shells in a short interval of time as the evolution approaches the shell-crossing point. Although in our model we do not reach shell crossing, this is the reason for the bending. We therefore see that all the effects in this model, redshift and angular effects, are due to the evolution of the inhomogeneities.",
null,
"Figure 14: The angle from the observer is plotted. The dashed vertical lines near the empty region mark the shell of maximum peculiar velocities of Fig. 6. The shaded regions represent the inner FRW solution. The solid vertical lines mark the peak in density. The angle at which the photon hits the observer is 2.7∘ on the left.\n\n#### v.2.2 The one-hole model: the big hole case\n\nLet us see now how the results change if instead of the five-hole model we use the one-hole model. We have already shown the redshift results in the previous section. As one can see from Fig. 15 the results are more dramatic: for high redshifts the swiss-cheese curve can be fit by a CDM model with less dark energy than as in the five-hole model. Nonetheless, the results have not changed so much compared to the change in the redshift effects discussed in the previous section. Indeed the compensation scale for angular effects is while the one for redshift effects is .",
null,
"Figure 15: On the bottom is shown the luminosity distance dL(z) in the one-hole model (jagged curve) and the ΛCDM solution with ΩM=0.6 and ΩDE=0.4 (regular curve). In the middle is the change in the the angular diameter distance, ΔdA(z), compared to a ΛCDM model with ΩM=0.6 and ΩDE=0.4. On the top is shown the distance modulus in various cosmological models. The jagged line is for the one-hole LTB model. The regular curves, from top to bottom are a ΛCDM model with ΩM=0.3 and ΩDE=0.7, a ΛCDM model with ΩM=0.6 and ΩDE=0.4 and the EdS model. The vertical lines mark the edges of the hole.\n\n#### v.2.3 The one-hole model: the small hole case\n\nFinally if we remove four holes from the five-hole model, we lose almost all the effects. This is shown in Fig. 16: now the model can be compared to a CDM model with and .",
null,
"Figure 16: On the bottom is shown the luminosity distance dL(z) in the 1-hole model (jagged curve) and the ΛCDM solution with ΩM=0.95 and ΩDE=0.05 (regular curve). In the middle is the change in the the angular diameter distance, ΔdA(z), compared to a ΛCDM model with ΩM=0.95 and ΩDE=0.05. On the top is shown the distance modulus in various cosmological models. The jagged line is for the one-hole LTB model. The regular curves, from top to bottom are a ΛCDM model with ΩM=0.3 and ΩDE=0.7, a ΛCDM model with ΩM=0.95 and ΩDE=0.05 and the EdS model. The vertical lines mark the edges of the hole.\n\n## Vi Results: observer in the hole\n\nNow we will examine the case in which the observer is inside the last hole in the five-hole model. We will first put the observer on the high-density shell and then place the observer in the center.\n\n### vi.1 Observer on the high density shell\n\nIn the section we show the results for the observer on the high-density shell. As one can see from Fig. 17, now the compensation in the redshift effect is lost: the photon is not completing the entire last half of the last hole. The results for the luminosity distance and the angular distance do not change much as shown in Fig. 18.\n\nRemember that in this case the observer has a peculiar velocity compared to the FRW observer passing through the same point. We correct the results taking into account both the Doppler effect and the light aberration effect.",
null,
"Figure 17: Redshift histories for a photon that travels through the five-hole-chain to the observer placed on the high density shell. The “regular” line is for the FRW model. λ is the affine parameter and it grows with the time which go from the left to the right. The vertical lines mark the end and the beginning of the holes.",
null,
"Figure 18: On the bottom is shown the luminosity distance dL(z) in the five-hole model (jagged curve) and the ΛCDM solution with ΩM=0.6 and ΩDE=0.4 (regular curve). In the middle is the change in the angular diameter distance, ΔdA(z), compared to a ΛCDM model with ΩM=0.6 and ΩDE=0.4. On the top is shown the distance modulus in various cosmological models. The jagged line is for the five-hole LTB model. The regular curves, from top to bottom are a ΛCDM model with ΩM=0.3 and ΩDE=0.7, a ΛCDM model with ΩM=0.6 and ΩDE=0.4, the best smooth fit to the LTB model, and the EdS model. The vertical lines mark the edges of the five holes.\n\n### vi.2 Observer in the center\n\nIn this section we show the results for the observer in the center. As confirmed by Fig. 19, the compensation in the redshift effect is good: the photon is passing through an integer number of half holes.\n\nThe results for the luminosity distance and the angular distance look worse as shown in Fig. 20, but this is mainly due to the fact that now the photon crosses half a hole less than in the previous cases and therefore it undergoes less bending.\n\nIn this case the observer has no peculiar velocity compared to the FRW one: this is a result of spherical symmetry.",
null,
"Figure 19: Redshift histories for a photon that travels through the five-hole-chain to the observer placed in the center. The “regular” line is for the FRW model. λ is the affine parameter and it grows with the time which go from the left to the right. The vertical lines mark the end and the beginning of the holes.",
null,
"Figure 20: The bottom panel shows the luminosity distance dL(z) in the five-hole model (jagged curve) and the ΛCDM solution with ΩM=0.6 and ΩDE=0.4 (regular curve). In the middle is the change in the angular diameter distance, ΔdA(z), compared to a ΛCDM model with ΩM=0.6 and ΩDE=0.4. On the top panel the distance modulus in various cosmological models is shown. The jagged line is for the five-hole LTB model. The regular curves, from top to bottom are a ΛCDM model with ΩM=0.3 and ΩDE=0.7, a ΛCDM model with ΩM=0.6 and ΩDE=0.4, the best smooth fit to the LTB model, and the EdS model. The vertical lines mark the edges of the five holes.\n\n## Vii Conclusions\n\nThe aim of this paper was to understand the role of large-scale non-linear cosmic inhomogeneities in the interpretation of observational data. This problem can be studied perturbatively, see for example Ref. kmr . Here, instead, we focused on an exact (even if toy) solution, based on the Lemaître-Tolman-Bondi (LTB) model. This solution has been studied extensively in the literature alnes0607 ; notari ; alnes0602 ; celerier ; mansouri ; flanagan ; rasanen ; tomita ; chung ; nambu . It has been shown that it can be used to fit the observed without the need of dark energy (for example in alnes0602 ). To achieve this result, however, it is necessary to place the observer at the center of a rather big underdensity. To overcome this fine-tuning problem we built a swiss-cheese model, placing the observer in the cheese and having the observer look through the swiss-cheese holes as pictured in Fig. 8. A similar idea was at the basis of Refs. notari07 ; tetradis .\n\nSummarizing, we firstly defined the model in Section II: it is a swiss-cheese model where the cheese is made of the usual FRW solution and the holes are made of a LTB solution. We defined carefully the free functions of the LTB model in order to have a realistic (even if still toy) model and we showed its dynamics in Section III.\n\nThen, as anticipated in the Introduction, we focused on the effects of inhomogeneities on photons. The observables on which we focused are the change in redshift in angular-diameter distance , in the luminosity distance-redshift relation , and in the distance modulus .\n\nWe found that redshift effects are suppressed when the hole is small because of a compensation effect acting on the scale of half a hole, due to spherical symmetry: it is roughly due to the fact that and we chose the density profile in order to have . It is somewhat similar to the screening among positive and negative charges.\n\nHowever, we found interesting effects in the calculation of the angular distance: the evolution of the inhomogeneities bends the photon path compared to the FRW case. Therefore, inhomogeneities will be able (at least partly) to mimic the effects of dark energy. We were mainly interested in making the observer look through the swiss cheese from the cheese. However, for a better understanding, we examined also the case where the observer is inside the hole. We found bigger effects than those found in Refs. notari07 ; tetradis : this could be due to the different model. Indeed, Refs. notari07 ; tetradis used smaller holes with a different initial-density/initial-velocity profile.\n\n###### Acknowledgements.\nIt is a pleasure to thank Alessio Notari and Marie-Noëlle Célérier for useful discussions and suggestions. V.M. acknowledges support from “Fondazione Ing. Aldo Gini” and “Fondazione Angelo Della Riccia.”\n\n## References\n\n• (1) T. Buchert, arXiv:0707.2153 [gr-qc].\n• (2) A. Einstein and E. G. Straus, Rev. Mod. Phys. 17, 120 (1945).\n• (3) A. G. Lemaitre, Ann. Soc. Sci. Bruxelles A53, 51 (1933).\n• (4) R. C. Tolman, Proc. Nat. Acad. Sci. USA 20, 169 (1934).\n• (5) H. Bondi, Mon. Not. Roy. Astron. Soc. 107, 410 (1947).\n• (6) T. Buchert, Gen. Rel. Grav. 32, 105 (2000).\n• (7) H. Alnes and M. Amarzguioui, Phys. Rev. D 74, 103520 (2006).\n• (8) T. Biswas, R. Mansouri, and A. Notari, astro-ph/0606703.\n• (9) T. Biswas and A. Notari, astro-ph/0702555.\n• (10) E. V. Linder, Phys. Rev. Lett. 90, 091301 (2003).\n• (11) H. Alnes, M. Amarzguioui, and O. Grøn, Phys. Rev. D 73, 083519 (2006).\n• (12) M. N. Célérier, Astron. Astrophys. 353, 63 (2000).\n• (13) R. Mansouri, astro-ph/0512605.\n• (14) R. A. Vanderveld, E. E. Flanagan, and I. Wasserman, Phys. Rev. D 74, 023506 (2006).\n• (15) S. Rasanen, JCAP 0411, 010 (2004).\n• (16) N. Brouzakis, N. Tetradis, and E. Tzavara, JCAP 0702, 013 (2007).\n• (17) K. Tomita, Prog. Theor. Phys. 106, 929 (2001).\n• (18) D. J. H. Chung and A. E. Romano, Phys. Rev. D 74, 103507 (2006).\n• (19) T. Kai, H. Kozaki, K. Nakao, Y. Nambu and C. Yoo, Prog. Theor. Phys. 117, 229-240 (2007).\n• (20) E. W. Kolb, S. Matarrese, and A. Riotto, New J. Phys. 8, 322 (2006).\n• (21)\n\n## Appendix A About the arbitrary functions in a LTB model\n\nHere we illustrate, by means of an example, the choice of the arbitrary functions in LTB models. We are going to analyze the flat case. Indeed we have an analytical solution for it and this will help in understanding the issues.\n\nWe said previously that there are three arbitrary functions in the LTB model: , and . They specify the position and velocities of the shells at a chosen time. In general, depends on ; because of the absence of shell crossing it is possible to give the initial conditions at different times for different shells labeled by .\n\nWe start, therefore, by choosing the curvature to vanish, which can be thought as a choice of initial velocities at the time :\n\n 2E(r)=˙Y2−13πMY∣∣∣r,¯t(r). (23)\n\nFor , the model becomes\n\n ds2=−dt2+dY2+Y2dΩ2, (24)\n\nwith solution\n\n Y(r,t) = (3M(r)4π)1/3[t−^t(r)]2/3 ¯ρ(r,t) = [t−^t(r)]−2, (25)\n\nwhere\n\n ^t(r) ≡ ¯t(r)−¯ρ−1/2(r,¯t(r)) ¯ρ(r,¯t(r)) = 3M(r)4π1Y3∣∣∣r,¯t(r). (26)\n\nThe next step is to choose the position of the shells, that is, to choose the density profile. As far as is concerned, only the combination matters. This, however, is not true for , which appears also by itself in Eq. (A).\n\nLooking at Eq. (A) we see that to achieve an inhomogeneous profile we can either assign a homogeneous profile at an inhomogeneous initial time, or an inhomogeneous density profile at a homogeneous initial time, or both. Moreover, if we assign the function , then we can use our freedom to relabel in order to obtain all the possible . So we see that one of the three arbitrary functions expresses the gauge freedom.\n\nIn this paper we fixed this freedom by choosing and in order to have a better intuitive understanding of the initial conditions.\n\n## Appendix B Sewing the photon path\n\nIn the Appendix we will demonstrate how to sew together the photon path between two holes. We will always use center-of-symmetry coordinates, and therefore we will move from the coordinates of to the ones of illustrated in Fig. 21. The geodesic near the contact point is represented by the dashed line segment in Fig. 21.",
null,
"Figure 21: Illustration of the procedure to calculate the transition between two holes. The dashed line is a segment of the geodesic. O1 and O2 represent the two coordinate systems.\n\nFirst, we want to find the value of the affine parameter for which the photon is at . This is found solving\n\n G2 = (r1(λ)cosϕ1(λ)−2rh,r1(λ)sinϕ1(λ)) r2h = x22G+y22G. (27)\n\nThese equations imply\n\n r21(λ)+3r2h−4r1(λ)rhcosϕ1(λ)=0. (28)\n\nThen we can give the initial conditions for the second hole:\n\n q2(¯λ) = q1(¯λ) t2(¯λ) = t1(¯λ) r2(¯λ) = rh ϕ2(¯λ) = arccos(x2G/rh). (29)\n\nFinally, we need the constant , a sort of constant angular momentum density. Repeating the procedure of Sect. IV.1 for the first hole we find\n\n c2ϕ=sinα2q1(¯λ)Y2|¯λ. (30)\n\nOnly is missing. One way to find it is to calculate the inner product in coordinates of the geodesic with the normalized spatial vector parallel to (see Fig. 21)."
] | [
null,
"https://media.arxiv-vanity.com/render-output/4292440/x2.png",
null,
"https://media.arxiv-vanity.com/render-output/4292440/x3.png",
null,
"https://media.arxiv-vanity.com/render-output/4292440/x4.png",
null,
"https://media.arxiv-vanity.com/render-output/4292440/x5.png",
null,
"https://media.arxiv-vanity.com/render-output/4292440/x6.png",
null,
"https://media.arxiv-vanity.com/render-output/4292440/x7.png",
null,
"https://media.arxiv-vanity.com/render-output/4292440/x8.png",
null,
"https://media.arxiv-vanity.com/render-output/4292440/x9.png",
null,
"https://media.arxiv-vanity.com/render-output/4292440/x10.png",
null,
"https://media.arxiv-vanity.com/render-output/4292440/x11.png",
null,
"https://media.arxiv-vanity.com/render-output/4292440/x12.png",
null,
"https://media.arxiv-vanity.com/render-output/4292440/x13.png",
null,
"https://media.arxiv-vanity.com/render-output/4292440/x14.png",
null,
"https://media.arxiv-vanity.com/render-output/4292440/x15.png",
null,
"https://media.arxiv-vanity.com/render-output/4292440/x16.png",
null,
"https://media.arxiv-vanity.com/render-output/4292440/x17.png",
null,
"https://media.arxiv-vanity.com/render-output/4292440/x18.png",
null,
"https://media.arxiv-vanity.com/render-output/4292440/x19.png",
null,
"https://media.arxiv-vanity.com/render-output/4292440/x20.png",
null,
"https://media.arxiv-vanity.com/render-output/4292440/x21.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.92531145,"math_prob":0.96825665,"size":39274,"snap":"2021-04-2021-17","text_gpt3_token_len":8811,"char_repetition_ratio":0.16167557,"word_repetition_ratio":0.044233575,"special_character_ratio":0.21938178,"punctuation_ratio":0.11760154,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.99020404,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-17T07:13:08Z\",\"WARC-Record-ID\":\"<urn:uuid:2a7ba00e-6147-4df5-892a-93d5dc2a4bd7>\",\"Content-Length\":\"731827\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:629b2d48-615a-45b9-ad8f-965cf0c9a19f>\",\"WARC-Concurrent-To\":\"<urn:uuid:377fb02d-187d-4ada-b528-b36c69bed119>\",\"WARC-IP-Address\":\"104.21.14.110\",\"WARC-Target-URI\":\"https://www.arxiv-vanity.com/papers/0708.3622/\",\"WARC-Payload-Digest\":\"sha1:6CMCHNCGTMXR55QQ6JLQ7TGWA2TN4MJV\",\"WARC-Block-Digest\":\"sha1:RZC4EZJU23K5XOOKDXRPIASKHFE4MTKZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703509973.34_warc_CC-MAIN-20210117051021-20210117081021-00036.warc.gz\"}"} |
http://osr600doc.sco.com/en/man/html.CP/size.CP.html | [
"DOC HOME SITE MAP MAN PAGES GNU INFO SEARCH PRINT BOOK\n\n# size(CP)\n\nsize -- print section sizes in bytes of object files\n\n## Synopsis\n\nsize [-F -f -n -o -V -x] files\n\n## Description\n\nThe size command produces segment or section size information in bytes for each loaded section in ELF or COFF object files. size prints out the size of the text, data, and bss (uninitialized data) segments (or sections) and their total.\n\nsize processes ELF and COFF object files entered on the command line. If an archive file is input to the size command, the information for each object file in the archive is displayed.\n\nWhen calculating segment information, the size command prints out the total file size of the non-writable segments, the total file size of the writable segments, and the total memory size of the writable segments minus the total file size of the writable segments.\n\nIf it cannot calculate segment information, size calculates section information. When calculating section information, it prints out the total size of sections that are allocatable, non-writable, and not NOBITS, the total size of the sections that are allocatable, writable, and not NOBITS, and the total size of the writable sections of type NOBITS. (NOBITS sections do not actually take up space in the file.)\n\nIf size cannot calculate either segment or section information, it prints an error message and stops processing the file.\n\n-F\nPrints out the size of each loadable segment, the permission flags of the segment, then the total of the loadable segment sizes. If there is no segment data, size prints an error message and stops processing the file.\n\n-f\nPrints out the size of each allocatable section, the name of the section, and the total of the section sizes. If there is no section data, size prints out an error message and stops processing the file.\n\n-n\nPrints out non-loadable segment or non-allocatable section sizes. If segment data exists, size prints out the memory size of each loadable segment or file size of each non-loadable segment, the permission flags, and the total size of the segments. If there is no segment data, size prints out, for each allocatable and non-allocatable section, the memory size, the section name, and the total size of the sections. If there is no segment or section data, size prints an error message and stops processing.\n\n-o\nPrints numbers in octal, not decimal.\n\n-V\nPrints the version information for the size command on the standard error output.\n\n-x\nPrints numbers in hexadecimal; not decimal.\n\n## Examples\n\nThe examples below are typical size output.\n``` size file 2724 + 88 + 0 = 2812\n\nsize -f file 26(.text) + 5(.init) + 5(.fini) = 36\n\nsize -F file 2724(r-x) + 88(rwx) + 0(rwx) = 2812\n```\n\n## References\n\na.out(F), ar(F), as(C), cc(C), ld(C)\n\n## Notices\n\nSince the size of bss sections is not known until link-edit time, the size command does not give the true total size of pre-linked objects."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.68312615,"math_prob":0.89646375,"size":2926,"snap":"2021-43-2021-49","text_gpt3_token_len":720,"char_repetition_ratio":0.20431212,"word_repetition_ratio":0.104627766,"special_character_ratio":0.23342447,"punctuation_ratio":0.11826087,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9576494,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-15T20:14:15Z\",\"WARC-Record-ID\":\"<urn:uuid:1ee8b94d-8945-4981-aee4-4621b66b747d>\",\"Content-Length\":\"6737\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:80af37fb-d895-49fa-8b5e-2758d7924434>\",\"WARC-Concurrent-To\":\"<urn:uuid:87394d84-0889-42ea-aaa6-892203e1af61>\",\"WARC-IP-Address\":\"132.147.224.138\",\"WARC-Target-URI\":\"http://osr600doc.sco.com/en/man/html.CP/size.CP.html\",\"WARC-Payload-Digest\":\"sha1:JKLICSNGI4UIXK7LUUBOB5TNV7F57BSM\",\"WARC-Block-Digest\":\"sha1:WWNL6MVAAGBUS3C4RPGNYZP6OYD7P4SZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323583083.92_warc_CC-MAIN-20211015192439-20211015222439-00477.warc.gz\"}"} |
https://www.compadre.org/physlets/mechanics/ex3_6.cfm | [
"## Exploration 3.6: Uniform Circular Motion\n\nA point (red) on a rotating wheel is shown in the animation (position is given in meters and time is given in seconds). Restart.\n\n1. Note that the speed of the red point is constant. Is its velocity constant?\n2. Click here to view the velocity vector. After viewing the vector rethink your answer: is the velocity of the red point constant?\n3. What is the direction of the red point's acceleration vector? Click here to view the acceleration and velocity vectors.\n4. How does the speed of the red point compare to the speed of another point, say a green one, which is at only half the radius of the red point? Click here to view both points. For clarity the green point is shown on the opposite side from the red one.\n5. Why is the speed of the green point less than the speed of the red point?\n6. How does the magnitude of the acceleration of the red point compare to the magnitude of the acceleration of the green point? Click here to view both points and their velocity and acceleration vectors.",
null,
"Exploration authored by Aaron Titus with support by the National Science Foundation under Grant No. DUE-9952323 and placed in the public domain.\n\nPhyslets were developed at Davidson College and converted from Java to JavaScript using the SwingJS system developed at St. Olaf College.\n\nOSP Projects:",
null,
"Open Source Physics - EJS Modeling",
null,
"Tracker",
null,
"Physlet Physics",
null,
"Physlet Quantum Physics",
null,
"STP Book"
] | [
null,
"https://www.compadre.org/physlets/images/downloadPDF.png",
null,
"https://www.compadre.org/osp/images/ospn/osp.png",
null,
"https://www.compadre.org/osp/images/ospn/tracker.png",
null,
"https://www.compadre.org/osp/images/ospn/physlets3.png",
null,
"https://www.compadre.org/osp/images/ospn/pqp.png",
null,
"https://www.compadre.org/osp/images/ospn/stpbook-trans.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9265067,"math_prob":0.8743053,"size":1383,"snap":"2022-27-2022-33","text_gpt3_token_len":289,"char_repetition_ratio":0.17984046,"word_repetition_ratio":0.09504132,"special_character_ratio":0.21113521,"punctuation_ratio":0.09090909,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9813726,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-15T00:38:47Z\",\"WARC-Record-ID\":\"<urn:uuid:d39a6c76-ea6c-40ec-bf26-75fa0e052945>\",\"Content-Length\":\"18211\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6e2ec0aa-4d97-4742-ac5a-167bbd98c5d4>\",\"WARC-Concurrent-To\":\"<urn:uuid:914e54e9-4c10-46de-a035-9d27d4c17ad1>\",\"WARC-IP-Address\":\"54.209.62.36\",\"WARC-Target-URI\":\"https://www.compadre.org/physlets/mechanics/ex3_6.cfm\",\"WARC-Payload-Digest\":\"sha1:DZJOYRT5AKN6E6ILYLM3P6SLAVAQA7BU\",\"WARC-Block-Digest\":\"sha1:NM2BFZZLKIJG2LZ7D4CTTBVRZGKFAFAT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572089.53_warc_CC-MAIN-20220814234405-20220815024405-00027.warc.gz\"}"} |
https://music.stackexchange.com/questions/113843/three-dotted-notes-in-6-8-time-signature/113846 | [
"# Three dotted notes in 6/8 time signature\n\nHow many seconds will each note take in this 6/8 time signature, if tempo is 120 bpm?\n\nMy analysis :\n\nRest in the beginning will take three 1/8 beats - 0.75 seconds in total.\n\nThe three A note will take 3/2*1/4 = 3/8 seconds each.\n\nIs this correct?",
null,
"• There are no dotted notes in this example. There is a dotted rest, and there are three staccato notes. Staccato is indicated by a dot, but the staccato dot does not make the note a \"dotted note.\" – phoog Apr 21 at 4:24\n• It would be interesting to find out how one would time onself when playing the notes. – Tim Apr 21 at 7:13\n\n## 3 Answers",
null,
"The notes in your picture are not dotted notes, they are notes with staccato dots. https://en.wikipedia.org/wiki/Staccato\n\nThe dot that affects note length is positioned horizontally after the notehead. https://en.wikipedia.org/wiki/Dotted_note\n\n• I've never understood why someone would write a dotted note (half as long again) as a staccato played note. – Tim Apr 21 at 7:11\n• @Tim To notate the rhythm, why not. In my example here it doesn't make sense though, since after the dotted note there is a rest. But if it was an eight note, then it would make sense. – piiperi Reinstate Monica Apr 21 at 7:18\n• Since the staccato dot means shorten by roughly half, why not just write a note of that length instead. A player will make whichever note the appropriate length anyway. – Tim Apr 21 at 7:22\n• @Tim Note length is not the same as sound length. Staccato modifies the length of sound, but not the rhythm. For example, if you first write a rhythm with dotted notes and without staccato. Then you write the exact same thing, but with staccato dots. It's the same rhythm, but to be played with short-sounding notes. – piiperi Reinstate Monica Apr 21 at 7:40\n• I understand that. What I don't understand is the difference between, say, in 4/4 four staccato crotchets and four quavers with quaver rests between. – Tim Apr 21 at 7:51\n\nAs Aaron writes, what a \"beat\" is is defined differently in different time signatures. Within regular time signatures (where every beat is the same) there are two types: simple and compound. Simple meters have beats subdivided into two parts each. Some examples are 2/4, 3/4, and 4/4. Compound meters have beats subdivided into three (usually) parts, for example as in 6/8, where there are two beats of three pulses. This is as opposed to 3/4 where there are three beats of two pulses. The general trick is to use an 8 in the bottom of the time signature when it would seem redundant (6/8 would mathematically reduce to 3/4 if it were actually a fraction).\n\nSo, in your excerpt, each beat can be assumed to have a length of a dotted quarter (crotchet). Since there are 60 seconds in a minute, 120 bpm = 2 beats per second, so an eighth note subdivision of one beat will take one-sixth of a second.\n\nAs a note, this is why tempo markings are often written e.g. ♩. = 120, instead of just 120 bpm, since that can be ambiguous.\n\nThe timings calculated are correct only if the beat is understood in terms of the quarter-note, which is unusual for sheet-music in 6/8 time, but typical for DAWs regardless of time signature.\n\nThe time depends on how the beat is represented. In 6/8 time, the tempo (BPM) is sometimes given in terms of the dotted quarter-note and sometimes in terms of the eighth-note.\n\n### ♩. = 120\n\nIn this case, each beat lasts 1/2 seconds, and each eighth-note lasts 1/3 beat. Thus, the rest lasts 1/2 seconds, and each eighth-note lasts 1/6 seconds.\n\n### ♪ = 120\n\nIn this case, the rest lasts 3/2 seconds, and each eighth-note lasts 1/2 second.\n\n### ♩ = 120\n\nSome DAWs always specify tempo in terms of the quarter-note, regardless the time signature. In that case, the rest lasts 3/4 seconds, and each eighth-note lasts 3/8 second."
] | [
null,
"https://i.stack.imgur.com/wlvM1.png",
null,
"https://i.stack.imgur.com/WTO0v.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9323451,"math_prob":0.894373,"size":2709,"snap":"2021-21-2021-25","text_gpt3_token_len":689,"char_repetition_ratio":0.13419594,"word_repetition_ratio":0.20895523,"special_character_ratio":0.25544482,"punctuation_ratio":0.12563667,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95851606,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-19T12:14:25Z\",\"WARC-Record-ID\":\"<urn:uuid:61c588a8-34c5-4524-b2d6-6b5721125464>\",\"Content-Length\":\"183714\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8316d8ac-2aa6-4156-91a8-8819490f3f0b>\",\"WARC-Concurrent-To\":\"<urn:uuid:520ca55d-87f7-4ae5-86ef-727ae928d18e>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://music.stackexchange.com/questions/113843/three-dotted-notes-in-6-8-time-signature/113846\",\"WARC-Payload-Digest\":\"sha1:BTQ4TBWRCJJZRALN6IOKRO67OPHOCNWX\",\"WARC-Block-Digest\":\"sha1:YIFK5EE7VNXYE2UI7F37JU6CZUDRRNB7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487648194.49_warc_CC-MAIN-20210619111846-20210619141846-00555.warc.gz\"}"} |
https://ktsk.xyz/posts/mu123-unit9/ | [
"# MU123 Unit 9 Notes\n\nTomas Koutsky published on\n\n5 min, 899 words\n\nCategories: uni math\n\n# 1 Number patterns and algebra\n\n## 1.1 Arithmetic sequences\n\nSum of a sequence of natural numbers can be rewritten as a sum of pairs of numbers: $$1 + 2 + ... + 99 + 100 = (1 + 100) + (2 + 99) + ... + (50 + 51)$$ There are 50 pairs, each of them having a sum of $101$, so: $$1 + 2 + ... + 99 + 100 = 50 \\times 101 = 5050$$\n\nIn case of a sum of successive odd numbers the sum is always the square of how many odd numbers I add, e.g. $1 + 3 + 5 + 7 = 16 = 4^2$ . For $n$ numbers the formula is $n^2$.\n\nFormula for $n$ successive natural numbers is: $$1 + 2 + ... + n = \\frac{1}{2}n(n + 1)$$ The numbers given by the formula are called triangular numbers.\n\nAny list of numbers is called a sequence\n\n• $1, 2, 3, ..., 100$ is finite sequence\n• $1, 2, 3, ...$ is infinite sequence\n\nThe numbers in a sequence are called the terms of the sequence. Arithmetic sequence is a sequence where the difference between consequtive terms is constant, e.g.: $1, 2, 3, ...$ is a arithmetic sequence with difference of 1, To specify an arithemtic sequence we can give the first term denoted by $a$, the difference denoted by $d$. If the sequence is finite the number of terms is denoted by $n$.\n\nThe $n$th term of an arithmetic sequence with first term $a$ and difference $d$ is given by the formula $$n\\text{th term} = a + (n - 1)d$$\n\nThe number of terms $n$ of a finite arithmetic sequence with first term $a$, last term $L$ and non-zero difference $d$ is given by the formula: $$n = \\frac{L - a}{d} + 1$$\n\nThe sum of the finite arithmetic sequence with first term $a$, difference $d$ and numbers of term $n$ is given by the formula: $$S = \\frac{1}{2}n(2a + (n - 1)d)$$ An alternative formula involving the last term $L$: $$S = \\frac{1}{2}n(a + L)$$\n\n# 2 Multiplying out pairs of brackets\n\n## 2.1 Pairs of brackets\n\nStrategy to multiply out two brackets\n\nMultiply each term inside the first bracket by each term inside the second bracket, and add the resulting terms.\n\n## 2.2 Squaring brackets\n\n$$(x + p)^2 = (x + p)(x + p) \\\\ = x^2 + xp + px + p^2 \\\\ = x^2 + 2px + p^2$$\n\nHence the general formula: $$(x + p)^2 = x^2 + 2px + p^2$$ and $$(x - p)^2 = x^2 - 2px + p^2$$\n\n## 2.3 Differences of two squares\n\n$$(x - p)(x + p) = x^2 + xp - px - p^2$$ Hence: $$(x - p)(x + p) = x^2 - p^2$$\n\n# 3 Quadratic expressions and equations\n\nAn expression of the form $ax^2 + bx + c$, where $a, b, c$ are numbers and $a \\ne 0$ is called a quadratic expression in $x$. $a, b, c$ are called the coefficients of the quadratic.\n\nAn equation that can be expressed in the form: $$ax^2 + bx + c = 0$$ is called a quadratic equation in $x$.\n\n## 3.3 Solving simple quadratic equations\n\nAn equation of the form $x^2 = d$, where $d > 0$ has two solutions $x = \\pm \\sqrt{d}$.\n\nEquations as $x^2 + 1 = 0$ have no solution among the real numbers.\n\n## 3.4 Factorising quadratics of the form $x^2 + bx + c$\n\nFill in the gaps in the brackets on the right-hand side of the equation $$x^2 + bx + c = (x...)(x...)$$ with two numbers whose product is $c$ and whose sum is $b$. I can search systematically by writing down all the factor pairs of $c$ and choosing (if possible) a pair whose sum is $b$.\n\n## 3.5 Solving quadratic equations by factorisation\n\nIf the product of two or more numbers is 0, then at least one of the numbers must be 0. Hence in an equation as $(x - 2)(x - 3) = 0$ either $x - 2$ or $x - 3$ is 0. If $x - 2 = 0$ then $x = 2$, if $x - 3 = 0$ then $x - 3$ so the equation has two solutions.\n\n### Strategy to solve $x^2 + bx + c = 0$ by factorisation\n\n1. Find a factorisation: $x^2 + bx + c = (x + p)(x + p)$\n2. Then $(x + p)(x + p) = 0$, so $x + p = 0$ or $x + q = 0$ and hence the solutions are $x = -p$ and $x = -q$.\n\nWhen the two solutions are same it's said that equation has a repeated solution.\n\n## 3.6 Factorising quadratics of the form $ax^2 + bx + c$\n\nE.g. following quadratic expression; $$2x^2 - x - 6$$ Where: $$a = 2\\\\ b = -1\\\\ c = -6$$\n\nFind two numbers whose product is $ac$ and whose sum is $b$.\n\nThe numbers are $3$ and $-4$.\n\nRewrite the quadratic expression, splitting the term in $x$ using the above factor pair.\n\n$$2x^2 - x - 6 = 2x^2 +3x -4x - 6$$\n\nGroup the four terms in pairs and take out common factors to give the required factorisation.\n\n$$2x^2 - x - 6 = 2x^2 +3x -4x - 6 \\\\ = x(2x + 3) -2(2x + 3) \\\\ = (x - 2)(2x + 3)$$\n\n• If the coefficient of $x^2$ is negative, then multiply the equation through by $-1$ to make this coefficient positive.\n$\\frac{a}{b}$ is equivalent to $\\frac{a(a + 1)}{b(b + 1)}$ and common factor can be cancelled out as usual."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.865006,"math_prob":1.0000098,"size":4845,"snap":"2021-31-2021-39","text_gpt3_token_len":1569,"char_repetition_ratio":0.1481099,"word_repetition_ratio":0.090640396,"special_character_ratio":0.3748194,"punctuation_ratio":0.11175899,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000099,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-25T06:21:35Z\",\"WARC-Record-ID\":\"<urn:uuid:9e9b59b4-f3d7-4a48-ae73-9dfe3599e255>\",\"Content-Length\":\"15107\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:56b48a2b-6fb4-4e16-8378-36857245ad08>\",\"WARC-Concurrent-To\":\"<urn:uuid:4649a9a5-d557-4f63-9a08-570491695a61>\",\"WARC-IP-Address\":\"185.199.111.153\",\"WARC-Target-URI\":\"https://ktsk.xyz/posts/mu123-unit9/\",\"WARC-Payload-Digest\":\"sha1:3XMHBZ5UZ3DWOVDCSIRMVBM2N2FAG3CN\",\"WARC-Block-Digest\":\"sha1:UFWLHCAZAAJYI5AH5XRCZV2QWYC2VXQ2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046151638.93_warc_CC-MAIN-20210725045638-20210725075638-00396.warc.gz\"}"} |
https://permanentkisses.com/understanding-the-nature-of-work/ | [
"# Understanding the Nature of Work",
null,
"# Understanding the Nature of Work\n\nWork is the transfer of energy from one object to another through force and displacement. It is usually represented as the product of force and displacement. The process of transferring energy from one object to another is called an eddy current. This is why it is essential to understand the nature of work. Here are some examples of the effects of eddy currents. Let us define work. The concept of work has been around for thousands of years. But why is it so important to us?\n\nWhat is the nature of work? The concept of work is often explained as the movement of an object. For instance, when you push a rock up a hill, you do not perform any work. You are not transferring energy to that object. However, when you drop a pencil, you do work. The movement of the pencil is greater than zero, and the force that is pushing the pencil upwards is acting on the opposite end. This means that the action of dropping the pencil creates work.\n\nThe process of calculating work involves the calculation of the amount of energy transferred from one object to another. When a force acts on an object, it causes the object to move over a distance. To calculate the amount of work done, you must know three quantities: the force, the displacement, and the angle between the displacement and force. The length of the path is what determines the amount of work done. In this case, the length of the rope equals the distance d, and the angle between the displacement and the force.\n\nWhen a heavy object is moved by a rope, it is the displacement of that object that transfers work. It is important to remember that this type of energy transfer is a result of a change in direction and a force component that is along the path. For example, when a heavy object is moved by a strong rope, the force acting on the rope is at the right angle to the direction of the displacement. For the opposite of this process, frictional forces are the result of a difference in velocity.\n\nCI is the process of creating more value through continuous creative opportunities. It involves problem-solving and solution development, and focuses on continuously creating more value for internal and external customers, suppliers, and partners. Despite this, the question of how to make the world a better place is still largely unanswered. The most fundamental principle of work is “equality” — the energy transfer from one object to another is inextricably linked.\n\nWork is the transfer of energy from one object to another. It is the result of displacement and force components that are at the right angle to the direction of displacement. For example, when a heavy object is lifted by a rope, the force exerted by the rope on the weight will cause it to move. In this case, work is the transfer of energy from one object to the other. If the displacement is horizontal, no work is done."
] | [
null,
"https://i.imgur.com/Q6hRmg3.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.95434016,"math_prob":0.9401807,"size":2881,"snap":"2023-40-2023-50","text_gpt3_token_len":581,"char_repetition_ratio":0.15328467,"word_repetition_ratio":0.100196466,"special_character_ratio":0.19888927,"punctuation_ratio":0.1,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97585166,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-01T19:34:27Z\",\"WARC-Record-ID\":\"<urn:uuid:ec8cd100-cd2e-4374-a8ca-dd222b5681d0>\",\"Content-Length\":\"47625\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3b8304da-f5dc-400e-bb5c-4e645e6b7c43>\",\"WARC-Concurrent-To\":\"<urn:uuid:d78bc171-707f-45c6-ae2d-008a69996358>\",\"WARC-IP-Address\":\"104.21.94.238\",\"WARC-Target-URI\":\"https://permanentkisses.com/understanding-the-nature-of-work/\",\"WARC-Payload-Digest\":\"sha1:GON6EHSY6ZV73SPP7CHNHCIDPLAU4BPI\",\"WARC-Block-Digest\":\"sha1:W6PMIQX2KGIKQCUFO3G34RGL4RDQC24Z\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100304.52_warc_CC-MAIN-20231201183432-20231201213432-00315.warc.gz\"}"} |
https://www.gradesaver.com/textbooks/science/physics/physics-10th-edition/chapter-1-introduction-and-mathematical-concepts-focus-on-concepts-page-19/2 | [
"## Physics (10th Edition)\n\nThis problem asks to compare the magnitudes of two vectors, $\\overrightarrow{A}$ and $\\overrightarrow{B}$, with the resultant vector, $\\overrightarrow{R}$ as shown in the picture below. Because the length of $\\overrightarrow{R}$ is less than the total lengths of $\\overrightarrow{A}$ and $\\overrightarrow{B}$, we will chose answer (c).",
null,
""
] | [
null,
"https://gradesaver.s3.amazonaws.com/uploads/solution/b65436d4-aab2-4c4c-bfed-4488bbe165b5/steps_image/small_1510345378.JPG",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8125447,"math_prob":0.99971384,"size":378,"snap":"2020-24-2020-29","text_gpt3_token_len":95,"char_repetition_ratio":0.24598931,"word_repetition_ratio":0.0,"special_character_ratio":0.24338624,"punctuation_ratio":0.092307694,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9920822,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-02T02:56:50Z\",\"WARC-Record-ID\":\"<urn:uuid:ce0f80cc-302b-402c-bb0d-8c342ff2539c>\",\"Content-Length\":\"55333\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c0835ca8-38ba-4289-9f2e-1b06d14b98b9>\",\"WARC-Concurrent-To\":\"<urn:uuid:8c296416-b0ec-4ef0-8364-28a17f09c709>\",\"WARC-IP-Address\":\"54.86.5.209\",\"WARC-Target-URI\":\"https://www.gradesaver.com/textbooks/science/physics/physics-10th-edition/chapter-1-introduction-and-mathematical-concepts-focus-on-concepts-page-19/2\",\"WARC-Payload-Digest\":\"sha1:DRMRLR5VUUI3CFBAKBFDOJPJWWL4P2A7\",\"WARC-Block-Digest\":\"sha1:4LAAL4SH74UD6SSC3HP5EGCEJQQS4YZO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347422065.56_warc_CC-MAIN-20200602002343-20200602032343-00471.warc.gz\"}"} |
https://docs.endaq.com/en/development/webinars/Webinar_Intro_Python_Acceleration_CSV_Analysis.html | [
"# Intro to Python Acceleration and CSV Analysis¶\n\n## Introduction¶\n\nThis notebook serves as an introduction to Python for a mechanical engineer looking to plot and analyze some acceleration data in a CSV file. Being a Colab, this tool can freely be used without installing anything.\n\nFor more information on making the swith to Python see enDAQ’s blog, Why and How to Get Started in Python for a MATLAB User.\n\nThis is part of our webinar series on Python for Mechanical Engineers:\n\n## Import Data File¶\n\nWe will assume that the first column is time in seconds. Some example files are provided or you can load your own.\n\n### Example Files¶\n\nHere are some example datasets you can use to do some initial testing. If you have uploaded your own data, you’ll want to comment this out or not run it!\n\n[ ]:\n\nfilenames = ['https://info.endaq.com/hubfs/data/surgical-instrument.csv',\n'https://info.endaq.com/hubfs/data/blushift.csv',\n'https://info.endaq.com/hubfs/Plots/bearing_data.csv', #used in this dataset: https://blog.endaq.com/top-vibration-metrics-to-monitor-how-to-calculate-them\n'https://info.endaq.com/hubfs/data/Motorcycle-Car-Crash.csv', #used in this blog: https://blog.endaq.com/shock-analysis-response-spectrum-srs-pseudo-velocity-severity\n'https://info.endaq.com/hubfs/data/Calibration-Shake.csv',\n'https://info.endaq.com/hubfs/data/Mining-Hammer.csv'] #largest dataset\nfilename = filenames\n\n[ ]:\n\nfilenames\n\n'https://info.endaq.com/hubfs/data/Calibration-Shake.csv'\n\n\n## Install & Import Libraries¶\n\nFirst we’ll install all libraries we’ll need, then import them.\n\nNote that if running this locally you’ll only need to install one time, then subsequent runs can just do the import. But colab and anaconda will contain all the libraries we’ll need anyways so the install isn’t necessary. Here is how the install would be done though:\n\n!pip install pandas\n!pip install numpy\n!pip install matplotlib\n!pip install plotly\n!pip install scipy\n\n\nYou can always check which libraries you have installed by doing:\n\n!pip freeze\n\n\nWe do need to upgrade plotly though to work in Colab\n\n[ ]:\n\n!pip install --upgrade plotly\n\nRequirement already satisfied: plotly in /usr/local/lib/python3.7/dist-packages (5.3.1)\nRequirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from plotly) (1.15.0)\nRequirement already satisfied: tenacity>=6.2.0 in /usr/local/lib/python3.7/dist-packages (from plotly) (8.0.1)\n\n\nNow we’ll import our libraries we’ll use later.\n\n[ ]:\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport plotly.express as xp\nimport plotly.io as pio; pio.renderers.default = \"iframe\"\nfrom scipy import signal\n\n\n## Load the CSV, Analyze & Plot¶\n\nWe’ll load the data into pandas, display it, do some very basic analysis, and plot the time history in a few ways.\n\n### Load the CSV File and Prepare¶\n\nRemember we are expecting the first column to be time and will set it as the index. This is loading a CSV file, but Pandas supports a lot of other file formats, see: Pandas Input/Output.\n\nIf you must/need to use .MAT files, scipy can read these: scipy.io.loadmat\n\n[ ]:\n\ndf = pd.read_csv(filename) #load the data\ndf = df.set_index(df.columns) #set the first column as the index\ndf\n\n\"X (2000g)\" \"Y (2000g)\" \"Z (2000g)\"\nTime\n0.004394 -0.122072 -0.122072 -0.061036\n0.004594 -0.061036 0.488289 -0.366217\n0.004794 0.183108 0.122072 -0.061036\n0.004994 0.122072 -0.122072 -0.122072\n0.005194 0.122072 0.122072 -0.244144\n... ... ... ...\n27.691064 -0.427253 -0.152590 -0.671397\n27.691264 -0.122072 -0.335698 -0.305180\n27.691464 -0.183108 -0.152590 -0.122072\n27.691664 -0.305180 0.030518 -0.244144\n27.691864 -0.305180 -0.030518 -0.366217\n\n138440 rows × 3 columns\n\n### Basic Analysis¶\n\nOnce in a Pandas dataframe, doing some basic analysis is SUPER easy as shown. Here’s a link to the docs for the .max() function, but notice the many others readily available.\n\nThe peak will be applied after first finding the absolute value to ensure we don’t ignore large negative values.\n\n$peak=\\max(\\left | a \\right |)$\n\nThen the RMS is a simple square root of the mean of all the values squared.\n\n$rms = \\sqrt{\\left(\\frac{1}{n}\\right)\\sum_{i=1}^{n}(a_{i})^{2}}$\n\nThe crest factor is equal to the peak divided by the RMS.\n\n$crest = \\frac{peak}{rms}$\n[ ]:\n\nabs_max = df.abs().max() #maximum of the absolute values, the peak\nstd = df.std() #standard deviation (equivalent to the AC coupled RMS value)\ncrest_factor = abs_max/std #crest factor (peak / RMS)\n\nstats = pd.concat([abs_max, #combine the stats into one table\nstd,\ncrest_factor],\naxis=1)\nstats.columns = ['Peak Acceleration (g)','RMS (g)','Crest Factor'] #set the headers of the table\nstats\n\nPeak Acceleration (g) RMS (g) Crest Factor\n\"X (2000g)\" 1.709010 0.249129 6.859939\n\"Y (2000g)\" 1.647974 0.279338 5.899566\n\"Z (2000g)\" 8.850233 2.687501 3.293109\n\n## Plot Full Time Series¶\n\nYou can create a plot very simply once you have a dataframe by just doing: ~~df.plot()~~ Here we show how to manipulate this plot a bit with axes labels in Matplotlib which has a very similar interface to MATLAB. There are a lot of pretty well documented examples on matplotlib’s docs site (but their docs are confusing to navigate).\n\n[ ]:\n\nfig, ax = plt.subplots() #create an empty plot\n\ndf.plot(ax=ax) #use the dataframe to add plot data, tell it to add to the already created axes\n\nax.set(xlabel='Time (s)',\nylabel='Acceleration (g)',\ntitle=filename)\nax.grid() #turn on gridlines\n\nfig.savefig('full-time-history.png')\nplt.show()",
null,
"## Plot with Plotly¶\n\nMatplotlib may be familiar, but Plotly offers much more interactivity directly in a browser. They also have really good documentation with a ton of examples online.\n\nThe trouble is that plotting too many data points may be sluggish. I’ve long maintained that plotting 10s of thousands of data points isn’t very useful anyways, you just get a shaded mess.\n\nSo here we’ll plot:\n\n• The moving peak value\n\n• The moving RMS\n\n• The time history around the peak\n\n### Moving Peak¶\n\nThis takes advantage of Pandas rolling() function.\n\n[ ]:\n\nn_steps = 100 #number of points to plot\nn = int(df.shape/n_steps) #number of data points to use in windowing\ndf_rolling_peak = df.abs().rolling(n).max().iloc[::n] #finds the absolute value of every datapoint, then does a rolling maximum of the defined window size, then subsamples every nth point\n\ndf_rolling_peak\n\n\"X (2000g)\" \"Y (2000g)\" \"Z (2000g)\"\nTime\n0.004394 NaN NaN NaN\n0.281203 0.976577 1.159686 0.976577\n0.558025 1.159686 1.403830 1.068132\n0.834870 1.403830 1.159686 1.129168\n1.111689 1.220722 1.281758 1.220722\n... ... ... ...\n26.576863 1.159686 1.037613 0.793469\n26.853655 0.976577 1.037613 0.854505\n27.130463 1.068132 1.068132 0.946059\n27.407260 0.976577 1.281758 0.793469\n27.684064 1.098650 1.129168 0.915541\n\n101 rows × 3 columns\n\n[ ]:\n\nfig = xp.line(df_rolling_peak)\nfig.update_layout(\ntitle=\"Rolling Peak\",\nxaxis_title=\"Time (s)\",\nyaxis_title=\"Acceleration (g)\",\n)\nfig.show()\nfig.write_html('rolling_peak.html',full_html=False,include_plotlyjs='cdn')\n\n\n### Moving RMS¶\n\nNow we’ll plot the rolling RMS using the standard deviation. Notice that these rolling value plots make it much easier to compare the datasets than by trying to plot all values which result in a shaded mess.\n\nAlso in this example I’m showing how easy it is to change the theme of the plotly figure, see their documentation for more examples and information. You can also make custom themes.\n\n[ ]:\n\ndf_rolling_rms = df.rolling(n).std().iloc[::n] #does a rolling standard deviation of the defined window size, then subsamples every nth point\n\nfig = xp.line(df_rolling_rms)\nfig.update_layout(\ntitle=\"Rolling RMS\",\nxaxis_title=\"Time (s)\",\nyaxis_title=\"Acceleration (g)\",\ntemplate=\"plotly_dark\"\n)\nfig.show()\nfig.write_html('rolling_rms.html',full_html=False,include_plotlyjs='cdn')\n\n\n### Time History Around Peak¶\n\nNow let’s find the time that had the maximum value and display the time history around that.\n\n[ ]:\n\ndf.abs().max(axis=1).idxmax()\n\n8.659936\n\n[ ]:\n\npeak_time = df.abs().max(axis=1).idxmax() #get the time at which the peak value occurs\nd_t = (df.index[-1]-df.index)/(len(df.index)-1) #find the average time step\nfs = 1/d_t #find the sampling rate\n\nnum = 1000 / 2 #total number of datapoints to plot (divide by 2 because it will be two sided)\ndf_peak = df[peak_time - num / fs : peak_time + num / fs ] #segment the dataframe to be around that peak value\n\nfig = xp.line(df_peak)\nfig.update_layout(\ntitle=\"Time History around Peak\",\nxaxis_title=\"Time (s)\",\nyaxis_title=\"Acceleration (g)\",\ntemplate=\"plotly_white\"\n)\nfig.show()\nfig.write_html('time_history_peak.html',full_html=False,include_plotlyjs='cdn')\n\n\n## PSD¶\n\nNow using SciPy we can easily compute and plot a PSD using a custom function we’ll make to ease the interface to SciPy’s Welch function. This is very similar to MATLAB’s version.\n\n[ ]:\n\ndef get_psd(df, bin_width=1.0, window=\"hann\"):\nd_t = (df.index[-1]-df.index)/(len(df.index)-1)\nfs = 1/d_t\nf, psd = signal.welch(\ndf.values, fs=fs, nperseg= int(fs / bin_width), window=window, axis=0\n)\n\ndf_psd = pd.DataFrame(psd, columns=df.columns)\ndf_psd[\"Frequency (Hz)\"] = f\ndf_psd = df_psd.set_index(\"Frequency (Hz)\")\nreturn df_psd\n\n[ ]:\n\ndf_psd = get_psd(df,bin_width=4) #compute a PSD with a 1 Hz bin width\ndf_psd.to_csv('psd.csv') #save to a CSV file\ndf_psd\n\n\"X (2000g)\" \"Y (2000g)\" \"Z (2000g)\"\nFrequency (Hz)\n0.000000 0.000048 0.000049 0.000072\n4.000048 0.000294 0.000276 0.000332\n8.000095 0.000256 0.000254 0.000287\n12.000143 0.000189 0.000206 0.000230\n16.000191 0.000170 0.000156 0.000193\n... ... ... ...\n2484.029606 0.000007 0.000006 0.000007\n2488.029654 0.000007 0.000006 0.000006\n2492.029702 0.000007 0.000007 0.000007\n2496.029749 0.000007 0.000007 0.000007\n2500.029797 0.000003 0.000004 0.000004\n\n626 rows × 3 columns\n\n[ ]:\n\nfig = xp.line(df_psd)\nfig.update_layout(\ntitle=\"Power Spectral Density (PSD)\",\nxaxis_title=\"Frequency (Hz)\",\nyaxis_title=\"Acceleration (g^2/Hz)\",\nxaxis_type=\"log\",\nyaxis_type=\"log\"\n)\nfig.show()\nfig.write_html('psd.html',full_html=False,include_plotlyjs='cdn')\n\n\n## Cumulative RMS from PSD¶\n\nNow that we have the PSD, we can easily compute and plot the overall RMS value. This is partially thanks to the cumulative sum function in Pandas.\n\nThe nice thing about a PSD (in addition to the easy control of the bin width) is that the area directly relates to the RMS level in the time domain. The equation is as follows.\n\n$g_{\\text{RMS}}=\\sqrt{\\int \\text{PSD}(f)\\ df}$\n\nLet’s demonstrate by quickly using the PSD just calculated, integrating, and taking the square root and compare to the values we calculated from the time domain.\n\n[ ]:\n\ndef rms_from_psd(df_psd):\nd_f = df_psd.index - df_psd.index\ndf_rms = df_psd.copy()\ndf_rms = df_rms*d_f\ndf_rms = df_rms.cumsum()\nreturn(df_rms**0.5)\n\n[ ]:\n\ndf_rms = rms_from_psd(df_psd)\n\nfig = xp.line(df_rms)\nfig.update_layout(\ntitle=\"Cumulative RMS\",\nxaxis_title=\"Frequency (Hz)\",\nyaxis_title=\"Acceleration (g RMS)\",\nxaxis_type=\"log\",\n#yaxis_type=\"log\"\n)\nfig.show()\nfig.write_html('cum_rms.html',full_html=False,include_plotlyjs='cdn')\n\n\n## FFT¶\n\n### Typical FFT (Or Should We Say DFT)¶\n\nThis uses SciPy’s discrete Fourier transform function. The trouble here is that this may be very long and therefore plotting a LOT of data.\n\n[ ]:\n\nfrom scipy.fft import fft, fftfreq\n\ndef get_fft(df):\nN=len(df)\nfs = len(df)/(df.index[-1]-df.index)\n\nx_plot= fftfreq(N, 1/fs)[:N//2]\n\ndf_fft = pd.DataFrame()\ndf_phase = pd.DataFrame()\nfor name in df.columns:\nyf = fft(df[name].values)\ny_plot= 2.0/N * np.abs(yf[0:N//2])\n\nphase = np.unwrap(2 * np.angle(yf)) / 2 * 180/np.pi\nphase = phase[0:N//2]\n\ndf_phase = pd.concat([df_phase,\npd.DataFrame({'Frequency (Hz)':x_plot[1:],\nname:phase[1:]}).set_index('Frequency (Hz)')],axis=1)\ndf_fft = pd.concat([df_fft,\npd.DataFrame({'Frequency (Hz)':x_plot[1:],\nname:y_plot[1:]}).set_index('Frequency (Hz)')],axis=1)\n\nreturn df_fft, df_phase\n\n[ ]:\n\ndf_fft, df_phase = get_fft(df)\n\n[ ]:\n\nfig, ax = plt.subplots() #create an empty plot\n\ndf_fft.plot(ax=ax) #use the dataframe to add plot data, tell it to add to the already created axes\n\nax.set(xlabel='Frequency (Hz)',\nylabel='Acceleration (g)',\ntitle=filename)\nax.grid() #turn on gridlines\n\nfig.savefig('fft.png')\nplt.show()",
null,
"### FFT from PSD¶\n\nHere we can use the output of a PSD and convet it to a typical DFT. This has the benefit of allowing you to explicitely define the frequency bin width.\n\n[ ]:\n\ndf\n\n\"X (2000g)\" \"Y (2000g)\" \"Z (2000g)\"\nTime\n0.004394 -0.122072 -0.122072 -0.061036\n0.004594 -0.061036 0.488289 -0.366217\n0.004794 0.183108 0.122072 -0.061036\n0.004994 0.122072 -0.122072 -0.122072\n0.005194 0.122072 0.122072 -0.244144\n... ... ... ...\n27.691064 -0.427253 -0.152590 -0.671397\n27.691264 -0.122072 -0.335698 -0.305180\n27.691464 -0.183108 -0.152590 -0.122072\n27.691664 -0.305180 0.030518 -0.244144\n27.691864 -0.305180 -0.030518 -0.366217\n\n138440 rows × 3 columns\n\n[ ]:\n\ndef get_fft_from_psd(df,bin_width):\nfs = len(df)/(df.index[-1]-df.index)\nf, psd = signal.welch(df.to_numpy(),\nfs=fs,\nnperseg=fs/bin_width,\nwindow='hanning',\naxis=0,\nscaling = 'spectrum'\n)\n\ndf_psd = pd.DataFrame(psd**0.5,columns=df.columns)\ndf_psd.columns\ndf_psd['Frequency (Hz)'] = f\nreturn df_psd.set_index('Frequency (Hz)')\n\n[ ]:\n\ndf_fft_from_psd = get_fft_from_psd(df,.25)\n\nfig = xp.line(df_fft_from_psd)\nfig.update_layout(\ntitle=\"FFT from PSD\",\nxaxis_title=\"Frequency (Hz)\",\nyaxis_title=\"Acceleration (g)\",\n#xaxis_type=\"log\",\n#yaxis_type=\"log\"\n)\nfig.show()\nfig.write_html('fft_from_psd.html',full_html=False,include_plotlyjs='cdn')"
] | [
null,
"https://docs.endaq.com/en/development/_images/webinars_Webinar_Intro_Python_Acceleration_CSV_Analysis_16_0.png",
null,
"https://docs.endaq.com/en/development/_images/webinars_Webinar_Intro_Python_Acceleration_CSV_Analysis_37_0.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.64055175,"math_prob":0.92658454,"size":13352,"snap":"2023-40-2023-50","text_gpt3_token_len":4216,"char_repetition_ratio":0.10915493,"word_repetition_ratio":0.10382202,"special_character_ratio":0.37702218,"punctuation_ratio":0.2085976,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9977236,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-09T08:02:17Z\",\"WARC-Record-ID\":\"<urn:uuid:8aaecbe6-4785-4852-956d-3cc927023dee>\",\"Content-Length\":\"1049906\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:51cf30e0-6de9-4970-9d9b-c3b32611d288>\",\"WARC-Concurrent-To\":\"<urn:uuid:7ade64db-806e-4871-b265-b307959f76d0>\",\"WARC-IP-Address\":\"104.18.0.163\",\"WARC-Target-URI\":\"https://docs.endaq.com/en/development/webinars/Webinar_Intro_Python_Acceleration_CSV_Analysis.html\",\"WARC-Payload-Digest\":\"sha1:T4XZWWD6MSCKUFL2LETRSNUIOCVJ6NHM\",\"WARC-Block-Digest\":\"sha1:BEJYM4ZLDWVQOMVAAUWVHP4E57ZVJUII\",\"WARC-Truncated\":\"length\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100873.6_warc_CC-MAIN-20231209071722-20231209101722-00012.warc.gz\"}"} |
https://smcclatchy.github.io/mapping/ | [
"# Quantitative Trait Mapping\n\nThis lesson introduces genetic mapping using qtl2, a R package for analyzing quantitative phenotypes and genetic data from complex crosses like the Diversity Outbred (DO). Genetic mapping with qtl2 allows researchers in fields as diverse as medicine, evolution, and agriculture to identify specific chromosomal regions that contribute to variation in phenotypes (quantitative trait loci or QTL). The goal is to identify the action, interaction, number, and precise location of these regions.\n\nParticipants will learn to\n\n• calculate genotype and allele probabilities\n• perform a genome scan and plot the results\n• evaluate statistical significance of results\n• find estimated effects of a QTL on a phenotype\n• account for relationships among individuals by using a kinship matrix\n• perform SNP association analysis\n\nThe lesson concludes with a complete analytical workflow from a study of DO mice.The lesson is adapted from Karl Broman’s software, tutorials, and book co-authored with Saunak Sen, A Guide to QTL Mapping with R/qtl.\n\n## Prerequisites\n\nUnderstand fundamental genetic principles Know how to access files not in the working directory by specifying the path.\nKnow how to install a R package.\nKnow how to assign a value to a variable. Know how to apply a built-in function.\n\n## Schedule\n\n Setup Download files required for the lesson 00:00 1. Introduction What is quantitative trait mapping? 00:15 2. Input File Format How are the data files formatted for qtl2? Which data files are required for qtl2? Where can I find sample data for mapping with the qtl2 package? 01:00 3. Calculating Genotype Probabilities How do I calculate QTL at positions between genotyped markers? How do I calculate QTL genotype probabilities? How do I calculate allele probabilities? How can I speed up calculations if I have a large data set? 02:00 4. Special covariates for the X chromosome How do I find the chromosome X covariates for a cross? 02:30 5. Performing a genome scan How do I perform a genome scan? How do I plot a genome scan? 03:30 6. Performing a permutation test How can I evaluate the statistical significance of genome scan results? 04:00 7. Finding LOD peaks How do I locate LOD peaks above a certain threshold value? 05:00 8. Calculating A Kinship Matrix Why would I calculate kinship between individuals? How do I calculate kinship between individuals? What does a kinship matrix look like? 06:00 9. Performing a genome scan with a linear mixed model How do I use a linear mixed model in a genome scan? How do different mapping and kinship calculation methods differ? 06:30 10. Performing a genome scan with binary traits How do I create a genome scan for binary traits? 07:00 11. Estimated QTL effects How do I find the estimated effects of a QTL on a phenotype? 07:30 12. SNP association mapping How do I identify SNPs in a QTL? 08:30 13. QTL analysis in Diversity Outbred Mice How do I bring together each step in the workflow? How is the workflow implemented in an actual study? 09:30 Finish\n\nThe actual schedule may vary slightly depending on the topics and exercises chosen by the instructor.\n\nThis lesson was funded by NIH grant R25GM123516 awarded to Dr. Gary Churchill at The Jackson Laboratory."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.809704,"math_prob":0.69113284,"size":2946,"snap":"2019-43-2019-47","text_gpt3_token_len":671,"char_repetition_ratio":0.12508498,"word_repetition_ratio":0.020618556,"special_character_ratio":0.23625255,"punctuation_ratio":0.12390925,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9546522,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-14T21:55:03Z\",\"WARC-Record-ID\":\"<urn:uuid:f126a349-8209-4c96-ae35-df9f54d95484>\",\"Content-Length\":\"15582\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bdd32ae7-241b-4869-a290-6d157b8c8e4b>\",\"WARC-Concurrent-To\":\"<urn:uuid:cef2dec6-c3da-4482-b0a8-aacac749ba36>\",\"WARC-IP-Address\":\"185.199.110.153\",\"WARC-Target-URI\":\"https://smcclatchy.github.io/mapping/\",\"WARC-Payload-Digest\":\"sha1:Z2BONMKLNGIKR6OHTFQVYEXMCZAHPVIY\",\"WARC-Block-Digest\":\"sha1:MNMNEVAYKCCD5BLHWHBZMNWGLCDMVPX2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668539.45_warc_CC-MAIN-20191114205415-20191114233415-00014.warc.gz\"}"} |
http://seaborn.pydata.org/examples/simple_violinplots.html | [
"# Violinplots with observations#",
null,
"seaborn components used: `set_theme()`, `violinplot()`\n\n```import numpy as np\nimport seaborn as sns\n\nsns.set_theme()\n\n# Create a random dataset across several variables\nrs = np.random.default_rng(0)\nn, p = 40, 8\nd = rs.normal(0, 2, (n, p))\nd += np.log(np.arange(1, p + 1)) * -5 + 10\n\n# Show each distribution with both violins and points\nsns.violinplot(data=d, palette=\"light:g\", inner=\"points\", orient=\"h\")\n```"
] | [
null,
"http://seaborn.pydata.org/_images/simple_violinplots.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6495852,"math_prob":0.98362446,"size":427,"snap":"2023-14-2023-23","text_gpt3_token_len":129,"char_repetition_ratio":0.10638298,"word_repetition_ratio":0.0,"special_character_ratio":0.31381732,"punctuation_ratio":0.21348314,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9939215,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-20T22:55:45Z\",\"WARC-Record-ID\":\"<urn:uuid:f8fa3a76-739e-47ad-a215-f7f6b8ec11bc>\",\"Content-Length\":\"23012\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:41d05ed1-0f93-452f-a963-be17e7a6c56a>\",\"WARC-Concurrent-To\":\"<urn:uuid:8809a3bd-b0ec-46dc-8a0f-d58947d8b46d>\",\"WARC-IP-Address\":\"185.199.110.153\",\"WARC-Target-URI\":\"http://seaborn.pydata.org/examples/simple_violinplots.html\",\"WARC-Payload-Digest\":\"sha1:FSAYL6R7ZXJHQXOUCMRPG4S4YYK674ZO\",\"WARC-Block-Digest\":\"sha1:HYP7RLDY5GDC4V5AURIYFCPFOFTJF67O\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296943562.70_warc_CC-MAIN-20230320211022-20230321001022-00322.warc.gz\"}"} |
http://mbond.free.fr/ASAS1826/ASAS1826.htm | [
"MBCAA Observatory\n\n# ASAS1826+12: the eclipsing Cepheid\n\n## Observed: 7, 12, 13, 14, 15, 17, 19 July, 5, 18 Aug, 5, 6, 12, 15, 19, 20 Sept, 2, 4, 13, 14, 15, 16, 18 Oct, 3, 8 Nov 2007\n\n### Abstract\n\nMulticolor photometric measurements of this Cepheid in an eclipsing binary system (the only known one in our galaxy) are presented.\n\n### Introduction\n\nASAS1826+12 (actually ASAS182611+1212.6 or ASAS182612 or TYC 1031 01262 1) is the first known Cepheid in an eclipsing binary system in our galaxy (Antipin et al (2007)). The pulsational period is 4.2 days and the orbital period is 51.4 days. Both the primary and secondary eclipses are visible and there is an orbital modulation between them.\n\nThe ephemerides are (Antipin et al (2007)):\nPulsation maximum = 2,453,196.529 + 4.1523*E\nPrimary eclipse minimum = 2,453,571.36 + 51.38*E\n\nThe AAVSO issued the Alert Notice 351 (June 8, 2007) to start a campaign of observations of this system (and of KU Her in the same field).\n\n### Observations\n\nThe observations were carried out with a 203mm f/6.3 SC telescope, BVRcIc Johnson-Cousins filters (from Schuler) in a filter wheel and a SBIG ST7E camera (KAF401E CCD). Each exposure is 200s long with one exposure/measurement (no stack).\n\nFor the photometry, the comparison star and the check star recommended by the AAVSO are used. They are:\n\n ID RA(J2000) DEC V B-V V-Rc Rc-Ic V-Ic comp 18:26:09.44 +12:17:37.2 11.914 0.748 0.444 0.432 0.876 check 18:26:23.68 +12:15:47.2 10.892 0.665 0.383 0.358 0.738\n(comp is AAVSO chart 070925 star 119).\n\nThe phase coverage of the observations:",
null,
"Red solid line: the orbital phase;\nBlue dot line: the pulsational phase;\nRed squares: the orbital phases of the V measurements;\nBlue circles: the pulsational phases of the V measurements.\n\n### Transformations\n\nThe measured b,v magnitudes may be transformed for the instrument response. The transformed B,V magnitudes are then:\n\nV = v + TvTbv*[(b-v)-(Bc-Vc)]\nB = b + TbTbv*[(b-v)-(Bc-Vc)]\n\nBc,Vc are the magnitudes of the comparison star. The transformation coefficients were measured HERE; they are:\nTbTbv = 0.002 +/- 0.027\nTvTbv = -0.025 +/- 0.015\n\nThe transformed R magnitudes (Rc filter) are computed from the observed r magnitudes the same way:\n\nR = r + TrTvr*[(v-r)-(Vc-Rc)]\n\nwith the transformation coefficient:\nTrTvr = -0.134 +/- 0.017\n\n(one Rc measurement without a match in V in transformed using a Ic measurement).\n\nAnd the transformed I magnitudes (Ic filter) from the observed i magnitudes:\nI = i + TiTri*[(r-i)-(Rc-Ic)]\n\nwith the transformation coefficient:\nTiTri =0.022 +/- 0.011\n\n### Check star\n\nThe raw magnitudes and the transformed magnitudes of the check star are:",
null,
"Blue circles: the raw magnitudes, the error bars are +/- the 1 sigma statistical uncertainties;\nRed squares: the transformed magnitudes, the errors bars are computed from the +/- 1 sigma statistical uncertainties on the raw magnitudes and the standard deviations on the transformation coefficients;\n11.557 is the magnitude according to the AAVSO.",
null,
"",
null,
"",
null,
"### Pulsational phase analysis\n\nAssuming that the eclipses have a width of 0.25 phase, the transformed magnitudes as a function of the pulsational phases are:",
null,
"Blue: for the orbital phases around the primary eclipse;\nCyan: for the orbital phases around the secondary eclipse;\nRed: all the other (they are still modulated by the orbital movement according to Antipin et al (2007)).\nThe error bars are computed from the 1 sigma statistical uncertainties on the raw magnitudes and the standard deviations on the transformation coefficients.",
null,
"",
null,
"",
null,
"Another pulsational phase plot:",
null,
"The magnitudes are for the orbital phases outside the eclipses:\nBlue: B magnitudes;\nGreen: V magnitudes;\nRed: Rc magnitudes;\nBrown: Ic magnitudes.\n\nLate by 0.12 pulsational period?\n\n### Orbital phase analysis\n\nThe diameter of a cepheid is at its maximum when the brightness is diminishing (around phase 0.25) and it is at its minimium when the brightness is on its rise (~0.75); see for example TX Mon.\n\nWhen the cepheid of this binary system is eclipsed, the eclipse width should then be larger when the diameter is at its maximum.\n\n### Reference\n\nAntipin S.V., Sokolovsky K.V., Ignatieva T.I. (2007) MNRAS arXiv/astro-ph:0705.0605.\n\n### Technical notes\n\nTelescope and camera configuration.\n\nComputer and software configuration.\n\n### Astronomical notes",
null,
"Software for astronomy"
] | [
null,
"http://mbond.free.fr/ASAS1826/coverage.jpg",
null,
"http://mbond.free.fr/ASAS1826/checkB.jpg",
null,
"http://mbond.free.fr/ASAS1826/checkV.jpg",
null,
"http://mbond.free.fr/ASAS1826/checkR.jpg",
null,
"http://mbond.free.fr/ASAS1826/checkI.jpg",
null,
"http://mbond.free.fr/ASAS1826/pulseB.jpg",
null,
"http://mbond.free.fr/ASAS1826/pulseV.jpg",
null,
"http://mbond.free.fr/ASAS1826/pulseR.jpg",
null,
"http://mbond.free.fr/ASAS1826/pulseI.jpg",
null,
"http://mbond.free.fr/ASAS1826/pulseBVRI2.jpg",
null,
"http://mbond.free.fr/wb_logou.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.75429314,"math_prob":0.9360307,"size":3370,"snap":"2020-10-2020-16","text_gpt3_token_len":967,"char_repetition_ratio":0.16785502,"word_repetition_ratio":0.054158606,"special_character_ratio":0.2991098,"punctuation_ratio":0.20224719,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96667796,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22],"im_url_duplicate_count":[null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-29T04:14:01Z\",\"WARC-Record-ID\":\"<urn:uuid:97d4903d-5976-40e2-ae3f-bdb0c3245c63>\",\"Content-Length\":\"14546\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b8473f9a-4bdb-4a1a-a6b7-ed3f69f4f5e3>\",\"WARC-Concurrent-To\":\"<urn:uuid:f3162a1a-1c74-40e0-af19-04bd385c2868>\",\"WARC-IP-Address\":\"212.27.63.159\",\"WARC-Target-URI\":\"http://mbond.free.fr/ASAS1826/ASAS1826.htm\",\"WARC-Payload-Digest\":\"sha1:RBDZJAWJDJCTZH2VGO6LW6NK3APRDE5W\",\"WARC-Block-Digest\":\"sha1:OSGC3GDGPJGYY63FECIGOIKYPOSWSJ2K\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875148375.36_warc_CC-MAIN-20200229022458-20200229052458-00162.warc.gz\"}"} |
http://sobo.com.au/rbiqoi4j/wtuds/4-5-practice-determinants-and-cramers-rule-answers | [
"",
null,
"# 4 5 practice determinants and cramers rule answers\n\n powershow. Given a system of linear equations, Cramer's Rule uses determinants of a matrix to solve for just one of the variables without having to solve the whole system of Here you can solve systems of simultaneous linear equations using Cramer's Rule Calculator with complex If the main determinant is zero the system of linear 20/07/2006 · Why is cramers rule more efficient than Gauss for Computers? then viewing your answers. Answers to Practice Quiz: Determinants -Inverse Matrices - Cramer's Rule 1) -42) 43) -114) 3 5) No unique solution 6) -3-4 7) 0-11 79 8) No unique solutionCramer’s Rule - Mathematics - Old Exam Paper, Use Cramer’s Rule to confirm your answer for y. com/view1/1dca97-ZDc1Z/CRAMERS_RULE_powerpoint_pptCramers Rule 4'3 Cont because Cramer 5. 4 Third Order Determinants and Cramer - 5. Use determinants to solve the normal equations. 4 Third Order Determinants and Replace column 2 with the answers to Help Center Detailed answers to any questions you might have How can I typeset determinants of Cramer's rule? up vote 4 down vote favorite Math for Economics II Invertible Matrices Determinants and Cramers Rule 161 168 from ECON econ ua at New York UniversityHelp Center Detailed answers to any questions you might have Cramer's rule for Determinants are rational functions and eigenvalues Calculate Determinant of a Matrix. Check answers using these . The reason is that, with practice, With Cramer's rule, Status: ResolvedAnswers: 3CRAMERS RULE PowerPoint presentation - …www. This is a tutorial on how to calculate the determinant of a 2 by 2 matrix and that of a 3 by 3 matrix\n © Copyright Cellguru.Ru 2006-2012 Сайт управляется системой"
] | [
null,
"http://www.cellguru.ru/images/woman.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8049044,"math_prob":0.76381767,"size":1616,"snap":"2021-04-2021-17","text_gpt3_token_len":401,"char_repetition_ratio":0.15384616,"word_repetition_ratio":0.04597701,"special_character_ratio":0.23329207,"punctuation_ratio":0.06690141,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95038456,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-16T10:10:27Z\",\"WARC-Record-ID\":\"<urn:uuid:a2cdd922-3a16-43ed-9d36-db239f2d2031>\",\"Content-Length\":\"5652\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3575191e-b21f-4cbb-ae87-29c217f90ee3>\",\"WARC-Concurrent-To\":\"<urn:uuid:c1f8b3a0-e0be-413b-bc49-451626b8e7b2>\",\"WARC-IP-Address\":\"81.19.186.191\",\"WARC-Target-URI\":\"http://sobo.com.au/rbiqoi4j/wtuds/4-5-practice-determinants-and-cramers-rule-answers\",\"WARC-Payload-Digest\":\"sha1:NULUD76LAUCEBA2TNMVBEGU6K355WNXR\",\"WARC-Block-Digest\":\"sha1:RLVL3Z4USTUTKVGQOW7GTJT3TN7K44K7\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038056325.1_warc_CC-MAIN-20210416100222-20210416130222-00383.warc.gz\"}"} |
http://www.jb.man.ac.uk/research/gravlens/intro/intro.html | [
"Introduction to lensing",
null,
"Illustration of a gravitational lens (courtesy A. Gunn).",
null,
"MERLIN image of the first known gravitational lens, 0957+561.\n\nWhen a galaxy lies close to the line of sight to a distant quasar, the paths of the quasar's light and radio rays are bent by the galaxy's gravitational field. This can produce intensified, multiple images of the quasar - a phenomenon known as gravitational lensing.\n\nIf the line of sight to the quasar passes exactly through the galaxy, the symmetry of the system results in the formation of an ``Einstein ring''. If the line of sight is slightly off-centre, this produces multiple point images, as in the case of 0957+561, the first lens system discovered.\n\nOn the right is a MERLIN image of 0957+561. Note that the core of the quasar (the bright red point) has a double image, the secondary being about six arcseconds south of the primary image. Observations with optical telescopes show that these two images have the same optical spectrum, and so we can deduce that they are images of the same background object.\n\nAbout 150-200 gravitational lens systems are now known. In some cases the background object is a quasar, and in some cases it is a galaxy. 22 of these were discovered by the CLASS survey, based at Jodrell Bank.\n\nApart from their intrinsic interest, gravitational lenses have exciting cosmological applications. Study of the lensed images, together with knowledge of gravitational physics, allows us to study the mass distribution of the lensing galaxy. This is important because gravitational lensing is sensitive to all matter, whether it emits light or not, and so allows us to measure both normal light-emitting matter, and also dark matter which is thought to make up much of the mass of galaxies. We are beginning to be able to make more detailed studies of mass distributions in galaxies, and test the degree to which the matter in galaxies is distributed in smaller lumps, as predicted by large numerical simulations with Cold Dark Matter.\n\nIf the background quasar is variable, the time delay between variations of two different components in the image allows us to calculate the difference in the paths of the corresponding rays from the distant quasar. Given the distances (i.e. redshifts) of the galaxy and quasar, and assuming that we can obtain a good enough mass model for the lensing galaxy, we can work out the absolute scale of the system -- and hence the Hubble expansion constant (Ho) can be calculated.\n\nLast updated Fri Apr 18 16:56:17 2008"
] | [
null,
"http://www.jb.man.ac.uk/research/gravlens/intro/geom.gif",
null,
"http://www.jb.man.ac.uk/research/gravlens/intro/0957.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9084777,"math_prob":0.97616905,"size":2479,"snap":"2022-05-2022-21","text_gpt3_token_len":520,"char_repetition_ratio":0.14868687,"word_repetition_ratio":0.00973236,"special_character_ratio":0.20532472,"punctuation_ratio":0.09287257,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96077466,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-20T05:07:28Z\",\"WARC-Record-ID\":\"<urn:uuid:9eab27e0-47a3-4e95-bfdb-a80815950a02>\",\"Content-Length\":\"9161\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:37d67ded-41de-43b8-aab9-6385603df184>\",\"WARC-Concurrent-To\":\"<urn:uuid:3df41bda-db6e-467e-840e-271a5a5b225d>\",\"WARC-IP-Address\":\"130.88.24.62\",\"WARC-Target-URI\":\"http://www.jb.man.ac.uk/research/gravlens/intro/intro.html\",\"WARC-Payload-Digest\":\"sha1:GQWJWKFZ7ID75FI73BIFWERSREGRUVQJ\",\"WARC-Block-Digest\":\"sha1:GFKJT6MOPFRZ73ZL2XJ3WXIKM2GLVD3G\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320301720.45_warc_CC-MAIN-20220120035934-20220120065934-00188.warc.gz\"}"} |
https://www.coolkidfacts.com/questions-about-electricity/ | [
"",
null,
"# 10 Frequently Asked Questions About Electricity\n\nDefined as the flow of charge that powers the world around us, have you ever felt intrigued to learn more about electricity?\n\nConsidered to be one of the most crucial topics in Science, it is flabbergasting to learn about the multiple theories and concepts in this topic. But as we know, practice brings precision, solving questions on electricity can help you to clear your doubts and bring clarity in the concepts.\n\nAnswer our quiz on the chapter ‘electricity’ and understand the concepts in the most simple language. We have handpicked all the questions about electricity to ensure you learn the best.\n\nBut before that, remember you just have 10 seconds to answer. The correct option is given at the bottom of the page. Score your answers once you complete the quiz.\n\nSo, without any further ado, let’s get started!\n\n## Questions\n\n1. What is electricity?\n\na) Electricity is defined as the flow of electrons in a conductor.\nb) Electricity is the connection of charges in a conductor.\nc) Electricity is the difference in the potential in a conductor.\nd) Electricity is defined as the movement of neutrons in a conductor.\n\n1. What are the two types of electricity?\n\na) Positive Electricity and Negative Electricity\nb) Static Electricity and Current Electricity\nc) Electron Electricity and Proton Electricity\nd) Charged Electricity and Neutral Electricity\n\n1. A plastic wire is a\n\na) Conductor\nb) Insulator\nc) None of these\nd) Both of these\n\n1. The concept of electroplating is based on\n\na) Magnetic effect of electricity\nb) Heating effect of electricity\nc) Physical effect of electricity\nd) Chemical effect of electricity\n\n1. The potential difference in an electric circuit is due to\n\na) Cell or battery\nb) Switch\nc) Voltmeter\nd) Ammeter\n\n1. Who invented the battery?\n\na)Thomas Edison\nb) Allesandro Volta\nd) None of the above\n\n1. Which device is used for measuring electricity?\n\na) Ammeter\nb) Galvanometer\nc) Voltmeter\nd) Odometer\n\n1. The electric current flows in an electric circuit due to\n\na) Difference in current\nb) Difference in potential difference\nc) Same potential difference\nd) different in heating capacity\n\n1. Tin cans are electroplated with tin onto the iron. Why?\n\na) Tin coating makes the vessel cheap\nb) Tin gives a shiny appearance to the vessel\nc) Tin makes the vessel lighter in weight\nd) Tin is less reactive than iron.\n\n1. Tap water is a good conductor of electricity, but distilled water is not. Why?\n\na) Distilled does not contain salts\nb) Tap water contains salts\nc) Only a) is correct\nd) Both a) and b) are correct\n\n1) Answer – a)\n\nElectricity is the flow of electrons from a higher potential to a low potential inside a conductor. An example of electricity is the lighting of a bulb, lightning in the sky, etc.\n\n2) Answer – b)\n\nElectricity is of 2 types: static electricity and current electricity.\n\nIn static electricity, the charges are at rest, and in current electricity, they move inside the conductor.\n\n3) Answer – b)\n\nA plastic wire cannot conduct electricity, and hence it is an insulator.\n\n4) Answer – d)\n\nElectroplating is the deposition of a thin layer of the desired metal using electric current and depends on the chemical effect of electricity.\n\n5) Answer – a)\n\nThe potential difference is the difference in electric potential at any two points and is created by a cell/battery.\n\n6) Answer – b)\n\nAlessandro Volta invented the first battery in 1800. It was after him the ‘volt’ is named.\n\n7) Answer – a)\n\nAn Ammeter is a device used for measuring electric current in an electric circuit.\n\n8) Answer – b)\n\nThe potential difference allows the electrons to move from high to low potential.\n\n9) Answer – d)\n\nTin is less reactive than iron. Thus, your food is safe inside the electroplated tin can.\n\n10) Answer – d)\n\nDistilled water does not contain salts, and tap water does. These salts conduct electricity. Thus both the options are correct.\n\nDid these questions sparked the Einstein instinct in you?\nElectricity involves different concepts and theories. We hope our quiz helped you to understand the basic fundamentals. For more such questions, practice our other questionnaires."
] | [
null,
"https://ct.pinterest.com/v3/",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.87059724,"math_prob":0.9343985,"size":4132,"snap":"2022-40-2023-06","text_gpt3_token_len":925,"char_repetition_ratio":0.18483527,"word_repetition_ratio":0.011428571,"special_character_ratio":0.21103582,"punctuation_ratio":0.08032128,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95589024,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-05T20:56:35Z\",\"WARC-Record-ID\":\"<urn:uuid:410e76c0-498a-4614-8700-6c379e5a0fde>\",\"Content-Length\":\"95953\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:db2e6c25-2162-40af-859f-10aff751d1e6>\",\"WARC-Concurrent-To\":\"<urn:uuid:444ea7d9-6806-487c-8c6c-ee9abdf155a9>\",\"WARC-IP-Address\":\"149.28.121.166\",\"WARC-Target-URI\":\"https://www.coolkidfacts.com/questions-about-electricity/\",\"WARC-Payload-Digest\":\"sha1:7ZT76G6G7S567DHVVCL4RCUKYQB2Y35R\",\"WARC-Block-Digest\":\"sha1:RVJ4BGUHN7IYQ32CLNW3NL4K6DR5Y6T7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337668.62_warc_CC-MAIN-20221005203530-20221005233530-00681.warc.gz\"}"} |
https://jax.readthedocs.io/en/latest/_autosummary/jax.scipy.special.lpmn_values.html | [
"# jax.scipy.special.lpmn_values#\n\njax.scipy.special.lpmn_values(m, n, z, is_normalized)[source]#\n\nThe associated Legendre functions (ALFs) of the first kind.\n\nUnlike lpmn, this function only computes the values of ALFs. The ALFs of the first kind can be used in spherical harmonics. The spherical harmonic of degree l and order m can be written as $$Y_l^m(\\theta, \\phi) = N_l^m * P_l^m(\\cos \\theta) * \\exp(i m \\phi)$$, where $$N_l^m$$ is the normalization factor and θ and φ are the colatitude and longitude, repectively. $$N_l^m$$ is chosen in the way that the spherical harmonics form a set of orthonormal basis function of $$L^2(S^2)$$. Normalizing $$P_l^m$$ avoids overflow/underflow and achieves better numerical stability.\n\nParameters\n• m (int) – The maximum order of the associated Legendre functions.\n\n• n (int) – The maximum degree of the associated Legendre function, often called l in describing ALFs. Both the degrees and orders are [0, 1, 2, …, l_max], where l_max denotes the maximum degree.\n\n• z (ndarray) – A vector of type float32 or float64 containing the sampling points at which the ALFs are computed.\n\n• is_normalized (bool) – True if the associated Legendre functions are normalized. With normalization, $$N_l^m$$ is applied such that the spherical harmonics form a set of orthonormal basis functions of $$L^2(S^2)$$.\n\nReturn type\n\nndarray\n\nReturns\n\nA 3D array of shape (l_max + 1, l_max + 1, len(z)) containing the values of the associated Legendre functions of the first kind. The return type matches the type of z.\n\nRaises\n• TypeError if elements of array z are not in (float32, float64).\n\n• ValueError if array z is not 1D.\n\n• NotImplementedError if m!=n."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7492287,"math_prob":0.9973257,"size":1563,"snap":"2022-27-2022-33","text_gpt3_token_len":417,"char_repetition_ratio":0.12636305,"word_repetition_ratio":0.057613168,"special_character_ratio":0.26167625,"punctuation_ratio":0.13220339,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99932003,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-12T05:04:34Z\",\"WARC-Record-ID\":\"<urn:uuid:c62eea55-9f77-4f62-91dc-8e0b916167da>\",\"Content-Length\":\"32394\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6c4d1d6e-4301-48ec-a1fb-20fcace6db94>\",\"WARC-Concurrent-To\":\"<urn:uuid:3edda52a-3574-4101-bcc5-f8c65cdb42e4>\",\"WARC-IP-Address\":\"104.17.33.82\",\"WARC-Target-URI\":\"https://jax.readthedocs.io/en/latest/_autosummary/jax.scipy.special.lpmn_values.html\",\"WARC-Payload-Digest\":\"sha1:R3PMB7OSIZAQS5OLHQXKHSHBG2H3YGKZ\",\"WARC-Block-Digest\":\"sha1:KKHM5IRNT4UXOUFPNMNE36W6WPT6OXPM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571584.72_warc_CC-MAIN-20220812045352-20220812075352-00300.warc.gz\"}"} |
https://nanoscalereslett.springeropen.com/articles/10.1186/s11671-019-2852-y | [
"# Coupled Resonance Enhanced Modulation for a Graphene-Loaded Metamaterial Absorber\n\n## Abstract\n\nA graphene-loaded metamaterial absorber is investigated in the mid-infrared region. The light-graphene interaction is greatly enhanced by virtue of the coupled resonance through a cross-shaped slot. The absorption peaks show a significant blueshift with increasing Fermi level, enabling a wide range of tunability for the absorber. A simple circuit model well explains and predicts this modulation behavior. Our proposal may find applications in a variety of areas such as switching, sensing, modulating, and biochemical detecting.\n\n## Background\n\nPlasmonic metamaterial (PM) absorbers work with metallic nanostructures at deep subwavelength scale. Perfect absorptions can be achieved and tailored at particular wavelengths, leading to a variety of applications including light emitter/detector, sensor, photothermal therapy, optical-mechanical interaction, and hyperspectral imaging [1,2,3,4,5,6,7]. PM absorbers also provide a promising platform for designing novel functional devices with tunable properties. By introducing components such as liquid crystals, semiconductors, or phase-change materials, the optical response can be modulated electrically, optically, or thermally [8,9,10,11,12,13], which enables new types of modulators, switches, and multispectral detectors.\n\nMost recently, graphene has received considerable attention because of its high-speed modulation capability and tunability as a plasmonic material [14,15,16,17,18,19,20]. Specifically, the graphene conductivity depends on the Fermi level (EF) which can be continuously tuned through bias voltage within several nanoseconds, enabling a high modulation rate in the near infrared and mid-infrared regions [17, 19,20,21,22,23,24]. However, as the single graphene layer is only atomically thick, the interaction between the incident light and the plasmonic resonance is quite weak. And this interaction becomes even weaker in the mid-infrared area due to the Pauli blocking of interband transitions . As a result, the wavelength tuning range as well as the modulation depth is quite limited. The wavelength shift is generally less than 10% of the resonance wavelength [21, 22, 25,26,27,28], which is still a challenge for practical applications in optical communications and wideband spectral detections. Thus, in order to achieve efficient electro-optical modulation, the graphene-light interaction needs to be greatly strengthened. Some progresses have been made in previous studies. Based on the designs of complex nanostructures such as nano-antennas and split ring resonators [19, 21, 22, 25, 27, 28], the enhancement of graphene-light interaction has been theoretically and experimentally demonstrated. Yet, these designs are usually complicated or polarization-dependent, the range of working frequency is relatively small and the tunability is still limited.\n\nIn this work, we have proposed a graphene-loaded absorber with modulation range from 9 to 14 μm, which is of great interest for applications such as biochemical sensing and thermal imaging [5, 29,30,31]. The coupled-resonances inside the cross-shaped slot offer four orders of enhancement for the electric field, strongly intensifying the graphene-light interaction and resulting in a shift of up to 25% in the central wavelength. In addition, we propose a simple LC circuit model which well explains and predicts the graphene-induced modulation controlled by the voltage and geometric parameters. Such a large range of tunability would be promising in many applications.\n\n## Methods\n\nAs shown in Fig. 1a, patterned metallic patches are arranged with a period of Λ = 8 μm on the metal substrate separated by a dielectric spacer. A single layer of graphene sandwiches between the patches and the spacer. The substrate is very thick and acts as a reflection mirror. The thickness of the spacer layer is td = 520 nm and that of the metallic patches is tm = 100 nm. Figure 1b shows the top view of one unit cell. Two subunits are arranged in a diagonal symmetry in order to support the polarization independence. A cross-shaped slot is etched on each square patch, dividing it into four small identicals. The sizes of the small identicals in S1 and S2 are l1 = 1.5 μm and l2 = 1.7 μm, respectively. The slot width for both subunits is a = 20 nm. In our study, the metallic material is chosen as gold (Au), whose optical property is described by the Drude model of $$\\varepsilon \\left(\\omega \\right)=1-{\\omega}_p^2/\\left(\\omega \\left(\\omega +\\tau \\right)\\right)$$ with ωp = 1.369 × 1016 Hz and τ = 1.224 × 1014 Hz . The dielectric spacer is composed of zinc sulfide (ZnS), whose optical index is n = 2.2 with negligible loss in the mid-infrared region .\n\nThe finite-difference time-domain (FDTD; Lumerical FDTD Solutions) method is employed to calculate reflectance spectra and electromagnetic field distribution. The simulations are carried out with periodic boundary conditions in the x and y directions and perfect matched layer conditions in the z directions. The single graphene layer is modeled as a two-dimensional structure by the surface conductivity approach . The surface conductivity of the graphene layer σg, including the interband term σinter and the intraband term σintra, can be calculated by the Kubo formula .\n\n$$\\begin{array}{l}{\\sigma}_{\\mathrm{g}}\\left(\\omega, {E}_{\\mathrm{F}},\\Gamma, T\\right)={\\sigma}_{\\mathrm{intra}}+{\\sigma}_{\\mathrm{inter}}\\\\ {}=\\frac{-{ie}^2}{\\pi {\\mathrm{\\hslash}}^2\\left(\\omega +i2\\Gamma \\right)}\\underset{0}{\\overset{\\infty }{\\int }}\\xi \\left(\\frac{\\partial {f}_d\\left(\\xi \\right)}{\\partial \\xi }-\\frac{\\partial {f}_d\\left(-\\xi \\right)}{\\partial \\xi}\\right) d\\xi +\\frac{ie^2\\left(\\omega +i2\\Gamma \\right)}{\\pi {\\mathrm{\\hslash}}^2}\\underset{0}{\\overset{\\infty }{\\int }}\\xi \\left(\\frac{f_d\\left(-\\xi \\right)-{f}_d\\left(\\xi \\right)}{{\\left(\\omega +i2\\Gamma \\right)}^2-4{\\left(\\xi /\\mathrm{\\hslash}\\right)}^2}\\right) d\\xi \\end{array}}$$\n(1)\n\nwhere e and ξ are the charge and energy of the electron, is the reduced plank’s constant, ω is the angular frequency, $${f}_d\\equiv 1/\\left({e}^{\\left(\\xi -{E}_F\\right)/{k}_BT}+1\\right)$$ refers to the Fermi-Dirac distribution, T is the absolute temperature, Γ is the scattering rate, kB is the Boltzmann constant, and EF is the Fermi level. In our calculation, T = 300 K, and Γ = 10 meV . The mesh size near the graphene layer is 0.25 nm, and 2.5 nm in the slots. The effective permittivity of graphene can then be expressed as\n\n$${\\varepsilon}_{\\mathrm{g}}=1+\\mathrm{i}{\\sigma}_{\\mathrm{g}}/\\left({\\varepsilon}_0\\omega {t}_{\\mathrm{g}}\\right)$$\n(2)\n\nwhere ε0 is the permittivity of vacuum, and tg is the thickness of graphene layer. Equations (1) and (2) demonstrate that the optical constants of graphene change with EF. This change leads to tunability of the absorption frequency, whose range can be greatly enlarged by the coupled resonance in the nanostructures, substantially lowering the applied voltage in devices.\n\n## Results and Discussion\n\nFigure 2a shows the absorption spectra for x-polarized wave (φ = 0) at the normal incidence. When the Fermi level is EF = 0eV, two absorption peaks are observed at the wavelength λ = 12.4 μm and 13.3 μm, respectively. The incident light ranging from 12.1 to 13.5 μm is almost absorbed by the nanostructure. As EF increases, the resonance moves toward shorter wavelength. At EF = 0.2 eV, the absorption peaks shift to 11.8 μm and 12.46 μm, indicating respectively a relative shift of 4.8% and 6%. Meanwhile, the absorbance of peak 2 declines, which is attributed to the impedance mismatch between the metamaterial and air at a higher EF . Here, it is interesting that peak 2 blueshifts faster than peak 1 as the Fermi level keeps increasing. This observed behavior will be explained later by a circuit model.\n\nThe modulation can be quantified by a parameter M = Δλ/λ0, where λ0 is the resonance wavelength at EF = 0 eV and Δλ is the wavelength shift due to the change of EF. Figure 2a shows M1 = 20.1% and M2 = 25.5% for peak 1 and peak 2, respectively, when EF reaches 0.6 eV. The modulation range of resonances is much broader compared with previous works [19, 21, 22, 25,26,27,28]. Such a large modulation at a low EF is highly desirable for many applications. Separate calculations show that the absorption peaks blueshift with decreasing thickness of the spacer (Additional file 1). Thus, we can optimize the thickness to set a suitable start point of modulation. In addition, the optical response of the proposed metamaterial is polarization-independent as shown in Fig. 2b. The absorption spectrum keeps unchanged when the polarization angle φ varies from 0 to 90°, owing to the symmetry of the design.\n\nThe mechanism of perfect absorptions is clearly illustrated by the field distributions at the resonances. Because of the well-known metal-insulator-metal (MIM) structure [3, 32, 36,37,38] shown in Fig. 1, localized SPPs are stimulated to form compact magnetic resonances in each patch. Figure 3a and b demonstrate the normalized magnetic field |H|2 in the graphene layer for EF = 0.2 eV at the resonance wavelengths of λ1 = 11.8 μm and λ2 = 12.46 μm, respectively. Since the SPPs are strongly localized, two subunits can work independently. However, due to the narrow width of the splitting slot inside each subunit, the resonances of the four identicals are actually coupled to each other. And this coupling tremendously increases the electric field inside the slot, as shown in Fig. 3c and d. Only the E fields in the y-direction slot are obvious here because the incident light is in the x polarization. The intensity of the E field enhanced by the resonance coupling is four orders of magnitude larger than that of the incident light Einc. In contrast, the most intensified fields used for modulation in previous work are at the patch edges. Figure 3e and f show the sharp comparison of the enhancements between the slots and edges along the white line in Fig. 3c and d, respectively.\n\nSuch field distributions well explain the reason why the modulation is so great in our proposal. Based on a perturbation theory, the graphene-induced shifting of resonance can be evaluated as Δω = − gS|Es|2dS/W0 . Here, |Es|2is the intensity of the electric field in the graphene layer, W0 is the stored energy, and S denotes the area covered by the graphene. The spectral shift of the resonance (Re(Δω)) is decided by the imaginary part of σg, which is much greater than its real part in the mid-infrared region [22, 28]. As clearly shown in the Fig. 3c–f, the enhancement of electric field inside the narrow slot is more than 10 times of that at the edges. As a result, the integral value is mainly contributed by the greatly enhanced E field in the patch slots, leading to a much bigger shift of the peaks than in previous cases which only possess the enhanced E fields at the metallic edges [21, 22, 25, 27, 28].\n\nAccording to the field distributions and above discussions, an LC circuit model is proposed to study the tuning behavior. As shown in Fig. 4a, Li and Ci (i = 1, 2) are, respectively, the inductance and capacitance for the patch Si in Fig. 1b. When the slot width a is very big and there is no graphene layer, we can ignore the effects induced by the slots and graphene. Then, Li and Ci can be decided by separate calculations through fitting with the resonant wavelength obtained in absorption spectra [37, 39, 40]. The results are L1 = 0.07 pH and C1 = 350 aF for subunit S1, while L2 = 0.075 pH and C2 = 380 aF for subunit S2. The slot-induced coupling effect inside each subunit can be described by a shunt capacitance Cc, which is found to decrease with the increasing slot width a. In our cases, Cc is 290 aF for a = 20 nm, and becomes 200 aF, 180 aF, and 135 aF with every increasing 10 nm of a. The resonance wavelength is obtained by letting the impedance of the circuit to be zero, i.e., $${\\lambda}_i^0=2\\pi {c}_0\\sqrt{L_i{\\mathrm{C}}_i^0}$$. Here, c0 is the light speed in vacuum, “i” refers to subunit Si, and $${C}_i^0={C}_i+{C}_c$$.\n\nThe two-dimensional graphene layer basically acts as an inductor. As shown in Fig. 3, the main contribution of the graphene layer comes from the slot position where the electric field is intensified. Since the slot width is much smaller than the operating wavelength and the wavelength of graphene plasmon, the quasi-static approximation is valid. The voltage V and the current I across the slot can be evaluated by V = aE andI = 2litg(σg − iωε0)E, where E is the electric field in the graphene layer. So, we can introduce an inductance Lg = − 1/ω Im(V/I) , which describes the contribution of the graphene layer and is found to be\n\n$${L}_{\\mathrm{g}}=\\frac{a}{2{l}_i{\\omega}^2{\\varepsilon}_0\\left|\\operatorname{Re}\\left({\\varepsilon}_{\\mathrm{g}}\\right)\\right|{t}_{\\mathrm{g}}}\\kern0.5em \\left(i=1,2\\right)$$\n(3)\n\nThis inductor serves as a parallel element shown in Fig. 4a. As a result, the total inductance of one patch is obtained by $$1/{L}_i^{\\prime }=1/{L}_i+1/{L}_{\\mathrm{g}}$$. The final resonance wavelength of each subunit, with the graphene layer, becomes\n\n$${\\lambda}_i^{\\prime }=2\\pi {c}_0\\sqrt{L_i^{\\prime }{\\mathrm{C}}_i^0}\\kern0.5em \\left(i=1,2\\right)$$\n(4)\n\nBecause each subunit works independently, the total impedance of the metamaterial can be obtained from the parallel connection of the impedances of the two subunits.\n\nThis LC model predicts a blueshift of the resonance with increasing EF. Deduced from Eqs. (1) and (2), we get a larger value of |Re(εg)| for the graphene at a higher EF, which gives a smaller Lg in Eq. (3). Because of the parallel connection of the inductors, the final inductor $${L}_i^{\\prime }$$ becomes smaller, leading to a shorter wavelength of resonance in Eq. (4). The calculated result is summarized in Fig. 4b, showing a good agreement with the resonant wavelength obtained by the FDTD simulations. Small deviation is seen because our LC model ignores the contribution of weak fields at the edges of each patch (Fig. 3c–f). The LC model also shows how the geometric parameters influence the blueshift of the resonance. Differentiating Eq. (4), we have $$\\partial {\\lambda}_i^{\\prime }/\\partial {L}_i^{\\prime}\\propto 1/\\sqrt{L_i^{\\prime }}$$. It is obvious that a small value of $$\\sqrt{L_i^{\\prime }}$$ is favored to increase the sensitivity of this blueshift. Because the inductors are parallelly connected and Li is fixed, a small value of the total inductance $${L}_i^{\\prime }$$ means a small value of the graphene inductance Lg. In order to increase the tuning range, the slot width a should be small and the patch size l be large, according to Eq. (3). Figure 4c shows that the blueshift of resonance at EF = 0.4 eV increases from around 6 to 15%, when the slot width inside S1 decreases from 50 to 20 nm. On the other hand, if we fix the slot width at a = 20 nm, the resonance increases from 15 to 22% with patch size changing from 1.5 to 1.8 μm as shown in Fig. 4d. The good agreement with the FDTD simulations demonstrates that such a simple circuit model is an efficient method for studying related metamaterials devices.\n\n## Conclusions\n\nIn conclusion, we have designed a polarization-independent, broadband metamaterial absorber with a large range of modulation. For both resonances, the tuning range reach up to 20.1% and 25.5% of the central wavelength when EF increases from 0 to 0.6 eV. Such a large modulation comes from the graphene-light interaction tremendously enhanced by the coupled-resonances inside the cross-shaped slot of each metallic patch. This effect is well described by a graphene-introduced inductor in the LC model. Such a simple model predicts the modulation behavior under different geometric parameters, and the results agree well with the FDTD simulations. Our proposal is beneficial to potential applications such as optical communication, sensing, and thermal imaging.\n\n## Abbreviations\n\nE F :\n\nFermi level\n\nFDTD:\n\nFinite-different time-domain\n\nMIM:\n\nMetal-insulator-metal\n\nPM:\n\nPlasmonic metamaterial\n\nZnS:\n\nZinc sulfide\n\n## References\n\n1. 1.\n\nZhu H, Yi F, Cubukcu E (2016) Plasmonic metamaterial absorber for broadband manipulation of mechanical resonances. Nat Photonics 10(11):709\n\n2. 2.\n\nNdukaife JC, Shalaev VM, Boltasseva A (2016) Plasmonics—turning loss into gain. Science 351:334–335\n\n3. 3.\n\nXie Q, Dong G, Wang B-X, Huang W-Q (2018) Design of quad-band terahertz metamaterial absorber using a perforated rectangular resonator for sensing applications. Nanoscale Res Lett 13:137\n\n4. 4.\n\nWang H, Chen Q, Wen L, Song S, Hu X et al (2015) Titanium-nitride-based integrated plasmonic absorber/emitter for solar thermophotovoltaic application. Photonics Research 3:329–334\n\n5. 5.\n\nTittl A, Michel AKU, Schäferling M, Yin X, Gholipour B et al (2015) A switchable mid-infrared plasmonic perfect absorber with multispectral thermal imaging capability. Adv Mater 27:4597–4603\n\n6. 6.\n\nLi H, Qin M, Wang L, Zhai X, Ren R et al (2017) Total absorption of light in monolayer transition-metal dichalcogenides by critical coupling. Opt Express 25:31612–31621\n\n7. 7.\n\nLi H, Ren Y, Hu J, Qin M, Wang L (2018) Wavelength-selective Wide-angle Light Absorption Enhancement in Monolayers of Transition-Metal Dichalcogenides. J Lightwave Technol. 36(16):3236–3241\n\n8. 8.\n\nYang A, Yang K, Yu H, Tan X, Li J et al (2016) Piezoelectric tuning of narrowband perfect plasmonic absorbers via an optomechanic cavity. Opt Lett 41:2803–2806\n\n9. 9.\n\nCarrillo SG-C, Nash GR, Hayat H, Cryan MJ, Klemm M et al (2016) Design of practicable phase-change metadevices for near-infrared absorber and modulator applications. Opt Express 24:13563–13573\n\n10. 10.\n\nZhu Z, Evans PG, Haglund RF Jr, Valentine JG (2017) Dynamically reconfigurable metadevice employing nanostructured phase-change materials. Nano Lett 17:4881–4885\n\n11. 11.\n\nSong Z, Wang Z, Wei M (2019) Broadband tunable absorber for terahertz waves based on isotropic silicon metasurfaces. Mater Lett 234:138–141\n\n12. 12.\n\nSong Z, Wang K, Li J, Liu QH (2018) Broadband tunable terahertz absorber based on vanadium dioxide metamaterials. Opt Express 26:7148–7154\n\n13. 13.\n\nChu Q, Song Z, Liu QH (2018) Omnidirectional tunable terahertz analog of electromagnetically induced transparency realized by isotropic vanadium dioxide metasurfaces. Appl Phys Express 11:082203\n\n14. 14.\n\nGuo Z, Nie X, Shen F, Zhou H, Zhou Q et al (2018) Actively tunable terahertz switches based on subwavelength graphene waveguide. Nanomaterials 8:665\n\n15. 15.\n\nLuo L, Wang K, Ge C, Guo K, Shen F et al (2017) Actively controllable terahertz switches with graphene-based nongroove gratings. Photonics Research 5:604–611\n\n16. 16.\n\nEmani NK, Chung T-F, Kildishev AV, Shalaev VM, Chen YP et al (2013) Electrical modulation of Fano resonance in plasmonic nanostructures using graphene. Nano Lett 14:78–82\n\n17. 17.\n\nHe X, Zhao Z-Y, Shi W (2015) Graphene-supported tunable near-IR metamaterials. Opt Lett 40:178–181\n\n18. 18.\n\nYu R, Pruneri V, García de Abajo FJ (2015) Resonant visible light modulation with graphene. ACS Photonics 2:550–558\n\n19. 19.\n\nZeng B, Huang Z, Singh A, Yao Y, Azad AK et al (2018) Hybrid graphene metasurfaces for high-speed mid-infrared light modulation and single-pixel imaging. Light Sci Appl 7:51\n\n20. 20.\n\nLi H, Wang L, Liu J, Huang Z, Sun B et al (2013) Investigation of the graphene based planar plasmonic filters. Appl Phys Lett 103:211104\n\n21. 21.\n\nYao Y, Kats MA, Shankar R, Song Y, Kong J et al (2014) Wide wavelength tuning of optical antennas on graphene with nanosecond response time. Nano Lett 14:214–219\n\n22. 22.\n\nDabidian N, Kholmanov I, Khanikaev AB, Tatar K, Trendafilov S et al (2015) Electrical switching of infrared light using graphene integration with plasmonic Fano resonant metasurfaces. ACS Photonics 2:216–227\n\n23. 23.\n\nChen Y, Yao J, Song Z, Ye L, Cai G et al (2016) Independent tuning of double plasmonic waves in a free-standing graphene-spacer-grating-spacer-graphene hybrid slab. Opt Express 24:16961–16972\n\n24. 24.\n\nLi H, Ji C, Ren Y, Hu J, Qin M et al (2019) Investigation of multiband plasmonic metamaterial perfect absorbers based on graphene ribbons by the phase-coupled method. Carbon 141:481–487\n\n25. 25.\n\nMousavi SH, Kholmanov I, Alici KB, Purtseladze D, Arju N et al (2013) Inductive tuning of Fano-resonant metasurfaces using plasmonic response of graphene in the mid-infrared. Nano Lett 13:1111–1117\n\n26. 26.\n\nLiao Y-L, Zhao Y (2017) Graphene-based tunable ultra-narrowband mid-infrared TE-polarization absorber. Opt Express 25:32080–32089\n\n27. 27.\n\nZhang YP, Li TT, Chen Q, Zhang HY, O'Hara JF et al (2015) Independently tunable dual-band perfect absorber based on graphene at mid-infrared frequencies. Sci Rep 5:18463\n\n28. 28.\n\nVasić B, Gajić R (2013) Graphene induced spectral tuning of metamaterial absorbers at mid-infrared frequencies. Appl Phys Lett 103:261111\n\n29. 29.\n\nRodrigo D, Limaj O, Janner D, Etezadi D, De Abajo FJG et al (2015) Mid-infrared plasmonic biosensing with graphene. Science 349:165–168\n\n30. 30.\n\nVollmer M, Mollmann, KP (2010) Infrared Thermal Imaging|Fundamentals, Research and Applications. Wiley-VCH, Weinheim\n\n31. 31.\n\nGuo Q, Yu R, Li C, Yuan S, Deng B, et al. (2018) Efficient electrical detection of mid-infrared graphene plasmons at room temperature. Nat Mater 17:986–992\n\n32. 32.\n\nLiu N, Mesch M, Weiss T, Hentschel M, Giessen H (2010) Infrared perfect absorber and its application As plasmonic sensor. Nano Lett 10:2342–2348\n\n33. 33.\n\nPalik ED (1985) Handbook of optical constants of solids. Academic Press, New York\n\n34. 34.\n\nHanson GW (2008) Dyadic Green’s functions and guided surface waves for a surface conductivity model of graphene. J Appl Phys 103:064302\n\n35. 35.\n\nGusynin V, Sharapov S, Carbotte JP (2006) Magneto-optical conductivity in graphene. J Phys-Condens Mat 19(2):026222\n\n36. 36.\n\nWu D, Li R, Liu Y, Yu Z, Yu L et al (2017) Ultra-narrow band perfect absorber and its application as plasmonic sensor in the visible region. Nanoscale Res Lett 12:427\n\n37. 37.\n\nXiao D, Tao K, Wang Q (2016) Ultrabroadband mid-infrared light absorption based on a multi-cavity plasmonic metamaterial array. Plasmonics 11:389–394\n\n38. 38.\n\nXiao D, Tao K (2015) Ultra-compact metamaterial absorber for multiband light absorption at mid-infrared frequencies. Appl Phys Express 8:102001\n\n39. 39.\n\nMatsuno Y, Sakurai A (2017) Perfect infrared absorber and emitter based on a large-area metasurface. Optical Materials Express 7:618–626\n\n40. 40.\n\nTassin P, Koschny T, Kafesaki M, Soukoulis CM (2012) A comparison of graphene, superconductors and metals as conductors for metamaterials and plasmonics. Nat Photonics 6:259\n\n41. 41.\n\nYao Y, Kats MA, Genevet P, Yu N, Song Y et al (2013) Broad electrical tuning of graphene-loaded plasmonic antennas. Nano Lett 13:1257–1264\n\n## Acknowledgements\n\nNot applicable.\n\n### Funding\n\nThis work was supported by the Chinese Natural Science Foundation (Grant Nos. 61107049, 11734012), Basic Research Program of Shenzhen (JCYJ20170302151033006), Natural Science Foundation of Guangdong Province (2017A030310131).\n\n### Availability of Data and Materials\n\nAll date can be provided on a suitable request.\n\n## Author information\n\nDX conducted the simulations and processed the data and figures. All authors participated in discussions. KYT, DX, ZBOY, QL, and LL prepared the manuscript. KYT supervised the whole work. All authors read and approved the final manuscript.\n\nCorrespondence to Keyu Tao.\n\n## Ethics declarations\n\n### Competing Interests\n\nThe authors declare that they have no competing interests.\n\n### Publisher’s Note\n\nSpringer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.",
null,
""
] | [
null,
"https://nanoscalereslett.springeropen.com/track/article/10.1186/s11671-019-2852-y",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8614996,"math_prob":0.9435042,"size":22997,"snap":"2019-35-2019-39","text_gpt3_token_len":5930,"char_repetition_ratio":0.13051799,"word_repetition_ratio":0.0055524707,"special_character_ratio":0.25512025,"punctuation_ratio":0.1395661,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97519606,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-15T06:22:33Z\",\"WARC-Record-ID\":\"<urn:uuid:840fa085-0e77-4ff8-a5c9-69bca5464a0c>\",\"Content-Length\":\"238306\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5159fb00-dbc1-4b79-b9b0-1d7252100a5c>\",\"WARC-Concurrent-To\":\"<urn:uuid:e3e40a91-f050-4498-87ea-f81d21902f01>\",\"WARC-IP-Address\":\"151.101.200.95\",\"WARC-Target-URI\":\"https://nanoscalereslett.springeropen.com/articles/10.1186/s11671-019-2852-y\",\"WARC-Payload-Digest\":\"sha1:YSP4BSGFBO3X224CZUH7FILWKKNEITVN\",\"WARC-Block-Digest\":\"sha1:IELVC2KDL24TUC2TP7RYD24JVK57PGOL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514570740.10_warc_CC-MAIN-20190915052433-20190915074433-00365.warc.gz\"}"} |
https://examupdates.in/gate-exam-pattern/ | [
"# GATE Exam Pattern 2020 – Type Of Questions, Marks Weightage & Paper Model\n\nGATE Exam Pattern 2020 – Changes in GATE Syllabus are explained here. GATE Exam Structure has not changed in this year. Before going to apply for GATE 2020, first know what is the pattern of GATE 2020 Exam. GATE (Graduate Aptitude test in Engineering) is one of the popular entrance test of the country.Candidates who are willing to study MTech/ME/Ph.D from different institutions in India should write the entrance test. Students planning for this test has to know the exam pattern that is the duration of the exam, maximum marks obtained,syllabus and topics to be covered then the gaining of marks is easy while knowing the pattern of the exam.So, below are the details about the GATE entrance exam. In this exam the questions will be based on the aptitude and on the core subjects.\n\nImportant Articles :\n\n About GATE 2020 GATE Notification 2020 GATE Eligibility Criteria GATE Question papers GATE Answer Key GATE Reference Books GATE Exam Pattern GATE Online Application 2020 GATE Syllabus\n\n## GATE Exam Pattern 2020 – Changes in GATE Exam Structure\n\nGATE 2018 will be conducted on 23 subjects (papers). Below given Table shows the list of papers and paper codes for GATE 2018. A candidate is allowed to appear in ONLY ONE paper in any ONE SESSION.\n\nThe syllabus for each of the papers is given on GATE website www.gate.iitg.ac.in. Making a choice of the appropriate paper in the GATE application is the responsibility of the candidate.\n\nSome guidelines in this respect are suggested below.\n\nCandidates are expected to appear in a paper appropriate to the discipline of their qualifying degree. However, candidates are free to choose any one of the GATE 2020 papers as per their admission or employment plan, while keeping in mind the eligibility criteria for the institutions in which they wish to seek admission/employment. For more details regarding the admission criteria in any particular institute, the candidate is advised to refer to the website of that institute.\n\nCandidates must learn about the paper code of their choice, as this information is essential during making the application, as well as, during the examination. As the candidates are permitted to appear in ONLY ONE of the 23 papers of the GATE 2020, they should make their choice (of the paper), with due care.\n\nAfter submission of application, any change of paper is NOT permitted.\n\n## Pattern of Questions\n\nGATE 2020 would contain questions of two different types in all the papers:\n\n(i) Multiple Choice Questions (MCQ) carrying 1 or 2 marks each in all the papers and sections. These questions are objective in nature, and each will have a choice of four answers, out of which the candidate has to select (mark) the correct answer.\n\n### Negative Marking for Wrong Answers\n\nFor a wrong answer chosen in a MCQ, there will be negative marking. For 1-mark MCQ, 1/3 mark will be deducted for a wrong answer. Likewise, for 2-mark MCQ, 2/3 mark will be deducted for a wrong answer.\n\n(ii) Numerical Answer Type (NAT) Questions carrying 1 or 2 marks each in all the papers and sections. For these questions, the answer is a signed real number, which needs to be entered by the candidate using the virtual numeric keypad on the monitor (keyboard of the computer will be disabled). No choices will be shown for these type of questions. The answer can be a number such as 10 or -10 (an integer only). The answer may be in decimals as well, for example, 10.1 (one decimal) or 10.01 (two decimals) or -10.001 (three decimals). These questions will be mentioned with, up to which decimal places, the candidates need to make an answer. Also, an appropriate range will be considered while evaluating the numerical answer type questions so that the candidate is not penalized due to the usual round-off errors. Wherever required and possible, it is better to give NAT answer up to a maximum of three decimal places.\n\nThere is NO negative marking for a wrong answer in NAT questions.\n\n## Marks Weightage in GATE Paper\n\nIn all the papers, there will be a total of 65 questions carrying 100 marks, out of which 10 questions carrying a total of 15 marks will be on General Aptitude (GA), which is intended to test the Language and Analytical Skills.\n\nIn the papers bearing the codes AE, AG, BT, CE, CH, CS, EC, EE, IN, ME, MN, MT, PE, PI, TF and XE, the Engineering Mathematics will carry around 15% of the total marks, the General Aptitude section will carry 15% of the total marks and the remaining 70% of the total marks is devoted to the subject of the paper.\n\nIn the papers bearing the codes AR, CY, EY, GG, MA, PH and XL, the General Aptitude section will carry 15% of the total marks and the remaining 85% of the total marks is devoted to the subject of the paper.\n\n## Design of Questions\n\nThe questions in a paper may be designed to test the following abilities:\n\n(i) Recall: These are based on facts, principles, formulae or laws in the discipline of the paper. The candidate is expected to be able to obtain the answer either from his/her memory of the subject or at most from a one-line computation.\n\nExample:\nQ. During machining, maximum heat is produced\n(A) in flank face\n(B) in rake face\n(C) in shear zone\n(D) due to friction between chip and tool\n\n(ii) Comprehension: These questions will test the candidate’s understanding of the basics of his/her field, by requiring him/her to draw simple conclusions from fundamental ideas.\n\nExample\nQ. A DC motor requires a starter in order to\n\n(A) develop a starting torque\n(B) compensate for auxiliary field ampere turns\n(C) limit armature current at starting\n(D) provide regenerative braking\n\n(iii) Application: In these questions, the candidate is expected to apply his/her knowledge either through computation or by logical reasoning.\n\nExample\n\nThe sequent depth ratio of a hydraulic jump in a rectangular channel is 16.48. The Froude number at the beginning of the jump is:\n\n(A) 5.0 (B) 8.0 (C) 10.0 (D) 12.0\n\nThe questions based on the above logics may be a mix of single standalone statement/phrase/data type questions, combination of option codes type questions or match items type questions.\n\n(iv) Analysis and Synthesis\n\nIn these questions, the candidate is presented with data, diagrams, images, etc. that require analysis before a question can be answered. A Synthesis question might require the candidate to compare two or more pieces of information. Questions in this category could, for example, involve candidates in recognizing unstated assumptions, or separating useful information from irrelevant information.\n\n## Marking Scheme – Marks and Questions Distribution\n\nGeneral Aptitude (GA) Questions\n\nIn all papers, GA questions carry a total of 15 marks. The GA section includes 5 questions carrying 1-mark each (sub-total 5 marks) and 5 questions carrying 2-marks each (subtotal 10 marks).\n\nQuestion Papers other than GG, XE and XL\n\nThese papers would contain 25 questions carrying 1-mark each (sub-total 25 marks) and 30 questions carrying 2-marks each (sub-total 60 marks) consisting of both the MCQ and NAT Questions.\n\nGG (Geology and Geophysics) Paper\n\nApart from the General Aptitude (GA) section, the GG question paper consists of two parts: Part A and Part B. Part A is compulsory for all the candidates. Part B contains two sections: Section 1 (Geology) and Section 2 (Geophysics). Candidates will have to attempt questions in Part A and questions in either Section 1 or Section 2 of Part B.\n\nPart A consists of 25 questions carrying 1-mark each (sub-total 25 marks and some of these may be numerical answer type questions). Either section of Part B (Section 1 and Section 2) consists of 30 questions carrying 2-marks each (sub-total 60 marks and some of these may be numerical answer type questions).\n\nXE Paper (Engineering Sciences)\n\nA candidate appearing in the XE paper has to answer the following:\n\n• GA – General Aptitude carrying a total of 15 marks.\n• Section A – Engineering Mathematics (Compulsory): This section contains 11 questions carrying a total of 15 marks: 7 questions carrying 1-mark each (sub-total 7 marks), and 4 questions carrying 2-marks each (sub-total 8 marks). Some questions may be of numerical answer type.\n• Any two of XE Sections B to H: The choice of two sections from B to H can be made during the examination after viewing the questions. Only TWO optional sections can be answered at a time. A candidate wishing to change midway of the examination to another optional section must first choose to deselect one of the previously chosen optional sections (B to H). Each of the optional sections of the XE paper (Sections B through H) contains 22 questions carrying a total of 35 marks: 9 questions carrying 1-mark each (sub-total 9 marks) and 13 questions carrying 2-marks each (sub-total 26 marks). Some questions may be of numerical answer type.\n\nXL Paper (Life Sciences)\n\nA candidate appearing in the XL paper has to answer the following:\n\n• GA – General Aptitude carrying a total of 15 marks.\n• Section P– Chemistry (Compulsory): This section contains 15 questions carrying a total of 25 marks: 5 questions carrying 1-mark each (sub-total 5 marks) and 10 questions carrying 2-marks each (sub-total 20 marks). Some questions may be of numerical answer type.\n• Any two of XL Sections Q to U: The choice of two sections from Q to U can be made during the examination after viewing the questions. Only TWO optional sections can be answered at a time. A candidate wishing to change midway of the examination to another optional section must first choose to deselect one of the previously chosen optional sections (Q to U). Each of the optional sections of the XL paper (Sections Q through U) contains 20 questions carrying a total of 30 marks: 10 questions carrying 1-mark each (sub-total 10 marks) and 10 questions carrying 2-marks each (sub-total 20 marks). Some questions may be of numerical answer type.\n\n## Types of questions given in GATE 2020:\n\nIn this test there are two types of questions\n\n1. Multiple choice questions(MCQs):\n• These questions carry 1 or 2 marks\n• These are objective type questions and each question has four options.\n\n• These type of questions should not have options.\n• Each question carry either 1 or 2 marks.\n• The answer for these questions should be a real number. So, the candidate should enter the real value.\n• The answer can be either in decimal value,integer or numerical number.\n• 25 to 40 marks are awarded to NAT questions.\n\n## Exam Pattern of GATE 2020:\n\n• Duration of the exam is 3 hours.\n• Number of questions asked are 65.\n• Total marks are 100.\n• For 1 mark MCQs, if the answer is incorrect then 1/3 mark will be reduced.\n• While in 2 mark MCQs, the reduction of mark is 2/3 for incorrect answer.\n• In NAT there is no negative marking.\n• Zero marking is given for the questions that are not attempted.\n• Each question paper consists of General Aptitude(GA), Engineering Mathematics(it is chemistry for XL papers) and subject specific questions.\n• Candidate should do rough work on the scribble pad given during the exam.\n\n### Marking scheme:\n\n• For subject specific questions the total marks are 85 and the remaining 15 marks for general aptitude questions.\n• Candidate should attempt both the sections.\n• Engineering mathematics section will carry 15% of marks in total marks for the Papers with the codes AE, AG, BT, CE, CH, CS, EC, EE, IN, ME, MN, MT, PE, PI, TF and XE. The remaining 70% of marks are for the subject of the paper.\n• 85% of the total marks will be given to the subject of the paper for the codes AR, CY, EY, GG, MA, PH and XL.\n• Question papers excluding GG, XE and XL should have 25 questions carrying 1 mark and 30 questions of each carrying 2 marks.\n• 1 to 10 questions of general aptitude are given and marks carrying for each question is 1.5 then the total marks are 15.\n• 25 questions are given in technical section and 1 mark for each question then the total marks are 25.\n• 30 questions based on engineering mathematics and marks per each question is 2 marks then the total marks are 60.\n\n### Modified section in GATE 2020:\n\nAs section H, Oceanic and atmospheric unit is added. With the new section, Engineering paper has eight sections as A to H."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8937774,"math_prob":0.72092086,"size":12691,"snap":"2019-43-2019-47","text_gpt3_token_len":2904,"char_repetition_ratio":0.19586979,"word_repetition_ratio":0.15900736,"special_character_ratio":0.23087227,"punctuation_ratio":0.1021571,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9806369,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-22T18:55:09Z\",\"WARC-Record-ID\":\"<urn:uuid:9aab6b76-65f8-486a-9d8d-b0e6b8861e58>\",\"Content-Length\":\"102738\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5eed34e5-19c1-45d3-a704-9666c50f5978>\",\"WARC-Concurrent-To\":\"<urn:uuid:f711712e-b7f2-4958-9ea4-02beeb738d77>\",\"WARC-IP-Address\":\"104.24.99.174\",\"WARC-Target-URI\":\"https://examupdates.in/gate-exam-pattern/\",\"WARC-Payload-Digest\":\"sha1:KRRQXI52U45I5CXNL7QFWSGP2HIIG5FP\",\"WARC-Block-Digest\":\"sha1:TKHPRMYIBBX7SORNYE5T7AFVO4FR54ID\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496671411.14_warc_CC-MAIN-20191122171140-20191122200140-00482.warc.gz\"}"} |
https://ask.sagemath.org/question/49881/lie-bracket-of-derivations-over-polynomial-ring/?answer=49905 | [
"# Lie bracket of derivations over polynomial ring\n\nI want to take the Lie bracket of derivations defined for an arbitrary polynomial ring. Using the notation for injecting variables into the global scope:\n\nE.<x0,x1> = QQ[]\nM = E.derivation_module()\nf=(x1*M.gens())\ng=x0*M.gens()\nf.bracket(g)\n\n\ngives -x0*d/dx0 + x1*d/dx. But I want to be able to construct vector fields programmatically for an arbitrary number of x0, x1, x2, ..., xn so I tried the following:\n\nE = QQ[['x%i'%i for i in range(2)]]\nE.inject_variables()\nM = E.derivation_module()\nf=(x1*M.gens())\ng=x0*M.gens()\nf.bracket(g)\n\n\nwhich fails to take the Lie bracket with TypeError: unable to convert x1 to a rational (which causes another error TypeError: Unable to coerce into background ring.) ... which looks a bit like something is not right? or is this just not a permissible way to construct derivations in sagemath? or is the only way to do this using SageManifolds?\n\nE = EuclideanSpace(2, coordinates='Cartesian', symbols='x0 x1')\nU = E.default_chart()\nf = U*U.frame()\ng = U*U.frame()\nf.bracket(g).display()\n\n\ngives -x0 e_x0 + x1 e_x1\n\nedit retag close merge delete\n\nSort by » oldest newest most voted",
null,
"You can do it like this:\n\nE = PolynomialRing(QQ, 2, names='x')\nx = E.gens()\nM = E.derivation_module()\nddx = M.gens()\nf=x*ddx\ng=x*ddx\nf.bracket(g)\n\n\nOutput:\n\n-x0*d/dx0 + x1*d/dx1\n\n\nYou can pass names a list if you want to be more picky (and then you can omit the number of variables, 2 above, because it will be the length of the list).\n\nmore"
] | [
null,
"https://www.gravatar.com/avatar/dab2f5dc05d812841ee7e10d04e29e7c",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7311098,"math_prob":0.9753235,"size":1115,"snap":"2021-43-2021-49","text_gpt3_token_len":325,"char_repetition_ratio":0.10711071,"word_repetition_ratio":0.036363635,"special_character_ratio":0.30403587,"punctuation_ratio":0.15767635,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9951578,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-06T17:23:51Z\",\"WARC-Record-ID\":\"<urn:uuid:a2b8c583-fd11-44d9-832b-69947616bf11>\",\"Content-Length\":\"53795\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:558a95f9-109b-41e1-87b7-25ee31c71a83>\",\"WARC-Concurrent-To\":\"<urn:uuid:2ba78b96-b684-4c16-b571-b2eebb084438>\",\"WARC-IP-Address\":\"194.254.163.53\",\"WARC-Target-URI\":\"https://ask.sagemath.org/question/49881/lie-bracket-of-derivations-over-polynomial-ring/?answer=49905\",\"WARC-Payload-Digest\":\"sha1:32DF233REI4AYIAV5BOUVTSTZQTR3C7C\",\"WARC-Block-Digest\":\"sha1:JB2C4MOVLXWCEVCFMQJASX6MKJL6KHUU\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363309.86_warc_CC-MAIN-20211206163944-20211206193944-00491.warc.gz\"}"} |
https://su.shops-com.pp.ua/2048/1/probability-axioms.html | [
"",
null,
"# ⓘ Probability axioms\n\n## ⓘ Probability axioms\n\nProbabiliti P {\\displaystyle P} tina sababaraha kajadian E {\\displaystyle E} {\\displaystyle P}) is defined with respect to a \"universe\" or sample space Ω {\\displaystyle \\Omega } of all possible elementary events in such a way that P {\\displaystyle P} must satisfy the Kolmogorov axioms.\n\nAlternatively, a probability can be interpreted as a measure on a σ-algebra of subsets of the sample space, those subsets being the events, such that the méasure of the whole set equals 1. This property is important, since it gives rise to the natural concept of conditional probability. Every set A {\\displaystyle A} with non-zero probability defines another probability\n\nP B | A = P B ∩ A P A {\\displaystyle PB\\vert A={PB\\cap A \\over PA}}\n\non the space. This is usually réad as \"probability of B {\\displaystyle B} given A {\\displaystyle A} \". If the conditional probability of B {\\displaystyle B} given A {\\displaystyle A} is the same as the probability of B {\\displaystyle B}, then B {\\displaystyle B} and A {\\displaystyle A} are said to be independent.\n\nIn the case that the sample space is finite or countably infinite, a probability function can also be defined by its values on the elementary events { e 1 }, { e 2 }. {\\displaystyle \\{e_{1}\\},\\{e_{2}\\}.} where Ω = e 1, e 2. {\\displaystyle \\Omega ={e_{1},e_{2}.}}.\n\n## 1. Kolmogorov axioms\n\nThe following three axioms are known as the Kolmogorov axioms, after Andrey Kolmogorov who developed them.\n\nFirst axiom\n\nFor any set E {\\displaystyle E}, 0 ≤ P E ≤ 1 {\\displaystyle 0\\leq PE\\leq 1}.\n\nThat is, the probability of an event set is represented by a réal number between 0 and 1.\n\n### 1.1. Kolmogorov axioms Second axiom\n\nP Ω = 1 {\\displaystyle P\\Omega=1}\n\nThat is, the probability that some elementary event in the entire sample set will occur is 1. More specifically, there are no elementary events outside the sample set.\n\nThis is often overlooked in some mistaken probability calculations; if you cannot precisely define the whole sample set, then the probability of any subset cannot be defined either.\n\n### 1.2. Kolmogorov axioms Third axiom\n\nAny countable sequence of mutually disjoint events E 1, E 2. {\\displaystyle E_{1},E_{2}.} satisfies P E 1 ∪ E 2 ∪ ⋯ = ∑ P E i {\\displaystyle PE_{1}\\cup E_{2}\\cup \\cdots=\\sum PE_{i}}.\n\nThat is, the probability of an event set which is the union of other disjoint subsets is the sum of the probabilities of those subsets. This is called σ-additivity. If there is any overlap among the subsets this relation does not hold.\n\nFor an algebraic alternative to Kolmogorovs approach, see algebra of random variables.\n\n## 2. Lemmas in probability\n\nFrom the Kolmogorov axioms one can deduce other useful rules for calculating probabilities:\n\nP A ∪ B = P A + P B − P A ∩ B {\\displaystyle PA\\cup B=PA+PB-PA\\cap B}\n\nThat is, the probability that A or B will happen is the sum of the probabilities that A will happen and that B will happen, minus the probability that A and B will happen. This can be extended to the inclusion-exclusion principle.\n\nP Ω − E = 1 − P E {\\displaystyle P\\Omega -E=1-PE}\n\nThat is, the probability that any event will not happen is 1 minus the probability that it will.\n\nUsing conditional probability as defined above, it also follows immediately that\n\nP A ∩ B = P A ⋅ P B | A {\\displaystyle PA\\cap B=PA\\cdot PB\\vert A}\n\nThat is, the probability that A and B will happen is the probability that A will happen, times the probability that B will happen given that A happened; this relationship gives Bayes theorem. It then follows that A and B are independent if and only if\n\nP A ∩ B = P A ⋅ P B {\\displaystyle PA\\cap B=PA\\cdot PB}."
] | [
null,
"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mNkYAAAAAYAAjCB0C8AAAAASUVORK5CYII=",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8798459,"math_prob":0.9794041,"size":3609,"snap":"2021-43-2021-49","text_gpt3_token_len":965,"char_repetition_ratio":0.22024965,"word_repetition_ratio":0.06451613,"special_character_ratio":0.24909948,"punctuation_ratio":0.09090909,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999869,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-29T01:44:13Z\",\"WARC-Record-ID\":\"<urn:uuid:5146ee8b-1eb7-4386-9a03-3af9bba6447c>\",\"Content-Length\":\"37451\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6075ed6a-2154-4658-8998-5354083595a2>\",\"WARC-Concurrent-To\":\"<urn:uuid:7e8e7011-2729-470b-8b4c-b35b338080e4>\",\"WARC-IP-Address\":\"95.217.186.71\",\"WARC-Target-URI\":\"https://su.shops-com.pp.ua/2048/1/probability-axioms.html\",\"WARC-Payload-Digest\":\"sha1:RXGLLC3H56PHURLW4LANDMF2DQQRUIYF\",\"WARC-Block-Digest\":\"sha1:DHEYPLGSYLVKPXQGNXMAI7DKTK6BTLVV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358685.55_warc_CC-MAIN-20211129014336-20211129044336-00449.warc.gz\"}"} |
http://www.365yrb.com/laliga/index_236.html | [
"/ ()阅读()\n\n/ ()阅读()\n\n/ ()阅读()\n\n/ ()阅读()\n\n/ ()阅读()\n\n/ ()阅读()\n\n/ ()阅读()\n\n/ ()阅读()\n\n/ ()阅读()\n\n/ ()阅读()\n\n/ ()阅读()\n\n/ ()阅读()\n\n/ ()阅读()\n\n/ ()阅读()\n\n/ ()阅读()\n\n/ ()阅读()\n\n/ ()阅读()\n\n/ ()阅读()"
] | [
null
] | {"ft_lang_label":"__label__zh","ft_lang_prob":0.92324007,"math_prob":0.9988749,"size":1569,"snap":"2022-05-2022-21","text_gpt3_token_len":1959,"char_repetition_ratio":0.1968051,"word_repetition_ratio":0.0,"special_character_ratio":0.21669853,"punctuation_ratio":0.3127572,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999616,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-25T14:18:24Z\",\"WARC-Record-ID\":\"<urn:uuid:3be67ea4-5a9f-4eb1-bb61-97199e2d04de>\",\"Content-Length\":\"34105\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:662d2079-b30b-46e7-ac46-a23154bbfba7>\",\"WARC-Concurrent-To\":\"<urn:uuid:805583e1-3685-4d91-a7f4-6a473e1c2372>\",\"WARC-IP-Address\":\"101.36.117.191\",\"WARC-Target-URI\":\"http://www.365yrb.com/laliga/index_236.html\",\"WARC-Payload-Digest\":\"sha1:3QMOPPILUBJ4YCOSLDVHX75I25HR56TH\",\"WARC-Block-Digest\":\"sha1:5NNAVYJ53J6PYELKTM3MFFU2XJWIFJEZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662587158.57_warc_CC-MAIN-20220525120449-20220525150449-00407.warc.gz\"}"} |
http://bradleysaul.us/ | [
"# Bats and Public Health: An Emerging Concern",
null,
"Since researchers identified Horseshoe Bats as a likely reservoir for the Severe Acute Respiratory Syndrome (SARS) coronavirus, research on the connection between bats and emerging infectious diseases (EIDs) has increased. Bats have been known reservoirs for the rabies virus since the early 1900s when vampire bats caused a pandemic of rabies in South American cattle. The enormous health and economic costs of disease outbreaks such as SARS make identifying the sources of disease and developing policies to prevent outbreaks imperative. Beginning in April 2012, another coronavirus, Middle East Respiratory Syndrome (MERS) began affecting humans in Saudi Arabia. Like SARS, bats are again implicated as possible reservoirs for this new disease.\n\n201311_bats_public_health\n\n# Math is Beautiful\n\nHere’s a fun video showing the mathematics of different phenomena in equation, visualization, and realization. This is best viewed in full screen mode.\n\nvia FlowingData.\n\n# Battle for Bats\n\nBats are rapidly dying off across the eastern United States due to White Nose Syndrome. In less than 10 years, populations of certain bat species have gone from hundreds of thousands to near extinction.\n\n# Forget Big Data. MegaData is here.\n\nMegaData sounds better don’t you think? Or maybe Megalodata.\n\nYesterday, Dr. Jeremy Wu spoke at our Biostatistics seminar about “Statistics 2.0” and big data. I think his main point was to get people thinking about where statistics is going as a field.\n\nI had a couple of thoughts. First, rather than statistics version 2.0, we’re looking at statistics version 103.543. Statistics is well past version 2. Second, big mega data talk focuses on leveraging massive integrated datasets built for the most part by corporations and governments. This focus ignores the massive numbers of “small” datasets generated by individuals and small organizations. Not only are datasets getting larger, the tools to generate digital data are more widely available.\n\nMy question is: how can statisticians help the citizen, business owner, or community leader understand and make use of the “small” data they have?\n\n# UNIREP Anova Expected Mean Squares\n\n“It is hoped that the material here will be sufficiently illustrative to show what is involved generally, and also to enable the reader to decide whether he wants to be involved generally” (Crowder and Hand)\n\nWhen your statistics professor says to the class, “You should know how to derive these results (but you won’t be tested on this)” think to yourself: “I could do that.” Be satisfied at that point. Or be satisfied that there is likely to be one idiot in the class (like me) who takes the declaration seriously.\n\nIn today’s post, I will show how to derive the expected mean squares of the univariate repeated measures ANOVA model. I’ll start with the sum of squares for between groups. I’ll also end there, as I would like to keep my hair from turning gray faster than it already is.\n\nFirst, the preliminaries must be hashed out. Assume a linear model for observation $$Y_{hlj}$$ of individual $$h$$ $$(1, \\dots, r_l)$$ in group $$l$$ $$(1, \\dots, q)$$ at time (or repeated measure) $$j$$ $$(1, \\dots, n)$$ such that:\n\n$Y_{hlj} = \\mu + \\tau_l + \\gamma_j + (\\tau\\gamma)_{lj} + b_{hl} + e_{hlj}$\n\nThe parameters are:\n\n• $$\\mu$$ is the overall mean.\n• $$\\tau_l$$ is the deviation from $$\\mu$$ associated with group $$l$$.\n• $$\\gamma_j$$ is the deviation from $$\\mu$$ associated with time $$j$$.\n• $$(\\tau\\gamma)_{lj}$$ is the deviation associated with group $$l$$ at time $$j$$ (i.e., the interaction of time and group).\n• $$b_{hl}$$ is the random effect associated with unit $$h$$ unit in group $$l$$.\n• $$e_{hlj}$$ represents the within-unit sources of variation.\n\n• $$b_{hl} \\sim N(0, \\sigma^2_b)$$ and all independent.\n• $$e_{hlj} \\sim N(0, \\sigma^2_e)$$ and all independent.\n• $$b_{hl}$$ and $$e_{hlj}$$ are mutually independent.\n• To force identifiability of the parameter estimates, we make the these constraints:\n$$\\sum_{l=1}^q \\tau_l = 0$$, $$\\sum_{j=1}^n \\gamma_j = 0$$, $$\\sum_{l=1}^q (\\tau\\gamma)_{lj} = 0 = \\sum_{j=1}^n (\\tau\\gamma)_{lj}$$.\n\nThe distributional assumptions imply a compound symmetric covariance structure for $$\\mathbf{Y}_{hl}$$ . That is, $$\\mathbf{Y}_{hl} = \\sigma^2_b\\mathbf{J}_n + \\sigma^2_e\\mathbf{I}_n$$, where $$\\mathbf{J}_n$$ is a $$n \\times n$$ matrix of ones and $$\\mathbf{I}_n$$ is a $$n \\times n$$ identity matrix.\n\nDefine $$\\bar{Y}_{.l.} = \\sum_{h=1}^{r_l} \\sum_{j=1}^n Y_{hlj}$$ as the sample mean across groups and $$\\bar{Y}_{…} = \\sum_{l=1}^{q} \\sum_{h=1}^{r_l} \\sum_{j=1}^n Y_{hlj}$$ as the overall sample mean. Now we can get started.\n\n\\begin{aligned} E(MS_G) = & E\\left(\\frac{SS_G}{q-1}\\right) = \\frac{1}{q-1} E\\left[\\sum_{l=1}^qnr_l(\\bar{Y}_{.l.} – \\bar{Y}_{…})^2\\right] \\\\ = & \\frac{n}{q-1} \\sum_{l=1}^qr_lE\\left[(\\bar{Y}_{.l.} – \\bar{Y}_{…})^2\\right]\\\\ = & \\frac{n}{q-1} \\sum_{l=1}^qr_lE\\left[ \\left( \\sum_{h=1}^{r_l} \\sum_{j=1}^n (\\mu + \\tau_l + \\gamma_j + (\\tau\\gamma)_{lj} + b_{hl} + e_{hlj})/r_ln \\right.\\nonumber\\right.\\nonumber\\\\ & \\qquad \\left. \\left. {} – \\sum_{l=1}^{q} \\sum_{h=1}^{r_l} \\sum_{j=1}^n (\\mu + \\tau_l + \\gamma_j + (\\tau\\gamma)_{lj} + b_{hl} + e_{hlj})/mn\\right)^2\\right] \\\\ = & \\frac{n}{q-1} \\sum_{l=1}^qr_lE\\left[ \\left(\\tau_l + \\frac{1}{r_ln}\\sum_{h=1}^{r_l} \\sum_{j=1}^n b_{hl} + \\frac{1}{r_ln}\\sum_{h=1}^{r_l} \\sum_{j=1}^n e_{hlj} \\right.\\nonumber\\right.\\nonumber\\\\ & \\qquad \\left. \\left. {} – \\frac{1}{mn}\\sum_{l=1}^{q}\\sum_{h=1}^{r_l} \\sum_{j=1}^n b_{hl} – \\frac{1}{mn}\\sum_{l=1}^{q}\\sum_{h=1}^{r_l} \\sum_{j=1}^n e_{hlj}\\right)^2 \\right] \\end{aligned}\n\nTo see how $$\\mu$$, $$\\gamma_j$$, and $$(\\tau\\gamma)_{lj}$$ fall out, remember the constraints! For example, $\\sum_{h=1}^{r_l} \\sum_{j=1}^n \\frac{\\tau_l}{r_ln} – \\sum_{l=1}^{q} \\sum_{h=1}^{r_l} \\sum_{j=1}^n \\frac{\\tau_l}{mn} = \\tau_l – \\sum_{l=1}^{q} \\frac{r_l\\tau_l}{m} = \\tau_l – 0 = \\tau_l$.\n\nRecall the following. $$E[X^2] = E[X]^2 + V(X)$$. $$E(b_{hl}) = 0$$. $$E(e_{hlj}) = 0$$. $$E(\\tau_l) = \\tau_l$$. $$V(\\tau_l) = 0$$ (since it is a constant). $$b_{hl}$$ is independent of $$e_{hlj}$$, so $$Cov(b_{hl}, e_{hlj}) = 0$$.\n\n\\begin{aligned} E(MS_G) = & \\frac{n}{q-1} \\sum_{l=1}^qr_l\\left\\{ \\left[E\\left(\\tau_l + \\frac{1}{r_ln}\\sum_{h=1}^{r_l} \\sum_{j=1}^n b_{hl} + \\frac{1}{r_ln}\\sum_{h=1}^{r_l} \\sum_{j=1}^n e_{hlj} – \\frac{1}{mn}\\sum_{l=1}^{q}\\sum_{h=1}^{r_l} \\sum_{j=1}^n b_{hl} – \\frac{1}{mn}\\sum_{l=1}^{q}\\sum_{h=1}^{r_l} \\sum_{j=1}^n e_{hlj}\\right)\\right]^2 \\right.\\nonumber\\\\ & \\qquad \\qquad \\left. {} + V\\left(\\tau_l + \\frac{1}{r_ln}\\sum_{h=1}^{r_l} \\sum_{j=1}^n b_{hl} + \\frac{1}{r_ln}\\sum_{h=1}^{r_l} \\sum_{j=1}^n e_{hlj} – \\frac{1}{mn}\\sum_{l=1}^{q}\\sum_{h=1}^{r_l} \\sum_{j=1}^n b_{hl} – \\frac{1}{mn}\\sum_{l=1}^{q}\\sum_{h=1}^{r_l} \\sum_{j=1}^n e_{hlj}\\right) \\right\\} \\\\ = & \\frac{n}{q-1} \\sum_{l=1}^qr_l\\left\\{ \\tau_l^2 + V\\left(\\frac{1}{r_ln}\\sum_{h=1}^{r_l} \\sum_{j=1}^n b_{hl} – \\frac{1}{mn}\\sum_{l=1}^{q}\\sum_{h=1}^{r_l} \\sum_{j=1}^n b_{hl}\\right) \\right.\\nonumber \\\\ & \\qquad \\qquad \\qquad \\left.{} + V\\left(\\frac{1}{r_ln}\\sum_{h=1}^{r_l} \\sum_{j=1}^n e_{hlj} – \\frac{1}{mn}\\sum_{l=1}^{q}\\sum_{h=1}^{r_l} \\sum_{j=1}^n e_{hlj}\\right) \\right\\} \\end{aligned}\n\nLet’s take the variance terms one at a time.\n\n\\begin{aligned} & V\\left(\\frac{1}{r_ln}\\sum_{h=1}^{r_l} \\sum_{j=1}^n b_{hl} – \\frac{1}{mn}\\sum_{l=1}^{q}\\sum_{h=1}^{r_l} \\sum_{j=1}^n b_{hl}\\right) \\\\ = & V\\left(\\frac{1}{r_ln}\\sum_{h=1}^{r_l} \\sum_{j=1}^n b_{hl}\\right) + V\\left(\\frac{1}{mn}\\sum_{l=1}^{q}\\sum_{h=1}^{r_l} \\sum_{j=1}^n b_{hl}\\right) – 2Cov\\left( \\frac{1}{r_ln}\\sum_{h=1}^{r_l} \\sum_{j=1}^n b_{hl}, \\frac{1}{mn}\\sum_{l=1}^{q}\\sum_{h=1}^{r_l} \\sum_{j=1}^n b_{hl}\\right) \\\\ = & \\frac{1}{r_l^2}\\sum_{h=1}^{r_l} V(b_{hl}) + \\frac{1}{m^2}\\sum_{l=1}^{q}\\sum_{h=1}^{r_l} V(b_{hl}) – \\frac{2}{r_lm}Cov\\left( \\sum_{h=1}^{r_l} b_{hl}, \\sum_{l=1}^{q}\\sum_{h=1}^{r_l} b_{hl}\\right) \\\\ = & \\frac{1}{r_l^2}\\sum_{h=1}^{r_l} V(b_{hl}) + \\frac{1}{m^2}\\sum_{l=1}^{q}\\sum_{h=1}^{r_l} V(b_{hl}) – \\frac{2}{r_lm}V\\left( \\sum_{h=1}^{r_l} b_{hl}\\right) \\\\ = & \\frac{1}{r_l^2}\\sum_{h=1}^{r_l} V(b_{hl}) + \\frac{1}{m^2}\\sum_{l=1}^{q}\\sum_{h=1}^{r_l} V(b_{hl}) – \\frac{2}{r_lm}\\sum_{h=1}^{r_l} V(b_{hl}) \\\\ = & \\frac{1}{r_l}\\sigma_b^2 + \\frac{1}{m} \\sigma_b^2 – 2\\frac{1}{m}\\sigma_b^2 \\\\ = & \\sigma_b^2\\left(\\frac{1}{r_l} – \\frac{1}{m} \\right) \\end{aligned}\n\nThe fourth line in the above section follows from the assumption that the units are independent, thus $$Cov(b_{hl}, b_{hl})$$ = 0 where the indexes are not equal.\n\nFollowing a similar process, we can show:\n\n$V\\left(\\frac{1}{r_ln}\\sum_{h=1}^{r_l} \\sum_{j=1}^n e_{hlj} – \\frac{1}{mn}\\sum_{l=1}^{q}\\sum_{h=1}^{r_l} \\sum_{j=1}^n e_{hlj}\\right) = \\sigma_e^2\\left(\\frac{1}{r_ln} – \\frac{1}{mn} \\right)$\n\nNow, let’s go back to where we left off.\n\n\\begin{aligned} E(MS_G) & = \\frac{n}{q-1} \\sum_{l=1}^qr_l\\left\\{ \\tau_l^2 + V\\left(\\frac{1}{r_ln}\\sum_{h=1}^{r_l} \\sum_{j=1}^n b_{hl} – \\frac{1}{mn}\\sum_{l=1}^{q}\\sum_{h=1}^{r_l} \\sum_{j=1}^n b_{hl}\\right) \\right.\\nonumber \\\\ & \\qquad \\qquad \\qquad \\left.{} + V\\left(\\frac{1}{r_ln}\\sum_{h=1}^{r_l} \\sum_{j=1}^n e_{hlj} – \\frac{1}{mn}\\sum_{l=1}^{q}\\sum_{h=1}^{r_l} \\sum_{j=1}^n e_{hlj}\\right) \\right\\} \\\\ & = \\frac{n}{q-1} \\sum_{l=1}^qr_l \\tau_l^2 + \\frac{n\\sigma_b^2}{q-1} \\sum_{l=1}^qr_l \\left(\\frac{1}{r_l} – \\frac{1}{m} \\right) + \\frac{n\\sigma_e^2}{q-1} \\sum_{l=1}^qr_l \\left(\\frac{1}{r_ln} – \\frac{1}{mn} \\right) \\\\ & = \\frac{n}{q-1} \\sum_{l=1}^qr_l \\tau_l^2 + \\frac{n\\sigma_b^2}{q-1}\\left( \\sum_{l=1}^q 1 – \\frac{\\sum_{l=1}^q r_l}{m} \\right) + \\frac{\\sigma_e^2}{q-1} \\left( \\sum_{l=1}^q 1 – \\frac{\\sum_{l=1}^q r_l}{m} \\right) \\\\ &= \\frac{n}{q-1} \\sum_{l=1}^qr_l \\tau_l^2 + \\frac{n\\sigma_b^2}{q-1}\\left( q -1 \\right) + \\frac{\\sigma_e^2}{q-1} \\left(q-1 \\right) \\\\ & = \\frac{n}{q-1} \\sum_{l=1}^qr_l \\tau_l^2 + n\\sigma_b^2 + \\sigma_e^2 \\end{aligned}\n\nAnd there we are."
] | [
null,
"http://smithsonianscience.org/wordpress/wp-content/uploads/2011/02/batsmall-300x263.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.71453464,"math_prob":1.0000018,"size":10247,"snap":"2021-43-2021-49","text_gpt3_token_len":4055,"char_repetition_ratio":0.2496339,"word_repetition_ratio":0.12659235,"special_character_ratio":0.40431345,"punctuation_ratio":0.069292836,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000099,"pos_list":[0,1,2],"im_url_duplicate_count":[null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-03T15:46:07Z\",\"WARC-Record-ID\":\"<urn:uuid:bb5e3fca-9349-4222-961a-cf949f25a29e>\",\"Content-Length\":\"64334\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:667386ea-4c93-49df-8926-600441e94603>\",\"WARC-Concurrent-To\":\"<urn:uuid:fc039c27-9b69-4b2b-bc32-c9c873142cfa>\",\"WARC-IP-Address\":\"69.175.109.250\",\"WARC-Target-URI\":\"http://bradleysaul.us/\",\"WARC-Payload-Digest\":\"sha1:STRQGET6TV22XDCGYDQIHP2OJASAOIFL\",\"WARC-Block-Digest\":\"sha1:FAIS6BPDDZKJATGOPL43L7S4YQ24645M\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964362891.54_warc_CC-MAIN-20211203151849-20211203181849-00004.warc.gz\"}"} |
http://mathcentral.uregina.ca/QQ/database/QQ.09.06/h/alfredo1.html | [
"",
null,
"",
null,
"",
null,
"SEARCH HOME",
null,
"Math Central Quandaries & Queries",
null,
"",
null,
"Question from alfredo, a parent: i want to divide my land. It is north is 11.86 ft, the S is 110.00 ft, the W is 644.34 ft the E is 644.58 ft. which = to 1.606 acre. how many feet going from N to S would it be to make 1.5 acre",
null,
"Alfredo,\n\nI am somewhat confused by your question. I drew a rough diagram of your land (not to scale).",
null,
"If the north side and south side are parallel then this is a trapezoid and the area is the average of the lengths of the parallel sides times the distance between the parallel sides. In this case the average of the lengths of the parallel sides is\n\n(110.00 + 11.86)/2 = 60.93 feet.\n\nThe distance between the parallel sides is the perpendicular distance which less than the length of either side, so let's say it is 644 ft then the area is\n\n60.93",
null,
"644 = 39239 sq ft.\n\nAn acre is 43560 sq ft so this is less than an acre. Even if you take the distance between the parallel sides as 645 ft the area is still less than an acre. I don't see how that area can be 1.606 acres.\n\nPenny",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"Math Central is supported by the University of Regina and The Pacific Institute for the Mathematical Sciences."
] | [
null,
"http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif",
null,
"http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif",
null,
"http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif",
null,
"http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif",
null,
"http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif",
null,
"http://mathcentral.uregina.ca/lid/images/search.gif",
null,
"http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif",
null,
"http://mathcentral.uregina.ca/QQ/database/QQ.09.06/h/alfredo1.1.gif",
null,
"http://mathcentral.uregina.ca/images/multiply.gif",
null,
"http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif",
null,
"http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif",
null,
"http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif",
null,
"http://mathcentral.uregina.ca/lid/images/qqsponsors.gif",
null,
"http://mathcentral.uregina.ca/lid/images/mciconnotext.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.92739993,"math_prob":0.98412234,"size":837,"snap":"2020-24-2020-29","text_gpt3_token_len":191,"char_repetition_ratio":0.16806723,"word_repetition_ratio":0.08805031,"special_character_ratio":0.23536439,"punctuation_ratio":0.05952381,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9980436,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,2,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-06T17:20:28Z\",\"WARC-Record-ID\":\"<urn:uuid:03db9ce9-3256-4eed-a1d7-73ff2ef6a706>\",\"Content-Length\":\"7739\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:05a43a70-bb5c-4b41-8b72-bd1619d6e621>\",\"WARC-Concurrent-To\":\"<urn:uuid:6ef50ce8-deb3-4310-bc5c-3ca7fcb33434>\",\"WARC-IP-Address\":\"142.3.156.43\",\"WARC-Target-URI\":\"http://mathcentral.uregina.ca/QQ/database/QQ.09.06/h/alfredo1.html\",\"WARC-Payload-Digest\":\"sha1:ETMOWS4M3XAG744PWYCJ4VV74O477EPV\",\"WARC-Block-Digest\":\"sha1:4GCZY7MIX63V6D2WNTDKFZVLQVPY2V3V\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655881763.20_warc_CC-MAIN-20200706160424-20200706190424-00371.warc.gz\"}"} |
https://epjquantumtechnology.springeropen.com/articles/10.1140/epjqt/s40507-020-00082-8 | [
"# Niobium quarter-wave resonator with the optimized shape for quantum information systems\n\n## Abstract\n\nQuantum computers (QC), if realized, could disrupt many computationally intense fields of science. The building block element of a QC is a quantum bit (qubit). Qubits enable the use of quantum superposition and multi-state entanglement in QC calculations, allowing a QC to simultaneously perform millions of computations at once. However, quantum states stored in a qubit degrade with decreased quality factors and interactions with the environment. One technical solution to improve qubit lifetimes and network interactions is a circuit comprised of a Josephson junction-based qubit located inside of a high Q-factor superconducting 3D cavity.\n\nIt is known that niobium resonators can reach $$Q_{0}>10^{11}$$. However, existing cavity geometries are optimized for particle acceleration rather than hosting qubits. RadiaBeam Technologies, in collaboration with Argonne National Laboratory and The University of Chicago, has developed a niobium superconducting radio frequency quarter-wave resonant cavity (QWR) for quantum computation. A 6 GHz QWR was optimized to include tapering of the inner and outer conductors, a toroidal shape for the resonator shorting plane, and an inner conductor tip to reduce parasitic capacitance. In this paper, we present the results of the resonator design optimization, fabrication, processing, and testing.\n\n## Introduction\n\nNearly all areas of modern life are influenced by the incredible impact of computational capabilities. Quantum computers may make many computationally intense fields of science, such as cosmology, quantum field theory, particle interactions, and nuclear physics, tractable. The building block element of a QC is a quantum bit, which is a two-level quantum system. Qubits enable the use of quantum superposition and multi-state entanglement in QC calculations, allowing a QC to perform millions of quantum mechanical computations at once . Entanglement lets a QC change the state of multiple qubits simultaneously via adjusting the state stored in a single bit, enabling computational power scalability unachievable with traditional computers . These advantages are not just theoretical: it was recently reported that quantum supremacy was experimentally demonstrated for the first time using Google’s superconducting Sycamore processor .\n\nOne of the greatest challenges in building QCs is controlling quantum decoherence , rapid degradation of the qubit’s quantum state due to interactions with the environment and the integrated control channels. Because of decoherence, the relevant quantum behavior is lost, and time-consuming tasks may render some quantum algorithms inoperable .\n\nOne approach to building a quantum computer is using superconducting RF oscillator circuits with Josephson junctions for anharmonic dynamics . Planar chips allow integrating a large number of qubits in topologies that increase the lifetimes of quantum states by applying error-correction techniques . However, the low intrinsic Q-factors of microstrip resonators limits the coherence times of the qubits while planar geometries require non-trivial solutions for coherence-preserving coupling to the chip [9, 10]. Another solution to improve qubit lifetimes and allow for network interactions is to couple the Josephson junction through a high Q-factor superconducting 3D cavity . The quantum state excited in the Josephson junction is protected from environmental noise and loss via the encoding qubit states in the high-Q resonant cavity modes .\n\nThe coherence time is closely related to the Q-factor of the resonator and its energy dissipation. Current qubit 3D resonators can achieve $$Q\\sim 10^{8}$$ with coherence times of several milliseconds . On the other hand, niobium resonators used in particle accelerators reach quality factors of 1011 , potentially enabling storage times approaching seconds if adopted for operation in a QC.\n\nTo pursue this opportunity, we developed a 3D superconducting RF (SRF) quarter-wave resonator (QWR) with a shape optimized for high-Q operation in the quantum regime (see Fig. 1). The QWR is an attractive choice, due to the simplicity of its integration with the Josephson junction, which can be placed near the central conductor providing a high coupling strength, and for its feasibility for scaling to multi-qubit systems. The optimization of the cavity includes the introduction of inner and outer conductor tapering, toroidal shaping of the resonator ends and optimization of the inner conductor tip to reduce parasitic capacitances. These geometrical features provide a better Q-factor and thus a longer lifetime for quantum memory. In addition, we performed a series of numerical simulations and optimizations to decrease energy dissipation in the cavity due to both surface currents and dielectric losses in the niobium oxide layer.\n\nIn SRF cavities the unloaded Q-factor can be defined as $$Q_{0} = \\frac{G}{R_{s}}$$. Here, G is a geometry factor, which ranks the cavity’s effectiveness in providing “useful” electric field due to the influence of its shape alone and excludes specific material wall loss. The G-factor can be formally defined as $$G= \\frac{\\omega \\mu _{0} \\int \\vert {\\vec{H}} \\vert ^{2} \\,dV}{\\int \\vert {\\vec{H}} \\vert ^{2} \\,dA}$$, where ω is the resonant frequency, $$\\mu _{0}$$ is permeability of free space, the top integral is for volumetric RF magnetic field and the bottom is for surface RF magnetic field . Increasing the value of G results in higher $$Q_{0}$$ and thus indicates a better cavity design. $$R_{s}$$ is the surface resistance, defined by the material and operating conditions. The surface resistance $$R_{s}$$ can be expanded into two terms: BCS-resistance and residual resistance, $$R_{s} = R_{\\mathrm{BCS}} + R_{\\mathrm{res}}$$. The BCS-resistance is given by the Bardeen–Cooper–Schrieffer theory , in which the superconducting Cooper pairs, which have zero resistance for DC current, have finite mass and their momentum alternates sinusoidally due to the AC currents of RF fields, giving rise to energy loss. BCS resistance for niobium depends on frequency and temperature, $$R_{\\mathrm{BCS}} \\sim \\frac{A}{T} f^{2} e^{- \\frac{\\Delta T}{kT}}$$ , and thus one should keep the frequency and temperature as low as possible to maximize the unloaded Q-factor. The residual resistance arises from several sources, such as material defects, oxides and hydrides that can form on the surface due to hot chemistry and slow cool-down, and other sources related to cavity processing and surface treatment.\n\n## Methods/experimental\n\n### Geometrical optimization\n\nAs a reference model for comparison of our optimization efforts, we used the straight QWR shape used by Yale . As an initial step for optimization, we adopted the shape of the 72.5 MHz QWR cavity developed at ANL . This geometry was designed for use with high-power accelerating fields with limitations on the maximum surface electric field and surface magnetic fields of 35 MV/m and 50 mT respectively . These limitations are introduced by electric (E-) field stimulated emission and thermal superconductivity breakdown caused by surface currents induced by the surface magnetic (H-) field. While these surface fields limitations are not relevant for low-power quantum applications, the optimization approach used for these cavities helps to reach better G-factor by reducing peak and integral surface H-fields. We performed further shape optimization by adjusting the geometrical parameters shown in Fig. 2. The simulations were performed in CST Microwave Studio .\n\nThe machining methods used to produce the cavity put limits on the cavity dimensions. Here we used a milling center to hollow out the cavity resonator from a solid block of niobium. The cutting tool width is constrained by the gap between the inner and outer conductors $$w_{b}$$ and vibrations experienced during cutting. By analyzing the cutting tool and cavity dimensions (Fig. 3), a simple relation was inferred: $$w_{b} = l_{1} + l_{3} + w_{c}$$, where $$l_{1}$$, $$l_{3}$$ are the outer and inner conductor tapering parameters, and the minimum width of the cutting tool $$w_{c}$$ is limited by vibrations during machining. This relation generally means that for bigger tapering we will need to increase the gap width. Therefore, we performed simulations to find the dependence of the RF parameters on these dimensions and then optimized them.\n\nFirst, we adjusted the geometry of the top part of the resonator. For the QWR geometry, this is the region where the magnetic field energy density is highest (see Fig. 4). By increasing the volume of this part of the resonator, the magnetic energy is distributed over a larger volume and the peak energy density is decreased. Decreasing the magnetic energy density reduces the magnetic surface field and improves the G-factor. We started by simulating geometries with different blending radius of the top part of the cavity. After we explored the outer conductor blending, we optimized the shape of inner and outer conductors by adding tapering. We optimized the taper shape by adjusting the tapering start point for the inner and outer conductors. To maximize the coupling strength with the transmon (Josephson junction), we optimized the inner and outer conductor dimensions, as well as the inner conductor radius. Finally, we performed simulations to find the optimal width of the gap between the inner conductor tip and outer conductor wall. The simulated G-factor dependences on the geometric parameters are presented in Fig. 5.\n\nTurning to geometrical optimization to reduce $$R_{s}$$, the primary source of residual resistance is the thin dielectric layer that is usually present on the surface of superconducting niobium. As shown in recent work at Fermi National Accelerator Laboratory [27, 28], removing this layer can improve the Q-factor by an order of magnitude. Losses in the dielectric layers can be calculated by integrating the E-field over the thin surface dielectric volume:\n\n$$P_{\\mathrm{diel}} = \\frac{1}{2} \\int _{V_{d}} \\overline{J} \\overline{E}^{*} \\,dv = \\frac{\\omega \\varepsilon ^{\\prime \\prime }}{2} \\int _{V_{d}} \\vert \\overline{E} \\vert ^{2} \\,dv,$$\n\nwhere ω is the working frequency and $$\\varepsilon ^{\\prime \\prime }$$ is the imaginary part of complex dielectric permittivity. By creating a more uniform E-field distribution, we can potentially lower the surface layer integral and thus decrease the dielectric loss. The E-field is concentrated at the end of the quarter-wavelength loading element and concentrated by the sharp edges of the straight geometry of the central pin as shown in Fig. 4. We blended this feature to achieve a more uniform E-field distribution. Full blending of the inner conductor tip helped to reduce the E-field integral by 22%, thus significantly reducing this source of dielectric loss.\n\nWe further investigated the dielectric loss using simulations of a thin Nb oxide layer on the cavity surface. The prevailing surface oxide in niobium RF cavities is $$\\mathrm{Nb} _{2}\\mathrm{O}_{5}$$ . RF measurements in thin $$\\mathrm{Nb} _{2}\\mathrm{O}_{5}$$ films have found a dielectric constant of about 50 with a loss tangent of 0.01 for temperatures below $$\\sim 100\\mbox{ K}$$. The results presented in show that the loss tangent remains constant for frequencies larger than 1 MHz. Thus we used these parameters in simulations to estimate the upper limit of the Q-factor.\n\nThe typical thickness of the oxide dielectric film in niobium RF cavities is around 50 Å . This is much smaller than the cavity dimensions, which are on the order of millimeters. Thus such thin films cannot be simulated directly. Instead, we simulated losses in a thicker 1 μm layer, and divided the simulated by a factor of 200 to approximate the expected actual film thicknesses. By doing this, we assumed that the E-field density does not change in this thin oxide film. Inferred dielectric losses thus should depend linearly on the surface thickness h:\n\n\\begin{aligned}& P_{\\mathrm{loss}} \\propto \\iiint \\vec{E} \\vec{D} \\,d V_{d} = \\iint h \\vec{E} \\vec{D} \\cdot \\,d S_{s}, \\\\& P_{\\mathrm{loss}} \\propto h. \\end{aligned}\n\nIn these simulations, as we are interested in the Q-factor as limited by dielectric losses, we assumed infinite electric conductivity for the cavity walls. The results of these simulations showed that a cavity with a 50 Å thick $$\\mathrm{Nb} _{2}\\mathrm{O}_{5}$$ layer in the optimized geometry almost two times better than the simulated dielectric Q-factor the non-optimized QWR geometry.\n\nWe then studied how to reduce the RF losses in the joints between the different components of the cavity assembly. Our fabrication method introduces limits on the dimensions of the cavities. One of the most substantial restrictions is the cavity length. From a manufacturing point of view, it is desirable to make the cavity as short as possible. The shorter cavity allows using a shorter cutter (Fig. 3), reducing vibrations and improving the quality of the machined surface.\n\nWe thus introduced the concept of machining a short cavity elongated by attaching an additional part (Fig. 6) made of a different material. An obvious material choice is aluminum, as this becomes superconducting at 1.2 K, well above our 10 mK operating temperature.\n\nOur new design added a number of parameters for optimization. We performed simulations to define the minimum distance from the niobium inner conductor to the niobium to aluminum connection ($$L_{\\mathrm{g}}$$) shown in Fig. 7. The next step was to simulate the joint losses. The surface magnetic field in the region of the QWR joints drives currents across the seam. Lower seam conductivity introduces additional losses; therefore we tried to minimize the H-field on the seam.\n\nOne design solution to reduce the joint loss is an RF choke (Fig. 8(e)). The addition of the choke, however, introduces difficulties with the cavity coupling, where the deep choke groove pushes the RF coupling pin (see Fig. 6) too far from the cavity, making the coupling between the cavity and RF transmission line unacceptably weak. On the other hand, a big concern was that coupling through the choke may introduce parasitic modes. This is inacceptable for qubit cavities, which require a clean spectrum for single mode operation. Moreover, eigenmode simulations showed a lower-order mode with E-field concentrated inside the choke. The appearance of a lower-order mode was a significant drawback, and the choke could not be used.\n\nThe evolution of the design during the optimization process is illustrated in Fig. 8 and compared quantitatively in Table 1; we managed to increase the G-factor by 65%. Further optimization seems to be possible; however, those designs are not feasible with machining fabrication.\n\n### Engineering and fabrication\n\nIn order to test this approach, we engineered and built a proof of principle prototype. The engineering design of the prototype includes system integration, thermal management, magnetic shielding, vacuum considerations, and signal integration. The complete system is shown in Fig. 9. Starting from the inside out, the original full-length cavity was truncated to have a separate lower field region made from aluminum that would also provide connection points for the RF signals through non-magnetic SMA connectors. Then considerations were made for sufficient thermal contact and choice of fastener. This was accomplished both through analytical calculation and computational stress analysis in ANSYS Multiphysics , as seen in Fig. 10, shown with magnified deformation scales.\n\nNext, a simple magnetic shield was designed in Cryoperm 10 , a unique cryogenically friendly steel alloy that retains functional permeability at cryogenic temperatures more effectively than conventional MuMetal . This magnetic shield required two separate shielding cans, each manufactured from 1 mm thick Cryoperm 10, that are separable for assembly of the device under test while permitting small penetrations for SMA cables. Finally, the full system was re-analyzed to ensure that the additional material did not affect the original assembly thermal calculations and provisions were made to ensure the full system could be mounted onto the available test plate within a dilution refrigerator operated at 10 mK. At this stage, spring washers were incorporated into the design, and a torque value and target compression were calculated and provided to technicians for assembly.\n\nRegarding the fabrication of the cavity, we encountered challenges during niobium machining that were amplified by over-annealing of the niobium by the vendor. This resulted in a ‘gummier’ metal consistency, which leads to surface burnishing rather than clean shearing of material and may have limited the Q of the prototype.\n\nOptical measurements were made on the first fabricated prototype cavity. All dimensions of the cavity were within the design tolerances except for the inner conductor length and inner conductor tilt. The former was decreased by 270 μm which, according to simulations, could lead to a frequency increase of 140 MHz. Inner conductor tilt was 1.6 degrees and, according to simulations, could not disturb RF parameters of the cavity. Furthermore, this tilt could be due to pressure applied to the sample during the cut for inspection.\n\nWe used optical measurements to estimate surface roughness on the outer conductor tapering which was one of the most challenging surfaces from a fabrication point of view. The measured profile is presented in Fig. 11. The surface roughness estimate based on this measurement is 20 μm which, however, could be up to 30 μm due to the high error of optical measurements of surface profiles.\n\nTwo solid niobium 6.2 GHz quarter wave resonators were cleaned, etched and high pressure rinsed at Argonne National Laboratory (see Fig. 12). Figure 13 shows the cavities before and after etching. Both cavities were processed according to the following procedure: (1) a 1 hour ultrasonic cleaning in a 40C 4% alconox solution ; (2) a ultra-high purity water rinse; (3) dried with filtered and de-ionized nitrogen boil-off gas; (4) a 150 minute buffered chemical polished (BCP) in 1:1:2 (HF:$$\\mathrm{HNO} _{3}$$:$$\\mbox{H}_{3}\\mathrm{PO} _{4}$$); (5) a ultra-high purity water rinse; (6) another 1 hour ultrasonic cleaning at 40C in a 4% alconox solution; (7) a ultra-high purity water rinse; (8) a high-pressure ultra-high purity water rinse; and finally (9) air dried in class 10 clean room and bagged in a class 100 clean room for transfer to the cryogenic laboratory.\n\n### Resonator tests\n\nWe performed a series of room temperature (RT) measurements for all fabricated cavities to keep track of their performance. First, we measured the RT quarter-wave resonant frequency and Q-factor (Fig. 14, left). We used a single coupling pin (Fig. 14, middle) to perform reflection-type measurements. The pin length was chosen to be 4.1 mm above the aluminum part edge to provide sufficient coupling. During the experiment, the cavity was tightly clamped to the aluminum part (Fig. 14, right), until a plateau in the measured Q-factor was observed.\n\nWe started from measuring the magnitude of $$S_{11}$$ (Fig. 15, left), which gave us information about the resonant frequency, $$f_{0}=6160\\text{ MHz}$$. We used the Smith chart representation (Fig. 15, right) to determine the frequencies $$f_{1}$$ and $$f_{2}$$ where the real and imaginary parts of the load impedance were equal. These values then were used to calculate the internal Q-factor of the cavity\n\n$$Q_{0} = \\frac{f_{0}}{f_{2} - f_{1}}.$$\n\nMeasurements of the non-optimized and optimized niobium and copper cavities are summarized in Table 2. The non-optimized niobium cavity appeared to be detuned by $$+15\\mbox{ MHz}$$, while the optimized cavity with 400 μm shorter inner conductor was detuned by $$-150\\mbox{ MHz}$$, which was 100 MHz less than expected. In order to implement the S11-curve measurement method, we added a circulator to the excitation port to measure the reflected power (we note that the unshielded magnetic fields from the circulators may affected the measured Q-factor of the resonators).\n\nIn order to accurately calculate the internal Q-factor $$Q_{0}$$ of the cavity, the external Q-factors $$Q_{\\mathrm{EXT}}$$ of the probes should differ (relative to $$Q_{0}$$) by no more than by an order of magnitude, and ideally by no more that a factor of two . However, since the $$Q_{0}$$ of the cavity was initially unknown, we had to perform a series of measurements of loaded Q-factor $$Q_{L}$$ with different external Q-factors $$Q_{\\mathrm{EXT}}$$ of the probes, i.e. the coupling pin of various lengths (Fig. 16, left). For these measurements, a copper cavity with already measured internal Q-factor (at room temperature) was used, so we just calculated external Q-factor for each coupler length by inferring the coupling coefficient from $$S_{11}$$ measurements. However, since the room temperature $$Q_{0}\\sim 10^{3}$$, these measurements could only be done for $$Q_{\\mathrm{EXT}}\\sim 10^{5}$$. For shorter couplers, we used electromagnetic simulations (Fig. 16, right).\n\nIn order to measure the Q-factor of the resonator, we adopted the ringdown technique that is widely used for measurements of superconducting accelerating cavities . In this method, the resonant cavity is driven by a short rf pulse into one port. Then the exponentially decaying cavity signal coming out of the second port is captured by an oscilloscope where we measure the decay time. One method of exciting the cavity is with positive feedback or a self-excited loop (SEL) , which works without any external rf reference matched to the cavity resonance.\n\nThe schematic of the SEL circuit built for our tests is shown in Fig. 17. When the rf switch is in the on state, the rf signal is amplified by a low-noise amplifier (LNA) from a noise that further on gets filtered by a combination of the test cavity itself and an additional band-pass filter. As this signal fills up the cavity the switch turns off and the stored energy decay time is measured by the scope. This method has the advantage of being independent of resonant frequency due to the microphonics, which is essential for measuring high-Q cavities with extremely narrow bandwidths.\n\nThe initial tests of the ringdown circuit were carried out on a 6 GHz copper test resonator (see Fig. 18). Typical waveforms are shown in Fig. 19. The decay time τ was measured, and then the loaded Q is calculated using $$Q_{L} = \\tau \\pi f$$, where f is the cavity resonant frequency.\n\nMeasurements of the loaded Q using S-parameter measurements in the frequency domain using a vector network analyzer (VNA) showed $$Q_{L}=1500$$, which was in good agreement with the results of the ringdown measurements that showed $$Q_{L}=1550$$ ($$\\tau \\sim 82\\mbox{ ns}$$). The SEL-based ringdown measurement circuit proved to work well and matched the results from the frequency domain measurements at room temperature.\n\nMoving to the cryogenic tests, two etched resonators, one with the simple and the other with the optimized shape (shapes b and d, as shown in Fig. 8) were used for Q-factor measurements in the quantum regime (10 mK temperature and a single-photon power level). The assembly is shown in Fig. 20.\n\nWe connected the resonators using the amplification chain installed inside the dilution refrigerator. The room temperature measurements of the installed resonators revealed that the resonant frequencies of both resonators increased by 200-250 MHz after polishing. The frequency of the non-optimized resonator became 6226 MHz, while the frequency of the optimized resonator increased to 6098 MHz (see Table 2, columns 2 and 4 for the frequencies before etching).\n\nBoth S-parameters and ringdown measurements were used to measure the loaded Q-factor ($$Q_{L}$$), which is a combination of the resonator Q-factor ($$Q_{0}$$), external Q-factor of the coupling elements ($$Q_{\\mathrm{EXT}}$$) and the Q-factor of the parasitic elements such as aluminum piece, seam losses, etc. ($$Q_{\\mathrm{PAR}}$$):\n\n$$\\frac{1}{Q_{L}} = \\frac{1}{Q_{0}} + \\frac{1}{Q_{\\mathrm{EXT}1}} + \\frac{1}{Q_{\\mathrm{EXT}2}} + \\frac{1}{Q_{\\mathrm{PAR}}}$$\n\nTherefore, in order to measure the real Q-factor, we need to operate in the under-coupled regime: $$Q_{\\mathrm{EXT}}\\gg Q_{0}$$. However, since $$Q_{0}$$ is not zero and $$Q_{\\mathrm{EXT}}$$ scales directly with the coupling pin length, we decided to perform a series of Q-factor measurements with different $$Q_{\\mathrm{EXT}}$$, starting from 105. One of the couplers was used to excite the signal in the resonator, and the other one was used as the field probe.\n\nFor the first measurement, SMA couplers lengths were adjusted to $$Q_{\\mathrm{EXT}1}=4\\cdot 10^{5}$$ and $$Q_{\\mathrm{EXT}2}=10^{7}$$ according to the calibration curve shown in Fig. 16. Then, on each successive run we increased the $$Q_{\\mathrm{EXT}}$$ by about an order of magnitude by reducing the coupler length. When the $$Q_{0}$$ starts to dominate in the $$Q_{L}$$, we should see no further changes in the measured Q-factor. Due to the project constraints, we had to limit the number of runs to three.\n\n## Results and discussion\n\nFor each measurement, the resonators were then cooled down to 10 mK within 24 hours. It is worth mentioning that the resonators remained in the 100–150 K region for about 9 hours, long enough to be affected by Q-disease and have their Q-factor reduced [41, 42].\n\nThe results for all three runs are summarized in Fig. 21. We can see a clear trend, where the Q-factor of the resonator with the optimized shape is higher by 25% than the Q-factor of the non-optimized resonators, measured by different techniques. Also, it is important to note that the Q-factor has not reached a plateau and grows exponentially with the length of the coupler, which may indicate that we are still in the overcoupled regime.\n\nFinally, we have measured the Q-factor of the optimized resonator at single photon power levels and below and observed no change for the level of one photon or above. The Q-factor drops rapidly for levels below one photon. However, the signal to noise ratio is very low at these power levels. The results of the Q-factor measurements as a function of average input power level and temperature are shown in Fig. 22. The saturation of losses at higher power levels demonstrates that the low-power Q-factor is limited by losses in the dielectric layer and material imperfections [33, 43, 44].\n\n## Conclusions\n\nRadiaBeam, in collaboration with the University of Chicago and Argonne National Laboratory, has developed a 3D SRF quarter-wave resonator with shape optimized for operation in the quantum regime. We used the known merits of SRF resonator design performance to demonstrate superior Q-factor performance of the optimized resonator. We have fabricated and etched two niobium resonators: one with simple and one with optimized shape. These prototype resonators were machined out of high-RRR niobium, chemically polished and high-pressure rinsed at Argonne National Laboratory and tested at the University of Chicago at 10 mK up to 1 photon power levels. We used several methods to measure Q-factor, which demonstrated that the Q-factors of the resonators with optimized shaped are 25% higher than for the non-optimized resonators (see Fig. 23). This result demonstrates the proof-of-concept of the higher Q-factor due to shape optimization. Our future work will focus on improving the fabrication and surface processing of these resonators while incorporating new methods to limit the role surface oxides and nitrides play in reducing the cavity quality factors.\n\n## Abbreviations\n\nAC:\n\nalternating current\n\nANL:\n\nArgonne National Laboratory\n\nBCP:\n\nbuffered chemical polishing\n\nBCS:\n\nBardeen–Cooper–Schrieffer theory\n\nEM:\n\nelectromagnetic\n\nIR:\n\ninfrared\n\nLNA:\n\nlow-noise amplifier\n\nQC:\n\nquantum computers\n\nQWR:\n\nquarter-wave resonant cavity\n\nRF:\n\nRRR:\n\nresidual-resistivity ratio\n\nRT:\n\nroom temperature\n\nSBIR:\n\nSEL:\n\nself-excited loop\n\nSMA:\n\nSubMiniature connector version A\n\nSRF:\n\nVNA:\n\nvector network analyzer\n\n## References\n\n1. Schumacher B. Phys Rev A. 1995;51:2738.\n\n2. Gershenfeld N, Chuang IL. Scientific American, June 1998.\n\n3. Arute F et al.. Nature. 2019;574(7779):505–10.\n\n4. Zeh HD. Found Phys. 1970;1:69–76.\n\n5. Amy M, et al. 2016. arXiv:1603.09383.\n\n6. Makhlin Y, Schön G, Shnirman A. Nature. 1999;398(6725):305–7.\n\n7. Ofek N et al.. Nature. 2016;536(7617):441–5.\n\n8. Paik H et al.. Phys Rev Lett. 2011;107(24):240501.\n\n9. Bronn NT et al.. Quantum Sci Technol. 2018;3(2):024007.\n\n10. Béjanin JH et al.. Phys Rev Appl. 2016;6(4):044010.\n\n11. Josephson BD. Phys Lett. 1962;1(7):251–3.\n\n12. Leek PJ et al.. Phys Rev Lett. 2010;104:100504. 0911.4951.\n\n13. Sillanpaa MA, Park JI, Simmonds RW. Nature. 2007;449:438–42.\n\n14. Gambetta J et al.. Phys Rev A. 2006;74:042318.\n\n15. Houck AA et al.. Phys Rev Lett. 2008;101:080502.\n\n16. Reagor M et al.. Phys Rev B. 2016;94:014506.\n\n17. Posen S, Liepe M. Phys Rev Spec Top, Accel Beams. 2014;17:112001.\n\n18. Kuhr S et al.. Appl Phys Lett. 2007;90:164101.\n\n19. Kutsaev S, et al. Quantum computing structures and resonators thereof. US Patent Application 62/837655. Filed April 23.\n\n20. Padamsee H, Knobloch J, Hays T. RF superconductivity for accelerators. New York: Wiley; 1998.\n\n21. Bardeen J, Cooper LN, Schrieffer JR. Phys Rev. 1957;108:1175.\n\n22. Mattis DC, Bardeen J. Phys Rev. 1958;111:412.\n\n23. Reagor M et al.. Phys Rev B. 2016;94:014506.\n\n24. Schultheiss TJ, et al. Proceedings of 2011 particle accelerator conference. New York, NY, USA.\n\n25. Padamsee H. RF superconductivity: science, technology and applications (v. 2). Wiley-VCH; 2009. p. 1626.\n\n26. Romanenko A et al.. Phys Rev Appl. 2020;13:034032.\n\n27. Romanenko A. Employing SRF to boost coherence of 3D quantum systems. Talk at SRF’2019 conference, Dresden, Germany, 2019, https://srf2019.vrws.de/talks/thfub5_talk.pdf.\n\n28. Daccà A et al.. Appl Surf Sci. 1998;126:219–30.\n\n29. Manuspiya H. Thesis, The Pennsylvania State University, 2003, p. 137.\n\n30. Fuschillo N, Lalevic B, Annamalai NK. Thin Solid Films. 1975;30:145–54.\n\n31. Knobloch J. Dissertation, Cornell University, August 1997, p. 26.\n\n32. Martinis JM et al.. Phys Rev Lett. 2005;95:210503.\n\n33. Weast R. Handbook of chemistry and physics. 64th ed. Boca Raton: CRC Press; 1983. p. E-108.\n\n34. Powers T. Theory and practice of cavity RF test systems. Proc. SRF’05. Ithaca: Cornell University; 2005.\n\n35. Romanenko A, Schuster DI. Phys Rev Lett. 2017;119:264801.\n\n36. Delayen J. Dissertation (Ph.D.), California Institute of Technology, 1979.\n\n37. Gonnella D, Liepe M. Cool down and flux trapping studies on SRF cavities. In: Proc. of LINAC2014. Geneva, Switzerland. 2014.\n\n38. Romanenko A et al.. Appl Phys Lett. 2014;105:234103.\n\n39. Megrant A et al.. Appl Phys Lett. 2012;100:113510.\n\n40. Wenner J. Appl Phys Lett. 2011;99:113513.\n\n### Acknowledgements\n\nThe authors would like to thank Dr. Alexander Romanenko from Fermi National Accelerator Laboratory for the important discussions about SRF qubit cavities. We would also like to acknowledge our RadiaBeam colleagues Peter Phillip and Daniel Villaseñor for their work on cavity fabrication, as well as Salime Boucher for constructive criticism of the manuscript.\n\n### Availability of data and materials\n\nThe datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.\n\n## Funding\n\nThis work was supported by the U.S. Department of Energy, Office of High Energy Physics, under SBIR grant DE- SC0018753.\n\n## Author information\n\nAuthors\n\n### Contributions\n\nThe resonator design and optimization were done by KT and SK. Engineering design and fabrication studies were performed by RA, AM and PC. AS developed the SEL measurement stand. ZC performed the chemical treatment of the niobium cavities and consulted the authors in SRF technology. AC provided consultation in quantum technologies, qubit integration and hosted the experiment. The experiment was performed by KT, ED, AS and SK. The data was analyzed and interpreted by KT, ED, ZC, SK and AS. The manuscript was written and edited by SK, KT, ZC, RA, AS, ED and AC. The project was led by SK. All authors commented on the manuscript. All authors have read and approved the final manuscript.\n\n### Corresponding author\n\nCorrespondence to S. V. Kutsaev.\n\n## Ethics declarations\n\n### Competing interests\n\nThe authors declare that they have no competing interests.\n\n## Rights and permissions",
null,
""
] | [
null,
"https://epjquantumtechnology.springeropen.com/track/article/10.1140/epjqt/s40507-020-00082-8",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.90421945,"math_prob":0.9594964,"size":32793,"snap":"2022-27-2022-33","text_gpt3_token_len":7591,"char_repetition_ratio":0.13922352,"word_repetition_ratio":0.022271268,"special_character_ratio":0.23346446,"punctuation_ratio":0.12534773,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98269063,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-07T22:41:52Z\",\"WARC-Record-ID\":\"<urn:uuid:5aac8196-09ed-437e-bb39-57d4a568a2c4>\",\"Content-Length\":\"333390\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6880d061-6236-4ed7-9602-fc5475613005>\",\"WARC-Concurrent-To\":\"<urn:uuid:2a7681ad-d4a2-4ca8-9e7a-1915e9c1785e>\",\"WARC-IP-Address\":\"146.75.32.95\",\"WARC-Target-URI\":\"https://epjquantumtechnology.springeropen.com/articles/10.1140/epjqt/s40507-020-00082-8\",\"WARC-Payload-Digest\":\"sha1:BKKTSZTQSEH5QXTFKJO6GQ6HRTY2D327\",\"WARC-Block-Digest\":\"sha1:SJF7ZWTIOBA3L26TSBSUDMNVX3G4NKJL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882570730.59_warc_CC-MAIN-20220807211157-20220808001157-00471.warc.gz\"}"} |
https://ggobi.github.io/ggally/articles/ggally_plots.html | [
"The purpose of this vignette is to display all high-level plots available in GGally to be used in particular with ggduo() and ggpairs(). The name of all the corresponding functions are of the form ggally_*(). Most of them accept a discrete variables to be passed to the colour aesthetic.\n\nWe can distinct bivariate plots requiring two variables for x and y axis respectively and diagonal plots when the same variable is plotted on x and y axis.\n\nlibrary(GGally, quietly = TRUE)\n#> Registered S3 method overwritten by 'GGally':\n#> method from\n#> +.gg ggplot2\ndata(tips)\n\n## Bivariate plots\n\n### with 2x continuous variables\n\n#### ggally_autopoint()\n\nggally_autopoint(tips, aes(x = total_bill, y = tip))",
null,
"ggally_autopoint(tips, aes(x = total_bill, y = tip, colour = time))",
null,
"#### ggally_cor()\n\nggally_cor(tips, aes(x = total_bill, y = tip))",
null,
"ggally_cor(tips, aes(x = total_bill, y = tip, colour = time))",
null,
"See also ggally_statistic().\n\n#### ggally_density()\n\nggally_density(tips, aes(x = total_bill, y = tip))",
null,
"ggally_density(tips, aes(x = total_bill, y = tip, colour = time))",
null,
"#### ggally_points()\n\nggally_points(tips, aes(x = total_bill, y = tip))",
null,
"ggally_points(tips, aes(x = total_bill, y = tip, colour = time))",
null,
"#### ggally_smooth(), ggally_smooth_lm() & ggally_smooth_loess()\n\nggally_smooth_lm(tips, aes(x = total_bill, y = tip))",
null,
"ggally_smooth_lm(tips, aes(x = total_bill, y = tip, colour = time))",
null,
"ggally_smooth_loess(tips, aes(x = total_bill, y = tip))",
null,
"ggally_smooth_loess(tips, aes(x = total_bill, y = tip, colour = time))",
null,
"See also ggally_smooth() for more options.\n\n### with 2x discrete variables\n\n#### ggally_colbar()\n\nggally_colbar(tips, aes(x = day, y = smoker))",
null,
"Note: the colour aesthetic is not taken into account.\n\n#### ggally_autopoint()\n\nggally_autopoint(tips, aes(x = day, y = smoker))",
null,
"ggally_autopoint(tips, aes(x = day, y = smoker, colour = time))",
null,
"#### ggally_count()\n\nggally_count(tips, aes(x = day, y = smoker))",
null,
"ggally_count(tips, aes(x = day, y = smoker, colour = time))",
null,
"#### ggally_cross()\n\nggally_cross(tips, aes(x = day, y = smoker))",
null,
"ggally_cross(tips, aes(x = day, y = smoker, colour = time))",
null,
"ggally_cross(tips, aes(x = day, y = smoker, colour = smoker))",
null,
"Note: colour aesthetic is taken into account only if it corresponds to x or to y.\n\n#### ggally_crosstable()\n\nggally_crosstable(tips, aes(x = day, y = smoker))",
null,
"ggally_crosstable(tips, aes(x = day, y = smoker), cells = \"col.prop\", fill = \"std.resid\")",
null,
"Note: colour aesthetic is not taken into account.\n\n#### ggally_facetbar()\n\nggally_facetbar(tips, aes(x = day, y = smoker))",
null,
"ggally_facetbar(tips, aes(x = day, y = smoker, colour = time))",
null,
"#### ggally_ratio()\n\nggally_ratio(tips, aes(x = day, y = smoker))",
null,
"ggally_ratio(tips, aes(x = day, y = smoker, colour = time))",
null,
"#### ggally_rowbar()\n\nggally_rowbar(tips, aes(x = day, y = smoker))",
null,
"Note: the colour aesthetic is not taken into account.\n\n#### ggally_table()\n\nggally_table(tips, aes(x = day, y = smoker))",
null,
"ggally_table(tips, aes(x = day, y = smoker, colour = time))",
null,
"ggally_table(tips, aes(x = day, y = smoker, colour = smoker))",
null,
"Note: colour aesthetic is taken into account only if it corresponds to x or to y.\n\nggally_trends(tips, aes(x = day, y = smoker))",
null,
"ggally_trends(tips, aes(x = day, y = smoker, colour = time))",
null,
"### with 1x continuous and 1x discrete variables\n\n#### ggally_autopoint()\n\nggally_autopoint(tips, aes(x = total_bill, y = day))",
null,
"ggally_autopoint(tips, aes(x = total_bill, y = day, colour = time))",
null,
"#### ggally_box() & ggally_box_no_facet()\n\nggally_box(tips, aes(x = total_bill, y = day))",
null,
"ggally_box(tips, aes(x = total_bill, y = day, colour = time))",
null,
"ggally_box_no_facet(tips, aes(x = total_bill, y = day))",
null,
"ggally_box_no_facet(tips, aes(x = total_bill, y = day, colour = time))",
null,
"#### ggally_denstrip()\n\nggally_denstrip(tips, aes(x = total_bill, y = day))\n#> stat_bin() using bins = 30. Pick better value with binwidth.\n#> Warning: Removed 45 rows containing missing values (geom_bar()).",
null,
"ggally_denstrip(tips, aes(x = total_bill, y = day, colour = time))\n#> stat_bin() using bins = 30. Pick better value with binwidth.\n#> Warning: Removed 45 rows containing missing values (geom_bar()).",
null,
"#### ggally_dot() & ggally_dot_no_facet()\n\nggally_dot(tips, aes(x = total_bill, y = day))",
null,
"ggally_dot(tips, aes(x = total_bill, y = day, colour = time))",
null,
"ggally_dot_no_facet(tips, aes(x = total_bill, y = day))",
null,
"ggally_dot_no_facet(tips, aes(x = total_bill, y = day, colour = time))",
null,
"#### ggally_facetdensitystrip()\n\nggally_facetdensitystrip(tips, aes(x = total_bill, y = day))",
null,
"ggally_facetdensitystrip(tips, aes(x = total_bill, y = day, colour = time))\n#> Warning: Groups with fewer than two data points have been dropped.\n#> Warning: Removed 1 row containing missing values (geom_line()).",
null,
"#### ggally_facethist()\n\nggally_facethist(tips, aes(x = total_bill, y = day))\n#> stat_bin() using bins = 30. Pick better value with binwidth.",
null,
"ggally_facethist(tips, aes(x = total_bill, y = day, colour = time))\n#> stat_bin() using bins = 30. Pick better value with binwidth.",
null,
"#### ggally_summarise_by()\n\nggally_summarise_by(tips, aes(x = total_bill, y = day))",
null,
"ggally_summarise_by(tips, aes(x = total_bill, y = day, colour = day))",
null,
"Note: colour aesthetic is kept only if corresponding to the discrete axis.\n\nggally_trends(tips, aes(y = total_bill, x = day))",
null,
"ggally_trends(tips, aes(y = total_bill, x = day, colour = time))",
null,
"## Diagonal plots\n\n### with 1x continuous variable\n\n#### ggally_autopointDiag()\n\nggally_autopointDiag(tips, aes(x = total_bill))",
null,
"ggally_autopointDiag(tips, aes(x = total_bill, colour = time))",
null,
"#### ggally_barDiag()\n\nggally_barDiag(tips, aes(x = total_bill))\n#> stat_bin() using bins = 30. Pick better value with binwidth.",
null,
"ggally_barDiag(tips, aes(x = total_bill, colour = time))\n#> stat_bin() using bins = 30. Pick better value with binwidth.",
null,
"#### ggally_densityDiag()\n\nggally_densityDiag(tips, aes(x = total_bill))",
null,
"ggally_densityDiag(tips, aes(x = total_bill, colour = time))",
null,
"### with 1x discrete variable\n\n#### ggally_autopointDiag()\n\nggally_autopointDiag(tips, aes(x = day))",
null,
"ggally_autopointDiag(tips, aes(x = day, colour = time))",
null,
"#### ggally_barDiag()\n\nggally_barDiag(tips, aes(x = day))",
null,
"ggally_barDiag(tips, aes(x = day, colour = time))",
null,
"#### ggally_countDiag()\n\nggally_countDiag(tips, aes(x = day))",
null,
"ggally_countDiag(tips, aes(x = day, colour = time))",
null,
"#### ggally_densityDiag()\n\nggally_densityDiag(tips, aes(x = day))",
null,
"ggally_densityDiag(tips, aes(x = day, colour = time))\n#> Warning: Groups with fewer than two data points have been dropped.\n#> Warning in max(ids, na.rm = TRUE): no non-missing arguments to max; returning\n#> -Inf",
null,
"• ggally_statistic() and ggally_text() to display custom text\n• ggally_blank() and ggally_blankDiag() for blank plot\n• ggally_na() and ggally_naDiag() to display a large NA"
] | [
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-3-1.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-3-2.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-4-1.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-4-2.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-5-1.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-5-2.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-6-1.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-6-2.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-7-1.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-7-2.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-7-3.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-7-4.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-8-1.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-9-1.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-9-2.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-10-1.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-10-2.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-11-1.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-11-2.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-11-3.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-12-1.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-12-2.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-13-1.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-13-2.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-14-1.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-14-2.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-15-1.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-16-1.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-16-2.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-16-3.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-17-1.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-17-2.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-18-1.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-18-2.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-19-1.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-19-2.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-19-3.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-19-4.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-20-1.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-20-2.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-21-1.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-21-2.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-21-3.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-21-4.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-22-1.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-22-2.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-23-1.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-23-2.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-24-1.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-24-2.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-25-1.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-25-2.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-26-1.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-26-2.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-27-1.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-27-2.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-28-1.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-28-2.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-29-1.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-29-2.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-30-1.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-30-2.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-31-1.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-31-2.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-32-1.png",
null,
"https://ggobi.github.io/ggally/articles/ggally_plots_files/figure-html/unnamed-chunk-32-2.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.58852106,"math_prob":0.99945384,"size":6439,"snap":"2023-40-2023-50","text_gpt3_token_len":1941,"char_repetition_ratio":0.32634032,"word_repetition_ratio":0.46719456,"special_character_ratio":0.29849356,"punctuation_ratio":0.19318181,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9959936,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,1,null,1,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,1,null,1,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-02T08:20:19Z\",\"WARC-Record-ID\":\"<urn:uuid:604f5b70-12dd-43d7-b9af-e40eeace37a5>\",\"Content-Length\":\"65621\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2265f624-f50a-4963-9807-808ff966c87a>\",\"WARC-Concurrent-To\":\"<urn:uuid:1a524950-3fc4-451e-888b-9509f914fafc>\",\"WARC-IP-Address\":\"185.199.111.153\",\"WARC-Target-URI\":\"https://ggobi.github.io/ggally/articles/ggally_plots.html\",\"WARC-Payload-Digest\":\"sha1:WN76BZQQRGOBNLK6Z2S4KWL5VPN36Q7N\",\"WARC-Block-Digest\":\"sha1:A4RCWYMJNU2ZEGP2FAYZHFW5YBC5Q6OR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100381.14_warc_CC-MAIN-20231202073445-20231202103445-00465.warc.gz\"}"} |
https://www.answers.com/Q/What_is_the_probability_of_landing_on_heads_6_times | [
"Statistics\nProbability\n\n# What is the probability of landing on heads 6 times?\n\n567\n\nThe probability is 6 in 12, or 1 in 2.\n\n🙏\n0\n🤨\n0\n😮\n0\n😂\n0\n\n## Related Questions",
null,
"Experimental probability is calculated by taking the data produced from a performed experiment and calculating probability from that data. An example would be flipping a coin. The theoretical probability of landing on heads is 50%, .5 or 1/2, as is the theoretical probability of landing on tails. If during an experiment, however, a coin is flipped 100 times and lands on heads 60 times and tails 40 times, the experimental probability for this experiment for landing on heads is 60%, .6 or 6/10. The experimental probability of landing on tails would be 40%, .4, or 6/10.",
null,
"the probability of tossing a coin and it landing on head is a 1 in 2 chance the probability of rolling a 5 on a dice is a 1 in 6 chance",
null,
"Coin landing of heads = 1/2 (either heads or tails) Dice landing on even number = 1/2 (no matter how many faces there are on the dice unless the number of faces is odd, 6 sided=3 even sides/6)",
null,
"Since there are 6 possible outcomes, and you want the probability of obtaining one of the outcomes (in your case 6), the probability of it landing on a 6 is 1/6.",
null,
"",
null,
"Generally you should express it as a number between 0 (impossible) and 1 (absolute certainty). Either as a decimal or fraction is OK. For example, the probability of a fair coin toss landing on heads is 0.5. The probability of a fair die landing on 4 is 1/6. But it's also common to see probability written as a percentage between 0% (impossible) and 100% (absolute certainty). So P(coin landing on heads)=50.0% and P(die landing on 4)=16.7%.",
null,
"",
null,
"Probability is the likelihood that something will occur. If you subtract it from 1, we get the likelihood (or probability) that it will not occur. If a coin is tossed and rolls heads 6 times, the (empirical) probability of obtaining a head is 6/10 or .6. 1-.6 =.4 is the empirical probability (or likelihood) of not getting a head.",
null,
"The probability of flipping a quarter and getting heads is 1 in 2. the probability of rolling a die and getting 6 is 1 in 6.",
null,
"On a 6-sided die, the probability of not having a six is 5/6, or 83 1/3 %",
null,
"Assuming that it is a regular shaped spinner, the probability is 1/6*1/6 = 1/36",
null,
"",
null,
"",
null,
"There is a 1/6 chance of rolling a 4 on a fair die, and a 1/2 chance of a fair coin landing heads up. Multiply 1/6 X 1/2. The probability of both happening is 1/12.",
null,
"only about 75% because very often it is not equal The probability of 3 heads and 3 tails is 0.3125",
null,
"",
null,
"It is neither. If you repeated sets of 8 tosses and compared the number of times you got 6 heads as opposed to other outcomes, it would comprise proper experimental probability.",
null,
"",
null,
"",
null,
"The faces greater than 2 are 3, 4, 5 and 6 ; that's 4/6 = 2/3 A flipped coin will be Heads 1/2 and Tails 1/2 Combining these we have a probability of 1/2 times 2/3 = 1/3 So the probability of the combined events is 1/3 or 33.3(recurring)%",
null,
"A die normally has 6 sides numbered 1 through 6. The probability of you landing on ANY number is 1:6, or you have a 1 in 6 chance of landing a 3.",
null,
"The probability of tossing 6 heads in 6 dice is 1 in 26, or 1 in 64, or 0.015625. THe probability of doing that at least once in six trials, then, is 6 in 26, or 6 in 64, or 3 in 32, or 0.09375.",
null,
"First event is to roll a 3 or 6 on a die, which gives you a probability of 2 out of 6. Second event is tossing a heads on a coin, so a probability of 1 out of 2. Since both chances are not related, you can multiply both chances: 2/6 times 1/2 = 1/6 = 0,166666...",
null,
"Probability of coin heads up: 1/2 Rolling a 4 or 5 on the cube: 2/6 1/2 times 2/6 = 2/12, or 1/6.",
null,
"The probability of flipping a heads is 1/2 and the probability of rolling a 6 is 1/6. By the laws of probability it would be logical to multiply them together, (1/2)(1/6) thus the answer being 1/12 with is roughly eight percent.\n\n###### ProbabilityMath and ArithmeticStatistics",
null,
"Copyright © 2020 Multiply Media, LLC. All Rights Reserved. The material on this site can not be reproduced, distributed, transmitted, cached or otherwise used, except with prior written permission of Multiply."
] | [
null,
"https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png",
null,
"https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png",
null,
"https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png",
null,
"https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png",
null,
"https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png",
null,
"https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png",
null,
"https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png",
null,
"https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png",
null,
"https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png",
null,
"https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png",
null,
"https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png",
null,
"https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png",
null,
"https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png",
null,
"https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png",
null,
"https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png",
null,
"https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png",
null,
"https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png",
null,
"https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png",
null,
"https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png",
null,
"https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png",
null,
"https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png",
null,
"https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png",
null,
"https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png",
null,
"https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png",
null,
"https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png",
null,
"https://img.answers.com/answ/image/upload/q_auto,f_auto,dpr_2.0/v1589555119/logos/Answers_throwback_logo.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.93429023,"math_prob":0.9907446,"size":5124,"snap":"2020-45-2020-50","text_gpt3_token_len":1493,"char_repetition_ratio":0.19238281,"word_repetition_ratio":0.013347022,"special_character_ratio":0.30113193,"punctuation_ratio":0.12279226,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9988768,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-05T09:50:57Z\",\"WARC-Record-ID\":\"<urn:uuid:d8b49a9f-9d5a-4c87-8d40-5fccba2c7741>\",\"Content-Length\":\"187088\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:290d28fc-1c57-4ecb-af0b-9a4a6af45c02>\",\"WARC-Concurrent-To\":\"<urn:uuid:d1341104-ebed-4ce2-8a6d-5d14a576359f>\",\"WARC-IP-Address\":\"151.101.200.203\",\"WARC-Target-URI\":\"https://www.answers.com/Q/What_is_the_probability_of_landing_on_heads_6_times\",\"WARC-Payload-Digest\":\"sha1:TSBKSCAEBQIHKH5VNZGIAXMZ5SIEYHUH\",\"WARC-Block-Digest\":\"sha1:G3SB46G5NV64XLXQKG3O3QNPQEK6JPGB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141747323.98_warc_CC-MAIN-20201205074417-20201205104417-00691.warc.gz\"}"} |
https://www.math-only-math.com/geometric-progression.html | [
"# Geometric Progression\n\nWe will discuss here about the Geometric Progression along with examples.\n\nA sequence of numbers is said to be Geometric Progression if the ratio of any term and its preceding term is always a constant quantity.\n\nDefinition of Geometric Progression:\n\nA sequence of non-zero number is said to be in Geometric Progression (abbreviated as G.P.) if each term, after the first, is obtained by multiplying the preceding term by a constant quantity (positive or negative).\n\nThe constant ratio is said to be the common ratio of the Geometric Progression and is denoted by dividing any term by that which immediately precedes it.\n\nIn other words, the sequence {a$$_{1}$$, a$$_{2}$$, a$$_{3}$$, a$$_{4}$$, ..................., a$$_{n}$$, ................. } is said to be in Geometric Progression, if $$\\frac{a_{n + 1}}{a_{n}}$$ = constant for all n ϵ N i.e., for all integral values of a, the ratio $$\\frac{a_{n + 1}}{a_{n}}$$ is constant.\n\nExamples on Geometric Progression\n\n1. The sequence 3, 15, 75, 375, 1875, .................... is a Geometric Progression, because $$\\frac{15}{5}$$ = $$\\frac{75}{15}$$ = $$\\frac{375}{75}$$ = $$\\frac{1875}{375}$$ = .................. = 5, which is constant.\n\nClearly, this sequence is a Geometric Progression with first term 3 and common ratio 5.\n\n2. The sequence $$\\frac{1}{3}$$, -$$\\frac{1}{2}$$, $$\\frac{3}{4}$$, -$$\\frac{9}{8}$$, is a Geometric Progression with first term $$\\frac{1}{3}$$ and common ratio $$\\frac{-\\frac{1}{2}}{\\frac{1}{3}}$$ = -$$\\frac{3}{2}$$\n\n3. The sequence of numbers {4, 12, 36, 108, 324, ........... } forms a Geometric Progression whose common ratio is 3, because,\n\nSecond term (12) = 3 × First term (4),\n\nThird term (36) = 3 × Second term (12),\n\nFourth term (108) = 3 × Third term (36),\n\nFifth term (324) = 3 × Fourth term (108) and so on.\n\nIn other words,\n\n$$\\frac{Second term (12)}{First term (4)}$$ = $$\\frac{Third term (36)}{Second term (12)}$$ = $$\\frac{Fourth term (108)}{Third term (36)}$$ = $$\\frac{Fifth term (324)}{Fourth term (108)}$$ = ................. = 3 (a constant)\n\nSolved example on Geometric Progression\n\nShow that the sequence given by an = 3(2$$^{n}$$), for all n ϵ N, is a Geometric Progression. Also, find its common ratio.\n\nSolution:\n\nThe given sequence is a$$_{n}$$ = 3(2$$^{n}$$)\n\nNow putting n = n +1 in the given sequence we get,\n\na$$_{n + 1}$$ = 3(2$$^{n + 1}$$)\n\nNow, $$\\frac{a_{n + 1}}{a_{n}}$$ = $$\\frac{3(2^{n + 1})}{3(2^{n})}$$ = 2\n\nTherefore, we clearly see that for all integral values of n, the $$\\frac{a_{n + 1}}{a_{n}}$$ = 2 (constant). Thus, the given sequence is an Geometric Progression with common ratio 2.\n\nGeometric Series:\n\nIf a$$_{1}$$, a$$_{2}$$, a$$_{3}$$, a$$_{4}$$, ..............., a$$_{n}$$, .......... is a Geometric Progression, then the expression a$$_{1}$$ + a$$_{2}$$ + a$$_{3}$$ + ......... + a$$_{n}$$ + .................... is called a geometric series.\n\nNotes:\n\n(i) The geometric series is finite according as the corresponding Geometric Progression consists of finite number of terms.\n\n(ii) The geometric series is infinite according as the corresponding Geometric Progression consists of infinite number of terms.\n\nGeometric Progression\n\n### New! Comments\n\nHave your say about what you just read! Leave me a comment in the box below. Ask a Question or Answer a Question.\n\nDidn't find what you were looking for? Or want to know more information about Math Only Math. Use this Google Search to find what you need."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.87150997,"math_prob":1.0000061,"size":3462,"snap":"2019-35-2019-39","text_gpt3_token_len":1061,"char_repetition_ratio":0.235107,"word_repetition_ratio":0.037453182,"special_character_ratio":0.3974581,"punctuation_ratio":0.29075426,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999926,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-21T20:20:24Z\",\"WARC-Record-ID\":\"<urn:uuid:8598d85b-6810-456e-a7f5-0af80c92fd3d>\",\"Content-Length\":\"33994\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:be381902-a9bd-41c6-94e0-67e223e6cab8>\",\"WARC-Concurrent-To\":\"<urn:uuid:130e7572-9130-4f63-9a4c-62bbb9e90886>\",\"WARC-IP-Address\":\"173.247.219.53\",\"WARC-Target-URI\":\"https://www.math-only-math.com/geometric-progression.html\",\"WARC-Payload-Digest\":\"sha1:YTLFCAMJS2GMD422MHQYBYPLDD22IMRS\",\"WARC-Block-Digest\":\"sha1:KC4WXO5RXADNCJ3IM2TRQIRMLECXTYEA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027316194.18_warc_CC-MAIN-20190821194752-20190821220752-00219.warc.gz\"}"} |
https://bklein.ece.gatech.edu/laser-photonics/pumping-schemes/ | [
"# Pumping Schemes\n\nAs we saw from Beer’s Law, in order to have optical amplification occur, we must have the density of atoms in the upper energy state",
null,
"$N_2 > (g_2 / g_1) N_1$, or if the degeneracies are equal to one, just",
null,
"$N_2 > N_1$. As previously mentioned, this is called inversion. Inversion requires pumping – an injection of energy into the atoms. This pumping could take many forms. In gas lasers, an electrical discharge is a common pumping mechanism. It is also possible to pump with light (optical absorption); however, if we are pumping with light, there must be at least one more atomic energy state involved besides",
null,
"$E_1$ and",
null,
"$E_2$ to make it work. Why?\n\nImagine we have a collection of atoms which are all initially in the lower energy state",
null,
"$E_1$. For simplicity, take",
null,
"$g_2 = g_1 = 1$. Let’s try to pump atoms into the upper energy state",
null,
"$E_2$ using a light source whose frequency",
null,
"$\\nu = (E_2 - E_1) / h$. This will work fine initially – all the incoming light will be absorbed by the atoms in state 1, and those atoms will be promoted to state 2. However, as the density of atoms in state 2",
null,
"$N_2$ climbs, eventually we will reach a point where",
null,
"$N_2 = N_1$. At this point, an incoming photon is equally likely to be absorbed or cause stimulated emission. Therefore, the light won’t be able to push",
null,
"$N_2$ any higher, at least in steady state. If we briefly manage to get",
null,
"$N_2 > N_1$, stimulated emission will dominate over absorption, and",
null,
"$N_2$ will be driven back down.\n\nSo, we must introduce a third energy state",
null,
"$E_3$ into our consideration. We could pump atoms from",
null,
"$E_1$ to",
null,
"$E_3$, as shown in the diagram below. We would be wise to choose a set of states for which there is a fast relaxation process that quickly drops atoms from state",
null,
"$E_3$ to",
null,
"$E_2$. In this way,",
null,
"$N_3$ will always be close to zero, and pump photons whose frequency",
null,
"$\\nu_{pump} = (E_3 - E_1) / h$ will always be absorbed. At the same time, we would like atoms to linger in state",
null,
"$E_2$ a long time (a so-called ‘metastable’ state). If we satisfy these conditions, we can increase",
null,
"$N_2$ until it is greater than",
null,
"$N_1$ using the pump from",
null,
"$E_1$ to",
null,
"$E_3$. Once",
null,
"$N_2 > N_1$, we’ve achieved inversion and therefore amplification for photons whose frequency",
null,
"$\\nu_{stim} = (E_2 - E_1) / h$.",
null,
"The three-level scheme above has a weakness. It is relatively easy to increase",
null,
"$N_2$, but we would also like to decrease",
null,
"$N_1$, in order to maximize the inversion",
null,
"$(N_2 - N_1)$. If the state",
null,
"$E_1$ is the ground state, the pump rate must be extremely large to lower",
null,
"$N_1$ significantly (a very intense pump would be required). One way to address this challenge is to introduce a fourth state into the pumping scheme, as diagrammed below. In this case, we want a pump tuned to the transition from",
null,
"$E_1$ to",
null,
"$E_4$, a fast relaxation process from",
null,
"$E_4$ to",
null,
"$E_3$, a fast relaxation process from",
null,
"$E_2$ to",
null,
"$E_1$, and a metastable state",
null,
"$E_3$. This scheme is designed to amplify light with frequency",
null,
"$\\nu_{stim} = (E_3 - E_2) / h$. Atoms are pumped up to",
null,
"$E_4$ and quickly drop to",
null,
"$E_3$, where they linger. After a stimulated emission event drops the atom to state",
null,
"$E_2$, it quickly relaxes to state",
null,
"$E_1$, ensuring that",
null,
"$N_2$ will remain small. In this way, we can maximize the inversion",
null,
"$(N_3 - N_2)$.",
null,
"Of course, in engineering as in life, no improvement comes without a price. The 3- and 4-level pumping schemes make it easier to achieve a large inversion. The price we pay is an inherent loss of efficiency. If we are pumping with a single photon of energy",
null,
"$h \\nu_{pump} = (E_4 - E_1)$, at best we will get out a single photon of energy",
null,
"$h \\nu_{stim} = (E_3 - E_2)$. The energy difference",
null,
"$(h \\nu_{pump} - h \\nu_{stim})$ represents unavoidable energy loss in this system."
] | [
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://bklein.ece.gatech.edu/wp-content/uploads/sites/457/2017/06/three_level-300x160.png",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://bklein.ece.gatech.edu/wp-content/uploads/sites/457/2017/06/four_level-300x201.png",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9546064,"math_prob":0.999144,"size":3215,"snap":"2021-43-2021-49","text_gpt3_token_len":668,"char_repetition_ratio":0.11865462,"word_repetition_ratio":0.02739726,"special_character_ratio":0.21337481,"punctuation_ratio":0.11819596,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99852914,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,4,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,4,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-26T14:01:58Z\",\"WARC-Record-ID\":\"<urn:uuid:86b9bc1e-5f82-49ea-b222-22a787180c8c>\",\"Content-Length\":\"49319\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:727954f1-0e5f-4419-bdeb-68eaec0eee40>\",\"WARC-Concurrent-To\":\"<urn:uuid:2cbf6c3b-0fa1-4036-8771-9640a20d5187>\",\"WARC-IP-Address\":\"34.216.237.15\",\"WARC-Target-URI\":\"https://bklein.ece.gatech.edu/laser-photonics/pumping-schemes/\",\"WARC-Payload-Digest\":\"sha1:NWB7NDEWZXBDCNTZJXAHSQJ6PCG53ZCZ\",\"WARC-Block-Digest\":\"sha1:MJ7IQMIBQUDEKQ7TOBOKU33KSEWF3PKL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323587908.20_warc_CC-MAIN-20211026134839-20211026164839-00689.warc.gz\"}"} |
https://www.geeksforgeeks.org/n-bit-johnson-counter-in-digital-logic/?ref=rp | [
"Related Articles\n\n# n-bit Johnson Counter in Digital Logic\n\n• Difficulty Level : Medium\n• Last Updated : 16 Sep, 2021\n\nPrerequisite – Counters\n\nJohnson counter also known as creeping counter, is an example of synchronous counter. In Johnson counter, the complemented output of last flip flop is connected to input of first flip flop and to implement n-bit Johnson counter we require n flip-flop.It is one of the most important type of shift register counter. It is formed by the feedback of the output to its own input.Johnson counter is a ring with an inversion.Another name of Johnson counter are:creeping counter, twisted ring counter, walking counter, mobile counter and switch tail counter.\n\nAttention reader! Don’t stop learning now. Practice GATE exam well before the actual exam with the subject-wise and overall quizzes available in GATE Test Series Course.\n\nLearn all GATE CS concepts with Free Live Classes on our youtube channel.\n\nTotal number of used and unused states in n-bit Johnson counter:\nnumber of used states=2n\nnumber of unused states=2n – 2*n\n\nExample:\nIf n=4\n4-bit Johnson counter\n\nInitially, suppose all flip-flops are reset.",
null,
"Truth Table:",
null,
"where,\nCP is clock pulse and\nQ1, Q2, Q3, Q4 are the states.\n\nQuestion: Determine the total number of used and unused states in 4-bit Johnson counter.\n\nAnswer: Total number of used states= 2*n\n= 2*4\n= 8\nTotal number of unused states= 2n – 2*n\n= 24-2*4\n= 8\n\n• The Johnson counter has same number of flip flop but it can count twice the number of states the ring counter can count.\n• It can be implemented using D and JK flip flop.\n• Johnson ring counter is used to count the data in a continuous loop.\n• Johnson counter is a self-decoding circuit.\n\n• Johnson counter doesn’t count in a binary sequence.\n• In Johnson counter more number of states remain unutilized than the number of states being utilized.\n• The number of flip flops needed is one half the number of timing signals.\n• It can be constructed for any number of timing sequence.\n\nApplications of Johnson counter:\n\n• Johnson counter is used as a synchronous decade counter or divider circuit.\n• It is used in hardware logic design to create complicated Finite states machine. ex: ASIC and FPGA design.\n• The 3 stage Johnson counter is used as a 3 phase square wave generator which produces 1200 phase shift.\n• It is used to divide the frequency of the clock signal by varying their feedback.\n\nMy Personal Notes arrow_drop_up"
] | [
null,
"https://media.geeksforgeeks.org/wp-content/cdn-uploads/11-2.jpg",
null,
"https://media.geeksforgeeks.org/wp-content/cdn-uploads/22.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8719521,"math_prob":0.7596695,"size":2437,"snap":"2021-43-2021-49","text_gpt3_token_len":544,"char_repetition_ratio":0.21208385,"word_repetition_ratio":0.029126214,"special_character_ratio":0.21748051,"punctuation_ratio":0.10042735,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9631095,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,7,null,7,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-17T23:36:00Z\",\"WARC-Record-ID\":\"<urn:uuid:810f64cf-2622-45ae-9d15-bfa9b8f19010>\",\"Content-Length\":\"105754\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:701bd9d4-59f1-4d79-9344-04813966c969>\",\"WARC-Concurrent-To\":\"<urn:uuid:a7921686-4afa-4e11-8826-41b8553c817e>\",\"WARC-IP-Address\":\"23.222.12.16\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/n-bit-johnson-counter-in-digital-logic/?ref=rp\",\"WARC-Payload-Digest\":\"sha1:2HS4OQPRDE5LKH5OM4VNRQYPHT5647MC\",\"WARC-Block-Digest\":\"sha1:SQS44425X7NHVFTY474NIXA7NYH5RQGI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585183.47_warc_CC-MAIN-20211017210244-20211018000244-00458.warc.gz\"}"} |
https://www.colorhexa.com/22e8f9 | [
"# #22e8f9 Color Information\n\nIn a RGB color space, hex #22e8f9 is composed of 13.3% red, 91% green and 97.6% blue. Whereas in a CMYK color space, it is composed of 86.3% cyan, 6.8% magenta, 0% yellow and 2.4% black. It has a hue angle of 184.7 degrees, a saturation of 94.7% and a lightness of 55.5%. #22e8f9 color hex could be obtained by blending #44ffff with #00d1f3. Closest websafe color is: #33ffff.\n\n• R 13\n• G 91\n• B 98\nRGB color chart\n• C 86\n• M 7\n• Y 0\n• K 2\nCMYK color chart\n\n#22e8f9 color description : Vivid cyan.\n\n# #22e8f9 Color Conversion\n\nThe hexadecimal color #22e8f9 has RGB values of R:34, G:232, B:249 and CMYK values of C:0.86, M:0.07, Y:0, K:0.02. Its decimal value is 2287865.\n\nHex triplet RGB Decimal 22e8f9 `#22e8f9` 34, 232, 249 `rgb(34,232,249)` 13.3, 91, 97.6 `rgb(13.3%,91%,97.6%)` 86, 7, 0, 2 184.7°, 94.7, 55.5 `hsl(184.7,94.7%,55.5%)` 184.7°, 86.3, 97.6 33ffff `#33ffff`\nCIE-LAB 84.426, -38.58, -21.054 46.61, 64.888, 99.685 0.221, 0.307, 64.888 84.426, 43.951, 208.622 84.426, -61.999, -28.073 80.553, -37.684, -16.985 00100010, 11101000, 11111001\n\n# Color Schemes with #22e8f9\n\n• #22e8f9\n``#22e8f9` `rgb(34,232,249)``\n• #f93322\n``#f93322` `rgb(249,51,34)``\nComplementary Color\n• #22f99f\n``#22f99f` `rgb(34,249,159)``\n• #22e8f9\n``#22e8f9` `rgb(34,232,249)``\n• #227df9\n``#227df9` `rgb(34,125,249)``\nAnalogous Color\n• #f99f22\n``#f99f22` `rgb(249,159,34)``\n• #22e8f9\n``#22e8f9` `rgb(34,232,249)``\n• #f9227d\n``#f9227d` `rgb(249,34,125)``\nSplit Complementary Color\n• #e8f922\n``#e8f922` `rgb(232,249,34)``\n• #22e8f9\n``#22e8f9` `rgb(34,232,249)``\n• #f922e8\n``#f922e8` `rgb(249,34,232)``\n• #22f933\n``#22f933` `rgb(34,249,51)``\n• #22e8f9\n``#22e8f9` `rgb(34,232,249)``\n• #f922e8\n``#f922e8` `rgb(249,34,232)``\n• #f93322\n``#f93322` `rgb(249,51,34)``\n• #05bac9\n``#05bac9` `rgb(5,186,201)``\n• #06d0e2\n``#06d0e2` `rgb(6,208,226)``\n• #09e5f8\n``#09e5f8` `rgb(9,229,248)``\n• #22e8f9\n``#22e8f9` `rgb(34,232,249)``\n• #3bebfa\n``#3bebfa` `rgb(59,235,250)``\n• #54edfa\n``#54edfa` `rgb(84,237,250)``\n• #6cf0fb\n``#6cf0fb` `rgb(108,240,251)``\nMonochromatic Color\n\n# Alternatives to #22e8f9\n\nBelow, you can see some colors close to #22e8f9. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #22f9d4\n``#22f9d4` `rgb(34,249,212)``\n• #22f9e6\n``#22f9e6` `rgb(34,249,230)``\n• #22f9f8\n``#22f9f8` `rgb(34,249,248)``\n• #22e8f9\n``#22e8f9` `rgb(34,232,249)``\n• #22d6f9\n``#22d6f9` `rgb(34,214,249)``\n• #22c4f9\n``#22c4f9` `rgb(34,196,249)``\n• #22b2f9\n``#22b2f9` `rgb(34,178,249)``\nSimilar Colors\n\n# #22e8f9 Preview\n\nThis text has a font color of #22e8f9.\n\n``<span style=\"color:#22e8f9;\">Text here</span>``\n#22e8f9 background color\n\nThis paragraph has a background color of #22e8f9.\n\n``<p style=\"background-color:#22e8f9;\">Content here</p>``\n#22e8f9 border color\n\nThis element has a border color of #22e8f9.\n\n``<div style=\"border:1px solid #22e8f9;\">Content here</div>``\nCSS codes\n``.text {color:#22e8f9;}``\n``.background {background-color:#22e8f9;}``\n``.border {border:1px solid #22e8f9;}``\n\n# Shades and Tints of #22e8f9\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. In this example, #000808 is the darkest color, while #f4feff is the lightest one.\n\n• #000808\n``#000808` `rgb(0,8,8)``\n• #01191b\n``#01191b` `rgb(1,25,27)``\n• #012b2e\n``#012b2e` `rgb(1,43,46)``\n• #023c41\n``#023c41` `rgb(2,60,65)``\n• #024e55\n``#024e55` `rgb(2,78,85)``\n• #036068\n``#036068` `rgb(3,96,104)``\n• #03717b\n``#03717b` `rgb(3,113,123)``\n• #04838e\n``#04838e` `rgb(4,131,142)``\n• #0495a1\n``#0495a1` `rgb(4,149,161)``\n• #05a6b4\n``#05a6b4` `rgb(5,166,180)``\n• #05b8c7\n``#05b8c7` `rgb(5,184,199)``\n• #06c9da\n``#06c9da` `rgb(6,201,218)``\n• #06dbed\n``#06dbed` `rgb(6,219,237)``\n• #0fe6f8\n``#0fe6f8` `rgb(15,230,248)``\n• #22e8f9\n``#22e8f9` `rgb(34,232,249)``\n• #35eafa\n``#35eafa` `rgb(53,234,250)``\n• #48ecfa\n``#48ecfa` `rgb(72,236,250)``\n• #5beefb\n``#5beefb` `rgb(91,238,251)``\n• #6ef0fb\n``#6ef0fb` `rgb(110,240,251)``\n• #81f2fc\n``#81f2fc` `rgb(129,242,252)``\n• #95f4fc\n``#95f4fc` `rgb(149,244,252)``\n• #a8f6fd\n``#a8f6fd` `rgb(168,246,253)``\n• #bbf8fd\n``#bbf8fd` `rgb(187,248,253)``\n• #cefafe\n``#cefafe` `rgb(206,250,254)``\n• #e1fcfe\n``#e1fcfe` `rgb(225,252,254)``\n• #f4feff\n``#f4feff` `rgb(244,254,255)``\nTint Color Variation\n\n# Tones of #22e8f9\n\nA tone is produced by adding gray to any pure hue. In this case, #8b9090 is the less saturated color, while #22e8f9 is the most saturated one.\n\n• #8b9090\n``#8b9090` `rgb(139,144,144)``\n• #829799\n``#829799` `rgb(130,151,153)``\n• #799ea2\n``#799ea2` `rgb(121,158,162)``\n• #71a6aa\n``#71a6aa` `rgb(113,166,170)``\n``#68adb3` `rgb(104,173,179)``\n• #5fb5bc\n``#5fb5bc` `rgb(95,181,188)``\n• #56bcc5\n``#56bcc5` `rgb(86,188,197)``\n• #4ec3cd\n``#4ec3cd` `rgb(78,195,205)``\n• #45cbd6\n``#45cbd6` `rgb(69,203,214)``\n• #3cd2df\n``#3cd2df` `rgb(60,210,223)``\n• #33d9e8\n``#33d9e8` `rgb(51,217,232)``\n• #2be1f0\n``#2be1f0` `rgb(43,225,240)``\n• #22e8f9\n``#22e8f9` `rgb(34,232,249)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #22e8f9 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5385944,"math_prob":0.72476417,"size":3698,"snap":"2020-34-2020-40","text_gpt3_token_len":1694,"char_repetition_ratio":0.122902006,"word_repetition_ratio":0.011111111,"special_character_ratio":0.5443483,"punctuation_ratio":0.23634337,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98516005,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-10T06:03:02Z\",\"WARC-Record-ID\":\"<urn:uuid:ede9794a-2cf9-453c-be2a-060881502e44>\",\"Content-Length\":\"36297\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3d8771e1-59b8-4640-9698-5581d18bfff8>\",\"WARC-Concurrent-To\":\"<urn:uuid:88e5ffe2-43a8-45af-b14c-4aaaa196e4c1>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/22e8f9\",\"WARC-Payload-Digest\":\"sha1:KIAZEGABNR2G3PL6WUOHBNR4ATJQH5G4\",\"WARC-Block-Digest\":\"sha1:FX3KSFXTPXVOOHIRJHMYVTSQLQE2Q5JS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738609.73_warc_CC-MAIN-20200810042140-20200810072140-00560.warc.gz\"}"} |
http://forums.wolfram.com/mathgroup/archive/2007/May/msg00162.html | [
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"Re: MathKernel Crash while calculating limit\n\n• To: mathgroup at smc.vnet.net\n• Subject: [mg75607] Re: MathKernel Crash while calculating limit\n• From: \"Nasser Abbasi\" <nma at 12000.org>\n• Date: Sun, 6 May 2007 01:54:54 -0400 (EDT)\n• References: <f1hla2\\$om6\\[email protected]>\n• Reply-to: \"Nasser Abbasi\" <nma at 12000.org>\n\n```\"Ulrik Günther\" <ulrik at 42degreesoffreedom.com> wrote in message\nnews:f1hla2\\$om6\\$1 at smc.vnet.net...\n>\n> Hi everybody,\n>\n> during a calculation of a limit, MathKernel crashed. I tried the\n> following:\n>\n> Limit[((-2^(n - 1)*M)/(n!*L^(n - 1)))^(n/2), n -> (infinity)]\n>\n> After some seconds, Mathematica produced a beep, then I got a crash\n> report concerning MathKernel. I attached a full crash log to this\n> mail. Oh, and I'm using Mathematica (for Students) 5.2 on a PowerBook\n> G4 running MacOS 10.4.9.\n> Also tried calculating that limit without the constants L and M, this\n> time it worked.\n>\n> Maybe somebody can explain that behaviour to me...\n>\n> Thanks in advance,\n>\n> ulrik\n>\n\nno problem on Mathematica 6:\n\nIn:= Limit[(((-2^(n - 1))*M)/(n!*L^(n - 1)))^(n/2), n -> infinity]\nOut= (-((2^(-1 + infinity)*L^(1 - infinity)*M)/Gamma[1 +\ninfinity]))^(infinity/2)\n\nYou need to upgrade, Mathematica 6 is VERY FAST !\n\nNasser\n\n```\n\n• Prev by Date: Re: MathKernel Crash while calculating limit\n• Next by Date: Re: unable to \"evaluate notebook\"\n• Previous by thread: Re: MathKernel Crash while calculating limit\n• Next by thread: Re: MathKernel Crash while calculating limit"
] | [
null,
"http://forums.wolfram.com/mathgroup/images/head_mathgroup.gif",
null,
"http://forums.wolfram.com/mathgroup/images/head_archive.gif",
null,
"http://forums.wolfram.com/mathgroup/images/numbers/2.gif",
null,
"http://forums.wolfram.com/mathgroup/images/numbers/0.gif",
null,
"http://forums.wolfram.com/mathgroup/images/numbers/0.gif",
null,
"http://forums.wolfram.com/mathgroup/images/numbers/7.gif",
null,
"http://forums.wolfram.com/mathgroup/images/search_archive.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7315474,"math_prob":0.69604766,"size":1251,"snap":"2019-35-2019-39","text_gpt3_token_len":413,"char_repetition_ratio":0.10825983,"word_repetition_ratio":0.041025642,"special_character_ratio":0.3661071,"punctuation_ratio":0.19615385,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9932078,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-17T15:52:27Z\",\"WARC-Record-ID\":\"<urn:uuid:d481f820-a779-4d7e-ab8d-0f92e133fa83>\",\"Content-Length\":\"43123\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7ec3ca3a-28b9-414a-835d-47920e286cfc>\",\"WARC-Concurrent-To\":\"<urn:uuid:b748bc3a-852f-4856-a0f8-5792abe36984>\",\"WARC-IP-Address\":\"140.177.205.73\",\"WARC-Target-URI\":\"http://forums.wolfram.com/mathgroup/archive/2007/May/msg00162.html\",\"WARC-Payload-Digest\":\"sha1:7L5OEWCFMGQNLBFTF7A2PHLGTUGQRE5W\",\"WARC-Block-Digest\":\"sha1:FTX6FZAYQSABZUTCRIXXNBDH2CAVVXBO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514573080.8_warc_CC-MAIN-20190917141045-20190917163045-00393.warc.gz\"}"} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.