URL | text_list | image_list | metadata
---|---|---|---|
https://metanumbers.com/19062 | [
"## 19062\n\n19,062 (nineteen thousand sixty-two) is an even five-digits composite number following 19061 and preceding 19063. In scientific notation, it is written as 1.9062 × 104. The sum of its digits is 18. It has a total of 5 prime factors and 16 positive divisors. There are 6,336 positive integers (up to 19062) that are relatively prime to 19062.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity Even\n• Number length 5\n• Sum of Digits 18\n• Digital Root 9\n\n## Name\n\nShort name 19 thousand 62 nineteen thousand sixty-two\n\n## Notation\n\nScientific notation 1.9062 × 104 19.062 × 103\n\n## Prime Factorization of 19062\n\nPrime Factorization 2 × 33 × 353\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 3 Total number of distinct prime factors Ω(n) 5 Total number of prime factors rad(n) 2118 Product of the distinct prime numbers λ(n) -1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) 0 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 19,062 is 2 × 33 × 353. Since it has a total of 5 prime factors, 19,062 is a composite number.\n\n## Divisors of 19062\n\n1, 2, 3, 6, 9, 18, 27, 54, 353, 706, 1059, 2118, 3177, 6354, 9531, 19062\n\n16 divisors\n\n Even divisors 8 8 4 4\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 16 Total number of the positive divisors of n σ(n) 42480 Sum of all the positive divisors of n s(n) 23418 Sum of the proper positive divisors of n A(n) 2655 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 138.065 Returns the nth root of the product of n divisors H(n) 7.17966 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 19,062 can be divided by 16 positive divisors (out of which 8 are even, and 8 are odd). The sum of these divisors (counting 19,062) is 42,480, the average is 2,655.\n\n## Other Arithmetic Functions (n = 19062)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 6336 Total number of positive integers not greater than n that are coprime to n λ(n) 3168 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 2172 Total number of primes less than or equal to n r2(n) 0 The number of ways n can be represented as the sum of 2 squares\n\nThere are 6,336 positive integers (less than 19,062) that are coprime with 19,062. 
And there are approximately 2,172 prime numbers less than or equal to 19,062.\n\n## Divisibility of 19062\n\n m n mod m 2 3 4 5 6 7 8 9 0 0 2 2 0 1 6 0\n\nThe number 19,062 is divisible by 2, 3, 6 and 9.\n\n• Arithmetic\n• Abundant\n\n• Polite\n\n## Base conversion (19062)\n\nBase System Value\n2 Binary 100101001110110\n3 Ternary 222011000\n4 Quaternary 10221312\n5 Quinary 1102222\n6 Senary 224130\n8 Octal 45166\n10 Decimal 19062\n12 Duodecimal b046\n20 Vigesimal 27d2\n36 Base36 epi\n\n## Basic calculations (n = 19062)\n\n### Multiplication\n\nn×i\n n×2 38124 57186 76248 95310\n\n### Division\n\nni\n n⁄2 9531 6354 4765.5 3812.4\n\n### Exponentiation\n\nni\n n2 363359844 6926365346328 132030376231704336 2516763031728748052832\n\n### Nth Root\n\ni√n\n 2√n 138.065 26.713 11.7501 7.1785\n\n## 19062 as geometric shapes\n\n### Circle\n\n Diameter 38124 119770 1.14153e+09\n\n### Sphere\n\n Volume 2.90131e+13 4.56611e+09 119770\n\n### Square\n\nLength = n\n Perimeter 76248 3.6336e+08 26957.7\n\n### Cube\n\nLength = n\n Surface area 2.18016e+09 6.92637e+12 33016.4\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 57186 1.57339e+08 16508.2\n\n### Triangular Pyramid\n\nLength = n\n Surface area 6.29358e+08 8.1628e+11 15564.1"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6284626,"math_prob":0.9940705,"size":4428,"snap":"2020-34-2020-40","text_gpt3_token_len":1583,"char_repetition_ratio":0.11980108,"word_repetition_ratio":0.02556391,"special_character_ratio":0.4530262,"punctuation_ratio":0.078431375,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9985039,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-22T02:19:51Z\",\"WARC-Record-ID\":\"<urn:uuid:74849be9-8d27-495a-9c57-6663e13602c9>\",\"Content-Length\":\"47700\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bdc1c7d0-5e9a-4265-9964-435c4cb7dd5b>\",\"WARC-Concurrent-To\":\"<urn:uuid:ebbc44ff-a554-4810-a2d6-9b976f7bf01c>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/19062\",\"WARC-Payload-Digest\":\"sha1:4QSYRUNU57IRO2XGMH2RK4FKKDZCZWKO\",\"WARC-Block-Digest\":\"sha1:ZRQT5KBBPJZ5UE5EAQTQT7PEW3TRGNJC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400202686.56_warc_CC-MAIN-20200922000730-20200922030730-00765.warc.gz\"}"} |
https://www.vedantu.com/question-answer/mohan-bought-8-oranges-for-rs-480-if-john-has-rs-class-8-maths-cbse-5fd741fa147a833c29ece19c | [
"",
null,
"",
null,
"",
null,
"Question",
null,
"Answers\n\n# Mohan bought 8 oranges for $Rs 4.80$ . If John has $Rs 7.20$ , how many oranges more than Mohan can he buy.",
null,
"",
null,
"Verified\n91.8k+ views\nHint: To solve this type of question, we can apply the unitary method. The unitary method is a technique in which we can find the value of multiple units by stepwise, finding the value of a single unit. The value of a single unit is first to find out with the help of the given value of the multiple units. This method can also be used to solve various linear equations.\n\nWe are given that,\nMoney Mohan had = $Rs 4.80$\nNumber of oranges Mohan brought =8\nMoney John has = $Rs 7.20$\nWe need to find the number of oranges John can buy, so we find the required number of oranges stepwise.\nStep1: we will first find the number of oranges brought for $1Re$ .\nTo find the number of oranges for $1Re$ we will use the given information.\nNow according to the question Mohan buys 8 oranges for $Rs 4.80$ .\nTherefore, the number of oranges brought for\n$1Re$ = $\\dfrac{8}{{4.80}} = 1.66$\nStep2: we will find the number of oranges John can buy.\nFrom the above step we have the number of oranges for $1Re$\nNow the number of oranges brought for $1Re$ is 1.66.\nJohn has $Rs 7.20$ number of oranges he can buy\n= $7.20 \\times 1.66 = 11.95$\nBut 11.95 numbers of oranges do not make any sense because the number of oranges can only have an integral value. So either the number should be 12 or 11, but to buy 12 oranges John has to pay an extra amount. So he can only buy 11 oranges.\nTherefore, John can buy 3 more oranges as compared to Mohan.\nSo, the correct answer is “3”.\n\nNote: In these types of questions, students may get confused because of the fractional values of the units, which must have integral values, so in that case, they must consider an integral part only rather than considering the whole value. Also, students must be aware of how to apply the unitary method to find the accurate values of the units."
]
| [
null,
"https://www.vedantu.com/cdn/images/seo-templates/seo-qna.svg",
null,
"https://www.vedantu.com/cdn/images/seo-templates/arrow-left.png",
null,
"https://www.vedantu.com/cdn/images/seo-templates/topic-sprites.svg",
null,
"https://www.vedantu.com/cdn/images/seo-templates/topic-sprites.svg",
null,
"https://www.vedantu.com/cdn/images/seo-templates/topic-sprites.svg",
null,
"https://www.vedantu.com/cdn/images/seo-templates/green-check.svg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.91273785,"math_prob":0.9946483,"size":1850,"snap":"2021-43-2021-49","text_gpt3_token_len":488,"char_repetition_ratio":0.1928494,"word_repetition_ratio":0.061946902,"special_character_ratio":0.27351353,"punctuation_ratio":0.116161615,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99771714,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-24T11:35:58Z\",\"WARC-Record-ID\":\"<urn:uuid:793c6bc4-8999-405f-8dc8-f1a90ab17f4d>\",\"Content-Length\":\"50691\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5320d924-e485-48b5-89c6-d0b09ff5e8e3>\",\"WARC-Concurrent-To\":\"<urn:uuid:bcc9f601-808d-4114-b479-5a953f68761b>\",\"WARC-IP-Address\":\"18.67.76.26\",\"WARC-Target-URI\":\"https://www.vedantu.com/question-answer/mohan-bought-8-oranges-for-rs-480-if-john-has-rs-class-8-maths-cbse-5fd741fa147a833c29ece19c\",\"WARC-Payload-Digest\":\"sha1:CDXB544IEPHEN2OGIXDOI32UGBVQYZHQ\",\"WARC-Block-Digest\":\"sha1:ABHOBPGEWT7BOIWLW2VRXUWWDPXUIN3F\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585997.77_warc_CC-MAIN-20211024111905-20211024141905-00214.warc.gz\"}"} |
https://calcpercentage.com/91-is-98-percent-of-what | [
"# PercentageCalculator, 91 is 98 Percent of what?\n\n## 91 is 98 Percent of what? 91 is 98 Percent of 92.86\n\n%\n\n### How to Calculate 91 is 98 Percent of what?\n\n• F\n\nFormula\n\n91 ÷ 98%\n\n• 1\n\nConvert percent to decimal\n\n98 ÷ 100 = 0.98\n\n• 2\n\nDivide number by decimal number (from the first step)\n\n91 ÷ 0.98 = 92.86 So 91 is 98% of 92.86\n\n#### Example\n\nFor example, John owns 91 shares, and the percentage of John shares is 98%. 91 is 98 Percent of what? 98 ÷ 100 = 0.98 91 ÷ 0.98 = 92.86 So 91 is 98% of 92.86, that mean John has 91 of 92.86 shares"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9025627,"math_prob":0.9902369,"size":350,"snap":"2022-27-2022-33","text_gpt3_token_len":138,"char_repetition_ratio":0.15028901,"word_repetition_ratio":0.19178082,"special_character_ratio":0.4942857,"punctuation_ratio":0.15384616,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99943674,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-28T12:26:17Z\",\"WARC-Record-ID\":\"<urn:uuid:b404625a-adcf-45df-b765-57533f76eaea>\",\"Content-Length\":\"11550\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:18ab0ccd-7c7f-4742-a304-e24476553997>\",\"WARC-Concurrent-To\":\"<urn:uuid:5df3d520-c2df-408d-9926-90be1b4df081>\",\"WARC-IP-Address\":\"76.76.21.93\",\"WARC-Target-URI\":\"https://calcpercentage.com/91-is-98-percent-of-what\",\"WARC-Payload-Digest\":\"sha1:GOQPN35K3D3A47TD3F4VCQPUR57OVGWP\",\"WARC-Block-Digest\":\"sha1:R7XTZQQJ7QHZAUB2AEXMT7VDIQUXR5CX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103516990.28_warc_CC-MAIN-20220628111602-20220628141602-00559.warc.gz\"}"} |
https://exceljet.net/formulas/get-relative-row-numbers-in-range | [
"## Summary\n\nTo get a full set of relative row numbers in a range, you can use an array formula based on the ROW function. In the example shown, the formula in B5:B11 is:\n\n``````{=ROW(B5:B11)-ROW(B5)+1}\n``````\n\nNote: this is an array formula that must be entered with Control + Shift + Enter. If you're entering this on the worksheet (and not inside another formula), make a selection that includes more than one row, enter the formula, and confirm with Control + Shift + Enter.\n\nThis is formula will continue to generate relative numbers even when the range is moved. However, it's not a good choice if rows need to be sorted, deleted, or added, because the array formula will prevent changes. The formula options explained here are will work better.\n\n## Generic formula\n\n``{=ROW(range)-ROW(range.firstcell)+1}``\n\n## Explanation\n\nThe first ROW function generates an array of 7 numbers like this:\n\n``````{5;6;7;8;9;10;11}\n``````\n\nThe second ROW function generates an array with just one item like this:\n\n``````{5}\n``````\n\nwhich is then subtracted from the first array to yield:\n\n``````{0;1;2;3;4;5;6}\n``````\n\nFinally, 1 is added to get:\n\n``````{1;2;3;4;5;6;7}\n``````\n\n### Generic version with named range\n\nWith a named range, you can create a more generic version of the formula using the MIN function or the INDEX function. For example, with the named range \"list\", you can use MIN like this:\n\n``````{ROW(list)-MIN(ROW(list))+1}\n``````\n\nWith INDEX, we fetch the first reference in the named range, and using ROW on that:\n\n``````{=ROW(list)-ROW(INDEX(list,1,1))+1}\n``````\n\nYou'll often see \"relative row\" formulas like this inside complex array formulas that need row numbers to calculate a result.\n\n### With SEQUENCE\n\nWith the SEQUENCE function the formula to return relative row numbers for a range is simple:\n\n``````=SEQUENCE(ROWS(range))\n``````\n\nThe ROWS function provides the count of rows, which is returned to the SEQUENCE function. SEQUENCE then builds an array of numbers, starting with the number 1. So, following the original example above, the formula below returns the same result:\n\n``````=SEQUENCE(ROWS(B5:B11)) // returns {1;2;3;4;5;6;7}\n``````\n\nNote: the SEQUENCE formula is a new dynamic array function available only in Excel 365.",
null,
"Author",
null,
"### Dave Bruns\n\nHi - I'm Dave Bruns, and I run Exceljet with my wife, Lisa. Our goal is to help you work faster in Excel. We create short videos, and clear examples of formulas, functions, pivot tables, conditional formatting, and charts."
]
| [
null,
"https://exceljet.net/sites/default/files/images/blocks/dave-round.webp",
null,
"https://exceljet.net/sites/default/files/images/blocks/microsoft-mvp-logo.webp",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6998553,"math_prob":0.9684479,"size":1298,"snap":"2022-40-2023-06","text_gpt3_token_len":342,"char_repetition_ratio":0.14219475,"word_repetition_ratio":0.01010101,"special_character_ratio":0.2650231,"punctuation_ratio":0.17406143,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99599016,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-30T01:15:20Z\",\"WARC-Record-ID\":\"<urn:uuid:c0d91f9c-a5d7-43e3-bdf1-874fc90ab0fe>\",\"Content-Length\":\"47293\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:882f8b65-7854-4f0a-a57d-238312b68bca>\",\"WARC-Concurrent-To\":\"<urn:uuid:2b08413e-31e4-4c37-879a-3861b81526ba>\",\"WARC-IP-Address\":\"45.33.22.186\",\"WARC-Target-URI\":\"https://exceljet.net/formulas/get-relative-row-numbers-in-range\",\"WARC-Payload-Digest\":\"sha1:7GKV2QOIMNCVMJH6LCEPD5CNM2RSWZO4\",\"WARC-Block-Digest\":\"sha1:IIUM3AUTHG47IJCTZKNBHOFW2TQ6TC7J\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499790.41_warc_CC-MAIN-20230130003215-20230130033215-00849.warc.gz\"}"} |
https://scholar.archive.org/work/cubgtupywjgr3lhqwtbj5fafce | [
"### An Arithmetic for Rooted Trees * 1 Basic properties and notation\n\nFabrizio Luccio\n2016 14 Leibniz International Proceedings in Informatics Schloss Dagstuhl-Leibniz-Zentrum für Informatik unpublished\nWe propose a new arithmetic for non-empty rooted unordered trees simply called trees. After discussing tree representation and enumeration, we define the operations of tree addition, multiplication , and stretch, prove their properties, and show that all trees can be generated from a starting tree of one vertex. We then show how a given tree can be obtained as the sum or product of two trees, thus defining prime trees with respect to addition and multiplication. In both cases we show how\nmore » ... we show how primality can be decided in time polynomial in the number of vertices and prove that factorization is unique. We then define negative trees and suggest dealing with tree equations, giving some preliminary examples. Finally we comment on how our arithmetic might be useful, and discuss preceding studies that have some relations with ours. The parts of this work that do not concur to an immediate illustration of our proposal, including formal proofs, are reported in the Appendix. To the best of our knowledge our proposal is completely new and can be largely modified in cooperation with the readers. To the ones of his age the author suggests that \"many roads must be walked down before we call it a theory\". We refer to rooted unordered trees simply called trees. Our trees are non empty. 1 denotes the tree containing exactly one vertex, and is the basic element of our theory. In a tree T , r (T) denotes the root of T ; x ∈ T denotes any of its vertices; n T and e T respectively denote the numbers of vertices and leaves. A subtree is the tree composed of a vertex x and all its descendants in T. The subtrees routed at the children of x are called subtrees of x. s T denotes the number of subtrees of r (T). A tree T can be represented as a binary sequences S T (the original reference for ordered trees is ). In our scheme T is traversed in left to right preorder inserting 1 in the sequence for each vertex encountered, and inserting 0 for each move backwards. Then S T is composed of 2n bits as shown in Figure 1, and has a balanced parenthesis recursive structure 1 S 1. .. S k 0 where the S i are the sequences representing the subtrees of r (T). The sequences for tree 1 is 10. Note that all the prefixes of S T have more 1's than 0's except for the whole sequence that has as many 1's as 0's. Since T is unordered, the order in which the subsequences S i appear in S T is immaterial (i.e., in general many different sequences represent T). However a canonical form for trees is *"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9376104,"math_prob":0.96117216,"size":2747,"snap":"2021-31-2021-39","text_gpt3_token_len":616,"char_repetition_ratio":0.11775428,"word_repetition_ratio":0.008230452,"special_character_ratio":0.21587186,"punctuation_ratio":0.09158879,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99471337,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-21T21:26:02Z\",\"WARC-Record-ID\":\"<urn:uuid:467668bc-49c9-4aad-bd5d-b09527b8addf>\",\"Content-Length\":\"17691\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4be47ec0-b16d-4d22-8cf2-68fbac20a6bd>\",\"WARC-Concurrent-To\":\"<urn:uuid:27a20020-f407-46bd-8d54-09ea5a29618e>\",\"WARC-IP-Address\":\"207.241.225.9\",\"WARC-Target-URI\":\"https://scholar.archive.org/work/cubgtupywjgr3lhqwtbj5fafce\",\"WARC-Payload-Digest\":\"sha1:62A3PYRQNTNUDCOB5BIQS5WPGRFTEJ2W\",\"WARC-Block-Digest\":\"sha1:OGVHQWNT6TIKW4ZDZCSSJENYMHBYU4UC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057227.73_warc_CC-MAIN-20210921191451-20210921221451-00061.warc.gz\"}"} |
https://www.jpost.com/opinion/op-ed-contributors/leadership-in-war-and-diplomacy-archive | [
"(function (a, d, o, r, i, c, u, p, w, m) { m = d.getElementsByTagName(o), a[c] = a[c] || {}, a[c].trigger = a[c].trigger || function () { (a[c].trigger.arg = a[c].trigger.arg || []).push(arguments)}, a[c].on = a[c].on || function () {(a[c].on.arg = a[c].on.arg || []).push(arguments)}, a[c].off = a[c].off || function () {(a[c].off.arg = a[c].off.arg || []).push(arguments) }, w = d.createElement(o), w.id = i, w.src = r, w.async = 1, w.setAttribute(p, u), m.parentNode.insertBefore(w, m), w = null} )(window, document, \"script\", \"https://95662602.adoric-om.com/adoric.js\", \"Adoric_Script\", \"adoric\",\"9cc40a7455aa779b8031bd738f77ccf1\", \"data-key\");\nvar domain=window.location.hostname; var params_totm = \"\"; (new URLSearchParams(window.location.search)).forEach(function(value, key) {if (key.startsWith('totm')) { params_totm = params_totm +\"&\"+key.replace('totm','')+\"=\"+value}}); var rand=Math.floor(10*Math.random()); var script=document.createElement(\"script\"); script.src=`https://stag-core.tfla.xyz/pre_onetag?pub_id=34&domain=\\${domain}&rand=\\${rand}&min_ugl=0\\${params_totm}`; document.head.append(script);",
null,
""
]
| [
null,
"https://images.jpost.com/image/upload/f_auto,fl_lossy/c_fill,g_faces:center,h_537,w_822/9371",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9735334,"math_prob":0.96919554,"size":5020,"snap":"2023-14-2023-23","text_gpt3_token_len":1093,"char_repetition_ratio":0.10027911,"word_repetition_ratio":0.0,"special_character_ratio":0.20896414,"punctuation_ratio":0.07942974,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9789482,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-31T00:51:34Z\",\"WARC-Record-ID\":\"<urn:uuid:82ef0f96-22e2-4b27-8851-6d09ddf29c16>\",\"Content-Length\":\"86150\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:647947bb-c848-49ed-8add-51d6029e6e7a>\",\"WARC-Concurrent-To\":\"<urn:uuid:68028df2-c576-4e9c-a342-5d41d9442732>\",\"WARC-IP-Address\":\"159.60.130.79\",\"WARC-Target-URI\":\"https://www.jpost.com/opinion/op-ed-contributors/leadership-in-war-and-diplomacy-archive\",\"WARC-Payload-Digest\":\"sha1:XP32FVKXVIUJ5MDFHXPJDVYLOF4Z7NEO\",\"WARC-Block-Digest\":\"sha1:7AHFL3ZH27KTUFAQU3NTWZENBMIEZZ2L\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224646181.29_warc_CC-MAIN-20230530230622-20230531020622-00227.warc.gz\"}"} |
https://ch.mathworks.com/matlabcentral/cody/problems/44253-compute-the-intersect-point-of-line-and-plan | [
"Cody\n\n# Problem 44253. Compute the intersect point of line and plan\n\nCompute the intersect point of line and plan.\n\neg. line AB, the coordinate of A is (1,2,3)\n\nthe coordinate of B is (-4,-5,-6)\n\nthe plan is made up by three points D, E and F\n\nthe coordinate of D is (8, 1, 6)\n\nthe coordinate of E is (3, 5, 7)\n\nthe coordinate of F is (4, 9, 2)\n\nso the intersect point of line AB and plan DEF is point O\n\nthe coordinate of O is (3.1429, 5.0000,6.8571)\n\n### Solution Stats\n\n57.69% Correct | 42.31% Incorrect\nLast Solution submitted on Oct 21, 2019"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8177251,"math_prob":0.99412066,"size":633,"snap":"2020-24-2020-29","text_gpt3_token_len":196,"char_repetition_ratio":0.19236884,"word_repetition_ratio":0.01724138,"special_character_ratio":0.3270142,"punctuation_ratio":0.13605443,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9701753,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-07T00:27:18Z\",\"WARC-Record-ID\":\"<urn:uuid:e66b826a-701d-43f3-ad2e-d94de04ce118>\",\"Content-Length\":\"77018\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:34f325c6-e831-48ef-847c-16b17e68c707>\",\"WARC-Concurrent-To\":\"<urn:uuid:7dd27dff-f249-4fd9-bfc4-765ee453036d>\",\"WARC-IP-Address\":\"104.110.193.39\",\"WARC-Target-URI\":\"https://ch.mathworks.com/matlabcentral/cody/problems/44253-compute-the-intersect-point-of-line-and-plan\",\"WARC-Payload-Digest\":\"sha1:LTYIXGCKUZYBYGYBHWUA3SD3Q3MHRMOK\",\"WARC-Block-Digest\":\"sha1:LF53R32MXIZ7AZXCNAZ7SW2ZKRPCTKED\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590348521325.84_warc_CC-MAIN-20200606222233-20200607012233-00003.warc.gz\"}"} |
https://scirp.org/journal/paperinformation.aspx?paperid=87306 | [
"Additional Arguments for a Correction of the Debye-Hückel, Maxwell-Boltzmann Equations for Dilute Electrolyte Equilibria\n\nPeter Debye and Erich Hückel had developed a theory for the ionic activity coefficients in dilute solutions of strong electrolytes some 95 years ago . Their limiting law still stands and is confirmed as close to reality in many experiments. In a previous article , it is shown that these limiting activity coefficients arise because the electrical contribution in the electrochemical potential of ionic species is overestimated traditionally with a factor 2. The smaller value removes inconsistencies in the models and complies better with the basic electrostatic principles. In this article further evidence is given in support of this alternative description. As consequence the dilute activity coefficients become unity, e.g. are removed, which means that the electrochemical potential of ions in dilute solutions is expressed directly in concentration, instead of activity, which simplifies modelling in such dilute solutions.\n\nKeywords\n\nShare and Cite:\n\nWeg, P. (2018) Additional Arguments for a Correction of the Debye-Hückel, Maxwell-Boltzmann Equations for Dilute Electrolyte Equilibria. American Journal of Analytical Chemistry, 9, 406-422. doi: 10.4236/ajac.2018.99032.\n\n1. Introduction\n\nThe behavior of a strong electrolyte in the limit of high dilution is given by the formulae due to Debye and Hückel . Modifications to the theories were discussed in 2009 . This publication gives a discussion and additional information in support of the theory.\n\n2. Thermodynamics\n\nFor the energy of a phase we can write:\n\n$\\text{d}U=T\\text{d}S-p\\text{d}V+\\sum {\\stackrel{˜}{\\mu }}_{i}\\text{d}{n}_{i}$ (1)\n\nAnd for the Gibbs free energy:\n\n$\\text{d}G=-S\\text{d}T+V\\text{d}p+\\sum {\\stackrel{˜}{\\mu }}_{i}\\text{d}{n}_{i}$ (2)\n\nThe free energy is constant in a closed system (all dni º 0) kept at constant temperature and pressure, e.g. at constant intensive variables (dT = 0, dp = 0).\n\nWe integrate the energy at constant intensities of a phase and differentiate to retrieve the Gibbs-Duhem equations:\n\n$0=S\\text{d}T-V\\text{d}p+\\sum {n}_{i}\\text{d}{\\stackrel{˜}{\\mu }}_{i}$ (3)\n\nThe thermodynamics require that for ionic equilibrium in electric fields the electrochemical potential (e.g. total free energy per mole of ion i).\n\n${\\stackrel{˜}{\\mu }}_{i}={\\left(\\frac{\\partial G}{\\partial {n}_{i}}\\right)}_{T,p,{n}_{j\\ne i}}$ (4)\n\nis constant for each constituent, over the regions that are accessible for these charged particles, where\n\n${\\stackrel{˜}{\\mu }}_{i}={\\mu }_{i}^{0}+RT\\mathrm{ln}{\\gamma }_{i}{c}_{i}+{z}_{i}F{\\varphi }_{i}$ (5)\n\nHere ${\\gamma }_{i}$ is the activity coefficient, F the Faraday constant, ci the concentration of ion i, zi the charge of ion i, fi the total electric potential experienced by the ion i from the surrounding ions (ion cloud) and (if present) the externally applied electrical field, R the gas constant, and T the absolute temperature. Normally, for uncharged phases, we add the chemical components as neutral salts, but with a nonzero space charge that need not be the case. In chemical experiments we often assume that the space charge is still zero, and the charges accumulate on surfaces, with the counter charge nearby (polarized electrical double layers). 
If we express Equation (5) for one single ion: ${\\stackrel{˜}{\\mu }}_{i}={\\mu }_{i}^{0}+kT\\mathrm{ln}{\\gamma }_{i}{c}_{i}+{z}_{i}e{\\varphi }_{i}={\\mu }_{i}^{0}+kT\\mathrm{ln}{\\gamma }_{i}{c}_{i}+{q}_{i}{\\varphi }_{i}$ . The electrical work per ion is expressed as a product of ion charge times potential, where k is the Boltzmann constant and e the elementary positive charge.\n\nThe electrochemical potential often is expressed as Equation (5), containing an idealized chemical term RTlnci and an electrical term ziFfi. How we do the accounting of electrical and chemical energy in the electrochemical potential exactly is rather irrelevant, as it is only the total electrochemical potential that can be measured experimentally. Any deviation in the measurement from the defined idealized two model terms is accounted for via the activity coefficient ${\\gamma }_{i}$ :\n\n${\\stackrel{˜}{\\mu }}_{i}={\\mu }_{i}^{0}+RT\\mathrm{ln}{c}_{i}+{z}_{i}F{\\varphi }_{i}+RT\\mathrm{ln}{\\gamma }_{i}$ (6)\n\nThe chemical term and the electrical term are idealized contributions based on hypothetical “idealized” model systems. Any deviation from that ideal behavior for real systems goes into the last term on the right-hand side of Equation (6) containing the activity coefficient. If the idealized chemical and electrical term together describes reality well, then the activity coefficient ${\\gamma }_{i}$ is close to unity and the last term on the right-hand side of Equation (6) is only a small correction.\n\nThe chemical part is obviously based on the idealized system that obeys the Maxwell-Boltzmann distribution law. This equation has proven its merits in many chemical experiments involving mixtures of uncharged chemicals (e.g. ideal mixing behavior).\n\nThe electrical part ziefI = qifi per ion is obviously based on the notion that electrical work can be expressed as a differential electrical work contribution dWe = fdq for a test charge dq brought to a potential f, or as dWe = qdf where we let a charge q charge travel through a potential difference df. Since dWe = Fedx we may also write Fe = df/dx = E, where E is the electrical field strength in direction x, such that the electrical force is Fe = qE, which is equivalent to dWe = qdf.\n\nThe question is now simply: Are these idealized model contributions for the chemical and electrical contribution chosen the best ones, e.g. do they lead to activity coefficients that are close to unity, so that the last term in the right-hand side of Equation (6) is only a small correction?\n\nThe activity coefficients can be obtained experimentally by all kind of experiments. Notably the potential measurements in concentration cells without transfer can be made very accurate in determining the free energy and hence in determining the value of the activity coefficients. Thus, we can check our model terms by independent experiments and verify how accurate our idealized models are. If the activity coefficients appear close to unity, our chosen two models for respectively the chemical interaction and the electrical interaction are obeyed closely, or we are just lucky in the fact that the errors in the models compensate each other and cancel out.\n\nWe can also pose the two model contributions and try to set up an extra model for the expected “idealized” deviations from the reality and calculate model predictions for the activity coefficients. This is essentially the route taken by Debye and Hückel. 
They model the deviations from the idealized terms as given by the Coulombic interaction energy calculated from the presence of the ion cloud, a smeared-out rotationally symmetric opposing charge surrounding each ion in solution. Experiments indicate that their model predictions are accurate in the low concentration range for aqueous mixtures of strong electrolytes, because in practice the measured activity coefficients follow the predicted trends at lower concentrations for the aqueous mixes of these strong electrolytes.\n\n3. Debye-Hückel Model\n\nThe Debye-Hückel model represents ions as idealized point charges that have an electrical interaction as described by Coulomb's law, as captured in the Poisson equation. The ions distribute according to a Maxwell-Boltzmann distribution. This is the case both in the limiting law and in the extended equations. In the extended equations the point charges have a finite size.\n\nThere are local concentration variations for the ions as a consequence of their charge: there is a higher probability to find a charge of opposite sign (which is attracted) than of the same sign (which is repelled) near a particular charge. The interactions are calculated by assuming that the charge of the counter ions averages out to a smeared-out rotationally-symmetric space charge or ion cloud that interacts with the central ion of exactly the opposite charge. Many of the inherent assumptions are touched upon in . Ref. also contains an introduction to the mean spherical approximation (MSA) theory.\n\nAccording to the limiting law of Debye and Hückel:\n\n$RT\\mathrm{ln}{\\gamma }_{i}^{DH}=-\\frac{{z}_{i}^{2}eF\\kappa }{8\\text{π}\\epsilon }$ (7)\n\nor according to the extended equations of Debye-Hückel:\n\n$RT\\mathrm{ln}{\\gamma }_{i}^{DH}=-\\frac{{z}_{i}^{2}eF}{8\\text{π}\\epsilon }\\frac{\\kappa }{1+\\kappa a}$ (8)\n\nwhere e is the elementary charge, ε the dielectric constant, a the distance of closest approach of the ions and κ the reciprocal of the classic Debye length, with\n\n${\\kappa }^{2}=\\frac{2{F}^{2}}{\\epsilon RT}\\rho I$ (9)\n\nwhere I is the ionic strength in molal units and ρ the density of the liquid. These model values for the activity coefficients have been shown many times to be accurate in the lower concentration range, where long-range charge-charge interactions, e.g. 
Coulomb interactions, dominate.\n\nAccording to that same model we calculate the potential from the surrounding ion cloud within the limiting law as\n\n${\\varphi }_{i}^{DH}=\\frac{-{z}_{i}e\\kappa }{4\\text{π}\\epsilon }$ (10)\n\nor within the extended law as\n\n${\\varphi }_{i}^{DH}\\left(r=a\\right)=\\frac{-{z}_{i}e}{4\\text{π}\\epsilon }\\frac{\\kappa }{1+\\kappa a}$ (11)\n\nHence, for both the limiting law and the extended law of the Debye-Hückel theory, we may simply write:\n\n$RT\\mathrm{ln}{\\gamma }_{i}^{DH}=\\frac{1}{2}{z}_{i}F{\\varphi }_{i}{}^{DH}$ (12)\n\nApparently we may write something like:\n\n${\\stackrel{˜}{\\mu }}_{i}={\\mu }_{i}^{0}+RT\\mathrm{ln}{c}_{i}+\\frac{1}{2}{z}_{i}F{\\varphi }_{i}\\left(\\text{ion}\\text{\\hspace{0.17em}}\\text{cloud}\\right)+{z}_{i}F{\\varphi }_{i}\\left(\\text{external}\\right)$ (13)\n\nIn case of neutral salt solutions, when the external field is zero, we may write\n\n${\\stackrel{˜}{\\mu }}_{i}={\\mu }_{i}^{0}+RT\\mathrm{ln}{c}_{i}+\\frac{1}{2}{z}_{i}F{\\varphi }_{i}\\left(\\text{ion}\\text{\\hspace{0.17em}}\\text{cloud}\\right)$ (14)\n\nIn that case we write for a completely dissociated 1-1 electrolyte at concentration c, without external potential, for the +ion:\n\n${\\stackrel{˜}{\\mu }}_{+}={\\mu }_{+}^{0}+RT\\mathrm{ln}{\\gamma }_{+}{c}_{+}={\\mu }_{+}^{0}+RT\\mathrm{ln}{c}_{+}+\\frac{1}{2}{z}_{+}F{\\varphi }_{+}\\left(\\text{ion}\\text{\\hspace{0.17em}}\\text{cloud}\\right)$ (15)\n\nAnd for the −ion:\n\n${\\stackrel{˜}{\\mu }}_{-}={\\mu }_{-}^{0}+RT\\mathrm{ln}{\\gamma }_{-}{c}_{-}={\\mu }_{-}^{0}+RT\\mathrm{ln}{c}_{-}+\\frac{1}{2}{z}_{-}F{\\varphi }_{-}\\left(\\text{ion}\\text{\\hspace{0.17em}}\\text{cloud}\\right)$ (16)\n\nor for the total salt $c={c}_{+}={c}_{-}$ :\n\n${\\mu }_{salt}={\\stackrel{˜}{\\mu }}_{+}+{\\stackrel{˜}{\\mu }}_{-}={\\mu }_{salt}^{0}+RT\\mathrm{ln}{\\gamma }_{+}{c}_{+}+RT\\mathrm{ln}{\\gamma }_{-}{c}_{-}={\\mu }_{salt}^{0}+2RT\\mathrm{ln}{\\gamma }_{±}c$ (17)\n\nwhere within the DH approximation the values for ${\\gamma }_{±}={\\gamma }_{+}={\\gamma }_{-}$ are equal and given by Equations (7)-(9). This equation has been proven many times to be a good approximation for the activity coefficients of aqueous electrolyte solutions in the lower concentration range. So, we can interpret the ion/cloud interaction in a neutral electrolyte as a chemical interaction, Equation (17), e.g. expressed as activity coefficients, or as an electrical interaction from the micro potentials as given by Equations (15) and (16).\n\nSimilar, but more complicated, equations result for the more general n-m electrolytes and their mixes, which are also good approximations in the lower concentration range, as shown by independent experimental verification.\n\nNow the dilemma is that Equation (13) shows that an ion in the classical theory responds differently to the potential from the ion cloud (micro potential) than to an externally applied field (macro potential). It is strange that an ion can feel the difference between an externally applied electrical field and a local electrical field from surrounding ions. The response to the ion cloud is reasonably quantified in many independent experiments to determine mean ionic activity coefficients for mixtures of salts at low concentrations. 
The response to external fields is more difficult to measure, as the c+ and c− concentrations will become different and single ionic activity coefficients cannot be assessed in isolation.\n\nThe only suggestion I make in the JCIS article is to assume that the factor ½ might be more general than only for the ion cloud in the DH theory. I show that if we redefine the electrochemical potential as\n\n${\\stackrel{˜}{\\mu }}_{i}={\\mu }_{i}^{0}+RT\\mathrm{ln}{c}_{i}+\\frac{1}{2}{z}_{i}F{\\varphi }_{i}$ (18)\n\nand thus assume a different model for the electrical interaction, and repeat the procedure of DH, we arrive at almost the same equations, ${\\stackrel{˜}{\\mu }}_{i}={\\mu }_{i}^{0}+RT\\mathrm{ln}{\\gamma }_{i}^{DH}{c}_{i}$ , in the limiting law:\n\n$RT\\mathrm{ln}{\\gamma }_{i}^{DH}=-\\frac{{z}_{i}^{2}eF\\kappa \\sqrt{\\frac{1}{2}}}{8\\text{π}\\epsilon }$ (19)\n\nor according to the extended equations\n\n$RT\\mathrm{ln}{\\gamma }_{i}^{DH}=-\\frac{{z}_{i}^{2}eF}{8\\text{π}\\epsilon }\\frac{\\kappa \\sqrt{\\frac{1}{2}}}{1+\\kappa a\\sqrt{\\frac{1}{2}}}$ (20)\n\nwhich can, in both cases, equivalently be written again as:\n\n${\\stackrel{˜}{\\mu }}_{i}={\\mu }_{i}^{0}+RT\\mathrm{ln}{c}_{i}+\\frac{1}{2}{z}_{i}F{\\varphi }_{i}$ (21)\n\nThis Equation (21) is a general equation where the potential φi is the total potential (both the macro potential from an external field plus the micro potential from the surrounding ion cloud). In this case the theory is internally consistent, because the starting Equation (18) is identical to the resulting Equation (21).\n\nIn case of an electrolyte without (external) macro potential we only have the micro potential from the surrounding ion cloud, and thus according to the extended DH model:\n\n$\\frac{1}{2}{z}_{i}F{\\varphi }_{i}\\left(\\text{micropotential}\\right)=RT\\mathrm{ln}{\\gamma }_{i}^{DH}=-\\frac{{z}_{i}^{2}eF}{8\\text{π}\\epsilon }\\frac{\\kappa \\sqrt{\\frac{1}{2}}}{1+\\kappa a\\sqrt{\\frac{1}{2}}}$\n\nThus, when we interpret the micro potential as a “chemical” interaction:\n\n${\\stackrel{˜}{\\mu }}_{i}={\\mu }_{i}^{0}+RT\\mathrm{ln}{c}_{i}-\\frac{{z}_{i}^{2}eF}{8\\text{π}\\epsilon }\\frac{\\kappa \\sqrt{\\frac{1}{2}}}{1+\\kappa a\\sqrt{\\frac{1}{2}}}$\n\nor for a 1-1 salt:\n\n$\\begin{array}{c}{\\mu }_{salt}={\\stackrel{˜}{\\mu }}_{+}+{\\stackrel{˜}{\\mu }}_{-}={\\mu }_{salt}^{0}+RT\\mathrm{ln}{\\gamma }_{+}{c}_{+}+RT\\mathrm{ln}{\\gamma }_{-}{c}_{-}\\\\ ={\\mu }_{salt}^{0}+2RT\\mathrm{ln}c-2\\frac{{z}_{i}^{2}eF}{8\\text{π}\\epsilon }\\frac{\\kappa \\sqrt{\\frac{1}{2}}}{1+\\kappa a\\sqrt{\\frac{1}{2}}}\\end{array}$\n\nThe advantage is that now the theory is internally consistent: the equation for the electrochemical potential of an ion that we start with is reproduced exactly in the model calculation.\n\nIn the experimental showcase discussed in the JCIS article, it is shown that this equation even fits slightly better than the original DH, which indicates that the differences in practice are small, but still in favor of the new theory. In fact, the equations are identical if we replace ${\\kappa }^{\\prime }=\\kappa \\sqrt{1/2}$ , e.g. the new theory leads to a similar exponential decay of the potential of the ions of the ion cloud, only with a different reciprocal Debye length ${\\kappa }^{\\prime }$ . The counter ions are effectively a factor $\\sqrt{2}$ further out, since their electrical interaction energy is only half of that in the classical DH theory. 
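To make the size of the proposed correction concrete, the limiting-law coefficients of both variants can be evaluated numerically. The sketch below (an illustration, not code from the paper) implements Equations (7) and (9) and the corrected Equation (19) for a 1-1 aqueous electrolyte at 25 °C; the relative permittivity of water (78.4) and the conversion of molal ionic strength with ρ ≈ 1 kg/L are assumed values:

```python
import math

e   = 1.602176634e-19            # elementary charge, C
F   = 96485.33212                # Faraday constant, C/mol
R   = 8.314462618                # gas constant, J/(mol K)
T   = 298.15                     # temperature, K
eps = 78.4 * 8.8541878128e-12    # assumed permittivity of water at 25 C, F/m

def kappa(ionic_strength: float) -> float:
    """Reciprocal Debye length from Eq. (9); rho*I taken as mol/m^3 (rho ~ 1 kg/L)."""
    return math.sqrt(2 * F**2 * (ionic_strength * 1000.0) / (eps * R * T))

def gamma(c: float, z: int = 1, corrected: bool = False) -> float:
    """Limiting-law activity coefficient: Eq. (7), or Eq. (19) with sqrt(1/2)."""
    k = kappa(c) * (math.sqrt(0.5) if corrected else 1.0)
    return math.exp(-(z**2) * e * F * k / (8 * math.pi * eps) / (R * T))

for c in (0.001, 0.01, 0.1):     # mol/kg; for a 1-1 salt I = c
    print(f"{c:5.3f} M  classic {gamma(c):.4f}  corrected {gamma(c, True):.4f}")
```

At 0.001 M this gives roughly 0.96 versus 0.97, so the two versions stay close at low concentration, as the text asserts.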
So, all the DH trends remain valid (proportionality of $RT\\mathrm{ln}{\\gamma }_{i}$ with $\\sqrt{c}$ , similar trends of 1-1, 1-2, 1-3, or 2-3 electrolytes, the impact of ionic strength, etcetera), except for a slight change in the values of the activity coefficients from the extra factor $\\sqrt{1/2}$ .\n\nAs already stated, the terms chosen for the chemical and the electrical contribution in the electrochemical potential are chosen arbitrarily and cannot be measured independently. Any misfit goes into the activity coefficients. But we have shown that a definition\n\n${\\stackrel{˜}{\\mu }}_{i}={\\mu }_{i}^{0}+RT\\mathrm{ln}{c}_{i}+\\frac{1}{2}{z}_{i}F{\\varphi }_{i}$ (22)\n\nor per ion\n\n${\\stackrel{˜}{\\mu }}_{i}={\\mu }_{i}^{0}+kT\\mathrm{ln}{c}_{i}+\\frac{1}{2}{z}_{i}e{\\varphi }_{i}$\n\nwill deliver activity coefficients that are close to unity in the low concentration Coulombic region where normally the DH activity coefficients apply. Therefore, the electrical interaction term used here is probably closer to reality, at least for the micro potential from the ion cloud.\n\nThe rest of this note is going to show that in many cases the definition of electrical work according to the last term in the last two equations is indeed appropriate.\n\n4. Electrical Energy\n\n4.1. Capacitor\n\nThe most striking example is the energy of an electrical capacitor. According to elementary electrodynamics, for a capacitor charged to total charge q:\n\n$U=\\frac{1}{2}q\\varphi$ (23)\n\nwhere φ is the potential. For the capacitor the charge is proportional to the voltage, with the capacitance C as proportionality factor, q = Cφ, so the energy for charging to q in infinitesimal steps dq or dφ is:\n\n$U={\\int }_{0}^{\\varphi }q\\text{d}\\varphi ={\\int }_{0}^{q}\\varphi \\text{d}q={\\int }_{0}^{q}\\frac{q}{C}\\text{d}q=\\frac{1}{2}\\frac{{q}^{2}}{C}=\\frac{1}{2}q\\varphi$ (24)\n\nHere q is the charge and φ is the potential. So although for an infinitesimal step dU = φdq or qdφ, without the factor 1/2, the total energy has this factor 1/2. If we look at a tiny part dA of the total area A of the capacitor that represents a tiny fraction dq of the total charge q, we may write dU = 1/2φdq, which can be the case of a single elementary charge and its counter charge subjected to a potential difference φ.\n\n4.2. Electro-Capillarity\n\nIn the ideally-polarised electrical double layer we can make a diffuse space charge close to a metal surface (like mercury or silver) and create a double layer. In that case the system again behaves like a differential capacitor c, where for the free energy per unit area, or interfacial tension:\n\n$\\gamma -{\\gamma }_{z}=-\\underset{{\\varphi }_{z}}{\\iint }c\\text{d}{\\varphi }^{2}$ (25)\n\ne.g. the free energy has in first order a parabolic shape around the point of zero charge, with $c=\\partial q/\\partial \\varphi =-{\\partial }^{\\text{2}}\\gamma /\\partial {\\varphi }^{\\text{2}}$ , which shows that the charge is in first order proportional to the potential difference. This is in fact the only system where we can easily create a variable space charge and a variable macroscopic potential.\n\n4.3. Linear Superposition Principle\n\nWe may arrive at proportionality between charge and potential very generally via the linear superposition principle of electrostatics, which states that we may generate the response of a system by simply superimposing the responses of the individual parts: We can calculate the total potential by summing the voltage contributions of all individual charges. 
So, if we double a charge in a general system of charges, the potential of that charge at any spot of the system will double its value, which is a generalization of q = Cφ for the capacitor. So, for any single charge or system of charges, we may write for the potential q = αφ, or for the energy, when we let all charges grow from zero:\n\n$U={\\int }_{0}^{q}\\varphi \\text{d}q={\\int }_{0}^{q}\\frac{q}{\\alpha }\\text{d}q=\\frac{1}{2}\\frac{{q}^{2}}{\\alpha }=\\frac{1}{2}q\\varphi$\n\nThis is a very general equation based on the peculiarities of the Coulomb law that allows us to obey and apply the linear superposition principle.\n\nIt is obvious that for the electrical energy we may state the electrical contribution as the differential term φdq or as the more integrated contribution 1/2qφ. In the first definition, φdq, you assume that you use an infinitely small test charge to probe the potential, such that the test charge does not change the potential. In the last definition, 1/2qφ, you allow both the potential and the charge to be finite and neither of them infinitesimally small, and simply calculate the total energy exactly, even if it involves only one extra ion brought in contact with a system of charges. In the electrochemical potential both the charge and the potential are not infinitesimally small, and hence that last definition is more appropriate and closer to reality for any real system, even for a single ion and its counter charge in the surrounding ion cloud.\n\nIf we now return to Equation (13) we see that apparently, for the very small local field of the order of one elementary test charge and a voltage of a few mV (as in the ion cloud at not too high concentrations, say up to 0.1 M), we must apply the integrated equation 1/2qφ for the electrical energy to be accurate, but for the external potential, for which both charge and potential can be large, involving large numbers of ions, of the order of Avogadro’s number, and large potentials (volts), we still resort to the differential form, φdq, which we somehow miraculously integrate to φq, e.g. zFφ, as if we can assemble the charges by summing many small test charges without affecting the field. Is that not weird? I would expect that if the small local system of the ion cloud and a large system like a capacitor all give 1/2qφ, we should also expect such a response in the electrochemical potential, e.g. 1/2ziFφ, which is in fact just a similar system.\n\n4.4. Assembly of Charges\n\nLet us assemble a collection of point charges into a dielectric, or into a finite free space, from originally infinitely far apart, e.g. initially without any energy of mutual interaction . For bringing a first charge into a (zero) field from infinity we spend no electrical work:\n\n${W}_{1}=0$ (26)\n\nFor bringing in a second charge we spend:\n\n${W}_{2}={q}_{2}{\\varphi }_{1}\\left({r}_{2}\\right)={q}_{1}{\\varphi }_{2}\\left({r}_{1}\\right)$ (27)\n\nwhere φ1(r2) is the potential of ion 1 at the position of ion 2 and φ2(r1) the potential of ion 2 at the position of ion 1. 
Per ion we have spent 1/2qφ for the two charges.\n\nFor bringing in the third charge:\n\n${W}_{3}={q}_{3}\\left[{\\varphi }_{1}\\left({r}_{3}\\right)+{\\varphi }_{2}\\left({r}_{3}\\right)\\right]$ (28)\n\nFor the three charges we have spent in total $W={W}_{2}+{W}_{3}={q}_{2}{\\varphi }_{1}\\left({r}_{2}\\right)+{q}_{3}\\left[{\\varphi }_{1}\\left({r}_{3}\\right)+{\\varphi }_{2}\\left({r}_{3}\\right)\\right]$ , or again 1/2qφ per charge.\n\nFor the transport of the n-th charge\n\n${W}_{n}={q}_{n}\\left[{\\varphi }_{1}\\left({r}_{n}\\right)+{\\varphi }_{2}\\left({r}_{n}\\right)+\\cdots +{\\varphi }_{n-1}\\left({r}_{n}\\right)\\right]$ (29)\n\nThe total electrical free energy is given by the sum of all Wi:\n\n$W=\\underset{i=2}{\\overset{N}{\\sum }}{W}_{i}=\\underset{i=2}{\\overset{N}{\\sum }}{q}_{i}\\underset{k=1}{\\overset{i-1}{\\sum }}{\\varphi }_{k}\\left({r}_{i}\\right)$ (30)\n\nWe can now replace the potentials by their explicit expressions:\n\n${\\varphi }_{k}\\left({r}_{i}\\right)=\\frac{{q}_{k}}{4\\text{π}\\epsilon |{r}_{i}-{r}_{k}|}$ (31)\n\nthen such energy can be expressed as\n\n$W=\\underset{i=2}{\\overset{N}{\\sum }}\\underset{k=1}{\\overset{i-1}{\\sum }}\\frac{{q}_{i}{q}_{k}}{4\\text{π}\\epsilon |{r}_{i}-{r}_{k}|}=\\frac{1}{2}\\underset{i}{\\overset{N}{\\sum }}{q}_{i}\\underset{k\\ne i}{\\overset{N}{\\sum }}\\frac{{q}_{k}}{4\\text{π}\\epsilon |{r}_{i}-{r}_{k}|}$ (32)\n\nor\n\n$W=\\frac{1}{2}\\underset{i}{\\overset{N}{\\sum }}{q}_{i}\\varphi \\left({r}_{i}\\right)$ (33)\n\nwhere φ(ri) is the total potential at the position of charge i of all surrounding charges. Summing over all particles would count every interaction twice, hence the factor 1/2 in front of the second summation in Equation (32), which is the same as saying that the pair interaction energy must be equally divided over each of the two ions involved.\n\nSo, if we want to calculate the electrical energy of a particular ion in a sea of positive and negative ions, we may simply hypothetically freeze the system and calculate the potential φ at the spot of the ion from all the other ions. The electrical energy per ion is\n\n$\\Delta W=\\frac{W}{N}=\\frac{1}{2}\\frac{\\underset{i}{\\overset{N}{\\sum }}{q}_{i}\\varphi \\left({r}_{i}\\right)}{N}={\\left(\\frac{1}{2}q\\varphi \\right)}_{\\text{averaged}}$ (34)\n\nHere we see that for any general assembly of charges, we may calculate the total electrical energy as a contribution of order 1/2qφ for the ion charge introduced in the field of the other ions. Here it is immaterial whether the charges are ideal point charges or have a finite volume, or are dipoles, provided they are separated in space, such that they do not occupy the same volume element. The contributions of ions and (water) dipoles simply add together. Often, we treat the water as a continuous dielectric with a dielectric constant, which is probably most of the time a good approximation. 
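Equations (32)-(34) are a purely combinatorial identity and can be checked numerically: the Coulomb energy summed over distinct pairs must equal half the sum of each charge times the potential of all the others at its position. A minimal Python sketch (an illustration; random point charges in vacuum are assumed):

```python
import itertools
import random

K = 8.9875517873681764e9   # Coulomb constant 1/(4 pi eps0), vacuum assumed
random.seed(1)

charges = [(random.choice((-1.0, 1.0)) * 1.602e-19,                # q_i in C
            tuple(random.uniform(0.0, 1e-8) for _ in range(3)))    # r_i in m
           for _ in range(6)]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Left-hand form of Eq. (32): sum over distinct pairs, each pair counted once.
W_pairs = sum(K * qi * qk / dist(ri, rk)
              for (qi, ri), (qk, rk) in itertools.combinations(charges, 2))

# Eq. (33): one half times sum of q_i times the potential of all OTHER charges.
W_half = 0.5 * sum(
    charges[i][0] * sum(K * charges[k][0] / dist(charges[i][1], charges[k][1])
                        for k in range(len(charges)) if k != i)
    for i in range(len(charges)))

print(W_pairs, W_half)   # identical up to floating-point rounding
```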
The presence of the dielectric modifies the field, and thus the value of the potential at the spot of each ion.\n\nIn an electrolyte the positive and negative charges almost cancel each other in the effective electrical potential, except for the ion cloud of opposite sign around the ion, in double layers, and in the case that an effective nonzero space charge is present from an unbalance in positive and negative charges.\n\nHere we must realize that Equation (33) is exact for the electrical work needed to assemble any physical system of “point” charges, irrespective of the size of the individual charges and irrespective of the size of the ultimate resulting potentials, and independent of the path used to create that assembly: We can build up the field by assembling the charges from infinite distances apart, or let all charges grow from zero to their value while they are present at the right spot, or any other assembly process that is done at constant pressure and temperature. We only must require that the ions cannot occupy the same spot at the same time due to their finite size, as potentials would then explode to infinity. But that is a physically realistic assumption, even for electrons in a metal (in the classical electrostatic limit).\n\nIf ½qφ is the perfect answer for the electrical Coulombic work needed to introduce one ion into a sea of other ions whose effective charge is just the opposite of the ion introduced, fully in line with the linear superposition principle of electrodynamics, why is that then not taken as the most perfect measure of the electrical energy term in the electrochemical potential, which is traditionally taken as qφ, thus twice that value?\n\n4.5. Cyclotron\n\nIn the cyclotron the energy gain per revolution is approximately qΔφ for each cycle of a charge q, where Δφ is the imposed electrical potential difference in the cyclotron. In this formula qΔφ the electrical field is assumed so strong that a single or a few particles with charges q in the beam do not matter, e.g. do not modify the strength of the imposed electrical field.\n\nBut if you look closely, the particles in the beam must interfere with the electrical field: They will modify the electrical field slightly by their presence via their reaction (imaging) forces. The total field therefore will depend on the presence of these particles. Only when there is no particle at all is qΔφ, with Δφ the undisturbed electrical potential difference, exactly right.\n\nI am convinced that when particles are moving in the magnetic and electrical fields in the cyclotron, the fields are slightly, but significantly, modified. If two particles approach each other, the front one will accelerate and the rear one will lose speed. Moreover, the particles will interact with the magnets and charges, which will adjust, although these forces will be small. 
Many charged particles build up the field, and therefore the few charges in the beam give only a slight interference of the effective field.\n\nI am sure that when you try to find an exact solution, you would have to comply with $W=\\frac{1}{2}\\underset{i}{\\overset{N}{\\sum }}{q}_{i}\\varphi \\left({r}_{i}\\right)$ exactly at any instant, where φ is the field at the position of particle i of all the charged particles that are building the field.\n\nIn this case qΔφ for a single particle is a simplifying mathematical approximation in which the charge of the particle is assumed infinitesimally small, so that it does not change the potential difference Δφ.\n\nThe same argument is traditionally used for the reaction field of a single ion: but that cannot be the case: The ion charge and the electrical reaction field of the surrounding ions are in their mutual effect of the same order of magnitude, and fully build each other. Hence, we need to take the full energy equation including the factor 1/2.\n\n4.6. Modelling the Electrical Interaction Term\n\nThe discussion above clearly shows that if we define, according to Equation (5), the electrical part of the electrochemical potential per ion as\n\n$\\Delta {\\stackrel{˜}{\\mu }}_{i}\\left(\\text{electrical}\\right)=\\frac{+{z}_{i}F{\\varphi }_{i}}{{N}_{A}}={q}_{i}{\\varphi }_{i}$ (35)\n\nthen we are overestimating the electrical contribution by a factor 2 for every possible assembly of charges, while a contribution per ion\n\n$\\Delta {\\stackrel{˜}{\\mu }}_{i}\\left(\\text{electrical}\\right)=\\frac{+\\frac{1}{2}{z}_{i}F{\\varphi }_{i}}{{N}_{A}}=\\frac{1}{2}{q}_{i}{\\varphi }_{i}$ (36)\n\nis spot-on for any possible configuration of any number of interacting ionic charges in any spatial arrangement, notably including the case where we add one simple ion to a solution of very many ions whose effective charge is just the opposite of the last ion added.\n\nSo, in the JCIS article I am not disputing the differential electrical work term dW = φdq as it appears in many fundamental equations for the work associated with introducing an infinitesimal test charge into an existing electrical field φ, but I show that the applicability of DH theory has indirectly shown us that the electrical work in the diffuse ion cloud of even a single ion can more accurately be expressed as the more integrated formula W = 1/2qφ.\n\nAnd then I suggested that if such a model is appropriate for the small field of the ion cloud:\n\n${\\stackrel{˜}{\\mu }}_{i}={\\mu }_{i}^{0}+RT\\mathrm{ln}{c}_{i}+\\frac{1}{2}{z}_{i}F{\\varphi }_{i}\\left(\\text{ion}\\text{\\hspace{0.17em}}\\text{cloud}\\right)$ , (37)\n\nwe might generalize that to all electrical interactions for strong electrolytes:\n\n${\\stackrel{˜}{\\mu }}_{i}={\\mu }_{i}^{0}+RT\\mathrm{ln}{\\gamma }_{i}{c}_{i}+\\frac{1}{2}{z}_{i}F{\\varphi }_{i}$ (38)\n\nHere the electrical energy is not the differential form dU = φdq, but the integrated form U = 1/2φq, which is expected from elementary electrodynamics to be exact for any general assembly of charges. I show in the JCIS article that this equation even fits slightly better, e.g. 
results in activity coefficients that are closer to unity in the particular case that is often referred to as a classical example of the success of the standard DH theory.\n\nI want to stress here again that any model chosen for the chemical and the electrical term in the electrochemical potential can only show its merits through the value of the activity coefficients in the range where the model is applicable.\n\nFor the calculation of the electrical contribution in the electrochemical potential we need a thermodynamic average of the electrical interaction energy per ion, averaged over a large ensemble of ions, and the question remains whether this is described by the differential form φdq or, as I suggest is better, by the generalized average 1/2qφ as given by Equation (34), which, according to classical electrodynamics, is appropriate for the long-range Coulombic interaction of any assembly of charged particles of any size or any distribution, and fully in line with the Superposition Principle of Electrostatics.\n\nNow that we have a better, and more generally applicable, model for the free energy associated with the Coulombic electrical ionic interactions of strong electrolytes, inherently fully in line with the Superposition Principle of Electrostatics and correct for any microscopic and/or macroscopic configuration or assembly of charges, dipoles, etcetera, it would be foolish not to use that improved model in the expression of the electrochemical potential of ions.\n\n4.7. Electrical Work in an Electrochemical Cell\n\nI want to show next that the normal equations for potential differences of electrochemical cells remain valid, irrespective of the adapted equations of the electrochemical potential.\n\nLet us reconsider the concentration cell\n\n$\\text{Cu}’|\\text{Ag},\\text{AgCl}|\\text{HCl}\\left(\\text{m}’\\right)\\text{}{\\text{H}}_{\\text{2}}\\left(\\text{Pt}\\right)-{\\text{H}}_{\\text{2}}\\left(\\text{Pt}\\right)|\\text{HCl}\\left(\\text{m}\\right)|\\text{AgCl},\\text{Ag}|\\text{Cu}$ (39)\n\nwithout transport, with potential difference\n\n$emf={\\varphi }_{right}-{\\varphi }_{left}=\\Delta \\varphi$ (40)\n\nWith net reaction of 1 mole for 1 Faraday of charge (n = 1):\n\n$\\text{HCl}\\left(\\text{m ′}\\right)\\to \\text{HCl}\\left(\\text{m}\\right)$ (41)\n\nIf an infinitely small electric charge dq is passed through the voltage drop emf in the external circuit, the system produces a quantity dWext of work:\n\n$\\text{d}{W}_{ext}=emf\\text{ }\\text{d}q$ (42)\n\n(This equation is the integrated form of the familiar electrical formula: power = voltage × current.)\n\nIf the cell operates reversibly at a given temperature and pressure, the external work is accompanied by a decrease dG in the free energy of the entire cell:\n\n$\\text{d}{W}_{ext}=-\\text{d}G$ (43)\n\nThe free energy change is due to a reduction of dm moles of one reactant in the cathode and a simultaneous oxidation of dm moles of the other reactant in the anode. For one mole of reactants converted to products, the free energy change is ΔG, so for dm moles reacted:\n\n$\\text{d}G=\\Delta G\\text{d}m$ (44)\n\nCombining the preceding three equations gives:\n\n$\\Delta G\\text{d}m=-emf\\text{ }\\text{d}q$ (45)\n\nLet n be the number of electrons transferred for each atom reduced in the cathode and oxidized in the anode (for our reaction n = 1). 
The charge transferred for dm moles of the overall reaction is:

$\text{d}q=e{N}_{AV}n\text{d}m$ (46)

where $e\cong 1.6×{10}^{-19}$ Coulombs is the electronic charge, ${N}_{Av}\cong \text{6}×\text{1}{0}^{\text{23}}$ is Avogadro’s number, and the product eN_Av is the charge of one mole of electrons. This product is called Faraday’s constant, $F\cong 96500$ Coulombs/mole. Combining the above two equations yields the desired final result:

$\Delta G=-n\left(e{N}_{AV}\right)emf=-nF\cdot emf$ (47)

Here we have shown that for a reversible process the electrode energy is ΔW = ΔφΔq and not ΔW = ½ΔφΔq, because all the charge travels reversibly through the same potential difference Δφ, which is assumed not to change during the charge transport: a good battery will keep its voltage nearly constant while a current is delivered.

A cell with a positive potential difference (right minus left) indicates that the reaction inside, as written, proceeds as a spontaneous process (ΔG < 0) towards further equilibrium: when the leads are connected through a high resistance, the spontaneous reaction will create a small spontaneous current in the external circuit, i.e. the system acts as a charged battery.

We conclude that the new detailed equation for the electrochemical potential does not interfere with the equations derived for electrochemical cells.

4.8. Semi-Permeable Membrane

Let us now consider a classical example of the consequences of my modifications: a semi-permeable membrane that is permeable only to the cation i of a 1-1 salt, and permeable neither to the anion j nor to the solvent (water), in contact with two reservoirs, A (left) and B (right), of volume V containing the 1-1 salt at equal or different concentrations. This is a simple system.

Thermodynamic equilibrium requires constancy of the electrochemical potential of the cation:

${\stackrel{˜}{\mu }}_{i}\left(A\right)={\stackrel{˜}{\mu }}_{i}\left(B\right)$ (48)

Classically we would write

${\mu }_{0,i}+RT\mathrm{ln}{a}_{i}\left(A\right)+{z}_{i}F{\varphi }_{A}={\mu }_{0,i}+RT\mathrm{ln}{a}_{i}\left(B\right)+{z}_{i}F{\varphi }_{B}$ (49)

or

$RT\mathrm{ln}\frac{{a}_{i}\left(A\right)}{{a}_{i}\left(B\right)}={z}_{i}F\left({\varphi }_{B}-{\varphi }_{A}\right)={z}_{i}Femf$ (50)

This is as far as we get. Now we can make certain approximations to get somewhat further, e.g. replace activities by concentrations and assume that the concentrations are close to the bulk concentrations of the salt, hence:

$\frac{RT}{{z}_{i}F}\mathrm{ln}\frac{c\left(A\right)}{c\left(B\right)}\cong emf$ (51)

This is essentially the classical approach.
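To make Equation (51) concrete, here is a small numeric sketch; the concentrations are illustrative assumptions, not values from the text. For a monovalent cation at room temperature it reproduces the familiar value of roughly 59 mV per decade of concentration ratio.

```python
# Nernst-style estimate of Equation (51): emf ~ (R*T)/(z*F) * ln(cA/cB).
import math

R, T, F = 8.314, 298.15, 96485.0    # J/(mol*K), K, C/mol
z = 1                               # monovalent cation
cA, cB = 0.10, 0.01                 # mol/L on the two sides (assumed values)

emf = (R * T) / (z * F) * math.log(cA / cB)
print(f"emf ~ {emf * 1000:.1f} mV")  # ~59 mV per decade at 25 C
```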
But now let us look at this system more closely and bring in some more physics. If the concentrations in both cells are the same, the potentials are equal and the space charge is zero. There might be some preferential adsorption of one of the ions, and therefore a polarized double layer on both sides, but the total charge on both sides will be zero. According to the superposition principle we may superimpose the adsorption and the membrane-potential phenomena, so for simplicity we may ignore the polarized double-layer effect. In the solution the ions feel the micro potential of the surrounding ion cloud. We can also separate that effect via the superposition principle. Hence, we focus only on the macro potential due to the presence of the membrane.

The electrical work for the passage of charge through the membrane, associated with the leakage of cations from the high to the low concentration side, is

$\text{d}{W}_{\text{electrical}}=\varphi \text{d}{q}_{i}$ (52)

Here φ is the potential difference between the two sides of the membrane. Now we know that the charge build-up is proportional to the voltage difference (as associated with increasing the concentration difference). In fact, here we recognize the behavior of an electrical capacitance again, dq = Cdφ, which is what the membrane is: a charge separation in space, obeying the electrical linear superposition principle.

At equilibrium the chemical work should be equal to the electrical work, as the electrochemical potential is constant over the whole system:

$\text{d}RT\mathrm{ln}{a}_{i}=-\varphi \text{d}{q}_{i}$ (53)

Hence, we may simply integrate the differential work to give:

$RT\mathrm{ln}\frac{{a}_{i}\left(A\right)}{{a}_{i}\left(B\right)}=-\int \varphi C\text{d}\varphi =-\frac{1}{2}C{\varphi }^{2}=-\frac{1}{2}\varphi {q}_{i}$ (54)

Here we again recognize the Boltzmann law

$\frac{{a}_{i}\left(A\right)}{{a}_{i}\left(B\right)}=\mathrm{exp}\left(\frac{-\frac{1}{2}\varphi {q}_{i}}{RT}\right)$ (55)

(The minus sign should be in accord with the chosen signs of the potential difference and of the charge.) Here again the differential form in Equation (52) leads, because of the linear superposition principle, to an equilibrium Equation (54) that contains a factor 1/2 in the integrated form. I have not seen this recognized in membrane theories up to now, but it would be required for a sound modelling of elementary membrane phenomena.

You might argue here that the capacitance need not be constant, i.e. that it is a differential capacitance. But even then the behavior is, to first order, quadratic in the potential difference, and the higher-order corrections set in at higher charges and potentials away from zero charge, very similar to the effects of the higher-order corrections in the DH theory that are accounted for in the activity coefficients.
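A minimal numeric sketch of the integration in Equation (54); the component values are arbitrary assumptions. Charging a linear capacitor step by step recovers the factor ½ automatically.

```python
# Numeric check of the 1/2 in Equation (54): charging a linear capacitor,
# W = integral_0^Q (q/C) dq = Q^2/(2C) = (1/2)*phi*Q.
C, Q, n = 1e-6, 3e-6, 100000         # farads, coulombs, integration steps

dq = Q / n
W_num = sum(((i + 0.5) * dq / C) * dq for i in range(n))  # midpoint rule
phi_final = Q / C

print(W_num, 0.5 * phi_final * Q)    # both ~4.5e-6 J
```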
5. Extrapolation to Standard States at Infinite Dilution

In many experiments involving strong electrolytes we need to extrapolate the data to infinite dilution to get thermodynamic data for the electrolytes in the hypothetical infinite-dilution reference state at unit activity (standard electrode potentials φ⁰, reaction free energies ΔG⁰, enthalpies, etcetera). In these extrapolations we traditionally employ the Debye-Hückel activity coefficients to extrapolate over the lower concentrations towards infinite dilution, as we had expected them to be essentially correct. Now I have shown that these traditional activity coefficients might be slightly in error. This might mean that we must adapt the extrapolation procedure to incorporate the improved expressions for the activity coefficients, which might lead to slight, and perhaps sometimes even significant, changes in the extrapolated and published thermodynamic reference data for strong electrolytes and their electrode potentials. This is a fundamental result that might entail a lot of work.

6. Summary

To summarize, we may state in general, fully in compliance with the definition of electrical work, that for two points of identical composition (two identical electrodes), per mole

$\text{d}{\stackrel{˜}{\mu }}_{i}={z}_{i}F\text{d}\varphi$ (56)

or per ion:

$\text{d}{\stackrel{˜}{\mu }}_{i}={z}_{i}e\text{d}\varphi ={q}_{i}\text{d}\varphi$ (57)

Normally the electrical work is defined as dW = φdq, where we bring an infinitesimal charge over a potential difference φ. In the last equation we have instead made the potential difference infinitesimally small. The equation is formally correct only for an infinitesimally small charge.

When we integrate this equation, i.e. create a measurable potential difference φ and a finite charge qᵢ, even for a single ion, the superposition principle requires that the potential and the charge are proportional for any system that we create by assembling charges. It is immaterial whether the field creates the charges or the charges create the field: they build up at the same time. This is the reason why, for any significant (ionic) charge and any significant field φ:

$\Delta {\stackrel{˜}{\mu }}_{i}=\frac{1}{2}{q}_{i}\Delta \varphi$ (58)

This new formula with the factor ½ allows us to obey the linear superposition principle of elementary electrodynamics, which states that for any assembly of charges in any configuration in space, the potentials and charges are proportional. The consequences for the definition of the electrochemical potential are in practice small and are absorbed in different values of the activity coefficients. But in this new way, obeying the superposition principle, we probably capture the elementary electrodynamics and physics better and thus make better and simpler models.

The fact that the activity coefficients are essentially closer to unity in the dilute Coulombic range allows us to set them equal to unity in that range and hence replace activity by simple concentration in models. This simplifies the models tremendously in further calculations, as we may then state:

${\stackrel{˜}{\mu }}_{i}={\mu }_{i}^{0}+RT\mathrm{ln}{c}_{i}+\frac{1}{2}{z}_{i}F{\varphi }_{i}$ (59)

where φᵢ is the total potential, containing (superimposed) contributions from micro and macro potentials, without activity coefficients. We thus have a simple fundamental equation linking concentration and total potential, which should accurately predict the behavior of electrolytes in an electrical field at low concentration, in the range where Coulombic interactions dominate. This makes modelling work much easier.

Traditionally we were caught in an iterative cycle: we need to express equations in activities that we do not know a priori. Hence in models and in numerical simulations/calculations we first approximate the activities by concentrations (i.e. approximate the activity coefficients by unity), evaluate the now approximate model expressed in concentrations, and then calculate the local concentrations according to the model. We then calculate approximate activity coefficients with some (DH) model to get a better (second) approximation for the local activities, repeat the calculation with the calculated activities, etcetera.

But in our new equation the activity coefficients remain unity (are absent) in the lower concentration range where the Coulomb forces are dominant. The models are thus expressed directly in concentration and need to be evaluated only once, with the same accuracy. This makes life easier, especially when the model requires the combination of external and local fields (as in the Gouy-Chapman theory for the ideally polarized electrode, with an ideally polarized electrical double layer present, creating a field and a local difference in the local concentrations of anions and cations).
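As a small illustration of Equations (55) and (58); the potential value below is an assumed number, purely for comparison. The snippet evaluates the classical Boltzmann factor and the factor-½ variant side by side, per ion, so Boltzmann's constant is used instead of R.

```python
# Classical Boltzmann factor exp(-q*phi/(kB*T)) vs. the factor-1/2 form
# of Equation (55), exp(-q*phi/(2*kB*T)), for an assumed potential phi.
import math

kB, T = 1.380649e-23, 298.15        # J/K, K
q = 1.602176634e-19                 # elementary charge, C (z = 1)
phi = 0.025                         # V, illustrative potential difference

classical = math.exp(-q * phi / (kB * T))
half_form = math.exp(-0.5 * q * phi / (kB * T))
print(classical, half_form)         # the half form sits closer to 1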
7. Discussion

As you see, I offer an alternative formulation for the detailed modelling of the electrochemical potential of strong dilute electrolytes, which has some advantages and is based on, in my opinion, sound physical principles.

You can either use the classic expression, which needs activities and activity coefficients even in the dilute range to correct for the less efficient modelling of the electrical interaction, separating the effect of the ion cloud out from the electrical interactions and bringing it into the chemical energy; or you can use the new expression and define the chemical and electrical energies more efficiently, fully in line with the superposition principle and classical electrodynamics. In the latter case the ion cloud is simply part of the electrical field interaction, as it should be, without the need to separate "macroscopic" and "microscopic" potentials with different models. Such separate models are later difficult to unify in more elaborate systems, like the electrical double layer, where both ion clouds and an effective space charge are present.

I have always found it difficult to unify the superposition principle of classical electrostatics, the classical energy equations in the electrostatic field (like those for capacitors), and the traditional definition of the electrical energy term in the electrochemical potential of ions. I hope that my alternative approach will help to resolve these issues and may give a better description, and a better basis for further modelling, of electrochemical phenomena involving dilute strong electrolytes and electrical fields.

Conflicts of Interest

The author declares no conflicts of interest.

References

Debye, P. and Hückel, E. (1923) Zur Theorie der Elektrolyte. Physikalische Zeitschrift, 24, 185-206.
van der Weg, P.B. (2009) The Electrochemical Potential and Ionic Activity Coefficients. A Possible Correction for Debye-Hückel and Maxwell-Boltzmann Equations for Dilute Electrolyte Equilibria. Journal of Colloid and Interface Science, 339, 542-544.
Robson Wright, M. (2007) An Introduction to Aqueous Electrolyte Solutions. Wiley, New York.
Durand-Vidal, S., Simonin, J.-P. and Turq, P. (2002) Electrolytes at Interfaces. Kluwer, Dordrecht.
Zemaitis, J.F., Jr., et al. (1986) Handbook of Aqueous Electrolyte Thermodynamics. Wiley, New York, Chapter IV.
Greiner, W. (1998) Classical Electrodynamics. Springer, Berlin, 29.
van der Weg, P.B. (1985) Surface Tension and Differential Capacitance of the Ideally Polarised Electrical Double Layer of Aqueous Potassium Bromide on Mercury. Ph.D. Dissertation, Free University, Amsterdam.
https://tabtarife.com/bar-river/domain-of-composite-functions-pdf.php
"# Bar River Domain Of Composite Functions Pdf\n\n## Composite functions Physicsservello\n\n### How to Adjust the Domain and Range of Combined Functions",
null,
"Composite Functions (solutions examples videos). FUNCTIONS: DOMAIN, RANGE & COMPOSITE FUNCTIONS ©MathsDIY.com Page 1 of 4 FUNCTIONS: DOMAIN, RANGE & COMPOSITE FUNCTIONS A2 Unit 3: Pure Mathematics B WJEC past paper questions: 2010 – 2017 Total marks available 84 (approximately 1 hour 50 minutes) (Summer 10) (January 11) (Summer 12) 1. 2. 3., Get an answer for ' Find the domain of the composite function fog (x) if f(x)= 2x+1; g(x)= x +4' and find homework help for other Math questions at eNotes.\n\n### 1.5 Composition of Functions Mathematics LibreTexts\n\nQuiz & Worksheet Composite Function Domain & Range. FUNCTIONS: DOMAIN, RANGE & COMPOSITE FUNCTIONS ©MathsDIY.com Page 1 of 4 FUNCTIONS: DOMAIN, RANGE & COMPOSITE FUNCTIONS A2 Unit 3: Pure Mathematics B WJEC past paper questions: 2010 – 2017 Total marks available 84 (approximately 1 hour 50 minutes) (Summer 10) (January 11) (Summer 12) 1. 2. 3., Sal explains what it means to compose two functions. He gives examples for finding the values of composite functions given the equations, the graphs, or tables of values of the two composed functions..\n\nLet f and g be two transcendental entire functions. In this paper, mainly by using Iversen's theorem on the singularities, we studied the dynamics of composite functions. We have proved that the Get an answer for ' Find the domain of the composite function fog (x) if f(x)= 2x+1; g(x)= x +4' and find homework help for other Math questions at eNotes\n\ndomain(f g)=domain(g) and range(f g)=range(f). 5. If f and g are two functions such that range(g)∩domain(f) 6= ∅,then f g 6= ∅. 1.1 Forming Compositions of Functions Given by For-mulas If we have two functions, f and g,thataredefined by formulas, we can obtain a formula for the composite function, f g. This is illustrated in the Composite Functions What Are Composite Functions? Composition of functions is when one function is inside of another function. For example, if we look at the function h(x) = (2x – 1) 2 . We can say that this function, h(x), was formed by the composition o f two other\n\nFUNCTIONS: DOMAIN, RANGE & COMPOSITE FUNCTIONS ©MathsDIY.com Page 1 of 4 FUNCTIONS: DOMAIN, RANGE & COMPOSITE FUNCTIONS A2 Unit 3: Pure Mathematics B WJEC past paper questions: 2010 – 2017 Total marks available 84 (approximately 1 hour 50 minutes) (Summer 10) (January 11) (Summer 12) 1. 2. 3. Math 30-1 Function Operations Practice Test ID: B 1 Math 30-1 Function Operations *ANSWER KEY is at the end of this document* 1. Here is the graph of y = f(x).What are the domain and range of its inverse?\n\nComposite Functions This lesson explains the concept of composite functions. An example is given demonstrating how to work algebraically with composite functions and another example involves an application that uses the composition of functions. Say we will like to find the following composite function and whether this function exists Step 1- sketch both functions to see what they look like and determine their domains and ranges Sketch the two functions and find their respective domains and ranges Equation of the graph Domain of the graph- what is the values of x it can have , f f\n\nEvaluating composite functions (advanced) Our mission is to provide a free, world-class education to anyone, anywhere. Khan Academy is a 501(c)(3) nonprofit organization. 
FUNCTIONS: DOMAIN, RANGE & COMPOSITE FUNCTIONS ©MathsDIY.com Page 1 of 4 FUNCTIONS: DOMAIN, RANGE & COMPOSITE FUNCTIONS A2 Unit 3: Pure Mathematics B WJEC past paper questions: 2010 – 2017 Total marks available 84 (approximately 1 hour 50 minutes) (Summer 10) (January 11) (Summer 12) 1. 2. 3.\n\ndomain of (q p)(x) is [1;1) and its corresponding range is [0;1). Notice that we are incorrectly tempted to look at (q p)(x) = x 1 and claim its domain is R and its range is R. However as this is a composite of two functions its input and output values depend on the domain and range of … Questions on composition of functions are presented and their detailed solutions discussed. These questions have been designed to help you deepen your understanding of the concept of composite functions as well as to develop the computational skills needed while solving questions related to these functions.\n\nEvaluating composite functions (advanced) Our mission is to provide a free, world-class education to anyone, anywhere. Khan Academy is a 501(c)(3) nonprofit organization. Composite Functions What Are Composite Functions? Composition of functions is when one function is inside of another function. For example, if we look at the function h(x) = (2x – 1) 2 . We can say that this function, h(x), was formed by the composition o f two other\n\nFunction Composition Worksheet NAME For problems 1–4, use f 2xxx= −2 Questions on composition of functions are presented and their detailed solutions discussed. These questions have been designed to help you deepen your understanding of the concept of composite functions as well as to develop the computational skills needed while solving questions related to these functions.\n\nFree functions domain calculator - find functions domain step-by-step This website uses cookies to ensure you get the best experience. By using this website, you agree to our Cookie Policy. f (g(x)) of two simpler functions, an outer function f and an inner function g. Find the inner function first. #Write as a composition . x2 2 6 f g x x2 2 6 inner function g x x2 2 ( outer function does what remains f x to be done. x6) f x x6. check: . f g x f x2 2 x2 2 6 #Write as a composition . 41x 3 f g x 4 1 inner function\n\n3 Functions 46 SECTION E Properties of Composite Functions By the end of this section you will be able to • understand what is meant by the identity function • prove properties of inverse function Let f and g be two transcendental entire functions. In this paper, mainly by using Iversen's theorem on the singularities, we studied the dynamics of composite functions. We have proved that the\n\nf (g(x)) of two simpler functions, an outer function f and an inner function g. Find the inner function first. #Write as a composition . x2 2 6 f g x x2 2 6 inner function g x x2 2 ( outer function does what remains f x to be done. x6) f x x6. check: . f g x f x2 2 x2 2 6 #Write as a composition . 41x 3 f g x 4 1 inner function inside function is NOT ALL contained in the domain of the outside function, the above procedure must be used to find the composite domain. B. Find the range of . The composite range of is f ,0@. You may also think about the graph of . Basically, since S x x( ) ln cos and the composite domain is , …\n\nDomains. It has been easy so far, but now we must consider the Domains of the functions.. The domain is the set of all the values that go into a function.. The function must work for all values we give it, so it is up to us to make sure we get the domain correct! 
• find the domain and range of a composite function gf given the functions f and g. Contents 1. Introduction 2 2. Order of composition 3 3. Decomposition of a function 3 4. Domains and ranges of composed functions 4 1 c mathcentre July 18, 2005\n\n### Quiz & Worksheet Composite Function Domain & Range",
null,
"1.8 Combinations of Functions Composite Functions. This example shows that knowledge of the range of functions (specifically the inner function) can also be helpful in finding the domain of a composite function. It also shows that the domain of [latex]f\\circ g[/latex] can contain values that are not in the domain of [latex]f[/latex], though they must be in the domain of [latex]g[/latex]., The domain of a composition will be those values which can \"move through\" to the end of the composition. The \"obstacle\" is whether all of the values created by g(x), in this case, can be \"picked up\" by function f (x). Algebraic Interpretation of this example: 1. Function g(x) cannot pick up the value +2 since it creates a zero denominator. Consequently, the composition also cannot pick up the value of +2..\n\n### Worksheet 4.8 Composite and Inverse Functions",
null,
"Composite Functions Worksheets & Teaching Resources TpT. This example shows that knowledge of the range of functions (specifically the inner function) can also be helpful in finding the domain of a composite function. It also shows that the domain of [latex]f\\circ g[/latex] can contain values that are not in the domain of [latex]f[/latex], though they must be in the domain of [latex]g[/latex]. Composite Functions What Are Composite Functions? Composition of functions is when one function is inside of another function. For example, if we look at the function h(x) = (2x – 1) 2 . We can say that this function, h(x), was formed by the composition o f two other.",
Introduction to functions (mc-TY-introfns-2009-1). A function is a rule which operates on one number to give another number. However, not every rule describes a valid function. This unit explains how to see whether a given rule describes a valid function, and introduces some of the mathematical terms associated with functions.

Composite Function Review, Multiple Choice. Identify the choice that best completes the statement or answers the question. 1. Compared to the graph of the base function f(x) = x, the graph of the function g(x) = x − 5 is translated: A, 5 units down; B, 5 units left; C, 5 units right; D, 5 units up. 2. Which function has a graph in the shape of …

A composite function can be evaluated from a graph (see Example). A composite function can be evaluated from a formula (see Example). The domain of a composite function consists of those inputs in the domain of the inner function that correspond to outputs of the inner function that are in the domain of the outer function.

When you begin combining functions (like adding a polynomial and a square root, for example), the domain of the new combined function is affected. The same can be said for the range of a combined function; the new function will be based on the restriction(s) of the original functions. The domain is affected when you …

Check your knowledge of evaluating composite functions in this quiz and corresponding worksheet. These assessment tools can help assess your understanding.

Section 1.8 Combinations of Functions: Composite Functions (p. 87), Finding the Domain of a Composite Function. Given f and g, find the composition, then find the domain of the composition. From the simplified formula it might appear that the domain of the composition is the set of all real numbers. This, however, is not true, because the domain of the inner function is restricted.

## Quiz & Worksheet: Composite Function Domain & Range

### Composition of functions (mathcentre.ac.uk)

### Domain and range of composite functions (Mathematics)

Find composite functions (practice), Khan Academy.

Decompose a composite function. In some cases it is necessary to decompose a complicated function; in other words, we can write it as a composition of two simpler functions. There is almost always more than one way to decompose a composite function, so we may choose the decomposition that appears to be most obvious.

Domain and Range of Composite Functions. Step 1: find the range of the input function. Step 2: put this range on the x-axis of the graph of the second function. Step 3: find the corresponding range. Therefore, hg(x) ≥ 4. Domain and range of linear composite functions, in order to …

Section 1.7 Combinations of Functions; Composite Functions (p. 221). Because division by 0 is undefined, the denominator cannot be 0; thus x cannot equal 3. The domain of the function consists of all real numbers other than 3. Using interval notation, … Now consider a function involving a square root: …

The composite may also result in a domain unrelated to the domains of the original functions. Examples: 1. In Example 2 above, the domain for g(x) = √(3 − x) is x ≤ 3. The domain for f(g(x)) = 5 − x is all real numbers, but you must keep the domain of the inside function, so the domain for the composite function is x ≤ 3 (a numeric check of this example follows at the end of this section).

1.2 Composite Functions: Domain and Range (notes.notebook, October 26, 2011). From last class: we determined that the y-value of the inner function in a composite function becomes the x-value of the outer function. Because of this, the range of the inner function restricts the domain of the outer. This means we cannot simply look at a …

With these task cards, your students will practice finding composite functions, finding the domain and range of a function and its inverse, finding the inverse of a function, and graphing. There is also a digital and hybrid print/digital version available.

Domain of a Composition of Functions, Example 1 (26/10/2011). In this video, I discuss the procedure for finding the domain of a composition of functions and then find the domain of one specific example.
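Here is the promised check of the "keep the inside domain" rule, with g(x) = √(3 − x) as reconstructed above. The outer function is not stated in the snippet, so f(u) = u² + 2 is an assumption chosen so that f(g(x)) simplifies to 5 − x, matching the quoted result.

```python
# g(x) = sqrt(3 - x) has domain x <= 3; with the assumed outer f(u) = u**2 + 2
# the composite simplifies to 5 - x, but only on x <= 3.
import math

def g(x): return math.sqrt(3 - x)
def f(u): return u * u + 2

for x in (0.0, 3.0, 4.0):
    try:
        print(x, f(g(x)), 5 - x)   # composite and simplified form agree (up to rounding)
    except ValueError:
        print(x, "outside the composite domain (x > 3)")
```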
### How to Find the Range of Composite Functions

### Composition of Functions (Virginia Department of Education)

### Composition of Functions and Inverses of Functions

Related resources:

- FUNCTIONS: DOMAIN, RANGE & COMPOSITE FUNCTIONS
- Composite Functions (Mesa Community College)
- Composite Functions: Domain and Range, by Sher Lynn Wong on …

Here is another example of composition of functions. This time let f be the function given by f(x) = 2x and let g be the function given by g(x) = eˣ. As before, we write down f(x) first, and then apply g to the whole of f(x). In this case, f(x) is just 2x. Applying the function g then raises e to the power f(x). So we obtain gf(x) = g(f(x)) = g(2x) = e²ˣ.

Composite Functions. In a composition of functions, the range of one function is part of the domain of the other function; basically, we are substituting one function into another function. The notation for composite functions is f(g(x)) or (f∘g)(x), read as "f of g of x".

Next, students will complete the compositions-in-context problems as a team. These problems should help students to see how composition of functions can be used in everyday life. Also, the context of a real-world situation will help students to develop more of an understanding of what it means to compose two functions. A decomposition check follows below.
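A short check of the decomposition example reconstructed earlier, h(x) = (x² + 2)⁶ with inner g(x) = x² + 2 and outer f(x) = x⁶; this is my illustration of the "find the inner function first" procedure.

```python
# Verifies that the decomposition reproduces h(x) exactly on integer samples.
def g(x): return x * x + 2
def f(x): return x ** 6
def h(x): return (x * x + 2) ** 6

assert all(f(g(x)) == h(x) for x in range(-5, 6))
print("f(g(x)) reproduces h(x) on the sample points")
```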
https://pc28d.com/index.php
"# 二八测 28ce.com\n\n2763466\n2763465 06:59 5+7+6=18\n2763464 06:55 1+0+2=03\n2763463 06:52 5+6+8=19\n2763462 06:48 0+5+4=09\n2763461 06:45 2+7+8=17\n2763460 06:41 6+7+1=14\n2763459 06:38 1+3+4=08\n2763458 06:34 1+5+2=08\n2763457 06:31 0+3+4=07\n2763456 06:28 5+9+3=17\n2763455 06:24 2+1+9=12\n2763454 06:20 5+5+3=13\n2763453 06:17 4+7+0=11\n2763452 06:14 1+0+1=02\n2763451 06:10 5+8+6=19\n2763450 06:07 6+7+2=15\n2763449 06:03 6+0+0=06\n2763448 05:59 7+8+6=21\n2763447 05:56 2+5+8=15\n2763446 05:52 4+8+7=19"
https://gamedev.stackexchange.com/questions/18036/problem-representing-torque-as-a-quaternion
"# Problem representing torque as a quaternion\n\nEuler angles are much more intuitive to me than quaternions for representing 3-dimensional rotations. In fact, I barely understand quaternions at all. I use quaternions for rotation because people with more knowledge than me say they're better. (I'm familiar with the gimbal lock problem and axis-angle rotations, but that's getting away from the point.)\n\nGiven my nebulous understanding, what I'm trying to do might be really stupid. I want to rotate an object by applying an angular force (torque) which I'm representing as a quaternion plus a duration. To move the object, I do something like\n\nabstract class PhysicsBody\n{\nprotected Vector3 velocity = Vector3.Zero;\n\npublic void ApplyForce(Vector3 force, float duration)\n{\n// mass and other concepts omitted for brevity\nthis.velocity += force * duration;\n}\n}\n\n\nand it works as expected. I figure rotation should work similarly, like so:\n\n...\n\nprotected Quaternion orientation = Quaternion.Identity;\n\npublic void ApplyTorque(Quaternion torque, float duration)\n{\nthis.orientation *= torque * duration;\n}\n\n\nBut of course it does not. If duration is less than 1, orientation does not change. If duration is greater than 1, things get weird and break after a few seconds.\n\nI've experimented with renormalization, but I'm fumbling in the dark. What is the \"correct\" way to do this?\n\nA torque has an axis and a magnitude, so in principle you can represent it as a quaternion. However, you have two problems. The first is that your parallel is wrong. torque :: force as orientation :: position, so you need to integrate it twice. Secondly, the way you're applying it is wrong.\n\nQuestion: given a quaternion representing a rotation of a certain amount around a given axis, how do you derive the quaternion representing twice the rotation around the same axis?\n\nThe answer isn't \"Multiply by two\". You have to square the quaternion to apply the rotation twice. So scalar multiplication isn't what you're looking for.\n\nYou'll probably find that it's easiest to use axis-angle for the angular velocity and torque, and use quaternions only for the orientation. These links may also be helpful, but are too long to summarise here:\n\n• I haven't looked too closely at the second link yet, but the first link (and the rest of his stuff) is AWESOME! – Metaphile Oct 6 '11 at 16:57\n\nTorque and angular momentum should probably be represented as ordinary 3D vectors, not quaternions. Angular momentum vectors add according to the parallelogram rule, and torque is the time derivative of angular momentum, so you apply a torque to change angular momentum exactly the same way as applying an acceleration to change velocity.\n\nThen you multiply the angular momentum by the inverse of the inertia tensor, to get an angular velocity, and integrate it to get a quaternion for the orientation by using the rule: dq/dt = 1/2 omega q, where q is the quaternion representing the current orientation of the body, and omega is the angular velocity vector. Omega has to be converted to a quat by placing 0 in the scalar (real) component, so you can multiply it by q using quaternion multiplication.\n\nYou might want to check out slerp, it is used for rotating 'more or less' with quaternions. Just use an empty quaternion, the base quaternion and 'slerp' more or less."
http://blog.zorilestore.ro/e70bxu/83ae7b-electrical-engineering-math-equations
"# electrical engineering math equations\n\nMagnetic constant = 4 x PI x 10 -7 . Your email address will not be published. Get Free Android App | Download Electrical Technology App Now! Electrical Engineering Formulas. Your email address will not be published. The laws of nature (e.g., Maxwell's equations for electromagnetics, Kirchhoff's Rules for circuit analysis) are mathematical expressions. Solved Examples. The general solution is therefore x = c1et+(c2t)e2t, x˙ = c1et+( 1 2c2+2t)e2t. Electric Bill Calculator with Examples, How to Find The Suitable Size of Cable & Wire for Electrical Wiring Installation? Our 1000+ Engineering Mathematics questions and answers focuses on all areas of Engineering Mathematics subject covering 100+ topics in Engineering Mathematics. IfAis square thenAx=bhas a unique solutionx=A1bifA1exists, i.e., if jAj 6= 0. Basic Electrical Engineering Formulas & Equations Basic Electrical Quantities Formulas Ohm’s, Kirchhoff’s & Coulomb’s Laws – Formulas Voltage & Current Divider Rules (VDR & CDR) Equations Power Formulas in DC & AC Single & Three-Phase Circuits Resistance, Conductance, Impedance & Admittance Formulas Resistance, Capacitance & Inductance in Series/Parallel Formulas Formula and Equations For Capacitor and Capacitance Formula and Equations For Inductor and Inductance Electric & Magnetic Flux, Density & Intensity Formulas Magnetic Terms used in Magnetic Circuits – Definition & Formulas Power, Voltage & EMF Equation of a DC Motor – Formulas DC Generator Formulas and Equations Losses in Electrical Machines – Formulas & Equations Induction Motor & Linear Induction Motors Formulas & Equations Synchronous, Stepper & AC Motors Formulas & Equations Synchronous Generator & Alternator Formulas & Equations Transformer Formulas and Equations Equations & Formulas For RLC Circuits (Series & Parallel) Time Constant τ “Tau” Formulas for RC, RL & RLC Circuits Diode Formulas & Equations – Zenner, Schockley & Rectifier Bipolar Junction Transistor (BJT) – Formulas & Equations Operational Amplifier (OP-AMP) – Formulas & Equations Active & Passive Frequency Filters – Formulas & Equations, this is really a good site for elect engineers. Transformer Formula. Maxwell's equations are a set of partial differential equations that, together with … There is always a table that is available to the engineer that contains information on the Laplace transforms. All the essential mathematics for electrical and electronic students. Electrical is the branch of Physics dealing with electricity, electronics and electromagnetism.Electrical formulas play a great role in finding the parameter value in any electrical circuits. Following are the electrical engineering formulas and equations for the basic quantities i.e. Required fields are marked *, All about Electrical & Electronics Engineering & Technology. (a) 75A (b) 80A (c) 100A (d) 125A Answer: (c) 100A. Fourier analysis and Z-transforms are also subjects which are usually included in electrical engineering programs. The final solution is x(t) = et(1 +t)e2t. How to Calculate the Battery Charging Time & Battery Charging Current – Example, How To Calculate Your Electricity Bill. College Algebra, Geometry, Trigonometry, Calculus I and II, Linear Algebra, Differential Equations, Statistics When Math is Used: There are three key reasons why mathematics is important for engineers: 1.The laws of nature (e.g., Maxwell's equations for electromagnetics, Kirchhoff's Rules for circuit analysis) are mathematical expressions. 
Their work is mathematically demanding, and they constantly face challenging technical problems. MME is the broadest of the engineering disciplines, and is present in every sector of our society (environment, housing, food and water, energy, industry, transportation, education, health care, government). Don’t trust spreadsheets for your electrical engineering equations Get Whitepaper / / / Safeguard Your Most Important Calculations Designing a product and getting it to market is challenging enough without entrusting your electrical engineering calculations – your intellectual property – to spreadsheets. I am trying to find a formula to show as the winding temperature of a motor or the core temperature changes up or down it has a direst effect on the energy consumption. Through the power of science, the University has contributed to society, education and welfare since 1640. Title. This is usually applied in the financial field and … The basic algebra students learn in high school is only the beginning, a necessary foundation for almost any further development in either mathematics or electrical engineering. 25% Off on Electrical Engineering Shirts. Includes all the conventional methods of solving mathematical problems, and looks at special methods of solving simultaneous equations and of plotting transient curves in circuits. Equation of a plane A point r (x, y, z)is on a plane if either (a) r bd= jdj, where d is the normal from the origin to the plane, or (b) x X + y Y + z Z = 1 where X,Y, Z are the intercepts on the axes. Electronics engineering careers usually include courses in calculus (single and multivariable), complex analysis, differential equations (both ordinary and partial), linear algebra and probability. i am looking for someone who can do differential equation tasks ... Post a Project . Ideal for BTEC National, Higher, GNVQ, City & Guilds, A-level and for the professional. ... that he claimed governed all electrical phenomena. Closed. Basic Voltage, Current, Power and Resistance Formulas in AC and DC Circuits. for every degree in temperature reduction is worth “X” in energy. An over-constrained set of equationsAx=bis one in whichAhasmrows andncolumns, wherem(the number of equations) is greater thann(the number of … The basic algebra students learn in high school is only the beginning, a necessary foundation for almost any further development in either mathematics or electrical engineering. Maxwell's Equation. i 1=(1/76)(25i 2+50i 3+10) -> -25((1/76)(25i 2+50i 3+10)) + 56 i 2 - i 3 = 0 (-625/76) i 2 – (1250/76) i The laws of nature (e.g., Maxwell's equations for electromagnetics, Kirchhoff's Rules for circuit analysis) are mathematical expressions. B.Tech Courses Syllabus and Structure for all 4 Years B.tech is a 4 year UG course that supports the semester system and contains both practical and theoretical examinations. The unit of Inductance “L” is Henrys “H”. The resulting equation yields A = 1. Electrical & electronic formulas - Basic electronics, electrical units, symbols, basic concepts, DC/AC circuit laws, resistor color code Discover (and save!) i am looking for someone who can do differential equation tasks ... Post a Project . Electrical Engineering General Formulas (photo by Thomas W @ Flickr) Introduction This spreadsheet calculates the most common and basic electrical engineering formulas. Mathematics is the language of physical science and engineering. 
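Since the third mesh equation above is truncated, a full solve is not possible; the sketch below only replays the substitution step the text shows, using exact rational arithmetic.

```python
# Eliminate i1 = (25*i2 + 50*i3 + 10)/76 from -25*i1 + 56*i2 - i3 = 0.
from fractions import Fraction as F

a_i2 = F(-25 * 25, 76) + 56    # i2 coefficient: the -625/76 term plus 56
a_i3 = F(-25 * 50, 76) - 1     # i3 coefficient: the -1250/76 term minus 1
const = F(-25 * 10, 76)        # constant term carried over from i1

print(a_i2, a_i3, const)       # matches the -625/76 and -1250/76 fragments
```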
This course is about the mathematics that is most widely used in the mechanical engineering core subjects: An introduction to linear algebra and ordinary differential equations (ODEs), including general numerical approaches to solving systems of equations. Electrical Engineering General Formulas (photo by Thomas W @ Flickr) Introduction This spreadsheet calculates the most common and basic electrical engineering formulas. Your email address will not be published. All Electrical Engineering Formulas List Cable Length from Sag, Span. Calculate the potential difference across its ends. Electronics Engineering Formulas. Closed. inverse) of resistance. on Amazon.com. Our website is made possible by displaying online advertisements to our visitors. The unit of capacitance is Farad “F” or microfarad “μF”. Getting started Preparing to study electrical engineering on Khan Academy A summary of math and science preparation that will help you have the best experience with electrical engineering on Khan Academy. An example of Laplace transform table has been made below. Engineering mathematics. Up tp 93% Off - Launching Official Electrical Technology Store - Shop Now! Eformulae.com is a online resource of engineering formulas, science formulas, math formulas, physics formulas, chemistry formulas, tables, glossary of terms related to computer engineering, manufacturing technology, mechanical engineering, agricultural engineering, electronics engineering, metallurgy and machining processes. c Example 2. Math 4340 (Advanced Engineering Mathematics), Summer II 2020 (online) \"Mathematics, rightly viewed, possesses not only truth, but supreme beauty.\" Making statements based on opinion; back them up with references or personal experience. Formula : 1/C Total = 1/C 0 + 1/C 1 + 1/C 2 + .... + 1/C n. Where, C 0 ,C 1 ,..,C n are the individual capacitors values. From its beginnings in the late nineteenth century, electrical engineering has blossomed from focusing on electrical circuits for power, telegraphy and telephony to focusing on a much broader range of disciplines. Step by Step Procedure with Solved Example, Energy and Power Consumption Calculator – kWh Calculator. For example. – Examples in British and SI System, Thevenin’s Theorem. A Text-Book of Engineering Mathematics by Peter O’ Neil, Thomson Asia Pte Ltd., Singapore. EE-Tools, Instruments, Devices, Components & Measurements, Other Additional Electrical Quantities Formulas, Power Formulas in DC and AC Single-Phase & Three-Phase Circuits, Electrical & Electronics Engineering Formulas & Equations, A Complete Guide About Solar Panel Installation. Maxwell's Equation. I. The following table shows the current, voltage, power and resistance equations and formulas in DC and 1-Φ & 3-Φ AC circuits. It is also known as Ohm’s law for inductance. Electrical Formulas Maxwell's equations are a set of partial differential equations that, together with … The formula sheet contains different formulas on 13 DC and AC topics and is important for all Engineering students who are doing their engineering, and for those who are appearing in various competitive tests. A prospective engineering student must be able to solve variable equations and to understand how … Step 1: Convert 125 percent to a decimal: 1.25 Step 2: Multiply the value of the 80A load by 1.25 = 100A. Mathematics is the language of physical science and engineering. Most commonly used electrical formulas are formulas related to voltage, current, power, resistance etc. 
Electrical Formulas helps us to calculate the parameters related to electricals in any electrical components. Please consider supporting us by disabling your ad blocker. They use math to help design and test electrical equipment. differential equation expert. Ideal for BTEC National, Higher, GNVQ, City & Guilds, A-level and for the professional. Mathematics in electronics. Pocket Book of Electrical Engineering Formulas An over current protection device, such as a fuse or a circuit breaker, must be sized no less than 125% of the continuous load. Solution: Given: Current I = 4 A, Resistance R = 5 $\\omega$ Step by Step Procedure with Calculation & Diagrams. By simplifying and manipulating these equations, eventually all the unknowns will be solved assuming there were the same number of equations as there were unknowns. Electronics Engineering Formulas. differential equation expert. Where “f” is frequency in Hertz (Hz) and “T” is the time periods in seconds. Get Free Android App | Download Electrical Technology App Now! Question: The maximum continuous load on an overcurrent device is limited to 80 percent of the device rating. Length from Sag, Span your ad blocker free formula sheet on Electrical... J. Tallarida, Ronald J. 1-Φ & 3-Φ AC Circuits ( single phase three. / ( 1.732 V I PF 1.732 ) / 1,000 ( 7 ).! “ Q ” is other answers Copyright 2020, All about Electrical & Electronics &!: Click on the desired box button below to see the related Electrical and Electronics Formulas... Thomas W @ Flickr ) Introduction This spreadsheet calculates the most common and basic Electrical Formulas. Measurements, Electrical & Electronic Abbreviations with Full Forms Consumption Calculator – kWh.! That contains information on the desired box button below to see the related Electrical and Electronic students Examples British. Creating quality content for you to learn and enjoy for free information on the Laplace transform table been! Or “ ℧ ” references or personal experience, Singapore the general solution is x t! Common conventions: Intensive quantities in Physics are usually included in Electrical Engineering Formulas [ Dorf, C.. Transform table has been made below applications in various Engineering and science disciplines ( c ) (. The current, power, resistance and impedance in both DC and AC Circuits single. To learn and enjoy for free This is usually applied in the two equations c1+c2= and. May 18, 2016 - Physics equations - Newtonian Mechanics, Electricity and Magnetism online advertisements to our.. Electricals in any Electrical Components Electrical & Electronic Engineering Formulas and equation with details 1.25 = 100A related! Will come to know about the class This course is an Introduction to fourier and!, and they constantly face challenging technical problems Intensive quantities in Physics are usually denoted with minuscules extensive! For the basic quantities i.e There is always a table that is 80A, the over current protection can! Suitable Size of Cable & Wire for Electrical Projects satisfying the initial conditions in! Captative and inductive circuit ( already mentioned above ) Example of Laplace transform of various common functions the. And science disciplines in kVA ( single phase and three phase ) Tallarida, Ronald.. And AC Circuits in Ohms, where “ c ” is capacitance in Farads made below general Formulas photo., how to Find the Suitable Size of Cable & Wire for Electrical Wiring Installation Formulas and equations are here. 
Made below us by disabling your ad blocker 125A answer: ( c ) 100A d. Formulas, Electrical Engineering, study guide the two equations c1+c2= 0 and c12c21 0... 2019 - Explore Suleiman Qandah 's board Electrical Engineering Formulas and equations are listed.... Bill Calculator with Examples, how to Find the Suitable Size of Cable & Wire Electrical. References or personal experience $250 Formulas List Cable Length from Sag, Span Kirchhoff 's Rules for analysis! Who can do differential equation tasks... Post a Project = 1 to electricals in any Electrical Components reduction worth. Are mathematical expressions e2t, x˙ = c1et+ ( 1 2c2+2t ) e2t, x˙ c1et+. The Rating of Transformer in kVA ( single phase and three phase ) of! Impedance = resistance of 5$ \\omega $c = capacitance in Farads or., clarification, or responding to other answers for someone who can do differential equation tasks... Post a.... Is an Introduction to fourier Series and Partial differential equations % electrical engineering math equations - Official. 2015 - Generator formula and they constantly face challenging technical problems Ronald J.$ 250 step step... Xyz shares free formula sheet on basic Electrical Engineering Formulas analysis ) are mathematical expressions on basic Engineering... The parameters related to voltage, power, resistance and impedance in both and. How to Calculate your Electricity Bill are a unique breed differential equations have wide applications in various Engineering science. On basic Electrical Engineering Projects for $30 -$ 250 ) or alternatively subjects which are usually included Electrical... Only if jAj 6= 0 7 ) where to help design and Electrical... = resistance of 5 $\\omega$ Asia Pte Ltd., Singapore c. A resistance of AC Circuits ( single phase and three phase ) basic quantities i.e transform of various common from... Technology App Now on basic Electrical Engineering '' on Pinterest Electrical Wiring Installation “. Device Rating power ( watts ) or alternatively Helsinki seeks solutions for global and... The University of Helsinki seeks solutions for global challenges and creates new of! V I PF ) ( 6b ) Electrical motor, Tallarida, J. “ μF ” Q ” is frequency in Hertz ( Hz ) and RC … Electrical There. Size of Cable & Wire for Electrical and Electronic students Intensive quantities Physics! Pf 1.732 ) / 1,000 ( 7 ) where constant = 4 x PI x 10 -7 in.! Mho and represented by the symbol of “ G ” or “ ℧ ” been made below 93. Listed here and AC Circuits ( single phase and three phase ) other answers Ohms, where “ c is. Is 80A, the over current protection device can not be sized less than 100A of Engineering mathematics about. C1Et+ ( 1 2c2+2t ) e2t, x˙ = c1et+ ( c2t ) e2t of Electrical programs... Resistance Formulas in DC and 1-Φ & 3-Φ AC Circuits ) 125A answer: c. Of physical science and Engineering - Physics equations - Newtonian Mechanics, Electricity and..... Common and basic Electrical Engineering Formulas & equations, basic voltage, power and resistance equations and Formulas in and! Transform of various common functions from the following table Physics equations - Newtonian Mechanics, Electricity and Magnetism final. To Calculate the Battery Charging Time & Battery Charging Time & Battery Charging current – Example, Energy power! Protection device can not be sized less than 100A for Inductance the financial field and … the resulting yields. Are also subjects which are usually included in Electrical Engineering Stack Exchange and Electronics Engineering Technology... 
S Theorem are chosen from a collection of most authoritative and best reference books on Electrical... Capacitance in Farads, “ Q ” is amp and volt requirements for and... And AC Circuits in Ohms, where “ c ” is the Total capacitance value of an Electrical.... Of “ G ” or microfarad “ μF ” equation with details Intensive in. Statements based on opinion ; back them up with references or personal experience kVA single. To fourier Series and Partial differential equations Engineering XYZ shares free formula sheet on basic Engineering... A resistance of AC Circuits ( single phase and three phase ) Wiring Installation are included... With solution c1= 1 and c2= 1 $250 AC and DC Circuits electrical engineering math equations are... = resistance of AC Circuits circuit ( already mentioned above ) ) or alternatively convert 125 % 1.25! 1 2c2+2t ) e2t, power and resistance Formulas in AC and DC.... Final solution is therefore x = c1et+ ( c2t ) e2t 746 p hp / ( V. Sized less than 100A Electrical Projects input Electrical power 3-phase motor ( )! Current protection device can not be sized less than 100A thenAx=bhas a unique solutionx=A1bifA1exists, i.e., jAj... The final solution is therefore x = c1et+ ( 1 +t ).! Continuous load on an overcurrent device is limited to 80 percent of the device Rating to see the Electrical! The device Rating solving problems electrical engineering math equations mathematics courses can develop intellectual maturity for who. Maxwell 's equations for the professional There is always a table that is 80A the. Science, the over current protection device can not be sized less than 100A and... 746 p hp / ( 1.732 V I PF ) ( 6b ) Electrical motor power. And power Consumption Calculator – kWh Calculator Download Electrical Technology App Now All about Electrical & Electronics Engineering Technology! Use math to help design and test Electrical equipment an overcurrent device is limited to 80 percent of the Rating... University has contributed to society, education and welfare since 1640 in creating computer simulations and for..., Maxwell 's equations for electromagnetics, Kirchhoff 's Rules for circuit analysis ) mathematical... Is having a resistance of 5$ \\omega \\$ enjoy for free table shows the current, power and Formulas... If and only if jAj 6= 0 80A, the over current protection can! Maxwell 's equations for the basic quantities i.e This course is an Introduction fourier..., education and welfare since 1640 is Farad “ F ” is Henrys H! | Download Electrical Technology Store - Shop Now functions from the following shows... Engineering concepts and topics current – Example, Energy and power Consumption Calculator – kWh Calculator are denoted minuscules..., Higher, GNVQ, City & Guilds, A-level and for the basic i.e! There is always a table that is available to the engineer that contains information on desired... Engineering and science disciplines and Magnetism ) 75A ( b ) 80A ( c ) 100A.! Education and welfare since 1640 Full Forms up with references or personal.... And welfare since 1640 quantities i.e most common and basic Electrical Engineering programs your own Pins on Pinterest law... A unique solutionx=A1bifA1exists, i.e., if jAj = 0 non-trivial solution if and only if jAj 0. Satisfying the initial conditions results in the two equations c1+c2= 0 and =! Already mentioned above ) phase ) Electronic Abbreviations with Full Forms Electronic Engineering,... Generator formula courses can develop intellectual maturity and multiply 80 x 1.25 = 100A laws of (. 
By Thomas W @ Flickr ) Introduction This spreadsheet calculates the most common and basic Electrical Projects!\n\n0 comentarii pentru: electrical engineering math equations"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8501451,"math_prob":0.9769,"size":21563,"snap":"2021-21-2021-25","text_gpt3_token_len":4649,"char_repetition_ratio":0.17598219,"word_repetition_ratio":0.18225133,"special_character_ratio":0.2190326,"punctuation_ratio":0.14530373,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99356246,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-11T07:19:40Z\",\"WARC-Record-ID\":\"<urn:uuid:82a85d92-d9bc-487d-87c5-ccbdd8158c46>\",\"Content-Length\":\"44660\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e9f044e1-0333-4cfd-a035-0277f2b358d9>\",\"WARC-Concurrent-To\":\"<urn:uuid:f157a548-5cc4-4d14-9bdc-b3495572d10f>\",\"WARC-IP-Address\":\"188.213.33.19\",\"WARC-Target-URI\":\"http://blog.zorilestore.ro/e70bxu/83ae7b-electrical-engineering-math-equations\",\"WARC-Payload-Digest\":\"sha1:GBVKACI6RWWGPSAZPPGUY6MH3IZLBJF5\",\"WARC-Block-Digest\":\"sha1:2PDUMRE7KVUMZORICGUMAGTN77F6XB5G\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991904.6_warc_CC-MAIN-20210511060441-20210511090441-00263.warc.gz\"}"} |
https://aitopics.org/mlt?cdid=conferences%3AF0CCCAFE&dimension=concept-tags | [
"### Extracting Certainty from Uncertainty: Transductive Pairwise Classification from Pairwise Similarities\n\nIn this work, we study the problem of transductive pairwise classification from pairwise similarities \\footnote{The pairwise similarities are usually derived from some side information instead of the underlying class labels.}. The goal of transductive pairwise classification from pairwise similarities is to infer the pairwise class relationships, to which we refer as pairwise labels, between all examples given a subset of class relationships for a small set of examples, to which we refer as labeled examples. We propose a very simple yet effective algorithm that consists of two simple steps: the first step is to complete the sub-matrix corresponding to the labeled examples and the second step is to reconstruct the label matrix from the completed sub-matrix and the provided similarity matrix. Our analysis exhibits that under several mild preconditions we can recover the label matrix with a small error, if the top eigen-space that corresponds to the largest eigenvalues of the similarity matrix covers well the column space of label matrix and is subject to a low coherence, and the number of observed pairwise labels is sufficiently enough. We demonstrate the effectiveness of the proposed algorithm by several experiments.\n\n### An Analysis of the Effectiveness of Tagging in Blogs\n\nTags have recently become popular as a means of annotating and organizing Web pages and blog entries. Advocates of tagging argue that the use of tags produces a'folksonomy', a system in which the meaning of a tag is determined by its use among the community as a whole. We analyze the effectiveness of tags for classifying blog entries by gathering the top 350 tags from Technorati and measuring the similarity of all articles that share a tag. We find that tags are useful for grouping articles into broad categories, but less effective in indicating the particular content of an article. We then show that automatically extracting words deemed to be highly relevant can produce more focused categorization of articles. We also provide anecdotal evidence of some of tagging's weaknesses, and discuss future directions that could make tagging more effective as a tool for information organization and retrieval.\n\n### On the Robustness of Regularized Pairwise Learning Methods Based on Kernels\n\nRegularized empirical risk minimization including support vector machines plays an important role in machine learning theory. In this paper regularized pairwise learning (RPL) methods based on kernels will be investigated. One example is regularized minimization of the error entropy loss which has recently attracted quite some interest from the viewpoint of consistency and learning rates. This paper shows that such RPL methods have additionally good statistical robustness properties, if the loss function and the kernel are chosen appropriately. We treat two cases of particular interest: (i) a bounded and non-convex loss function and (ii) an unbounded convex loss function satisfying a certain Lipschitz type condition.\n\n### Extracting Certainty from Uncertainty: Transductive Pairwise Classification from Pairwise Similarities\n\nIn this work, we study the problem of transductive pairwise classification from pairwise similarities~\\footnote{The pairwise similarities are usually derived from some side information instead of the underlying class labels.}. 
The goal of transductive pairwise classification from pairwise similarities is to infer the pairwise class relationships, to which we refer as pairwise labels, between all examples given a subset of class relationships for a small set of examples, to which we refer as labeled examples. We propose a very simple yet effective algorithm that consists of two simple steps: the first step is to complete the sub-matrix corresponding to the labeled examples and the second step is to reconstruct the label matrix from the completed sub-matrix and the provided similarity matrix. Our analysis exhibits that under several mild preconditions we can recover the label matrix with a small error, if the top eigen-space that corresponds to the largest eigenvalues of the similarity matrix covers well the column space of label matrix and is subject to a low coherence, and the number of observed pairwise labels is sufficiently enough. We demonstrate the effectiveness of the proposed algorithm by several experiments.\n\n### Active Ranking using Pairwise Comparisons\n\nThis paper examines the problem of ranking a collection of objects using pairwise comparisons (rankings of two objects). We are interested in natural situations in which relationships among the objects may allow for ranking using far fewer pairwise comparisons. We show that under this assumption the number of possible rankings grows like \\$n {2d}\\$ and demonstrate an algorithm that can identify a randomly selected ranking using just slightly more than \\$d\\log n\\$ adaptively selected pairwise comparisons, on average.} If instead the comparisons are chosen at random, then almost all pairwise comparisons must be made in order to identify any ranking. In addition, we propose a robust, error-tolerant algorithm that only requires that the pairwise comparisons are probably correct."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.90868914,"math_prob":0.8781262,"size":4963,"snap":"2020-10-2020-16","text_gpt3_token_len":897,"char_repetition_ratio":0.13490623,"word_repetition_ratio":0.47739363,"special_character_ratio":0.17167036,"punctuation_ratio":0.07013301,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9521826,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-08T06:08:31Z\",\"WARC-Record-ID\":\"<urn:uuid:9c0913e6-dc2c-4a84-a279-70779f6dcf67>\",\"Content-Length\":\"71602\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ee2904d1-69a8-49c1-bcab-5b88f0bd2a2f>\",\"WARC-Concurrent-To\":\"<urn:uuid:1faa1df3-8727-4eb7-bb1c-2a3305b81253>\",\"WARC-IP-Address\":\"35.188.181.171\",\"WARC-Target-URI\":\"https://aitopics.org/mlt?cdid=conferences%3AF0CCCAFE&dimension=concept-tags\",\"WARC-Payload-Digest\":\"sha1:ZPZKJK32XEBOUO7Z3SXVLZUSICYRCOAI\",\"WARC-Block-Digest\":\"sha1:IT6VAJOLNC4SAXATFDYUD4UIFOP4YHFG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585371810617.95_warc_CC-MAIN-20200408041431-20200408071931-00030.warc.gz\"}"} |
https://web2.0calc.com/questions/help-asap_107 | [
"+0\n\n# HELP ASAP\n\n+1\n633\n2\n\nGiven that $n$ is a positive integer, and given that $\\mathop{\\text{lcm}}[24,n]=72$ and $\\mathop{\\text{lcm}}[n,27]=108$, what is $n$?\n\nMar 19, 2019\n\n#1\n0\n\n$${\\text{lcm}}[24,n]=72$$\n\n$${\\text{lcm}}[n,27]=108$$\n\nGCD [ 72, 108 ] = 36 = n",
"Mar 19, 2019\n#2\n+2\n\nthanks\n\nCorbellaB.15 Mar 19, 2019"
]
| [
null,
"https://web2.0calc.com/img/emoticons/smiley-cool.gif",
null,
"https://web2.0calc.com/img/emoticons/smiley-cool.gif",
null,
"https://web2.0calc.com/img/emoticons/smiley-cool.gif",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.5370323,"math_prob":0.9998814,"size":218,"snap":"2021-31-2021-39","text_gpt3_token_len":94,"char_repetition_ratio":0.19626169,"word_repetition_ratio":0.0,"special_character_ratio":0.5229358,"punctuation_ratio":0.16,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9986616,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-20T14:08:23Z\",\"WARC-Record-ID\":\"<urn:uuid:77d022f7-6dec-47e1-a3b9-a0d687631a24>\",\"Content-Length\":\"21793\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e3d9f3cf-14e7-451c-adf0-0a465f075fc2>\",\"WARC-Concurrent-To\":\"<urn:uuid:9fa553f4-d788-4f0c-9912-9400d86fe917>\",\"WARC-IP-Address\":\"176.9.4.231\",\"WARC-Target-URI\":\"https://web2.0calc.com/questions/help-asap_107\",\"WARC-Payload-Digest\":\"sha1:7UOP23CUCAJAADFDBEFQZ7BAYUSSNL73\",\"WARC-Block-Digest\":\"sha1:6OUHY4QXW6NCJ3RIJX66ETAFHR2LR6IB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057039.7_warc_CC-MAIN-20210920131052-20210920161052-00605.warc.gz\"}"} |
https://losllanoslonas.com/pa-cdfi-sify/how-to-calculate-roof-area-5f80f4 | [
" how to calculate roof area",
null,
"# how to calculate roof area\n\n### how to calculate roof area\n\nThere are different methods available to calculate roof area. How to calculate Roof surface It is often compared to slope, but is not exactly the same. For multi-pitched roofs simply add the areas of all common pitches together by toggling layer visibility from the summed layer (identified by the ‘eye’ symbol) at the left of the layers control. And that’s the number you were looking for. For example, you will need to figure out the exact number of shingles required to cover the entire roof area. To accurately calculate the area of your roof, you must first determine its slope, or pitch. Take the area of the roof’s footprint, say it's 5,000 square feet, and multiply 5,000 by 1.118. Using dormer windows as an example, you’ll add a new layer after drawing each dormer. If, however, the method employed is manual, the following steps are followed to determine roof area. Calculate the area of the wall below the gable by multiplying the width of the wall by the height of the eaves. (l×h)/2 = Area Calculating an area of a triangle when estimating … So if you place a fill on top of the slab, the fill area will… Experts recommend that a pitch measurement is taken before using a roof area … The answer will be 1/3. Enter Square Footage. This calculation option will also give you a very accurate result. When calculating the area of a flat roof for laying the roll-up roll materials, the area of the parapets is taken into account separately. Roof Area Estimates. The roof pitch conversion factor is a number that, when multiplied by the area covered by the roof, gives an estimate the total surface area of the sloped roof itself. You can also adjust the grid transparency and drawing line weight. SketchAndCalc™ can be used in other ways. In addition, whatever the roofing material, it is usually put lapped to prevent water from leaking past the roofing. Use our roof pitch calculator to find the pitch of your roof. So if you divide your answer of a product of length times height by two, you will get the area of a triangle. The following tools estimate the area of a roof, as well as the amount of materials necessary to construct a roof of a given area. Here is the online hip roof area calculator which helps you calculate the hip roof parameters such as roof rise, common and hip rafters length and roof area based on width, height of roof base and the roof pitch (identical for all sides). Find the square footage of the roof. Google Maps or other overhead mapping software can also be used to calculate your roof surface area. Accounting for dormer windows and other structures that might share a different pitch, the first step is to calculate the footprint of the roof, or the area it occupies. A number of users have asked if Revu’s built-in measurement tools are able to calculate the area of a roof from a plan view. The roof’s pitch is the number of inches the roof rises in 12 inches. Take your area result and multiply it by the pitch multiplier (or slope correction factor) found in the table above to arrive at the roof’s surface area. Depending whether the roof area is measured horizontally (possibly from a drawing or photograph), a correction factor is necessary to determine the actual area of the roof. Let Owens Corning Roofing help you calculate exactly how much ventilation you will need for a healthy and balanced attic, with our 4-step ventilation calculator. 
That is, trimming, intersection, and increasing the area … The roof’s pitch is the number of inches the roof rises in 12 inches. That measurement is the number of inches the roof rises in 12 inches. Step 4: Calculate The Simple Roof Areas On a simple hip or gable roof, multiplying the eave to ridge length by eave length will give the area to be multiplied by pitch factor. (⅓ squared = 1/9) Add one to your answer. If you have a simple gable roof, you’ll only need to measure and sum up the 2 planes of the roof. Hip Roof Area Calculator. Some homes will have roofs with multiple pitches. Determine the slope of the roof. The rise is the distance from the top of a stud wall to the peak of the roof. calculate roof area #94050. Knowing how to estimate roofing materials is important. That gives you 5,590 square feet, which is the actual area of the surface of the roof. 5m x 4m x (4/2) = 40 square metres. Roof pitch is given as the number of inches in height change over the distance of one foot. Roof pitches are described in terms of rise and run. This tells you how to calculate roof areas suitable for using in all the programs on this site. Here you will find roofing related tools like: roof area calculators, roof materials calculators, pitch calculators, geometric shapes calculator. Planswift – How To Calculate Roof Area, OSB Sheets, Rolls . It is the Horizontal area that is required, (not the sloping, actual area). How to calculate the roof area of a building . Now you’re ready to price out the materials. The incline or pitch of a roof can be easily measured with a level, tape measure, and a pencil. Square the result of Step 1. The simplest is the roof square calculator which requires feeding of various roof measurement and the calculator provides with an answer. Privacy Policy • Terms and Conditions • Cookie Policy, SketchAndCalc uses cookies to ensure you get the best experience. - House surface (plus the area with which the roof extends outside the house - it can be calculated by multiplying the extension with the perimeter of the house). Two User Defined Columns need to be created. The most common roofing materials used in the United States include shingles, membrane roofing, and ceramic tile, all of which have different life spans. How to Calculate Your Roof Area. Roofing materials are typically packaged in “squares,” each of which is the equivalent of 100 square feet (9.3 m 2) of roof space. If, however, the method employed is manual, the following steps are followed to determine roof area. While it is possible to estimate the amount of necessary materials using only the total roof area measurement, as can be seen from the table, depending how large the pitch of the roof, the actual area of the roof can differ by up to 2.236 from the measured total area at a pitch of 24/12. Measure the length of the roof surface, including overhangs. Calculate the area of multiple plots of land. Take the area of the roof’s footprint, say it's 5,000 square feet, and multiply 5,000 by 1.118. Then proceed to draw the surface area of the roof that has a different pitch. Roof Area Calculator • Surface Area Multiplied by Pitch . Roof Pitch Area Multipliers The calculated area is only an estimation. There are different methods available to calculate roof area. It is sometimes called the roof pitch multiplier.. In roofing we call the calculation of roofing materials needed a “take-off”. 
Here you will find roofing related tools like: roof area calculators, roof materials calculators, pitch calculators, geometric shapes calculator. TIP: Line length labels are displayed as you draw. The effect of the roof slope is allowed for by entering the angle of the roof in the programs. These have different areas because the roof lies on the hypotenuse of a triangle, whereas the area protected by it is horizontal. Measure the length of the roof surface, including overhangs. In … Measuring the Roof’s Pitch. The only difference in this calculation to the one for a Gable roof area is the amount of the Ridge. The pitch of the roof is the rise over a 12-inch run. Use your measuring tape to measure 12 inches on your large level and put a mark at the 12-inch line. You want a roof area calculator that provides accurate estimates to help you order materials such as shingles, membrane, roofing, or ceramic tile. Therefore, mark off 12 inches on the level and place it down horizontally against the roof rafter. Straight to the answer Divide your roof pitch by 12. When determining sloped roof area for rainfall catchment, for example, you calculate the area protected by the roof, but when replacing the tiles you calculate the surface area of the roof … Fortunately, if you know the roof’s pitch, or have access to the home’s attic to measure its incline you can arrive at an accurate roof area calculation using SketchAndCalc™. Knowing how to calculate your roof area can be a beneficial first step in estimating the cost of a new metal roof. Given pitch and a horizontal area measurement, multiply the horizontal area by a correction factor corresponding to pitch, provided in the table below, to determine the actual area of the roof to be used in the Roofing Material Calculator. Step 2: Determine leader size. Or perhaps just an irregular shape? Next, measure vertically from the 12-inch mark on the level straight up to the underside of the rafter, as illustrated below. Depending whether the roof area is measured horizontally (possibly from a drawing or photograph), a correction factor is necessary to determine the actual area of the roof. Although ceramic tile roofs are expensive, they can have a life span of over 100 years. Not only will it help eliminate waste, but it will also ensure that you buy just enough for the roofing job. To accurately calculate the area of your roof, you must first determine its slope, or pitch. Determine the dimensions of the roof. First, use your measuring tape to measure 12 inches on your large level and make a mark at … Determine leader or pipe size. The Basic Element List will give you the area of the roof surface, not the 'Area on Plan', like the Element Information will. Let’s Get Calculating. More unusual uses here. Step 2. 1. Next, multiply the footprint of the roof by the multiplier below for your roof pitch to find the overall roof area. Using a Fill Fills have the ability to display Area. These have different areas because the roof lies on the hypotenuse of a triangle, whereas the area protected by it is horizontal. The Area Calculator can be used to calculate the area of a variety of simple shapes that together can comprise the area of the roof. If the roof has multiple pitches, such as those found on dormer windows or porches you will need to draw these separately. a Slab with hole, pictured here), there are several ways to do it. First, multiply the length by the width. 
All these tools are free and we hope - … And for a roof with a slope of 6:12, that number, 1.118, is your roof slope multiplier. Login to SketchAndCalc™ and select ‘Show Maps’ from the Menu. Place the ladder beside your building at the gable end. 14 How To Calculate The Area Of A Roof : Planswift – How To Calculate Roof Area, OSB Sheets, Rolls – How To Calculate The Area Of A Roof. Calculate the total surface area of the roof using the following formula: pi times the radius times the rafter length, also called the slant height. Ascend to the top of your roof. The maths behind this number is that it is the square root of ((rise/run) 2 + 1). Calculating your roof surface area Building plans offer the easiest way to calculate your roof surface area – after all, the numbers should all be on there. Porches and dormer windows, for example, can have a different pitch from that of the rest of the house. For one of our branches to give you a quote we need the following information about your roof: What type of Piched Roof are you working on (see diagrams) and the measurements if possible Pitch of roof or Rafter length What parts of the roof you want a quote for (e.g. Determine the roof area by using a mathematical formula that accounts for the roof length, total span, and roof pitch: Determine your roof pitch by using a pitch gauge (available at most home improvement stores) or a smartphone app (available free through any app store). The roof slope height is 6.98m. When zoomed in, switch to satellite view. Calculate the area of a roof to work out how much guttering is required. The \"House Base Area\" is the area of land that the house covers, and for more complex shapes, can be estimated using the Area Calculator. The rise is the height of the roof, and the run is the horizontal span (as pictured above). For example, a 4/12 pitch roof that is 100 square feet: 100 × 1.054 = 105.4ft 2. This is expressed in terms of how many inches the roof rises for … All these tools are free and we hope - easy and intuitive to use. Calculate the exact roof area. Gable & Hip Roofs will have the same area provided that the pitch remains the same. You’ll have to calculate your roof area, or the overall size of your roof, to determine the amount of materials you’ll need. A good roofing contractor will take the time make sure that the take-off is accurate. Here is the online hip roof area calculator which helps you calculate the hip roof parameters such as roof rise, common and hip rafters length and roof area based on width, height of roof base and the roof pitch (identical for all sides). Calculate the total roof area by multiplying its length by width. Watch this short video titled Area of a roof instead. Hip roofs can present a problem for height measurement, and the slope method may be substituted in place of height measurement. How to calculate Roof surface For a roof incline of (pitch x/12): /12 Calculating the surface area of a sloped roof can be time-consuming without the right tools. It is necessary to take into account how the overhang will be located - along the perimeter, with a closed parapet or with a lower overhang and a triangular parapet. Sometimes it is easier to calculate the area by summing up the areas of smaller parts. Calculating the surface area of a sloped roof can be time-consuming without the right tools. When calculating the area of a flat roof for laying the roll-up roll materials, the area of the parapets is taken into account separately. 
The run is the distance from the outside edge of a perimeter stud wall to the center of the house. Now, assuming an average pitch of 9:12, we multiply the 2D area of 1,300 by 1.25. This calculation is important because it can greatly affect your estimates, the air circulation in a room, and more.The Custom Columns in Revu’s Markups tab supports formulas and math constants, functions and operators. Search the homes postal address from the map controls that appear in the top left corner. (⅓ squared = 1/9) Add one to your answer. The square footage of a Hip roof area is identical to the square footage of a Gable roof of the same size. You want a roof area calculator that provides accurate estimates to help you order materials such as shingles, membrane, roofing, or ceramic tile. This gives us a 3D roof area of 1,625 sq.ft. Fortunately, if you know the roof’s pitch, or have access to the home’s attic to measure its incline you can arrive at an accurate roof area calculation using SketchAndCalc™. Hip roof is a roof with a sharp edge or edges from the ridge to the eaves where the two sides meet. Follow these steps to figure out the exact area of your roof: Take your pitch number and divide it by 12. The Basic Element List will give you the area of the roof surface, not the 'Area on Plan', like the Element Information will. Let’s Get Calculating Once you have these measurements, you can calculate your roof area as follows: Roof width (m) x rafter length (m) = half roof area (m²) The above calculation estimates the area of half of your roof so you will need to double this to get a final estimate of the full roof area. A roof pitch multiplier, also known as a roof pitch factor, is a number that, when multiplied by the area covered by a sloped roof, gives the actual area of the roof. After drawing the perimeter of the roof, add a new drawing layer with the plus (+) symbol found in the bottom right corner. Multiply the area by the pitch multiplier to get the roof's square footage. For example, if a roof has a pitch of 4/12, then for every 12 inches the building extends horizontally, it rises 4 inches. Just find the footprint area and multiply it by pitch factor. For instance, a 7/12 roof pitch means that the roof rises 7 inches for every 12 horizontal inches. You’ll notice as an area is drawn the area and perimeter results are displayed in the bottom right corner. This gives us a 3D roof area of 1,625 sq.ft. Dear Jerry: I’m autograph in that hopes you can advice us with a botheration with the gutters on our in advance Cape Cod-fashion house. The run is the distance from the outside edge of a perimeter stud wall to the center of the house. Identify the roof’s structure. Assuming this is a hip roof, we multiply this by 1.1, which gives us roughly 18 squares or 1,800 square feet of the roof surface. Using the aggregate area of these simple shapes can yield a more accurate roof area to be used with the Roofing Material Calculator. The formula for the roof pitch multiplier comes from the formula for a right triangle, and more specifically the Pythagorean Theorem: On a ladder beside the roof, place the level a foot or so up the roof, hold it level, and measure from the 12-inch... 2. For example, if your pitch is 4 in 12, you need to divide 4 by 12. Roof Area Calculators: For each of the roof types pictured below, L = Horizontal Length, W = Horizontal Width, and H = Vertical Height, all entered in feet.Slope in inches per foot may substituted for H. . 
Both metric and imperial measurement systems are displayed in a popup menu over the results area. The rise is the distance from the top of a stud wall to the peak of the roof. (ex: If pitch is 4 in 12, divide 4x12. The pitch is commonly defined as the ratio of rise over run in the form of x/12. This applies to lengths as well as areas. How to Calculate Roof Area Step 1. Roof Area calculator. Roof pitch affects the actual area of the roof. As such, while it can be cumbersome, measuring the area and pitch of each part of the roof and multiplying by the corresponding correction factor will result in the most accurate estimate of necessary roofing materials. The best part of a roofing calculator is that it offers a free insight into the total cost of the replacement roof project on a sq ft basis. Roofing professionals can also use SketchAndCalc™ to provide fast accurate quotations over the phone. It affects walkability as well as drainage, and roofs in areas of high rain or snowfall tend to have steeper pitches. The number you get will be an accurate estimate of how much area you have to cover for your roofing project. Add 1, then take the square root. Although there’s isn’t any standard pitch of a roof used on all kinds of sloped roofs, you can determine the range of pitches by using a roof angle calculator and by considering factors like the local climate and roofing … This calculator will help you estimate hip roof parameters, including rafters and roof area person_outline Anton schedule 2011-08-29 17:53:20 One of our users asked us to create a calculator that would help him estimate hip roof parameters, such as rafter lengths, roof rise and roof area. Hip Roof Area Calculator. If you wish to find out the area of a construction element (e.g. Dividing the total area of your roof by 100 will therefore help you figure out how many squares worth of shingles to order. Google Maps or other overhead mapping software can also be used to calculate your roof surface area. CALCULATING THE TRUE AREA. This is the first thing to do when you set out to calculate the area of your roof. Therefore, mark off 12 inches on the level and place it down horizontally against the roof rafter. Free roof area calculator. That gives you 5,590 square feet, which is the actual area of the surface of the roof. Step 1: Calculate total roof area. Shingle roofs typically have a life span of 15-30 years, while membrane roofs usually last 5-15 years. This gives you ⅓) Square the answer you calculated before. This will provide an area calculation for clay or concrete roof tile, natural or composite slate, underfelt or breathable felt and ridge length. Roof pitches are described in terms of rise and run. When determining sloped roof area for rainfall catchment, for example, you calculate the area protected by the roof, but when replacing the tiles you calculate the surface area of the roof itself. Calculate the total surface area of the roof using the following formula: pi times the radius times the rafter length, also called the slant height. Gable & Hip Roofs will have the same area provided that the pitch remains the same. The first is for the roof angle and the second is a formula that calculates the area. Use the appropriate pitch multiplier for each totaled area calculated, then sum all the areas together. Regardless of complexity (multiple ridge lines, hips, valleys etc), if the roof has a single pitch or incline throughout you can simply draw the perimeter of the entire roof using SketchAndCalc’s™ line drawing tools. 
Roof pitch refers to the measurement of the slope of a roof and you express this as a ratio. CALCULATING THE TRUE AREA. (ex: If pitch is 4 in 12, divide 4x12. Ridge vent openings are best finished with coil rather than partial shingle. The roof can be divided into small rectangles, triangles and squares. Dear Jerry: I’m autograph in that hopes you can advice us with a botheration with the gutters on our in advance Cape Cod-fashion house. Example: A 200 ft. x 250 ft. = 50,000 sq. Then, take the product of these two dimensions and multiply it by your pitch multiplier. This will provide an area calculation for clay or concrete roof tile, natural or composite slate, underfelt or breathable felt and ridge length. Calculate the square footage area of a Hip Roof. And that’s the number you were looking for. Next, you need to square your outcome. Measure the length and width of each portion of the roof, multiply length by width for each plane, and then add the planes together for the total square footage. Experts recommend that a pitch measurement is taken before using a roof area calculator.Don’t have time to read? This calculation option will also give you a very accurate result. The incline or pitch of a roof can be easily measured with a level, tape measure, and a pencil. It is necessary to take into account how the overhang will be located - along the perimeter, with a closed parapet or with a lower overhang and a triangular parapet. Roof Area calculator. Multiply the length times the width to calculate the square footage. First, you need to know two specific values in order to calculate the area of a roof: (1) the area of the roof in plan view and (2) the angle of the roof pitch. Hip roof is a roof with a sharp edge or edges from the ridge to the eaves where the two sides meet. Roof width (m) x roof slope height (m) = half roof area (m²) Take this example of a perfectly square house with 76m² floor space, a 45° pitched roof and a 2.35m roof height. The roof pitch is the slope of the rafter. Remember – calculate the area of the roof should not be at the edges of the existing structure, but to the eaves. Using the table below, lookup the pitch/es and make a note of the number/s in the multiplier column. 1. For an accurate surface area measurement you should make note of these additional structures. There's a difference, so be careful. Calculating your roof surface area Building plans offer the easiest way to calculate your roof surface area – after all, the numbers should all be on there. In cases where a roof has a complex shape, such as in the image to the right, measuring the dimensions and areas of each part of the roof to calculate total area will result in a more accurate measurement of area. tiles, battens, felt, syklights … Watch this short video titled Area of a roof instead. Roof pitch is a determining factor for cost of the roof, as well as the roof area, and the type of materials used. This multiplier number will be used to multiply the surface area calculation that SketchAndCalc™ will provide. Outside of the U.S., a degree angle is typically used. Roof slope, pitch, rise, run, area calculation methods: here we explain and include examples of simple calculations and also examples of using the Tangent function to tell us the roof slope or angle, the rise and run of a roof, the distance under the ridge to the attic floor, and how wide we can build an attic room and still have decent head-room. These two values can be calculated to provide the actual roof area. 
So keeping with our roof pitch of 4 this gives us 1/9. In this article, we will talk about calculating the roof surface starting from - Roof incline (pitch US ) - House surface (plus the area with which the roof extends outside the house - it can be calculated by multiplying the extension with the perimeter of the house). In the United States, a run of 12 inches (1 foot) is used, and pitch is measured as the rise of the roof over 12 inches. Assuming this is a hip roof, we multiply this by 1.1, which gives us roughly 18 squares or 1,800 square feet of the roof surface. Roof pitch is the measurement of a roof's vertical rise divided by its horizontal run. What is the roof pitch factor? Once you have these measurements, you can calculate your roof area as follows: Roof width (m) x rafter length (m) = half roof area (m²) The above calculation estimates the area of half of your roof so you will need to double this to get a final estimate of the full roof area. There's a difference, so be careful. So for a roof that is 5 m long, with a height of 4 m and the roof width is 8 m (which makes the half roof width 4m) we can calculate the effective roof area like this. There are 5 easy steps to figure out how many drains your commercial roofing project will need. For example, if the wall is 25 feet wide and the eaves are 20 feet above ground level, the wall has a surface area of 500 square feet: 25 x 20 = 500. Estimate the area of the simple parts of the roof. Calculate the area of multiple plots of land, Land Area Calculator • Multiple Irregular Shapes, UPDATED: Area Calculator • Phone • Tablet • Desktop. 14 How To Calculate The Area Of A Roof : Roof Area Calculator • Surface Area Multiplied By Pitch – How To Calculate The Area Of A Roof. The surface area of the roof is found by multiplying the number found in Step 3 by the number found in Step 1: \\$\\$Surface\\,area\\,of\\,roof = Area\\,covered\\,by\\,roof × Roof\\,pitch\\,factor = 800 × 1.12 = 896\\,square\\,feet\\$\\$ In other words, the surface area is impacted by the slope of the roof. ft. area. 4. An important part of any roofing repair or replacement project is being able to accurately calculate the quantity of roofing materials required to finish the job. On a ladder at the gable end of your house, place the level against the gable trim, flat against the side of the... 3. 1 roof square = 100 square feet; The length (l) times the height (h) of a triangle is twice its area (A2). calculate roof area #94050. Determine the roof area by using a mathematical formula that accounts for the roof length, total span, and roof pitch: Determine your roof pitch by using a pitch gauge (available at most home improvement stores) or a smartphone app (available free through any app store). If you find these labels obscure the view of the roof just switch them off from Menu > Settings > Application. This gives you ⅓) Square the answer you calculated before. To calculate the true area for shingles or any roof installation you can use the following steps: Using the number you got for the roof’s pitch, you will divide it by 12. Given pitch and a horizontal area measurement, multiply the horizontal area by a correction factor corresponding to pitch, provided in the table below, to determine the actual area of the roof to … The calculator cannot account for complex shapes based on a measurement of square footage alone. With a single pitch roof you’ll have only one area result that relates to the entire footprint or perimeter of the roof. 
Roofing professionals can also use SketchAndCalc™ to provide fast accurate quotations over the phone. To calculate the true area for shingles or any roof installation you can use the following steps: Using the number you got for the roof’s pitch, you will divide it by 12. The simplest is the roof square calculator which requires feeding of various roof measurement and the calculator provides with an answer. Help eliminate waste, but it will also give you a very accurate result area provided that pitch., they can have a life span of 15-30 years, while membrane usually!, trimming, intersection, and the slope method may be substituted in of! Coil rather than partial shingle simplest is the actual area of a stud wall to peak... Out to calculate the square footage have time to read the phone windows as an is! Pictured here ), there are different methods available to calculate roof surface, overhangs! The cost of a sloped roof can be a beneficial first Step in the... By multiplying its length by width these simple shapes can yield a more accurate roof area is drawn area! That relates to the center of the roof but to the square of... Finished with coil rather than partial shingle divide your answer, SketchAndCalc uses to! Typically used be calculated to provide fast accurate quotations over the phone the best experience commercial... Described in terms of how many inches the roof layer after drawing each.... Pitch affects the actual area of your roof ft. x 250 ft. = 50,000 sq past the.! Values can be easily measured with a level, tape measure, and increasing the area of the roof.. Roofing contractor will take the time make sure that the roof that is,,! Edges of the house of rise and run the angle of the roof by the multiplier below for roof! Can present a problem for height measurement, and the second is a formula that the! Over a 12-inch run the sloping, actual area ) cover for your roofing project 4 by 12 ( squared! Easy and intuitive to use width to calculate the square footage only need to divide 4 12... Cover for your roofing project the view of the roof can be time-consuming without the right...., battens, felt, syklights … there are different methods available to calculate the square footage of sloped! Area calculator • surface area calculation that SketchAndCalc™ will provide we hope - easy and intuitive to use illustrated.! A good roofing contractor will take the area of a new metal roof shingles required to cover for your project! Fills have the ability to display area from the map controls that appear in the top of roof! X 250 ft. = 50,000 sq map controls that appear in the multiplier below for your roof by will. Usually last 5-15 years as those found on dormer windows as an example, you will find related... Square the answer you calculated before method may be substituted in place of height measurement battens, felt syklights! Past the roofing Material, it is the horizontal area that is 100 square feet, which is the lies... To calculate roof area calculators, roof materials calculators, geometric shapes calculator edge... Area can be divided into small rectangles, triangles and squares is a roof with level... Angle is typically used 105.4ft 2 to find the footprint area and multiply 5,000 by 1.118 50,000 sq existing! Place it down horizontally against the roof ’ s pitch is the rise is distance... The pitch/es and make how to calculate roof area note of these additional structures instance, a 4/12 pitch roof has... 
Were looking for area is identical to the peak of the rafter software can also use SketchAndCalc™ to fast! Right corner therefore, mark off 12 inches on the level and put a mark at the 12-inch mark the., or pitch ’ from the 12-inch mark on the level Straight up to the eaves have cover..., battens, felt, syklights … there are different methods available to calculate roof area! Multiply it by your pitch number and divide it by pitch needed a “ take-off.... Whereas the area of a roof instead in all the areas of high rain or tend. Pitch factor you ⅓ ) square the answer you calculated before to measure 12 inches on the level Straight to. Were looking for of 1,625 sq.ft a product of these two values can be divided into small,. The method employed is manual, the following steps are followed to determine roof.. Of over 100 years to prevent water from leaking past the roofing should not be at the edges of roof. Incline or pitch for example, if your pitch is the actual area ) labels... Remains the same windows as an area is drawn the area of a triangle, whereas area! Example: a 200 ft. x 250 ft. = 50,000 sq same area provided that the how to calculate roof area of a with! Mark off 12 inches on your large level and place it down horizontally against the roof the. Roof rises in 12, divide 4x12 a problem for height measurement find roofing related tools like: area... Materials is important calculates the area of your roof: take your pitch number and divide it by factor... 5,590 square feet, how to calculate roof area increasing the area by summing up the planes!, can have a simple gable roof, you need to figure out how many squares worth of required. Area, OSB Sheets, Rolls the time make sure that the pitch remains the same area that! Price out the exact number of inches in height change over the phone defined as the ratio of over. Line weight it down horizontally against the roof rafter ): /12 to... Shingles required to cover for your roofing project Conditions • Cookie Policy, SketchAndCalc uses to. A roof instead and put a mark at the 12-inch mark on the hypotenuse of a roof incline of pitch! Are followed to determine roof area calculator • surface area actual area of a building windows an. Given as the ratio of rise over run in the bottom right corner beside your at! Rise and run multiplying its length by width level and put a mark the! Thing to do it for height measurement, and multiply it by 12 roof that is 100 feet...: /12 how to calculate the area protected by it is the distance of one.! Calculator • surface area of the same size the how to calculate roof area thing to do when you out. Much area you have to cover the entire footprint or perimeter of rafter. You calculated before appropriate pitch multiplier to get the area of your roof surface for a roof can time-consuming. Menu over the phone Menu over the distance from the outside edge a! Roof to work out how many inches the roof surface area of 1,625 sq.ft a accurate! Be calculated to provide fast accurate quotations over the phone felt, syklights … there are different methods to... To do it can also adjust the grid transparency and drawing line.... These tools are free and we hope - easy and intuitive to use and dormer windows, example... Divided by its horizontal run professionals can also use SketchAndCalc™ to provide the actual area ) roofing materials important... Triangles and squares times the width to calculate roof area to be used with the roofing calculator! 
Knowing the true area is also the key to pricing out materials. Roofing material is estimated in "squares", where one square covers 100 square feet, so the calculated area tells you exactly how many squares' worth of shingles (plus battens, felt, skylights, and other extras) are required; an accurate figure helps eliminate waste while ensuring that you buy enough material to cover the entire roof. Material choice matters as well: although ceramic tile roofs are expensive, they can have a life span of over 100 years, while other coverings may last 15-30 years and membrane roofs usually last only 5-15 years. The same measurements support related estimates, such as how many drains a commercial roofing project will need or, from the gable end, how much guttering is required. Whatever the roofing material, a good roofing contractor will take the time to make sure that the "take-off" is accurate.

### About the author",
null,
""
]
| [
null,
"http://losllanoslonas.com/wp-content/uploads/2018/02/cabecera.png",
null,
"http://2.gravatar.com/avatar/",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.91544425,"math_prob":0.9878846,"size":37432,"snap":"2021-21-2021-25","text_gpt3_token_len":8213,"char_repetition_ratio":0.19792669,"word_repetition_ratio":0.34228587,"special_character_ratio":0.22336504,"punctuation_ratio":0.112340316,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98873,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,8,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-19T20:40:13Z\",\"WARC-Record-ID\":\"<urn:uuid:28060f37-211f-4bbe-9c89-68b5ea069a8d>\",\"Content-Length\":\"75208\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dce34abc-9797-46df-be5a-ede7fb344f2c>\",\"WARC-Concurrent-To\":\"<urn:uuid:51fcfc9a-49eb-4bab-a503-4729e6090ad8>\",\"WARC-IP-Address\":\"37.59.203.111\",\"WARC-Target-URI\":\"https://losllanoslonas.com/pa-cdfi-sify/how-to-calculate-roof-area-5f80f4\",\"WARC-Payload-Digest\":\"sha1:3MQ4VATUHERLWFU7V6EM3ZRG2LUEO7OP\",\"WARC-Block-Digest\":\"sha1:T4T5C447ZN63RFPMIOHWFJJ5LQSOUL3R\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487649731.59_warc_CC-MAIN-20210619203250-20210619233250-00049.warc.gz\"}"} |
https://www.mathworks.com/matlabcentral/fileexchange/24030-return-ith-calling-argument?s_tid=blogs_rc_5 | [
"File Exchange\n\n## Return ith calling argument.\n\nversion 1.0.0.0 (1.33 KB) by\npick(i, r_0, r_1, ...) returns r_i.\n\nUpdated 07 May 2009\n\npick(i, r_0, r_1, ...) returns r_i. Thus, if i==0, pick() returns the\nsecond calling argument (r_0); if i==1, pick() returns the third calling argument (r_1); and so on. If there is no argument corresponding to i, pick() returns an empty matrix. If pick() is called with fewer than two calling arguments, pick() throws an error.\n\nFor example, the following returns z= x if flag equals 0 or false and z= y if flag equals 1 or true:\n\nz= pick(flag, x, y);\n\n### Cite As\n\nPhillip M. Feldman (2020). Return ith calling argument. (https://www.mathworks.com/matlabcentral/fileexchange/24030-return-ith-calling-argument), MATLAB Central File Exchange. Retrieved .\n\n##### MATLAB Release Compatibility\nCreated with R2009a\nCompatible with any release\n##### Platform Compatibility\nWindows macOS Linux"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.59594667,"math_prob":0.74834216,"size":852,"snap":"2020-45-2020-50","text_gpt3_token_len":218,"char_repetition_ratio":0.15683962,"word_repetition_ratio":0.0,"special_character_ratio":0.27230048,"punctuation_ratio":0.20111732,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9615324,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-24T05:08:27Z\",\"WARC-Record-ID\":\"<urn:uuid:3cb02cd9-bb87-47dd-8848-0acaac7d813d>\",\"Content-Length\":\"80046\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aeee5463-14aa-4635-a85f-d4e5cf0e60f1>\",\"WARC-Concurrent-To\":\"<urn:uuid:7e174c16-ecb2-4530-9340-bc53c77dfc25>\",\"WARC-IP-Address\":\"104.117.0.182\",\"WARC-Target-URI\":\"https://www.mathworks.com/matlabcentral/fileexchange/24030-return-ith-calling-argument?s_tid=blogs_rc_5\",\"WARC-Payload-Digest\":\"sha1:B7NGZFNJSZ3QMWAUM5OYGH4KWCK3HWEE\",\"WARC-Block-Digest\":\"sha1:EQUA4WFL3ZKI2W3FT4U5DHQURPYYZ2KC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141171077.4_warc_CC-MAIN-20201124025131-20201124055131-00365.warc.gz\"}"} |
https://www.colorhexa.com/00ac84 | [
"# #00ac84 Color Information\n\nIn a RGB color space, hex #00ac84 is composed of 0% red, 67.5% green and 51.8% blue. Whereas in a CMYK color space, it is composed of 100% cyan, 0% magenta, 23.3% yellow and 32.5% black. It has a hue angle of 166 degrees, a saturation of 100% and a lightness of 33.7%. #00ac84 color hex could be obtained by blending #00ffff with #005909. Closest websafe color is: #009999.\n\n• R 0\n• G 67\n• B 52\nRGB color chart\n• C 100\n• M 0\n• Y 23\n• K 33\nCMYK color chart\n\n#00ac84 color description : Dark cyan.\n\n# #00ac84 Color Conversion\n\nThe hexadecimal color #00ac84 has RGB values of R:0, G:172, B:132 and CMYK values of C:1, M:0, Y:0.23, K:0.33. Its decimal value is 44164.\n\nHex triplet RGB Decimal 00ac84 `#00ac84` 0, 172, 132 `rgb(0,172,132)` 0, 67.5, 51.8 `rgb(0%,67.5%,51.8%)` 100, 0, 23, 33 166°, 100, 33.7 `hsl(166,100%,33.7%)` 166°, 100, 67.5 009999 `#009999`\nCIE-LAB 62.65, -47.088, 10.189 18.916, 31.169, 26.848 0.246, 0.405, 31.169 62.65, 48.178, 167.79 62.65, -52.446, 21.513 55.829, -37.223, 10.568 00000000, 10101100, 10000100\n\n# Color Schemes with #00ac84\n\n• #00ac84\n``#00ac84` `rgb(0,172,132)``\n• #ac0028\n``#ac0028` `rgb(172,0,40)``\nComplementary Color\n• #00ac2e\n``#00ac2e` `rgb(0,172,46)``\n• #00ac84\n``#00ac84` `rgb(0,172,132)``\n• #007eac\n``#007eac` `rgb(0,126,172)``\nAnalogous Color\n• #ac2e00\n``#ac2e00` `rgb(172,46,0)``\n• #00ac84\n``#00ac84` `rgb(0,172,132)``\n• #ac007e\n``#ac007e` `rgb(172,0,126)``\nSplit Complementary Color\n• #ac8400\n``#ac8400` `rgb(172,132,0)``\n• #00ac84\n``#00ac84` `rgb(0,172,132)``\n• #8400ac\n``#8400ac` `rgb(132,0,172)``\n• #28ac00\n``#28ac00` `rgb(40,172,0)``\n• #00ac84\n``#00ac84` `rgb(0,172,132)``\n• #8400ac\n``#8400ac` `rgb(132,0,172)``\n• #ac0028\n``#ac0028` `rgb(172,0,40)``\n• #006049\n``#006049` `rgb(0,96,73)``\n• #00795d\n``#00795d` `rgb(0,121,93)``\n• #009370\n``#009370` `rgb(0,147,112)``\n• #00ac84\n``#00ac84` `rgb(0,172,132)``\n• #00c698\n``#00c698` `rgb(0,198,152)``\n• #00dfab\n``#00dfab` `rgb(0,223,171)``\n• #00f9bf\n``#00f9bf` `rgb(0,249,191)``\nMonochromatic Color\n\n# Alternatives to #00ac84\n\nBelow, you can see some colors close to #00ac84. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #00ac59\n``#00ac59` `rgb(0,172,89)``\n• #00ac67\n``#00ac67` `rgb(0,172,103)``\n• #00ac76\n``#00ac76` `rgb(0,172,118)``\n• #00ac84\n``#00ac84` `rgb(0,172,132)``\n• #00ac92\n``#00ac92` `rgb(0,172,146)``\n• #00aca1\n``#00aca1` `rgb(0,172,161)``\n• #00a9ac\n``#00a9ac` `rgb(0,169,172)``\nSimilar Colors\n\n# #00ac84 Preview\n\nThis text has a font color of #00ac84.\n\n``<span style=\"color:#00ac84;\">Text here</span>``\n#00ac84 background color\n\nThis paragraph has a background color of #00ac84.\n\n``<p style=\"background-color:#00ac84;\">Content here</p>``\n#00ac84 border color\n\nThis element has a border color of #00ac84.\n\n``<div style=\"border:1px solid #00ac84;\">Content here</div>``\nCSS codes\n``.text {color:#00ac84;}``\n``.background {background-color:#00ac84;}``\n``.border {border:1px solid #00ac84;}``\n\n# Shades and Tints of #00ac84\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #000f0c is the darkest color, while #fafffe is the lightest one.

• #000f0c
``#000f0c` `rgb(0,15,12)``
• #00231b
``#00231b` `rgb(0,35,27)``
• #00362a
``#00362a` `rgb(0,54,42)``
• #004a39
``#004a39` `rgb(0,74,57)``
• #005e48
``#005e48` `rgb(0,94,72)``
• #007157
``#007157` `rgb(0,113,87)``
• #008566
``#008566` `rgb(0,133,102)``
• #009875
``#009875` `rgb(0,152,117)``
• #00ac84
``#00ac84` `rgb(0,172,132)``
• #00c093
``#00c093` `rgb(0,192,147)``
• #00d3a2
``#00d3a2` `rgb(0,211,162)``
• #00e7b1
``#00e7b1` `rgb(0,231,177)``
• #00fac0
``#00fac0` `rgb(0,250,192)``
• #0fffc7
``#0fffc7` `rgb(15,255,199)``
• #23ffcc
``#23ffcc` `rgb(35,255,204)``
• #36ffd0
``#36ffd0` `rgb(54,255,208)``
• #4affd5
``#4affd5` `rgb(74,255,213)``
• #5effd9
``#5effd9` `rgb(94,255,217)``
• #71ffde
``#71ffde` `rgb(113,255,222)``
• #85ffe3
``#85ffe3` `rgb(133,255,227)``
• #98ffe7
``#98ffe7` `rgb(152,255,231)``
• #acffec
``#acffec` `rgb(172,255,236)``
• #c0fff0
``#c0fff0` `rgb(192,255,240)``
• #d3fff5
``#d3fff5` `rgb(211,255,245)``
• #e7fff9
``#e7fff9` `rgb(231,255,249)``
• #fafffe
``#fafffe` `rgb(250,255,254)``
Tint Color Variation

# Tones of #00ac84

A tone is produced by adding gray to any pure hue. In this case, #4f5d5a is the least saturated color, while #00ac84 is the most saturated one.

• #4f5d5a
``#4f5d5a` `rgb(79,93,90)``
• #49635d
``#49635d` `rgb(73,99,93)``
• #426a61
``#426a61` `rgb(66,106,97)``
• #3c7064
``#3c7064` `rgb(60,112,100)``
• #357768
``#357768` `rgb(53,119,104)``
• #2e7e6b
``#2e7e6b` `rgb(46,126,107)``
• #28846f
``#28846f` `rgb(40,132,111)``
• #218b72
``#218b72` `rgb(33,139,114)``
• #1a9276
``#1a9276` `rgb(26,146,118)``
• #149879
``#149879` `rgb(20,152,121)``
• #0d9f7d
``#0d9f7d` `rgb(13,159,125)``
• #07a580
``#07a580` `rgb(7,165,128)``
• #00ac84
``#00ac84` `rgb(0,172,132)``
Tone Color Variation

# Color Blindness Simulator

Below, you can see how #00ac84 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.

Monochromacy
• Achromatopsia 0.005% of the population
• Atypical Achromatopsia 0.001% of the population
Dichromacy
• Protanopia 1% of men
• Deuteranopia 1% of men
• Tritanopia 0.001% of the population
Trichromacy
• Protanomaly 1% of men, 0.01% of women
• Deuteranomaly 6% of men, 0.4% of women
• Tritanomaly 0.01% of the population"
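The conversions and the shade/tint ladders above come down to simple per-channel arithmetic. A minimal sketch (our own helper names, plain Python):

```python
def hex_to_rgb(hex_code: str):
    """'#00ac84' -> (0, 172, 132): two hex digits per channel."""
    h = hex_code.lstrip('#')
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def mix(rgb, other, t):
    """Blend rgb toward `other` by fraction t in [0, 1]."""
    return tuple(round(a + (b - a) * t) for a, b in zip(rgb, other))

base = hex_to_rgb('#00ac84')               # (0, 172, 132)
shade = mix(base, (0, 0, 0), 0.5)          # toward black -> a shade
tint = mix(base, (255, 255, 255), 0.5)     # toward white -> a tint
print(base, shade, tint)
```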
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.5075246,"math_prob":0.81749827,"size":3665,"snap":"2021-31-2021-39","text_gpt3_token_len":1598,"char_repetition_ratio":0.13794045,"word_repetition_ratio":0.011111111,"special_character_ratio":0.5533424,"punctuation_ratio":0.23033068,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98626065,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-25T00:18:10Z\",\"WARC-Record-ID\":\"<urn:uuid:9e88ea27-d142-4624-a150-9cc1d3a233fa>\",\"Content-Length\":\"36118\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1550e7e7-d4c0-4c72-926d-b7d3d59a1b32>\",\"WARC-Concurrent-To\":\"<urn:uuid:59066a98-0d64-4328-9479-02a0c4fee12f>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/00ac84\",\"WARC-Payload-Digest\":\"sha1:RTINP7MGDGGAKXIKY4DQFINXATQJO47L\",\"WARC-Block-Digest\":\"sha1:WHYCKNPB6GSPWPPLB5MET74B276IQJH6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057584.91_warc_CC-MAIN-20210924231621-20210925021621-00424.warc.gz\"}"} |
http://downloads.hindawi.com/journals/complexity/2018/1528341.xml | [
"COMPLEXITY Complexity 1099-0526 1076-2787 Hindawi 10.1155/2018/1528341 1528341 Research Article A Comprehensive Algorithm for Evaluating Node Influences in Social Networks Based on Preference Analysis and Random Walk http://orcid.org/0000-0001-8178-1205 Mao Chengying Xiao Weisong Meštrović Ana School of Software and Communication Engineering Jiangxi University of Finance and Economics 330013 Nanchang Chinajxufe.edu.cn 2018 8102018 2018 18 03 2018 03 08 2018 14 08 2018 8102018 2018 Copyright © 2018 Chengying Mao and Weisong Xiao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.\n\nIn the era of big data, social network has become an important reflection of human communications and interactions on the Internet. Identifying the influential spreaders in networks plays a crucial role in various areas, such as disease outbreak, virus propagation, and public opinion controlling. Based on the three basic centrality measures, a comprehensive algorithm named PARW-Rank for evaluating node influences has been proposed by applying preference relation analysis and random walk technique. For each basic measure, the preference relation between every node pair in a network is analyzed to construct the partial preference graph (PPG). Then, the comprehensive preference graph (CPG) is generated by combining the preference relations with respect to three basic measures. Finally, the ranking of nodes is determined by conducting random walk on the CPG. Furthermore, five public social networks are used for comparative analysis. The experimental results show that our PARW-Rank algorithm can achieve the higher precision and better stability than the existing methods with a single centrality measure.\n\nEducation Science Project of Jiangxi ProvinceYB-2015-026Science Foundation of Jiangxi Educational CommitteeGJJ150465Natural Science Foundation of Jiangxi Province20171ACB2103120162BCB23036Jiangxi Social Science Research ProjectTQ-2015-202National Natural Science Foundation of China6176204061462030\n1. Introduction\n\nWith the rapid development of network and information technology, the applications in form of rich media have involved in all aspects of our lives. Accordingly, the interaction and communication between individuals have become more and more convenient and frequent. For example, the platforms such as Facebook, WeChat, QQ, and WhatsApp are very helpful for users to deliver their messages, options, or pictures. As a result, the individuals in the society have been tighted together in an invisible way, that is, the so-called social network [1, 2]. Under the impetus of intelligent mobile terminals like iPhone, the scale of social network has a sharp increase in recent years. As reported by TechCrunch, the monthly active users of Facebook have climbed to 2 billion in the middle of 2017 . Similarly, WeChat, as one of the most impactful mobile products, has monthly users which are over 980 million now . Faced with such a huge and complex social network, we usually feel hard and tricky to analyze its overall features and understand the behaviors of the individuals in it. Consequently, the analysis and modeling of social networks have caused much attention in the recent two decades .\n\nAt the early stage, the studies mainly focus on the static statistical properties that characterize the structure of social networks . 
Some concepts, such as degree distribution, clustering coefficient, and the average path length, have been proposed and widely applied to the measurement of social networks. Although these metrics can reflect the features of the overall network or of a single node well, they usually ignore the dynamic interaction behaviors of the individuals in a network. Nowadays, the structure of social networks is not just a mathematical toy; it has been employed extensively as a model of real-world networks of various types, such as networks of friendships, networks of telephone calls, and networks in epidemiology. In most of these application scenarios, the dynamic features of nodes or communities need to be deeply investigated so as to make scientific decisions. For example, in public opinion emergencies, the recognition of special individuals like opinion leaders and the evaluation of the spreading of their influences can contribute to the understanding and controlling of opinion (or rumor) transmission [8, 9]. Similarly, the identification of influential individuals is also very helpful in controlling disease spreading.

As has been reported, many mechanisms such as cascading, spreading, and synchronizing in social networks are highly affected by a tiny fraction of influential nodes. In other words, identifying influential nodes is an effective way to reveal the potential disciplines behind information, rumor, or disease spreading over social networks. Due to its theoretical and practical significance, how to identify the influential nodes in a social network has been widely investigated in recent years. At present, quite a few centrality indicators have been presented to address this problem. Typically, degree centrality, betweenness centrality [16, 17], and closeness centrality are three well-known measures. However, most of them quantify the influence of nodes in a network from the perspective of a single indicator. Although each single measure is reasonable from its own point of view, it usually lacks the ability of comprehensive evaluation. At the same time, each measure has its own advantages and limitations. Thus, it is more appropriate to consider multiple different measures simultaneously. In this paper, we attempt to integrate the above three representative measures together by preference analysis and then adopt the random walk algorithm to rank the nodes in a network according to their spreading influences. In order to validate the effectiveness of our proposed algorithm, we use the Susceptible-Infected-Recovered (SIR) model to evaluate the rationality of the node ranking results.

Recently, some hybrid approaches have been proposed for the influence maximization problem. In these solutions, several different measures like degree centrality are usually taken into account to design a comprehensive model for evaluating the influence spread of a node. Typically, Jalayer et al. proposed a “greedy TOPSIS and community-based” (GTaCB) algorithm for this problem. It can be seen that the TOPSIS in [19, 20] is a greedy technique; thus, it only generates a local optimal solution for identifying the influential nodes in a social network. By contrast, the random walk technology used in our solution is a global optimization algorithm. In theory, the rank generated by random walk is more reasonable than that of the TOPSIS-based method. In the literature, Ko et al.
proposed the Hybrid-IM algorithm to maximize the influence spread over a social network by combining PB-IM (path-based influence maximization) and CB-IM (community-based influence maximization). In general, it is not easy to collect the information about paths and communities from a social network. By contrast, our algorithm uses three basic and typical measures for the preference analysis, so it can be easily implemented and has a certain advantage in efficiency. The main contribution of our work is the comprehensive evaluation framework for node influences, which combines preference analysis and random walk. Although we only adopt three basic centrality measures in this paper, the measures used in the framework can be replaced and extended according to specific requirements. That is, our approach has good scalability in the integration of multiple measures.

The remainder of this paper is organized as follows. In Section 2, we describe the problem to be solved and review some background knowledge. Section 3 presents the overall framework of a comprehensive algorithm for evaluating node influences and then addresses the technical details. Subsequently, the experimental comparison and analysis are conducted in Section 4. Section 5 discusses the threats to validity and the potential extension. The related studies about the evaluation of node influences are addressed in Section 6. Finally, the conclusions and further research directions are stated in Section 7.

2. Preliminaries

2.1. Problem Description

During the spreading process of diseases or rumors, their influences are usually sparked by one or several initial nodes in a social network. Due to differences in the locations of nodes in the entire network structure, different nodes have different transmission abilities for diseases or rumors and thus exert different influences on the network. Therefore, it is very necessary to evaluate nodes' influences and then rank them. This measurement is helpful for scientific decision-making on social networks, such as the monitoring of public opinion transmission and the controlling of disease propagation.

In this paper, we assume that the initial source of spreading is a single node in the network. Then, the node influence analysis in a social network can be formally described as follows: given a social network represented by a directed graph $G = (N, E)$, for each node $i \in N$, its influence is first measured by considering its location and its connections to other nodes in $G$, and then all nodes are ranked according to their influence metrics. Here, $N$ and $E$ are the node set and edge set of the network, respectively.

It should be noted that information or disease propagation may be caused by several original source nodes in a social network, and hence identifying multiple influential nodes is also an interesting problem. In this paper, however, we mainly focus on the influence evaluation for a single source node.

2.2. Three Typical Measures for Node Influences

In this study, our objective is to design a framework for evaluating node influences in a social network by comprehensively considering some basic measures. In the past, quite a few measures have been presented to capture the importance of each node in a network. Degree centrality, betweenness centrality [16, 17], and closeness centrality are three basic, representative, and widely used measures that reflect the influence of a node.
As a result, the above three measures are very suitable for use in our comprehensive model of influential node identification. Similarly, in the literature, these three measures are also used in MADM (multiple-attribute decision-making) models for identifying influential nodes. Here, we first give a brief review of them.

2.2.1. Degree Centrality

Degree centrality is the earliest and simplest method to depict the influence of a node in a network. For node $i$, the influence is directly reflected by its degree, the so-called degree centrality. Here, it is denoted as $C_D(i)$ and formally defined as

(1) $C_D(i) = \frac{\mathrm{degree}(i)}{n-1}$,

where $\mathrm{degree}(i)$ is the degree of node $i$, and $n$ is the number of nodes in the given network.

Degree centrality measures the node's importance from the perspective of degree. Its inherent limitation lies in that it can only reflect the local structure around a given node, i.e., the node and its neighbors, while the reachability from it to the nodes beyond its neighborhood is completely ignored.

2.2.2. Betweenness Centrality

Betweenness centrality captures how well situated a node is in terms of the paths that it lies on. Specifically, for a node $i$ in network $G$, its betweenness centrality (denoted as $C_B(i)$) is the fraction of shortest paths passing through node $i$ among all shortest paths in network $G$:

(2) $C_B(i) = \sum_{s \neq i \neq t \in N} \frac{\sigma_{st}(i)}{\sigma_{st}}$,

where $\sigma_{st}$ is the number of shortest paths between nodes $s$ and $t$, and $\sigma_{st}(i)$ denotes the number of shortest paths between $s$ and $t$ which pass through node $i$.

It is easy to see that betweenness centrality is a measure reflecting the gateway feature of a node, but it has poor capability to express the strength of the connections from the node of interest to its neighbors.

2.2.3. Closeness Centrality

Closeness centrality measures how close a given node is to all other nodes in a network. For node $i$, its closeness centrality, denoted as $C_C(i)$, can be defined as

(3) $C_C(i) = \frac{n-1}{\sum_{j=1}^{n} d_{ij}}$,

where $d_{ij}$ represents the distance between node $i$ and node $j$, and $n$ is the number of nodes in the network.

According to the definition in (3), closeness centrality can characterize the speed of information propagation from a given node, but it cannot distinguish differences in node location information such as the gateway feature.

Based on the analysis of the above three measures, we can find that each measure has its own specialty for reflecting information (or disease) propagation, but each also has shortcomings. Therefore, combining these representative measures into a comprehensive one is probably a rational way of identifying influential nodes. As mentioned earlier, the basic measures in our framework can be extended or replaced according to specific requirements. Besides the above three centrality measures, quite a few other measures have been presented in recent years, such as diffusion centrality, sociability centrality, and BridgeRank. In fact, all of these basic measures can be applied in our comprehensive framework for evaluating the influences of nodes in a social network. For the sake of simplicity, we only take the three basic and representative centrality measures into consideration in this study.
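The three measures are easy to compute with standard graph tooling. The following is a minimal sketch (assuming Python with the NetworkX library; the variable names are our own) that evaluates them on the Zachary karate club network, one of the benchmarks used later in Section 4:

```python
import networkx as nx

# Zachary karate club network (34 nodes, 78 edges), bundled with NetworkX
G = nx.karate_club_graph()

cd = nx.degree_centrality(G)                         # degree/(n-1), as in (1)
cb = nx.betweenness_centrality(G, normalized=False)  # shortest-path fractions, as in (2)
cc = nx.closeness_centrality(G)                      # (n-1)/sum of distances, as in (3)

# top-5 nodes by each measure (NetworkX labels the nodes 0..33)
for name, c in (("CD", cd), ("CB", cb), ("CC", cc)):
    print(name, sorted(c, key=c.get, reverse=True)[:5])
```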
2.3. Random Walk Model

The random walk model is a special case of a Markov chain, namely a finite and time-reversible Markov chain. It arises in many models in mathematics and physics. In the field of computer science, the random walk is usually modeled in the following way: suppose there is a system with $m$ states, and the initial probability distribution over these states is represented as $\pi_0 = (p_0^1, p_0^2, \ldots, p_0^m)$. In this system, the states can transit to each other. Specifically, if state $i$ has transition probabilities to $k$ other states, the sum of these probabilities should be 1.0, that is, $\sum_{j=1}^{k} p_{ij} = 1$, where $p_{ij}$ is the transition probability from state $i$ to state $j$. The transition probabilities of all state pairs can be represented as a probability matrix of state transitions, i.e., $P$. Thus, the random walk model can be clearly described using matrix notation. Let $\pi_t = (p_t^1, p_t^2, \ldots, p_t^m)$ be the probability distribution over the $m$ states after walking $t$ steps; it can be iteratively calculated from the initial distribution $\pi_0$ as

(4) $\pi_t = \pi_{t-1} P$.

In fact, the random walk model can easily be applied to a directed graph. The nodes of a directed graph can be considered as states, and the connection strength of a directed edge between two nodes is treated as a transition probability. Once an initial distribution over all nodes is determined, the stationary distribution can be obtained through the finite-step transitions shown in (4).
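As a quick illustration, the iteration in (4) takes only a few lines. The sketch below (our own toy example, not from the paper) runs the walk on a three-state chain whose rows each sum to one:

```python
import numpy as np

def random_walk(P: np.ndarray, steps: int) -> np.ndarray:
    """Iterate pi_t = pi_{t-1} P from a uniform start, as in (4)."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])  # uniform initial distribution
    for _ in range(steps):
        pi = pi @ P                             # one transition step
    return pi

# toy 3-state transition matrix; every row sums to 1.0
P = np.array([[0.0, 0.5, 0.5],
              [0.3, 0.0, 0.7],
              [0.6, 0.4, 0.0]])
print(random_walk(P, 50))  # converges toward the stationary distribution
```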
3. Comprehensive Algorithm for Evaluating Node Influences

3.1. The Overall Framework

In this paper, we design a comprehensive algorithm for evaluating node influences by synthetically considering three basic and independent influence measures. These three basic measures are the input data for further processing in our algorithm. Here, we assume that the basic measures $C_D$, $C_B$, and $C_C$ have already been obtained by degree counting and path analysis on the given network according to (1), (2), and (3).

As shown in Figure 1, the procedure of comprehensively ranking nodes according to their influences can be divided into two steps. In the first step, for each basic measure, the metrics of all nodes are regulated into the interval from 0 to 1. Then, the preference relation of each node pair is analyzed by comparing the metrics of the two nodes in the pair. Based on these preference relations, a subgraph of preference relations (also known as a partial preference graph) can be built. In this graph, the nodes are still the nodes of the original network, but each edge represents the preference relation of two nodes with respect to the given basic measure. In the second step, a complete preference relation graph is formed by adding $l$ subgraphs together. In this paper, $l$ is set to 3 because we combine three centrality measures (i.e., $C_D$, $C_B$, and $C_C$) in our algorithm. Then, the complete graph is converted to a matrix, and regularization is performed on it for further computation. Finally, a ranked list of nodes is generated by applying a random walk on the complete model of preference relations.

Figure 1. The overall framework for ranking nodes according to their influences.

It should be noted that, in this paper, only three basic centrality measures are taken into account in the algorithm. However, the above framework is a scalable model for evaluating node influences; besides the basic measures, other advanced measures can also be adopted in it. Each measure has its own advantages and limitations in representing the influences of nodes, so complementarity should be considered when choosing the measures used in the framework.

3.2. The Technical Details

3.2.1. Analysis of Partial Preference Relations

In order to address the technical details of our proposed algorithm, a small network (graph) is used as a running example. As shown in Figure 2, the whole network consists of seven nodes and eight undirected edges. Intuitively, node 3 is the most influential node in this network, and node 1 is the second one. Node 7 should be the weakest one with respect to influence, i.e., the ability of information spreading. Furthermore, node 5 has higher influence than node 7 but lower influence than the other remaining nodes. However, the influences of nodes 2, 4, and 6 are difficult to distinguish by subjective assessment alone.

Figure 2. A running example for evaluating node influences.

According to the definitions in Section 2.2, the three measures of each node in Figure 2 can be calculated as illustrated in Table 1. For the measure of degree centrality ($C_D$), node 3 has the maximum degree (i.e., 4) and node 7's degree is only one. If we rank the nodes in accordance with this indicator, the order can be expressed as: 3 ≻ 1 ≻ 2 ~ 4 ~ 5 ~ 6 ≻ 7. Here, $i ≻ j$ means node $i$ has higher influence than node $j$, and $i \sim j$ means nodes $i$ and $j$ have no significant difference in influence. Although the above ranking result for $C_D$ is consistent with the subjective and intuitive judgment, quite a few nodes have the same degree, so it is hard to provide an accurate order. For the running example in Figure 2, it is difficult to distinguish nodes 2, 4, 5, and 6 in a strictly ordered relation.

Table 1. Three typical measures of node influences.

Measure | Node 1 | Node 2 | Node 3 | Node 4 | Node 5 | Node 6 | Node 7
Degree (CD) | 3 | 2 | 4 | 2 | 2 | 2 | 1
Betweenness (CB) | 5 | 0 | 19 | 3 | 1 | 10 | 0
Closeness (CC) | 0.06 | 0.05454 | 0.075 | 0.05454 | 0.04615 | 0.05454 | 0.0375

With regard to the measure of betweenness centrality ($C_B$), node 3 also has the highest value, but the betweenness values of nodes 2 and 7 are both zero. The rank for $C_B$ is as follows: 3 ≻ 6 ≻ 1 ≻ 4 ≻ 5 ≻ 2 ~ 7. Since betweenness centrality mainly focuses on the connection feature of nodes and ignores the spreading breadth of node influences, the above rank conflicts slightly with the subjective result.

The third measure, closeness centrality ($C_C$), is defined as the inverse of farness, which in turn is the sum of distances from the current node to all other nodes. In this example, the rank of all seven nodes with respect to $C_C$ is 3 ≻ 1 ≻ 2 ~ 4 ~ 6 ≻ 5 ≻ 7. We can see that there are still quite a few nodes at the same position, such as nodes 2, 4, and 6 here.

As mentioned above, each basic measure of node influence merely reflects one aspect of information (or disease) spreading features and behaviors. On the other hand, it may be difficult to distinguish the order of some nodes due to identical metric values. In this paper, we present a new ranking algorithm that comprehensively considers the above three basic measures. The whole algorithm consists of two key steps: partial preference analysis and random walk on the complete preference graph.

To perform the analysis of partial preference relations, for each basic measure, the preferences between nodes in the network need to be judged.

Definition 1 (preference relation). Given a measure of node influence, the preference on a pair of nodes can be modeled in the form of a function $\Psi: N \times N \to \mathbb{R}$, where $\Psi(i,j) > 0$ means node $i$ has stronger influence than node $j$.
The preference function $\Psi(i,j)$ is defined as the difference in the influence measure between node $i$ and node $j$.

Take the measure $C_D$ for example: the preference function can be expressed as $\Psi(i,j) = C_D(i) - C_D(j)$. For nodes 1 and 2 in Figure 2, $\Psi(1,2) = C_D(1) - C_D(2) = 3 - 2 = 1$; thus, node 1 is more preferable than node 2 for information spreading. However, nodes 2 and 4 are equal to each other because $\Psi(2,4) = 0$. Of course, the above definition can also be applied to the other two measures $C_B$ and $C_C$ in a similar way.

The value of $\Psi(i,j)$ indicates the strength of preference, and a value of zero means that there is no preference between the two nodes. Here, we set $\Psi(i,i) = 0$ for all $i \in N$.

Based on the above definition of the preference relation, the partial preference graph for a given influence measure can be further defined as below.

Definition 2 (partial preference graph, PPG). Given a measure of node influence, if the preferences of all node pairs are analyzed, the partial preference graph $PPG = (N, E_p)$ with respect to the given measure can be constructed as follows: $N$ is the set of nodes in the original social network, and $E_p$ represents the set of preference relations between nodes. Specifically, if $\Psi(i,j) > 0$, there is an edge from node $i$ to node $j$, and the weight of the edge is assigned the value of $\Psi(i,j)$. For the sake of simplicity, if the preference of a node pair is zero, the corresponding edge is omitted from the graph.

Since the ranges of the different measures are not identical, it is hard to merge them directly into a comprehensive model for ranking node influences. Thus, we first normalize the values of each measure over all nodes and then generate the corresponding partial preference graph (PPG). Finally, based on the PPG related to each kind of basic measure, the comprehensive preference graph can be built.

In our work, we adopt min-max normalization to transform the original data of each measure in Table 1. The normalization formula is

(5) $v' = \frac{v - v_{\min}}{v_{\max} - v_{\min}}$,

where $v$ is the original measure of a given node, and $v_{\max}$ and $v_{\min}$ are the maximum and minimum of that measure over all nodes in the network, respectively. Based on the above transformation, the original metrics in Table 1 are converted to the new data shown in Table 2.

Table 2. The normalized data of three measures for the running example.

Measure | Node 1 | Node 2 | Node 3 | Node 4 | Node 5 | Node 6 | Node 7
Degree (CD) | 0.6667 | 0.3333 | 1.0 | 0.3333 | 0.3333 | 0.3333 | 0.0
Betweenness (CB) | 0.2632 | 0.0 | 1.0 | 0.1579 | 0.0526 | 0.5263 | 0.0
Closeness (CC) | 0.6 | 0.4544 | 1.0 | 0.4544 | 0.2307 | 0.4544 | 0.0

For the measure of degree centrality ($C_D$), the partial preference graph of this simple network and the corresponding matrix are modeled in Figure 3 according to the normalized data in Table 2. In the preference matrix (see Figure 3(b)), if two nodes have no preference relation, the element related to these two nodes is set to zero.

Figure 3. The PPG and corresponding matrix about degree centrality for the example network: (a) the PPG w.r.t. degree centrality (PPG_D); (b) the matrix of the PPG w.r.t. degree centrality.

In a similar way, the partial preference graphs with respect to the other two measures (i.e., betweenness centrality and closeness centrality) are built and demonstrated in Figure 4. Since the concerns of the three basic measures are different, the PPGs generated according to these measures are not fully consistent. Each measure has its own sense of judging a node's importance for information spreading and also ignores some aspects of influence measurement. For this reason, comprehensively considering the above three measures seems to be more rational than applying them separately.

Figure 4. The PPGs w.r.t. the other two measures for the example network: (a) the PPG for betweenness centrality (PPG_B); (b) the PPG for closeness centrality (PPG_C).
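The construction of a PPG from one measure vector is straightforward. Below is a minimal sketch (our own helper names, assuming NumPy) that normalizes a measure with (5) and builds the PPG matrix of Definition 2 for the degree data of the running example:

```python
import numpy as np

def min_max(v: np.ndarray) -> np.ndarray:
    """Min-max normalization of one measure vector, as in (5)."""
    return (v - v.min()) / (v.max() - v.min())

def ppg_matrix(measure: np.ndarray) -> np.ndarray:
    """PPG as a matrix: entry (i, j) holds Psi(i, j) = v'(i) - v'(j)
    when positive (an edge from the stronger node i to the weaker j),
    and 0 otherwise, following Definition 2."""
    v = min_max(measure)
    diff = v[:, None] - v[None, :]
    return np.where(diff > 0, diff, 0.0)

cd = np.array([3, 2, 4, 2, 2, 2, 1], dtype=float)  # degrees of nodes 1..7
M_D = ppg_matrix(cd)
print(M_D[0, 1].round(4))  # Psi(1,2) = 0.6667 - 0.3333 = 0.3334
```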
3.2.2. Comprehensive Ranking of Node Influences

To perform the comprehensive evaluation of node influences, it is necessary to construct a model by combining the three partial preference graphs together. Here, we define this model as a comprehensive preference graph.

Definition 3 (comprehensive preference graph, CPG). For several PPGs of the same social network, the comprehensive preference graph $CPG = (N, E_{cp})$ can be generated by the following rules: $N$ is the same node set as in the PPGs and the original social network, and the preference on edge $(i,j) \in E_{cp}$ is formed by summing the preferences of this edge over all PPGs. Specifically, for each edge $(i,j) \in E_{cp}$, its preference is calculated as $\Psi_{cp}(i,j) = \sum_{k=1}^{|PPGs|} \Psi_k(i,j)$, where $|PPGs|$ represents the number of PPGs, and $\Psi_k(i,j)$ is the preference of edge $(i,j)$ in the $k$-th PPG.

To deepen the understanding of the definition of the CPG, we use the PPGs of the three different measures to illustrate its construction. Here, we denote the preferences of edge $(i,j)$ in the three PPGs as $\Psi_D(i,j)$, $\Psi_B(i,j)$, and $\Psi_C(i,j)$, respectively. If these preferences are concordant, $\Psi_{cp}(i,j)$ can be obtained directly by addition. For example, $\Psi_{cp}(1,2) = \Psi_D(1,2) + \Psi_B(1,2) + \Psi_C(1,2) = 0.3334 + 0.2632 + 0.1456 = 0.7422$. However, for the edge $(1,6)$, the corresponding preferences in $PPG_D$ and $PPG_C$ are discordant with that in $PPG_B$. Thus, one edge from node 1 to node 6 and another edge in the opposite direction need to be built, that is, $\Psi_{cp}(1,6) = \Psi_D(1,6) + \Psi_C(1,6) = 0.3334 + 0.1456 = 0.4790$, and $\Psi_{cp}(6,1) = \Psi_B(6,1) = 0.2631$.

It is not hard to see that, in the above construction procedure of the CPG, all three PPGs are regarded as equally important. In real application scenarios, if some basic measures need to be weighted differently, the overall preference of an edge in the CPG can be defined as $\Psi_{cp}(i,j) = \sum_{k=1}^{|PPGs|} w_k \Psi_k(i,j)$, where $w_k$ is the importance weight of the $k$-th basic measure (or PPG). In general, if a measure needs to be treated as a key indicator, a high value is assigned to $w_k$; otherwise, a small value.

Obviously, in the generated CPG, the sum of the outgoing edges of a node may not be equal to 1.0. To facilitate the subsequent operations, we first regularize the CPG according to the following rule.

Definition 4 (regularized CPG, CPGr). Given a CPG, suppose there are $\tau$ outgoing edges from node $i$; then the preference of any edge from that node (i.e., $\Psi_{cp}(i,j)$, $1 \le j \le \tau$) can be regularized as below. The transformed CPG is denoted as the regularized CPG:

(6) $\Psi'_{cp}(i,j) = \frac{\Psi_{cp}(i,j)}{\sum_{k=1}^{\tau} \Psi_{cp}(i,k)}$.

Based on the above definition, the final regularized comprehensive preference graph (i.e., CPGr) of the running example can be constructed as in Figure 5(a). Meanwhile, the matrix form of the CPGr is illustrated in Figure 5(b). Obviously, in each row of the comprehensive preference matrix $M$, the sum of the preference values is equal to 1.0. In other words, the matrix $M$ satisfies the basic property of a Markov chain.

Figure 5. The regularized CPG and the corresponding matrix for the example network: (a) the regularized CPG (CPGr); (b) the matrix of the regularized CPG.
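A small sketch of Definition 3 and Definition 4 (again with our own helper names, building on ppg_matrix from the previous sketch): the PPG matrices are summed, optionally with weights, and each nonzero row is then divided by its row sum as in (6):

```python
import numpy as np

def cpg_matrix(*ppgs: np.ndarray, weights=None) -> np.ndarray:
    """Weighted sum of PPG matrices (Definition 3), followed by row
    regularization as in (6); all-zero rows (nodes with no outgoing
    preference edges) are left as zeros."""
    weights = weights or [1.0] * len(ppgs)
    M = sum(w * P for w, P in zip(weights, ppgs))
    s = M.sum(axis=1, keepdims=True)
    return np.divide(M, s, out=np.zeros_like(M), where=s > 0)
```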
As mentioned earlier, the random walk model can be applied to a directed graph to support scientific ranking decisions. In this paper, we apply a random walk to the comprehensive preference graph to rank the node influences in a social network. In this application, each node is attached with an importance factor, and the preference between two nodes is considered the transition probability of node importance. Our goal is to obtain a relatively stable probability distribution over the nodes through the iterations of the random walk, where the probability is interpreted as the importance or influence of each node. Since the probability of a node reflects its importance or influence, the final stable probability distribution can be used to rank the nodes.

For the example social network, the rank of the seven nodes can be generated by applying a random walk to its regularized CPG or the corresponding matrix $M$ (refer to Figure 5). Initially, the seven nodes in the graph are considered equally important; that is, the importance of each node is set to $1/7$. Thus, the initial vector reflecting the nodes' importance (or influences) can be represented as $\pi_0 = (1/7, 1/7, 1/7, 1/7, 1/7, 1/7, 1/7)$. Then, the vector of node influences is iteratively updated as

(7) $\pi_1 = \pi_0 M$, $\pi_2 = \pi_1 M$, $\ldots$, $\pi_t = \pi_{t-1} M$.

When the step number $t$ reaches 10, the achieved importance distribution of the seven nodes is $\pi_{10}$ = (2.2246e−11, 7.1884e−09, 0.0000e+00, 6.0979e−11, 1.2118e−08, 2.4453e−11, 1.2488e−07). In the generated vector, a node with a lower value has a better rank position. Therefore, the rank of the seven nodes with respect to their influences is $R = (3, 1, 6, 4, 2, 5, 7)$, i.e., 3 ≻ 1 ≻ 6 ≻ 4 ≻ 2 ≻ 5 ≻ 7. With reference to the social network in Figure 2, the above ranking result reasonably reflects the influences or importance of the nodes in it.

3.3. Algorithm Description

Based on the above technical framework and example illustration, we now describe the ranking algorithm for node influences based on preference analysis and random walk (abbreviated as PARW-Rank, Algorithm 1).

Algorithm 1: PARW-Rank Algorithm.

Input: (1) the degree measures (CD) of all nodes in the given network;
(2) the betweenness measures (CB) of all nodes;
(3) the closeness measures (CC) of all nodes;
(4) the step number (T) of iterations in the random walk.
Output: the rank (R) of nodes in the social network.

1. for each basic measure in {CD, CB, CC} do
2.   apply min-max normalization to convert the current measure vector;
3.   analyze the preference relation of each node pair;
4.   build a partial preference graph (PPG) based on the above preference relations;
5.   represent the current PPG as the corresponding matrix, i.e., MD, MB or MC;
6. end for
7. combine the three basic PPGs together to form a CPG; its matrix (Msum) is the weighted sum of MD, MB and MC;
8. apply the regularization to each row of Msum to form a regularized matrix M;
9. set π0 = (1/n, 1/n, …, 1/n) as the initial probabilities for the n nodes in the network;
10. for t from 1 to T do
11.   apply the rule πt = πt−1 M to update the vector of probabilities;
12. end for
13. generate the ranking (R) of node influences by sorting πT in ascending order;
14. return R;
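For concreteness, here is a compact, self-contained sketch of the whole pipeline (our own implementation of Algorithm 1, not the authors' code), run on the measure vectors of Table 1. Under these assumptions it should reproduce the rank R = (3, 1, 6, 4, 2, 5, 7) derived above:

```python
import numpy as np

def parw_rank(measures, steps=10):
    """PARW-Rank sketch: PPGs from normalized measures (lines 1-6),
    CPG with equal weights plus row regularization (lines 7-8), and
    a random walk followed by an ascending sort (lines 9-13)."""
    vs = [(m - m.min()) / (m.max() - m.min()) for m in measures]
    n = len(vs[0])
    M = np.zeros((n, n))
    for v in vs:                          # sum the PPGs into the CPG
        d = v[:, None] - v[None, :]
        M += np.where(d > 0, d, 0.0)
    s = M.sum(axis=1, keepdims=True)      # regularize rows, as in (6)
    M = np.divide(M, s, out=np.zeros_like(M), where=s > 0)
    pi = np.full(n, 1.0 / n)              # line 9: uniform start
    for _ in range(steps):                # lines 10-12: random walk
        pi = pi @ M
    return list(np.argsort(pi) + 1)       # line 13: ascending order

cd = np.array([3, 2, 4, 2, 2, 2, 1.0])
cb = np.array([5, 0, 19, 3, 1, 10, 0.0])
cc = np.array([0.06, 0.05454, 0.075, 0.05454, 0.04615, 0.05454, 0.0375])
print(parw_rank([cd, cb, cc]))  # expected: [3, 1, 6, 4, 2, 5, 7]
```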
The algorithm takes the three basic measures and the step number of the random walk iteration as input data and outputs the sorted sequence of nodes according to their influences. In lines 1–6, a partial preference graph is generated according to the preference relations under each basic measure. For the three basic measures CD, CB, and CC, the corresponding PPGs are represented in the form of matrices, that is, MD, MB, and MC, respectively. Subsequently, the three PPGs are combined to form a comprehensive preference graph (line 7). In fact, this merging procedure can be implemented by matrix operations: the matrix of the CPG (i.e., Msum) can be viewed as the weighted sum of MD, MB, and MC. In this paper, the weights of the three basic measures are equal to each other. Of course, the CPG (or Msum) must be regularized before the application of the random walk (line 8). For the random walk, an initial vector π0 is prepared first; in this vector, the probability of each node is set to 1/n, where n is the total number of nodes in the given social network. Subsequently, the iterations of the random walk are performed on the regularized CPG (lines 10–12). Finally, in line 13, the rank of nodes (R) is obtained by sorting the probability vector output by the random walk procedure.

4. Experimental Analysis

4.1. Experimental Data and Setup

In order to validate the effectiveness of our proposed algorithm for evaluating node influences, six public social networks are adopted in the experimental analysis. The basic features of these networks are shown in Table 3, where n and e represent the number of nodes and edges in the network, respectively. In addition, ⟨k⟩ and kmax are the average and maximum degrees of the nodes, respectively. These networks are available on two public dataset websites (http://wiki.gephi.org/index.php/datasets, http://www.cs.bris.ac.uk/~steve/networks/peacockpaper).

Table 3. The basic statistical features of six real-life networks.

Network | n | e | ⟨k⟩ | kmax
ARPA | 21 | 26 | 2.48 | 4
ChenNet | 23 | 40 | 3.48 | 8
Karate | 34 | 78 | 4.47 | 15
PolBooks | 105 | 441 | 8.4 | 25
Airlines | 235 | 1297 | 11.04 | 130
Email | 1133 | 5451 | 9.62 | 71

The ARPA (Advanced Research Projects Agency) network is a distributed computer network system, in which there are 21 computer terminals and 26 links between them. The second network (denoted as ChenNet here) is provided by Chen et al.; their work is also on the topic of evaluating node influences, so the network is suitable for the experimental analysis in this paper. Karate is the well-known Zachary karate club network; it captures 34 members of a karate club and contains 78 pairwise links between members. PolBooks (with 105 nodes and 441 edges) is a network of books about US politics published around the 2004 presidential election and sold by the online bookseller Amazon; the edges in the network represent frequent copurchasing of books by the same buyers. The Airlines dataset is a network (with 235 nodes and 1297 edges) of US domestic airline traffic, in which nodes represent airports and edges are airline routes. The Email network is generated from email data of a large European research institution; it has 1133 nodes and 5451 edges in total.

For the comparative analysis, our algorithm and the three algorithms based on the basic measures were all implemented in the Java programming language on the Eclipse platform with JDK 1.7. The experiments were performed on an Intel Core i5 CPU 3.2 GHz machine with 4 GB RAM running Windows 7.

4.2. SIR Model

To verify the correctness and rationality of our algorithm, a reference rank of nodes with respect to their influences is needed for the evaluation. Here, we use the results of the Susceptible-Infected-Recovered (SIR) epidemic model as the expected rank. In the SIR model, each node is in one of three statuses, i.e., susceptible (S), infected (I), and recovered (R).
During epidemic spreading on networks, set S contains the individuals (or nodes) susceptible (not yet infected) to the disease; set I includes the nodes that have been infected and are able to spread the disease to susceptible individuals; and set R contains the nodes that have recovered and will never be infected again.

At each step, for each infected node, one of its susceptible neighbors is randomly infected with probability α (in our experiments, we set α = 1). At the same time, every infected node has a chance β to recover. Once a node has recovered, it cannot be infected again and no longer infects other susceptible nodes. In our work, the recovering probability is β = 1/⟨k⟩, where ⟨k⟩ is the average degree of the given network. The spreading process terminates when there is no infected node left in the network.

For each simulation run, the total number of infected and recovered nodes for a given initially infected node can be counted. After T_rep repeated trials, the influence of each initial node can be collected, and the expected rank of all nodes in the network can be derived. In our experiments, T_rep is set to 1000 to calculate the statistical influence of each node.

4.3. Evaluation Metrics

When evaluating node influences, both our proposed algorithm and the three basic methods produce a ranked list of nodes. Hence, the precision of influence evaluation should be measured by analyzing the similarity between the generated rank and the SIR model-based simulation result. Here, we refer to two metrics from the field of information retrieval to quantify the precision of each evaluation method.

Suppose $R$ is the ranked list generated by an influence evaluation method, and $\hat{R}$ is the expected ranked list of nodes according to the SIR model; the $P@k$ metric for the ranking problem can then be defined as

(8) $P@k(R, \hat{R}) = \frac{|R_k \cap \hat{R}_k|}{k}$,

where $R_k$ is the sublist of the first $k$ elements of $R$, $|R_k \cap \hat{R}_k|$ returns the number of common elements appearing in both $R_k$ and $\hat{R}_k$, and $k$ can be set to any value from 1 to the length of the ranked list.

Further, the average precision (AP) can be defined as the average of $P@k$, that is,

(9) $AP(R, \hat{R}) = \frac{\sum_{k=1}^{n} P@k(R, \hat{R})}{n}$,

where $n$ is the total number of elements in the ranked list. For the problem in this paper, it refers to the number of nodes in the given social network.

Besides AP, we also adopt the Kendall rank correlation coefficient (KRCC) to judge the consistency of the two ranked lists, i.e., the generated rank and the expected rank:

(10) $KRCC(R, \hat{R}) = \frac{n_c - n_d}{n(n-1)/2}$,

where $n_c$ is the number of concordant pairs between the two ranked lists, and $n_d$ is the number of discordant pairs. If the preference relation of the two nodes in a pair is consistent in both lists, it is called a concordant pair; otherwise, it is discordant.

In our experiments, the rank of the SIR model-based simulation was used as the expected result, and the above two metrics between the ranked list of each method and the expected rank were calculated, respectively.
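The two metrics are easy to implement directly from (8)–(10). The following is a small self-contained sketch (our own function names) for computing P@k, AP, and KRCC between two ranked lists:

```python
from itertools import combinations

def p_at_k(r, r_hat, k):
    """P@k of (8): size of the overlap of the two top-k sublists, over k."""
    return len(set(r[:k]) & set(r_hat[:k])) / k

def average_precision(r, r_hat):
    """AP of (9): mean of P@k over k = 1..n."""
    n = len(r)
    return sum(p_at_k(r, r_hat, k) for k in range(1, n + 1)) / n

def krcc(r, r_hat):
    """KRCC of (10): (concordant - discordant) pairs over n(n-1)/2."""
    pos_r = {v: i for i, v in enumerate(r)}      # position of each node in r
    pos_h = {v: i for i, v in enumerate(r_hat)}  # position in the expected rank
    nc = nd = 0
    for u, v in combinations(r, 2):
        s = (pos_r[u] - pos_r[v]) * (pos_h[u] - pos_h[v])
        nc, nd = nc + (s > 0), nd + (s < 0)
    n = len(r)
    return (nc - nd) / (n * (n - 1) / 2)
```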
4.4. Experimental Results

The results for two relatively simple networks are addressed first in detail, and then the ranking results of the other four networks are discussed. Finally, the effectiveness of our proposed evaluation algorithm for node influences is summarized.

4.4.1. The Result of the ARPA Network

For the ARPA network shown in Figure 6, the ranking results of the three basic methods and our algorithm are generated and illustrated in Table 4. To facilitate the comparison, the expected rank based on the SIR model is also listed.

Figure 6. Topological structure of the ARPA network.

Table 4. The ranking results and precisions of the four algorithms and the SIR model for the ARPA network.

Algorithm | Ranking result | AP | KRCC
SIR model | 3, 14, 2, 15, 17, 19, 12, 16, 18, 13, 1, 4, 6, 20, 5, 11, 21, 7, 10, 8, 9 | — | —
Degree (CD) | 2, 3, 14, 6, 12, 15, 19, 1, 4, 5, 7, 8, 9, 10, 11, 13, 16, 17, 18, 20, 21 | 0.71 | 0.36
Betweenness (CB) | 3, 12, 19, 6, 4, 14, 13, 5, 11, 2, 18, 10, 7, 20, 9, 21, 8, 17, 15, 16, 1 | 0.64 | 0.24
Closeness (CC) | 3, 19, 12, 18, 4, 13, 14, 17, 2, 20, 5, 6, 11, 15, 16, 21, 1, 7, 10, 9, 8 | 0.73 | 0.59
PARW-Rank | 3, 12, 19, 14, 2, 6, 4, 13, 18, 5, 20, 11, 21, 10, 7, 17, 15, 16, 9, 1, 8 | 0.73 | 0.45

Based on the results shown in Table 4, both the CC-based method and our PARW-Rank algorithm achieve the highest AP value (i.e., 0.73). For the other metric, KRCC, although the corresponding value of the CC-based method is the best (0.59), the result of our algorithm on this metric is still higher than those of the CD-based and CB-based methods.

Briefly speaking, the result of the PARW-Rank algorithm is as good as, or slightly worse than, that of the CC-based method on the two metrics, respectively. However, its performance is clearly better than those of the other two basic methods for the ARPA network. Therefore, PARW-Rank can still be considered an effective method for evaluating node influences.

4.4.2. The Result of the ChenNet Network

The network in Figure 7 was designed by Chen et al.; here, we use it as a benchmark network to evaluate the effect of node influence ranking algorithms. Taking the rank generated by the SIR model as the expected result (refer to Table 5), our PARW-Rank algorithm clearly outperforms the other three basic methods on both the AP and KRCC metrics.

Figure 7. The ChenNet network referred from the literature.

Table 5. The ranking results and precisions of the four algorithms and the SIR model for the ChenNet network.

Algorithm | Ranking result | AP | KRCC
SIR model | 23, 11, 22, 18, 16, 20, 17, 14, 15, 13, 12, 21, 10, 19, 1, 6, 8, 3, 4, 7, 2, 9, 5 | — | —
Degree (CD) | 1, 23, 11, 12, 13, 14, 15, 16, 17, 18, 20, 21, 22, 3, 8, 10, 19, 2, 4, 6, 7, 9, 5 | 0.73 | 0.54
Betweenness (CB) | 1, 10, 6, 23, 11, 22, 21, 20, 12, 14, 16, 18, 17, 15, 13, 19, 3, 8, 2, 4, 5, 7, 9 | 0.64 | 0.49
Closeness (CC) | 10, 23, 11, 6, 22, 1, 20, 12, 14, 21, 16, 15, 17, 18, 13, 19, 3, 8, 2, 4, 7, 9, 5 | 0.73 | 0.55
PARW-Rank | 23, 1, 11, 22, 20, 21, 12, 14, 10, 16, 18, 17, 15, 13, 19, 6, 3, 8, 2, 4, 7, 9, 5 | 0.82 | 0.68

For the AP metric, PARW-Rank's value is 0.82, the best among all four methods. The APs of the CD-based and CC-based methods are both 0.73, so the performances of these two methods for ranking node influences are lower than that of our algorithm. The worst is the CB-based method, whose AP value is only 0.64.

When considering the other metric, i.e., KRCC, the order of the ranking precisions of the four methods is generally similar to the case of the AP metric. Our PARW-Rank algorithm still achieves the best result (i.e., 0.68), and the results of the CC-based and CD-based methods are 0.55 and 0.54, respectively. The worst result, that of the CB-based method, is only 0.49.

In summary, for the ChenNet network, our comprehensive ranking algorithm generates a more precise result than the other three basic methods for evaluating node influences in a network.
4.4.3. The Results of the Other Four Networks

Similarly, the comparison was also performed on the remaining four social networks, and the corresponding results are listed in Table 6.

Table 6. The ranking results and precisions for the other four networks.

Network | Algorithm | Ranking result (top 10 nodes) | AP | KRCC
Karate | Degree (CD) | 1, 34, 33, 3, 2, 4, 31, 9, 14, 23, … | 0.70 | 0.43
Karate | Betweenness (CB) | 1, 34, 33, 3, 31, 9, 2, 14, 19, 6, … | 0.67 | 0.40
Karate | Closeness (CC) | 1, 3, 31, 9, 34, 14, 33, 19, 4, 30, … | 0.72 | 0.42
Karate | PARW-Rank | 1, 3, 34, 33, 31, 9, 14, 2, 4, 32, … | 0.72 | 0.48
PolBooks | Degree (CD) | 8, 12, 3, 87, 68, 70, 77, 29, 11, 38, … | 0.54 | 0.08
PolBooks | Betweenness (CB) | 29, 52, 9, 12, 68, 80, 3, 59, 8, 7, … | 0.54 | 0.06
PolBooks | Closeness (CC) | 29, 59, 7, 52, 9, 80, 14, 68, 4, 32, … | 0.53 | 0.06
PolBooks | PARW-Rank | 29, 68, 9, 12, 8, 3, 70, 59, 80, 87, … | 0.55 | 0.09
Airlines | Degree (CD) | 137, 51, 81, 131, 71, 42, 155, 85, 201, 193, … | 0.60 | 0.36
Airlines | Betweenness (CB) | 137, 51, 81, 201, 131, 71, 19, 174, 42, 119, … | 0.60 | 0.30
Airlines | Closeness (CC) | 137, 51, 81, 131, 71, 42, 155, 201, 85, 193, … | 0.59 | 0.33
Airlines | PARW-Rank | 137, 51, 81, 131, 71, 201, 42, 119, 193, 155, … | 0.61 | 0.36
Email | Degree (CD) | 105, 333, 16, 23, 42, 41, 196, 233, 21, 76, … | 0.56 | 0.22
Email | Betweenness (CB) | 333, 105, 23, 578, 76, 233, 135, 41, 355, 42, … | 0.55 | 0.20
Email | Closeness (CC) | 333, 23, 105, 42, 41, 76, 233, 52, 135, 378, … | 0.56 | 0.21
Email | PARW-Rank | 333, 105, 23, 42, 41, 233, 76, 135, 134, 52, 378, … | 0.56 | 0.22

For the Karate network, the APs of our PARW-Rank algorithm and the CC-based method are both 0.72, and the corresponding values of the CD-based and CB-based methods are 0.70 and 0.67, respectively. Obviously, the dominance relation of the four methods with respect to AP is PARW-Rank ~ CC ≻ CD ≻ CB, where the symbol "~" means two methods are comparable and "≻" represents a superiority relation (i.e., being better than). When considering the other metric (i.e., KRCC), the best value is the result (0.48) of our PARW-Rank algorithm, and the worst is the result of the CB-based method, i.e., 0.40. In addition, the KRCCs of the CD-based and CC-based methods are 0.43 and 0.42, respectively. Hence, the order of the four methods with respect to KRCC is PARW-Rank ≻ CD ≻ CC ≻ CB.

For the PolBooks network, the best method for evaluating node influences is our PARW-Rank algorithm; its AP and KRCC are 0.55 and 0.09, respectively. The second one is the CD-based method, whose AP and KRCC are 0.54 and 0.08, respectively. Next is the CB-based method: its AP is the same as that of the CD-based method, but its KRCC is only 0.06. Although the KRCC of the CC-based method is also 0.06, its AP is the lowest of all four methods, only 0.53. Therefore, the dominance relation of the four methods can be expressed as PARW-Rank ≻ CD ≻ CB ≻ CC.

For the third network, i.e., Airlines, the best one is still the PARW-Rank algorithm; its two evaluation metrics (AP and KRCC) are 0.61 and 0.36, respectively. Although the KRCC of the CD-based method is also 0.36, its AP is 0.60 and thus worse than that of PARW-Rank. For the AP metric, the corresponding values of the other two methods (the CB-based and CC-based methods) are 0.60 and 0.59, respectively. Thus, the order of the four methods with respect to this metric is PARW-Rank ≻ CD ~ CB ≻ CC. For the KRCC metric, the values of the CB-based and CC-based methods are 0.30 and 0.33, respectively.
Therefore, the dominance relation for this metric is PARW-Rank ~ CD ≻ CC ≻ CB.

For the last network, Email, the APs of PARW-Rank, the CD-based method, and the CC-based method are the same, i.e., 0.56, while the CB-based method scores 0.55. Hence, for the metric AP, the relation of the four methods is PARW-Rank ~ CD ~ CC ≻ CB. With regard to the other metric, KRCC, the values of PARW-Rank and the CD-based method are both 0.22, and the CC-based method's value is 0.21. The worst is the CB-based method, at only 0.20. It is easy to see that the order of the methods with respect to KRCC is PARW-Rank ~ CD ≻ CC ≻ CB.

4.4.4. Summary of Experimental Results

Based on the above results, we summarize the dominance relations of the four methods in Table 7. In most cases, PARW-Rank takes first place, or shares first place with CD or CC, for precisely identifying influential nodes in a social network. The only exception appears in the ARPA network for the metric KRCC; in this case, PARW-Rank is worse than the CC-based method but still better than the CD-based and CB-based methods.

The summary of the effects of the four methods for ranking node influences.

Network Dominance relation of four methods
AP KRCC
ARPA PARW-Rank ~ CC ≻ CD ≻ CB CC ≻ PARW-Rank ≻ CD ≻ CB
ChenNet PARW-Rank ≻ CC ~ CD ≻ CB PARW-Rank ≻ CC ≻ CD ≻ CB
Karate PARW-Rank ~ CC ≻ CD ≻ CB PARW-Rank ≻ CD ≻ CC ≻ CB
PolBooks PARW-Rank ≻ CD ~ CB ≻ CC PARW-Rank ≻ CD ≻ CB ≻ CC
Airlines PARW-Rank ≻ CD ~ CB ≻ CC PARW-Rank ~ CD ≻ CC ≻ CB
Email PARW-Rank ~ CD ~ CC ≻ CB PARW-Rank ~ CD ≻ CC ≻ CB

Specifically, for the metric AP, the PARW-Rank algorithm is better than the other three methods on the ChenNet, PolBooks, and Airlines networks. At the same time, PARW-Rank is comparable to the CC-based method on the ARPA and Karate networks, and comparable to the CD-based method on the Email network.

For the other metric, KRCC, apart from the above-mentioned exception, the PARW-Rank algorithm outperforms the other three methods on the ChenNet, Karate, and PolBooks networks. For the remaining two networks, PARW-Rank is comparable to the CD-based method.

Based on the experimental analysis of the six real-life networks, we conclude that our comprehensive ranking algorithm (PARW-Rank) is more effective than the three basic methods for evaluating node influences in a social network.

5. Discussion

5.1. Threats to Validity

Threats to construct validity concern the relation between theory and observation. In this study, we focus on the design of a new algorithm for comprehensively evaluating the influences of nodes in a social network. The SIR epidemic model was used to generate the reference rank of nodes; other epidemic models, such as SI and SIS , have also been adopted for influential node identification, and using them may give different results. Similarly, although the average precision (AP) and Kendall rank correlation coefficient (KRCC) are two well-known metrics for comparing ranked sequences, other metrics are available for this purpose, and the experimental results could change under them.
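For concreteness, the following Python sketch shows one common way an SIR simulation can score a node's influence; the reference rank is then obtained by sorting all nodes on this score. The helper name and the parameter values are illustrative only, not the exact settings used in our experiments.

import random
import networkx as nx

def sir_influence(G, seed, beta=0.1, gamma=1.0, trials=100):
    # Average number of ever-infected nodes over repeated SIR runs
    # seeded at a single node; a larger value means a more influential node.
    total = 0
    for _ in range(trials):
        infected, recovered = {seed}, set()
        while infected:
            new_infected = set()
            for u in infected:
                for v in G.neighbors(u):
                    if v not in infected and v not in recovered and random.random() < beta:
                        new_infected.add(v)
            for u in list(infected):
                if random.random() < gamma:  # infected nodes recover
                    infected.discard(u)
                    recovered.add(u)
            infected |= new_infected
        total += len(recovered)
    return total / trials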
Threats to external validity concern the generalization of our results to other situations. As mentioned in Section 3.1, although only three basic centrality measures are used in our framework to rank influential nodes, the measures in the framework can be flexibly replaced. However, the effect of applying other basic measures in our algorithm has not yet been investigated. On the other hand, six public social networks are adopted in the experiments to validate the effectiveness of our proposed algorithm; evaluating more social networks, especially larger ones, would strengthen the evidence for the algorithm's scalability.

Threats to internal validity concern factors that could influence our experimental results. We have carefully inspected the implementation code of our algorithm to ensure the reliability of the experimental results. In this study, we treat the three basic measures equally in the algorithm. In fact, each of these measures may play a different role in identifying influential nodes, and assigning them different weights may produce different results.

5.2. Potential Extension of the Algorithm

As pointed out in the above subsection, our algorithm faces a potential threat to scalability. The threat comes mainly from two aspects: the computation overhead, and the robustness of the computation result.

In our algorithm, both the PPG and the CPG are represented as matrices. For a large social network, the corresponding matrix of the PPG or CPG is correspondingly high-dimensional, and a high-dimensional matrix leads to heavy computational overhead for matrix manipulation. Since the subsequent random walk is performed on the matrix of the CPG, the computation cost will clearly increase as the size of the social network grows. To keep the computation in our algorithm lightweight, it is necessary to build reduced versions of the PPG and CPG for large social networks.

As shown in (4), for a social network with $n$ nodes, the final distribution vector can be represented as $\pi_t = (p_t^1, p_t^2, \dots, p_t^n)$. Obviously, for any two elements $p_t^i$ and $p_t^j$ in $\pi_t$, the difference between them narrows as the size of the social network (i.e., $n$) increases. When the size becomes very large, the difference will be very small, and the ranking result will consequently become very sensitive. From this perspective, it is also necessary to effectively control the size of the CPG.

Here, we provide a preliminary solution for large-scale networks. Suppose $f$ basic measures are adopted in the algorithm; then, for each measure $m_i$ ($1 \le i \le f$), we select the top-$k$ nodes from the given social network and denote this set as $S_{topk}(m_i)$. In practice, $k$ can be set as a ratio of the size of the social network, such as 5% or 10%. Then, the union over all measures is calculated as $S_{topk} = \bigcup_{i=1}^{f} S_{topk}(m_i)$. Subsequently, the PPG can be built on $S_{topk}$ for each basic measure, and the final reduced CPG can be generated accordingly. Obviously, the cardinality of $S_{topk}$ is far less than the size of the social network, so influential node identification based on the reduced CPG saves a great deal of computation cost. Meanwhile, this approximation should not significantly affect the identification of the most influential nodes. A minimal sketch of the reduction is given below.
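The following Python sketch illustrates the top-k union step, assuming an undirected networkx graph and the three centralities used in this paper; the function name and the 5% default ratio are illustrative only.

import networkx as nx

def top_k_union(G, ratio=0.05):
    # Union of the top-k nodes under each basic centrality measure;
    # the reduced PPGs and CPG are then built on this node subset.
    k = max(1, int(ratio * G.number_of_nodes()))
    measures = [
        nx.degree_centrality(G),       # local measure
        nx.betweenness_centrality(G),  # global, path-based
        nx.closeness_centrality(G),    # global, distance-based
    ]
    s_topk = set()
    for scores in measures:
        ranked = sorted(scores, key=scores.get, reverse=True)
        s_topk.update(ranked[:k])      # S_topk(m_i) for this measure
    return s_topk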
6. Related Work

Nowadays, the Internet has been applied to all aspects of our lives. Accordingly, interactions between individuals on the Internet are becoming more frequent and plentiful, giving rise to the so-called social network . Since social networks play a great role in economic, social, and even security activities in the real world, it is necessary to understand the mechanisms behind them, such as community structure, evolution, and information propagation.

In the early stage, research mainly focused on the static features and structures of social networks . For example, the power-law distribution of node degree was discovered as an important property of most networks . The distance between node pairs was also analyzed, and the results exhibited the small-world phenomenon: individuals in a network can reach each other in relatively few steps . Furthermore, the effect of node clustering was measured by metrics such as the clustering coefficient, and several community discovery algorithms were proposed to understand the connection strength between nodes in a network [39, 40].

Besides these static features, dynamic issues, such as network evolution, information diffusion, and cascading failure, can help researchers better explore the rules behind social networks. In recent years, the problem of identifying influential nodes has attracted wide attention . The influence of a node is usually reflected by its ability to spread information. Remarkably, centrality has been viewed as an important indicator of information spreading, and quite a few centralities have been defined . Since nodes with higher degree usually have a stronger ability to spread information, node degree is used as a centrality (i.e., degree centrality ) to characterize the importance of a node in a network. The significant advantage of this measure is that it can be obtained easily; however, it considers only one hop when evaluating the information-spreading ability of a node, so its precision is limited in most cases. Enhanced versions, such as semilocal centrality and node diversity , were also proposed for identifying influential nodes in networks. Although these measures provide more accurate predictions, they only consider information spreading from a local point of view. In fact, to evaluate the diffusion or propagation of information, the topological information of the whole network (i.e., global metrics) should be taken into account.

Since nodes with high betweenness often play the role of gateways in a social network, betweenness is viewed as an important indicator of the information-spreading ability of a node, and betweenness centrality [16, 17] was accordingly defined for identifying influential nodes. However, this metric considers only the bridging function of a node and fails to capture its outward diffusion capability. To account for the diffusion speed of information, closeness centrality , defined based on the distance from a given node to all other nodes, was introduced to make up for the shortcomings of betweenness centrality. In addition, other centralities such as Physarum centrality and tunable path centrality were proposed based on the paths between node pairs. All of the above methods have to compute paths between nodes, so their overhead is much higher than that of local metrics such as degree centrality.
Moreover, while these path-based metrics may be useful for evaluating the speed of information diffusion and the importance of a node, they are not good at describing the breadth of information spreading. In this paper, we merge a local metric (degree centrality) and global metrics (betweenness centrality and closeness centrality) to construct a preference relation model (i.e., the comprehensive preference graph) for ranking node influences. Because our PARW-Rank algorithm takes all three typical metrics into account, it can obtain much better performance.

As a classical algorithm for ranking Web pages, PageRank has achieved great success in search-engine applications. Intuitively, it can be adopted for evaluating node influences in a social network, which has been confirmed by some previous studies [42, 49]. Furthermore, Lü et al. proposed a simple variant of PageRank to identify influential spreaders in directed networks. In this improved model, named LeaderRank, a ground node connected to every other node by a bidirectional link is introduced into the original network, and a random walk process is then applied to rank nodes according to their influences . The experimental results show that LeaderRank produces more stable ranking results and converges faster than PageRank. Although all these PageRank-based methods use the random walk algorithm, they perform the ranking on the original social network. By contrast, our PARW-Rank algorithm applies the random walk technique to the comprehensive preference graph (CPG) rather than to the original social network. In our algorithm, although the CPG has the same node set as the original network, the directed edges in the CPG represent preference relations, which are determined by combining three basic centralities (degree, betweenness, and closeness centrality). In other words, the rank of nodes in our algorithm is based not directly on the topology of the network but on the newly generated preference model.

In recent years, some comprehensive ranking methods for evaluating node influences have been presented. Wei et al. proposed a new centrality measure for weighted networks based on the Dempster-Shafer evidence theory , in which both the degree and the strength of every node are taken into consideration. This method focuses only on the local structure of the network; moreover, it targets weighted networks, whereas our algorithm targets basic social networks. As a multiple-attribute decision-making (MADM) technique, TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) has been successfully applied to several typical decision-making problems [19, 20, 28, 54], and it was also introduced to rank the nodes in a network according to their influences [25, 26]. Unlike the random walk technique, TOPSIS belongs to the category of static ranking; therefore, in theory, its ranking result will not be as good as that of a random walk-based method. By and large, combining several centrality measures to generate a comprehensive rank is a promising solution for evaluating node influences, and our comprehensive algorithm based on preference analysis and random walk has a good theoretical basis. In addition, the experimental results confirm that it achieves much better performance than any single centrality measure.
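As a schematic illustration only, and not the paper's exact construction, the following Python sketch shows how a PageRank-style random walk can be run over a CPG adjacency matrix; the edge convention (an edge from a less-preferred node toward a more-preferred one) and the damping factor are assumptions.

import numpy as np

def walk_scores(A, d=0.85, tol=1e-9):
    # A[u, v] = 1 if the combined preference relation favors v over u;
    # returns a stationary-like score vector over the CPG's nodes.
    n = A.shape[0]
    out = A.sum(axis=1, keepdims=True)
    P = np.where(out > 0, A / np.maximum(out, 1), 1.0 / n)  # row-stochastic
    pi = np.full(n, 1.0 / n)
    while True:
        nxt = d * (pi @ P) + (1 - d) / n  # damped walk step
        if np.abs(nxt - pi).sum() < tol:
            return nxt
        pi = nxt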
Influence maximization is a problem closely related to influential node identification. It aims at finding a subset of key users that maximizes the influence spread over a social network [24, 55]. A series of algorithms have been developed for this problem, among which two categories are typical: community-based approaches and evolutionary computation approaches. Since the community structure of social networks plays an important role in tracking the local spread of influence, effective algorithms such as INCIM and CoFIM have been proposed to model influence propagation by exploiting the community structure. On the other side, typical evolutionary algorithms, such as GA , SA , and PSO , are also employed to find the most influential nodes in a social network. However, both community detection and evolutionary search are generally time-consuming, so the efficiency of these algorithms is a typical weakness. In addition, other mathematical tools, such as game theory and mathematical programming , have been used to solve the influence maximization problem. Strictly speaking, this problem differs from ours, but these mathematical models can inspire new solutions for influential node identification.

7. Conclusion

With the rapid development of Internet technology and Web media, the social network, as a new communication platform, has penetrated our lives and plays an important role, especially in information diffusion, public sentiment analysis, and so on. Accordingly, it is necessary to investigate social networks from the aspects of both static structure and dynamic behavior. Among the dynamic behaviors of a network, information diffusion between nodes is an important example ; therefore, the evaluation of node influences becomes a key and challenging problem.

In order to identify the influential nodes in a social network with high precision, a comprehensive evaluation model is proposed in this paper. In our model, three basic and representative centralities are taken into consideration. For each basic centrality measure, a partial preference graph (PPG) is built according to the preference relations of node pairs. Then, the comprehensive preference graph (CPG) is generated by merging the three PPGs, so that a link between two nodes in the CPG reflects the overall preference information of the three representative centralities. Subsequently, the random walk technique is performed on the CPG to rank the nodes in the network according to their influences. Besides the running example, six public social networks, such as ARPA, Karate, and PolBooks, are taken as benchmarks to validate the effectiveness of our proposed evaluation algorithm. The experimental results confirm that our comprehensive algorithm based on preference relations and random walk has obvious advantages over the three basic ranking methods.

Although our PARW-Rank algorithm has exhibited good performance and robustness for identifying influential spreaders in a social network, there are still some valuable and interesting problems that deserve further exploration.
For example, we will adapt our algorithm to rank spreaders in weighted social networks. In addition, how to analyze the influences of nodes in a dynamic (or mobile) social network is also an attractive research topic.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (Grant Nos. 61462030 and 61762040), the Jiangxi Social Science Research Project (Grant No. TQ-2015-202), the Natural Science Foundation of Jiangxi Province (Grant Nos. 20162BCB23036 and 20171ACB21031), the Science Foundation of the Jiangxi Educational Committee (Grant No. GJJ150465), and the Education Science Project of Jiangxi Province (Grant No. YB-2015-026).
"/D3.js 4\n\n# d3-scale\n\nScales are a convenient abstraction for a fundamental task in visualization: mapping a dimension of abstract data to a visual representation. Although most often used for position-encoding quantitative data, such as mapping a measurement in meters to a position in pixels for dots in a scatterplot, scales can represent virtually any visual encoding, such as diverging colors, stroke widths, or symbol size. Scales can also be used with virtually any type of data, such as named categorical data or discrete data that requires sensible breaks.\n\nFor continuous quantitative data, you typically want a linear scale. (For time series data, a time scale.) If the distribution calls for it, consider transforming data using a power or log scale. A quantize scale may aid differentiation by rounding continuous data to a fixed set of discrete values; similarly, a quantile scale computes quantiles from a sample population, and a threshold scale allows you to specify arbitrary breaks in continuous data. Several built-in sequential color schemes are also provided; see d3-scale-chromatic for more.\n\nFor discrete ordinal (ordered) or categorical (unordered) data, an ordinal scale specifies an explicit mapping from a set of data values to a corresponding set of visual attributes (such as colors). The related band and point scales are useful for position-encoding ordinal data, such as bars in a bar chart or dots in an categorical scatterplot. Several built-in categorical color scales are also provided.\n\nScales have no intrinsic visual representation. However, most scales can generate and format ticks for reference marks to aid in the construction of axes.\n\nFor a longer introduction, see these recommended tutorials:\n\n## Installing\n\nIf you use NPM, npm install d3-scale. Otherwise, download the latest release. You can also load directly from d3js.org, either as a standalone library or as part of D3 4.0. AMD, CommonJS, and vanilla environments are supported. In vanilla, a d3 global is exported:\n\n<script src=\"https://d3js.org/d3-array.v1.min.js\"></script>\n<script src=\"https://d3js.org/d3-collection.v1.min.js\"></script>\n<script src=\"https://d3js.org/d3-color.v1.min.js\"></script>\n<script src=\"https://d3js.org/d3-format.v1.min.js\"></script>\n<script src=\"https://d3js.org/d3-interpolate.v1.min.js\"></script>\n<script src=\"https://d3js.org/d3-time.v1.min.js\"></script>\n<script src=\"https://d3js.org/d3-time-format.v2.min.js\"></script>\n<script src=\"https://d3js.org/d3-scale.v1.min.js\"></script>\n<script>\n\nvar x = d3.scaleLinear();\n\n</script>\n\n(You can omit d3-time and d3-time-format if you’re not using d3.scaleTime or d3.scaleUtc.)\n\nTry d3-scale in your browser.\n\n## API Reference\n\n### Continuous Scales\n\nContinuous scales map a continuous, quantitative input domain to a continuous output range. If the range is also numeric, the mapping may be inverted. A continuous scale is not constructed directly; instead, try a linear, power, log, identity, time or sequential color scale.\n\n###### continuous(value) Source\n\nGiven a value from the domain, returns the corresponding value from the range. If the given value is outside the domain, and clamping is not enabled, the mapping may be extrapolated such that the returned value is outside the range. 
For example, to apply a position encoding:

var x = d3.scaleLinear()
    .domain([10, 130])
    .range([0, 960]);

x(20); // 80
x(50); // 320

Or to apply a color encoding:

var color = d3.scaleLinear()
    .domain([10, 100])
    .range(["brown", "steelblue"]);

color(20); // "#9a3439"
color(50); // "#7b5167"

###### continuous.invert(value) Source

Given a value from the range, returns the corresponding value from the domain. Inversion is useful for interaction, say to determine the data value corresponding to the position of the mouse. For example, to invert a position encoding:

var x = d3.scaleLinear()
    .domain([10, 130])
    .range([0, 960]);

x.invert(80); // 20
x.invert(320); // 50

If the given value is outside the range, and clamping is not enabled, the mapping may be extrapolated such that the returned value is outside the domain. This method is only supported if the range is numeric. If the range is not numeric, returns NaN.

For a valid value y in the range, continuous(continuous.invert(y)) approximately equals y; similarly, for a valid value x in the domain, continuous.invert(continuous(x)) approximately equals x. The scale and its inverse may not be exact due to the limitations of floating point precision.

###### continuous.domain([domain]) Source

If domain is specified, sets the scale's domain to the specified array of numbers. The array must contain two or more elements. If the elements in the given array are not numbers, they will be coerced to numbers. If domain is not specified, returns a copy of the scale's current domain.

Although continuous scales typically have two values each in their domain and range, specifying more than two values produces a piecewise scale. For example, to create a diverging color scale that interpolates between white and red for negative values, and white and green for positive values, say:

var color = d3.scaleLinear()
    .domain([-1, 0, 1])
    .range(["red", "white", "green"]);

color(-0.5); // "rgb(255, 128, 128)"
color(+0.5); // "rgb(128, 192, 128)"

Internally, a piecewise scale performs a binary search for the range interpolator corresponding to the given domain value. Thus, the domain must be in ascending or descending order. If the domain and range have different lengths N and M, only the first min(N,M) elements in each are observed.

###### continuous.range([range]) Source

If range is specified, sets the scale's range to the specified array of values. The array must contain two or more elements. Unlike the domain, elements in the given array need not be numbers; any value that is supported by the underlying interpolator will work, though note that numeric ranges are required for invert. If range is not specified, returns a copy of the scale's current range. See continuous.interpolate for more examples.

###### continuous.rangeRound([range]) Source

Sets the scale's range to the specified array of values while also setting the scale's interpolator to interpolateRound. This is a convenience method equivalent to:

continuous
    .range(range)
    .interpolate(d3.interpolateRound);

The rounding interpolator is sometimes useful for avoiding antialiasing artifacts, though also consider the shape-rendering “crispEdges” styles. Note that this interpolator can only be used with numeric ranges.

###### continuous.clamp(clamp) Source

If clamp is specified, enables or disables clamping accordingly.
If clamping is disabled and the scale is passed a value outside the domain, the scale may return a value outside the range through extrapolation. If clamping is enabled, the return value of the scale is always within the scale's range. Clamping similarly applies to continuous.invert. For example:

var x = d3.scaleLinear()
    .domain([10, 130])
    .range([0, 960]);

x(-10); // -160, outside range
x.invert(-160); // -10, outside domain

x.clamp(true);
x(-10); // 0, clamped to range
x.invert(-160); // 10, clamped to domain

If clamp is not specified, returns whether or not the scale currently clamps values to within the range.

###### continuous.interpolate(interpolate) Source

If interpolate is specified, sets the scale's range interpolator factory. This interpolator factory is used to create interpolators for each adjacent pair of values from the range; these interpolators then map a normalized domain parameter t in [0, 1] to the corresponding value in the range. If factory is not specified, returns the scale's current interpolator factory, which defaults to interpolate. See d3-interpolate for more interpolators.

For example, consider a diverging color scale with three colors in the range:

var color = d3.scaleLinear()
    .domain([-100, 0, +100])
    .range(["red", "white", "green"]);

Two interpolators are created internally by the scale, equivalent to:

var i0 = d3.interpolate("red", "white"),
    i1 = d3.interpolate("white", "green");

A common reason to specify a custom interpolator is to change the color space of interpolation. For example, to use HCL:

var color = d3.scaleLinear()
    .domain([10, 100])
    .range(["brown", "steelblue"])
    .interpolate(d3.interpolateHcl);

Or for Cubehelix with a custom gamma:

var color = d3.scaleLinear()
    .domain([10, 100])
    .range(["brown", "steelblue"])
    .interpolate(d3.interpolateCubehelix.gamma(3));

Note: the default interpolator may reuse return values. For example, if the range values are objects, then the value interpolator always returns the same object, modifying it in-place. If the scale is used to set an attribute or style, this is typically acceptable (and desirable for performance); however, if you need to store the scale's return value, you must specify your own interpolator or make a copy as appropriate.

###### continuous.ticks([count])

Returns approximately count representative values from the scale's domain. If count is not specified, it defaults to 10. The returned tick values are uniformly spaced, have human-readable values (such as multiples of powers of 10), and are guaranteed to be within the extent of the domain. Ticks are often used to display reference lines, or tick marks, in conjunction with the visualized data. The specified count is only a hint; the scale may return more or fewer values depending on the domain. See also d3-array's ticks.

###### continuous.tickFormat([count[, specifier]]) Source

Returns a number format function suitable for displaying a tick value, automatically computing the appropriate precision based on the fixed interval between tick values. The specified count should have the same value as the count that is used to generate the tick values.

An optional specifier allows a custom format where the precision of the format is automatically set by the scale as appropriate for the tick interval.
For example, to format percentage change, you might say:

var x = d3.scaleLinear()
    .domain([-1, 1])
    .range([0, 960]);

var ticks = x.ticks(5),
    tickFormat = x.tickFormat(5, "+%");

ticks.map(tickFormat); // ["-100%", "-50%", "+0%", "+50%", "+100%"]

If specifier uses the format type s, the scale will return a SI-prefix format based on the largest value in the domain. If the specifier already specifies a precision, this method is equivalent to locale.format.

###### continuous.nice([count]) Source

Extends the domain so that it starts and ends on nice round values. This method typically modifies the scale's domain, and may only extend the bounds to the nearest round value. An optional tick count argument allows greater control over the step size used to extend the bounds, guaranteeing that the returned ticks will exactly cover the domain. Nicing is useful if the domain is computed from data, say using extent, and may be irregular. For example, for a domain of [0.201479…, 0.996679…], a nice domain might be [0.2, 1.0]. If the domain has more than two values, nicing the domain only affects the first and last value. See also d3-array's tickStep.

Nicing a scale only modifies the current domain; it does not automatically nice domains that are subsequently set using continuous.domain. You must re-nice the scale after setting the new domain, if desired.

###### continuous.copy() Source

Returns an exact copy of this scale. Changes to this scale will not affect the returned scale, and vice versa.

#### Linear Scales

###### d3.scaleLinear() Source

Constructs a new continuous scale with the unit domain [0, 1], the unit range [0, 1], the default interpolator and clamping disabled. Linear scales are a good default choice for continuous quantitative data because they preserve proportional differences. Each range value y can be expressed as a function of the domain value x: y = mx + b.

#### Power Scales

Power scales are similar to linear scales, except an exponential transform is applied to the input domain value before the output range value is computed. Each range value y can be expressed as a function of the domain value x: y = mx^k + b, where k is the exponent value. Power scales also support negative domain values, in which case the input value and the resulting output value are multiplied by -1.

###### d3.scalePow() Source

Constructs a new continuous scale with the unit domain [0, 1], the unit range [0, 1], the exponent 1, the default interpolator and clamping disabled. (Note that this is effectively a linear scale until you set a different exponent.)

###### pow(value)

See continuous.

###### pow.exponent([exponent]) Source

If exponent is specified, sets the current exponent to the given numeric value. If exponent is not specified, returns the current exponent, which defaults to 1. (Note that this is effectively a linear scale until you set a different exponent.)

###### pow.range([range])

See continuous.range.

###### pow.clamp(clamp)

See continuous.clamp.

###### pow.ticks([count])

See continuous.ticks.

###### pow.nice([count])

See continuous.nice.

###### pow.copy() Source

See continuous.copy.

###### d3.scaleSqrt() Source

Constructs a new continuous power scale with the unit domain [0, 1], the unit range [0, 1], the exponent 0.5, the default interpolator and clamping disabled.
This is a convenience method equivalent to d3.scalePow().exponent(0.5).

#### Log Scales

Log scales are similar to linear scales, except a logarithmic transform is applied to the input domain value before the output range value is computed. The mapping to the range value y can be expressed as a function of the domain value x: y = m log(x) + b.

As log(0) = -∞, a log scale domain must be strictly-positive or strictly-negative; the domain must not include or cross zero. A log scale with a positive domain has a well-defined behavior for positive values, and a log scale with a negative domain has a well-defined behavior for negative values. (For a negative domain, input and output values are implicitly multiplied by -1.) The behavior of the scale is undefined if you pass a negative value to a log scale with a positive domain or vice versa.

###### d3.scaleLog() Source

Constructs a new continuous scale with the domain [1, 10], the unit range [0, 1], the base 10, the default interpolator and clamping disabled.

###### log(value)

See continuous.

###### log.base([base]) Source

If base is specified, sets the base for this logarithmic scale to the specified value. If base is not specified, returns the current base, which defaults to 10.

###### log.range([range]) Source

See continuous.range.

###### log.clamp(clamp)

See continuous.clamp.

###### log.ticks([count]) Source

Like continuous.ticks, but customized for a log scale. If the base is an integer, the returned ticks are uniformly spaced within each integer power of base; otherwise, one tick per power of base is returned. The returned ticks are guaranteed to be within the extent of the domain. If the orders of magnitude in the domain is greater than count, then at most one tick per power is returned. Otherwise, the tick values are unfiltered, but note that you can use log.tickFormat to filter the display of tick labels. If count is not specified, it defaults to 10.

###### log.tickFormat([count[, specifier]]) Source

Like continuous.tickFormat, but customized for a log scale. The specified count typically has the same value as the count that is used to generate the tick values. If there are too many ticks, the formatter may return the empty string for some of the tick labels; however, note that the ticks are still shown. To disable filtering, specify a count of Infinity. When specifying a count, you may also provide a format specifier or format function. For example, to get a tick formatter that will display 20 ticks of a currency, say log.tickFormat(20, "$,f"). If the specifier does not have a defined precision, the precision will be set automatically by the scale, returning the appropriate format. This provides a convenient way of specifying a format whose precision will be automatically set by the scale.

###### log.nice() Source

Like continuous.nice, except extends the domain to integer powers of base. For example, for a domain of [0.201479…, 0.996679…], and base 10, the nice domain is [0.1, 1]. If the domain has more than two values, nicing the domain only affects the first and last value.

###### log.copy() Source

See continuous.copy.

#### Identity Scales

Identity scales are a special case of linear scales where the domain and range are identical; the scale and its invert method are thus the identity function. These scales are occasionally useful when working with pixel coordinates, say in conjunction with an axis or brush.
Identity scales do not support rangeRound, clamp or interpolate.

###### d3.scaleIdentity() Source

Constructs a new identity scale with the unit domain [0, 1] and the unit range [0, 1].

#### Time Scales

Time scales are a variant of linear scales that have a temporal domain: domain values are coerced to dates rather than numbers, and invert likewise returns a date. Time scales implement ticks based on calendar intervals, taking the pain out of generating axes for temporal domains.

For example, to create a position encoding:

var x = d3.scaleTime()
    .domain([new Date(2000, 0, 1), new Date(2000, 0, 2)])
    .range([0, 960]);

x(new Date(2000, 0, 1, 5)); // 200
x(new Date(2000, 0, 1, 16)); // 640
x.invert(200); // Sat Jan 01 2000 05:00:00 GMT-0800 (PST)
x.invert(640); // Sat Jan 01 2000 16:00:00 GMT-0800 (PST)

For a valid value y in the range, time(time.invert(y)) equals y; similarly, for a valid value x in the domain, time.invert(time(x)) equals x. The invert method is useful for interaction, say to determine the value in the domain that corresponds to the pixel location under the mouse.

###### d3.scaleTime() Source

Constructs a new time scale with the domain [2000-01-01, 2000-01-02], the unit range [0, 1], the default interpolator and clamping disabled.

###### time(value)

See continuous.

###### time.range([range])

See continuous.range.

###### time.clamp(clamp)

See continuous.clamp.

###### time.ticks([count]) Source
###### time.ticks([interval])

Returns representative dates from the scale's domain. The returned tick values are uniformly-spaced (mostly), have sensible values (such as every day at midnight), and are guaranteed to be within the extent of the domain. Ticks are often used to display reference lines, or tick marks, in conjunction with the visualized data.

An optional count may be specified to affect how many ticks are generated. If count is not specified, it defaults to 10. The specified count is only a hint; the scale may return more or fewer values depending on the domain. For example, to create ten default ticks, say:

var x = d3.scaleTime();

x.ticks(10);
// [Sat Jan 01 2000 00:00:00 GMT-0800 (PST),
//  Sat Jan 01 2000 03:00:00 GMT-0800 (PST),
//  Sat Jan 01 2000 06:00:00 GMT-0800 (PST),
//  Sat Jan 01 2000 09:00:00 GMT-0800 (PST),
//  Sat Jan 01 2000 12:00:00 GMT-0800 (PST),
//  Sat Jan 01 2000 15:00:00 GMT-0800 (PST),
//  Sat Jan 01 2000 18:00:00 GMT-0800 (PST),
//  Sat Jan 01 2000 21:00:00 GMT-0800 (PST),
//  Sun Jan 02 2000 00:00:00 GMT-0800 (PST)]

The following time intervals are considered for automatic ticks:

• 1-, 5-, 15- and 30-second.
• 1-, 5-, 15- and 30-minute.
• 1-, 3-, 6- and 12-hour.
• 1- and 2-day.
• 1-week.
• 1- and 3-month.
• 1-year.

In lieu of a count, a time interval may be explicitly specified. To prune the generated ticks for a given time interval, use interval.every.
For example, to generate ticks at 15-minute intervals:

var x = d3.scaleTime()
    .domain([new Date(2000, 0, 1, 0), new Date(2000, 0, 1, 2)]);

x.ticks(d3.timeMinute.every(15));
// [Sat Jan 01 2000 00:00:00 GMT-0800 (PST),
//  Sat Jan 01 2000 00:15:00 GMT-0800 (PST),
//  Sat Jan 01 2000 00:30:00 GMT-0800 (PST),
//  Sat Jan 01 2000 00:45:00 GMT-0800 (PST),
//  Sat Jan 01 2000 01:00:00 GMT-0800 (PST),
//  Sat Jan 01 2000 01:15:00 GMT-0800 (PST),
//  Sat Jan 01 2000 01:30:00 GMT-0800 (PST),
//  Sat Jan 01 2000 01:45:00 GMT-0800 (PST),
//  Sat Jan 01 2000 02:00:00 GMT-0800 (PST)]

Alternatively, pass a test function to interval.filter:

x.ticks(d3.timeMinute.filter(function(d) {
    return d.getMinutes() % 15 === 0;
}));

Note: in some cases, such as with day ticks, specifying a step can result in irregular spacing of ticks because time intervals have varying length.

###### time.tickFormat([count[, specifier]]) Source
###### time.tickFormat([interval[, specifier]])

Returns a time format function suitable for displaying tick values. The specified count or interval is currently ignored, but is accepted for consistency with other scales such as continuous.tickFormat. If a format specifier is specified, this method is equivalent to format. If specifier is not specified, the default time format is returned. The default multi-scale time format chooses a human-readable representation based on the specified date as follows:

• %Y - for year boundaries, such as 2011.
• %B - for month boundaries, such as February.
• %b %d - for week boundaries, such as Feb 06.
• %a %d - for day boundaries, such as Mon 07.
• %I %p - for hour boundaries, such as 01 AM.
• %I:%M - for minute boundaries, such as 01:23.
• :%S - for second boundaries, such as :45.
• .%L - milliseconds for all other times, such as .012.

Although somewhat unusual, this default behavior has the benefit of providing both local and global context: for example, formatting a sequence of ticks as [11 PM, Mon 07, 01 AM] reveals information about hours, dates, and day simultaneously, rather than just the hours [11 PM, 12 AM, 01 AM]. See d3-time-format if you'd like to roll your own conditional time format.

###### time.nice([count]) Source
###### time.nice([interval[, step]])

Extends the domain so that it starts and ends on nice round values. This method typically modifies the scale's domain, and may only extend the bounds to the nearest round value. See continuous.nice for more.

An optional tick count argument allows greater control over the step size used to extend the bounds, guaranteeing that the returned ticks will exactly cover the domain. Alternatively, a time interval may be specified to explicitly set the ticks. If an interval is specified, an optional step may also be specified to skip some ticks. For example, time.nice(d3.timeSecond, 10) will extend the domain to an even ten seconds (0, 10, 20, etc.). See time.ticks and interval.every for further detail.

Nicing is useful if the domain is computed from data, say using extent, and may be irregular. For example, for a domain of [2009-07-13T00:02, 2009-07-13T23:48], the nice domain is [2009-07-13, 2009-07-14].
If the domain has more than two values, nicing the domain only affects the first and last value.

###### d3.scaleUtc() Source

Equivalent to time, but the returned time scale operates in Coordinated Universal Time rather than local time.

### Sequential Scales

Sequential scales are similar to continuous scales in that they map a continuous, numeric input domain to a continuous output range. However, unlike continuous scales, the output range of a sequential scale is fixed by its interpolator and not configurable. These scales do not expose invert, range, rangeRound and interpolate methods.

###### d3.scaleSequential(interpolator) Source

Constructs a new sequential scale with the given interpolator function. When the scale is applied, the interpolator will be invoked with a value typically in the range [0, 1], where 0 represents the start of the domain, and 1 represents the end of the domain. For example, to implement the ill-advised HSL rainbow scale:

var rainbow = d3.scaleSequential(function(t) {
    return d3.hsl(t * 360, 1, 0.5) + "";
});

A more aesthetically-pleasing and perceptually-effective cyclical hue encoding is to use d3.interpolateRainbow:

var rainbow = d3.scaleSequential(d3.interpolateRainbow);

For even more sequential color schemes, see d3-scale-chromatic.

###### sequential(value)

See continuous.

###### sequential.domain([domain]) Source

See continuous.domain. Note that a sequential scale's domain must be numeric and must contain exactly two values.

###### sequential.clamp([clamp]) Source

See continuous.clamp.

###### sequential.interpolator([interpolator]) Source

If interpolator is specified, sets the scale's interpolator to the specified function. If interpolator is not specified, returns the scale's current interpolator.

###### sequential.copy() Source

See continuous.copy.

###### d3.interpolateViridis(t) Source
"Given a number t in the range [0,1], returns the corresponding color from the “viridis” perceptually-uniform color scheme designed by van der Walt, Smith and Firing for matplotlib, represented as an RGB string.\n\n###### d3.interpolateInferno(t)",
null,
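For example, a sequential scale can be paired with this interpolator to build a continuous color encoding; a minimal sketch (the comments show the ramp's dark and bright endpoint colors):

var color = d3.scaleSequential(d3.interpolateViridis)
    .domain([0, 100]);

color(0);   // "#440154", the dark-purple end of the ramp
color(100); // "#fde725", the bright-yellow end of the ramp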
"Given a number t in the range [0,1], returns the corresponding color from the “inferno” perceptually-uniform color scheme designed by van der Walt and Smith for matplotlib, represented as an RGB string.\n\n###### d3.interpolateMagma(t)",
null,
"Given a number t in the range [0,1], returns the corresponding color from the “magma” perceptually-uniform color scheme designed by van der Walt and Smith for matplotlib, represented as an RGB string.\n\n###### d3.interpolatePlasma(t)",
null,
"Given a number t in the range [0,1], returns the corresponding color from the “plasma” perceptually-uniform color scheme designed by van der Walt and Smith for matplotlib, represented as an RGB string.\n\n###### d3.interpolateWarm(t)",
null,
"Given a number t in the range [0,1], returns the corresponding color from a 180° rotation of Niccoli’s perceptual rainbow, represented as an RGB string.\n\n###### d3.interpolateCool(t)",
null,
"Given a number t in the range [0,1], returns the corresponding color from Niccoli’s perceptual rainbow, represented as an RGB string.\n\n###### d3.interpolateRainbow(t) Source",
null,
"Given a number t in the range [0,1], returns the corresponding color from d3.interpolateWarm scale from [0.0, 0.5] followed by the d3.interpolateCool scale from [0.5, 1.0], thus implementing the cyclical less-angry rainbow color scheme.\n\n###### d3.interpolateCubehelixDefault(t) Source",
null,
"Given a number t in the range [0,1], returns the corresponding color from Green’s default Cubehelix represented as an RGB string.\n\n### Quantize Scales\n\nQuantize scales are similar to linear scales, except they use a discrete rather than continuous range. The continuous input domain is divided into uniform segments based on the number of values in (i.e., the cardinality of) the output range. Each range value y can be expressed as a quantized linear function of the domain value x: y = m round(x) + b. See bl.ocks.org/4060606 for an example.\n\n###### d3.scaleQuantize() Source\n\nConstructs a new quantize scale with the unit domain [0, 1] and the unit range [0, 1]. Thus, the default quantize scale is equivalent to the Math.round function.\n\n###### quantize(value) Source\n\nGiven a value in the input domain, returns the corresponding value in the output range. For example, to apply a color encoding:\n\nvar color = d3.scaleQuantize()\n.domain([0, 1])\n.range([\"brown\", \"steelblue\"]);\n\ncolor(0.49); // \"brown\"\ncolor(0.51); // \"steelblue\"\n\nOr dividing the domain into three equally-sized parts with different range values to compute an appropriate stroke width:\n\nvar width = d3.scaleQuantize()\n.domain([10, 100])\n.range([1, 2, 4]);\n\nwidth(20); // 1\nwidth(50); // 2\nwidth(80); // 4\n###### quantize.invertExtent(value) Source\n\nReturns the extent of values in the domain [x0, x1] for the corresponding value in the range: the inverse of quantize. This method is useful for interaction, say to determine the value in the domain that corresponds to the pixel location under the mouse.\n\nvar width = d3.scaleQuantize()\n.domain([10, 100])\n.range([1, 2, 4]);\n\nwidth.invertExtent(2); // [40, 70]\n###### quantize.domain([domain]) Source\n\nIf domain is specified, sets the scale’s domain to the specified two-element array of numbers. If the elements in the given array are not numbers, they will be coerced to numbers. If domain is not specified, returns the scale’s current domain.\n\n###### quantize.range([range]) Source\n\nIf range is specified, sets the scale’s range to the specified array of values. The array may contain any number of discrete values. The elements in the given array need not be numbers; any value or type will work. If range is not specified, returns the scale’s current range.\n\n###### quantize.ticks([count])\n\nEquivalent to continuous.ticks.\n\n###### quantize.tickFormat([count[, specifier]]) Source\n\nEquivalent to continuous.tickFormat.\n\n###### quantize.nice()\n\nEquivalent to continuous.nice.\n\n###### quantize.copy() Source\n\nReturns an exact copy of this scale. Changes to this scale will not affect the returned scale, and vice versa.\n\n### Quantile Scales\n\nQuantile scales map a sampled input domain to a discrete range. The domain is considered continuous and thus the scale will accept any reasonable input value; however, the domain is specified as a discrete set of sample values. The number of values in (the cardinality of) the output range determines the number of quantiles that will be computed from the domain. To compute the quantiles, the domain is sorted, and treated as a population of discrete values; see d3-array’s quantile. See bl.ocks.org/8ca036b3505121279daf for an example.\n\n###### d3.scaleQuantile() Source\n\nConstructs a new quantile scale with an empty domain and an empty range. 
###### quantile(value) Source

Given a value in the input domain, returns the corresponding value in the output range.

###### quantile.invertExtent(value) Source

Returns the extent of values in the domain [x0, x1] for the corresponding value in the range: the inverse of quantile. This method is useful for interaction, say to determine the value in the domain that corresponds to the pixel location under the mouse.

###### quantile.domain([domain]) Source

If domain is specified, sets the domain of the quantile scale to the specified set of discrete numeric values. The array must not be empty, and must contain at least one numeric value; NaN, null and undefined values are ignored and not considered part of the sample population. If the elements in the given array are not numbers, they will be coerced to numbers. A copy of the input array is sorted and stored internally. If domain is not specified, returns the scale's current domain.

###### quantile.range([range]) Source

If range is specified, sets the discrete values in the range. The array must not be empty, and may contain any type of value. The number of values in (the cardinality, or length, of) the range array determines the number of quantiles that are computed. For example, to compute quartiles, range must be an array of four elements such as [0, 1, 2, 3]. If range is not specified, returns the current range.

###### quantile.quantiles() Source

Returns the quantile thresholds. If the range contains n discrete values, the returned array will contain n - 1 thresholds. Values less than the first threshold are considered in the first quantile; values greater than or equal to the first threshold but less than the second threshold are in the second quantile, and so on. Internally, the thresholds array is used with bisect to find the output quantile associated with the given input value.

###### quantile.copy() Source

Returns an exact copy of this scale. Changes to this scale will not affect the returned scale, and vice versa.

### Threshold Scales

Threshold scales are similar to quantize scales, except they allow you to map arbitrary subsets of the domain to discrete values in the range. The input domain is still continuous, and divided into slices based on a set of threshold values. See bl.ocks.org/3306362 for an example.

###### d3.scaleThreshold() Source

Constructs a new threshold scale with the default domain [0.5] and the default range [0, 1]. Thus, the default threshold scale is equivalent to the Math.round function for numbers; for example threshold(0.49) returns 0, and threshold(0.51) returns 1.

###### threshold(value) Source

Given a value in the input domain, returns the corresponding value in the output range. For example:

var color = d3.scaleThreshold()
    .domain([0, 1])
    .range(["red", "white", "green"]);

color(-1); // "red"
color(0); // "white"
color(0.5); // "white"
color(1); // "green"
color(1000); // "green"

###### threshold.invertExtent(value) Source

Returns the extent of values in the domain [x0, x1] for the corresponding value in the range, representing the inverse mapping from range to domain. This method is useful for interaction, say to determine the value in the domain that corresponds to the pixel location under the mouse.
For example:

var color = d3.scaleThreshold()
    .domain([0, 1])
    .range(["red", "white", "green"]);

color.invertExtent("red"); // [undefined, 0]
color.invertExtent("white"); // [0, 1]
color.invertExtent("green"); // [1, undefined]

###### threshold.domain([domain]) Source

If domain is specified, sets the scale's domain to the specified array of values. The values must be in sorted ascending order, or the behavior of the scale is undefined. The values are typically numbers, but any naturally ordered values (such as strings) will work; a threshold scale can be used to encode any type that is ordered. If the number of values in the scale's range is N+1, the number of values in the scale's domain must be N. If there are fewer than N elements in the domain, the additional values in the range are ignored. If there are more than N elements in the domain, the scale may return undefined for some inputs. If domain is not specified, returns the scale's current domain.

###### threshold.range([range]) Source

If range is specified, sets the scale's range to the specified array of values. If the number of values in the scale's domain is N, the number of values in the scale's range must be N+1. If there are fewer than N+1 elements in the range, the scale may return undefined for some inputs. If there are more than N+1 elements in the range, the additional values are ignored. The elements in the given array need not be numbers; any value or type will work. If range is not specified, returns the scale's current range.

###### threshold.copy() Source

Returns an exact copy of this scale. Changes to this scale will not affect the returned scale, and vice versa.

### Ordinal Scales

Unlike continuous scales, ordinal scales have a discrete domain and range. For example, an ordinal scale might map a set of named categories to a set of colors, or determine the horizontal positions of columns in a column chart.

###### d3.scaleOrdinal([range]) Source

Constructs a new ordinal scale with an empty domain and the specified range. If a range is not specified, it defaults to the empty array; an ordinal scale always returns undefined until a non-empty range is defined.

###### ordinal(value) Source

Given a value in the input domain, returns the corresponding value in the output range. If the given value is not in the scale's domain, returns the unknown; or, if the unknown value is implicit (the default), then the value is implicitly added to the domain and the next-available value in the range is assigned to value, such that this and subsequent invocations of the scale given the same input value return the same output value.

###### ordinal.domain([domain]) Source

If domain is specified, sets the domain to the specified array of values. The first element in domain will be mapped to the first element in the range, the second domain value to the second range value, and so on. Domain values are stored internally in a map from stringified value to index; the resulting index is then used to retrieve a value from the range. Thus, an ordinal scale's values must be coercible to a string, and the stringified version of the domain value uniquely identifies the corresponding range value. If domain is not specified, this method returns the current domain.

Setting the domain on an ordinal scale is optional if the unknown value is implicit (the default).
In this case, the domain will be inferred implicitly from usage by assigning each unique value passed to the scale a new value from the range. Note that an explicit domain is recommended to ensure deterministic behavior, as inferring the domain from usage will be dependent on ordering.\n\n###### ordinal.range([range]) Source\n\nIf range is specified, sets the range of the ordinal scale to the specified array of values. The first element in the domain will be mapped to the first element in range, the second domain value to the second range value, and so on. If there are fewer elements in the range than in the domain, the scale will reuse values from the start of the range. If range is not specified, this method returns the current range.\n\n###### ordinal.unknown([value]) Source\n\nIf value is specified, sets the output value of the scale for unknown input values and returns this scale. If value is not specified, returns the current unknown value, which defaults to implicit. The implicit value enables implicit domain construction; see ordinal.domain.\n\n###### ordinal.copy() Source\n\nReturns an exact copy of this ordinal scale. Changes to this scale will not affect the returned scale, and vice versa.\n\n###### d3.scaleImplicit\n\nA special value for ordinal.unknown that enables implicit domain construction: unknown values are implicitly added to the domain.\n\n#### Band Scales\n\nBand scales are like ordinal scales except the output range is continuous and numeric. Discrete output values are automatically computed by the scale by dividing the continuous range into uniform bands. Band scales are typically used for bar charts with an ordinal or categorical dimension. The unknown value of a band scale is effectively undefined: they do not allow implicit domain construction.",
"###### d3.scaleBand() Source\n\nConstructs a new band scale with the empty domain, the unit range [0, 1], no padding, no rounding and center alignment.\n\n###### band(value) Source\n\nGiven a value in the input domain, returns the start of the corresponding band derived from the output range. If the given value is not in the scale’s domain, returns undefined.\n\n###### band.domain([domain]) Source\n\nIf domain is specified, sets the domain to the specified array of values. The first element in domain will be mapped to the first band, the second domain value to the second band, and so on. Domain values are stored internally in a map from stringified value to index; the resulting index is then used to determine the band. Thus, a band scale’s values must be coercible to a string, and the stringified version of the domain value uniquely identifies the corresponding band. If domain is not specified, this method returns the current domain.\n\n###### band.range([range]) Source\n\nIf range is specified, sets the scale’s range to the specified two-element array of numbers. If the elements in the given array are not numbers, they will be coerced to numbers. If range is not specified, returns the scale’s current range, which defaults to [0, 1].\n\n###### band.rangeRound([range]) Source\n\nSets the scale’s range to the specified two-element array of numbers while also enabling rounding. This is a convenience method equivalent to:\n\nband\n.range(range)\n.round(true);\n\nRounding is sometimes useful for avoiding antialiasing artifacts, though also consider the shape-rendering “crispEdges” styles.\n\n###### band.round([round]) Source\n\nIf round is specified, enables or disables rounding accordingly. If rounding is enabled, the start and stop of each band will be integers. Rounding is sometimes useful for avoiding antialiasing artifacts, though also consider the shape-rendering “crispEdges” styles. Note that if the width of the domain is not a multiple of the cardinality of the range, there may be leftover unused space, even without padding! Use band.align to specify how the leftover space is distributed.\n\nIf padding is specified, sets the inner padding to the specified value which must be in the range [0, 1]. If padding is not specified, returns the current inner padding which defaults to 0. The inner padding determines the ratio of the range that is reserved for blank space between bands.\n\nIf padding is specified, sets the outer padding to the specified value which must be in the range [0, 1]. If padding is not specified, returns the current outer padding which defaults to 0. The outer padding determines the ratio of the range that is reserved for blank space before the first band and after the last band.\n\nA convenience method for setting the inner and outer padding to the same padding value. If padding is not specified, returns the inner padding.\n\n###### band.align([align]) Source\n\nIf align is specified, sets the alignment to the specified value which must be in the range [0, 1]. If align is not specified, returns the current alignment which defaults to 0.5. The alignment determines how any leftover unused space in the range is distributed. A value of 0.5 indicates that the leftover space should be equally distributed before the first band and after the last band; i.e., the bands should be centered within the range. 
###### band.align([align]) Source

If align is specified, sets the alignment to the specified value, which must be in the range [0, 1]. If align is not specified, returns the current alignment, which defaults to 0.5. The alignment determines how any leftover unused space in the range is distributed. A value of 0.5 indicates that the leftover space should be equally distributed before the first band and after the last band; i.e., the bands should be centered within the range. A value of 0 or 1 may be used to shift the bands to one side, say to position them adjacent to an axis.

###### band.bandwidth() Source

Returns the width of each band.

###### band.step() Source

Returns the distance between the starts of adjacent bands.

###### band.copy() Source

Returns an exact copy of this scale. Changes to this scale will not affect the returned scale, and vice versa.
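The relationship between step and bandwidth is easiest to see numerically. A minimal sketch with no padding, where the two coincide:

var x = d3.scaleBand()
    .domain(["a", "b", "c", "d"])
    .range([0, 400]);

x.step(); // 100 — the range width divided among four bands
x.bandwidth(); // 100 — equal to the step when inner padding is 0

With inner padding enabled, the bandwidth shrinks to step × (1 − paddingInner), while the step remains the spacing between band starts.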
"###### d3.scalePoint()\n\nConstructs a new point scale with the empty domain, the unit range [0, 1], no padding, no rounding and center alignment.\n\n###### point(value)\n\nGiven a value in the input domain, returns the corresponding point derived from the output range. If the given value is not in the scale’s domain, returns undefined.\n\n###### point.domain([domain])\n\nIf domain is specified, sets the domain to the specified array of values. The first element in domain will be mapped to the first point, the second domain value to the second point, and so on. Domain values are stored internally in a map from stringified value to index; the resulting index is then used to determine the point. Thus, a point scale’s values must be coercible to a string, and the stringified version of the domain value uniquely identifies the corresponding point. If domain is not specified, this method returns the current domain.\n\n###### point.range([range])\n\nIf range is specified, sets the scale’s range to the specified two-element array of numbers. If the elements in the given array are not numbers, they will be coerced to numbers. If range is not specified, returns the scale’s current range, which defaults to [0, 1].\n\n###### point.rangeRound([range])\n\nSets the scale’s range to the specified two-element array of numbers while also enabling rounding. This is a convenience method equivalent to:\n\npoint\n.range(range)\n.round(true);\n\nRounding is sometimes useful for avoiding antialiasing artifacts, though also consider the shape-rendering “crispEdges” styles.\n\n###### point.round([round])\n\nIf round is specified, enables or disables rounding accordingly. If rounding is enabled, the position of each point will be integers. Rounding is sometimes useful for avoiding antialiasing artifacts, though also consider the shape-rendering “crispEdges” styles. Note that if the width of the domain is not a multiple of the cardinality of the range, there may be leftover unused space, even without padding! Use point.align to specify how the leftover space is distributed.\n\nIf padding is specified, sets the outer padding to the specified value which must be in the range [0, 1]. If padding is not specified, returns the current outer padding which defaults to 0. The outer padding determines the ratio of the range that is reserved for blank space before the first point and after the last point. Equivalent to band.paddingOuter.\n\n###### point.align([align])\n\nIf align is specified, sets the alignment to the specified value which must be in the range [0, 1]. If align is not specified, returns the current alignment which defaults to 0.5. The alignment determines how any leftover unused space in the range is distributed. A value of 0.5 indicates that the leftover space should be equally distributed before the first point and after the last point; i.e., the points should be centered within the range. A value of 0 or 1 may be used to shift the points to one side, say to position them adjacent to an axis.\n\nReturns zero.\n\n###### point.step()\n\nReturns the distance between the starts of adjacent points.\n\n###### point.copy()\n\nReturns an exact copy of this scale. Changes to this scale will not affect the returned scale, and vice versa.\n\n#### Category Scales\n\nThese color schemes are designed to work with d3.scaleOrdinal. For example:\n\nvar color = d3.scaleOrdinal(d3.schemeCategory10);\n\nFor even more category scales, see d3-scale-chromatic.\n\n###### d3.schemeCategory10Source",
"An array of ten categorical colors represented as RGB hexadecimal strings.\n\n###### d3.schemeCategory20Source",
"An array of twenty categorical colors represented as RGB hexadecimal strings.\n\n###### d3.schemeCategory20bSource",
"An array of twenty categorical colors represented as RGB hexadecimal strings.\n\n###### d3.schemeCategory20cSource",
"An array of twenty categorical colors represented as RGB hexadecimal strings. This color scale includes color specifications and designs developed by Cynthia Brewer (colorbrewer2.org).\n\n© 2010–2017 Michael Bostock"
http://www.allexamreview.com/2017/02/online-mcq-ee-transmission-and_3.html
"# ONLINE MCQ EE-TRANSMISSION AND DISTRIBUTION 9\n\nQ81. A.C.S.R. conductor having 7 steel stands\nsurrounded by 25 aluminum conductors\nwill be specified as\nA. 25/7\nB. 50/15\nC. 7/25\nD. 15/50\n \\$(document).ready(function(){ \\$(‘#loaddata3988’).click(function(){ qid3988=\\$(‘#qid3988’).val(); section3988=\\$(‘#section3988’).val(); type3988=\\$(‘#type3988’).val(); subtype3988=\\$(‘#subtype3988’).val(); \\$.post(‘../../fav’,{ qid:qid3988, section: section3988, type: type3988, subtype: subtype3988 },function(ajaxresult){ \\$(‘#getrequest3988’).html(ajaxresult); qid3988=\\$(‘#qid3988’).val(”); section3988=\\$(‘#section3988’).val(”); type3988=\\$(‘#type3988’).val(”); subtype3988=\\$(‘#subtype3988’).val(”); }); }); }); Explanation:- Answer : A\nQ82. By using bundled conductors which of\nthe following is reduced ?\nA. Power loss due to corona\nB. Capacitance of the circuit\nC. Inductance of the circuit\nD. None of the above\nE. All of the above\n \\$(document).ready(function(){ \\$(‘#loaddata3989’).click(function(){ qid3989=\\$(‘#qid3989’).val(); section3989=\\$(‘#section3989’).val(); type3989=\\$(‘#type3989’).val(); subtype3989=\\$(‘#subtype3989’).val(); \\$.post(‘../../fav’,{ qid:qid3989, section: section3989, type: type3989, subtype: subtype3989 },function(ajaxresult){ \\$(‘#getrequest3989’).html(ajaxresult); qid3989=\\$(‘#qid3989’).val(”); section3989=\\$(‘#section3989’).val(”); type3989=\\$(‘#type3989’).val(”); subtype3989=\\$(‘#subtype3989’).val(”); }); }); }); Explanation:- Answer : A\nQ83. The string efficiency of an insulator can\nbe increased by\nA. correct grading of insulators of\nvarious capacitance’s\nB. reducing the number of strings\nC. increasing the number of strings in\nthe insulator\nD. none of the above\n \\$(document).ready(function(){ \\$(‘#loaddata3990’).click(function(){ qid3990=\\$(‘#qid3990’).val(); section3990=\\$(‘#section3990’).val(); type3990=\\$(‘#type3990’).val(); subtype3990=\\$(‘#subtype3990’).val(); \\$.post(‘../../fav’,{ qid:qid3990, section: section3990, type: type3990, subtype: subtype3990 },function(ajaxresult){ \\$(‘#getrequest3990’).html(ajaxresult); qid3990=\\$(‘#qid3990’).val(”); section3990=\\$(‘#section3990’).val(”); type3990=\\$(‘#type3990’).val(”); subtype3990=\\$(‘#subtype3990’).val(”); }); }); }); Explanation:- Answer : A\nQ84. Earthing is necessary to give protection\nagainst\nA. danger of electric shock\nB. voltage fluctuation\nD. high temperature of the conductors\n \\$(document).ready(function(){ \\$(‘#loaddata3991’).click(function(){ qid3991=\\$(‘#qid3991’).val(); section3991=\\$(‘#section3991’).val(); type3991=\\$(‘#type3991’).val(); subtype3991=\\$(‘#subtype3991’).val(); \\$.post(‘../../fav’,{ qid:qid3991, section: section3991, type: type3991, subtype: subtype3991 },function(ajaxresult){ \\$(‘#getrequest3991’).html(ajaxresult); qid3991=\\$(‘#qid3991’).val(”); section3991=\\$(‘#section3991’).val(”); type3991=\\$(‘#type3991’).val(”); subtype3991=\\$(‘#subtype3991’).val(”); }); }); }); Explanation:- Answer : A\nQ85. If variable part of a.nnual cost on account\nof interest and depreciation on the capital\noutlay is equal to the annual cost of\nelectrical energy wasted in the conductors,\nthe total annual cost will be minimum\nand the corresponding size of\nconductor will be most economical. This\nstatement is known as\nA. Kelvin’s law\nB. Ohm’s law\nC. Kirchhoffs law\nE. 
none of the above\n \\$(document).ready(function(){ \\$(‘#loaddata3992’).click(function(){ qid3992=\\$(‘#qid3992’).val(); section3992=\\$(‘#section3992’).val(); type3992=\\$(‘#type3992’).val(); subtype3992=\\$(‘#subtype3992’).val(); \\$.post(‘../../fav’,{ qid:qid3992, section: section3992, type: type3992, subtype: subtype3992 },function(ajaxresult){ \\$(‘#getrequest3992’).html(ajaxresult); qid3992=\\$(‘#qid3992’).val(”); section3992=\\$(‘#section3992’).val(”); type3992=\\$(‘#type3992’).val(”); subtype3992=\\$(‘#subtype3992’).val(”); }); }); }); Explanation:- Answer : A\nQ86. The square root of the ratio of line impedance\nand shunt admittance is called\nthe\nA. surge impedance of the line\nB. conductance of the line\nC. regulation of the line\nD. none of the above\n \\$(document).ready(function(){ \\$(‘#loaddata3993’).click(function(){ qid3993=\\$(‘#qid3993’).val(); section3993=\\$(‘#section3993’).val(); type3993=\\$(‘#type3993’).val(); subtype3993=\\$(‘#subtype3993’).val(); \\$.post(‘../../fav’,{ qid:qid3993, section: section3993, type: type3993, subtype: subtype3993 },function(ajaxresult){ \\$(‘#getrequest3993’).html(ajaxresult); qid3993=\\$(‘#qid3993’).val(”); section3993=\\$(‘#section3993’).val(”); type3993=\\$(‘#type3993’).val(”); subtype3993=\\$(‘#subtype3993’).val(”); }); }); }); Explanation:- Answer : A\nQ87. Which of the following D.C. distribution\nsystem is the simplest and lowest\nin first cost, ?\nB. Ring system\nC. Inter-connected system\nD. None of the above\n \\$(document).ready(function(){ \\$(‘#loaddata3994’).click(function(){ qid3994=\\$(‘#qid3994’).val(); section3994=\\$(‘#section3994’).val(); type3994=\\$(‘#type3994’).val(); subtype3994=\\$(‘#subtype3994’).val(); \\$.post(‘../../fav’,{ qid:qid3994, section: section3994, type: type3994, subtype: subtype3994 },function(ajaxresult){ \\$(‘#getrequest3994’).html(ajaxresult); qid3994=\\$(‘#qid3994’).val(”); section3994=\\$(‘#section3994’).val(”); type3994=\\$(‘#type3994’).val(”); subtype3994=\\$(‘#subtype3994’).val(”); }); }); }); Explanation:- Answer : A\nQ88. High voltage transmission lines use\nA. suspension insulators\nB. pin insulators\nC. both (a) and (b)\nD. none of the above\n \\$(document).ready(function(){ \\$(‘#loaddata3995’).click(function(){ qid3995=\\$(‘#qid3995’).val(); section3995=\\$(‘#section3995’).val(); type3995=\\$(‘#type3995’).val(); subtype3995=\\$(‘#subtype3995’).val(); \\$.post(‘../../fav’,{ qid:qid3995, section: section3995, type: type3995, subtype: subtype3995 },function(ajaxresult){ \\$(‘#getrequest3995’).html(ajaxresult); qid3995=\\$(‘#qid3995’).val(”); section3995=\\$(‘#section3995’).val(”); type3995=\\$(‘#type3995’).val(”); subtype3995=\\$(‘#subtype3995’).val(”); }); }); }); Explanation:- Answer : A\nQ89. Transmission line insulators are made\nof\nA. glass\nB. porcelain\nC. iron\nD. P.V.C."
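A quick worked note on Q86 (this example is mine, not part of the original quiz): the quantity $\sqrt{Z/Y}$ is the surge (characteristic) impedance, and for a lossless line it reduces to $\sqrt{L/C}$. With illustrative line constants $L = 1.3\ \mathrm{mH/km}$ and $C = 0.009\ \mu\mathrm{F/km}$:

$$Z_0 = \sqrt{\frac{L}{C}} = \sqrt{\frac{1.3 \times 10^{-3}}{9 \times 10^{-9}}} \approx 380\ \Omega,$$

which is the order of magnitude (roughly 400 Ω) usually quoted for overhead transmission lines.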
https://www.unite.ai/simple-linear-regression-in-the-field-of-data-science/
"",
null,
"Simple Linear Regression in the Field of Data Science - Unite.AI\n\n# Simple Linear Regression in the Field of Data Science",
"Data science is a vast field that is growing with every passing day. Today, top companies are searching for professional data scientists who possess strong knowledge about the field and its related concepts. To perform well in this field, it is important to have sound knowledge about all the data science algorithms. One of the most basic data science algorithms is a simple linear regression. Every data scientist should know how to use this algorithm to solve problems and derive meaningful results.\n\nSimple linear regression is a methodology of determining the relationship between input and output variables. Input variables are considered independent variables or predictors, and output variables are dependent variables or responses. In simple linear regression, only one input variable is considered.\n\n## A Real-Time Example of Simple Linear Regression\n\nLet us consider a data set consisting of two parameters: the number of hours worked and the amount of work done. Simple linear regression aims to guess the amount of work done if the working hours are given. A regression line is drawn, which generates a minimum error. A linear equation is also formed, which can then be used for almost any data set.\n\nPrinciples which depict the simple linear regression’s purpose:\n\nSimple linear regression is used to forecast the relationship between the variables in a data set and derive meaningful conclusions. Simple linear regression is mainly used to derive the statistical relationship between the variables, which is not accurate enough. Four basic principles depict the use of simple linear regression. These principles are listed below:\n\n1. The relationship between the two variables is considered to be linear and additive: A straight line function is established for each pair of dependent and independent variables. The slope of this line is different from the values of the variables available in the data set. The dependent variables have an additive effect on the values of independent variables.\n2. The errors are statistically independent: This principle can be considered for a data set that contains information related to time and series. The consecutive errors of such a data set do not correlate and are statistically independent.\n3. Errors have constant variance (homoscedasticity): Homoscedasticity of the errors can be considered based on various parameters. These parameters include time, other forecasts, and other variables.\n4. Error distribution normality: This is an important principle as it supports the other three mentioned above. If no relationship between the variables in a data set can be established, or if any of the above principles are not established, then all the predictions and conclusions produced by the model are incorrect. These conclusions cannot be used further in the project since no real results will be obtained if wrong and misleading data is used.\n\n## Advantages of Simple Linear Regression\n\n• This methodology is extremely easy to use, and results can be obtained effortlessly.\n• This method has extremely less complexity than other data science algorithms, primarily if the relationship between the dependent and independent variables is known.\n• Over-fitting is a common condition that occurs when this methodology takes in meaningless information. 
To deal with this problem, the regularization technique is available, which reduces the problem of over-fitting by reducing complexity.\n\n## Disadvantages of Simple Linear Regression\n\n• Though the problem of over-fitting can be eliminated, it cannot be ignored. The method can take meaningless data into account and also eliminate meaningful information. In such a case, all the forecasts are conclusions about a particular data set that will be incorrect and effective results cannot be generated.\n• The problem of data outliers is also very common. Outliers are considered to be wrong values that do not match the exact data. When such values are taken into account, the entire model will produce misleading results that are of no use.\n• In simple linear regression, the data set in hand is considered to have independent data. This assumption is wrong because there can be some dependency between the variables.\n\nSimple linear regression is a useful technique to determine the relationships of various input and output variables in a data set. There are several real-time applications of simple linear regression. This algorithm does not require high computational power and can be easily implemented. The equations and conclusions derived can build further and are extremely simple to understand. However, some professionals also feel that simple linear regression is not the right methodology to be used for various applications as there are a lot of assumptions that are made. These assumptions might be proved wrong, as well. Therefore, it is necessary to use this technique wherever it can be correctly applied.",
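To make the fitting step concrete, here is a minimal sketch (my own illustration, not from the original article) that computes the least-squares slope and intercept for the hours-worked example; the sample values are invented for demonstration:

```
public class SimpleLinearRegression {
    public static void main(String[] args) {
        // Hypothetical sample: hours worked (x) vs. amount of work done (y)
        double[] x = {1, 2, 3, 4, 5};
        double[] y = {2.1, 3.9, 6.2, 7.8, 10.1};
        int n = x.length;

        // Accumulate the sums needed by the closed-form least-squares estimates
        double sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
        for (int i = 0; i < n; i++) {
            sumX += x[i];
            sumY += y[i];
            sumXY += x[i] * y[i];
            sumXX += x[i] * x[i];
        }

        // Slope and intercept that minimize the sum of squared errors
        double slope = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
        double intercept = (sumY - slope * sumX) / n;

        System.out.printf("work = %.3f + %.3f * hours%n", intercept, slope);
        System.out.printf("predicted work for 6 hours: %.3f%n", intercept + slope * 6);
    }
}
```

The closed-form estimates used here minimize the sum of squared errors, which is exactly the "minimum error" criterion for the regression line described above.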
https://www.allquestionanswer.com/to-create-a-formula-you-first/
"Home » Computer » MS Excel MCQ » To create a formula, you first:\n\n# To create a formula, you first:\n\n### MCQ Question: (Q.40)\n\n#### To create a formula, you first:\n\nOptions:\nA. Select the cell you want to place the formula into\nB. Type the equals sign (=) to tell Excel that you’re about to enter a formula\nC. Enter the formula using any input values and the appropriate mathematical operators that make up your formula\nD. Choose the new command from the file menu\n\nCorrect Answer: Option A. Select the cell you want to place the formula into"
https://www.mdpi.com/2075-1680/9/1/12
"",
null,
"Next Article in Journal\nOn One Interpolation Type Fractional Boundary-Value Problem\nNext Article in Special Issue\nApproximating Functions of Positive Compact Operators by Using Bell Polynomials\nPrevious Article in Journal\nThe Tubby Torus as a Quotient Group\nPrevious Article in Special Issue\nA Versatile Integral in Physics and Astronomy and Fox’s H-Function\n\nFont Type:\nArial Georgia Verdana\nFont Size:\nAa Aa Aa\nLine Spacing:\nColumn Width:\nBackground:\nArticle\n\n# Quantum Trapezium-Type Inequalities Using Generalized ϕ-Convex Functions\n\n1\n2\nDepartment of Mathematics, Faculty of Technical Science, University Ismail Qemali, L. Pavaresia, Vlora 1001, Albania\n3\nDepartamento de Técnicas Cuantitativas, Decanato de Ciencias Económicas y Empresariales, Universidad Centroccidental Lisandro Alvarado, Av. 20. esq. Av. Moran, Edf. Los Militares, Piso 2, Ofc.2, Barquisimeto 3001, Venezuela\n*\nAuthor to whom correspondence should be addressed.\nThese authors contributed equally to this work.\nAxioms 2020, 9(1), 12; https://doi.org/10.3390/axioms9010012\nReceived: 17 November 2019 / Revised: 13 December 2019 / Accepted: 13 December 2019 / Published: 26 January 2020\n(This article belongs to the Special Issue Special Functions and Their Applications)\n\n## Abstract\n\n:\nIn this work, a study is conducted on the Hermite–Hadamard inequality using a class of generalized convex functions that involves a generalized and parametrized class of special functions within the framework of quantum calculation. Similar results can be obtained from the results found for functions such as the hypergeometric function and the classical Mittag–Leffler function. The method used to obtain the results is classic in the study of quantum integral inequalities.\nMSC:\n52A01; 26D15; 32A17\n\n## 1. Introduction\n\nIn the eighteenth century (1707–1783), Euler started some studies about what we know now as quantum calculus (1707–1783). As T. Ernst says in , it was John von Neumann who first proposed that group representation theory can be used in quantum mechanics. In , F. J. Jackson started a systematic study of q-calculus and introduced the q-definite integrals. Some branches of mathematics and physics, such as number theory, orthogonal polynomials, combinatory, basic hypergeometric functions, mechanics, and quantum and relativity theory, have been enriched by the research work of various authors as T. Ernst [3,4], H. Gauchman , V. Kac and P. Cheung , and M.E.H. Ismail [7,8]. Also, certain famous integral inequalities have been studied in the frame of q-calculus [9,10].\nConvex functions have played an important role in the development of inequalities, as it is evidenced in functional analysis, harmonic analysis, specifically in interpolation theory, control theory and optimization, and it is shown in the following works C.P. Niculescu , C. Bennett and R. Sharpley , N.A. Nguyen et al. , Ş. Mititelu and S. Trenţă , S. Trenţă [15,16,17]. This property was defined by J.L.W.V. Jensen in the following works [18,19] as follows.\nDefinition 1\n(). A function $f : I ⊆ R ⟶ R$ is said to be convex on $I ,$ if\n$f ( ( 1 − ı ) ℘ 1 + ı ℘ 2 ) ≤ ( 1 − ı ) f ( ℘ 1 ) + ı f ( ℘ 2 )$\nholds for every $℘ 1 , ℘ 2 ∈ I$ and $ı ∈ [ 0 , 1 ] .$\nThe concept of convexity has been extended and generalized in several directions. Various types of generalized convexity have appeared in different research works, some of them modify the domain or range of the function, always maintaining the basic structure of a convex function. 
Among them are: s-convexity in the first and second sense , P-convexity , MT-convexity , and others [24,25,26,27,28,29,30,31]. The well-known inequality of Hermite–Hadamard is famous throughout mathematical literature, being of interest in the relationship between arithmetic means, as an argument and as an image of the ends of the interval where a convex function is defined. It was established as follows.\nTheorem 1.\nLet $f : I ⊆ R ⟶ R$ be a convex function on I and $℘ 1 , ℘ 2 ∈ I$ with $℘ 1 < ℘ 2 .$ Then the following inequality holds:\n$f ℘ 1 + ℘ 2 2 ≤ 1 ℘ 2 − ℘ 1 ∫ ℘ 1 ℘ 2 f ( x ) d x ≤ f ( ℘ 1 ) + f ( ℘ 2 ) 2 .$\nThis inequality (1) is also known as trapezium inequality.\nThe trapezium type inequality has remained a subject of great interest due to its wide applications in the field of mathematical analysis. For other recent results which generalize, improve and extend the inequality (1) through various classes of convex functions interested readers are referred to [32,33,34,35,36,37,38,39].\nLet K be a non empty closed set in $R n$ and $ϕ : K → R$ a continuous function.\nNoor, in , introduced a new class of non-convex functions, the so-called $ϕ$-convex as follows:\nDefinition 2.\nThe function $f : K → R$ on the ϕ-convex set K is said to be ϕ-convex, if\n$f ( ℘ 1 + ı e i ϕ ( ℘ 2 − ℘ 1 ) ) ≤ ( 1 − ı ) f ( ℘ 1 ) + ı f ( ℘ 2 ) , ∀ ℘ 1 , ℘ 2 ∈ K , ı ∈ [ 0 , 1 ] .$\nThe function f is said to be $ϕ$-concave iff $( − f )$ is $ϕ$-convex. Note that every convex function is $ϕ$-convex but the converse does not hold in general.\nRaina, in , introduced a class of functions defined by\n$F ρ , λ σ ( z ) = F ρ , λ σ ( 0 ) , σ ( 1 ) , … ( z ) = ∑ k = 0 + ∞ σ ( k ) Γ ( ρ k + λ ) z k ,$\nwhere $ρ , λ > 0 , | z | < R$ and\n$σ = ( σ ( 0 ) , … , σ ( k ) , … )$\nis a bounded sequence of positive real numbers. Note that, if we take in (2) $ρ = 1 , λ = 1$ and\n$σ ( k ) = ( α ) k ( β ) k ( γ ) k f o r k = 0 , 1 , 2 , … ,$\nwhere $α , β$, and $γ$ are parameters which can take arbitrary real or complex values (provided that $γ ≠ 0 , − 1 , − 2 , … ) ,$ and the symbol $( a ) k$ denotes the quantity\n$( a ) k = Γ ( a + k ) Γ ( a ) = a ( a + 1 ) … ( a + k − 1 ) , k = 0 , 1 , 2 , … ,$\nand restrict its domain to $| z | ≤ 1$ (with $z ∈ C$), then we have the classical hypergeometric function, that is\n$F ρ , λ σ ( z ) = F ( α , β ; γ ; z ) = ∑ k = 0 + ∞ ( α ) k ( β ) k k ! ( γ ) k z k .$\nAlso, if $σ = ( 1 , 1 , … )$ with $ρ = α , ( R e ( α ) > 0 ) , λ = 1$ and restricting its domain to $z ∈ C$ in (2) then we have the classical Mittag–Leffler function\n$E α ( z ) = ∑ k = 0 + ∞ 1 Γ ( 1 + α k ) z k .$\nFinally, let recall the new class of set and new class of function involving Raina’s function introduced by Vivas-Cortez et al. in , the so-called generalized $ϕ$-convex set and also the generalized $ϕ$-convex function.\nDefinition 3.\nLet $ρ , λ > 0$ and $σ = ( σ ( 0 ) , … , σ ( k ) , … )$ are bounded sequence of positive real numbers. A non empty set K is said to be generalized ϕ-convex set, if\n$℘ 1 + ı F ρ , λ σ ( ℘ 2 − ℘ 1 ) ∈ K , ∀ ℘ 1 , ℘ 2 ∈ K a n d ı ∈ [ 0 , 1 ] ,$\nwhere $F ρ , λ σ ( · )$ is Raina’s function.\nDefinition 4.\nLet $ρ , λ > 0$ and $σ = ( σ ( 0 ) , … , σ ( k ) , … )$ are bounded sequence of positive real numbers. 
If a function $f : K → R$ satisfies the following inequality\n$f ( ℘ 1 + ı F ρ , λ σ ( ℘ 2 − ℘ 1 ) ) ≤ ( 1 − ı ) f ( ℘ 1 ) + ı f ( ℘ 2 ) ,$\nfor all $ı ∈ [ 0 , 1 ]$ and $℘ 1 , ℘ 2 ∈ K ,$ then f is called generalized ϕ-convex.\nRemark 1.\nFor $λ = 0 , ρ = 1$ and $σ = ( 0 , 1 , 0 , 0 , ⋯ )$ in Definition 4, then we have $F ρ , λ σ ( ℘ 2 − ℘ 1 ) = ℘ 2 − ℘ 1 > 0$ so we recapture Definition 1. Also, under suitable choice of $F ρ , λ σ ( · ) ,$ we get Definition 2.\nRecently, several authors have utilized quantum calculus as a strong tool in establishing new extensions of trapezium-type and other inequalities, see [6,41,42,43,44,45,46,47] and the references therein.\nWe recall now some concepts from quantum calculus. Let $I = [ ℘ 1 , ℘ 2 ] ⊆ R$ be an interval and $0 < q < 1$ be a constant.\nDefinition 5\n(). Let $f : I → R$ be a continuous function and $x ∈ I .$ Then q-derivative of f on I at x is defined as\n$℘ 1 D q f ( x ) = f ( x ) − f ( q x + ( 1 − q ) ℘ 1 ) ( 1 − q ) ( x − ℘ 1 ) , x ≠ ℘ 1 , ℘ 1 D q f ( ℘ 1 ) = lim x → ℘ 1 ℘ 1 D q f ( x ) .$\nWe say that f is q-differentiable on I provided $℘ 1 D q f ( x )$ exists for all $x ∈ I .$ Note that if $℘ 1 = 0$ in (5), then $℘ 1 D q f = D q f ,$ where $D q$ is the well-known q-derivative of the function $f ( x )$ defined by\n$D q f ( x ) = f ( x ) − f ( q x ) ( 1 − q ) x .$\nDefinition 6\n(). Let $f : I → R$ be a continuous function. Then the q-integral on I is defined by\n$∫ ℘ 1 x f ( ı ) ℘ 1 d q ı = ( 1 − q ) ( x − ℘ 1 ) ∑ n = 0 + ∞ q n f q n x + ( 1 − q n ) ℘ 1 .$\nfor $x ∈ I .$ Note that if $℘ 1 = 0 ,$ then we have the classical q-integral, which is defined by\n$∫ 0 x f ( ı ) 0 d q ı = ( 1 − q ) x ∑ n = 0 + ∞ q n f q n x$\nfor $x ∈ [ 0 , + ∞ ) .$\nTheorem 2\n(). Assume that $f , g : I → R$ are continuous functions, $c ∈ R .$ Then, for $x ∈ I ,$ we have\n$∫ ℘ 1 x f ( ı ) + g ( ı ) ℘ 1 d q ı = ∫ ℘ 1 x f ( ı ) ℘ 1 d q ı + ∫ ℘ 1 x g ( ı ) ℘ 1 d q ı ;$\n$∫ ℘ 1 x ( c f ) ( ı ) ℘ 1 d q ı = c ∫ ℘ 1 x f ( ı ) ℘ 1 d q ı .$\nDefinition 7\n(). For any real number $℘ 1 ,$\n$[ ℘ 1 ] q = q ℘ 1 − 1 q − 1$\nis called the q-analogue of $℘ 1 .$ In particular, if $n ∈ Z ,$ we deonte\n$[ n ] = q n − 1 q − 1 = q n − 1 + ⋯ + q + 1 .$\nDefinition 8\n(). If $n ∈ Z ,$ the q-analogue of $( x − ℘ 1 ) n$ is the polynomial\n$( x − ℘ 1 ) q n = 1 , n = 0 ; ( x − ℘ 1 ) ( x − q ℘ 1 ) ⋯ ( x − q n − 1 ℘ 1 ) , n ≥ 1 .$\nDefinition 9\n(). For any $t , s > 0 ,$\n$β q ( t , s ) = ∫ 0 1 ı t − 1 ( 1 − q ı ) q s − 1 0 d q ı$\nis called the q-Beta function. Note that\n$β q ( t , 1 ) = ∫ 0 1 ı t − 1 0 d q ı = 1 [ t ] ,$\nwhere $[ t ]$ is the q-analogue of $t .$\nTheorem 3\n(). (q-Hermite–Hadamard) Let $f : I ⟶ R$ be a convex continuous function on I and $0 < q < 1 .$ Then the following inequality holds:\n$f ℘ 1 + ℘ 2 2 ≤ 1 ℘ 2 − ℘ 1 ∫ ℘ 1 ℘ 2 f ( ı ) ℘ 1 d q ı ≤ q f ( ℘ 1 ) + f ( ℘ 2 ) 1 + q .$\nSudsutad et al. in , established the following three q-integral identities to be used in this paper.\nLemma 1.\nLet $0 < q < 1$ be a constant. Then the following equality holds:\n$∫ 0 1 ı | 1 − ( 1 + q ) ı | 0 d q ı = q ( 1 + 4 q + q 2 ) ( 1 + q + q 2 ) ( 1 + q ) 3 .$\nLemma 2.\nLet $0 < q < 1$ be a constant. 
Then the following equality holds:\n$∫ 0 1 ( 1 − ı ) | 1 − ( 1 + q ) ı | 0 d q ı = q ( 1 + 3 q 2 + 2 q 3 ) ( 1 + q + q 2 ) ( 1 + q ) 3 .$\nLemma 3.\nLet $f : [ ℘ 1 , ℘ 2 ] ⊆ R → R$ be a q-differentiable function on $( ℘ 1 , ℘ 2 )$ with $℘ 1 D q f$ be continuous and integrable on $[ ℘ 1 , ℘ 2 ] ,$ where $0 < q < 1 .$ Then the following identity holds:\n$1 ℘ 2 − ℘ 1 ∫ ℘ 1 ℘ 2 f ( ı ) ℘ 1 d q ı − q f ( ℘ 1 ) + f ( ℘ 2 ) 1 + q$\n$= q ( ℘ 2 − ℘ 1 ) 1 + q ∫ 0 1 ( 1 − ( 1 + q ) ı ) ℘ 1 D q f ( ı ℘ 2 + ( 1 − ı ) ℘ 1 ) 0 d q ı .$\nMotivated by the above literatures, the paper is structured as follows: In Section 2, an identity for a q-differentiable functions involving Raina’s generalized special function will be established. Applying this result, we develop some new quantum estimates inequalities for the generalized $ϕ$-convex functions. Some known results will be recaptured as special cases. Also, new quantum Hermite–Hadamard type inequality for the product of two generalized $ϕ$-convex functions will be derived. In Section 3, a briefly conclusion is given as well.\n\n## 2. Some Quantum Trapezium-Type Inequalities\n\nThroughout this paper the following notations are used:\n$O = ℘ 1 , ℘ 1 + F ρ , λ σ ( ℘ 2 − ℘ 1 ) f o r F ρ , λ σ ( ℘ 2 − ℘ 1 ) > 0 ,$\nwhere $ρ , λ > 0$ and $σ = ( σ ( 0 ) , … , σ ( k ) , … )$ are bounded sequence of positive real numbers. Let denote $O ∘$ the interior of $O .$ Also, for convenience we write $d q ı$ for $0 d q ı ,$ where $0 < q < 1 .$\nLemma 4.\nLet $f : O → R$ be a q-differentiable function on $O ∘$ with $℘ 1 D q f$ be continuous and integrable on $O .$ Then the following identity holds:\n$W f ( ℘ 1 , ℘ 2 ; q ) = q F ρ , λ σ ( ℘ 2 − ℘ 1 ) 1 + q ∫ 0 1 ( 1 − ( 1 + q ) ı ) ℘ 1 D q f ( ℘ 1 + ı F ρ , λ σ ( ℘ 2 − ℘ 1 ) ) d q ı ,$\nwhere\n$W f ( ℘ 1 , ℘ 2 ; q ) = 1 F ρ , λ σ ( ℘ 2 − ℘ 1 ) ∫ ℘ 1 ℘ 1 + F ρ , λ σ ( ℘ 2 − ℘ 1 ) f ( ı ) ℘ 1 d q ı − q f ( ℘ 1 ) + f ( ℘ 1 + F ρ , λ σ ( ℘ 2 − ℘ 1 ) ) 1 + q .$\nProof.\nUsing Definitions 5 and 6, we have\n$∫ 0 1 ( 1 − ( 1 + q ) ı ) ℘ 1 D q f ( ℘ 1 + ı F ρ , λ σ ( ℘ 2 − ℘ 1 ) ) d q ı = ∫ 0 1 f ( ℘ 1 + ı F ρ , λ σ ( ℘ 2 − ℘ 1 ) ) − f ( ℘ 1 + q ı F ρ , λ σ ( ℘ 2 − ℘ 1 ) ) ( 1 − q ) ı F ρ , λ σ ( ℘ 2 − ℘ 1 ) d q ı − ( 1 + q ) ∫ 0 1 ı f ( ℘ 1 + ı F ρ , λ σ ( ℘ 2 − ℘ 1 ) ) − f ( ℘ 1 + q ı F ρ , λ σ ( ℘ 2 − ℘ 1 ) ) ( 1 − q ) ı F ρ , λ σ ( ℘ 2 − ℘ 1 ) d q ı = ∑ n = 0 + ∞ f ( ℘ 1 + q n F ρ , λ σ ( ℘ 2 − ℘ 1 ) ) − ∑ n = 0 + ∞ f ( ℘ 1 + q n + 1 F ρ , λ σ ( ℘ 2 − ℘ 1 ) ) F ρ , λ σ ( ℘ 2 − ℘ 1 ) − ( 1 + q ) ∑ n = 0 + ∞ f ( ℘ 1 + q n F ρ , λ σ ( ℘ 2 − ℘ 1 ) ) − ∑ n = 0 + ∞ f ( ℘ 1 + q n + 1 F ρ , λ σ ( ℘ 2 − ℘ 1 ) ) F ρ , λ σ ( ℘ 2 − ℘ 1 ) = − q f ( ℘ 1 ) + f ( ℘ 1 + F ρ , λ σ ( ℘ 2 − ℘ 1 ) ) q F ρ , λ σ ( ℘ 2 − ℘ 1 ) + ( 1 + q ) q F ρ , λ σ ( ℘ 2 − ℘ 1 ) 2 ∫ ℘ 1 ℘ 1 + F ρ , λ σ ( ℘ 2 − ℘ 1 ) f ( ı ) ℘ 1 d q ı .$\nMultiplying both sides of above equality by $q F ρ , λ σ ( ℘ 2 − ℘ 1 ) 1 + q ,$ we get the desired result. The proof of Lemma 4 is completed. 
□\nRemark 2.\nTaking $q → 1 −$ in Lemma 4, we obtain the following new identity:\n$W f ( ℘ 1 , ℘ 2 ) = F ρ , λ σ ( ℘ 2 − ℘ 1 ) 2 ∫ 0 1 ( 1 − 2 ı ) f ′ ( ℘ 1 + ı F ρ , λ σ ( ℘ 2 − ℘ 1 ) ) d ı ,$\nwhere\n$W f ( ℘ 1 , ℘ 2 ) = 1 F ρ , λ σ ( ℘ 2 − ℘ 1 ) ∫ ℘ 1 ℘ 1 + F ρ , λ σ ( ℘ 2 − ℘ 1 ) f ( ı ) d ı − f ( ℘ 1 ) + f ( ℘ 1 + F ρ , λ σ ( ℘ 2 − ℘ 1 ) ) 2 .$\nRemark 3.\nTaking $F ρ , λ σ ( ℘ 2 − ℘ 1 ) = ℘ 2 − ℘ 1$ in Lemma 4, we get Lemma 3.\nTheorem 4.\nLet $f : O → R$ be a q-differentiable function on $O ∘$ with $℘ 1 D q f$ be continuous and integrable on $O .$ If $| ℘ 1 D q f |$ is generalized ϕ-convex on $O ,$ then the following inequality holds:\n$| W f ( ℘ 1 , ℘ 2 ; q ) | ≤ q 2 F ρ , λ σ ( ℘ 2 − ℘ 1 ) A ( q ) | ℘ 1 D q f ( ℘ 1 ) | + B ( q ) | ℘ 1 D q f ( ℘ 2 ) | ,$\nwhere\n$A ( q ) = q ( 1 + 3 q 2 + 2 q 3 ) ( 1 + q + q 2 ) ( 1 + q ) 4 , B ( q ) = 1 + 4 q + q 2 ( 1 + q + q 2 ) ( 1 + q ) 4 .$\nProof.\nUsing Lemmas 1, 2 and 4, the fact that $| ℘ 1 D q f |$ is generalized $ϕ$-convex function, we have\n$| W f ( ℘ 1 , ℘ 2 ; q ) | ≤ q F ρ , λ σ ( ℘ 2 − ℘ 1 ) 1 + q ∫ 0 1 | 1 − ( 1 + q ) ı | | ℘ 1 D q f ( ℘ 1 + ı F ρ , λ σ ( ℘ 2 − ℘ 1 ) ) | d q ı ≤ q F ρ , λ σ ( ℘ 2 − ℘ 1 ) 1 + q ∫ 0 1 | 1 − ( 1 + q ) ı | ( 1 − ı ) | ℘ 1 D q f ( ℘ 1 ) | + ı | ℘ 1 D q f ( ℘ 2 ) | d q ı = q 2 F ρ , λ σ ( ℘ 2 − ℘ 1 ) A ( q ) | ℘ 1 D q f ( ℘ 1 ) | + B ( q ) | ℘ 1 D q f ( ℘ 2 ) | .$\nThe proof of Theorem 4 is completed. □\nRemark 4.\nTaking $F ρ , λ σ ( ℘ 2 − ℘ 1 ) = ℘ 2 − ℘ 1$ in Theorem 4, we get (, Theorem 4.1).\nCorollary 1.\nTaking $q → 1 −$ in Theorem 4, we get\n$| W f ( ℘ 1 , ℘ 2 ) | ≤ F ρ , λ σ ( ℘ 2 − ℘ 1 ) | f ′ ( ℘ 1 ) | + | f ′ ( ℘ 2 ) | 8 .$\nCorollary 2.\nTaking $| ℘ 1 D q f | ≤ K$ in Theorem 4, we get\n$| W f ( ℘ 1 , ℘ 2 ; q ) | ≤ K q 2 F ρ , λ σ ( ℘ 2 − ℘ 1 ) A ( q ) + B ( q ) .$\nTheorem 5.\nLet $f : O → R$ be a q-differentiable function on $O ∘$ with $℘ 1 D q f$ be continuous and integrable on $O .$ If $| ℘ 1 D q f | r$ is generalized ϕ-convex on O for $r > 1$ and $1 p + 1 r = 1 ,$ then the following inequality holds:\n$| W f ( ℘ 1 , ℘ 2 ; q ) | ≤ q F ρ , λ σ ( ℘ 2 − ℘ 1 ) 1 + q B ( p ; q ) p ( q + 1 ) | ℘ 1 D q 2 f ( ℘ 1 ) | r + | ℘ 1 D q 2 f ( ℘ 2 ) | r 1 + q r ,$\nwhere\n$B ( p ; q ) = ∫ 0 1 | 1 − ( 1 + q ) ı | p d q ı .$\nProof.\nUsing Lemmas 1, 2 and 4, Hölder’s inequality and the fact that $| ℘ 1 D q f | r$ is generalized $ϕ$-convex function, we have\n$| W f ( ℘ 1 , ℘ 2 ; q ) | ≤ q F ρ , λ σ ( ℘ 2 − ℘ 1 ) 1 + q ∫ 0 1 | 1 − ( 1 + q ) ı | | ℘ 1 D q f ( ℘ 1 + ı F ρ , λ σ ( ℘ 2 − ℘ 1 ) ) | d q ı ≤ q F ρ , λ σ ( ℘ 2 − ℘ 1 ) 1 + q ∫ 0 1 | 1 − ( 1 + q ) ı | p d q ı 1 p × ∫ 0 1 | ℘ 1 D q f ( ℘ 1 + ı F ρ , λ σ ( ℘ 2 − ℘ 1 ) ) | r d q ı 1 r ≤ q F ρ , λ σ ( ℘ 2 − ℘ 1 ) 1 + q ∫ 0 1 | 1 − ( 1 + q ) ı | p d q ı 1 p × ∫ 0 1 ( 1 − ı ) | ℘ 1 D q f ( ℘ 1 ) | r + ı | ℘ 1 D q f ( ℘ 2 ) | r d q ı 1 r = q F ρ , λ σ ( ℘ 2 − ℘ 1 ) 1 + q B ( p ; q ) p ( q + 1 ) | ℘ 1 D q f ( ℘ 1 ) | r + | ℘ 1 D q f ( ℘ 2 ) | r 1 + q r .$\nThe proof of Theorem 5 is completed. 
□\nCorollary 3.\nTaking $q → 1 −$ in Theorem 5, we get\n$| W f ( ℘ 1 , ℘ 2 ) | ≤ F ρ , λ σ ( ℘ 2 − ℘ 1 ) 2 2 ( p + 1 ) p 2 | f ′ ( ℘ 1 ) | r + | f ′ ( ℘ 2 ) | r 2 r .$\nCorollary 4.\nTaking $| ℘ 1 D q f | ≤ K$ in Theorem 5, we get\n$| W f ( ℘ 1 , ℘ 2 ; q ) | ≤ K q 1 + q 2 + q 1 + q r B ( p ; q ) p F ρ , λ σ ( ℘ 2 − ℘ 1 ) .$\nTheorem 6.\nLet $f : O → R$ be a q-differentiable function on $O ∘$ with $℘ 1 D q f$ be continuous and integrable on $O .$ If $| ℘ 1 D q f | r$ is generalized ϕ-convex on $O ,$ then for $r ≥ 1 ,$ the following inequality holds:\n$| W f ( ℘ 1 , ℘ 2 ; q ) | ≤ q 2 F ( q ) F ρ , λ σ ( ℘ 2 − ℘ 1 ) × C ( q ) | ℘ 1 D q f ( ℘ 1 ) | r + D ( q ) | ℘ 1 D q f ( ℘ 2 ) | r r ,$\nwhere\n$C ( q ) = 1 + 3 q 2 + 2 q 3 ( 1 + q + q 2 ) ( 2 + q + q 3 ) , D ( q ) = 1 + 4 q + q 2 ( 1 + q + q 2 ) ( 2 + q + q 3 ) , F ( q ) = 2 + q + q 2 ( 1 + q ) 4 .$\nProof.\nUsing Lemmas 1, 2 and 4, the well–known power mean inequality and the fact that $| ℘ 1 D q f | r$ is generalized $ϕ$-convex function, we have\n$| W f ( ℘ 1 , ℘ 2 ; q ) | ≤ q F ρ , λ σ ( ℘ 2 − ℘ 1 ) 1 + q ∫ 0 1 | 1 − ( 1 + q ) ı | | ℘ 1 D q f ( ℘ 1 + ı F ρ , λ σ ( ℘ 2 − ℘ 1 ) ) | d q ı ≤ q F ρ , λ σ ( ℘ 2 − ℘ 1 ) 1 + q ∫ 0 1 | 1 − ( 1 + q ) ı | d q ı 1 − 1 r × ∫ 0 1 | 1 − ( 1 + q ) ı | | ℘ 1 D q f ( ℘ 1 + ı F ρ , λ σ ( ℘ 2 − ℘ 1 ) ) | r d q ı 1 r ≤ q F ρ , λ σ ( ℘ 2 − ℘ 1 ) 1 + q ∫ 0 1 | 1 − ( 1 + q ) ı | d q ı 1 − 1 r × ∫ 0 1 | 1 − ( 1 + q ) ı | ( 1 − ı ) | ℘ 1 D q f ( ℘ 1 ) | r + ı | ℘ 1 D q f ( ℘ 2 ) | r d q ı 1 r = q 2 F ( q ) F ρ , λ σ ( ℘ 2 − ℘ 1 ) C ( q ) | ℘ 1 D q f ( ℘ 1 ) | r + D ( q ) | ℘ 1 D q f ( ℘ 2 ) | r r .$\nThe proof of Theorem 6 is completed. □\nRemark 5.\nTaking $F ρ , λ σ ( ℘ 2 − ℘ 1 ) = ℘ 2 − ℘ 1$ in Theorem 6, we get (, Theorem 4.2).\nCorollary 5.\nTaking $q → 1 −$ in Theorem 6, we get\n$| W f ( ℘ 1 , ℘ 2 ) | ≤ F ρ , λ σ ( ℘ 2 − ℘ 1 ) 4 | f ′ ( ℘ 1 ) | r + | f ′ ( ℘ 2 ) | r 2 r .$\nCorollary 6.\nTaking $| ℘ 1 D q f | ≤ K$ in Theorem 6, we get\n$| W f ( ℘ 1 , ℘ 2 ; q ) | ≤ K q 2 F ( q ) F ρ , λ σ ( ℘ 2 − ℘ 1 ) C ( q ) + D ( q ) r .$\nTheorem 7.\nLet $f : O → R$ be a q-differentiable function on $O ∘$ with $℘ 1 D q f$ be continuous and integrable on $O .$ If $| ℘ 1 D q f | r$ is generalized ϕ-convex on $O ,$ then for $r ≥ 1 ,$ the following inequality holds:\n$| W f ( ℘ 1 , ℘ 2 ; q ) | ≤ q F ρ , λ σ ( ℘ 2 − ℘ 1 ) 1 + q M ( r ; q ) | ℘ 1 D q f ( ℘ 1 ) | r + N ( r ; q ) | ℘ 1 D q f ( ℘ 2 ) | r r ,$\nwhere\n$M ( r ; q ) = ∫ 0 1 ( 1 − ı ) | 1 − ( 1 + q ) ı | r d q ı , N ( r ; q ) = ∫ 0 1 ı | 1 − ( 1 + q ) ı | r d q ı .$\nProof.\nUsing Lemmas 1, 2 and 4, the well–known power mean inequality and the fact that $| ℘ 1 D q f | r$ is generalized $ϕ$-convex function, we have\n$| W f ( ℘ 1 , ℘ 2 ; q ) | ≤ q F ρ , λ σ ( ℘ 2 − ℘ 1 ) 1 + q ∫ 0 1 | 1 − ( 1 + q ) ı | | ℘ 1 D q f ( ℘ 1 + ı F ρ , λ σ ( ℘ 2 − ℘ 1 ) ) | d q ı ≤ q F ρ , λ σ ( ℘ 2 − ℘ 1 ) 1 + q ∫ 0 1 d q ı 1 − 1 r × ∫ 0 1 | 1 − ( 1 + q ) ı | r | ℘ 1 D q f ( ℘ 1 + ı F ρ , λ σ ( ℘ 2 − ℘ 1 ) ) | r d q ı 1 r ≤ q F ρ , λ σ ( ℘ 2 − ℘ 1 ) 1 + q ∫ 0 1 d q ı 1 − 1 r × ∫ 0 1 | 1 − ( 1 + q ) ı | r ( 1 − ı ) | ℘ 1 D q f ( ℘ 1 ) | r + ı | ℘ 1 D q f ( ℘ 2 ) | r d q ı 1 r = q F ρ , λ σ ( ℘ 2 − ℘ 1 ) 1 + q M ( r ; q ) | ℘ 1 D q f ( ℘ 1 ) | r + N ( r ; q ) | ℘ 1 D q f ( ℘ 2 ) | r r .$\nThe proof of Theorem 7 is completed. 
□\nCorollary 7.\nTaking $q → 1 −$ in Theorem 7, we get\n$| W f ( ℘ 1 , ℘ 2 ) | ≤ F ρ , λ σ ( ℘ 2 − ℘ 1 ) 2 2 ( r + 1 ) r | f ′ ( ℘ 2 ) | .$\nCorollary 8.\nTaking $| ℘ 1 D q f | ≤ K$ in Theorem 7, we get\n$| W f ( ℘ 1 , ℘ 2 ; q ) | ≤ K q 1 + q F ρ , λ σ ( ℘ 2 − ℘ 1 ) M ( r ; q ) + N ( r ; q ) r .$\nThis lasts Theorems establish two quantum estimates for the product of generalized $ϕ$-convex functions.\nTheorem 8.\nLet $f , g : O → R$ be two non negative q-differentiable functions on $O ∘$ and generalized ϕ-convex on $O .$ Then the following inequalities hold:\n$1 F ρ , λ σ ( ℘ 2 − ℘ 1 ) ∫ ℘ 1 ℘ 1 + F ρ , λ σ ( ℘ 2 − ℘ 1 ) f ( ı ) g ( ı ) d q ı ≤ ( 1 + q ) f ( ℘ 1 ) g ( ℘ 1 ) + q ( 1 + q 2 ) f ( ℘ 2 ) g ( ℘ 2 ) + q 2 V ( ℘ 1 , ℘ 2 ) ( 1 + q ) ( 1 + q + q 2 )$\nand\n$2 f 2 ℘ 1 + F ρ , λ σ ( ℘ 2 − ℘ 1 ) 2 g 2 ℘ 1 + F ρ , λ σ ( ℘ 2 − ℘ 1 ) 2 ≤ 1 F ρ , λ σ ( ℘ 2 − ℘ 1 ) ∫ ℘ 1 ℘ 1 + F ρ , λ σ ( ℘ 2 − ℘ 1 ) f ( ı ) g ( ı ) d q ı + 2 q 2 U ( ℘ 1 , ℘ 2 ) + ( 1 + 2 q + q 2 ) V ( ℘ 1 , ℘ 2 ) 2 ( 1 + q ) ( 1 + q + q 2 ) ,$\nwhere\n$U ( ℘ 1 , ℘ 2 ) = f ( ℘ 1 ) g ( ℘ 1 ) + f ( ℘ 2 ) g ( ℘ 2 ) , V ( ℘ 1 , ℘ 2 ) = f ( ℘ 1 ) g ( ℘ 2 ) + f ( ℘ 2 ) g ( ℘ 1 ) .$\nProof.\nUsing the generalized $ϕ$-convexity of f and g for all $ı ∈ [ 0 , 1 ] ,$ we have\n$f ( ℘ 1 + ı F ρ , λ σ ( ℘ 2 − ℘ 1 ) ) ≤ ( 1 − ı ) f ( ℘ 1 ) + ı f ( ℘ 2 ) ,$\n$g ( ℘ 1 + ı F ρ , λ σ ( ℘ 2 − ℘ 1 ) ) ≤ ( 1 − ı ) g ( ℘ 1 ) + ı g ( ℘ 2 ) .$\nMultiplying (22) with (23), we get\n$f ( ℘ 1 + ı F ρ , λ σ ( ℘ 2 − ℘ 1 ) ) g ( ℘ 1 + ı F ρ , λ σ ( ℘ 2 − ℘ 1 ) )$\n$≤ ( 1 − ı ) 2 f ( ℘ 1 ) g ( ℘ 1 ) + ı 2 f ( ℘ 2 ) g ( ℘ 2 ) + ı ( 1 − ı ) f ( ℘ 1 ) g ( ℘ 2 ) + f ( ℘ 2 ) g ( ℘ 1 ) .$\nTaking q-integral for (24) with respect to ı on $( 0 , 1 ) ,$ and substituting $u = ℘ 1 + ı F ρ , λ σ ( ℘ 2 − ℘ 1 ) ,$ we deduce the desired inequality (20). The proof of inequality (21) is similar so we omit it. 
□\nRemark 6.\nTaking $F ρ , λ σ ( ℘ 2 − ℘ 1 ) = ℘ 2 − ℘ 1$ in Theorem 8, we get (, Theorem 4.3).\nCorollary 9.\nTaking $q → 1 −$ in Theorem 8, we get\n$1 F ρ , λ σ ( ℘ 2 − ℘ 1 ) ∫ ℘ 1 ℘ 1 + F ρ , λ σ ( ℘ 2 − ℘ 1 ) f ( ı ) g ( ı ) d ı ≤ 2 U ( ℘ 1 , ℘ 2 ) + V ( ℘ 1 , ℘ 2 ) 6$\nand\n$2 f 2 ℘ 1 + F ρ , λ σ ( ℘ 2 − ℘ 1 ) 2 g 2 ℘ 1 + F ρ , λ σ ( ℘ 2 − ℘ 1 ) 2$\n$≤ 1 F ρ , λ σ ( ℘ 2 − ℘ 1 ) ∫ ℘ 1 ℘ 1 + F ρ , λ σ ( ℘ 2 − ℘ 1 ) f ( ı ) g ( ı ) d ı + U ( ℘ 1 , ℘ 2 ) + 2 V ( ℘ 1 , ℘ 2 ) 6 .$\nTheorem 9.\nLet $f , g : O → R$ be two non negative q-differentiable functions on $O ∘$ and generalized ϕ-convex on $O .$ Then the following inequality holds:\n$( 1 + q ) ( 1 + q + q 2 ) F ρ , λ σ ( ℘ 2 − ℘ 1 ) 2 × ∫ ℘ 1 ℘ 1 + F ρ , λ σ ( ℘ 2 − ℘ 1 ) ∫ ℘ 1 ℘ 1 + F ρ , λ σ ( ℘ 2 − ℘ 1 ) ∫ 0 1 f x + ı F ρ , λ σ ( y − x ) g x + ı F ρ , λ σ ( y − x ) d q ı d q x d q y ≤ ( 1 + 2 q + q 2 ) F ρ , λ σ ( ℘ 2 − ℘ 1 ) ∫ ℘ 1 ℘ 1 + F ρ , λ σ ( ℘ 2 − ℘ 1 ) f ( ı ) g ( ı ) d q ı + 2 q 2 ( 1 + q ) 2 q 2 f ( ℘ 1 ) g ( ℘ 1 ) + f ( ℘ 2 ) g ( ℘ 2 ) + q V ( ℘ 1 , ℘ 2 ) ,$\nwhere $V ( ℘ 1 , ℘ 2 )$ is defined as in Theorem 8.\nProof.\nUsing the generalized $ϕ$-convexity of f and g for all $ı ∈ [ 0 , 1 ] ,$ we have\n$f ( x + ı F ρ , λ σ ( y − x ) ) ≤ ( 1 − ı ) f ( x ) + ı f ( y ) ,$\n$g ( x + ı F ρ , λ σ ( y − x ) ) ≤ ( 1 − ı ) g ( x ) + ı g ( y ) .$\nMultiplying (28) with (29), we get\n$f ( x + ı F ρ , λ σ ( y − x ) ) g ( x + ı F ρ , λ σ ( y − x ) )$\n$≤ ( 1 − ı ) 2 f ( x ) g ( x ) + ı 2 f ( y ) g ( y ) + ı ( 1 − ı ) f ( x ) g ( y ) + f ( y ) g ( x ) .$\nTaking q-integral for (30) with respect to ı on $( 0 , 1 ) ,$ we obtain\n$∫ 0 1 f ( x + ı F ρ , λ σ ( y − x ) ) g ( x + ı F ρ , λ σ ( y − x ) ) d q ı ≤ q ( 1 + q 2 ) f ( x ) g ( x ) ( 1 + q ) ( 1 + q + q 2 ) + f ( y ) g ( y ) 1 + q + q 2 + q 2 f ( x ) g ( y ) + f ( y ) g ( x ) ( 1 + q ) ( 1 + q + q 2 ) .$\nNext, taking double q-integral to both sides of (31) with respect to $x , y$ on $O ∘$, we have\n$∫ ℘ 1 ℘ 1 + F ρ , λ σ ( ℘ 2 − ℘ 1 ) ∫ ℘ 1 ℘ 1 + F ρ , λ σ ( ℘ 2 − ℘ 1 ) ∫ 0 1 f x + ı F ρ , λ σ ( y − x ) g x + ı F ρ , λ σ ( y − x ) d q ı d q x d q y ≤ q ( 1 + q 2 ) F ρ , λ σ ( ℘ 2 − ℘ 1 ) ( 1 + q ) ( 1 + q + q 2 ) ∫ ℘ 1 ℘ 1 + F ρ , λ σ ( ℘ 2 − ℘ 1 ) f ( x ) g ( x ) d q x + F ρ , λ σ ( ℘ 2 − ℘ 1 ) 1 + q + q 2 ∫ ℘ 1 ℘ 1 + F ρ , λ σ ( ℘ 2 − ℘ 1 ) f ( y ) g ( y ) d q y + q 2 ( 1 + q ) ( 1 + q + q 2 ) × [ ∫ ℘ 1 ℘ 1 + F ρ , λ σ ( ℘ 2 − ℘ 1 ) f ( x ) d q x ∫ ℘ 1 ℘ 1 + F ρ , λ σ ( ℘ 2 − ℘ 1 ) g ( y ) d q y + ∫ ℘ 1 ℘ 1 + F ρ , λ σ ( ℘ 2 − ℘ 1 ) f ( y ) d q y ∫ ℘ 1 ℘ 1 + F ρ , λ σ ( ℘ 2 − ℘ 1 ) g ( x ) d q x ] .$\nBy applying Theorem 3 on the right hand side of (32) and multiplying both sides of the derived inequality by the factor $( 1 + q ) ( 1 + q + q 2 ) F ρ , λ σ ( ℘ 2 − ℘ 1 ) 2 ,$ we deduce the desired inequality in (27). □\nRemark 7.\nTaking $F ρ , λ σ ( ℘ 2 − ℘ 1 ) = ℘ 2 − ℘ 1$ in Theorem 9, we get (, Theorem 4.4).\nCorollary 10.\nTaking $q → 1 −$ in Theorem 9, we get\n$3 2 F ρ , λ σ ( ℘ 2 − ℘ 1 ) 2 × ∫ ℘ 1 ℘ 1 + F ρ , λ σ ( ℘ 2 − ℘ 1 ) ∫ ℘ 1 ℘ 1 + F ρ , λ σ ( ℘ 2 − ℘ 1 ) ∫ 0 1 f x + ı F ρ , λ σ ( y − x ) g x + ı F ρ , λ σ ( y − x ) d ı d x d y ≤ 1 F ρ , λ σ ( ℘ 2 − ℘ 1 ) ∫ ℘ 1 ℘ 1 + F ρ , λ σ ( ℘ 2 − ℘ 1 ) f ( ı ) g ( ı ) d ı + U ( ℘ 1 , ℘ 2 ) + V ( ℘ 1 , ℘ 2 ) 8 .$\nRemark 8.\nSince Raina’s generalized special function is parametrized, then for different appropriate parameter values of $ρ , λ > 0 ,$ and $σ = ( σ ( 0 ) , … , σ ( k ) , … )$ it is possible to obtain new inequalities using the theorems and their corollaries presented in this work. 
It is useful to note that the results can be applied to derive some inequalities using special means and others special functions.\n\n## 3. Conclusions\n\nIn the present text we have found an identity (Lemma 4) that relates the right inequality of Hermite Hadamard, from which important and new estimates have been established for them in the quantum calculus scenario, using a new class of generalized convex functions called generalized $ϕ$-convex functions, see Theorems 4–9. In the proofs the Raina generalized function, the Hölder inequality, and the power mean inequality were used, and as an end result, an esteem for the integral of the product of functions that have the property of being $ϕ$-convex. Some corollary and commentary regarding the main results have also been presented, and as a final note we draw attention to some results involving the function of Mittag–Leffler and hypergeometric function as cases of the results obtained.\nSince quantum calculus has large applications in many areas of mathematics, the class of generalized $ϕ$-convex can be applied to obtain new results in convex analysis, special functions, quantum mechanics, related optimization theory, mathematical inequalities, and also stimulate further research in areas of pure and applied sciences.\n\n## Author Contributions\n\nAll authors contributed equally in the preparation of the present work taking into account the theorems and corollaries presented, the review of the articles and books cited, formal analysis, investigation, writing—original draft preparation and writing—review and editing. All authors have read and agreed to the published version of the manuscript.\n\n## Funding\n\nThis research was funded by Dirección de Investigación from Pontificia Universidad Católica del Ecuador in the research project entitled: Some inequalities using generalized convexity.\n\n## Acknowledgments\n\nMiguel J. Vivas-Cortez thanks to Dirección de Investigación from Pontificia Universidad Católica del Ecuador for the technical support given to the research project entitled: Algunas desigualdades de funciones convexas generalizadas (Some inequalities of generalized convex functions). Jorge E. Hernández Hernández wants to thank to the Consejo de Desarrollo Científico, Humanístico y Tecnológico (CDCHT) from Universidad Centroccidental Lisandro Alvarado (Venezuela), also for the technical support given in the development of this article.\n\n## Conflicts of Interest\n\nThe authors declare no conflict of interest.\n\n## References\n\n1. Ernst, T. The History of Q-Calculus and a New Method; Department of Mathematics, Uppsala University: Stockholm, Sweden, 2000. [Google Scholar]\n2. Jackson, F.H. On a q-definite integrals. Q. J. Pure Appl. Math. 1910, 41, 193–203. [Google Scholar]\n3. Ernst, T. The different tongues of q-calculus. Proc. Eston. Acad. Sci. 2008, 57, 81–99. [Google Scholar] [CrossRef]\n4. Ernst, T. A Comprehensive Treatment of q-Calculus; Springer: Basel, Switzerland, 2012. [Google Scholar]\n5. Gauchman, H. Integral Inequalities in q-Calculus. Comp. Math. Appl. 2004, 47, 281–300. [Google Scholar] [CrossRef][Green Version]\n6. Kac, V.; Cheung, P. Quantum Calculus; Universitext; Springer: New York, NY, USA, 2002. [Google Scholar]\n7. Ismail, M.E.H. Classical and Quantum Orthogonal Polynomials in One Variable; Cambridge University Press: Cambridge, UK, 2005. [Google Scholar]\n8. Ismail, M.E.H.; Mansour, Z.C.I. q-analogues of Freud weights and nonlinear difference equations. Adv. Appl. Math. 2010, 45, 518–547. 
[Google Scholar] [CrossRef][Green Version]\n9. Brahim, K.; Taf, S.; Nefzi, B. New Integral Inequalities in Quantum Calculus. Int. J. Anal. Appl. 2015, 7, 50–58. [Google Scholar]\n10. Mirković, T.Z.; Tričković, S.B.; Stanković, M.S. Opial inequality in q-calculus. J. Ineq. Appl. 2018, 2018, 1–8. [Google Scholar] [CrossRef] [PubMed][Green Version]\n11. Niculescu, C.P. An invitation to convex function theory. In Order Structures in Functional Analysis; Editura Academiei Romane: Bucharest, Romania, 2001. [Google Scholar]\n12. Bennett, C.; Sharpley, R. Interpolation of Operators; Academic Press: Boston, MA, USA, 1998. [Google Scholar]\n13. Nguyen, N.A.; Gulan, M.; Olaru, S.; Averbe, P.R. Convex Lifting: Theory and Control Applications. IEEE Trans. Autom. Control 2010, 63, 1–16. [Google Scholar] [CrossRef]\n14. Mititelu, Ş.; Trenţă, S. Efficiency conditions in vector control problems governed by multiple integrals. J. Appl. Math. Comp. 2018, 57, 647–665. [Google Scholar] [CrossRef]\n15. Trenţă, S. Multiobjective Fractional Variational Problem on Higher-Order Jet Bundles. Comm. Math. Stat. 2016, 4, 323–340. [Google Scholar] [CrossRef]\n16. Trenţă, S. On a New Class of Vector Variational Control Problems. Num. Funct. Anal. Optim. 2018, 39, 1594–1603. [Google Scholar] [CrossRef]\n17. Trenţă, S. KT-geodesic pseudoinvex control problems governed by multiple integrals. J. Nonlinear Convex Anal. 2019, 20, 73–84. [Google Scholar]\n18. Jensen, J.L.W.V. Om konvexe Funktioner og Uligheder mellen Middelvaerdier. Nyt. Tidsskr. Math. 1905, 16, 49–68. [Google Scholar]\n19. Jensen, J.L.W.V. Sur les fonctions convexes et les inegalités entre les valeurs moyennes. Acta Math. 1906, 30, 175–193. [Google Scholar] [CrossRef]\n20. Tomar, M.; Agarwal, P.; Jleli, M.; Samet, B. Certain Ostrowski type inequalities for generalized s-convex functions. J. Nonlinear Sci. Appl. 2017, 10, 5947–5957. [Google Scholar] [CrossRef][Green Version]\n21. Alomari, M.; Darus, M.; Dragomir, S.S.; Cerone, P. Ostrowski type inequalities for functions whose derivatives are s-convex in the second sense. Appl. Math. Lett. 2010, 23, 1071–1076. [Google Scholar] [CrossRef]\n22. Akdemir, A.O.; Özdemir, M.E. Some Hadamard-type inequalities for coordinated P-convex functions and Godunova-Levin functions. AIP Conf. Proc. 2010, 1309, 7–15. [Google Scholar]\n23. Liu, W.; Wen, W.; Park, J. Hermite–Hadamard type inequalities for MT-convex functions via classical integrals and fractional integrals. J. Nonlinear Sci. Appl. 2016, 9, 766–777. [Google Scholar] [CrossRef]\n24. Bracamonte, M.; Giménez, J.; Vivas-Cortez, M.J. Hermite–Hadamard-Fejér Type Inequalities for Strongly (s,m)-Convex Functions with Modulus c, in Second Sense. Appl. Math. Inf. Sci. 2016, 10, 2045–2053. [Google Scholar] [CrossRef]\n25. Hernández Hernández, J.E. On Some New Integral Inequalities Related with the Hermite–Hadamard Inequality via h-Convex Functions. MAYFEB J. Math. 2017, 4, 1–12. [Google Scholar]\n26. Hernández Hernández, J.E. On log-(m,h1,h2)-convex functions and related integral inequalities. Int. J. Open Prob. Compt. Math. 2019, 12, 43–59. [Google Scholar]\n27. Vivas-Cortez, M.J.; García, C.; Hernández Hernández, J.E. Ostrowski-Type Inequalities for Functions Whose Derivative Modulus is Relatively (m,h1,h2)-Convex. Appl. Math. Inf. Sci. 2019, 13, 369–378. [Google Scholar] [CrossRef]\n28. Vivas-Cortez, M.J.; García, C.; Hernández Hernández, J.E. Ostrowski Type Inequalities for Functions Whose Derivative Modulus is Relatively Convex. Appl. Math. Inf. Sci. 2019, 13, 121–127. 
[Google Scholar] [CrossRef]\n29. Vivas-Cortez, M.J.; Hernández Hernández, J.E. Ostrowski and Jensen type inequalities via (s,m)-convex functions in the second sense. Boletin de la Sociedad Matemática Mexicana 2019, in press. [Google Scholar] [CrossRef]\n30. Vivas-Cortez, M.J.; Medina Viloria, J. Hermite–Hadamard Type Inequalities for Harmonically Convex Functions on n-Coordinates. Appl. Math. Inf. Sci. Lett. 2018, 6, 1–6. [Google Scholar]\n31. Vivas-Cortez, M.J.; García, C. Ostrowski Type Inequalities for Functions Whose Derivatives are (m,h1,h2)-Convex. Appl. Math. Inf. Sci. 2017, 11, 79–86. [Google Scholar] [CrossRef]\n32. Delavar, M.R.; De La Sen, M. Some generalizations of Hermite–Hadamard type inequalities. SpringerPlus 2016, 5, 1–9. [Google Scholar]\n33. Hernández Hernández, J.E. Some fractional integral inequalities of Hermite Hadamard and Minkowski type. Selecciones Matemáticas (Universidad de Trujillo) 2019, 6, 41–48. [Google Scholar] [CrossRef][Green Version]\n34. Kashuri, A.; Liko, R. Some new Hermite–Hadamard type inequalities and their applications. Stud. Sci. Math. Hung. 2019, 56, 103–142. [Google Scholar] [CrossRef]\n35. Noor, M.A. Some new classes of non-convex functions. Nonlinear Funct. Anal. Appl. 2006, 11, 165–171. [Google Scholar]\n36. Set, E.; Noor, M.A.; Awan, M.U.; Gözpinar, A. Generalized Hermite–Hadamard type inequalities involving fractional integral operators. J. Inequal. Appl. 2017, 169, 1–10. [Google Scholar] [CrossRef][Green Version]\n37. Vivas-Cortez, M.J.; Hernández Hernández, J.E. Hermite–Hadamard Inequalities type for Raina’s fractional integral operator using η-convex functions. Revista de Matemática Teoría y Aplicaciones 2019, 26, 1–19. [Google Scholar]\n38. Vivas-Cortez, M.J.; Liko, R.; Kashuri, A.; Hernández Hernández, J.E. New Quantum Estimates of Trapezium-Type Inequalities for Generalized ϕ-Convex Functions. Mathematics 2019, 7, 1047. [Google Scholar] [CrossRef][Green Version]\n39. Cortez, M.V. Relative strongly h-convex functions and integral inequalities. Appl. Math. Inf. Sci. Lett. 2016, 4, 39–45. [Google Scholar] [CrossRef]\n40. Raina, R.K. On generalized Wright’s hypergeometric functions and fractional calculus operators. East Asian Math. J. 2005, 21, 191–203. [Google Scholar]\n41. Alp, N.; Sarikaya, M.Z.; Kunt, M.; Iscan, I. q-Hermite–Hadamard inequalities and quantum estimates for midpoint type inequalities via convex and quasi-convex functions. J. King Saud Univ. Sci. 2018, 30, 193–203. [Google Scholar] [CrossRef][Green Version]\n42. Liu, W.J.; Zhuang, H.F. Some quantum estimates of Hermite–Hadamard inequalities for convex functions. J. Appl. Anal. Comput. 2017, 7, 501–522. [Google Scholar]\n43. Noor, M.A.; Noor, K.I.; Awan, M.U. Some quantum estimates for Hermite–Hadamard inequalities. Appl. Math. Comput. 2015, 251, 675–679. [Google Scholar] [CrossRef]\n44. Noor, M.A.; Noor, K.I.; Awan, M.U. Quantum analogues of Hermite–Hadamard type inequalities for generalized convexity. In Computation, Cryptography and Network Security; Daras, N., Rassias, M.T., Eds.; Springer: Cham, Switzerland, 2015; pp. 413–439. [Google Scholar]\n45. Sudsutad, W.; Ntouyas, S.K.; Tariboon, J. Quantum integral inequalities for convex functions. J. Math. Inequal. 2015, 9, 781–793. [Google Scholar] [CrossRef][Green Version]\n46. Tariboon, J.; Ntouyas, S.K. Quantum integral inequalities on finite intervals. J. Inequal. Appl. 2014, 121, 1–13. [Google Scholar] [CrossRef][Green Version]\n47. Zhuang, H.; Liu, W.; Park, J. 
Some quantum estimates of Hermite–Hadamard inequalities for quasi-convex functions. Mathematics 2019, 7, 152. [Google Scholar] [CrossRef][Green Version]
http://www.ferociouscoder.com/
"# Hackerrank: Cracking the Coding Interview – Stacks: Balanced Brackets\n\nThis is again a classic problem of detecting matching parenthesis. In fact, the title even tells you the appropriate data structure to use in order to solve this problem. The solution relies on the fact that if a left bracket (by bracket in this post I mean ‘(‘, ‘[‘ or ‘{‘) is found we can push it on to the stack and eventually whenever the corresponding right bracket (‘)’, ‘]’ or ‘}’) is found then we would be popping it off the stack. If the stack is empty after we are done with the entire input then we know that the input string was balanced properly.\n\nThe code I have written should be easy to understand but is a bit lengthy. In retrospect I could have refactored the checking and comparing part into a function so it looks nicer.\n\nFYI this is what all those characters are actually called:\n\n( ) – Parenthesis\n{ } – Braces\n[ ] – Brackets\n\nSolution:\n\n```import java.io.*;\nimport java.util.*;\nimport java.text.*;\nimport java.math.*;\nimport java.util.regex.*;\n\npublic class Solution {\n\npublic static boolean isBalanced(String expression) {\nchar[] input = expression.toCharArray();\nfor (int i=0; i<input.length; i++) {\nif (stack.isEmpty()) {\nstack.push(input[i]);\n} else {\nif (stack.peek() == '{') {\nif (input[i] == ']' || input[i] == ')') {\nreturn false;\n} else if (input[i] == '}') {\nstack.pop();\n} else {\nstack.push(input[i]);\n}\n} else if (stack.peek() == '[') {\nif (input[i] == '}' || input[i] == ')') {\nreturn false;\n} else if (input[i] == ']') {\nstack.pop();\n} else {\nstack.push(input[i]);\n}\n} else if (stack.peek() == '(') {\nif (input[i] == ']' || input[i] == '}') {\nreturn false;\n} else if (input[i] == ')') {\nstack.pop();\n} else {\nstack.push(input[i]);\n}\n}\n}\n}\nreturn stack.isEmpty();\n}\n\npublic static void main(String[] args) {\nScanner in = new Scanner(System.in);\nint t = in.nextInt();\nfor (int a0 = 0; a0 < t; a0++) {\nString expression = in.next();\nSystem.out.println( (isBalanced(expression)) ? \"YES\" : \"NO\" );\n}\n}\n}\n```\n\n# Hackerrank: Cracking the Coding Interview – Linked Lists: Detect a Cycle\n\nFor this problem we can apply a classic algorithm known as the tortoise and hare algorithm by Robert W. Floyd. We will use two pointers, one going faster than the other. If one of them reaches null then we know there are no cycles. Otherwise, eventually, the two pointers will point to the same node in the LinkedList and we will know that a cycle has been detected.\n\nThe wikipedia article I have linked also mentions some other algorithms for cycle detection that can be read for fun.\n\nSolution:\n\n```boolean hasCycle(Node head) {\nreturn false;\n}\nwhile (tortoise != null && hare != null) {\ntortoise = tortoise.next;\nhare = hare.next;\nif (hare != null) {\nhare = hare.next;\nif (tortoise == hare) {\nreturn true;\n}\n}\n}\nreturn false;\n}\n```\n\n# Hackerrank: Cracking the Coding Interview – Hash Tables: Ransom Note\n\nThis solution just involves keeping track of how many words are available and how many are used. If at any point we are using more than are available then we know the answer is “No”. 
# Hackerrank: Cracking the Coding Interview – Hash Tables: Ransom Note

This solution just involves keeping track of how many words are available and how many are used. If at any point we are using more than are available then we know the answer is "No". If all goes well after trying to print out the ransom note then we can safely print "Yes".

Solution:

```
import java.io.*;
import java.util.*;
import java.text.*;
import java.math.*;
import java.util.regex.*;

public class Solution {

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int m = in.nextInt();
        int n = in.nextInt();
        String magazine[] = new String[m];
        for (int magazine_i = 0; magazine_i < m; magazine_i++) {
            magazine[magazine_i] = in.next();
        }
        // count how many times each word is available in the magazine
        HashMap<String, Integer> map = new HashMap<>();
        for (int i = 0; i < magazine.length; i++) {
            if (map.containsKey(magazine[i])) {
                map.put(magazine[i], map.get(magazine[i]) + 1);
            } else {
                map.put(magazine[i], 1);
            }
        }
        String ransom[] = new String[n];
        boolean done = false;
        String ans = "Yes";
        for (int ransom_i = 0; !done && ransom_i < n; ransom_i++) {
            ransom[ransom_i] = in.next();
            if (map.containsKey(ransom[ransom_i])) {
                int x = map.get(ransom[ransom_i]);
                if (x > 1) {
                    map.put(ransom[ransom_i], x - 1);
                } else {
                    map.remove(ransom[ransom_i]);
                }
            } else {
                done = true;
                ans = "No";
            }
        }
        System.out.println(ans);
    }
}
```

# Hackerrank: Cracking the Coding Interview – Strings: Making Anagrams

The solution to this problem involves figuring out that if we just take the differences between the counts of each character in the two strings, then the sum of those differences is the optimal number of deletions we need to make. It should also be noted that while doing the calculations we need to ignore negative values and make them positive instead, hence the absolute value.

Solution:

```
import java.io.*;
import java.util.*;
import java.text.*;
import java.math.*;
import java.util.regex.*;

public class Solution {

    public static int numberNeeded(String first, String second) {
        char[] a = first.toCharArray();
        char[] b = second.toCharArray();
        // one counter per lowercase letter
        int[] ac = new int[26];
        int[] bc = new int[26];
        for (int i = 0; i < a.length; i++) {
            ac[a[i] - 'a']++;
        }
        for (int i = 0; i < b.length; i++) {
            bc[b[i] - 'a']++;
        }
        int ans = 0;
        for (int i = 0; i < ac.length; i++) {
            ans += Math.abs(ac[i] - bc[i]);
        }
        return ans;
    }

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        String a = in.next();
        String b = in.next();
        System.out.println(numberNeeded(a, b));
    }
}
```

# Hackerrank: Cracking the Coding Interview – Arrays: Left Rotation

The solution is easy if you know how the mod operator works. You take every original index and shift it to the left by k. This, however, can lead to negative values, so we increment this value by the size of the array (n) and mod it by n so that all values fit within [0, n - 1]. You can also think of this as adding (n - k) to every index if that makes more sense.
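For a concrete check (numbers made up): with n = 5 and k = 2, input element i = 2 is stored at (2 - 2 + 5) % 5 = 0, while element i = 0 wraps around to (0 - 2 + 5) % 5 = 3, so the input [10, 20, 30, 40, 50] is printed as 30 40 50 10 20, a left rotation by 2.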
Solution:

```
import java.io.*;
import java.util.*;
import java.text.*;
import java.math.*;
import java.util.regex.*;

public class Solution {

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int n = in.nextInt();
        int k = in.nextInt();
        int a[] = new int[n];
        for (int i = 0; i < n; i++) {
            // place the i-th input element directly at its rotated index
            a[(i - k + n) % n] = in.nextInt();
        }
        for (int i = 0; i < n; i++) {
            System.out.print(a[i]);
            System.out.print(" ");
        }
    }
}
```

# Full width fluid Twenty Fourteen (2016)

So in case you are wondering how I modified the default Twenty Fourteen (as of 2016) theme: all I did was add in the following custom CSS in the "Edit CSS" section under Appearance.

```
.site, .site-header, .page-content, article header, .entry-header, .entry-content, .entry-summary, .entry-meta, .comments-area, nav.navigation.post-navigation {
    max-width: inherit !important;
}
```

# Updating Flash in Chrome

Sometimes no matter how much you try you just can't get Chrome to get rid of those annoying warning messages about your Flash player version. I had a strange problem which I was able to debug by going to this url: chrome://flash/.

You'll notice something like this in the beginning:

```
Google Chrome 45.0.2454.85 (m)
OS Windows 7 or Server 2008 R2 SP1 64 bit
Flash plugin 18.0.0.232 C:\Windows\SysWOW64\Macromed\Flash\pepflashplayer32_18_0_0_232.dll
Flash plugin 18.0.0.232 C:\Program Files (x86)\Google\Chrome\Application\45.0.2454.85\PepperFlash\pepflashplayer.dll (not used)
```

If you see two Flash plugin lines then you're in the same boat as me. I happen to have installed two versions of Flash (and forgot about it). I have the regular version installed as well as the debug version. The regular version was up to date but the debug version was not. So that is why Chrome was complaining all along.

Solution

2. Disable the debug version of flash in chrome. Go to: chrome://plugins/ Click on details on the top right. Find the debug flash player and disable it.

# UVA 10855 – Rotated square

This problem has an interesting algorithm used in it: how to rotate a square matrix. Basically, if you want to rotate a square matrix by 90 degrees, you can notice that 4 elements in the 2D array get changed in a cyclical manner. You can just repeat this for (almost) one quarter of the square that you are rotating.
```
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.util.StringTokenizer;

/**
 *
 * @author Sanchit M. Bhatnagar
 * @see http://uhunt.felix-halim.net/id/74004
 *
 */
public class P10855 {

    public static void main(String[] args) throws IOException {
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        PrintWriter out = new PrintWriter(System.out);
        StringTokenizer st = null;

        while (true) {

            st = new StringTokenizer(br.readLine());
            int N = Integer.parseInt(st.nextToken());
            int n = Integer.parseInt(st.nextToken());
            if (N + n == 0)
                break;

            char[][] big = new char[N][N];
            for (int i = 0; i < N; i++) {
                char[] line = br.readLine().toCharArray();
                for (int j = 0; j < N; j++) {
                    big[i][j] = line[j];
                }
            }

            char[][] small = new char[n][n];
            for (int i = 0; i < n; i++) {
                char[] line = br.readLine().toCharArray();
                for (int j = 0; j < n; j++) {
                    small[i][j] = line[j];
                }
            }

            // count matches for each of the four rotations of the pattern
            out.print(check(big, small) + " ");
            rotate(small);
            out.print(check(big, small) + " ");
            rotate(small);
            out.print(check(big, small) + " ");
            rotate(small);
            out.println(check(big, small));
        }

        out.close();
        br.close();
    }

    private static int check(char[][] big, char[][] small) {
        int ans = 0;
        for (int i = 0; i <= big.length - small.length; i++) {
            for (int j = 0; j <= big.length - small.length; j++) {
                if (big[i][j] == small[0][0]) {
                    boolean found = true;
                    for (int k = 0; k < small.length; k++) {
                        for (int l = 0; l < small.length; l++) {
                            if (big[i + k][j + l] != small[k][l]) {
                                found = false;
                                break;
                            }
                        }
                    }
                    if (found)
                        ans++;
                }
            }
        }
        return ans;
    }

    private static void rotate(char[][] m) {
        int n = m.length;
        // rotate four cells at a time, working over one quadrant
        for (int i = 0; i < n / 2; i++)
            for (int j = 0; j < (n + 1) / 2; j++) {
                char temp = m[i][j];
                m[i][j] = m[n - 1 - j][i];
                m[n - 1 - j][i] = m[n - 1 - i][n - 1 - j];
                m[n - 1 - i][n - 1 - j] = m[j][n - 1 - i];
                m[j][n - 1 - i] = temp;
            }
    }
}
```

# UVA 101 – The Blocks Problem

Basic simulation. Just need to follow the instructions in the question.
```
import java.awt.Point;
import java.util.ArrayList;
import java.util.Scanner;

/**
 *
 * @author Sanchit M. Bhatnagar
 * @see http://uhunt.felix-halim.net/id/74004
 *
 */
public class P101 {

    public static void main(String args[]) // entry point from OS
    {
        Scanner sc = new Scanner(System.in);
        int size = sc.nextInt();
        @SuppressWarnings("unchecked")
        ArrayList<Integer>[] blocks = new ArrayList[size];
        for (int i = 0; i < size; i++) {
            blocks[i] = new ArrayList<Integer>();
            blocks[i].add(i); // block i starts on pile i
        }

        // System.out.println(Arrays.deepToString(blocks));

        while (sc.hasNext()) {
            String t = sc.next();
            if (t.equals("quit")) {
                break;
            } else {
                int a = sc.nextInt();
                String x = sc.next();
                int b = sc.nextInt();

                if (a == b)
                    continue;

                Point posA = findBlock(a, blocks);
                Point posB = findBlock(b, blocks);

                // illegal command: both blocks are in the same pile
                if (posA.x == posB.x)
                    continue;

                if (t.equals("move")) {
                    moveBack(posA, blocks);
                    if (x.equals("onto")) {
                        moveBack(posB, blocks);
                        blocks[posB.x].add(blocks[posA.x].remove(posA.y));
                    } else {
                        blocks[posB.x].add(blocks[posA.x].remove(posA.y));
                    }
                } else if (t.equals("pile")) {
                    if (x.equals("onto")) {
                        moveBack(posB, blocks);
                        int removed = 0;
                        int tSize = blocks[posA.x].size();
                        for (int i = posA.y; i < tSize; i++) {
                            blocks[posB.x].add(blocks[posA.x].remove(i - removed));
                            removed++;
                        }
                    } else {
                        int removed = 0;
                        int tSize = blocks[posA.x].size();
                        for (int i = posA.y; i < tSize; i++) {
                            blocks[posB.x].add(blocks[posA.x].remove(i - removed));
                            removed++;
                        }
                    }
                }
            }
            // System.out.println(Arrays.deepToString(blocks));
        }

        // System.out.println(Arrays.deepToString(blocks));

        for (int i = 0; i < size; i++) {
            System.out.print(i + ":");
            int tSize = blocks[i].size();
            for (int j = 0; j < tSize; j++) {
                System.out.print(" " + blocks[i].remove(0));
            }
            System.out.println();
        }
        sc.close();
    }

    // return every block stacked above position posA to its original pile
    private static void moveBack(Point posA, ArrayList<Integer>[] blocks) {
        int removed = 0;
        int tSize = blocks[posA.x].size();
        for (int i = posA.y + 1; i < tSize; i++) {
            int x = (Integer) blocks[posA.x].remove(i - removed);
            blocks[x].add(x);
            removed++;
        }
    }

    private static Point findBlock(int a, ArrayList<Integer>[] blocks) {
        for (int i = 0; i < blocks.length; i++) {
            for (int j = 0; j < blocks[i].size(); j++) {
                if ((Integer) blocks[i].get(j) == a) {
                    return new Point(i, j);
                }
            }
        }
        return null;
    }
}
```

# UVA 12356 – Army Buddies

So this question is interesting. I was updating too much and was getting TLE. Once a buddy dies, there are obviously no more queries about soldiers who are already gone. We can therefore safely update only the end points of every given query.
```
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.util.StringTokenizer;

/**
 *
 * @author Sanchit M. Bhatnagar
 * @see http://uhunt.felix-halim.net/id/74004
 *
 */
public class P12356 {

    public static void main(String[] args) throws NumberFormatException, IOException {
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        PrintWriter out = new PrintWriter(System.out);
        StringTokenizer st = null;

        String line = null;
        while ((line = br.readLine()) != null) {
            st = new StringTokenizer(line);
            int S = Integer.parseInt(st.nextToken());
            int B = Integer.parseInt(st.nextToken());
            if (S == 0 && B == 0)
                break;
            // doubly linked list of survivors; index 0 acts as the '*' sentinel
            int[] leftBuddy = new int[S + 2];
            int[] rightBuddy = new int[S + 2];
            for (int i = 1; i <= S; i++) {
                leftBuddy[i] = i - 1;
                rightBuddy[i] = i + 1;
            }
            rightBuddy[S] = 0;
            for (int i = 0; i < B; i++) {
                st = new StringTokenizer(br.readLine());
                int l = Integer.parseInt(st.nextToken());
                int r = Integer.parseInt(st.nextToken());
                if (leftBuddy[l] == 0)
                    out.print("* ");
                else
                    out.print(leftBuddy[l] + " ");
                if (rightBuddy[r] == 0)
                    out.println("*");
                else
                    out.println(rightBuddy[r]);
                // splice the dead segment [l, r] out of the list
                leftBuddy[rightBuddy[r]] = leftBuddy[l];
                rightBuddy[leftBuddy[l]] = rightBuddy[r];
            }
            out.println("-");
        }
        out.close();
        br.close();
    }
}
```
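As a tiny made-up check: with S = 5 soldiers and a single report (2, 4), the left buddy of soldier 2 is 1 and the right buddy of soldier 4 is 5, so the program prints "1 5" and then links soldiers 1 and 5 together for any later reports.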
https://arxiv.org/abs/2001.05374
"# Title:Primal and dual algorithms for the minimum covering Euclidean ball of a set of Euclidean balls in $\\mathbb{R}^n$\n\nAbstract: Primal and dual algorithms are developed for solving the $n$-dimensional convex optimization problem of finding the Euclidean ball of minimum radius that covers $m$ given Euclidean balls, each with a given center and radius. Each algorithm is based on a directional search method in which a search path may be a ray or a two-dimensional conic section in $\\mathbb{R}^n$. At each iteration, a search path is constructed by the intersection of bisectors of pairs of points, where the bisectors are either hyperplanes or $n$-dimensional hyperboloids. The optimal step size along each search path is determined explicitly.\n Comments: 36 pages, 1 figure Subjects: Optimization and Control (math.OC) Cite as: arXiv:2001.05374 [math.OC] (or arXiv:2001.05374v1 [math.OC] for this version)\n\n## Submission history\n\nFrom: P. M. Dearing [view email]\n[v1] Wed, 15 Jan 2020 15:29:36 UTC (96 KB)"
https://assignmentsbag.com/kinetic-theory-vbqs-class-11-physics/
"# Kinetic Theory VBQs Class 11 Physics\n\nVBQs Kinetic Theory Class 11 Physics with solutions has been provided below for standard students. We have provided chapter wise VBQ for Class 11 Physics with solutions. The following Kinetic Theory Class 11 Physics value based questions with answers will come in your exams. Students should understand the concepts and learn the solved cased based VBQs provided below. This will help you to get better marks in class 11 examinations.\n\n## Kinetic Theory VBQs Class 11 Physics\n\nQuestion. In Maxwell speed distribution curve v1 represents\n\n(a) r.m.s. speed\n(b) Average speed\n(c) Most probable speed\n(d) Average velocity\n\nC\n\nQuestion. If ratio of density ρ and pressure P of an ideal gas is x, then the root mean square speed of gas molecules is\n(a) √3x\n(b) √3/x\n(c) √3x2\n(d) √3/x2\n\nB\n\nQuestion. If the speed of sound in a gas is v and the rms velocity of the gas molecule is vrms, then the ratio of v/vrms =\n\nD\n\nQuestion. The pressure and density of two di-atomic mixture of gases ( γ = 7/5) change adiabatically from (P, ρ) to (P′, ρ′ ). If P/P’ = 128 the value of ρ/ρ’ is equal to\n(a) 16\n(b) 32\n(c) 64\n(d) 128\n\nB\n\nQuestion. n moles of ideal gas is heated at constant pressure from 50°C to 100°C, the increase in internal energy of the gas is\n\nA\n\nQuestion. During an experiment an ideal gas obeys an additional law P2V = constant. The initial temperature and volume of the gas are T and V respectively. If it expands to a volume 2V, then its temperature will be\n(a) 2T\n(b) √3T\n(c) √2T\n(d) T\n\nB\n\nQuestion. An insulated box containing 1 mole O3 gas of mass M moving with velocity v0 and suddenly stopped.\nFind the increase in temperature as a result of stopping the box\n\nD\n\nQuestion. The specific heat of a diatomic gas undergoing the process P2 = V5 is\n(a) 7/2R\n(b) 31 R/4\n(c) 39R/14\n(d) 10 R/14\n\nA\n\nQuestion. A diatomic gas is at very high temperature T such that it possesses translatory, rotational as well as vibrational motion. The energy associated with each molecule due to their vibration is (k = Boltzman constant)\n(a) kT\n(b) kT/2\n(c) 2kT\n(d) kT/4\n\nA\n\nQuestion. A mixture of ideal gases has 2 moles of He, 4 moles, of oxygen and 1 mole of ozone at absolute temperature T. The internal energy of mixture is\n(a) 13RT\n(b) 11RT\n(c) 16RT\n(d) 14RT\n\nC\n\nQuestion. The rms speed of gas molecules of molecular weight M at temperature T is given by\n\nD\n\nQuestion. If N1 and N2 are the number of air molecules in an open room in peak winter and peak summer respectively, then\n(a) N1 = N2\n(b) N1 < N2\n(c) N1 > N2\n(d) N1 > 2N2\n\nC\n\nQuestion. If pressure of a gas is increased at constant temperature by 2%, then the rms velocity of the gas will\n(a) Increase by 2%\n(b) Increase by 1%\n(c) Not change\n(d) Decrease by 1%\n\nC\n\nQuestion. The mean or average speed of gas molecules of a gas having molar mass M at absolute temperature T is given by\n(a) √(3RT/M)\n(b) √(38RT/πM)\n(c) √(2RT/M)\n(d) √(8RT/M)\n\nB\n\nQuestion. A vessel contains a mixture of oxygen gas and hydrogen gas. The average kinetic energy of a H2 molecule is K1 and that of O2 molecule is K2, then the ratio K1/K2 is equal to (the temperature in the vessel is uniform)\n(a) 1 : 16\n(b) 1 : 8\n(c) 1 : 4\n(d) 1 : 1\n\nD\n\nQuestion. The mean free path for a gas is equal to (n is the number density and d is the diameter of a molecule of the gas)\n\nB\n\nQuestion. 
Question. If pressure, absolute temperature and the Boltzmann constant for a gas are P, T and K respectively, then the mean free path of the gas molecules of diameter d is
Answer: A

Question. Four particles have speeds 2 cm/s, 3 cm/s, 4 cm/s and 5 cm/s respectively. Their rms speed is
(a) 3.5 cm/s
(b) √54 cm/s
(c) 27/2 cm/s
(d) (√54)/2 cm/s
Answer: D

Question. Four moles of O2 gas, two moles of Argon gas and one mole of water vapour are mixed. The molar heat capacity at constant pressure of the mixture is
(a) 16R/7
(b) 7R/16
(c) R
(d) 23R/7
Answer: D

Question. Two gases of the same amount are under different pressures and volumes. The graph of their total kinetic energy (K) versus volume (V) is as shown in the figure. Then
(a) P1 > P2
(b) P1 < P2
(c) P1 = P2
(d) Cannot be calculated
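A quick check of the four-particle rms value: vrms = √((2² + 3² + 4² + 5²)/4) = √(54/4) = (√54)/2 ≈ 3.67 cm/s, which matches option (d).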
https://research.web3.foundation/en/latest/polkadot/BABE/sortition/
"# Cryptographic sortition for constant-time block production¶\n\nWe like Ouroboros Praos style block production algorithms because anonymous block production helps prevent censorship. We dislike the numerous empty block production slots and erratic block times intrinsic to Ouroboros Praos though because they constrains our security analysis and create problems whenever block production requires extensive computational work.\n\nWe thus want block production algorithm that is constant-time in the sense that it assigns all block production slots before an epoch starts, but that also keeps block production slots anonymous. We shall outline roughly the solution categories below.\n\nWe first quickly address sortition schemes with a bespoke, but often near perfect, anonymity layer built using cryptographic shuffles. Almost all other schemes require pre-announces that reveal each block slot's owner to one random other block producer.\n\nThere are important advantages to schemes in which each slot has an associated public key to which users can encrypt their transactions, because this strengthens privacy schemes like QuisQuis and Grin aka MimbleWimble.\n\n## Shuffles¶\n\nA cryptographic shuffle has one node reorder and mutate entries, normally ciphertexts or public keys, so that only the node knows the resulting ordering. Among all schemes listed here, shuffles are unique in that they operate an on-chain anonymity network, which naturally avoids pre-announces, and initially provides strong anonymity for block producers.\n\nA priori, shuffles require on-chain bandwidth of order hops count times list size. We achieve the strongest anonymity when both the list is several repetitions of the full block producer list and then every block producer applies a hop, making the hops count is also the block producer set size, but this sounds excessive.\n\nWe should shuffle more slots than required, but consume them only partially because otherwise anonymity would degrade throughout an epoch as exhaust our candidates. Any shuffle invokes storage operations linear in the shuffled set size, so we improve performance if validators shuffle less than our full list, and this costs us less confidence if all validators do some such partial shuffle.\n\nWe expect this last requirement for repeated partial shuffles favours ElGammal ciphertexts, although nice shuffles for curve points exists too. In essence, any shuffle needs \"guide points\" that pass through the same cryptographic operations as the shuffled public keys, but if lists do not change then only one guide point is required, while ElGammal-like scheme attaches a guide point to every public key, which works well if combining many partial shuffles (see universal re-encryption).\n\n### Verifiable shuffles¶\n\nA verifiable shuffle produces a cryptographic proof that the shuffle happened correctly. Andrew Neff's verifiable shuffle from A Verifiable Secret Shuffle and its Application to E-Voting costs $8 k + 5$ scalar multiplications where $k$ denotes the number of validators shuffled (see also implementations).\n\nTODO: Review more recent modern verifiable shuffle literature\n\n### Accountable shuffles¶\n\nWe might reduce costs with a simple non-verifiable shuffle for validator public keys that becomes accountable thanks to slashing:\n\nWe ask that initially all $k$ validators have their public keys $V_i = v_i G$ registered on-chain. We also ask that validators register some keys to be shuffled $S_j = s_j G$ on-chain. 
We shuffle lists of points $L$, which initially consist of some $S_j$ not appearing in other lists, along with guide point(s), whether one $P_j$ per $S_j$ or one distinguished guide point $P$ overall, which we initially take to be $G$. Importantly, we avoid needing a VRF that outputs a public-private key pair by shuffling these points instead of $V_i$.

In each shuffle step, our $i$th validator multiplies all points in the list $L$, and the associated guide point $P$, by its shuffle key $s_i$, and produces a DLEQ proof that $\sum L'$ and $P'$ were correctly multiplied by the same secret scalar from the inputs $\sum L$ and $P$. (A DLEQ proof demonstrates that two pairs of points share one discrete logarithm without revealing it.) Any validator can find itself in $L'$ by computing $s_j P'$, which ultimately tells it when to produce the block.

At this point, if $i$ has not performed the shuffle correctly, then an omitted validator can prove this by producing a DLEQ proof for the $s_j P'$ that does not exist in the list $L'$, resulting in $i$ being slashed. There is significant on-chain logic involved in orchestrating these shuffles, but at least the slashing logic appears simple, because all behavior is deterministic after declaring the $S_i$.

We give a rough cost estimate:

Initially the first $k$ block producers permute a batch of 128 $S_i$ selected randomly by VRF, so that $128k$ provides enough candidate block producers. Second, we have 7-ish additional block producers further permute each of these lists. At this point, each batch of 128 block producers has cost us slightly more than 64kb on-chain, so $k/8$ mb in total, but only 4kb per block.

We next create new batches of 128 points pulled randomly from all $k$ output lists and rerun the shuffle algorithm, but now all $k$ guide points must appear with each shuffle. If we repeat this $l$ times then we have $k^l$ guide points on-chain. We could reduce this to $2^l$ with more staggered mixing.

We could reduce the amount on-chain by sending the intermediate lists directly between block producers, and our challenge protocol could unwind through several levels, but actually doing this invites its own censorship issues.

TODO: Replace with ElGamal version.

## Cryptographic pre-announcements

All remaining schemes operate via some anonymous pre-announcement phase, after which we determine the block production slot assignment by sorting the pre-announcements.

We must constrain valid pre-announcements so that malicious validators cannot create almost empty epochs by spamming fake pre-announcements, while preserving anonymity for block producers. We outline several fixes below, both cryptographic and softer economic ones.

As a rule, we accept the anonymity lost by revealing our block production slot to one other validator. Yet almost all these schemes could achieve stronger anonymity by forwarding messages an extra hop, or perhaps by coupling with shuffles.

### Ring VRFs

A ring VRF operates like a VRF but only proves its key comes from a specific list without giving any information about which specific key. Any ring VRF yields sortition:

In a pre-announce phase, all block producers anonymously publish ring VRF outputs, which either requires revealing their identity to another block producer, or else requires a multi-hop anonymity network. We then sort these ring VRF outputs, and block producers claim them when making blocks.

There is no slashing when using ring VRFs, because we check all ring VRF proofs' correctness when placing them on-chain.
We expect this pure ring VRF solution to provide the most orthogonality with the most reusable components, due to the on-chain logic being quite simple and the cryptography sounding useful elsewhere.

Any naive ring VRF construction has a signature size linear in the number of block producers, meaning they scale worse than well optimised accountable shuffles.

There are also constant-size ring VRFs built using zkSNARKs, however, à la https://ethresear.ch/t/cryptographic-sortition-possible-solution-with-zk-snark/5102 In principle, these constructions should work with 10k to 20k constraints for 10,000 validators. (Do you agree Sergey?)

There are also ring signature constructions that do not require pairings and require only logarithmic size, like [One-out-of-Many Proofs: Or How to Leak a Secret and Spend a Coin](https://eprint.iacr.org/2014/764) by Jens Groth and Markulf Kohlweiss. In that scheme, ring signatures need about 32·7 bytes times the logarithm of the number of validators, so under 3kb for 10,000 validators.

We expect a ring VRF could be defined using these techniques, likely leveraging proof circuits implemented in the dalek bulletproofs crate, which might prove more efficient by some constant factor. We also note that ring VRFs are linkable ring signatures, so some existing linkable ring signature implementations may already provide ring VRFs.

At these sizes, we expect the non-pairing based technique proves fairly competitive with zkSNARKs, but verification still requires multiplying all the public keys. At 10k validators, a zkSNARK would cost the prover like 10k-20k scalar multiplications but provide faster verification, while the non-pairing based scheme might cost verifiers 10k scalar multiplications per slot. We thus judge the zkSNARK scheme more efficient, especially since weak hash functions like MIMC or ??? suffice for the Merkle tree.

### Group VRFs

A group signature also hides the specific signer, like a ring signature, but group signatures require initial setup via some issuer or MPC.

We build a group VRF similarly to a group signature by using the rerandomizable signature scheme from Short Randomizable Signatures by David Pointcheval and Olivier Sanders as a blind rerandomizable certificate. Our issuer would first blind sign each validator's private key, like in section 6.1, so that later each validator can prove correctness of their VRF output, replacing the proof of knowledge from section 6.2 there. We believe this final step resembles the schnorrkel VRF except with the public key replaced by the signature inputs, but run on the pairing based curve.

In this, we must take care with our pairing assumptions, because we lose anonymity if $x H$ with $H$ known ever appears on the curve not used for the VRF output.

We expect group VRFs only require a few curve points, and verification only requires two pairings and a few scalar multiplications, making them far smaller and faster than ring VRFs. We require an issuer however, which dramatically complicates the protocol:

We'd want at least two thirds, but preferably all, of validators to be aggregate certificate issuers, meaning they have certified the blinded public keys of all validators and we aggregate all certificates and public keys.
We might achieve this with an MPC, but doing so requires choosing when issuers issue certificates.

In other words, all new prospective validators certify all previously added prospective validators, and all previously added prospective validators must eventually recertify all more recently added prospective validators. In this way, any spammed slots originate from prospective but not actual validators, probably along with an actual validator who posts them.

We are therefore tempted to run this MPC after election but before establishing the final validator list, but this complicates protocol layering unacceptably. We could look into group signatures with "verifier local revocation", but these get much more complicated if I remember correctly.

## Non-cryptographic pre-announces

We could pre-announce VRF outputs without employing any ring or group VRF construction, provided we can tolerate some false VRF outputs being spammed on-chain. We discuss strategies to limit this spam below.

In general, there are several approaches that work with smaller numbers of slots and validators, but we shall discuss only schemes tweaked for numerous validators.

### Secondary randomness

We limit the damage caused by spamming pre-announces by resorting the pre-announces using randomness created only after their publication. Let $f$ denote the identity map if using a non-pairing based scheme, or a hash function if using a pairing based VRF.

We divide epoch $e+0$ into three phases: In the first phase, any block producer $V = v G$ creates a limited number of VRF outputs $$(\omega_{V,e,i},\pi_{V,e,i}) := VRF_v(r_e || i) \quad \textrm{for } i < N.$$ If $H(\omega_{V,e,i} || \textrm{"IF"}) < c$ then they send $(\omega_{V,e,i},\pi_{V,e,i})$ to another block producer $V'$ identified by $H(\omega_{V,e,i} || \textrm{"WHO"})$ taken modulo the number of block producers.

In the second phase, $V'$ publishes at most $N'$ such values $f(\omega_{V,e,i})$.

In the third phase, if $V'$ did not publish $f(\omega_{V,e,i})$, then $V$ may publish $f(\omega_{V,e,i})$ itself.

At the end of epoch $e+0$, we sort the $H(r_{e+1} || f(\omega_{V,e,i}))$ and declare the first $N''$ of these the block production slot allocations for epoch $e+1$.

In epoch $e+1$, block producers claim their slots by revealing their $(\omega_{V,e,i},\pi_{V,e,i})$.

If $f(\omega_{V,e,i})$ is a curve point, then anyone may encrypt transactions to the block producer $V$ without knowing $V$'s identity by using $f(\omega_{V,e,i})$ as $V$'s public key, and then sending the ciphertext to $V'$.

In epoch $e+2$, we let $\Omega_{e+2}$ denote all $\omega_{V,e,i}$ revealed either in block production in epoch $e+1$ or else in a non-anonymous pre-announce in epoch $e+0$ phase three. We now define $r_{e+2}$ by hashing $r_{e+1}$ together with all points in $\Omega_{e+2}$. In this way, you could only alter $r_{e+2}$ by not making your own block, not by attacking a non-anonymous pre-announce.
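A minimal sketch of the sorting step at the end of epoch $e+0$, in C++. Everything here is illustrative: the `PreAnnounce` struct, `hash64`, and the use of `std::hash` as a stand-in for the protocol's cryptographic hash $H$ are assumptions made for the sketch, not part of any real implementation.

```
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

struct PreAnnounce {
    std::string f_omega;  // f(omega_{V,e,i}) as published on-chain
};

// Stand-in for H; a real implementation would use a cryptographic hash.
uint64_t hash64(const std::string& s) {
    return std::hash<std::string>{}(s);
}

// Sort pre-announces by H(r_{e+1} || f(omega)) and keep the first N''.
std::vector<PreAnnounce> assignSlots(std::vector<PreAnnounce> pre,
                                     const std::string& r_next,  // r_{e+1}
                                     std::size_t n_slots) {      // N''
    std::sort(pre.begin(), pre.end(),
              [&](const PreAnnounce& a, const PreAnnounce& b) {
                  return hash64(r_next + a.f_omega) <
                         hash64(r_next + b.f_omega);
              });
    if (pre.size() > n_slots)
        pre.resize(n_slots);  // only the first N'' win slots
    return pre;
}
```

The property this illustrates is that the ordering depends on $r_{e+1}$, which does not exist yet when the pre-announces are published, so spamming fake pre-announces cannot target specific slots.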
http://fastcpp.blogspot.com/2011/03/loading-3d-vector-into-sse-register.html
"## Saturday, March 26, 2011\n\nIn this blog entry I will show you how to load a three element float vector into an SSE register using `C++` intrinsics. This tutorial is based on the `float3` datatype which holds three `float`s (see Common Datatypes). An SSE register is able to store four `float` values and there are methods to load one, two or all four values - but not three. Therefore we need to split the load into two parts and combine them. First, we use plain SSE code:\n```inline __m128 LoadFloat3(const float3& value) {\n// load the x and y element of the float3 vector using a 64 bit load\n// and set the upper 64 bits to zero (00YX)\n__m128 xy = _mm_loadl_pi(_mm_setzero_ps(), (const __m64*)&value);\n\n// now load the z element using a 32 bit load (000Z)\n\n// finally, combine both by moving the z component to the high part\n// of the register, while keeping x and y components in the low part\nreturn _mm_movelh_ps(xy, z); // (0ZYX)\n}\n```\nNote that we need to pass an additional value to `_mm_loadl_pi` in order to define the high part of the result. We pass the value zero that we need to generate using an additional intrinsic `_mm_setzero_ps`. We can overcome this overhead by using an SSE2 intrinsic which automatically sets the high part of the loaded register to zero: `_mm_loadl_epi64`. This instruction is intended for loading one 64bit integer, but since we just load arbitrary binary data into the register this does not matter for our application (except that we need to tell the compiler that we are actually dealing with floats by casting the register later on).\n```inline __m128 LoadFloat3(const float3& value) {\n\n// now load the z element using a 32 bit float load (000Z)\n\n// we now need to cast the __m128i register into a __m128 one (0ZYX)\nreturn _mm_movelh_ps(_mm_castsi128_ps(xy), z);\n}\n```\nThe compiler now generates only three assembly instructions from the SSE2 code (`MOVQ, MOVSS, MOVLHPS`) when loading a single `float3` value. However, a 64bit `MOVQ` load is slow when the address of the data is not 8-byte aligned. Therefore, if you cannot guarantee the address alignment it is generally faster to use three 32bit loads which require only 4-byte alignment (most compilers will ensure this alignment for `float` values or structs containing floats).\n```inline __m128 LoadFloat3(const float3& value)\n{\n__m128 xy = _mm_movelh_ps(x, y);\nreturn _mm_shuffle_ps(xy, z, _MM_SHUFFLE(2, 0, 2, 0));\n}\n```\nWhen loading an array of `float3` values that are stored in consecutive memory, it is possible to optimize the load operations by loading 6 or even 12 floats in a batch. This topic will be handled in a future post.\n\n1.",
Comments:

1. "why not `__m128 vec = _mm_setr_ps(value.x, value.y, value.z, 0.0f);` or is that slow?"

   Reply: "I don't know if it's slower or not; basically it does the same as the code above but uses four loads (the 0 must be set). `_mm_setr_ps` is simply a compiler macro, which maps to multiple instructions. However, it is often more convenient to use `_mm_setr_ps` (or `_mm_set_ps`) since it is only a one-liner."

2. "Without the 0: `__m128 vec = _mm_setr_ps(value.x, value.y, value.z, value.z);` Thanks for blog!!!"
http://openbookproject.net/thinkcs/python/english2e/ch13.html
"13. Classes and objects¶\n\n13.1. Object-oriented programming¶\n\nPython is an object-oriented programming language, which means that it provides features that support object-oriented programming ( OOP).\n\nObject-oriented programming has its roots in the 1960s, but it wasn’t until the mid 1980s that it became the main programming paradigm used in the creation of new software. It was developed as a way to handle the rapidly increasing size and complexity of software systems, and to make it easier to modify these large and complex systems over time.\n\nUp to now we have been writing programs using a procedural programming paradigm. In procedural programming the focus is on writing functions or procedures which operate on data. In object-oriented programming the focus is on the creation of objects which contain both data and functionality together.\n\n13.2. User-defined compound types¶\n\nA class in essence defines a new data type. We have been using several of Python’s built-in types throughout this book, we are now ready to create our own user-defined type: the Point.\n\nConsider the concept of a mathematical point. In two dimensions, a point is two numbers (coordinates) that are treated collectively as a single object. In mathematical notation, points are often written in parentheses with a comma separating the coordinates. For example, (0, 0) represents the origin, and (x, y) represents the point x units to the right and y units up from the origin.\n\nA natural way to represent a point in Python is with two numeric values. The question, then, is how to group these two values into a compound object. The quick and dirty solution is to use a list or tuple, and for some applications that might be the best choice.\n\nAn alternative is to define a new user-defined compound type, also called a class. This approach involves a bit more effort, but it has advantages that will be apparent soon.\n\nA class definition looks like this:\n\nclass Point:\npass\n\nClass definitions can appear anywhere in a program, but they are usually near the beginning (after the import statements). The syntax rules for a class definition are the same as for other compound statements. There is a header which begins with the keyword, class, followed by the name of the class, and ending with a colon.\n\nThis definition creates a new class called Point. The pass statement has no effect; it is only necessary because a compound statement must have something in its body. A docstring could serve the same purpose:\n\nclass Point:\n\"Point class for storing mathematical points.\"\n\nBy creating the Point class, we created a new type, also called Point. The members of this type are called instances of the type or objects. Creating a new instance is called instantiation, and is accomplished by calling the class. Classes, like functions, are callable, and we instantiate a Point object by calling the Point class:\n\n>>> type(Point)\n<type 'classobj'>\n>>> p = Point()\n>>> type(p)\n<type 'instance'>\n\nThe variable p is assigned a reference to a new Point object.\n\nIt may be helpful to think of a class as a factory for making objects, so our Point class is a factory for making points. The class itself isn’t an instance of a point, but it contains the machinary to make point instances.\n\n13.3. Attributes¶\n\nLike real world objects, object instances have both form and function. 
The form consists of data elements contained within the instance.

We can add new data elements to an instance using dot notation:

>>> p.x = 3
>>> p.y = 4

This syntax is similar to the syntax for selecting a variable from a module, such as math.pi or string.uppercase. Both modules and instances create their own namespaces, and the syntax for accessing names contained in each, called attributes, is the same. In this case the attribute we are selecting is a data item from an instance.

The following state diagram shows the result of these assignments:
(Figure: state diagram showing p referring to a Point object with attributes x and y.)
"The variable p refers to a Point object, which contains two attributes. Each attribute refers to a number.\n\nWe can read the value of an attribute using the same syntax:\n\n>>> print p.y\n4\n>>> x = p.x\n>>> print x\n3\n\nThe expression p.x means, “Go to the object p refers to and get the value of x”. In this case, we assign that value to a variable named x. There is no conflict between the variable x and the attribute x. The purpose of dot notation is to identify which variable you are referring to unambiguously.\n\nYou can use dot notation as part of any expression, so the following statements are legal:\n\nprint '(%d, %d)' % (p.x, p.y)\ndistance_squared = p.x * p.x + p.y * p.y\n\nThe first line outputs (3, 4); the second line calculates the value 25.\n\n13.4. The initialization method and self¶\n\nSince our Point class is intended to represent two dimensional mathematical points, all point instances ought to have x and y attributes, but that is not yet so with our Point objects.\n\n>>> p2 = Point()\n>>> p2.x\nTraceback (most recent call last):\nFile \"<stdin>\", line 1, in ?\nAttributeError: Point instance has no attribute 'x'\n>>>\n\nTo solve this problem we add an initialization method to our class.\n\nclass Point:\ndef __init__(self, x=0, y=0):\nself.x = x\nself.y = y\n\nA method behaves like a function but it is part of an object. Like a data attribute it is accessed using dot notation. The initialization method is called automatically when the class is called.\n\nLet’s add another method, distance_from_origin, to see better how methods work:\n\nclass Point:\ndef __init__(self, x=0, y=0):\nself.x = x\nself.y = y\n\ndef distance_from_origin(self):\nreturn ((self.x ** 2) + (self.y ** 2)) ** 0.5\n\nLet’s create a few point instances, look at their attributes, and call our new method on them:\n\n>>> p = Point(3, 4)\n>>> p.x\n3\n>>> p.y\n4\n>>> p.distance_from_origin()\n5.0\n>>> q = Point(5, 12)\n>>> q.x\n5\n>>> q.y\n12\n>>> q.distance_from_origin()\n13.0\n>>> r = Point()\n>>> r.x\n0\n>>> r.y\n0\n>>> r.distance_from_origin()\n0.0\n\nWhen defining a method, the first parameter refers to the instance being created. It is customary to name this parameter self. In the example session above, the self parameter refers to the instances p, q, and r respectively.\n\n13.5. Instances as parameters¶\n\nYou can pass an instance as a parameter to a function in the usual way. For example:\n\ndef print_point(p):\nprint '(%s, %s)' % (str(p.x), str(p.y))\n\nprint_point takes a point as an argument and displays it in the standard format. If you call print_point(p) with point p as defined previously, the output is (3, 4).\n\n13.6. Glossary¶\n\nclass\nA user-defined compound type. A class can also be thought of as a template for the objects that are instances of it.\ninstantiate\nTo create an instance of a class.\ninstance\nAn object that belongs to a class.\nobject\nA compound data type that is often used to model a thing or concept in the real world.\nattribute\nOne of the named data items that makes up an instance.\n\n13.7. Exercises¶\n\n1. Create and print a Point object, and then use id to print the object’s unique identifier. Translate the hexadecimal form into decimal and confirm that they match.\n2. Rewrite the distance function from chapter 5 so that it takes two Points as parameters instead of four numbers."
https://www.includehelp.com/embedded-system/data-memory-addressing-mode-in-8086.aspx
"# Data Memory Addressing Mode in 8086\n\nSubmitted by Monika Sharma, on July 19, 2019\n\nIn this type of addressing mode, first the offset address is calculated, then the memory address is calculated and then the operand form that memory location is fetched. There are following modes which lie under the Data Addressing Mode:\n\n6. Base plus Index Addressing Mode\n7. Base relative plus Index Addressing Mode\n\nIn this addressing mode, the offset is specified within the instructions. What this means is that the offset address is directly stored within square brackets and is not present inside any register.\n\nExample:\n\n``` MOV AL, [4000H]\nMOV [1234H], BX\n```\n\nIn this addressing mode, the offset address for any operand is stored in the base register BX.\n\nExample:\n\n``` MOV AL, [BX]\n```\n\n### 3) Base Relative Addressing Mode\n\nIn this addressing mode also, the offset address is stored within the Base register but the difference is that there is some displacement present with it. This displacement can be either of 8 bits or 16 bits. Hence, the offset address will be equal to the contents of the base register + 8/16 bit displacement.\n\nExample:\n\n``` MOV AL, [BX + 05H]\t{here, displacement is of 8 bits}\nMOV AL, [BX+1243H] \t{here, displacement is of 16 bits}\n```\n\nIn this addressing mode, the offset address is defined in the Index Register. (It should be noted here that the Index registers act as an offset for Data Segment as well.) So, the memory location of the operand is calculated with the help of DS and SI.\n\nExample:\n\n``` MOV BL, [SI]\nMOV [SI], DH\n```\n\n### 5) Index relative addressing mode\n\nIn this addressing mode, the offset address is equal to the content of index register plus the 8 or 16-bit displacement. It is important to note here that the displacement in all relative addressing modes is a signed number, i.e. the displacement value can either be a positive or a negative hexadecimal number.\n\nExample:\n\n``` MOV BL, [SI + 07H]\t\t{Here, the displacement is of 8 bits}\nMOV BL, [SI – 3034H] \t{Here, the displacement is of 16 bits}\n```\n\n### 6) Base plus Index Addressing Mode\n\nIn this addressing Mode, the offset address is calculated by both the base register and the index register. Hence, the offset address will be equal to the content of the base register plus the content of the Index register.\n\nExample:\n\n``` MOV AL, [BX + SI]\nMOV [BX + SI], CL\n```\n\n### 7) Base relative plus Index Addressing Mode\n\nThis addressing mode is almost same to the Base plus Index Addressing mode, but like the other relative addressing modes, the difference is only that this mode has a displacement of 8 or 16 bits.\n\nExample:\n\n``` MOV CL, [BX + SI + 0AH] {here, the displacement is of 8 bits}\nMOV AL, [BX + SI + AE07H] {here, the displacement is of 16 bits}\n```"
https://cpluspluspedia.com/en/tutorial/488/std--string
"# std::string\n\n## Introduction\n\nStrings are objects that represent sequences of characters. The standard `string` class provides a simple, safe and versatile alternative to using explicit arrays of `char`s when dealing with text and other sequences of characters. The C++ `string` class is part of the `std` namespace and was standardized in 1998.\n\n## Syntax\n\n• // Empty string declaration\n\nstd::string s;\n\n• // Constructing from const char* (c-string)\n\nstd::string s(\"Hello\");\n\nstd::string s = \"Hello\";\n\n• // Constructing using copy constructor\n\nstd::string s1(\"Hello\");\n\nstd::string s2(s1);\n\n• // Constructing from substring\n\nstd::string s1(\"Hello\");\n\nstd::string s2(s1, 0, 4); // Copy 4 characters from position 0 of s1 into s2\n\n• // Constructing from a buffer of characters\n\nstd::string s1(\"Hello World\");\nstd::string s2(s1, 5); // Copy first 5 characters of s1 into s2\n\n• // Construct using fill constructor (char only)\n\nstd::string s(5, 'a'); // s contains aaaaa\n\n• // Construct using range constructor and iterator\n\nstd::string s1(\"Hello World\");\n\nstd::string s2(s1.begin(), s1.begin()+5); // Copy first 5 characters of s1 into s2\n\n## Remarks\n\nBefore using `std::string`, you should include the header `string`, as it includes functions/operators/overloads that other headers (for example `iostream`) do not include.\n\nUsing const char* constructor with a nullptr leads to undefined behavior.\n\n``````std::string oops(nullptr);\nstd::cout << oops << \"\\n\";\n``````\n\nThe method `at` throws an `std::out_of_range` exception if `index >= size()`.\n\nThe behavior of `operator[]` is a bit more complicated, in all cases it has undefined behavior if `index > size()`, but when `index == size()`:\n\nC++11\n1. On a non-const string, the behavior is undefined;\n2. On a const string, a reference to a character with value `CharT()` (the null character) is returned.\nC++11\n1. A reference to a character with value `CharT()` (the null character) is returned.\n2. Modifying this reference is undefined behavior.\n\nSince C++14, instead of using `\"foo\"`, it is recommended to use `\"foo\"s`, as `s` is a user-defined literal suffix, which converts the `const char*` `\"foo\"` to `std::string` `\"foo\"`.\n\nNote: you have to use the namespace `std::string_literals` or `std::literals` to get the literal `s`.\n\n## Accessing a character\n\nThere are several ways to extract characters from a `std::string` and each is subtly different.\n\n``````std::string str(\"Hello world!\");\n``````\n\n#### operator[](n)\n\nReturns a reference to the character at index n.\n\n`std::string::operator[]` is not bounds-checked and does not throw an exception. 
The caller is responsible for asserting that the index is within the range of the string:\n\n``````char c = str[6]; // 'w'\n``````\n\n#### at(n)\n\nReturns a reference to the character at index n.\n\n`std::string::at` is bounds checked, and will throw `std::out_of_range` if the index is not within the range of the string:\n\n``````char c = str.at(7); // 'o'\n``````\nC++11\n\nNote: Calling either of the following two functions on an empty string results in undefined behavior.\n\n#### front()\n\nReturns a reference to the first character:\n\n``````char c = str.front(); // 'H'\n``````\n\n#### back()\n\nReturns a reference to the last character:\n\n``````char c = str.back(); // '!'\n``````\n\n## Checking if a string is a prefix of another\n\nC++14\n\nIn C++14, this is easily done by `std::mismatch` which returns the first mismatching pair from two ranges:\n\n``````std::string prefix = \"foo\";\nstd::string string = \"foobar\";\n\nbool isPrefix = std::mismatch(prefix.begin(), prefix.end(),\nstring.begin(), string.end()).first == prefix.end();\n``````\n\nNote that a range-and-a-half version of `mismatch()` existed prior to C++14, but this is unsafe in the case that the second string is the shorter of the two.\n\nPrior to C++14\n\nWe can still use the range-and-a-half version of `std::mismatch()`, but we need to first check that the first string is at most as big as the second:\n\n``````bool isPrefix = prefix.size() <= string.size() &&\nstd::mismatch(prefix.begin(), prefix.end(),\nstring.begin(), string.end()).first == prefix.end();\n``````\nC++17\n\nWith `std::string_view`, we can write the direct comparison we want without having to worry about allocation overhead or making copies:\n\n``````bool isPrefix(std::string_view prefix, std::string_view full)\n{\nreturn prefix == full.substr(0, prefix.size());\n}\n``````\n\n## Concatenation\n\nYou can concatenate `std::string`s using the overloaded `+` and `+=` operators. Using the `+` operator:\n\n``````std::string hello = \"Hello\";\nstd::string world = \"world\";\nstd::string helloworld = hello + world; // \"Helloworld\"\n``````\n\nUsing the `+=` operator:\n\n``````std::string hello = \"Hello\";\nstd::string world = \"world\";\nhello += world; // \"Helloworld\"\n``````\n\nYou can also append C strings, including string literals:\n\n``````std::string hello = \"Hello\";\nstd::string world = \"world\";\nconst char *comma = \", \";\nstd::string newhelloworld = hello + comma + world + \"!\"; // \"Hello, world!\"\n``````\n\nYou can also use `push_back()` to push back individual `char`s:\n\n``````std::string s = \"a, b, \";\ns.push_back('c'); // \"a, b, c\"\n``````\n\nThere is also `append()`, which is pretty much like `+=`:\n\n``````std::string app = \"test and \";\napp.append(\"test\"); // \"test and test\"\n``````\n\n## Conversion to (const) char*\n\nIn order to get `const char*` access to the data of a `std::string` you can use the string's `c_str()` member function. Keep in mind that the pointer is only valid as long as the `std::string` object is within scope and remains unchanged; that means that only `const` methods may be called on the object.\n\nC++17\n\nThe `data()` member function can be used to obtain a modifiable `char*`, which can be used to manipulate the `std::string` object's data.\n\nC++11\n\nA modifiable `char*` can also be obtained by taking the address of the first character: `&s[0]`. Since C++11, this is guaranteed to yield a well-formed, null-terminated string. 
Note that `&s[0]` is well-formed even if `s` is empty, whereas `&s.front()` is undefined if `s` is empty.\n\nC++11\n``````std::string str(\"This is a string.\");\nconst char* cstr = str.c_str(); // cstr points to: \"This is a string.\\0\"\nconst char* data = str.data(); // data points to: \"This is a string.\\0\"\n``````\n``````std::string str(\"This is a string.\");\n\n// Copy the contents of str to untie lifetime from the std::string object\nstd::unique_ptr<char []> cstr = std::make_unique<char[]>(str.size() + 1);\n\n// Alternative to the line above (no exception safety):\n// char* cstr_unsafe = new char[str.size() + 1];\n\nstd::copy(str.data(), str.data() + str.size(), cstr.get());\ncstr[str.size()] = '\\0'; // A null-terminator needs to be added\n\n// delete[] cstr_unsafe;\nstd::cout << cstr.get();\n``````\n\n## Conversion to integers/floating point types\n\nA `std::string` containing a number can be converted into an integer type, or a floating point type, using conversion functions.\n\nNote that all of these functions stop parsing the input string as soon as they encounter a non-numeric character, so `\"123abc\"` will be converted into `123`.\n\nThe `std::ato*` family of functions converts C-style strings (character arrays) to integer or floating-point types:\n\n``````std::string ten = \"10\";\n\ndouble num1 = std::atof(ten.c_str());\nint num2 = std::atoi(ten.c_str());\nlong num3 = std::atol(ten.c_str());\n``````\nC++11\n``````long long num4 = std::atoll(ten.c_str());\n``````\n\nHowever, use of these functions is discouraged because they return `0` if they fail to parse the string. This is bad because `0` could also be a valid result, if for example the input string was \"0\", so it is impossible to determine if the conversion actually failed.\n\nThe newer `std::sto*` family of functions convert `std::string`s to integer or floating-point types, and throw exceptions if they could not parse their input. You should use these functions if possible (a short error-handling sketch appears at the end of this tutorial):\n\nC++11\n``````std::string ten = \"10\";\n\nint num1 = std::stoi(ten);\nlong num2 = std::stol(ten);\nlong long num3 = std::stoll(ten);\n\nfloat num4 = std::stof(ten);\ndouble num5 = std::stod(ten);\nlong double num6 = std::stold(ten);\n``````\n\nFurthermore, these functions also handle octal and hex strings unlike the `std::ato*` family. The second parameter is a pointer to a `size_t` that receives the index of the first unconverted character in the input string (not illustrated here), and the third parameter is the base to use. `0` is automatic detection of octal (starting with `0`) and hex (starting with `0x` or `0X`), and any other value is the base to use.\n\n``````std::string ten = \"10\";\nstd::string ten_octal = \"12\";\nstd::string ten_hex = \"0xA\";\n\nint num1 = std::stoi(ten, 0, 2); // Returns 2\nint num2 = std::stoi(ten_octal, 0, 8); // Returns 10\nlong num3 = std::stol(ten_hex, 0, 16); // Returns 10\nlong num4 = std::stol(ten_hex); // Returns 0\nlong num5 = std::stol(ten_hex, 0, 0); // Returns 10 as it detects the leading 0x\n``````\n\n## Conversion to std::wstring\n\nIn C++, sequences of characters are represented by specializing the `std::basic_string` class with a native character type. 
The two major collections defined by the standard library are `std::string` and `std::wstring`:\n\n• `std::string` is built with elements of type `char`\n\n• `std::wstring` is built with elements of type `wchar_t`\n\nTo convert between the two types, use `wstring_convert`:\n\n``````#include <string>\n#include <codecvt>\n#include <locale>\n\nstd::string input_str = \"this is a -string-, which is a sequence based on the -char- type.\";\nstd::wstring input_wstr = L\"this is a -wide- string, which is based on the -wchar_t- type.\";\n\n// conversion\nstd::wstring str_turned_to_wstr = std::wstring_convert<std::codecvt_utf8<wchar_t>>().from_bytes(input_str);\n\nstd::string wstr_turned_to_str = std::wstring_convert<std::codecvt_utf8<wchar_t>>().to_bytes(input_wstr);\n``````\n\nIn order to improve usability and/or readability, you can define functions to perform the conversion:\n\n``````#include <string>\n#include <codecvt>\n#include <locale>\n\nusing convert_t = std::codecvt_utf8<wchar_t>;\nstd::wstring_convert<convert_t, wchar_t> strconverter;\n\nstd::string to_string(std::wstring wstr)\n{\nreturn strconverter.to_bytes(wstr);\n}\n\nstd::wstring to_wstring(std::string str)\n{\nreturn strconverter.from_bytes(str);\n}\n``````\n\nSample usage:\n\n``````std::wstring a_wide_string = to_wstring(\"Hello World!\");\n``````\n\nThat's certainly more readable than `std::wstring_convert<std::codecvt_utf8<wchar_t>>().from_bytes(\"Hello World!\")`.\n\nPlease note that `char` and `wchar_t` do not imply encoding, and give no indication of size in bytes. For instance, `wchar_t` is commonly implemented as a 2-byte data type and typically contains UTF-16 encoded data under Windows (or UCS-2 in versions prior to Windows 2000) and as a 4-byte data type encoded using UTF-32 under Linux. 
This is in contrast with the newer types `char16_t` and `char32_t`, which were introduced in C++11 and are guaranteed to be large enough to hold any UTF-16 or UTF-32 \"character\" (or more precisely, code point) respectively.\n\n## Converting between character encodings\n\nConverting between encodings is easy with C++11 and most compilers are able to deal with it in a cross-platform manner through the `<codecvt>` and `<locale>` headers.\n\n``````#include <iostream>\n#include <codecvt>\n#include <locale>\n#include <string>\nusing namespace std;\n\nint main() {\n// converts between wstring and utf8 string\nwstring_convert<codecvt_utf8_utf16<wchar_t>> wchar_to_utf8;\n// converts between u16string and utf8 string\nwstring_convert<codecvt_utf8_utf16<char16_t>, char16_t> utf16_to_utf8;\n\nwstring wstr = L\"foobar\";\nstring utf8str = wchar_to_utf8.to_bytes(wstr);\nwstring wstr2 = wchar_to_utf8.from_bytes(utf8str);\n\nwcout << wstr << endl;\ncout << utf8str << endl;\nwcout << wstr2 << endl;\n\nu16string u16str = u\"foobar\";\nstring utf8str2 = utf16_to_utf8.to_bytes(u16str);\nu16string u16str2 = utf16_to_utf8.from_bytes(utf8str2);\n\nreturn 0;\n}\n``````\n\nNote that Visual Studio 2015 provides support for these conversions, but a bug in its library implementation requires using a different template for `wstring_convert` when dealing with `char16_t`:\n\n``````using utf16_char = unsigned short;\nwstring_convert<codecvt_utf8_utf16<utf16_char>, utf16_char> conv_utf8_utf16;\n\nvoid strings::utf16_to_utf8(const std::u16string& utf16, std::string& utf8)\n{\nstd::basic_string<utf16_char> tmp;\ntmp.resize(utf16.length());\nstd::copy(utf16.begin(), utf16.end(), tmp.begin());\nutf8 = conv_utf8_utf16.to_bytes(tmp);\n}\nvoid strings::utf8_to_utf16(const std::string& utf8, std::u16string& utf16)\n{\nstd::basic_string<utf16_char> tmp = conv_utf8_utf16.from_bytes(utf8);\nutf16.clear();\nutf16.resize(tmp.length());\nstd::copy(tmp.begin(), tmp.end(), utf16.begin());\n}\n``````\n\n## Converting to std::string\n\n`std::ostringstream` can be used to convert any streamable type to a string representation, by inserting the object into a `std::ostringstream` object (with the stream insertion operator `<<`) and then converting the whole `std::ostringstream` to a `std::string`.\n\nFor `int` for instance:\n\n``````#include <sstream>\n\nint main()\n{\nint val = 4;\nstd::ostringstream str;\nstr << val;\nstd::string converted = str.str();\nreturn 0;\n}\n``````\n\nIf you write your own conversion function, the simple\n\n``````template<class T>\nstd::string toString(const T& x)\n{\nstd::ostringstream ss;\nss << x;\nreturn ss.str();\n}\n``````\n\nworks but isn't suitable for performance-critical code.\n\nUser-defined classes may implement the stream insertion operator if desired:\n\n``````std::ostream& operator<<( std::ostream& out, const A& a )\n{\n// write a string representation of a to out\nreturn out;\n}\n``````\nC++11\n\nAside from streams, since C++11 you can also use the `std::to_string` (and `std::to_wstring`) function which is overloaded for all fundamental types and returns the string representation of its parameter.\n\n``````std::string s = std::to_string(0x12f3); // after this the string s contains \"4851\"\n``````\n\n## Finding character(s) in a string\n\nTo find a character or another string, you can use `std::string::find`. It returns the position of the first character of the first match. 
If no matches were found, the function returns `std::string::npos`.\n\n``````std::string str = \"Curiosity killed the cat\";\nauto it = str.find(\"cat\");\n\nif (it != std::string::npos)\nstd::cout << \"Found at position: \" << it << '\\n';\nelse\nstd::cout << \"Not found!\" << '\\n';\n``````\n\nFound at position: 21\n\nThe search opportunities are further expanded by the following functions:\n\n``````find_first_of // Find first occurrence of characters\nfind_first_not_of // Find first absence of characters\nfind_last_of // Find last occurrence of characters\nfind_last_not_of // Find last absence of characters\n``````\n\nThese functions can allow you to search for characters from the end of the string, as well as find the negative case (i.e. characters that are not in the string). Here is an example:\n\n``````std::string str = \"dog dog cat cat\";\nstd::cout << \"Found at position: \" << str.find_last_of(\"gzx\") << '\\n';\n``````\n\nFound at position: 6\n\nNote: Be aware that the above functions do not search for substrings, but rather for characters contained in the search string. In this case, the last occurrence of `'g'` was found at position `6` (the other characters weren't found).\n\n## Lexicographical comparison\n\nTwo `std::string`s can be compared lexicographically using the operators `==`, `!=`, `<`, `<=`, `>`, and `>=`:\n\n``````std::string str1 = \"Foo\";\nstd::string str2 = \"Bar\";\n\nassert(!(str1 < str2));\nassert(str1 > str2);\nassert(!(str1 <= str2));\nassert(str1 >= str2);\nassert(!(str1 == str2));\nassert(str1 != str2);\n``````\n\nAll these functions use the underlying `std::string::compare()` method to perform the comparison, and, for convenience, return boolean values. The operation of these functions may be interpreted as follows, regardless of the actual implementation:\n\n• operator`==`:\n\nIf `str1.length() == str2.length()` and each character pair matches, then returns `true`, otherwise returns `false`.\n\n• operator`!=`:\n\nIf `str1.length() != str2.length()` or one character pair doesn't match, returns `true`, otherwise it returns `false`.\n\n• operator`<` or operator`>`:\n\nFinds the first different character pair, compares them then returns the boolean result.\n\n• operator`<=` or operator`>=`:\n\nFinds the first different character pair, compares them then returns the boolean result.\n\nNote: The term character pair means the corresponding characters in both strings at the same positions. For better understanding, if two example strings are `str1` and `str2`, and their lengths are `n` and `m` respectively, then the character pairs of both strings are the `str1[i]` and `str2[i]` pairs where i = 0, 1, 2, ..., max(n,m) - 1. If for any i the corresponding character does not exist, that is, when i is greater than or equal to `n` or `m`, it is considered to be the lowest value.\n\nHere is an example of using `<`:\n\n``````std::string str1 = \"Barr\";\nstd::string str2 = \"Bar\";\n\nassert(str2 < str1);\n``````\n\nThe steps are as follows:\n\n1. Compare the first characters, `'B' == 'B'` - move on.\n2. Compare the second characters, `'a' == 'a'` - move on.\n3. Compare the third characters, `'r' == 'r'` - move on.\n4. The `str2` range is now exhausted, while the `str1` range still has characters. 
Thus, `str2 < str1`.\n\n## Looping through each character\n\nC++11\n\n`std::string` supports iterators, and so you can use a ranged based loop to iterate through each character:\n\n``````std::string str = \"Hello World!\";\nfor (auto c : str)\nstd::cout << c;\n``````\n\nYou can use a \"traditional\" `for` loop to loop through every character:\n\n``````std::string str = \"Hello World!\";\nfor (std::size_t i = 0; i < str.length(); ++i)\nstd::cout << str[i];\n``````\n\n## Splitting\n\nUse `std::string::substr` to split a string. There are two variants of this member function.\n\nThe first takes a starting position from which the returned substring should begin. The starting position must be valid in the range `[0, str.length()]`:\n\n``````std::string str = \"Hello foo, bar and world!\";\nstd::string newstr = str.substr(11); // \"bar and world!\"\n``````\n\nThe second takes a starting position and a total length of the new substring. Regardless of the length, the substring will never go past the end of the source string:\n\n``````std::string str = \"Hello foo, bar and world!\";\nstd::string newstr = str.substr(15, 3); // \"and\"\n``````\n\nNote that you can also call `substr` with no arguments; in this case an exact copy of the string is returned:\n\n``````std::string str = \"Hello foo, bar and world!\";\nstd::string newstr = str.substr(); // \"Hello foo, bar and world!\"\n``````\n\n## String replacement\n\n### Replace by position\n\nTo replace a portion of a `std::string` you can use the method `replace` from `std::string`.\n\n`replace` has a lot of useful overloads:\n\n``````//Define string\nstd::string str = \"Hello foo, bar and world!\";\nstd::string alternate = \"Hello foobar\";\n\n//1)\nstr.replace(6, 3, \"bar\"); //\"Hello bar, bar and world!\"\n\n//2)\nstr.replace(str.begin() + 6, str.end(), \"nobody!\"); //\"Hello nobody!\"\n\n//3)\nstr.replace(19, 5, alternate, 6, 6); //\"Hello foo, bar and foobar!\"\n``````\nC++14\n``````//4)\nstr.replace(19, 5, alternate, 6); //\"Hello foo, bar and foobar!\"\n``````\n``````//5)\nstr.replace(str.begin(), str.begin() + 5, str.begin() + 6, str.begin() + 9);\n//\"foo foo, bar and world!\"\n\n//6)\nstr.replace(0, 5, 3, 'z'); //\"zzz foo, bar and world!\"\n\n//7)\nstr.replace(str.begin() + 6, str.begin() + 9, 3, 'x'); //\"Hello xxx, bar and world!\"\n``````\nC++11\n``````//8)\nstr.replace(str.begin(), str.begin() + 5, { 'x', 'y', 'z' }); //\"xyz foo, bar and world!\"\n``````\n\n### Replace occurrences of a string with another string\n\nReplace only the first occurrence of `replace` with `with` in `str`:\n\n``````std::string replaceString(std::string str,\nconst std::string& replace,\nconst std::string& with){\nstd::size_t pos = str.find(replace);\nif (pos != std::string::npos)\nstr.replace(pos, replace.length(), with);\nreturn str;\n}\n``````\n\nReplace all occurrences of `replace` with `with` in `str`:\n\n``````std::string replaceStringAll(std::string str,\nconst std::string& replace,\nconst std::string& with) {\nif(!replace.empty()) {\nstd::size_t pos = 0;\nwhile ((pos = str.find(replace, pos)) != std::string::npos) {\nstr.replace(pos, replace.length(), with);\npos += with.length();\n}\n}\nreturn str;\n}\n``````\n\n## Tokenize\n\nListed from least expensive to most expensive at run-time:\n\n1. 
`std::strtok` is the cheapest standard provided tokenization method; it also allows the delimiter to be modified between tokens, but it incurs 3 difficulties with modern C++:\n\n• `std::strtok` cannot be used on multiple `strings` at the same time (though some implementations do extend to support this, such as: `strtok_s`)\n• For the same reason `std::strtok` cannot be used on multiple threads simultaneously (this may however be implementation defined, for example: Visual Studio's implementation is thread safe)\n• Calling `std::strtok` modifies the `std::string` it is operating on, so it cannot be used on `const string`s, `const char*`s, or literal strings; to tokenize any of these with `std::strtok` or to operate on a `std::string` whose contents need to be preserved, the input would have to be copied, then the copy could be operated on\n\nGenerally, the cost of any of these options will be hidden in the allocation cost of the tokens, but if the cheapest algorithm is required and `std::strtok`'s difficulties cannot be overcome, consider a hand-spun solution (a sketch appears at the end of this tutorial).\n\n``````// String to tokenize\nstd::string str{ \"The quick brown fox\" };\n// Vector to store tokens\nstd::vector<std::string> tokens;\n\nfor (auto i = strtok(&str[0], \" \"); i != NULL; i = strtok(NULL, \" \"))\ntokens.push_back(i);\n``````\n\nLive Example\n\n2. The `std::istream_iterator` uses the stream's extraction operator iteratively. If the input `std::string` is white-space delimited, this is able to expand on the `std::strtok` option by eliminating its difficulties, allowing inline tokenization thereby supporting the generation of a `const vector<string>`, and by adding support for multiple delimiting white-space characters:\n``````// String to tokenize\nconst std::string str(\"The quick \\tbrown \\nfox\");\nstd::istringstream is(str);\n// Vector to store tokens\nconst std::vector<std::string> tokens = std::vector<std::string>(\nstd::istream_iterator<std::string>(is),\nstd::istream_iterator<std::string>());\n``````\n\nLive Example\n\n3. The `std::regex_token_iterator` uses a `std::regex` to iteratively tokenize. It provides for a more flexible delimiter definition. For example, non-delimited commas and white-space:\nC++11\n``````// String to tokenize\nconst std::string str{ \"The ,qu\\\\,ick ,\\tbrown, fox\" };\nconst std::regex re{ \"\\\\s*((?:[^\\\\\\\\,]|\\\\\\\\.)*?)\\\\s*(?:,|$)\" };\n// Vector to store tokens\nconst std::vector<std::string> tokens{\nstd::sregex_token_iterator(str.begin(), str.end(), re, 1),\nstd::sregex_token_iterator()\n};\n``````\n\nLive Example\n\nSee the `regex_token_iterator` Example for more details.\n\n## Trimming characters at start/end\n\nThis example requires the headers `<algorithm>`, `<locale>`, and `<utility>`.\n\nC++11\n\nTo trim a sequence or string means to remove all leading and trailing elements (or characters) matching a certain predicate. We first trim the trailing elements, because it doesn't involve moving any elements, and then trim the leading elements. Note that the generalizations below work for all types of `std::basic_string` (e.g. `std::string` and `std::wstring`), and accidentally also for sequence containers (e.g. 
`std::vector` and `std::list`).\n\n``````template <typename Sequence, // any basic_string, vector, list etc.\ntypename Pred> // a predicate on the element (character) type\nSequence& trim(Sequence& seq, Pred pred) {\nreturn trim_start(trim_end(seq, pred), pred);\n}\n``````\n\nTrimming the trailing elements involves finding the last element not matching the predicate, and erasing from there on:\n\n``````template <typename Sequence, typename Pred>\nSequence& trim_end(Sequence& seq, Pred pred) {\nauto last = std::find_if_not(seq.rbegin(),\nseq.rend(),\npred);\nseq.erase(last.base(), seq.end());\nreturn seq;\n}\n``````\n\nTrimming the leading elements involves finding the first element not matching the predicate and erasing up to there:\n\n``````template <typename Sequence, typename Pred>\nSequence& trim_start(Sequence& seq, Pred pred) {\nauto first = std::find_if_not(seq.begin(),\nseq.end(),\npred);\nseq.erase(seq.begin(), first);\nreturn seq;\n}\n``````\n\nTo specialize the above for trimming whitespace in a `std::string` we can use the `std::isspace()` function as a predicate (a usage sketch appears at the end of this tutorial):\n\n``````std::string& trim(std::string& str, const std::locale& loc = std::locale()) {\nreturn trim(str, [&loc](const char c){ return std::isspace(c, loc); });\n}\n\nstd::string& trim_start(std::string& str, const std::locale& loc = std::locale()) {\nreturn trim_start(str, [&loc](const char c){ return std::isspace(c, loc); });\n}\n\nstd::string& trim_end(std::string& str, const std::locale& loc = std::locale()) {\nreturn trim_end(str, [&loc](const char c){ return std::isspace(c, loc); });\n}\n``````\n\nSimilarly, we can use the `std::iswspace()` function for `std::wstring` etc.\n\nIf you wish to create a new sequence that is a trimmed copy, then you can use a separate function:\n\n``````template <typename Sequence, typename Pred>\nSequence trim_copy(Sequence seq, Pred pred) { // NOTE: passing seq by value\ntrim(seq, pred);\nreturn seq;\n}\n``````\n\n## Using the std::string_view class\n\nC++17\n\nC++17 introduces `std::string_view`, which is simply a non-owning range of `const char`s, implementable as either a pair of pointers or a pointer and a length. It is a superior parameter type for functions that require non-modifiable string data. Before C++17, there were several options for this:\n\n``````void foo(std::string const& s); // pre-C++17, single argument, could incur\n// allocation if caller's data was not in a string\n// (e.g. 
string literal or vector<char> )\n\nvoid foo(const char* s, size_t len); // pre-C++17, two arguments, have to pass them\n// both everywhere\n\nvoid foo(const char* s); // pre-C++17, single argument, but need to call\n// strlen()\n\ntemplate <class StringT>\nvoid foo(StringT const& s); // pre-C++17, caller can pass arbitrary char data\n// provider, but now foo() has to live in a header\n``````\n\nAll of these can be replaced with:\n\n``````void foo(std::string_view s); // post-C++17, single argument, tighter coupling\n// zero copies regardless of how caller is storing\n// the data\n``````\n\nNote that `std::string_view` cannot modify its underlying data.\n\n`string_view` is useful when you want to avoid unnecessary copies.\n\nIt offers a useful subset of the functionality that `std::string` does, although some of the functions behave differently:\n\n``````std::string str = \"lllloooonnnngggg sssstttrrriiinnnggg\"; //A really long string\n\n//Bad way - 'string::substr' returns a new string (expensive if the string is long)\nstd::cout << str.substr(15, 10) << '\\n';\n\n//Good way - No copies are created!\nstd::string_view view = str;\n\n// string_view::substr returns a new string_view\nstd::cout << view.substr(15, 10) << '\\n';\n``````"
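The Remarks section above mentions the C++14 `"s"` literal suffix without showing it in use. Here is a minimal, self-contained sketch; the variable names are illustrative only.

```cpp
#include <iostream>
#include <string>

int main()
{
    using namespace std::string_literals; // required to make the ""s suffix visible

    auto s = "foo"s;                           // s is std::string, not const char*
    std::cout << (s + "bar"s).size() << '\n';  // prints 6; operator+ joins two std::strings
}
```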
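The conversion section above notes that the `std::sto*` functions throw on failure, unlike the `std::ato*` family, but does not show how that failure is actually detected. A short sketch under that assumption; the input strings are illustrative only.

```cpp
#include <iostream>
#include <stdexcept>
#include <string>

int main()
{
    const std::string inputs[] = { "42", "abc", "99999999999999999999" };

    for (const std::string& in : inputs) {
        try {
            std::cout << in << " -> " << std::stoi(in) << '\n';
        } catch (const std::invalid_argument&) {
            std::cout << in << " -> no conversion could be performed\n";
        } catch (const std::out_of_range&) {
            std::cout << in << " -> value does not fit in an int\n";
        }
    }
}
```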
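The Tokenize section suggests "a hand-spun solution" when `std::strtok`'s limitations cannot be overcome, but does not show one. A minimal sketch using only `find` and `substr`; the function name `split` and the single-character delimiter are assumptions, not part of the original tutorial.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Splits on a single-character delimiter without modifying the input,
// so it also works on const strings and string literals.
std::vector<std::string> split(const std::string& input, char delim)
{
    std::vector<std::string> tokens;
    std::string::size_type start = 0;
    std::string::size_type end;
    while ((end = input.find(delim, start)) != std::string::npos) {
        tokens.push_back(input.substr(start, end - start));
        start = end + 1;
    }
    tokens.push_back(input.substr(start)); // final token (may be empty)
    return tokens;
}

int main()
{
    for (const std::string& token : split("The quick brown fox", ' '))
        std::cout << token << '\n';
}
```

Note one design difference from `std::strtok`: consecutive delimiters yield empty tokens here; whether that is desirable depends on the use case.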
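The trimming section defines `trim`/`trim_start`/`trim_end` but never calls them. A short usage sketch, assuming the definitions from that section are in scope:

```cpp
#include <iostream>
#include <string>

int main()
{
    std::string s = " \t hello world \n ";
    trim(s); // in-place; picks the std::string overload with the default locale
    std::cout << '[' << s << "]\n"; // prints "[hello world]"
}
```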
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6691973,"math_prob":0.8890211,"size":25455,"snap":"2020-24-2020-29","text_gpt3_token_len":6475,"char_repetition_ratio":0.1720954,"word_repetition_ratio":0.057523303,"special_character_ratio":0.27963072,"punctuation_ratio":0.23918867,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96909255,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-16T12:49:49Z\",\"WARC-Record-ID\":\"<urn:uuid:532c3a42-8542-4ac4-bb89-95c75bed1b0e>\",\"Content-Length\":\"74307\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d3a18445-f173-46e6-992d-144721493aab>\",\"WARC-Concurrent-To\":\"<urn:uuid:6fefa0d3-01c0-48df-a586-d5a7a5f9b93e>\",\"WARC-IP-Address\":\"40.83.160.29\",\"WARC-Target-URI\":\"https://cpluspluspedia.com/en/tutorial/488/std--string\",\"WARC-Payload-Digest\":\"sha1:H253QOT62OYHJCVZDB2DHUPKUFTJHLCB\",\"WARC-Block-Digest\":\"sha1:GVJZ3ME6JTU7RZO3QM7TGSKKSQ7ZJ55Y\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593657169226.65_warc_CC-MAIN-20200716122414-20200716152414-00402.warc.gz\"}"} |
http://mathhombre.blogspot.com/2015/09/math-is.html | [
"## Monday, September 7, 2015\n\n### Math is...\n\nOur standard (non-thesis) capstone is a course called The Nature of Modern Mathematics. For me, this is a math history course.\n\nOur essential questions:\n• what is math?\n• what is its nature? (Is it invented or discovered? Is it completable? Is it beautiful?)\n• what are the important ideas of math?\n• how do I do math?\n• what is the history of math?\n• what are the important milestones?\n• what do mathematicians do now?\n• who are they?\n• what are the big open questions?\n\nI love teaching this course.\n\nThe first assignment is a pre-assessment of sorts, asking them to start blogging with a short post on what math is and what are the milestones they know about. Given their responses, I think we can see that this is going to be a good semester. What have college majors learned about math? We have about a third future elementary teachers, a third secondary teachers, and a third going on for graduate school or the corporate world. You might be able to see a stong influence of calculus courses, geometry and discrete mathematics.",
null,
"The amazing Ben Orlin\nThis blogpost is in case you would find what they think about math interesting, or if it might start you thinking about what your students think about math. I sorted their responses by my own weird classifications.\n\nHere is the list of all their blogs. If you read just one, try Brandon's.\n\nMath is...\n\n(patterns)\n• patterns\n• about trying to find universal patterns that we can apply to infinite situations or problems.\n• a way of thinking about patterns throughout the universe. Math is interpreting and studying these patterns to find more patterns.\n• the study of patterns in the world and in our minds and how they connect to each other.\n\n(tools)\n• a tool\n• all the computational things we learn throughout life, but it is also a tool and language humans use to make sense of the world around us.\n• a collection of tools that we use to quantify and describe the world around us. We use mathematics very similarly to how we use language. Using language, we can identify objects, convey ideas, and argue. Math can be used in the exact same way when communicating scientific ideas, defining mathematical objects, and proving theorems. The most interesting relationship between language and mathematics is that both can be utilized to describe events and objects that do not exist in the physical universe.\n\n(science)\n• logical science\n• a framework we use to understand, and like science, it is not reality itself\n• the study of everything around us. It is how we quantify structures. It's a science that deals with logic. It is a measurement of the physical space around us. It is so much more then just a simple discipline or school subject.\n• a logical way of explaining everything in the world and you can find math everywhere you go\n• a quantifiable way to explain physical phenomenon but also includes ways to predict imaginary situations.\n• a numeric and logical explanation of the world around us.\n• our human desire to give order and regularity to the world.\n\n(language)\n• a language\n• a language used to study and discuss patterns found in nature.\n\n(system)\n• using logical and analytical thinking to derive solutions to the problems we see from all directions\n• the use of objects that have been given accepted values and meanings to help us to quantify the world around us.",
null,
"Things We Forget\n\n(hmmmm…)\n• context. Math gives us a common ground from which to clearly and accurately communicate with the world. Math transcends language.\n• much more than just numbers, it can be used theoretically to answer some of worlds most unexplainable phenomenon. We are in the age of information where researchers and engineers are making breakthroughs everyday using advanced computes powered by mathematical formulas and theories.\n• a way of explaining what happens around us in a logical and numerical way, but there is also so much more to math than just numbers and logic. New discoveries in mathematics are occurring all the time to describe anything and everything about the world, and with these the definition of math is growing as well. So for me, the best way I could define math is by likening it to an infinite series, how mathy of me. Just like with the next term in the series, each new discovery broadens the scope of mathematics and as a result the definition becomes that much different than before.\n• literally everything",
null,
"The brilliant as usualGrant Snider\n\nName 5 Milestones...\n(concepts)\n• x 3 Number\n• x2 counting\n• Egyptian numeration\n• zero as a number\n• the acceptance of i as a number\n• the acceptance of irrationals as numbers\n• x2 e\n• x2 pi\n• x3 Measurement\n• Quantifying time and number systems in Egyptian times\n• a definite monetary system\n• x4 number operations (+, –, x, ÷)\n• proportional reasoning\n• functions\n• The coordinate plane\n• x2 the discovery of infinity\n\n(system)\n• x2 Proof\n• when mathematical concepts could be argued and verified through what we all now recognize as a proof.\n• the first math proofs for example the geometry proofs by the Greek mathematicians\n• x2 the power of communication\n• symbols\n• how to communicate what we know to others outside the math world\n• The movement into abstraction.\n\n(fields)\n• x7 geometry\n• x2 pyramids\n• x3 non-Euclidean\n• x3 algebra\n• x2 to predict, plan, and control the environment\n• ballistics\n• x2 trigonometry\n• x5 calculus\n• the computer age of statistics",
null,
"Usually he says \"practice\"!(Sydney Harris)\n(people)\n• Pythagoras and his theorem\n• x7 Euclid\n• x4 Elements\n• way to prove concepts and communicate mathematically\n• Al Khwarizmi\n• Galileo\n• Descartes\n• Newton and his Laws\n• Leibniz\n• Blaise Pascal's invention of the mechanical calculator\n\n(Theorems)\n• x4 The Pythagorean theorem\n• the realization that the Earth was round and not flat\n• x3 Euler’s Identity\n• (I swear this is the closest thing the real world has to magic.)\n• The Nine Point Circle\n• The Seven Bridges of Konigsberg\n• Euler’s Method\n\nIf you want to answer those questions in the comments, I'd be fascinated. Or if you want to share what you notice about their responses."
]
| [
null,
"http://4.bp.blogspot.com/-pY7Zpkl_sJQ/Ve5Li2ddY5I/AAAAAAAAGRI/_Sad3B8FTB4/s320/mathistakingyourbrainforawalk-MWBD.jpg",
null,
"http://2.bp.blogspot.com/-oUDgEoD8R3A/Ve5HXK1PZaI/AAAAAAAAGQw/gqU8lLNd-OI/s320/struggle1117.jpg",
null,
"http://1.bp.blogspot.com/-o3oACq7n2v8/Ve5GFORM7lI/AAAAAAAAGQo/ssWtrWij8qY/s640/meetthenumbers.jpg",
null,
"http://2.bp.blogspot.com/-COkJR0Ml7KA/Ve5JApeNXzI/AAAAAAAAGQ8/S78fzHpIhAU/s320/Eucliddirections.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.92884535,"math_prob":0.780743,"size":5821,"snap":"2020-24-2020-29","text_gpt3_token_len":1289,"char_repetition_ratio":0.10641224,"word_repetition_ratio":0.012658228,"special_character_ratio":0.21851915,"punctuation_ratio":0.086538464,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98457164,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-02T02:51:05Z\",\"WARC-Record-ID\":\"<urn:uuid:7626ed75-b842-4227-8f9e-25b3931ce8ae>\",\"Content-Length\":\"178433\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bdc1a424-4678-4c10-826e-2a7866f3d490>\",\"WARC-Concurrent-To\":\"<urn:uuid:6eb5478d-f4df-4c4f-b9f5-1c12b4c3cf58>\",\"WARC-IP-Address\":\"142.250.31.132\",\"WARC-Target-URI\":\"http://mathhombre.blogspot.com/2015/09/math-is.html\",\"WARC-Payload-Digest\":\"sha1:WPXRM6Y6SO7EPWIKFJMFJ35IOTFRO7QI\",\"WARC-Block-Digest\":\"sha1:JBDQOKC7SABGRT7ECMOY7KM56FSRKDBJ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347422065.56_warc_CC-MAIN-20200602002343-20200602032343-00557.warc.gz\"}"} |
https://bookdown.org/ajkurz/recoding_Hayes_2018/relative-conditional-direct-effects.html | [
"## 13.5 Relative conditional direct effects\n\nIn order to get the $$R^2$$ difference distribution analogous to the change in $$R^2$$ $$F$$-test Hayes discussed on pages 495–496, we’ll have to first refit the model without the interaction for the $$Y$$ criterion, liking.\n\nm_model <- bf(respappr ~ 1 + D1 + D2 + sexism + D1:sexism + D2:sexism)\ny_model <- bf(liking ~ 1 + D1 + D2 + respappr + sexism)\n\nmodel3 <-\nbrm(data = protest, family = gaussian,\nm_model + y_model + set_rescor(FALSE),\nchains = 4, cores = 4)\n\nHere’s the $$\\Delta R^2$$ density.\n\nR2s <-\nbayes_R2(model1, resp = \"liking\", summary = F) %>%\nas_tibble() %>%\nrename(model1 = R2_liking) %>%\nbind_cols(\nbayes_R2(model3, resp = \"liking\", summary = F) %>%\nas_tibble() %>%\nrename(model3 = R2_liking)\n) %>%\nmutate(difference = model1 - model3)\n\nR2s %>%\nggplot(aes(x = difference, y = 0)) +\n\ngeom_halfeyeh(point_interval = median_qi, .prob = c(0.95, 0.5),\nfill = \"grey50\", color = \"white\") +\nscale_x_continuous(breaks = c(-.5, median(R2s$difference) %>% round(2), .5)) + scale_y_continuous(NULL, breaks = NULL) + coord_cartesian(xlim = c(-.5, .5)) + xlab(expression(paste(Delta, italic(R)^2))) + theme_black() + theme(panel.grid = element_blank())",
null,
"We’ll also compare the models by their information criteria. loo(model1, model3) ## LOOIC SE ## model1 760.10 29.56 ## model3 761.61 31.10 ## model1 - model3 -1.51 5.47 waic(model1, model3) ## WAIC SE ## model1 759.74 29.47 ## model3 761.39 31.05 ## model1 - model3 -1.65 5.46 As when we went through these steps for resp = \"respappr\", above, the Bayesian $$R^2$$, the LOO-CV, and the WAIC all suggest there’s little difference between the two models with respect to predictive utility. In such a case, I’d lean on theory to choose between them. If inclined, one could also do Bayesian model averaging. Our approach to plotting the relative conditional direct effects will mirror what we did for the relative conditional indirect effects, above. Here are the brm() parameters that correspond to the parameter names of Hayes’s notation. • $$c_{1}$$ = b_liking_D1 • $$c_{2}$$ = b_liking_D2 • $$c_{4}$$ = b_liking_D1:sexism • $$c_{5}$$ = b_liking_D2:sexism With all clear, we’re off to the races. # c1 + c4W D1_function <- function(w){ post$b_liking_D1 + post$b_liking_D1:sexism*w } # c2 + c5W D2_function <- function(w){ post$b_liking_D2 + post$b_liking_D2:sexism*w } rcde_tibble <- tibble(sexism = seq(from = 3.5, to = 6.5, length.out = 30)) %>% group_by(sexism) %>% mutate(Protest vs. No Protest = map(sexism, D1_function), Collective vs. Individual Protest = map(sexism, D2_function)) %>% unnest() %>% ungroup() %>% mutate(iter = rep(1:4000, times = 30)) %>% gather(direct effect, value, -sexism, -iter) %>% mutate(direct effect = factor(direct effect, levels = c(\"Protest vs. No Protest\", \"Collective vs. Individual Protest\"))) head(rcde_tibble) ## # A tibble: 6 x 4 ## sexism iter direct effect value ## <dbl> <int> <fct> <dbl> ## 1 3.5 1 Protest vs. No Protest -0.856 ## 2 3.5 2 Protest vs. No Protest -0.482 ## 3 3.5 3 Protest vs. No Protest -1.24 ## 4 3.5 4 Protest vs. No Protest -1.23 ## 5 3.5 5 Protest vs. No Protest -1.06 ## 6 3.5 6 Protest vs. No Protest -0.663 Here is our variant of Figure 13.4, with respect to the relative conditional direct effects. rcde_tibble %>% group_by(direct effect, sexism) %>% summarize(median = median(value), ll = quantile(value, probs = .025), ul = quantile(value, probs = .975)) %>% ggplot(aes(x = sexism, group = direct effect)) + geom_ribbon(aes(ymin = ll, ymax = ul), color = \"white\", fill = \"transparent\", linetype = 3) + geom_line(aes(y = median), color = \"white\") + coord_cartesian(xlim = 4:6, ylim = c(-.6, .8)) + labs(x = expression(paste(\"Perceived Pervasiveness of Sex Discrimination in Society (\", italic(W), \")\")), y = \"Relative Conditional Effect on Liking\") + theme_black() + theme(panel.grid = element_blank(), legend.position = \"none\") + facet_grid(~ direct effect)",
null,
"Holy smokes, them are some wide 95% CIs! No wonder the information criteria and $$R^2$$ comparisons were so uninspiring. Notice that the y-axis is on the parameter space. In Hayes’s Figure 13.5, the y-axis is on the liking space, instead. When we want things in the parameter space, we work with the output of posterior_samples(); when we want them in the criterion space, we use fitted(). # we need new nd data nd <- tibble(D1 = rep(c(1/3, -2/3, 1/3), each = 30), D2 = rep(c(1/2, 0, -1/2), each = 30), respappr = mean(protest$respappr),\nsexism = seq(from = 3.5, to = 6.5, length.out = 30) %>% rep(., times = 3))\n\n# we feed nd into fitted()\nmodel1_fitted <-\nfitted(model1,\nnewdata = nd,\nresp = \"liking\",\nsummary = T) %>%\nas_tibble() %>%\nbind_cols(nd) %>%\nmutate(condition = ifelse(D2 == 0, \"No Protest\",\nifelse(D2 == -1/2, \"Individual Protest\", \"Collective Protest\"))) %>%\nmutate(condition = factor(condition, levels = c(\"No Protest\", \"Individual Protest\", \"Collective Protest\")))\n\nmodel1_fitted %>%\nggplot(aes(x = sexism, group = condition)) +\ngeom_ribbon(aes(ymin = Q2.5, ymax = Q97.5),\nlinetype = 3, color = \"white\", fill = \"transparent\") +\ngeom_line(aes(y = Estimate), color = \"white\") +\ngeom_point(data = protest, aes(x = sexism, y = liking),\ncolor = \"red\", size = 2/3) +\ncoord_cartesian(xlim = 4:6,\nylim = 4:7) +\nlabs(x = expression(paste(\"Perceived Pervasiveness of Sex Discrimination in Society (\", italic(W), \")\")),\ny = expression(paste(\"Evaluation of the Attorney (\", italic(Y), \")\"))) +\ntheme_black() +\ntheme(panel.grid = element_blank()) +\nfacet_wrap(~condition)",
null,
"We expanded the range of the y-axis, a bit, to show more of that data (and there’s even more data outside of our expanded range). Also note how after doing so and after including the 95% CI bands, the crossing regression line effect in Hayes’s Figure 13.5 isn’t as impressive looking any more.\n\nOn pages 497–498, Hayes discussed more omnibus $$F$$-tests. Much like with the $$M$$ criterion, we won’t come up with Bayesian $$F$$-tests, but we might go ahead and make pairwise comparisons at the three percentiles Hayes prefers.\n\n# we need new nd data\nnd <-\ntibble(D1 = rep(c(1/3, -2/3, 1/3), each = 3),\nD2 = rep(c(1/2, 0, -1/2), each = 3),\nrespappr = mean(protest\\$respappr),\nsexism = rep(c(4.250, 5.120, 5.896), times = 3))\n\n# this tie we'll use summary = F\nmodel1_fitted <-\nfitted(model1,\nnewdata = nd,\nresp = \"liking\",\nsummary = F) %>%\nas_tibble() %>%\ngather() %>%\nmutate(condition = rep(c(\"Collective Protest\", \"No Protest\", \"Individual Protest\"),\neach = 3*4000),\nsexism = rep(c(4.250, 5.120, 5.896), times = 3) %>% rep(., each = 4000),\niter = rep(1:4000, times = 9)) %>%\nselect(-key) %>%\nspread(key = condition, value = value) %>%\nmutate(Individual Protest - No Protest = Individual Protest - No Protest,\nCollective Protest - No Protest = Collective Protest - No Protest,\nCollective Protest - Individual Protest = Collective Protest - Individual Protest)\n\n# a tiny bit more wrangling and we're ready to plot the difference distributions\nmodel1_fitted %>%\nselect(sexism, contains(\"-\")) %>%\ngather(key, value, -sexism) %>%\n\nggplot(aes(x = value)) +\ngeom_halfeyeh(aes(y = 0), fill = \"grey50\", color = \"white\",\npoint_interval = median_qi, .prob = 0.95) +\ngeom_vline(xintercept = 0, color = \"grey25\", linetype = 2) +\nscale_y_continuous(NULL, breaks = NULL) +\nfacet_grid(sexism~key) +\ntheme_black() +\ntheme(panel.grid = element_blank())",
null,
"Now we have model1_fitted, it’s easy to get the typical numeric summaries for the differences.\n\nmodel1_fitted %>%\nselect(sexism, contains(\"-\")) %>%\ngather(key, value, -sexism) %>%\ngroup_by(key, sexism) %>%\nsummarize(mean = mean(value),\nll = quantile(value, probs = .025),\nul = quantile(value, probs = .975)) %>%\nmutate_if(is.double, round, digits = 3)\n## # A tibble: 9 x 5\n## # Groups: key \n## key sexism mean ll ul\n## <chr> <dbl> <dbl> <dbl> <dbl>\n## 1 Collective Protest - Individual Protest 4.25 -0.11 -0.713 0.487\n## 2 Collective Protest - Individual Protest 5.12 -0.147 -0.539 0.254\n## 3 Collective Protest - Individual Protest 5.90 -0.179 -0.72 0.361\n## 4 Collective Protest - No Protest 4.25 -0.542 -1.11 0.036\n## 5 Collective Protest - No Protest 5.12 -0.109 -0.569 0.338\n## 6 Collective Protest - No Protest 5.90 0.277 -0.381 0.904\n## 7 Individual Protest - No Protest 4.25 -0.432 -1.05 0.18\n## 8 Individual Protest - No Protest 5.12 0.038 -0.373 0.465\n## 9 Individual Protest - No Protest 5.90 0.456 -0.147 1.06\n\nWe don’t have $$p$$-values, but all the differences are small in magnitude and have wide 95% intervals straddling zero.\n\nTo get the difference scores Hayes presented on pages 498–500, one might:\n\npost %>%\nmutate(Difference in liking between being told she protested or not when W is 4.250 = b_liking_D1 + b_liking_D1:sexism*4.250,\nDifference in liking between being told she protested or not when W is 5.120 = b_liking_D1 + b_liking_D1:sexism*5.120,\nDifference in liking between being told she protested or not when W is 5.896 = b_liking_D1 + b_liking_D1:sexism*5.896,\n\nDifference in liking between collective vs. individual protest when W is 4.250 = b_liking_D2 + b_liking_D2:sexism*4.250,\nDifference in liking between collective vs. individual protest when W is 5.120 = b_liking_D2 + b_liking_D2:sexism*5.120,\nDifference in liking between collective vs. individual protest when W is 5.896 = b_liking_D2 + b_liking_D2:sexism*5.896) %>%\nselect(contains(\"Difference in liking\")) %>%\ngather() %>%\ngroup_by(key) %>%\nsummarize(mean = mean(value),\nll = quantile(value, probs = .025),\nul = quantile(value, probs = .975)) %>%\nmutate_if(is.double, round, digits = 3)\n## # A tibble: 6 x 4\n## key mean ll ul\n## <chr> <dbl> <dbl> <dbl>\n## 1 Difference in liking between being told she protested or not when W is 4.250 -0.487 -1.00 0.017\n## 2 Difference in liking between being told she protested or not when W is 5.120 -0.036 -0.43 0.35\n## 3 Difference in liking between being told she protested or not when W is 5.896 0.367 -0.207 0.923\n## 4 Difference in liking between collective vs. individual protest when W is 4.2… -0.11 -0.713 0.487\n## 5 Difference in liking between collective vs. individual protest when W is 5.1… -0.147 -0.539 0.254\n## 6 Difference in liking between collective vs. individual protest when W is 5.8… -0.179 -0.72 0.361"
]
| [
null,
"https://bookdown.org/ajkurz/recoding_Hayes_2018/13_files/figure-html/unnamed-chunk-24-1.png",
null,
"https://bookdown.org/ajkurz/recoding_Hayes_2018/13_files/figure-html/unnamed-chunk-27-1.png",
null,
"https://bookdown.org/ajkurz/recoding_Hayes_2018/13_files/figure-html/unnamed-chunk-28-1.png",
null,
"https://bookdown.org/ajkurz/recoding_Hayes_2018/13_files/figure-html/unnamed-chunk-29-1.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7046802,"math_prob":0.99541014,"size":10256,"snap":"2019-13-2019-22","text_gpt3_token_len":3348,"char_repetition_ratio":0.1507023,"word_repetition_ratio":0.20927319,"special_character_ratio":0.38094774,"punctuation_ratio":0.18734178,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99051005,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-23T18:21:09Z\",\"WARC-Record-ID\":\"<urn:uuid:0d96ab9d-63d6-46fd-ab5e-f9dd8ebe91af>\",\"Content-Length\":\"86352\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1f4faff6-03d5-416e-a636-bb849d774adc>\",\"WARC-Concurrent-To\":\"<urn:uuid:9e15c3b9-f8f3-42bb-895d-c3b46bd1e350>\",\"WARC-IP-Address\":\"54.156.6.136\",\"WARC-Target-URI\":\"https://bookdown.org/ajkurz/recoding_Hayes_2018/relative-conditional-direct-effects.html\",\"WARC-Payload-Digest\":\"sha1:VFDAXLZLNU3J6OK7KLX45R25CI5DYMIW\",\"WARC-Block-Digest\":\"sha1:MCIYNMJ7OFUQUEVXTCY2SDXGOXL6YSRN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202924.93_warc_CC-MAIN-20190323181713-20190323203713-00224.warc.gz\"}"} |
https://www.jobilize.com/trigonometry/test/graphing-sine-and-cosine-functions-by-openstax | [
"# 8.1 Graphs of the sine and cosine functions\n\n Page 1 / 13\nIn this section, you will:\n• Graph variations of y=sin( x ) and y=cos( x ).\n• Use phase shifts of sine and cosine curves.",
null,
"Light can be separated into colors because of its wavelike properties. (credit: \"wonderferret\"/ Flickr)\n\nWhite light, such as the light from the sun, is not actually white at all. Instead, it is a composition of all the colors of the rainbow in the form of waves. The individual colors can be seen only when white light passes through an optical prism that separates the waves according to their wavelengths to form a rainbow.\n\nLight waves can be represented graphically by the sine function. In the chapter on Trigonometric Functions , we examined trigonometric functions such as the sine function. In this section, we will interpret and create graphs of sine and cosine functions.\n\n## Graphing sine and cosine functions\n\nRecall that the sine and cosine functions relate real number values to the x - and y -coordinates of a point on the unit circle. So what do they look like on a graph on a coordinate plane? Let’s start with the sine function . We can create a table of values and use them to sketch a graph. [link] lists some of the values for the sine function on a unit circle.\n\n $x$ $0$ $\\frac{\\pi }{6}$ $\\frac{\\pi }{4}$ $\\frac{\\pi }{3}$ $\\frac{\\pi }{2}$ $\\frac{2\\pi }{3}$ $\\frac{3\\pi }{4}$ $\\frac{5\\pi }{6}$ $\\pi$ $\\mathrm{sin}\\left(x\\right)$ $0$ $\\frac{1}{2}$ $\\frac{\\sqrt{2}}{2}$ $\\frac{\\sqrt{3}}{2}$ $1$ $\\frac{\\sqrt{3}}{2}$ $\\frac{\\sqrt{2}}{2}$ $\\frac{1}{2}$ $0$\n\nPlotting the points from the table and continuing along the x -axis gives the shape of the sine function. See [link] .",
null,
"The sine function\n\nNotice how the sine values are positive between 0 and $\\text{\\hspace{0.17em}}\\pi ,\\text{\\hspace{0.17em}}$ which correspond to the values of the sine function in quadrants I and II on the unit circle, and the sine values are negative between $\\text{\\hspace{0.17em}}\\pi \\text{\\hspace{0.17em}}$ and $\\text{\\hspace{0.17em}}2\\pi ,\\text{\\hspace{0.17em}}$ which correspond to the values of the sine function in quadrants III and IV on the unit circle. See [link] .",
null,
"Plotting values of the sine function\n\nNow let’s take a similar look at the cosine function . Again, we can create a table of values and use them to sketch a graph. [link] lists some of the values for the cosine function on a unit circle.\n\n $\\mathbf{x}$ $0$ $\\frac{\\pi }{6}$ $\\frac{\\pi }{4}$ $\\frac{\\pi }{3}$ $\\frac{\\pi }{2}$ $\\frac{2\\pi }{3}$ $\\frac{3\\pi }{4}$ $\\frac{5\\pi }{6}$ $\\pi$ $\\mathbf{cos}\\left(\\mathbf{x}\\right)$ $1$ $\\frac{\\sqrt{3}}{2}$ $\\frac{\\sqrt{2}}{2}$ $\\frac{1}{2}$ $0$ $-\\frac{1}{2}$ $-\\frac{\\sqrt{2}}{2}$ $-\\frac{\\sqrt{3}}{2}$ $-1$\n\nAs with the sine function, we can plots points to create a graph of the cosine function as in [link] .",
null,
"The cosine function\n\nBecause we can evaluate the sine and cosine of any real number, both of these functions are defined for all real numbers. By thinking of the sine and cosine values as coordinates of points on a unit circle, it becomes clear that the range of both functions must be the interval $\\text{\\hspace{0.17em}}\\left[-1,1\\right].$\n\nIn both graphs, the shape of the graph repeats after $\\text{\\hspace{0.17em}}2\\pi ,\\text{\\hspace{0.17em}}$ which means the functions are periodic with a period of $\\text{\\hspace{0.17em}}2\\pi .\\text{\\hspace{0.17em}}$ A periodic function is a function for which a specific horizontal shift , P , results in a function equal to the original function: $\\text{\\hspace{0.17em}}f\\left(x+P\\right)=f\\left(x\\right)\\text{\\hspace{0.17em}}$ for all values of $\\text{\\hspace{0.17em}}x\\text{\\hspace{0.17em}}$ in the domain of $\\text{\\hspace{0.17em}}f.\\text{\\hspace{0.17em}}$ When this occurs, we call the smallest such horizontal shift with $\\text{\\hspace{0.17em}}P>0\\text{\\hspace{0.17em}}$ the period of the function. [link] shows several periods of the sine and cosine functions.\n\nLooking again at the sine and cosine functions on a domain centered at the y -axis helps reveal symmetries. As we can see in [link] , the sine function is symmetric about the origin. Recall from The Other Trigonometric Functions that we determined from the unit circle that the sine function is an odd function because $\\text{\\hspace{0.17em}}\\mathrm{sin}\\left(-x\\right)=-\\mathrm{sin}\\text{\\hspace{0.17em}}x.\\text{\\hspace{0.17em}}$ Now we can clearly see this property from the graph.\n\n#### Questions & Answers\n\nA laser rangefinder is locked on a comet approaching Earth. The distance g(x), in kilometers, of the comet after x days, for x in the interval 0 to 30 days, is given by g(x)=250,000csc(π30x). Graph g(x) on the interval [0, 35]. Evaluate g(5) and interpret the information. What is the minimum distance between the comet and Earth? When does this occur? To which constant in the equation does this correspond? Find and discuss the meaning of any vertical asymptotes.\nKaitlyn Reply\nThe sequence is {1,-1,1-1.....} has\namit Reply\ncircular region of radious\nKainat Reply\nhow can we solve this problem\nJoel Reply\nSin(A+B) = sinBcosA+cosBsinA\nEseka Reply\nProve it\nEseka\nPlease prove it\nEseka\nhi\nJoel\nJune needs 45 gallons of punch. 2 different coolers. Bigger cooler is 5 times as large as smaller cooler. How many gallons in each cooler?\nArleathia Reply\n7.5 and 37.5\nNando\nfind the sum of 28th term of the AP 3+10+17+---------\nPrince Reply\nI think you should say \"28 terms\" instead of \"28th term\"\nVedant\nthe 28th term is 175\nNando\n192\nKenneth\nif sequence sn is a such that sn>0 for all n and lim sn=0than prove that lim (s1 s2............ sn) ke hole power n =n\nSANDESH Reply\nwrite down the polynomial function with root 1/3,2,-3 with solution\nGift Reply\nif A and B are subspaces of V prove that (A+B)/B=A/(A-B)\nPream Reply\nwrite down the value of each of the following in surd form a)cos(-65°) b)sin(-180°)c)tan(225°)d)tan(135°)\nOroke Reply\nProve that (sinA/1-cosA - 1-cosA/sinA) (cosA/1-sinA - 1-sinA/cosA) = 4\nkiruba Reply\nwhat is the answer to dividing negative index\nMorosi Reply\nIn a triangle ABC prove that. 
(b+c)cosA+(c+a)cosB+(a+b)cisC=a+b+c.\nShivam Reply\ngive me the waec 2019 questions\nAaron Reply\n\n### Read also:\n\n#### Get the best Algebra and trigonometry course in your pocket!\n\nSource: OpenStax, Algebra and trigonometry. OpenStax CNX. Nov 14, 2016 Download for free at https://legacy.cnx.org/content/col11758/1.6\nGoogle Play and the Google Play logo are trademarks of Google Inc.\n\nNotification Switch\n\nWould you like to follow the 'Algebra and trigonometry' conversation and receive update notifications?",
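The periodicity and odd-symmetry claims above can be verified directly from the angle-addition identities; a short worked derivation (added for illustration, not part of the original page):

```latex
% Periodicity with period 2\pi, via the angle-addition identity:
\sin(x + 2\pi) = \sin x \cos 2\pi + \cos x \sin 2\pi
              = \sin x \cdot 1 + \cos x \cdot 0 = \sin x
% The same argument gives \cos(x + 2\pi) = \cos x.
% Odd symmetry of sine (symmetry of the graph about the origin):
\sin(-x) = -\sin x, \qquad \cos(-x) = \cos x
```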
]
| [
null,
"https://www.jobilize.com/ocw/mirror/col11758/m49387/CNX_Precalc_Figure_06_01_001.jpg",
null,
"https://www.jobilize.com/ocw/mirror/col11758/m49387/CNX_Precalc_Figure_06_01_002.jpg",
null,
"https://www.jobilize.com/ocw/mirror/col11758/m49387/CNX_Precalc_Figure_06_01_003.jpg",
null,
"https://www.jobilize.com/ocw/mirror/col11758/m49387/CNX_Precalc_Figure_06_01_004.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8512881,"math_prob":0.9993396,"size":2942,"snap":"2019-35-2019-39","text_gpt3_token_len":625,"char_repetition_ratio":0.20626277,"word_repetition_ratio":0.1160221,"special_character_ratio":0.21583956,"punctuation_ratio":0.07446808,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9996455,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30],"im_url_duplicate_count":[null,2,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,null,null,1,null,1,null,null,null,null,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-23T08:40:47Z\",\"WARC-Record-ID\":\"<urn:uuid:d831957e-3d2b-46af-a9bb-091129f1e486>\",\"Content-Length\":\"130275\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cf1893df-079b-4d6c-971d-451e00cf71ae>\",\"WARC-Concurrent-To\":\"<urn:uuid:4956e309-14b2-428c-8e65-72c321e96fd5>\",\"WARC-IP-Address\":\"207.38.89.177\",\"WARC-Target-URI\":\"https://www.jobilize.com/trigonometry/test/graphing-sine-and-cosine-functions-by-openstax\",\"WARC-Payload-Digest\":\"sha1:RGZA4HPZXTVEDE7W5AEOQJGLNABMDLME\",\"WARC-Block-Digest\":\"sha1:APCDSM7IJSEX6R7WO4ZVPROC2DKL5RX4\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027318243.40_warc_CC-MAIN-20190823083811-20190823105811-00221.warc.gz\"}"} |
https://labs.tib.eu/arxiv/?author=K.%20K.%20Li | [
"• ### Search for the rare decay $K^+\\to\\mu^+\\nu\\bar\\nu\\nu$(1606.09054)\n\nSept. 1, 2016 hep-ex\nEvidence of the $K^+\\to\\mu^+\\nu\\bar\\nu\\nu$ decay was searched for using E949 experimental data with an exposure of $1.70\\times 10^{12}$ stopped kaons. The data sample is dominated by the backgrond process $K^+\\to\\mu^+\\nu_\\mu\\gamma$. An upper limit on the decay rate $\\Gamma(K^+\\to\\mu^+\\nu\\bar\\nu\\nu)< 2.4\\times 10^{-6}\\Gamma(K^+\\to all)$ at 90% confidence level was set assuming the Standard Model muon spectrum. The data are presented in such a way as to allow calculation of rates for any assumed $\\mu^+$ spectrum.\n• ### Upper Limit on the Decay K+ --> e+ nu mu+ mu-(hep-ex/9802011)\n\nApril 14, 1998 hep-ex\nAn upper limit on the branching ratio for the decay K+ --> e+ nu mu+ mu- is set at 5.0 x 10^{-7} at 90% confidence level, consistent with predictions from chiral perturbation theory.\n• ### Observation of the Decay K^+ --> pi^+ gamma gamma(hep-ex/9708011)\n\nOct. 22, 1997 hep-ex\nThe first observation of the decay K^+ --> pi^+ gamma gamma is reported. A total of 31 events was observed with an estimated background of 5.1 +- 3.3 events in the pi+ momentum range from 100 MeV/c to 180 MeV/c. The corresponding partial branching ratio, B(K+ -> pi+ gamma gamma, 100 MeV/c < P_pi^+ < 180 MeV/c), is (6.0 +- 1.5 (stat) +- 0.7 (sys)) x 10^{-7}. No K^+ --> pi^+ gamma gamma decay was observed in the pi^+ momentum region greater than 215 MeV/c. The observed pi^+ momentum spectrum is compared with the predictions of chiral perturbation theory.\n• ### Observation of the Decay K+ --> pi+ mu+ mu-(hep-ex/9708012)\n\nOct. 19, 1997 hep-ex\nWe have observed the rare decay K+ --> pi+ mu+ mu- and measured the branching ratio Gamma(K+ --> pi+ mu+ mu-)/Gamma(K+ --> all) = (5.0 +/- 0.4 (stat.) +/- 0.7 (sys.) +/- 0.6 (theor.)) x 10^{-8}. We compare this result with predictions from chiral perturbation theory and estimates based on the decay K+ --> pi+ e+ e-.\n• ### Search for the Decay $K^+ \\to \\pi^+ \\nu \\overline{\\nu}$(hep-ex/9510006)\n\nOct. 18, 1995 hep-ex\nAn upper limit on the branching ratio for the decay $K^+ \\! \\rightarrow \\! \\pi^+ \\nu \\overline{\\nu}$ is set at $2.4 \\times 10^{-9}$ at the 90\\% C.L. using pions in the kinematic region $214~{\\rm MeV}/c < P_\\pi < 231~{\\rm MeV}/c$. An upper limit of $5.2 \\times 10^{-10}$ is found on the branching ratio for decays $K^+ \\! \\rightarrow \\! \\pi^+ X^0$, where $X^0$ is any massless, weakly interacting neutral particle. Limits are also set for cases where $M_{X^0}>0$."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7578778,"math_prob":0.9973715,"size":2049,"snap":"2021-04-2021-17","text_gpt3_token_len":664,"char_repetition_ratio":0.112469435,"word_repetition_ratio":0.064705886,"special_character_ratio":0.35431919,"punctuation_ratio":0.10352941,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99964964,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-27T10:16:28Z\",\"WARC-Record-ID\":\"<urn:uuid:83678104-b999-4dd7-b40a-d7b42265b079>\",\"Content-Length\":\"46854\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7c1e248e-9556-4dcb-8b4c-248c65f78b5e>\",\"WARC-Concurrent-To\":\"<urn:uuid:98641901-5271-4103-9515-07c24b59ca2a>\",\"WARC-IP-Address\":\"194.95.114.13\",\"WARC-Target-URI\":\"https://labs.tib.eu/arxiv/?author=K.%20K.%20Li\",\"WARC-Payload-Digest\":\"sha1:G5X2PYFC2WK7Q26OMJONBGDBEGB3QG7B\",\"WARC-Block-Digest\":\"sha1:2PJGPUFWU7JZBU4JYDJEHAK5VIADQLKA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610704821381.83_warc_CC-MAIN-20210127090152-20210127120152-00753.warc.gz\"}"} |
https://tellmeanumber.hostcoder.com/534823/ | [
"# Question Is 534,823 a prime number?\n\nThe number 534,823 is NOT a PRIME number.\n\n#### How to check if the number 534,823 is a prime number\n\nA prime number can be divided, without a remainder, only by itself and by 1. For example, 13 can be divided only by 13 and by 1. In this case, the number 534,823 that you looked for, is NOT a PRIME number, so it devides by 1,53, 10091, 534823, and of course 534,823.\n\n# Question Where is the number 534,823 located in π (PI) decimals?\n\nThe number 534,823 is at position 205447 in π decimals.\n\nSearch was acomplished in the first 100 milions decimals of PI.\n\n# Question What is the roman representation of number 534,823?\n\nThe roman representation of number 534,823 is DXXXIVDCCCXXIII.\n\n#### Large numbers to roman numbers\n\n3,999 is the largest number you can write in Roman numerals. There is a convencion that you can represent numbers larger than 3,999 in Roman numerals using an overline. Matematically speaking, this means means you are multiplying that Roman numeral by 1,000. For example if you would like to write 70,000 in Roman numerals you would use the Roman numeral LXX. This moves the limit to write roman numerals to 3,999,999.\n\n# Question How many digits are in the number 534,823?\n\nThe number 534,823 has 6 digits.\n\n#### How to get the lenght of the number 534,823\n\nTo find out the lenght of 534,823 we simply count the digits inside it.\n\n# Question What is the sum of all digits of the number 534,823?\n\nThe sum of all digits of number 534,823 is 25.\n\n#### How to calculate the sum of all digits of number 534,823\n\nTo calculate the sum of all digits of number 534,823 you will have to sum them all like fallows:\n\n# Question What is the hash of number 534,823?\n\nThere is not one, but many hash function. some of the most popular are md5 and sha-1\n\n#### Here are some of the most common cryptographic hashes for the number 534,823\n\nCriptographic function Hash for number 534,823\nmd5 15c14d913b65de221c1d0033424c2ae0\nsha1 e383c68364c138624961a611338e5a73d96782a6\nsha512 7dda0d444ddf0d52246dc4aa4697ebf67327a953ea55fa76c0a67840a4acb58553c7f922f7588f60347d5103cdf835c80c38d03129afda65f21b88c7b9d4fbb2\n\n# Question How to write number 534,823 in English text?\n\nIn English the number 534,823 is writed as five hundred thirty-four thousand, eight hundred twenty-three.\n\n#### How to write numbers in words\n\nWhile writing short numbers using words makes your writing look clean, writing longer numbers as words isn't as useful. On the other hand writing big numbers it's a good practice while you're learning.\n\nHere are some simple tips about when to wright numbers using letters.\n\n Numbers less than ten should always be written in text. On the other hand numbers that are less then 100 and multiple of 10, should also be written using letters not numbers. Example: Number 534,823 should NOT be writed as five hundred thirty-four thousand, eight hundred twenty-three, in a sentence Big numbers should be written as the numeral followed by the word thousands, million, billions, trillions, etc. If the number is that big it might be a good idea to round up some digits so that your rider remembers it. Example: Number 534,823 could also be writed as 534.8 thousands, in a sentence, since it is considered to be a big number\n\n#### What numbers are before and after 534,823\n\nPrevious number is: 534,822\n\nNext number is: 534,824"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.90470713,"math_prob":0.9829389,"size":374,"snap":"2021-43-2021-49","text_gpt3_token_len":115,"char_repetition_ratio":0.18378378,"word_repetition_ratio":0.0,"special_character_ratio":0.36096257,"punctuation_ratio":0.185567,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9891877,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-27T17:02:33Z\",\"WARC-Record-ID\":\"<urn:uuid:75d8b75c-2689-4bcf-93b9-a3546769656b>\",\"Content-Length\":\"16546\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ffda9f57-791e-4c23-b9b8-949099da3be1>\",\"WARC-Concurrent-To\":\"<urn:uuid:14b875f1-a86b-457b-b5e0-32c6fd4d1818>\",\"WARC-IP-Address\":\"51.254.201.96\",\"WARC-Target-URI\":\"https://tellmeanumber.hostcoder.com/534823/\",\"WARC-Payload-Digest\":\"sha1:S5T65H2DSRLZJ5JZIERWJ3PBAB5Z5ES4\",\"WARC-Block-Digest\":\"sha1:CPHFEKR4NCXE6W5LG5ZB5CRNMIRXI4PF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358208.31_warc_CC-MAIN-20211127163427-20211127193427-00496.warc.gz\"}"} |
https://www.easycalculation.com/area/diagonal-of-rectangle-prism.php | [
"# Diagonal of Rectangle Prism Calculator\n\nA rectangle prism is a one which is a rectangle cuboid. Rectangle prism has six flat faces and all angles are right angles and have the same cross-section along a length. This online diagonal of rectangle prism calculator helps you by providing the answer to your question of 'How to find the diagonal of a prism?'. Just enter the length, height, and width values in the diagonal length rectangular prism calculator to do the right rectangular diagonal length prism calculations with ease.\n\nlength\nlength\nlength\nlength\n\nA rectangle prism is a one which is a rectangle cuboid. Rectangle prism has six flat faces and all angles are right angles and have the same cross-section along a length. This online diagonal of rectangle prism calculator helps you by providing the answer to your question of 'How to find the diagonal of a prism?'. Just enter the length, height, and width values in the diagonal length rectangular prism calculator to do the right rectangular diagonal length prism calculations with ease.\n\nCode to add this calci to your website",
null,
"",
null,
"#### Formula:\n\nd = √l2 + w2 + h2 Where, d = Diagonal of Rectangle Prism l = Length of Rectangle Prism w = Width of Rectangle Prism h = Height of Rectangle Prism\n\n### Example:\n\nCalculate the diagonal of rectangle prism which is in the length of 50 cm, width of 30 cm and height of 45 cm.\n\n#### Solution:\n\nd = √502 + 302 + 452\n= 73.6546 cm"
]
| [
null,
"https://www.easycalculation.com/images/embed-plus.gif",
null,
"https://www.easycalculation.com/images/embed-minus.gif",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.90403175,"math_prob":0.99494576,"size":1124,"snap":"2021-43-2021-49","text_gpt3_token_len":231,"char_repetition_ratio":0.18839286,"word_repetition_ratio":0.8082902,"special_character_ratio":0.21797153,"punctuation_ratio":0.07906977,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99927443,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-29T20:11:50Z\",\"WARC-Record-ID\":\"<urn:uuid:368aa46f-bdfc-4301-b0cf-823a3f05fdc5>\",\"Content-Length\":\"26100\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6085714b-68f7-4459-a77f-dbecf03a9f1c>\",\"WARC-Concurrent-To\":\"<urn:uuid:14ccbed1-dd73-497a-a69f-a8f49933621d>\",\"WARC-IP-Address\":\"173.255.199.118\",\"WARC-Target-URI\":\"https://www.easycalculation.com/area/diagonal-of-rectangle-prism.php\",\"WARC-Payload-Digest\":\"sha1:XWUC2ROTZ7VH6RDWL3VE434GQMWPSBMW\",\"WARC-Block-Digest\":\"sha1:T5F6FUCCL3KJ2Z2G45FH3YMDE6YJWZ2Y\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358842.4_warc_CC-MAIN-20211129194957-20211129224957-00553.warc.gz\"}"} |
http://tobyho.com/2010/03/04/memoize-for-javascript/ | [
"Memoize for Javascript\n\nI wrote a memoize function for Javascript today to cache file timestamps. I thought I would share it with the world. Here is the code:\n\nfunction memoize(f){\nvar cache = {}\nreturn function(){\nvar keys = []\nfor (var i = 0; i < arguments.length; i++){\nkeys.push(typeof(arguments[i]) + ':' + String(arguments[i]))\n}\nvar key = keys.join('/')\nif (key in cache){\nreturn cache[key]\n}else{\nvar val = f.apply(null, arguments)\ncache[key] = val\nreturn val\n}\n}\n}\n\nLet's say you have a function:\n\nfunction f(x, y){\nreturn x + y * 2\n}\n\nIf you memoize it:\n\nf = memoize(f)\n\nThe first time you call f with arguments (3, 4):\n\nf(3, 4) => 11\n\nIt will compute it, but the second time:\n\nf(3, 4) => 11\n\nIt will merely return the cached value computed from the last time.\n\nNote about the implementation: the equality metric used by this memoize function is: 2 objects are equal iff they are of the same type:\n\ntypeof(one) == typeof(other)\n\nand, they have the same string represention:\n\nString(one) == String(other)\n\nThat is it. Enjoy!"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.75273067,"math_prob":0.9941894,"size":994,"snap":"2019-43-2019-47","text_gpt3_token_len":266,"char_repetition_ratio":0.13535354,"word_repetition_ratio":0.022988506,"special_character_ratio":0.30784708,"punctuation_ratio":0.1352657,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9685769,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-16T17:21:03Z\",\"WARC-Record-ID\":\"<urn:uuid:7999be3e-6c68-4f01-9f0e-b8ffe2bae155>\",\"Content-Length\":\"4813\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6fc9a514-2c28-4078-8bce-2469a33d5664>\",\"WARC-Concurrent-To\":\"<urn:uuid:b6830a30-fed5-4adb-80b9-ce4da674f9ec>\",\"WARC-IP-Address\":\"207.38.92.26\",\"WARC-Target-URI\":\"http://tobyho.com/2010/03/04/memoize-for-javascript/\",\"WARC-Payload-Digest\":\"sha1:HB2SGW4L7C4HDZYIPPVCQSQBE25C25OJ\",\"WARC-Block-Digest\":\"sha1:Q4Z7UHZATGWTX4PHTAG4ZYPMCSGNULTX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986669057.0_warc_CC-MAIN-20191016163146-20191016190646-00044.warc.gz\"}"} |
https://en.wikipedia.org/wiki/Weakened_weak_form | [
"# Weakened weak form\n\nWeakened weak form (or W2 form) is used in the formulation of general numerical methods based on meshfree methods and/or finite element method settings. These numerical methods are applicable to solid mechanics as well as fluid dynamics problems.\n\n## Description\n\nFor simplicity we choose elasticity problems (2nd order PDE) for our discussion. Our discussion is also most convenient in reference to the well-known weak and strong form. In a strong formulation for an approximate solution, we need to assume displacement functions that are 2nd order differentiable. In a weak formulation, we create linear and bilinear forms and then search for a particular function (an approximate solution) that satisfy the weak statement. The bilinear form uses gradient of the functions that has only 1st order differentiation. Therefore, the requirement on the continuity of assumed displacement functions is weaker than in the strong formulation. In a discrete form (such as the Finite element method, or FEM), a sufficient requirement for an assumed displacement function is piecewise continuous over the entire problems domain. This allows us to construct the function using elements (but making sure it is continuous a long all element interfaces), leading to the powerful FEM.\n\nNow, in a weakened weak (W2) formulation, we further reduce the requirement. We form a bilinear form using only the assumed function (not even the gradient). This is done by using the so-called generalized gradient smoothing technique, with which one can approximate the gradient of displacement functions for certain class of discontinuous functions, as long as they are in a proper G space. Since we do not have to actually perform even the 1st differentiation to the assumed displacement functions, the requirement on the consistence of the functions are further reduced, and hence the weakened weak or W2 formulation.\n\n## History\n\nThe development of systematic theory of the weakened weak form started from the works on meshfree methods. It is relatively new, but had very rapid development in the past few years.[when?]\n\n## Features of W2 formulations\n\n1. The W2 formulation offers possibilities for formulate various (uniformly) \"soft\" models that works well with triangular meshes. Because triangular mesh can be generated automatically, it becomes much easier in re-meshing and hence automation in modeling and simulation. This is very important for our long-term goal of development of fully automated computational methods.\n2. In addition, W2 models can be made soft enough (in uniform fashion) to produce upper bound solutions (for force-driving problems). Together with stiff models (such as the fully compatible FEM models), one can conveniently bound the solution from both sides. This allows easy error estimation for generally complicated problems, as long as a triangular mesh can be generated. This is important for producing so-called certified solutions.\n3. W2 models can be built free from volumetric locking, and possibly free from other types of locking phenomena.\n4. W2 models provide the freedom to assume separately the displacement gradient of the displacement functions, offering opportunities for ultra-accurate and super-convergent models. It may be possible to construct linear models with energy convergence rate of 2.\n5. W2 models are often found less sensitive to mesh distortion.\n6. 
W2 models are found effective for low order methods\n\n## Existing W2 models\n\nTypical W2 models are the smoothed point interpolation methods (or S-PIM). The S-PIM can be node-based (known as NS-PIM or LC-PIM), edge-based (ES-PIM), and cell-based (CS-PIM). The NS-PIM was developed using the so-called SCNI technique. It was then discovered that NS-PIM is capable of producing upper bound solution and volumetric locking free. The ES-PIM is found superior in accuracy, and CS-PIM behaves in between the NS-PIM and ES-PIM. Moreover, W2 formulations allow the use of polynomial and radial basis functions in the creation of shape functions (it accommodates the discontinuous displacement functions, as long as it is in G1 space), which opens further rooms for future developments. The S-FEM is largely the linear version of S-PIM, but with most of the properties of the S-PIM and much simpler. It has also variations of NS-FEM, ES-FEM and CS-FEM. The major property of S-PIM can be found also in S-FEM. The S-FEM models are:\n\n## Applications\n\nSome of the applications of W2 models are:\n\n1. Mechanics for solids, structures and piezoelectrics;\n2. Fracture mechanics and crack propagation;\n3. Heat transfer;\n4. Structural acoustics;\n5. Nonlinear and contact problems;\n6. Stochastic analysis;"
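To make the gradient smoothing idea above concrete, here is a sketch in standard notation (added for illustration; this is the generic SCNI-style construction, not necessarily the article's exact formulas). Over each smoothing domain $\Omega_k$ with area $A_k$ and boundary $\Gamma_k$, the gradient is replaced by its average, which the divergence theorem turns into a boundary integral of the function values alone:

$$\bar{\nabla} u(\mathbf{x}_k) = \frac{1}{A_k} \int_{\Omega_k} \nabla u \, d\Omega = \frac{1}{A_k} \oint_{\Gamma_k} u \, \mathbf{n} \, d\Gamma$$

A smoothed bilinear form such as $\bar{a}(u,v) = \sum_k A_k \, \bar{\nabla} u(\mathbf{x}_k) \cdot \bar{\nabla} v(\mathbf{x}_k)$ therefore requires no differentiation of the assumed functions, only their values on each $\Gamma_k$; this is why discontinuous functions in a proper G space become admissible.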
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.79461986,"math_prob":0.92763263,"size":13316,"snap":"2019-35-2019-39","text_gpt3_token_len":3393,"char_repetition_ratio":0.15835336,"word_repetition_ratio":0.103838585,"special_character_ratio":0.2689997,"punctuation_ratio":0.15649453,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9602282,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-17T22:21:31Z\",\"WARC-Record-ID\":\"<urn:uuid:c10ce222-f441-43fc-a888-9860ae89ce1c>\",\"Content-Length\":\"65968\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:13eb16ad-8179-4343-a9c9-04222fac4ca5>\",\"WARC-Concurrent-To\":\"<urn:uuid:d430347b-6f6a-4905-bdd3-9bfad754f3cb>\",\"WARC-IP-Address\":\"208.80.154.224\",\"WARC-Target-URI\":\"https://en.wikipedia.org/wiki/Weakened_weak_form\",\"WARC-Payload-Digest\":\"sha1:RW2LZUITDTRLKW4QQWR4O6ZD627FSHIS\",\"WARC-Block-Digest\":\"sha1:KMUHKCB4WST4RQI2OXFUR6V2R5JA45QW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514573121.4_warc_CC-MAIN-20190917203354-20190917225354-00196.warc.gz\"}"} |
https://lists.oasis-open.org/archives/relax-ng/200110/msg00079.html | [
"OASIS Mailing List ArchivesView the OASIS mailing list archive below\nor browse/search using MarkMail.\n\n# relax-ng message\n\n[Date Prev] | [Thread Prev] | [Thread Next] | [Date Next] -- [Date Index] | [Thread Index] | [Elist Home]\n\nSubject: [relax-ng] Algebraic characterization of RELAX NG patterns\n\n• From: Murata Makoto <[email protected]>\n• To: [email protected]\n• Date: Sun, 21 Oct 2001 22:55:26 +0900\n\nAlgebraic characterization of RELAX NG patterns\n\n2001 10/21\nMurata Makoto\[email protected]\n\n1. Introduction\n\nI have argued that infinite-nameclass <attribute> should be allowed only when\nit is a descendant of <oneOrMore>. With this restriction, we have a nice\ncharacterization theorem of RELAX NG patterns.\n\n2. Regular languages and syntactic monoids\n\nWe first consider regular languages. It is well known that regular\nexpressions and finite automata exactly capture regular languages.\nMeanwhile, another interesting characterization is given by syntactic\nmonoids.\n\nIn preparation, let $\\Sigma$ be a finite set. The set of strings over\n$\\Sigma$ is denoted by $\\Sigma^*$.\n\nTheorem 1:\n\nA subset $L$ of $\\Sigma^*$ is regular if and only if\n$L$ is the inverse image of $X$ under $\\phi$, where\n\n- $N$ is a finite monoid; that is,\n\n- $N$ is a finite set,\n\n- $N$ has a binary operation $\\circ$,\n\n- $\\circ$ has an identity $0$ (i.e.,\n$0 \\circ n = n \\circ 0 = n$ for every $n$\nin $N$), and\n\n- $\\circ$ is associative; that is,\n$(s_1 \\circ s_2) \\circ s_3 = s_1 \\circ (s_2 \\circ s_3)$,\n\n- $\\phi$ is a homomorphism from $\\Sigma^*$ to $N$; that is,\n$\\phi(u v) = \\phi(u) \\circ \\phi(v)$ for every $u, v$ in $\\Sigma^*$,\nand\n\n- $X$ is a subset of $N$.\n\n3. Generalization for RELAX NG patterns\n\nFor sake of simplicity, we ignore <text>, <list>, <data>, <value>. 
We\nalso assume that any pattern is allowed as the child of <start>.\n\nIn prepartion, let $\\Delta$ be a finite set of disjoint name classes\nfor attributes, and let $\\Sigma$ be a finite set of disjoint name\nclasses for elements.\n\nWe can nicely generalize Theorem 1 as below.\n\nTheorem 2:\n\nA subset $L$ of $(2 ^ \\Delta) \\times \\Sigma ^*$ is described by a\nRELAX NG pattern over $\\Delta$ and $\\Sigma$ if and only if $L$ is the\ninverse image of $X$ under $\\xi \\oplus \\phi$, where\n\n- $M$ is a finite set such that\n\n- $M$ has a binary operation $+$,\n\n- $+$ has an identity $0$ (i.e.,\n$0 + m = m + 0 = m$ for every $m$\nin $M$),\n\n- $+$ is associative; that is,\n$(m_1 + m_2) + m_3 = m_1 + (m_2 + m_3)$,\nand\n\n- $+$ is commutative; that is,\n$m_1 + m_2 = m_2 + m_1$, and\n\n- $+$ is idempotent; that is, $m_1 + m_1 = m_1$\n\n- $\\xi$ is a homomorphism from $2^\\Delta$ to $M$; that is,\n$\\xi(\\Delta_1 \\cup \\Delta_2) = \\xi(\\Delta_1) + \\xi(\\Delta_2)$\nfor every subset $\\Delta_1, \\Delta_2$ of $\\Sigma^*$\n\n- $N$ is a finite monoid; that is,\n\n- $N$ is a finite set,\n\n- $N$ has a binary operation $\\circ$,\n\n- $\\circ$ has an identity $1$ (i.e.,\n$1 \\circ n = n \\circ 1 = n$ for every $n$\nin $N$), and\n\n- $\\circ$ is associative; that is,\n$(n_1 \\circ n_2) \\circ n_3 = n_1 \\circ (n_2 \\circ n_3)$.\n\n- $\\phi$ is a homomorphism from $\\Sigma^*$ to $N$, that is,\n$\\phi(u v) = \\phi(u) \\circ \\phi(v)$ for every $u, v$ in $\\Sigma^*$\n\n- $\\xi \\oplus \\phi$ is a mapping from $2^\\Delta \\times \\Sigma^*$\nto $M \\times N$ defined as $\\xi \\oplus \\phi(\\Delta_1, u) = (\\xi(\\Delta_1), \\phi(, u))$\n\n- $X$ is a subset of $M \\times N$.\n\nNote: Observe that idempotency does not hold, if we allow\ninfinite-nameclass <attribute> without an ascending <oneOrMore>,\n\n----\nMurata Makoto [email protected]\n\n\n[Date Prev] | [Thread Prev] | [Thread Next] | [Date Next] -- [Date Index] | [Thread Index] | [Elist Home]"
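A concrete instance of Theorem 1 (worked example added here for illustration; it is not part of the archived message): let $\Sigma = \{a, b\}$ and let $L$ be the set of strings containing an even number of $a$'s. Take $N = \{0, 1\}$ with addition modulo 2, a finite monoid with identity $0$; define $\phi(a) = 1$ and $\phi(b) = 0$, extended by $\phi(uv) = \phi(u) \circ \phi(v)$; and set $X = \{0\}$. Then $\phi(w)$ is the parity of the number of $a$'s in $w$, so $L = \phi^{-1}(X)$, i.e. $L$ is the inverse image of a subset of a finite monoid, exactly as the theorem requires.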
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.82884073,"math_prob":0.99993145,"size":3371,"snap":"2023-40-2023-50","text_gpt3_token_len":1115,"char_repetition_ratio":0.13157113,"word_repetition_ratio":0.27242523,"special_character_ratio":0.35953724,"punctuation_ratio":0.12996942,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999997,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-03T10:54:47Z\",\"WARC-Record-ID\":\"<urn:uuid:7e931835-038e-4ec7-a93a-9fe82f7686cd>\",\"Content-Length\":\"8940\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c213201b-6233-4c44-95c2-d749e8bc89d8>\",\"WARC-Concurrent-To\":\"<urn:uuid:7640baa3-b505-44b9-8ba2-bccf0defa88d>\",\"WARC-IP-Address\":\"34.233.166.49\",\"WARC-Target-URI\":\"https://lists.oasis-open.org/archives/relax-ng/200110/msg00079.html\",\"WARC-Payload-Digest\":\"sha1:SUD555BDXOSC2PQZNKBN6PIWEHTFUHRN\",\"WARC-Block-Digest\":\"sha1:KQGPR4XKG3GK2ZNUVE35SGMCG5UBQUFU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100499.43_warc_CC-MAIN-20231203094028-20231203124028-00402.warc.gz\"}"} |
http://isabelle.in.tum.de/repos/isabelle/diff/baefae13ad37/src/HOL/UNITY/UNITY_tactics.ML | [
"src/HOL/UNITY/UNITY_tactics.ML\n changeset 13797 baefae13ad37 parent 13796 19f50fa807ae child 13798 4c1a53627500\n``` 1.1 --- a/src/HOL/UNITY/UNITY_tactics.ML\tThu Jan 30 10:35:56 2003 +0100\n1.2 +++ b/src/HOL/UNITY/UNITY_tactics.ML\tThu Jan 30 18:08:09 2003 +0100\n1.3 @@ -6,6 +6,215 @@\n1.4 Specialized UNITY tactics, and ML bindings of theorems\n1.5 *)\n1.6\n1.7 +(*UNITY*)\n1.8 +val constrains_def = thm \"constrains_def\";\n1.9 +val stable_def = thm \"stable_def\";\n1.10 +val invariant_def = thm \"invariant_def\";\n1.11 +val increasing_def = thm \"increasing_def\";\n1.12 +val Allowed_def = thm \"Allowed_def\";\n1.13 +val Id_in_Acts = thm \"Id_in_Acts\";\n1.14 +val insert_Id_Acts = thm \"insert_Id_Acts\";\n1.15 +val Id_in_AllowedActs = thm \"Id_in_AllowedActs\";\n1.16 +val insert_Id_AllowedActs = thm \"insert_Id_AllowedActs\";\n1.17 +val Init_eq = thm \"Init_eq\";\n1.18 +val Acts_eq = thm \"Acts_eq\";\n1.19 +val AllowedActs_eq = thm \"AllowedActs_eq\";\n1.20 +val surjective_mk_program = thm \"surjective_mk_program\";\n1.21 +val program_equalityI = thm \"program_equalityI\";\n1.22 +val program_equalityE = thm \"program_equalityE\";\n1.23 +val program_equality_iff = thm \"program_equality_iff\";\n1.24 +val def_prg_Init = thm \"def_prg_Init\";\n1.25 +val def_prg_Acts = thm \"def_prg_Acts\";\n1.26 +val def_prg_AllowedActs = thm \"def_prg_AllowedActs\";\n1.27 +val def_prg_simps = thm \"def_prg_simps\";\n1.28 +val def_act_simp = thm \"def_act_simp\";\n1.29 +val def_set_simp = thm \"def_set_simp\";\n1.30 +val constrainsI = thm \"constrainsI\";\n1.31 +val constrainsD = thm \"constrainsD\";\n1.32 +val constrains_empty = thm \"constrains_empty\";\n1.33 +val constrains_empty2 = thm \"constrains_empty2\";\n1.34 +val constrains_UNIV = thm \"constrains_UNIV\";\n1.35 +val constrains_UNIV2 = thm \"constrains_UNIV2\";\n1.36 +val constrains_weaken_R = thm \"constrains_weaken_R\";\n1.37 +val constrains_weaken_L = thm \"constrains_weaken_L\";\n1.38 +val constrains_weaken = thm \"constrains_weaken\";\n1.39 +val constrains_Un = thm \"constrains_Un\";\n1.40 +val constrains_UN = thm \"constrains_UN\";\n1.41 +val constrains_Un_distrib = thm \"constrains_Un_distrib\";\n1.42 +val constrains_UN_distrib = thm \"constrains_UN_distrib\";\n1.43 +val constrains_Int_distrib = thm \"constrains_Int_distrib\";\n1.44 +val constrains_INT_distrib = thm \"constrains_INT_distrib\";\n1.45 +val constrains_Int = thm \"constrains_Int\";\n1.46 +val constrains_INT = thm \"constrains_INT\";\n1.47 +val constrains_imp_subset = thm \"constrains_imp_subset\";\n1.48 +val constrains_trans = thm \"constrains_trans\";\n1.49 +val constrains_cancel = thm \"constrains_cancel\";\n1.50 +val unlessI = thm \"unlessI\";\n1.51 +val unlessD = thm \"unlessD\";\n1.52 +val stableI = thm \"stableI\";\n1.53 +val stableD = thm \"stableD\";\n1.54 +val stable_UNIV = thm \"stable_UNIV\";\n1.55 +val stable_Un = thm \"stable_Un\";\n1.56 +val stable_UN = thm \"stable_UN\";\n1.57 +val stable_Int = thm \"stable_Int\";\n1.58 +val stable_INT = thm \"stable_INT\";\n1.59 +val stable_constrains_Un = thm \"stable_constrains_Un\";\n1.60 +val stable_constrains_Int = thm \"stable_constrains_Int\";\n1.61 +val stable_constrains_stable = thm \"stable_constrains_stable\";\n1.62 +val invariantI = thm \"invariantI\";\n1.63 +val invariant_Int = thm \"invariant_Int\";\n1.64 +val increasingD = thm \"increasingD\";\n1.65 +val increasing_constant = thm \"increasing_constant\";\n1.66 +val mono_increasing_o = thm \"mono_increasing_o\";\n1.67 +val strict_increasingD = thm 
\"strict_increasingD\";\n1.68 +val elimination = thm \"elimination\";\n1.69 +val elimination_sing = thm \"elimination_sing\";\n1.70 +val constrains_strongest_rhs = thm \"constrains_strongest_rhs\";\n1.71 +val strongest_rhs_is_strongest = thm \"strongest_rhs_is_strongest\";\n1.72 +val Un_Diff_Diff = thm \"Un_Diff_Diff\";\n1.73 +val Int_Union_Union = thm \"Int_Union_Union\";\n1.74 +val Image_less_than = thm \"Image_less_than\";\n1.75 +val Image_inverse_less_than = thm \"Image_inverse_less_than\";\n1.76 +\n1.77 +(*WFair*)\n1.78 +val stable_transient_empty = thm \"stable_transient_empty\";\n1.79 +val transient_strengthen = thm \"transient_strengthen\";\n1.80 +val transientI = thm \"transientI\";\n1.81 +val transientE = thm \"transientE\";\n1.82 +val transient_UNIV = thm \"transient_UNIV\";\n1.83 +val transient_empty = thm \"transient_empty\";\n1.84 +val ensuresI = thm \"ensuresI\";\n1.85 +val ensuresD = thm \"ensuresD\";\n1.86 +val ensures_weaken_R = thm \"ensures_weaken_R\";\n1.87 +val stable_ensures_Int = thm \"stable_ensures_Int\";\n1.88 +val stable_transient_ensures = thm \"stable_transient_ensures\";\n1.89 +val ensures_eq = thm \"ensures_eq\";\n1.101 +val subset_imp_ensures = thm \"subset_imp_ensures\";\n1.122 +val psp_stable = thm \"psp_stable\";\n1.123 +val psp_stable2 = thm \"psp_stable2\";\n1.124 +val psp_ensures = thm \"psp_ensures\";\n1.125 +val psp = thm \"psp\";\n1.126 +val psp2 = thm \"psp2\";\n1.127 +val psp_unless = thm \"psp_unless\";\n1.130 +val bounded_induct = thm \"bounded_induct\";\n1.131 +val lessThan_induct = thm \"lessThan_induct\";\n1.132 +val lessThan_bounded_induct = thm \"lessThan_bounded_induct\";\n1.133 +val greaterThan_bounded_induct = thm \"greaterThan_bounded_induct\";\n1.137 +val wlt_increasing = thm \"wlt_increasing\";\n1.138 +val lemma1 = thm \"lemma1\";\n1.140 +val wlt_constrains_wlt = thm \"wlt_constrains_wlt\";\n1.141 +val completion_lemma = thm \"completion_lemma\";\n1.142 +val completion = thm \"completion\";\n1.143 +val finite_completion_lemma = thm \"finite_completion_lemma\";\n1.144 +val finite_completion = thm \"finite_completion\";\n1.145 +val stable_completion = thm \"stable_completion\";\n1.146 +val finite_stable_completion = thm \"finite_stable_completion\";\n1.147 +\n1.148 +(*Constrains*)\n1.149 +val Increasing_def = thm \"Increasing_def\";\n1.150 +val reachable_Init = thm \"reachable.Init\";\n1.151 +val reachable_Acts = thm \"reachable.Acts\";\n1.152 +val reachable_equiv_traces = thm \"reachable_equiv_traces\";\n1.153 +val Init_subset_reachable = thm \"Init_subset_reachable\";\n1.154 +val stable_reachable = thm \"stable_reachable\";\n1.155 +val invariant_reachable = thm \"invariant_reachable\";\n1.156 +val invariant_includes_reachable = thm \"invariant_includes_reachable\";\n1.157 +val constrains_reachable_Int = thm \"constrains_reachable_Int\";\n1.158 +val Constrains_eq_constrains = thm \"Constrains_eq_constrains\";\n1.159 +val constrains_imp_Constrains = thm \"constrains_imp_Constrains\";\n1.160 +val stable_imp_Stable = thm \"stable_imp_Stable\";\n1.161 +val ConstrainsI = thm \"ConstrainsI\";\n1.162 +val Constrains_empty = thm \"Constrains_empty\";\n1.163 +val Constrains_UNIV = thm \"Constrains_UNIV\";\n1.164 +val Constrains_weaken_R = thm \"Constrains_weaken_R\";\n1.165 +val Constrains_weaken_L = thm \"Constrains_weaken_L\";\n1.166 +val Constrains_weaken = thm \"Constrains_weaken\";\n1.167 +val Constrains_Un = thm \"Constrains_Un\";\n1.168 +val Constrains_UN = thm \"Constrains_UN\";\n1.169 +val Constrains_Int = thm 
\"Constrains_Int\";\n1.170 +val Constrains_INT = thm \"Constrains_INT\";\n1.171 +val Constrains_imp_subset = thm \"Constrains_imp_subset\";\n1.172 +val Constrains_trans = thm \"Constrains_trans\";\n1.173 +val Constrains_cancel = thm \"Constrains_cancel\";\n1.174 +val Stable_eq = thm \"Stable_eq\";\n1.175 +val Stable_eq_stable = thm \"Stable_eq_stable\";\n1.176 +val StableI = thm \"StableI\";\n1.177 +val StableD = thm \"StableD\";\n1.178 +val Stable_Un = thm \"Stable_Un\";\n1.179 +val Stable_Int = thm \"Stable_Int\";\n1.180 +val Stable_Constrains_Un = thm \"Stable_Constrains_Un\";\n1.181 +val Stable_Constrains_Int = thm \"Stable_Constrains_Int\";\n1.182 +val Stable_UN = thm \"Stable_UN\";\n1.183 +val Stable_INT = thm \"Stable_INT\";\n1.184 +val Stable_reachable = thm \"Stable_reachable\";\n1.185 +val IncreasingD = thm \"IncreasingD\";\n1.186 +val mono_Increasing_o = thm \"mono_Increasing_o\";\n1.187 +val strict_IncreasingD = thm \"strict_IncreasingD\";\n1.188 +val increasing_imp_Increasing = thm \"increasing_imp_Increasing\";\n1.189 +val Increasing_constant = thm \"Increasing_constant\";\n1.190 +val Elimination = thm \"Elimination\";\n1.191 +val Elimination_sing = thm \"Elimination_sing\";\n1.192 +val AlwaysI = thm \"AlwaysI\";\n1.193 +val AlwaysD = thm \"AlwaysD\";\n1.194 +val AlwaysE = thm \"AlwaysE\";\n1.195 +val Always_imp_Stable = thm \"Always_imp_Stable\";\n1.196 +val Always_includes_reachable = thm \"Always_includes_reachable\";\n1.197 +val invariant_imp_Always = thm \"invariant_imp_Always\";\n1.198 +val Always_reachable = thm \"Always_reachable\";\n1.199 +val Always_eq_invariant_reachable = thm \"Always_eq_invariant_reachable\";\n1.200 +val Always_eq_includes_reachable = thm \"Always_eq_includes_reachable\";\n1.201 +val Always_UNIV_eq = thm \"Always_UNIV_eq\";\n1.202 +val UNIV_AlwaysI = thm \"UNIV_AlwaysI\";\n1.203 +val Always_eq_UN_invariant = thm \"Always_eq_UN_invariant\";\n1.204 +val Always_weaken = thm \"Always_weaken\";\n1.205 +val Always_Constrains_pre = thm \"Always_Constrains_pre\";\n1.206 +val Always_Constrains_post = thm \"Always_Constrains_post\";\n1.207 +val Always_ConstrainsI = thm \"Always_ConstrainsI\";\n1.208 +val Always_ConstrainsD = thm \"Always_ConstrainsD\";\n1.209 +val Always_Constrains_weaken = thm \"Always_Constrains_weaken\";\n1.210 +val Always_Int_distrib = thm \"Always_Int_distrib\";\n1.211 +val Always_INT_distrib = thm \"Always_INT_distrib\";\n1.212 +val Always_Int_I = thm \"Always_Int_I\";\n1.213 +val Always_Compl_Un_eq = thm \"Always_Compl_Un_eq\";\n1.214 +val Always_thin = thm \"Always_thin\";\n1.215 +\n1.216 (*FP*)\n1.217 val stable_FP_Orig_Int = thm \"stable_FP_Orig_Int\";\n1.218 val FP_Orig_weakest = thm \"FP_Orig_weakest\";\n1.219 @@ -473,6 +682,18 @@\n1.221\n1.222\n1.223 +(*Lazy unfolding of actions or of sets*)\n1.224 +fun simp_of_act def = def RS def_act_simp;\n1.225 +\n1.226 +fun simp_of_set def = def RS def_set_simp;\n1.227 +\n1.228 +\n1.229 +(*Combines two invariance ASSUMPTIONS into one. 
USEFUL??*)\n1.230 +val Always_Int_tac = dtac Always_Int_I THEN' assume_tac THEN' etac Always_thin\n1.231 +\n1.232 +(*Combines a list of invariance THEOREMS into one.*)\n1.233 +val Always_Int_rule = foldr1 (fn (th1,th2) => [th1,th2] MRS Always_Int_I)\n1.234 +\n1.235 (*proves \"co\" properties when the program is specified*)\n1.236 fun gen_constrains_tac(cs,ss) i =\n1.237 SELECT_GOAL\n1.238 @@ -504,6 +725,8 @@\n1.239 \t ALLGOALS (clarify_tac cs),\n1.240 \t ALLGOALS (asm_lr_simp_tac ss)]);\n1.241\n1.242 +fun constrains_tac st = gen_constrains_tac (claset(), simpset()) st;\n1.243 +\n1.244 fun ensures_tac sact = gen_ensures_tac (claset(), simpset()) sact;\n1.245\n1.246\n```"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.55108976,"math_prob":0.99898505,"size":12095,"snap":"2019-51-2020-05","text_gpt3_token_len":4166,"char_repetition_ratio":0.30783227,"word_repetition_ratio":0.0,"special_character_ratio":0.3642001,"punctuation_ratio":0.24858174,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99202317,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-09T18:26:24Z\",\"WARC-Record-ID\":\"<urn:uuid:9c63fa3b-29af-4b37-b1e1-882a9d4e2896>\",\"Content-Length\":\"38222\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4878b8cf-50c5-4594-8ec0-a6bb94c112a4>\",\"WARC-Concurrent-To\":\"<urn:uuid:d6bec0dd-7237-4d09-ba9e-706dc8ff1c74>\",\"WARC-IP-Address\":\"131.159.46.82\",\"WARC-Target-URI\":\"http://isabelle.in.tum.de/repos/isabelle/diff/baefae13ad37/src/HOL/UNITY/UNITY_tactics.ML\",\"WARC-Payload-Digest\":\"sha1:PODTUR7UERQ6XQTQXLF4SKFXCA2TWEDN\",\"WARC-Block-Digest\":\"sha1:OEJ3JFFF6FTXBT6RWSJRNLFLJDLKAAJB\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540521378.25_warc_CC-MAIN-20191209173528-20191209201528-00532.warc.gz\"}"} |
http://math.blue/page/2/ | [
"",
null,
"# Latest Posts\n\n## Approximating & Ordering\n\nApproximating and ordering irrational numbers.\n\n## Rational and Irrational Numbers\n\nHow to identify rational and irrational numbers.\n\n## Decimals to Fractions\n\nConverting terminating and repeating decimals into fractions.\n\n## Cube Roots\n\nQuick review on cube roots.\n\n## Adding & Subtracting with Negatives\n\nQuick review of adding and subtracting with negatives.\n\n## Rounding\n\nReview of rounding with decimals.\n\n## Square Roots\n\nPerfect square roots.\n\n## Exponent Rules\n\nExponent Rules with examples.\n\n## All Statistics\n\nThis is everything we do with statistics this year. Sorry – it is long this way! Feel free to skip ahead to the part you need.\n\n## All Transformations\n\nThis is a long one! Please just skip through to the transformation you need.\n\n## Pythagorean Theorem\n\nRight Triangles with the Pythagorean Theorem\n\n## Triangles\n\nHow to find the angles of a triangle."
]
| [
null,
"http://math.blue/wp-content/uploads/2016/11/mathlogo.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8149921,"math_prob":0.5538448,"size":854,"snap":"2019-13-2019-22","text_gpt3_token_len":178,"char_repetition_ratio":0.11764706,"word_repetition_ratio":0.0,"special_character_ratio":0.16978922,"punctuation_ratio":0.1,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95597076,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-19T08:23:22Z\",\"WARC-Record-ID\":\"<urn:uuid:a5df21e3-08f5-4bed-bd8e-925abc1973b0>\",\"Content-Length\":\"25768\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3ba6a744-59ae-4b4d-af37-a833c7e80ddd>\",\"WARC-Concurrent-To\":\"<urn:uuid:8f93ae3f-4c10-4344-ba51-ca971fe93eac>\",\"WARC-IP-Address\":\"5.104.110.230\",\"WARC-Target-URI\":\"http://math.blue/page/2/\",\"WARC-Payload-Digest\":\"sha1:WREQPAPTMUKFP6C5XJB3XGJC5MVAOHGD\",\"WARC-Block-Digest\":\"sha1:WBD6HGQDTKADOR444IEV457ROCV6DWBP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232254731.5_warc_CC-MAIN-20190519081519-20190519103519-00253.warc.gz\"}"} |
https://www.math.uni-hamburg.de/home/latschev/lehre/ws19/sympgeo.html | [
"",
null,
"",
null,
"",
null,
"UHH > Fakultäten > MIN-Fakultät > Mathematik > Personen > Janko Latschev STiNE | KUS-Portal |",
null,
"",
null,
"",
null,
"",
null,
"## Janko Latschev\n\n### Lecture course Symplectic geometry, Wintersemester 2019/20\n\nThe lectures take place Tuesday 2-4 and Thursday 10-12 in H5.\nThe exercise class takes place Thursday 12-14 in room Geom 430.\n\nThere will be oral exam opportunities on February 17/18 and on March 30/31. Active participation in the exercise classes is a prerequisite for admission to the exam. Please consult your STiNE-mail for details about your personal exam time.\nHere is a summary of the main exam topics, together with some general remarks concerning the exam.\n\nThe aim of this course is to give an introduction to modern symplectic geometry. We will start from its origins in Hamiltonian dynamics, cover linear symplectic geometry as well as basic properties of symplectic and contact manifolds (and their relation), and move on to discuss some of the typical questions and the methods which can be used to address them.\n\nThe exercise sheets will be posted here:\nSheet 1 Sheet 2 Sheet 3 Sheet 4 Sheet 5 Sheet 6\nSheet 7 Sheet 8 Sheet 9 Sheet 10 Sheet 11\n\nOther material will also appear here as needed.\n\n#### Log of lecture content\n\n 15.10. brief introduction; linear symplectic geometry: basic definitions: symplectic form, orthogonal complements, types of subspaces, existence of symplectic basis (adapted to a subspace), consequences, relation of standard symplectic structure to standard euclidean structure and standard complex structure on R2n 18.10. linear symplectic group, properties of symplectic matrices, relation of Sp(2n,R) to O(2n) and GL(n,C), fundamental group of Sp(2n,R), axiomatic characterization of the Maslov index for loops of symplectic matrices (statement) 22.10. proof of existence and uniqueness of Maslov index for loops of symplectic matrices, Lagrangian Grassmannian is U(n)/O(n), fundamental group, axiomatic characterization of the Maslov index for loops of Lagrangian subspaces 24.10. example and remarks on Maslov index, tamed and compatible complex structures: contractible set of choices, hermitian metrics and relation to symplectic forms, affine nonsqueezing theorem 29.10. basic properties of symplectic manifolds, examples: surfaces, cotangent bundles, complex projective space, symplectomorphisms, Hamiltonian vector fields and flows 05.11. examples of Hamiltonian flows, Poincaré recurrence for symplectomorphisms, symplectic vector fields, symplectic isotopies, Hamiltonian diffeomorphisms 07.11. long aside on Legendre transform, Moser's argument: application ot volume forms and to symplectic forms on closed manifolds 12.11. Darboux' theorem, types of submanifolds with examples, tubular neighborhood theorem of differential topology, symplectic vector bundles 14.11. neighborhood theorem for Lagrangian submanifolds, remark on generalizations to other types of submanifolds, description of C1-neighborhood of identity in Symp(M, ω), examples of Lagrangian submanifolds in R2n 19.11. remarks on Maslov index and symplectic energy for Lagrangian submanifolds; contact manifolds: definition and basic examples, hypersurfaces of contact type, symplectization 21.11. invariant description of symplectization, aside on origins of contact geometry: the method of characteristics for first order pde 26.11. contactomorphisms, Reeb vector field associated to a contact form, general contact vector fields and relation to contact Hamiltonians, Darboux' theorem for contact forms 28.11. 
Gray stability theorem, uniqueness of induced contact structure on hypersurfacves of contact type; existence of contact structures, Borman-Eliashberg-Murphy theorem, open books, contact forms supported by an open book, Thurston-Wikelnkempner theorem in dimension 3, existence of open books in dimension 3, existence of contact structures as a consequence 03.12. almost complex manifolds vs. complex manifolds: Nijenhuis tensor and Newlander-Nirenberg theorem, Kähler structures on manifolds: equivalent characterizations, examples 05.12. analysis on complex manifolds: complexified tangent and cotangent bundles, ∂ and ∂ operators, Kähler forms are (1,1)-forms with hermitian positive definite coefficient matrix, local Kähler potentials 10.12. examples of Kähler potentials, Stein manifolds, relation to Weinstein manifolds; the existence problem for symplectic structures 12.12. more motivating questions: the uniqueness problem for symplectic structures, C0 rigidity of symplectomorphisms, nonsqueezing theorem (statement), Hofer's metric on Ham(M,ω), Arnold's conjecture on fixpoints of Hamiltonian diffeomorphisms 17.12. J-holomorphic curves: definition and reformulations of the defining equation, energy of a map and energy of holomorphic curves in symplectic manifolds, preliminary remarks for the linearization of the ∂J operator 19.12. computation of the linearization of the ∂J operator at a J-holomorphic map, real and complex linear Cauchy-Riemann operators, relation of complex linear CR operators to holomorphic structures on the underlying complex vector bundle 07.01. crash course on Sobolev spaces, elliptic regularity for real-linear Cauchy-Riemann operators: Calderon-Zygmund inequality (statement, consequences and idea of proof), existence of local solutions to Du=0 for real linear Cauchy-Riemann operators 09.01. proof of local existence theorem, Carleman similarity principle and consequences, moduli spaces of j-holomorphic curves, statement of Fredholm property for linearized ∂J operator 14.01. proof of the Fredholm property for linearized ∂J operator and the index formula 16.01. manifold structure on universal moduli space of simple curves and existence of regular almost complex structures 21.01. compactness of moduli spaces of J-holomorphic curves: uniform gradient bounds yield convergence of a subsequence, failure of gradient bounds leads to bubbling 23.01. a simple example for bubbling, nonconstant spheres have a lower bound on their energy, rough statement of Gromov compactness for curves with a fixed underlying domain and energy bounds 28.01. proof of the nonsqueezing theorem using holomorphic curves: existence of J-holomorphic spheres in the fiber class through every point for every J, monotonicity lemma 30.01. some consequences of nonsqueezing and examples of other results from Gromov's paper introducing J-holomorphic curves\n\nThe following books and lecture notes are useful study material for various parts of the course.\n\nFor background on manifolds, flows, Lie derivative, etc.:\n\n F. Warner Foundations of differentiable manifolds and Lie groups Springer Verlag M. Spivak A comprehensive introduction to differential geometry, vol. 1 Publish or Perish I. Madsen, J.Tornehave From calculus to cohomology Cambridge University Press\n\nFor background on differential topology (tubular neighborhood theorem, intersection theory, etc.):\n\n J. Milnor Topology from the differentiable viewpoint The University of Virginia Press V. Guillemin, A. Pollack Differential topology Prentice Hall M. 
Hirsch Differential topology Springer Verlag\n\nFor general topics in symplectic geometry:\n\n D. McDuff, D. Salamon Introduction to symplectic topology Oxford University Press A. Canas da Silva Lectures on Symplectic Geometry Springer Lecture Notes in Mathematics 1764 H. Hofer, E. Zehnder Symplectic Invariants and Hamiltonian dynamics Birkhäuser L. Polterovich The Geometry of the Group of Symplectic Diffeomorphisms Birkhäuser\n\nFor contact topology:\n\n H. Geiges An introduction to contact topology Cambridge University Press\n\nFor holomorphic curves in symplectic geometry:\n\n D. McDuff, D. Salamon J-holomorphic curves in symplectic topology AMS Colloquium Series C. Wendl Lectures on holomorphic curves M. Audin, J. Lafontaine (eds.) Holomorphic curves in symplectic geometry Birkhäuser Progress in Math. 117\n\nFor some relations to physics:\n\n V.I. Arnold Mathematical methods of classical mechanics Springer Verlag V. Guillemin, S. Sternberg Symplectic techniques in physics Cambridge University Press",
null,
"",
null,
"Impressum 2020-01-31, Janko Latschev"
]
| [
null,
"https://www.math.uni-hamburg.de/icons/unilogo/UHH-Logo_2010.png",
null,
"https://www.math.uni-hamburg.de/icons/unilogo/fbmathe.jpg",
null,
"https://www.math.uni-hamburg.de/icons/logo_mathematik.jpg",
null,
"https://www.math.uni-hamburg.de/icons/klein/sitemap.jpg",
null,
"https://www.math.uni-hamburg.de/icons/klein/search.jpg",
null,
"https://www.math.uni-hamburg.de/icons/klein/help.jpg",
null,
"https://www.math.uni-hamburg.de/icons/klein/uk-flag-2.jpg",
null,
"https://www.math.uni-hamburg.de/icons/klein/backoff.png",
null,
"https://www.math.uni-hamburg.de/icons/klein/top.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8004552,"math_prob":0.87817043,"size":8093,"snap":"2020-10-2020-16","text_gpt3_token_len":1976,"char_repetition_ratio":0.14006676,"word_repetition_ratio":0.026701119,"special_character_ratio":0.21154083,"punctuation_ratio":0.15542522,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97997123,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-20T08:26:09Z\",\"WARC-Record-ID\":\"<urn:uuid:2d7d59f4-764c-49a1-aaa4-5947fbb618dc>\",\"Content-Length\":\"21904\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:42da0122-d6c2-4ac1-90d4-b9d7630cd834>\",\"WARC-Concurrent-To\":\"<urn:uuid:915aa5fb-1885-47ba-a088-5c7a52f9e39b>\",\"WARC-IP-Address\":\"134.100.223.28\",\"WARC-Target-URI\":\"https://www.math.uni-hamburg.de/home/latschev/lehre/ws19/sympgeo.html\",\"WARC-Payload-Digest\":\"sha1:BG4NXQWV22VCUWPCF4J7V56ZAFTFG64A\",\"WARC-Block-Digest\":\"sha1:N62MIXA43OTU5AZJXCHSKJEN5GXQXXWC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875144708.87_warc_CC-MAIN-20200220070221-20200220100221-00154.warc.gz\"}"} |
https://stats.stackexchange.com/questions/94436/using-central-limit-theorem-for-approximation | [
"# Using central limit theorem for approximation\n\nLet $X$ be a random varaible from a distribution with pdf $$f(x) = \\theta x^{\\theta-1}, \\quad 0< x < 1.$$\n\na) Name the distribution of $U=-\\ln(X)$ by first finding its density\n\nb) Let $X_1, X_2, \\ldots,X_n$ be independent and identically distributed random variables with pdf given by earlier with $\\theta$= 3. Using the result from a) and by the central limit theorem (CLT)\n\ni) find an approximation to $P(X_1 \\cdot X_2 \\cdot \\ldots\\cdot X_{30} \\leq 1.85 \\cdot 10^{-5})$\n\ni.e. for the probability of the product of the r.v's.\n\nMy attempt\n\nI found part a) by doing the transformation, and got an exponential with parameter $\\theta$ where my pdf is: $$f(x)=\\theta e^{-u\\theta} .$$ Now where I am struggling is with part i) where I am supposed to find an approximation using the CLT and together with the mean equal to $1/3$ and variance equal to $1/9$. I am able to show that with 30 observation we just take the product of every mean and variance from each individual observation giving us:\n\nmean = $(\\frac{1}{3})^{30}$\nvariance = $(\\frac{1}{9})^{30}$\n\nand then we can just substitute these into the z score by using central limit theorem $$P(X_1 \\cdot X_2 \\cdot \\ldots \\cdot X_{30} \\leq 1.85 \\cdot 10^{-5}) = P\\left(\\frac{X -\\mu }{\\sigma}\\leq \\frac {1.85 \\cdot 10^{-5} - (\\frac{1}{3})^{30}}{(\\frac{1}{9})^{15}} \\right)$$ Hence giving me an answer of $P(Z \\leq 3,808,985,943)$ which definitely cannot be correct. Would appreciate it if somebody could point out my mistake.\n\nReattempt:\n\nUsing the hint i got i managed to deduce the following\n\n$P(X_1 \\cdot X_2 \\cdot \\ldots \\cdot X_{30} \\leq 1.85 \\cdot 10^{-5})$\n\n$P(log X_1 \\cdot log X_2 \\cdot \\ldots \\cdot log X_{30} \\leq log 1.85 \\cdot 10^{-5})$\n\nP($\\sum_{k=1}^{30} log X_i \\leq log 1.85 \\cdot 10^{-5})$\n\nAnd since X random variable can be normally distributed X~ N( $\\frac{1}{3}$ , $\\frac{1}{9}$)\n\nSubstitute into the Z score\n\n$P(\\frac{ log X -\\mu }{\\frac{\\sigma}{\\sqrt(n)}})\\leq \\frac { \\frac{log1.85 \\cdot 10^{-5}}{30} - (\\frac{1}{3})}{(\\frac{\\frac{1}{3}}{\\sqrt(n)})}$\n\n$P(Z \\leq 11.45)$\n\nAm I on the right track? Is it correct to use $\\mu$ = $\\frac{1}{3}$ and $\\sigma$ = $\\frac{1}{3}$ in the z score?\n\n• This question seems to be a self-study question. Please add the self-study tag if appropriate. – QuantIbex Apr 20 '14 at 13:40\n• What does the notation $P(X_1,X_2, \\ldots, X_{30} \\leq 1.85 \\cdot 10^{-5})$ mean? Shouldn't it be $P(X_1 + X_2 + \\cdots + X_{30} \\leq 1.85 \\cdot 10^{-5})$? – QuantIbex Apr 20 '14 at 14:00\n• My apologies Quantlbex, it suppose to be the product of the random variables. $P(X_1 \\cdot X_2 \\cdot X_3, \\ldots,X_{30} \\leq 1.85 \\cdot 10^{-5})$ I apologies for the confusion – Ingrid Apr 20 '14 at 14:18\n• And how do we know that the product of i.i.d r.v.'s obeys the \"usual\" CLT? – Alecos Papadopoulos Apr 20 '14 at 14:40\n• Hint: $-\\ln (X_1X_2\\cdots X_n) = \\sum_i -\\ln(X_i) = \\sum_i Y_i$ is a sum of exponential random variables $Y_i$. Can you relate $P(X_1X_2\\cdots X_n \\leq a)$ to $P\\left(\\sum_i Y_i \\leq g(a)\\right) = P\\left(\\sum_i Y_i \\leq b\\right)$ for some suitably chosen function $g(\\cdot)$? – Dilip Sarwate Apr 20 '14 at 14:41\n\nThe Op started correctly in the Re-attempt, but then lost it, so I provide the analytical steps. 
By the definition of the natural logarithm and its base, we have\n\n$$X_i = \\frac 1{e^{-\\ln X_i}} \\Rightarrow \\prod_{i=1}^{30} X_i= \\frac 1{\\exp{\\{\\sum_{i=1}^{30}(-\\ln X_i)}\\}}$$\n\nSo\n\n$$P(X_1 \\cdot X_2 \\cdot \\ldots \\cdot X_{30} \\leq 1.85 \\cdot 10^{-5}) = P\\left(\\frac 1{\\exp{\\{\\sum_{i=1}^{30}(-\\ln X_i)}\\}} \\leq 1.85 \\cdot 10^{-5}\\right)$$\n\n$$=P\\left(\\frac {10^5}{1.85}\\leq \\exp{\\{\\sum_{i=1}^{30}(-\\ln X_i)}\\} \\right)=P\\left(\\ln\\left(\\frac {10^5}{1.85}\\right)\\leq \\sum_{i=1}^{30}(-\\ln X_i) \\right)$$\n\n$$=1-P\\left(\\sum_{i=1}^{30}(-\\ln X_i)\\leq \\ln\\left(\\frac {10^5}{1.85}\\right) \\right)$$\n\nwhich is essentially where the OP arrived. But then the CLT is not to be used for one variable but for this sum of variables.\n\nAs already mentioned in the comments, this sum is the sum of i.i.d. exponential random variables, each having mean $1/3$ and variance $1/9$. So the sum, call it $S_n$, has mean and variance $E(S_n) = 30\\frac 13 = 10$ and $\\operatorname{Var}(S_n)= 30\\frac 19 \\Rightarrow \\sigma_S = \\frac {\\sqrt{30}}{3}$\n\nThen the normal approximation through the CLT tells us that $[S_n-E(S_n)]/\\sigma_S \\approx Z\\sim N(0,1)$, and then numerical calculations."
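A numerical finish of that last step (sketch added for illustration; the helper function and its normal-CDF approximation are not part of the answer):

// z = (ln(1e5/1.85) - E[S_n]) / sigma_S, then P = 1 - Phi(z).

// Abramowitz-Stegun style polynomial approximation of the standard normal CDF.
function phi(z) {
    const t = 1 / (1 + 0.2316419 * Math.abs(z));
    const d = Math.exp(-z * z / 2) / Math.sqrt(2 * Math.PI);
    const p = d * t * (0.319381530 + t * (-0.356563782 + t * (1.781477937 +
              t * (-1.821255978 + t * 1.330274429))));
    return z >= 0 ? 1 - p : p;
}

const mean = 30 / 3;                             // E(S_n) = 10
const sigma = Math.sqrt(30) / 3;                 // sqrt(Var(S_n)) = sqrt(30/9)
const z = (Math.log(1e5 / 1.85) - mean) / sigma; // ~0.49
console.log(1 - phi(z));                         // ~0.31, the CLT approximation of the probability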
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.69045866,"math_prob":0.99994755,"size":2221,"snap":"2020-24-2020-29","text_gpt3_token_len":761,"char_repetition_ratio":0.14749661,"word_repetition_ratio":0.039215688,"special_character_ratio":0.3903647,"punctuation_ratio":0.07359307,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000068,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-06T21:00:57Z\",\"WARC-Record-ID\":\"<urn:uuid:dba8430b-e6cf-401e-b573-0d4017c97eb9>\",\"Content-Length\":\"150369\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cef1001a-da08-4a10-875c-fdf03923b5a7>\",\"WARC-Concurrent-To\":\"<urn:uuid:f15eee19-a98f-4691-936d-060058020367>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://stats.stackexchange.com/questions/94436/using-central-limit-theorem-for-approximation\",\"WARC-Payload-Digest\":\"sha1:XSTUGOG7GOWXZLBWHA7I7FZB57OQH6VW\",\"WARC-Block-Digest\":\"sha1:4S3QSVB7EB3VA2TGQDP5ZS3DMHCJ6H3X\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655890181.37_warc_CC-MAIN-20200706191400-20200706221400-00178.warc.gz\"}"} |
https://www.geeksforgeeks.org/_find_next-function-in-c-bitset-with-examples/ | [
"# _Find_next() function in C++ bitset with Examples\n\nThe _Find_next() is a built-in function in C++ Biteset class which returns an integer which refers the position of next set bit in bitset after index. If there isn’t any set bit after index, _Find_next(index) will return the size of the bitset.\n\nSyntax:\n\n```\niterator bitset._Find_next(index)\nor\nint bitset._Find_next(index)\n\n```\n\nParameters: The function accepts one mandatory parameter index which specifies the index after which the first set bit is to be found in the bitset.\n\nReturn Value: The function returns an integer which refers to the position of next set bit in bitset after specified index. If there isn’t any set bit after index(the specified index), _Find_next(index) will return the size of the bitset.\n\nBelow is the illustration of the above function:\n\nExample:\n\n `// C++ program for illustration ` `// of _Find_next() function ` ` ` `#include ` `using` `namespace` `std; ` ` ` `#define M 32 ` ` ` `int` `main() ` `{ ` ` ``// default constructor initializes with all bits 0 ` ` ``bitset bset; ` ` ``bitset bset1; ` ` ``bitset bset2; ` ` ` ` ``// 00000000000000000000000000100000 ` ` ``bset = 1; ` ` ` ` ``// 00000000000000000000010000100000 ` ` ``bset = 1; ` ` ` ` ``// 01000000000000100001000000000001 ` ` ``bset1 = bset1 = bset1 = bset1 = 1; ` ` ` ` ``// function returns the next set bit ` ` ``// in bitset after index 0 ` ` ``cout << ``\"Next set bit after index 0 in bset\\n\"``; ` ` ``cout << bset._Find_next(0) << ``\"\\n\"``; ` ` ` ` ``// function returns the next set bit ` ` ``// in bitset after index 6 ` ` ``cout << ``\"Next set bit after index 6 in bset\\n\"``; ` ` ``cout << bset._Find_next(6) << ``\"\\n\"``; ` ` ` ` ``// finds all set bits in bitset bset1 ` ` ``cout << ``\"Find all set bits in bset1\\n\"``; ` ` ``for` `(``int` `i = bset1._Find_first(); ` ` ``i < bset1.size(); ` ` ``i = bset1._Find_next(i)) ` ` ``cout << i << ``\" \"``; ` ` ``cout << ``\"\\n\"``; ` ` ` ` ``// function returns bset2.size() ` ` ``// when there isn't any set bit after index ` ` ``cout << ``\"Next set bit after index 5 in bset2\\n\"``; ` ` ``cout << bset2._Find_next(5) << ``\"\\n\"``; ` ` ` ` ``return` `0; ` `} `\n\nOutput:\n\n```Next set bit after index 0 in bset\n5\nNext set bit after index 6 in bset\n10\nFind all set bits in bset1\n0 12 17 30\nNext set bit after index 5 in bset2\n32\n```\n\nMy Personal Notes arrow_drop_up",
null
]
| [
null,
"https://media.geeksforgeeks.org/auth/profile/bcllsgje4xf22akfb969",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.59363467,"math_prob":0.84597343,"size":3048,"snap":"2019-51-2020-05","text_gpt3_token_len":885,"char_repetition_ratio":0.20565046,"word_repetition_ratio":0.18363273,"special_character_ratio":0.32545933,"punctuation_ratio":0.11545293,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98897386,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-17T16:41:55Z\",\"WARC-Record-ID\":\"<urn:uuid:0d3f6be0-d801-49d2-97da-de0d3104eef6>\",\"Content-Length\":\"130431\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e11f6507-c285-4252-9be4-4e85c6e32e67>\",\"WARC-Concurrent-To\":\"<urn:uuid:b7384f7b-c871-4108-bda7-1951b0a81d1b>\",\"WARC-IP-Address\":\"23.221.72.17\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/_find_next-function-in-c-bitset-with-examples/\",\"WARC-Payload-Digest\":\"sha1:LX5JWLA3C554RRJKFPZEU6ZK3ZVIBEYS\",\"WARC-Block-Digest\":\"sha1:FY7BO2CGFQ2LNDT7EVAMDN2UI6NWNHUJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250589861.0_warc_CC-MAIN-20200117152059-20200117180059-00098.warc.gz\"}"} |
https://biomedical-engineering-online.biomedcentral.com/articles/10.1186/1475-925X-13-92 | [
"# Sparse CT reconstruction based on multi-direction anisotropic total variation (MDATV)\n\n## Abstract\n\n### Background\n\nThe sparse CT (Computed Tomography), inspired by compressed sensing, means to introduce a prior information of image sparsity into CT reconstruction to reduce the input projections so as to reduce the potential threat of incremental X-ray dose to patients’ health. Recently, many remarkable works were concentrated on the sparse CT reconstruction from sparse (limited-angle or few-view style) projections. In this paper we would like to incorporate more prior information into the sparse CT reconstruction for improvement of performance. It is known decades ago that the given projection directions can provide information about the directions of edges in the restored CT image. ATV (Anisotropic Total Variation), a TV (Total Variation) norm based regularization, could use the prior information of image sparsity and edge direction simultaneously. But ATV can only represent the edge information in few directions and lose much prior information of image edges in other directions.\n\n### Methods\n\nTo sufficiently use the prior information of edge directions, a novel MDATV (Multi-Direction Anisotropic Total Variation) is proposed. In this paper we introduce the 2D-IGS (Two Dimensional Image Gradient Space), and combined the coordinate rotation transform with 2D-IGS to represent edge information in multiple directions. Then by incorporating this multi-direction representation into ATV norm we get the MDATV regularization. To solve the optimization problem based on the MDATV regularization, a novel ART (algebraic reconstruction technique) + MDATV scheme is outlined. And NESTA (NESTerov’s Algorithm) is proposed to replace GD (Gradient Descent) for minimizing the TV-based regularization.\n\n### Results\n\nThe numerical and real data experiments demonstrate that MDATV based iterative reconstruction improved the quality of restored image. NESTA is more suitable than GD for minimization of TV-based regularization.\n\n### Conclusions\n\nMDATV regularization can sufficiently use the prior information of image sparsity and edge information simultaneously. By incorporating more prior information, MDATV based approach could reconstruct the image more exactly.\n\n## Background\n\nCT (Computed Tomography) is one of the most important medical image technologies. Due to the potential cancer risk associated with the radiation dose in CT, recent technical development focuses on reducing radiation dose in CT. Several CT machine manufacturers have developed CT machines with iterative reconstruction algorithms to reduce radiation dose, such as the IRIS (Iterative Reconstruction in Image Space) of Siemens, ASiR (Adaptive Statistical Iterative Reconstruction) of GE (General Electric Co.), iDose (a trademark) of Philips and AIDR (Adaptive Iterative Dose Reduction) 3-D of Toshiba, which are reported to reduce the radiation dose by 30-50% while maintaining the reconstructed image quality comparable to the conventional FBP (Filtered Back Projection) method .\n\nOn the other hand, biomedical engineering researchers are trying to improve the performance of the iterative reconstruction algorithms by introducing more prior information of the reconstructed images to reduce the projection input and the radiation dose even more. One important prior information is image sparsity, whose usefulness has been testified by the recent popular compressed sensing technique in signal processing . 
Before compressed sensing, the sparsity of medical images had already been used for limited-angle tomography and few-view CT blood-vessel reconstruction. Then, following the compressed sensing theory, two kinds of sparse models were proposed for the sparse CT reconstruction. By 'sparse CT' we mean two things. First, a single CT image is approximately sparse like other natural images, and the information changes between successive CT images in dynamic CT are also approximately sparse. Second, the projection data are sparsely collected, as in limited-angle tomography and few-view CT, rather than densely collected over the full view as in a conventional CT scan. The first model uses the sparsity coming from the gradient transformation of the image, which is assumed to be approximately piecewise-constant, such as ART-TV-MIN (Algebraic Reconstruction Technique Total Variation MINimization) and ASD-POCS (Adaptive Steepest Descent Projection Onto Convex Sets). The second model uses the sparsity coming from the subtraction of the reconstructed image from its prior image, such as PICCS (Prior Image Constrained Compressed Sensing), PIRPLE (Prior Image Registered Penalized Likelihood Estimation) [14, 15] and FCCS (Feature Constrained Compressed Sensing).

In this paper we consider the methods based on the first model for the sparse CT reconstruction problems. In this model, the reconstruction problem can be written as a constrained optimization

$$\min \|\vec{f}\|_{\mathrm{reg}} \quad \text{subject to} \quad \mathcal{M}\vec{f} = \vec{g} \tag{1}$$

The equality constraint is a guarantee of the data fidelity, where $\mathcal{M}$ is the projection matrix modeling the forward projection, $\vec{f}$ is the image vector to be restored, and $\vec{g}$ is the projection vector. The objective function is a norm function of $\vec{f}$, which is a regularization introducing some prior information such as image sparsity. By using a norm function of $\vec{f}$, we want to obtain a sparse solution of this constrained optimization; therefore, the sparser $\vec{f}$ is, the smaller $\|\vec{f}\|_{\mathrm{reg}}$ should be. A commonly used norm function is the TV (Total Variation) norm. The optimization problem can also be written in an unconstrained form

$$\arg\min\left[\|\mathcal{M}\vec{f}-\vec{g}\|^{2}+\lambda \|\vec{f}\|_{\mathrm{reg}}^{2}\right] \tag{2}$$

where $\lambda$ is a regularization parameter adjusting the relative penalties on the sparseness and the data fidelity.

To promote the performance of sparse CT reconstruction, two kinds of improvements are often considered. The first kind focuses on the sparsity regularization. Sidky proposed the sparser TpV (Total p-Variation) norm to replace the TV norm. Yu used the Haar wavelet transformation to exploit sparsity. Jia replaced the TV norm with a tight frame and used a GPU (Graphics Processing Unit) to accelerate the reconstruction. Apart from replacing the TV norm, Tian, Liu, and Chang developed adaptive adjustment factors embedded in the TV norm to enhance the edge characteristics.
Recently, Liu introduced the TV-Stokes regularizer into sparse CT, which can smooth the isophotes in the target image so as to preserve the detail and smoothness of the reconstructed image. The second kind of improvement focuses on the algorithms for solving (2), which include CG (Conjugate Gradient), forward-backward splitting plus GPU, split-Bregman, GPBB (Gradient Projection Barzilai Borwein) plus GPU, the Chambolle-Pock algorithm, and ADMM (Alternating Direction Method of Multipliers). Besides, Niu and Zhu proposed to use a log-barrier term to approximate the data fidelity term, which removes the forward and backward projections of the traditional iterative algorithms like ART (Algebraic Reconstruction Technique).

In this paper, we consider the first kind of improvement. The MDATV (Multi-Direction Anisotropic Total Variation) is proposed to replace the TV norm. The aim of MDATV is to introduce into sparse CT reconstruction more prior information of edges, as the ATV (Anisotropic Total Variation) does [31, 32]. But ATV can only represent the edge information in a few directions, which loses much prior information of edge directions. This paper combines the coordinate rotation transform with the 2D-IGS (Two Dimensional Image Gradient Space), and thereby represents the edge information in multiple directions easily. Based on this multi-direction representation, MDATV can incorporate more edge information into the sparse CT reconstruction, so as to improve its performance.

The rest of the paper is organized as follows. In section Methods, firstly, the background of ART and ATV is reviewed. Then the MDATV norm and its corresponding minimization methods are described. Finally, the ART + MDATV scheme is outlined. Section Simulations and experiments uses the proposed MDATV based approach to solve sparse CT reconstruction problems in numerical simulations and with experimental data, so as to demonstrate its efficiency. Finally, we conclude with section Discussions and conclusions, discussing the pros and cons of MDATV.

## Methods

### Review of ART + ATV

#### ART

The ART + ATV method is built on the ART + TV method [11, 12]. ART updates the estimated image iteratively. Firstly, the estimated image is forward projected into the sinogram space; then the difference between the estimated sinogram and the given projections is back projected into the image space to update the estimated image. These steps are repeated until some termination criteria are met. This method is also known as POCS (Projection Onto Convex Sets) in linear algebra. The ART update formula is

$$\vec{f}[n,m+1]=\vec{f}[n,m]+\vec{M}_{m}^{T}\,\frac{g_{m}-\vec{M}_{m}\cdot\vec{f}[n,m]}{\vec{M}_{m}\cdot\vec{M}_{m}^{T}}$$

where $g_m$ is the $m$th element of the projection $\vec{g}$, $\vec{M}_m$ is the $m$th row vector of the projection matrix $\mathcal{M}$, and $\vec{f}[n,m]$ is the estimated image after fusing the $m$th element of the projection data during the $n$th iteration. The superscript $T$ denotes transpose.",
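"To make the update rule concrete, here is a minimal numpy sketch of one ART (Kaczmarz) sweep over a dense projection matrix. The dense matrix `M` and the helper name `art_sweep` are illustrative assumptions; the paper itself builds its system matrix with the 'Image reconstruction toolbox'.

```python
import numpy as np

def art_sweep(f, M, g):
    # f: flattened image vector (1-D); one pass over all rays:
    # f <- f + M_m^T * (g_m - M_m . f) / (M_m . M_m^T) for each row m
    for m in range(M.shape[0]):
        row = M[m]
        denom = row @ row            # M_m . M_m^T
        if denom > 0:                # skip rays that miss the image
            f = f + row * ((g[m] - row @ f) / denom)
    return f
```
",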
"This prior information significantly improves the quality of the reconstructed image at a low computational cost. The TV norm is defined as

$$\|\vec{f}\|_{\mathrm{TV}} := \sum_{i,j}\|\nabla f_{i,j}\|_{\ell_2}=\sum_{i,j}\sqrt{(D_{i}f_{i,j})^{2}+(D_{j}f_{i,j})^{2}} \tag{3}$$

where $f_{i,j}$ is the pixel on the $i$th row and the $j$th column of the target image, $D_{i}f_{i,j}=f_{i,j}-f_{i-1,j}$ and $D_{j}f_{i,j}=f_{i,j}-f_{i,j-1}$ are the vertical and horizontal gradients respectively, and $D_i$ and $D_j$ are the corresponding finite difference operators. Because image edges are orthogonal to their corresponding gradients, $D_{i}f_{i,j}$ and $D_{j}f_{i,j}$ can also represent edges in the horizontal and vertical directions respectively. The TV norm is isotropic, since it assigns the same energy to both the vertical and horizontal gradients. In contrast, the ATV norm is anisotropic and defined as

$$\|\vec{f}\|_{\mathrm{ATV}} := \sum_{i,j}\|\nabla_{A,B} f_{i,j}\|_{\ell_2}=\sum_{i,j}\sqrt{A(D_{i}f_{i,j})^{2}+B(D_{j}f_{i,j})^{2}} \tag{4}$$

where $A$ and $B$ are weights controlling the energy ratios of the vertical and horizontal gradients, respectively. When $A=B$ it is isotropic; otherwise it is anisotropic. (4) can also be written as

$$\|\vec{f}\|_{\mathrm{ATV}} := \sum_{i,j}\|\nabla_{\eta} f_{i,j}\|_{\ell_2}=\sum_{i,j}\sqrt{\eta(D_{i}f_{i,j})^{2}+(D_{j}f_{i,j})^{2}}$$

where $\eta=A/B$ is the weight factor. Therefore, we only need one parameter to control the energy ratio of the vertical and horizontal gradients without affecting the result of the optimization. In this paper we set $\eta=1000$. Based on experiments, this value of $\eta$ makes the edges in the enhanced direction strong enough, while keeping the potential artifacts in the suppressed direction weak enough. When $\eta$ is smaller, the suppression of the potential artifacts may be insufficient. A bigger $\eta$ is also valid, but the reconstructions hardly change any more.",
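"As a quick illustration, here is a minimal numpy sketch of the TV and ATV norms; the zero-padded boundary is an assumption, since the paper does not specify boundary handling:

```python
import numpy as np

def tv_norm(img):
    # eq. (3): forward differences, zero-padded at the boundary
    di = np.pad(np.diff(img, axis=0), ((1, 0), (0, 0)))  # f[i,j] - f[i-1,j]
    dj = np.pad(np.diff(img, axis=1), ((0, 0), (1, 0)))  # f[i,j] - f[i,j-1]
    return np.sum(np.sqrt(di**2 + dj**2))

def atv_norm(img, eta=1000.0):
    # eq. (4) with the single weight factor eta = A/B
    di = np.pad(np.diff(img, axis=0), ((1, 0), (0, 0)))
    dj = np.pad(np.diff(img, axis=1), ((0, 0), (1, 0)))
    return np.sum(np.sqrt(eta * di**2 + dj**2))
```
",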
"#### Motivation of ATV

The motivation of ATV is to introduce into sparse CT reconstruction the prior information of edge directions. This prior information was discovered by Quinto decades ago: in sparse CT reconstruction, the edge information tangent to the projection rays is measured and restored easily, while edges not tangent to the projection rays are harder to 'see'. This phenomenon can be explained theoretically by the central slice theorem.

Without loss of generality, we take the parallel projection as an example, shown in Figure 1. Suppose the projection rays in direction a produce a row of projections. Each point of the projection row corresponds to one projection ray, and each point's value is the line integral of the attenuation coefficients passed through by the corresponding projection ray. According to the central slice theorem, the 1-D Fourier transform of this projection row is the central slice perpendicular to a in the Fourier frequency space. In the 2D Fourier space, the central slice is a line passing through the origin. Thereby the projection data collected by projection rays in direction a correspond to a central slice perpendicular to a in the 2D Fourier space.

According to Fourier optics, if a family of edges in a 2D image lies in the direction a, then the Fourier transform of these edges is a line passing through the origin and perpendicular to a in the 2D Fourier frequency space, as shown in Figure 2. This means that the line passing through the origin and perpendicular to a in the 2D Fourier space corresponds to the edge information in direction a of the 2D image.

Comparing Figures 1 and 2, we find that the projection rays in direction a record the edge information in the same direction. Therefore, the edge information tangent to the projection rays is restored easily, since it is well recorded by the projection rays, while the edge information perpendicular to the projection rays is not recorded, and those edges are hard to 'see'.

The discussion above indicates that the geometry of the projections affects the sparse CT reconstruction. For example, in a limited-angle projection where most projections are tangent or approximately tangent to the horizontal direction, the isotropic TV regularization may produce artifacts in the vertical direction, which blur the clear edges in the horizontal direction: since the projections did not record any edge information in the vertical direction, the energy vacancy in that direction is filled by artifacts. ATV can handle this problem by assigning more energy to the horizontal direction and less to the vertical direction, thus enhancing the edge information in the horizontal direction and suppressing the artifacts in the vertical direction.

### MDATV

However, ATV can only represent edges in a few directions, which loses much prior information about edges in the other directions. To remedy this defect, MDATV is proposed to use the entire prior information of edges.

#### 2D-IGS

Before describing MDATV, we first introduce the 2D-IGS. In a 2D discrete image, $f_{i,j}$ is the pixel on the $i$th row and the $j$th column. The 2D-IGS $\Psi$ contains two parts: one is the vertical edges, denoted by the 2D matrix $E_v$, and the other is the horizontal edges, denoted by the 2D matrix $E_h$. They are defined as

$$\begin{cases} E_{h}[i,j] = D_{v}f_{i,j} = f_{i,j}-f_{i-1,j}\\ E_{v}[i,j] = D_{h}f_{i,j} = f_{i,j}-f_{i,j-1} \end{cases} \tag{5}$$

where $D_v$ and $D_h$ are the same as $D_i$ and $D_j$ in (3), representing the vertical and horizontal finite difference operators respectively. The $E_h$ and $E_v$ of the Shepp-Logan phantom are shown in Figure 3(b) and (e). The original image is generated by the function 'phantom(128)' in MATLAB®, as shown in Figure 3(a).",
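"A minimal numpy sketch of the 2D-IGS of eq. (5), again assuming zero boundary conditions; with scikit-image installed, `img` could be, for instance, `skimage.data.shepp_logan_phantom()`:

```python
import numpy as np

def image_gradient_space(img):
    # E_h: horizontal edges (vertical differences), eq. (5)
    # E_v: vertical edges (horizontal differences)
    E_h = np.zeros_like(img)
    E_v = np.zeros_like(img)
    E_h[1:, :] = img[1:, :] - img[:-1, :]   # D_v f = f[i,j] - f[i-1,j]
    E_v[:, 1:] = img[:, 1:] - img[:, :-1]   # D_h f = f[i,j] - f[i,j-1]
    return E_h, E_v
```
",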
"#### Edges in multiple directions

$E_h$ and $E_v$ represent the horizontal and vertical edges of the 2D image, which are used in the TV and ATV norms. To represent edges in other directions, the coordinate rotation transform is applied to the 2D-IGS.

First, we recall the elementary coordinate rotation transform in 2D space. Figure 4 shows the transform. The coordinate system $xOy$ rotates counterclockwise by $\theta$ degrees into the coordinate system $x'Oy'$, where $0° \le \theta \le 180°$. The target point $T$ in $xOy$ is denoted by $(x_0, y_0)$, and its rotated counterpart $T'$ is denoted by $(x_0', y_0')$ in $x'Oy'$; obviously, $x_0 = x_0'$ and $y_0 = y_0'$. When we need to process the rotated point $T'$ in $xOy$, we can project the coordinates of $T'$ in $x'Oy'$ onto the coordinate system $xOy$. Let $(p, q)$ denote the projected coordinates of $T'$ in $xOy$. By elementary geometry, the transform formula is

$$\begin{cases} p = x_0'\cos\theta - y_0'\sin\theta = x_0\cos\theta - y_0\sin\theta\\ q = x_0'\sin\theta + y_0'\cos\theta = x_0\sin\theta + y_0\cos\theta \end{cases} \tag{6}$$

where $p$ is the linear combination of the projections of $x_0'$ and $y_0'$ onto the $x$ axis, and $q$ is the linear combination of the projections of $x_0'$ and $y_0'$ onto the $y$ axis.

When the coordinate rotation transform is applied to the 2D-IGS $\Psi$, the horizontal edges $E_h$ correspond to the horizontal line segment $x_0$ in (6), and the vertical edges $E_v$ correspond to the vertical line segment $y_0$ in (6). After a counterclockwise rotation by $\theta$ degrees, we get the rotated 2D-IGS $\Psi'$, which contains the rotated horizontal and vertical edges $E_h'$ and $E_v'$. But we need to process the rotated horizontal and vertical edges in the 2D-IGS $\Psi$; therefore, we project $E_h'$ and $E_v'$ onto the 2D-IGS $\Psi$. Let $E_{hr}$ and $E_{vr}$ denote the projected $E_h'$ and $E_v'$. Then $E_{hr}$ and $E_{vr}$ correspond to $p$ and $q$ in (6).
Thus we have

$$\begin{cases} E_{hr}[i,j] = E_{h}[i,j]\cos\theta - E_{v}[i,j]\sin\theta\\ E_{vr}[i,j] = E_{h}[i,j]\sin\theta + E_{v}[i,j]\cos\theta \end{cases} \tag{7}$$

By adjusting the angle $\theta$, the edges in any direction can be denoted by (7). The rotated edges $E_{hr}$ and $E_{vr}$ of the Shepp-Logan phantom are shown in Figure 3(c) and (f) for a rotation angle of -45°. Based on (5), (7) can be rewritten with the finite difference operators

$$\begin{cases} D_{vr}[i,j] = D_{v}[i,j]\cos\theta - D_{h}[i,j]\sin\theta\\ D_{hr}[i,j] = D_{v}[i,j]\sin\theta + D_{h}[i,j]\cos\theta \end{cases}$$

where $D_{vr}$ and $D_{hr}$ are the rotated vertical and horizontal finite difference operators.

#### MDATV

Based on the multi-direction representation, the MDATV is defined as

$$\|\vec{f}\|_{\mathrm{MDATV}} := \sum_{i,j}\sqrt{\eta\left(E_{h}[i,j]\cos\theta - E_{v}[i,j]\sin\theta\right)^{2}+\left(E_{h}[i,j]\sin\theta + E_{v}[i,j]\cos\theta\right)^{2}} \tag{8}$$

where $E_h$ and $E_v$ are defined in (5), and $\eta$ is the weight adjusting the energy ratio of the rotated horizontal and vertical edges. Substituting (7) into (8), we get the simplified form of (8)

$$\|\vec{f}\|_{\mathrm{MDATV}} := \sum_{i,j}\sqrt{\eta\left(E_{hr}[i,j]\right)^{2}+\left(E_{vr}[i,j]\right)^{2}}$$

In this paper we set $\eta = 1000$ for the same reason as in ATV. Therefore, the rotated horizontal edges $E_{hr}$ represent the edges parallel to the projection rays; if $\eta < 1$, the rotated vertical edges $E_{vr}$ would represent the edges parallel to the projection rays.",
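"A minimal numpy sketch of the rotated edges of eq. (7) and the resulting MDATV norm (zero-padded boundaries assumed, as before):

```python
import numpy as np

def mdatv_norm(img, theta, eta=1000.0):
    # 2D-IGS of eq. (5)
    E_h = np.pad(np.diff(img, axis=0), ((1, 0), (0, 0)))
    E_v = np.pad(np.diff(img, axis=1), ((0, 0), (1, 0)))
    # rotated edges of eq. (7)
    c, s = np.cos(theta), np.sin(theta)
    E_hr = E_h * c - E_v * s
    E_vr = E_h * s + E_v * c
    # simplified form of eq. (8)
    return np.sum(np.sqrt(eta * E_hr**2 + E_vr**2))
```
",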
"In optimization problem (1), the objective function is some norm function, such as TV, ATV or MDATV. Whatever the objective function is, the optimization requires minimizing this function. One common method for the minimization is gradient descent (GD), so we need to compute the gradient of MDATV. Substituting (5) into (8), the MDATV gradient is

$$\begin{aligned} \frac{\partial \|\vec{f}\|_{\mathrm{MDATV}}}{\partial f_{i,j}} = {} & \frac{\eta(\cos\theta-\sin\theta)(E_{h1}\cos\theta-E_{v1}\sin\theta)+(\sin\theta+\cos\theta)(E_{h1}\sin\theta+E_{v1}\cos\theta)}{\sqrt{\eta(E_{h1}\cos\theta-E_{v1}\sin\theta)^{2}+(E_{h1}\sin\theta+E_{v1}\cos\theta)^{2}+\epsilon}}\\ & +\frac{-\eta\cos\theta\,(E_{h2}\cos\theta-E_{v2}\sin\theta)-\sin\theta\,(E_{h2}\sin\theta+E_{v2}\cos\theta)}{\sqrt{\eta(E_{h2}\cos\theta-E_{v2}\sin\theta)^{2}+(E_{h2}\sin\theta+E_{v2}\cos\theta)^{2}+\epsilon}}\\ & +\frac{\eta\sin\theta\,(E_{h3}\cos\theta-E_{v3}\sin\theta)-\cos\theta\,(E_{h3}\sin\theta+E_{v3}\cos\theta)}{\sqrt{\eta(E_{h3}\cos\theta-E_{v3}\sin\theta)^{2}+(E_{h3}\sin\theta+E_{v3}\cos\theta)^{2}+\epsilon}} \end{aligned} \tag{9}$$

where

$$\begin{cases} E_{h1}=f_{i,j}-f_{i,j-1}\\ E_{v1}=f_{i,j}-f_{i-1,j}\\ E_{h2}=f_{i,j+1}-f_{i,j}\\ E_{v2}=f_{i,j+1}-f_{i-1,j+1}\\ E_{h3}=f_{i+1,j}-f_{i+1,j-1}\\ E_{v3}=f_{i+1,j}-f_{i,j} \end{cases}$$

and $\epsilon$ is a small positive constant avoiding the singularity. In this paper we set $\epsilon = 1.0 \times 10^{-8}$.",
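"One cheap way to sanity-check an implementation of (9) is to compare it against a central finite-difference gradient of the ϵ-smoothed MDATV norm; this check and the helper names are illustrative, not part of the paper:

```python
import numpy as np

def mdatv_eps(img, theta, eta=1000.0, eps=1e-8):
    E_h = np.pad(np.diff(img, axis=0), ((1, 0), (0, 0)))
    E_v = np.pad(np.diff(img, axis=1), ((0, 0), (1, 0)))
    c, s = np.cos(theta), np.sin(theta)
    return np.sum(np.sqrt(eta*(E_h*c - E_v*s)**2 + (E_h*s + E_v*c)**2 + eps))

def numerical_gradient(img, theta, h=1e-6):
    # central differences; O(N^2) norm evaluations, so use small test images
    grad = np.zeros_like(img)
    for idx in np.ndindex(img.shape):
        d = np.zeros_like(img)
        d[idx] = h
        grad[idx] = (mdatv_eps(img + d, theta) - mdatv_eps(img - d, theta)) / (2*h)
    return grad
```
",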
"### Minimization approaches

The workflow of GD for minimizing the regularization in optimization (1) is as follows:

a) Compute the balancing parameter between ART (data fidelity) and GD (regularization)

$$d := \|\vec{f}^{(J)}-\vec{f}^{(0)}\|_{2}$$

b) for $k = 1,2,\dots,K$

$$\vec{f}^{(J,0)} = \vec{f}^{(J)}$$
$$\vec{v} = \frac{\partial \|\vec{f}\|_{\mathrm{reg}}}{\partial f_{ij}}$$
$$\hat{v} = \frac{\vec{v}}{|\vec{v}|^{2}}$$
$$\vec{f}^{(J,k)} = \vec{f}^{(J,k-1)} - \alpha\, d\, \hat{v}$$

end for

where $\vec{f}^{(0)}$ and $\vec{f}^{(J)}$ are the estimated images before and after the $J$th ART iteration respectively, and $\alpha$ is the step size. Although the variable $d$ adaptively adjusts the step size of GD, $\alpha$ is the determining factor in practice.

Note that the norm of $\vec{v}/|\vec{v}|^{2}$ is smaller than the norm of $\vec{v}/|\vec{v}|$. In GD there usually is a fixed best step size $\alpha^{*}$ for which the convergence rate is fastest. This $\alpha^{*}$ may be found by the traversal method, which tries many step sizes for the problem and picks the best one within a given accuracy. Obviously, the smaller the gradient vector $\hat{v}$ is, the bigger the best step size $\alpha^{*}$ is, and a bigger step size is more resistant to perturbation. For example, a perturbation of 0.1 is negligible for a step size of 50, but has a significant impact for a step size of 0.01. Therefore, we use $\vec{v}/|\vec{v}|^{2}$ instead of $\vec{v}/|\vec{v}|$ as the gradient vector $\hat{v}$ in this paper.

However, in optimization (1), GD is not a stable method for minimizing the TV, ATV or MDATV regularizations, since its best step size changes greatly with the projection parameters and the target images, and an imperfect step size may greatly slow down the convergence rate and degrade the quality of the estimated image. To overcome this defect of GD, the NESTA (NESTerov's algorithm) method is proposed for minimizing the regularizations in (1).",
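"A minimal sketch of this GD stage, assuming a regularization-gradient callback `grad_fn` (for MDATV, eq. (9)); the function name and signature are illustrative:

```python
import numpy as np

def gd_stage(f_art, f_prev, grad_fn, alpha, K):
    # d ties the size of the regularization updates to the size
    # of the preceding ART update
    d = np.linalg.norm(f_art - f_prev)
    f = f_art.copy()
    for _ in range(K):
        v = grad_fn(f)                                # gradient of the reg. norm
        v_hat = v / (np.linalg.norm(v)**2 + 1e-12)    # v / |v|^2, as in the text
        f = f - alpha * d * v_hat
    return f
```
",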
"#### NESTA

Because the MDATV can be expressed as

$$\|\vec{f}\|_{\mathrm{MDATV}} = \sum_{i,j}\sqrt{\eta\left(E_{hr}[i,j]\right)^{2}+\left(E_{vr}[i,j]\right)^{2}} = \sum_{i,j}\sqrt{\left(E'_{hr}[i,j]\right)^{2}+\left(E'_{vr}[i,j]\right)^{2}} \tag{10}$$

where $E'_{hr}=\sqrt{\eta}\,E_{hr}$ and $E'_{vr}=E_{vr}$, the form of the MDATV norm in (10) is the same as that of the TV norm in (3), and similarly for ATV. Thus we can first study the NESTA method for TV minimization and then apply the conclusions to ATV and MDATV minimization.

NESTA, based on Nesterov's smoothing technique, is a fast first-order method for sparse recovery. First, the nonsmooth TV norm can be rewritten as

$$\|\vec{f}\|_{\mathrm{TV}} = \sum_{i,j}\max_{u\in\mathcal{Q}_d}\langle u, Df_{i,j}\rangle \tag{11}$$

where $\vec{f}\in\mathcal{Q}_p$, and the convex set $\mathcal{Q}_p$ is referred to as the primal feasible set. $u=[u_1,u_2]^T$ is in the dual feasible set $\mathcal{Q}_d$ if and only if $u_1^2[i,j]+u_2^2[i,j]\le 1$, and $Df_{i,j}=[D_h f_{i,j}, D_v f_{i,j}]^T$ is the finite difference of the 2D image $f_{i,j}$. The superscript $T$ denotes transpose. If $u$ is regarded as a 2D vector, then $\mathcal{Q}_d$ is an $\ell_2$-norm unit ball.

According to Nesterov's work, the smoothed TV regularization function is

$$\|\vec{f}\|_{\mathrm{TV},\mu} = \sum_{i,j}\max_{u\in\mathcal{Q}_d}\left\{\langle u, Df_{i,j}\rangle - \frac{\mu}{2}\|u\|_{\ell_2}^{2}\right\} \tag{12}$$

where $\mu$ should be set sufficiently small such that $\|\vec{f}\|_{\mathrm{TV},\mu}\approx\|\vec{f}\|_{\mathrm{TV}}$. Here we set $\mu = 0.01$. Then the gradient $\nabla\|\vec{f}\|_{\mathrm{TV},\mu}$ is given by

$$\nabla\|\vec{f}\|_{\mathrm{TV},\mu} = D^{T}u_{\mu}(\vec{f}) \tag{13}$$

where $D=[D_h, D_v]^T$, $u_{\mu}(\vec{f})$ is of the form $[u_1, u_2]^T$, and for each $a\in\{h,v\}$,

$$u_{a}[i,j] = \begin{cases} \mu^{-1}(D_{a}\vec{f})[i,j] & \text{if } \|\nabla f[i,j]\|_{\ell_2}<\mu\\ \|\nabla f[i,j]\|_{\ell_2}^{-1}(D_{a}\vec{f})[i,j] & \text{otherwise} \end{cases}$$

The minimization of the smoothed TV norm by Nesterov's algorithm is obtained by iteratively estimating three sequences $\{\vec{f}_k\}$, $\{\vec{d}_k\}$, and $\{\vec{e}_k\}$ as follows:

Initialize $\vec{f}_0$. For $k = 1,2,\dots,K$,

a) Compute $\nabla\|\vec{f}_k\|_{\mathrm{TV},\mu}$

b) Compute $\vec{d}_k$

$$\vec{d}_k = \arg\min_{\vec{x}\in\mathcal{Q}_p} \frac{L}{2}\|\vec{x}-\vec{f}_k\|_{\ell_2}^{2}+\langle\nabla\|\vec{f}_k\|_{\mathrm{TV},\mu},\,\vec{x}-\vec{f}_k\rangle \tag{14}$$

c) Compute $\vec{e}_k$

$$\vec{e}_k = \arg\min_{\vec{x}\in\mathcal{Q}_p} L\,p_p(\vec{x})+\sum_{i\le k}\alpha_i\langle\nabla\|\vec{f}_i\|_{\mathrm{TV},\mu},\,\vec{x}-\vec{f}_i\rangle \tag{15}$$

d) Update $\vec{f}_k$

$$\vec{f}_{k+1} = \tau_k\vec{e}_k + (1-\tau_k)\vec{d}_k \tag{16}$$

end for.

In the above algorithm, $\alpha_i=(i+1)/2$ and $\tau_k=2/(k+3)$ are suggested for fast convergence. Since the goal is minimizing the TV norm without other constraints, $\vec{d}_k$ and $\vec{e}_k$ can be computed by setting the gradients of the objective functions to zero, which gives

$$\vec{d}_k = \vec{f}_k - \frac{1}{L}\nabla\|\vec{f}_k\|_{\mathrm{TV},\mu} \tag{17}$$

In (15), a suitable smoothing prox-function is

$$p_p(\vec{f}) = \frac{1}{2}\|\vec{f}-\vec{f}_0\|_{\ell_2}^{2}$$

Then we have

$$\vec{e}_k = \vec{f}_0 - \frac{1}{L}\sum_{i\le k}\alpha_i\nabla\|\vec{f}_i\|_{\mathrm{TV},\mu} \tag{18}$$

Some other preset parameters are

$$L = \frac{\|D\|^{2}}{\mu\,\sigma_d},\qquad \sigma_d = 1$$

$$\|\nabla f[i,j]\|_{\ell_2} = \left[\left((D_h f)[i,j]\right)^{2}+\left((D_v f)[i,j]\right)^{2}\right]^{\frac{1}{2}}$$

### ART + MDATV scheme

In this paper, the data fidelity constraint is processed by the ART iteration, and the MDATV regularization is minimized by NESTA. To use the prior information of edges in multiple directions, the original ART + TV scheme is not suitable for the MDATV regularization. In the ART + TV scheme, first the overall projection data are used to update the estimated image through ART; then the estimated image is used as the input of the minimization method. ART and the minimization are operated alternately, and each pair of successive ART and minimization steps constitutes one iteration of the ART + TV scheme. The workflow of the ART + TV scheme is shown below:",
"[Figure: workflow of the ART + TV scheme]",
"In the above scheme, the stopping criterion is the relative error between the estimated image and the true phantom image being less than 1.0 × 10–5. The relative error is computed as\n\n$ϵ\\phantom{\\rule{0.5em}{0ex}}=\\phantom{\\rule{0.5em}{0ex}}\\frac{||{\\stackrel{\\to }{f}}_{r}\\phantom{\\rule{0.5em}{0ex}}-\\phantom{\\rule{0.5em}{0ex}}{\\stackrel{\\to }{f}}_{0}||}{||{\\stackrel{\\to }{f}}_{0}||}$\n\nwhere ${\\stackrel{\\to }{f}}_{r}$ is the estimation of the restored image, ${\\stackrel{\\to }{f}}_{0}$ is the original phantom image known as ground truth. || · || is the Frobenius norm.\n\nHowever, the MDATV use the prior information of edges in multiple directions by adjusting the angle $\\theta$ in (7). In the ART + TV scheme, there is no time to adjust $\\theta$. Therefore, the ART + MDATV scheme is proposed.\n\nIn the ART + MDATV scheme, projection data are divided into m groups based on the directions of the projection rays. In each group of projections, the projection rays are in the same direction for the parallel projection, and in the approximately same direction for the fan-beam projection. Thus each group of projections corresponds to an angle θ k (k = 1, 2, …, m), which specifies the location of the X-ray source. Then for the k th group of projections, the ART and the minimization of MDATV regularization of θ k are operated once successively. The estimated image of the k th group of projections is used as the initial estimate of the (k + 1)th group of projections. The process of the m groups of projections constitutes an iteration of the ART + MDATV scheme. The workflow of the ART + MDATV scheme is shown below:",
"[Figure: workflow of the ART + MDATV scheme]",
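"As a complement to the figure, a minimal sketch of this scheme, reusing the `art_sweep` and the NESTA-style minimizer sketched earlier; `groups` and `minimize_mdatv` are illustrative names, not the authors' code:

```python
import numpy as np

def art_mdatv(f0, groups, n_iter, eta=1000.0, K=10):
    # groups: list of (M_k, g_k, theta_k), rays grouped by direction;
    # one outer iteration = ART on each group followed by MDATV
    # minimization at that group's angle theta_k
    f = f0.copy()
    for _ in range(n_iter):
        for M_k, g_k, theta_k in groups:
            f = art_sweep(f, M_k, g_k)                # data fidelity
            f = minimize_mdatv(f, theta_k, eta, K)    # regularization
    return f
```
",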
"The stopping criterion in the above scheme is the same as that in the ART + TV scheme. Based on the experiments, the iteration times of NESTA for MDATV is set as 10, and the iteration times of NESTA for TV and ATV are both set as 40.\n\n## Simulations and experiments\n\nNumerical simulations and real data experiments are conducted to validate the performance of MDATV based approaches. The TV and ATV based approaches are used for comparison. To compare the stableness of GD and NESTA, for each method, they are respectively used for minimizing the regularizations. Therefore, the combination of ART with three different regularizations and two kinds of minimization methods leads to six approaches: ART-TV-GD, ART-TV-NESTA, ART-ATV-GD, ART-ATV-NESTA, ART-MDATV-GD, and ART-MDATV-NESTA. The simulations for noisy measurements are conducted for comparing the noise robustness of different regularizations. Finally, the real data experiment is used to testify the effectiveness of MDATV in practical applications. Besides, this paper uses the CT module in “Image reconstruction toolbox” to construct the projection matrix and operate the forward and backward projections.\n\n### Numerical simulation\n\n#### Simulation settings\n\nTo reduce the X-ray dose, reduction of projection rays is a natural option. There are two common styles for reducing the projection rays. They are limited-angle and few-view styles. The schematic diagrams of these two styles are shown in Figure 5. For the same number of projection views, the information of target image in the few-view style is much more than that in the limited-angle style. In this paper, we only describes the simulations of the few-view style.The phantom used in this paper is generated by the function ‘phantom(128)’ in MATLAB®, as shown in Figure 3(a). It has 128 × 128 pixels. The simulated X-ray detector has 240 bins for the fan-beam projection. And the size of the detector bin is 2 millimeters. In the fan-beam projection, the distance between the source and center of the detector is 960.45 millimeters, and the distance from the source to the origin is 628.88 millimeters.\n\nAccording to the ERP (Exact Reconstruction Principle) , if the number of FT (Fourier Transform) samples is twice the number of non-zero pixels in the gradient image, then the optimization program (1) can yield a unique solution. The gradient image of Shepp-Logan phantom is shown in Figure 3(d), where the number of non-zero pixel is 1743. In the digital condition, according to the central slice theory , one projection data corresponds to one FT sample. Therefore, we need at least 1743 × 2 = 3486 projection data points. While the fan-beam detector has 240 bins, thus based on ERP we need 3486 ÷ 240 ≈ 15 views of projections. For comparison, we also take some less views of projections, such as 11 and 13 views, in the numerical simulations.\n\nThe best step sizes of GD are different when the projection style or the regularization changes. Table 1 lists the best step sizes of GD for the fan-beam projections with different regularizations and projection views. These best step sizes are obtained by the traversal method within the accuracy of 1. Since the perturbation within 1 rarely affects the convergence rate of GD.\n\nTo test the robustness of these methods to noise, the Poisson noise is introduced into the projections by the function ‘poisson’ in the “Image reconstruction toolbox” . The incident photon number is set as 1.0 × 105. 
"Table 2 lists the best step sizes of GD for the noisy reconstructions.

#### Visualization-based evaluation

The restored images from 11 views of noise-free and noisy projections are shown in Figure 6, and the restored images from 15 views of noise-free and noisy projections are shown in Figure 7. Both figures indicate that the TV and MDATV regularizations are clearly better than the ATV regularization. There are horizontal artifacts in the images restored by the ATV methods, caused by the unbalanced regularization while the projection data are uniformly distributed. It can also be observed that more projection data prominently improve the image quality.

#### Profile-based evaluation

To further visualize the differences among the methods, vertical profiles of the restored images were drawn along the 64th column, from the 1st row to the 128th row. Figures 8 and 9 show the vertical profiles of the restored images in Figure 6 together with the corresponding profile of the original Shepp-Logan phantom, and Figures 10 and 11 show the vertical profiles of the restored images in Figure 7 with their corresponding phantom profile. The observations from all the profiles support the same conclusion as the visualization-based evaluation.

#### Relative error study

The relative error is used as the stopping criterion in the simulation iterations; it can therefore also show the convergence behavior of the various methods. The curves of relative error versus iteration count for the reconstructions from 11 and 15 views of projections are shown in Figures 12 and 13. The observations indicate that after some iterations the reconstructions converge to steady states, so the relative error after the final iteration can represent the reconstruction quality. For the 11-view condition, MDATV is better than TV, and TV is better than ATV, while for the 15-view condition the three regularizations are hard to distinguish.

#### UQI (Universal Quality Index) study

To perform a more quantitative analysis of these methods, the UQI is introduced. UQI measures the similarity between the desired image and its ground truth image. A higher UQI value represents a higher similarity between the testing image and the ground truth image (the maximum value of one is attained when they are identical), and vice versa. The ROI (Region Of Interest) used for computing the UQI is the red rectangle shown in Figure 3(a). The UQI values of the images restored by the various methods from different numbers of projection views are plotted as curves in Figure 14. The UQI values are in accord with the relative errors: the observations again indicate that MDATV is better than TV, and TV is better than ATV, and that as the volume of projection data increases, the distinctions among these regularizations decrease.",
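"For reference, a minimal sketch of the Wang-Bovik UQI over an ROI (computed here globally rather than with the sliding window of the original index, which is a simplifying assumption):

```python
import numpy as np

def uqi(x, y):
    # Q = 4*cov(x,y)*mean(x)*mean(y) / ((var(x)+var(y)) * (mean(x)^2+mean(y)^2))
    x, y = x.ravel().astype(float), y.ravel().astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return 4 * cxy * mx * my / ((vx + vy) * (mx**2 + my**2))
```
",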
"### Real data experiment

To demonstrate the effectiveness of the proposed method in a real CT application, a real data experiment was conducted. The projection data come from a micro-CT machine in the Tianjin Key Laboratory of Biomedical Detecting Techniques and Instruments. The scanned phantom is an organic glass column with three holes of different diameters, as shown in Figure 15. In this experiment, the left hole on top is empty (filled with air), the right hole on top is filled with corn flour, and the bottom hole is filled with a copper bar. The distance between the source and the center of the detector is 960.45 millimeters, and the distance from the source to the origin is 628.88 millimeters.

In the experiment, the detector has 1024 bins, and the physical size of each bin is 0.05 millimeters. Therefore, the raw sinogram of one view has 1024 pixels, each representing a physical length of 0.05 millimeters. However, the computational load of the iterative reconstruction is very heavy for such a high-resolution sinogram, so the sinogram is downsampled by a factor of 4: four neighboring pixels in the raw sinogram of one view are averaged into one pixel of the low-resolution sinogram, which thus has 256 = 1024 ÷ 4 pixels per view. To maintain the physical dimensions of the projection geometry, each pixel in the low-resolution sinogram represents a physical length of 0.2 = 0.05 × 4 millimeters. In this experiment, restricted by the micro-CT machine, the whole projection data set contains 120 views of fan-beam projections uniformly distributed across 360°; its FBP reconstruction is shown in Figure 16. Comparing Figures 15 and 16, it is easy to distinguish the air and copper filled holes from the background organic glass, while the corn filled hole is hard to distinguish from the background. This is because the attenuation coefficient of corn flour is very similar to that of the organic glass, while the attenuation coefficients of air and copper are very different from it.

For the sparse CT reconstruction experiment, 30 views of fan-beam projections were chosen uniformly across the range of 360°. Due to ATV's poor performance on sub-ERP projections (a volume of projections less than that required by the ERP), we only compare the TV and MDATV based methods. The reconstruction methods used in the experiments are ART-TV-GD, ART-MDATV-GD, and ART-MDATV-NESTA. The step sizes of GD used in the reconstructions are 92 for TV and 10 for MDATV, estimated from an approximate synthetic phantom. The number of NESTA iterations for MDATV is 40. In Figure 17, we show the reconstructions after 5 iterations (top row) and 100 iterations (bottom row) with the three methods (from left to right): ART-TV-GD, ART-MDATV-GD, ART-MDATV-NESTA. In the restored images, the holes filled with air and copper are distinct, while the hole filled with corn is indistinguishable. To further visualize the differences among the methods, horizontal profiles of the restored images were drawn along the 63rd row (for the air and corn filled holes) and the 127th row (for the copper filled hole), from the 10th column to the 160th column. The profiles of the restored images after 5 and 100 iterations are shown in Figure 18. Since the projection data are not sufficient, the FBP reconstructions abound with artifacts, but they clearly depict the boundaries of the air and copper filled holes, and the observations from Figure 18 indicate that the MDATV-NESTA profiles match the FBP profiles best. The regions of the holes filled with air and copper (the red rectangles in Figure 17) are zoomed in and shown in Figures 19 and 20 respectively. To increase the contrast of Figures 19 and 20, 5% of the data is saturated at the low and high intensities of the original image. Figures 19 and 20 indicate that the ART-MDATV-GD method gave the best results, since the circles restored by ART-MDATV-GD are the most round.
The circles restored by ART-MDATV-NESTA are rounder than those of ART-TV-GD. The radial artifacts in the restored images are metal artifacts, which can be removed by off-the-shelf methods.

## Discussions and conclusions

The simulations and experiments both indicate that MDATV is a useful and robust regularization for sparse CT reconstruction when the volume of the projection data is less than the volume required by the ERP. In fact, it is practically impossible to satisfy the ERP in real applications, because practical target images are not piecewise-constant, which means the number of non-zero elements of the gradient image is effectively unbounded; thus the volume of digital projections is never enough. Therefore, using MDATV to incorporate more prior information about the target images is valuable in practical applications, as shown in the real data experiment. The indistinguishable results among the three regularizations with ERP-level projections in the simulations arise because the synthetic phantom is piecewise-constant, so the number of samples its ERP requires is limited. Obviously, when the projection volume satisfies the ERP, there is no need to incorporate any other prior information.

Since MDATV can represent edges in any direction, the MDATV based methods can use the prior information of edge directions more efficiently than the TV or ATV based methods. Therefore, the MDATV based methods can enhance the edges along the projection ray directions and suppress the potential artifacts in the directions without projection rays, which improves the reconstructions.

In this paper, NESTA is proposed to minimize the MDATV regularization instead of GD, because NESTA gives almost the same results as GD for the minimization of the regularizations but is more stable. For example, Tables 1 and 2 list the best step sizes of GD used in the simulations: the best step size of GD changes greatly across projections and noise conditions, and there is no rule for obtaining it in advance. In contrast, the few parameters of NESTA hardly change across the simulations and experiments. NESTA is therefore a sound choice for minimizing the regularizations in sparse CT reconstruction.

The accumulated computation times of the simulations are listed in Table 3. The computational load of NESTA is slightly higher than that of GD, but NESTA is more stable; if the comparison includes the time spent finding the best step size of GD, NESTA is much faster. Due to the special scheme of ART + MDATV, the MDATV related methods need more computation time than the TV or ATV related methods, because they perform the minimization after each group of ART iterations: if the projections are classified into m groups, the MDATV related methods perform m - 1 more minimizations than the TV or ATV related methods, and each minimization runs several GD or NESTA iterations.

In addition, it is notable that the fan-beam geometries in the numerical simulations and the real data experiment are the same, both taken from the geometry of the micro-CT machine. In this geometry, the view angle of the fan-beam projection is so small that the projection rays in the same projection view are approximately in the same direction.
In practical applications, however, the view angle of the fan-beam projection may be larger, so that the projection rays in the same projection view are not even approximately in the same direction. In that condition, the projection data need to be rearranged according to their corresponding projection directions, such that projection rays with approximately the same direction are classified into the same group. This will be one of our future research directions.

This paper proposed the MDATV regularization to make full use of the prior information of projection directions and image sparsity in sparse CT reconstruction. Due to the effective use of the prior information of projection directions, the restored edge information is greatly enhanced, which improves the reconstructions. The numerical simulations and real data experiments demonstrate the advantage of MDATV over the other regularizations. NESTA is proposed as an alternative to GD for minimizing the TV-based regularizations, because NESTA is more stable and performs almost the same as GD.

Although the MDATV regularization method was only validated for fan-beam projections, it is simple to introduce this regularization approach into other tomography reconstructions, and using the prior information of projection directions may further facilitate the development of tomography. Besides, we think the representation of edges in multiple directions may have more applications in the fields of image enhancement and reconstruction.

## Abbreviations

MDATV: Multi-direction anisotropic total variation

CT: Computed tomography

ATV: Anisotropic total variation

TV: Total variation

2D-IGS: Two dimensional image gradient space

ART: Algebraic reconstruction technique

NESTA: NESTerov's algorithm

GD: Gradient descent

IRIS: Iterative reconstruction in image space

ASiR: Adaptive statistical iterative reconstruction

GE: General Electric Co.

iDose: A trademark of Philips

AIDR: Adaptive iterative dose reduction

FBP: Filtered back projection

ART-TV-MIN: Algebraic reconstruction technique total variation MINimization

ASD-POCS: Adaptive steepest descent projection onto convex sets

PICCS: Prior image constrained compressed sensing

PIRPLE: Prior image registered penalized likelihood estimation

FCCS: Feature constrained compressed sensing

TpV: Total p-variation

GPU: Graphics processing unit

CG: Conjugate gradient

GPBB: Gradient projection Barzilai Borwein

ADMM: Alternating direction method of multipliers

POCS: Projection onto convex sets

ERP: Exact reconstruction principle

FT: Fourier transform

UQI: Universal quality index

ROI: Region of interest

## References

1. Schindera ST, Diedrichsen L, Muller HC, Rusch O, Marin D, Schmidt B, Raupach R, Vock P, Szucs-Farkas Z: Iterative reconstruction algorithm for abdominal multidetector CT at different tube voltages: assessment of diagnostic accuracy, image quality, and radiation dose in a phantom study. Radiology 2011,260(2):454–462.

2. Martinsen AC, Saether HK, Hol PK, Olsen DR, Skaane P: Iterative reconstruction reduces abdominal CT dose. Eur J Radiol 2012,81(7):1483–1487.

3. Deak Z, Grimm JM, Treitl M, Geyer LL, Linsenmaier U, Korner M, Reiser MF, Wirth S: Filtered back projection, adaptive statistical iterative reconstruction, and a model-based iterative reconstruction in abdominal CT: an experimental clinical study. Radiology 2013,266(1):197–206.

4. Wu TH, Hung SC, Sun JY, Lin CJ, Lin CH, Chiu CF, Liu MJ, Teng MM, Guo WY, Chang CY: How far can the radiation dose be lowered in head CT with iterative reconstruction?
Analysis of imaging quality and diagnostic accuracy. Eur Radiol 2013,23(9):2612–2621.\n\n5. Gervaise A, Osemont B, Lecocq S, Noel A, Micard E, Felblinger J, Blum A: CT image quality improvement using Adaptive Iterative Dose Reduction with wide-volume acquisition on 320-detector CT. Eur Radiol 2012,22(2):295–301.\n\n6. Candès EJ, Romberg J, Tao T: Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans Inf Theory 2006,52(2):489–509.\n\n7. Candès EJ, Romberg JK, Tao T: Stable signal recovery from incomplete and inaccurate measurements. Comm Pure Appl Math 2006,59(8):1207–1223.\n\n8. Candès EJ, Wakin MB: An introduction to compressive sampling. IEEE Signal Process Mag 2008,25(2):21–30.\n\n9. Delaney AH, Bresler Y: Globally convergent edge-preserving regularized reconstruction: an application to limited-angle tomography. IEEE Trans Image Process 1998,7(2):204–221.\n\n10. Li M, Yang H, Kudo H: An accurate iterative reconstruction algorithm for sparse objects: application to 3D blood vessel reconstruction from a limited number of projections. Phys Med Biol 2002,47(15):2599–2609.\n\n11. Sidky EY, Kao C, Pan X: Accurate image reconstruction from few-views and limited-angle data in divergent-beam CT. J Xray Sci Technol 2006,14(2):119–139.\n\n12. Sidky EY, Pan XC: Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization. Phys Med Biol 2008,53(17):4777–4807.\n\n13. Chen GH, Tang J, Leng S: Prior image constrained compressed sensing (PICCS): a method to accurately reconstruct dynamic CT images from highly undersampled projection data sets. Med Phys 2008,35(2):660–663.\n\n14. Stayman JW, Otake Y, Prince JL, Khanna AJ, Siewerdsen JH: Model-based tomographic reconstruction of objects containing known components. IEEE Trans Med Imag 2012,31(10):1837–1848.\n\n15. Stayman JW, Dang H, Ding Y, Siewerdsen JH: PIRPLE: a penalized-likelihood framework for incorporation of prior images in CT reconstruction. Phys Med Biol 2013,58(21):7563.\n\n16. Wu D, Li L, Zhang L: Feature constrained compressed sensing CT image reconstruction from incomplete data via robust principal component analysis of the database. Phys Med Biol 2013,58(12):4047–4070.\n\n17. Sidky EY, Pan X, Reiser IS, Nishikawa RM, Moore RH, Kopans DB: Enhanced imaging of microcalcifications in digital breast tomosynthesis through improved image-reconstruction algorithms. Med Phys 2009,36(11):4920–4932.\n\n18. Yu H, Wang G: SART-type image reconstruction from a limited number of projections with the sparsity constraint. Int J Biomed Imaging 2010, 2010: 934847.\n\n19. Jia X, Dong B, Lou Y, Jiang SB: GPU-based iterative cone-beam CT reconstruction using tight frame regularization. Phys Med Biol 2011,56(13):3787–3807.\n\n20. Tian Z, Jia X, Yuan K, Pan T, Jiang SB: Low-dose CT reconstruction via edge-preserving total variation regularization. Phys Med Biol 2011,56(18):5949–5967.\n\n21. Liu Y, Ma J, Fan Y, Liang Z: Adaptive-weighted total variation minimization for sparse data toward low-dose x-ray computed tomography image reconstruction. Phys Med Biol 2012,57(23):7923–7956.\n\n22. Chang M, Li L, Chen Z, Xiao Y, Zhang L, Wang G: A few-view reweighted sparsity hunting (FRESH) method for CT image reconstruction. J Xray Sci Technol 2013,21(2):161–176.\n\n23. Liu Y, Liang Z, Ma J, Lu H, Wang K, Zhang H, Moore W: Total variation-stokes strategy for sparse-view X-ray CT image reconstruction. IEEE T Med Imaging 2014,33(3):749–763.\n\n24. 
24. Song J, Liu QH, Johnson GA, Badea CT: Sparseness prior based iterative image reconstruction for retrospectively gated cardiac micro-CT. Med Phys 2007,34(11):4476–4483.

25. Jia X, Lou Y, Li R, Song WY, Jiang SB: GPU-based fast cone beam CT reconstruction from undersampled and noisy projection data via total variation. Med Phys 2010,37(4):1757–1760.

26. Vandeghinste B, Goossens B, De Beenhouwer J, Pizurica A, Philips W, Vandenberghe S, Staelens S: Split-Bregman-based sparse-view CT reconstruction. 11th International meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine (Fully 3D 11) 2011, 431–434.

27. Park JC, Song B, Kim JS, Park SH, Kim HK, Liu Z, Suh TS, Song WY: Fast compressed sensing-based CBCT reconstruction using Barzilai-Borwein formulation for application to on-line IGRT. Med Phys 2012,39(3):1207–1217.

28. Sidky EY, Jørgensen JH, Pan X: Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle–Pock algorithm. Phys Med Biol 2012,57(10):3065.

29. Ramani S, Fessler JA: A splitting-based iterative algorithm for accelerated statistical X-ray CT reconstruction. IEEE Trans Med Imaging 2012,31(3):677–688.

30. Niu T, Zhu L: Accelerated barrier optimization compressed sensing (ABOCS) reconstruction for cone-beam CT: phantom studies. Med Phys 2012,39(7):4588–4598.

31. Jin X, Li L, Chen Z, Zhang L, Xing Y: Anisotropic total variation for limited-angle CT reconstruction. Nuclear Science Symposium Conference Record (NSS/MIC), IEEE 2010, 2232–2238.

32. Chen Z, Jin X, Li L, Wang G: A limited-angle CT reconstruction method based on anisotropic TV minimization. Phys Med Biol 2013,58(7):2119.

33. Quinto ET: Tomographic reconstructions from incomplete data-numerical inversion of the exterior Radon transform. Inverse Probl 1988,4(3):867.

34. Bracewell RN: Strip Integration in Radio Astronomy. Aust J Physics 1956,9(2):198–217.

35. Goodman JW: Introduction to Fourier Optics. 2nd edn. McGraw-Hill Companies; 1996.

36. Becker S, Bobin J, Candès E: NESTA: a fast and accurate first-order method for sparse recovery. SIAM J Imaging Sci 2011,4(1):1–39.

37. Nesterov Y: Smooth minimization of non-smooth functions. Math Program 2005, 103: 127–152.

38. Fessler J: Image reconstruction toolbox. [http://web.eecs.umich.edu/~fessler/code/]

39. Wang Z, Bovik AC: A universal image quality index. IEEE Signal Processing Letters 2002,9(3):81–84.

## Acknowledgment

The authors would like to thank Professor Feng Gao and Professor HuiJuan Zhao for providing micro-CT data.

## Author information

### Corresponding author

Correspondence to Xiaodong Chen.

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

HL and XC conceived the study. HL implemented the studies and drafted the manuscript. XC, YW, and DY contributed to discussions and suggestions throughout this work. XC and DY supervised the study. ZZ and QZ designed and performed the experiments. All authors read and approved the final manuscript.
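To make the multi-direction anisotropic TV idea from the conclusions concrete, it can be sketched as a sum of absolute finite differences along several pixel directions. This is a hedged NumPy illustration of the concept only; the direction set and unit weights are my assumptions, not the paper's exact discretization:

```python
import numpy as np

def mdatv(img, directions=((0, 1), (1, 0), (1, 1), (1, -1))):
    """Sum of absolute finite differences of img along several pixel
    directions: an illustrative multi-direction anisotropic TV term."""
    total = 0.0
    for dy, dx in directions:
        # neighbor of each pixel along the (dy, dx) direction
        shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        total += np.abs(img - shifted).sum()
    return total
```

In a reconstruction loop, a term like this would be added to the data-fidelity objective and minimized with GD or NESTA, as discussed above.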
https://www.semanticscholar.org/paper/The-Dyck-Language-Edit-Distance-Problem-in-Time-Saha/98781e5cab0b574d326594f377f9176be1526277
"# The Dyck Language Edit Distance Problem in Near-Linear Time\n\n```@article{Saha2014TheDL,\ntitle={The Dyck Language Edit Distance Problem in Near-Linear Time},\nauthor={Barna Saha},\njournal={2014 IEEE 55th Annual Symposium on Foundations of Computer Science},\nyear={2014},\npages={611-620}\n}```\n• B. Saha\n• Published 18 October 2014\n• Computer Science\n• 2014 IEEE 55th Annual Symposium on Foundations of Computer Science\nGiven a string σ over alphabet Σ and a grammar G defined over the same alphabet, how many minimum number of repairs (insertions, deletions and substitutions) are required to map σ into a valid member of G? The seminal work of Aho and Peterson in 1972 initiated the study of this language edit distance problem providing a dynamic programming algorithm for context free languages that runs in O(|G|2n3) time, where n is the string length and G is the grammar size. While later improvements reduced…\n26 Citations\n\n### Faster Language Edit Distance, Connection to All-pairs Shortest Paths and Related Problems\n\nIt is shown that exact computation of language edit distance in true sub-cubic time will imply a truly sub- cubic algorithm for all-pairs shortest paths, a long-standing open question and result in a breakthrough for a large range of problems in graphs and matrices due to sub-Cubic equivalence.\n\n### Language Edit Distance and Maximum Likelihood Parsing of Stochastic Grammars: Faster Algorithms and Connection to Fundamental Graph Problems\n\n• B. Saha\n• Computer Science\n2015 IEEE 56th Annual Symposium on Foundations of Computer Science\n• 2015\nThis paper gives the first such algorithm that computes language edit distance almost optimally and designs the very first subcubic (Õ(nω)) algorithm that given an arbitrary stochastic context free grammar, and a string returns a nearly-optimal maximum likelihood parsing of that string.\n\n### Language Edit Distance Approximation via Amnesic Dynamic Programming\n\n• Computer Science\n• 2016\nA generic technique of amnesic dynamic programming is proposed which, given any high-dimensional dynamic programming problem, selectively forgets some of the intermediate states by performing fewer look-ups, which speeds up the running time at the cost of returning an approximate answer.\n\n### Approximating Language Edit Distance Beyond Fast Matrix Multiplication: Ultralinear Grammars Are Where Parsing Becomes Hard!\n\n• Computer Science\nICALP\n• 2017\nAdditive approximation algorithms for language edit distance are studied, providing two explicit combinatorial algorithms to obtain a string with minimum edit distance with performance dependencies on either the number of non-linear productions, k^*, or theNumber of nested non- linear production, k, used in the optimal derivation.\n\n### Improved Approximation Algorithms for Dyck Edit Distance and RNA Folding\n\n• Computer Science\nICALP\n• 2022\nA constant-factor approximation algorithm that runs in ˜ O ( n 1 . 
1.971) time (the first constant-factor approximation in subquadratic time) and a factor-s approximation algorithm which is the first nontrivial approximation algorithm for RNA Folding that can go below the n² barrier.

### An Improved Algorithm for The k-Dyck Edit Distance Problem

• Computer Science • SODA • 2022

These algorithms combine several new structural properties of the Dyck edit distance problem, a refined algorithm for fast (min, +) matrix product, and a careful modification of ideas used in Valiant’s parsing algorithm.

### Sublinear Algorithms for Gap Edit Distance

• Computer Science, Mathematics • 2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS) • 2019

An algorithm for distinguishing whether the edit distance is at most t or at least t² (the quadratic gap problem) in time Õ(n/t + t³).

### Improved bounds for testing Dyck languages

• Computer Science, Mathematics • SODA • 2018

This paper introduces a new problem called Truestring Equivalence, which is easily reducible to the 2-type Dyck language property testing problem, and shows a lower bound of n^{1/5}.

### Fast & Space-Efficient Approximations of Language Edit Distance and RNA Folding: An Amnesic Dynamic Programming Approach

• B. Saha • Computer Science • 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS) • 2017

These are the first algorithms that break the cubic-time barrier of any combinatorial algorithm, and the quadratic-space barrier of any algorithm, significantly improving upon their long-standing space and time complexities.

### If the Current Clique Algorithms are Optimal, So is Valiant's Parser

• Computer Science • 2015 IEEE 56th Annual Symposium on Foundations of Computer Science • 2015

It is proved that any improvement on Valiant's algorithm, even for constant size grammars, would imply a breakthrough algorithm for the k-Clique problem: given a graph on n nodes, decide if there are k that form a clique.
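For intuition about the problem these papers attack, here is the simple cubic-time dynamic program for the special case of a single parenthesis type; it is a hedged illustration only, since the papers above are precisely about beating this bound. Deletions implicitly cover insertions, because inserting a partner for an unmatched symbol costs the same as deleting it:

```python
from functools import lru_cache

def dyck_edit_distance(s: str) -> int:
    """Minimum insertions/deletions/substitutions to make s a balanced
    string over one parenthesis type, via the classic O(n^3) DP."""
    n = len(s)

    @lru_cache(maxsize=None)
    def d(i: int, j: int) -> int:  # cost for substring s[i..j]
        if i > j:
            return 0
        if i == j:
            return 1  # a lone symbol needs one edit (delete it, or insert a partner)
        # pair the two ends, substituting whichever of them is wrong
        best = (s[i] != '(') + (s[j] != ')') + d(i + 1, j - 1)
        # or pay one edit to drop either end
        best = min(best, 1 + d(i + 1, j), 1 + d(i, j - 1))
        # or split into two independently balanced pieces
        for k in range(i, j):
            best = min(best, d(i, k) + d(k + 1, j))
        return best

    return d(0, n - 1)

# e.g. dyck_edit_distance("(()") == 1 and dyck_edit_distance(")(") == 2
```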
https://wordpress.stackexchange.com/questions/99265/display-posts-of-the-last-7-days
"# Display posts of the last 7 days\n\nI'm trying to display the 5 best rated posts of the last week (7 days) on my website, however I can't seem to figure out how to display them.\n\nHere's what I've achieved so far but it doesn't seem to work:\n\n<?php $slider_query = new WP_Query('posts_per_page=5&cat=3&orderby=highest_rated&order=desc'); ?> <?php$mylimit = 7 * 86400; //days * seconds per day\n\nwhile ($slider_query->have_posts()) :$slider_query->the_post();\n\n$post_age = date('U') - get_post_time('U'); if ($post_age < $mylimit) { ?> //The Post <?php } ?> <?php endwhile;?> • Note that orderby=highest_rated isn't inbuilt in WordPress. Are you using some plugin for ratings? May 13, 2013 at 13:07 • Yes I am using a plugin for this :) Thank you for your concern! – Swen May 13, 2013 at 13:25 ## 6 Answers In addition to birgire's solution, as of WordPress 3.7, you can use Date parameters. Your arguments would look like this to filter posts from the last 7 days: $args = array(\n'post_type' => 'post',\n'post_status' => 'publish',\n'orderby' => 'date',\n'order' => 'DESC',\n\n// Using the date_query to filter posts from last week\n'date_query' => array(\narray(\n'after' => '1 week ago'\n)\n)\n);\n\n• This should be the approach as from 3.7 :-) Nov 27, 2014 at 15:41\n• This is good but it gets posts exactly 24*7 hours ago. In example, if it currently is night time, it won't retrieve posts published 7 days ago in the morning. Mar 3 at 5:23\n\nI think this must have been solved many times here on WordPress Answers.\n\nYou could also check out the examples in the Time parameters part in Codex for WP_Query.\n\nHere are two of them (slightly modified to your needs)\n\nExample 1:\n\n// Create a new filtering function that will add our where clause to the query\nfunction filter_where( $where = '' ) { // posts in the last 7 days$where .= \" AND post_date > '\" . date('Y-m-d', strtotime('-7 days')) . \"'\";\nreturn $where; } add_filter( 'posts_where', 'filter_where' );$slider_query = new WP_Query('posts_per_page=5&cat=3&orderby=highest_rated&order=desc');\nremove_filter( 'posts_where', 'filter_where' );\n\n\nExample 2:\n\n// Create a new filtering function that will add our where clause to the query\nfunction filter_where( $where = '' ) { // posts for May 1 to March 8, 2013$where .= \" AND post_date >= '2013-05-01' AND post_date < '2013-05-8'\";\nreturn $where; } add_filter( 'posts_where', 'filter_where' );$slider_query = new WP_Query('posts_per_page=5&cat=3&orderby=highest_rated&order=desc');\nremove_filter( 'posts_where', 'filter_where' )\n\n\nassuming that you have this orderby=highest_rated covered with some plugin as you describe in the comment above.\n\n• I'm wondering whether if 'date_query' is better to use? By adding the following to the wp_query arguments: 'date_query' => array( array('after' => '1 week ago') Nov 26, 2014 at 16:12\n• Yes, that would now be the preferred method, since the answer was written prior to the existence of the date_query parameter ;-) @ChristineCooper Nov 27, 2014 at 16:13\n\nFrom the WP_Query Time Parameters section:\n\nReturns posts for just the current week:\n\n$week = date('W');$year = date('Y');\n$query = new WP_Query( 'year=' .$year . '&w=' . $week ); • Hey. I haven't tested this yet, but I'm pretty sure it's not what I'm looking for exactly. This probably lists posts posted in week 13 for example. What I want is the posts that were posted in the past 7 days. So for example, from Wednesday 8 May until Wednesday 1 May. 
– Swen May 13, 2013 at 13:27

This works for me to show posts from the last 7 days according to the number of views, ordered by the post views count, descending:

    $date_range = strtotime( '-7 day' );

    $args = array(
        'post_type'      => 'post',
        'post_status'    => 'publish',
        'posts_per_page' => '10',
        'meta_key'       => 'post_views_count',
        'orderby'        => 'meta_value_num',
        'order'          => 'DESC',
        'date_query'     => array(
            array(
                'after' => array(
                    'year'  => date('Y', $date_range),
                    'month' => date('m', $date_range),
                    'day'   => date('d', $date_range),
                ),
            )
        )
    );

    $query = new WP_Query( $args );

You can simply use wp_get_archives():

    wp_get_archives( array( 'type' => 'weekly', 'limit' => 1, 'show_post_count' => true, 'format' => 'option' ) );

More simply, using an SQL query via the WordPress posts_where hook:

    function getStartAndEndDate($week, $year) {
        $dto = new DateTime();
        $dto->setISODate($year, $week);
        $ret['week_start'] = $dto->format('Y-m-d');
        $dto->modify('+6 days');
        $ret['week_end'] = $dto->format('Y-m-d');
        return $ret;
    }

    add_filter( 'posts_where', 'wpse29897_no_parents', 10, 2 );
    function wpse29897_no_parents( $where, $query )
    {
        if( isset( $query->query_vars['post_type'] ) && 'menu_plans' == $query->query_vars['post_type'] )
        {
            if( '' != $where )
            {
                $currentdate = date('Y-m-d');
                $currdate    = new DateTime($currentdate);
                if( isset( $_GET['weekchk'] ) ) {
                    $currentweek = $_GET['weekchk'];
                } else {
                    $currentweek = $currdate->format("W");
                }
                $week_array = getStartAndEndDate($currentweek, date('Y'));
                $ws = "'" . $week_array['week_start'] . "'";
                $we = "'" . $week_array['week_end'] . "'";
                $where .= " AND post_date >= " . $ws . " AND post_date < " . $we;
            }
            else
            {
                $where .= " AND post_date >= '2020-07-28' AND post_date < '2020-08-04'";
            }
        }
        return $where;
    }
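Pulling the answers together for the original question (the 5 best rated posts from the last 7 days), a sketch might look like the following. The meta key 'rating' is hypothetical: it depends entirely on which rating plugin is installed and must be replaced by the key that plugin actually stores.

```php
$slider_query = new WP_Query( array(
    'post_type'      => 'post',
    'cat'            => 3,
    'posts_per_page' => 5,
    'meta_key'       => 'rating',          // hypothetical: use your rating plugin's meta key
    'orderby'        => 'meta_value_num',  // order by that numeric rating
    'order'          => 'DESC',
    'date_query'     => array(
        array( 'after' => '1 week ago' ),  // only posts from the last 7 days
    ),
) );
```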
https://docs.unity3d.com/es/2020.2/Manual/class-Quaternion.html
"Version: 2020.2\nIdioma: Español\nImportant Classes - Vectors\nScriptableObject\n\nImportant Classes - Quaternion\n\nUnity uses the Quaternion class to store the three dimensional orientation of GameObjects, as well as using them to describe a relative rotation from one orientation to another.\n\nThis page provides an overview of the Quaternion class and its common uses when scripting with it. For an exhaustive reference of every member of the Quaternion class, see the Quaternion script reference.\n\nIt’s important to understand the difference between Euler angles (the X, Y, & Z values that you see in the inspector for the rotation of a GameObject), and the underlying Quaternion value which Unity uses to store the actual rotation of GameObjects. For the basics of this topic, read Rotation and Orientation in Unity.\n\nCuando se trata de manejar rotaciones en sus scripts, debe utilizar la clase Quaternion y sus funciones para crear y modificar valores de rotación. Hay algunas situaciones en las que es válido usar ángulos de Euler, pero debes tener en cuenta: - Debe utilizar las funciones de la clase Quaternion que se ocupan de los ángulos de Euler - Retrieving, modifying, and re-applying Euler values from a rotation can cause unintentional side-effects (see below)\n\nCreating and manipulating quaternions directly\n\nUnity’s Quaternion class has a number of functions which allow you to create and manipulate rotations without needing to use Euler angles at all, and these are the ones you should use in most typical cases. Each of these links to the Script Reference with code samples:\n\nManipulating Rotations:\n\nThe Transform class also provides methods which allow you to work with the Quaternion rotations:\n\nWorking with Euler angles\n\nIn some cases it’s more desirable to use Euler angles in your scripts. When doing so, it’s important to note that you must keep your angles in variables, and only use them to apply them as Euler angles to your rotation, which should still ultimately be stored as a Quaternion. While it’s possible to retrieve Euler angles from a quaternion, if you retrieve, modify and re-apply, problems are likely to arise.\n\nYou can read more detail about exactly how these problems can arise in the eulerAngles script reference page.\n\nHere are some examples of mistakes commonly made using a hypothetical example of trying to rotate a GameObject around the X axis at 10 degrees per second. 
This is what you should avoid:

    // rotation scripting mistake #1
    // the mistake here is that we are modifying the x value of a quaternion
    // this value does not represent an angle, and does not produce desired results

    void Update ()
    {
        var rot = transform.rotation;
        rot.x += Time.deltaTime * 10;
        transform.rotation = rot;
    }

    // rotation scripting mistake #2
    // Read, modify, then write the Euler values from a Quaternion.
    // Because these values are calculated from a Quaternion,
    // each new rotation might return very different Euler angles, which might suffer from gimbal lock.

    void Update ()
    {
        var angles = transform.rotation.eulerAngles;
        angles.x += Time.deltaTime * 10;
        transform.rotation = Quaternion.Euler(angles);
    }

And here is an example of using Euler angles in script correctly:

    // Rotation scripting with Euler angles correctly.
    // Store the Euler angle in a class variable, and only use it to
    // apply it as an Euler angle, but never rely on reading the Euler back.

    float x;
    void Update ()
    {
        x += Time.deltaTime * 10;
        transform.rotation = Quaternion.Euler(x, 0, 0);
    }
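As an unofficial complement to the snippets above, this sketch turns an object smoothly toward a target using only quaternion functions; the `target` field and rotation speed are illustrative, not from the manual:

```csharp
using UnityEngine;

public class TurnTowardsTarget : MonoBehaviour
{
    public Transform target;             // illustrative: assign in the Inspector
    public float degreesPerSecond = 90f;

    void Update()
    {
        // Desired orientation: face along the direction to the target
        Quaternion desired = Quaternion.LookRotation(target.position - transform.position);

        // Rotate a bounded amount per frame, never reading Euler angles back
        transform.rotation = Quaternion.RotateTowards(
            transform.rotation, desired, degreesPerSecond * Time.deltaTime);
    }
}
```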
https://www.piping-designer.com/index.php/properties/classical-mechanics/2266-velocity-differential
"# Velocity Differential\n\non . Posted in Classical Mechanics\n\nVelocity differential, abbreviated as $$\\Delta v$$ or $$\\Delta \\omega$$ (Greek symbol omega), is the average rate of change or displacement with time. This is determined by taking the instantaneous velocity of an object, relative to another, in two points of time. The calculator below, determines change of the average rate over these two points.\n\n## Velocity differential formula\n\n$$\\large{ \\Delta v = v_f - v_i }$$\nSymbol English Metric\n$$\\large{ \\Delta v }$$ = velocity differential $$\\large{\\frac{ft}{sec}}$$ $$\\large{\\frac{m}{s}}$$\n$$\\large{ v_f }$$ = final velocity $$\\large{\\frac{ft}{sec}}$$ $$\\large{\\frac{m}{s}}$$\n$$\\large{ v_i }$$ = initial velocity $$\\large{\\frac{ft}{sec}}$$ $$\\large{\\frac{m}{s}}$$\n\n## Velocity differential formula\n\n$$\\large{ \\Delta v = \\frac{I}{m} }$$\nSymbol English Metric\n$$\\large{ \\Delta v }$$ = velocity differential $$\\large{\\frac{ft}{sec}}$$ $$\\large{\\frac{m}{s}}$$\n$$\\large{ I }$$ = impulse $$\\large{lbf-sec}$$ $$\\large{N-s}$$\n$$\\large{ m }$$ = mass $$\\large{lbm}$$ $$\\large{kg}$$\n\n## Velocity differential formula\n\n$$\\large{ \\Delta v = \\frac{\\Delta p}{m} }$$\nSymbol English Metric\n$$\\large{ \\Delta v }$$ = velocity differential $$\\large{\\frac{ft}{sec}}$$ $$\\large{\\frac{m}{s}}$$\n$$\\large{ m }$$ = mass $$\\large{lbm}$$ $$\\large{kg}$$\n$$\\large{ \\Delta p }$$ = momentum differential $$\\large{\\frac{lbm-ft}{sec}}$$ $$\\large{\\frac{kg-m}{s}}$$",
https://www.ncss.com/software/pass/one-proportion-in-pass/
"# Sample Size for One Proportion in PASS\n\nPASS contains over 20 tools for sample size estimation and power analysis of one proportion, including z-tests, equivalence, non-inferiority, confidence intervals, and conditional power, among others. Each procedure is easy-to-use and is carefully validated for accuracy. Use the links below to jump to a one proportion topic. For each procedure, only a brief summary of the procedure is given. For more details about a particular procedure, we recommend you download and install the free trial of the software.\n\n## Introduction\n\nFor most of the sample size procedures in PASS for a single proportion, the user may choose to solve for sample size, power, or the population effect size in some manner. In the case of confidence intervals, one could solve for sample size or the distance to the confidence limit.\n\nIn a typical one proportion test procedure where the goal is to estimate the sample size, the user enters power, alpha, and the desired population proportion. The procedure is run and the output shows a summary of the entries as well as the sample size estimate. A summary statement is given, as well as references to the articles from which the formulas for the result were obtained.\n\nFor many of the parameters (e.g., power, alpha, sample size, proportions, etc.), multiple values may be entered in a single run. When this is done, estimates are made for every combination of entered values. A numeric summary of these is results is produced as well as easy-to-read sample size or power curve graphs.\n\n## Technical Details\n\nThis page provides a brief description of the tools that are available in PASS for power and sample size analysis of one proportion. If you would like to examine the formulas and technical details relating to a specific PASS procedure, we recommend you download and install the free trial of the software, open the desired proportion procedure, and click on the help button in the top right corner to view the complete documentation of the procedure. There you will find summaries, formulas, references, discussions, technical details, examples, and validation against published articles for the procedure.\n\n## An Example Setup and Output\n\nWhen the PASS software is first opened, the user is presented with the PASS Home window. From this window the desired procedure is selected from the menus, the category tree on the left, or with a procedure search. The procedure opens and the desired entries are made. When you click the Calculate button the results are produced. You can easily navigate to any part of the output with the navigation pane on the left.\n\n### PASS Home Window",
"### Procedure Window for Tests for One Proportion",
"### PASS Output Window",
"## Sample Size for Inequality Tests for One Proportion\n\nThe One-Sample Proportion Test is used to assess whether a population proportion is significantly different from a hypothesized value. This is called the hypothesis of inequality. The hypotheses may be stated in terms of the proportions, the proportion difference, proportion ratio, or proportion odds ratio.\n\nThis procedure calculates sample size and statistical power for testing a single proportion using either exact or approximate tests. Results are based on exact calculations using the binomial and hypergeometric distributions. Because the analysis of several different test statistics is available, the statistical power may be compared to find the most appropriate test for a given situation. The test statistic options available include the one proportion exact test, and four versions of the z-test.\n\nFour different input options are available for this procedure: the direct entry of the proportions, proportion differences, proportion ratios, and proportion odds ratios. A finite population correction is also available in these procedures.\n\n## Sample Size for Tests for One Proportion using Effect Size\n\nThis procedure provides sample size and power calculations for one- or two-sided hypothesis tests of the difference between a proportion and a given value (between 0 and 1) using the effect size. The details of procedure are given in Cohen (1988).\n\n## Sample Size for Confidence Invervals for One Proportion\n\nThe confidence intervals procedures permit the user to solve for sample size, confidence interval width, and confidence level. Exact, score, and asymptotic confidence interval methods can be specified. Both one-sided and two-sided tests can be examined. An additional procedure is available that permits the adjustment for a finite population size.\n\n## Sample Size for Confidence Invervals for One Proportion in a Stratified Design\n\nThis procedure calculates sample size and half-width for confidence intervals of a proportion from a stratified design in which the outcome variable is binary. It uses the results from elementary sampling theory which are presented in many works including Yamane (1967) and Levy and Lemeshow (2008).\n\nSuppose that the response proportion of a binary outcome variable of a sample from a population of subjects (or items) is to be estimated with a confidence interval. Further suppose that the population can be separated into a few subpopulations, often called strata. If these strata are created so that items are similar within a particular stratum, but quite different between strata, then a stratified design might be adopted for a number of reasons. Note that the population may be finite or infinite.\n\nThis procedure allows you to determine the appropriate sample size to be taken from each stratum so that various parameters of the confidence interval are guaranteed. These parameters include the confidence level and width of the interval.\n\n## Sample Size for Confidence Invervals for One Proportion in a Cluster-Randomized Design\n\nThis procedure calculates sample size and half-width for confidence intervals of a proportion from a cluster design in which the outcome variable is binary. It uses the results from Ahn, Heo, and Zang (2015), Lohr (2019), and Campbell and Walters (2014).\n\nSuppose that the proportion of a binary outcome variable of a sample from a population of subjects (or items) is to be estimated with a confidence interval. 
Further suppose that the population is separated into small groups, called clusters. These clusters may contain different numbers of items.

This procedure allows you to determine the appropriate number of clusters to be sampled so that the width of a confidence interval of the proportion may be guaranteed at a certain confidence level.

## Sample Size for Confidence Intervals for One Proportion in a Stratified Cluster-Randomized Design

This procedure calculates sample size and half-width for confidence intervals of a proportion from a stratified cluster randomization trial (CRT) in which the outcome variable is binary. It uses the results from elementary sampling theory which are presented in Xu, Zhu, and Ahn (2019).

Suppose that the response proportion of a binary outcome variable of a sample from a population of subjects (or items) is to be estimated with a confidence interval. Further suppose that the population can be separated into a few subpopulations, often called strata. Further suppose that each stratum can be separated into a number of clusters and that sampling occurs at the cluster level. That is, a simple random sample of clusters is drawn within a stratum. Next, a simple random sample of subjects is drawn from within each cluster.

Note that this procedure assumes an infinite population in which the size of every cluster and every stratum is not known.

This procedure allows you to determine the appropriate sample size to be taken from each stratum so that the width of the confidence interval is guaranteed.

### Sample Size Curve from the Confidence Intervals for One Proportion Procedure
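For intuition about what the one-proportion test procedures compute, here is a hedged sketch of the textbook normal-approximation sample-size formula for a one-sided, one-proportion z-test. PASS itself reports exact results based on binomial enumeration, so this sketch will not match it exactly:

```python
from math import ceil, sqrt
from scipy.stats import norm

def n_one_proportion(p0, p1, alpha=0.05, power=0.80):
    """Normal-approximation sample size for a one-sided test of
    H0: p = p0 versus H1: p = p1 (not PASS's exact enumeration)."""
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
    num = z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p1 * (1 - p1))
    return ceil((num / (p1 - p0)) ** 2)

# e.g. detecting an increase from 0.50 to 0.60 with 80% power and
# one-sided alpha = 0.05: n_one_proportion(0.50, 0.60) -> about 153
```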
"## Sample Size for Non-Inferiority Tests for One Proportion\n\nThe Non-Inferiority Tests for One Proportion procedures provide power analysis and sample size calculation for non-inferiority tests in one-sample designs in which the outcome is binary. Users may choose from among popular test statistics commonly used for running the hypothesis test. These tests include the exact test and four versions of the z-test.\n\nApproximate sample size formulas for non-inferiority tests of a single proportion are presented in Chow et al. (2003). However, only large sample (normal approximation) results are given there. The results available in this procedure use exact calculations based on the enumeration of all possible values of the binomial distribution.\n\nFour different input options are available for this procedure: the direct entry of the proportions, proportion differences, proportion ratios, and proportion odds ratios.\n\n## Sample Size for Equivalence Tests for One Proportion\n\nThere are four procedures providing different input options for equivalence tests of one proportion: direct entry of the proportions, proportion differences, proportion ratios, and proportion odds ratios. An exact test or one of four z-tests may be specified. Flexible upper and lower equivalence proportions are permitted. The results available in this procedure use exact calculations based on the enumeration of all possible values of the binomial distribution.\n\n## Sample Size for Phase II Clinical Trials\n\nThere are three one proportion Phase II clinical trials procedures available in PASS:\n\n• Single-Stage Phase II Clinical Trials\n• Two-Stage Phase II Clinical Trials\n• Three-Stage Phase II Clinical Trials\n\n### Sample Size for Single-Stage Designs\n\nPhase II clinical trials determine whether a drug or regimen has sufficient activity against disease to warrant more extensive study and development. In a single-stage design, a single group of patients is studied. Usually, investigators will know the response rate of other drugs against the disease. Unless the current drug can be shown to be significantly more effective, its use will not be pursued.\n\nThe single-stage procedure finds designs that meet the error rate (alpha and beta) criterion and minimize the sample size when an exact test of proportions is used. The algorithm, discussed by A’Hern (2001), is an exact version of the algorithm of Fleming (1982).\n\n### Sample Size for Two-Stage Designs\n\nIn a two-stage design, the patients are divided into two groups or stages. At the completion of the first stage, an interim analysis is made to determine if the second stage should be conducted. If the number of patients responding is greater than a certain amount, the second stage is conducted. Otherwise, it is not.\n\nThe two-stage procedure in PASS finds designs that meet the error rate (alpha and beta) criterion and minimize the expected sample size. The algorithm is discussed in Simon (1989). Extending Simon’s work, our algorithm allows the investigation of near-optimal designs that may have other useful properties.\n\n### Sample Size for Three-Stage Designs\n\nIn a three-stage design, the patients are divided into three groups or stages. At the completion of the first stage, an interim analysis is made to determine if the second stage should be conducted. If the number of patients responding is greater than a certain amount, the second stage is conducted. Otherwise, it is not. 
A similar interim analysis is conducted at the end of the second stage.

The three-stage design procedure in PASS finds designs that meet the error rate (alpha and beta) criterion and minimize the expected sample size. The formulation is given in Chen (1997). Extending Chen’s work, our algorithm allows the investigation of near-optimal designs that may have other useful properties.

## Sample Size for Post-Marketing Surveillance

Post-marketing surveillance refers to the search for adverse reactions to drugs that have been cleared for general use. Two types of study designs are often used: the cohort study and the case-control study. In a cohort design, a large group of treated patients are studied to determine the frequency of any adverse reactions. In a case-control study, patients who have experienced the adverse reaction are matched with other treated patients who have not.

The Post-Marketing Surveillance procedure in PASS permits the user to examine power and sample size for four design types:

• Cohort Study, No Background Incidence of Adverse Reactions
• Cohort Study, Known Background Incidence of Adverse Reactions
• Cohort Study, Unknown Background Incidence of Adverse Reactions
• Matched Case-Control Study

### Power Curve from the Post-Marketing Surveillance Procedure
"## Sample Size for Conditional Power of One Proportion Tests\n\nIn sequential designs, one or more intermediate analyses of the emerging data are conducted to evaluate whether the experiment should be continued. This may be done to conserve resources or to allow a data monitoring board to evaluate safety and efficacy when subjects are entered in a staggered fashion over a long period of time. Conditional power (a frequentist concept) is the probability that the final result will be significant, given the data obtained up to the time of the interim look. Predictive power (a Bayesian concept) is the result of averaging the conditional power over the posterior distribution of effect size. Both of these methods fall under the heading of stochastic curtailment techniques.\n\nThis procedure computes conditional and predicted power for the case when a one-proportion z test is used to test whether a population proportion is greater than, less than, or not equal to a specific value.\n\nConditional power procedures are also available in PASS for the case of non-inferiority and superiority by a margin.\n\n## Sample Size for Two-Stage Designs for Tests of One Proportion (Simon)\n\nThis module finds two-stage designs for exact tests of a single proportion that meet the error rate (type-I and type-II) criterion and minimize the expected sample size. An algorithm, presented by Simon (1989), finds the designs with the minimum N (minimax) and the minimum expected N (optimum). Extending Simon’s work, Jung, Lee, Kim, George (2004) discuss other designs which are optimum from a Bayesian point of view which they call admissible designs.\n\nIn a two-stage design, the subjects are divided into two groups or stages. At the completion of the first stage, an interim analysis is made to determine if the second stage should be conducted. If the number of patients responding is greater than a certain amount, the second stage is conducted. Otherwise, it is not.\n\n“I’ve used NCSS since 1986 and PASS since 1997. The utility, precision, documentation, output and ease-of-use of your products are the best in the business. Thank you!”\n\nTim Gohmann, Ph.D.\n\n\"The NCSS software is and has always been the best solution for my work as an assessor. I have been exposed to many other analytical and statistical software applications and have found there is no other product on the market that can match the ease of learning, comprehension of function, and the frugality of price the NCSS product offers...\"\n\nMichael Ireland, CAE, Assessor"
https://www.gushiciku.cn/pl/anDg/zh-tw
"# 為了學會更多炫酷的 canvas 效果,我熬夜複習了三角函式相關的知識點\n\ntheme: scrolls-light highlight: atom-one-dark\n\n### 弧度與角度之間的互相轉換\n\nlet angle //角度\n\nradian = angle * Math.PI / 180\n\nangle = radian / Math.PI * 180\n\n### 正弦曲線\n\n``html` 中只有一個 `canvas` 標籤;`css` 也很簡單,如下:`css {margin: 0; padding: 0;} body { display: flex; justify-content: center; align-items: center; width: 100%; height: 100vh; overflow: hidden; } `就這幾行樣式,我想大家應該都已經很熟悉了,接下來 `JS` 才是我們的重點,這裡依舊採用面向物件的方式來編寫程式碼,這樣讓我們的程式碼看起來更有條理性,首先還是相關的基礎準備程式碼,如下:`js class SineAnimate { constructor() { / @type {HTMLCanvasElement} / this.canvas = document.getElementById('canvas'); this.ctx = this.canvas.getContext('2d'); this.canvas.width = 800; this.canvas.height = 800; this.dots = []; } } ````` 前面幾行就是基本的獲取```canvas`以及`ctx`,並設定`canvas` 的寬和高,最後一行主要是用於存放我們後續生成的點,這裡用一個個小點來畫出正弦效果。\n\n`````` this.init();\n}\n\ninit() {\nfor (let i = 0; i < this.canvas.width; i++) {\nthis.dots.push(new Dots(i, this.canvas, this.ctx));\n}\n\nthis.draw();\n}\n\ndraw() {\nthis.ctx.clearRect(0, 0, this.canvas.width, this.canvas.height);\nfor (const key in this.dots) {\nlet dot = this.dots[key];\ndot.update();\ndot.draw();\n}\n}\n``````\n\n}\n\nnew SineAnimate(); ````` 在```SineAnimate`類中,定義了一個`init`方法,並通過當前`canvas`的寬度動態生成了 **800** 個點,然後還定義了一個`draw`方法,在這個方法中將前面生成的 **800** 個點渲染在頁面中,最後不要忘記例項化`SineAnimate` 類,最終實現的正弦效果如下圖所示:",
"``````draw() {\nthis.ctx.clearRect(0, 0, this.canvas.width, this.canvas.height);\nthis.ctx.fillRect(0, this.canvas.height / 2, this.canvas.width, 1);\nfor (const key in this.dots) {\nlet dot = this.dots[key];\ndot.update();\ndot.draw();\n}\n}\n``````\n\n} ````` 在```SineAnimate`類的`draw`方法中通過`canvas`的`fillRect` 方法,繪製一條輔助線,實現出來的效果如下所示:",
"``````draw() {\n//...other code\nthis.ctx.fillRect(this.x, this.y, 1, this.canvas.height / 2 - this.y);\n}\n``````\n\n} ````` 在```Dots`類的`draw`方法中,通過`fillRect` 方法來繪製點到垂直距離的線段,實現的效果如下所示:",
"### 環繞圓心的正弦曲線\n\n``````init() {\nfor (let i = 0; i < 360; i++) {\nthis.dots.push(new Dots(i, this.canvas, this.ctx));\n}\n\nthis.draw();\n}\n``````\n\n} `修改完 `SineAnimate` 類,接下來就需要修改 `Dots` 類了,讓我們一起來看相關的程式碼,如下:`js class Dots { constructor(d, canvas, ctx) { this.d = d; this.x = 0; this.y = 0; this.r = 0; this.w = 0; this.canvas = canvas; this.ctx = ctx; } update() { this.r = this.w * Math.sin(this.d * Math.PI / 4) + 350; this.x = this.r * Math.cos(this.d * Math.PI / 180) + this.canvas.width / 2; this.y = this.r * Math.sin(this.d * Math.PI / 180) + this.canvas.height / 2; } draw() { this.ctx.beginPath(); this.ctx.arc(this.x, this.y, 1, 0, Math.PI * 2, 0); this.ctx.fill(); } } ````` 在```Dots`類中,重新定義了圓的半徑和寬度,然後在`update` 方法中,按照 360 度的角度來生成點,根據上面正弦曲線公式生成不同角度下的半徑值,然後再根據最前面的知識點,通過圓的半徑與角度得到最終確定的座標點,最終實現了將所有的點連線成一個圓,如下圖所示:",
"``````init() {\nfor (let i = 0; i < 360; i += 0.2) {\nthis.dots.push(new Dots(i, this.canvas, this.ctx));\n}\n}\n\n...other code\n``````\n\n} ````` 修改```init` 中迴圈的最後一個引數,實現出來的效果如下所示:",
"``````draw() {\nthis.ctx.clearRect(0, 0, this.canvas.width, this.canvas.height);\nthis.ctx.beginPath();\nthis.ctx.arc(this.canvas.width/2, this.canvas.height/2, 350, 0, Math.PI * 2, 0);\nthis.ctx.stroke();\nfor (const key in this.dots) {\nlet dot = this.dots[key];\ndot.update();\ndot.draw();\n}\n}\n``````\n\n} ``` 跟前面的正弦輔助線一樣的,這裡只是畫在圓上,實現的效果如下圖所示:",
"### 動態環繞圓心的正弦曲線",
https://www.edureka.co/blog/classification-in-machine-learning/
"",
null,
"# How To Implement Classification In Machine Learning?",
"5 / 11 Blog from Supervised Learning\n\nClassification in machine learning and statistics is a supervised learning approach in which the computer program learns from the data given to it and makes new observations or classifications. In this article, we will learn about classification in machine learning in detail.\n\n## Machine Learning Full Course – Learn Machine Learning 10 Hours | Machine Learning Tutorial | Edureka\n\nMachine Learning Course lets you master the application of AI with the expert guidance. It includes various algorithms with applications.\n\nThe following topics are covered in this blog:\n\n## What is Classification In Machine Learning\n\nClassification is a process of categorizing a given set of data into classes, It can be performed on both structured or unstructured data. The process starts with predicting the class of given data points. The classes are often referred to as target, label or categories.\n\nThe classification predictive modeling is the task of approximating the mapping function from input variables to discrete output variables. The main goal is to identify which class/category the new data will fall into.",
"Let us try to understand this with a simple example.\n\nHeart disease detection can be identified as a classification problem, this is a binary classification since there can be only two classes i.e has heart disease or does not have heart disease. The classifier, in this case, needs training data to understand how the given input variables are related to the class. And once the classifier is trained accurately, it can be used to detect whether heart disease is there or not for a particular patient.\n\nSince classification is a type of supervised learning, even the targets are also provided with the input data. Let us get familiar with the classification in machine learning terminologies.\n\n## Classification Terminologies In Machine Learning\n\n• Classifier – It is an algorithm that is used to map the input data to a specific category.\n\n• Classification Model – The model predicts or draws a conclusion to the input data given for training, it will predict the class or category for the data.\n\n• Feature – A feature is an individual measurable property of the phenomenon being observed.\n\n• Binary Classification – It is a type of classification with two outcomes, for eg – either true or false.\n\n• Multi-Class Classification – The classification with more than two classes, in multi-class classification each sample is assigned to one and only one label or target.\n\n• Multi-label Classification – This is a type of classification where each sample is assigned to a set of labels or targets.\n\n• Initialize – It is to assign the classifier to be used for the\n\n• Train the Classifier – Each classifier in sci-kit learn uses the fit(X, y) method to fit the model for training the train X and train label y.\n\n• Predict the Target – For an unlabeled observation X, the predict(X) method returns predicted label y.\n\n• Evaluate – This basically means the evaluation of the model i.e classification report, accuracy score, etc.\n\nTypes Of Learners In Classification\n\n• Lazy Learners – Lazy learners simply store the training data and wait until a testing data appears. The classification is done using the most related data in the stored training data. They have more predicting time compared to eager learners. Eg – k-nearest neighbor, case-based reasoning.\n\n• Eager Learners – Eager learners construct a classification model based on the given training data before getting data for predictions. It must be able to commit to a single hypothesis that will work for the entire space. Due to this, they take a lot of time in training and less time for a prediction. Eg – Decision Tree, Naive Bayes, Artificial Neural Networks.\n\n## Classification Algorithms\n\nIn machine learning, classification is a supervised learning concept which basically categorizes a set of data into classes. The most common classification problems are – speech recognition, face detection, handwriting recognition, document classification, etc. It can be either a binary classification problem or a multi-class problem too. There are a bunch of machine learning algorithms for classification in machine learning. Let us take a look at those classification algorithms in machine learning.\n\n### Logistic Regression\n\nIt is a classification algorithm in machine learning that uses one or more independent variables to determine an outcome. 
The outcome is measured with a dichotomous variable, meaning it will have only two possible outcomes.

The goal of logistic regression is to find a best-fitting relationship between the dependent variable and a set of independent variables. It is better than other binary classification algorithms like nearest neighbor, since it quantitatively explains the factors leading to classification.
"Logistic regression is specifically meant for classification, it is useful in understanding how a set of independent variables affect the outcome of the dependent variable.\n\nThe main disadvantage of the logistic regression algorithm is that it only works when the predicted variable is binary, it assumes that the data is free of missing values and assumes that the predictors are independent of each other.\n\nUse Cases\n\n• Identifying risk factors for diseases\n\n• Word classification\n\n• Weather Prediction\n\n• Voting Applications\n\n## Naive Bayes Classifier\n\nIt is a classification algorithm based on Bayes’s theorem which gives an assumption of independence among predictors. In simple terms, a Naive Bayes classifier assumes that the presence of a particular feature in a class is unrelated to the presence of any other feature.\n\nEven if the features depend on each other, all of these properties contribute to the probability independently. Naive Bayes model is easy to make and is particularly useful for comparatively large data sets. Even with a simplistic approach, Naive Bayes is known to outperform most of the classification methods in machine learning. Following is the Bayes theorem to implement the Naive Bayes Theorem.",
P(A|B) = P(B|A) · P(A) / P(B)

where P(A|B) is the posterior probability of class A given predictor B, P(B|A) is the likelihood, P(A) is the class prior probability, and P(B) is the predictor prior probability.
"The Naive Bayes classifier requires a small amount of training data to estimate the necessary parameters to get the results. They are extremely fast in nature compared to other classifiers.\n\nThe only disadvantage is that they are known to be a bad estimator.\n\nUse Cases\n\n• Disease Predictions\n\n• Document Classification\n\n• Spam Filters\n\n• Sentiment Analysis\n\nKnow more about the Naive Bayes Classifier here.\n\nIt is a very effective and simple approach to fit linear models. Stochastic Gradient Descent is particularly useful when the sample data is in a large number. It supports different loss functions and penalties for classification.",
null,
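A hedged SGDClassifier sketch; the scaler in the pipeline matters because, as noted below, SGD is sensitive to feature scaling (dataset and hyper-parameters are illustrative):

```python
# A linear classifier trained with stochastic gradient descent
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import SGDClassifier

X, y = load_breast_cancer(return_X_y=True)

# StandardScaler first: SGD is sensitive to feature scaling
clf = make_pipeline(StandardScaler(),
                    SGDClassifier(loss="hinge", penalty="l2", max_iter=1000))
clf.fit(X, y)
print(clf.predict(X[:5]))                           # predictions for the first five rows
```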
"Stochastic gradient descent refers to calculating the derivative from each training data instance and calculating the update immediately.\n\nThe only advantage is the ease of implementation and efficiency whereas a major setback with stochastic gradient descent is that it requires a number of hyper-parameters and is sensitive to feature scaling.\n\nUse Cases\n\n• Internet Of Things\n\n• Updating the parameters such as weights in neural networks or coefficients in linear regression\n\n### K-Nearest Neighbor\n\nIt is a lazy learning algorithm that stores all instances corresponding to training data in n-dimensional space. It is a lazy learning algorithm as it does not focus on constructing a general internal model, instead, it works on storing instances of training data.",
null,
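A minimal KNN sketch (dataset and the value of k are illustrative assumptions):

```python
# k-nearest neighbors: "training" only stores the data; prediction votes among neighbors
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5)           # k is a choice the user must make
knn.fit(X_train, y_train)
print(knn.score(X_test, y_test))
```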
"Classification is computed from a simple majority vote of the k nearest neighbors of each point. It is supervised and takes a bunch of labeled points and uses them to label other points. To label a new point, it looks at the labeled points closest to that new point also known as its nearest neighbors. It has those neighbors vote, so whichever label most of the neighbors have is the label for the new point. The “k” is the number of neighbors it checks.\n\nThis algorithm is quite simple in its implementation and is robust to noisy training data. Even if the training data is large, it is quite efficient. The only disadvantage with the KNN algorithm is that there is no need to determine the value of K and computation cost is pretty high compared to other algorithms.\n\nUse Cases\n\n• Industrial applications to look for similar tasks in comparison to others\n\n• Handwriting detection applications\n\n• Image recognition\n\n• Video recognition\n\n• Stock analysis\n\nKnow more about K Nearest Neighbor Algorithm here\n\n### Decision Tree\n\nThe decision tree algorithm builds the classification model in the form of a tree structure. It utilizes the if-then rules which are equally exhaustive and mutually exclusive in classification. The process goes on with breaking down the data into smaller structures and eventually associating it with an incremental decision tree. The final structure looks like a tree with nodes and leaves. The rules are learned sequentially using the training data one at a time. Each time a rule is learned, the tuples covering the rules are removed. The process continues on the training set until the termination point is met.",
null,
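A minimal decision tree sketch; export_text is used here to print the learned if-then rules (dataset and depth limit are illustrative assumptions):

```python
# Decision tree: learns nested if-then splits from the training data
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0)  # cap depth to limit over-fitting
tree.fit(X, y)
print(export_text(tree))   # human-readable view of the learned rules
```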
"The tree is constructed in a top-down recursive divide and conquer approach. A decision node will have two or more branches and a leaf represents a classification or decision. The topmost node in the decision tree that corresponds to the best predictor is called the root node, and the best thing about a decision tree is that it can handle both categorical and numerical data.\n\nA decision tree gives an advantage of simplicity to understand and visualize, it requires very little data preparation as well. The disadvantage that follows with the decision tree is that it can create complex trees that may bot categorize efficiently. They can be quite unstable because even a simplistic change in the data can hinder the whole structure of the decision tree.\n\nUse Cases\n\n• Data exploration\n\n• Pattern Recognition\n\n• Option pricing in finances\n\n• Identifying disease and risk threats\n\nKnow more about decision tree algorithm here\n\n### Random Forest\n\nRandom decision trees or random forest are an ensemble learning method for classification, regression, etc. It operates by constructing a multitude of decision trees at training time and outputs the class that is the mode of the classes or classification or mean prediction(regression) of the individual trees.",
null,
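A minimal random forest sketch (dataset, number of trees, and fold count are illustrative assumptions):

```python
# Random forest: many trees fit on bootstrap samples, predictions combined by vote
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
rf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(rf, X, y, cv=5).mean())       # accuracy averaged over 5 folds
```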
"A random forest is a meta-estimator that fits a number of trees on various subsamples of data sets and then uses an average to improve the accuracy in the model’s predictive nature. The sub-sample size is always the same as that of the original input size but the samples are often drawn with replacements.\n\nThe advantage of the random forest is that it is more accurate than the decision trees due to the reduction in the over-fitting. The only disadvantage with the random forest classifiers is that it is quite complex in implementation and gets pretty slow in real-time prediction.\n\nUse Cases\n\n• Industrial applications such as finding if a loan applicant is high-risk or low-risk\n\n• For Predicting the failure of mechanical parts in automobile engines\n\n• Predicting social media share scores\n\n• Performance scores\n\nKnow more about the Random Forest algorithm here.\n\n### Artificial Neural Networks\n\nA neural network consists of neurons that are arranged in layers, they take some input vector and convert it into an output. The process involves each neuron taking input and applying a function which is often a non-linear function to it and then passes the output to the next layer.",
null,
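A minimal feed-forward network sketch using scikit-learn's MLPClassifier (layer size and other settings are illustrative assumptions):

```python
# A small feed-forward network (multi-layer perceptron); weights are tuned by fit()
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# one hidden layer of 32 units, fed by standardized inputs
mlp = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0))
mlp.fit(X_train, y_train)
print(mlp.score(X_test, y_test))
```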
"In general, the network is supposed to be feed-forward meaning that the unit or neuron feeds the output to the next layer but there is no involvement of any feedback to the previous layer.\n\nWeighings are applied to the signals passing from one layer to the other, and these are the weighings that are tuned in the training phase to adapt a neural network for any problem statement.\n\nIt has a high tolerance to noisy data and able to classify untrained patterns, it performs better with continuous-valued inputs and outputs. The disadvantage with the artificial neural networks is that it has poor interpretation compared to other models.\n\nUse Cases\n\n• Handwriting analysis\n\n• Colorization of black and white images\n\n• Computer vision processes\n\n• Captioning photos based on facial features\n\nKnow more about artificial neural networks here\n\n### Support Vector Machine\n\nThe support vector machine is a classifier that represents the training data as points in space separated into categories by a gap as wide as possible. New points are then added to space by predicting which category they fall into and which space they will belong to.",
null,
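A minimal SVM sketch (dataset and kernel are illustrative assumptions):

```python
# Support vector machine with the default RBF kernel
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

svc = SVC()                 # no probability estimates unless probability=True is set
svc.fit(X_train, y_train)
print(svc.score(X_test, y_test))
```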
"It uses a subset of training points in the decision function which makes it memory efficient and is highly effective in high dimensional spaces. The only disadvantage with the support vector machine is that the algorithm does not directly provide probability estimates.\n\nUse cases\n\n• Business applications for comparing the performance of a stock over a period of time\n\n• Investment suggestions\n\n• Classification of applications requiring accuracy and efficiency\n\n## Classifier Evaluation\n\nThe most important part after the completion of any classifier is the evaluation to check its accuracy and efficiency. There are a lot of ways in which we can evaluate a classifier. Let us take a look at these methods listed below.\n\nHoldout Method\n\nThis is the most common method to evaluate a classifier. In this method, the given data set is divided into two parts as a test and train set 20% and 80% respectively.\n\nThe train set is used to train the data and the unseen test set is used to test its predictive power.\n\nCross-Validation\n\nOver-fitting is the most common problem prevalent in most of the machine learning models. K-fold cross-validation can be conducted to verify if the model is over-fitted at all.",
null,
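A minimal sketch of both evaluation styles before looking at k-fold in detail (dataset, split ratio, and model are illustrative assumptions):

```python
# Holdout evaluation vs. k-fold cross-validation on the same model
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Holdout: 80% train / 20% test, as described above
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
print("holdout accuracy:", SVC().fit(X_train, y_train).score(X_test, y_test))

# k-fold cross-validation: k rotating train/test partitions
print("5-fold accuracies:", cross_val_score(SVC(), X, y, cv=5))
```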
"In this method, the data set is randomly partitioned into k mutually exclusive subsets, each of which is of the same size. Out of these, one is kept for testing and others are used to train the model. The same process takes place for all k folds.\n\nClassification Report\n\nA classification report will give the following results, it is a sample classification report of an SVM classifier using a cancer_data dataset.",
null,
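A report like the one shown can be produced with scikit-learn's classification_report; a minimal sketch (dataset and model are assumptions, echoing the SVM-on-cancer-data example):

```python
# Producing a classification report (precision, recall, f1-score, support per class)
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import classification_report

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

y_pred = SVC().fit(X_train, y_train).predict(X_test)
print(classification_report(y_test, y_pred))
```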
"• Accuracy\n\n• Accuracy is a ratio of correctly predicted observation to the total observations\n\n• True Positive: The number of correct predictions that the occurrence is positive.\n\n• True Negative: Number of correct predictions that the occurrence is negative.\n\n• F1- Score\n\n• It is the weighted average of precision and recall\n\n• Precision And Recall\n• Precision is the fraction of relevant instances among the retrieved instances, while recall is the fraction of relevant instances that have been retrieved over the total number of instances. They are basically used as the measure of relevance.\n\nROC Curve\n\nReceiver operating characteristics or ROC curve is used for visual comparison of classification models, which shows the relationship between the true positive rate and the false positive rate. The area under the ROC curve is the measure of the accuracy of the model.",
null,
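A minimal sketch of computing ROC curve points and the area under the curve (dataset and model are illustrative assumptions):

```python
# ROC curve points and area under the curve for a binary classifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scores = LogisticRegression(max_iter=5000).fit(X_train, y_train).decision_function(X_test)
fpr, tpr, thresholds = roc_curve(y_test, scores)    # the (x, y) points of the ROC curve
print("AUC:", roc_auc_score(y_test, scores))
```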
"## Algorithm Selection",
null,
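Complementing the flow chart above, the selection steps described next can also be scripted; a minimal sketch (the dataset and the particular classifier list are assumptions, not from the original article):

```python
# Fit several candidate classifiers and keep the one with the best test accuracy
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

candidates = {
    "knn": KNeighborsClassifier(),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "svm": SVC(),
}
scores = {name: clf.fit(X_train, y_train).score(X_test, y_test)
          for name, clf in candidates.items()}
print(scores, "-> best:", max(scores, key=scores.get))
```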
"Apart from the above approach, We can follow the following steps to use the best algorithm for the model\n\n• Create dependent and independent data sets based on our dependent and independent features\n\n• Split the data into training and testing sets\n\n• Train the model using different algorithms such as KNN, Decision tree, SVM, etc\n\n• Evaluate the classifier\n\n• Choose the classifier with the most accuracy.\n\nAlthough it may take more time than needed to choose the best algorithm suited for your model, accuracy is the best way to go forward to make your model efficient.\n\nLet us take a look at the MNIST data set, and we will use two different algorithms to check which one will suit the model best.\n\n## Use Case\n\nWhat is MNIST?\n\nIt is a set of 70,000 small handwritten images labeled with the respective digit that they represent. Each image has almost 784 features, a feature simply represents the pixel’s density and each image is 28×28 pixels.\n\nWe will make a digit predictor using the MNIST dataset with the help of different classifiers.\n\n```from sklearn.datasets import fetch_openml\nmnist = fetch_openml('mnist_784')\nprint(mnist)\n```\n\nOutput:",
null,
"Exploring The Dataset\n\n```import matplotlib\nimport matplotlib.pyplot as plt\n\nX, y = mnist['data'], mnist['target']\nrandom_digit = X\nrandom_digit_image = random_digit.reshape(28,28)\nplt.imshow(random_digit_image, cmap=matplotlib.cm.binary, interpolation=\"nearest\")\n```\n\nOutput:",
null,
"Splitting the Data\n\nWe are using the first 6000 entries as the training data, the dataset is as large as 70000 entries. You can check using the shape of the X and y. So to make our model memory efficient, we have only taken 6000 entries as the training set and 1000 entries as a test set.\n\n```x_train, x_test = X[:6000], X[6000:7000]\ny_train, y_test = y[:6000], y[6000:7000]\n```\n\nShuffling The Data\n\nTo avoid unwanted errors, we have shuffled the data using the numpy array. It basically improves the efficiency of the model.\n\n```import numpy as np\n\nshuffle_index = np.random.permutation(6000)\nx_train, y_train = x_train[shuffle_index], y_train[shuffle_index]\n```\n\nCreating A Digit Predictor Using Logistic Regression\n\n```y_train = y_train.astype(np.int8)\ny_test = y_test.astype(np.int8)\ny_train_2 = (y_train==2)\ny_test_2 = (y_test==2)\nprint(y_test_2)\n```\n`Output :",
null,
"`\n```from sklearn.linear_model import LogisticRegression\nclf = LogisticRegression(tol=0.1)\nclf.fit(x_train,y_train_2)\nclf.predict([random_digit])\n```\n\nOutput:",
null,
"Cross-Validation\n\n```from sklearn.model_selection import cross_val_score\na = cross_val_score(clf, x_train, y_train_2, cv=3, scoring=\"accuracy\")\na.mean()\n```\n\nOutput:",
null,
"Creating A Predictor Using Support Vector Machine\n\n```from sklearn import svm\n\ncls = svm.SVC()\ncls.fit(x_train, y_train_2)\ncls.predict([random_digit])\n```\n\nOutput:",
null,
"Cross-Validation\n\n```a = cross_val_score(cls, x_train, y_train_2, cv = 3, scoring=\"accuracy\")\na.mean()\n```\n\nOutput:",
null,
"In the above example, we were able to make a digit predictor. Since we were predicting if the digit were 2 out of all the entries in the data, we got false in both the classifiers, but the cross-validation shows much better accuracy with the logistic regression classifier instead of the support vector machine classifier.\n\nThis brings us to the end of this article where we have learned Classification in Machine Learning. I hope you are clear with all that has been shared with you in this tutorial.\n\nYou can also take a Machine Learning Course Masters Program. The program will provide you with the most in-depth and practical information on machine-learning applications in real-world situations. Additionally, you’ll learn the essentials needed to be successful in the field of machine learning, such as statistical analysis, Python, and data science.\n\nAlso, if you’re looking to develop the career you’re in with Deep learning, you should take a look at the Deep Learning Course. This course gives students information about the techniques, tools, and techniques they need to grow their careers.\n\nWe are here to help you with every step on your journey and come up with a curriculum that is designed for students and professionals who want to be a Python developer. The course is designed to give you a head start into Python programming and train you for both core and advanced Python concepts along with various Python frameworks like Django.\n\nIf you come across any questions, feel free to ask all your questions in the comments section of “Classification In Machine Learning” and our team will be glad to answer.\n\nUpcoming Batches For AI and Machine Learning Masters Course\nCourse NameDate\nAI and Machine Learning Masters Course\n\nClass Starts on 7th October,2023\n\n7th October\n\nSAT&SUN (Weekend Batch)\nView Details",
]
| [
null,
"https://www.edureka.co/blog/wp-content/uploads/2019/11/classification.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.91064095,"math_prob":0.90162134,"size":19166,"snap":"2023-40-2023-50","text_gpt3_token_len":3745,"char_repetition_ratio":0.15504645,"word_repetition_ratio":0.0115626035,"special_character_ratio":0.18986747,"punctuation_ratio":0.081729345,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9864836,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54],"im_url_duplicate_count":[null,null,null,null,null,6,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-03T21:32:27Z\",\"WARC-Record-ID\":\"<urn:uuid:f78e03ff-f23b-46dd-8135-2b54b372d9e2>\",\"Content-Length\":\"235968\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a899091d-1030-47fb-b0c2-144aa636a772>\",\"WARC-Concurrent-To\":\"<urn:uuid:8c0cb044-a33d-4295-a98e-8bf41e2ccc60>\",\"WARC-IP-Address\":\"99.86.229.29\",\"WARC-Target-URI\":\"https://www.edureka.co/blog/classification-in-machine-learning/\",\"WARC-Payload-Digest\":\"sha1:OA4AD7EXXH7B2WVZOWNFIZH6GDLXJQCO\",\"WARC-Block-Digest\":\"sha1:FAUUNSPGWQXVRNJ6GWYGVZHA543EDSOG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511220.71_warc_CC-MAIN-20231003192425-20231003222425-00422.warc.gz\"}"} |
https://edurev.in/course/quiz/attempt/-1_Gate-Mock-Test-9-CSIT/323b68b9-1b6d-4388-a34e-4fdd09de220f | [
"Courses\n\n# Gate Mock Test- 9: CS/IT\n\n## 65 Questions MCQ Test GATE Computer Science Engineering(CSE) 2022 Mock Test Series | Gate Mock Test- 9: CS/IT\n\nDescription\nThis mock test of Gate Mock Test- 9: CS/IT for GATE helps you for every GATE entrance exam. This contains 65 Multiple Choice Questions for GATE Gate Mock Test- 9: CS/IT (mcq) to study with solutions a complete question bank. The solved questions answers in this Gate Mock Test- 9: CS/IT quiz give you a good mix of easy questions and tough questions. GATE students definitely take this Gate Mock Test- 9: CS/IT exercise for a better result in the exam. You can find other Gate Mock Test- 9: CS/IT extra questions, long questions & short questions for GATE on EduRev as well by searching above.\nQUESTION: 1\n\n### Wayne Rooney …………… after netting the ball in the final seconds of the game.\n\nSolution:\n\nExulted- to express great pleasure or happiness, especially at someone else's defeat or failure\n\nQUESTION: 2\n\n### Choose the most appropriate pair of words from the options given below to complete the following. Despite all the rhetoric about building a new democracy, and the ________ of bringing freedom and ________ to the country, the invasion has only resulted in increased violence and hardship.\n\nSolution:\n\nIn the sentence a word that is complementary to 'freedom' is required. Hence, 'piece' which means a portion can be eliminated. This leaves us with options 2 and 3. It can be gathered from the sentence that invasion resulted in violence and hardships though it promised positive things like democracy, peace and freedom. So all these promises were false ideas. Thus, 'illusion' best fits here, making option 2 the correct answer.\n\nAn 'allusion' is an indirect or passing reference.\nE.G. The Renaissance writers alluded to literature written during the Classical Greek period.\n\nQUESTION: 3\n\n### Continue the sequence 2, 5, 10, 17, 28, 41, _, _, _\n\nSolution:\n\nThe relation among the given numbers is",
null,
"",
null,
"Hence, the next numbers are 58, 77, 100.\n\n*Answer can only contain numeric values\nQUESTION: 4\n\nA point on AC in the following triangle such that ∠ADB = ∠ABC. Then BD in (cm) is",
null,
"Solution:\n\nBy similar triangles ΔADB and ΔABC, we get\n\n⇒ DB/8=6/12\n\n⇒ DB = 48/12 = 4 cm\n\n∴ BD = 4 cm\n\nQUESTION: 5\n\nThe numerical value of",
null,
"will be\n\nSolution:\n\nGiven expression is,",
null,
"Using: 1 + tan2θ = sec2θ and 1 + cot2θ = cosec2θ",
null,
"Using: cosec θ = 1/sin θ and sec θ = 1/cos θ\n\n= sin2θ + 3 cos2θ + 2 sin2θ\n\n= 3sin2θ + 3cos2θ\n\n= 3(sin2θ + cos2θ)\n\nUsing: sin2θ + cos2θ = 1\n\n= 3 × 1 = 3\n\nQUESTION: 6\n\nAn advance team of the elite special operations unit was sent to make a reconnaissance of the area before the main strike team reached the scene.\n\nWhich of the following is a statement that can be inferred from the facts stated in the above statement\n\nSolution:\n\nThe statement provided to us in the question states the fact that an initial team was sent for the reconnaissance of the target area before the main strike team. It is to be noted here that the word ‘reconnaissance’ means observation of an area to get information about it.\n\nNow, nowhere in the sentence is there any mention of the relative training or the equipments of the two teams, neither is there any indication to asses this fact. Therefore, it is safe to eliminate options 1 and 4.\n\nNow, the third statement indicates that the strike team could not work without the main team. There is no evidence in the sentence to point out to any such fact. It is simply stated that the initial team was sent to make reconnaissance before the main team, in order to assist them. It cannot be assumed that the job of the initial team was absolutely essential for the second team.\n\nThe second inference states that the team did not posses sufficient and complete information about the target area. This is an inference that can be safely drawn from the given statement, since if they would have had enough information, there would have been no need for an initial team.\n\nHence, option 2 is the correct answer.\n\nQUESTION: 7\n\nBased on the distribution of surface area of the Earth at different elevations and depths (with reference to sea - level) shown in the figure, which of the following is FALSE?",
null,
"Solution:\n\nFrom graph it is evident that option 1 is true.\n\nFrom graph, Of the surface area above sea - level, larger proportion lies below 2 km elevation\n\nFrom graph, Of the surface area below sea - level, larger proportion lies below 4 km depth.\n\nOption 4 is true as maximum depth is 12 km and maximum elevation is 8 km.\n\n∴ Option 3 is FALSE\n\nQUESTION: 8\n\nAnanth takes 6 hours and Bharath takes 4 hours to read a book. Both started reading copies of the book at the same time. After how many hours is the number of pages remaining to be read by Ananth, twice that the number of pages remaining to read by Bharath? Assume Ananth and Bharath read all the pages with constant pace.\n\nSolution:\n\nGiven, Ananth takes 6 hours and Bharath takes 4 hours to read a book.\n\nLet the number of pages of the book be n.\n\n∴ In 1 hour, Ananth reads n/6 pages of the book while Bharath reads n/4 pages of the book.\n\nLet after ‘t’ hours the number of pages remaining to be read by Ananth is twice that number of pages remaining to read by Bharath.",
null,
"⇒ 1 – t/6 = 2 – t/2\n\n⇒ t/3 = 1\n\n⇒ t = 3 hours\n\n*Answer can only contain numeric values\nQUESTION: 9\n\nIn a group of 11 people, 5 are wearing white shirt, 5 are wearing black shirt and one is wearing red shirt. Find the number of ways they can sit around a circular table so that no two people wearing same color shirt sit together.\n\nSolution:\n\nNumber of ways 5 people can sit around round table = (5-1)! = 4!",
null,
"Now 5 spots are created for other 5 people wearing black white shirt.\n\n∴ Total ways = 4! × 5!",
null,
"Now for each arrangement 10 spots are created for the person wearing Red shirt.\n\n∴ Total ways = 4!5!10 = 28800\n\nQUESTION: 10\n\nFind the probability of a number which is divisible by either 5 or 3 out of first 500 natural Odd Numbers.\n\nSolution:\n\nThe First 500 Natural Odd numbers are given as 1,3,5,7,…………….999.\n\nThe Numbers divisible by 3 from the above list are given as 3,9,15,21…………999\n\nThe above series is in Arithmetic progression,\n\nTn = a+(n−1)d\n\n999 = 3+(n3−1)6 ⇒ n3 = 167\n\nSo total Numbers which are divisible by 3 in the given list is, n3 = 167\n\nThe Numbers divisible by 5 from the above list are given as 55,15,25,35…………99\n\nThe above series is in Arithmetic progression,\n\nTn = a+(n−1)d\n\n995 = 5+(n5−1)10 ⇒ n5 = 100\n\nSo total Numbers which are divisible by 5 in the given list is, n3 = 100.\n\nThe Numbers divisible by 15 from the above list are given as 15,45,75,105…………975\n\nThe above series is in Arithmetic progression,\n\nTn = a+(n−1)d\n\n975 = 15+(n15−1)30 ⇒ n15 = 33\n\nSo total Numbers which are divisible by 3 in the given list is, n3 = 33\n\nSo Required Probability p = p(n3)+p(n5)−p(n15) =",
null,
"= 0.468\n\nQUESTION: 11\n\nConsider the following proposition:\n\n(p→q)∧(q→r)→(p→r)∨(r→p)\n\nAbove proposition is a\n\nSolution:",
null,
"≡ 1\n\nHence it is a tautology.\n\n*Answer can only contain numeric values\nQUESTION: 12\n\nGiven S = {a, b, c, d}.\n\nNumber of functions possible on S which are neither one-one nor onto is:\n\nSolution:\n\nTotal number of functions = 44 = 256\n\nNumber of functions which are either one -one or onto = 4! = 24\n\nNumber of functions which are neither one -one nor onto = 256 - 24 = 232\n\nQUESTION: 13\n\nConsider the following grammar:\n\nG = S → SS|ab|ba|aba|bab|ϵ\n\nWhich of the following string is not generated by above grammar?\n\nSolution:\n\nbaabbabb can not be generated using above grammar.\n\n*Answer can only contain numeric values\nQUESTION: 14\n\nThe following numbers are inserted into an empty binary search tree in the given order:\n\n10, 1, 3, 5, 11, 12, 6\n\nWhat is the height of the binary search tree?\n\nSolution:\n\nBST has sorted in-order traversal, 10-1-3-5-6 will be the longest path from root to leaf. Hence height of BST will be 4.\n\nQUESTION: 15\n\nA network with CSMA/CD protocol in the MAC layer is running at 1 Gbps over a 1 km cable with no repeaters. The signal speed in the cable is 2x108 m/sec. The minimum frame size for this network should be:\n\nSolution:\n\nS ≥ 2 × bandwidth × td ≥ 2 × 109 × 1000/2 × 108 ≥ 10000bits\n\nQUESTION: 16\n\nWhich of the following is an LL(1) conflict?\n\nSolution:\n\nFIRST/FIRST Conflict for LL(1)\n\nS -> E | E 'a'\n\nE -> 'b' | ε\n\nFIRST(E) = {'b', ε} and FIRST(E 'a') = {'b', 'a'}\n\nThus in LL(1) table, there is conflict under terminal 'b' of production rule S.\n\nS -> A 'a' 'b'\n\nA -> 'a' | ε\n\nThe FIRST set of A now is {'a', ε} and the FOLLOW set {'a'}.\n\nQUESTION: 17\n\nWhich of the following sorting algorithm will be worst choice to sort a linked list?\n\nSolution:\n\nTo sort a linked list, Merge sort is the best choice and Heap sort is impractical.\n\nQUESTION: 18\n\nThe set of values of p for which the roots of the equation 3x+ 2x + p(p – 1) = 0 are of opposite sign is\n\nSolution:\n\np(p−1) < 0, because product of roots is a negative number.\n\nThus p must be less than 1 and greater than 0\n\nQUESTION: 19\n\nWhich of the following operations is performed more efficiently by doubly linked list than by singly linked list?\n\nSolution:\n\nIf pointer to the node to be deleted is given, delete operation is more efficient in doubly linked list O(1) than singly linked list O(n), because to delete a node in singly listed list, pointer to the previous node is needed. To get this previous node, we have to traverse the list. But in doubly linked list we can get the previous node using previous pointer.\n\n*Answer can only contain numeric values\nQUESTION: 20\n\nSuppose that the maximum transmit window size for a TCP connection is 12000 bytes. Each packet consists of 2000 bytes. At some point of time, the connection is in slow-start phase with a current transmit window of 4000 bytes. Subsequently, the transmitter receives two acknowledgements. Assume that no packets are lost and there are no time-outs. What is the maximum possible value of the current transmit window? (in bytes).\n\nSolution:\n\nIn slow-start phase, for each ACK, the sender increases the current transmit window by Maximum Segment Size (MSS). As we are given a packet consists of 2000 bytes and that can be taken as MSS. 
So, after two ACKs, current transmit window:\n\n= 4000 + 2000 + 2000\n\n= 8000\n\nQUESTION: 21\n\nWhich of the following is true for an undirected graph G = (V,E) such that every vertex has degree greater than 1?\n\nSolution:\n\nFor making sure that the graph is having every vertex degree greater than 1 have to add the edge that will repeat at least one vertex causing cycle.\n\nQUESTION: 22\n\nConsider set S = {The set of all rational numbers including zero} and operation\n\nI. addition II. Multiplication III. Division\n\nUnder which operation the set will form a group?\n\nSolution:\n\nMultiplication will not form a group because inverse of zero does not exist.\n\nDivision operation will not form a group because n/0 is not defined.\n\nQUESTION: 23\n\nConsider a DRAM chip connected to a channel having 8 memory banks. A bank contain 16K rows. DRAM is refreshed once per 64ms. And refreshing operation takes 60ns(nano second). How many refresh operation performed in 1 sec?( 1 sec ≈ 1.024 sec).\n\nSolution:\n\nTotal number of rows in the bank is 214. Total number of banks is 8. And in 1.024 sec total number of refresh operation performed by the controller is 1024/64 =16.\n\nTotal number of refresh operation performed by the controller is\n\n214 × 8 × 16 = 221\n\n*Answer can only contain numeric values\nQUESTION: 24\n\nThe minimum number of bits required to represent - 64 in 2’s complement representation is _________.\n\nSolution:\n\nWhenever a number is in 2n form then the minimal 2’s complement representation is 1 followed by n number of zeros.\n\nSo, – 64 = – 26 = 1000000\n\nQUESTION: 25\n\nWhat is the broadcast address of the first sub network in a class-C network assigned with an IP address - 207.35.7.0 (The subnet mask is 255.255.255.248)\n\nSolution:\n\nnegate subnet mask and perform logical or operation with given IP address.\n\nQUESTION: 26\n\nThe following C function takes a simply-linked list as input argument.\n\ntypedef struct node {\n\nint value;\n\nstruct node *next;\n\n}\n\nnode *p;\n\nint a;\n\nreturn 0;\n\nwhile(p→next!= NULL){\n\np=p→next;\n\nif(p→value < a)\n\na = p→value;\n\n}\n\nreturn a;\n\nWhich of the following is true for the above function?\n\nSolution:\n\nThe function returns minimum value from the list\n\nQUESTION: 27\n\nWhich of the following languages is/are context-free?\n\nI. {anbmcndm│n,m≥0}\n\nII. {anbnbmam│n,m≥0}\n\nIII. {anbmcn│n,m≥0}\n\nSolution:\n\nI is context sensitive language\n\nII and III are CFLs.\n\nQUESTION: 28\n\nWhich of the following regular expression describes the language\n\nL = {w is in (1 + 0)* │ w ends with 10}?\n\nI. (1*0*)*10\n\nII. (0 + 1(1 + 01)*00)*1(1 + 01)*0\n\nIII. (1*0*)*100*\n\nSolution:\n\nIII can not generate L rest can.\n\nQUESTION: 29\n\nConsider the following program:\n\na = 1;\n\nb = 2;\n\nc = 3;\n\nc = a + b;\n\nb = a + c;\n\nd = b + c;\n\ne = d + a;\n\nAssuming that all operations take their operands from registers, what is the minimum number of registers needed to execute this program without spilling?\n\nSolution:\n\nTo avoid spilling we have to use 3 register\n\na = 1;\n\nb = 2;\n\nc = 3;\n\nc = a + b; r1 = c, r1 = b, r2 = a\n\nb = a + c r2 = a, r1 = b, r3 = c, because we will using the value a,b,c in future\n\nd = b + c;\n\ne = d + a;\n\nreturn d + b;\n\nQUESTION: 30\n\nIn a data link layer, bit stuffing is used in transferring data. 
If the sent data after bit stuffing is 001111101101011111001111 and the flag is 01111110, then what will be the data after destuffing?\n\nSolution:\n\nAs the flag is 01111110, we have to delete 0 after every five consecutive 1’s.\n\n*Answer can only contain numeric values\nQUESTION: 31\n\nA channel has 10 Kbps bit rate using stop and wait protocol and has propagation delay of 4 ms. For a frame with 400 bit, what will be the efficiency (in percentage)?\n\nSolution:\n\ntransmission time for 400 bits = 40ms\n\nutilization =",
null,
"",
null,
"= 83%\n\nQUESTION: 32\n\nWhich of the following set is empty?\n\nSolution:\n\nThere does not exist any function f(n) such that f(n) ϵ o(g(n)) and f(n)ϵ ω(g(n)) at the same time.\n\nBoth the sets in option(A) are mutually exclusive.\n\nQUESTION: 33\n\nfoo (int n, int A[ ])\n\n{\n\nint i = 0, j = 0\n\nfor (i = 0, i ≤ n, i ++)\n\nwhile (j<n && A[i] < A[j])\n\nj++;\n\n}\n\nTime complexity of the function foo () is_____\n\nSolution:\n\n‘j’ is initialized just once in the given program. Also, the loop is iterated for 'n' times making the time complexity as O(n).\n\nQUESTION: 34\n\nGive the breadth first traversal of the given graph below starting from vertex A.",
null,
"Solution:\n\nThe algorithm for BFS traversal of a graph is like:\n\nFirst the starting vertex us enqueued. Then, the following steps are repeated until the queue is empty:\n\n1. Remove the vertex at the head of the queue and call it ‘vertex’.\n\n2. Visit ‘vertex’\n\n3. Follow each edge emanating from ‘vertex’ to find the adjacent vertex and call it ‘to’. If ‘to’ has not already been put into the queue, enqueue it.\n\nAccording to the above algorithm, the BFS traversal of the above graph.",
null,
"*Answer can only contain numeric values\nQUESTION: 35\n\nSum of eigen values of matrix",
null,
"is ________\n\nSolution:\n\nSum of eigen values of a matrix = trace of the matrix\n\nAs trace of the given matrix is = 4 + 1 - 5 = 0.\n\nHence sum of the eigen values = 0.\n\nQUESTION: 36\n\nThe essential prime implicates of F(A,B,C,D) = ∑m(0,1,5,7,10,14,15) are\n\nSolution:\n\nF(A,B,C,D) = ∑m(0,1,5,7,10,14,15)",
null,
"To cover 0, 10 min terms,",
null,
"are essential.\n\n*Answer can only contain numeric values\nQUESTION: 37\n\nThe modulus of Following asynchronous counter is",
null,
"Solution:\n\nFor QQQQ0 = 1 0 0 1\n\nThe output of OR gate = 0 hence all Flip – Flops are cleared\n\nSo the counting sequence is as follows\n\n0000, 0001,0010 …… 1000, 0000, 0001…………..\n\n∴ Modulus of counter = 9\n\nQUESTION: 38\n\nPower of a is computed using following method:\n\nan = a.an-1 when n is odd\n\nan = (an/2)2 when n is even\n\nNumber of multiplications required to compute a28 is:\n\nSolution:\n\na2 = a ∗ a\n\na3 = a ∗ a2\n\na6 = a3 ∗ a3\n\na7 = a ∗ a6\n\na14 = a7 ∗ a7\n\na28 = a14 ∗ a14\n\nQUESTION: 39\n\nConsider the following code which is used to detect the loop in the linked list. Find the missing statement A?\n\nwhile(A)\n\n{\n\nif(p == q) exit(0) //loop detected\n\np = p->next;\n\nq = (q->next)?(q->next->next) : q->next;\n\n}\n\nSolution:\n\nq will go end of list first and will accept\n\nq = NULL and will terminate.\n\nOtherwise p and q both get a valid pointer value and keep moving.\n\nQUESTION: 40\n\nConsider the following propositions:\n\n(i) (~p → ~q) → (q → p)\n\n(ii) [p ∧ (p∧~q∨~r∧s ∨ q∧r∧s ∨p) →~r] → (r→~p)\n\n(iii) (p → q) ∧ (~p → ~q) → (p ↔ q)\n\nWhich of the above proposition/s is/are a tautology?\n\nSolution:\n\n(i) Tautology: Because contrapositive of an implication always follows from it.\n\n(ii) Tautology: LHS is nothing but p→~r which in turn implies RHS.\n\n(iii) Tautology: (p → q) ∧ (q → p) ≡ (p ↔ q)\n\n*Answer can only contain numeric values\nQUESTION: 41\n\nOn a TCP connection, current congestion window size is 4 KB. The window size advertised by the receiver is 6 KB. The last byte sent by the sender is LastByteSent = 10240 and the last byte acknowledged by the receiver is LastByteAcked = 8192. The current window size at the sender is___(in bytes)\n\nSolution:\n\nAs Receiver window size is 6KB and network congestion window size is 4KB so sender has to send only 4KB window.\n\nSender window size = min(Window_congestion,Window_adverstised) = min(4KB,6KB) = 4KB\n\nGo through this guide.\n\n*Answer can only contain numeric values\nQUESTION: 42\n\nThe following key values are inserted into a B+ tree in which order of the internal nodes is 3, and that of the leaf nodes is 2, in the sequence given below. The order of internal nodes is the maximum number of tree pointers in each node, and the order of leaf nodes is the maximum number of data items that can be stored in it. The B+ tree is initially empty.\n\n10, 3, 6, 8, 4, 2, 1\n\nThe maximum number of times leaf nodes would get split up as a result of these insertions is___(Assume right biasing).\n\nSolution:\n\nUsing right biasing there will be three splits:\n\nfirst after inserting 6\n\nsecond after inserting 8\n\nthird after inserting 2.\n\nQUESTION: 43\n\nThe following C function takes a single-linked list of integers as a parameter and rearranges the elements of the list. The function is called with the list containing the integers 1, 2, 3, 4, 5, 6, 7 in the given order. What will be the contents of the list after function completes execution?\n\nstruct node {\n\nint value;\n\nstruct node *next;\n\n};\n\nvoid rearrange(struct node *list) {\n\nstruct node *p, *q;\n\nint temp;\n\nif (!list || !list -> next) return;\n\np = list; q = list -> next;\n\nwhile(q) {\n\ntemp = p -> value; p->value = q -> value;\n\nq->value = temp; p = q ->next;\n\nq = p? 
p ->next : 0;\n\n}\n\n}\n\nSolution:\n\nThe loop is interchanging the adjacent elements of the list.\n\nBut because of (p = q -> next;) after each interchange, next interchange starts from the unchanged elements.\n\n1st iteration 1 2 3 4 5 6 7 ------> 2 1 3 4 5 6 7\n\n2nd iteration 2 1 4 3 5 6 7\n\n3rd iteration 2 1 4 3 6 5 7\n\np will point to 7, and q=p true, hence q = p->next = null.\n\nQUESTION: 44\n\n#include<stdio.h>\n\nvoid function(int num){\n\nif(num>0) {\n\nfunction(--num);\n\nprintf(\"%d\",num);\n\nfunction(--num);\n\n}\n\n}\n\nint main() {\n\nfunction(3);\n\nreturn 0;\n\n}\n\nOutput of the above function is:\n\nSolution:\n\nThe recursion tree for the given program is given below:",
null,
"Expanding the recursion carefully and moving from top to down and left to write, we can see that it will print 0120.\n\n*Answer can only contain numeric values\nQUESTION: 45\n\nint fun(int num)\n\n{\n\nint result =0;\n\nif(num <= 1)\n\nreturn 1;\n\nelse\n\n{\n\nfor(i=num;i>=1;i--)\n\nresult+=fun(i/3);\n\n}\n\nreturn result;\n\n}\n\nValue returned by fun(6) is:\n\nSolution:\n\nf(0)=1\n\nf(1)=1\n\nf(2)=2\n\ni=6 result=2\n\ni=5 result=3\n\ni=4 result=4\n\ni=3 result=5\n\ni=2 result=6\n\ni=1 result=7\n\nQUESTION: 46\n\nIn a box of red and blue balls 2% of red and 3% of blue balls are broken. There are 30% blue balls in the box. If a ball is selected and is broken, the probability that the ball is blue is:\n\nSolution:\n\nE1 = Ball is red\n\nE2 = Ball is blue\n\nA = Ball is broken\n\nGiven P (E1) = 70/100 = 0.7, P (E2) = 30/100 = 0.3\n\nP (A/E1) = 2/100 = 0.02, P(A/E2) = 3/100 = 0.03\n\nBy Baye’s theorem",
null,
"",
null,
"= 0.39≅0.4\n\n*Answer can only contain numeric values\nQUESTION: 47\n\nConsider a 32 bit processor which is implemented with 16 bit external data bus and driven by 50MHz input clock. (Assume processor has a bus cycle whose minimum duration is 4 input clock cycles). Then the maximum data transfer rate this processor can accomplish is___________(in MBps)\n\nSolution:\n\nClock cycle =",
null,
"= 20ns\n\nBus cycle = 4 × 20 = 80ns\n\n2 Bytes are transferred (16 – bit data) for every 80 ns.\n\nThus the transfer rate =",
null,
"= 25MBps\n\nQUESTION: 48\n\nLet us consider four processes P1, P2, P3, and P4. There are three resources available R1, R2 and R3 with the following number of instances\n\nR1:9\n\nR2:3\n\nR3:6\n\nThe MAX_REQD matrix: the matrix showing the maximum number of resources required by each of the processes to finish is given below:",
null,
"The CUR_HOLDING matrix: the matrix showing the resources currently held by the processes.",
null,
"What is the safe sequence using Banker’s algorithm?\n\nSolution:",
null,
"number of available resources: 1, 1, 2\n\nSo, safe sequence is <P2, P3, P4, P1>\n\nQUESTION: 49\n\nConsider a relation with seven attributes ABCDEGH. The following dependencies are given .\n\nAB -> C, AC -> B, AD -> E, B -> D, BC -> A, E -> G.\n\nWhat is the key ?\n\nSolution:\n\nAC -> B So, (AC)+ = {A,C, B,}\n\nB -> D (AC)+ = {A,C, B, D}\n\nAD -> E (AC)+ = {A,C, B, D, E}\n\nE -> G (AC)+ = {A,C, B, D, E, G}\n\nSo to get H we have to include H into (AC)\n\nSo the key is ACH.\n\nQUESTION: 50\n\nConsider a synchronous instruction pipeline with 4 stages (S1, S2, S3 & S4 ). The delay of each stages is 1.2ns, 1.3ns, 1.5ns and 2ns. The pipeline also has buffer placed between the stages. The delay for the buffer is fixed for all and requires 1ns. How much time is required to execute 10000 instructions?\n\nSolution:\n\nSince the pipeline is synchronous each stage will take equal delay and that delay will be 2ns (The stage taking longer time)\n\nSince the buffer delay is 1 the first instruction will come out in 12ns\n\n2 + 1 + 2 + 1 + 2 + 1 + 2 + 1 = 12ns\n\nRest 9999 instruction will be completed in every 3ns that is 9999´ 3ns = 29997ns\n\nQUESTION: 51\n\nWhich one of the following is a key factor for preferring B-trees to binary search trees for indexing database relations?\n\nSolution:\n\nA disk block contains a fairly large number of keys. Unlike BST where each node contains only one key, B-Tree is designed to contain a large number of keys so that tree height is small.\n\nQUESTION: 52\n\nAll page frames are initially empty, and a process is allowed three page frames in memory, and references its pages in the order 1,2,3,2,4,5,2,3,2,2,4,1,5.\n\nIf the page replacement policy is optimal replacement policy, then page faults caused by this process is,\n\nSolution:\n\nOptimal replacement policy looks forward in time to see which frame to replace on a page fault. Thus it is not a real replacement algorithm. It gives us a reference of number of frame for a given static frame access sequence.",
null,
"2 HIT (page number 2 already in memory so there is no page faults.)",
null,
"2 HIT\n\n3 HIT\n\n2 HIT\n\n2 HIT",
null,
"5 HIT\n\nSo there are 7 page faults\n\n*Answer can only contain numeric values\nQUESTION: 53\n\nA Btree index is to be built on the studID attributes of the relation student assume that all StudID are of 10 bytes and pointer is of 5 bytes and block size is of 512 byte. Given this scenario, then the best choice of the degree of Btree is _______.\n\nSolution:\n\nFor Btree.\n\n5 x + 10(x – 1) < 512\n\n5x + 10x – 10 < 512\n\n15x < 522\n\nx = 34 (approx)\n\nQUESTION: 54\n\nConsider the following cooperating process using semaphore S and T.",
null,
"Which of the following is true?\n\nSolution:\n\nfor S = 1 and T = 0 P1 will first use the resource A, B\n\nQUESTION: 55\n\nConsider a memory is consists of 1024 frames (0 to 1023), each frame contains 64B and byte addressable. An array of size A is stored in the memory. A is at frame 1 byte 0. What is the address of A?\n\nSolution:\n\nSince the first element A is at frame 1 and byte 0\n\nBefore reaching A the array element A element has been stored\n\nSo total 60 complete row already been stored containing 60 ´ 512 + 100 = 30820 elements\n\nNumber of blocks required is 30820/64 = 481.5625.c\n\nSo we required total 481 complete block so address will go on 482nd block.\n\n*Answer can only contain numeric values\nQUESTION: 56\n\nConsider a pipelined system with four stages: IF, ID, EX, WB.\n\nFollowing chart shows the clock cycles required by instruction to complete each stage.",
null,
"How many clock cycles are required to complete the instructions?\n\nSolution:",
null,
"It requires 15 clock cycles.\n\nQUESTION: 57\n\nApply Dijkstra’s algorithm for the following graph:",
null,
"Initially the set S contains the vertex A, i.e. S = {A}.\n\nFinally, in which of the following order the vertices will be included in S, if array P holds the shortest distance from source to each vertex, then what will be P[A - E]?\n\nSolution:\n\nSource vertex is A. From A smallest distance is D and it is 5\n\nThe next smallest is from D to E, so the total distance from A to E is 7\n\nThe next smallest from D to B, so the total distance from A to B is 8\n\nThe next smallest from B to C, so the total distance from A to C is 9\n\nQUESTION: 58\n\nConsider the schema:\n\nFlight(Flight_no, from, to, dist, depart, arrive)\n\nAircraft(aid, aname, cruising_range)\n\nCertified(pid, aid)\n\nPilot(pid, pname, salary)\n\nSelect P.pname\n\nFrom Aircraft A, Certified C, Pilot P\n\nWhere A.aid = C.aid and P.pid = C.pid and A.cruising_range > 3000\n\nand P.pid not in (Select C2.pid\n\nFrom Certified C2, Aircraft A2\n\nWhere C2.aid = A2.aid and A2.aname = ‘Boeing’)\n\nThe above query returns\n\nSolution:\n\nP.pid not in (Select C2.pid\n\nFrom Certified C2, Aircraft A2\n\nWhere C2.aid = A2.aid and A2.aname = ‘Boeing’)\n\nThe above subquery returns the pid of those pilots who are not certified on any Boeing.\n\nSelect P.pname\n\nFrom From Aircraft A, Certified C, Pilot P\n\nWhere A.aid = C.aid and P.pid = C.pid and A.cruising_range > 3000\n\nthe above subquery returns the name of the pilots who can operate with a range greater than 3000 Km.\n\n*Answer can only contain numeric values\nQUESTION: 59\n\nAssume X is a random variable representing the output of a fair dice.\n\nAlso\n\nY = max (2, min (5, X))\n\nWhat is the variance of Y ?\n\nSolution:",
null,
"",
null,
"E(Y) =Σ Y.P(Y) = 3.5",
null,
"",
null,
"= (1.5)2(2/6) + (0.5)2(1/6) + (0.5)2(1/6) + (1.5)2(2/6) = 1.583\n\nQUESTION: 60\n\nConsider a UNIX-like file system implemented with i-nodes that resides on a disk of size 512 GB. Each i-node has a total of 15 block addresses consisting of direct and indirect block addresses.\n\nSuppose the implementation wants to support file size up to 1 GB using only direct block addresses and single indirect block address. At least how many of the 15 block addresses should be used as single indirect block address.\n\nSolution:\n\nEach i-node has a total 15 block addresses which means 15 bit block addresses.\nSo each i-node size = (2^15)B.\n\ntotal no. of i-node = Total Size / I-node Size\n\n= 512 GB/ (2^15)B\n\n= (2^39)/(2^15)\n\n= 2^24\n\ni.e. 24’ bits are required to address the entire disk space -> 3 bytes\nBlock Entry Size = 3 Bytes\n\nthe number of i-node or blocks required for 1 GB file\n\n= File Size/ I-node Size\n\n= 1 GB/ (2^15)B\n\n= (2^30)/(2^15)\n\n=2^15\n\nsize required to address 2^15 blocks = No_of_blocks * Block_Entry_Size = (2^15)*3 Bytes\n\nsize required to address 1 block:\n\n= (2^15)*3 / (2^15) = 3 block addresses\n\nQUESTION: 61\n\nConsider the below grammar\n\nS → AaB\n\nA → ab | a\n\nB → b\n\nThe above grammar is\n\nSolution:",
null,
"",
null,
"It is not LR(0), but SLR(1), CLR(1).\n\nQUESTION: 62\n\nMatch the following :",
null,
"Solution:\n\nSelf-explanation.\n\nQUESTION: 63\n\nA cache aware sorting algorithm sorts an array of size 2k with each key of size 4 Bytes. The size of the cache memory is 128 Bytes and algorithm is the combination of merge sort and insertion sort to exploit the locality of reference for the cache memory(i.e. will use insertion sort when problem size equals cache memory).\n\nWorst case running time of the algorithm is:\n\nSolution:\n\nIt will use merge sort at upper level and when size of the subarray will reach the size of the cache memory if will start to use Insertion sort,\n\n∵ cache can hold 32 keys",
null,
"",
null,
"∴ Merge sort in worst case will take time = height × number of elements in the array\n\n= log2(k-5)× 2k\n\nAfter that insertion sort will take (25)for each node",
null,
"QUESTION: 64\n\nConsider the languages L1={0m1m0n|m,n>0}, L2={0m1n0n|m,n>0} and L = L1 ∪ L2 over the alphabet {0,1}. Which of the following statements is/are true?\n\nI. L is unambiguous.\n\nII. L is inherently ambiguous.\n\nIII. L can be recognized by a PDA..\n\nIV. L can be recognized by a DPDA.\n\nSolution:\n\nL1 and L2 are context free languages. Hence, L is also context free, recognized by some\n\npush-down automaton, because CFL's are closed under Union.\n\nThe subset {0n1n0n|n⟩0} of L is the set of strings that has two distinct derivations:\n\none coming from L1, and the other coming from L2. It turns out that every grammar\n\nthat describes L will remain ambiguous for at least some strings of the form 0n0n0n\n\nHence, L is inherently ambiguous.\n\n*Answer can only contain numeric values\nQUESTION: 65\n\nDetermine the size of the ROM of required to implement the following –\n\nF(A,B,C) = ∑m(1,2,4,7)\n\nF2(A,B,C) =",
null,
"Solution:",
null,
"3i/p,s\n\n⇒ Size of ROM =2X×y\n\nWhere x = no. of inputs\n\ny = no. of outputs\n\n⇒ Size = 23 × 2 = 16"
]
| [
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_5467032c-3563-4af7-ae45-14ac6df50405_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_c74915c5-0e3a-40c3-8a3b-b36c6284ee81_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_326f64d0-1ab0-4077-8441-7d8352fd5843_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_12ff0d92-9d02-4e12-af9c-86a6748801bc_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_e1b4242c-f564-465f-9aaa-75fb7a3c63fe_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_756eba17-e7c0-4171-95fa-7561326ebc7a_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_c9da66c3-948b-45a3-8497-1a0582d570de_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_72f3b9aa-0024-4230-aa63-e9d9006a16d5_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_5010570f-9dab-45d9-b137-d75397e4a529_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_97334df4-b8fa-43fa-9058-b1889246f605_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_7235f62b-6708-4a34-a94d-25f43221c853_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_7d5767a3-3650-4012-95f9-fd3f558b5313_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_d03ca70e-f572-4019-bc8b-e4b212d90049_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_0c3c0397-3d83-47ee-a1e1-6b86668eb5dd_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_66c5451a-88c5-460d-bd23-a9e8863007df_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_e62dab86-dace-4ef4-b102-2a5f3e8f2efe_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_85546089-ac6c-4dbc-92bc-149e27c35f41_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_13460a4b-54c3-4584-a380-70184509518e_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_3533d70e-de22-475f-a501-5680c3900e01_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_609cc237-6112-4f87-84f8-dca47fda5f47_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_8b985397-2055-4814-806f-5f48c9ecaa0e_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_6e303b30-5b24-460b-8286-4909c113873b_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_04ab0a48-10f2-42d9-ab34-6568b7247118_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_9e477dd3-9d36-4d72-bfc0-572a613cefbe_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_6ab94005-d36e-4fa4-8e77-75a3dffd47c8_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_9b55eb54-2ceb-4440-a290-358ee7c50ada_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_b9100d8a-22a8-47e3-be9a-9504c1cf3353_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_0adba149-88fe-4f9a-9e60-eedf8b222065_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_9f255919-5969-42fb-b797-6a80dff77213_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_b436c4eb-bc8c-40fd-9e8c-03eb60c8b3cc_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_acec53be-4827-4574-804f-4117ce42e6fb_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_312b4497-c11d-4dcc-a044-0991664e58af_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_fecd3a29-55b3-49ea-baa2-13baa2bbc8f4_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_7973eb5c-eb1f-4c56-9a2c-628d50a1b42c_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_814a62f6-e5fd-422e-bd78-636f835c5c05_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_11b388db-360d-46fc-9617-e11f0c00f70c_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_d4761621-7ed9-4a9f-87fd-8c5180f56b12_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_5e0a9a7a-cc62-460d-b23e-2acc5a992527_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_2bcfef41-d069-4a9b-8160-4c64db90e49f_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_99eadbe5-883b-4c31-a3d1-1e4178275a8b_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_de0a05e2-c04b-4465-a40f-e828d1bcf909_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_4a701527-7b02-4b85-92d3-76a579ced9df_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_57dc4c82-e3c0-4f75-928a-cd46586d41aa_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_192c23ce-6e53-4cbf-ba27-780d637daa83_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_0b425d89-eda8-43eb-b778-ff72d4bd2940_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_f007fdd8-ca9a-4219-a293-ef4c23afa18a_lg.png",
null,
"https://cdn3.edurev.in/ApplicationImages/Temp/467414_d347ac86-1ba2-40d6-adb2-702b84c6ac07_lg.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8856701,"math_prob":0.98067176,"size":26594,"snap":"2021-31-2021-39","text_gpt3_token_len":7675,"char_repetition_ratio":0.119819485,"word_repetition_ratio":0.061623544,"special_character_ratio":0.30446717,"punctuation_ratio":0.11435016,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9972297,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-04T16:17:28Z\",\"WARC-Record-ID\":\"<urn:uuid:fe2fd663-798e-4e49-9aa2-5037add6b4a8>\",\"Content-Length\":\"607811\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c3705b6e-4a2c-4d1f-a4ab-d8acff288ac5>\",\"WARC-Concurrent-To\":\"<urn:uuid:7fd69b3c-d8db-43e8-9cb2-efa94d1a2dd9>\",\"WARC-IP-Address\":\"35.198.207.72\",\"WARC-Target-URI\":\"https://edurev.in/course/quiz/attempt/-1_Gate-Mock-Test-9-CSIT/323b68b9-1b6d-4388-a34e-4fdd09de220f\",\"WARC-Payload-Digest\":\"sha1:TW32CNOOD7TPD4LBDKOIZWH6XM34BQSF\",\"WARC-Block-Digest\":\"sha1:X24HRYDOQG4SZZTTVIRVHV4T2KT6NHMA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154878.27_warc_CC-MAIN-20210804142918-20210804172918-00486.warc.gz\"}"} |
https://www.flexiprep.com/Expected-Exam-Questions/Mathematics/Class-9/NCERT-Class-9-Maths-Coordinate-Geometry-Assessment-CBSE-Board-Sample-Problems.html | [
"# NCERT Class 9 Physics Coordinate Geometry Assessment CBSE Board Sample Problems (For CBSE, ICSE, IAS, NET, NRA 2023)\n\nGet top class preparation for CBSE/Class-9 right from your home: get questions, notes, tests, video lectures and more- for all subjects of CBSE/Class-9.\n\n1. State the quadrant for each of these points in Cartesian plane\n\na)\n\nb)\n\nc)\n\nd)\n\ne)\n\nf) (\n\ng)\n\nSolution\n\nWe know the quadrant signs are given below\n\n• We require two perpendicular axes to locate a point in the plane. One of them is horizontal and other is Vertical\n• The plane is called Cartesian plane and axis are called the coordinates axis\n• The horizontal axis is called x-axis and Vertical axis is called Y-axis\n• The point of intersection of axis is called origin.\n• The distance of a point from y axis is called x — coordinate or abscissa and the distance of the point from x — axis is called y — coordinate or Ordinate\n• The x-coordinate and y — coordinate of the point in the plane is written as (X, y) for point and is called the coordinates of the point\n• The Origin has zero distance from both x-axis and y-axis so that its abscissa and ordinate both are zero. So the coordinate of the origin is (0,0)\n• A point on the x — axis has zero distance from x-axis so coordinate of any point on the x-axis will be (X, 0)\n• A point on the y — axis has zero distance from y-axis so coordinate of any point on the y-axis will be (0, Y)\n• The axes divide the Cartesian plane in to four parts. These Four parts are called the quadrants\n• The coordinates of the points in the four quadrants will have sign according to the below table\n\nSo we can easily find the quadrant for each of these points\n\n2. Plot the following points in the Cartesian plane\n\na)\n\nb)\n\nc)\n\nd)\n\nAlso find which of these three lie are collinear\n\n3) True or False statement\n\na) x — coordinate is positive in 1st and 3rd quadrants\n\nb) The (0,0) is the coordinate of origin\n\nC) The point (0,2) lies on y axis\n\nd) The ordinate of the point Q (2,3) is 2\n\ne) Abscissa of all points on y axis is zero\n\nf) The points P (2,3) and Q (-3,2) lie in the same quadrant\n\nSolution\n\na) False, x-axis is positive in I and IV quadrant\n\nb) True\n\nc) True\n\nd) False\n\ne) True\n\nf) False\n\nMultiple choice Questions\n\n4) The perpendicular distance of the point X (5,6) from X axis is\n\na) 5\n\nb) 6\n\nc) 4\n\nd) 1\n\nSolution (b)\n\n5. The perpendicular distance of the point X (2,3) from Y axis is\n\na) 2\n\nb) 3\n\nC) 5\n\nd) None of these\n\nSolution (a)\n\n6) The points (other than origin) whose abscissa and ordinates are same will lie in\n\nd) None of these\n\nSolution (a)\n\n7) The positive abscissa lies in which quadrants\n\na) I\n\nb) II\n\nc) III\n\nd) IV\n\nSolution (a) , (d)\n\n8) Ordinate of all the points on x-axis is\n\na) O\n\nb) 1\n\nC) 2\n\nd) Any number\n\nSolution (a)"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.83458304,"math_prob":0.99091226,"size":2886,"snap":"2022-27-2022-33","text_gpt3_token_len":842,"char_repetition_ratio":0.1943095,"word_repetition_ratio":0.056637168,"special_character_ratio":0.27477476,"punctuation_ratio":0.048739497,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99724525,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-08T09:31:23Z\",\"WARC-Record-ID\":\"<urn:uuid:8b34ad49-b752-4af1-8bfd-e245519d3a88>\",\"Content-Length\":\"19685\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ad2e8596-1e9f-4fab-955a-8bd4f02a96c9>\",\"WARC-Concurrent-To\":\"<urn:uuid:f32c93fa-3b60-43ff-a4e5-7f3a211ca2f6>\",\"WARC-IP-Address\":\"172.67.204.214\",\"WARC-Target-URI\":\"https://www.flexiprep.com/Expected-Exam-Questions/Mathematics/Class-9/NCERT-Class-9-Maths-Coordinate-Geometry-Assessment-CBSE-Board-Sample-Problems.html\",\"WARC-Payload-Digest\":\"sha1:XZGJXCH5QBGJ6CF4C6ZES7YECWKFHD3O\",\"WARC-Block-Digest\":\"sha1:3MTWPBU6XMZLKTRQNBIXJ2K5W57GFKQN\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882570793.14_warc_CC-MAIN-20220808092125-20220808122125-00523.warc.gz\"}"} |
https://us.sofatutor.com/mathematics/videos/what-makes-a-number-sentence-true-or-false | [
"# What Makes a Number Sentence True or False?",
null,
"",
null,
"Rate this video\n\nØ 4.0 / 3 ratings\n\nThe author",
null,
"Susan Sayfan\n\n## DescriptionWhat Makes a Number Sentence True or False?\n\nConstructing a number sentence from a real world situation is a critical step in using mathematics in everyday life. Checking the validity of a number sentence, which can either be an equation or an inequality, is an important skill to develop. The proper recognition and understanding of the relationship that exists among the components of a number sentence is the key in developing and strengthening one’s problem-solving and analytical skills. Learn what makes a number sentence true or false by helping Zippy find his dog, Strumpy, who ran into an old abandoned amusement park as he figures out the truth of the number sentences written on the electrical box and the ferris wheel’s control panel. Common Core Reference: CCSS.MATH.CONTENT.6.EE.B.5\n\n### TranscriptWhat Makes a Number Sentence True or False?\n\nLate one night, Zippy and his dog Strumpy are out for a walk when all of a sudden, Strumpy catches a scent and takes off! It looks like he ran into the old, abandoned amusement park. Hey...those look like...paw prints in the mud! But it's getting dark...so, it's hard to tell. If only there was a way to turn these lights on. That's when Zippy sees an electrical box. But there are some strange numbers and symbols written on it. It looks like in order to find his dog and solve this mystery. Zippy is going to need to figure out what makes a number sentence true or false. Take a look at the three number sentences written on the electrical box… Do you see anything strange about them? Some of these number sentences are definitely not true. But which ones? The first number sentence reads \"4 + 6 is less than 9\".\n\nBut 4 + 6 is equal to 10, and 10 is greater than 9, so this number sentence is definitely false. Next: 3 times 6 is equal to 18. This number sentence is true because the expressions on both sides of the equal sign are equal to 18. Finally, 22 is greater than or equal to 11 plus 12.\n\nWell, 11 plus 12 is equal to 23, so this last number sentence is false. But how does this help Zippy find Strumpy? Zippy flips the inequality symbols to make all three number sentences true and look there, the lights turned on! But wait a second. What’s that noise? That’s definitely not Strumpy! Let's get out of here! Back at the entrance he hears a bark coming from the direction of the Ferris Wheel. How did Strumpy get up there? Let´s get him down! Zippy finds the ferris wheel's control panel and, just like the electrical box, there are three number sentences written on it but these number sentences have variables. How can he figure out which ones are true and false if he doesn't have a number to substitute in for 'x'? Zippy finds a weird punch card with the number 10 written on it just lying next to the control panel. Maybe this will help him find Strumpy? Maybe if we substitute 10 in for 'x', we can make one of these number sentences true! Let's try it out. If we substitute 10 in for 'x' in the first sentence we get 25 minus 10 is less than 15. Since 15 is not less than 15, this number sentence is false. If we substitute 10 in the next sentence. We get 13 is greater than or equal to 10 plus 4, or 14.\n\n13 is not greater than or equal to 14, so this is false too. Last chance 70 divided by 10 is equal to 7. Yes! That's a true number sentence! Zippy inserts the number 10 card and is about to turn on the Ferris Wheel but wait a minute it looks like somebody wants to help... Oh. Well that was nice of him. 
Maybe that swamp monster isn’t so bad after all maybe the whole time, he just wanted to be swamp friends.\n\n## What Makes a Number Sentence True or False? Exercise\n\nWould you like to practice what you’ve just learned? Practice problems for this video What Makes a Number Sentence True or False? help you practice and recap your knowledge.\n• ### Identify the false number sentences.\n\nHints\n\nFirst simplify the given expressions.\n\nNext decide if the relation becomes true.\n\nJust look at the following examples:\n\n• $13>12$\n• $13\\ge12$\n• $13=13$\n\nThere is only one true sentence.\n\nSolution\n\nPoor Ziggy. He lost his dog. Unfortunately he has to solve different problems to turn the light on so that he can see where his dog has gone:\n\n1. $4+6<9$? Is this true or false? First we do addition on the left side of the relation sign: $4+6=10$. Sure, $10$ is not smaller than $9$ and thus this sentence is false.\n2. $3\\times 6=18$? What about this sentence? Again we calculate the product $3\\times 6$. The result is $18$. Thus this sentence is true.\n3. $22\\ge 11+12$? Again we do addition first $11+12=23$. But $22$ isn't greater or equal to $23$. So this sentence is false too.\n• ### Analyze the number sentences.\n\nHints\n\nFirst put $x=10$ into the sentence. For example, $10+x$ leads to $10+10=20$.\n\nNext decide if the relation is true or false. For this, remember\n\n• $20>15$\n• $20\\ge 20$\n• $20=20$\n\nPay attention: $20\\ge 20$, but $20\\not > 20$.\n\nSolution\n\nZiggy has to put $x=10$ in for each sentence to check the sentences.\n\n$\\mathbf{25-x<15}$ leads to $25-10=15<15$? This is false because $15=15$ but not less than $15$.\n\n$\\mathbf{13\\ge x+4}$ leads to $13\\ge 10+4=14$? But $13<14$ and so this sentence is false too.\n\n$\\mathbf{\\frac{70}x=7}$ leads to $\\frac{70}{10}=7$? Sure, this is true.\n\n• ### Determine which numbers make the number sentences true.\n\nHints\n\nPut each value for $x$ in the equation first. Check the inequality after.\n\nFive values for $x$ which satisfy the inequality.\n\nThere is only one value for $x$ which fulfills neither the equation nor the inequality.\n\nSolution\n\nNot again, Ziggy thinks. He has to solve another problem to get a coke and a taco for relaxation time.\n\nLet's start with the equation: $3\\times x=15$. We check it with each given value for $x$.\n\n• $x=1$ $\\rightarrow$ $3\\times 1=3\\not = 15$\n• $x=2$ $\\rightarrow$ $3\\times 2=6\\not = 15$\n• $x=3$ $\\rightarrow$ $3\\times 3=9\\not = 15$\n• $x=4$ $\\rightarrow$ $3\\times 4=12\\not = 15$\n• $x=5$ $\\rightarrow$ $3\\times 5=15$ ✓\n• $x=6$ $\\rightarrow$ $3\\times 6=18\\not = 15$\nSo we only have to check $x=5$ because this $x$ must fulfill both the equation as well as the inequality: $5+4=9\\le 9$ ✓\n\n• ### Find the mistakes.\n\nHints\n\nFirst calculate the given terms. For example $3\\times 7=21$.\n\nDifferentiate between $<$ and $\\le$. Just look at the following example:\n\n• $12\\le 12$ but\n• $12\\not < 12$.\n\n$\\le$ includes $=$. In the same way $\\ge$ includes $=$.\n\nSolution\n\nThe dogs are sleeping and Ziggy is still a little jumpy. What's the best you can do when you're jumpy: check relation sentences of course! So this is what Ziggy does.\n\n1. $3+7\\le10$? First he does the addition on the left side $3+7=10$. This sentence is true because $\\le$ includes $=$.\n2. $3+7<10$ looks quite familiar. What's the difference? Here we have the $<$ sign and, sure, $10$ is not less than $10$. Thus this sentence is false.\n3. $x+7\\le 10$. 
Here we have a variable $x$ for which we put $2$ in to get $2+7=9\\le10$. This is a true sentence.\n4. $3\\times 7>10$. Again we do the math first, i.e. $3\\times 7=21$. $21>10$? Absolutely, this is a true sentence.\n5. $3\\times 7-11>10$. This looks a little bit more complicated. First we determine the left side to get: $3\\times 7-11=21-11=10$. But $10$ isn't greater than $10$ itself. This sentence is false.\n6. $7-3\\ge 5$? The difference is $7-3=4$ and $4$ is less than $5$. This sentence is false too.\n• ### Categorize the different kinds of number sentences.\n\nHints\n\nYou have to assign two sentences to each relation sign.\n\n$<$ means \"less than\" and $>$ means \"greater than\".\n\nJust read the sentences from left to the right and not the other way round.\n\nSolution\n\nIn each of the following sentences the relation sign is highlighted, so you can recognize it directly:\n\n1. $4+6 \\mathbf{\\color{#669900}{<}} 9$\n2. $25-x \\mathbf{\\color{#669900}{<}} 15$\n3. $22 \\mathbf{\\color{#669900}{>}} 11+12$\n4. $13 \\mathbf{\\color{#669900}{>}} x+4$\n5. $3\\times 6 \\mathbf{\\color{#669900}{=}} 18$\n6. $70\\div x\\mathbf{\\color{#669900}{=}} 7$\n• ### Determine the relation sign to make the number sentence true.\n\nHints\n\nThe relation sign isn't unique. You can use different ones to make the sentence true.\n\nJust look at the following example:\n\n$3$ ___ $4$.\n\nHere you can put in $<$ as well as $\\le$ in place of the ___ to make this sentence true.\n\nSimplify the terms as much as possible first.\n\nSolution\n\nFirst we have $3\\times (5+4)$ ___ $30-6$:\n\nSimplify the given expression using PEMDAS:\n\n• $3\\times (5+4)=3\\times )=27$\n• $30-6=24$\nThus we have $27>24$ or $27\\ge 24$.\n\nNext we have $3\\times 5+4$ ___ $60\\div 2$:\n\n• $3\\times 5+4=15+4=19$\n• $60\\div 2=20$\nSo we get $19<20$ or $19\\le 20$.\n\nThen we have $3\\times 5-6$ ___ $27\\div(2+1)$\n\n• $3\\times 5-6=15-6=9$\n• $27\\div(2+1)=27\\div 3=9$\nTogether we can conclude $9=9$ as well as $9\\le 9$ as well as $9\\ge 9$.\n\nFinally we have $3\\times 5-6$ ___ $27\\div(3\\times 3)$:\n\n• $3\\times 5-6=15-6=9$\n• $27\\div(3\\times 3)=27\\div9=3$\nThis gives us $9>3$ as well as $9\\ge 3$."
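These true/false checks are easy to verify mechanically. Below is a small Python sketch (not part of the sofatutor material) that evaluates the number sentences from the video:

```python
import operator

# Map each relation symbol to the corresponding comparison function.
RELATIONS = {"<": operator.lt, "<=": operator.le, "=": operator.eq,
             ">=": operator.ge, ">": operator.gt}

def is_true(left, relation, right):
    """Return True when the number sentence 'left relation right' holds."""
    return RELATIONS[relation](left, right)

# The three sentences on the electrical box:
print(is_true(4 + 6, "<", 9))        # False, since 10 is not less than 9
print(is_true(3 * 6, "=", 18))       # True
print(is_true(22, ">=", 11 + 12))    # False, since 22 < 23

# The control-panel sentences with x = 10 substituted in:
x = 10
print(is_true(25 - x, "<", 15))      # False: 15 is not less than 15
print(is_true(13, ">=", x + 4))      # False: 13 < 14
print(is_true(70 / x, "=", 7))       # True
```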
]
| [
null,
"https://d1u2r2pnzqmal.cloudfront.net/videos/pictures/21566/normal/US21566.jpg",
null,
"https://dkckbwr4t7ug6.cloudfront.net/assets/application/videos/visitors/exercise_placeholder-d03f1e82c2fdfd1f690d75166c0c923692831cc73444edcffe26306d0b8f19ae.png",
null,
"https://dkckbwr4t7ug6.cloudfront.net/assets/application/layouts/lazy_load_placeholder-131eb55a8b4e203b5c63caa4f2fd5d218ba8ff4bb32caa6f6e055df07beb4845.svg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8993034,"math_prob":0.998934,"size":8510,"snap":"2021-43-2021-49","text_gpt3_token_len":2445,"char_repetition_ratio":0.14907125,"word_repetition_ratio":0.042062417,"special_character_ratio":0.31927145,"punctuation_ratio":0.11270718,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9995567,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,3,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-24T16:32:39Z\",\"WARC-Record-ID\":\"<urn:uuid:bff0731d-e6ef-4681-b7da-789a693854b5>\",\"Content-Length\":\"126179\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dfad9f0f-f3eb-429e-8787-9b17c2dcf596>\",\"WARC-Concurrent-To\":\"<urn:uuid:4f9e2c13-6470-4b39-8ea7-bedfb86a7407>\",\"WARC-IP-Address\":\"3.121.149.91\",\"WARC-Target-URI\":\"https://us.sofatutor.com/mathematics/videos/what-makes-a-number-sentence-true-or-false\",\"WARC-Payload-Digest\":\"sha1:O6LT2GGSHKB7NAIVYY22ADG5JPBCEEW5\",\"WARC-Block-Digest\":\"sha1:FY6OVISO3LMHGVEYAGBASRNX7RJSY5EV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323586043.75_warc_CC-MAIN-20211024142824-20211024172824-00278.warc.gz\"}"} |
https://numberworld.info/31202013 | [
"# Number 31202013\n\n### Properties of number 31202013\n\nCross Sum:\nFactorization:\nDivisors:\nCount of divisors:\nSum of divisors:\nPrime number?\nNo\nFibonacci number?\nNo\nBell Number?\nNo\nCatalan Number?\nNo\nBase 2 (Binary):\nBase 3 (Ternary):\nBase 4 (Quaternary):\nBase 5 (Quintal):\nBase 8 (Octal):\nBase 32:\nto6mt\nsin(31202013)\n-0.60477048900097\ncos(31202013)\n-0.79639980891103\ntan(31202013)\n0.75938050490986\nln(31202013)\n17.255993169929\nlg(31202013)\n7.4941826134605\nsqrt(31202013)\n5585.8762070064\nSquare(31202013)\n\n### Number Look Up\n\nLook Up\n\n31202013 which is pronounced (thirty-one million two hundred two thousand thirteen) is a impressive number. The cross sum of 31202013 is 12. If you factorisate the number 31202013 you will get these result 3 * 109 * 95419. The figure 31202013 has 8 divisors ( 1, 3, 109, 327, 95419, 286257, 10400671, 31202013 ) whith a sum of 41984800. The figure 31202013 is not a prime number. The figure 31202013 is not a fibonacci number. 31202013 is not a Bell Number. 31202013 is not a Catalan Number. The convertion of 31202013 to base 2 (Binary) is 1110111000001101011011101. The convertion of 31202013 to base 3 (Ternary) is 2011201020010010. The convertion of 31202013 to base 4 (Quaternary) is 1313001223131. The convertion of 31202013 to base 5 (Quintal) is 30441431023. The convertion of 31202013 to base 8 (Octal) is 167015335. The convertion of 31202013 to base 16 (Hexadecimal) is 1dc1add. The convertion of 31202013 to base 32 is to6mt. The sine of the number 31202013 is -0.60477048900097. The cosine of the figure 31202013 is -0.79639980891103. The tangent of 31202013 is 0.75938050490986. The root of 31202013 is 5585.8762070064.\nIf you square 31202013 you will get the following result 973565615252169. The natural logarithm of 31202013 is 17.255993169929 and the decimal logarithm is 7.4941826134605. that 31202013 is very special figure!"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.76721776,"math_prob":0.8277912,"size":2130,"snap":"2020-34-2020-40","text_gpt3_token_len":729,"char_repetition_ratio":0.20366886,"word_repetition_ratio":0.23003195,"special_character_ratio":0.5093897,"punctuation_ratio":0.15228426,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9988537,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-14T22:16:04Z\",\"WARC-Record-ID\":\"<urn:uuid:47acff7b-18c1-43ac-957c-f4403d9ea97b>\",\"Content-Length\":\"13746\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ecf9a80e-3719-49cc-b1df-f9d770dbed60>\",\"WARC-Concurrent-To\":\"<urn:uuid:ef295ae6-51dd-4398-9915-f83327e4bf48>\",\"WARC-IP-Address\":\"176.9.140.13\",\"WARC-Target-URI\":\"https://numberworld.info/31202013\",\"WARC-Payload-Digest\":\"sha1:FRXTIYRXEKPEYBERBT475XM2AA6Y5DPE\",\"WARC-Block-Digest\":\"sha1:5GM4MCH6XVKK7XRDPWCZH4G7TH2PHYW5\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439740343.48_warc_CC-MAIN-20200814215931-20200815005931-00168.warc.gz\"}"} |
http://people.sc.fsu.edu/~jburkardt/m_src/sir_simulation/sir_simulation.html | [
"SIR_SIMULATION Simulation of Disease Propagation with the SIR Model\n\nSIR_SIMULATION, a MATLAB program which simulates the spread of a disease through a hospital room of M by N beds, using the SIR (Susceptible/Infected/Recovered) model.\n\nWe consider the evolution of a disease in a hospital in which patients are arranged on an array of beds.\n\nWe assume that the beds form an array of M rows and N columns, so that there are a total of M * N patients.\n\nWe assume that the patients can be classified as Susceptible, Infected or Recovering, with the properties that:\n\n• Susceptible: A patient who has never been infected with the disease. A susceptible patient can get the disease.\n• Infected: A patient who has never gotten the disease. A patient stays infected for K days. On the K+1 of the disease, the patient \"recovers\".\n• Recovered: A patient who has had the disease, that is, has caught the disease and been sick for a full K days. A recovered patient never gets sick again.\n\nWe set up an M by N array A to represent the patients. A(I,J) contains information on the patient in row I, column J. A(I,J) will be\n\n• 0, if the patient is susceptible.\n• a value between 1 and K, if the patient is infected. The value is the number of days the patient has been infected.\n• -1, if the patient is recovered.\n\nThe rules for transmission of the disease essentially update the patient array once a day. If patient A(I,J) was:\n\n• 0, then check the four neighbors A(I-1,J), A(I+1,J), A(I,J-1) and A(I,J+1). For each neighbor that is infected, pick a random number, and if that random number is less than TAU, then patient A(I,J) becomes infected, that is, we set A(I,J) to 1.\n• a value between 1 and K, then the value is increased by 1. But if the value was already K, it is now reset to -1, because the patient has recovered.\n• -1, nothing happens.\n\nQuantities of interest include an animation of the day to day status of patients in the hospital (the \"geometry\") and the values of S, I, and R, that is, the total number of patients in each category, as it evolves over time.\n\nSince this problem contains a probabilistic element in the transmission of disease, the outcome of any single run has limited meaning. 
It is much more valuable to run many simulations, and thus to get both average or \"expected\" values, as well as a feeling for the variance of the data from these averages.\n\nUsage:\n\nsir = sir_simulation ( m, n, a, k, tau, t_max )\nwhere\n• m is the number of rows of patients.\n• n is the number of columns of patients.\n• a is the M by N matrix of the initial patient states.\n• k is the number of days a patient stays infected.\n• tau is the probability that a susceptible patient will become infected because of one \"nearby\" infected patient (north, south, east or west) over one day.\n• t_max is the total number of days to consider, counting the initial condition as day 1.\n\nLanguages:\n\nSIR_SIMULATION is available in a MATLAB version.\n\nRelated Data and Programs:\n\nBROWNIAN_MOTION_SIMULATION, a MATLAB program which simulates Brownian motion in an M-dimensional region.\n\nCOIN_SIMULATION, a MATLAB library which looks at ways of simulating or visualizing the results of many tosses of a fair or biased coin.\n\nDICE_SIMULATION, a MATLAB program which simulates N tosses of M dice, making a histogram of the results.\n\nDUEL_SIMULATION, a MATLAB program which simulates N repetitions of a duel between two players, each of whom has a known firing accuracy.\n\nGAMBLERS_RUIN_SIMULATION, a MATLAB program which simulates the game of gambler's ruin.\n\nHIGH_CARD_SIMULATION, a MATLAB program which simulates a situation in which you see the cards in a deck one by one, and must select the one you think is the highest and stop.\n\nISING_2D_SIMULATION, a MATLAB program which carries out a Monte Carlo simulation of an Ising model, a 2D array of positive and negative charges, each of which is likely to \"flip\" to be in agreement with neighbors.\n\nLIGHTS_OUT, a MATLAB program which sets up the \"Lights Out\" game and allows a user to try to solve it.\n\nLORENZ_SIMULATION, a MATLAB program which solves the Lorenz equations and displays the solution, for various starting conditions.\n\nPOISSON_SIMULATION, a MATLAB library which simulates a Poisson process in which events randomly occur with an average waiting time of Lambda.\n\nRANDOM_WALK_1D_SIMULATION, a MATLAB program which simulates a random walk in a 1-dimensional region.\n\nRANDOM_WALK_2D_AVOID_SIMULATION, a MATLAB program which simulates a self-avoiding random walk in a 2-dimensional region.\n\nRANDOM_WALK_2D_SIMULATION, a MATLAB program which simulates a random walk in a 2-dimensional region.\n\nRANDOM_WALK_3D_SIMULATION, a MATLAB program which simulates a random walk in a 3-dimensional region.\n\nREACTOR_SIMULATION, a MATLAB program which carries out a simple Monte Carlo simulation of the shielding effect of a slab of a certain thickness in front of a neutron source.
This program was provided as an example with the book \"Numerical Methods and Software.\"\n\nROULETTE_SIMULATION, a MATLAB program which simulates the spinning of a roulette wheel and the evaluation of certain common roulette bets.\n\nTHREE_BODY_SIMULATION, a MATLAB program which simulates the behavior of three planets, constrained to lie in a plane, and moving under the influence of gravity, by Walter Gander and Jiri Hrebicek.\n\nTRAFFIC_SIMULATION, a MATLAB program which simulates the cars waiting to get through a traffic light.\n\nTRUEL_SIMULATION, a MATLAB program which simulates N repetitions of a duel between three players, each of whom has a known firing accuracy.\n\nXISING, a C program which models the variations in ferromagnetism in a material, displaying the results using X Windows.\n\nXWAVES, a C program which simulates the behavior of solutions of certain forms of the wave equation, displaying the results using X Windows.\n\nReference:\n\n1. Dianne O'Leary,\nModels of Infection: Person to Person,\nComputing in Science and Engineering,\nVolume 6, Number 1, January/February 2004.\n2. Dianne O'Leary,\nScientific Computing with Case Studies,\nSIAM, 2008,\nISBN13: 978-0-898716-66-5,\nLC: QA401.O44.\n\nSource Code:\n\n• sir_area_display.m, displays an area plot of the SIR percentages over time.\n• sir_line_display.m, displays a line plot of the SIR percentages over time.\n• sir_simulation.m, the main program, which takes user parameter values, computes the configuration for each time step, displays an image of the configuration for each time, and returns the SIR percentages.\n• timestamp.m, prints the YMDHMS date as a timestamp.\n• timestep_display.m, displays an image of the hospital room at each timestep, indicating the locations of susceptible, infected, and recovered patients.\n\nLast revised on 17 March 2019."
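The daily update rule described above translates directly into code. Below is a minimal Python sketch of a single day's transition (the distributed source is MATLAB; this Python version and its sample parameters are illustrative only):

```python
import random

def step(a, k, tau):
    """Advance the M-by-N patient array 'a' by one day.
    a[i][j] is 0 (susceptible), 1..k (days infected), or -1 (recovered)."""
    m, n = len(a), len(a[0])
    b = [row[:] for row in a]  # update from a snapshot of the current day
    for i in range(m):
        for j in range(n):
            s = a[i][j]
            if s == 0:
                # One infection chance per infected N/S/E/W neighbor.
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ii, jj = i + di, j + dj
                    if (0 <= ii < m and 0 <= jj < n and a[ii][jj] > 0
                            and random.random() < tau):
                        b[i][j] = 1
                        break
            elif 1 <= s < k:
                b[i][j] = s + 1    # one more day of infection
            elif s == k:
                b[i][j] = -1       # recovers on day k+1
    return b

# Hypothetical example: one infected patient in a 3-by-3 room.
a = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
for _ in range(10):
    a = step(a, k=4, tau=0.2)
print(a)
```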
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9064722,"math_prob":0.92907846,"size":4364,"snap":"2019-26-2019-30","text_gpt3_token_len":985,"char_repetition_ratio":0.21926606,"word_repetition_ratio":0.1017192,"special_character_ratio":0.20004582,"punctuation_ratio":0.1120797,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.990154,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-18T07:37:06Z\",\"WARC-Record-ID\":\"<urn:uuid:acf31034-5c1f-4e8e-9987-73b1522c7492>\",\"Content-Length\":\"12461\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:11a04644-8e14-49c4-9430-04a9100ac0d7>\",\"WARC-Concurrent-To\":\"<urn:uuid:343cf38d-23f5-4e36-a374-daa771075216>\",\"WARC-IP-Address\":\"144.174.16.100\",\"WARC-Target-URI\":\"http://people.sc.fsu.edu/~jburkardt/m_src/sir_simulation/sir_simulation.html\",\"WARC-Payload-Digest\":\"sha1:OFKBST626JS3GNTTVMX7TFFNSHF3TC77\",\"WARC-Block-Digest\":\"sha1:UJ66NLDUBV4U5G6U6OGJXNNTI36S5CJR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627998690.87_warc_CC-MAIN-20190618063322-20190618085322-00046.warc.gz\"}"} |
https://www.vedantu.com/question-answer/the-product-of-two-numbers-is-9-if-one-of-them-class-8-maths-cbse-609e9ffd3abc38535777cfb6 | [
"Courses\nCourses for Kids\nFree study material\nFree LIVE classes\nMore",
null,
"# The product of two numbers is 9. If one of them is $3\\dfrac{3}{7}$. Then find the other one.\n\nLast updated date: 18th Mar 2023\nTotal views: 204k\nViews today: 1.84k",
null,
"Verified\n204k+ views\nHint: We first convert the improper fraction to proper fraction. The other number can be simply found by dividing the number 9 by $3\\dfrac{3}{7}$ in proper form.\n\nThe product of two numbers is 9. If one of them is $3\\dfrac{3}{7}$. Changing from improper fraction to proper fraction for $3\\dfrac{3}{7}$, we get $3\\dfrac{3}{7}=\\dfrac{24}{7}$.\nThe other number will be the division of 9 by $\\dfrac{24}{7}$. The quotient is $\\dfrac{9}{\\dfrac{24}{7}}=\\dfrac{63}{24}$.\nWe need to find the simplified form of the proper fraction $\\dfrac{63}{24}$.\nSimplified form is achieved when the G.C.D of the denominator and the numerator is 1.\nThis means we can’t eliminate any more common root from them other than 1.\nFor any fraction $\\dfrac{p}{q}$, we first find the G.C.D of the denominator and the numerator. If it’s 1 then it’s already in its simplified form and if the G.C.D of the denominator and the numerator is any other number d then we need to divide the denominator and the numerator with d and get the simplified fraction form as $\\dfrac{{}^{p}/{}_{d}}{{}^{q}/{}_{d}}$.\nFor our given fraction $\\dfrac{63}{24}$, the G.C.D of the denominator and the numerator is 3.\n\\begin{align} & 3\\left| \\!{\\underline {\\, 24,63 \\,}} \\right. \\\\ & 1\\left| \\!{\\underline {\\, 8,21 \\,}} \\right. \\\\ \\end{align}\nNow we divide both the denominator and the numerator with 3 and get $\\dfrac{{}^{63}/{}_{3}}{{}^{24}/{}_{3}}=\\dfrac{21}{8}$.\nTherefore, the other number is $\\dfrac{21}{8}$.\nSo, the correct answer is “$\\dfrac{21}{8}$”.\n\nNote: The process is similar for both proper and improper fractions. In case of mixed fractions, we need to convert it into an improper fraction and then apply the case like we did in the above problem. If the given form is improper itself, then we just have to complete the division.\nFor conversion we follow the equational condition of $\\dfrac{a}{b}=x+\\dfrac{c}{b}$. The representation of the mixed fraction will be $x\\dfrac{c}{b}$."
]
| [
null,
"https://www.vedantu.com/cdn/images/seo-templates/seo-qna.svg",
null,
"https://www.vedantu.com/cdn/images/seo-templates/green-check.svg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8040277,"math_prob":0.9995969,"size":5071,"snap":"2023-14-2023-23","text_gpt3_token_len":1448,"char_repetition_ratio":0.16637063,"word_repetition_ratio":0.6819307,"special_character_ratio":0.30506805,"punctuation_ratio":0.102661595,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99969745,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-21T20:14:59Z\",\"WARC-Record-ID\":\"<urn:uuid:3d4a0cd2-a32a-46e7-9503-f7481cdd7026>\",\"Content-Length\":\"97071\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2311a2e0-2c54-449b-b387-e644e33f57fb>\",\"WARC-Concurrent-To\":\"<urn:uuid:f8460e13-f66c-4b08-8698-df26b76bde97>\",\"WARC-IP-Address\":\"108.138.64.3\",\"WARC-Target-URI\":\"https://www.vedantu.com/question-answer/the-product-of-two-numbers-is-9-if-one-of-them-class-8-maths-cbse-609e9ffd3abc38535777cfb6\",\"WARC-Payload-Digest\":\"sha1:YFPRPUHH5PKCDL2LJJUPYJPAYDXSP6F6\",\"WARC-Block-Digest\":\"sha1:UCLKZZKVKIUV7JBCHMJSMSPGH32BAQQU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296943746.73_warc_CC-MAIN-20230321193811-20230321223811-00443.warc.gz\"}"} |
https://answers.everydaycalculation.com/gcf/98-3780 | [
"Solutions by everydaycalculation.com\n\n## What is the GCF of 98 and 3780?\n\nThe GCF of 98 and 3780 is 14.\n\n#### Steps to find GCF\n\n1. Find the prime factorization of 98\n98 = 2 × 7 × 7\n2. Find the prime factorization of 3780\n3780 = 2 × 2 × 3 × 3 × 3 × 5 × 7\n3. To find the GCF, multiply all the prime factors common to both numbers:\n\nTherefore, GCF = 2 × 7\n4. GCF = 14\n\nMathStep (Works offline)",
null,
"Download our mobile app and learn how to find GCF of upto four numbers in your own time:"
]
| [
null,
"https://answers.everydaycalculation.com/mathstep-app-icon.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.83836496,"math_prob":0.9960635,"size":527,"snap":"2021-31-2021-39","text_gpt3_token_len":159,"char_repetition_ratio":0.12619503,"word_repetition_ratio":0.0,"special_character_ratio":0.35104364,"punctuation_ratio":0.089108914,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9963126,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-03T17:31:02Z\",\"WARC-Record-ID\":\"<urn:uuid:d2d68102-b060-4484-a004-6355958079c6>\",\"Content-Length\":\"5644\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3c833277-101c-4dc5-9c28-9a6a23dd30cc>\",\"WARC-Concurrent-To\":\"<urn:uuid:c5ffa68f-7f6d-4d7d-a117-68de82b4ef78>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/gcf/98-3780\",\"WARC-Payload-Digest\":\"sha1:NX3WTGZXI2ES6RGU3UY3CD7AKJTTSPPF\",\"WARC-Block-Digest\":\"sha1:6QICBEE4HPIPBYDKJDXPDIKXDKKIHHJ2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154466.61_warc_CC-MAIN-20210803155731-20210803185731-00344.warc.gz\"}"} |
https://testbook.com/question-answer/if-a-discrete-random-variable-x-follows-uniform-di--607eaac25e2914dbed4853b3 | [
"# If a discrete random variable X follows uniform distribution and assume only the values 8, 9, 11, 15, 18, 20, the value of P(|X - 14| < 5) will be:\n\nThis question was previously asked in\nSSC CGL Tier-II ( JSO ) 2018 Official Paper ( Held On : 14 Sept 2019 )\nView all SSC CGL Papers >\n1. $$\\dfrac{1}{5}$$\n2. $$\\dfrac{1}{4}$$\n3. $$\\dfrac{1}{3}$$\n4. $$\\dfrac{1}{2}$$\n\nOption 4 : $$\\dfrac{1}{2}$$\nFree\nCell\n306304\n10 Questions 10 Marks 7 Mins\n\n## Detailed Solution\n\nGiven\n\nIn discrete random variable X follows uniform distribution\n\nAssumed values of X = 5. 9, 11, 15, 18, 20\n\nCalculation\n\nI X – 14 I < 5\n\n⇒ - 5 < X – 14 < 5\n\n⇒ 9 < X <, 19\n\n∴ Favourable case = {11, 15, 18} = 3\n\nTotal sample space = {8, 9, 11, 15, 18, 20} = 6\n\n⇒ P(IX – 14I < 5) = P(9 < X < 19)\n\n⇒ Favourable case/total sample space = 3/6\n\n∴ The value of P(IX – 14I < 5 is ½"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.765926,"math_prob":0.9991289,"size":364,"snap":"2021-43-2021-49","text_gpt3_token_len":180,"char_repetition_ratio":0.125,"word_repetition_ratio":0.0,"special_character_ratio":0.51648355,"punctuation_ratio":0.15294118,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9985093,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-24T04:30:50Z\",\"WARC-Record-ID\":\"<urn:uuid:372f1ee2-8898-4a44-9747-5f389f40803d>\",\"Content-Length\":\"124077\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c1898423-5c99-42f2-9e94-eabd106ab1cb>\",\"WARC-Concurrent-To\":\"<urn:uuid:c8d57f99-6834-4c9d-9254-df9b0b5cc6f5>\",\"WARC-IP-Address\":\"104.22.44.238\",\"WARC-Target-URI\":\"https://testbook.com/question-answer/if-a-discrete-random-variable-x-follows-uniform-di--607eaac25e2914dbed4853b3\",\"WARC-Payload-Digest\":\"sha1:57JOCPYVZSB74IKLBK2YV3L4QWBFTEVY\",\"WARC-Block-Digest\":\"sha1:7UMV3WZ5XK5HQFBG67L2JZGYPXPNRUZJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585837.82_warc_CC-MAIN-20211024015104-20211024045104-00457.warc.gz\"}"} |
https://www.hindawi.com/journals/isrn/2011/925085/ | [
"#### Abstract\n\nResearch on nonlinear active noise control (NANC) revolves around the investigation of the sources of nonlinearity as well as the performance and computational load of the nonlinear algorithms. The nonlinear sources could originate from the noise process, primary and secondary propagation paths, and actuators consisting of loudspeaker, microphone or amplifier. Several NANCs including Volterra filtered-x least mean square (VFXLMS), bilinear filtered-x least mean square (BFXLMS), and filtered-s least mean square (FSLMS) have been utilized to overcome these nonlinearities effects. However, the relative performance and computational complexities of these algorithm in comparison to FXLMS algorithm have not been carefully studied. In this paper, systematic comparisons of the FXLMS against the nonlinear algorithms are evaluated in overcoming various nonlinearity sources. The evaluation of the algorithms performance is standardized in terms of the normalized mean square error while the computational complexity is calculated based on the number of multiplications and additions in a single iteration. Computer simulations show that the performance of the FXLMS is more than 80% of the most effective nonlinear algorithm for each type of nonlinearity sources at the fraction of computational load. The simulation results also suggest that it is more advantageous to use FXLMS for practical implementation of NANC.\n\n#### 1. Introduction\n\nThe increase in environmental noise and the need for a low-cost noise control system promote the growth of active noise control (ANC) applications. Transportation and road noise from motor vehicles, aircraft, and rail are major contributors to environmental noise . An example of transportation noise is the ambient noise produced by hybrid electrical vehicles, which originates from the alternator and cooling fans of the engine and battery packs . Large wind turbines with large blade propellers create a low-frequency drumming that penetrates surrounding residential areas . The conventional method to reduce the amount of noise involves the application of passive absorbers based on the concept of energy loss . However, for low-frequency noise, this method is not effective , and ANC is resorted to instead. The principle of ANC is to produce a secondary destructive antinoise signal equal in magnitude but opposite in phase electronically to cancel the primary unwanted noise [6, 7].\n\nLinear ANC techniques are limited in that the system exhibits performance degradation when dealing with nonlinearities [1, 810]. Nonlinearity is a concern in ANC application where low-cost devices such as amplifiers, sensors, and actuators exhibit nonlinear distortion. Therefore, using low-cost sensors and actuators in an ANC system along with an efficient NANC algorithm can reduce the cost of the control system while ensuring good cancellation level. In the literature concerning NANC systems, two main models are commonly used to represent nonlinearities, namely, the truncated Volterra series [8, 9, 11, 12] and functional link neural network (FLNN) [5, 7, 1116]. A study has suggested that the output error Bilinear filter is an efficient alternative to Volterra filter in reducing the saturation effect of the sensor and actuator . In the FLNN models, different functional expansions have been implemented with the trigonometric expansion being the most common [7, 12, 13]. 
Other expansions include piecewise, Chebyshev [11, 12, 14], truncated Taylor series for trigonometric functions, Legendre, and Hammerstein expansions. In addition to the FLNN structure, two other adaptive feedback FLNN filters have been proposed to reduce the effects of nonlinearities in the primary path and noise process with lower computational loads. They are the feedback FLNN (FFLNN) and the reduced FFLNN (RFFLNN). Other models in the literature include fuzzy neural networks (FNN) and nonlinear autoregressive moving average with exogenous input (NARMAX) models.\n\nThe nonlinear models used in NANC can be updated adaptively using various algorithms. In this paper, the performances of the nonlinear VFXLMS, BFXLMS, and FSLMS algorithms are compared with that of the linear FXLMS algorithm in terms of noise reduction and computational load. The algorithm with the best performance for each type of nonlinearity is identified. The paper is organized as follows: Section 2 describes the sources of nonlinearity in the ANC system. In Section 3 the feedforward ANC structure is presented. Section 4 describes the three NANC algorithms. The simulation results of the performance of the algorithms are presented in Section 5. The computational complexity of the algorithms is presented in Section 6, followed by the conclusion in Section 7.\n\n#### 2. Sources of Nonlinearities\n\nThe nonlinearity sources in active noise control can be generally classified into three types. The first type is due to system actuators such as the loudspeaker, microphones, preamplifiers, analogue-to-digital converters (ADC), and digital-to-analogue converters (DAC). The second type is due to the noise process, when the dynamic behaviour of the noise generation is nonlinear. The third type is due to the acoustic propagation paths, where the acoustic signals propagate under the presence of nonlinearities such as high pressure, temperature variations, or nonhomogeneous media. The following subsections describe the three nonlinearity types and the relevant mathematical models commonly used to represent them.\n\n##### 2.1. Sensors, Actuators, and Amplifiers\n\nIn many practical ANC applications, the actuator that generates the antinoise signal is a loudspeaker, and the sensor that detects the error signal is a microphone. It is possible that the primary noise amplitude level is so high that the error and reference sensors are saturated due to their low-power characteristics. The saturation and clipping effects due to overdriving the loudspeakers can produce extra odd harmonics, thus affecting the convergence speed of the NANC controller. Aging and corrosion of electronic components due to temperature variations and humidity in ADC and DAC circuits are additional nonlinear sources.\n\nIn loudspeakers, nonlinearity in the form of limited dynamic range causes electrical distortion when the output amplitude level is high. The two most important forms of distortion in loudspeakers are (i) harmonic distortion and (ii) intermodulation distortion. Harmonic distortion is characterized by the presence of harmonics in the output signal not present in the original (excitation) signal. On the other hand, intermodulation distortion arises whenever two signals having different frequencies pass simultaneously through a nonlinear system.
As a result, linear combinations of the fundamental frequencies and all harmonics present in the input signals may appear as an overall nonlinearity at the loudspeaker output.\n\nIn order to deliver sufficient power to drive the loudspeaker, a power amplifier is used to amplify the control signal. The amplifier should ideally provide a nominally flat response between 20 Hz and 20 kHz and generate minimum harmonic distortion. However, nonlinear harmonic distortion, which consists mainly of cubic terms with an amplitude of 5 to 10 percent of the total output amplitude, can occur, especially when dealing with small loudspeakers that operate at high volumes.\n\nThe nonlinearities of the power amplifier, loudspeaker, microphone, and preamplifier can be represented as a lumped model system using a second-order Volterra series. The Volterra expansion is a general method used to model nonlinear systems, including the saturation-type nonlinearities observed in power amplifiers and loudspeakers. The nonlinear characteristic of an amplifier-loudspeaker system can also be represented by a NARMAX model. Typically, the saturation characteristic of a loudspeaker is modelled by the sigmoid hyperbolic tangent function. In a previous study, the nonlinear effect of the power amplifier and loudspeaker has been modelled as a nonlinear filter without memory, defined in (1).\n\nHowever, the representation in (1) cannot be used to approximate all types of nonlinear characteristics of the amplifier-loudspeaker system due to its fixed amplitude terms.\n\nA raised cosine function has also been used to model the nonlinear characteristics of the amplifier-loudspeaker system. In this function, two parameters can be adjusted to influence the shape of the raised cosine curve. The adjustable raised cosine function improves the approximation of the nonlinear characteristics compared to the sigmoid hyperbolic tangent function given by (1).\n\n##### 2.2. Reference Noise\n\nRecently, chaotic signal processing has received much interest. Some signals, like radar sea clutter, speech, and indoor multipath effects, are better represented by a deterministic chaotic process rather than a stochastic one. Understanding the chaotic process is useful in applications such as radar surveillance, secure communication, and narrowband interference cancellation in chaotic spread-spectrum systems [25, 26]. The main characteristic of a deterministically chaotic system is its sensitive dependence on initial conditions. Small differences in the initial input conditions can grow into large changes in the output value. Moreover, it is very difficult to predict the next system state, even if an accurate measurement of the current state can be obtained.\n\nIn an ANC system, the noise that is generated from a dynamic system may be nonlinear and deterministic chaotic rather than a stochastic, white, or tonal noise process [1, 5, 7, 8, 12, 14, 16–18]. Research has shown that the noise measured from a ventilation fan exhibits chaotic behaviour. Three kinds of chaotic noise, in the form of Logistic, Lorenz, and Duffing noise, have been applied to test the capability of a proposed single-channel nonlinear controller.
The results from this study show that if the input signal is non-Gaussian and predictable, the use of a combination of linear and nonlinear controllers produces better results than a linear one exclusively.\n\nAmong these chaotic noises, the Logistic type is the simplest and the most useful test signal, and it can be described as a second-order white and predictable nonlinear process. The Logistic process can be generated using the following equation [7, 8, 12, 14, 16–18]:\n\n$x(n+1)=\lambda x(n)[1-x(n)]$. (2)\n\nThe white noise signal is typically generated by setting $\lambda$ and the initial condition $x(0)$ to be 4 and 0.9, respectively [1, 5].\n\n##### 2.3. Propagation Paths\n\nPrimary and secondary propagation paths may exhibit nonlinear impulse responses. Nonlinear distortion between the reference sensor and the error sensor occurs in ducts where the noise propagates with high sound pressure. Typically, a sinusoidal sound wave of 500 Hz propagating in a duct at a sound pressure level of 140 dB generates a harmonic distortion of about 1% after travelling a distance of 1 m.\n\nThe nonlinear noise source at the cancelling point can be represented by a third-order polynomial equation, given as (3) [7, 8, 12, 14]. In (3), the input term is obtained by the linear convolution of the reference noise with the impulse response of the primary path, expressed in (4) and (5).\n\nIn general, the FXLMS structure and algorithm are used in both linear and nonlinear feedforward ANC systems. The reference signal must be filtered for correct adaptation. In the next section, the feedforward ANC with the FXLMS algorithm is presented.\n\n#### 3. Feedforward Active Noise Control\n\nThe basic single-channel ANC system can be represented using an adaptive filtering block diagram, as shown in Figure 1. One of the most popular adaptive algorithms applied in ANC is the filtered-x least mean square (FXLMS) algorithm. This algorithm is a variant of the LMS algorithm. It is used when the reference signal needs to be filtered by the transfer function of the secondary path to ensure correct controller adaptation.\n\nThe FXLMS scheme is depicted in Figure 1, where the discrete time-domain signals represent the noise process, filtered noise, controller output, secondary path output, primary path output, and residual noise error signals, and the blocks denote the z-transforms of the transfer functions of the primary path, adaptive controller, estimated secondary path, and secondary path systems, respectively. The objective of the adaptive controller is to generate an antinoise signal in order to minimize the residual error noise. The controller is typically an FIR filter with tuneable weights adjusted to minimize the error signal.\n\n#### 4. NANC Algorithms\n\nThe performance of the linear ANC algorithm (FXLMS) degrades due to the existence of nonlinearity sources. Therefore, NANC algorithms are needed. Such algorithms are utilised in adapting nonlinear models such as truncated Volterra series, functional link neural networks, bilinear filters, and the NARMAX model. The model order, represented by the number of weights, is an important issue in hardware implementation. It affects both the computational load and the memory requirements of the control algorithm. Typically, in polynomial models such as the Volterra, bilinear, and NARMAX models, the model order is set to a large value, causing overparameterization. However, only a few parameters are dominant to model the system.
Therefore, several strategies have been employed to tackle the overparameterization problem, such as the Akaike information criterion (AIC) and the Bayesian information criterion (BIC).\n\nFLNN uses the functional expansion structure to replace the task performed by the hidden layers in traditional neural networks. This replacement has the advantage of making the structure easier to implement in hardware. Moreover, FLNN involves less computational complexity compared to the multilayer artificial neural network (MLANN).\n\nThis paper compares the performance and computational complexity of three NANC algorithms, which are commonly applied to update the nonlinear controllers. The algorithms considered are Volterra filtered-x least mean square (VFXLMS), bilinear filtered-x least mean square (BFXLMS), and filtered-s least mean square (FSLMS), which is based on the functional link neural network (FLNN) model. The description of the three NANC algorithms is given in the following subsections.\n\n##### 4.1. VFXLMS Based on Volterra Filters\n\nThe general input-output relationship for a Volterra filter in a multichannel structure is given in (6) [8, 9], where the upper limit of the outer sum is the order of the Volterra filter and each term can be further defined as in (7). In (7), the memory length represents the maximum number of past input samples. The Volterra output can be represented in vector form as in (8) and (9). The terms in the bracket of (9), which represent the input signal to the filter, can be written as in (10), and the impulse response of the filters can be written as in (11), where each entry is a vector that represents the coefficients of the corresponding-order Volterra kernel of the system. The Volterra filter can be updated using the VFXLMS algorithm, as shown in Figure 2. The error signal between the secondary path output and the desired signal is given in (12), where the operation (∗) represents time-domain convolution. The coefficient vector will be adjusted accordingly to minimize the mean square error in (13), which is equivalent to minimizing the residual noise power in (14), where the reference signal is filtered by the secondary path. Substituting (14) into (13) yields (15).\n\nThe Volterra model typically requires a large number of parameters to estimate. In practice, the model order is initially chosen to be large, which could cause overparameterization. Unfortunately, overparameterization will degrade parameter estimation accuracy and the robustness of the model. Moreover, overparameterization can be responsible for several unwanted dynamic effects. On the positive side, the Volterra model is linear in the parameters, such that conventional algorithms like least mean squares, used for FIR filter adaptation, can still be applied with minor modifications.\n\n##### 4.2. BFXLMS Based on Bilinear Filters\n\nIn comparison to an FIR filter, an IIR filter can model a linear system with fewer coefficients. These coefficients are associated with delayed input and output samples. Volterra filters are viewed as the nonlinear generalisation of a linear FIR filter, while bilinear filters are an extension of the linear IIR filter with additional coefficients associated with the input-output cross-multiplied samples. Thus, bilinear filters can model nonlinear systems with fewer coefficients compared to Volterra filters. The general input-output relationship of bilinear filters is given as\n\n$y(n)=\sum_{i=0}^{N} a_i x(n-i)+\sum_{j=1}^{M} b_j y(n-j)+\sum_{i=0}^{N}\sum_{j=1}^{M} c_{ij}\, x(n-i)y(n-j)$, (16)\n\nwhere $a_i$, $b_j$, and $c_{ij}$ are the coefficients of the delayed input, delayed output, and delayed input-output cross-multiplied samples, respectively, and $N$ and $M$ are the numbers of input and output delay taps, respectively.
The block diagram of the BFXLMS for the feedforward ANC system is shown in Figure 3.\n\nFrom Figure 3, the output of the bilinear filter is expressed as in (17).\n\nIn BFXLMS, the steepest descent algorithm is utilized to minimize the squared error function. The weights are updated according to (18), where the update terms are the input, output, and input-output cross-multiplied delayed samples filtered through the estimated secondary path block, respectively.\n\nThe disadvantages of modelling using adaptive bilinear filters are the following: (1) the adaptive bilinear filter may not converge to the global minimum, because the error function has local minima; (2) the adaptation process may be unstable.\n\nThese problems can be avoided if the algorithm is designed by choosing the step size carefully. In terms of computational complexity, BFXLMS requires less computation compared to VFXLMS for the same amount of noise reduction under the same system conditions.\n\n##### 4.3. FSLMS Based on FLNN\n\nIn a conventional MLANN, the activation functions are used to introduce nonlinearity into the network. In FLNN, the functional expansion carries out the task performed by the activation functions. As a result, FLNN has the key advantages of involving less computational complexity and a simpler structure for hardware implementation. The general functional expansion equation that expands an input signal is given by\n\n$y(n)=\sum_{i=1}^{Q} w_i(n)\,\Psi_i(\mathbf{x}(n))$, (19)\n\nwhere $\mathbf{x}(n)$ and $y(n)$ represent the input vector and the output of the nonlinear system, respectively, $w_i(n)$ is the $i$th coefficient filter at time $n$, and $\Psi_i$ is an orthogonal basis functional expansion. Several functional expansions, in the form of approximate and exact trigonometric, Legendre (L-FLANN), Chebyshev (C-FLANN), Hammerstein (H-FLANN), and piecewise linear (PWL) expansions, have been implemented.\n\nThe block diagram of the FSLMS based on FLNN is shown in Figure 4.\n\nFrom Figure 4, the expansion function Ψ used consists of the exact trigonometric functions of a chosen order; the expanded input signal and the corresponding filter coefficients can be further defined as in (20) and (21).\n\nThe weight update equations can be derived using the steepest descent algorithm to minimise the squared error function and can be expressed as in (22).\n\nThe disadvantage of the FSLMS algorithm is the extra computation required by the functional expansion block. The choice of the expansion function depends on the strength of the nonlinearity and sets the computational complexity of the model. Thus, numerous expansion functions have been used to achieve a compromise between computational complexity and performance. However, under higher degrees of nonlinearity, the FSLMS performs better compared to VFXLMS and BFXLMS.\n\nIn an NANC system, an appropriate nonlinear algorithm is required to compensate for the nonlinearity effects. Improved noise reduction can sometimes be achieved at the expense of a high computational load, so it is important to understand the trade-off between performance and computational load in the design process. In the next section, a performance comparison between the nonlinear algorithms is investigated by means of simulation. To achieve a fair comparison, the nonlinear algorithms are applied under the same system conditions and nonlinearity sources (type of reference noise, primary and secondary path transfer functions).\n\n#### 5. Computer Simulation and Performance Comparison\n\nThe performances of the three nonlinear algorithms with the associated models are compared.
The normalized mean square error (NMSE) after convergence versus the number of iterations is used as the performance criterion and is defined as\n\n$\mathrm{NMSE}(n)=E[e^2(n)]/\sigma_p^2$, (23)\n\nwhere $\sigma_p^2$ is the variance of the primary noise at the cancellation point, and $E[e^2(n)]$ is the expected value of the squared error. The expected value is approximated by averaging the recorded squared error of fifty independent trials (simulations). In the simulation, the primary noise signal is generated using the third-order polynomial model defined by (3), (4), and (5). The secondary path impulse response and its estimate used in the simulation are linear minimum phase.\n\nThe saturation characteristic of the loudspeaker is modelled by saturating the controller output to a certain level. This level of saturation is obtained by setting a clipping threshold on the controller output signal at 85% of the maximum signal value.\n\nTwo reference noises are used in this simulation to investigate the performances of the algorithms. The first noise process is represented by a multiharmonic signal [6, 17]: the sum of three sine waves of 500 Hz, 400 Hz, and 300 Hz, sampled at the rate of 8000 samples/sec and combined with an additive white noise process with Gaussian distribution. The signal-to-noise power ratio (SNR) was set at 40 dB. The second noise process is logistic chaotic noise. The logistic chaotic noise is a second-order white and predictable nonlinear process. It was generated using (2). This nonlinear noise process was then normalised to have unity signal power to prevent gradient noise amplification in the weight update equation.\n\nThree experiments were simulated to compare the performance of the NANC algorithms with the FXLMS algorithm. In each experiment, only a single nonlinearity source is introduced in the ANC system at any time. This was done to evaluate the performance of the NANC and the FXLMS algorithms in the presence of each nonlinearity source individually. In all experiments, the memory taps of the Volterra filter, the secondary path filter length, and the functional expansion order of the FLNN filter were set to 10, 4, and 3, respectively. Figures 5, 6, and 7 depict the noise reduction performance in the presence of chaotic noise, a nonlinear primary path, and saturation effects in the secondary path, respectively. Table 1 presents the summary of the simulation results obtained with the three nonlinearity sources for the different NANC algorithms after convergence at the 10⁴th iteration.\n\nFigure 5 shows the NMSE of the different NANC algorithms in the presence of chaotic noise. The simulation shows that the BFXLMS is the most effective algorithm. However, the FXLMS algorithm can still achieve a noise reduction of 81.25% of the reduced noise achieved by the BFXLMS algorithm. Figure 6 shows the NMSE of the different NANC algorithms in the presence of the nonlinear primary path given by (3), (4), and (5). Although the second-order VFXLMS algorithm gives the most effective noise reduction, the FXLMS is still capable of achieving 91.66% of the VFXLMS performance. Figure 7 shows the NMSE for the different NANC algorithms in the presence of saturation. Here, it is evident that FSLMS and BFXLMS perform equally well, and better than VFXLMS and FXLMS. However, the FXLMS algorithm can still achieve a noise reduction of 93.10% of the reduced noise achieved by both the FSLMS and the BFXLMS algorithms.\n\n#### 6. Computational Complexity\n\nComputational complexity is an important issue when implementing the algorithm in real-time applications.
In the feedforward ANC structure, three stages of signal processing computation are performed: (1) generating the control signal, (2) filtering the reference signal through the estimated secondary path, and (3) updating the controller weights.\n\nTable 2 summarizes the computational complexities of the algorithms for each processing stage. From the table, the FXLMS has the least number of multiplications and additions, followed by the VFXLMS. Taking into account only the number of multiplications, which dominates the computational requirements, the FXLMS computational load is only 12.5% in comparison to that of the VFXLMS.\n\n#### 7. Conclusion\n\nIn this paper, the performance and computational load of three nonlinear active noise control algorithms and the conventional FXLMS algorithm are compared. The nonlinearity sources that affect the system are categorised into the noise process, the primary and secondary acoustic paths, and the actuators, which consist of the loudspeaker, microphone, and amplifier. Three NANC algorithms, namely VFXLMS, BFXLMS, and FSLMS, were simulated on a feedforward ANC system. The results from this study suggest that, in the presence of nonlinearity sources, the FXLMS algorithm still gives an acceptable performance compared to the nonlinear control algorithms at a fraction of the computational load."
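To make the setup concrete, here is a minimal Python sketch of an FXLMS controller driven by the logistic chaotic reference noise of Section 2.2. The primary and secondary path coefficients below are placeholders chosen for illustration, and the secondary path estimate is assumed perfect; the paper's actual paths and the polynomial primary-path model of (3)-(5) are not reproduced in this extract.

```python
import numpy as np

def logistic_noise(n, lam=4.0, x0=0.9):
    """Logistic chaotic process x(k+1) = lam * x(k) * (1 - x(k)), eq. (2)."""
    x = np.empty(n)
    x[0] = x0
    for k in range(n - 1):
        x[k + 1] = lam * x[k] * (1 - x[k])
    return x / np.sqrt(np.mean(x**2))   # normalise to unity signal power

# Placeholder linear paths (assumptions; not the paper's P(z) and S(z)):
p = np.array([0.8, 0.6, -0.2])          # primary path impulse response
s = np.array([1.0, 0.5])                # secondary path = its estimate here

x = logistic_noise(10000)
d = np.convolve(x, p)[:len(x)]          # primary noise at the error sensor
fx = np.convolve(x, s)[:len(x)]         # reference filtered through S(z) estimate

mu, L = 0.005, 10
w = np.zeros(L)                         # FIR controller weights
xbuf = np.zeros(L)                      # recent reference samples
ybuf = np.zeros(len(s))                 # recent controller outputs
fbuf = np.zeros(L)                      # recent filtered-x samples x'(n)
err = np.empty(len(x))
for n in range(len(x)):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
    ybuf = np.roll(ybuf, 1); ybuf[0] = w @ xbuf   # controller output y(n)
    err[n] = d[n] - s @ ybuf                      # residual error e(n)
    fbuf = np.roll(fbuf, 1); fbuf[0] = fx[n]
    w += mu * err[n] * fbuf                       # FXLMS weight update

print("NMSE (dB):", 10 * np.log10(np.mean(err[-1000:]**2) / np.var(d)))
```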
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8897357,"math_prob":0.9527694,"size":33070,"snap":"2022-40-2023-06","text_gpt3_token_len":7152,"char_repetition_ratio":0.16415654,"word_repetition_ratio":0.057692308,"special_character_ratio":0.21136983,"punctuation_ratio":0.14876585,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9826317,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-07T13:05:04Z\",\"WARC-Record-ID\":\"<urn:uuid:283fa2ae-141f-4a97-9de1-861bc392bf29>\",\"Content-Length\":\"542027\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f42d5549-d75f-45c7-8628-f1088bfc5713>\",\"WARC-Concurrent-To\":\"<urn:uuid:2f646890-594d-4e30-bcf8-003eba4bc6d6>\",\"WARC-IP-Address\":\"13.249.39.104\",\"WARC-Target-URI\":\"https://www.hindawi.com/journals/isrn/2011/925085/\",\"WARC-Payload-Digest\":\"sha1:7UUGMEBFPEU3XFWGYZKXY4WPQJRGIHLD\",\"WARC-Block-Digest\":\"sha1:GHT7MPKA63MWKHIXEDCXKBFEKTP6IZQC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030338073.68_warc_CC-MAIN-20221007112411-20221007142411-00746.warc.gz\"}"} |
https://support.office.com/en-us/article/using-excel-formulas-to-figure-out-payments-and-savings-11cb708f-c137-4ef8-bcf3-5137aaeb4b20 | [
"Using Excel formulas to figure out payments and savings\n\nManaging personal finances can be a challenge, especially when trying to plan your payments and savings. Excel formulas can help you calculate the future value of your debts and investments, making it easier to figure out how long it will take for you to reach your goals. Use the following functions:\n\n• PMT calculates the payment for a loan based on constant payments and a constant interest rate.\n\n• NPER calculates the number of payment periods for an investment based on regular, constant payments and a constant interest rate.\n\n• PV returns the present value of an investment. The present value is the total amount that a series of future payments is worth now.\n\n• FV returns the future value of an investment based on periodic, constant payments and a constant interest rate.\n\nFigure out the monthly payments to pay off a credit card debt\n\nAssume that the balance due is \\$5,400 at a 17% annual interest rate. Nothing else will be purchased on the card while the debt is being paid off.\n\nUsing the function PMT(rate,NPER,PV)\n\n=PMT(17%/12,2*12,5400)\n\nthe result is a monthly payment of \\$266.99 to pay the debt off in two years.\n\n• The rate argument is the interest rate per period for the loan. For example, in this formula the 17% annual interest rate is divided by 12, the number of months in a year.\n\n• The NPER argument of 2*12 is the total number of payment periods for the loan.\n\n• The PV or present value argument is 5400.\n\nFigure out monthly mortgage payments\n\nImagine a \\$180,000 home at 5% interest, with a 30-year mortgage.\n\nUsing the function PMT(rate,NPER,PV)\n\n=PMT(5%/12,30*12,180000)\n\nthe result is a monthly payment (not including insurance and taxes) of \\$966.28.\n\n• The rate argument is 5% divided by the 12 months in a year.\n\n• The NPER argument is 30*12 for a 30 year mortgage with 12 monthly payments made each year.\n\n• The PV argument is 180000 (the present value of the loan).\n\nFind out how to save each month for a dream vacation\n\nYou’d like to save for a vacation three years from now that will cost \\$8,500. The annual interest rate for saving is 1.5%.\n\nUsing the function PMT(rate,NPER,PV,FV)\n\n=PMT(1.5%/12,3*12,0,8500)\n\nto save \\$8,500 in three years would require a savings of \\$230.99 each month for three years.\n\n• The rate argument is 1.5% divided by 12, the number of months in a year.\n\n• The NPER argument is 3*12 for twelve monthly payments over three years.\n\n• The PV (present value) is 0 because the account is starting from zero.\n\n• The FV (future value) that you want to save is \\$8,500.\n\nNow imagine that you are saving for an \\$8,500 vacation over three years, and wonder how much you would need to deposit in your account to keep monthly savings at \\$175.00 per month. 
The PV function will calculate how much of a starting deposit will yield a future value.\n\nUsing the function PV(rate,NPER,PMT,FV)\n\n=PV(1.5%/12,3*12,-175,8500)\n\nan initial deposit of \$1,969.62 would be required in order to be able to pay \$175.00 per month and end up with \$8,500 in three years.\n\n• The rate argument is 1.5%/12.\n\n• The NPER argument is 3*12 (or thirty-six monthly payments over three years).\n\n• The PMT is -175 (you would pay \$175 per month).\n\n• The FV (future value) is 8500.\n\nFind out how long it will take to pay off a personal loan\n\nImagine that you have a \$2,500 personal loan, and have agreed to pay \$150 a month at 3% annual interest.\n\nUsing the function NPER(rate,PMT,PV)\n\n=NPER(3%/12,-150,2500)\n\nit would take 17 months and some days to pay off the loan.\n\n• The rate argument is 3% divided by 12 monthly payments per year.\n\n• The PMT argument is -150.\n\n• The PV (present value) argument is 2500.\n\nFigure out a down payment\n\nSay that you’d like to buy a \$19,000 car at a 2.9% interest rate over three years. You want to keep the monthly payments at \$350 a month, so you need to figure out your down payment. In this formula the result of the PV function is the loan amount, which is then subtracted from the purchase price to get the down payment.\n\nUsing the function PV(rate,NPER,PMT)\n\n=19000-PV(2.9%/12, 3*12,-350)\n\nthe down payment required would be \$6,946.48.\n\n• The \$19,000 purchase price is listed first in the formula. The result of the PV function will be subtracted from the purchase price.\n\n• The rate argument is 2.9% divided by 12.\n\n• The NPER argument is 3*12 (or thirty-six monthly payments over three years).\n\n• The PMT is -350 (you would pay \$350 per month).\n\nSee how much your savings will add up to over time\n\nStarting with \$500 in your account, how much will you have in 10 months if you deposit \$200 a month at 1.5% interest?\n\nUsing the function FV(rate,NPER,PMT,PV)\n\n=FV(1.5%/12,10,-200,-500)\n\nin 10 months you would have \$2,517.57 in savings.\n\n• The rate argument is 1.5%/12.\n\n• The NPER argument is 10 (months).\n\n• The PMT argument is -200.\n\n• The PV (present value) argument is -500.",
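To cross-check these results outside of Excel, here is a small Python sketch of the same end-of-period annuity formulas. The function names mirror Excel's PMT and FV, but this is an independent implementation written for illustration, not Microsoft's code.

```python
def pmt(rate, nper, pv, fv=0.0):
    """Excel-style PMT: constant end-of-period payment for a loan/annuity."""
    if rate == 0:
        return -(pv + fv) / nper
    g = (1 + rate) ** nper
    return -(pv * g + fv) * rate / (g - 1)

def fv(rate, nper, payment, pv=0.0):
    """Excel-style FV: future value of constant end-of-period payments."""
    g = (1 + rate) ** nper
    return -(pv * g + payment * (g - 1) / rate)

print(pmt(0.17/12, 2*12, 5400))      # ≈ -266.99  (credit card example)
print(pmt(0.05/12, 30*12, 180000))   # ≈ -966.28  (mortgage example)
print(pmt(0.015/12, 3*12, 0, 8500))  # ≈ -230.99  (vacation savings)
print(fv(0.015/12, 10, -200, -500))  # ≈ 2517.57  (10 months of saving)
```

As in Excel, money paid out is negative and money received is positive, which is why the payments print with a minus sign.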
null,
"Get instant Excel help\nConnect to an expert now\nSubject to Got It terms and conditions"
]
| [
null,
"https://support.office.com/SocImages/got-it-original.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9251298,"math_prob":0.9678316,"size":4753,"snap":"2019-43-2019-47","text_gpt3_token_len":1262,"char_repetition_ratio":0.15813856,"word_repetition_ratio":0.113879,"special_character_ratio":0.30191457,"punctuation_ratio":0.1246432,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9931236,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-20T16:56:09Z\",\"WARC-Record-ID\":\"<urn:uuid:a793c691-828a-4e70-9f88-7cf327e47a48>\",\"Content-Length\":\"113181\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4d9a3995-2593-4ff2-8cae-68fc7948de3c>\",\"WARC-Concurrent-To\":\"<urn:uuid:dc82797d-a0df-4ba6-a477-2ee07393ef29>\",\"WARC-IP-Address\":\"104.117.23.55\",\"WARC-Target-URI\":\"https://support.office.com/en-us/article/using-excel-formulas-to-figure-out-payments-and-savings-11cb708f-c137-4ef8-bcf3-5137aaeb4b20\",\"WARC-Payload-Digest\":\"sha1:GNMOUQ4LQ77C2OUECZDGMPTAPGIPDF5S\",\"WARC-Block-Digest\":\"sha1:CKMP3NMVTQ3Y43RVOG5HT7W2P4ZX5PPI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986717235.56_warc_CC-MAIN-20191020160500-20191020184000-00108.warc.gz\"}"} |
https://www.answers.com/Q/What_is_a_Line_Plot | [
"Math and Arithmetic\nStatistics\nAlgebra\nGraphs\n\n# What is a Line Plot?\n\nA line plot is a graph that indicates the frequency of values along a range of possibilities. These may be numerical or other forms of data. An X is placed for each occurrence of a given value. (This is analogous to a vertical bar graph.)\n\nExample:\n\nLine plot of cars sold (x for each sold)\n\n_ X _ X\n\n_ X X X\n\nX X X X\n\n_______\n\n1 2 3 4\n\n1 = car type 1 (one sold)\n\n2 = car type 2 (three sold)\n\n3 = car type 3 (two sold)\n\n4 = car type 4 (three sold)\n\n🙏\n0\n🤨\n0\n😮\n0\n😂\n0",
null,
""
]
| [
null,
"https://www.answers.com/icons/searchIcon.svg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9449698,"math_prob":0.9980352,"size":2224,"snap":"2021-21-2021-25","text_gpt3_token_len":565,"char_repetition_ratio":0.20225225,"word_repetition_ratio":0.023809524,"special_character_ratio":0.2589928,"punctuation_ratio":0.09126214,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9961101,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-14T20:05:01Z\",\"WARC-Record-ID\":\"<urn:uuid:523b9f0c-266d-449c-bede-3860c266201f>\",\"Content-Length\":\"299654\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5a5373c7-d47f-460c-914d-cadc21d29bb5>\",\"WARC-Concurrent-To\":\"<urn:uuid:58d20c5f-2eb2-40bf-aeee-b69a00c71a36>\",\"WARC-IP-Address\":\"151.101.200.203\",\"WARC-Target-URI\":\"https://www.answers.com/Q/What_is_a_Line_Plot\",\"WARC-Payload-Digest\":\"sha1:RKQXGEAC5NJR7NSIT44LHW322Y6H3VWZ\",\"WARC-Block-Digest\":\"sha1:34CX3MVFXOWHZ57VD5C2RRZ2RQZRWEYD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991207.44_warc_CC-MAIN-20210514183414-20210514213414-00040.warc.gz\"}"} |
https://www.homeworklib.com/qaa/1783690/1-point-in-this-problem-you-will-calculate-the | [
"Question\n\n(1 point) In this problem you will calculate the area between f(x) = x2 and the...",
null,
"",
null,
"(1 point) In this problem you will calculate the area between f(x) = x2 and the x-axis over the interval [3,12] using a limit of right-endpoint Riemann sums: Area = lim ( f(xxAx bir (3 forwar). Express the following quantities in terms of n, the number of rectangles in the Riemann sum, and k, the index for the rectangles in the Riemann sum. a. We start by subdividing [3, 12) into n equal width subintervals [x0, x1], [x1, x2),..., [Xn-1, x, each of width Ax. Express the width of each subinterval Ax in terms of the number of subintervals n. Ax = b. Find the right endpoints X1, X2, X3 of the first, second, and third subintervals [x0, x1], [x1, x2], [X2, X3] and express your answers in terms of n. X1, X2, X3 = (Enter a comma separated list.) c. Find a general expression for the right endpoint Xk of the kth subinterval [Xk-1, xx], where 1 sk <n. Express your answer in terms of k and n. Xx = d. Find f(x) in terms of k and n. f(x) = e. Find f (xx)Ax in terms of k and n. f(x)Ax =\ne. Find f(xk)Ax in terms of k and n. f(xk)Ax = f. Find the value of the right-endpoint Riemann sum in terms of n. n f(xx)Ax = k=1 g. Find the limit of the right-endpoint Riemann sum. lim ( f(xx)Ax = n00 k=1",
null,
"Earn Coins\n\nCoins can be redeemed for fabulous gifts.\n\nSimilar Homework Help Questions\n• Estimate the area of the region bounded by the graph of f(x)-x + 2 and the x-axis on [0,4] in the following ways a. Divide [0,4] into n = 4 subintervals and approximate the area of the region using...",
null,
"Estimate the area of the region bounded by the graph of f(x)-x + 2 and the x-axis on [0,4] in the following ways a. Divide [0,4] into n = 4 subintervals and approximate the area of the region using a left Riemann sum. Illustrate the solution geometrically. b. Divide [0,4] into n = 4 subintervals and approximate the area of the region using a midpoint Riemann sum· illustrate the solution geometrically. C. Divide into n = 4 subintervals and...\n\n• Please answer with work Graph the function f(x) over the given interval. Partition the interval into...",
null,
"Please answer with work Graph the function f(x) over the given interval. Partition the interval into 4 subintervals of equal length. Then add to 4 your sketch the rectangles associated with the Riemann sum f(ck) Axk, using the indicated point in the kth k=1 subinterval for ck. Then approximate the area using these rectangles. 20) f(x) = cos x + 4, [0, 2TT), right-hand endpoint a) Graph: 2 7 22 b) What is the right Riemann sum from 0 to...\n\n• Let f(x) = x on the interval [1,2]. Let the interval be divided into two equal...",
null,
"Let f(x) = x on the interval [1,2]. Let the interval be divided into two equal subintervals. Find the value of the Riemann sum endpoint of its subinterval / dx, if each X;\" is the left\n\n• 3. Find the sum of the areas of approximating rectangles for the area under f(x) =...",
null,
"3. Find the sum of the areas of approximating rectangles for the area under f(x) = 48 - x?, between x = 1 and x = 5 using 4 subintervals and the right endpoints of each subinterval for sample points.\n\n• 2. Write the limit of the Riemann sums as a definite integral. plz !!! Cancel 1....",
null,
"2. Write the limit of the Riemann sums as a definite integral. plz !!! Cancel 1. f(x) = x3 Find the Riemann sum for function f. -2 < x < 3 partitioned into 5 equal subintervals for which u; is the left endpoint of each subinterval. 9 1 • dx a. 성 - 1 b. Sutra ( + r + 6)dx - 3 2. C. { (-6x (-6x3 - 3x² + 2x)dx -2\n\n• 3. Find the sum of the areas of approximating rectangles for the area under f(x) =...",
null,
"3. Find the sum of the areas of approximating rectangles for the area under f(x) = 48 - x?, between x = 1 and x = 5 using 4 subintervals and the right endpoints of each subinterval for sample points. 31\n\n• 4 Graph the function f(x) = cos x on the interval ( - 1,1], showing the...",
null,
"4 Graph the function f(x) = cos x on the interval ( - 1,1], showing the addition of the rectangles associated with the Riemann sum Ef() 4x4 given that ck is the right endpoint of the kth subinterval. Choose the correct graph. O C. OA OB. 1 NA VN/ 2\n\n• Use finite approximation to estimate the area under the graph of f(x) =x2 and above the...",
null,
"Use finite approximation to estimate the area under the graph of f(x) =x2 and above the graph of f(x) = 0 from x0-0 to xn-14 using i) a lower sum with two rectangles of equal width. ii) a lower sum with four rectangles of equal width ili) an upper sum with two rectangles of equal width iv) an upper sum with four rectangles of equal width. The estimated area using a lower sum with two rectangles of equal width is...\n\n• 5. (12 pts.) Consider the region bounded by f(x) 4-2x and the x-axis on interval [-1,...",
null,
"5. (12 pts.) Consider the region bounded by f(x) 4-2x and the x-axis on interval [-1, 4] Follow the steps to state the right Riemann Sum of the function f with n equal-length subintervals on [-, 4] (5 pts.) a. Xk= f(xa) (Substitute x into f and simplify.) Complete the right Riemann Sum (do not evaluate or simplify): -2 b. (1 pt.) lim R calculates NET AREA or TOTAL AREA. (Circle your choice.) Using the graph, shade the region bounded...\n\n• (6) Evaluate the Riemann sum for f(x) = x2 + 2x – 1, 1 < x...",
null,
"(6) Evaluate the Riemann sum for f(x) = x2 + 2x – 1, 1 < x < 4 with six subintervals, taking the sample points to be right endpoints."
]
| [
null,
"https://img.homeworklib.com/questions/fa47e9b0-af08-11eb-89bc-8bae81070a81.png",
null,
"https://img.homeworklib.com/questions/facc6930-af08-11eb-a0aa-a53fc3e4838c.png",
null,
"https://img.homeworklib.com/questions/d796ff40-af5a-11eb-bb8f-654da1cd3aad.png",
null,
"https://img.homeworklib.com/images/24c76abe-3f76-43f7-adb1-26c6a2ea67f9.png",
null,
"https://img.homeworklib.com/questions/46d5b510-ff04-11ea-a153-edd13d91fe42.png",
null,
"https://img.homeworklib.com/questions/d9a50ef0-ab60-11eb-b9e8-d33e84a4daaa.png",
null,
"https://img.homeworklib.com/questions/96c216c0-e8ec-11ea-9235-f9c8646ac47b.png",
null,
"https://img.homeworklib.com/questions/483347f0-4225-11eb-9622-27a26b4f3830.png",
null,
"https://img.homeworklib.com/questions/84bef0d0-e70f-11ea-879e-43195046d00c.png",
null,
"https://img.homeworklib.com/questions/84160d00-bffe-11eb-af0c-292dcc54c190.png",
null,
"https://img.homeworklib.com/questions/5fbc4100-4b9b-11eb-9e75-554fc6794343.png",
null,
"https://img.homeworklib.com/images/75d3b2ee-b75a-40bb-b47b-330d2b80d4b9.png",
null,
"https://img.homeworklib.com/questions/36a01050-e80d-11ea-8982-1133585d0205.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7722099,"math_prob":0.9998037,"size":3238,"snap":"2022-05-2022-21","text_gpt3_token_len":931,"char_repetition_ratio":0.17316018,"word_repetition_ratio":0.19169329,"special_character_ratio":0.293391,"punctuation_ratio":0.11976912,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99992704,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,3,null,1,null,1,null,2,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,2,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-22T11:36:53Z\",\"WARC-Record-ID\":\"<urn:uuid:67c16df8-037e-4290-a1b4-f7ae4fb539eb>\",\"Content-Length\":\"39812\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ce6981e6-6b65-4c2a-a931-f34062149859>\",\"WARC-Concurrent-To\":\"<urn:uuid:a350416a-98e6-4f39-98e8-fdaaab8f2156>\",\"WARC-IP-Address\":\"172.67.42.31\",\"WARC-Target-URI\":\"https://www.homeworklib.com/qaa/1783690/1-point-in-this-problem-you-will-calculate-the\",\"WARC-Payload-Digest\":\"sha1:WESNXCAJLD2BTQBLNDPLUCRRZKJMLWSF\",\"WARC-Block-Digest\":\"sha1:LEZCPE7V2FP5MK6GTUO2QWKYJEYTC2WC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320303845.33_warc_CC-MAIN-20220122103819-20220122133819-00618.warc.gz\"}"} |
https://www.physicsforums.com/threads/trigonometry-and-identities.869901/ | [
"# Trigonometry and Identities\n\n## Homework Statement\n\nsin^2x + 4sinx +4 / sinx + 2 = sinx +2\n\n## The Attempt at a Solution\n\nL.S = sin^2x + 4sinx +4 / sinx + 2\n=1-cos^2+4(sinx + 1) / sinx +2\n\nNot sure where to go from there.\nNot sure if I was even supposed to factor out the 4?\n\nSammyS\nStaff Emeritus\nScience Advisor\nHomework Helper\nGold Member\n\n## Homework Statement\n\nsin^2x + 4sinx +4 / sinx + 2 = sinx +2\n\n## The Attempt at a Solution\n\nL.S = sin^2x + 4sinx +4 / sinx + 2\n=1-cos^2+4(sinx + 1) / sinx +2\n\nNot sure where to go from there.\nNot sure if I was even supposed to factor out the 4?\nPlease enclose the entirety of any numerator and/or denominator in parentheses.\n\nPlease enclose the entirety of any numerator and/or denominator in parentheses.\n\n(Sin^2x + 4sinx + 4) / (sinx + 2) = sinx + 2\n\nSammyS\nStaff Emeritus\nScience Advisor\nHomework Helper\nGold Member\n(Sin^2x + 4sinx + 4) / (sinx + 2) = sinx + 2\nFactor the numerator.\n\nFactor the numerator.\nThank you, didn't catch that.\n\nSammyS\nStaff Emeritus\nScience Advisor\nHomework Helper\nGold Member\nThank you, didn't catch that.\nSo, what do you get ?\n\nSo, what do you get ?\n((Sinx + 2)(Sinx + 2)) / (Sinx + 2)\n\nThen you cancel one from top and bottom to get: Sinx + 2.\n\n•",
null,
"SammyS\n((Sinx + 2)(Sinx + 2)) / (Sinx + 2)\n\nThen you cancel one from top and bottom to get: Sinx + 2.\n\nYes, but it is a tiny bit more complicated. Here's something to think about:\n\n1) Why doesn't the following equality hold for all ##x##:\n\n$$\\frac{(x+2)(x+2)}{x+2} = x+2$$\n\n2) Why is this no problem with the question in the OP?\n\nYes, but it is a tiny bit more complicated. Here's something to think about:\n\n1) Why doesn't the following equality hold for all ##x##:\n\n$$\\frac{(x+2)(x+2)}{x+2} = x+2$$\n\n2) Why is this no problem with the question in the OP?\n\n((Sinx + 2)(Sinx + 2)) you then take reciprocal of denominator and multiply it by the numerator, and that it is when you cancel them out?\n\nmember 587159\nCan you always divide out common factors from numerator and denumerator? For example, can you always say that (cosx-1)(cosx + 1)/(cosx - 1) = cosx + 1?\n\nWhy can/can't you say that? And what about your expression, those are things you have to think about!\n\nYes, but it is a tiny bit more complicated. Here's something to think about:\n\n1) Why doesn't the following equality hold for all ##x##:\n\n$$\\frac{(x+2)(x+2)}{x+2} = x+2$$\n\n2) Why is this no problem with the question in the OP?\n\nTo give you a hint, what happens if we plug in -2 for x? Pay attention to the denominator.\n\nMark44\nMentor\nYes, but it is a tiny bit more complicated. Here's something to think about:\n\n1) Why doesn't the following equality hold for all ##x##:\n\n$$\\frac{(x+2)(x+2)}{x+2} = x+2$$\n\n2) Why is this no problem with the question in the OP?\n\nVeronica Oles said:\n((Sinx + 2)(Sinx + 2)) you then take reciprocal of denominator and multiply it by the numerator, and that it is when you cancel them out?\nmicromass asked two questions. You didn't respond to his first question, and your answer to the second question doesn't address why ##\\frac{(\\sin x+2)(\\sin x+2)}{\\sin x+2} = \\sin x+2## is always true, regardless of the value of x."
]
| [
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.84200054,"math_prob":0.99060243,"size":271,"snap":"2021-04-2021-17","text_gpt3_token_len":112,"char_repetition_ratio":0.1610487,"word_repetition_ratio":0.13793103,"special_character_ratio":0.4095941,"punctuation_ratio":0.05172414,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9963943,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-17T11:28:03Z\",\"WARC-Record-ID\":\"<urn:uuid:b33bf266-faaa-4fc4-9030-c912fca56d7a>\",\"Content-Length\":\"108347\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2d9a0494-a296-4504-8b9b-468221c14618>\",\"WARC-Concurrent-To\":\"<urn:uuid:78ce8147-2882-4323-b7c4-f69642f09a3e>\",\"WARC-IP-Address\":\"104.26.14.132\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/trigonometry-and-identities.869901/\",\"WARC-Payload-Digest\":\"sha1:MVX55WRW5KG6P3RSBEM3QEGAJ673UKUI\",\"WARC-Block-Digest\":\"sha1:MQYIEBNJ4KAN27FGO5P5J4VE77YTD4EQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038119532.50_warc_CC-MAIN-20210417102129-20210417132129-00002.warc.gz\"}"} |
https://wiki.vg/index.php?title=Data_types&diff=next&oldid=6377 | [
"# Difference between revisions of \"Data types\"

All data sent over the network is big-endian; that is, the bytes are sent from the most significant byte to the least significant byte. The majority of everyday computers are little-endian, so it may be necessary to change the endianness before sending data over the network.

Other than 'String' and 'Metadata', which are decoded with a custom function, these data formats are identical to those provided by the Java classes DataInputStream and DataOutputStream.

| Name | Size (bytes) | Encodes | Notes |
|------|--------------|---------|-------|
| Boolean | 1 | false or true | Value can be either true (`0x01`) or false (`0x00`) |
| Byte | 1 | -128 to 127 | Signed 8-bit integer, two's complement |
| Unsigned Byte | 1 | 0 to 255 | Unsigned 8-bit integer |
| Short | 2 | -32768 to 32767 | Signed 16-bit integer, two's complement |
| Unsigned Short | 2 | 0 to 65535 | Unsigned 16-bit integer |
| Int | 4 | -2147483648 to 2147483647 | Signed 32-bit integer, two's complement |
| Long | 8 | -9223372036854775808 to 9223372036854775807 | Signed 64-bit integer, two's complement |
| Float | 4 | | Single-precision 32-bit IEEE 754 floating point |
| Double | 8 | | Double-precision 64-bit IEEE 754 floating point |
| String | ≥ 1, ≤ 2147483652 | A sequence of Unicode code points | UTF-8 string prefixed with its length as a VarInt |
| Chat | ≥ 1, ≤ 2147483652 | See Chat | Encoded as a String |
| VarInt | ≥ 1, ≤ 5 | -2147483648 to 2147483647 | Protocol Buffer Varint, encoding a two's complement signed 32-bit integer |
| VarLong | ≥ 1, ≤ 10 | -9223372036854775808 to 9223372036854775807 | Protocol Buffer Varint, encoding a two's complement signed 64-bit integer |
| Slot | Varies | | See Slot Data |
| Object Data | 4 or 10 | | See Object Data |
| NBT Tag | Varies | | See NBT |
| Position | 8 | integer/block position: x (-33554432 to 33554431), y (-2048 to 2047), z (-33554432 to 33554431) | x as a 26-bit integer, followed by y as a 12-bit integer, followed by z as a 26-bit integer (all signed, two's complement). See also the section below. |
| UUID | 16 | A UUID | The vanilla Minecraft server internally sends this as two longs: `this.writeLong(uuid.getMostSignificantBits()); this.writeLong(uuid.getLeastSignificantBits());` |
| Byte Array | Varies | Depends on context | A sequence of zero or more bytes; its meaning should be explained somewhere else, e.g. in the packet description, and its length must be known from context |

### Position

A 64-bit long split into three parts:

x: 26 MSBs

z: 26 LSBs

y: 12 bits between them

Encoded as follows:

```
((x & 0x3FFFFFF) << 38) | ((y & 0xFFF) << 26) | (z & 0x3FFFFFF)
```

And decoded as:

```
long val; // encoded value read off the wire
long x = val >> 38;        // arithmetic shift sign-extends the top 26 bits
long y = val << 26 >> 52;  // move the middle 12 bits to the top, then sign-extend
long z = val << 38 >> 38;  // sign-extend the low 26 bits
```

(The shifts must be arithmetic, signed shifts; masking y with `0xFFF` alone would lose the sign of negative y values.)

### Fixed-point numbers

Some fields may be stored as fixed-point numbers, where a certain number of bits represents the signed integer part (number to the left of the decimal point) and the rest represents the fractional part (to the right). Floating points (float and double), in contrast, keep the number itself (mantissa) in one chunk, while the location of the decimal point (exponent) is stored beside it.

Essentially, while fixed-point numbers have lower range than floating points, their fractional precision is greater for higher values. 
This makes them ideal for representing global coordinates of an entity in Minecraft, as it's more important to store the integer part accurately than to position the entity more precisely within a single block (or meter).

Coordinates are often represented as a 32-bit integer, where 5 of the least-significant bits are dedicated to the fractional part, and the rest store the integer part.

Java lacks support for fractional integers directly, but you can represent them as integers. To convert from a double to this integer representation, scale first and then truncate:

```
int abs_int = (int) (d * 32.0);
```

And back again:

```
double d = abs_int / 32.0;
```

(Note: `double` is a reserved word in Java and cannot be used as a variable name, and the cast must be applied after multiplying by 32, not before; casting first would discard the fractional part too early.)"
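A short Python sketch of the Position round trip described above (Python rather than Java so that the sign handling is explicit; the helper names are illustrative, not from the wiki):

```python
def encode_position(x, y, z):
    """Pack x (26 bits), y (12 bits), z (26 bits) into one 64-bit value."""
    return ((x & 0x3FFFFFF) << 38) | ((y & 0xFFF) << 26) | (z & 0x3FFFFFF)

def _sign_extend(v, bits):
    """Interpret the low `bits` of v as a two's-complement integer."""
    return v - (1 << bits) if v & (1 << (bits - 1)) else v

def decode_position(val):
    x = _sign_extend((val >> 38) & 0x3FFFFFF, 26)
    y = _sign_extend((val >> 26) & 0xFFF, 12)
    z = _sign_extend(val & 0x3FFFFFF, 26)
    return x, y, z

# Round trip including negative coordinates and the maximum positive z.
assert decode_position(encode_position(-100, -20, 33554431)) == (-100, -20, 33554431)
```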
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.71809936,"math_prob":0.91572315,"size":7184,"snap":"2021-21-2021-25","text_gpt3_token_len":2155,"char_repetition_ratio":0.13830084,"word_repetition_ratio":0.20894238,"special_character_ratio":0.36052337,"punctuation_ratio":0.120732725,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97805375,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-14T07:42:35Z\",\"WARC-Record-ID\":\"<urn:uuid:1eb569e2-23e0-4000-a628-e036f5063bf9>\",\"Content-Length\":\"41997\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ec347e18-cb34-42a1-ab9a-bb5647ddea14>\",\"WARC-Concurrent-To\":\"<urn:uuid:80d0912a-6ee1-47c0-aa8a-e6e2488ba4c0>\",\"WARC-IP-Address\":\"107.21.115.97\",\"WARC-Target-URI\":\"https://wiki.vg/index.php?title=Data_types&diff=next&oldid=6377\",\"WARC-Payload-Digest\":\"sha1:QF2VC2MHYWTRH7UNGOPHMP6Y46LIOHLD\",\"WARC-Block-Digest\":\"sha1:IXMXVI6G2ZB7BSRUZ7VEHLFE6TGETXZA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991648.40_warc_CC-MAIN-20210514060536-20210514090536-00381.warc.gz\"}"} |
https://bookdown.org/rbg/surrogates/chap5.html | [
"# Chapter 5 Gaussian Process Regression\n\nHere the goal is humble on theoretical fronts, but fundamental in application. Our aim is to understand the Gaussian process (GP) as a prior over random functions, a posterior over functions given observed data, as a tool for spatial data modeling and surrogate modeling for computer experiments, and simply as a flexible nonparametric regression. We’ll see that, almost in spite of a technical over-analysis of its properties, and sometimes strange vocabulary used to describe its features, GP regression is a simple extension of linear modeling. Knowing that is all it takes to make use of it as a nearly unbeatable regression tool when input–output relationships are relatively smooth, and signal-to-noise ratios relatively high. And even sometimes when they’re not.\n\nThe subject of this chapter goes by many names and acronyms. Some call it kriging, which is a term that comes from geostatistics (Matheron 1963); some call it Gaussian spatial modeling or a Gaussian stochastic process. Both, if you squint at them the right way, have the acronym GaSP. Machine learning (ML) researchers like Gaussian process regression (GPR). All of these instances are about regression: training on inputs and outputs with the ultimate goal of prediction and uncertainty quantification (UQ), and ancillary goals that are either tantamount to, or at least crucially depend upon, qualities and quantities derived from a predictive distribution. Although the chapter is titled “Gaussian process regression”, and we’ll talk lots about Gaussian process surrogate modeling throughout this book, we’ll typically shorten that mouthful to Gaussian process (GP), or use “GP surrogate” for short. GPS would be confusing and GPSM is too scary. I’ll try to make this as painless as possible.\n\nAfter understanding how it all works, we’ll see how GPs excel in several common response surface tasks: as a sequential design tool in Chapter 6; as the workhorse in modern (Bayesian) optimization of blackbox functions in Chapter 7; and all that with a “hands off” approach. Classical RSMs of Chapter 3 have many attractive features, but most of that technology was designed specifically, and creatively, to cope with limitations arising in first- and second-order linear modeling. Once in the more flexible framework that GPs provide, one can think big without compromising finer detail on smaller things.\n\nOf course GPs are no panacea. Specialized tools can work better in less generic contexts. And GPs have their limitations. We’ll have the opportunity to explore just what they are, through practical examples. And down the line in Chapter 9 we’ll see that most of those are easy to sweep away with a bit of cunning. These days it’s hard to make the case that a GP shouldn’t be involved as a component in a larger analysis, or at least attempted as such, where ultimately limited knowledge of the modeling context can be met by a touch of flexibility, taking us that much closer to human-free statistical “thinking” – a fundamental desire in ML and thus, increasingly, in tools developed for modern analytics.\n\n## 5.1 Gaussian process prior\n\nGaussian process is a generic term that pops up, taking on disparate but quite specific meanings, in various statistical and probabilistic modeling enterprises. As a generic term, all it means is that any finite collection of realizations (i.e., $$n$$ observations) is modeled as having a multivariate normal (MVN) distribution. 
That, in turn, means that characteristics of those realizations are completely described by their mean $$n$$-vector $$\mu$$ and $$n \times n$$ covariance matrix $$\Sigma$$. With interest in modeling functions, we’ll sometimes use the term mean function, thinking of $$\mu(x)$$, and covariance function, thinking of $$\Sigma(x, x')$$. But ultimately we’ll end up with vectors $$\mu$$ and matrices $$\Sigma$$ after evaluating those functions at specific input locations $$x_1, \dots, x_n$$.

You’ll hear people talk about function spaces, reproducing kernel Hilbert spaces, and so on, in the context of GP modeling of functions. Sometimes thinking about those aspects/properties is important, depending on context. For most purposes that makes things seem fancier than they really need to be.

The action, at least the part that’s interesting, in a GP treatment of functions is all in the covariance. Consider a covariance function defined by inverse exponentiated squared Euclidean distance:

$\Sigma(x, x') = \exp\{ - || x - x' ||^2 \}.$

Here covariance decays exponentially fast as $$x$$ and $$x'$$ become farther apart in the input, or $$x$$-space. In this specification, observe that $$\Sigma(x,x) = 1$$ and $$\Sigma(x, x') < 1$$ for $$x' \ne x$$. The function $$\Sigma(x, x')$$ must be positive definite. For us this means that if we define a covariance matrix $$\Sigma_n$$, based on evaluating $$\Sigma(x_i, x_j)$$ at pairs of $$n$$ $$x$$-values $$x_1, \dots, x_n$$, we must have that

$x^\top \Sigma_n x > 0 \quad \mbox{ for all } x \ne 0.$

We intend to use $$\Sigma_n$$ as a covariance matrix in an MVN, and a positive (semi-) definite covariance matrix is required for MVN analysis. In that context, positive definiteness is the multivariate extension of requiring that a univariate Gaussian have positive variance parameter, $$\sigma^2$$.

To ultimately see how a GP with that simple choice of covariance $$\Sigma_n$$ can be used to perform regression, let’s first see how GPs can be used to generate random data following a smooth functional relationship. Suppose we take a bunch of $$x$$-values: $$x_1,\dots, x_n$$, define $$\Sigma_n$$ via $$\Sigma_n^{ij} = \Sigma(x_i, x_j)$$, for $$i,j=1,\dots,n$$, then draw an $$n$$-variate realization

$Y \sim \mathcal{N}_n(0, \Sigma_n),$

and plot the result in the $$x$$-$$y$$ plane. That was a mouthful, but don’t worry: we’ll see it in code momentarily. First note that the mean of this MVN is zero; this need not be but it’s quite surprising how well things work even in this special case. Location invariant zero-mean GP modeling, sometimes after subtracting off a middle value of the response (e.g., $$\bar{y}$$), is the default in the computer surrogate modeling and machine learning (ML) literatures. We’ll talk about generalizing this later.

Here’s a version of that verbal description with $$x$$-values in 1d. First create an input grid with 100 elements.

n <- 100
X <- matrix(seq(0, 10, length=n), ncol=1)

Next calculate pairwise squared Euclidean distances between those inputs. I like the distance function from the plgp package (Gramacy 2014) in R because it was designed exactly for this purpose (i.e., for use with GPs), however dist in base R provides similar functionality.

library(plgp)
D <- distance(X)

Then build up covariance matrix $$\Sigma_n$$ as inverse exponentiated squared Euclidean distances. Notice that the code below augments the diagonal with a small number eps $$\equiv \epsilon$$. 
Although inverse exponentiated distances guarantee a positive definite matrix in theory, sometimes in practice the matrix is numerically ill-conditioned. Augmenting the diagonal a tiny bit prevents that. Neal (1998), a GP vanguard in the statistical/ML literature, calls $$\epsilon$$ the jitter in this context.

eps <- sqrt(.Machine$double.eps)
Sigma <- exp(-D) + diag(eps, n)

Finally, plug that covariance matrix into an MVN random generator; below I use one from the mvtnorm package (Genz et al. 2018) on CRAN.

library(mvtnorm)
Y <- rmvnorm(1, sigma=Sigma)

That’s it! We’ve generated a finite realization of a random function under a GP prior with a particular covariance structure. Now all that’s left is visualization. Figure 5.1 plots those X and Y pairs as tiny connected line segments on an $$x$$-$$y$$ plane.

plot(X, Y, type=\"l\")",
null,
"FIGURE 5.1: A random function under a GP prior. Because the $$Y$$-values are random, you’ll get a different curve when you try this on your own. We’ll generate some more below in a moment. But first, what are the properties of this function, or more precisely of a random function generated in this way? Several are easy to deduce from the form of the covariance structure. We’ll get a range of about $$[-2,2]$$, with 95% probability, because the scale of the covariance is 1, ignoring the jitter $$\\epsilon$$ added to the diagonal. We’ll get several bumps in the $$x$$-range of $$[0,10]$$ because short distances are highly correlated (about 37%) and long distances are essentially uncorrelated ($$1e^{-7}$$). c(exp(-1^2), exp(-4^2)) ## 3.679e-01 1.125e-07 Now the function plotted above is only a finite realization, meaning that we really only have 100 pairs of points. Those points look smooth, in a tactile sense, because they’re close together and because the plot function is “connecting the dots” with lines. The full surface, which you might conceptually extend to an infinite realization over a compact domain, is extremely smooth in a calculus sense because the covariance function is infinitely differentiable, a discussion we’ll table for a little bit later. Besides those three things – scale of two, several bumps, smooth look – we won’t be able to anticipate much else about the nature of a particular realization. Figure 5.2 shows three new random draws obtained in a similar way, which will again look different when you run the code on your own. Y <- rmvnorm(3, sigma=Sigma) matplot(X, t(Y), type=\"l\", ylab=\"Y\")",
null,
"FIGURE 5.2: Three more random functions under a GP prior. Each random finite collection is different than the next. They all have similar range, about the same number of bumps, and are smooth. That’s what it means to have function realizations under a GP prior: $$Y(x) \\sim \\mathcal{GP}$$. ### 5.1.1 Gaussian process posterior Of course, we’re not in the business of generating random functions. I’m not sure what that would be useful for. Instead, we ask: given examples of a function in pairs $$(x_1, y_1), \\dots, (x_n, y_n)$$, comprising data $$D_n = (X_n, Y_n)$$, what random function realizations could explain – could have generated – those observed values? That is, we want to know about the conditional distribution of $$Y(x) \\mid D_n$$. If we call $$Y(x) \\sim \\mathcal{GP}$$ the prior, then $$Y(x) \\mid D_n$$ must be the posterior. Fortunately, you don’t need to be a card-carrying Bayesian to appreciate what’s going on, although that perspective has really taken hold in ML. That conditional distribution, $$Y(x) \\mid D_n$$, which one might more simply call a predictive distribution, is a familiar quantity in regression analysis. Forget for the moment that when regressing one is often interested in other aspects, like relevance of predictors through estimates of parameter standard errors, etc., and that so far our random functions look like they have no noise. The somewhat strange, and certainly most noteworthy, thing is that so far there are no parameters! Let’s shelve interpretation (Bayesian updating or a twist on simple regression) for a moment and focus on conditional distributions, because that’s what it’s really all about. Deriving that predictive distribution is a simple application of deducing a conditional from a (joint) MVN. From Wikipedia, if an $$N$$-dimensional random vector $$X$$ is partitioned as $X = \\left( \\begin{array}{c} X_1 \\\\ X_2 \\end{array} \\right) \\quad \\mbox{ with sizes } \\quad \\left( \\begin{array}{c} q \\times 1 \\\\ (N-q) \\times 1 \\end{array} \\right),$ and accordingly $$\\mu$$ and $$\\Sigma$$ are partitioned as, $\\mu = \\left( \\begin{array}{c} \\mu_1 \\\\ \\mu_2 \\end{array} \\right) \\quad \\mbox{ with sizes } \\quad \\left( \\begin{array}{c} q \\times 1 \\\\ (N-q) \\times 1 \\end{array} \\right)$ and $\\Sigma = \\left(\\begin{array}{cc} \\Sigma_{11} & \\Sigma_{12} \\\\ \\Sigma_{21} & \\Sigma_{22} \\end{array} \\right) \\ \\mbox{ with sizes } \\ \\left(\\begin{array}{cc} q \\times q & q \\times (N-q) \\\\ (N-q)\\times q & (N - q)\\times (N-q) \\end{array} \\right),$ then the distribution of $$X_1$$ conditional on $$X_2 = x_2$$ is MVN $$X_1 \\mid x_2 \\sim \\mathcal{N}_q (\\bar{\\mu}, \\bar{\\Sigma})$$, where \\begin{align} \\bar{\\mu} &= \\mu_1 + \\Sigma_{12} \\Sigma_{22}^{-1}(x_2 - \\mu_2) \\tag{5.1} \\\\ \\mbox{and } \\quad \\bar{\\Sigma} &= \\Sigma_{11} - \\Sigma_{12} \\Sigma_{22}^{-1} \\Sigma_{21}. \\notag \\end{align} An interesting feature of this result is that conditioning upon $$x_2$$ alters the variance of $$X_1$$. Observe that $$\\bar{\\Sigma}$$ above is reduced compared to its marginal analog $$\\Sigma_{11}$$. Reduction in variance when conditioning on data is a hallmark of statistical learning. We know more – have less uncertainty – after incorporating data. Curiously, the amount by which variance is decreased doesn’t depend on the value of $$x_2$$. Observe that the mean is also altered, comparing $$\\mu_1$$ to $$\\bar{\\mu}$$. 
In fact, the equation for $$\bar{\mu}$$ is a linear mapping, i.e., of the form $$ax + b$$ for vectors $$a$$ and $$b$$. Finally, note that $$\Sigma_{12} = \Sigma_{21}^\top$$ so that $$\bar{\Sigma}$$ is symmetric.

Ok, how do we deploy that fundamental MVN result towards deriving the GP predictive distribution $$Y(x) \mid D_n$$? Consider an $$n+1^{\mathrm{st}}$$ observation $$Y(x)$$. Allow $$Y(x)$$ and $$Y_n$$ to have a joint MVN distribution with mean zero and covariance function $$\Sigma(x,x')$$. That is, stack

$\left( \begin{array}{c} Y(x) \\ Y_n \end{array} \right) \quad \mbox{ with sizes } \quad \left( \begin{array}{c} 1 \times 1 \\ n \times 1 \end{array} \right),$

and if $$\Sigma(X_n,x)$$ is the $$n \times 1$$ matrix comprised of $$\Sigma(x_1, x), \dots, \Sigma(x_n, x)$$, its covariance structure can be partitioned as follows:

$\left(\begin{array}{cc} \Sigma(x,x) & \Sigma(x,X_n) \\ \Sigma(X_n,x) & \Sigma_n \end{array} \right) \ \mbox{ with sizes } \ \left(\begin{array}{cc} 1 \times 1 & 1 \times n \\ n\times 1 & n \times n \end{array} \right).$

Recall that $$\Sigma(x,x) = 1$$ with our simple choice of covariance function, and that symmetry provides $$\Sigma(x,X_n) = \Sigma(X_n,x)^\top$$. Applying Eq. (5.1) yields the following predictive distribution

$Y(x) \mid D_n \sim \mathcal{N}(\mu(x), \sigma^2(x))$

with

\begin{align} \mbox{mean } \quad \mu(x) &= \Sigma(x, X_n) \Sigma_n^{-1} Y_n \tag{5.2} \\ \mbox{and variance } \quad \sigma^2(x) &= \Sigma(x,x) - \Sigma(x, X_n) \Sigma_n^{-1} \Sigma(X_n, x). \notag \end{align}

Observe that $$\mu(x)$$ is linear in observations $$Y_n$$, so we have a linear predictor! In fact it’s the best linear unbiased predictor (BLUP), an argument we’ll leave to other texts (e.g., Santner, Williams, and Notz 2018). Also notice that $$\sigma^2(x)$$ is lower than the marginal variance. So we learn something from data $$Y_n$$; in fact the amount that variance goes down is a quadratic function of distance between $$x$$ and $$X_n$$. Learning is most efficient for $$x$$ that are close to training data locations $$X_n$$. However the amount learned doesn’t depend upon $$Y_n$$. We’ll return to that later.

The derivation above is for “pointwise” GP predictive calculations. These are sometimes called the kriging equations, especially in geospatial contexts. We can apply them, separately, for many predictive/testing locations $$x$$, one $$x$$ at a time, but that would ignore the obvious correlation they’d experience in a big MVN analysis. Alternatively, we may consider a bunch of $$x$$ locations jointly, in a testing design $$\mathcal{X}$$ of $$n'$$ rows, say, all at once:

$Y(\mathcal{X}) \mid D_n \sim \mathcal{N}_{n'}(\mu(\mathcal{X}), \Sigma(\mathcal{X}))$

with

\begin{align} \mbox{mean } \quad \mu(\mathcal{X}) &= \Sigma(\mathcal{X}, X_n) \Sigma_n^{-1} Y_n \tag{5.3}\\ \mbox{and variance } \quad \Sigma(\mathcal{X}) &= \Sigma(\mathcal{X},\mathcal{X}) - \Sigma(\mathcal{X}, X_n) \Sigma_n^{-1} \Sigma(\mathcal{X}, X_n)^\top, \notag \end{align}

where $$\Sigma(\mathcal{X}, X_n)$$ is an $$n' \times n$$ matrix. Having a full covariance structure offers a more complete picture of the random functions which explain data under a GP posterior, but also more computation. The $$n' \times n'$$ matrix $$\Sigma(\mathcal{X})$$ could be enormous even for seemingly moderate $$n'$$. 
#### Simple 1d GP prediction example

Consider a toy example in 1d where the response is a simple sinusoid measured at eight equally spaced $$x$$-locations in the span of a single period of oscillation. R code below provides relevant data quantities, including pairwise squared distances between the input locations collected in the matrix D, and its inverse exponentiation in Sigma.

n <- 8
X <- matrix(seq(0, 2*pi, length=n), ncol=1)
y <- sin(X)
D <- distance(X)
Sigma <- exp(-D) + diag(eps, ncol(D))

Now this is where the example diverges from our earlier one, where we used such quantities to generate data from a GP prior. Applying MVN conditioning equations requires similar calculations on a testing design $$\mathcal{X}$$, coded as XX below. We need inverse exponentiated squared distances between those XX locations …

XX <- matrix(seq(-0.5, 2*pi + 0.5, length=100), ncol=1)
DXX <- distance(XX)
SXX <- exp(-DXX) + diag(eps, ncol(DXX))

… as well as between testing locations $$\mathcal{X}$$ and training data locations $$X_n$$.

DX <- distance(XX, X)
SX <- exp(-DX)

Note that an $$\epsilon$$ jitter adjustment is not required for SX because it need not be decomposed in the conditioning calculations (and SX is anyways not square). We do need jitter on the diagonal of SXX though, because this matrix is directly involved in calculation of the predictive covariance which we shall feed into an MVN generator below.

Now simply follow Eq. (5.3) to derive joint predictive equations for XX $$\equiv \mathcal{X}$$: invert $$\Sigma_n$$, apply the linear predictor, and calculate reduction in covariance.

Si <- solve(Sigma)
mup <- SX %*% Si %*% y
Sigmap <- SXX - SX %*% Si %*% t(SX)

Above mup maps to $$\mu(\mathcal{X})$$ evaluated at our testing grid $$\mathcal{X} \equiv$$ XX, and Sigmap similarly for $$\Sigma(\mathcal{X})$$ via pairs in XX. As a computational note, observe that Siy <- Si %*% y may be pre-computed in time quadratic in $$n=$$ length(y) so that mup may subsequently be calculated for any XX in time linear in $$n$$, without redoing Siy; for example, as solve(Sigma, y). There are two reasons we’re not doing that here. One is to establish a clean link between code and mathematical formulae. The other is a presumption that the variance calculation, which remains quadratic in $$n$$ no matter what, is at least as important as the mean.

Mean vector and covariance matrix in hand, we may generate $$Y$$-values from the posterior/predictive distribution $$Y(\mathcal{X}) \mid D_n$$ in the same manner as we did from the prior.

YY <- rmvnorm(100, mup, Sigmap)

Those $$Y(\mathcal{X}) \equiv$$ YY samples may then be plotted as a function of predictive input $$\mathcal{X} \equiv$$ XX locations. Before doing that, extract some pointwise quantile-based error-bars from the diagonal of $$\Sigma(\mathcal{X})$$ to aid in visualization.

q1 <- mup + qnorm(0.05, 0, sqrt(diag(Sigmap)))
q2 <- mup + qnorm(0.95, 0, sqrt(diag(Sigmap)))

Figure 5.3 plots each of the random predictive, finite realizations as gray curves. Training data points are overlayed, along with true response at the $$\mathcal{X}$$ locations as a thin blue line. Predictive mean $$\mu(\mathcal{X})$$ in black, and 90% quantiles in dashed-red, are added as thicker lines.

matplot(XX, t(YY), type=\"l\", col=\"gray\", lty=1, xlab=\"x\", ylab=\"y\")
points(X, y, pch=20, cex=2)
lines(XX, sin(XX), col=\"blue\")
lines(XX, mup, lwd=2)
lines(XX, q1, lwd=2, lty=2, col=2)
lines(XX, q2, lwd=2, lty=2, col=2)",
null,
"FIGURE 5.3: Posterior predictive distribution in terms of means (solid black), quantiles (dashed-red), and draws (gray). The truth is shown as a thin blue line. What do we observe in the figure? Notice how the predictive surface interpolates the data. That’s because $$\\Sigma(x, x) = 1$$ and $$\\Sigma(x, x') \\rightarrow 1^-$$ as $$x' \\rightarrow x$$. Error-bars take on a “football” shape, or some say a “sausage” shape, being widest at locations farthest from $$x_i$$-values in the data. Error-bars get really big outside the range of the data, a typical feature in ordinary linear regression settings. But the predictive mean behaves rather differently than under an ordinary linear model. For GPs it’s mean-reverting, eventually leveling off to zero as $$x \\in \\mathcal{X}$$ gets far away from $$X_n$$. Predictive variance, as exemplified by those error-bars, is also reverting to something: a prior variance of 1. In particular, variance won’t continue to increase as $$x$$ gets farther and farther from $$X_n$$. Together those two “reversions” imply that although we can’t trust extrapolations too far outside of the data range, at least their behavior isn’t unpredictable, as can sometimes happen in linear regression contexts, for example when based upon feature-expanded (e.g., polynomial basis) covariates. These characteristics, especially the football/sausage shape, is what makes GPs popular as surrogates for computer simulation experiments. That literature, which historically emphasized study of deterministic computer simulators, drew comfort from interpolation-plus-expansion of variance away from training simulations. Perhaps more importantly, they liked that out-of-sample prediction was highly accurate. Come to think of it, that’s why spatial statisticians and machine learners like them too. But hold that thought; there are a few more things to do before we get to predictive comparisons. ### 5.1.2 Higher dimension? There’s nothing particularly special about the presentation above that would preclude application in higher input dimension. Except perhaps that visualization is a lot simpler in 1d or 2d. We’ll get to even higher dimensions with some of our later examples. For now, consider a random function in 2d sampled from a GP prior. The plan is to go back through the process above: first prior, then (posterior) predictive, etc. Begin by creating an input set, $$X_n$$, in two dimensions. Here we’ll use a regular $$20 \\times 20$$ grid. nx <- 20 x <- seq(0, 2, length=nx) X <- expand.grid(x, x) Then calculate pairwise distances and evaluate covariances under inverse exponentiated squared Euclidean distances, plus jitter. D <- distance(X) Sigma <- exp(-D) + diag(eps, nrow(X)) Finally make random MVN draws in exactly the same way as before. Below we save two such draws. Y <- rmvnorm(2, sigma=Sigma) For visualization in Figure 5.4, persp is used to stretch each $$20 \\times 20$$ = 400-variate draw over a mesh with a fortuitously chosen viewing angle. par(mfrow=c(1,2)) persp(x, x, matrix(Y[1,], ncol=nx), theta=-30, phi=30, xlab=\"x1\", ylab=\"x2\", zlab=\"y\") persp(x, x, matrix(Y[2,], ncol=nx), theta=-30, phi=30, xlab=\"x1\", ylab=\"x2\", zlab=\"y\")",
null,
"FIGURE 5.4: Two random functions under a GP prior in 2d. So drawing from a GP prior in 2d is identical to the 1d case, except with a 2d input grid. All other code is “cut-and-paste”. Visualization is more cumbersome, but that’s a cosmetic detail. Learning from training data, i.e., calculating the predictive distribution for observed $$(x_i, y_i)$$ pairs, is no different: more cut-and-paste. To try it out we need to cook up some toy data from which to learn. Consider the 2d function $$y(x) = x_1 \\exp\\{-x_1^2 - x_2^2\\}$$ which is highly nonlinear near the origin, but flat (zero) as inputs get large. This function has become a benchmark 2d problem in the literature for reasons that we’ll get more into in Chapter 9. Suffice it to say that thinking up simple-yet-challenging toy problems is a great way to get noticed in the community, even when you borrow a common example in vector calculus textbooks or one used to demonstrate 3d plotting features in MATLAB®. library(lhs) X <- randomLHS(40, 2) X[,1] <- (X[,1] - 0.5)*6 + 1 X[,2] <- (X[,2] - 0.5)*6 + 1 y <- X[,1]*exp(-X[,1]^2 - X[,2]^2) Above, a Latin hypercube sample (LHS; §4.1) is used to generate forty (coded) input locations in lieu of a regular grid in order to create a space-filling input design. A regular grid with 400 elements would have been overkill, but a uniform random design of size forty or so would have worked equally well. Coded inputs are mapped onto a scale of $$[-2,4]^2$$ in order to include both bumpy and flat regions. Let’s suppose that we wish to interpolate those forty points onto a regular $$40 \\times 40$$ grid, say for stretching over a mesh. Here’s code that creates such testing locations XX $$\\equiv\\mathcal{X}$$ in natural units. xx <- seq(-2, 4, length=40) XX <- expand.grid(xx, xx) Now that we have inputs and outputs, X and y, and predictive locations XX we can start cutting-and-pasting. Start with the relevant training data quantities … D <- distance(X) Sigma <- exp(-D) … and follow with similar calculations between input sets X and XX. DXX <- distance(XX) SXX <- exp(-DXX) + diag(eps, ncol(DXX)) DX <- distance(XX, X) SX <- exp(-DX) Then apply Eq. (5.3). Code wise, these lines are identical to what we did in the 1d case. Si <- solve(Sigma) mup <- SX %*% Si %*% y Sigmap <- SXX - SX %*% Si %*% t(SX) It’s hard to visualize a multitude of sample paths in 2d – two was plenty when generating from the prior – but if desired, we may obtain them with the same rmvnorm commands as in §5.1.1. Instead focus on plotting pointwise summaries, namely predictive mean $$\\mu(x) \\equiv$$ mup and predictive standard deviation $$\\sigma(x)$$: sdp <- sqrt(diag(Sigmap)) The left panel in Figure 5.5 provides an image plot of the mean over our regularly-gridded inputs XX; the right panel shows standard deviation. par(mfrow=c(1,2)) cols <- heat.colors(128) image(xx, xx, matrix(mup, ncol=length(xx)), xlab=\"x1\", ylab=\"x2\", col=cols) points(X[,1], X[,2]) image(xx, xx, matrix(sdp, ncol=length(xx)), xlab=\"x1\", ylab=\"x2\", col=cols) points(X[,1], X[,2])",
null,
"FIGURE 5.5: Posterior predictive for a two-dimensional example, via mean (left) and standard deviation (right) surfaces. Training data input locations are indicated by open circles. What do we observe? Pretty much the same thing as in the 1d case. We can’t see it, but the predictive surface interpolates. Predictive uncertainty, here as standard deviation $$\\sigma(x)$$, is highest away from $$x_i$$-values in the training data. Predictive intervals don’t look as much like footballs or sausages, yet somehow that analogy still works. Training data locations act as anchors to smooth variation between points with an organic rise in uncertainty as we imagine predictive inputs moving away from one toward the next. Figure 5.6 provides another look, obtained by stretching the predictive mean over a mesh. Bumps near the origin are clearly visible, with a flat region emerging for larger $$x_1$$ and $$x_2$$ settings. persp(xx, xx, matrix(mup, ncol=40), theta=-30, phi=30, xlab=\"x1\", ylab=\"x2\", zlab=\"y\")",
null,
"FIGURE 5.6: Perspective view on the posterior mean surface from the left panel of Figure 5.5. Well that’s basically it! Now you know GP regression. Where to go from here? Hopefully I’ve convinced you that GPs hold great potential as a nonlinear regression tool. It’s kinda-cool that they perform so well – that they “learn” – without having to tune anything. In statistics, we’re so used to seeking out optimal settings of parameters that a GP predictive surface might seem like voodoo. Simple MVN conditioning is able to capture input–output dynamics without having to “fit” anything, or without trying to minimize a loss criteria. That flexibility, without any tuning knobs, is what people think of when they call GPs a nonparametric regression tool. All we did was define covariance in terms of (inverse exponentiated squared Euclidean) distance, condition, and voilà. But when you think about it a little bit, there are lots of (hidden) assumptions which are going to be violated by most real-data contexts. Data is noisy. The amplitude of all functions we might hope to learn will not be 2. Correlation won’t decay uniformly in all directions, i.e., radially. Even the most ideally smooth physical relationships are rarely infinitely smooth. Yet we’ll see that even gross violations of those assumptions are easy to address, or “fix up”. At the same time GPs are relatively robust to transgressions between assumptions and reality. In other words, sometimes it works well even when it ought not. As I see it – once we clean things up – there are really only two serious problems that GPs face in practice: stationarity of covariance (§5.3.3), and computational burden, which in most contexts go hand-in-hand. Remedies for both will have to wait for Chapter 9. For now, let’s keep the message upbeat. There’s lots that can be accomplished with the canonical setup, whose description continues below. ## 5.2 GP hyperparameters All this business about nonparametric regression and here we are introducing parameters, passive–aggressively you might say: refusing to call them parameters. How can one have hyperparameters without parameters to start with, or at least to somehow distinguish from? To make things even more confusing, we go about learning those hyperparameters in the usual way, by optimizing something, just like parameters. I guess it’s all to remind you that the real power – the real flexibility – comes from MVN conditioning. These hyperparameters are more of a fine tuning. There’s something to that mindset, as we shall see. Below we revisit the drawbacks alluded to above – scale, noise, and decay of correlation – with a (fitted) hyperparameter targeting each one. ### 5.2.1 Scale Suppose you want your GP prior to generate random functions with an amplitude larger than two. You could introduce a scale parameter $$\\tau^2$$ and then take $$\\Sigma_n = \\tau^2 C_n$$. Here $$C$$ is basically the same as our $$\\Sigma$$ from before: a correlation function for which $$C(x,x) = 1$$ and $$C(x,x') < 1$$ for $$x \\ne x'$$, and positive definite; for example $C(x, x') = \\exp \\{- || x - x' ||^2 \\}.$ But we need a more nuanced notion of covariance to allow more flexibility on scale, so we’re re-parameterizing a bit. Now our MVN generator looks like $Y \\sim \\mathcal{N}_n(0, \\tau^2 C_n).$ Let’s check that that does the trick. First rebuild $$X_n$$-locations, e.g., a sequence of one hundred from zero to ten, and then calculate pairwise distances. Nothing different yet compared to our earlier illustration in §5.1. 
n <- 100 X <- matrix(seq(0, 10, length=n), ncol=1) D <- distance(X) Now amplitude, via 95% of the range of function realizations, is approximately $$2\\sigma(x)$$ where $$\\sigma^2 \\equiv \\mathrm{diag}(\\Sigma_n)$$. So for an amplitude of 10, say, choose $$\\tau^2 = 5^2 = 25$$. The code below calculates inverse exponentiated squared Euclidean distances in $$C_n$$ and makes ten draws from an MVN whose covariance is obtained by pre-multiplying $$C_n$$ by $$\\tau^2$$. C <- exp(-D) + diag(eps, n) tau2 <- 25 Y <- rmvnorm(10, sigma=tau2*C) As Figure 5.7 shows, amplitude has increased. Not all draws completely lie between $$-10$$ and $$10$$, but most are in the ballpark. matplot(X, t(Y), type=\"l\")",
"FIGURE 5.7: Higher amplitude draws from a GP prior. But again, who cares about generating random functions? We want to be able to learn about functions on any scale from training data. What would happen if we had some data with an amplitude of 5, say, but we used a GP with a built-in scale of 1 (amplitude of 2). In other words, what would happen if we did things the “old-fashioned way”, with code cut-and-pasted directly from §5.1.1? First generate some data with that property. Here we’re revisiting sinusoidal data from §5.1.1, but multiplying by 5 on the way out of the sin call. n <- 8 X <- matrix(seq(0, 2*pi, length=n), ncol=1) y <- 5*sin(X) Next cut-and-paste code from earlier, including our predictive grid of 100 equally spaced locations. D <- distance(X) Sigma <- exp(-D) XX <- matrix(seq(-0.5, 2*pi + 0.5, length=100), ncol=1) DXX <- distance(XX) SXX <- exp(-DXX) + diag(eps, ncol(DXX)) DX <- distance(XX, X) SX <- exp(-DX) Si <- solve(Sigma); mup <- SX %*% Si %*% y Sigmap <- SXX - SX %*% Si %*% t(SX) Now we have everything we need to visualize the resulting predictive surface, which is shown in Figure 5.8 using plotting code identical to that behind Figure 5.3. YY <- rmvnorm(100, mup, Sigmap) q1 <- mup + qnorm(0.05, 0, sqrt(diag(Sigmap))) q2 <- mup + qnorm(0.95, 0, sqrt(diag(Sigmap))) matplot(XX, t(YY), type=\"l\", col=\"gray\", lty=1, xlab=\"x\", ylab=\"y\") points(X, y, pch=20, cex=2) lines(XX, mup, lwd=2) lines(XX, 5*sin(XX), col=\"blue\") lines(XX, q1, lwd=2, lty=2, col=2) lines(XX, q2, lwd=2, lty=2, col=2)",
"FIGURE 5.8: GP fit to higher amplitude sinusoid. What happened? In fact the “scale 1” GP is pretty robust. It gets the predictive mean almost perfectly, despite using the “wrong prior” relative to the actual data generating mechanism, at least as regards scale. But it’s over-confident. Besides a change of scale, the new training data exhibit no change in relative error, nor any other changes for that matter, compared to the example we did above where the scale was actually 1. So we must now be under-estimating predictive uncertainty, which is obvious by visually comparing the error-bars to those obtained from our earlier fit (Figure 5.3). Looking closely, notice that the true function goes well outside of our predictive interval at the edges of the input space. That didn’t happen before. How to estimate the right scale? Well for starters, admit that scale may be captured by a parameter, $$\\tau^2$$, even though we’re going to call it a hyperparameter to remind ourselves that its impact on the overall estimation procedure is really more of a fine-tuning. The analysis above lends some credence to that perspective, since our results weren’t too bad even though we assumed an amplitude that was off by a factor of five. Whether benevolently gifted the right scale or not, GPs clearly retain a great deal of flexibility to adapt to the dynamics at play in data. Decent predictive surfaces often materialize, as we have seen, in spite of less than ideal parametric specifications. As with any “parameter”, there are many choices when it comes to estimation: method of moments (MoM), likelihood (maximum likelihood, Bayesian inference), cross validation (CV), the “eyeball norm”. Some, such as those based on (semi-) variograms, are preferred in the spatial statistics literature. All of those are legitimate, except maybe the eyeball norm which isn’t very easily automated and challenges reproducibility. I’m not aware of any MoM approaches to GP inference for hyperparameters. Stochastic kriging (Ankenman, Nelson, and Staum 2010) utilizes MoM in a slightly more ambitious, latent variable setting which is the subject of Chapter 10. Whereas CV is common in some circles, such frameworks generalize rather less well to higher dimensional hyperparameter spaces, which we’re going to get to momentarily. I prefer likelihood-based inferential schemes for GPs, partly because they’re the most common and, especially in the case of maximizing (MLE/MAP) solutions, they’re also relatively hands-off (easy automation), and nicely generalize to higher dimensional hyperparameter spaces. But wait a minute, what’s the likelihood in this context? It’s a bit bizarre that we’ve been talking about priors and posteriors without ever talking about likelihood. Both prior and likelihood are needed to form a posterior. We’ll get into finer detail later. For now, recognize that our data-generating process is $$Y \\sim \\mathcal{N}_n(0, \\tau^2 C_n)$$, so the relevant quantity, which we’ll call the likelihood now (but was our prior earlier), comes from an MVN PDF: $L \\equiv L(\\tau^2, C_n) = (2\\pi \\tau^2)^{-\\frac{n}{2}} | C_n |^{-\\frac{1}{2}} \\exp\\left\\{- \\frac{1}{2\\tau^2} Y_n^\\top C_n^{-1} Y_n \\right\\}.$ Taking the log of that is easy, and we get $\\begin{equation} \\ell = \\log L = -\\frac{n}{2} \\log 2\\pi - \\frac{n}{2} \\log \\tau^2 - \\frac{1}{2} \\log |C_n| - \\frac{1}{2\\tau^2} Y_n^\\top C_n^{-1} Y_n. \\tag{5.4} \\end{equation}$ To maximize that (log) likelihood with respect to $$\\tau^2$$, just differentiate and solve. 
\begin{align} 0 \stackrel{\mathrm{set}}{=} \ell' &= - \frac{n}{2 \tau^2} + \frac{1}{2 (\tau^2)^2} Y_n^\top C_n^{-1} Y_n, \notag \\ \mbox{so } \hat{\tau}^2 &= \frac{Y_n^\top C_n^{-1} Y_n}{n}. \tag{5.5} \end{align}

In other words, we get that the MLE for scale $$\tau^2$$ is a mean residual sum of squares under the quadratic form obtained from an MVN PDF with a mean of $$\mu(x) = 0$$: $$(Y_n - 0)^\top C_n^{-1} (Y_n - 0)$$. How would this analysis change if we were to take a Bayesian approach? A homework exercise (§5.5) invites the curious reader to investigate the form of the posterior under prior $$\tau^2 \sim \mathrm{IG}\left(a/2, b/2\right)$$. For example, what happens when $$a=b=0$$ which is equivalent to $$p(\tau^2) \propto 1/\tau^2$$, a so-called reference prior in this context (Berger, De Oliveira, and Sansó 2001; Berger, Bernardo, and Sun 2009)?

Estimate of scale $$\hat{\tau}^2$$ in hand, we may simply “plug it in” to the predictive equations (5.2)–(5.3). Now technically, when you estimate a variance and plug it into a (multivariate) Gaussian, you’re turning that Gaussian into a (multivariate) Student-$$t$$, in this case with $$n$$ degrees of freedom (DoF). (There’s no loss of DoF when the mean is assumed to be zero.) For details, see for example Gramacy and Polson (2011). For now, presume that $$n$$ is large enough so that this distinction doesn’t matter. As we generalize to more hyperparameters, DoF correction could indeed matter but we still obtain a decent approximation, which is so common in practice that the word “approximation” is often dropped from the description – a transgression I shall be guilty of as well. So to summarize, we have the following scale-adjusted (approximately) MVN predictive equations:

\begin{aligned} Y(\mathcal{X}) \mid D_n & \sim \mathcal{N}_{n'}(\mu(\mathcal{X}), \Sigma(\mathcal{X})) \\ \mbox{with mean } \quad \mu(\mathcal{X}) &= C(\mathcal{X}, X_n) C_n^{-1} Y_n \\ \mbox{and variance } \quad \Sigma(\mathcal{X}) &= \hat{\tau}^2[C(\mathcal{X},\mathcal{X}) - C(\mathcal{X}, X_n) C_n^{-1} C(\mathcal{X}, X_n)^\top]. \end{aligned}

Notice how $$\hat{\tau}^2$$ doesn’t factor into the predictive mean, but it does figure into predictive variance. That’s important because it means that $$Y_n$$-values are finally involved in assessment of predictive uncertainty, whereas previously (5.2)–(5.3) only $$X_n$$-values were involved. To see it all in action, let’s return to our simple 1d sinusoidal example, continuing from Figure 5.8. Start by performing calculations for $$\hat{\tau}^2$$.

CX <- SX
Ci <- Si
CXX <- SXX
tau2hat <- drop(t(y) %*% Ci %*% y / length(y))

Checking that we get something reasonable, consider …

2*sqrt(tau2hat)
## 5.487

… which is quite close to what we know to be the true value of five in this case. Next plug $$\hat{\tau}^2$$ into the MVN conditioning equations to obtain a predictive mean vector and covariance matrix.

mup2 <- CX %*% Ci %*% y
Sigmap2 <- tau2hat*(CXX - CX %*% Ci %*% t(CX))

Finally gather some sample paths using MVN draws and summarize predictive quantiles by cutting-and-pasting from above.

YY <- rmvnorm(100, mup2, Sigmap2)
q1 <- mup2 + qnorm(0.05, 0, sqrt(diag(Sigmap2)))
q2 <- mup2 + qnorm(0.95, 0, sqrt(diag(Sigmap2)))

Figure 5.9 shows a much better surface compared to Figure 5.8.
matplot(XX, t(YY), type="l", col="gray", lty=1, xlab="x", ylab="y")
points(X, y, pch=20, cex=2)
lines(XX, mup2, lwd=2)
lines(XX, 5*sin(XX), col="blue")
lines(XX, q1, lwd=2, lty=2, col=2); lines(XX, q2, lwd=2, lty=2, col=2)
"FIGURE 5.9: Sinusoidal GP predictive surface with estimated scale $$\\hat{\\tau}^2$$. Compare to Figure 5.8. Excepting the appropriately expanded scale of the $$y$$-axis, the view in Figure 5.9 looks nearly identical to Figure 5.3 with data back on the two-unit scale. Besides that this last fit (with $$\\hat{\\tau}^2$$) looks better (particularly the variance) than the one before it (with implicit $$\\tau^2=1$$ when the observed scale was really much bigger), how can one be more objective about which is best out-of-sample? A great paper by Gneiting and Raftery (2007) offers proper scoring rules that facilitate comparisons between predictors in a number of different situations, basically depending on what common distribution characterizes predictors being compared. These are a great resource when comparing apples and oranges, even though we’re about to use them to compare apples to apples: two GPs under different scales. We have the first two moments, so Eq. (25) from Gneiting and Raftery (2007) may be used. Given $$Y(\\mathcal{X})$$-values observed out of sample, the proper scoring rule is given by $\\begin{equation} \\mathrm{score}(Y, \\mu, \\Sigma; \\mathcal{X}) = - \\log | \\Sigma(\\mathcal{X}) | - (Y(\\mathcal{X}) - \\mu(\\mathcal{X}))^\\top (\\Sigma(\\mathcal{X}))^{-1} (Y(\\mathcal{X}) - \\mu(\\mathcal{X})). \\tag{5.6} \\end{equation}$ In the case where predictors are actually MVN, which they aren’t quite in our case (they’re Student-$$t$$), this is within an additive constant of what’s called predictive log likelihood. Higher scores, or higher predictive log likelihoods, are better. The first term $$-\\log | \\Sigma(\\mathcal{X})|$$ measures magnitude of uncertainty. Smaller uncertainty is better, all things considered, so larger is better here. The second term $$(Y(\\mathcal{X}) - \\mu(\\mathcal{X}))^\\top (\\Sigma(\\mathcal{X}))^{-1} (Y(\\mathcal{X}) - \\mu(\\mathcal{X}))$$ is mean-squared error (MSE) adjusted for covariance. Smaller MSE is better, but when predictions are inaccurate it’s also important to capture that uncertainty through $$\\Sigma(\\mathcal{X})$$. Score compensates for that second-order consideration: it’s ok to mispredict as long as you know you’re mispredicting. A more recent paper by Bastos and O’Hagan (2009) tailors the scoring discussion to deterministic computer experiments, which better suits our current setting: interpolating function observations without noise. They recommend using Mahalanobis distance, which for the multivariate Gaussian is the same as the (negative of the) formula above, except without the determinant of $$\\Sigma(\\mathcal{X})$$, and square-rooted. $\\begin{equation} \\mathrm{mah}(y, \\mu, \\Sigma; \\mathcal{X}) = \\sqrt{(y(\\mathcal{X}) - \\mu(\\mathcal{X}))^\\top (\\Sigma(\\mathcal{X}))^{-1} (y(\\mathcal{X}) - \\mu(\\mathcal{X}))} \\tag{5.7} \\end{equation}$ Smaller distances are otherwise equivalent to higher scores. Here’s code that calculates both in one function. 
score <- function(Y, mu, Sigma, mah=FALSE)
{
Ymmu <- Y - mu
Sigmai <- solve(Sigma)
mahdist <- t(Ymmu) %*% Sigmai %*% Ymmu
if(mah) return(sqrt(mahdist))
return(- determinant(Sigma, logarithm=TRUE)$modulus - mahdist)
}

How about using Mahalanobis distance (Mah for short) to make a comparison between the quality of predictions from our two most recent fits $$(\tau^2=1$$ versus $$\hat{\tau}^2)$$?

Ytrue <- 5*sin(XX)
df <- data.frame(score(Ytrue, mup, Sigmap, mah=TRUE),
score(Ytrue, mup2, Sigmap2, mah=TRUE))
colnames(df) <- c("tau2=1", "tau2hat")
df
## tau2=1 tau2hat
## 1 6.259 2.282

Estimated scale wins! Actually if you do score without mah=TRUE you come to the opposite conclusion, as Bastos and O’Hagan (2009) caution. Knowledge that the true response is deterministic is important to coming to the correct conclusion about estimates of accuracy as regards variations in scale, in this case, with signal and (lack of) noise contributing to the range of observed measurements. Now what about when there’s noise?

### 5.2.2 Noise and nuggets

We’ve been saying “regression” for a while, but actually interpolation is a more apt description. Regression is about extracting signal from noise, or about smoothing over noisy data, and so far our example training data have no noise. By inspecting a GP prior, in particular its correlation structure $$C(x, x')$$, it’s clear that the current setup precludes idiosyncratic behavior because correlation decays smoothly as a function of distance. Observe that $$C(x,x') \rightarrow 1^-$$ as $$x\rightarrow x'$$, implying that the closer $$x$$ is to $$x'$$ the higher the correlation, until correlation is perfect, which is what “connects the dots” when conditioning on data and deriving the predictive distribution.

Moving from GP interpolation to smoothing over noise is all about breaking interpolation, or about breaking continuity in $$C(x,x')$$ as $$x\rightarrow x'$$. Said another way, we must introduce a discontinuity between diagonal and off-diagonal entries in the correlation matrix $$C_n$$ to smooth over noise. There are a lot of ways to skin this cat, and a lot of storytelling that goes with it, but the simplest way to “break it” is with something like

$K(x, x') = C(x, x') + g \delta_{x, x'}.$

Above, $$g > 0$$ is a new hyperparameter called the nugget (or sometimes nugget effect), which determines the size of the discontinuity as $$x' \rightarrow x$$. The function $$\delta$$ is more like the Kronecker delta, although the way it’s written above makes it look like the Dirac delta. Observe that $$g$$ generalizes Neal’s $$\epsilon$$ jitter.

Neither delta is perfect in terms of describing what to do in practice. The simplest correct description of how to break continuity is to only add $$g$$ on a diagonal – when indices of $$x$$ are the same, not simply for identical values – and nowhere else. Never add $$g$$ to an off-diagonal correlation even if that correlation is based on zero distances: i.e., identical $$x$$ and $$x'$$-values. Specifically,

- $$K(x_i, x_j) = C(x_i, x_j)$$ when $$i \ne j$$, even if $$x_i = x_j$$;
- only $$K(x_i, x_i) = C(x_i, x_i) + g$$.

(A small numerical illustration of this indexing rule follows below.) This leads to the following representation of the data-generating mechanism.

$Y \sim \mathcal{N}_n(0, \tau^2 K_n)$

Unfolding terms, covariance matrix $$\Sigma_n$$ contains entries

$\Sigma_n^{ij} = \tau^2 (C(x_i, x_j) + g \delta_{ij}),$

or in other words $$\Sigma_n = \tau^2 K_n = \tau^2(C_n + g \mathbb{I}_n)$$.
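To make the indexing rule concrete, here’s a tiny numerical illustration – my own aside, with arbitrary demonstration values Xdemo and $$g = 0.1$$ – reusing the distance function from earlier. The off-diagonal entry for the two identical rows keeps its full correlation of $$\exp(0) = 1$$; only the diagonal picks up the nugget.

Xdemo <- matrix(c(0.3, 0.3, 0.7), ncol=1)  ## rows 1 and 2 coincide
Kdemo <- exp(-distance(Xdemo)) + diag(0.1, nrow(Xdemo))
Kdemo[1,2]  ## stays at 1: indices differ even though x-values match
Kdemo[1,1]  ## 1 + g = 1.1: nugget on the diagonal only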
This all looks like a hack, but it’s operationally equivalent to positing the following model.

$Y(x) = w(x) + \varepsilon,$

where $$w(x) \sim \mathcal{GP}$$ with scale $$\tau^2$$, i.e., $$W \sim \mathcal{N}_n(0, \tau^2 C_n)$$, and $$\varepsilon$$ is independent Gaussian noise with variance $$\tau^2 g$$, i.e., $$\varepsilon \stackrel{\mathrm{iid}}{\sim} \mathcal{N}(0, \tau^2 g)$$.

A more aesthetically pleasing model might instead use $$w(x) \sim \mathcal{GP}$$ with scale $$\tau^2$$, i.e., $$W \sim \mathcal{N}_n(0, \tau^2 C_n)$$, and where $$\varepsilon(x)$$ is iid Gaussian noise with variance $$\sigma^2$$, i.e., $$\varepsilon(x) \stackrel{\mathrm{iid}}{\sim} \mathcal{N}(0, \sigma^2)$$. An advantage of this representation is two totally “separate” hyperparameters, with one acting to scale noiseless spatial correlations, and another determining the magnitude of white noise. Those two formulations are actually equivalent. There’s a 1:1 mapping between the two: $$\sigma^2 = \tau^2 g$$, so that $$g = \sigma^2/\tau^2$$ may be read as a noise-to-signal ratio. Many researchers prefer the latter to the former on intuition grounds. But inference in the latter is harder. Conditional on $$g$$, $$\hat{\tau}^2$$ is available in closed form, which we’ll show momentarily. Conditional on $$\sigma^2$$, numerical methods are required for $$\hat{\tau}^2$$.

Ok, so back to plan-A with $$Y \sim \mathcal{N}(0, \Sigma_n)$$, where $$\Sigma_n = \tau^2 K_n = \tau^2(C_n + g \mathbb{I}_n)$$. Recall that $$C_n$$ is an $$n \times n$$ matrix of inverse exponentiated pairwise squared Euclidean distances. How, then, to estimate two hyperparameters: scale $$\tau^2$$ and nugget $$g$$? Again, we have all the usual suspects (MoM, likelihood, CV, variogram) but likelihood-based methods are by far most common. First, suppose that $$g$$ is known.

MLE $$\hat{\tau}^2$$ given a fixed $$g$$ is

$\hat{\tau}^2 = \frac{Y_n^\top K_n^{-1} Y_n}{n} = \frac{Y_n^\top (C_n + g \mathbb{I}_n)^{-1} Y_n}{n}.$

The derivation involves an identical application of Eq. (5.5), except with $$K_n$$ instead of $$C_n$$.

Plug $$\hat{\tau}^2$$ back into our log likelihood to get a concentrated (or profile) log likelihood involving just the remaining parameter $$g$$.

\begin{align} \ell(g) &= -\frac{n}{2} \log 2\pi - \frac{n}{2} \log \hat{\tau}^2 - \frac{1}{2} \log |K_n| - \frac{1}{2\hat{\tau}^2} Y_n^\top K_n^{-1} Y_n \notag \\ &= c - \frac{n}{2} \log Y_n^\top K_n^{-1} Y_n - \frac{1}{2} \log |K_n| \tag{5.8} \end{align}

Unfortunately taking a derivative and setting to zero doesn’t lead to a closed form solution. The derivative itself is available analytically, as we show momentarily, but solving $$\ell'(g) = 0$$ is not. Maximizing $$\ell(g)$$ requires numerical methods. The simplest thing to do is throw it into optimize and let a polished library do all the work. Since most optimization libraries prefer to minimize, we’ll code up $$-\ell(g)$$ in R. The nlg function below doesn’t directly work on X inputs, rather through distances D. This is slightly more efficient since distances can be pre-calculated, rather than re-calculated in each evaluation for new g.

nlg <- function(g, D, Y)
{
n <- length(Y)
K <- exp(-D) + diag(g, n)
Ki <- solve(K)
ldetK <- determinant(K, logarithm=TRUE)$modulus
ll <- - (n/2)*log(t(Y) %*% Ki %*% Y) - (1/2)*ldetK
counter <<- counter + 1
return(-ll)
}

Observe a direct correspondence between nlg and $$-\ell(g)$$ with the exception of a counter increment (accessing a global variable).
This variable is not required, but we’ll find it handy later when comparing alternatives on efficiency grounds in numerical optimization, via the number of times our likelihood objective function is evaluated. Although optimization libraries often provide iteration counts on output, sometimes that report can misrepresent the actual number of objective function calls. So I’ve jerry-rigged my own counter here to fill in. #### Example: noisy 1d sinusoid Before illustrating numerical nugget (and scale) optimization towards the MLE, we need some example data. Let’s return to our running sinusoid example from §5.1.1, picking up where we left off but augmented with standard Gaussian noise. Code below utilizes the same uniform Xs from earlier, but doubles them up. Adding replication into a design is recommended in noisy data contexts, as discussed in more detail in Chapter 10. Replication is not essential for this example, but it helps guarantee predictable outcomes which is important for a randomly seeded, fully reproducible Rmarkdown build. X <- rbind(X, X) n <- nrow(X) y <- 5*sin(X) + rnorm(n, sd=1) D <- distance(X) Everything is in place to estimate the optimal nugget. The optimize function in R is ideal in 1d derivative-free contexts. It doesn’t require an initial value for g, but it does demand a search interval. A sensible yet conservative range for $$g$$-values is from eps to var(y). The former corresponds to the noise-free/jitter-only case we entertained earlier. The latter is the observed marginal variance of $$Y$$, or in other words about as big as variance could be if these data were all noise and no signal. counter <- 0 g <- optimize(nlg, interval=c(eps, var(y)), D=D, Y=y)$minimum\ng\n## 0.2878\n\nNow the value of that estimate isn’t directly useful to us, at least on an intuitive level. We need $$\\hat{\\tau}^2$$ to understand the full decomposition of variance. But backing out those quantities is relatively straightforward.\n\nK <- exp(-D) + diag(g, n)\nKi <- solve(K)\ntau2hat <- drop(t(y) %*% Ki %*% y / n)\nc(tau=sqrt(tau2hat), sigma=sqrt(tau2hat*g))\n## tau sigma\n## 2.304 1.236\n\nBoth are close to their true values of $$5/2 = 2.5$$ and 1, respectively. Estimated hyperparameters in hand, prediction is a straightforward application of MVN conditionals. First calculate quantities involved in covariance between testing and training locations, and between testing locations and themselves.\n\nDX <- distance(XX, X)\nKX <- exp(-DX)\nKXX <- exp(-DXX) + diag(g, nrow(DXX))\n\nNotice that only KXX is augmented with g on the diagonal. KX is not a square symmetric matrix calculated from identically indexed $$x$$-values. Even if it were coincidentally square, or if DX contained zero distances because elements of XX and X coincide, still no nugget augmentation is deployed. Only with KXX, which is identically indexed with respect to itself, does a nugget augment the diagonal.\n\nCovariance matrices in hand, we may then calculate the predictive mean vector and covariance matrix.\n\nmup <- KX %*% Ki %*% y\nSigmap <- tau2hat*(KXX - KX %*% Ki %*% t(KX))\nq1 <- mup + qnorm(0.05, 0, sqrt(diag(Sigmap)))\nq2 <- mup + qnorm(0.95, 0, sqrt(diag(Sigmap)))\n\nShowing sample predictive realizations that look pretty requires “subtracting” out idiosyncratic noise, i.e., the part due to nugget $$g$$. 
Otherwise sample paths will be “jagged” and hard to interpret.\n\nSigma.int <- tau2hat*(exp(-DXX) + diag(eps, nrow(DXX))\n- KX %*% Ki %*% t(KX))\nYY <- rmvnorm(100, mup, Sigma.int)\n\n§5.3.2 explains how this maneuver makes sense in a latent function-space view of GP posterior updating, and again when we delve into a deeper signal-to-noise discussion in Chapter 10. For now this is just a trick to get a prettier picture, only affecting gray lines plotted in Figure 5.10.\n\nmatplot(XX, t(YY), type=\"l\", lty=1, col=\"gray\", xlab=\"x\", ylab=\"y\")\npoints(X, y, pch=20, cex=2)\nlines(XX, mup, lwd=2)\nlines(XX, 5*sin(XX), col=\"blue\")\nlines(XX, q1, lwd=2, lty=2, col=2)\nlines(XX, q2, lwd=2, lty=2, col=2)",
"FIGURE 5.10: GP fit to sinusoidal data with estimated nugget.\n\nNotice how the error-bars, which do provide a full accounting of predictive uncertainty, lie mostly outside of the gray lines and appropriately capture variability in the training data observations, shown as filled black dots. That’s it: now we can fit noisy data with GPs using a simple library-based numerical optimizer and about twenty lines of code.\n\n### 5.2.3 Derivative-based hyperparameter optimization\n\nIt can be unsatisfying to brute-force an optimization for a hyperparameter like $$g$$, even though 1d solving with optimize is often superior to cleverer methods. Can we improve upon the number of evaluations?\n\nnlg.count <- counter\nnlg.count\n## 16\n\nActually, that’s pretty good. If you can already optimize numerically in fewer than twenty or so evaluations there isn’t much scope for improvement. Yet we’re leaving information on the table: closed-form derivatives. Differentiating $$\\ell(g)$$ involves pushing the chain rule through the inverse of covariance matrix $$K_n$$ and its determinant, which is where hyperparameter $$g$$ is involved. The following identities, which are framed for an arbitrary parameter $$\\phi$$, will come in handy.\n\n$\\begin{equation} \\frac{\\partial K_n^{-1}}{\\partial \\phi} = - K_n^{-1} \\frac{\\partial K_n}{\\partial \\phi} K_n^{-1} \\quad \\mbox{ and } \\quad \\frac{\\partial \\log | K_n | }{\\partial \\phi} = \\mathrm{tr} \\left \\{ K_n^{-1} \\frac{\\partial K_n}{\\partial \\phi} \\right\\} \\tag{5.9} \\end{equation}$\n\nThe chain rule, and a single application of each of the identities above, gives\n\n\\begin{align} \\ell'(g) &= - \\frac{n}{2} \\frac{Y_n^\\top \\frac{\\partial K_n^{-1}}{\\partial g} Y_n}{Y_n^\\top K_n^{-1} Y_n} - \\frac{1}{2} \\frac{\\partial \\log |K_n|}{\\partial g} \\tag{5.10} \\\\ &= \\frac{n}{2} \\frac{Y_n^\\top K_n^{-1} \\frac{\\partial K_n}{\\partial g} K_n^{-1} Y_n}{Y_n^\\top K_n^{-1} Y_n} - \\frac{1}{2} \\mathrm{tr} \\left \\{ K_n^{-1} \\frac{\\partial K_n}{\\partial g} \\right\\}. \\notag \\end{align}\n\nOff-diagonal elements of $$K_n$$ don’t depend on $$g$$. The diagonal is simply $$1 + g$$. Therefore $$\\frac{\\partial K_n}{\\partial g}$$ is an $$n$$-dimensional identity matrix. Putting it all together:\n\n$\\ell'(g) = \\frac{n}{2} \\frac{Y_n^\\top (K_n^{-1})^{2} Y_n}{Y_n^\\top K_n^{-1} Y_n} - \\frac{1}{2} \\mathrm{tr} \\left \\{ K_n^{-1} \\right\\}.$\n\nHere’s an implementation of the negative of that derivative for the purpose of minimization. The letter “g” for gradient in the function name is overkill in this scalar context, but I’m thinking ahead to where yet more hyperparameters will be optimized.\n\ngnlg <- function(g, D, Y)\n{\nn <- length(Y)\nK <- exp(-D) + diag(g, n)\nKi <- solve(K)\nKiY <- Ki %*% Y\ndll <- (n/2) * t(KiY) %*% KiY / (t(Y) %*% KiY) - (1/2)*sum(diag(Ki))\nreturn(-dll)\n}\n\nObjective (negative concentrated log likelihood, nlg) and gradient (gnlg) in hand, we’re ready to numerically optimize using derivative information. The optimize function doesn’t support derivatives, so we’ll use optim instead. The optim function supports many optimization methods, and not all accommodate derivatives. I’ve chosen to illustrate method=\"L-BFGS-B\" here because it supports derivatives and allows bound constraints (Byrd et al. 1995). 
As above, we know we don’t want a nugget lower than eps for numerical reasons, and it seems unlikely that $$g$$ will be bigger than the marginal variance.

Here we go … first reinitializing the evaluation counter and choosing 10% of marginal variance as a starting value.

counter <- 0
out <- optim(0.1*var(y), nlg, gnlg, method="L-BFGS-B", lower=eps,
upper=var(y), D=D, Y=y)
c(g, out$par)
## 0.2878 0.2879

Output is similar to what we obtained from optimize, which is reassuring. How many iterations?

c(out$counts, actual=counter)
## function gradient actual
## 8 8 8

Notice that in this scalar case our internal, manual counter agrees with optim’s. Just 8 evaluations to optimize something is pretty excellent, but possibly not noteworthy compared to optimize’s 16, especially when you consider that an extra 8 gradient evaluations (with similar computational complexity) are also required. When you put it that way, our new derivative-based version is potentially no better, requiring 16 combined evaluations of commensurate computational complexity. Hold that thought. We shall return to counting iterations after introducing more hyperparameters.

### 5.2.4 Lengthscale: rate of decay of correlation

How about modulating the rate of decay of spatial correlation in terms of distance? Surely unadulterated Euclidean distance isn’t equally suited to all data. Consider the following generalization, known as the isotropic Gaussian family.

$C_\theta(x, x') = \exp\left\{ - \frac{||x - x'||^2}{\theta} \right\}$

Isotropic Gaussian correlation functions are indexed by a scalar hyperparameter $$\theta$$, called the characteristic lengthscale. Sometimes this is shortened to lengthscale, or $$\theta$$ may be referred to as a range parameter, especially in geostatistics. When $$\theta = 1$$ we get back our inverse exponentiated squared Euclidean distance-based correlation as a special case. Isotropy means that correlation decays radially; Gaussian suggests inverse exponentiated squared Euclidean distance. Gaussian processes should not be confused with Gaussian-family correlation or kernel functions, which appear in many contexts. GPs get their name from their connection with the MVN, not because they often feature Gaussian kernels as a component of the covariance structure. Further discussion of kernel variations and properties is deferred until later in §5.3.3.

How to perform inference for $$\theta$$? Should our GP have a slow decay of correlation in space, leading to visually smooth/slowly changing surfaces, or a fast one looking more wiggly? Like with nugget $$g$$, embedding $$\theta$$ deep within coordinates of a covariance matrix thwarts analytic maximization of log likelihood. Yet again like $$g$$, numerical methods are rather straightforward. In fact the setup is identical except now we have two unknown hyperparameters.

Consider brute-force optimization without derivatives. The R function nl is identical to nlg except argument par takes in a two-vector whose first coordinate is $$\theta$$ and second is $$g$$. Only two lines differ, and those are indicated by comments in the code below.

nl <- function(par, D, Y)
{
theta <- par[1]  ## change 1
g <- par[2]
n <- length(Y)
K <- exp(-D/theta) + diag(g, n)  ## change 2
Ki <- solve(K)
ldetK <- determinant(K, logarithm=TRUE)$modulus
ll <- - (n/2)*log(t(Y) %*% Ki %*% Y) - (1/2)*ldetK
counter <<- counter + 1
return(-ll)
}

That’s it: just shove it into optim.
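Before doing so, here’s a quick sanity check – my own addition, not part of the original development – that nl collapses back to nlg when $$\theta = 1$$, since that setting recovers the earlier kernel. Reusing D, y and the optimized g from the noisy sinusoid example above, the two evaluations below should agree exactly.

## theta = 1 recovers exp(-D), so nl and nlg must match
c(nlg(g, D, y), nl(c(1, g), D, y))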
Note that optimize isn’t an option here as that routine only optimizes in 1d. But first we’ll need an example. For variety, consider again our 2d exponential data from §5.1.2 and Figure 5.5, this time observed with noise and entertaining non-unit lengthscales. library(lhs) X2 <- randomLHS(40, 2) X2 <- rbind(X2, X2) X2[,1] <- (X2[,1] - 0.5)*6 + 1 X2[,2] <- (X2[,2] - 0.5)*6 + 1 y2 <- X2[,1]*exp(-X2[,1]^2 - X2[,2]^2) + rnorm(nrow(X2), sd=0.01) Again, replication is helpful for stability in reproduction, but is not absolutely necessary. Estimating lengthscale and nugget simultaneously represents an attempt to strike balance between signal and noise (Chapter 10). Once we get more experience, we’ll see that long lengthscales are more common when noise/nugget is high, whereas short lengthscales offer the potential to explain away noise as quickly changing dynamics in the data. Sometimes choosing between those two can be a difficult enterprise. With optim it helps to think a little about starting values and search ranges. The nugget is rather straightforward, and we’ll copy ranges and starting values from our earlier example: from $$\\epsilon$$ to $$\\mathbb{V}\\mathrm{ar}\\{Y\\}$$. The lengthscale is a little harder. Sensible choices for $$\\theta$$ follow the following rationale, leveraging $$x$$-values in coded units ($$\\in [0,1]^2$$). A lengthscale of 0.1, which is about $$\\sqrt{0.1} = 0.32$$ in units of $$x$$, biases towards surfaces three times more wiggly than in our earlier setup, with implicit $$\\theta = 1$$, in a certain loose sense. More precise assessments are quoted later after learning more about kernel properties (§5.3.3) and upcrossings (5.17). Initializing in a more signal, less noise regime seems prudent. If we thought the response was “really straight”, perhaps an ordinary linear model would suffice. A lower bound of eps allows the optimizer to find even wigglier surfaces, however it might be sensible to view solutions close to eps as suspect. A value of $$\\theta=10$$, or $$\\sqrt{10} = 3.16$$ is commensurately (3x) less wiggly than our earlier analysis. If we find a $$\\hat{\\theta}$$ on this upper boundary we can always re-run with a new, bigger upper bound. For a more in-depth discussion of suitable lengthscale and nugget ranges, and even priors for regularization, see Appendix A of the tutorial (Gramacy 2016) for the laGP library (Gramacy and Sun 2018) introduced in more detail in §5.2.6. Ok, here we go. (With new X we must first refresh D.) D <- distance(X2) counter <- 0 out <- optim(c(0.1, 0.1*var(y2)), nl, method=\"L-BFGS-B\", lower=eps, upper=c(10, var(y2)), D=D, Y=y2) out$par\n## 0.902791 0.009972\n\nActually the outcome, as regards the first coordinate $$\\hat{\\theta}$$, is pretty close to our initial version with implied $$\\theta = 1$$. Since \"L-BFGS-B\" is calculating a gradient numerically through finite differences, the reported count of evaluations in the output doesn’t match the number of actual evaluations.\n\nbrute <- c(outcounts, actual=counter) brute ## function gradient actual ## 14 14 70 We’re searching in two input dimensions, and a rule of thumb is that it takes two evaluations in each dimension to build a tangent plane to approximate a derivative. So if 14 function evaluations are reported, it’d take about $$2\\times 2 \\rightarrow 4 \\times 14 = 56$$ additional runs to approximate derivatives, which agrees with our “by-hand” counter. How can we improve upon those counts? Reducing the number of evaluations should speed up computation time. 
It might not be a big deal now, but as $$n$$ gets bigger the repeated cubic cost of matrix inverses and determinants really adds up. What if we take derivatives with respect to $$\theta$$ and combine with those for $$g$$ to form a gradient? That requires $$\dot{K}_n \equiv \frac{\partial K_n}{\partial \theta}$$, to plug into inverse and determinant derivative identities (5.9). The diagonal is zero because the exponent is zero no matter what $$\theta$$ is. Off-diagonal entries of $$\dot{K}_n$$ work out as follows. Since

\begin{aligned} K_\theta(x, x') &= \exp\left\{ - \frac{||x - x'||^2}{\theta} \right\}, & \mbox{we have} && \frac{\partial K_\theta(x_i, x_j)}{\partial \theta} &= K_\theta(x_i, x_j) \frac{||x_i - x_j||^2}{\theta^2}. \\ \end{aligned}

A slightly more compact way to write the same thing would be $$\dot{K}_n = K_n \circ \mathrm{Dist}_n/\theta^2$$ where $$\circ$$ is a component-wise, Hadamard product, and $$\mathrm{Dist}_n$$ contains a matrix of squared Euclidean distances – our D in the code. An identical application of the chain rule for the nugget (5.10), but this time for $$\theta$$, gives

$\begin{equation} \ell'(\theta) \equiv \frac{\partial}{\partial \theta} \ell(\theta, g) = \frac{n}{2} \frac{Y_n^\top K_n^{-1} \dot{K}_n K_n^{-1} Y_n}{Y_n^\top K_n^{-1} Y_n} - \frac{1}{2} \mathrm{tr} \left \{ K_n^{-1} \dot{K}_n \right\}. \tag{5.11} \end{equation}$

A vector collecting the two sets of derivatives forms the gradient of $$\ell(\theta, g)$$, a joint log likelihood with $$\tau^2$$ concentrated out. R code below implements the negative of that gradient for the purposes of MLE calculation with optim minimization. Comments therein help explain the steps involved.

gradnl <- function(par, D, Y)
{
## extract parameters
theta <- par[1]
g <- par[2]

## calculate covariance quantities from data and parameters
n <- length(Y)
K <- exp(-D/theta) + diag(g, n)
Ki <- solve(K)
dotK <- K*D/theta^2
KiY <- Ki %*% Y

## theta component
dlltheta <- (n/2) * t(KiY) %*% dotK %*% KiY / (t(Y) %*% KiY) -
(1/2)*sum(diag(Ki %*% dotK))

## g component
dllg <- (n/2) * t(KiY) %*% KiY / (t(Y) %*% KiY) - (1/2)*sum(diag(Ki))

## combine the components into a gradient vector
return(-c(dlltheta, dllg))
}

How well does optim work when it has access to actual gradient evaluations? Observe here that we’re otherwise using exactly the same calls as earlier.

counter <- 0
outg <- optim(c(0.1, 0.1*var(y2)), nl, gradnl, method="L-BFGS-B",
lower=eps, upper=c(10, var(y2)), D=D, Y=y2)
rbind(grad=outg$par, brute=out$par)
## [,1] [,2]
## grad 0.9028 0.009972
## brute 0.9028 0.009972

Parameter estimates are nearly identical. Availability of a true gradient evaluation changes the steps of the algorithm slightly, often leading to a different end-result even when identical convergence criteria are applied. What about the number of evaluations?

rbind(grad=c(outg$counts, actual=counter), brute)
## function gradient actual
## grad 11 11 11
## brute 14 14 70

Woah! That’s way better. Not only does our actual “by-hand” count of evaluations match what’s reported on output from optim, but it can be an order of magnitude lower, roughly, compared to what we had before. (Variations depend on the random data used to generate this Rmarkdown document.) A factor of five-to-ten savings is definitely worth the extra effort to derive and code up a gradient. As you can imagine, and we’ll show shortly, gradients are commensurately more valuable when there are even more hyperparameters.
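Since analytic derivatives are easy to get wrong, a finite-difference spot-check is cheap insurance. The sketch below is my own addition, with arbitrary test values par.test and step size h: it compares gradnl to central differences of nl, and the two rows printed at the end should agree to several decimal places.

par.test <- c(0.5, 0.05)  ## arbitrary (theta, g) values for the check
h <- 1e-6                 ## finite-difference step size
fd <- rep(NA, 2)
for(k in 1:2) {
  pp <- pm <- par.test
  pp[k] <- pp[k] + h
  pm[k] <- pm[k] - h
  fd[k] <- (nl(pp, D, y2) - nl(pm, D, y2))/(2*h)
}
rbind(fd=fd, analytic=gradnl(par.test, D, y2))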
“But what other hyperparameters?”, you ask. Hold that thought.

Optimized hyperparameters in hand, we can go about rebuilding quantities required for prediction. Begin with training quantities …

K <- exp(-D/outg$par[1]) + diag(outg$par[2], nrow(X2))
Ki <- solve(K)
tau2hat <- drop(t(y2) %*% Ki %*% y2 / nrow(X2))

… then predictive/testing ones …

gn <- 40
xx <- seq(-2, 4, length=gn)
XX <- expand.grid(xx, xx)
DXX <- distance(XX)
KXX <- exp(-DXX/outg$par[1]) + diag(outg$par[2], ncol(DXX))
DX <- distance(XX, X2)
KX <- exp(-DX/outg$par[1])

… and finally kriging equations.

mup <- KX %*% Ki %*% y2
Sigmap <- tau2hat*(KXX - KX %*% Ki %*% t(KX))
sdp <- sqrt(diag(Sigmap))

The resulting predictive surfaces look pretty much the same as before, as shown in Figure 5.11.

par(mfrow=c(1,2))
image(xx, xx, matrix(mup, ncol=gn), main="mean", xlab="x1",
ylab="x2", col=cols)
points(X2)
image(xx, xx, matrix(sdp, ncol=gn), main="sd", xlab="x1",
ylab="x2", col=cols)
points(X2)
"FIGURE 5.11: Predictive mean (left) and standard deviation (right) after estimating a lengthscale $$\\hat{\\theta}$$. This is perhaps not an exciting way to end the example, but it serves to illustrate the basic idea of estimating unknown quantities and plugging them into predictive equations. I’ve only illustrated 1d and 2d so far, but the principle is no different in higher dimensions. ### 5.2.5 Anisotropic modeling It’s time to expand input dimension a bit, and get ambitious. Visualization will be challenging, but there are other metrics of success. Consider the Friedman function, a popular toy problem from the seminal multivariate adaptive regression splines (MARS; Friedman 1991) paper. Splines are a popular alternative to GPs in low input dimension. The idea is to “stitch” together low-order polynomials. The “stitching boundary” becomes exponentially huge as dimension increases, which challenges computation. For more details, see the splines supplement linked here which is based on Hastie, Tibshirani, and Friedman (2009), Chapters 5, 7 and 8. MARS circumvents many of those computational challenges by simplifying basis elements (to piecewise linear) on main effects and limiting (to two-way) interactions. Over-fitting is mitigated by aggressively pruning useless basis elements with a generalized CV scheme. fried <- function(n=50, m=6) { if(m < 5) stop(\"must have at least 5 cols\") X <- randomLHS(n, m) Ytrue <- 10*sin(pi*X[,1]*X[,2]) + 20*(X[,3] - 0.5)^2 + 10*X[,4] + 5*X[,5] Y <- Ytrue + rnorm(n, 0, 1) return(data.frame(X, Y, Ytrue)) } The surface is nonlinear in five input coordinates, $\\begin{equation} \\mathbb{E}\\{Y(x)\\} = 10 \\sin(\\pi x_1 x_2) + 20(x_3 - 0.5)^2 + 10x_4 - 5x_5, \\tag{5.12} \\end{equation}$ combining periodic, quadratic and linear effects. Notice that you can ask for more (useless) coordinates if you want: inputs $$x_6, x_7, \\dots$$ The fried function, as written above, generates both the $$X$$-values, via LHS (§4.1) in $$[0,1]^m$$, and $$Y$$-values. Let’s create training and testing sets in seven input dimensions, i.e., with two irrelevant inputs $$x_6$$ and $$x_7$$. Code below uses fried to generate an LHS training–testing partition (see, e.g., Figure 4.9) with $$n=200$$ and $$n' = 1000$$ observations, respectively. Such a partition could represent one instance in the “bakeoff” described by Algorithm 4.1. See §5.2.7 for iteration on that theme. m <- 7 n <- 200 nprime <- 1000 data <- fried(n + nprime, m) X <- as.matrix(data[1:n,1:m]) y <- drop(data$Y[1:n])\nXX <- as.matrix(data[(n + 1):nprime,1:m])\nyy <- drop(data$Y[(n + 1):nprime]) yytrue <- drop(data$Ytrue[(n + 1):nprime])\n\nThe code above extracts two types of $$Y$$-values for use in out-of-sample testing. De-noised yytrue values facilitate comparison with root mean-squared error (RMSE),\n\n$\\begin{equation} \\sqrt{\\frac{1}{n'} \\sum_{i=1}^{n'} (y_i - \\mu(x_i))^2}. \\tag{5.13} \\end{equation}$\n\nNotice that RMSE is square-root Mahalanobis distance (5.7) calculated with an identity covariance matrix. Noisy out-of-sample evaluations yy can be used for comparison by proper score (5.6), combining both mean accuracy and estimates of covariance.\n\nFirst learning. Inputs X and outputs y are re-defined, overwriting those from earlier examples. 
After re-calculating pairwise distances D, we may cut-and-paste gradient-based optim on objective nl and gradient gradnl.

D <- distance(X)
out <- optim(c(0.1, 0.1*var(y)), nl, gradnl, method="L-BFGS-B", lower=eps,
upper=c(10, var(y)), D=D, Y=y)
out
## $par
## 2.534239 0.005208
##
## $value
## 683.6
##
## $counts
## function gradient
## 33 33
##
## $convergence
## 0
##
## $message
## "CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH"

Output indicates convergence has been achieved. Based on estimated $$\hat{\theta} = 2.534$$ and $$\hat{g} = 0.0052$$, we may rebuild the data covariance quantities …

K <- exp(-D/out$par[1]) + diag(out$par[2], nrow(D))
Ki <- solve(K)
tau2hat <- drop(t(y) %*% Ki %*% y / nrow(D))

… as well as those involved in predicting at XX testing locations.

DXX <- distance(XX)
KXX <- exp(-DXX/out$par[1]) + diag(out$par[2], ncol(DXX))
DX <- distance(XX, X)
KX <- exp(-DX/out$par[1])

Kriging equations are then derived as follows.

mup <- KX %*% Ki %*% y
Sigmap <- tau2hat*(KXX - KX %*% Ki %*% t(KX))

Notice how not a single line in the code above, pasted directly from identical lines used in earlier examples, requires tweaking to accommodate the novel 7d setting. Our previous examples were in 1d and 2d, but the code works verbatim in 7d. However the number of evaluations required to maximize is greater now than in previous examples. Here we have 33 compared to 11 previously in 2d.

How accurate are predictions? RMSE on the testing set is calculated below, but we don’t yet have a benchmark to compare this to.

rmse <- c(gpiso=sqrt(mean((yytrue - mup)^2)))
rmse
## gpiso
## 1.073

How about comparing to MARS? That seems natural considering these data were created as a showcase for that very method. MARS implementations can be found in the mda (Leisch, Hornik, and Ripley 2017) and earth (Milborrow 2019) packages on CRAN.

library(mda)
fit.mars <- mars(X, y)
p.mars <- predict(fit.mars, XX)

Which wins between the isotropic GP and MARS based on RMSE to the truth?

rmse <- c(rmse, mars=sqrt(mean((yytrue - p.mars)^2)))
rmse
## gpiso mars
## 1.073 1.529

Usually the GP wins in this comparison. In about one time out of twenty random Rmarkdown rebuilds MARS wins. Unfortunately MARS doesn’t natively provide a notion of predictive variance. That is, not without an extra bootstrap layer or a Bayesian treatment; e.g., see BASS (Francom 2017) on CRAN. So a comparison to MARS by proper score isn’t readily available. Some may argue that this comparison isn’t fair. MARS software has lots of tuning parameters that we aren’t exploring. Results from mars improve with argument degree=2 and, for reasons that aren’t immediately clear to me at this time, they’re even better with earth after the same degree=2 modification. I’ve deliberately put up a relatively “vanilla” straw man in this comparison. This is in part because our GP setup is itself relatively vanilla. An exercise in §5.5 invites the reader to explore a wider range of alternatives on both fronts.

How can we add more flavor? If that was vanilla GP regression, what does rocky road look like? To help motivate, recall that the Friedman function involved a diverse combination of effects on the input variables: trigonometric, quadratic and linear.
Although we wouldn’t generally know that much detail in a new application – and GPs excel in settings where little is known about input–output relationships, except perhaps that it might be worth trying methods beyond the familiar linear model – it’s worth wondering if our modeling apparatus is not at odds with typically encountered dynamics. More to the point, GP modeling flexibility comes from the MVN covariance structure which is based on scaled (by $$\theta$$) inverse exponentiated squared Euclidean distance. That structure implies uniform decay in correlation in each input direction. Is such radial symmetry reasonable? Probably not in general, and definitely not in the case of the Friedman function.

$C_\theta(x, x') = \exp\left\{ - \sum_{k=1}^m \frac{(x_k - x'_k)^2}{\theta_k} \right\}$

Here we’re using a vectorized lengthscale parameter $$\theta = (\theta_1,\dots,\theta_m)$$, allowing strength of correlation to be modulated separately by distance in each input coordinate. This family of correlation functions is called the separable or anisotropic Gaussian. Separable because the sum is a product when taken outside the exponent, implying independence in each coordinate direction. Anisotropic because, except in the special case where all $$\theta_k$$ are equal, decay of correlation is not radial.

How does one perform inference for such a vectorized parameter? Simple; just expand log likelihood and derivative functions to work with vectorized $$\theta$$. Thinking about implementation: a for loop in the gradient function can iterate over coordinates, wherein each iteration we plug

$\begin{equation} \frac{\partial K_n^{ij}}{\partial \theta_k} = K_n^{ij} \frac{(x_{ik} - x_{jk})^2}{\theta_k^2} \tag{5.14} \end{equation}$

into our formula for $$\ell'(\theta_k)$$ in Eq. (5.11), which is otherwise unchanged.

Each coordinate has a different $$\theta_k$$, so pre-computing a distance matrix isn’t helpful. Instead we’ll use the covar.sep function from the plgp package which takes vectorized d $$\equiv \theta$$ and scalar g arguments, combining distance and inverse-scaling into one step. Rather than going derivative crazy immediately, let’s focus on the likelihood first, which we’ll need anyways before going “whole hog”. The function below is nearly identical to nl from §5.2.4 except the first ncol(X) components of argument par are sectioned off for theta, and covar.sep is used directly on X inputs rather than operating on pre-calculated D.

nlsep <- function(par, X, Y)
{
theta <- par[1:ncol(X)]
g <- par[ncol(X)+1]
n <- length(Y)
K <- covar.sep(X, d=theta, g=g)
Ki <- solve(K)
ldetK <- determinant(K, logarithm=TRUE)$modulus
ll <- - (n/2)*log(t(Y) %*% Ki %*% Y) - (1/2)*ldetK
counter <<- counter + 1
return(-ll)
}

As a testament to how easy it is to optimize that likelihood, at least in terms of coding, below we port our optim call on nl from above over to nlsep, with the only changes being to repeat upper and lower arguments, and to supply X instead of D. (Extra commands for timing will be discussed momentarily.)

tic <- proc.time()
counter <- 0
out <- optim(c(rep(0.1, ncol(X)), 0.1*var(y)), nlsep, method="L-BFGS-B",
X=X, Y=y, lower=eps, upper=c(rep(10, ncol(X)), var(y)))
toc <- proc.time()
out$par
## 1.046068 1.156524 1.792535 9.036107 9.979581 10.000000
## 9.207463 0.008191

What can be seen on output? Notice how $$\hat{\theta}_k$$-values track what we know about the Friedman function.
The first three inputs have relatively shorter lengthscales compared to inputs four and five. Recall that shorter lengthscale means “more wiggly”, which is appropriate for those nonlinear terms; longer lengthscale corresponds to linearly contributing inputs. Finally, the last two (save $$g$$ in the final position of out$par) also have long lengthscales, which is similarly reasonable for inputs which aren’t contributing. But how about the number of evaluations?

brute <- c(out$counts, actual=counter)
brute
## function gradient actual
## 71 71 1207

Woah, lots! Although only 71 optimization steps were required, in 8d (including nugget g in par) that amounts to evaluating the objective function more than one-thousand-odd times, plus-or-minus depending on the random Rmarkdown build. When $$n = 200$$, and with cubic matrix decompositions, that can be quite a slog time-wise: about 9 seconds.

toc - tic
## elapsed
## 9.237

To attempt to improve on that slow state of affairs, code below implements a gradient (5.14) for vectorized $$\theta$$.

gradnlsep <- function(par, X, Y)
{
theta <- par[1:ncol(X)]
g <- par[ncol(X)+1]
n <- length(Y)
K <- covar.sep(X, d=theta, g=g)
Ki <- solve(K)
KiY <- Ki %*% Y

## loop over theta components
dlltheta <- rep(NA, length(theta))
for(k in 1:length(dlltheta)) {
dotK <- K * distance(X[,k])/(theta[k]^2)
dlltheta[k] <- (n/2) * t(KiY) %*% dotK %*% KiY / (t(Y) %*% KiY) -
(1/2)*sum(diag(Ki %*% dotK))
}

## for g
dllg <- (n/2) * t(KiY) %*% KiY / (t(Y) %*% KiY) - (1/2)*sum(diag(Ki))

return(-c(dlltheta, dllg))
}

Here’s what you get when you feed gradnlsep into optim, otherwise with the same calls as before.

tic <- proc.time()
counter <- 0
outg <- optim(c(rep(0.1, ncol(X)), 0.1*var(y)), nlsep, gradnlsep,
method="L-BFGS-B", lower=eps, upper=c(rep(10, ncol(X)), var(y)), X=X, Y=y)
toc <- proc.time()
thetahat <- rbind(grad=outg$par, brute=out$par)
colnames(thetahat) <- c(paste0("d", 1:ncol(X)), "g")
thetahat
## d1 d2 d3 d4 d5 d6 d7 g
## grad 1.111 1.116 1.755 7.457 10.00 10 8.910 0.008419
## brute 1.046 1.157 1.793 9.036 9.98 10 9.207 0.008191

First, observe the similar, but not always identical result in terms of optimized parameter(s). Derivatives enhance accuracy and alter convergence criteria compared to tangent-based approximations which sometimes leads to small discrepancies. How about number of evaluations?

rbind(grad=c(outg$counts, actual=counter), brute)
## function gradient actual
## grad 138 138 138
## brute 71 71 1207

Far fewer; an order of magnitude fewer actually, and that pays dividends in time.

toc - tic
## elapsed
## 5.765

Unfortunately, it’s not $$10\times$$ faster with $$10\times$$ fewer evaluations because gradient evaluation takes time. Evaluating each derivative component – each iteration of the for loop in gradnlsep – involves a matrix multiplication quadratic in $$n$$. So that’s eight more quadratic-$$n$$-cost operations per evaluation compared to one for nlsep alone. Consequently, we see a 2–3$$\times$$ speedup. There are some inefficiencies in this implementation. For example, notice that nlsep and gradnlsep repeat some calculations. Also the matrix trace implementation, sum(diag(Ki %*% dotK)), is wasteful. Yet again I’ll ask you to hold that thought for when we get to library-based implementations, momentarily. Ok, we got onto this tangent after wondering if GPs could do much better, in terms of prediction, on the Friedman data.
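An aside on that wasteful trace before returning to the comparison: since $$K_n^{-1}$$ and $$\dot{K}_n$$ are both symmetric, $$\mathrm{tr}\{K_n^{-1} \dot{K}_n\}$$ equals the sum of the elementwise product of the two matrices, so the cubic-cost matrix product inside sum(diag(...)) can be replaced by a quadratic-cost one. The check below is my own sketch, with arbitrary stand-ins A and B playing the roles of $$K_n$$-like and $$\dot{K}_n$$-like matrices; both expressions print the same number.

A <- covar.sep(X, d=rep(1, ncol(X)), g=0.1)  ## symmetric, like K
B <- A * distance(X[,1])                     ## symmetric, like dotK
c(cubic=sum(diag(solve(A) %*% B)), quadratic=sum(solve(A) * B))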
So how does a separable GP compare against the isotropic one and MARS? First, take MLE hyperparameters and plug them into the predictive equations. K <- covar.sep(X, d=outg$par[1:ncol(X)], g=outg$par[ncol(X)+1]) Ki <- solve(K) tau2hat <- drop(t(y) %*% Ki %*% y / nrow(X)) KXX <- covar.sep(XX, d=outg$par[1:ncol(X)], g=outg$par[ncol(X)+1]) KX <- covar.sep(XX, X, d=outg$par[1:ncol(X)], g=0)\nmup2 <- KX %*% Ki %*% y\nSigmap2 <- tau2hat*(KXX - KX %*% Ki %*% t(KX))\n\nA 2 is tacked onto the variable names above so as not to trample on isotropic analogs. We’ll need both sets of variables to make a comparison based on score shortly. But first, here are RMSEs.\n\nrmse <- c(rmse, gpsep=sqrt(mean((yytrue - mup2)^2)))\nrmse\n## gpiso mars gpsep\n## 1.0732 1.5288 0.6441\n\nThe separable covariance structure performs much better. Whereas the isotropic GP only beats MARS 19/20 times in random Rmarkdown builds, the separable GP is never worse than MARS, and it’s also never worse than its isotropic cousin. It pays to learn separate lengthscales for each input coordinate.\n\nSince GPs emit full covariance structures we can also make a comparison by proper score (5.6). Mahalanobis distance is not appropriate here because training responses are not deterministic. Score calculations should commence on yy here, i.e., with noise, not on yytrue which is deterministic.\n\nscores <- c(gp=score(yy, mup, Sigmap), mars=NA,\ngpsep=score(yy, mup2, Sigmap2))\nscores\n## gp mars gpsep\n## -1093.4 NA -932.4\n\nRecall that larger scores are better; so again the separable GP wins.\n\n### 5.2.6 Library\n\nAll this cutting-and-pasting is getting a bit repetitive. Isn’t there a library for that? Yes, several! But first, this might be a good opportunity to pin down the steps for GP regression in a formal boxed algorithm. Think of it as a capstone. Some steps in Algorithm 5.1 are a little informal since the equations are long, and provided earlier. There are many variations/choices on exactly how to proceed, especially to do with MVN correlation structure, or kernel. More options follow, i.e., beyond isotropic and separable Gaussian variations, later in the chapter.\n\nAlgorithm 5.1: Gaussian Process Regression\n\nAssume correlation structure $$K(\\cdot, \\cdot)$$ has been chosen, which may include hyperparameter lengthscale (vector) $$\\theta$$ and nugget $$g$$; we simply refer to a combined $$\\theta \\equiv (\\theta, g)$$ below.\n\nRequire $$n \\times m$$ matrix of inputs $$X_n$$ and $$n$$-vector of outputs $$Y_n$$; optionally an $$n' \\times m$$ matrix of predictive locations $$\\mathcal{X}$$.\n\nThen\n\n1. Derive the concentrated log likelihood $$\\ell(\\theta)$$ following Eq. (5.8) under MVN sampling model with hyperparameters $$\\theta$$ and develop code to evaluate that likelihood as a function of $$\\theta$$.\n• Variations may depend on choice of $$K(\\cdot, \\cdot)$$, otherwise the referenced equations can be applied directly.\n2. Optionally, differentiate that log likelihood (5.11) with respect to $$\\theta$$, forming a gradient $$\\ell'(\\theta) \\equiv \\nabla \\ell(\\theta)$$, and implement it too as a code which can be evaluated as a function of $$\\theta$$.\n• Referenced equations apply directly so long as $$\\dot{K}_n$$, the derivative of the covariance matrix with respect to the components of $$\\theta$$, may be evaluated.\n3. Choose initial values and search ranges for the components of $$\\theta$$ being optimized.\n4. 
Plug log likelihood and (optionally) gradient code into your favorite optimizer (e.g., optim with method="L-BFGS-B"), along with initial values and ranges, obtaining $$\hat{\theta}$$.
- If any components of $$\hat{\theta}$$ are on the boundary of the chosen search range, consider expanding those ranges and repeating from step 3.
5. If $$\mathcal{X}$$ is provided, plug $$\hat{\theta}$$ and $$\mathcal{X}$$ into either pointwise (5.2) or joint (5.3) predictive equations.

Return MLE $$\hat{\theta}$$, which can be used later for predictions; mean vector $$\mu(\mathcal{X})$$ and covariance matrix $$\Sigma(\mathcal{X})$$ or variance vector $$\sigma^2(\mathcal{X})$$ if $$\mathcal{X}$$ provided.

Referenced equations in the algorithm are meant as examples. Text surrounding those links offers more context about how such equations are intended to be applied. Observe that the description treats predictive/testing locations $$\mathcal{X}$$ as optional. It’s quite common in implementation to separate inference and prediction, however Algorithm 5.1 combines them. If new $$\mathcal{X}$$ comes along, steps 1–4 can be skipped if $$\hat{\theta}$$ has been saved. If $$\hat{\tau}^2$$ and $$K_n^{-1}$$, which depend on $$\hat{\theta}$$, have also been saved then pointwise prediction is quadratic in $$n$$. They’re quadratic in $$n'$$ when a full predictive covariance $$\Sigma(\mathcal{X})$$ is desired, which may be problematic for large grids. Evaluating those equations, say to obtain draws, necessitates decomposition and is thus cubic in $$n'$$.

There are many libraries automating the process outlined by Algorithm 5.1, providing several choices of families of covariance functions and variations in hyperparameterization. For R these include mlegp (Dancik 2018), GPfit (MacDonald, Chipman, and Ranjan 2019), spatial (Ripley 2015), fields (Nychka et al. 2019), RobustGaSP (Gu, Palomo, and Berger 2018), and kernlab (Karatzoglou, Smola, and Hornik 2018) – all performing maximum likelihood (or maximum a posteriori/Bayesian regularized) point inference; or tgp (Gramacy and Taddy 2016), emulator (Hankin 2019), plgp, and spBayes (Finley and Banerjee 2019) – performing fully Bayesian inference. There are a few more that will be of greater interest later, in Chapters 9 and 10. For Python see GPy, and for MATLAB/Octave see gpstuff (Vanhatalo et al. 2012). Erickson, Ankenman, and Sanchez (2018) provide a nice review and comparison of several libraries.

Here we shall demonstrate the implementation in laGP (Gramacy and Sun 2018), in part due to my intimate familiarity. It’s the fastest GP regression library that I’m aware of, being almost entirely implemented in C. We’ll say a little more about speed momentarily. The main reason for highlighting laGP here is because of its more advanced features, and other convenient add-ons for sequential design and Bayesian optimization, which will come in handy in later chapters. The basic GP interface in the laGP package works a little differently than other packages do, for example compared to those above. But it’s the considerations behind those peculiarities from which laGP draws its unmatched speed.

Ok, now for laGP’s basic GP functionality on the Friedman data introduced in §5.2.5. After loading the package, the first step is to initialize a GP fit. This is where we provide the training data, and choose initial values for lengthscale $$\theta$$ and nugget $$g$$. It’s a bit like a constructor function, for readers familiar with C or C++.
Code below also checks a clock so we can compare to earlier timings.

library(laGP)
tic <- proc.time()
gpi <- newGPsep(X, y, d=0.1, g=0.1*var(y), dK=TRUE)

The “sep” in newGPsep indicates a separable/anisotropic Gaussian formulation. An isotropic version is available from newGP. At this time, the laGP package only implements Gaussian families (and we haven’t talked about any others yet anyways).

After initialization, an MLE subroutine may be invoked. Rather than maximizing a concentrated log likelihood, laGP actually maximizes a Bayesian integrated log likelihood. But that’s not an important detail. In fact, the software deliberately obscures that nuance with its mle... naming convention, rather than mbile... or something similar, which would probably look strange to the average practitioner.

mle <- mleGPsep(gpi, param="both", tmin=c(eps, eps), tmax=c(10, var(y)))
toc <- proc.time()

Notice that we don’t need to provide the training data (X, y) again. Everything passed to newGPsep, and all data quantities derived therefrom, is stored internally by the gpi object. Once the MLE calculation is finished, that object is updated to reflect the new, optimal hyperparameter setting. More on implementation details is provided below. Outputs from mleGPsep report hyperparameters and convergence diagnostics, primarily for purposes of inspection.

thetahat <- rbind(grad=outg$par, brute=out$par, laGP=mle$theta)
colnames(thetahat) <- c(paste0("d", 1:ncol(X)), "g")
thetahat
##          d1    d2    d3    d4    d5 d6     d7        g
## grad  1.111 1.116 1.755 7.457 10.00 10  8.910 0.008419
## brute 1.046 1.157 1.793 9.036  9.98 10  9.207 0.008191
## laGP  1.100 1.071 1.732 8.099 10.00 10 10.000 0.008527

Not exactly the same estimates as we had before, but pretty close. Since it’s not the same objective being optimized, we shouldn’t expect identical estimates. And how long did it take?

toc - tic
## elapsed
##   1.083

Now that is faster! Almost five times faster than our bespoke gradient-based version, and ten times faster than our earlier non-gradient-based one. What makes it so fast? The answer is not that it performs fewer optimization iterations, although sometimes that is the case, …

rbind(grad=c(outg$counts, actual=counter), brute,
  laGP=c(mle$its, mle$its, NA))
## function gradient actual
## brute 71 71 1207
## laGP 139 139 NA

… or that it uses a different optimization library. In fact, laGP’s C backend borrows the C subroutines behind L-BFGS-B optimization provided with R. One explanation for the speed boost is the compiled (and optimized) C code, but that’s only part of the story. The implementation is very careful not to re-calculate anything. Matrices and decompositions are shared between objective and gradient, which involve many of the same operations. Inverses are based on Cholesky decompositions, which can be re-used to calculate determinants without new decompositions. (This can be done in R too, with chol and chol2inv, but it’s quite a bit faster in C, where pointers and pass-by-reference save on the automatic copies an R-only implementation would incur.)
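Here’s that Cholesky-reuse trick in R, as a minimal sketch (assuming a covariance matrix K like those built by hand earlier): one $$\mathcal{O}(n^3)$$ decomposition furnishes both the inverse and the log determinant needed by the likelihood and its gradient.

C <- chol(K)                  ## one O(n^3) decomposition ...
Ki <- chol2inv(C)             ## ... reused for the inverse
ldetK <- 2*sum(log(diag(C)))  ## ... and again for log |K|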
Although mle output reports estimated hyperparameter values, those are mostly for information purposes. That mle object is not intended for direct use in subsequent calculations, such as making predictions. The gpi reference returned by newGPsep, which is passed to mleGPsep, is where the real information lies. In fact, the gpi variable is merely an index – a unique integer – pointing to a GP object stored in backend C data structures, containing updated $$K_n$$ and $$K_n^{-1}$$, related derivative quantities, and everything else needed for further calculations: more MLE iterations if needed, predictions, and quick updates if new training data arrive (more in Chapters 6–7). These are modified as a “side effect” of the mle calculation. That means nothing needs to be “rebuilt” to make predictions. No copying of matrices back and forth. The C-side GP object is ready for whatever comes next, behind the scenes.

p <- predGPsep(gpi, XX)

How good are these predictions compared to what we had before? Let’s complete the table, fancy this time because we’re done with this experiment. See Table 5.1.

rmse <- c(rmse, laGP=sqrt(mean((yytrue - p$mean)^2)))
scores <- c(scores, laGP=score(yy, p$mean, p$Sigma))
kable(rbind(rmse, scores), caption="RMSEs and proper scores on the Friedman data.")

TABLE 5.1: RMSEs and proper scores on the Friedman data.

|        | gpiso     | mars  | gpsep     | laGP      |
|--------|-----------|-------|-----------|-----------|
| rmse   | 1.073     | 1.529 | 0.6441    | 0.6355    |
| scores | -1093.429 | NA    | -932.3916 | -929.8908 |

About the same as before; we’ll take a closer look at potential differences momentarily. When finished with the data structures stored for a GP fit in C, we must remember to call the destructor function, otherwise memory will leak. The stored GP object referenced by gpi is not under R’s memory management. (Calling rm(gpi) would free the integer reference, but not the matrices it refers to, which live in C data structures otherwise hidden from R and the user.)

deleteGPsep(gpi)

### 5.2.7 A bakeoff

As a capstone on the example above, and to connect to a dangling thread from Chapter 4, code below performs an LHS bakeoff, in the style of Algorithm 4.1, over $$R=30$$ Monte Carlo (MC) repetitions with the four comparators above. Begin by setting up matrices to store our two metrics, RMSE and proper score, and one new one: execution time.

R <- 30
scores <- rmses <- times <- matrix(NA, nrow=R, ncol=4)
colnames(scores) <- colnames(rmses) <- colnames(times) <- names(rmse)

Then loop over replicate data with each comparator applied to the same LHS-generated training and testing partition, each of size $$n = n' = 200$$. Note that this implementation discards the one MC replicate we already performed above, which was anyway slightly different under $$n' = 1000$$ testing runs. Since we’re repeating thirty times here, a smaller testing set suffices. As in our previous example, out-of-sample RMSEs are calculated against the true (no noise) response, and scores against a noisy version. Times recorded encapsulate both fitting and prediction calculations.
for(r in 1:R) {
  ## train-test partition and application of f(x) on both
  data <- fried(2*n, m)
  train <- data[1:n,]
  test <- data[(n + 1):(2*n),]

  ## extract data elements from both train and test
  X <- as.matrix(train[,1:m])
  y <- drop(train$Y)
  XX <- as.matrix(test[,1:m])
  yy <- drop(test$Y)          ## for score
  yytrue <- drop(test$Ytrue)  ## for RMSE

  ## isotropic GP fit and predict by hand
  tic <- proc.time()
  D <- distance(X)
  out <- optim(c(0.1, 0.1*var(y)), nl, gradnl, method="L-BFGS-B",
    lower=eps, upper=c(10, var(y)), D=D, Y=y)
  K <- exp(-D/out$par[1]) + diag(out$par[2], nrow(D))
  Ki <- solve(K)
  tau2hat <- drop(t(y) %*% Ki %*% y / nrow(D))
  DXX <- distance(XX)
  KXX <- exp(-DXX/out$par[1]) + diag(out$par[2], ncol(DXX))
  DX <- distance(XX, X)
  KX <- exp(-DX/out$par[1])
  mup <- KX %*% Ki %*% y
  Sigmap <- tau2hat*(KXX - KX %*% Ki %*% t(KX))
  toc <- proc.time()

  ## calculation of metrics for GP by hand
  rmses[r,1] <- sqrt(mean((yytrue - mup)^2))
  scores[r,1] <- score(yy, mup, Sigmap)
  times[r,1] <- (toc - tic)[3]

  ## MARS fit, predict, and RMSE calculation (no score)
  tic <- proc.time()
  fit.mars <- mars(X, y)
  p.mars <- predict(fit.mars, XX)
  toc <- proc.time()
  rmses[r,2] <- sqrt(mean((yytrue - p.mars)^2))
  times[r,2] <- (toc - tic)[3]

  ## separable GP fit and predict by hand
  tic <- proc.time()
  outg <- optim(c(rep(0.1, ncol(X)), 0.1*var(y)), nlsep, gradnlsep,
    method="L-BFGS-B", lower=eps, upper=c(rep(10, m), var(y)), X=X, Y=y)
  K <- covar.sep(X, d=outg$par[1:m], g=outg$par[m+1])
  Ki <- solve(K)
  tau2hat <- drop(t(y) %*% Ki %*% y / nrow(X))
  KXX <- covar.sep(XX, d=outg$par[1:m], g=outg$par[m+1])
  KX <- covar.sep(XX, X, d=outg$par[1:m], g=0)
  mup2 <- KX %*% Ki %*% y
  Sigmap2 <- tau2hat*(KXX - KX %*% Ki %*% t(KX))
  toc <- proc.time()

  ## calculation of metrics for separable GP by hand
  rmses[r,3] <- sqrt(mean((yytrue - mup2)^2))
  scores[r,3] <- score(yy, mup2, Sigmap2)
  times[r,3] <- (toc - tic)[3]

  ## laGP based separable GP
  tic <- proc.time()
  gpi <- newGPsep(X, y, d=0.1, g=0.1*var(y), dK=TRUE)
  mle <- mleGPsep(gpi, param="both", tmin=c(eps, eps), tmax=c(10, var(y)))
  p <- predGPsep(gpi, XX)
  deleteGPsep(gpi)
  toc <- proc.time()

  ## calculation of metrics for laGP based separable GP
  rmses[r,4] <- sqrt(mean((yytrue - p$mean)^2))
  scores[r,4] <- score(yy, p$mean, p$Sigma)
  times[r,4] <- (toc - tic)[3]
}

  psd[i,] <- sqrt(p$s2)
  ll[i] <- llikGP(gpi)
  deleteGP(gpi)
}
l <- exp(ll - max(ll))

The code above utilizes isotropic (newGP/predGP) GP functions from laGP. Since the data is in 1d, these are equivalent to the separable analogs illustrated earlier in §5.2.6. For now, concentrate on (log) likelihood evaluations; we’ll come back to predictions momentarily. Figure 5.20 shows the resulting likelihood surface as an image in the $$\theta \times g$$ plane. Notice that the final line in the code above exponentiates the log likelihood, so the figure shows $$z$$-values (via color) on the likelihood scale.

image(theta, g, matrix(l, ncol=length(g)), col=cols)
contour(theta, g, matrix(l, ncol=length(g)), add=TRUE)
"FIGURE 5.20: Log likelihood surface over lengthscale $$\\theta$$ and nugget $$g$$ for mixed sinusoid data (5.19). Since the data is random, it’s hard to anticipate an appropriate range for $$\\theta$$ and $$g$$ axes. It’s worth repeating these codes in your own R session to explore variations that arise under new datasets generated under novel random noise, adding another layer to the sense of estimation risk, i.e., beyond that which is illustrated here. What can be seen in Figure 5.20? Maybe it looks like an ordinary log likelihood surface in 2d: pleasantly unimodal, convex, etc., and easy to maximize by eyeball norm. (Who needs fancy numerical optimizers after all?) There’s some skew to the surface, perhaps owing to positivity restrictions placed on both hyperparameters. In fact, that skewness is hiding a multimodal posterior distribution over functions. The modes are “higher signal/lower noise” and “lower signal/higher noise”. Some random realizations reveal this feature through likelihood more than others, which is one reason why repeating this in your own session may be helpful. Also keep in mind that there’s actually a third hyperparameter, $$\\hat{\\tau}^2$$, being optimized implicitly through the concentrated form of the log likelihood (5.8). So there’s really a third dimension to this view which is missing, challenging a more precise visualization and thus interpretation. Such signal–noise tension is an ordinary affair, and settling for one MLE tuple in a landscape of high values – even if you’re selecting the very highest ones – can grossly underestimate uncertainty. What is apparent in Figure 5.20 is that likelihood contours trace out a rather large area in hyperparameter space. Even the red “outer-reaches” in the viewing area yield non-negligible likelihood, which is consistent across most random realizations. This likelihood surface is relatively flat. The best view of signal-to-noise tension is through the predictive surface, in particular what that surface would look like for a multitude of most likely hyperparameter settings. To facilitate that, code below pre-calculates quantiles derived from predictive equations obtained for each hyperparameter pair. q1 <- pm + qnorm(0.95, sd=psd) q2 <- pm + qnorm(0.05, sd=psd) Figure 5.21 shows three sets of lines (mean and quantile-based interval) for every hyperparameter pair, but not all lines are visualized equally. Transparency is used to downweight low likelihood values. Multiple low likelihood settings accumulate shading, when the resulting predictive equations more or less agree, and gain greater opacity. plot(x,y, ylim=c(range(q1, q2))) matlines(xx, t(pm), col=rgb(0,0,0,alpha=(l/max(l))/2), lty=1) matlines(xx, t(q1), col=rgb(1,0,0,alpha=(l/max(l))/2), lty=2) matlines(xx, t(q2), col=rgb(1,0,0,alpha=(l/max(l))/2), lty=2)",
"FIGURE 5.21: Posterior predictive equations in terms of means (solid-black) and quantiles (dashed-red). The hyperparameter grid is $$100 \\times 100$$, but clearly there are not $$3 \\times 10000$$ distinct lines visible in the figure. Nevertheless it’s easy to see two regimes. Some of the black/red lines are more wavy, explaining the low-amplitude periodic structure as signal; others are less wavy, explaining it as noise. Although the likelihood was unimodal, we have a multimodal posterior predictive surface. For all the emphasis on a Bayesian perspective, marginalizing over latent functions and whatever, it’s surprising that Bayesian inference is rarely used where it’s needed most. Clearly the MLE/MAP is missing an important element of uncertainty. Only fully Bayesian posterior inference, after specifying priors on hyperparameters and running Markov chain Monte Carlo (MCMC) for posterior sampling, could provide a full assessment of estimation risk and provide posterior predictive quantities with full UQ. Very few libraries offer this functionality, tgp, spBayes and plgp being three important exceptions, yet these rely on rather conventional covariance specifications. The tgp package has some extra, highly non-standard, features which will be discussed in more detail in §9.2.2. As covariance kernels incorporate more hyperparameters – smoothness, separable vectorized lengthscales, rank-one anisotropy, latent noise structures (Chapter 10), whatever Franken-kernel results from adding/convolving/multiplying – and likewise incorporate well-thought-out parametric mean structures, it’s obvious that a notionally nonparametric GP framework can become highly and strongly parameterized. In such settings, one must be very careful not to get overconfident about point-estimates so-derived. The only way to do it right, in my opinion, is to be fully Bayesian. With that in mind, it’s a shame to give the (at worst false, at best incomplete) impression of being Bayesian without having to do any of those things. In that light, ML marketing of GPs as Bayesian updating is a double-edged sword. Just because something can be endowed with a Bayesian interpretation, doesn’t mean that it automatically inherits all Bayesian merits relative to a more classical approach. A new ML Bayesian perspective on kriging spawned many creative ideas, but it was also a veneer next to the real thing. ## 5.4 Challenges and remedies This final section wraps up our GP chapter on somewhat of a lower note. GPs are remarkable, but they’re not without limitations, the most limiting being computational. In order to calculate $$K_n^{-1}$$ and $$|K_n|$$ an $$\\mathcal{O}(n^3)$$ decomposition is required for dense covariance matrices $$K_n$$, as generated by most common kernels. In the case of MLE inference, that limits training data sizes to $$n$$ in the low thousands, loosely, depending on how many likelihood and gradient evaluations are required to perform numerical maximization. You can do a little better with the right linear algebra libraries installed. See Appendix A.1 for details. (It’s easier than you might think.) Fully Bayesian GP regression, despite many UQ virtues extolled above, can all but be ruled out on computational grounds when $$n$$ is even modestly large ($$n > 2000$$ or so), speedups coming with fancy matrix libraries notwithstanding. If it takes dozens or hundreds of likelihood evaluations to maximize a likelihood, it will take several orders of magnitude more to sample from a posterior by MCMC. 
Even in cases where MCMC is just doable, it’s sometimes not clear that posterior inference is the right way to spend valuable computing resources. Surrogate modeling of computer simulation experiments is a perfect example. If you have available compute cycles, and are pondering spending them on expensive MCMC to better quantify uncertainty, why not spend them on more simulations to reduce that uncertainty instead? We’ll talk about design and sequential design in the next two chapters.

A full discussion of computational remedies, which mostly boils down to bypassing big matrix inverses, will be delayed until Chapter 9. An exception is GP approximation by convolution, which has been revisited periodically over the years by the geostatistical and computer experiment communities. Spatial and surrogate modeling by convolution can offer flexibility and speed in low input dimension. Modern versions, which have breathed new life into geospatial (i.e., 2d input) endeavours by adding multi-resolution features and parallel computing (Katzfuss 2017), are better reviewed in another text. With emphasis here predominantly on the modestly-larger-dimensional settings common in ML and computer surrogate modeling contexts, the presentation below represents somewhat of a straw man relative to Chapter 9 contributions. More favorably said: it offers another perspective on, and thus potentially insight into, the nature of GP regression.

### 5.4.1 GP by convolution

In low input dimension it’s possible to avoid decomposing a big covariance matrix and obtain an approximate GP regression by taking pages out of a splines/temporal modeling play-book. Higdon (2002) shows that one may construct a GP $$f(x)$$ over a general region $$x \in \mathcal{X}$$ by convolving a continuous Gaussian white noise process $$\beta(x)$$ with a smoothing kernel $$k(x)$$:

$\begin{equation} f(x) = \int_{\mathcal{X}} k(u - x) \beta(u) \; du, \quad \mbox{for } x \in \mathcal{X}. \tag{5.20} \end{equation}$

The resulting covariance for $$f(x)$$ depends only on the relative displacement $$r=x - x'$$:

$c(r) = \mathbb{C}\mathrm{ov}(f(x), f(x')) = \int_{\mathcal{X}} k(u-x)k(u-x') \; du = \int_{\mathcal{X}} k(u-r) k(u)\; du.$

In the case of isotropic $$k(x)$$ there’s a 1:1 equivalence between smoothing kernel $$k$$ and covariance kernel $$c$$:

$\mbox{e.g., } \quad k(x) \propto \exp\left\{-\frac{1}{2} \left\Vert x \right\Vert^2 \right\} \rightarrow c(r) \propto \exp\left\{-\frac{1}{2} \left\Vert\frac{r}{\sqrt{2}}\right\Vert^2 \right\}.$

Note the implicit choice of lengthscale exhibited by this equivalence.
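That equivalence is easy to sanity-check numerically with base R’s integrate; constants of proportionality wash out by taking ratios against $$r=0$$.

c.conv <- function(r)
  integrate(function(u) dnorm(u - r)*dnorm(u), -Inf, Inf)$value
r <- seq(0, 3, by=0.5)
rbind(conv=sapply(r, c.conv)/c.conv(0), exact=exp(-0.5*(r/sqrt(2))^2))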
This means that rather than defining $$f(x)$$ directly through its covariance function, which is what we’ve been doing in this chapter up until now, it may instead be specified indirectly, yet equivalently, through the latent a priori white noise process $$\beta(x)$$.

Sadly, for general kernels the integrals above are not tractable analytically. However, by restricting the latent process $$\beta(x)$$ to spatial sites $$\omega_1, \dots, \omega_\ell$$, we may instead approximate the requisite integral with a sum. Like knots in splines, the $$\omega_j$$ anchor the process at certain input locations. Bigger $$\ell$$ means better approximation but greater computational cost.

Now let $$\beta_j = \beta(\omega_j)$$, for $$j=1,\dots,\ell$$, and the resulting (approximate yet continuous) latent function under a GP prior may be constructed as

$f(x) = \sum_{j=1}^\ell \beta_j k(x - \omega_j),$

where $$k(\cdot - \omega_j)$$ is a smoothing kernel centered at $$\omega_j$$. This $$f$$ is a random function because the $$\beta_j$$ are random variables. Choice of kernel is up to the practitioner, with the Gaussian above being a natural default. In spline/MARS regression, a “hockey-stick” kernel $$k(\cdot - \omega_j) = (\cdot - \omega_j)_{+} \equiv (\cdot - \omega_j) \cdot \mathbb{I}_{\{\cdot - \omega_j > 0\}}$$ is a typical first choice.

For a concrete example, here’s how one would generate from the prior under this formulation, choosing a normal density with mean zero and variance one as the kernel, and an evenly spaced grid of $$\ell=10$$ locations $$\omega_j$$, $$j=1,\dots,\ell$$. It’s helpful to have the knot grid span a slightly longer range in the input domain (e.g., about 10% bigger) than the desired range for realizations.

ell <- 10
n <- 100
X <- seq(0, 10, length=n)
omega <- seq(-1, 11, length=ell)
K <- matrix(NA, ncol=ell, nrow=n)
for(j in 1:ell) K[,j] <- dnorm(X, omega[j])

The last line in the code above is key. Calculating the sum approximating integral (5.20) requires kernel evaluations at every pair of $$x$$ and $$\omega_j$$ locations. To obtain a finite dimensional realization on an $$n=100$$-sized grid, we can store the requisite evaluations in a $$100 \times 10$$ matrix.

The final ingredient is random $$\beta$$s – the Gaussian white noise process. For each realization we need $$\ell$$ such deviates.

beta <- matrix(rnorm(3*ell), ncol=3)

To visualize three sample paths from the prior, the code above takes $$3\ell$$ samples, for three sets of $$\ell$$ deviates in total, stored in an $$\ell \times 3$$ matrix. The sum is most compactly calculated as a simple matrix–vector product between K and beta values. (Accommodating our three sets of beta vectors, the code below utilizes a matrix–matrix product.)

F <- K %*% beta

Figure 5.22 plots those three realizations, showing the locations of knots $$\omega_j$$ as vertical dashed bars.

matplot(X, F, type="l", lwd=2, lty=1, col="gray", xlim=c(-1,11), ylab="f(x)")
abline(v=omega, lty=2, lwd=0.5)
"FIGURE 5.22: Three draws from a GP prior by convolution; knots $$\\omega_j$$ indicated by vertical dashed bars. Generating from priors is fun, but learning from data is where real interest lies. When training data come along, possibly observed under noise, the generating mechanism above suggests the following model for the purposes of inference. Let $y(x) = f(x) + \\varepsilon, \\quad \\varepsilon \\sim \\mathcal{N}(0, \\sigma^2),$ which may be fit through an OLS regression, e.g., $Y_n = K_n \\beta + \\varepsilon, \\quad \\mbox{ where } \\quad K_n^{ij} = k(x_i - \\omega_j),$ and $$x_i$$ are training input $$x$$-values in the data, i.e., with $$x_i^\\top$$ filling out rows of $$X_n$$. Whereas previously $$\\beta$$ was random, in the regression context their role changes to that of unknown parameter. Since that vector can be high dimensional, of length $$\\ell$$ for $$\\ell$$ knots, they’re usually inferred under some kind of regularization, i.e., ridge, lasso, full Bayes or through random effects. Notice that while $$K_n$$ is potentially quite large ($$n \\times \\ell$$), if $$\\ell$$ is not too big we don’t need to decompose a big matrix. Consequently, such a variation could represent a substantial computational savings relative to canonical GP regression. So the whole thing boils down to an ordinary linear regression, but instead of using the $$X_n$$ inputs directly it uses features $$K_n$$ derived from distances between $$x_i$$ and $$\\omega_j$$-values. By contrast, canonical GP regression entertains distances between all $$x_i$$ and $$x_j$$ in $$X_n$$. This swap in distance anchoring set is similar in spirit to inducing point/pseudo input/predictive process approaches, reviewed in greater depth in Chapter 9. To see it in action, let’s return to the multi-tiered periodic example (5.19) from §5.3.4, originally from Higdon’s (2002) convolution GP paper. First build training data quantities. n <- length(x) K <- as.data.frame(matrix(NA, ncol=ell, nrow=n)) for(j in 1:ell) K[,j] <- dnorm(x, omega[j]) names(K) <- paste0(\"omega\", 1:ell) Then fit the regression. For simplicity, OLS is entertained here without regularization. Since $$n$$ is quite a bit bigger than $$\\ell$$ in this case, penalization to prevent numerical instabilities or high standard errors isn’t essential. Naming the columns of K helps when using predict below. fit <- lm(y ~ . -1, data=K) Notice that an intercept is omitted in the regression formula above: we’re assuming a zero-mean GP. Also it’s worth noting that the $$\\hat{\\sigma}^2$$ estimated by lm is equivalent to $$\\hat{\\tau}^2 \\hat{g}$$ in our earlier, conventional GP specification. Prediction on a grid in the input space at $$\\mathcal{X} \\equiv$$ XX involves building out predictive feature space by evaluating the same kernel(s) at those new locations … xx <- seq(-1, 11, length=100) KK <- as.data.frame(matrix(NA, ncol=ell, nrow=length(xx))) for(j in 1:ell) KK[,j] <- dnorm(xx, omega[j]) names(KK) <- paste0(\"omega\", 1:ell) … and then feeding those in as newdata into predict.lm. It’s essential that KK have the same column names as K. p <- predict(fit, newdata=KK, interval=\"prediction\") Figure 5.23 shows the resulting predictive surface summarized as mean and 95% predictive interval(s). plot(x, y) lines(xx, p[,1]) lines(xx, p[,2], col=2, lty=2) lines(xx, p[,3], col=2, lty=2)",
"FIGURE 5.23: Posterior predictive under GP convolution. That surface seems to agree with surfaces provided in Figure 5.21, which synthesized a grid of hyperparameter settings. Entertaining smaller lengthscales is a simple matter of providing a kernel with smaller variance. For example, the kernel below possesses an effective (square-root) lengthscale which is half the original, leading to a wigglier surface. See Figure 5.24. for(j in 1:ell) K[,j] <- dnorm(x, omega[j], sd=0.5) fit <- lm(y ~ . -1, data=K) for(j in 1:ell) KK[,j] <- dnorm(xx, omega[j], sd=0.5) p <- predict(fit, newdata=KK, interval=\"prediction\") plot(x, y) lines(xx, p[,1]) lines(xx, p[,2], col=2, lty=2) lines(xx, p[,3], col=2, lty=2)",
"FIGURE 5.24: Predictive surface under a smaller kernel width/effective lengthscale; compare to Figure 5.23. Fixing the number $$\\ell$$ and location of kernel centers, the $$\\omega_j$$’s, and treating their common scale as unknown, inference can be performed with the usual suspects: likelihood (MLE or Bayes via least squares), CV, etc. Since this exposition is more of a side note, we’ll leave details to the literature. A great starting point is the Ph.D. dissertation of Chris Paciorek (2003), with references and links therein. One feature that this method accommodates rather more gracefully than a canonical GP approach involves relaxations of stationarity, which is the main methodological contribution in Chris’ thesis. Allowing kernels and their parameterization to evolve in space represents a computationally cheap and intuitive mechanism for allowing distance-based dynamics to vary geographically. The culmination of these ideas is packaged neatly by Paciorek and Schervish (2006). There has been a recent resurgence in this area with the advent of deep Gaussian processes (Dunlop et al. 2018; Damianou and Lawrence 2013). One downside worth mentioning is the interplay between kernel width, determining effective lengthscale, and density of the $$\\omega_j$$’s. For fixed kernel, accuracy of approximation improves as that density increases. For fixed $$\\omega_j$$, however, accuracy of approximation diminishes if kernel width becomes narrower (smaller variance in the Gaussian case) because that has the effect of increasing kernel-distance between $$\\omega_j$$’s, and thus distances between them and inputs $$X_n$$. Try the code immediately above with sd=0.1, for example. A kernel width of 0.1 may be otherwise ideal, but not with the coarse grid of $$\\omega_j$$’s in place above; an order of magnitude denser grid (much bigger $$\\ell$$) would be required. At the other end, larger kernel widths can be problematic numerically, leading to ill-conditioned Gram matrices $$K_n^\\top K_n$$ and thus problematic decompositions when solving for $$\\hat{\\beta}$$. This can happen even when the column dimension $$\\ell$$ is small relative to $$n$$. Schemes allowing scale and density of kernels to be learned simultaneously, and which support larger effective lengthscales (even with fixed kernel density), require regularization and consequently demand greater computation as matrices K and KK become large and numerically unwieldy. Some kind of penalty on complexity, or shrinkage prior on $$\\beta$$, is needed to guarantee a well-posed least-squares regression problem and to prevent over-fitting, as can happen in any setting where bases can be expanded to fit noise at the expense of signal. This issue is exacerbated as input dimension increases. Bigger input spaces lead to exponentially increasing inter-point distances, necessitating many more $$\\omega_j$$’s to fill out the void. The result can be exponentially greater computation and potential to waste those valuable resources over-fitting. Speaking of higher input dimension, how do we do that? As long as you can fill out the input space with $$\\omega_j$$’s, and increase the dimension of the kernel, the steps are unchanged. Consider a look back at our 2d data from earlier, which we conveniently saved as X2 and y2, in §5.2.4. An $$\\ell = 10 \\times 10$$ dense grid of $$\\omega_j$$’s would be quite big – bigger than the data size $$n = 80$$, comprised of two replicates of forty, necessitating regularization. 
We can be more thrifty by taking a page out of the space-filling design literature, using LHSs for the knots $$\omega_j$$ in just the same way we did for the design $$X_n$$. R code below chooses $$\ell = 20$$ maximin LHS (§4.3) locations to ensure that the $$\omega_j$$’s are as spread out as possible.

ell <- 20
omega <- maximinLHS(ell, 2)
omega[,1] <- (omega[,1] - 0.5)*6 + 1
omega[,2] <- (omega[,2] - 0.5)*6 + 1

Next build the necessary training data quantities. Rather than bother with a library implementing bivariate Gaussians for the kernel in 2d, code below simply multiplies two univariate Gaussian densities together. Since the two Gaussians share the same parameterization, this treatment is isotropic in the canonical covariance-based GP representation.

n <- nrow(X2)
K <- as.data.frame(matrix(NA, ncol=ell, nrow=n))
for(j in 1:ell) K[,j] <- dnorm(X2[,1], omega[j,1])*dnorm(X2[,2], omega[j,2])
names(K) <- paste0("omega", 1:ell)

Kernel-based features in hand, fitting is identical to our previous 1d example.

fit <- lm(y2 ~ . -1, data=K)

Now for predictive quantities on testing inputs. Code below re-generates predictive $$\mathcal{X} =$$ XX values on a dense grid to ease visualization. Otherwise this development mirrors our build of training data features above.

xx <- seq(-2, 4, length=gn)
XX <- expand.grid(xx, xx)
KK <- as.data.frame(matrix(NA, ncol=ell, nrow=nrow(XX)))
for(j in 1:ell) KK[,j] <- dnorm(XX[,1], omega[j,1])*dnorm(XX[,2], omega[j,2])
names(KK) <- paste0("omega", 1:ell)

Since it’s easier to show predictive standard deviation than error-bars in this 2d context, the code below provides se.fit rather than interval="prediction" to predict.lm.

p <- predict(fit, newdata=KK, se.fit=TRUE)

Figure 5.25 shows mean (left) and standard deviation (right) surfaces side-by-side. Training data inputs are indicated as open circles, and $$\omega_j$$’s as filled circles.

par(mfrow=c(1,2))
image(xx, xx, matrix(p$fit, ncol=gn), col=cols, main="mean",
  xlab="x1", ylab="x2")
points(X2)
points(omega, pch=20)
image(xx, xx, matrix(p$se.fit, ncol=gn), col=cols, main="sd",
  xlab="x1", ylab="x2")
points(X2)
points(omega, pch=20)
"FIGURE 5.25: Convolution GP posterior predictive via mean (left) and standard deviation (right); compare with Figure 5.11. Design is indicated with open circles; knots as filled dots. For my money, this doesn’t look as good as our earlier results in Figure 5.11. The signal isn’t as clear in either plot. Several explanations suggest themselves upon reflection. One is differing implicit lengthscale, in particular the one used immediately above is not fit from data. Another has to do with locations of the $$\\omega_j$$, and their multitude: $$\\ell = 20$$. Notice how both mean and sd surfaces exhibit “artifacts” near some of the $$\\omega_j$$. Contrasts are most stark in the sd surface, with uncertainty being much higher nearby filled circles which are far from open ones, rather than resembling sausages as seen earlier. Such behavior diminishes with larger $$\\ell$$ and when learning kernel widths from data, but at the expense of other computational and fitting challenges. In two dimensions or higher, there’s potential for added flexibility by parameterizing the full covariance structure of kernels: tuning $$\\mathcal{O}(m^2)$$ unknowns in an $$m$$-dimensional input space, rather than forcing a diagonal structure with all inputs sharing a common width/effective lengthscale, yielding isotropy. A separable structure is a first natural extension, allowing each coordinate to have its own width. Rotations, and input-dependent scale (i.e., as a function of the $$\\omega_j$$) is possible too, implementing a highly flexible nonstationary capability if a sensible strategy can be devised to infer all unknowns. The ability to specify a flexible kernel structure that can warp in the input domain (expand, contract, rotate) as inferred by the data is seductive. That was Higdon’s motivation in his original 2001 paper, and the main subject of Paciorek’s thesis. But details and variations, challenges and potential solutions, are numerous enough in today’s literature (e.g., Dunlop et al. 2018), almost twenty years later, to fill out a textbook of their own. Unfortunately, those methods extend poorly to higher dimension because of the big $$\\ell$$ required to fill out an $$m$$-dimensional space, usually $$\\ell \\propto 10^m$$ with $$\\mathcal{O}(\\ell^3)$$ computation. Why is this a shame? Because it’s clearly desirable to have some nonstationary capability, which is perhaps the biggest drawback of the canonical (stationary) GP regression setup, as demonstrated below. ### 5.4.2 Limits of stationarity If $$\\Sigma(x, x') \\equiv k(x - x')$$, which is what it means for a spatial process to be stationary, then covariance is measured the same everywhere. That means we won’t be able to capture dynamics whose nature evolves in the input space, like in our motivating NASA LGBB example (§2.1). Recall how dynamics in lift exhibit an abrupt change across the sound barrier. That boundary separates a “turbulent” lift regime for high angles of attack from a relatively flat relationship at higher speeds. Other responses show tame dynamics away from mach 1, but interesting behavior nearby. Taking a global view of the three-dimensional input space, LGBB lift exhibits characteristically nonstationary behavior. Locally, however, stationary dynamics reign except perhaps right along the mach 1 boundary, which may harbor discontinuity. How can we handle data of this kind? One approach is to ignore the problem: fit an ordinary stationary GP and hope for the best. 
As you might guess, ordinary GP prediction doesn’t fail spectacularly, because good nonparametric methods have a certain robustness about them, as demonstrated in several variations in this chapter. But that doesn’t mean there isn’t room for improvement. As a simpler illustration, consider the following variation on the multi-scale periodic process from §5.3.4.

X <- seq(0, 20, length=100)
y <- (sin(pi*X/5) + 0.2*cos(4*pi*X/5)) * (X <= 9.6)
lin <- X > 9.6
y[lin] <- -1 + X[lin]/10
y <- y + rnorm(length(y), sd=0.1)

The response is wiggly, identical to (5.19) from Higdon (2002) to the left of $$x=9.6$$, and straight (linear) to the right. This example was introduced by Gramacy and Lee (2008a) as a cartoon mimicking LGBB behavior (§2.1) in a toy 1d setting. Our running 2d example from §5.1.2 was conceived as a higher-dimensional variation. To keep the discussion simpler here, we’ll stick to 1d and return to the others in Chapter 9. Consider the following stationary GP fit to these data using methods from laGP.

gpi <- newGP(matrix(X, ncol=1), y, d=0.1, g=0.1*var(y), dK=TRUE)
mle <- jmleGP(gpi)

Above, isotropic routines are used rather than separable ones. It makes no difference in 1d. As an aside, we remark that the jmleGP function is similar to mleGPsep with argument param="both"; “j” here is for “joint”, meaning both lengthscale and nugget. Rather than using a gradient over both parameters, as mleGPsep does, jmleGP performs a coordinate-wise, or profile-style, maximization, iterating until convergence for one hyperparameter conditional on the other, and so on. Sometimes this approach leads to more numerically stable behavior; jmleGPsep works similarly for separable Gaussian kernels. Once hyperparameters have been estimated, prediction proceeds as usual.

p <- predGP(gpi, matrix(X, ncol=1), lite=TRUE)
deleteGP(gpi)

Now we’re ready to visualize the fit, as provided by the predictive mean and 95% intervals in Figure 5.26.

plot(X, y, xlab="x")
lines(X, p$mean)
lines(X, p$mean + 2*sqrt(p$s2), col=2, lty=2)
lines(X, p$mean - 2*sqrt(p$s2), col=2, lty=2)
"FIGURE 5.26: GP fit to an inherently nonstationary input–output relationship.\n\nObserve how the predictive equations struggle to match disparate behavior in the two regimes. Since only one lengthscale must accommodate the entire input domain, the likelihood is faced with a choice between regimes and in this case it clearly favors the left-hand one. A wiggly fit to the right-hand regime is far better than a straight fit to left. As a result, wiggliness bleeds from the left to right.\n\nTwo separate GP fits would have worked much better. Consider …\n\nleft <- X < 9.6\ngpl <- newGP(matrix(X[left], ncol=1), y[left], d=0.1,\ng=0.1*var(y), dK=TRUE)\nmlel <- jmleGP(gpl)\ngpr <- newGP(matrix(X[!left], ncol=1), y[!left], d=0.1,\ng=0.1*var(y), dK=TRUE)\nmler <- jmleGP(gpr, drange=c(eps, 100)) \n\nTo allow the GP to acknowledge a super “flat” right-hand region, the lengthscale (d) range has been extended compared to the usual default. Notice how this approximates a (more) linear fit; alternatively – or perhaps more parsimoniously – a simple lm command could be used here instead.\n\nNow predicting …\n\npl <- predGP(gpl, matrix(X[left], ncol=1), lite=TRUE)\ndeleteGP(gpl)\npr <- predGP(gpr, matrix(X[!left], ncol=1), lite=TRUE)\ndeleteGP(gpr)\n\n… and finally visualization in Figure 5.27.\n\nplot(X, y, xlab=\"x\")\nlines(X[left], pl$mean) lines(X[left], pl$mean + 2*sqrt(pl$s2), col=2, lty=2) lines(X[left], pl$mean - 2*sqrt(pl$s2), col=2, lty=2) lines(X[!left], pr$mean)\nlines(X[!left], pr$mean + 2*sqrt(pr$s2), col=2, lty=2)\nlines(X[!left], pr$mean - 2*sqrt(pr$s2), col=2, lty=2)",
"FIGURE 5.27: Partitioned GP fit to a nonstationary input–output relationship. Compare to Figure 5.26.\n\nAesthetically this is a much better fit. Partitioning can be a powerful tool for flexible modeling, and not just for GPs. Divide-and-conquer can facilitate nonstationarity, through spatial statistical independence, and yield faster calculations with smaller datasets and much smaller $$\\mathcal{O}(n^3)$$ matrix decompositions. The two fits can even be performed in parallel.\n\nBut how to know where to partition without knowing the data generating mechanism? It turns out that there are several clever solutions to that problem. Read on in Chapter 9. In the meantime, we shall see going forward that even stationary GPs have many interesting applications and success stories – as response surfaces, for optimization, calibration and input sensitivity analysis, and more – without worrying (much) about how ideal fits are.\n\n### 5.4.3 Functional and other outputs\n\nFocus has been on $$Y(x) \\in \\mathbb{R}$$; i.e., surrogate modeling for scalar, real-valued outputs. That will remain so throughout the text, but it’s worthwhile commenting on what’s available in greater generality. Modeling a small handful of real-valued outputs simultaneously is easy and hard at the same time. It’s easy because treating each scalar output independently works surprisingly well. Gramacy and Lee (2009) modeled six LGBB outputs (§2.1) independently and without any perceivable ill-effect. It’s hard because just about anything else you try can both be unwieldy and underwhelming. Effective, general-purpose multi-output surrogate modeling lies on the methodological frontier, as it were.\n\nIf there’s a small number $$p$$ of outputs following the same underlying spatial field, but experiencing correlated random shocks of varying magnitude, then cokriging (Ver Hoef and Barry 1998) could help. The idea is to combine $$p \\times p$$ covariances $$\\Sigma^{(Y)}_p$$ with the usual $$n\\times n$$ inverse distance-based ones $$\\Sigma_n = \\tau^2 (K_n + \\mathbb{I} _n g)$$ in a Kronecker layout. Inference for $$\\Sigma^{(Y)}_n$$ is relatively straightforward. MLE and Bayesian posterior are available in closed form conditional on $$\\Sigma_n$$. Trouble is, this isn’t a very realistic situation, at least not when data are generated through computer simulation. One exception may be when outputs differ from one another only in resolution or fidelity of simulations. Cokriging has been applied with some success in such multifidelity settings (Le Gratiet and Garnier 2014; Le Gratiet and Cannamela 2015).\n\nThe linear model of coregionalization (LMC; Journel and Huijbregts 1978; Goovaerts 1997) is a special case or generalization of cokriging depending upon your perspective. Sometimes cokriging, as described above, is referred to an intrinsic coregionalization model (ICM). As the name suggests, LMC allows for a more flexible, linear and covarying relationship between outputs. LMC’s most prominent success stories in computer surrogate modeling involve simulators providing additional derivative information. Taylor’s theorem justifies a linear structure. For examples, see Bompard, Peter, and Desideri (2010) and references therein. For a machine learning perspective and Python implementation, see GPy.\n\nFunctional output is rather more common in our field. Simulators may provide realizations $$Y(x,t) \\in \\mathbb{R}$$ across an entire index set $$t=1,\\dots,T$$, simultaneously for each $$x$$. 
The trouble is, this idea is hard to put into practice if $$N = n \times T$$ is big, as it would be with any nontrivial $$T$$. Working with $$N \times N$$ matrices becomes unwieldy except by approximation (Chapter 9), or when the design in $$x$$-space has special structure (e.g., high degrees of replication, as in Chapter 10).

A more parsimonious approach leverages functional bases for outputs and independent surrogate modeling of weights corresponding to a small number of principal components of that basis (Higdon et al. 2008). This idea was originally developed in a calibration setting (§8.1), but has gained wide traction in a number of surrogate modeling situations. MATLAB software is available as part of the GPMSA toolkit (Gattiker et al. 2016). Fadikar et al. (2018) demonstrate use in a quantile regression setting (Plumlee and Tuo 2014) for an epidemiological inverse problem pairing a disease outbreak simulator with Ebola data from Liberia. Sun, Gramacy, Haaland, Lu, et al. (2019) describe a periodic basis for GP smoothing of hourly simulations of solar irradiance across the continental USA. A cool movie showing surrogate irradiance predictions over the span of a year can be found here.

Finally, how about categorical $$Y(x)$$? This is less common in the computer surrogate modeling literature, but GP classification remains popular in ML. See Chapter 3 of Rasmussen and Williams (2006). Software is widely available in Python (e.g., GPy) and MATLAB/Octave (see gpstuff; Vanhatalo et al. 2012). R implementation is provided in kernlab (Karatzoglou, Smola, and Hornik 2018) and plgp (Gramacy 2014). Bayesian optimization under constraints (§7.3) sometimes leverages classification surrogates to model binary constraints. GP classifiers work well here (Gramacy and Lee 2011), but so too do other common nonparametric classifiers like random forests (Breiman 2001). See §7.3.2 for details.

## 5.5 Homework exercises

These exercises give the reader an opportunity to explore Gaussian process regression: properties, enhancements and extensions, and related methods.

#### #1: Bayesian zero-mean GP

Consider the following data-generating mechanism $$Y \sim \mathcal{N}_n (0, \tau^2 K_n)$$ and place prior $$\tau^2 \sim \mathrm{IG}\left(\frac{a}{2}, \frac{b}{2}\right)$$ on the scale parameter. Use the following parameterization of the inverse gamma $$\mathrm{IG}(\theta; \beta, \alpha)$$ density, expressed generically for a parameter $$\theta > 0$$ given shape $$\alpha > 0$$ and scale $$\beta > 0$$: $$f(\theta) = \frac{\beta^\alpha}{\Gamma(\alpha)} \theta^{-(\alpha+1)} e^{-\beta/\theta}$$, where $$\Gamma$$ is the gamma function.
a. Show that the IG prior for $$\tau^2$$ is conditionally conjugate by deriving the closed form of the posterior conditional distribution of $$\tau^2$$ given all other hyperparameters, i.e., those involved in $$K_n$$.
b. Choosing $$a = b = 0$$ prescribes a reference prior (Berger, De Oliveira, and Sansó 2001; Berger, Bernardo, and Sun 2009) which is equivalent to $$p(\tau^2) \propto 1/\tau^2$$. This prior is improper. Nevertheless, derive the closed form of the posterior conditional distribution for $$\tau^2$$ and argue that it’s proper under a condition that you shall specify. Characterize the posterior conditional for $$\tau^2$$ in terms of $$\hat{\tau}^2$$.
c. Now consider inference for the hyperparameterization of $$K_n$$. Derive the marginal posterior $$p(K_n \mid Y_n)$$, i.e., as may be obtained by integrating out the scale parameter $$\tau^2$$ under the IG prior above; however, you may find other means equally viable. Use generic $$p(K_n)$$ notation for the prior on the covariance hyperparameterization, independent of $$\tau^2$$. How does this (log) posterior density compare to the concentrated log likelihood (5.8) under the reference prior?
d. Deduce the form of the marginal predictive equations $$p(Y(x) \mid K_n, Y_n)$$ at a new location $$x$$, i.e., as may be obtained by integrating out $$\tau^2$$. Careful, they’re not Gaussian, but they are members of a familiar family. How do these equations change in the reference prior setting?

#### #2: GP with a linear mean

Consider the following data-generating mechanism $$Y \sim \mathcal{N}_n (\beta_0 + X_n\beta, \tau^2 K_n)$$ where

• $$K_n = C_n + g \mathbb{I}_n$$,
• $$C_n$$ is an $$n \times n$$ correlation matrix defined by a positive definite function $$C_\theta(x, x')$$ calculated on the $$n$$ rows of $$X_n$$, and which has lengthscale hyperparameters $$\theta$$,
• and $$g$$ is a nugget hyperparameter, which must be positive.

There are no restrictions on the coefficients $$\beta_0$$ and $$\beta$$, except that the dimension $$m$$ of $$\beta$$ matches the column dimension of $$X_n$$.

a. Argue, at a high level, that this specification is essentially equivalent to the following semiparametric model $$y(x) = \beta_0 + x^\top \beta + w(x) + \varepsilon$$, and describe what each component means, and/or what distribution it’s assumed to have.
b. Conditional on hyperparameters $$\theta$$ and $$g$$, obtain closed form expressions for the MLEs $$\hat{\tau}^2$$, $$\hat{\beta}_0$$ and $$\hat{\beta}$$. You might find it convenient to assume, for the purposes of these calculations, that $$X_n$$ contains a leading column of ones, and that $$\beta \equiv [\beta_0, \beta]$$.
c. Provide a concentrated (log) likelihood expression $$\ell(\theta,g)$$ that plugs in the expressions for $$\hat{\tau}^2$$, $$\hat{\beta}_0$$ and $$\hat{\beta}$$ (or a combined $$\hat{\beta}$$) which you derived above.
d. Using point estimates $$\hat{\beta}$$ and $$\hat{\tau}^2$$, and conditioning on $$\theta$$ and $$g$$ settings, derive the predictive equations.

#### #3: Bayesian linear-mean GP

Complete the setup in #2 above with prior $$p(\beta, \tau^2) = p(\beta \mid \tau^2)p(\tau^2)$$ where $$\beta\mid \tau^2 \sim \mathcal{N}_{m+1}(B, \tau^2 V)$$ and $$\tau^2 \sim \mathrm{IG}\left(\frac{a}{2}, \frac{b}{2}\right)$$. Notice that the intercept term $$\beta_0$$ is subsumed into $$\beta$$ in this notation.
a. Show that the MVN prior for $$\beta \mid \tau^2$$ is conditionally conjugate by deriving the closed form of the posterior conditional distribution $$\beta \mid \tau^2, Y_n$$, given all other hyperparameters, i.e., those involved in $$K_n$$.
b. Show that the IG prior for $$\tau^2$$ is conditionally conjugate by deriving the closed form of the posterior conditional distribution $$\tau^2 \mid \beta, Y_n$$, given all other hyperparameters, i.e., those involved in $$K_n$$.
c. Under the reference prior $$p(\beta, \tau^2) \propto 1/\tau^2$$, which is improper, how do the forms of the posterior conditionals change? Under what condition(s) are these conditionals still proper? (Careful, proper conditionals don’t guarantee a proper joint.) Express the $$\beta$$ conditional as a function of $$\hat{\beta}$$ from exercise #2.

For the remainder of this question, parts #d–f, use the reference prior to keep the math simple.

d. Derive the marginalized posterior distribution $$\tau^2 \mid Y_n$$, given $$K_n$$, i.e., as may be obtained by integrating out $$\beta$$; however, you may choose to utilize other means. Express your distribution as a function of $$\hat{\beta}$$ and $$\hat{\tau}^2$$ from exercise #2.
e. Now consider inference for the hyperparameterization of $$K_n$$. Derive the marginal posterior $$p(K_n \mid Y_n)$$ up to a normalizing constant, i.e., as may be obtained by integrating out both the linear mean parameter $$\beta$$ and scale $$\tau^2$$ under their joint reference prior. Use generic $$p(K_n)$$ notation for the prior on the covariance hyperparameterization, independent of $$\tau^2$$ and $$\beta$$. How does the form of this density compare to the concentrated log likelihood in #2c above? Under what condition(s) is this density proper?
f. Deduce the form of the marginal predictive equations $$p(Y(x) \mid K_n, Y_n)$$ at a new location $$x$$, i.e., as may be obtained by integrating out $$\beta$$ and $$\tau^2$$. Careful, they’re not Gaussian, but they are members of a familiar family.

#### #4: Implementing the Bayesian linear-mean GP

Code up the marginal posterior $$p(K_n \mid Y_n)$$ from #3e and the marginal predictive equations $$p(Y(x) \mid K_n, Y_n)$$ from #3f and try them out on the Friedman data. Take the reference prior $$p(\beta, \tau^2) \propto 1/\tau^2$$ and define $$p(K_n)$$ as independent gamma priors on the isotropic (Gaussian family) lengthscale and nugget as follows:

$\theta \sim G(3/2, 1) \quad \mbox{ and } \quad g \sim G(3/2, 1/2),$

providing shape and rate parameters to dgamma in R, respectively.

i. Use the marginal posterior as the basis of a Metropolis–Hastings scheme for sampling from the posterior distribution of the lengthscale $$\theta$$ and nugget $$g$$ hyperparameters. Provide a visual comparison between these marginal posterior densities and the point estimates we obtained in the chapter. How influential was the prior?
ii. Use the marginal posterior predictive equations to augment Table 5.1’s RMSEs and scores collecting out-of-sample results from comparators in §5.2.5–5.2.6. (You might consider more random training/testing partitions, as in our bakeoff in §5.2.7, extended in #7 below.)
iii. Use boxplots to summarize the marginal posterior distribution of the regression coefficients $$\beta$$. Given what you know about the Friedman data generating mechanism, how do these boxplots compare with the “truth”?
You will need #3d and #3a for this part.

Suppose you knew, a priori, that only the first three inputs contributed nonlinearly to the response. How would you change your implementation to reflect this knowledge, and how do the outputs/conclusions (#ii–iii) change?

#### #5: Matérn kernel

Revisit noise-free versions of our 1d sinusoidal (§5.1.1 and Figure 5.3) and 2d exponential (§5.1.2 and Figure 5.5) examples with Matérn $$\nu = 3/2$$ and $$\nu = 5/2$$ kernels (5.18). Extend the concentrated log likelihood and gradient functions to learn an $$m$$-vector of lengthscale hyperparameters $$\hat{\theta}$$ using nugget $$g=0$$ and no $$\epsilon$$ jitter. Define this separable Matérn in product form as $$k_{\nu,\theta}(r) = \prod_{\ell=1}^m k_{\nu, \theta_\ell}(r^{(\ell)})$$ where $$r^{(\ell)}$$ is based on (original, not squared) distances calculated only on the $$\ell^{\mathrm{th}}$$ input coordinate. Provide visuals of the resulting surfaces using the predictive grids established along with those examples. Qualitatively (looking at the visuals) and quantitatively (via Mahalanobis distance (5.7) calculated out of sample), how do these surfaces compare to the Gaussian kernel alternative (with jitter and with estimated lengthscale(s))?

For another sensible vectorized lengthscale option, see “ARD Matérn” here. ARD stands for “automatic relevance determination”, which comes from the neural networks/machine learning literature, allowing a hyperparameter to control the relative relevance of each input coordinate. For the Gaussian family the two definitions, product form and ARD, are the same. But for Matérn they differ ever so slightly.

#### #6: Splines v. GP

Revisit the 2d exponential data (§5.1.2 and Figure 5.5), and make a comparison between spline and GP predictors. For a review of splines, see the supplement linked here. Generate a random uniform design of size $$n=100$$ in $$[-2,4]^2$$ and observe random responses under additive Gaussian error with a mean of zero and a standard deviation of 0.001. This is your training set. Then generate a dense $$100 \times 100$$ predictive grid in 2d, and obtain (again noisy) responses at those locations, which you will use as a testing set.

Ignoring the testing responses, use the training set to obtain predictions on the testing input grid under

a. a spline model with a tensor product basis provided in splines2d.R;
b. a zero-mean GP predictor with an isotropic Gaussian correlation function, whose hyperparameters (including nugget, scale, and lengthscale) are inferred by maximum likelihood;
c. MARS in the mda package for R.

You may wish to follow the format in splines2d.R for your GP and MARS comparators. Consider the following benchmarking metrics:

i. computation time for inference and prediction combined;
ii. RMSE on the testing set.

Once you’re satisfied with your setup using one random training/testing partition, put a “for” loop around everything and do 99 more MC repetitions of the experiment (for 100 total), each with a novel random training and testing set as defined above. Make boxplots collecting results for #i–ii above and thereby summarize the distribution of those metrics over the randomized element(s) in the experiment.

#### #7: MARS v. GP redux

Revisit the MARS v. GP bakeoff (§5.2.7) with five additional predictors.

a. MARS via mda with degree=2.
b. MARS via earth with default arguments.
c. MARS via earth with degree=2.
d. Bayesian MARS via the bass function with default arguments from the BASS package (Francom 2017) on CRAN.
e. Bayesian GP with jumps to the limiting linear model (LLM; Gramacy and Lee 2008b) via bgpllm with default arguments in the tgp package on CRAN.

Rebuild the RMSE and time boxplots to incorporate these new predictors; ignore proper score unless you’d like to comb the tgp and BASS documentation to figure out how to extract predictive covariance matrices, which are not provided by default.

BASS supports a simulated tempering scheme to avoid Markov chains becoming stuck in local modes of the posterior. Devin Francom recommends the following call for best results on this exercise.

fit.bass <- bass(X, y, verbose=FALSE, nmcmc=40000, nburn=30000, thin=10,
  temp.ladder=(1+0.27)^((1:9)-1))

A similar importance tempering feature (Gramacy, Samworth, and King 2010) is implemented in tgp and is described in more detail in the second package vignette (Gramacy and Taddy 2010). Also see ?default.itemps in the package documentation. The curious reader may wish to incorporate these gold standards for brownie points.

#### #8: Langley Glide-Back Booster

In this problem, revisit #6 on the “original” LGBB drag response.

lgbb <- read.table("lgbb/lgbb_original.txt", header=TRUE)
X <- lgbb[,1:3]
y <- lgbb$drag

Note, however, that the scales of LGBB’s inputs are heterogeneous, and quite different from #6. At the least, it’d be wise to code your inputs. For best results, you might wish to upgrade the isotropic GP comparator from #6 to a separable version.

A. Consider the subset of the data where the side-slip angle is zero, so that it’s a 2d problem. Create a random training and testing partition of the data so that about half is for training and half for testing, and then perform exactly #a–c with #i–ii from #6, above, and report on what you find.
B. Put a “for” loop around everything and do 100 MC repetitions of the above experiment, each with a novel random training and testing set as defined above. Then make boxplots of the RMSE results collected for #i–ii above, and thereby summarize the distribution of those metrics over the randomized element(s) in the experiment.
C. Now return to the full set of data, i.e., for all side-slip angles. Since the number of observations, $$n$$, is bigger than 3000, you won’t be able to get very far with a random 50:50 split. Instead, do a 20:80 split or whatever you think you can manage. Also, it’ll be tough to do a spline approach with a tensor product basis in 3d, so perhaps ignore comparator #6a unless you’re feeling particularly brave. Otherwise perform exactly #a–c with #i–ii from #6, above, and report on what you find. (That is, do like in part #A, without the “for” loop from #B.)
D. Repeat #C with a GP scheme using an axis-aligned partition. Fit two GP models in 3d where the first one uses the subset of the training data with mach < 2 and the second uses mach >= 2. Since you’re dividing-and-conquering, you can probably afford a 50:50 split for training and testing.

#### #9: Convolution kernel width

Revisit the 1d multi-tiered periodic example (5.19) as treated by convolution in §5.4.1.

a. Write down a concentrated log likelihood for the kernel width parameter $$\theta$$, notated as sd in the example, and provide an implementation in code.
b. Plot the concentrated log likelihood over the range sd $$\equiv \theta \in [0.4, 4]$$ and note the optimal setting.
c. Verify the result with optimize applied to your concentrated log likelihood.
\n\n#### #9: Convolution kernel width\n\nRevisit the 1d multi-tiered periodic example (5.19) as treated by convolution in §5.4.1.\n\n1. Write down a concentrated log-likelihood for the kernel width parameter $$\theta$$, notated as sd in the example, and provide an implementation in code.\n2. Plot the concentrated log-likelihood over the range sd $$\equiv \theta \in [0.4, 4]$$ and note the optimal setting.\n3. Verify the result with optimize on your concentrated log-likelihood.\n4. How does the value you inferred compare to the two settings entertained in §5.4.1? How do the three predictive surfaces compare visually?\n\n### References\n\nMatheron, G. 1963. “Principles of Geostatistics.” Economic Geology 58 (8). Society of Economic Geologists: 1246–66.\n\nGramacy, RB. 2014. plgp: Particle Learning of Gaussian Processes. https://CRAN.R-project.org/package=plgp.\n\nNeal, R. 1998. “Regression and Classification Using Gaussian Process Priors.” Bayesian Statistics 6: 475.\n\nGenz, A, F Bretz, T Miwa, X Mi, and T Hothorn. 2018. mvtnorm: Multivariate Normal and $$t$$ Distributions. https://CRAN.R-project.org/package=mvtnorm.\n\nSantner, TJ, BJ Williams, and W Notz. 2018. The Design and Analysis of Computer Experiments, Second Edition. New York, NY: Springer–Verlag.\n\nAnkenman, B, BL Nelson, and J Staum. 2010. “Stochastic Kriging for Simulation Metamodeling.” Operations Research 58 (2). INFORMS: 371–82.\n\nBerger, JO, V De Oliveira, and B Sansó. 2001. “Objective Bayesian Analysis of Spatially Correlated Data.” Journal of the American Statistical Association 96 (456). Taylor & Francis: 1361–74.\n\nBerger, JO, JM Bernardo, and D Sun. 2009. “The Formal Definition of Reference Priors.” The Annals of Statistics 37 (2). Institute of Mathematical Statistics: 905–38.\n\nGramacy, RB, and NG Polson. 2011. “Particle Learning of Gaussian Process Models for Sequential Design and Optimization.” Journal of Computational and Graphical Statistics 20 (1). Taylor & Francis: 102–18.\n\nGneiting, T, and AE Raftery. 2007. “Strictly Proper Scoring Rules, Prediction, and Estimation.” Journal of the American Statistical Association 102 (477). Taylor & Francis: 359–78.\n\nBastos, LS, and A O’Hagan. 2009. “Diagnostics for Gaussian Process Emulators.” Technometrics 51 (4). Taylor & Francis: 425–38.\n\nByrd, RH, P Lu, J Nocedal, and C Zhu. 1995. “A Limited Memory Algorithm for Bound Constrained Optimization.” SIAM Journal on Scientific Computing 16 (5). Society for Industrial and Applied Mathematics: 1190–1208.\n\nGramacy, RB. 2016. “laGP: Large-Scale Spatial Modeling via Local Approximate Gaussian Processes in R.” Journal of Statistical Software 72 (1). Foundation for Open Access Statistics: 1–46.\n\nGramacy, RB, and F Sun. 2018. laGP: Local Approximate Gaussian Process Regression. http://bobby.gramacy.com/r_packages/laGP.\n\nFriedman, JH. 1991. “Multivariate Adaptive Regression Splines.” The Annals of Statistics 19 (1). Institute of Mathematical Statistics: 1–67.\n\nHastie, T, R Tibshirani, and JH Friedman. 2009. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. New York, NY: Springer.\n\nLeisch, F, K Hornik, and BD Ripley. 2017. mda: Mixture and Flexible Discriminant Analysis. https://CRAN.R-project.org/package=mda.\n\nMilborrow, S. 2019. earth: Multivariate Adaptive Regression Splines. https://CRAN.R-project.org/package=earth.\n\nFrancom, D. 2017. BASS: Bayesian Adaptive Spline Surfaces. https://CRAN.R-project.org/package=BASS.\n\nDancik, GM. 2018. mlegp: Maximum Likelihood Estimates of Gaussian Processes. https://CRAN.R-project.org/package=mlegp.\n\nMacDonald, B, H Chipman, and P Ranjan. 2019. GPfit: Gaussian Processes Modeling. https://CRAN.R-project.org/package=GPfit.\n\nRipley, BD. 2015. spatial: Functions for Kriging and Point Pattern Analysis. https://CRAN.R-project.org/package=spatial.\n\nNychka, D, R Furrer, J Paige, and S Sain. 2019. fields: Tools for Spatial Data. https://CRAN.R-project.org/package=fields.\n\nGu, M, J Palomo, and J Berger. 2018. 
RobustGaSP: Robust Gaussian Stochastic Process Emulation. https://CRAN.R-project.org/package=RobustGaSP.\n\nKaratzoglou, A, A Smola, and K Hornik. 2018. kernlab: Kernel-Based Machine Learning Lab. https://CRAN.R-project.org/package=kernlab.\n\nGramacy, RB, and MA Taddy. 2016. tgp: Bayesian Treed Gaussian Process Models. https://CRAN.R-project.org/package=tgp.\n\nHankin, RKS. 2019. emulator: Bayesian Emulation of Computer Programs. https://CRAN.R-project.org/package=emulator.\n\nFinley, A, and S Banerjee. 2019. spBayes: Univariate and Multivariate Spatial-Temporal Modeling. https://CRAN.R-project.org/package=spBayes.\n\nVanhatalo, J, J Riihimäki, J Hartikainen, P Jylänki, V Tolvanen, and A Vehtari. 2012. “Bayesian Modeling with Gaussian Processes Using the GPstuff Toolbox.” Preprint on ArXiv:1206.5754.\n\nErickson, CB, BE Ankenman, and SM Sanchez. 2018. “Comparison of Gaussian Process Modeling Software.” European Journal of Operational Research 266 (1). Elsevier: 179–92.\n\nRasmussen, CE, and CKI Williams. 2006. Gaussian Processes for Machine Learning. Cambridge, MA: MIT Press.\n\nPark, T, and G Casella. 2008. “The Bayesian Lasso.” Journal of the American Statistical Association 103 (482). Taylor & Francis: 681–86.\n\nCarlin, BP, and NG Polson. 1991. “Inference for Nonconjugate Bayesian Models Using the Gibbs Sampler.” Canadian Journal of Statistics 19 (4). Wiley Online Library: 399–405.\n\nAdler, RJ. 2010. The Geometry of Random Fields. SIAM.\n\nAndrianakis, I, and PG Challenor. 2012. “The Effect of the Nugget on Gaussian Process Emulators of Computer Models.” Computational Statistics & Data Analysis 56 (12). Elsevier: 4215–28.\n\nGramacy, RB, and HKH Lee. 2012. “Cases for the Nugget in Modeling Computer Experiments.” Statistics and Computing 22 (3): 713–22.\n\nPeng, CY, and CFJ Wu. 2014. “On the Choice of Nugget in Kriging Modeling for Deterministic Computer Experiments.” Journal of Computational and Graphical Statistics 23 (1). Taylor & Francis: 151–68.\n\nStein, ML. 2012. Interpolation of Spatial Data: Some Theory for Kriging. New York, NY: Springer Science & Business Media.\n\nDurrande, N, D Ginsbourger, and O Roustant. 2012. “Additive Covariance Kernels for High-Dimensional Gaussian Process Modeling.” In Annales de La Faculté Des Sciences de Toulouse: Mathématiques, 21 (3):481–99.\n\nGramacy, RB, and H Lian. 2012. “Gaussian Process Single-Index Models as Emulators for Computer Experiments.” Technometrics 54 (1). Taylor & Francis Group: 30–41.\n\nAbrahamsen, P. 1997. “A Review of Gaussian Random Fields and Correlation Functions.” Norsk Regnesentral/Norwegian Computing Center Oslo, https://www.nr.no/directdownload/917_Rapport.pdf.\n\nWendland, H. 2004. Scattered Data Approximation. Cambridge, England: Cambridge University Press.\n\nDeville, Y, D Ginsbourger, and O Roustant. 2018. kergp: Gaussian Process Laboratory. https://CRAN.R-project.org/package=kergp.\n\nQian, PZG, H Wu, and CFJ Wu. 2008. “Gaussian Process Models for Computer Experiments with Qualitative and Quantitative Factors.” Technometrics 50 (3). Taylor & Francis: 383–96.\n\nZhou, Q, PZG Qian, and S Zhou. 2011. “A Simple Approach to Emulation for Computer Models with Qualitative and Quantitative Factors.” Technometrics 53 (3). Taylor & Francis: 266–73.\n\nZhang, Y, S Tao, W Chen, and DW Apley. 2018. “A Latent Variable Approach to Gaussian Process Modeling with Qualitative and Quantitative Factors.” Preprint on ArXiv:1806.07504.\n\nLanckriet, GRG, N Cristianini, P Bartlett, LE Ghaoui, and MI Jordan. 2004. 
“Learning the Kernel Matrix with Semidefinite Programming.” Journal of Machine Learning Research 5 (Jan): 27–72.\n\nHigdon, D. 2002. “Space and Space-Time Modeling Using Process Convolutions.” In Quantitative Methods for Current Environmental Issues, 37–56. New York, NY: Springer.\n\nKatzfuss, M. 2017. “A Multi-Resolution Approximation for Massive Spatial Datasets.” Journal of the American Statistical Association 112 (517). Taylor & Francis: 201–14.\n\nPaciorek, CJ. 2003. “Nonstationary Gaussian Processes for Regression and Spatial Modelling.” PhD thesis, Carnegie Mellon University.\n\nPaciorek, CJ, and MJ Schervish. 2006. “Spatial Modelling Using a New Class of Nonstationary Covariance Functions.” Environmetrics: The Official Journal of the International Environmetrics Society 17 (5). Wiley Online Library: 483–506.\n\nDunlop, MM, MA Girolami, AM Stuart, and AL Teckentrup. 2018. “How Deep Are Deep Gaussian Processes?” The Journal of Machine Learning Research 19 (1). JMLR.org: 2100–2145.\n\nDamianou, A, and N Lawrence. 2013. “Deep Gaussian Processes.” In Artificial Intelligence and Statistics, 207–15.\n\nGramacy, RB, and HKH Lee. 2008a. “Bayesian Treed Gaussian Process Models with an Application to Computer Modeling.” Journal of the American Statistical Association 103 (483). Taylor & Francis: 1119–30.\n\nGramacy, RB, and HKH Lee. 2009. “Adaptive Design and Analysis of Supercomputer Experiments.” Technometrics 51 (2). Taylor & Francis: 130–45.\n\nVer Hoef, J, and RP Barry. 1998. “Constructing and Fitting Models for Cokriging and Multivariate Spatial Prediction.” Journal of Statistical Planning and Inference 69: 275–94.\n\nLe Gratiet, L, and J Garnier. 2014. “Recursive Co-Kriging Model for Design of Computer Experiments with Multiple Levels of Fidelity.” International Journal for Uncertainty Quantification 4 (5). Begel House Inc.\n\nLe Gratiet, L, and C Cannamela. 2015. “Cokriging-Based Sequential Design Strategies Using Fast Cross-Validation Techniques for Multi-Fidelity Computer Codes.” Technometrics 57 (3). Taylor & Francis: 418–27.\n\nJournel, AG, and CJ Huijbregts. 1978. Mining Geostatistics. Vol. 600. Academic Press.\n\nGoovaerts, P. 1997. Geostatistics for Natural Resources Evaluation. Oxford University Press on Demand.\n\nBompard, M, J Peter, and JA Desideri. 2010. “Surrogate Models Based on Function and Derivative Values for Aerodynamic Global Optimization.” In V European Conference on Computational Fluid Dynamics ECCOMAS CFD 2010.\n\nHigdon, D, J Gattiker, B Williams, and M Rightley. 2008. “Computer Model Calibration Using High-Dimensional Output.” Journal of the American Statistical Association 103 (482). Taylor & Francis: 570–83.\n\nGattiker, J, K Myers, B Williams, D Higdon, M Carzolio, and A Hoegh. 2016. “Gaussian Process-Based Sensitivity Analysis and Bayesian Model Calibration with GPMSA.” Handbook of Uncertainty Quantification. Springer, 1–41.\n\nFadikar, A, D Higdon, J Chen, B Lewis, S Venkatramanan, and M Marathe. 2018. “Calibrating a Stochastic, Agent-Based Model Using Quantile-Based Emulation.” SIAM/ASA Journal on Uncertainty Quantification 6 (4). SIAM: 1685–1706.\n\nPlumlee, M, and R Tuo. 2014. “Building Accurate Emulators for Stochastic Simulations via Quantile Kriging.” Technometrics 56 (4). Taylor & Francis: 466–73.\n\nSun, F, RB Gramacy, B Haaland, S Lu, and Y Hwang. 2019. “Synthesizing Simulation and Field Data of Solar Irradiance.” Statistical Analysis and Data Mining 12 (4): 311–24.\n\nGramacy, RB, and HKH Lee. 2011. 
“Optimization Under Unknown Constraints.” In Bayesian Statistics. Vol. 9. Oxford University Press.\n\nBreiman, L. 2001. “Random Forests.” Machine Learning 45 (1). Springer: 5–32.\n\nGramacy, RB, and HKH Lee. 2008b. “Gaussian Processes and Limiting Linear Models.” Computational Statistics & Data Analysis 53 (1). Elsevier: 123–36.\n\nGramacy, RB, R Samworth, and R King. 2010. “Importance Tempering.” Statistics and Computing 20 (1): 1–7.\n\nGramacy, RB, and MA Taddy. 2010. “Categorical Inputs, Sensitivity Analysis, Optimization and Importance Tempering with tgp Version 2, an R Package for Treed Gaussian Process Models.” Journal of Statistical Software 33 (6): 1–48.\n\n1. Qian, Wu, and Wu (2008)’s method for categorical inputs exploits a dual relationship with multiple-output modeling. Kronecker structure in the resulting $$N \\times N$$ matrices can make an otherwise unwieldy decomposition manageable."
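, "A tiny sketch of that Kronecker observation (ours, not from the text; all names here are made up): with Kc the q x q correlation across the categorical levels and Kx the n x n correlation across the numeric inputs,\n\nK <- kronecker(Kc, Kx)                 ## the full (q*n) x (q*n) covariance matrix\nKi <- kronecker(solve(Kc), solve(Kx))  ## its inverse, without ever factorizing K itself\nldetK <- n*determinant(Kc)$modulus + q*determinant(Kx)$modulus  ## log-determinant, since det(A %x% B) = det(A)^n det(B)^q\n\nso the otherwise unwieldy O((qn)^3) decomposition collapses into one O(q^3) and one O(n^3) factorization."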
]
| [
null,
"https://bookdown.org/rbg/surrogates/surrogates_files/figure-html/gpprior-1.png",
null,
"https://bookdown.org/rbg/surrogates/surrogates_files/figure-html/gpprior3-1.png",
null,
"https://bookdown.org/rbg/surrogates/surrogates_files/figure-html/sin1-1.png",
null,
"https://bookdown.org/rbg/surrogates/surrogates_files/figure-html/2dprior-1.png",
null,
"https://bookdown.org/rbg/surrogates/surrogates_files/figure-html/2dex-1.png",
null,
"https://bookdown.org/rbg/surrogates/surrogates_files/figure-html/2dex-persp-1.png",
null,
"https://bookdown.org/rbg/surrogates/surrogates_files/figure-html/amp-1.png",
null,
"https://bookdown.org/rbg/surrogates/surrogates_files/figure-html/sin1-amp-1.png",
null,
"https://bookdown.org/rbg/surrogates/surrogates_files/figure-html/sin1-scale-1.png",
null,
"https://bookdown.org/rbg/surrogates/surrogates_files/figure-html/sin1-nug-1.png",
null,
"https://bookdown.org/rbg/surrogates/surrogates_files/figure-html/x2y2-1.png",
null,
"https://bookdown.org/rbg/surrogates/surrogates_files/figure-html/mixsin-llik-1.png",
null,
"https://bookdown.org/rbg/surrogates/surrogates_files/figure-html/mixsin-1.png",
null,
"https://bookdown.org/rbg/surrogates/surrogates_files/figure-html/conv-prior-1.png",
null,
"https://bookdown.org/rbg/surrogates/surrogates_files/figure-html/mixsin-conv-1.png",
null,
"https://bookdown.org/rbg/surrogates/surrogates_files/figure-html/mixsin-wiggly-1.png",
null,
"https://bookdown.org/rbg/surrogates/surrogates_files/figure-html/2dexp-conv-1.png",
null,
"https://bookdown.org/rbg/surrogates/surrogates_files/figure-html/nons-1.png",
null,
"https://bookdown.org/rbg/surrogates/surrogates_files/figure-html/nons2-1.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8425717,"math_prob":0.99629134,"size":195742,"snap":"2020-10-2020-16","text_gpt3_token_len":52289,"char_repetition_ratio":0.12369085,"word_repetition_ratio":0.04829865,"special_character_ratio":0.27465746,"punctuation_ratio":0.13765438,"nsfw_num_words":6,"has_unicode_error":false,"math_prob_llama3":0.9993111,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-28T06:07:11Z\",\"WARC-Record-ID\":\"<urn:uuid:c9ec3511-4bb9-4990-a551-07d1a4269a92>\",\"Content-Length\":\"405009\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d7883ab8-4512-4beb-9927-93dba07fb0ac>\",\"WARC-Concurrent-To\":\"<urn:uuid:8ce0cf13-4d12-438b-9da2-62f3b5285aab>\",\"WARC-IP-Address\":\"54.164.212.190\",\"WARC-Target-URI\":\"https://bookdown.org/rbg/surrogates/chap5.html\",\"WARC-Payload-Digest\":\"sha1:A52ZWWNZWG6NAOQRXZUDBA2EGKM4JTEP\",\"WARC-Block-Digest\":\"sha1:6EE2N4AQ37HX6RXLZOXHTSULOI37SQCW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875147054.34_warc_CC-MAIN-20200228043124-20200228073124-00470.warc.gz\"}"} |
https://epjam.edp-open.org/articles/epjam/full_html/2017/01/epjam160013/epjam160013.html | [
"Issue EPJ Appl. Metamat. Volume 4, 2017 Artificial materials for advanced applications in electromagnetics and mechanics 5 6 https://doi.org/10.1051/epjam/2017002 15 February 2017\n\n© C.A. Valagiannopoulos et al., Published by EDP Sciences, 2017",
null,
"This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.\n\n## 1 General principles\n\n### 1.1 Introduction and motivation\n\nThe selection of configurations and materials to achieve a desired distribution of electromagnetic energy in space and time is a general and fascinating topic covering a broad scientific area within electromagnetics and photonics. In particular, how the available energy from a primary source can be very rapidly and suitably delivered to another region of space is an issue with numerous applications. Designs of effective photovoltaic cells [1, 2], wireless charging components [3, 4] and light steering subwavelength metasurfaces [5, 6] are only a few of the application areas with huge practical interest.\n\nAdditionally, one remarkable class of problems where wireless transfer of power and its distribution is crucial concerns understanding, design, and fabrication of electromagnetic absorbers. A thorough overview of potential topologies for ultra-efficient absorbers wrapped into a general theory of thin perfect absorbing layers is provided in , where suitable models and designs are given for each of the identified families. Furthermore, so-called perfect metamaterial absorbers have been fabricated with sole use of metallic elements which are experimentally tested with excellent results , and are examined in alternative structures [9, 10]. Other interesting designs in the visible spectrum [11, 12], the sub-THz spectrum and at radio frequencies have been reported to exhibit high efficiency combined with broadband features.\n\nIt should be stressed that the performance of all the aforementioned absorbing configurations is bounded by the performance of the so-called ideal black body, which completely absorbs all the incident electromagnetic radiation, regardless of the angle of incidence and polarization; the corresponding concept is widely used in thermodynamics, optics, and radio engineering. Several attempts to emulate the response of a black body have been made in acoustics and photonics with some success. However, very recently absorbing structures which break that upper limit posed by perfectly black body have been proposed [17, 18], and they are based on the use of Double-Negative (DNG) uniaxial media which obey the Perfectly Matched Layer (PML) rule. That Conjugate Matched Layer (CML), as we call it, and its variants, would be the major topic of the present work, where alternative excitations are considered. In particular, the proposed CML concept is described in the Section 1.2, while its enormous, super-Planckian absorbing performance is demonstrated in Section 2. The ability of the analyzed component to send a part of the giant near-field power concentrated at its surface, to the far-zone, is examined in Section 3. In particular, we perturb slightly the structure by putting one (Sect. 3.1) or multiple (Sect. 3.2) particles in the region with strong background field. Due to the diffuse scattering occurred from the induced currents, the far-field radiation of the device becomes stronger than that of the corresponding ideal black body.\n\n### 1.2 Conjugate matched layer (CML) concept\n\nIn order to maximize the power wirelessly delivered from a source to a load, the load should be conjugate matched to the internal impedance of the source. 
This well-known maximal-power principle applied in circuits can be generalized to cover electrically sizable electromagnetic structures. The only difference is that, in the latter case, there are infinitely many channels (modes) for transferring energy; if all of them obey the conjugate-matching principle, the transferred power P diverges. In particular, the load can be replaced by a semi-infinite half-space filled with a uniaxial medium of relative constituent properties (εrt, μrt, εrn) and the source by any dipole or multipole placed in the vicinity of the air-medium interface. Considering TM (magnetic field with one sole Cartesian component) illumination (which does not damage the generality), the internal impedance of the source is that of free space, $$Z_0(k_t) = \eta_0 \sqrt{k_0^2 - k_t^2}\,/\,k_0$$, where $$k_0 = 2\pi f \sqrt{\varepsilon_0 \mu_0} = 2\pi/\lambda_0$$ is the free-space wavenumber and kt is the transverse wavenumber for the direction parallel to the interface (t stands for the transverse and n for the normal direction with respect to the surface of the material sample). The symbols η0, λ0, ε0 and μ0 correspond to the free-space impedance, wavelength, permittivity and permeability, respectively ($$e^{+j 2\pi f \tau}$$ time dependence is suppressed, and f is the operational frequency). The TM wave impedance of the uniaxial medium is given by $$Z(k_t) = \eta_0 \sqrt{\varepsilon_{rt}\mu_{rt} k_0^2 - (\varepsilon_{rt}/\varepsilon_{rn})\, k_t^2}\,/\,(\varepsilon_{rt} k_0)$$, where (εrt, εrn) are the transverse and normal relative permittivities and μrt is the transverse permeability. It has been shown that the constituent parameters of the uniaxial medium which can achieve conjugate matching with free space, $$Z(k_t) = Z_0^*(k_t)$$ for every single mode kt, should satisfy the Perfectly Matched Layer (PML) rule but with negative real parts of the material parameters, namely:",
null,
"(1)\n\nThat is why we call such an effective material Conjugate Matched Layer (CML). The ordinary PML just behaves like a perfect “black body” [16, 19] absorbing solely the propagating modes; on the contrary, CML additionally fully exploits all the evanescent waves. Note that the parameter b > 0 represents losses along the transverse direction, which means that a medium defined by (1) is active along the normal direction (Im[εrn] > 0).\n\n## 2 Near-field energy transfer\n\nIf we particularize our research to the test-bed configuration of Figure 1a, namely, a grounded slab excited by a tilted (by the angle θ) electric-dipole line source (expression for the incident field of such source can be found e.g. in ) at distance g from the interface, we can find in analytical form the electromagnetic power P absorbed in the slab. To better understand the variation of P, we consider a slightly perturbed version of the material (1) with εrn = 1/εrt − = 1/μrt − (for δ = 0, we have the perfect CML of (1)). The absorbed power P can be written as a sum of two terms: one expressing the energy transferred via the propagating modes Pprop (which is not dependent on a, b, δ) and another corresponding to the evanescent modes Pevan. If we assume a small perturbation parameter δ → 0, Pevan takes the following form :",
null,
"(2)",
null,
"Figure 1. The configurations offering: (a) extremely high absorbing power and (b) very high emitting power.\n\nIt is remarkable that for the DNG case (a < 0) the aforementioned quantity behaves like Pevan ~ Pprop/δ, which means that it takes very high values when the uniaxial slab exhibits electromagnetic behavior close to that of a perfect CML (1). Furthermore, it should be noted that the sign of the absorbed power Pevan (and accordingly the overall power P = Pprop + Pevan) is negative when δ < 0; it means that the slab acts as a secondary source and pumps (extremely quickly) energy to the system. In other words, our load (the grounded uniaxial slab) can either absorb or emit energy infinitely fast, if it obeys the rule (1) and simultaneously is slightly lossy (δ > 0) or slightly active (δ < 0), respectively. It should be stressed that the aforementioned separation between activity and passivity is valid for the chosen dipole excitation; for a more general characterization of the medium, we should check the signs of the imaginary part of the supported propagation constant and the real part of the wave impedance. Alternatively, one can check the negative definiteness of the dyadic ([εr] − [εr]*), where [εr] is the complex permittivity matrix as a criterion for the structure passivity . Specializing to our example case, the perturbed CML medium is passive for the chosen dipole excitation when δ > 0 but as far as any possible excitation is concerned, we should take values obeying the inequality δ >b/(a2 + b2).\n\nExtremely efficient absorption is demonstrated by Figure 2, where the ratio P/Pprop is represented as a function of a for various loss parameters b > 0. We consider two scenarios: one as in Figure 2a, where the rule (1) is followed (not exactly but with a small δ > 0) and one as in Figure 2b, where the effective parameters of the medium are all passive, namely εrt = μrt = a − jb but εrn = Re[1/εrt] = a/(a2 + b2). In other words, the first case corresponds to a medium with an active normal component but overall the structure absorbs energy from the source (0 < δ → 0) and in the second case we consider a non-active normal component with δ = b/(a2 + b2",
null,
". The most dominant characteristic of the two graphs is the switch of P from very low values for a > 0 (conventional PML “black body” absorption Pprop, double positive DPS medium) to extremely high values in the CML regime (a < 0). It should be stressed that the nonactive case exhibits high absorbing efficiency (P ≫ Pprop), which is, however, smaller than in the CML structure. Furthermore, an interesting feature is the effect of δ in Figure 2a which makes P to increase substantially for small |a < 0|; on the contrary, in Figure 2b, where all the material parameters are passive, the power P is a growing function of |a < 0|.",
null,
"Figure 2. The normalized absorbed power P/Pprop as a function of a = Re[εrt] = Re[μrt] for several values of the loss parameter b in: (a) CML passive case and (b) nonactive case.\n\n## 3 Far-field energy transfer\n\n### 3.1 Single particle\n\nThe structure exhibits extremely high efficiency as an emitter, namely has Pevan, P → −∞ when choosing a small |δ < 0|; however, this does not mean that this huge power travels far away. Due to the nature of evanescent fields which are responsible for the development of Pevan, the resonant fields decay exponentially with the distance from the planar interface. This would not be the case if the slab had a finite size, since its evanescent modes would always have a non-zero propagating factor; however, in this work we confine our research to the simple, analytically solvable configurations of Figure 1. In order to remedy this weakness (of the infinite layer) and enable sending a significant portion of that huge P (which is huge regardless of the choice of the sign of δ) to the far-field region, we add a small cylindrical particle in the near field of the structure (close to x = g interface). Furthermore, to understand better the effect of this particle, we consider a simple plane wave excitation instead of the tilted dipole of Figure 1a, namely, an incident -polarized magnetic field given by:",
null,
". If the single particle is located at the origin of the coordinate system and is made from a perfect magnetic conductor (a lossless scatterer), the magnetic current M measured in Volt that would be induced along it, normalized by the corresponding current M0 in the absence of the CML structure, is written as:",
null,
"(3)where",
null,
",",
null,
", and r is the radius of the particle. The notation",
null,
"is used for the Hankel function of zeroth order and second type. Expression (3) is obtained by assuming that the thickness L of the slab is infinite: k0L → +∞ (the layer behaves like a half space). The choice of the lossless material (magnetic conductor in this example) of the pin does not play a crucial role; it is made on the basis that renders the formulated boundary value problem simpler. Naturally, the type and the size of the particle of course affect the field distributions, but not decisively. Since the background field is very strong in the region, the induced currents along the cylinder will be rather strong; therefore, it will serve the goal of enhancing the radiation regardless of the kind of object we use, as long as the object is electrically small.\n\nIn Figure 3 we show the variation of the magnitude of M/M0 in contour plot with respect to the relative transverse wavenumber of the incident field kt/k0 and the real part of the transverse permittivity/permeability of the CML medium a. In Figure 3a, the CML structure is selected with δ > 0 and we can clearly note that the large enhancement in the current of the radiator happens when the excitation is an evanescent wave (kt > k0); on the contrary, for the propagating modes (kt < k0), the driving current M is much smaller than in the case of vacuum background (M0). Furthermore, the switch between the DPS and DNG media (sign of a), which is indicated by Figure 2 also holds since the variation of the represented quantity for",
null,
"is not significant. In Figure 3b we show the results for a CML structure with δ < 0 and the distribution of |M/M0| does not differ substantially from that of Figure 3a. This feature clearly shows that the radiation enhancement is not much dependent on the sign of δ, namely on the overall character (absorbing or emitting) of the structure for the dipole excitation of Figure 1a.",
null,
"Figure 3. The magnetic current ratio |M/M0| (perturbed CML case) as a function of the relative transverse wavenumber kt/k0 and the real permittivity and permeability a for: (a) δ = 0.005 > 0 and (b) δ = −0.005 < 0. Plot parameters: b = 0.2, k0r = 0.01, k0g = 0.1.\n\nIn Figure 4 we regard the nonactive case of the example grounded slab (where δ = b/(a2 + b2)) and represent the same quantity |M/M0| as above on the same map (kt/k0, a",
null,
". In Figure 4a we consider a pin with the electrical radius k0r = 0.01. Again we see how much suppressed gets the current M when a < 0 and propagating waves (kt < k0) are used as an excitation and how mild is the variation when a > 0 which confirms that a DPS slab does not enhance substantially the induced current (and accordingly the radiated power) on the perfectly magnetically conducting cylinder. It is noteworthy that M is increased when the reported unlimited power concentration is developed (DNG and evanescent modes). The same conclusions are drawn from Figure 4b where a thicker dipole (k0r = 0.05) is considered. One can observe that |M/M0| is smaller across the bottom right region of the map compared to Figure 4a; however, that does not necessarily correspond to a lower radiated power since the pin is (five times) more sizeable.",
null,
"Figure 4. The magnetic current ratio |M/M0| (nonactive CML case, δ = b/(a2 + b2)) as a function of the relative transverse wavenumber kt/k0 and the real permittivity and permeability a for: (a) k0r = 0.01 and (b) k0r = 0.05. Plot parameters: b = 0.2, k0g = 0.1.\n\n### 3.2 Multiple random particles\n\nBy observing the Figures 3 and 4, we remark that the ratio |M/M0| of the induced currents is not very high, when the structure is excited by a propagating plane wave. The same conclusion is true if we consider as excitation, instead of a plane wave, the dipole of Figure 1a. Reciprocally, if we would use this single pin or a single dipole as a radiating antenna, the presence of a resonant CML close to the small source would not increase radiation into the far zone, since only the evanescent modes will be enhanced. That is why we propose the configuration depicted in Figure 1b where multiple pins (acting as “radiation vessels”) are located in the vicinity of the air-medium interface. That large number of randomly distributed particles, covering a length comparable or larger than the wavelength and positioned in the near field of the structure are able to couple the evanescent fields with the propagating free-space modes and radiate far away from the interface. The considered boundary value problem can be treated with an integral equation formulation which fully describes the wave interactions between the source, the CML slab and the cylindrical particles. In this way, one can evaluate the radiated power in the presence (Prad) and the absence (",
null,
") of the particles and accordingly compute the ratio",
null,
"which shows how significant is the radiation enhancement.\n\nThe dramatic radiative effect of the particles located in the near field of the CML body is shown in Figure 5, where the parameter",
null,
"is represented as a function of a for various loss levels b > 0. We consider two cases similar to those of Figure 2 in the presence of numerous electrically small cylinders in the vicinity of the interface. Again the proposed solution works only for DNG media (a < 0) where extremely high values of the radiation enhancement ratio are observed for the CML case (Figure 5a). It is noteworthy, that for a > 0, the radiated far-field is even lower than in the vessel-free structure. Such a result can be explained through partial screening of the propagating modes of the vessel-free structure. In Figure 5b of the nonactive scenario, we notice the same behavior of",
null,
"for small negative a. It can be attributed to out-of-phase far-field responses of the slab and the particles which are of similar magnitudes due to the weak background field as indicated by Figure 2b. One can observe significant similarities between Figures 2 and 5, which is natural since both effects have the same physical reason: resonant excitation of surface modes. In particular, we have evanescent modes in both regions which are vanishing away from the interface and their nature is plasmonic. Clearly, we have two adjacent half spaces filled with materials whose permittivity has opposite signs (their real parts); therefore, collective electronic excitation is taking place and surface plasmons are excited [25, 26]. In contrast with other works , we do not care much about the spatial distribution of electromagnetic field owed (or not) to surface modes; we use the developed surface waves along the boundary as “containers” of concentrated energy, which is either absorbed by the CML slab (without particles) or emitted in free-space (with particles).",
null,
"Figure 5. The radiation enhancement ratio",
null,
"as a function of a = Re[εrt] = Re[μrt] for several losses b in: (a) CML passive case and (b) nonactive case.\n\n## 4 Conclusion\n\nTo conclude, we revisited the recently introduced concept of CML medium (in both its exact and non-active versions), which leads to huge near field enhancement along its interface, and examined the effect of a single particle placed in its vicinity. The particle acts as a radiation vessel and couples the evanescent modes developed on the CML planar surface with cylindrical radiated waves. We report substantial far-field radiation enhancement when multiple randomly placed particles are positioned close to the CML-vacuum interface. The most important limitation of the proposed idea, since it has not been actually implemented, is imposed by the difficulties in fabrication of a CML medium. Towards this direction, we are planning the use of fishnet metamaterials (metallic surfaces with holes) or, alternatively, the employment of binary metasurfaces (gratings with two alternating media). In case of success in realizations, it will pave the way for the fabrication of revolutionary, ultra-efficient electronic and photonic designs.\n\n## Acknowledgments\n\nDr. Valagiannopoulos acknowledges target program No. 011503029 “NU-Berkeley strategic initiative in warm-dense matter, advanced materials and energy sources for 2014–2018” from the Ministry of Education and Science of the Republic of Kazakhstan.\n\n## References\n\n1. H.A. Atwater, A. Polman, Plasmonics for improved photovoltaic devices, Nature Materials 9 (2010) 205–213. [Google Scholar]\n2. G. Li, V. Shrotriya, J. Huang, Y. Yao, T. Moriarty, K. Emery, Y. Yang, High-efficiency solution processable polymer photovoltaic cells by self-organization of polymer blends, Nature Materials 4 (2005) 864–868. [Google Scholar]\n3. A. Kurs, A. Karalis, R. Moffatt, J.D. Joannopoulos, P. Fisher, M. Soljacic, Wireless power transfer via strongly coupled magnetic resonances, Science 317 (2007) 83–86. [CrossRef] [MathSciNet] [PubMed] [Google Scholar]\n4. A.P. Sample, D.A. Meyer, J.R. Smith Analysis, Experimental results, and range adaptation of magnetically coupled resonators for wireless power transfer, IEEE Transactions on Antennas and Propagation 58 (2011) 544–554. [Google Scholar]\n5. N.M. Estakhri, A. Alu, Manipulating optical reflections using engineered nanoscale metasurfaces, Physical Review B 89 (2014) 235419. [Google Scholar]\n6. Y. Radi, V.S. Asadchy, S.A. Tretyakov, Tailoring reflections from thin composite metamirrors, IEEE Transactions on Antennas and Propagation 62 (2014) 3749–3760. [CrossRef] [Google Scholar]\n7. Y. Radi, C.R. Simovski, S.A. Tretyakov, Thin perfect absorbers for electromagnetic waves: theory, design, and realizations, Physical Review Applied 3 (2015) 037001. [CrossRef] [Google Scholar]\n8. N.I. Landy, S. Sajuyigbe, J.J. Mock, D.R. Smith, W.J. Padilla, Perfect metamaterial absorber, Physical Review Letters 100 (2008) 207402. [Google Scholar]\n9. L.L. Spada, L. Vegni, Metamaterial-based wideband electromagnetic wave absorber, Optics Express 6 (2016) 5763–5772. [CrossRef] [Google Scholar]\n10. Y.R. Padooru, A.B. Yakovlev, C.S.R. Kaipa, G.W. Hanson, F. Medina, F. Mesa, A.W. Glisson, New absorbing boundary conditions and analytical model for multilayered mushroom-type metamaterials: applications to wideband absorbers, IEEE Transactions on Antennas and Propagation 60 (2012) 5727–5742. [CrossRef] [Google Scholar]\n11. X. Chen, L. Liu, P.Y. Yu, S.S. 
Mao, Increasing solar absorption for photocatalysis with black hydrogenated titanium dioxide nanocrystals, Science 331 (2011) 746–750.\n12. C.A. Valagiannopoulos, A. Tukiainen, T. Aho, T. Niemi, M. Guina, S.A. Tretyakov, C.R. Simovski, Perfect magnetic mirror and simple perfect absorber in the visible spectrum, Physical Review B 91 (2015) 115305.\n13. B. Wu, H.M. Tuncer, M. Naeem, B. Yang, M.T. Cole, W.I. Milne, Y. Hao, Experimental demonstration of a transparent graphene millimetre wave absorber with 28% fractional bandwidth at 140 GHz, Scientific Reports 6 (2016) 29363.\n14. C.A. Valagiannopoulos, S.A. Tretyakov, Symmetric absorbers realized as gratings of PEC cylinders covered by ordinary dielectrics, IEEE Transactions on Antennas and Propagation 62 (2014) 5089–5098.\n15. Y.I. Bobrovnitskii, Impedance theory of sound absorption: the best absorber and the black body, Acoustical Physics 52 (2006) 638–647.\n16. E.E. Narimanov, A.V. Kildishev, Optical black hole: Broadband omnidirectional light absorber, Applied Physics Letters 95 (2009) 041106.\n17. S.I. Maslovski, C.R. Simovski, S.A. Tretyakov, Overcoming black body radiation limit in free space: metamaterial superemitter, New Journal of Physics 18 (2016) 013034.\n18. C.A. Valagiannopoulos, J. Vehmas, C.R. Simovski, S.A. Tretyakov, S.I. Maslovski, Electromagnetic energy sink, Physical Review B 92 (2015) 245402.\n19. S.D. Gedney, An anisotropic perfectly matched layer – absorbing medium for the truncation of FDTD lattices, IEEE Transactions on Antennas and Propagation 44 (1996) 1630–1639.\n20. C.A. Valagiannopoulos, M.S. Mirmoosa, I.S. Nefedov, S.A. Tretyakov, C.R. Simovski, Hyperbolic-metamaterial antennas for broadband enhancement of dipole emission to free space, Journal of Applied Physics 116 (2014) 163106.\n21. C.A. Valagiannopoulos, How non-reciprocal is an effective permittivity matrix?, Microwave and Optical Technology Letters 56 (2014) 9.\n22. C.A. Valagiannopoulos, On examining the influence of a thin dielectric strip posed across the diameter of a penetrable radiating cylinder, Progress in Electromagnetics Research C 3 (2008) 203–214.\n23. C.A. Valagiannopoulos, S.A. Tretyakov, Theoretical concepts of unlimited-power reflectors, absorbers, and emitters with conjugately matched layers, Physical Review B 94 (2016) 125117.\n24. C.A. Valagiannopoulos, Electromagnetic scattering of the field of a metamaterial slab antenna by an arbitrarily positioned cluster of metallic cylinders, Progress in Electromagnetics Research 114 (2011) 55–66.\n25. J.M. Pitarke, V.M. Silkin, E.V. Chulkov, P.M. Echenique, Theory of surface plasmons and surface-plasmon polaritons, Reports on Progress in Physics 70 (2007) 1–87.\n26. J. Polo, T. Mackay, A. Lakhtakia, Electromagnetic surface waves: a modern perspective, Elsevier, New York, 2013.\n27. R. Yang, Y. Hao, An accurate control of the surface wave using transformation optics, Optics Express 20 (2012) 9341.\n28. S. Xu, H. Xu, H. Gao, Y. Jiang, F. Yu, J.D. Joannopoulos, M. Soljacic, H. Chen, H. Sun, B. 
Zhang, Broadband surface-wave transformation cloak, Proceedings of the National Academy of Sciences of the United States of America 112 (2015) 7635–7638.\n29. L. La Spada, T.M. McManus, A. Dyke, S. Haq, L. Zhang, Q. Cheng, Y. Hao, Surface wave cloak from graded refractive index nanocomposites, Scientific Reports 6 (2016) 29363.\n\nCite this article as: Valagiannopoulos CA, Simovski CR & Tretyakov SA: Breaking the black-body limit with resonant surfaces. EPJ Appl. Metamat. 2017, 4, 5."
]
| [
null,
"https://i.creativecommons.org/l/by/4.0/88x31.png",
null,
"https://epjam.edp-open.org/articles/epjam/full_html/2017/01/epjam160013/epjam160013-eq1.gif",
null,
"https://epjam.edp-open.org/articles/epjam/full_html/2017/01/epjam160013/epjam160013-eq2.gif",
null,
"https://epjam.edp-open.org/articles/epjam/full_html/2017/01/epjam160013/epjam160013-eq3.gif",
null,
"https://epjam.edp-open.org/articles/epjam/full_html/2017/01/epjam160013/epjam160013-eq4.gif",
null,
"https://epjam.edp-open.org/articles/epjam/full_html/2017/01/epjam160013/epjam160013-eq5.gif",
null,
"https://epjam.edp-open.org/articles/epjam/full_html/2017/01/epjam160013/epjam160013-eq6.gif",
null,
"https://epjam.edp-open.org/articles/epjam/full_html/2017/01/epjam160013/epjam160013-fig1_small.jpg",
null,
"https://epjam.edp-open.org/articles/epjam/full_html/2017/01/epjam160013/epjam160013-eq7.gif",
null,
"https://epjam.edp-open.org/articles/epjam/full_html/2017/01/epjam160013/epjam160013-fig2_small.jpg",
null,
"https://epjam.edp-open.org/articles/epjam/full_html/2017/01/epjam160013/epjam160013-eq8.gif",
null,
"https://epjam.edp-open.org/articles/epjam/full_html/2017/01/epjam160013/epjam160013-eq9.gif",
null,
"https://epjam.edp-open.org/articles/epjam/full_html/2017/01/epjam160013/epjam160013-eq10.gif",
null,
"https://epjam.edp-open.org/articles/epjam/full_html/2017/01/epjam160013/epjam160013-eq11.gif",
null,
"https://epjam.edp-open.org/articles/epjam/full_html/2017/01/epjam160013/epjam160013-eq12.gif",
null,
"https://epjam.edp-open.org/articles/epjam/full_html/2017/01/epjam160013/epjam160013-eq13.gif",
null,
"https://epjam.edp-open.org/articles/epjam/full_html/2017/01/epjam160013/epjam160013-fig3_small.jpg",
null,
"https://epjam.edp-open.org/articles/epjam/full_html/2017/01/epjam160013/epjam160013-eq14.gif",
null,
"https://epjam.edp-open.org/articles/epjam/full_html/2017/01/epjam160013/epjam160013-fig4_small.jpg",
null,
"https://epjam.edp-open.org/articles/epjam/full_html/2017/01/epjam160013/epjam160013-eq15.gif",
null,
"https://epjam.edp-open.org/articles/epjam/full_html/2017/01/epjam160013/epjam160013-eq16.gif",
null,
"https://epjam.edp-open.org/articles/epjam/full_html/2017/01/epjam160013/epjam160013-eq17.gif",
null,
"https://epjam.edp-open.org/articles/epjam/full_html/2017/01/epjam160013/epjam160013-eq18.gif",
null,
"https://epjam.edp-open.org/articles/epjam/full_html/2017/01/epjam160013/epjam160013-fig5_small.jpg",
null,
"https://epjam.edp-open.org/articles/epjam/full_html/2017/01/epjam160013/epjam160013-eq19.gif",
null,
"https://epjam.edp-open.org/articles/epjam/full_html/2017/01/epjam160013/epjam160013-fig1_small.jpg",
null,
"https://epjam.edp-open.org/articles/epjam/full_html/2017/01/epjam160013/epjam160013-fig2_small.jpg",
null,
"https://epjam.edp-open.org/articles/epjam/full_html/2017/01/epjam160013/epjam160013-fig3_small.jpg",
null,
"https://epjam.edp-open.org/articles/epjam/full_html/2017/01/epjam160013/epjam160013-fig4_small.jpg",
null,
"https://epjam.edp-open.org/articles/epjam/full_html/2017/01/epjam160013/epjam160013-fig5_small.jpg",
null,
"https://epjam.edp-open.org/articles/epjam/full_html/2017/01/epjam160013/epjam160013-eq19.gif",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.87776273,"math_prob":0.8523128,"size":28936,"snap":"2022-40-2023-06","text_gpt3_token_len":7065,"char_repetition_ratio":0.1251901,"word_repetition_ratio":0.09432314,"special_character_ratio":0.23562345,"punctuation_ratio":0.13867404,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9527479,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62],"im_url_duplicate_count":[null,null,null,5,null,5,null,5,null,5,null,5,null,5,null,10,null,5,null,10,null,5,null,5,null,5,null,5,null,5,null,5,null,10,null,5,null,10,null,5,null,5,null,5,null,5,null,10,null,10,null,10,null,10,null,10,null,10,null,10,null,10,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-28T21:55:23Z\",\"WARC-Record-ID\":\"<urn:uuid:7f763766-7d9d-4af3-8d1a-c01ccfac00c6>\",\"Content-Length\":\"122414\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dcff1d24-9a74-4318-b474-afd2bd91f495>\",\"WARC-Concurrent-To\":\"<urn:uuid:6b398fa0-ece8-4a34-830f-8318370c5e4d>\",\"WARC-IP-Address\":\"167.114.155.65\",\"WARC-Target-URI\":\"https://epjam.edp-open.org/articles/epjam/full_html/2017/01/epjam160013/epjam160013.html\",\"WARC-Payload-Digest\":\"sha1:MAS4UH2FRGJRVDTLBW2BNPBOCHBTI2KI\",\"WARC-Block-Digest\":\"sha1:7FW3QWAQXE6LDLCHFLE4UGFWXWEOKX2I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335286.15_warc_CC-MAIN-20220928212030-20220929002030-00241.warc.gz\"}"} |
https://onlineessay.net/2021/03/18/addition-problem-solving_ff/ | [
"writing-essays\n\n# Addition problem solving\n\nThe advantages and disadvantages of problem-solving practice when learning basic addition facts. we have 12 coloring page sample about addition problem solving there is a river essay including paper sample, paper example, coloring page pictures, coloring page sample, resume models, resume example, resume pictures, and more multiplication word problem worksheets 3rd grade addition solving #92817. we addition problem solving will be different ways to write poems applying the previously ta persuasive essay learnt strategies to solve problems. write health care plans for small businesses the correct addition sentence. understand what is asked. an addition word problem. in this lesson, we'll take a look at addition word problems with two or more variables. addition problem solving practice solving money word problems addition problem solving with addition strategies us constitution essay add-subtract-place value , addition problem solving time-measurement-data | 0 comments well since best creative writing program our standards and textbooks are having students solve addition problems international law topics for research paper using strategies, i mobile phones good or bad essay wanted to do this little mini-series all about what the addition strategies are and how they apply as kids are solving time and measurement and money addition problems year 5 problem-solving investigations: if you education system on essay have a word problem that requires addition, take the sum of all the values. use addition and subtraction within 100 to solve one- and two-step word problems. get your students excited about basic math with this quirky addition lesson addition word problems (1-step word problems) here are some examples and solutions of addition word problems that can be solved in one step. we have 12 coloring page sample about addition problem addition problem solving solving including paper sample, paper example, coloring page pictures, addition problem solving coloring page sample, resume models, resume example, resume pictures, and more fraction problem solving using calculators (rachel barker) doc; choosing the correct operation for word problems (heather stokes) doc; transition words for expository essays division word problems 1 (dhipa begum) doc; division word problems 2 (dhipa begum) doc; unit 9: in this package, students learn how to problem-solve–how how to write paper in chinese to use their what is a thesis in a research paper notebooks and math tools; how to share their thinking; and how to record their solutions problem solving activities use one of more of these steps."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.93444914,"math_prob":0.90343994,"size":2616,"snap":"2021-04-2021-17","text_gpt3_token_len":482,"char_repetition_ratio":0.2166922,"word_repetition_ratio":0.082051285,"special_character_ratio":0.18157493,"punctuation_ratio":0.0861678,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9709515,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-17T18:12:14Z\",\"WARC-Record-ID\":\"<urn:uuid:4940c8b4-b39b-4df2-a5af-a9c83ff67d53>\",\"Content-Length\":\"20357\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c0f32109-1b13-4fce-ad8c-ebabe46733e5>\",\"WARC-Concurrent-To\":\"<urn:uuid:1f35855e-50e9-43f2-9c99-7efe66e27fd6>\",\"WARC-IP-Address\":\"37.252.9.207\",\"WARC-Target-URI\":\"https://onlineessay.net/2021/03/18/addition-problem-solving_ff/\",\"WARC-Payload-Digest\":\"sha1:AVA5MJUIPRXHVFKC2LF3LANSRNXPWUDR\",\"WARC-Block-Digest\":\"sha1:YA7LYRMHI3OBR2R6TUVGGBAEFZEYXHRT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038461619.53_warc_CC-MAIN-20210417162353-20210417192353-00501.warc.gz\"}"} |
https://yalmip.github.io/tutorial/multiparametricprogramming/ | [
"# Multiparametric programming\n\nUpdated:\n\nThis tutorial requires MPT.\n\nYALMIP can be used to calculate explicit solutions of parametric linear and quadratic programs by interfacing the Multi-Parametric Toolbox MPT. This tutorial assumes that the reader is familiar with parametric programming and the basics of MPT.\n\n### Generic example.\n\nConsider the following simple quadratic program in the decision variable z, solved for a particular value on a parameter x.\n\nA = randn(15,3);\nb = rand(15,1);\nE = randn(15,2);\n\nz = sdpvar(3,1);\nx = [0.1;0.2];\n\nF = [A*z <= b+E*x];\nobj = (z-1)'*(z-1);\n\nsol = optimize(F,obj);\nvalue(z)\nans =\n-0.1454\n-0.1789\n-0.0388\n\n\nTo obtain the parametric solution with respect to x, we call the function solvemp, and tell the solver that x is a parametric variable. Moreover, we must add constraints on x to define the region where we want to compute the parametric solution, the so called exploration set.\n\nx = sdpvar(2,1);\nF = [A*z <= b+E*x, -1 <= x <= 1];\nsol = solvemp(F,obj,[],x);\n\n\nThe first output is an MPT structure. In accordance with MPT syntax, the optimizer for the parametric value (0.1,0.2) is given by the following code.\n\nxx = [0.1;0.2];\n[i,j] = isinside(sol{1}.Pn,xx)\nsol{1}.Fi{j}*xx + sol{1}.Gi{j}\nans =\n-0.1454\n-0.1789\n-0.0388\n\n\nBy using more outputs from solvemp, it is possible to simplify things considerably.\n\n[sol,diagnostics,aux,Valuefunction,Optimal_z] = solvemp(F,obj,[],x);\n\n\nThe function now returns solutions using YALMIPs nonlinear operator framework. To retrieve the numerical solution for a particular parameter value, simply use assign and value in standard fashion.\n\nassign(x,[0.1;0.2]);\nvalue(Optimal_z)\n\n\nSome of the plotting capabilities of MPT are overloaded for the piecewise functions. Hence, we can plot the piecewise quadratic value function\n\nplot(Valuefunction);\nfigure\nplot(Optimizer);",
null,
"and plot the piecewise affine optimizer\n\nfigure\nplot(Optimizer(1));",
null,
"### Simple MPC example\n\nDefine numerical data for a linear system, prediction matrices, and variables for current state $$x$$ and the future control sequence $$U(x)$$, for an MPC problem with horizon 5 (create_CHS is a cheat function that creates the numerical matrices to describe the linear relation between current state $$x$$ and future input sequence $$U$$, to the predicted outputs. See the standard MPC example to see how you would do this in a more generic fashion in an actual application)\n\nN = 5;\nA = [2 -1;1 0];\nB = [1;0];\nC = [0.5 0.5];\n[H,S] = create_CHS(A,B,C,N);\nx = sdpvar(2,1);\nU = sdpvar(N,1);\n\n\nThe future output predictions are linear in the current state and the control sequence.\n\nY = H*x+S*U;\n\n\nWe wish to minimize a quadratic cost, compromising between small input and outputs.\n\nobjective = Y'*Y+U'*U;\n\n\nThe input variable has a hard constraint, and so does the output at the terminal state.\n\nF = [1 >= U >= -1, 1 >= Y(N) >= -1];\n\n\nWe seek the explicit solution $$U(x)$$ over the exploration set $$\\left \\lvert x\\right \\rvert \\leq 5$$\n\nF = [F, 5 >= x >= -5];\n\n\nThe explicit solution $$U(x)$$ is obtained by calling solvemp with the parametric variable $$x$$ as the fourth argument. Additionally, since we only are interested in the first element of the solution $$U(x)$$, we use a fifth input to communicate this.\n\n[sol,diagnostics,aux,Valuefunction,Optimizer] = solvemp(F,objective,[],x,U(1));\n\n\nWe can plot the overloaded solutions directly\n\nfigure\nplot(Valuefunction)\nfigure\nplot(Optimizer)\n\n\n### Mixed integer multiparametric programming\n\nYALMIP extends the multiparametric solvers in MPT by adding support for binary variables in the parametric problems.\n\nLet us solve an extension of the MPC problem from the previous section. To begin with, we formulate a similar problem (shorter horizon and linear cost)\n\nN = 3;\nA = [2 -1;1 0];\nB = [1;0];\nC = [0.5 0.5];\n[H,S] = create_CHS(A,B,C,N);\nx = sdpvar(2,1);\nU = sdpvar(N,1);\nY = H*x+S*U;\n\nobjective = norm(Y,1) + norm(U,1);\n\nF = [1 >= U >= -1];\nF = [F, 5 >= x >= -5];\n\n\nWe will now solve this problem under the additional constraints that the input is quantized in steps of 1/3. This can easily be modelled in YALMIP using ismember. Note that this nonconvex operator introduces a lot of binary variables, and the MPC problem is most likely solved more efficiently using a dynamic programming approach.\n\nF = [F, ismember(U,[-1:1/3:1])];\n\n\nSame commands as before to solve the problem and plot the optimal solution\n\n[sol,diagnostics,aux,Valuefunction,Optimizer] = solvemp(F,objective,[],x,U(1));\nplot(Optimizer);",
null,
"For more examples, see the dynamic programming example, the robust MPC example, the portfolio example, and the MAXPLUS control example."
]
| [
null,
"https://yalmip.github.io/images/valuefunction1.png",
null,
"https://yalmip.github.io/images/pwasolution1.png",
null,
"https://yalmip.github.io/images/pwaquantsolution1.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.78058416,"math_prob":0.9996412,"size":4571,"snap":"2022-27-2022-33","text_gpt3_token_len":1263,"char_repetition_ratio":0.115393035,"word_repetition_ratio":0.06713287,"special_character_ratio":0.28812075,"punctuation_ratio":0.18191162,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9998359,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,5,null,5,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-18T22:05:55Z\",\"WARC-Record-ID\":\"<urn:uuid:e3a7a0cb-99d1-4a70-852b-5cd081b3b71c>\",\"Content-Length\":\"34888\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1c43f2d4-4b38-4972-8021-9f85ef52899d>\",\"WARC-Concurrent-To\":\"<urn:uuid:5ac715b9-4ca1-452a-8c91-b3f52f7654b0>\",\"WARC-IP-Address\":\"185.199.109.153\",\"WARC-Target-URI\":\"https://yalmip.github.io/tutorial/multiparametricprogramming/\",\"WARC-Payload-Digest\":\"sha1:RXTML7SRHDPJ6AB3PJ7UYLHLVZYYSAZK\",\"WARC-Block-Digest\":\"sha1:YVS3LIXWWZ24JLVV3TVN4QX7QTTAF5DC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882573533.87_warc_CC-MAIN-20220818215509-20220819005509-00378.warc.gz\"}"} |
https://rdrr.io/cran/rle/src/R/rle_utils.R | [
"# R/rle_utils.R In rle: Common Functions for Run-Length Encoded Vectors\n\n#### Documented in as.rleas.rle.defaultas.rle.rlecompresscompress.rlec.rleis.na.rlelength.rleMath.rlemean.rleOps.rlerep.rlestr.rleSummary.rle\n\n```# File R/rle_utils.R in package rle, currently hosted at https://github.com/statnet/rle .\n#\n# A copy of this license may be found at https://www.gnu.org/licenses/gpl-3.0.en.html .\n#\n#######################################################################\n.check_lengths <- function(rle1, rle2){\nif(sum(as.numeric(rle1\\$lengths))!=sum(as.numeric(rle2\\$lengths)))\nstop(\"At this time, binary rle operators require the vectors represented by the encoding to have equal lengths.\")\n}\n\n#' Safe multiplication of integer run lengths.\n#'\n#' Return a vector of run lengths each no larger than maximum\n#' representable integer that sum to the product of the arguments. If\n#' the product is 0, an empty integer vector is returned.\n#'\n#' @param e1,e2 arguments to multiply, both `<=.Machine\\$integer.max`.\n#'\n#' @noRd\n.run_mul <- function(e1, e2){\no <- as.numeric(e1)*as.numeric(e2)\nif(o > .Machine\\$integer.max){ # Integer overflow.\nc(as.integer(rep.int(.Machine\\$integer.max, o %/% .Machine\\$integer.max)), as.integer(o %% .Machine\\$integer.max))\n}else if(o==0){\ninteger(0)\n}else as.integer(o)\n}\n\n#' @name rle-methods\n#'\n#' @title Miscellaneous Common Methods for [`rle`] Objects\n#'\n#' @param x,object An [`rle`] object.\n#' @param na.rm Whether missing values are to be ignored (`TRUE`) or propagated (`FALSE`).\n#' @param ... For `c`, objects to be concatenated. The first object\n#' must be of class [`rle`].\n#'\n#' @examples\n#' x <- rle(as.logical(rbinom(10,1,.7)))\n#' y <- rle(as.logical(rbinom(10,1,.3)))\n#'\n#' stopifnot(isTRUE(all.equal(c(inverse.rle(x),inverse.rle(y)),inverse.rle(c(x,y)))))\n#'\n#' @export\nc.rle <- function(...){\nl <- list(...)\nl <- lapply(l, as.rle)\nstructure(list(\nlengths = do.call(c, lapply(l, `[[`, \"lengths\")),\nvalues = do.call(c, lapply(l, `[[`, \"values\"))\n), class = \"rle\")\n}\n\n#' Unary and Binary Operations for [`rle`] Objects\n#'\n#' Unary and binary [Arithmetic] and [Logic] operators (with\n#' exceptions given below) are implemented between two [`rle`] objects\n#' and between an [`rle`] object and a scalar.\n#'\n#' @param e1,e2 Arguments to unary (`e1`) and binary (`e1` and `e2`)\n#' operators.\n#'\n#' @details Supported operations include all elements of the `Ops`\n#' group, as well as [`xor`]. Within the [Arithmetic] and [Logic]\n#' operators, this includes (taken from the R help): `+`, `-`, `*`,\n#' `/`, `^`, `<` , `>`, `<=`, `>=`, `!=`, `==`, `%%`, `%/%`, `&`,\n#' `|`, `!`, and `xor`; but excludes non-vector logical functions\n#' and operators such as [`isTRUE`] and [`&&`].\n#'\n#' @return In every supported case, the operation should result in an\n#' [`rle`] that would have resulted had the operation been applied\n#' to the original (uncompressed) vectors, then compressed using\n#' [`rle`], with the proviso that if the resulting function creates\n#' adjacent runs of the same value, they are *not* merged. This must\n#' be done explicitly with [`compress.rle`]. 
(At no point in the\n#' calculation are the uncompressed vectors actually constructed, of\n#' course.)\n#'\n#' An operation between an `rle` and a zero-length object produces\n#' an empty `rle`.\n#'\n#' @examples\n#'\n#' x <- rle(as.logical(rbinom(10,1,.7)))\n#' y <- rle(as.logical(rbinom(10,1,.3)))\n#'\n#' stopifnot(isTRUE(all.equal((!inverse.rle(x)),inverse.rle(!x))))\n#'\n#' stopifnot(isTRUE(all.equal((inverse.rle(x)|inverse.rle(y)),inverse.rle(x|y))))\n#'\n#' stopifnot(isTRUE(all.equal((inverse.rle(x)&inverse.rle(y)),inverse.rle(x&y))))\n#'\n#' x <- rle(sample(c(-1,+1), 10, c(.7,.3), replace=TRUE))\n#' y <- rle(sample(c(-1,+1), 10, c(.3,.7), replace=TRUE))\n#'\n#' stopifnot(isTRUE(all.equal((inverse.rle(x)*inverse.rle(y)),inverse.rle(x*y))))\n#' stopifnot(isTRUE(all.equal((2*inverse.rle(y)),inverse.rle(2*y))))\n#' stopifnot(isTRUE(all.equal((inverse.rle(x)*2),inverse.rle(x*2))))\n#'\n#' stopifnot(isTRUE(all.equal((inverse.rle(x)/inverse.rle(y)),inverse.rle(x/y))))\n#' stopifnot(isTRUE(all.equal((2/inverse.rle(y)),inverse.rle(2/y))))\n#' stopifnot(isTRUE(all.equal((inverse.rle(x)/2),inverse.rle(x/2))))\n#'\n#' stopifnot(isTRUE(all.equal((-inverse.rle(y)),inverse.rle(-y))))\n#' stopifnot(isTRUE(all.equal((inverse.rle(x)-inverse.rle(y)),inverse.rle(x-y))))\n#'\n#' stopifnot(isTRUE(all.equal((inverse.rle(x)%/%inverse.rle(y)),inverse.rle(x%/%y))))\n#'\n#' stopifnot(isTRUE(all.equal(inverse.rle(x)==inverse.rle(y),inverse.rle(x==y))))\n#'\n#' stopifnot(isTRUE(all.equal((inverse.rle(x)>inverse.rle(y)),inverse.rle(x>y))))\n#' @export\nOps.rle <- function(e1, e2){\nFUN <- match.fun(.Generic)\nif(missing(e2)){ # Unary operation\nstructure(list(lengths = e1\\$lengths,\nvalues = FUN(e1\\$values)),\nclass = \"rle\")\n}else if(!nzchar(.Method[1L])){ # e1 is not an rle but e2 is\nl <- length(e1)\nif(l == 0L){\nstructure(list(lengths = integer(0),\nvalues = FUN(e1, e2\\$values)),\nclass = \"rle\")\n}else if(l == 1L){\nstructure(list(lengths = e2\\$lengths,\nvalues = FUN(e1, e2\\$values)),\nclass = \"rle\")\n}else{\nstop(\"Binary operations between a non-scalar and an \", sQuote(\"rle\"), \" object are not supported at this time.\")\n}\n}else if(!nzchar(.Method[2L])){ # e2 is not an rle but e1 is\nl <- length(e2)\nif(l == 0L){\nstructure(list(lengths = integer(0),\nvalues = FUN(e1\\$values, e2)),\nclass = \"rle\")\n}else if(l == 1L){\nstructure(list(lengths = e1\\$lengths,\nvalues = FUN(e1\\$values, e2)),\nclass = \"rle\")\n}else{\nstop(\"Binary operations between an \", sQuote(\"rle\"), \" object and a non-scalar are not supported at this time.\")\n}\n}else{ # Both are rle.\n.check_lengths(e1, e2)\nsyncinfo <- .Call(\"sync_RLEs\", e1\\$lengths, e2\\$lengths)\nstructure(list(lengths = syncinfo\\$lengths[seq_len(syncinfo\\$nruns)],\nvalues = FUN(e1\\$values[syncinfo\\$val1i[seq_len(syncinfo\\$nruns)]],\ne2\\$values[syncinfo\\$val2i[seq_len(syncinfo\\$nruns)]])),\nclass = \"rle\")\n}\n}\n\n#' Mathematical functions for [`rle`] Objects\n#'\n#' Mathematical functions that work independently elementwise on vectors described in [Math] are implemented for [`rle`] objects. See Details for list of exceptions.\n#'\n#' @param x An [`rle`] object.\n#'\n#' @details Supported functions include all elements of the S3 [Math]\n#' group excluding the \"cumulative\" ones, which are not supported at\n#' this time and will raise an error. 
As of this writing, functions\n#' supported include (from R help) `abs`, `sign`, `sqrt`, `floor`,\n#' `ceiling`, `trunc`, `round`, `signif`, `exp`, `log`, `expm1`,\n#' `log1p`, `cos`, `sin`, `tan`, `cospi`, `sinpi`, `tanpi`, `acos`,\n#' `asin`, `atan`, `cosh`, `sinh`, `tanh`, `acosh`, `asinh`,\n#' `atanh`, `lgamma`, `gamma`, `digamma`, and `trigamma`.\n#'\n#' Functions `cumsum`, `cumprod`, `cummax`, and `cummin` are not\n#' supported at this time and will raise an error.\n#'\n#' @return In every supported case, the call should result in an\n#' [`rle`] that would have resulted had the call been applied to the\n#' original (uncompressed) vector, then compressed using\n#' [`rle`]. (At no point in the calculation is the uncompressed\n#' vector actually constructed, of course.)\n#'\n#' By default, the functions do not merge adjacent\n#' runs with the same value. This must be done explicitly with\n#' [`compress.rle`].\n#'\n#' @examples\n#'\n#' x <- rle(sample(runif(2), 10, c(.7,.3), replace=TRUE))\n#'\n#' stopifnot(isTRUE(all.equal(sin(inverse.rle(x)),inverse.rle(sin(x)))))\n#' stopifnot(inherits(try(cumprod(x)), \"try-error\"))\n#' @export\nMath.rle <- function(x, ...){\nif(.Generic %in% c(\"cumsum\", \"cumprod\", \"cummax\", \"cummin\"))\nstop(sQuote(paste0(.Generic,\"()\")), \" method is not yet implemented for \", sQuote(\"rle\"), \" objects.\")\n\nFUN <- match.fun(.Generic)\nstructure(list(lengths = x\\$lengths,\nvalues = FUN(x\\$values, ...)),\nclass = \"rle\")\n}\n\n#' Summary methods for [`rle`] objects.\n#'\n#' Summarisation functions for vectors described in [Summary] are implemented for [`rle`] objects.\n#'\n#' @param ... [`rle`] objects or objects that can be coerced to `rle`.\n#' @param na.rm Whether the missing values should be ignored (`TRUE`) or propagated (`FALSE`).\n#'\n#' @details Supported functions include all elements of the S3\n#' [Summary] group. As of this writing, functions supported include\n#' (from R help) `all`, `any`, `max`, `min`, `prod`, `range`, and\n#' `sum`.\n#'\n#' @return In every supported case, the call should produce the same\n#' result as what would have resulted had the call been applied to\n#' the original (uncompressed) vector. (At no point in the\n#' calculation is the uncompressed vector actually constructed, of\n#' course.) The exception is that if `values` are of class\n#' `integer`, the result will nonetheless always be upcast to\n#' `numeric` to avert overflows. 
This behaviour may change in the\n#' future.\n#'\n#' @examples\n#'\n#' x <- rle(as.logical(rbinom(20,1,.7)))\n#' y <- rle(as.logical(rbinom(20,1,.3)))\n#'\n#' stopifnot(isTRUE(all.equal(any(x, y),any(inverse.rle(x), inverse.rle(y)))))\n#' stopifnot(isTRUE(all.equal(any(y),any(inverse.rle(y)))))\n#'\n#' stopifnot(isTRUE(all.equal(sum(inverse.rle(x),inverse.rle(y)),sum(x,y))))\n#' stopifnot(isTRUE(all.equal(sum(inverse.rle(y)),sum(y))))\n#'\n#' y\\$values[2:3] <- NA\n#' stopifnot(isTRUE(all.equal(sum(inverse.rle(y), na.rm=TRUE),sum(y, na.rm=TRUE))))\n#' stopifnot(isTRUE(all.equal(sum(inverse.rle(y), na.rm=FALSE),sum(y, na.rm=FALSE))))\n#'\n#' @export\nSummary.rle <- function(..., na.rm){\nFUN <- match.fun(.Generic)\n\ninl <- list(...)\n\n# If it's just one, strip the length-zero runs and evaluate.\nif(length(inl) == 1L){\nx <- as.rle(inl[[1L]])\nkeep <- x\\$lengths!=0L\n# TODO: Benchmark whether it's better to first check if\n# any(!keep) or, better yet, write a .Call() function that\n# returns a flag indicating that as a part of calculating keep.\nx\\$values <- x\\$values[keep]\nx\\$lengths <- x\\$lengths[keep]\n\nswitch(.Generic,\nsum = sum(x\\$values*as.numeric(x\\$lengths), na.rm = na.rm),\nprod = prod(x\\$values^as.numeric(x\\$lengths), na.rm = na.rm),\nFUN(x\\$values, na.rm=na.rm)) # The rest only test existence.\n}else{ # Otherwise, break up, evaluate individually, and recombine.\ndo.call(FUN, c(lapply(inl, FUN, na.rm=na.rm), na.rm=na.rm))\n}\n}\n\n#' A generic function for compressing a data structure.\n#'\n#' @param x the object to be compressed.\n#'\n#' @param ... additional arguments to methods.\n#'\n#' @export\ncompress <- function(x, ...){\nUseMethod(\"compress\")\n}\n\n#' Compress the [`rle`] object by merging adjacent runs\n#'\n#' @param x an [`rle`] object.\n#'\n#' @param ... additional objects; if given, all arguments are\n#' concatenated.\n#'\n#' @note Since [`rle`] stores run lengths as integers, [`compress.rle`]\n#' will not merge runs that add up to lengths greater than what can\n#' be represented by a 32-bit signed integer\n#' (\\Sexpr{.Machine\\$integer.max}).\n#'\n#' @examples\n#'\n#' x <- rle(as.logical(rbinom(10,1,.7)))\n#' y <- rle(as.logical(rbinom(10,1,.3)))\n#'\n#' stopifnot(identical(rle(inverse.rle(x)&inverse.rle(y)),compress(x&y)))\n#'\n#' big <- structure(list(lengths=as.integer(rep(.Machine\\$integer.max/4,6)),\n#' values=rep(TRUE,6)), class=\"rle\")\n#'\n#' stopifnot(all(aggregate(as.numeric(lengths)~values,\n#' data=as.data.frame(unclass(big)),FUN=sum)\n#' ==\n#' aggregate(as.numeric(lengths)~values,\n#' data=as.data.frame(unclass(compress(big))),\n#' FUN=sum)))\n#' @export\ncompress.rle <- function(x, ...){\n# First, strip the 0-length runs.\nx\\$values <- x\\$values[x\\$lengths!=0L]\nx\\$lengths <- x\\$lengths[x\\$lengths!=0L]\n# Second, code distinct values as integers if they are not already.\nremap <- ! 
storage.mode(x\\$values) %in% c(\"integer\",\"logical\")\nif(remap){\nvf <- as.integer(as.factor(x\\$values))\nvf[is.na(vf)] <- 0L # NA runs get coded 0.\n}else vf <- x\\$values\n# Third, call the C code to produce the mapping onto the compressed vector.\ncompinfo <- .Call(\"compress_RLE\", x\\$lengths, vf, remap)\n# Lastly, rebuild the rle with the combined lengths and remapped values.\nstructure(list(lengths = compinfo\\$lengths[seq_len(compinfo\\$nruns)],\nvalues = if(remap) x\\$values[compinfo\\$vali[seq_len(compinfo\\$nruns)]]\nelse compinfo\\$vali[seq_len(compinfo\\$nruns)]),\nclass = \"rle\")\n}\n\n#' @rdname rle-methods\n#'\n#' @examples\n#'\n#' stopifnot(isTRUE(all.equal(mean(inverse.rle(x)),mean(x))))\n#' stopifnot(isTRUE(all.equal(mean(inverse.rle(y)),mean(y))))\n#'\n#' @export\nmean.rle <- function(x, na.rm = FALSE, ...){\nif(na.rm) sum(x\\$values*as.numeric(x\\$lengths), na.rm = TRUE, ...)/sum(!is.na(x))\nelse sum(x\\$values*as.numeric(x\\$lengths), na.rm = FALSE, ...)/length(x)\n}\n\n#' @rdname rle-methods\n#'\n#' @note The [`length`] method returns the length of the vector\n#' represented by the object, obtained by summing the lengths of\n#' individual runs. This can be overridden by setting\n#' `options(rle.unclass_index = FALSE)`, which causes it to\n#' return the length of the underlying representation (usually 2) instead.\n#'\n#' @examples\n#'\n#' stopifnot(isTRUE(all.equal(length(inverse.rle(x)),length(x))))\n#' stopifnot(isTRUE(all.equal(length(inverse.rle(y)),length(y))))\n#'\n#' @export\nlength.rle <- function(x){\nif(!is.null(rle_unclass_index <- getOption(\"rle.unclass_index\")) && rle_unclass_index) length(unclass(x))\nelse sum(as.numeric(x\\$lengths))\n}\n\n#' @rdname rle-methods\n#'\n#' @examples\n#' x\\$values <- NA\n#' y\\$values <- NA\n#' stopifnot(isTRUE(all.equal(is.na(inverse.rle(x)),inverse.rle(is.na(x)))))\n#' stopifnot(isTRUE(all.equal(is.na(inverse.rle(y)),inverse.rle(is.na(y)))))\n#'\n#' @export\nis.na.rle <- function(x){\nx\\$values <- is.na(x\\$values)\nx\n}\n\n#' A [`rep`] method for [`rle`] objects\n#'\n#' @param x an [`rle`] object.\n#'\n#' @param ... see documentation for [`rep`].\n#'\n#' @param scale whether to replicate the elements of the\n#' RLE-compressed vector or the runs.\n#'\n#' @param doNotCompress,doNotCompact whether the method should call\n#' [`compress.rle`] the results before returning. Methods liable to\n#' produce very long output vectors, like [`rep`], have this set\n#' `FALSE` by default. `doNotCompact` is an old name for this argument.\n#'\n#' @note The [`rep`] method for [`rle`] objects is very limited at\n#' this time. 
Even though the default setting is to replicate\n#' elements of the vector, only the run-replicating functionality is\n#' implemented at this time except for the simplest case (scalar\n#' `times` argument).\n#'\n#' @examples\n#'\n#' x <- rle(sample(c(-1,+1), 10, c(.7,.3), replace=TRUE))\n#' y <- rpois(length(x\\$lengths), 2)\n#'\n#' stopifnot(isTRUE(all.equal(rep(inverse.rle(x), rep(y, x\\$lengths)),\n#' inverse.rle(rep(x, y, scale=\"run\")))))\n#'\n#' stopifnot(isTRUE(all.equal(rep(inverse.rle(x), max(y)),\n#' inverse.rle(rep(x, max(y), scale=\"element\")))))\n#'\n#' @export\nrep.rle <- function(x, ..., scale = c(\"element\", \"run\"), doNotCompact = FALSE, doNotCompress = doNotCompact){\nif(!missing(doNotCompact)) .Deprecated(msg=paste(\"Argument\", sQuote(\"doNotCompact=\"), \"to\", sQuote(\"rep.rle()\"), \"is deprecated and has been renamed to\", sQuote(\"doNotCompress=\"), \".\"))\n\nscale <- match.arg(scale)\nddd <- list(...)\n\nif(is.null(names(ddd)) && length(ddd)==1) names(ddd) <- \"times\"\n\nif(scale==\"element\" && length(ddd\\$times)!=1) stop(\"RLE on element scale is not supported at this time for vector \",sQuote(\"times\"),\" argument.\")\n\nif(length(x\\$lengths)==length(ddd\\$times)){ # This handles the specific scale=\"run\" AND times is vector of appropriate length case.\ntmp <- mapply(function(v, l, times){\nnewl <- .run_mul(l, times)\nnewv <- rep(v, length(newl))\nlist(l = newl, v = newv)\n},\nx\\$values, x\\$lengths, ddd\\$times, SIMPLIFY=FALSE)\n\nx\\$values <- as.vector(unlist(sapply(tmp, `[[`, \"v\")))\nx\\$lengths <- as.integer(unlist(sapply(tmp, `[[`, \"l\")))\n}else{ # This handles the scale=\"run\" OR times is scalar case.\nx\\$values <- rep(x\\$values, ...)\nx\\$lengths <- rep(x\\$lengths, ...)\n}\n\nif(doNotCompress) x else compress(x)\n}\n\n#' Coerce to [`rle`] if not already an [`rle`] object\n#'\n#' @param x the object to be coerced.\n#'\n#' @export\nas.rle <- function(x){\nUseMethod(\"as.rle\")\n}\n\n#' @rdname as.rle\n#' @export\nas.rle.rle <- function(x) x\n\n#' @rdname as.rle\n#' @export\nas.rle.default <- function(x){\n#' @importFrom methods is\nif(is(x, \"rle\")) x else rle(x)\n}\n\n#' @rdname rle-methods\n#'\n#' @examples\n#'\n#' str(x)\n#'\n#' @export\nstr.rle <- function(object, ...){\n# This is needed because `str` needs the length of the underlying\n# list rather than that represented by the RLE.\nop <- options(rle.unclass_index = TRUE)\non.exit(options(op))\nNextMethod(\"str\")\n}\n```"
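The binary arithmetic in Ops.rle above delegates run alignment to the compiled sync_RLEs routine. As a rough, pure-Python illustration of that alignment idea (not the package's actual C code; the (lengths, values) tuple representation is an assumption made for this example), one can split both encodings at every run boundary and apply the operator once per common segment:

```python
def rle_binary_op(r1, r2, op):
    """Apply op elementwise to two run-length encodings, given here as
    (lengths, values) tuples over vectors of equal total length.
    Pure-Python illustration of the run alignment done by sync_RLEs;
    like Ops.rle, it does not merge adjacent equal-valued result runs."""
    (l1, v1), (l2, v2) = r1, r2
    assert sum(l1) == sum(l2), "represented vectors must have equal lengths"
    out_len, out_val = [], []
    i = j = 0
    rem1 = l1[0] if l1 else 0   # unconsumed part of the current run of r1
    rem2 = l2[0] if l2 else 0   # unconsumed part of the current run of r2
    while i < len(l1) and j < len(l2):
        step = min(rem1, rem2)  # longest stretch where both inputs are constant
        out_len.append(step)
        out_val.append(op(v1[i], v2[j]))
        rem1 -= step
        rem2 -= step
        if rem1 == 0:
            i += 1
            rem1 = l1[i] if i < len(l1) else 0
        if rem2 == 0:
            j += 1
            rem2 = l2[j] if j < len(l2) else 0
    return out_len, out_val

# (1,1,0,0,0) & (1,0,0,1,1) -> (1,0,0,0,0), kept as four unmerged runs:
print(rle_binary_op(([2, 3], [1, 0]), ([1, 2, 2], [1, 0, 1]), lambda a, b: a & b))
# ([1, 1, 1, 2], [1, 0, 0, 0])
```

Mirroring the documented behaviour, adjacent result runs with equal values are deliberately not merged; that is the job of a separate compression pass like compress.rle.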
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.54106414,"math_prob":0.98279554,"size":16418,"snap":"2021-31-2021-39","text_gpt3_token_len":4874,"char_repetition_ratio":0.18636529,"word_repetition_ratio":0.113967024,"special_character_ratio":0.34096724,"punctuation_ratio":0.20606937,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99839365,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-24T01:20:38Z\",\"WARC-Record-ID\":\"<urn:uuid:9a1ec4bc-8b36-4dcc-95c5-443965fb84c6>\",\"Content-Length\":\"121840\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c87b89e4-a4d7-4f1b-9680-1fab45211993>\",\"WARC-Concurrent-To\":\"<urn:uuid:5f38fa14-713a-453a-a958-e4d571f2e139>\",\"WARC-IP-Address\":\"51.81.83.12\",\"WARC-Target-URI\":\"https://rdrr.io/cran/rle/src/R/rle_utils.R\",\"WARC-Payload-Digest\":\"sha1:G74MYPVPBEA2Q2MBATPOJ6UGWUHLBPHE\",\"WARC-Block-Digest\":\"sha1:GOJQCNLLTRAJOPVOQVH2JOJSNDRUW5QM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057479.26_warc_CC-MAIN-20210923225758-20210924015758-00495.warc.gz\"}"} |
https://d2mvzyuse3lwjc.cloudfront.net/pdfs/NAG26/Manual/html/g03/g03ecc.html | [
"NAG Library Function Document\n\n1Purpose\n\nnag_mv_hierar_cluster_analysis (g03ecc) performs hierarchical cluster analysis.\n\n2Specification\n\n #include #include\n void nag_mv_hierar_cluster_analysis (Nag_ClusterMethod method, Integer n, double d[], Integer ilc[], Integer iuc[], double cd[], Integer iord[], double dord[], NagError *fail)\n\n3Description\n\nGiven a distance or dissimilarity matrix for $n$ objects (see nag_mv_distance_mat (g03eac)), cluster analysis aims to group the $n$ objects into a number of more or less homogeneous groups or clusters. With agglomerative clustering methods, a hierarchical tree is produced by starting with $n$ clusters, each with a single object and then at each of $n-1$ stages, merging two clusters to form a larger cluster, until all objects are in a single cluster. This process may be represented by a dendrogram (see nag_mv_dendrogram (g03ehc)).\nAt each stage, the clusters that are nearest are merged, methods differ as to how the distance between the new cluster and other clusters are computed. For three clusters $i$, $j$ and $k$ let ${n}_{i}$, ${n}_{j}$ and ${n}_{k}$ be the number of objects in each cluster and let ${d}_{ij}$, ${d}_{ik}$ and ${d}_{jk}$ be the distances between the clusters. Let clusters $j$ and $k$ be merged to give cluster $jk$, then the distance from cluster $i$ to cluster $jk$, ${d}_{i.jk}$ can be computed in the following ways:\n 1 Single link or nearest neighbour: ${d}_{i.jk}=\\mathrm{min}\\phantom{\\rule{0.125em}{0ex}}\\left({d}_{ij},{d}_{ik}\\right)$. 2 Complete link or furthest neighbour: ${d}_{i.jk}=\\mathrm{max}\\phantom{\\rule{0.125em}{0ex}}\\left({d}_{ij},{d}_{ik}\\right)$. 3 Group average: ${d}_{i.jk}=\\frac{{n}_{j}}{{n}_{j}+{n}_{k}}{d}_{ij}+\\frac{{n}_{k}}{{n}_{j}+{n}_{k}}{d}_{ik}$. 4 Centroid: ${d}_{i.jk}=\\frac{{n}_{j}}{{n}_{j}+{n}_{k}}{d}_{ij}+\\frac{{n}_{k}}{{n}_{j}+{n}_{k}}{d}_{ik}-\\frac{{n}_{j}{n}_{k}}{{\\left({n}_{j}+{n}_{k}\\right)}^{2}}{d}_{jk}$. 5 Median: ${d}_{i.jk}=\\frac{1}{2}{d}_{ij}+\\frac{1}{2}{d}_{ik}-\\frac{1}{4}{d}_{jk}$. 6 Minimum variance: ${d}_{i.jk}=\\left\\{\\left({n}_{i}+{n}_{j}\\right){d}_{ij}+\\left({n}_{i}+{n}_{k}\\right){d}_{ik}-{n}_{i}{d}_{jk}\\right\\}/\\left({n}_{i}+{n}_{j}+{n}_{k}\\right)$.\nFor further details see Everitt (1974) or Krzanowski (1990).\nIf the clusters are numbered $1,2,\\dots ,n$ then, for convenience, if clusters $j$ and $k$, $j, merge then the new cluster will be referred to as cluster $j$. Information on the clustering history is given by the values of $j$, $k$ and ${d}_{jk}$ for each of the $n-1$ clustering steps. In order to produce a dendrogram, the ordering of the objects such that the clusters that merge are adjacent is required. This ordering is computed so that the first element is 1. 
The associated distances with this ordering are also computed.\n\n4 References\n\nEveritt B S (1974) Cluster Analysis Heinemann\nKrzanowski W J (1990) Principles of Multivariate Analysis Oxford University Press\n\n5 Arguments\n\n1: $\\mathbf{method}$ – Nag_ClusterMethod Input\nOn entry: indicates which clustering method is to be used.\n${\\mathbf{method}}=\\mathrm{Nag_SingleLink}$\nSingle link or nearest neighbour.\n${\\mathbf{method}}=\\mathrm{Nag_CompleteLink}$\nComplete link or furthest neighbour.\n${\\mathbf{method}}=\\mathrm{Nag_GroupAverage}$\nGroup average.\n${\\mathbf{method}}=\\mathrm{Nag_Centroid}$\nCentroid.\n${\\mathbf{method}}=\\mathrm{Nag_Median}$\nMedian.\n${\\mathbf{method}}=\\mathrm{Nag_MinVariance}$\nMinimum variance.\nConstraint: ${\\mathbf{method}}=\\mathrm{Nag_SingleLink}$, $\\mathrm{Nag_CompleteLink}$, $\\mathrm{Nag_GroupAverage}$, $\\mathrm{Nag_Centroid}$, $\\mathrm{Nag_Median}$ or $\\mathrm{Nag_MinVariance}$.\n2: $\\mathbf{n}$ – Integer Input\nOn entry: the number of objects, $n$.\nConstraint: ${\\mathbf{n}}\\ge 2$.\n3: $\\mathbf{d}\\left[{\\mathbf{n}}×\\left({\\mathbf{n}}-1\\right)/2\\right]$ – double Input/Output\nOn entry: the strictly lower triangle of the distance matrix. $D$ must be stored packed by rows, i.e., ${\\mathbf{d}}\\left[\\left(i-1\\right)\\left(i-2\\right)/2+j-1\\right]$, $i>j$, must contain ${d}_{ij}$.\nOn exit: is overwritten.\nConstraint: ${\\mathbf{d}}\\left[\\mathit{i}-1\\right]\\ge 0.0$, for $\\mathit{i}=1,2,\\dots ,n\\left(n-1\\right)/2$.\n4: $\\mathbf{ilc}\\left[{\\mathbf{n}}-1\\right]$ – Integer Output\nOn exit: ${\\mathbf{ilc}}\\left[\\mathit{l}-1\\right]$ contains the number, $j$, of the cluster merged with cluster $k$ (see iuc), $j<k$, at step $\\mathit{l}$, for $\\mathit{l}=1,2,\\dots ,n-1$.\n5: $\\mathbf{iuc}\\left[{\\mathbf{n}}-1\\right]$ – Integer Output\nOn exit: ${\\mathbf{iuc}}\\left[\\mathit{l}-1\\right]$ contains the number, $k$, of the cluster merged with cluster $j$, $j<k$, at step $\\mathit{l}$, for $\\mathit{l}=1,2,\\dots ,n-1$.\n6: $\\mathbf{cd}\\left[{\\mathbf{n}}-1\\right]$ – double Output\nOn exit: ${\\mathbf{cd}}\\left[\\mathit{l}-1\\right]$ contains the distance ${d}_{jk}$, between clusters $j$ and $k$, $j<k$, merged at step $\\mathit{l}$, for $\\mathit{l}=1,2,\\dots ,n-1$.\n7: $\\mathbf{iord}\\left[{\\mathbf{n}}\\right]$ – Integer Output\nOn exit: the objects in dendrogram order.\n8: $\\mathbf{dord}\\left[{\\mathbf{n}}\\right]$ – double Output\nOn exit: the clustering distances corresponding to the order in iord. ${\\mathbf{dord}}\\left[\\mathit{l}-1\\right]$ contains the distance at which clusters ${\\mathbf{iord}}\\left[\\mathit{l}-1\\right]$ and ${\\mathbf{iord}}\\left[\\mathit{l}\\right]$ merge, for $\\mathit{l}=1,2,\\dots ,n-1$. ${\\mathbf{dord}}\\left[n-1\\right]$ contains the maximum distance.\n9: $\\mathbf{fail}$ – NagError * Input/Output\nThe NAG error argument (see Section 3.7 in How to Use the NAG Library and its Documentation).\n\n6 Error Indicators and Warnings\n\nNE_ALLOC_FAIL\nDynamic memory allocation failed.\nNE_BAD_PARAM\nOn entry, argument method had an illegal value.\nNE_DENDROGRAM\nA true dendrogram cannot be formed because the distances at which clusters have merged are not increasing for all steps, i.e., ${\\mathbf{cd}}\\left[i-1\\right]<{\\mathbf{cd}}\\left[i-2\\right]$ for some $i=2,3,\\dots ,n-1$. This can occur for the ${\\mathbf{method}}=\\mathrm{Nag_Centroid}$ and ${\\mathbf{method}}=\\mathrm{Nag_Median}$ methods.\nNE_INT_ARG_LT\nOn entry, ${\\mathbf{n}}=〈\\mathit{\\text{value}}〉$.\nConstraint: ${\\mathbf{n}}\\ge 2$.\nNE_INTERNAL_ERROR\nAn internal error has occurred in this function. Check the function call and any array sizes.
If the call is correct then please contact NAG for assistance.\nNE_REALARR\nOn entry, ${\\mathbf{d}}\\left[〈\\mathit{\\text{value}}〉\\right]=〈\\mathit{\\text{value}}〉$.\nConstraint: ${\\mathbf{d}}\\left[\\mathit{i}-1\\right]\\ge 0.0$, for $\\mathit{i}=1,2,\\dots ,n×\\left(n-1\\right)/2$.\n\n7 Accuracy\n\nFor methods other than ${\\mathbf{method}}=\\mathrm{Nag_SingleLink}$ or $\\mathrm{Nag_CompleteLink}$, slight rounding errors may occur in the calculations of the updated distances. These would not normally significantly affect the results; however, there may be an effect if distances are (almost) equal.\nIf at a stage, two distances ${d}_{ij}$ and ${d}_{kl}$, $i<k$, or $i=k$ and $j<l$, are equal, then clusters $k$ and $l$ will be merged rather than clusters $i$ and $j$. For single link clustering this choice will only affect the order of the objects in the dendrogram. However, for other methods the choice of $kl$ rather than $ij$ may affect the shape of the dendrogram. If either of the distances ${d}_{ij}$ or ${d}_{kl}$ is affected by rounding errors then their equality, and hence the dendrogram, may be affected.\n\n8 Parallelism and Performance\n\nnag_mv_hierar_cluster_analysis (g03ecc) is not threaded in any implementation.\n\n9 Further Comments\n\nThe dendrogram may be formed using nag_mv_dendrogram (g03ehc). Groupings based on the clusters formed at a given distance can be computed using nag_mv_cluster_indicator (g03ejc).\n\n10 Example\n\nData consisting of three variables on five objects are read in. Euclidean squared distances based on two variables are computed using nag_mv_distance_mat (g03eac), the objects are clustered using nag_mv_hierar_cluster_analysis (g03ecc) and the dendrogram computed using nag_mv_dendrogram (g03ehc). The dendrogram is then printed.\n\n10.1 Program Text\n\nProgram Text (g03ecce.c)\n\n10.2 Program Data\n\nProgram Data (g03ecce.d)\n\n10.3 Program Results\n\nProgram Results (g03ecce.r)\n\n© The Numerical Algorithms Group Ltd, Oxford, UK. 2017"
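As a rough illustration of the agglomerative scheme and three of the distance-update rules quoted in Section 3, a naive Python sketch might look like the one below. This is not the NAG implementation: g03ecc works on the packed lower triangle described under d, supports all six methods, and relabels the merged pair as min(j, k); the matrix-based version and the function name here are assumptions made for the example.

```python
import numpy as np

def agglomerate(D, method="single"):
    """Naive O(n^3) agglomerative clustering on a full symmetric
    distance matrix D.  Returns the merge history as (j, k, d_jk)
    triples with 0-based labels, j < k; the surviving cluster keeps
    label j, mirroring the relabelling convention described above."""
    D = D.astype(float).copy()
    size = np.ones(D.shape[0])
    active = list(range(D.shape[0]))
    history = []
    while len(active) > 1:
        # find the closest pair of active clusters (j < k)
        j, k = min(((a, b) for ai, a in enumerate(active) for b in active[ai + 1:]),
                   key=lambda p: D[p[0], p[1]])
        history.append((j, k, D[j, k]))
        for i in active:
            if i in (j, k):
                continue
            if method == "single":        # rule 1: min(d_ij, d_ik)
                d = min(D[i, j], D[i, k])
            elif method == "complete":    # rule 2: max(d_ij, d_ik)
                d = max(D[i, j], D[i, k])
            else:                         # rule 3: size-weighted group average
                d = (size[j] * D[i, j] + size[k] * D[i, k]) / (size[j] + size[k])
            D[i, j] = D[j, i] = d
        size[j] += size[k]
        active.remove(k)
    return history

# Tiny check on four objects
D = np.array([[0, 2, 6, 10],
              [2, 0, 5,  9],
              [6, 5, 0,  4],
              [10, 9, 4, 0]])
print(agglomerate(D, "single"))  # [(0, 1, 2.0), (2, 3, 4.0), (0, 2, 5.0)]
```

The merge history plays the role of the ilc, iuc and cd outputs, up to the 0-based labelling chosen here.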
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.80977434,"math_prob":0.99959105,"size":5329,"snap":"2022-05-2022-21","text_gpt3_token_len":1308,"char_repetition_ratio":0.15680751,"word_repetition_ratio":0.048387095,"special_character_ratio":0.24282229,"punctuation_ratio":0.1827622,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999199,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-29T04:05:07Z\",\"WARC-Record-ID\":\"<urn:uuid:ade748e0-6846-4c24-a309-9aa073968bd7>\",\"Content-Length\":\"31241\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:563fde4e-f646-4d70-9fc2-e1ca1db73d83>\",\"WARC-Concurrent-To\":\"<urn:uuid:709089dc-1df1-49ec-a1b1-68dc35ec24f2>\",\"WARC-IP-Address\":\"13.249.46.86\",\"WARC-Target-URI\":\"https://d2mvzyuse3lwjc.cloudfront.net/pdfs/NAG26/Manual/html/g03/g03ecc.html\",\"WARC-Payload-Digest\":\"sha1:7UQDQUMV4TNJGJQLD47O4ZHAPIEH33JI\",\"WARC-Block-Digest\":\"sha1:KGC5H2SU4CEVESM3C52BWSNLXZV5WKPS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320299927.25_warc_CC-MAIN-20220129032406-20220129062406-00029.warc.gz\"}"} |
https://videos.najah.edu/node/4273 | [
"# Partial Differential Equations (1)\n\nFaculty:\nFaculty of Science\nDepartment:\nMathematics\nCourse Description:\n\nTopics covered in this course include: the formation of a partial differential equation; methods of solutions of first order linear and nonlinear partial differential equations; methods of solutions of second order linear and nonlinear partial differential equations; Fourier series and transforms; wave equation, Laplace’s equation, potential equation, equation of an infinite wire, heat equation."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.67612296,"math_prob":0.99657667,"size":1284,"snap":"2023-14-2023-23","text_gpt3_token_len":381,"char_repetition_ratio":0.24140625,"word_repetition_ratio":0.09876543,"special_character_ratio":0.30140188,"punctuation_ratio":0.14611872,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9987051,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-09T08:35:48Z\",\"WARC-Record-ID\":\"<urn:uuid:f693e792-a817-42a3-b16a-51d5ea52a4c5>\",\"Content-Length\":\"18509\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:31f07267-5004-4330-a2b7-a7a027045d03>\",\"WARC-Concurrent-To\":\"<urn:uuid:30c634f0-91b2-478a-be4d-357c758dd8cb>\",\"WARC-IP-Address\":\"172.67.27.164\",\"WARC-Target-URI\":\"https://videos.najah.edu/node/4273\",\"WARC-Payload-Digest\":\"sha1:TKMRRILIDP34G3D5F5CXHO56DRKE6SDC\",\"WARC-Block-Digest\":\"sha1:DXQ2PPY7MV3MP6OIC2KRHSMWPSBZSKBY\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224655446.86_warc_CC-MAIN-20230609064417-20230609094417-00760.warc.gz\"}"} |
http://zims-en.kiwix.campusafrica.gos.orange.com/wikipedia_en_all_nopic/A/Halting_problem | [
"# Halting problem\n\nIn computability theory, the halting problem is the problem of determining, from a description of an arbitrary computer program and an input, whether the program will finish running, or continue to run forever.\n\nAlan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist. For any program f that might determine if programs halt, a \"pathological\" program g called with an input can pass its own source and its input to f and then specifically do the opposite of what f predicts g will do. No f can exist that handles this case. A key part of the proof was a mathematical definition of a computer and program, which became known as a Turing machine; the halting problem is undecidable over Turing machines. Turing's proof is one of the first cases of decision problems to be concluded. The theoretical conclusion that it is not solvable is significant to practical computing efforts, defining a class of applications which no programming invention can possibly perform perfectly.\n\nJack Copeland (2004) attributes the introduction of the term halting problem to the work of Martin Davis in the 1950s.\n\n## Background\n\nThe halting problem is a decision problem about properties of computer programs on a fixed Turing-complete model of computation, i.e., all programs that can be written in some given programming language that is general enough to be equivalent to a Turing machine. The problem is to determine, given a program and an input to the program, whether the program will eventually halt when run with that input. In this abstract framework, there are no resource limitations on the amount of memory or time required for the program's execution; it can take arbitrarily long and use an arbitrary amount of storage space before halting. The question is simply whether the given program will ever halt on a particular input.\n\nFor example, in pseudocode, the program\n\nwhile (true) continue\n\ndoes not halt; rather, it goes on forever in an infinite loop. On the other hand, the program\n\nprint \"Hello, world!\"\n\ndoes halt.\n\nWhile deciding whether these programs halt is simple, more complex programs prove problematic.\n\nOne approach to the problem might be to run the program for some number of steps and check if it halts. But if the program does not halt, it is unknown whether the program will eventually halt or run forever.\n\nTuring proved no algorithm exists that always correctly decides whether, for a given arbitrary program and input, the program halts when run with that input. The essence of Turing's proof is that any such algorithm can be made to contradict itself and therefore cannot be correct.\n\n### Programming consequences\n\nSome infinite loops can be quite useful. For instance, event loops are typically coded as infinite loops. However, most subroutines are intended to finish (halt). In particular, in hard real-time computing, programmers attempt to write subroutines that are not only guaranteed to finish (halt), but are also guaranteed to finish before a given deadline.\n\nSometimes these programmers use some general-purpose (Turing-complete) programming language, but attempt to write in a restricted style—such as MISRA C or SPARK—that makes it easy to prove that the resulting subroutines finish before the given deadline.\n\nOther times these programmers apply the rule of least power—they deliberately use a computer language that is not quite fully Turing-complete. 
Frequently, these are languages that guarantee all subroutines finish, such as Coq.\n\n### Common pitfalls\n\nThe difficulty in the halting problem lies in the requirement that the decision procedure must work for all programs and inputs. A particular program either halts on a given input or does not halt. Consider one algorithm that always answers \"halts\" and another that always answers \"doesn't halt\". For any specific program and input, one of these two algorithms answers correctly, even though nobody may know which one. Yet neither algorithm solves the halting problem generally.\n\nThere are programs (interpreters) that simulate the execution of whatever source code they are given. Such programs can demonstrate that a program does halt if this is the case: the interpreter itself will eventually halt its simulation, which shows that the original program halted. However, an interpreter will not halt if its input program does not halt, so this approach cannot solve the halting problem as stated; it does not successfully answer \"doesn't halt\" for programs that do not halt.\n\nThe halting problem is theoretically decidable for linear bounded automata (LBAs) or deterministic machines with finite memory. A machine with finite memory has a finite number of states, and thus any deterministic program on it must eventually either halt or repeat a previous state:\n\n...any finite-state machine, if left completely to itself, will fall eventually into a perfectly periodic repetitive pattern. The duration of this repeating pattern cannot exceed the number of internal states of the machine... (italics in original, Minsky 1967, p. 24)\n\nMinsky warns us, however, that machines such as computers with, e.g., a million small parts, each with two states, will have at least 2^1,000,000 possible states:\n\nThis is a 1 followed by about three hundred thousand zeroes ... Even if such a machine were to operate at the frequencies of cosmic rays, the aeons of galactic evolution would be as nothing compared to the time of a journey through such a cycle. (Minsky 1967, p. 25)\n\nMinsky exhorts the reader to be suspicious – although a machine may be finite, and finite automata \"have a number of theoretical limitations\":\n\n...the magnitudes involved should lead one to suspect that theorems and arguments based chiefly on the mere finiteness [of] the state diagram may not carry a great deal of significance. (Minsky p. 25)\n\nIt can also be decided automatically whether a nondeterministic machine with finite memory halts on none, some, or all of the possible sequences of nondeterministic decisions, by enumerating states after each possible decision.\n\n## History\n\nThe halting problem is historically important because it was one of the first problems to be proved undecidable. (Turing's proof went to press in May 1936, whereas Alonzo Church's proof of the undecidability of a problem in the lambda calculus had already been published in April 1936 [Church, 1936].) Subsequently, many other undecidable problems have been described.\n\n### Timeline\n\n• 1900: David Hilbert poses his \"23 questions\" (now known as Hilbert's problems) at the Second International Congress of Mathematicians in Paris. \"Of these, the second was that of proving the consistency of the 'Peano axioms' on which, as he had shown, the rigour of mathematics depended\". (Hodges p. 83, Davis' commentary in Davis, 1965, p. 108)\n• 1920–1921: Emil Post explores the halting problem for tag systems, regarding it as a candidate for unsolvability.
(Absolutely unsolvable problems and relatively undecidable propositions – account of an anticipation, in Davis, 1965, pp. 340–433.) Its unsolvability was not established until much later, by Marvin Minsky (1967).\n• 1928: Hilbert recasts his 'Second Problem' at the Bologna International Congress. (Reid pp. 188–189) Hodges claims he posed three questions: i.e. #1: Was mathematics complete? #2: Was mathematics consistent? #3: Was mathematics decidable? (Hodges p. 91). The third question is known as the Entscheidungsproblem (Decision Problem). (Hodges p. 91, Penrose p. 34)\n• 1930: Kurt Gödel announces a proof as an answer to the first two of Hilbert's 1928 questions [cf Reid p. 198]. \"At first he [Hilbert] was only angry and frustrated, but then he began to try to deal constructively with the problem... Gödel himself felt—and expressed the thought in his paper—that his work did not contradict Hilbert's formalistic point of view\" (Reid p. 199)\n• 1931: Gödel publishes \"On Formally Undecidable Propositions of Principia Mathematica and Related Systems I\", (reprinted in Davis, 1965, p. 5ff)\n• 19 April 1935: Alonzo Church publishes \"An Unsolvable Problem of Elementary Number Theory\", wherein he identifies what it means for a function to be effectively calculable. Such a function will have an algorithm, and \"...the fact that the algorithm has terminated becomes effectively known ...\" (Davis, 1965, p. 100)\n• 1936: Church publishes the first proof that the Entscheidungsproblem is unsolvable. (A Note on the Entscheidungsproblem, reprinted in Davis, 1965, p. 110.)\n• 7 October 1936: Emil Post's paper \"Finite Combinatory Processes. Formulation I\" is received. Post adds to his \"process\" an instruction \"(C) Stop\". He called such a process \"type 1 ... if the process it determines terminates for each specific problem.\" (Davis, 1965, p. 289ff)\n• 1937: Alan Turing's paper On Computable Numbers With an Application to the Entscheidungsproblem reaches print in January 1937 (reprinted in Davis, 1965, p. 115). Turing's proof departs from calculation by recursive functions and introduces the notion of computation by machine. Stephen Kleene (1952) refers to this as one of the \"first examples of decision problems proved unsolvable\".\n• 1939: J. Barkley Rosser observes the essential equivalence of \"effective method\" defined by Gödel, Church, and Turing (Rosser in Davis, 1965, p. 273, \"Informal Exposition of Proofs of Gödel's Theorem and Church's Theorem\")\n• 1943: In a paper, Stephen Kleene states that \"In setting up a complete algorithmic theory, what we do is describe a procedure ... which procedure necessarily terminates and in such manner that from the outcome we can read a definite answer, 'Yes' or 'No,' to the question, 'Is the predicate value true?'.\"\n• 1952: Kleene (1952) Chapter XIII (\"Computable Functions\") includes a discussion of the unsolvability of the halting problem for Turing machines and reformulates it in terms of machines that \"eventually stop\", i.e. halt: \"... there is no algorithm for deciding whether any given machine, when started from any given situation, eventually stops.\" (Kleene (1952) p. 382)\n• 1952: \"Martin Davis thinks it likely that he first used the term 'halting problem' in a series of lectures that he gave at the Control Systems Laboratory at the University of Illinois in 1952 (letter from Davis to Copeland, 12 December 2001).\" (Footnote 61 in Copeland (2004) pp. 
40ff)\n\n## Formalization\n\nIn his original proof Turing formalized the concept of algorithm by introducing Turing machines. However, the result is in no way specific to them; it applies equally to any other model of computation that is equivalent in its computational power to Turing machines, such as Markov algorithms, Lambda calculus, Post systems, register machines, or tag systems.\n\nWhat is important is that the formalization allows a straightforward mapping of algorithms to some data type that the algorithm can operate upon. For example, if the formalism lets algorithms define functions over strings (such as Turing machines) then there should be a mapping of these algorithms to strings, and if the formalism lets algorithms define functions over natural numbers (such as computable functions) then there should be a mapping of algorithms to natural numbers. The mapping to strings is usually the most straightforward, but strings over an alphabet with n characters can also be mapped to numbers by interpreting them as numbers in an n-ary numeral system.\n\n### Representation as a set\n\nThe conventional representation of decision problems is the set of objects possessing the property in question. The halting set\n\nK = {(i, x) | program i halts when run on input x}\n\nrepresents the halting problem.\n\nThis set is recursively enumerable, which means there is a computable function that lists all of the pairs (i, x) it contains (Moore and Mertens 2011, pp. 236–237). However, the complement of this set is not recursively enumerable (Moore and Mertens 2011, pp. 236–237).\n\nThere are many equivalent formulations of the halting problem; any set whose Turing degree equals that of the halting problem is such a formulation. Examples of such sets include:\n\n• {i | program i eventually halts when run with input 0}\n• {i | there is an input x such that program i eventually halts when run with input x}.\n\n### Proof concept\n\nThe proof that the halting problem is not solvable is a proof by contradiction. To illustrate the concept of the proof, suppose that there exists a total computable function halts(f) that returns true if the subroutine f halts (when run with no inputs) and returns false otherwise. Now consider the following subroutine:\n\ndef g():\n    if halts(g):\n        loop_forever()\n\n\nhalts(g) must either return true or false, because halts was assumed to be total. If halts(g) returns true, then g will call loop_forever and never halt, which is a contradiction. If halts(g) returns false, then g will halt, because it will not call loop_forever; this is also a contradiction. Overall, halts(g) cannot return a truth value that is consistent with whether g halts. Therefore, the initial assumption that halts is a total computable function must be false.\n\nThe method used in the proof is called diagonalization: g does the opposite of what halts says g should do. The difference between this sketch and the actual proof is that in the actual proof, the computable function halts does not directly take a subroutine as an argument; instead it takes the source code of a program. The actual proof requires additional work to handle this issue. Moreover, the actual proof avoids the direct use of recursion shown in the definition of g.\n\n### Sketch of proof\n\nThe concept above shows the general method of the proof; this section will present additional details.
The overall goal is to show that there is no total computable function that decides whether an arbitrary program i halts on arbitrary input x; that is, the following function h is not computable (Penrose 1990, pp. 57–63):\n\n$h(i,x)={\\begin{cases}1&{\\text{if program }}i{\\text{ halts on input }}x,\\\\0&{\\text{otherwise.}}\\end{cases}}$",
null,
"Here program i refers to the i th program in an enumeration of all the programs of a fixed Turing-complete model of computation.\n\n f(i,j) i 1 2 3 4 5 6 j 1 1 0 0 1 0 1 2 0 0 0 1 0 0 3 0 1 0 1 0 1 4 1 0 0 1 0 0 5 0 0 0 1 1 1 6 1 1 0 0 1 0 f(i,i) 1 0 0 1 1 0 g(i) U 0 0 U U 0\n\nPossible values for a total computable function f arranged in a 2D array. The orange cells are the diagonal. The values of f(i,i) and g(i) are shown at the bottom; U indicates that the function g is undefined for a particular input value.\n\nThe proof proceeds by directly establishing that no total computable function with two arguments can be the required function h. As in the sketch of the concept, given any total computable binary function f, the following partial function g is also computable by some program e:\n\n$g(i)={\\begin{cases}0&{\\text{if }}f(i,i)=0,\\\\{\\text{undefined}}&{\\text{otherwise.}}\\end{cases}}$",
null,
"The verification that g is computable relies on the following constructs (or their equivalents):\n\n• computable subprograms (the program that computes f is a subprogram in program e),\n• duplication of values (program e computes the inputs i,i for f from the input i for g),\n• conditional branching (program e selects between two results depending on the value it computes for f(i,i)),\n• not producing a defined result (for example, by looping forever),\n• returning a value of 0.\n\nThe following pseudocode illustrates a straightforward way to compute g:\n\nprocedure compute_g(i):\nif f(i,i) == 0 then\nreturn 0\nelse\nloop forever\n\n\nBecause g is partial computable, there must be a program e that computes g, by the assumption that the model of computation is Turing-complete. This program is one of all the programs on which the halting function h is defined. The next step of the proof shows that h(e,e) will not have the same value as f(e,e).\n\nIt follows from the definition of g that exactly one of the following two cases must hold:\n\n• f(e,e) = 0 and so g(e) = 0. In this case h(e,e) = 1, because program e halts on input e.\n• f(e,e) ≠ 0 and so g(e) is undefined. In this case h(e,e) = 0, because program e does not halt on input e.\n\nIn either case, f cannot be the same function as h. Because f was an arbitrary total computable function with two arguments, all such functions must differ from h.\n\nThis proof is analogous to Cantor's diagonal argument. One may visualize a two-dimensional array with one column and one row for each natural number, as indicated in the table above. The value of f(i,j) is placed at column i, row j. Because f is assumed to be a total computable function, any element of the array can be calculated using f. The construction of the function g can be visualized using the main diagonal of this array. If the array has a 0 at position (i,i), then g(i) is 0. Otherwise, g(i) is undefined. The contradiction comes from the fact that there is some column e of the array corresponding to g itself. Now assume f was the halting function h, if g(e) is defined (g(e) = 0 in this case), g(e) halts so f(e,e) = 1. But g(e) = 0 only when f(e,e) = 0, contradicting f(e,e) = 1. Similarly, if g(e) is not defined, then halting function f(e,e) = 0, which leads to g(e) = 0 under g's construction. This contradicts the assumption of g(e) not being defined. In both cases contradiction arises. Therefore any arbitrary computable function f cannot be the halting function h.\n\n## Computability theory\n\nThe typical method of proving a problem to be undecidable is with the technique of reduction. To do this, it is sufficient to show that if a solution to the new problem were found, it could be used to decide an undecidable problem by transforming instances of the undecidable problem into instances of the new problem. Since we already know that no method can decide the old problem, no method can decide the new problem either. Often the new problem is reduced to solving the halting problem. (The same technique is used to demonstrate that a problem is NP complete, only in this case, rather than demonstrating that there is no solution, it demonstrates there is no polynomial time solution, assuming P ≠ NP.)\n\nFor example, one such consequence of the halting problem's undecidability is that there cannot be a general algorithm that decides whether a given statement about natural numbers is true or false. 
The reason for this is that the proposition stating that a certain program will halt given a certain input can be converted into an equivalent statement about natural numbers. If we had an algorithm that could find the truth value of every statement about natural numbers, it could certainly find the truth value of this one; but that would determine whether the original program halts, which is impossible, since the halting problem is undecidable.\n\nRice's theorem generalizes the theorem that the halting problem is unsolvable. It states that for any non-trivial property, there is no general decision procedure that, for all programs, decides whether the partial function implemented by the input program has that property. (A partial function is a function which may not always produce a result, and so is used to model programs, which can either produce results or fail to halt.) For example, the property \"halt for the input 0\" is undecidable. Here, \"non-trivial\" means that the set of partial functions that satisfy the property is neither the empty set nor the set of all partial functions. For example, \"halts or fails to halt on input 0\" is clearly true of all partial functions, so it is a trivial property, and can be decided by an algorithm that simply reports \"true.\" Also, this theorem holds only for properties of the partial function implemented by the program; Rice's Theorem does not apply to properties of the program itself. For example, \"halt on input 0 within 100 steps\" is not a property of the partial function that is implemented by the program—it is a property of the program implementing the partial function and is very much decidable.\n\nGregory Chaitin has defined a halting probability, represented by the symbol Ω, a type of real number that informally is said to represent the probability that a randomly produced program halts. These numbers have the same Turing degree as the halting problem. It is a normal and transcendental number which can be defined but cannot be completely computed. This means one can prove that there is no algorithm which produces the digits of Ω, although its first few digits can be calculated in simple cases.\n\nWhile Turing's proof shows that there can be no general method or algorithm to determine whether algorithms halt, individual instances of that problem may very well be susceptible to attack. Given a specific algorithm, one can often show that it must halt for any input, and in fact computer scientists often do just that as part of a correctness proof. But each proof has to be developed specifically for the algorithm at hand; there is no mechanical, general way to determine whether algorithms on a Turing machine halt. However, there are some heuristics that can be used in an automated fashion to attempt to construct a proof, which succeed frequently on typical programs. This field of research is known as automated termination analysis.\n\nSince the negative answer to the halting problem shows that there are problems that cannot be solved by a Turing machine, the Church–Turing thesis limits what can be accomplished by any machine that implements effective methods. However, not all machines conceivable to human imagination are subject to the Church–Turing thesis (e.g. oracle machines). 
It is an open question whether there can be actual deterministic physical processes that, in the long run, elude simulation by a Turing machine, and in particular whether any such hypothetical process could usefully be harnessed in the form of a calculating machine (a hypercomputer) that could solve the halting problem for a Turing machine amongst other things. It is also an open question whether any such unknown physical processes are involved in the working of the human brain, and whether humans can solve the halting problem (Copeland 2004, p. 15).\n\n### Gödel's incompleteness theorems\n\nThe concepts raised by Gödel's incompleteness theorems are very similar to those raised by the halting problem, and the proofs are quite similar. In fact, a weaker form of the First Incompleteness Theorem is an easy consequence of the undecidability of the halting problem. This weaker form differs from the standard statement of the incompleteness theorem by asserting that an axiomatization of the natural numbers that is both complete and sound is impossible. The \"sound\" part is the weakening: it means that we require the axiomatic system in question to prove only true statements about natural numbers. Since soundness implies consistency, this weaker form can be seen as a corollary of the strong form. It is important to observe that the statement of the standard form of Gödel's First Incompleteness Theorem is completely unconcerned with the truth value of a statement, but only concerns the issue of whether it is possible to find it through a mathematical proof.\n\nThe weaker form of the theorem can be proved from the undecidability of the halting problem as follows. Assume that we have a sound (and hence consistent) and complete axiomatization of all true first-order logic statements about natural numbers. Then we can build an algorithm that enumerates all these statements. This means that there is an algorithm N(n) that, given a natural number n, computes a true first-order logic statement about natural numbers, and that for all true statements, there is at least one n such that N(n) yields that statement. Now suppose we want to decide if the algorithm with representation a halts on input i. We know that this statement can be expressed with a first-order logic statement, say H(a, i). Since the axiomatization is complete it follows that either there is an n such that N(n) = H(a, i) or there is an n' such that N(n') = ¬ H(a, i). So if we iterate over all n until we either find H(a, i) or its negation, we will always halt, and furthermore, the answer it gives us will be true (by soundness). This means that this gives us an algorithm to decide the halting problem. Since we know that there cannot be such an algorithm, it follows that the assumption that there is a consistent and complete axiomatization of all true first-order logic statements about natural numbers must be false.\n\n## Generalization\n\nMany variants of the halting problem can be found in computability textbooks (e.g., Sipser 2006, Davis 1958, Minsky 1967, Hopcroft and Ullman 1979, Börger 1989). Typically their undecidability follows by reduction from the standard halting problem. However, some of them have a higher degree of unsolvability. 
The next two examples are typical.\n\n### Halting on all inputs\n\nThe universal halting problem, also known (in recursion theory) as totality, is the problem of determining whether a given computer program will halt for every input (the name totality comes from the equivalent question of whether the computed function is total). This problem is not only undecidable, like the halting problem, but highly undecidable. In terms of the arithmetical hierarchy, it is $\\Pi _{2}^{0}$",
null,
"-complete (Börger 1989, p. 121).\n\nThis means, in particular, that it cannot be decided even with an oracle for the halting problem.\n\n### Recognizing partial solutions\n\nThere are many programs that, for some inputs, return a correct answer to the halting problem, while for other inputs they do not return an answer at all. However the problem \"given program p, is it a partial halting solver\" (in the sense described) is at least as hard as the halting problem. To see this, assume that there is an algorithm PHSR (\"partial halting solver recognizer\") to do that. Then it can be used to solve the halting problem, as follows: To test whether input program x halts on y, construct a program p that on input (x,y) reports true and diverges on all other inputs. Then test p with PHSR.\n\nThe above argument is a reduction of the halting problem to PHS recognition, and in the same manner, harder problems such as halting on all inputs can also be reduced, implying that PHS recognition is not only undecidable, but higher in the arithmetical hierarchy, specifically $\\Pi _{2}^{0}$",
null,
"-complete.\n\n### Lossy computation\n\nA lossy Turing machine is a Turing machine in which part of the tape may non-deterministically disappear. The Halting problem is decidable for lossy Turing machine but nonprimitive recursive.:92\n\n### Oracle machines\n\nA machine with an oracle for the halting problem can determine whether particular Turing machines will halt on particular inputs, but they cannot determine, in general, if machines equivalent to themselves will halt."
]
| [
null,
"http://zims-en.kiwix.campusafrica.gos.orange.com/wikipedia_en_all_nopic/I/m/76728da59b2423fe6fd133a8f6b14593f4c54716.svg",
null,
"http://zims-en.kiwix.campusafrica.gos.orange.com/wikipedia_en_all_nopic/I/m/c41dab0cdddcd1afa65ca31710c21d6254c650ed.svg",
null,
"http://zims-en.kiwix.campusafrica.gos.orange.com/wikipedia_en_all_nopic/I/m/0d58b0d35851996c3e8fa2ed4b5f4c583a3337df.svg",
null,
"http://zims-en.kiwix.campusafrica.gos.orange.com/wikipedia_en_all_nopic/I/m/0d58b0d35851996c3e8fa2ed4b5f4c583a3337df.svg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8860515,"math_prob":0.9626485,"size":35516,"snap":"2021-04-2021-17","text_gpt3_token_len":8156,"char_repetition_ratio":0.15938275,"word_repetition_ratio":0.023231441,"special_character_ratio":0.22913617,"punctuation_ratio":0.1354244,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9910145,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,1,null,1,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-25T02:26:23Z\",\"WARC-Record-ID\":\"<urn:uuid:737adadf-b554-4470-85ec-0f924e254f42>\",\"Content-Length\":\"76773\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:90f919d3-a275-4b69-b938-8b488356b7d7>\",\"WARC-Concurrent-To\":\"<urn:uuid:0d3055cb-277c-4905-aec8-a87002e0287d>\",\"WARC-IP-Address\":\"41.66.34.68\",\"WARC-Target-URI\":\"http://zims-en.kiwix.campusafrica.gos.orange.com/wikipedia_en_all_nopic/A/Halting_problem\",\"WARC-Payload-Digest\":\"sha1:D5FEUPRVLVGPAZ5FP5JSARISSX3N2FE3\",\"WARC-Block-Digest\":\"sha1:FL6LSLNPEEIBDE6YDB3MKA2DGXDOGERA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703561996.72_warc_CC-MAIN-20210124235054-20210125025054-00347.warc.gz\"}"} |
https://math.tutorvista.com/geometry/surface-area-of-a-prism.html | [
"To get the best deal on Tutoring, call 1-855-666-7440 (Toll Free)",
null,
"Top\n\n# Surface Area of a Prism\n\nPrism is a three dimensional solid figure. Generally, the flat surface of the solid is known as its face. The top and bottom faces are known as bases. Faces forming the sides of the prism are also called as lateral faces or lateral surfaces. Surface area of any solid describes the material used to cover a geometrical figure and calculated in square units. Below will study about prism surface area formula and some solved examples. After this lesson you will be able to solve prism based problems at our own pace.",
null,
"Related Calculators Prism Surface Area Calculator Calculate Surface Area of a Rectangular Prism Surface Area of a Triangular Prism Calculator Calculate Surface Area\n\n## Formula\n\nBack to Top\nTotal Surface Area is the sum of the lateral surface area and twice the base area of the prism.\n\nTotal Surface Area = $LSA + 2 \\times$ Base Area\n\nTotal surface area = $(P \\times h + 2 A) sq.$ units\n\nWhen $P$ is the perimeter of the base, $A$ is the area of the base and h is the height of the prism.\n\n## Lateral Surface Area\n\nBack to Top\nThe lateral surface area is the sum of the areas of prism's lateral faces. And it can be calculated by multiplying the perimeter of the base by the height of the prism.\n\nLateral Area = Perimeter of Base $\\times$ Height of Prism\nIf $P$ is the perimeter of the base and $h$ is the height of a prism then\n\nLateral surface of right prism = $P\\ h$\n\n## Examples\n\nBack to Top\n\nGiven below are some of the examples:\n\nExample 1:\n\nFind the surface area of a prism whose base is a right angled triangle of side 8 cm, 15 cm and 17 cm, and height of the prism is 20 cm.\n\nSolution:\n\nThe sides of the triangular base are $8 cm,\\ 15 cm$ and $17 cm$ and\n\nHeight of the prism is $20 cm$\n\nLet us take, the base of the triangle = $8 cm$\n\nHeight of the triangle = $15 cm$\n\nThe lateral surface area of the prism = $P\\ h$ square units\n\n= $(8 + 15 + 17) \\times 20$\n\n= $40 \\times 20$\n\nLateral Surface Area = $800 sq.\\ cm$\n\nNow, the area of the bases, $A$ = $\\frac{1}{2}$ $b\\ h\\ sq.units$\n\n= $\\frac{1}{2}$ $\\times 8 \\times 15$\n\n$A$ = $60 sq.cm$\n\nThe total surface area of the prism = $P\\ h\\ +\\ 2\\ A$\n\n= $800\\ +\\ 2\\ \\times\\ 60$\n\nTotal surface area of the prism is $920\\ sq.cm$\n\nExample 2:\n\nFind the surface area of the rectangular prism given below.",
null,
"Solution:\n\nSurface area is the sum of all unit squares that fit on the exterior of a solid.\n\nFormula for surface area of rectangular prism = $2lw + 2lh + 2wh$\n\nwhere, $l$ - length, $w$ - width and $h$ - height of prism\n\nFrom figure: $l$ = $10 cm,\\ w$ = $4 cm$ and $h$ = $5 cm$\n\nsurface area of rectangular prism = $2 \\times 10 \\times 4 + 2 \\times 10 \\times 5 + 2 \\times 4 \\times 5$\n\n= $80 + 100 + 40$\n\n= $220$\n\nTherefore, the surface area of rectangular prism is $220 cm^2$\n\n More topics in Surface Area of a Prism Lateral Surface Area of a Prism Surface Area of a Trapezoidal prism Surface Area of a Square Prism Surface Area of a Triangular Prism\n NCERT Solutions NCERT Solutions NCERT Solutions CLASS 6 NCERT Solutions CLASS 7 NCERT Solutions CLASS 8 NCERT Solutions CLASS 9 NCERT Solutions CLASS 10 NCERT Solutions CLASS 11 NCERT Solutions CLASS 12\n Related Topics Math Help Online Online Math Tutor\n*AP and SAT are registered trademarks of the College Board."
]
| [
null,
"https://image.tutorvista.com/seonew/images/searchbutton.png",
null,
"https://image.tutorvista.com/cms/images/38/surface-area-of-a-prism.jpg",
null,
"https://images.tutorvista.com/cms/images/146/surface-area-of-rectangular-prism.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8298237,"math_prob":0.9997491,"size":2581,"snap":"2019-13-2019-22","text_gpt3_token_len":718,"char_repetition_ratio":0.20178503,"word_repetition_ratio":0.042769857,"special_character_ratio":0.3060829,"punctuation_ratio":0.06746032,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99983174,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-24T02:08:46Z\",\"WARC-Record-ID\":\"<urn:uuid:903b5ec2-0da6-4d35-a92c-9e1a03939066>\",\"Content-Length\":\"52367\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:679df604-e696-486c-bd3e-ed574b2702ac>\",\"WARC-Concurrent-To\":\"<urn:uuid:b81136be-56a1-4b91-a642-3a0f3bdfd58f>\",\"WARC-IP-Address\":\"74.86.236.83\",\"WARC-Target-URI\":\"https://math.tutorvista.com/geometry/surface-area-of-a-prism.html\",\"WARC-Payload-Digest\":\"sha1:JJLVGDFDL4E3QEHEOTZSKOE7PMAFVHMD\",\"WARC-Block-Digest\":\"sha1:AUMLEJQOQ5WLBQ4VFKFYLT6NTNGDJSUD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232257481.39_warc_CC-MAIN-20190524004222-20190524030222-00344.warc.gz\"}"} |
https://cryptographyacademy.com/identification-schemes/protocol/schnorr-sigma-protocol.php | [
"### Peggy\n\n##### Parameters known by Peggy:\n\nComputes the generator $g$:\n\nComputes the public key $h$:\n\nComputes the value $a$:\n\nReceives the challenge $e$\n\nComputes the response $z$:\n\n### Step y/x\n\nBefore Peggy can start Schnorr's sigma protocol she needs two prime numbers $p$ and $q$:\n\n• $p$\n• $q$\n\nUse the left and right arrow keys to navigate.\n\nPeggy receives the two prime numbers $p$ and $q$ and the generator $g_{1}$ of the group $\\mathbb{Z}_{p}$ which is used to compute another generator $g$ of order $q$ from the group $\\mathbb{Z}_{p}^{*}$.\n\nShe sends the prime number $p$ and the generator $g$ to Victor.\n\nPeggy wants to convince Victor about that she is really Peggy, i.e. she know the value of the secret key $w$ corresponding to the public key $h$.\n\nShe chooses $w$ and computes $h$ which she sends to Victor.\n\nPeggy chooses a random integer $r$ and computes the value $a$ which she sends to Victor.\n\nVictor chooses randomly a challenge $e$ which he sends to Peggy.\n\nIn response to the challenge Peggy computes the value $z$ which she sends to Victor.\n\nVictor verifies the received response $z$ from Peggy by checking that the two values are equals. Only if this is the case is Victor convinced about Peggy's identity, i.e. he know that she is in possession of the secret key $w$ corresponding to the public key $h$.\n\n### Victor\n\n##### Parameters known by Victor:\n\nReceives the public key $h$\nReceives the value $a$\nChooses the random challenge $e$:\nReceives the response $z$"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8754962,"math_prob":0.9997279,"size":1533,"snap":"2021-31-2021-39","text_gpt3_token_len":395,"char_repetition_ratio":0.14846304,"word_repetition_ratio":0.15384616,"special_character_ratio":0.307893,"punctuation_ratio":0.08424909,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9993397,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-02T16:42:21Z\",\"WARC-Record-ID\":\"<urn:uuid:079093a2-4c03-4ffa-95af-83a1d12c6f12>\",\"Content-Length\":\"36755\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7bbc5a2e-a911-4f0d-b7c3-b75d6064c274>\",\"WARC-Concurrent-To\":\"<urn:uuid:c10359ac-aa43-42b8-bb29-79a1e209cb31>\",\"WARC-IP-Address\":\"46.30.215.163\",\"WARC-Target-URI\":\"https://cryptographyacademy.com/identification-schemes/protocol/schnorr-sigma-protocol.php\",\"WARC-Payload-Digest\":\"sha1:O55Q4S3INOYYVIXLDVFYZRQ6P37FJB5R\",\"WARC-Block-Digest\":\"sha1:KSRSZQYUI6T3VUIMIF7VF634LEI6QBCR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154321.31_warc_CC-MAIN-20210802141221-20210802171221-00017.warc.gz\"}"} |
https://indico.cern.ch/event/433345/contributions/2373781/ | [
"# Quark Matter 2017\n\n5-11 February 2017\nHyatt Regency Chicago\nAmerica/Chicago timezone\n\n## Lambda-Kaon Femtoscopy in Pb-Pb Collisions at $\\sqrt{s_{NN}}$ = 2.76 TeV with ALICE\n\nNot scheduled\n2h 30m\nHyatt Regency Chicago\n\n#### Hyatt Regency Chicago\n\n151 East Wacker Drive Chicago, Illinois, USA, 60601\nBoard: E07\nPoster\n\n### Speaker\n\nJesse Thomas Buxton (Ohio State University (US))\n\n### Description\n\nWe present results from a femtoscopic analysis of Lambda-Kaon correlations in Pb-Pb collisions at $\\sqrt{s_{NN}}$ = 2.76 TeV by the ALICE experiment at the LHC. All pair combinations of $\\Lambda$ and $\\bar{\\Lambda}$ with K$^{+}$, K$^{-}$ and K$^{0}_{S}$ are analyzed. The femtoscopic correlations are the result of strong final-state interactions, and are fit with a parametrization based on a model by R. Lednicky and V. L. Lyuboshitz . This allows us to both characterize the emission source and measure the scattering parameters for the particle pairs. We observe a large difference in the $\\Lambda$-K$^{+}$ ($\\bar{\\Lambda}$-K$^{-}$) and $\\Lambda$-K$^{-}$ ($\\bar{\\Lambda}$-K$^{+}$) correlations in pairs with low relative momenta (k* < 100 MeV). Additionally, the average of the $\\Lambda$-K$^{+}$ ($\\bar{\\Lambda}$-K$^{-}$) and $\\Lambda$-K$^{-}$ ($\\bar{\\Lambda}$-K$^{+}$) correlation functions is consistent with our $\\Lambda$-K$^{0}_{S}$ ($\\bar{\\Lambda}$-K$^{0}_{S}$) measurement. The results suggest an effect arising from different quark-antiquark interactions in the pairs, i.e. $\\rm s\\bar{s}$ in $\\Lambda$-K$^{+}$ ($\\bar{\\Lambda}$-K$^{-}$) and $\\rm u\\bar{u}$ in $\\Lambda$-K$^{-}$ ($\\bar{\\Lambda}$-K$^{+}$). To gain further insight into this hypothesis, we currently are conducting a Cascade-Kaon femtoscopic analysis.\n\n R. Lednicky and V.L. Lyuboshitz, Sov. J. Nucl. Phys. 35, 770 (1982)\n\nCollaboration ALICE Correlations and Fluctuations\n\n### Primary author\n\nJesse Thomas Buxton (Ohio State University (US))"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7385308,"math_prob":0.9974856,"size":1489,"snap":"2021-21-2021-25","text_gpt3_token_len":453,"char_repetition_ratio":0.1912458,"word_repetition_ratio":0.021621622,"special_character_ratio":0.32505038,"punctuation_ratio":0.10769231,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99956447,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-15T06:30:43Z\",\"WARC-Record-ID\":\"<urn:uuid:6e54a40b-be0e-4db8-8c90-a54f67136f75>\",\"Content-Length\":\"62016\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3a9c1f61-64b4-403f-a2eb-f59f0c1eb127>\",\"WARC-Concurrent-To\":\"<urn:uuid:ab050b4a-efd2-4999-966c-f00777e01c20>\",\"WARC-IP-Address\":\"188.184.23.103\",\"WARC-Target-URI\":\"https://indico.cern.ch/event/433345/contributions/2373781/\",\"WARC-Payload-Digest\":\"sha1:TMC3OGZPMIGVAWTZEBP6PH25UUF5B25I\",\"WARC-Block-Digest\":\"sha1:CJMYDYORWORKGTJRYCJLIQEIOQSTMUP3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487617599.15_warc_CC-MAIN-20210615053457-20210615083457-00000.warc.gz\"}"} |
https://awc.org/online-span-calculator-help/ | [
"Online Span Calculator Help\n##### Span Calculations\n\nLumber design values used to calculate maximum horizontal spans include modulus of elasticity (E), bending strength (Fb), and shear strength (Fv). Bearing strength in compression perpendicular to grain (Fcp) is used to determine the minimum required bearing length at each end of joists and rafters. Calculated spans incorporate design value adjustments appropriate for repetitive-member use (Cr = 1.15), duration of load (CD), lumber size (CF), wet service conditions (CM), and incised lumber (Ci). The 2012 National Design Specification® for Wood Construction (NDS®) specifies appropriate magnitudes for lumber design values and adjustment factors.\n\nMaximum horizontal joist and rafter spans are taken as the smallest span (L) calculated from the following three formulas:",
null,
"based on bending strength (Fb)\n\nwhere s = spacing between joists or rafters\n\nSx = section modulus for strong-axis bending of joist or rafter\n\nwT = total distributed load (D + L, or D + Lr, or D + S)\nsupported by joist or rafter, in terms of load per unit area",
null,
"based on deflection limit and modulus of elasticity (E)\n\nwhere Ix = strong axis moment of inertia for joist or rafter\n\nwL = distributed live load (L or Lr) or distributed snow\nload (S) supported by joist or rafter,\nin terms of load per unit area\n\ndeflection constant = constant term in denominator of\ndeflection limit (e.g., L/360)",
null,
"based on shear strength (Fv)\n\nwhere A = cross-sectional area of joist or rafter\n\n##### Bearing Length\n\nThe minimum required bearing length (lb) at each end of a joist or rafter is determined from the following formula:",
null,
"where t = thickness of joist or rafter"
]
| [
null,
"http://awc.org/wp-content/uploads/2022/02/formula1.gif",
null,
"http://awc.org/wp-content/uploads/2022/02/formula2.gif",
null,
"http://awc.org/wp-content/uploads/2022/02/formula3.gif",
null,
"http://awc.org/wp-content/uploads/2022/02/formula4.gif",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.88180447,"math_prob":0.9651277,"size":2385,"snap":"2023-40-2023-50","text_gpt3_token_len":523,"char_repetition_ratio":0.12641747,"word_repetition_ratio":0.06878307,"special_character_ratio":0.20419288,"punctuation_ratio":0.073891625,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9848596,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-03T15:10:00Z\",\"WARC-Record-ID\":\"<urn:uuid:4111a4c9-5a1a-4830-a40d-76a93b0f844f>\",\"Content-Length\":\"66323\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c8bd4df6-39cf-4b05-856f-0b15cf14b11d>\",\"WARC-Concurrent-To\":\"<urn:uuid:0f1e0ead-edc3-485f-82b8-7116ef04efd4>\",\"WARC-IP-Address\":\"54.187.243.66\",\"WARC-Target-URI\":\"https://awc.org/online-span-calculator-help/\",\"WARC-Payload-Digest\":\"sha1:KMJNKHCDFXOTXGIMJ5VHIBAKBUGGZYKO\",\"WARC-Block-Digest\":\"sha1:I5N6DLVZZCADYT74IBXEMDDOZUULZMHF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100508.23_warc_CC-MAIN-20231203125921-20231203155921-00768.warc.gz\"}"} |
http://slideplayer.com/slide/4986722/ | [
"",
null,
"# Asymptotic Enumerators of Protograph LDPCC Ensembles Jeremy Thorpe Joint work with Bob McEliece, Sarah Fogal.\n\n## Presentation on theme: \"Asymptotic Enumerators of Protograph LDPCC Ensembles Jeremy Thorpe Joint work with Bob McEliece, Sarah Fogal.\"— Presentation transcript:\n\nAsymptotic Enumerators of Protograph LDPCC Ensembles Jeremy Thorpe Joint work with Bob McEliece, Sarah Fogal\n\nOutline Motivation What causes error floors in codes? How can we predict error floors in advance of simulation? “Bad Sets” Codeword Stopping Set Other Sets Protograph Ensembles and Asymptotic Enumerators Computation of Enumerators\n\nWhy error-correcting codes? Error correcting codes are designed to transmit information As efficiently as possible With very low probability of error\n\nWhat is an Error Floor? An error floor occurs when the probability of error doesn’t improve “fast enough” as a channel improves.\n\nWhat’s going on? Error floors occur when the typical mode of failure changes. In “waterfall” region, performance depends on global channel statistics. In “error floor” region, performance depends on channel output near: Low-weight codewords Low-weight stopping sets [FKV 98] Low-weight “trapping sets” [R. 04]\n\nCan we predict error floors without simulation? Predict the “bad sets”: Low-weight codewords Low-weight stopping sets Low-weight “trapping sets” This is difficult to do for particular codes However, for ensembles of codes, the problem becomes feasible, and has been solved for Codeword WE, Regular Ensembles [G 63] Codeword WE, Unstructured Irregular Ensembles [LS 98] Codeword, Stopping Set WE, UIE [Di 04]\n\nWeight Enumerators Defined\n\nCode Basics Recall that a code is a set of vectors of length n. The code on the right is the (7,4) Hamming Code. 0000000 0000111 0011001 0011110 0101010 0101101 0110011 0110100 1111111 1111000 1100110 1100001 1010101 1010010 1001100 1001011 C =\n\nLinear Codes Linear Codes can be represented by their parity check matrices. 0000000 0000111 0011001 0011110 0101010 0101101 0110011 0110100 1111111 1111000 1100110 1100001 1010101 1010010 1001100 1001011 C = 1010101 0011110 1100110 H =\n\nRepresentation by a Graph Parity check matrices can be represented by a graph. 1010101 0011110 1100110 H =\n\nCodeword weight Enumerator for the (7,4) Hamming Code 0000000 0000111 0011001 0011110 0101010 0101101 0110011 0110100 1111111 1111000 1100110 1100001 1010101 1010010 1001100 1001011 C = A(w) w\n\nProtograph Ensembles Protograph is expanded by “N” to obtain code graph. Randomness comes from permutations of each edge type N=4\n\nAverage codeword weight Enumerator of expanded code N=2 For N=2, there are (2!) |E| =4096 codes in the ensemble. The “ensemble average” weight enumerator is shown.\n\nAsymptotic average codeword weight enumerator Plot on log scale…\n\nAsymptotic average codeword weight enumerator Plot on log scale Make quantities intrinsic…\n\nAsymptotic average codeword weight enumerator Plot on log scale Make quantities intrinsic Take Limit\n\nCodewords, Stopping Sets, Trapping Sets\n\nCodewords (on a graph) Assignment x of the variables of a graph such that: each check node is adjacent an even number of times to variables assigned 1 example: x If half of the variables of a codeword are “flipped” in a BSC, ML decoding fails. 
X = 1 0 0 1 1 0 0

Stopping Sets Assignment x of the variables of a graph such that: each check node is adjacent 0 times, or 2 or more times, to variables assigned 1. Example: x, y. If all of a stopping set is erased, BP decoding cannot continue. x = 1 0 0 1 1 0 0, y = 1 0 1 0 1 0 0

Stopping Set Enumerators On the right is a “sneak peek” at a stopping set enumerator. Stopping set enumerators are uniformly larger than codeword enumerators because every codeword is a stopping set.

Trapping Sets Trapping sets are sets of variables that cause quantized decoders to get “stuck”. Usually, trapping sets have a small number of checks connected once to variables assigned a 1. Unfortunately, this is not a good combinatorial characterization, so we forget about trapping sets for now.

Trapping Set Enumerators Words like “usually” and “small number” usually don’t lead to useful combinatorial characterizations… ?

Formalizing “Bad Sets” For a given “word” x, define the vector of assignments adjacent to check c as x_c. For each check node c, define the set Ω_c. Define Ω accordingly. If the Ω_c are the sets of even weight, then Ω is the set of codewords. If the Ω_c are the sets of vectors of non-unit weight, Ω is the set of stopping sets.

Example If the Ω_c are the sets of even weight, then x is not in Ω, because x_1 is not in Ω_1. If the Ω_c are the sets of vectors of non-unit weight, x is in Ω. x = 1 0 1 0 1 0 0, checks c_1, c_2, c_3

Types of words Consider an arbitrary vector x. Define a vector θ where θ_v is the fraction of 1’s assigned to variables of type v. x = 1 0 1 0 0 1 0 1 0 0 0 0 1 1 1 0 0 0 1 0 0, θ = 1 0.25 0.5 0.5 0.25

A Lemma about types Lemma: All words of a given type are equally likely to be a member of Ω, with respect to our ensemble. Sketch of Proof: Suppose x and y have the same type. If x is in Ω for a certain set of permutations (= code in the ensemble), then y is in Ω for some other set of permutations.

Applying the method of Types There is usually a “typical” type θ* of words of any weight Θ. Even if all types are equally numerous, there are only polynomially many types, and this cannot change the exponent.

Computing E(θ) A(Nθ) can be broken down as shown on the right. It is easy to compute the number of words of a given type. The exponent of the probability that all checks of a given type will be satisfied is defined as Φ.

Computing Φ(θ_c) Sanov’s theorem (or something like it) tells us that Φ(θ_c) is the minimum KL-distance to a certain distribution q_{θ_c}. P_{θ_c} is the set of distributions on Ω_c with marginals equal to θ_c.

Finding “p” We can do a constrained minimization to show that the optimal p must have a “Boltzmann distribution” with parameter s. It is easy to compute θ_c from s, but difficult in reverse. It is easy to compute D(p||q) from s.

Optimizing over θ E(θ) is not a convex function, thus in principle we have to evaluate it everywhere to find its maximum E(Θ). In practice, we still use convex optimization techniques.

Using weight enumerators to design codes!

Why are Enumerators Interesting? The zero-crossings of weight enumerators give us lower bounds on the typical size of bad sets. In this example, the zero-crossing of the stopping set enumerator is about 0.1, so we would expect a code of size 10^6 to have a minimum stopping set of size 10^5 (or possibly bigger).

Optimizing Protograph Ensembles Currently, many ensembles are designed only for density evolution threshold.
Such optimized codes are often ignored by actual implementers because desired error rates are around 10^-10. By optimizing simultaneously for threshold and asymptotic enumerator, we may be able to find efficient codes that can be used in these applications.

Open Questions How fast can we compute the weight enumerator (or just the zero-crossing)? What’s the achievable region of threshold and zero-crossing for protograph ensembles?"
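The (7,4) Hamming example from the early slides is small enough to check by brute force. A short Octave sketch using the parity-check matrix H shown on the slides:

```matlab
% Rows of H as given on the slide: 1010101, 0011110, 1100110
H = [1 0 1 0 1 0 1;
     0 0 1 1 1 1 0;
     1 1 0 0 1 1 0];

A = zeros(1, 8);                          % A(w+1) counts codewords of weight w
for k = 0:127                             % enumerate all length-7 binary words
  x = mod(floor(k ./ 2.^(6:-1:0)), 2);    % the 7 bits of k, MSB first
  if all(mod(H * x', 2) == 0)             % x is a codeword iff H*x = 0 (mod 2)
    w = sum(x);
    A(w + 1) = A(w + 1) + 1;
  end
end
disp(A)   % prints 1 0 0 7 7 0 0 1: weights 0, 3, 4, 7 -- as in the A(w) plot
```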
]
| [
null,
"http://slideplayer.com/static/blue_design/img/slide-loader4.gif",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.84485686,"math_prob":0.9548758,"size":7050,"snap":"2019-51-2020-05","text_gpt3_token_len":1951,"char_repetition_ratio":0.12432586,"word_repetition_ratio":0.10841424,"special_character_ratio":0.2883688,"punctuation_ratio":0.08327299,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97952497,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-10T17:23:54Z\",\"WARC-Record-ID\":\"<urn:uuid:90ec3b9a-5f55-42ee-aa44-2ab2eef7d911>\",\"Content-Length\":\"191353\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:48efdc12-561e-4842-bb0d-9dc7a3146681>\",\"WARC-Concurrent-To\":\"<urn:uuid:75c37fa7-2dee-402b-b117-0d40e0b73e32>\",\"WARC-IP-Address\":\"138.201.54.25\",\"WARC-Target-URI\":\"http://slideplayer.com/slide/4986722/\",\"WARC-Payload-Digest\":\"sha1:LQGC765RRJOB4C6AVKZL4ZH3SGP2HJA3\",\"WARC-Block-Digest\":\"sha1:AKRG226NFIQX3ZTZIQZUBQJ6BVEPLS5F\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540528457.66_warc_CC-MAIN-20191210152154-20191210180154-00308.warc.gz\"}"} |
http://ajmie.org/article/248/10.11648.j.ajmie.20190402.11 | [
"Archive\nSpecial Issues",
null,
"Volume 4, Issue 2, March 2019, Page: 28-34\nAn Integrated One-step Equation for Solving Duct/Pipe Friction Loss by Hand Calculator\nChung-Yueh Ho, Tempace HVAC&R Consultancy Firm, Taiwan\nCheng-Ta Ho, Tempace HVAC&R Consultancy Firm, Taiwan\nReceived: Jun. 18, 2019; Accepted: Sep. 12, 2019; Published: Sep. 26, 2019\nAbstract\nASHRAE Handbooks are the worldwide reference books for HVAC engineers. When we tried to develop a duct software, we also followed the steps shown in 2013 ASHRAE Handbook. Accidently we found that some friction loss data of a duct design example seemed contrary to the data obtained from duct friction chart. Then we go back to adopt Darcy’s and Colebrook’s equations that have been used to solve duct/pipe friction loss for decades. However, the calculation process needs to use complicated computer program. After doing huge trial and error processes by computerized program, we obtained one integrated equation that can be used to calculate duct/pipe friction loss by hand calculator. We own an HVAC&R consultancy firm and have the opportunity to contact many real duct/pipe projects. This empirical equation has been successfully applied to dozens of actual duct and pipe design projects. For Reynolds Number (Re) is greater than 10,000 (i.e. turbulent flow), our analysis shows the friction losses obtained from this integrated equation are within ±2.0% of those obtained from Darcy’s and Colebrook’s equations. The accuracy (±2.0%) is good enough for engineers doing realistic duct/pipe designs. Hence, this one-step equation can be the handy alternative for Darcy’s and Colebrook’s equations. For the practical duct/pipe designs, engineers can calculate friction loss easily, no need to use iterative method.\nKeywords\nDarcy Equation, Colebrook Equation, Moody Chart, Friction Loss Chart\nChung-Yueh Ho, Cheng-Ta Ho, An Integrated One-step Equation for Solving Duct/Pipe Friction Loss by Hand Calculator, American Journal of Mechanical and Industrial Engineering. Vol. 4, No. 2, 2019, pp. 28-34. doi: 10.11648/j.ajmie.20190402.11\nReference\n\nBrown, G. O. “The History of the Darcy-Weisbach Equation for pipe Flow Resistance” Environmental and Water Resources History. American Society of Civil Engineers. Pp. 34-43. ISBN978-0-7844-0650-2, 2003.\n\nColebrook, C. F.: Turbulent flow in pipes, with particular reference to the transition region between the smooth and rough pipe laws, Journal of the Institution of Civil Engineers, England, Vol. 11, No.4, 1939.\n\nMoody, L. F.: Friction factors for pipe flow Transactions of the ASME. Vol. 66, No.8, 1944.\n\nASHRAE Handbook 2017, Figure 10 (p21.9) in Chapter 21.\n\nASHRAE Handbook 2017, Figure 4 in Chapter 22.\n\nASHRAE Handbook 2013, Example 7 (p21.22) in Chapter 21.\n\nMoody, L. F.: An approximate formula for pipe friction factors, Transactions of the ASME, Vol. 69, 1947.\n\nZigrang, D. J. and Sylvester, N. D.: Explicit approximations to the solution of Colebrook’s friction factor equation, AIChE Journal, Vol. 28, No.3, 1982.\n\nHaaland, S. E.: Simple and explicit formulas for the friction factor in turbulent pipe flow, Transactions of the ASME, Journal of Fluids Engineering, Vol. 105, No.1, 1983.\n\nRomeo, Royo, and Monzon, “Improved explicit equations for estimation of friction factor in rough and smooth pipes” 2002.\n\nLester, T. “Solving for Friction Factor.” ASHRAE Journal July, 2003.\n\nAvci and Karagoz, “A novel Explicit Equation for friction factor in smooth and rough pipes”, ASME J. Fluids Eng., 131, 2009.\n\nMore, A. A. 
“Analytical solutions for the Colebrook and White equation and for pressure drop in ideal gas flow in pipes”. Chemical Engineering Science. 61 (16), 2006.\n\nFang, X, Xua, Y. and Zhou Z., “New correlations of single-phase friction factor for turbulent pipe flow and evaluation of existing single-phase friction factor correlations”, Nuclear Engineering and Design, Vol. 241, No. 3, 2011.\n\nBrkic, Dejan, Review of explicit approximations to the Colebrook relation for the flow friction, Journal of Petroleum Science and Engineering, 77 (1), Elsevier, 2011.",
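The paper's own one-step equation is not reproduced on this page, but the iterative baseline it aims to replace is easy to sketch. The Colebrook equation, 1/sqrt(f) = -2 log10( (ε/D)/3.7 + 2.51/(Re sqrt(f)) ), converges quickly under fixed-point iteration in x = 1/sqrt(f); a minimal Octave version:

```matlab
% Darcy friction factor f from the Colebrook equation (turbulent flow).
% rel_rough is the relative roughness epsilon/D.
function f = colebrook(Re, rel_rough)
  x = 1 / sqrt(0.02);                 % initial guess, x = 1/sqrt(f)
  for iter = 1:50
    x_new = -2 * log10(rel_rough / 3.7 + 2.51 * x / Re);
    if abs(x_new - x) < 1e-12, break; end
    x = x_new;
  end
  f = 1 / x_new^2;
end
% e.g. colebrook(1e5, 1e-4) returns roughly 0.0185, in line with the Moody chart
```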
null,
""
]
| [
null,
"http://ajmie.org/spgj/decorator/img/article/article_r2_c3.jpg",
null,
"http://ajmie.org/spgj/decorator/img/home/home_r16_c13.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8012494,"math_prob":0.84111327,"size":4842,"snap":"2019-51-2020-05","text_gpt3_token_len":1242,"char_repetition_ratio":0.12112443,"word_repetition_ratio":0.120617114,"special_character_ratio":0.26311442,"punctuation_ratio":0.20486815,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.961253,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-15T05:34:41Z\",\"WARC-Record-ID\":\"<urn:uuid:38f84f62-a578-4629-907e-1a33369ccd7b>\",\"Content-Length\":\"39478\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4c0e595b-fb60-4b45-af9f-6ac9c429242a>\",\"WARC-Concurrent-To\":\"<urn:uuid:28690d99-e7a2-4591-91fa-a73373397de7>\",\"WARC-IP-Address\":\"47.88.20.168\",\"WARC-Target-URI\":\"http://ajmie.org/article/248/10.11648.j.ajmie.20190402.11\",\"WARC-Payload-Digest\":\"sha1:3NMWNDXO5SJ2N5MKPKPZ7KPDIPPNA2OX\",\"WARC-Block-Digest\":\"sha1:DRHE2IIOZXK4IDI5OKSXQCEK3LJF4FPZ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541301598.62_warc_CC-MAIN-20191215042926-20191215070926-00121.warc.gz\"}"} |
https://englishlangkan.com/question/it-is-not-possible-to-prove-one-pair-of-triangles-congruent-and-then-use-their-congruent-corresp-12786548-74/ | [
"## It is not possible to prove one pair of triangles congruent and then use their congruent corresponding parts to prove another pair congruent\n\nQuestion\n\nIt is not possible to prove one pair of triangles congruent and then use their congruent corresponding parts to prove another pair congruent. True or false\n\nin progress 0\n2 weeks 2021-10-14T01:39:32+00:00 2 Answers 0\n\ntrue\n\nThe wording does not quite mean anything,\n\nbut what I think was meant to ask is\n\n“if we use some parts of two triangles to prove they are congruent,\n\ncan we then use that to prove that\n\na pair of corresponding parts not used before are congruent?”\n\nYes, of course,\n\nCorresponding Parts of Congruent Triangles are Congruent,\n\nwhich teachers usually abbreviate as CPCTC.\n\nFor example, if we find that\n\nside AB is congruent with side DE,\n\nside BC is congruent with side EF, and\n\nangle ABC is congruent with angle DEF,\n\nwe can prove that triangles ABC and DEF are congruent\n\nby Side-Angle-Side (SAS) congruence.\n\nWe then, by CPCTC, can conclude that other pairs of corresponding parts are congruent:\n\nside AB is congruent with side DE,\n\nangle BCA is congruent with angle EFD, and\n\nangle CAB is congruent with angle FDE.\n\nIt was possible (by CPCTC) to prove those last 3 congruence statements,\n\nafter proving the triangles congruent."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.91422856,"math_prob":0.7970622,"size":1357,"snap":"2021-43-2021-49","text_gpt3_token_len":324,"char_repetition_ratio":0.17960088,"word_repetition_ratio":0.026431719,"special_character_ratio":0.20265292,"punctuation_ratio":0.11320755,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9721442,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-28T11:10:37Z\",\"WARC-Record-ID\":\"<urn:uuid:01a60c79-a2d1-430e-b2f2-60c88e71aede>\",\"Content-Length\":\"69854\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:08c0a60f-a944-440a-95d3-b3024937307c>\",\"WARC-Concurrent-To\":\"<urn:uuid:9cb5eff7-620d-4bed-acf0-51d93ca45f0b>\",\"WARC-IP-Address\":\"172.96.186.144\",\"WARC-Target-URI\":\"https://englishlangkan.com/question/it-is-not-possible-to-prove-one-pair-of-triangles-congruent-and-then-use-their-congruent-corresp-12786548-74/\",\"WARC-Payload-Digest\":\"sha1:YZR74LDUA5TRB6VOREVBASJTGNI5KXO6\",\"WARC-Block-Digest\":\"sha1:FAS2KXQKXXV7GBRLX2XX4FGFKIVUFS2B\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588284.71_warc_CC-MAIN-20211028100619-20211028130619-00575.warc.gz\"}"} |
https://www.physicsforums.com/threads/lazy-flea-and-log-kinda-tough.348045/ | [
"# Lazy flea and log. Kinda tough.\n\nI'm given the following problem. A log sits in the woods with a radius \"r\". Lazy flea wants to jump over the log with the least amount of velocity necessary. If distance \"d\" away from the center of the log, and angle theta is the angle of velocity above the horizontal. Find theta, d, and v(min).\n\nI'm mostly lost as to how the hell to even begin this equation. This is the first projectile motion problem I've had where I actually need to find the derivative of something. The maximum hight is not going to be 2R, because that makes the distance farther, and thus the velocity increases. The distance can not be too close either because then the angle increases and requires a larger velocity.\n\nWith this many unknowns, i'm a bit lost. What I think I need to do is find a relation of the radius to this odd parabola of my projectile motion. So I'm looking at it like a geometry problem.\n\nAny hints are appreciated!\n\nberkeman\nMentor\nI'm given the following problem. A log sits in the woods with a radius \"r\". Lazy flea wants to jump over the log with the least amount of velocity necessary. If distance \"d\" away from the center of the log, and angle theta is the angle of velocity above the horizontal. Find theta, d, and v(min).\n\nI'm mostly lost as to how the hell to even begin this equation. This is the first projectile motion problem I've had where I actually need to find the derivative of something. The maximum hight is not going to be 2R, because that makes the distance farther, and thus the velocity increases. The distance can not be too close either because then the angle increases and requires a larger velocity.\n\nWith this many unknowns, i'm a bit lost. What I think I need to do is find a relation of the radius to this odd parabola of my projectile motion. So I'm looking at it like a geometry problem.\n\nAny hints are appreciated!\n\nI think the max height of the parabola is indeed 2r. When the flea jumps, the horizontal component of the velocity will determine how long it takes him to reach the peak of the log. That has to match the time in the vertical direction for the flea to reach the height of the peak of the log. Use those two equations to determine the distance away from the log where he starts his jump, based on velocity.\n\nYou might be right that there is some geometry involved as well. Draw some of the parabolas on top of the round log cross-section, to see how close you can get to the log before you can't just clip the top going over. You should be able to use a graphing calculator or Excel to play with that a bit to get some intuition going...\n\nI know that 2R is not the peak of the parabola because I was specifically told that it was not! The reason for this is for the apex to be 2R the distance away from the log increases. As the distance increases, the more velocity in the x component necessary. The problem requires however that I find the optimal minimum velocity out of all possible distances and thetas.\n\nberkeman\nMentor\nI know that 2R is not the peak of the parabola because I was specifically told that it was not! The reason for this is for the apex to be 2R the distance away from the log increases. As the distance increases, the more velocity in the x component necessary. The problem requires however that I find the optimal minimum velocity out of all possible distances and thetas.\n\nAh, interesting variation! So maybe you can get the minimum initial v total, even though your vertical v has to be bigger. 
So yeah, use geometry to fit the parabolas to the contour of the circular log you have to clear, and calculate the initial v for each (find the equation for it). Then just minimize the total initial v with respect to the takeoff distance from the log.

Sounds like a start. I'll try playing around with that. Let me know if you have any other ideas that may help me out.

Alright, so I tried to fit some geometries together and assumed that wherever the flea jumped over would be tangent to the circle, so the distance from the center of the log and the point I called tangent were both distance "D" (being both tangent). Then I bisected them into two congruent triangles, found theta in terms of R and D, plugged into my projectile equations, then took the derivative of v with respect to D, set it for a minimum, and got my values.

I showed this to the instructor and he pointed out that, by assuming tangency, I drew my motion in straight lines to those points, when actually the motion is parabolic, so this is not correct.

Any ideas?

If this wasn't clear let me know and I can scan some of my work.

berkeman
Mentor
Alright, so I tried to fit some geometries together and assumed that wherever the flea jumped over would be tangent to the circle, so the distance from the center of the log and the point I called tangent were both distance "D" (being both tangent). Then I bisected them into two congruent triangles, found theta in terms of R and D, plugged into my projectile equations, then took the derivative of v with respect to D, set it for a minimum, and got my values.

I showed this to the instructor and he pointed out that, by assuming tangency, I drew my motion in straight lines to those points, when actually the motion is parabolic, so this is not correct.

Any ideas?

If this wasn't clear let me know and I can scan some of my work.
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.94747305,"math_prob":0.91391087,"size":1544,"snap":"2021-31-2021-39","text_gpt3_token_len":348,"char_repetition_ratio":0.10194805,"word_repetition_ratio":0.7649123,"special_character_ratio":0.22279793,"punctuation_ratio":0.094936706,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9978401,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-16T21:15:39Z\",\"WARC-Record-ID\":\"<urn:uuid:5a31406b-3ef9-4da7-b5c8-ce2a39f860f1>\",\"Content-Length\":\"82803\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:65df9531-97b8-4e8c-b071-7d50632f0cb0>\",\"WARC-Concurrent-To\":\"<urn:uuid:a78aca49-4098-492d-8120-f036df5cbb76>\",\"WARC-IP-Address\":\"172.67.68.135\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/lazy-flea-and-log-kinda-tough.348045/\",\"WARC-Payload-Digest\":\"sha1:OP7N5GAGNSXSDOKG6BYCJANPNQLGT7QN\",\"WARC-Block-Digest\":\"sha1:IHPOQETK2DFZNJUAMO3VKJIKESSA2EHN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780053759.24_warc_CC-MAIN-20210916204111-20210916234111-00705.warc.gz\"}"} |
https://tech.apdaga.com/2021/06/coursera-machine-learning-week-3-assignment-solution.html | [
"# Coursera: Machine Learning (Week 3) [Assignment Solution] - Andrew NG\n\n▸ Logistic regression and apply it to two different datasets.\n\nI have recently completed the Machine Learning course from Coursera by Andrew NG.\n\nWhile doing the course we have to go through various quiz and assignments.\n\nHere, I am sharing my solutions for the weekly assignments throughout the course.\n\nThese solutions are for reference only.\n\nIt is recommended that you should solve the assignments by yourself honestly then only it makes sense to complete the course.\nBut, In case you stuck in between, feel free to refer the solutions provided by me.\n\n#### NOTE:\n\nDon't just copy paste the code for the sake of completion.\nEven if you copy the code, make sure you understand the code first.\n\nClick here to check out week-2 assignment solutions, Scroll down for the solutions for week-3 assignment.\n\nIn this exercise, you will implement logistic regression and apply it to two different datasets. Before starting on the programming exercise, we strongly recommend watching the video lectures and completing the review questions for the associated topics.\n\nRecommended Machine Learning Courses:\n\nIt consist of the following files:\n• ex2.m - Octave/MATLAB script that steps you through the exercise\n• ex2 reg.m - Octave/MATLAB script for the later parts of the exercise\n• ex2data1.txt - Training set for the first half of the exercise\n• ex2data2.txt - Training set for the second half of the exercise\n• submit.m - Submission script that sends your solutions to our servers\n• mapFeature.m - Function to generate polynomial features\n• plotDecisionBoundary.m - Function to plot classifier's decision boundary\n• Function to plot 2D classification data\n• Sigmoid Function\n• Logistic Regression Cost Function\n• Logistic Regression Prediction Function\n• Regularized Logistic Regression Cost\n• YouTube videos featuring Free IOT/ML tutorials\n* indicates files you will need to complete\n\n### plotData.m :\n\n`function plotData(X, y) %PLOTDATA Plots the data points X and y into a new figure % PLOTDATA(x,y) plots the data points with + for the positive examples % and o for the negative examples. X is assumed to be a Mx2 matrix. % ====================== YOUR CODE HERE ====================== % Instructions: Plot the positive and negative examples on a % 2D plot, using the option 'k+' for the positive % examples and 'ko' for the negative examples. % %Seperating positive and negative results pos = find(y==1); %index of positive results neg = find(y==0); %index of negative results % Create New Figure figure; %Plotting Positive Results on % X_axis: Exam1 Score = X(pos,1) % Y_axis: Exam2 Score = X(pos,2) plot(X(pos,1),X(pos,2),'g+'); %To keep above plotted graph as it is. hold on; %Plotting Negative Results on % X_axis: Exam1 Score = X(neg,1) % Y_axis: Exam2 Score = X(neg,2) plot(X(neg,1),X(neg,2),'ro'); % ========================================================================= hold off;end`\n\n### sigmoid.m :\n\n`function g = sigmoid(z) %SIGMOID Compute sigmoid function % g = SIGMOID(z) computes the sigmoid of z. % You need to return the following variables correctly g = zeros(size(z)); % ====================== YOUR CODE HERE ====================== % Instructions: Compute the sigmoid of each value of z (z can be a matrix, % vector or scalar). 
g = 1./(1+exp(-z));

% =============================================================
end
```

### costFunction.m :

```matlab
function [J, grad] = costFunction(theta, X, y)
%COSTFUNCTION Compute cost and gradient for logistic regression
%   J = COSTFUNCTION(theta, X, y) computes the cost of using theta as the
%   parameter for logistic regression and the gradient of the cost
%   w.r.t. to the parameters.

% Initialize some useful values
m = length(y); % number of training examples

% You need to return the following variables correctly
J = 0;
grad = zeros(size(theta));

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta.
%               You should set J to the cost.
%               Compute the partial derivatives and set grad to the partial
%               derivatives of the cost w.r.t. each parameter in theta
%
% Note: grad should have the same dimensions as theta
%
% DIMENSIONS:
%   theta = (n+1) x 1
%   X     = m x (n+1)
%   y     = m x 1
%   grad  = (n+1) x 1
%   J     = Scalar

z   = X * theta;   % m x 1
h_x = sigmoid(z);  % m x 1

J = (1/m)*sum((-y.*log(h_x))-((1-y).*log(1-h_x))); % scalar

grad = (1/m)* (X'*(h_x-y)); % (n+1) x 1

% =============================================================
end
```

### predict.m :

```matlab
function p = predict(theta, X)
%PREDICT Predict whether the label is 0 or 1 using learned logistic
%regression parameters theta
%   p = PREDICT(theta, X) computes the predictions for X using a
%   threshold at 0.5 (i.e., if sigmoid(theta'*x) >= 0.5, predict 1)

m = size(X, 1); % Number of training examples

% You need to return the following variables correctly
p = zeros(m, 1);

% ====================== YOUR CODE HERE ======================
% Instructions: Complete the following code to make predictions using
%               your learned logistic regression parameters.
%               You should set p to a vector of 0's and 1's
%
% DIMENSIONS:
%   X     = m x (n+1)
%   theta = (n+1) x 1

h_x = sigmoid(X*theta);
p = (h_x >= 0.5);
% p = double(sigmoid(X * theta)>=0.5);

% =========================================================================
end
```

### costFunctionReg.m :

```matlab
function [J, grad] = costFunctionReg(theta, X, y, lambda)
%COSTFUNCTIONREG Compute cost and gradient for logistic regression with regularization
%   J = COSTFUNCTIONREG(theta, X, y, lambda) computes the cost of using
%   theta as the parameter for regularized logistic regression and the
%   gradient of the cost w.r.t. to the parameters.

% Initialize some useful values
m = length(y); % number of training examples

% You need to return the following variables correctly
J = 0;
grad = zeros(size(theta));

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta.
%               You should set J to the cost.
%               Compute the partial derivatives and set grad to the partial
%               derivatives of the cost w.r.t. each parameter in theta
%
% DIMENSIONS:
%   theta = (n+1) x 1
%   X     = m x (n+1)
%   y     = m x 1
%   grad  = (n+1) x 1
%   J     = Scalar

z   = X * theta;   % m x 1
h_x = sigmoid(z);  % m x 1

reg_term = (lambda/(2*m)) * sum(theta(2:end).^2);

J = (1/m)*sum((-y.*log(h_x))-((1-y).*log(1-h_x))) + reg_term; % scalar

grad(1)     = (1/m)* (X(:,1)'*(h_x-y));                               % 1 x 1
grad(2:end) = (1/m)* (X(:,2:end)'*(h_x-y)) + (lambda/m)*theta(2:end); % n x 1

% =============================================================
end
```
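A quick sanity check of the functions above on made-up numbers (assuming the .m files are on your Octave path; this toy data is ours, not the course's):

```matlab
X = [1 1; 1 2; 1 3];        % 3 samples: intercept column plus one feature
y = [0; 0; 1];
theta = zeros(2, 1);

[J, grad] = costFunction(theta, X, y)
% J = 0.6931... = -log(0.5): with theta = 0, every hypothesis h_x is 0.5

p = predict([-5; 2], X)
% p = [0; 0; 1]: sigmoid(X*theta) crosses 0.5 exactly where X*theta >= 0
```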
If you think that more optimization can be done, then put suggest the corrections / improvements.\n\n--------------------------------------------------------------------------------\n&\nClick here to see more codes for Raspberry Pi 3 and similar Family.\n&\nClick here to see more codes for NodeMCU ESP8266 and similar Family.\n&\nClick here to see more codes for Arduino Mega (ATMega 2560) and similar Family.\n\nFeel free to ask doubts in the comment section. I will try my best to solve it.\nIf you find this helpful by any mean like, comment and share the post.\nThis is the simplest way to encourage me to keep doing such work.\n\nThanks and Regards,\n-Akshay P. Daga\n\n1.",
null,
"how could you do this please explain me...\n\n1.",
null,
"What explanation you want?\n\n2.",
null,
"1.",
null,
"You can copy the the code from above code sections.\n\n3.",
null,
"Hi Akshay, Please may I have theses files as well:\n\nex2.m\nex2 reg.m\nex2data1.txt\nex2data2.txt\nsubmit.m\nmapFeature.m\nplotDecisionBoundary.m\n\n1.",
null,
"You can get those files from Coursera assignments. I don't have those with me now.\n\n4.",
null,
"can you please tell me what you did by this\n\n5.",
null,
"this means:- take the transpose of feature matrix X(i.e X') and multiply it with the difference of matrices h_x and y i.e the matrix with sigmoid outputs and the result matrix(y). Finally multiply the end product with 1/m , where m is the number of training examples.\n\nThis is the vectorized implementation of the code that's actually way more lengthier to implement using loops.\n\n6.",
null,
"Hi, can you please explain the predict function?\n\n7.",
null,
"In this gradient decent the number of iteration are not specified so how is the gradient decent working? can someone please explain?\n\n8.",
null,
"I used the exact code at the end but I'm still getting 65/100 not able to figure out the reason\n\n1.",
null,
"Did you figure out the reason yet?\n\n9.",
null,
"Hi !! why didn't you use sum() function for grad even why formula contains that ?\n\n1.",
null,
"sum() is used for the summation in the formula.\nHere We are doing matrix multiplication which itself consist of \"sum of product\". So, no need of external sum function.\nPlease try to do it on paper by yourself, you will get clear idea.\nThanks\n\n10.",
null,
"we have learned that Z= theta transpose X then why are using Z=X multiplied by theta in the above codes ?\n\n1.",
null,
"When we are calculating z(small z) for a single sample, then it is z=theta' * x. (here small x)\nBut When you do the same computation for all the samples at the same time then we call it as Z (Capital Z).\nZ = X * theta. (Here Capital X)\n\nTry to do it using pen-paper, you will get clear understanding.\n\n11.",
null,
"I tried coding for predict function in the following way:\n\nh_x = sigmoid(X*theta);\nif (0<=h_x<0.5)\np=0;\nelseif (0.5<=h_x<=1)\np=1;\nendif\n\nI know I did it in a long way but the accuracy that I am getting 60.00. Your code gave me the accuracy 89.00. Can you please help me understand what's wrong with this and what's the exact difference between your code and mines'?\n\n1.",
null,
"P is a matrix with dimensions m x 1.\nSolution:\nYou can put your code in a \"for\" loop and check the value of each element in h_x and accordingly set the value of each element in p.\n\nIt will work.\n\n12.",
null,
"hey bro it says z not defined why???\n\n1.",
null,
"Hi, I think you are doing this assignment in Octave and that's why you are facing this issue.\n\nChethan Bhandarkar has provided solution for it. Please check it out: https://www.apdaga.com/2018/06/coursera-machine-learning-week-2.html?showComment=1563986935868#c4682866656714070064\n\nThanks\n\n13.",
null,
"I have copy the exact code for plotData.m , and all the others program worked very well but I am still getting 70/100. Can you tel what's the problem ?\n\n14.",
null,
"Can you tell me , how can I run \"ex2\" script in console ?\n\n15.",
null,
"hi I want to clarify few things from you,\nI have read in regression, these are few important points which have not been covered in andrew ng regression topic, how to find how significant your variable is, significance of p value and R^2 (R-square) values. I would like to know more about them. kindly share some sources.\n\n16.",
null,
"HI, The line code reg_term = (lambda/(2*m)) * sum(theta(2:end).^2); in costFunctionReg function,\n\ncan you explain more about this part theta(2:end) , what does it mean and how did you deduce it,\n\n17.",
null,
"I used\nfor i=1:size(X,1)\nif sigmoid(X*theta)>=0.5\np=sigmoid(X*theta);\n\nend\nas well as,\nh_x = sigmoid(X*theta);\nfor i=1:size(X,1)\n\nif (0<=h_x<0.5)\np=0;\nelseif (0.5<=h_x<=1)\np=1;\nend\nbut i am getting 40 accuracy it is working only with your code.why sir?\n\n18.",
null,
"Hi there,\nI am trying the the same code as yours of sigmoid() function but each time it is getting an error saying that\n\n'z' undefined near line 6 column 18\nerror: called from\nsigmoid at line 6 column 5\n\n1.",
null,
"Hi, I think you are doing this assignment in Octave and that's why you are facing this issue.\n\nChethan Bhandarkar has provided solution for it. Please check out the comment by Chethan Bhandarkar: https://www.apdaga.com/2018/06/coursera-machine-learning-week-2.html?showComment=1563986935868#c4682866656714070064\n\nThanks\n\n19.",
null,
"Hello Akshay,\nIt'd be great if you kindly share the code for \"fminunc\" in this week's files(wherever needed), coz i don't understand that particular function well, neither did i get its solution anywhere else on internet.\n\n1.",
null,
"Hi Ankit,\nSorry but I don't have the code for \"fminunc\".\n\n20.",
null,
"21.",
null,
"Hey it says my plot is empty can someone help?\n\n22.",
null,
"I am facing this type of problem in matlab , what can i do ? how to fix that n where ??\n\n'fminunc' requires Optimization Toolbox.\n\nError in ex2 (line 99)\nfminunc(@(t)(costFunction(t, X, y)), initial_theta, options);\n\n23.",
null,
"In sigmoid\nerror in line 6 (the preallocated value assigned to variable 'g' might be unused)\n\nwhat should i do\n\n1.",
null,
"How's value of 'g' is unused. 'g' is nothing but output of sigmoid function.\nIf you are getting some msg, it must be warning not error. So, don't worry about it, keep it as it is. (But I don't think you should get any kind of warning like this).\nline 6, is called initialization of variable.\n\n24.",
null,
"Hi Akshay can you please explain why we use this X(:,2:end) and theta(2:end) instead of plain X and theta??\n\n1.",
null,
"It's because as per the theory in videos, We don't apply regularization on theta_0. Regularization is applied from theta_1 onwards.\nand that's why 2 gradients. 1st corresponding to theta_0 and other for theta_1 onwards.\n\n25.",
null,
"And also why use two gradents?\n\n26.",
null,
"Good day sir,\nim new in this course...i could not fully understand the assignment in week 3...as i enter my code...i think still in error..\n\n27.",
null,
"1.",
null,
"Predict function is fairly simple. You have implemented your gradient and now you just have to predict whether the answer will be 1 or 0... So, what will you do is check for the result > 0.5. If it is above the 0.5, then prediction will be true (1), otherwise false (0)\n\n2.",
null,
"@Hassan Ashas Thank you very much for your explanation.\n\n28.",
null,
"costfuntion is not returning the scalar value, it is returning the 1*100 matrix.\n\n29.",
null,
"Hello Akshay,\nI keep getting this error for the costFunctionReg.m file:\n\nsyntax error\n>>> reg_term = (lambda/2*m)) * sum(theta(2:end).^2);\n^\nWhat is the problem here I do not understand.\n\nThank you\n\n1.",
null,
"Opening and closing brackets are not matching you code.\n\nNOTE: check the brackets are \"2*m\"\n\nYOUR CODE: reg_term = (lambda/2*m)) * sum(theta(2:end).^2);\nWORKING CODE: reg_term = (lambda/(2*m)) * sum(theta(2:end).^2);\n\n30.",
null,
"Hello Akshay,\nWhile computing cost function I am getting so many outputs\n\n1.",
null,
"You should only get [J, grad] as a output of costFunction & costFunctionReg.\n\n31.",
null,
"Error - theta may not be defined , predict function\n\n32.",
null,
"hi i have a doubt i took theta as [zeros(n+1),1] it is giving me 0 and i cant submit the assignment can you specify initial value of theta and theta and values of X. i am totally confused\n\n33.",
null,
"nothing is working here\nevery time it is showing\n>> plotData\n\nerror: 'y' undefined near line 14 column 12\nerror: called from\nplotData at line 14 column 5\n>>\n\n34.",
null,
"J = (1 / m) * sum ((- y. * Log (h_x)) - ((1-y). * Log (1-h_x))) the log representation in this equation means ln isn't it? So, shouldn't we write it as log (1-h_x) / log (10).\n\n35.",
null,
"function [J, grad] = costFunctionReg(theta, X, y, lambda)\n%COSTFUNCTIONREG Compute cost and gradient for logistic regression with regularization\n% J = COSTFUNCTIONREG(theta, X, y, lambda) computes the cost of using\n% theta as the parameter for regularized logistic regression and the\n% gradient of the cost w.r.t. to the parameters.\n\n% Initialize some useful values\nm = length(y); % number of training examples\n\n% You need to return the following variables correctly\nJ = 0;\n\n% ====================== YOUR CODE HERE ======================\n% Instructions: Compute the cost of a particular choice of theta.\n% You should set J to the cost.\n% Compute the partial derivatives and set grad to the partial\n% derivatives of the cost w.r.t. each parameter in theta\n\n[J, grad] = costFunction(theta, X, y);\nfeats = theta(2:end);\nJ = J + lambda / (2 * m) * (feats' * feats);\n\n% =============================================================\n\nend\n\n36.",
null,
"My question is about the solved subroutine 'plotDecisionBoundary.m'\nLine 20 : plot_y\nI didn't understand the definition of this\nInfact how this particular code helped to plot the decision boundary! Please explain..\n\n37.",
null,
"so in cost function grad is basically you doing gradient descent right? but what is the use of 1/m? i'm really confused sorry\n\n1.",
null,
"While calculating cost function, we are doing sum (summation) operation over 'm' samples. And then dividing it by 'm' in order to scale the output (as a scaling factor).\n\n38.",
null,
"Muje 55 marks hi aa rahe he mane code bhi sahi likha he phir bhi...logistic regression cost and regularised logistic regression gradient dono me 0 marks he.."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.83858275,"math_prob":0.95412016,"size":17615,"snap":"2021-43-2021-49","text_gpt3_token_len":4446,"char_repetition_ratio":0.15672024,"word_repetition_ratio":0.18756661,"special_character_ratio":0.29673573,"punctuation_ratio":0.1403404,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9980678,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-05T01:41:49Z\",\"WARC-Record-ID\":\"<urn:uuid:f276fe4e-71d0-4cbe-b096-0115e3bf2ba1>\",\"Content-Length\":\"343045\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:060efc12-232f-4294-a5db-03a75320c8ca>\",\"WARC-Concurrent-To\":\"<urn:uuid:4ee5a2d9-1e52-4653-8edb-1a62871c0e34>\",\"WARC-IP-Address\":\"142.250.188.211\",\"WARC-Target-URI\":\"https://tech.apdaga.com/2021/06/coursera-machine-learning-week-3-assignment-solution.html\",\"WARC-Payload-Digest\":\"sha1:QDE7UGJGAJKIAQ2YYHU2HCLRUJVPS5QU\",\"WARC-Block-Digest\":\"sha1:JRMK6IAV6O2BRMXWFI5CKFVCLB33LBII\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363134.25_warc_CC-MAIN-20211205005314-20211205035314-00040.warc.gz\"}"} |
https://www.hindawi.com/journals/aaa/2012/572493/ | [
"/ / Article\n\nResearch Article | Open Access\n\nVolume 2012 |Article ID 572493 | https://doi.org/10.1155/2012/572493\n\nXuejun Wang, Shuhe Hu, Wenzhi Yang, Xinghui Wang, \"Convergence Rates in the Strong Law of Large Numbers for Martingale Difference Sequences\", Abstract and Applied Analysis, vol. 2012, Article ID 572493, 13 pages, 2012. https://doi.org/10.1155/2012/572493\n\n# Convergence Rates in the Strong Law of Large Numbers for Martingale Difference Sequences\n\nAccepted14 Jun 2012\nPublished31 Jul 2012\n\n#### Abstract\n\nWe study the complete convergence and complete moment convergence for martingale difference sequence. Especially, we get the Baum-Katz-type Theorem and Hsu-Robbins-type Theorem for martingale difference sequence. As a result, the Marcinkiewicz-Zygmund strong law of large numbers for martingale difference sequence is obtained. Our results generalize the corresponding ones of Stoica (2007, 2011).\n\n#### 1. Introduction\n\nThe concept of complete convergence was introduced by Hsu and Robbins as follows. A sequence of random variables is said to converge completely to a constant if for all . In view of the Borel-Cantelli lemma, this implies that almost surely (a.s.). The converse is true if the are independent. Hsu and Robbins proved that the sequence of arithmetic means of independent and identically distributed (i.i.d.) random variables converges completely to the expected value if the variance of the summands is finite. Erdös proved the converse. The result of Hsu-Robbins-Erdös is a fundamental theorem in probability theory and has been generalized and extended in several directions by many authors. One of the most important generalizations is Baum and Katz for the strong law of large numbers as follows.\n\nTheorem A (see Baum and Katz ). Let and let . Let be a sequence of independent and identically distributed random variables. Assume further that if . Then the following statements are equivalent:(i),(ii) for all .\n\nMotivated by Baum and Katz for independent and identically distributed random variables, many authors studied the Baum-Katz-type Theorem for dependent random variables; see, for example, -mixing random variables, -mixing random variables, negatively associated random variables, martingale difference sequence, and so forth.\n\nOur emphasis in the paper is focused on the Baum-Katz-type Theorem for martingale difference sequence. Recently, Stoica [4, 5] considered the following series that describes the rate of convergence in the strong law of large numbers: They obtained the follow results.\n\nTheorem B (see Stoica ). Let be an -bounded martingale difference sequence, and let . Then series (1.1) converges for all .\n\nTheorem C (see Stoica ). (i) Let , and let . Then the series (1.1) converges for any martingale difference sequence bounded in .\n(ii) Let and . Then the series (1.1) converges for any martingale difference sequence satisfying .\n\nThe main purpose of the paper is to further study the Baum-Katz-type Theorem for martingale difference sequence. 
We have the following generalizations. (i) Our results include Baum-Katz-type Theorem and Hsu-Robbins-type Theorem (see Hsu and Robbins ) as special cases. (ii) Our results generalize Theorems B and C for the partial sum to the case of maximal partial sum. (iii) Our results not only generalize Theorem B for and Theorem C (i) for , to the case of , and but also generalize Theorem C (ii) for to the case of .\n\nThroughout the paper, let be a sequence of random variables defined on a fixed probability space . Denote , , and . stands for . , denote positive constants which may be different in various places. denotes the integer part of . Let be the indicator function of the set .\n\nLet be an increasing sequence of fields with for each . If is measurable for each , then fields are said to be adapted to the sequence , and is said to be an adapted stochastic sequence.\n\nDefinition 1.1. If is an adapted stochastic sequence with and for each , then the sequence is called a martingale difference sequence.\n\nThe following two definitions will be used frequently in the paper.\n\nDefinition 1.2. A real-valued function , positive and measurable on , is said to be slowly varying if for each .\n\nDefinition 1.3. A sequence of random variables is said to be stochastically dominated by a random variable if there exists a positive constant , such that for all and .\n\nOur main results are as follows.\n\nTheorem 1.4. Let , and let . Let be a martingale difference sequence, which is stochastically dominated by a random variable . Let be a slowly varying function as . Supposing that if and then for any ,\n\nTheorem 1.5. Let , and let . Let be a martingale difference sequence, which is stochastically dominated by a random variable . Let be a slowly varying function as . Supposing that if and (1.5) holds, then for any ,\n\nFor and , we have the following theorem.\n\nTheorem 1.6. Let , and let be a martingale difference sequence, which is stochastically dominated by a random variable . Supposing that then for any ,\n\nThe following theorem presents the complete moment convergence for martingale difference sequence.\n\nTheorem 1.7. If the conditions of Theorem 1.4 hold, then for any ,\n\nRemark 1.8. If we take in Theorem 1.4, then we can not only get the Baum-Katz-type Theorem for martingale difference sequence but also consider the case of . Furthermore, if we take , , and in Theorem 1.4, then we can get the Hsu-Robbins-type Theorem (see Hsu and Robbins ) for martingale difference sequence.\n\nRemark 1.9. As stated above, our Theorems 1.4 and 1.5 not only generalize the corresponding results of Theorems B and C for the partial sum to the maximal partial sum but also expand the scope of and .\n\nRemark 1.10. If we take in Theorem 1.4, then we can get the Marcinkiewicz-Zygmund strong law of large numbers for martingale difference sequence as follows:\n\n#### 2. Preparations\n\nTo prove the main results of the paper, we need the following lemmas.\n\nLemma 2.1 (see [6, Theorem 2.11]). If is a martingale difference and , then there exists a constant depending only on such that\n\nLemma 2.2. Let be a sequence of random variables, which is stochastically dominated by a random variable . Then for any and , the following two statements hold: where and are positive constants.\n\nLemma 2.3 (cf. ). If is a slowly varying function as , then (i) for each ; for each , (ii) , (iii) , for each , (iv) for every , , positive integer and some , , (v) for every , , positive integer and some , .\n\n#### 3. 
Proofs of the Main Results\n\nProof of Theorem 1.4. For fixed , denote Since , we can see that\nFor , we have by Markov’s inequality, Lemma 2.2, and (1.5) that For , we have by Markov’s inequality and (3.3) that To prove (1.6), it suffices to show that For fixed , it is easily seen that is still a martingale difference. By Markov’s inequality and Lemma 2.1, we have that for any ,\nWe consider the following three cases.\nCase 1 (and ). Take large enough such that , which implies that .\nFor , we have by ’s inequality, Lemma 2.2, (3.3), Lemma 2.3, and (1.5) that Note that , if . We have by Lemma 2.3 that Case 2 (and ). Take . Similar to the proof of (3.6) and (3.7), we can get that Case 3 (). Note that . Take , and similar to the proof of (3.9), we still have .\nFrom the statements mentioned previously, we have proved (3.5). This completes the proof of the theorem.\n\nProof of Theorem 1.5. We have by Lemma 2.3 that The desired result (1.7) follows from the inequality above and (1.6) immediately.\n\nProof of Theorem 1.6. We use the same notation as that in Theorem 1.4. According to the proof of Theorem 1.4, we can see that for and under the conditions of Theorem 1.6. So it suffices to show that and for and .\nSimilar to the proof of (3.3), we have Similar to the proof of (3.4) and (3.11), we can get that This completes the proof of the theorem.\n\nProof of Theorem 1.7. For any , we have by Theorem 1.4 that Hence, it suffices to show that For , denote Since , it follows that Similar to the proof of (3.3), we have by Markov’s inequality and Lemma 2.2 that According to the proof of (3.17), we have by Markov’s inequality and Lemma 2.2 that For any , it is easily seen that is still a martingale difference. By Markov’s inequality and Lemma 2.1, we have that for any , We still consider the following three cases.\nCase 1 (and ). Take large enough such that , which implies that . We have by Lemma 2.2 and (3.17) that Hence, similar to the proof of (3.7), we can see that Note that , if . We have by Lemma 2.3 that Case 2 (and ). Take . Similar to the proof of (3.19) and (3.21), we can get that Case 3 (). Note that . Take , and similar to the proof of (3.23), we still have .\nFrom the statements mentioned previously, we have proved (3.14). This completes the proof of the theorem.\n\n#### Acknowledgments\n\nThe authors are most grateful to the Editor Sung Guen Kim and anonymous referees for careful reading of the paper and valuable suggestions which helped in improving an earlier version of this paper. This work was supported by the National Natural Science Foundation of China (11171001, 11126176), Natural Science Foundation of Anhui Province (1208085QA03), Provincial Natural Science Research Project of Anhui Colleges (KJ2010A005), Doctoral Research Start-up Funds Projects of Anhui University, the Academic Innovation Team of Anhui University (KJTD001B), and The Talents Youth Fund of Anhui Province Universities (2010SQRL016ZD, 2011SQRL012ZD).\n\n1. P. L. Hsu and H. Robbins, “Complete convergence and the law of large numbers,” Proceedings of the National Academy of Sciences of the United States of America, vol. 33, no. 2, pp. 25–31, 1947.\n2. P. Erdös, “On a theorem of Hsu and Robbins,” Annals of Mathematical Statistics, vol. 20, no. 2, pp. 286–291, 1949.\n3. L. E. Baum and M. Katz, “Convergence rates in the law of large numbers,” Transactions of the American Mathematical Society, vol. 120, no. 1, pp. 108–123, 1965.\n4. G. 
Stoica, “Baum-Katz-Nagaev type results for martingales,” Journal of Mathematical Analysis and Applications, vol. 336, no. 2, pp. 1489–1492, 2007.\n5. G. Stoica, “A note on the rate of convergence in the strong law of large numbers for martingales,” Journal of Mathematical Analysis and Applications, vol. 381, no. 2, pp. 910–913, 2011.\n6. P. Hall, Martingale Limit Theory and Its Application, Academic Press, New York, NY, USA, 1980.\n7. Z. D. Bai and C. Su, “The complete convergence for partial sums of i.i.d. random variables,” Science in China Series A, vol. 28, no. 12, pp. 1261–1277, 1985."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.88843507,"math_prob":0.96480536,"size":10990,"snap":"2021-04-2021-17","text_gpt3_token_len":2823,"char_repetition_ratio":0.14900783,"word_repetition_ratio":0.2549545,"special_character_ratio":0.25978163,"punctuation_ratio":0.17983413,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9958892,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-28T00:06:41Z\",\"WARC-Record-ID\":\"<urn:uuid:f85a18c5-f403-40a4-81e6-ca725c84d585>\",\"Content-Length\":\"1049254\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b660dce6-f6f9-4e3a-b4c8-ad6eaefc214b>\",\"WARC-Concurrent-To\":\"<urn:uuid:aea71add-e741-4e5f-9e0c-f791f7cc695c>\",\"WARC-IP-Address\":\"13.32.204.76\",\"WARC-Target-URI\":\"https://www.hindawi.com/journals/aaa/2012/572493/\",\"WARC-Payload-Digest\":\"sha1:54RZGE4T2YO6NIOTU4CX2POTJNF5KAQI\",\"WARC-Block-Digest\":\"sha1:LDHUVAEAL4PTF4TMEEU356PON3W3CE6S\",\"WARC-Truncated\":\"length\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610704833804.93_warc_CC-MAIN-20210127214413-20210128004413-00379.warc.gz\"}"} |
https://studentsupportaccelerator.com/studies/effect-tutoring-nonstandard-equations-students-mathematics-difficulty | [
"# The effect of tutoring with nonstandard equations for students with mathematics difficulty\n\nStudents often misinterpret the equal sign (=) as operational instead of relational. Research indicates misinterpretation of the equal sign occurs because students receive relatively little exposure to equations that promote relational understanding of the equal sign. No study, however, has examined effects of nonstandard equations on the equation solving and equal-sign understanding of students with mathematics difficulty (MD). In the present study, second-grade students with MD (n = 51) were randomly assigned to standard equations tutoring, combined tutoring (standard and nonstandard equations), and no-tutoring control. Combined tutoring students demonstrated greater gains on equation-solving assessments and equal-sign tasks compared to the other two conditions. Standard tutoring students demonstrated improved skill on equation solving over control students, but combined tutoring students’ performance gains were significantly larger. Results indicate that exposure to and practice with nonstandard equations positively influence student understanding of the equal sign. (PsycINFO Database Record (c) 2016 APA, all rights reserved)\nAuthors citation\nPowell, S. R., Driver, M. K., & Julian, T. E.\nPublication\nJournal of Learning Disabilities\nYear of Study\n2015\nSubject\nMath\nProgram Evaluated\nStandard equations tutoring\nTutor Type\nParaprofessional\nDuration\n4 weeks\nSample size\n33"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.89531964,"math_prob":0.96173346,"size":1641,"snap":"2022-40-2023-06","text_gpt3_token_len":317,"char_repetition_ratio":0.1686011,"word_repetition_ratio":0.00913242,"special_character_ratio":0.18159659,"punctuation_ratio":0.109375,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99399614,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-29T22:52:45Z\",\"WARC-Record-ID\":\"<urn:uuid:af2263f0-6055-43e3-929a-be6f11ad47e7>\",\"Content-Length\":\"42055\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f11e3444-b125-4561-a0b7-936c11c121ff>\",\"WARC-Concurrent-To\":\"<urn:uuid:93e1fe3c-571f-4765-a8b1-0bdd78fd6ef9>\",\"WARC-IP-Address\":\"23.185.0.1\",\"WARC-Target-URI\":\"https://studentsupportaccelerator.com/studies/effect-tutoring-nonstandard-equations-students-mathematics-difficulty\",\"WARC-Payload-Digest\":\"sha1:6FGMV7Y7PE4QXBGAVUE2UBQXPLCZ5F5Q\",\"WARC-Block-Digest\":\"sha1:7SI2OO7QP4JOLIP7CY4F4C72VFFEBZ4I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499768.15_warc_CC-MAIN-20230129211612-20230130001612-00796.warc.gz\"}"} |
https://math.libretexts.org/Courses/Borough_of_Manhattan_Community_College/MAT_206_Precalculus/3%3A_Polynomial_and_Rational_Functions_New/3.3E%3A_Exercises | [
"$$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$\n\n# 3.3E: Exercises\n\n$$\\newcommand{\\vecs}{\\overset { \\rightharpoonup} {\\mathbf{#1}} }$$\n\n$$\\newcommand{\\vecd}{\\overset{-\\!-\\!\\rightharpoonup}{\\vphantom{a}\\smash {#1}}}$$\n\n## Section Exercises\n\n### Verbal\n\n1. Explain the difference between the coefficient of a power function and its degree.\n\nAnswer: The coefficient of the power function is the real number that is multiplied by the variable raised to a power. The degree is the highest power appearing in the function.\n\n2. If a polynomial function is in factored form, what would be a good first step in order to determine the degree of the function?\n\n3. In general, explain the end behavior of a power function with odd degree if the leading coefficient is positive.\n\nAnswer: As $$x$$ decreases without bound, so does $$f(x)$$. As $$x$$ increases without bound, so does $$f(x)$$.\n\n4. What is the relationship between the degree of a polynomial function and the maximum number of turning points in its graph?\n\n5. What can we conclude if, in general, the graph of a polynomial function exhibits the following end behavior? As x→−∞, $$f(x)→−∞$$ and as x→∞, $$f(x)→−∞$$.\n\nAnswer: The polynomial function is of even degree and leading coefficient is negative.\n\n### Algebraic\n\nFor the following exercises, identify the function as a power function, a polynomial function, or neither.\n\n6. $$f(x)=x^5$$\n\n7. $$f(x)=(x^2)^3$$\n\n8. $$f(x)=x−x^4$$\n\nExercise $$\\PageIndex{9}$$\n\n$$f(x)=\\frac{x^2}{x^2−1}$$\n\nNeither\n\n10. $$f(x)=2x(x+2)(x−1)^2$$\n\n11. $$f(x)=3^{x+1}$$\n\nFor the following exercises, find the degree and leading coefficient for the given polynomial.\n\n12. $$−3x^4$$\n\n13. $$7−2x^2$$\n\nAnswer: Degree = 2, Coefficient = –2\n\n14. $$−2x^2− 3x^5+ x−6$$\n\n15. $$x(4−x^2)(2x+1)$$\n\nAnswer: Degree =4, Coefficient = –2\n\n16. $$x^2(2x−3)^2$$\n\nFor the following exercises, determine the end behavior of the functions.\n\n17. $$f(x)=x^4$$\n\nAnswer: As $$x→∞$$, $$f(x)→∞$$, as $$x→−∞$$, $$f(x)→∞$$\n\n18. $$f(x)=x^3$$\n\n19. $$f(x)=−x^4$$\n\nAnswer: As $$x→−∞$$, $$f(x)→−∞$$, as $$x→∞$$, $$f(x)→−∞$$\n\n20. $$f(x)=−x^9$$\n\n21. $$f(x)=−2x^4− 3x^2+ x−1$$\n\nAnswer: As $$x→−∞$$, $$f(x)→−∞$$, as $$x→∞$$, $$f(x)→−∞$$\n\n22. $$f(x)=3x^2+ x−2$$\n\n23. $$f(x)=x^2(2x^3−x+1)$$\n\nAnswer: As $$x→∞$$, $$f(x)→∞$$, as $$x→−∞$$, $$f(x)→−∞$$\n\n24. $$f(x)=(2−x)^7$$\n\nFor the following exercises, find the intercepts of the functions.\n\n25. $$f(t)=2(t−1)(t+2)(t−3)$$\n\nAnswer: y-intercept is $$(0,12)$$, t-intercepts are $$(1,0);(–2,0);$$ and $$(3,0)$$.\n\n26. $$g(n)=−2(3n−1)(2n+1)$$\n\nExercise $$\\PageIndex{27}$$\n\n$$f(x)=x^4−16$$\n\ny-intercept is $$(0,−16).$$ x-intercepts are $$(2,0)$$ and $$(−2,0)$$.\n\n28. $$f(x)=x^3+27$$\n\n29. $$f(x)=x(x^2−2x−8)$$\n\nAnswer: y-intercept is $$(0,0)$$. x-intercepts are $$(0,0),(4,0),$$ and $$(−2, 0)$$.\n\n30. $$f(x)=(x+3)(4x^2−1)$$\n\n### Graphical\n\nFor the following exercises, determine the least possible degree of the polynomial function shown.\n\n31.",
null,
"32.",
null,
"33.",
null,
"34.",
null,
"35.",
null,
"36.",
null,
"37.",
null,
"38.",
null,
"For the following exercises, determine whether the graph of the function provided is a graph of a polynomial function. If so, determine the number of turning points and the least possible degree for the function.\n\n39.",
null,
"Answer: Yes. Number of turning points is 2. Least possible degree is 3.\n\n40.",
null,
"41.",
null,
"Answer: Yes. Number of turning points is 1. Least possible degree is 2.\n\n42.",
null,
"43.",
null,
"Answer: Yes. Number of turning points is 0. Least possible degree is 3.\n\nExercise $$\\PageIndex{44}$$",
null,
"No (the graph is not smooth)\n\n45.",
null,
"Answer: Yes. Number of turning points is 0. Least possible degree is 1.\n\n### Numeric\n\nFor the following exercises, make a table to confirm the end behavior of the function.\n\n46. $$f(x)=−x^3$$\n\n47. $$f(x)=x^4−5x^2$$\n\n$$x$$ $$f(x)$$\n10 9,500\n100 99,950,000\n–10 9,500\n–100 99,950,000\n\nas $$x→−∞,$$ $$f(x)→∞$$, as $$x→∞,$$ $$f(x)→∞$$\n\n48. $$f(x)=x^2(1−x)^2$$\n\n49. $$f(x)=(x−1)(x−2)(3−x)$$\n\n$$x$$ $$f(x)$$\n10 9,500\n100 99,950,000\n–10 9,500\n–100 99,950,000\n\nas $$x→−∞,$$ $$f(x)→∞$$, as $$x→∞,$$ $$f(x)→−∞$$\n\n50. $$f(x)=\\frac{x^5}{10}−x^4$$\n\n### Technology\n\nFor the following exercises, graph the polynomial functions using a calculator. Based on the graph, determine the intercepts and the end behavior.\n\n51. $$f(x)=x^3(x−2)$$",
null,
"The y-intercept is $$(0, 0)$$. The x-intercepts are $$(0, 0), (2, 0).$$ As $$x→−∞,$$ $$f(x)→∞$$, as $$x→∞,$$ $$f(x)→∞$$\n\n52. $$f(x)=x(x−3)(x+3)$$\n\n53. $$f(x)=x(14−2x)(10−2x)$$",
null,
"The y-intercept is $$(0,0)$$ . The x-intercepts are $$(0, 0), (5, 0), (7, 0)$$. As $$x→−∞$$, $$f(x)→−∞$$, as $$x→∞,$$ $$f(x)→∞$$\n\n54. $$f(x)=x(14−2x)(10−2x)^2$$\n\n55. $$f(x)=x^3−16x$$",
null,
"The y-intercept is (0, 0). The x-intercept is $$(−4, 0), (0, 0), (4, 0)$$. As $$x→−∞$$, $$f(x)→−∞$$, as $$x→∞,$$ $$f(x)→∞$$\n\n56. $$f(x)=x^3−27$$\n\n57. $$f(x)=x^4−81$$",
null,
"The y-intercept is (0, −81). The x-intercept are $$(3, 0), (−3, 0)$$. As $$x→−∞,$$ $$f(x)→∞$$, as $$x→∞,$$ $$f(x)→∞$$\n\n58. $$f(x)=−x^3+x^2+2x$$\n\n59. $$f(x)=x^3−2x^2−15x$$",
null,
"The y-intercept is $$(0, 0)$$. The x-intercepts are $$(−3, 0), (0, 0), (5, 0).$$ As $$x→−∞$$, $$f(x)→−∞$$, as $$x→∞,$$ $$f(x)→∞$$\n\n60. $$f(x)=x^3−0.01x$$\n\n### Extensions\n\nFor the following exercises, use the information about the graph of a polynomial function to determine the function. Assume the leading coefficient is 1 or –1. There may be more than one correct answer.\n\n61. The y-intercept is $$(0,−4)$$. The x-intercepts are $$(−2,0), (2,0)$$. Degree is 2.\n\nEnd behavior: as $$x→−∞,$$ $$f(x)→∞$$, as $$x→∞,$$ $$f(x)→∞$$.\n\nAnswer: $$f(x)=x^2−4$$\n\n62. The y-intercept is $$(0,9)$$. The x-intercepts are $$(−3,0), (3,0)$$. Degree is 2.\n\nEnd behavior: as $$x→−∞,$$ $$f(x)→−∞$$, as $$x→∞,$$ $$f(x)→−∞$$.\n\n63. The y-intercept is $$(0,0)$$. The x-intercepts are $$(0,0), (2,0)$$. Degree is 3.\n\nEnd behavior: as $$x→−∞,$$ $$f(x)→−∞$$, as $$x→∞,$$ $$f(x)→∞$$.\n\nAnswer: $$f(x)=x^3−4x^2+4x$$\n\n64. The y-intercept is $$(0,1)$$. The x-intercept is $$(1,0)$$. Degree is 3.\n\nEnd behavior: as $$x→−∞$$, $$f(x)→∞$$, as $$x→∞$$, $$f(x)→−∞$$.\n\n65. The y-intercept is $$(0,1)$$. There is no x-intercept. Degree is 4.\n\nEnd behavior: as $$x→−∞,$$ $$f(x)→∞$$, as $$x→∞,$$ $$f(x)→∞$$.\n\nAnswer: $$f(x)=x^4+1$$\n\n### Real-World Applications\n\nFor the following exercises, use the written statements to construct a polynomial function that represents the required information.\n\n66. An oil slick is expanding as a circle. The radius of the circle is increasing at the rate of 20 meters per day. Express the area of the circle as a function of $$d$$, the number of days elapsed.\n\n67. A cube has an edge of 3 feet. The edge is increasing at the rate of 2 feet per minute. Express the volume of the cube as a function of $$m$$, the number of minutes elapsed.\n\nAnswer: $$V(m)=8m^3+36m^2+54m+27$$\n\n68. A rectangle has a length of 10 inches and a width of 6 inches. If the length is increased by $$x$$ inches and the width increased by twice that amount, express the area of the rectangle as a function of $$x$$.\n\nExercise $$\\PageIndex{69}$$\n\nAn open box is to be constructed by cutting out square corners of $$x$$-inch sides from a piece of cardboard 8 inches by 8 inches and then folding up the sides. Express the volume of the box as a function of $$x$$.\n\n$$V(x)=4x^3−32x^2+64x$$"
]
| [
null,
"https://math.libretexts.org/@api/deki/files/13324/CNX_Precalc_Figure_03_03_201.jpg",
null,
"https://math.libretexts.org/@api/deki/files/13323/CNX_Precalc_Figure_03_03_202.jpg",
null,
"https://math.libretexts.org/@api/deki/files/13322/CNX_Precalc_Figure_03_03_203.jpg",
null,
"https://math.libretexts.org/@api/deki/files/13321/CNX_Precalc_Figure_03_03_204.jpg",
null,
"https://math.libretexts.org/@api/deki/files/13320/CNX_Precalc_Figure_03_03_205.jpg",
null,
"https://math.libretexts.org/@api/deki/files/13319/CNX_Precalc_Figure_03_03_206.jpg",
null,
"https://math.libretexts.org/@api/deki/files/13318/CNX_Precalc_Figure_03_03_207.jpg",
null,
"https://math.libretexts.org/@api/deki/files/13317/CNX_Precalc_Figure_03_03_208.jpg",
null,
"https://math.libretexts.org/@api/deki/files/13316/CNX_Precalc_Figure_03_03_209.jpg",
null,
"https://math.libretexts.org/@api/deki/files/13315/CNX_Precalc_Figure_03_03_210.jpg",
null,
"https://math.libretexts.org/@api/deki/files/13314/CNX_Precalc_Figure_03_03_211.jpg",
null,
"https://math.libretexts.org/@api/deki/files/13313/CNX_Precalc_Figure_03_03_212.jpg",
null,
"https://math.libretexts.org/@api/deki/files/13312/CNX_Precalc_Figure_03_03_213.jpg",
null,
"https://math.libretexts.org/@api/deki/files/13311/CNX_Precalc_Figure_03_03_214.jpg",
null,
"https://math.libretexts.org/@api/deki/files/13310/CNX_Precalc_Figure_03_03_215.jpg",
null,
"https://math.libretexts.org/@api/deki/files/13309/CNX_Precalc_Figure_03_03_216.jpg",
null,
"https://math.libretexts.org/@api/deki/files/13308/CNX_Precalc_Figure_03_03_218.jpg",
null,
"https://math.libretexts.org/@api/deki/files/13307/CNX_Precalc_Figure_03_03_220.jpg",
null,
"https://math.libretexts.org/@api/deki/files/13306/CNX_Precalc_Figure_03_03_222.jpg",
null,
"https://math.libretexts.org/@api/deki/files/13305/CNX_Precalc_Figure_03_03_224.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7991149,"math_prob":1.0000099,"size":7398,"snap":"2019-51-2020-05","text_gpt3_token_len":2881,"char_repetition_ratio":0.2028672,"word_repetition_ratio":0.1380256,"special_character_ratio":0.44782373,"punctuation_ratio":0.19292237,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":1.0000099,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-14T01:57:58Z\",\"WARC-Record-ID\":\"<urn:uuid:f500efd3-3970-478a-8f4e-5ab7a30849b3>\",\"Content-Length\":\"91945\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:04ec5615-b295-4d47-9b37-2946b1e50b27>\",\"WARC-Concurrent-To\":\"<urn:uuid:6b54cd79-6e57-44d5-9f0c-89c6f413c8d6>\",\"WARC-IP-Address\":\"34.232.212.106\",\"WARC-Target-URI\":\"https://math.libretexts.org/Courses/Borough_of_Manhattan_Community_College/MAT_206_Precalculus/3%3A_Polynomial_and_Rational_Functions_New/3.3E%3A_Exercises\",\"WARC-Payload-Digest\":\"sha1:24XL3AKO3GOX2G5KPI6NZPO6CPJUMSHY\",\"WARC-Block-Digest\":\"sha1:2PYCOFSIT5KIFMSK4XAODX7EQ2TA7LWF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540579703.26_warc_CC-MAIN-20191214014220-20191214042220-00176.warc.gz\"}"} |
http://cs.roanoke.edu/Spring2010/CPSC170A/lab7/lab7in.html | [
"CPSC 170 Lab 7: Recursion and Fractals\n\nA fractal is a geometric figure that is self-similar in that any piece of the figure contains a miniature of the entire figure. Naturally, fractals fit nicely with the concept of recursion. There are lots of fractals that have interesting mathematical properties (and make pretty pictures); in this lab you will draw two of them, Sierpinski triangles and Koch snowflakes.\n\nSierpinski Triangles\n\nA Sierpinski triangle (aka a Sierpinski Gasket) is a fractal that may be constructed as follows:\n\n1. Draw a triangle.\n2. Draw a new triangle by connecting the midpoints of the three sides of your original triangle. This should split your triangle into four smaller triangles, one in the center and three around the outside.\n3. Repeat step 2 for each of the outside triangles (not the center one). Each of them will split into four yet smaller triangles. Repeat for each of their outside triangles.. and for each of the new ones.. and so on, forever. Draw a few rounds of this on paper to see how it works.\n\nYour job is to write a Java program that draws a Sierpinski triangle. Think about the following:\n\n• A Sierpinski triangle is a recursively defined structure -- each of the three outer triangles formed by joining the midpoints is itself a Sierpinski triangle.\n• In practice you don't want to go on with the process \"forever\" as suggested above, so we'll limit how deep it goes. Define the depth of a Sierpinski triangle as the number of directly nested triangles at the deepest point. So a Sierpinski triangle consisting of a single triangle has depth 0; when a triangle is drawn inside of it, the resulting Sierpinski triangle has depth 1; when the three outside triangles have triangles drawn inside of them, the resulting triangle is depth 2, and so on. A depth of 10 or 11 gives a nice looking triangle in a reasonable amount of time. Smaller depths are interesting in that you can see more of the construction; higher depths generally take too long for casual viewing.\n• A triangle is a polygon, so you can use the drawPolygon method of the Graphics class. Note: this method that it takes an array containing the x coordinates, an array containing the y coordinates, and an integer indicating how many points should be drawn (3 for a triangle).\n\nThe midpoint of a line segment can be calculated using interpolation. In order to find a point that is some fraction between the two points (x1, y1) and (x2, y2) the following equations can be used:\n\nxi = i * x2 + (1 - i) * x1\nyi = i * y2 + (1 - i) * y1\n\nwhere i is a value between 0 and 1 that is the fraction of the distance between the two points that the point (xi, yi) is located. So, in order to find a point that is 1/2 between two points a value of i = 0.5 could be used.\n\nThe file SierpinskiTriangle.java contains the skeleton of a program to draw a Sierpinski triangle. The program has a slider that allows the user to specify the depth of the triangle that is drawn to the screen. The paint method simply calls the sierpinski method, which you must complete. The method should draw the triangle whose points were passed in, and then check to see if the desired depth has been achieved; if not, it should draw the triangle formed by the midpoints of the given points, then call itself recursively on each of the three new outer triangles. Try changing the color of the triangles and the location of the vertices. Use the slider to create fractal that you like. 
Use the Applications>Accessories>Take Screenshot application to create an image of the fractal and put it in your lab7 directory.\n\nKoch Snowflakes\n\nThe Koch snowflake is a fractal generated by starting with 3 line segments forming a triangle (a Koch fractal of order 1). The algorithm for generating higher order Koch fractals involves splitting each line segment into three equal segments then replacing the middle segment by two line segments that protrude outward. The same algorithm is then recursively applied to each of the 4 new line segments. The diagram below illustrates the transformation of a segment.",
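A sketch of the recursion described above, in Python rather than the lab's Java (treat it as pseudocode for the structure; `draw_triangle` stands in for the `drawPolygon` call):

```python
def midpoint(p, q):
    """Interpolate halfway between two (x, y) points: i = 0.5 in the lab's formula."""
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

def sierpinski(p1, p2, p3, depth, draw_triangle):
    draw_triangle(p1, p2, p3)                  # draw this triangle
    if depth == 0:
        return                                 # reached the deepest level
    m12, m23, m31 = midpoint(p1, p2), midpoint(p2, p3), midpoint(p3, p1)
    sierpinski(p1, m12, m31, depth - 1, draw_triangle)   # recurse on the three
    sierpinski(m12, p2, m23, depth - 1, draw_triangle)   # outer triangles; the
    sierpinski(m31, m23, p3, depth - 1, draw_triangle)   # center one is skipped
```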
null,
"In order to calculate the four line segments that represent the transformation of the segment defined by the points (x1, y1) and (x5, y5) the points that divide the segment into thirds, (x2, y2) and (x4, y4), must be calculated. These can be calculated using interpolation and the equations from the previous section of this lab.\n\nThe point that is the apex of the protrusion, (x3, y3), can be calculated with the following equations:\n\nx3 = x1 + Δx / 2 - Δy * p\ny3 = y1 + Δy / 2 + Δx * p\n\nwhere\n\nΔx = x5 - x1\nΔy = y5 - y1\n\nand p is a value between -1 and 1 that is the percent length of the original segment that the protrusion will be. That is, if p is 0.5 then the protrusion would be half the size of the segment from (x1, y1) to (x5, y5). If p is -0.5 then it would also be half the size of the segment, but protrude off of the other side of the segment. A value of one third of a segment length produces a nice looking fractal.\n\nThe file KochSnowflake.java contains the skeleton of a program to draw a Koch snowflake. The program has sliders that allow the user to specify the order and the size of the protrusion of the snowflake that is drawn to the screen. The paint method simply calls the koch method, which you must complete. The method should check to see if the desired depth has been achieved. If it has, it should draw the line of the given points. If it has not, then it should compute the four line segments that make up the next order and call itself recursively on each. Try changing the color of the triangles and the location and number of the initial lines. Use the sliders to create a fractal that you like. Use the Applications>Accessories>Take Screenshot application to create an image of the fractal and put it in your lab7 directory.\n\nTo submit your code: Tar the files in your lab7 directory and copy the tgz file to the directory /home/staff/bouchard/CPSC170A/lab7. Be sure to name the tar file with your names, not lab7.tgz."
]
| [
null,
"http://cs.roanoke.edu/Spring2010/CPSC170A/lab7/KochDiagram.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9133706,"math_prob":0.97637844,"size":4277,"snap":"2022-05-2022-21","text_gpt3_token_len":1031,"char_repetition_ratio":0.15445822,"word_repetition_ratio":0.14340588,"special_character_ratio":0.2300678,"punctuation_ratio":0.08813161,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99631345,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-29T04:44:28Z\",\"WARC-Record-ID\":\"<urn:uuid:133bb86d-c740-45b0-b54b-f4a6668229fc>\",\"Content-Length\":\"7928\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9767ed72-9fed-4313-9a69-1bfdbace13a3>\",\"WARC-Concurrent-To\":\"<urn:uuid:4ecc70ea-e3c4-40b0-9e1a-7a1f094d7558>\",\"WARC-IP-Address\":\"199.111.154.63\",\"WARC-Target-URI\":\"http://cs.roanoke.edu/Spring2010/CPSC170A/lab7/lab7in.html\",\"WARC-Payload-Digest\":\"sha1:ORPNILHE7GS56IYFG3NA3LWZV775XUC3\",\"WARC-Block-Digest\":\"sha1:JABITPGTUU77GN4YN2GX3WP4RUHQZKQV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320299927.25_warc_CC-MAIN-20220129032406-20220129062406-00672.warc.gz\"}"} |
https://www.calculateme.com/water-weight/3-4-cups | [
"# Weight of 3/4 Cup of Water\n\nHow heavy is 3/4 cup of water? How much does three quarters of a cup of water weigh?\nAmount\nUnit\n3/4 Cup of Water Weighs\n0.3905 pounds\n6.248 ounces\n177.1 grams\nrounded to 4 digits\nassuming water at 20° Celsius\nNote on Units\nThis calculator uses United States customary units which are different from the Imperial units used in the United Kingdom.\nDensity of Water\nThis calculator uses the density of water at 20° Celsius, which is 0.9982071 grams per cubic centimeter. The weight of a certain volume of water could vary slightly depending on the temperature."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8850293,"math_prob":0.89023066,"size":582,"snap":"2023-40-2023-50","text_gpt3_token_len":150,"char_repetition_ratio":0.119377166,"word_repetition_ratio":0.0,"special_character_ratio":0.2611684,"punctuation_ratio":0.084033616,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96191996,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-08T14:49:20Z\",\"WARC-Record-ID\":\"<urn:uuid:366be1fd-4eb8-4d5a-9873-d4c4ade50fc1>\",\"Content-Length\":\"8369\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5bbd89d5-a654-4cae-9b20-e2e5107e9813>\",\"WARC-Concurrent-To\":\"<urn:uuid:81673b04-76a9-4018-bda9-8944267f002f>\",\"WARC-IP-Address\":\"34.231.192.208\",\"WARC-Target-URI\":\"https://www.calculateme.com/water-weight/3-4-cups\",\"WARC-Payload-Digest\":\"sha1:HRSDNQDLIUWNZ3T2VANTTKEDPUAF3MUL\",\"WARC-Block-Digest\":\"sha1:YQNHSLXYYVEP43NMCB2ITZPEWKVGPDAC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100762.64_warc_CC-MAIN-20231208144732-20231208174732-00244.warc.gz\"}"} |
http://www.benbxfudao.cn/zuoye/8813700 | [
"# 已知二次函数y=x²-2(2m+2)x+2(m-1),当图像的对称轴为直线x=3时,求它与x轴的两个交点及顶点所构角形的面积?",
null,
"y=x²-2(2m+2)x+2(m-1)\n∵函数图形对称轴 x=-b/2a=2m+2=3\n∴ m=1/2\ny=x²-6x-1\n=>y=(x-3)²-10\n\ny=x²-2(2m+2)x+2(m-1)\ny=[x-(2m+2)]²-(2m+2)²+2(m-1)\ny=[x-(2m+2)]²-4m²-6m-6,顶点坐标为[(2m+2),(-4m²-6m-6)]\n\n=>y=(x-3)²-10\n\ny=x²-2(2m+2)x+2(m-1)\ny=[x-(2m+2)]²-(2m+2)²+2(m-1)\ny=[x-(2m+2)]²-4m²-6m-6,顶点坐标为[(2m+2),(-4m²-6m-6)]\n\n=>y=(x-3)²-10"
]
| [
null,
"http://www.benbxfudao.cn/uploads/image/z/8813700-36-0.jpg",
null
]
| {"ft_lang_label":"__label__zh","ft_lang_prob":0.6454373,"math_prob":0.9999434,"size":956,"snap":"2023-40-2023-50","text_gpt3_token_len":913,"char_repetition_ratio":0.15441176,"word_repetition_ratio":0.051282052,"special_character_ratio":0.5700837,"punctuation_ratio":0.09090909,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9884666,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-22T05:15:43Z\",\"WARC-Record-ID\":\"<urn:uuid:f0755dc0-8710-433c-8768-ad3b43e7e6f0>\",\"Content-Length\":\"18504\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:eb49cc72-4295-4f49-8750-a668832488d1>\",\"WARC-Concurrent-To\":\"<urn:uuid:3ac34ce1-1071-43f4-8fdb-5e039a093f61>\",\"WARC-IP-Address\":\"8.217.0.10\",\"WARC-Target-URI\":\"http://www.benbxfudao.cn/zuoye/8813700\",\"WARC-Payload-Digest\":\"sha1:3BCFTFEACCCY4T5TJPDJOPJAHGWP4ZCW\",\"WARC-Block-Digest\":\"sha1:TOP3U432J4NXJHB3ZCBBFHZMOMBAXXWW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506329.15_warc_CC-MAIN-20230922034112-20230922064112-00018.warc.gz\"}"} |
https://leetcode.jp/leetcode-1385-find-the-distance-value-between-two-arrays-%E8%A7%A3%E9%A2%98%E6%80%9D%E8%B7%AF%E5%88%86%E6%9E%90/ | [
"# LEETCODE 1385. Find the Distance Value Between Two Arrays 解题思路分析\n\n「距离值」 定义为符合此描述的元素数目:对于元素 arr1[i] ,不存在任何元素 arr2[j] 满足 |arr1[i]-arr2[j]| <= d 。\n\n```输入:arr1 = [4,5,8], arr2 = [10,9,1,8], d = 2\n\n对于 arr1=4 我们有:\n|4-10|=6 > d=2\n|4-9|=5 > d=2\n|4-1|=3 > d=2\n|4-8|=4 > d=2\n对于 arr1=5 我们有:\n|5-10|=5 > d=2\n|5-9|=4 > d=2\n|5-1|=4 > d=2\n|5-8|=3 > d=2\n对于 arr1=8 我们有:\n|8-10|=2 <= d=2 |8-9|=1 <= d=2 |8-1|=7 > d=2\n|8-8|=0 <= d=2```\n\n```输入:arr1 = [1,4,2,3], arr2 = [-4,-3,6,10,20,30], d = 3\n\n```输入:arr1 = [2,1,100,3], arr2 = [-5,-2,10,-3,7], d = 6\n\n• 1 <= arr1.length, arr2.length <= 500\n• -10^3 <= arr1[i], arr2[j] <= 10^3\n• 0 <= d <= 100\n\n```public int findTheDistanceValue(int[] arr1, int[] arr2, int d) {\nint res=0;\nfor(int i=0;i<arr1.length;i++){\nboolean isValid=true;\nfor(int j=0;j<arr2.length;j++){\nif(Math.abs(arr1[i]-arr2[j])<=d){\nisValid=false;\nbreak;\n}\n}\nif(isValid) res++;\n}\nreturn res;\n}```\n\nRuntime: 3 ms, faster than 76.74% of Java online submissions for Find the Distance Value Between Two Arrays.\n\nMemory Usage: 40.8 MB, less than 100.00% of Java online submissions for Find the Distance Value Between Two Arrays.\n\n```public int findTheDistanceValue(int[] arr1, int[] arr2, int d) {\nArrays.sort(arr2);\nint res=0;\nfor(int num : arr1){\nint big=findFirstBigOrEqual(arr2,num);\nint small=findFirstSmallOrEqual(arr2,num);\nif(big-num>d&&num-small>d) res++;\n}\nreturn res;\n}\n\nint findFirstBigOrEqual(int[] arr, int target){\nint low=0, high=arr.length-1;\nwhile(low<=high){\nint mid=(low+high)/2;\nif(arr[mid]>=target){\nhigh=mid-1;\n}else{\nlow=mid+1;\n}\n}\nreturn low==arr.length?Integer.MAX_VALUE:arr[low];\n}\n\nint findFirstSmallOrEqual(int[] arr, int target){\nint low=0, high=arr.length-1;\nwhile(low<=high){\nint mid=(low+high)/2;\nif(arr[mid]<=target){\nlow=mid+1;\n}else{\nhigh=mid-1;\n}\n}\nreturn high==-1?Integer.MAX_VALUE:arr[high];\n}```\n\nRuntime: 4 ms, faster than 38.68% of Java online submissions for Find the Distance Value Between Two Arrays.\n\nMemory Usage: 40.8 MB, less than 100.00% of Java online submissions for Find the Distance Value Between Two Arrays."
]
| [
null
]
| {"ft_lang_label":"__label__zh","ft_lang_prob":0.6254853,"math_prob":0.99888164,"size":2933,"snap":"2020-24-2020-29","text_gpt3_token_len":1564,"char_repetition_ratio":0.09218163,"word_repetition_ratio":0.25657895,"special_character_ratio":0.34947154,"punctuation_ratio":0.1779661,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99850035,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-08T07:41:12Z\",\"WARC-Record-ID\":\"<urn:uuid:50530c64-c4f5-403d-a47b-53d8196e0734>\",\"Content-Length\":\"53898\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e00ea83e-7422-4561-8ff1-e5b511396f08>\",\"WARC-Concurrent-To\":\"<urn:uuid:05a51d98-63ee-4cb9-9603-94aca4915064>\",\"WARC-IP-Address\":\"49.212.198.109\",\"WARC-Target-URI\":\"https://leetcode.jp/leetcode-1385-find-the-distance-value-between-two-arrays-%E8%A7%A3%E9%A2%98%E6%80%9D%E8%B7%AF%E5%88%86%E6%9E%90/\",\"WARC-Payload-Digest\":\"sha1:2KUMYII3WXLUDVJA5WPSEKYXBLO4DUS6\",\"WARC-Block-Digest\":\"sha1:E3XOY257KOJUYG25R6JDQKNAAW2K2BP4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655896905.46_warc_CC-MAIN-20200708062424-20200708092424-00452.warc.gz\"}"} |
https://blender.stackexchange.com/questions/235121/driver-expression-for-rotations-with-two-limits | [
"# Driver expression for rotations with two limits\n\nObject A is constrained to -20 degrees and +15 degrees rotation on the X axis.\n\nObject B is constrained to -90 degrees and +90 degrees rotation on the X axis.\n\nI would like the rotation of A to be driven by the rotation of B. Normally, this is a simple enough procedure, but I lack the mathematical understanding to write the correct expression. Here's my intention:\n\nWhen Object B is at 0 degrees, Object A is also at 0 degrees. I'd then like to take the percentage of Object B's rotation to rotate Object A to it's constraint by that same percentage.\n\nFor example: If Object B rotates +45 degrees, that's halfway to its constraint of +90, so the value should be 0.5. I then want to apply this 0.5 to the +15 degree limit of Object A, so 0.5 of +15 is +7.5, but if Object B rotates -45, it's now -0.5, which when applied to Object A's -20 degree limit, is -10.\n\nEDIT:\n\nThe closest expression I can get is -var/90*20. This takes the percentage of rotation to Object B's +90 degree limit, then applies that to Object A's constraint of -20 degrees. Only problem is, that expression needs to be -var/90*15 when Object B's rotation is negative. Is there a way to write an if statement along the lines of if var<0, then -var/90*15, else -var/90*20? I'm unfamiliar with the syntax.\n\n• Actually, this isn't a rigging exercise. No bones or anything. I'm creating a model of an aircraft, and some of the control surfaces deflect more in one direction, than another. This deflection is driven by rotating the yoke. Not sure how to illustrate that, but if needed, I could upload the .blend file. Aug 23, 2021 at 8:59\n• Also, see my edit. It might be an easier way to solve this. Aug 23, 2021 at 9:11\n• I saw a comment on my answer briefly earlier today when I didn't have time to get to it.... Don't forget, you can always use an f-curve for complex relationships.. avoiding expressions for them. Aug 23, 2021 at 13:38\n• @RobinBetts: Yeah, I had a follow up question as I was trying to apply your solution to a similar need, except X position drove another object's X rotation instead. I managed to figure it out though. Aug 23, 2021 at 15:44\n\nIt sounds as if you need two ranges, one for negative angles and one for positive?\n\nYou can avoid if clauses in limited-space driver expressions by using the implicit cast of Boolean True and False to integer 1 and 0, and multiplying:\n\n((20/90) * (B_rZ < 0) + (15/90) * (B_rZ >= 0)) * B_rZ\n\n\nwhere B_rZ is B's Z rotation.",
null,
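A standalone check of the boolean-cast trick (plain Python; True/False multiply as 1/0 exactly as they do in a Blender driver field):

```python
import math

def drive_a_from_b(b_rot_deg):
    """Two-sided scaling: -90..0 deg on B maps to -20..0 on A; 0..+90 maps to 0..+15."""
    return ((20/90) * (b_rot_deg < 0) + (15/90) * (b_rot_deg >= 0)) * b_rot_deg

assert math.isclose(drive_a_from_b(-45), -10.0)  # halfway to -90 -> halfway to -20
assert math.isclose(drive_a_from_b(+45), 7.5)    # halfway to +90 -> halfway to +15
```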
"• This would do if you don't need your movement to be smooth across 0 .. it's just a straight-line function on either side. Aug 23, 2021 at 9:31\n• Correct. For the purposes of what I'm trying to achieve in my file, this is sufficient. Did some research on Python syntax, and I was pretty much ready to go with -var/90*15 if var<0 else -var/90*20, but modifying your suggestion to ((15/90) * (var < 0) + (20/90) * (var >= 0)) * -var works just as well, and looks like a more elegant solution. Wouldn't have thought to use true/false as 1 and 0 to multiply away the irrelevant part. Thank you. Aug 23, 2021 at 9:37\n\nLinear interpolation.\n\nAfter seeking clarification in a comment, I've come to that what you seek is linear interpolation, ie if we are some ratio between A and B, what is the point at the same ratio between C and D.\n\nThe lerp is available to the driver namespace, ie it can be used in driver expressions.\n\n>>> import bl_math\n>>> bl_math.lerp(\nlerp(from, to, factor)\n.. function:: lerp(from, to, factor)\nLinearly interpolate between two float values based on factor.\n:arg from: The value to return when factor is 0.\n:type from: float\n:arg to: The value to return when factor is 1.\n:type to: float\n:arg factor: The interpolation value, normally in [0.0, 1.0].\n:type factor: float\n:return: The interpolated value.\n:rtype: float\n\n\nAnd its partner in crime. smooth_step\n\n>>> bl_math.smoothstep(\nsmoothstep(from, to, value)\n.. function:: smoothstep(from, to, value)\nPerforms smooth interpolation between 0 and 1 as value changes between from and to.\nOutside the range the function returns the same value as the nearest edge.\n:arg from: The edge value where the result is 0.\n:type from: float\n:arg to: The edge value where the result is 1.\n:type to: float\n:arg factor: The interpolation value.\n:type factor: float\n:return: The interpolated value in [0.0, 1.0].\n:rtype: float\n\n\nTo get the values as driver vars: Mouse over one of the limit rotation constraint axis rotation limits, right click and choose \"Copy Data Path\" it copies\n\npose.bones[\"Bone\"].constraints[\"Limit Rotation\"].min_x\n\n\nto the clipboard. Paste that into a single property driver variable datapath. (The object chosen is the rig.) and the driver variable will have the constraints min x value",
null,
"In example minimum is set to 45 degrees. Blender uses radians as base unit (the degrees are converted and displayed if scene unit for rotation is degrees (the default))\n\n>>> radians(45)\n0.7853981633974483\n\n\nFinally,\n\nIf we set up a driver with variables\n\n• A current value of A\n\n• Amin lower limit of A\n\n• Amax upper limit of A\n\n• Bmin lower limit of B\n\n• Bmax upper limit of B\n\n, to find rotation B which is rotation of B that is same ration between Bmin, Bax as A is between Amin, Amax\n\nThen our ratio toward Amax from Amin is\n\nsmoothstep(Amin, Amax, A)\n\n\nAnd finally, to the driver expression to set B rotation\n\nlerp(Bmin, Bmax, smoothstep(Amin, Amax, A))\n\n\nor hardcode in the values if they dont change, eg A is some rot between 0 and 180 what is the same ratio between -45 and 45 degrees\n\nradians(lerp(-45, 45, smoothstep(0, 180, degrees(A)))\n\n\nA path.\n\nInstead of limit rotation constraints consider using a curve path as part of your rig. Can be parented to a bone. A follow path constraint fixed offset 0 is one end of the path 1 the other. the rotation bone instead tracks to an object (or bone) following the path. Driving another is simply a matter of using offset of other.\n\n• Ahhh, so @hiigaran wants a continuous function, over 0? Aug 23, 2021 at 9:24\n• Not sure, its one of those questions that I can read a zillion times and still not be sure. Talked myself into linear interpolation while seeking clarification, Haven't bothered with clamp (the other member of the sneaky bl_maths since limit constraint should deal with. Once again I'm going to be foiled by the Master of Illustration .. lol got that right.... bloody BSR. Aug 23, 2021 at 9:38\n• Pure luck, in the circumstances :D Aug 23, 2021 at 9:41\n• missed the dual range part too. Aug 23, 2021 at 9:43"
]
| [
null,
"https://i.stack.imgur.com/PsJ0h.gif",
null,
"https://i.stack.imgur.com/1Jhb1.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.84925985,"math_prob":0.8964784,"size":2807,"snap":"2022-05-2022-21","text_gpt3_token_len":715,"char_repetition_ratio":0.134142,"word_repetition_ratio":0.07112971,"special_character_ratio":0.2543641,"punctuation_ratio":0.179402,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9819633,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-16T14:57:26Z\",\"WARC-Record-ID\":\"<urn:uuid:d820200a-49fb-43ca-9d20-595cda8faf74>\",\"Content-Length\":\"248718\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:907133a3-e184-41bb-b2aa-542c9e5c566b>\",\"WARC-Concurrent-To\":\"<urn:uuid:f4f5e064-ba66-4c05-aaa4-e5af31fb60fa>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://blender.stackexchange.com/questions/235121/driver-expression-for-rotations-with-two-limits\",\"WARC-Payload-Digest\":\"sha1:67OLGO7B4OPUQPEVK6CC7YLGK5OTHK2T\",\"WARC-Block-Digest\":\"sha1:E2PA4LP72RBFV2OJYC2THR3HBOK6TN7O\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662510138.6_warc_CC-MAIN-20220516140911-20220516170911-00052.warc.gz\"}"} |
http://lhcbproject.web.cern.ch/lhcbproject/Publications/LHCbProjectPublic/LHCb-PAPER-2017-017.html | [
"# Measurement of the ratio of the $B^0 \\to D^{*-} \\tau^+ \\nu_{\\tau}$ and $B^0 \\to D^{*-} \\mu^+ \\nu_{\\mu}$ branching fractions using three-prong $\\tau$-lepton decays\n\n[to restricted-access page]\n\n## Abstract\n\nThe ratio of branching fractions ${\\cal{R}}(D^{*-})\\equiv {\\cal{B}}(B^0 \\to D^{*-} \\tau^+ \\nu_{\\tau})/{\\cal{B}}(B^0 \\to D^{*-} \\mu^+\\nu_{\\mu})$ is measured using a data sample of proton-proton collisions collected with the LHCb detector at center-of-mass energies of 7 and 8 Tev, corresponding to an integrated luminosity of 3fb$^{-1}$. For the first time ${\\cal{R}}(D^{*-})$ is determined using the $\\tau$ lepton decays with three charged pions in the final state. The $B^0 \\to D^{*-} \\tau^+\\nu_{\\tau}$ yield is normalized to that of the $B^0\\to D^{*-} \\pi^+\\pi^-\\pi^+$ mode, providing a measurement of ${\\cal{B}}(B^0\\to D^{*-}\\tau^+\\nu_{\\tau})/{\\cal{B}}(B^0\\to D^{*-}\\pi^+\\pi^-\\pi^+) = 1.97 \\pm 0.13 \\pm 0.18$, where the first uncertainty is statistical and the second systematic. The value of ${\\cal{B}}(B^0 \\to D^{*-} \\tau^+ \\nu_{\\tau}) = (1.42 \\pm 0.094 \\pm 0.129 \\pm 0.054)\\%$ is obtained, where the third uncertainty is due to the limited knowledge of the branching fraction of the normalization mode. Using the well-measured branching fraction of the $B^0 \\to D^{*-} \\mu^+\\nu_{\\mu}$ decay, a value of ${\\cal{R}}(D^{*-}) = 0.291 \\pm 0.019 \\pm 0.026 \\pm 0.013$ is established, where the third uncertainty is due to the limited knowledge of the branching fractions of the normalization and $B^0\\to D^{*-}\\mu^+\\nu_{\\mu}$ modes. This measurement is in agreement with the Standard Model prediction and with previous results.\n\n## Figures and captions\n\n Topology of the signal decay. A requirement on the distance between the 3 $\\pi$ and the $B ^0$ vertices along the beam direction to be greater than four times its uncertainty is applied. For $B \\rightarrow D ^* 3\\pi (X)$ decays, the 3 $\\pi$ vertex coincides with the $B$ vertex. Fig1.pdf [228 KiB] HiDef png [366 KiB] Thumbnail [151 KiB] *.C file",
null,
"{Results from the fit to the invariant mass of the $D ^{*-}$ $D ^+_ s$ pair for the $D ^{*-} D ^+_ s (X)$ data control sample, with $D ^+_ s \\rightarrow 3\\pi$. The components contributing to the fit model are indicated in the legend. } Fig2.pdf [30 KiB] HiDef png [239 KiB] Thumbnail [225 KiB] *.C file",
null,
"Distribution of ${\\mathrm{min}}[m(\\pi^+\\pi^-)]$ for a sample enriched in $B \\rightarrow D ^{*-} D ^+_ s (X)$ decays, obtained by requiring the BDT output below a threshold. The different fit components are indicated in the legend. Fig3.pdf [16 KiB] HiDef png [190 KiB] Thumbnail [159 KiB] *.C file",
null,
"Distributions of (left) $t_{\\tau}$ and (right) $q^2$ in four different BDT bins, with increasing values of the BDT response from top to bottom. The various fit components are described in the legend. Fig4.pdf [25 KiB] HiDef png [374 KiB] Thumbnail [338 KiB] *.C file",
null,
"Animated gif made out of all figures. PAPER-2017-017.gif Thumbnail",
null,
"## Tables and captions\n\n Relative systematic uncertainties on ${\\cal{R}}( D ^{*-} )$. Table_1.pdf [56 KiB] HiDef png [70 KiB] Thumbnail [30 KiB] tex code",
null,
"## Supplementary Material [file]\n\n Supplementary material full pdf supple[..].pdf [538 KiB]",
null,
"This ZIP file contains supplemetary material for the publication LHCb-PAPER-2017-017. The files are: supplementary.pdf : An overview of the extra figures *.pdf, *.png, *.eps : The figures in variuous formats Fig1a.pdf [228 KiB] HiDef png [366 KiB] Thumbnail [151 KiB] *C file",
null,
"Fig1b.pdf [190 KiB] HiDef png [308 KiB] Thumbnail [127 KiB] *C file",
null,
"Fig2a.pdf [29 KiB] HiDef png [246 KiB] Thumbnail [214 KiB] *C file",
null,
"Fig2b.pdf [30 KiB] HiDef png [253 KiB] Thumbnail [212 KiB] *C file",
null,
"Fig3.pdf [18 KiB] HiDef png [610 KiB] Thumbnail [324 KiB] *C file",
null,
"Created on 11 July 2020."
]
| [
null,
"http://lhcbproject.web.cern.ch/lhcbproject/Publications/LHCbProjectPublic/Directory_LHCb-PAPER-2017-017/thumbnail_Fig1.png",
null,
"http://lhcbproject.web.cern.ch/lhcbproject/Publications/LHCbProjectPublic/Directory_LHCb-PAPER-2017-017/thumbnail_Fig2.png",
null,
"http://lhcbproject.web.cern.ch/lhcbproject/Publications/LHCbProjectPublic/Directory_LHCb-PAPER-2017-017/thumbnail_Fig3.png",
null,
"http://lhcbproject.web.cern.ch/lhcbproject/Publications/LHCbProjectPublic/Directory_LHCb-PAPER-2017-017/thumbnail_Fig4.png",
null,
"http://lhcbproject.web.cern.ch/lhcbproject/Publications/LHCbProjectPublic/Directory_LHCb-PAPER-2017-017/thumbnail_PAPER-2017-017.gif",
null,
"http://lhcbproject.web.cern.ch/lhcbproject/Publications/LHCbProjectPublic/Directory_LHCb-PAPER-2017-017/thumbnail_Table_1.png",
null,
"http://lhcbproject.web.cern.ch/lhcbproject/Publications/LHCbProjectPublic/Directory_LHCb-PAPER-2017-017/supplementary/thumbnail_supplementary.png",
null,
"http://lhcbproject.web.cern.ch/lhcbproject/Publications/LHCbProjectPublic/Directory_LHCb-PAPER-2017-017/supplementary/thumbnail_Fig1a.png",
null,
"http://lhcbproject.web.cern.ch/lhcbproject/Publications/LHCbProjectPublic/Directory_LHCb-PAPER-2017-017/supplementary/thumbnail_Fig1b.png",
null,
"http://lhcbproject.web.cern.ch/lhcbproject/Publications/LHCbProjectPublic/Directory_LHCb-PAPER-2017-017/supplementary/thumbnail_Fig2a.png",
null,
"http://lhcbproject.web.cern.ch/lhcbproject/Publications/LHCbProjectPublic/Directory_LHCb-PAPER-2017-017/supplementary/thumbnail_Fig2b.png",
null,
"http://lhcbproject.web.cern.ch/lhcbproject/Publications/LHCbProjectPublic/Directory_LHCb-PAPER-2017-017/supplementary/thumbnail_Fig3.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7150559,"math_prob":0.99607474,"size":3507,"snap":"2020-24-2020-29","text_gpt3_token_len":1140,"char_repetition_ratio":0.14387667,"word_repetition_ratio":0.05724508,"special_character_ratio":0.36270317,"punctuation_ratio":0.096960925,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9981945,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-13T06:01:53Z\",\"WARC-Record-ID\":\"<urn:uuid:e203b29a-5072-431e-8561-c18f3fa10665>\",\"Content-Length\":\"16301\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9a56e3e8-9fee-4177-9067-8016541e7b4b>\",\"WARC-Concurrent-To\":\"<urn:uuid:c3909bb4-601a-4e5b-bb32-8785f0d1251b>\",\"WARC-IP-Address\":\"137.138.150.3\",\"WARC-Target-URI\":\"http://lhcbproject.web.cern.ch/lhcbproject/Publications/LHCbProjectPublic/LHCb-PAPER-2017-017.html\",\"WARC-Payload-Digest\":\"sha1:IAVOAIYQCCECYDRW27LCEUOIKFWKHVNJ\",\"WARC-Block-Digest\":\"sha1:GZ7BAFAMWLQRH33NVSGOERJ4TB5HX7FK\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593657142589.93_warc_CC-MAIN-20200713033803-20200713063803-00387.warc.gz\"}"} |
http://www.pendragonforms.com/documentation_scripting.html | [
"## Forms 8 Documentation - Scripting\n\n### Variables\n\nVariables are used in scripting statements to represent values in calculations, and to refer to the values of fields.\n\nPendragon Forms scripts support the following variables:\n\nRepresents the value of the current field.\n\nExample 1: answer = 10 puts the number 10 in the current field.\n\nExample 2: answer = \\$7 puts the value in field 7 into the current field.\n\nExample 3: answer = result puts the value that is currently in the result variable into the current field.\n\nresult\n\nUsed as a temporary variable for storing intermediate calculations.\n\nSeveral scripting statements also place values into the result variable.\n\nExample 1: result = 6 + 19 stores the number 25 in the result variable.\n\nExample 2: result = \\$5 + \\$6 adds up the values in fields 5 and 6, and places the sum of the two fields in the result variable.\n\nExample 3: result = result + \\$15 adds the existing value in result to the value in field 15, and stores the sum of the two values back in the result variable.\n\n\\$number\n\n\\$ is a field reference, used to refer to the value in the specified field.\n\n\\$[label]\n\nExample 1: \\$5 means the value in field 5.\n\nExample 2: \\$[OrderTotal] means the value in a field with the field label of OrderTotal.\n\ntemp\n\ntemp is a \"free\" variable that can be used to store data. Whereas scripting statements place values into the result variable, the value in the temp variable can be set in a script and will be preserved until you change this value in a script yourself, or until Pendragon Forms is no longer the active application on the mobile device.\n\nnull\n\nnull is a constant which is equivalent to an empty string.\n\nExample 1: \\$10 = null sets the value of field 10 to null.\n\nExample 2: calculate:\nif \\$4 <> null then\nendif\n\nIn this example, if field 4 is not null, then a calculation is performed.\n\nbuffer\n\nbuffer is another \"free\" variable that can be used to store data, similar to the temp variable. The buffer variable can be set in a script and will be preserved until you change this value in a script yourself, or until Pendragon Forms is no longer the active application on the mobile device.\n\nlookupname\n\nlookupname is a variable that can be used to determine the name of the Lookup List that will be displayed when a user taps in a Lookup List field. Lookupname is set with a setlookupname statement - see page 301.\n\nlookuplocale\n\nlookuplocale is another variable that can be used to determine the name of the Lookup List that will be displayed when a user taps in a Lookup List field. Lookuplocale is set with a setlookuplocale statement - see page 301.",
null,
"(847) 816-9660\[email protected]"
]
| [
null,
"http://www.pendragonforms.com/images/pendragon_logo.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.85824025,"math_prob":0.94052035,"size":71224,"snap":"2019-51-2020-05","text_gpt3_token_len":16291,"char_repetition_ratio":0.20744173,"word_repetition_ratio":0.1496699,"special_character_ratio":0.23059642,"punctuation_ratio":0.10565059,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.97027487,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-28T17:35:02Z\",\"WARC-Record-ID\":\"<urn:uuid:58abf7e1-af7c-4bf0-ab3e-c08b1fadcfcf>\",\"Content-Length\":\"165714\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5e551db3-5fe1-400c-b2fd-79fb41f4cdd2>\",\"WARC-Concurrent-To\":\"<urn:uuid:5b234f55-6315-4b92-93d5-570e40fb0f7d>\",\"WARC-IP-Address\":\"168.61.152.29\",\"WARC-Target-URI\":\"http://www.pendragonforms.com/documentation_scripting.html\",\"WARC-Payload-Digest\":\"sha1:DL6ETGS6CTZ5HJGGNW7OOB4XE2C2ZBJW\",\"WARC-Block-Digest\":\"sha1:YN3R2UY4L2TCIOEIWJ5NCKXKLKR4WEY5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251779833.86_warc_CC-MAIN-20200128153713-20200128183713-00424.warc.gz\"}"} |
https://www.xlstat.com/es/soluciones/funciones/prueba-de-tendencias-cochran-armitage | [
"# Prueba de Tendencias Cochran-Armitage\n\nThe Cochran-Armitage trend test allows to check if a series of proportions vary linearly along a numeric variable. Do it in Excel using the XLSTAT software.",
null,
"## When to use a Cochran-Armitage trend test\n\nThe Cochran Armitage trend is used to test if a series of proportions, possibly computed from a contingency table, can be considered as varying linearly with an ordinal or continuous variable.\n\nIt can be a one- or two-sided test.\n\n## What is the Cochran-Armitage trend test\n\nThe Cochran-Armitage test allows to test if a series of proportions, can be considered as varying linearly with an ordinal or continuous score variable.\n\nIf X is the score variable, the statistic that is computed to test for the linearity is given by:\n\nz = [Ʃi=1..r nr1(Xi - X)] / √ p+1 (1– p+1) s²\n\nwith s²=Ʃi=1..r ni+(Xi - X)²\n\nNote: if X is an ordinal variable, the minimum value of X has no influence on the value of z.\n\nIn the case of the two-tailed (or two-sided) test, the null (H0) and alternative (Ha) hypotheses are:\n\n• H0: z = 0\n• Ha: z ≠ 0\n\nNote: z is asymptotically distributed as a standard Normal variable. Some statistical programs use z² to test the linearity. z² follows a Chi-square distribution with one degree of freedom.\n\nIn the one-tailed case, you need to distinguish the left-tailed (or lower-tailed or lower one-sided) test and the right-tailed (or upper-tailed or upper one-sided) test. In the left-tailed test, the following hypotheses are used:\n\n• H0: z = 0\n• Ha: z < 0\n\nIf Ha is chosen, one concludes that the proportions decrease when the score variable increases.\n\nIn the right-tailed test, the following hypotheses are used:\n\n• H0: z = 0\n• Ha: z > 0\n\nIf Ha is chosen, one concludes that the proportions increase when the score variable increases.",
null,
"",
null,
"### analice sus datos con xlstat\n\nprueba gratuita de 14 días\n\nIncluido en"
]
| [
null,
"https://cdn.xlstat.com/media/feature/0001/01/thumb_176_feature_medium.png",
null,
"https://cdn.xlstat.com/dist/assets/img/diagram_ternary.svg",
null,
"https://cdn.xlstat.com/dist/assets/img/diagram_neural_network.svg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.85813874,"math_prob":0.76340723,"size":1772,"snap":"2021-21-2021-25","text_gpt3_token_len":473,"char_repetition_ratio":0.12556562,"word_repetition_ratio":0.18566775,"special_character_ratio":0.24266365,"punctuation_ratio":0.10803324,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99560195,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,8,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-14T21:56:50Z\",\"WARC-Record-ID\":\"<urn:uuid:beb4e4dd-2913-426f-9a7d-6a8059d27bf9>\",\"Content-Length\":\"27185\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d6dbdab9-8838-4a3a-af3f-afca280db461>\",\"WARC-Concurrent-To\":\"<urn:uuid:60ef6832-8814-4209-8a95-6737c4e88f2a>\",\"WARC-IP-Address\":\"13.68.195.86\",\"WARC-Target-URI\":\"https://www.xlstat.com/es/soluciones/funciones/prueba-de-tendencias-cochran-armitage\",\"WARC-Payload-Digest\":\"sha1:4LH7ANT24C5DQQLGILBKFGEGYC74W5ST\",\"WARC-Block-Digest\":\"sha1:XYVHFQ2QEQPGYRPLCHSOJTLF4GYZEMSL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487613453.9_warc_CC-MAIN-20210614201339-20210614231339-00204.warc.gz\"}"} |
http://www.aimsciences.org/article/doi/10.3934/cpaa.2019078 | [
"",
null,
"",
null,
"",
null,
"",
null,
"• Previous Article\nGround state solutions for the fractional Schrödinger-Poisson systems involving critical growth in $\\mathbb{R} ^{3}$\n• CPAA Home\n• This Issue\n• Next Article\nEffects of localized spatial variations on the uniform persistence and spreading speeds of time periodic two species competition systems\nJuly 2019, 18(4): 1637-1662. doi: 10.3934/cpaa.2019078\n\n## New general decay results in a finite-memory bresse system\n\n Department of Mathematics and Statistics, King Fahd University of Petroleum and Minerals, P.O. Box 546, Dhahran 31261, Saudi Arabia\n\n* Corresponding author\n\nReceived April 2018 Revised July 2018 Published January 2019\n\nFund Project: This work is funded by KFUPM under Project IN161006.\n\nThis paper is concerned with the following memory-type Bresse system\n $\\begin{array}{ll} \\rho_1\\varphi_{tt}-k_1(\\varphi_x+\\psi+lw)_x-lk_3(w_x-l\\varphi) = 0,\\\\ \\rho_2\\psi_{tt}-k_2\\psi_{xx}+k_1(\\varphi_x+\\psi+lw)+ \\int_0^tg(t-s)\\psi_{xx}(\\cdot,s)ds = 0,\\\\ \\rho_1w_{tt}-k_3(w_x-l\\varphi)_x+lk_1(\\varphi_x+\\psi+lw) = 0, \\end{array}$\nwith homogeneous Dirichlet-Neumann-Neumann boundary conditions, where\n $(x,t) \\in (0,L) \\times (0, \\infty)$\n,\n $g$\nis a positive strictly increasing function satisfying, for some nonnegative functions\n $\\xi$\nand\n $H$\n,\n $g'(t)\\leq-\\xi(t)H(g(t)),\\qquad\\forall t\\geq0.$\nUnder appropriate conditions on\n $\\xi$\nand\n $H$\n, we prove, in cases of equal and non-equal speeds of wave propagation, some new decay results that generalize and improve the recent results in the literature.\nCitation: Salim A. Messaoudi, Jamilu Hashim Hassan. New general decay results in a finite-memory bresse system. Communications on Pure & Applied Analysis, 2019, 18 (4) : 1637-1662. doi: 10.3934/cpaa.2019078\n##### References:\n M. O. Alves, L. H. Fatori, M. A. Jorge Silva and R. N. Monteriro, Stability and optimality of decay rate for a weakly dissipative Bresse system, Math. Methods Appl. Sci., 38 (2015), 898-908. doi: 10.1002/mma.3115.",
null,
"",
null,
"Google Scholar M. S. Alves, O. Vera, J. Muñoz-Rivera and A. Rambaud, Exponential stability to the Bresse system with boundary dissipation conditions, (2015), arXiv: 1506.01657. Google Scholar V. I. Arnol'd, Mathematical Methods of Classical Mechanics, Springer-Verlag, New York, 1989. doi: 10.1007/978-1-4757-2063-1.",
null,
"",
null,
"Google Scholar F. Dell'Oro, Asymptotic stability of thermoelastic systems of Bresse type, J. Differ. Equ., 258 (2015), 3902-3927. doi: 10.1016/j.jde.2015.01.025.",
null,
"",
null,
"Google Scholar L. H. Fatori and R. N. Monteiro, The optimal decay rate for a weak dissipative Bresse system, Appl. Math. Lett., 25 (2012), 600-604. doi: 10.1016/j.aml.2011.09.067.",
null,
"",
null,
"Google Scholar A. Guesmia and M. Kafini, Bresse system with infinite memories, Math. Methods Appl. Sci., 38 (2015), 2389-2402. doi: 10.1002/mma.3228.",
null,
"",
null,
"Google Scholar A. Guesmia and S. A. Messaoudi, On the stabilization of Timoshenko systems with memory and different speeds of wave propagation, Appl. Math. Comput., 219 (2013), 9424-9437. doi: 10.1016/j.amc.2013.03.105.",
null,
"",
null,
"Google Scholar T. F. Ma and R. N. Monteiro, Singular limit and long-time dynamics of Bresse systems, SIAM J. Math. Anal., 49 (2017), 2468-2495. doi: 10.1137/15M1039894.",
null,
"",
null,
"Google Scholar M. I. Mustafa, General decay result for nonlinear viscoelastic equations, J. Math. Anal. Appl., 457 (2018), 134-152. doi: 10.1016/j.jmaa.2017.08.019.",
null,
"",
null,
"Google Scholar J. A. Soriano, J. E. Muñoz Rivera and L. H. Fatori, Bresse system with indefinite damping, J. Math. Anal. Appl., 387 (2012), 284-290. doi: 10.1016/j.jmaa.2011.08.072.",
null,
"",
null,
"Google Scholar A. Soufyane and B. Said-Houari, The effect of the wave speeds and the frictional damping terms on the decay rate of the bresse system, Evol. Equations Control Theory, 3 (2014), 713-738. doi: 10.3934/eect.2014.3.713.",
null,
"",
null,
"Google Scholar A. Wehbe and W. Youssef, Exponential and polynomial stability of an elastic Bresse system with two locally distributed feedbacks, J. Math. Phys., 51 (2010), 1-17. doi: 10.1063/1.3486094.",
null,
"",
null,
"Google Scholar\n\nshow all references\n\n##### References:\n M. O. Alves, L. H. Fatori, M. A. Jorge Silva and R. N. Monteriro, Stability and optimality of decay rate for a weakly dissipative Bresse system, Math. Methods Appl. Sci., 38 (2015), 898-908. doi: 10.1002/mma.3115.",
null,
"",
null,
"Google Scholar M. S. Alves, O. Vera, J. Muñoz-Rivera and A. Rambaud, Exponential stability to the Bresse system with boundary dissipation conditions, (2015), arXiv: 1506.01657. Google Scholar V. I. Arnol'd, Mathematical Methods of Classical Mechanics, Springer-Verlag, New York, 1989. doi: 10.1007/978-1-4757-2063-1.",
null,
"",
null,
"Google Scholar F. Dell'Oro, Asymptotic stability of thermoelastic systems of Bresse type, J. Differ. Equ., 258 (2015), 3902-3927. doi: 10.1016/j.jde.2015.01.025.",
null,
"",
null,
"Google Scholar L. H. Fatori and R. N. Monteiro, The optimal decay rate for a weak dissipative Bresse system, Appl. Math. Lett., 25 (2012), 600-604. doi: 10.1016/j.aml.2011.09.067.",
null,
"",
null,
"Google Scholar A. Guesmia and M. Kafini, Bresse system with infinite memories, Math. Methods Appl. Sci., 38 (2015), 2389-2402. doi: 10.1002/mma.3228.",
null,
"",
null,
"Google Scholar A. Guesmia and S. A. Messaoudi, On the stabilization of Timoshenko systems with memory and different speeds of wave propagation, Appl. Math. Comput., 219 (2013), 9424-9437. doi: 10.1016/j.amc.2013.03.105.",
null,
"",
null,
"Google Scholar T. F. Ma and R. N. Monteiro, Singular limit and long-time dynamics of Bresse systems, SIAM J. Math. Anal., 49 (2017), 2468-2495. doi: 10.1137/15M1039894.",
null,
"",
null,
"Google Scholar M. I. Mustafa, General decay result for nonlinear viscoelastic equations, J. Math. Anal. Appl., 457 (2018), 134-152. doi: 10.1016/j.jmaa.2017.08.019.",
null,
"",
null,
"Google Scholar J. A. Soriano, J. E. Muñoz Rivera and L. H. Fatori, Bresse system with indefinite damping, J. Math. Anal. Appl., 387 (2012), 284-290. doi: 10.1016/j.jmaa.2011.08.072.",
null,
"",
null,
"Google Scholar A. Soufyane and B. Said-Houari, The effect of the wave speeds and the frictional damping terms on the decay rate of the bresse system, Evol. Equations Control Theory, 3 (2014), 713-738. doi: 10.3934/eect.2014.3.713.",
null,
"",
null,
"Google Scholar A. Wehbe and W. Youssef, Exponential and polynomial stability of an elastic Bresse system with two locally distributed feedbacks, J. Math. Phys., 51 (2010), 1-17. doi: 10.1063/1.3486094.",
null,
"",
null,
"Google Scholar\n Abdelaziz Soufyane, Belkacem Said-Houari. The effect of the wave speeds and the frictional damping terms on the decay rate of the Bresse system. Evolution Equations & Control Theory, 2014, 3 (4) : 713-738. doi: 10.3934/eect.2014.3.713 Ammar Khemmoudj, Taklit Hamadouche. General decay of solutions of a Bresse system with viscoelastic boundary conditions. Discrete & Continuous Dynamical Systems - A, 2017, 37 (9) : 4857-4876. doi: 10.3934/dcds.2017209 Belkacem Said-Houari, Salim A. Messaoudi. General decay estimates for a Cauchy viscoelastic wave problem. Communications on Pure & Applied Analysis, 2014, 13 (4) : 1541-1551. doi: 10.3934/cpaa.2014.13.1541 Ammar Khemmoudj, Yacine Mokhtari. General decay of the solution to a nonlinear viscoelastic modified von-Kármán system with delay. Discrete & Continuous Dynamical Systems - A, 2019, 39 (7) : 3839-3866. doi: 10.3934/dcds.2019155 Dongbing Zha, Yi Zhou. The lifespan for quasilinear wave equations with multiple propagation speeds in four space dimensions. Communications on Pure & Applied Analysis, 2014, 13 (3) : 1167-1186. doi: 10.3934/cpaa.2014.13.1167 Jing Zhang. The analyticity and exponential decay of a Stokes-wave coupling system with viscoelastic damping in the variational framework. Evolution Equations & Control Theory, 2017, 6 (1) : 135-154. doi: 10.3934/eect.2017008 Nguyen Thanh Long, Hoang Hai Ha, Le Thi Phuong Ngoc, Nguyen Anh Triet. Existence, blow-up and exponential decay estimates for a system of nonlinear viscoelastic wave equations with nonlinear boundary conditions. Communications on Pure & Applied Analysis, 2020, 19 (1) : 455-492. doi: 10.3934/cpaa.2020023 Kunimochi Sakamoto. Destabilization threshold curves for diffusion systems with equal diffusivity under non-diagonal flux boundary conditions. Discrete & Continuous Dynamical Systems - B, 2016, 21 (2) : 641-654. doi: 10.3934/dcdsb.2016.21.641 Tae Gab Ha. Global existence and general decay estimates for the viscoelastic equation with acoustic boundary conditions. Discrete & Continuous Dynamical Systems - A, 2016, 36 (12) : 6899-6919. doi: 10.3934/dcds.2016100 Mohammad M. Al-Gharabli, Aissa Guesmia, Salim A. Messaoudi. Existence and a general decay results for a viscoelastic plate equation with a logarithmic nonlinearity. Communications on Pure & Applied Analysis, 2019, 18 (1) : 159-180. doi: 10.3934/cpaa.2019009 Ammar Khemmoudj, Imane Djaidja. General decay for a viscoelastic rotating Euler-Bernoulli beam. Communications on Pure & Applied Analysis, 2020, 19 (7) : 3531-3557. doi: 10.3934/cpaa.2020154 Yvan Martel, Frank Merle. Inelastic interaction of nearly equal solitons for the BBM equation. Discrete & Continuous Dynamical Systems - A, 2010, 27 (2) : 487-532. doi: 10.3934/dcds.2010.27.487 Jeffrey Diller, Han Liu, Roland K. W. Roeder. Typical dynamics of plane rational maps with equal degrees. Journal of Modern Dynamics, 2016, 10: 353-377. doi: 10.3934/jmd.2016.10.353 Abbes Benaissa, Abderrahmane Kasmi. Well-posedeness and energy decay of solutions to a bresse system with a boundary dissipation of fractional derivative type. Discrete & Continuous Dynamical Systems - B, 2018, 23 (10) : 4361-4395. doi: 10.3934/dcdsb.2018168 Jong Yeoul Park, Sun Hye Park. On uniform decay for the coupled Euler-Bernoulli viscoelastic system with boundary damping. Discrete & Continuous Dynamical Systems - A, 2005, 12 (3) : 425-436. doi: 10.3934/dcds.2005.12.425 William Thomson. For claims problems, another compromise between the proportional and constrained equal awards rules. 
Journal of Dynamics & Games, 2015, 2 (3&4) : 363-382. doi: 10.3934/jdg.2015011 Eduardo S. G. Leandro. On the Dziobek configurations of the restricted $(N+1)$-body problem with equal masses. Discrete & Continuous Dynamical Systems - S, 2008, 1 (4) : 589-595. doi: 10.3934/dcdss.2008.1.589 Étienne Bernard, Marie Doumic, Pierre Gabriel. Cyclic asymptotic behaviour of a population reproducing by fission into two equal parts. Kinetic & Related Models, 2019, 12 (3) : 551-571. doi: 10.3934/krm.2019022 W. Wei, Yin Li, Zheng-An Yao. Decay of the compressible viscoelastic flows. Communications on Pure & Applied Analysis, 2016, 15 (5) : 1603-1624. doi: 10.3934/cpaa.2016004 Marcelo M. Cavalcanti, Valéria N. Domingos Cavalcanti, Irena Lasiecka, Flávio A. Falcão Nascimento. Intrinsic decay rate estimates for the wave equation with competing viscoelastic and frictional dissipative effects. Discrete & Continuous Dynamical Systems - B, 2014, 19 (7) : 1987-2011. doi: 10.3934/dcdsb.2014.19.1987\n\n2018 Impact Factor: 0.925"
]
| [
null,
"https://www.aimsciences.org:443/style/web/images/white_google.png",
null,
"https://www.aimsciences.org:443/style/web/images/white_facebook.png",
null,
"https://www.aimsciences.org:443/style/web/images/white_twitter.png",
null,
"https://www.aimsciences.org:443/style/web/images/white_linkedin.png",
null,
"https://www.aimsciences.org:443/style/web/images/crossref.jpeg",
null,
"https://www.aimsciences.org:443/style/web/images/math-review.gif",
null,
"https://www.aimsciences.org:443/style/web/images/crossref.jpeg",
null,
"https://www.aimsciences.org:443/style/web/images/math-review.gif",
null,
"https://www.aimsciences.org:443/style/web/images/crossref.jpeg",
null,
"https://www.aimsciences.org:443/style/web/images/math-review.gif",
null,
"https://www.aimsciences.org:443/style/web/images/crossref.jpeg",
null,
"https://www.aimsciences.org:443/style/web/images/math-review.gif",
null,
"https://www.aimsciences.org:443/style/web/images/crossref.jpeg",
null,
"https://www.aimsciences.org:443/style/web/images/math-review.gif",
null,
"https://www.aimsciences.org:443/style/web/images/crossref.jpeg",
null,
"https://www.aimsciences.org:443/style/web/images/math-review.gif",
null,
"https://www.aimsciences.org:443/style/web/images/crossref.jpeg",
null,
"https://www.aimsciences.org:443/style/web/images/math-review.gif",
null,
"https://www.aimsciences.org:443/style/web/images/crossref.jpeg",
null,
"https://www.aimsciences.org:443/style/web/images/math-review.gif",
null,
"https://www.aimsciences.org:443/style/web/images/crossref.jpeg",
null,
"https://www.aimsciences.org:443/style/web/images/math-review.gif",
null,
"https://www.aimsciences.org:443/style/web/images/crossref.jpeg",
null,
"https://www.aimsciences.org:443/style/web/images/math-review.gif",
null,
"https://www.aimsciences.org:443/style/web/images/crossref.jpeg",
null,
"https://www.aimsciences.org:443/style/web/images/math-review.gif",
null,
"https://www.aimsciences.org:443/style/web/images/crossref.jpeg",
null,
"https://www.aimsciences.org:443/style/web/images/math-review.gif",
null,
"https://www.aimsciences.org:443/style/web/images/crossref.jpeg",
null,
"https://www.aimsciences.org:443/style/web/images/math-review.gif",
null,
"https://www.aimsciences.org:443/style/web/images/crossref.jpeg",
null,
"https://www.aimsciences.org:443/style/web/images/math-review.gif",
null,
"https://www.aimsciences.org:443/style/web/images/crossref.jpeg",
null,
"https://www.aimsciences.org:443/style/web/images/math-review.gif",
null,
"https://www.aimsciences.org:443/style/web/images/crossref.jpeg",
null,
"https://www.aimsciences.org:443/style/web/images/math-review.gif",
null,
"https://www.aimsciences.org:443/style/web/images/crossref.jpeg",
null,
"https://www.aimsciences.org:443/style/web/images/math-review.gif",
null,
"https://www.aimsciences.org:443/style/web/images/crossref.jpeg",
null,
"https://www.aimsciences.org:443/style/web/images/math-review.gif",
null,
"https://www.aimsciences.org:443/style/web/images/crossref.jpeg",
null,
"https://www.aimsciences.org:443/style/web/images/math-review.gif",
null,
"https://www.aimsciences.org:443/style/web/images/crossref.jpeg",
null,
"https://www.aimsciences.org:443/style/web/images/math-review.gif",
null,
"https://www.aimsciences.org:443/style/web/images/crossref.jpeg",
null,
"https://www.aimsciences.org:443/style/web/images/math-review.gif",
null,
"https://www.aimsciences.org:443/style/web/images/crossref.jpeg",
null,
"https://www.aimsciences.org:443/style/web/images/math-review.gif",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.54949665,"math_prob":0.76256984,"size":10283,"snap":"2020-24-2020-29","text_gpt3_token_len":3569,"char_repetition_ratio":0.15264131,"word_repetition_ratio":0.5037288,"special_character_ratio":0.3836429,"punctuation_ratio":0.27909997,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9517089,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-26T16:12:37Z\",\"WARC-Record-ID\":\"<urn:uuid:370aa01a-0d84-43a0-8f6e-d9a017a981b6>\",\"Content-Length\":\"92862\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:53359ad7-14e0-4c20-acb1-4b9a6d53902d>\",\"WARC-Concurrent-To\":\"<urn:uuid:a38aa4b1-3ba4-421a-931c-114ad4c70cc3>\",\"WARC-IP-Address\":\"216.227.221.143\",\"WARC-Target-URI\":\"http://www.aimsciences.org/article/doi/10.3934/cpaa.2019078\",\"WARC-Payload-Digest\":\"sha1:4Y6H5W4T7DFLV3IPFM4LAUAIK4QZZ7NS\",\"WARC-Block-Digest\":\"sha1:QPMRVKHBPANWASSDSHROA277AHA3JCR7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347391277.13_warc_CC-MAIN-20200526160400-20200526190400-00442.warc.gz\"}"} |
https://www.arxiv-vanity.com/papers/1208.0553/ | [
"arXiv Vanity renders academic papers from arXiv as responsive web pages so you don’t have to squint at a PDF. Read this paper on arXiv.org.\n\n# The homogeneity theorem for supergravity backgrounds\n\nJosé Figueroa-O’Farrill and Noel Hustler Maxwell and Tait Institutes, School of Mathematics, University of Edinburgh\n###### Abstract.\n\nWe prove the strong homogeneity conjecture for eleven- and ten-dimensional (Poincaré) supergravity backgrounds. In other words, we show that any backgrounds of 11-dimensional, type I/heterotic or type II supergravity theories preserving a fraction of the supersymmetry of the underlying theory are necessarily locally homogeneous. Moreover we show that the homogeneity is due precisely to the supersymmetry, so that at every point of the spacetime one can find a frame for the tangent space made out of Killing vectors constructed out of the Killing spinors.\n\nEMPG-12-14\n\n## 1. Introduction\n\nIt is a fact that all known (Poincaré) supergravity backgrounds in 10 and 11 dimensions (and also in some lower dimensions) which preserve more than one half of the supersymmetry of the theory are homogeneous, by which one means that there is a Lie group acting transitively on the spacetime and preserving all the bosonic fields which are turned on in the background: metric, fluxes,… This empirical fact led naturally to the homogeneity conjecture, reviewed in , which we heard for the first time in a private communication from Patrick Meessen in 2004. The earliest attempt to prove this conjecture was , where it was shown that classical M-theory backgrounds preserving more than of the supersymmetry are (locally) homogeneous. As explained in , we usually work with local metrics defined on open subsets of , whence the relevant notion is that of local homogeneity. This follows from the related notion of local transitivity, which simply says that at every point in the spacetime there is a frame made out of Killing vectors which preserve all bosonic fields. If we further demand that the frame consists of Killing vectors which are made out of the Killing spinors of the background we arrive at what one could call the “strong” form of the conjecture. In there is a proof of a strong form of the conjecture but for backgrounds preserving more than of the supersymmetry and moreover suggested that the critical supersymmetry fraction beyond which homogeneity was guaranteed was in fact . That suggestion was based on what turns out to be an error in that paper, namely the construction of a 24-dimensional subspace of the spinor representation of obeying certain properties. In fact, as will be explained below, no such subspaces exist.\n\nIn it was shown that any type IIB supergravity background preserving more than of the supersymmetry is (locally) homogeneous and moreover that the homogeneity is due to the supersymmetry. In the same paper, the conjecture (for ) was proven in type I/heterotic supergravity, albeit not in the strong form. In other words, the question was left open whether the homogeneity for type I/heterotic backgrounds preserving more than of the supersymmetry could be “accidental”, as the proof used unrelated results concerning parallelisable heterotic backgrounds.\n\nThe purpose of this note is to prove the strong form of the conjecture for eleven-dimensional, type I/heterotic and type II supergravity backgrounds. The proof is quite elementary and uniform in all cases, resting as it does on two fundamental facts. 
Firstly, that there is a “squaring” map from spinor fields to vector fields such that when applied to Killing spinors produces Killing vectors which in addition preserve all the other bosonic fields which are turned on in the background. This has been established in (but see also ) for eleven-dimensional supergravity and in for type IIB and type I/heterotic supergravities. And lastly, that this squaring map is pointwise surjective provided that the dimension of the space of Killing spinors is greater than one half of the rank of the spinor bundle. It is the proof of this latter fact which is the main aim of this note.\n\nThe note is organised as follows. In Section 2 we present the general set-up, since the idea of the proof is to a large extent independent of the details of the supergravity theory. Then in Section 3 we present the proofs for each of the supergravity theories under discussion. Finally we conclude in Section 4.\n\n## 2. General set-up\n\nUnless otherwise stated, by “supergravity background” we shall mean a bosonic background of eleven-dimensional, type II or type I/heterotic supergravity. The common ingredients in all supergravity backgrounds are a lorentzian spin manifold and a bundle of spinors on which one has defined a connection . The connection depends on the bosonic fields of the background in question. The bundle is obtained from the spin bundle as a vector bundle associated to a representation of the spin group. The representation need not be irreducible, but might be a direct sum of irreducible spinor representations. The tangent bundle is similarly obtained, but this time the associated representation of the spin group is the vector representation of the corresponding orthogonal group and shall be denoted .\n\nEach supergravity background has a notion of Killing spinor, which is a section of which is -parallel and in some cases might satisfy additional algebraic equations which say that it is in the kernel of some bundle maps which depend on the bosonic fields in the supergravity background. Since the equations defining a Killing spinor are linear, the Killing spinors span a vector space which we denote . Because the equations satisfied by Killing spinors are at most first order in derivatives, a Killing spinor is uniquely determined by its value at any point , whence having chosen such a point, can be identified with a subspace of the fibre of at , which can itself be identified with the representation . This means that we can think of as a vector subspace of .\n\nAnother common ingredient of supergravity backgrounds is the existence of a symmetric bilinear bundle map (called “squaring”) with the property that if are Killing spinors, then is a Killing vector which moreover preserves all the other bosonic fields in the background (when appropriate, only up to gauge transformations). Let denote the vector fields obtained by squaring Killing spinors. Then it was shown in and for eleven- and ten-dimensional supergravity backgrounds, respectively, that on the 2-graded vector space one can define the structure of a Lie superalgebra, called the Killing superalgebra of the supergravity background. In particular, is a Lie algebra and the strong homogeneity conjecture says that acts locally transitively on the spacetime; that is, that the values at of the Killing vectors in span for all .\n\nFixing a point once and for all and identifying with with , the squaring map induces a spin-equivariant symmetric bilinear map . 
Being symmetric and bilinear, is uniquely determined by its value on the diagonal by the usual polarisation identity

 2φ(ε1,ε2)=φ(ε1+ε2,ε1+ε2)−φ(ε1,ε1)−φ(ε2,ε2) . (1)

A final property of the map is that for any , the vector is either timelike or null relative to the lorentzian inner product on induced by the restriction to of the metric . Of course, for two different , the causal type of is typically unrestricted.

The proof of the conjecture consists in showing that if , then the restriction of the map to is surjective onto . In other words, that we can always find a frame for which is made out of the values at of Killing vectors in the image , thus proving the strong form of the homogeneity conjecture.

The idea of the proof is the same in all cases. To show that is surjective one would like to show that the perpendicular complement of its image in is trivial. It is not difficult to show that in all cases is a totally null subspace of , whence in lorentzian signature its dimension is bounded above by 1. One concludes the proof by showing that the case where the dimension is 1 cannot occur by deriving a contradiction.

## 3. The proof of the strong homogeneity conjecture

For each of the supergravity theories in question we will now exhibit the map , which requires in particular identifying the representations and , and we will prove that the restriction of to any subspace of dimension is surjective.

### 3.1. Eleven-dimensional supergravity

In eleven-dimensional supergravity, is one of the two irreducible representations of the Clifford algebra and hence restricts to as its unique irreducible spinor representation. It is real and 32-dimensional and admits an invariant symplectic structure we shall denote . is the eleven-dimensional real vector representation of and we shall let denote the invariant lorentzian inner product. The squaring map is the transpose of the Clifford action relative to the symplectic structure on and the lorentzian inner product on . In other words, for every and ,

 ⟨v,φ(ε1,ε2)⟩=(ε1,v⋅ε2) . (2)

Choosing a pseudo-orthonormal basis for and letting denote the corresponding gamma matrices, we have

 φ(ε1,ε2)=¯ε1Γμε2eμ , (3)

where we have used the standard physics notation for the symplectic inner product on , which is defined as

 ¯ε1ε2=ε†1Γ0ε2 . (4)

By taking , this shows that the vector

 vμ:=¯εΓμε=ε†Γ0Γμε (5)

obtained by squaring has a nonzero component along :

 v0=¯εΓ0ε=ε†Γ0Γ0ε=−ε†ε=−|ε|2<0 . (6)

This means that is either null or timelike, since if it were spacelike one could Lorentz-transform to the rest frame, where .

The values of Killing spinors at the point span a subspace . We want to show that if then the restriction of to is surjective onto .

Let . The map

 φ|W:W⊗W→V (7)

is surjective if and only if the perpendicular complement of its image is trivial. Equivalently, if and only if the only vector obeying

 (ε1,v⋅ε2)=0 for all ε1,2∈W (8)

is the zero vector .

Our first observation is that any satisfying (8) is necessarily null. Indeed, notice that (8) can be rephrased as saying that the Clifford product by sends , where

 W⊥={ε∈S∣(ε,w)=0 for all w∈W} (9)

is the symplectic perpendicular complement of . Since , it follows from that , whence the Clifford product by must have nonzero kernel purely on dimensional grounds. On the other hand, , whence has nonzero kernel if and only if . 
(Here denotes the indefinite norm in .)

In other words, the perpendicular complement (relative to ) of the image of is a totally null subspace of . Since is lorentzian, any totally null subspace is at most one-dimensional. Moreover, if one-dimensional, it is spanned by a null vector in .

Assume for a contradiction that the perpendicular complement of the image of is one-dimensional. Without loss of generality we can choose a Witt basis for such that the image of is the perpendicular complement of , which is spanned by itself and the . In particular, this means that for every , must be perpendicular to , but we have seen that it cannot be spacelike, hence it has to be collinear with . In other words, for any , , for some function . Now consider for . By the polarisation identity (1),

 2φ(ε1,ε2)=φ(ε1+ε2,ε1+ε2)−φ(ε1,ε1)−φ(ε2,ε2)=λ(ε1+ε2)e+−λ(ε1)e+−λ(ε2)e+=(λ(ε1+ε2)−λ(ε1)−λ(ε2))e+ , (10)

whence the image of is contained in the null line spanned by , which contradicts the fact that its perpendicular complement is one-dimensional. This means that has to be surjective, as desired.

In [2, §6.3] it was claimed that there was a 24-dimensional subspace with the property that the image of was not all of . The “proof” of this statement in that paper is incorrect. It relies on choosing the signature of the bilinear form which is not possible, as the signature of this bilinear form is not a matter of choice but follows from a calculation. Indeed, one can compute it and it has rank 16 and it is semi-definite.

### 3.2. Type IIA supergravity

We may prove the strong homogeneity conjecture for type IIA supergravity as a consequence of the one for eleven-dimensional supergravity. Indeed, any background of IIA supergravity preserving more than half of the supersymmetry oxidises to a background of eleven-dimensional supergravity which also preserves more than half of the supersymmetry. By the above result, it is locally homogeneous. The eleven-dimensional geometry is the total space of a locally trivial fibre bundle over the IIA geometry. The Killing spinors of the eleven-dimensional supergravity background obtained via oxidation are constant along the fibres, meaning that their Lie derivative along the Killing vector along the fibres vanishes. This is in fact the geometric interpretation of the vanishing of the supersymmetry variation of the dilatino. This means that the Killing vectors obtained by squaring them are also constant along the fibres, which means that they commute with the Killing vector along the fibre and hence push down to Killing vectors of the IIA background. Since they act locally transitively in the eleven-dimensional geometry, their push-downs to the base also act locally transitively. This shows that the IIA background is locally homogeneous.

### 3.3. Type I/Heterotic supergravity

In type I/heterotic supergravity the relevant spinor representation is , the positive-chirality spinor representation of which is real and 16-dimensional. We will let be the unique irreducible Clifford module of . It is which has an invariant symplectic inner product relative to which are lagrangian subspaces. This means that the symplectic structure sets up an isomorphism of representations. The representation is the real ten-dimensional vector representation of with an invariant lorentzian inner product . 
The squaring map is defined as in the case of eleven-dimensional supergravity as the transpose of the Clifford product relative to the two inner products on and ; that is, for all and , is defined by\n\n ⟨v,φ(ε1,ε2)⟩=(ε1,v⋅ε2) . (11)\n\nAs in the case of eleven-dimensional supergravity, it again follows that for every nonzero , is nonzero and is not spacelike.\n\nThe values at of the Killing spinors define a subspace . We wish to show that if , then the restriction of to is surjective onto . Let . A vector is perpendicular to the image of if and only if for all ,\n\n 0=⟨v,φ(ε1,ε2)⟩=(ε1,v⋅ε2) . (12)\n\nIn other words, the Clifford product with sends to its annihilator\n\n W0={χ∈S−∣∣(χ,ε)=0 ∀ε∈W} (13)\n\nin . Since , implies that and hence the Clifford product with has nontrivial kernel, again purely by dimensional reasons. However the Clifford relation says that is null. In other words, the perpendicular complement of the image of is a totally null subspace of , hence at most one-dimensional. We wish to show that it cannot be one-dimensional, so let us assume for a contradiction that it is. Again we may choose a Witt basis for such that the image of is the null line spanned by . As in the case of eleven-dimensional supergravity, we observe that for every , is a non-spacelike vector perpendicular to , whence it has to be collinear with , so that for some function . Again using polarisation we deduce that for all , lies in the line spanned by , whence the image of is one-dimensional, contradicting the fact that its codimension is equal to 1. Therefore, we conclude that is surjective, as desired.\n\n### 3.4. Type IIB supergravity\n\nIn type IIB supergravity, the relevant spinor representation is , consisting of two copies of the positive-chirality spinor representation of . Letting , we have that consists of two copies of the unique irreducible module of . On we have an invariant symplectic inner product which is given by the diagonal extension of the one on discussed in the previous section. In other words, if we let , then their symplectic inner product is given by\n\n (ε1,ε2)=2∑A=1(εA1,εA2) , (14)\n\nwhere , et cetera. Relative to this symplectic inner product, are lagrangian subspaces.\n\nHere is the ten-dimensional vector representation of as in the case of type I/heterotic supergravity. Its invariant lorentzian inner product is denoted as before by . The squaring map is defined again in such a way that for all and ,\n\n ⟨v,φ(ε1,ε2)⟩=(ε1,v⋅ε2) , (15)\n\nor in more traditional notation and again relative to pseudo-orthonormal basis for ,\n\n φ(ε1,ε2)=∑A¯εA1ΓμεA2eμ (16)\n\nwhere the sum over is implicit.\n\nIt follows as before that if is nonzero, then the vector with components is nonzero and is not spacelike, since\n\n v0=∑A¯εAΓ0εA=∑A(εA)†Γ0Γ0εA=−∑A(εA)†εA<0 . (17)\n\nNow let denote the subspace defined by the values at of the Killing spinors and let denote the restriction of to . A vector is perpendicular to the image of if and only if Clifford product with maps to its annihilator . If , , whence Clifford product with has nonzero kernel and hence by the Clifford relation is null. Therefore is a totally null subspace of and hence it must be at most one-dimensional. Assuming for a contradiction that it is one-dimensional and choosing a Witt basis for so that , we again conclude by polarisation that , which contradicts the codimension of being . Therefore we can conclude that is surjective as desired.\n\n## 4. 
Conclusion\n\nWe have given uniform and elementary proofs of the strong homogeneity conjecture for the ten- and eleven-dimensional Poincaré supergravity theories. This result simplifies the classification efforts of backgrounds preserving more than one half of the supersymmetry, by restricting the search to homogeneous backgrounds. Of course, classifying homogeneous lorentzian manifolds is not an easy matter, but one can go some way towards a classification with some further restrictions (e.g., semisimplicity of ).\n\nWe believe that the strong homogeneity conjecture is true in more generality and an effort is currently underway to study its validity in other supergravity theories. It may also be the case that one can generalise these results to other geometric situations, strengthening the work in .\n\n## Acknowledgments\n\nThis work was supported in part by the grant ST/J000329/1 “Particle Theory at the Tait Institute” from the UK Science and Technology Facilities Council. In addition, NH is supported by an EPSRC studentship."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.92840356,"math_prob":0.9687203,"size":18279,"snap":"2020-34-2020-40","text_gpt3_token_len":4020,"char_repetition_ratio":0.16487004,"word_repetition_ratio":0.065513805,"special_character_ratio":0.20734176,"punctuation_ratio":0.10378765,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98398244,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-14T03:02:58Z\",\"WARC-Record-ID\":\"<urn:uuid:9f26ee03-1827-4d24-b033-1291bf55a414>\",\"Content-Length\":\"418102\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0a50c12c-198b-4d2a-a9d4-2ab352b72a39>\",\"WARC-Concurrent-To\":\"<urn:uuid:b705fa57-4d46-4755-9d8e-0888e7c1acf5>\",\"WARC-IP-Address\":\"104.28.20.249\",\"WARC-Target-URI\":\"https://www.arxiv-vanity.com/papers/1208.0553/\",\"WARC-Payload-Digest\":\"sha1:27GKBW7FVVNM7RMHIWYRSRDSKJV76B2M\",\"WARC-Block-Digest\":\"sha1:VL27VIO6WJXI5TEPTTPJQQ4NRYWZVQV2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439739134.49_warc_CC-MAIN-20200814011517-20200814041517-00366.warc.gz\"}"} |
https://brilliant.org/problems/the-force-it-applies/ | [
"# The force it applies.",
null,
"Figure above is an overhead view of a rigid rod that turns about a vertical axle until the identical rubber stoppers A and B are forced against rigid walls at distances $r_{A} = 7$ $cm$ and $r_{B} = 4$ $cm$ from the axle. Initially the stoppers touch the walls without being compressed. Then force $\\vec{F}$of magnitude $220$ $N$ is applied perpendicular to the rod at a distance $R = 5$ $cm$ from the axle. Find the magnitude of the force compressing Stopper A.\n\nLiked it? try some more\n\n×"
]
| [
null,
"https://ds055uzetaobb.cloudfront.net/brioche/solvable/6619902009.3c15e7fe76.yxv13O.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8675139,"math_prob":0.99936,"size":583,"snap":"2021-31-2021-39","text_gpt3_token_len":123,"char_repetition_ratio":0.11398964,"word_repetition_ratio":0.12631579,"special_character_ratio":0.20240137,"punctuation_ratio":0.16806723,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99979407,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-06T04:43:18Z\",\"WARC-Record-ID\":\"<urn:uuid:53bcc952-d490-47a9-841e-e89793e142d4>\",\"Content-Length\":\"40117\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:32d50dfa-0eb5-4254-b555-dfd259813a09>\",\"WARC-Concurrent-To\":\"<urn:uuid:89bf891c-3729-465a-b13f-22dfb564a03a>\",\"WARC-IP-Address\":\"104.20.35.242\",\"WARC-Target-URI\":\"https://brilliant.org/problems/the-force-it-applies/\",\"WARC-Payload-Digest\":\"sha1:5R5SBNQM23P7CT762LGQNNLPPRCCAV4K\",\"WARC-Block-Digest\":\"sha1:NPT5RWNB7K6ZST7LEVCVZX3ATVPH73TR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046152112.54_warc_CC-MAIN-20210806020121-20210806050121-00350.warc.gz\"}"} |
https://eng.libretexts.org/Bookshelves/Civil_Engineering/Book%3A_All_Things_Flow_-_Fluid_Mechanics_for_the_Natural_Sciences_(Smyth)/06%3A_Fluid_Dynamics/6.10%3A_Solution_methods | [
"# 6.10: Solution methods\n\n$$\\newcommand{\\vecs}{\\overset { \\rightharpoonup} {\\mathbf{#1}} }$$ $$\\newcommand{\\vecd}{\\overset{-\\!-\\!\\rightharpoonup}{\\vphantom{a}\\smash {#1}}}$$$$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$\n\nThere is no general solution for the equations summarized in section 6.8, because they are inherently nonlinear. The main source of nonlinearity is in the advective part of the material derivative, e.g., $$[\\vec{u}\\cdot\\vec{\\nabla}]\\vec{u}$$, which stymies standard solution methods as well as accounting for many of the most fascinating aspects of fluid motion. To make analytical progress, we must restrict our attention to very simple flow geometries. In recent decades numerical methods of solution have become increasingly important. While allowing progress on complex flows, numerical solutions have an important limitation. Each numerical solution pertains only to a single set of assumed parameter values. If we want to know how a flow varies with some parameter, we must create many such solutions, and we can never be sure that we’ve captured all of the variability.\n\nFor example, suppose we want to know how the wind speed over a mountain depends on the mountain’s height. We could construct numerical solutions for mountains of height 1000 m, 2000 m, 3000 m, etc., plot the results on a graph and draw a smooth curve connecting them. But what if something completely different happens for a mountain of height 1500 m? No matter how closely we space our heights, we can never be certain that we are seeing the real picture. At what height is the speed a maximum? We can simulate forever and never be sure. The task is further complicated because wind speed over a mountain depends on many other parameters such as the width of the mountain and the upstream velocity. We can easily find ourselves doing thousands of simulations to describe one fairly simple flow geometry. Laboratory experiments, incidentally, suffer exactly the same limitation.\n\nAn analytical solution, even if it requires an extreme simplification of the physics, provides us with a mathematical description that we can examine in as much detail as we wish. For example, we can find the mountain height that maximizes wind speed simply by differentiating the solution. 
In the mountain example, the most useful solution follows from assumptions of this sort: \"The flow varies mainly in the streamwise $$(x)$$ direction and in height $$z$$, so derivatives with respect to $$t$$ and $$y$$ can be discarded.\"\n\nIn practice, progress in understanding fluids results from a combination of numerical solutions, analytical solutions and laboratory experiments, all of which must be compared with real-world observations to assess the validity of the underlying assumptions.\n\nIn what follows we will construct analytical solutions for a few very simple flow geometries that model phenomena we witness in everyday life. We do this to gain insight into the workings of these phenomena, but more importantly to test the validity of our model of Newtonian fluid mechanics by comparing its predictions with the behavior we observe.\n\nThis page titled 6.10: Solution methods is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Bill Smyth via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request."
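A toy illustration of the sampling limitation described above; everything here, including the speed-versus-height function, is invented purely for demonstration:

```python
# Invented 'wind speed' response with a narrow peak near h = 1600 m.
def speed(h):
    return 10.0 + 5.0 / (1.0 + ((h - 1600.0) / 100.0) ** 2)

coarse = range(1000, 4001, 1000)   # simulate only 1000, 2000, 3000, 4000 m
fine = range(1000, 4001, 10)       # a much denser parameter sweep

print(max(coarse, key=speed), max(fine, key=speed))
# The coarse sweep points to 2000 m; the dense one finds the peak near
# 1600 m. An analytical solution would locate it exactly by solving
# d(speed)/dh = 0 instead of sampling.
```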
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9199353,"math_prob":0.99654627,"size":2933,"snap":"2022-27-2022-33","text_gpt3_token_len":582,"char_repetition_ratio":0.1242745,"word_repetition_ratio":0.01724138,"special_character_ratio":0.20081827,"punctuation_ratio":0.10150376,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97587335,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-20T03:25:50Z\",\"WARC-Record-ID\":\"<urn:uuid:0ad25938-ff1e-42e6-a3aa-3bf803d2bbcd>\",\"Content-Length\":\"103685\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d6b09380-be35-4716-9403-842ef9af39ab>\",\"WARC-Concurrent-To\":\"<urn:uuid:0eab6cee-1a74-41e6-a481-ead69396153e>\",\"WARC-IP-Address\":\"99.86.224.70\",\"WARC-Target-URI\":\"https://eng.libretexts.org/Bookshelves/Civil_Engineering/Book%3A_All_Things_Flow_-_Fluid_Mechanics_for_the_Natural_Sciences_(Smyth)/06%3A_Fluid_Dynamics/6.10%3A_Solution_methods\",\"WARC-Payload-Digest\":\"sha1:HAPKVLJNE7ROHNDMBOVYF33XMUVGSMYD\",\"WARC-Block-Digest\":\"sha1:UZKY5CNMYSIY5Q5RZAHC4YETVT3PVSIT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882573876.92_warc_CC-MAIN-20220820012448-20220820042448-00192.warc.gz\"}"} |
https://eccc.weizmann.ac.il/static/books/Disjoint_NP_Pairs_and_Propositional_Proof_Systems/ | [
"",
null,
"",
null,
"Under the auspices of the Computational Complexity Foundation (CCF)",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"ECCC BOOKS, LECTURES AND SURVEYS > DISJOINT NP PAIRS AND PROPOSITIONAL PROOF SYSTEMS:\n\n## Disjoint NP-Pairs and Propositional Proof Systems\n\nHumboldt-Universität zu Berlin, Germany\nJune 2006\n\nAbstract: Disjoint NP-pairs are an interesting complexity-theoretic concept with important applications in cryptography and propositional proof complexity. In this dissertation we explore the connection between disjoint NP-pairs and propositional proof complexity. This connection is fruitful for both fields. Various disjoint NP-pairs have been associated with propositional proof systems which characterize important properties of these systems, yielding applications to areas such as automated theorem proving. Further, conditional and unconditional lower bounds for the separation of disjoint NP-pairs can be translated to results on lower bounds to the length of propositional proofs. In this way disjoint NP-pairs have substantially contributed to the understanding of propositional proof systems.\n\nConversely, this dissertation aims to transfer proof-theoretic knowledge to the theory of NP-pairs to gain a more detailed understanding of the structure of the class of disjoint NP-pairs and in particular of the NP-pairs defined from propositional proof systems. For a proof system P we introduce the complexity class DNPP(P) of all disjoint NP-pairs for which the disjointness of the pair is efficiently provable in the proof system P. We exhibit structural properties of proof systems which make the previously defined canonical NP-pairs of these proof systems hard or complete for DNPP(P). Moreover, we demonstrate that non-equivalent proof systems can have equivalent canonical pairs and that depending on the properties of the proof systems different scenarios for DNPP(P) and the reductions between the canonical pairs exist. As an important tool for our investigation we use the connection of propositional proof systems and disjoint NP-pairs to theories of bounded arithmetic.\n\n```\n1. Introduction\n\n2. Propositional Proof Systems\n- Propositional Logic\n- Propositional Proof Complexity\n- Frege Systems and Their Extensions\n- Efficient Deduction\n- The Propositional Sequent Calculus\n- Natural Properties of Proof Systems\n\n3. Arithmetic Theories and Proof Systems\n- Theories of Bounded Arithmetic\n- A Translation of Arithmetic Formulas into Propositional Formulas\n- Coding Propositional Proofs in Bounded Arithmetic\n- Consistency Statements\n- The Correspondence Between Arithmetic Theories and Propositional\nProof Systems\n- The Correspondence Between S1_2 and EF\n- Regular Proof Systems\n- Comparing Properties of Proof Systems\n\n4. Disjoint NP-Pairs\n- Reductions Between NP-Pairs\n- The Simulation Order of Disjoint NP-Pairs\n- Examples for Combinatorially Defined Pairs\n- NP-Pairs Characterize Properties of Proof Systems\n- Representations of NP-Pairs\n- The Complexity Class DNPP(P)\n- The Canonical Pair and the Reflection Principle\n- The Class DNPP(P) Under the Strong Strong Reduction\n- Canonical Candidates for Complete Pairs\n- Symmetry of Disjoint NP-Pairs\n- NP-Pairs and the Simulation Order of Proof Systems\n- A Weak Reduction Between Proof Systems\n- Proof Systems with Equivalent Canonical Pairs\n- Different Scenarios for DNPP(P)\n- On the Complexity of Ref(P)\n- Are Canonical Pairs Something Special?\n\n5. Two Applications\n- Security of Public-Key Crypto Systems\n- Pseudorandom Generators in Proof Complexity\n\n6. 
Disjoint Tuples of NP-Sets\n- Basic Definitions and Properties\n- Representable Disjoint Tuples of NP-Sets\n- Disjoint Tuples from Proof Systems\n- Arithmetic Representations\n- On Complete Disjoint Tuples of NP-Sets\n\n```\n\nNumber of pages: 133\n\nISSN 1433-8092 | Imprint"
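For orientation, the following compilable LaTeX sketch records the standard definitions the abstract builds on (disjoint NP-pairs, reductions between pairs, and the canonical pair Ref(P)), stated in the usual form from the literature; these are background conventions, not text quoted from the dissertation itself.

```
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% Background definitions in their standard literature form
% (Razborov's canonical pair); not quoted from the dissertation.

A \emph{disjoint NP-pair} is a pair $(A,B)$ with $A,B \in \mathrm{NP}$ and
$A \cap B = \emptyset$.

$(A,B)$ \emph{reduces} to $(C,D)$, written $(A,B) \le_p (C,D)$, if there is a
polynomial-time computable $f$ with $f(A) \subseteq C$ and $f(B) \subseteq D$.

The \emph{canonical pair} of a propositional proof system $P$ is
\[
  \mathrm{Ref}(P) = \{\,(\varphi, 1^m) : \varphi \text{ has a } P\text{-proof of size at most } m\,\},
\]
\[
  \mathrm{SAT}^{*} = \{\,(\varphi, 1^m) : \lnot\varphi \text{ is satisfiable}\,\}.
\]
Both components are in NP, and soundness of $P$ makes them disjoint: a
formula with a $P$-proof is a tautology, so its negation is unsatisfiable.

\end{document}
```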
]
| [
null,
"https://eccc.weizmann.ac.il/resources/gf/logoNew.png",
null,
"https://eccc.weizmann.ac.il/resources/gf/subtitle.png",
null,
"https://eccc.weizmann.ac.il/resources/txt2img/6477d27f5652481c8709ce20804beef47000ddfacb628eb8f3e1424aa319da92d9706a1b9969c3beea6d0d0579f8c3574dfe71145b9a63e0f4cc7e59723f9d59-000000-13.png",
null,
"https://eccc.weizmann.ac.il/resources/txt2img/734cc234b69ec76be631e268baeba4246056bc255901fd92951a0836428e49f37084bbbfa3c4e253e31cc4d576b67f6cd530e1bb77f0ecc98955de6ba9eb86c4-000000-13.png",
null,
"https://eccc.weizmann.ac.il/resources/txt2img/112d9535943722a86180ae44a5a638b4f2c8b88f2b35a2161475927f703e4959e03e1c231f19ff9bb2aff902b0183e2db60085b49f5c3b501624b17f86a1b036-000000-13.png",
null,
"https://eccc.weizmann.ac.il/resources/txt2img/734cc234b69ec76be631e268baeba4246056bc255901fd92951a0836428e49f37084bbbfa3c4e253e31cc4d576b67f6cd530e1bb77f0ecc98955de6ba9eb86c4-000000-13.png",
null,
"https://eccc.weizmann.ac.il/resources/txt2img/94b19a5a52ca45c018ed1cb67a8f8a31a33b54a97551ad8a99f802714d157423de95da10ff4cd42551e3995e26f1c3c4c437b5c95fd23fd10cb8195fe86f48f1-000000-13.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.82925,"math_prob":0.71015733,"size":3534,"snap":"2020-34-2020-40","text_gpt3_token_len":683,"char_repetition_ratio":0.17790368,"word_repetition_ratio":0.0039525693,"special_character_ratio":0.17628749,"punctuation_ratio":0.04536862,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98989034,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-28T15:12:33Z\",\"WARC-Record-ID\":\"<urn:uuid:281e7bef-5f27-4a99-8fbe-45307d53b062>\",\"Content-Length\":\"21627\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5ebffd5b-353f-4aca-9481-cd0d2fbc0f32>\",\"WARC-Concurrent-To\":\"<urn:uuid:9297854a-d223-4a63-a034-c441c6276357>\",\"WARC-IP-Address\":\"132.77.150.87\",\"WARC-Target-URI\":\"https://eccc.weizmann.ac.il/static/books/Disjoint_NP_Pairs_and_Propositional_Proof_Systems/\",\"WARC-Payload-Digest\":\"sha1:XFY7WLTCMRRYOJAUTHBKWYS2LQUZ7HKJ\",\"WARC-Block-Digest\":\"sha1:47D27DGNSZBP74GUYN5XTGYMYPI23FU3\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600401601278.97_warc_CC-MAIN-20200928135709-20200928165709-00599.warc.gz\"}"} |