URL (string, 15-1.68k chars) | text_list (list, 1-199 items) | image_list (list, 1-199 items) | metadata (string, 1.19k-3.08k chars)
---|---|---|---|
https://au.mathworks.com/matlabcentral/answers/440519-how-can-i-draw-a-line-from-center-of-multiple-circles-that-will-intersect-all-the-circles?s_tid=prof_contriblnk | [
"# How can I draw a line from center of multiple circles that will intersect all the circles?\n\n6 views (last 30 days)\nZara Khan on 19 Jan 2019\nCommented: Star Strider on 22 Jan 2019\nI want to draw multiple circles with the same center but different radii. Later on I want to add a line from the center that will intersect all the circles. I am attaching a figure to demonstrate my expected output.\n\nStar Strider on 19 Jan 2019\nTry this:\nt = linspace(0, 2*pi);\nr = [1, 2, 3];\nxc = 0.5;\nyc = 0.1;\nxcir = (r(:)*cos(t))' + xc;\nycir = (r(:)*sin(t))' + yc;\nfigure\nhold all\nplot(xcir, ycir)\nplot([xc, xc+5], [yc, yc], '-r')\nhold off\naxis equal\ntext(xc+r, yc*ones(1,numel(r)), ['$\\frac{D}h$', compose('$\\\\frac{%dD}h$', r(2:end))], 'HorizontalAlignment','left', 'VerticalAlignment','top', 'Interpreter','latex', 'FontSize',12)\nproducing:",
null,
"Experiment to get the result you want.\nStar Strider on 22 Jan 2019\nAs always, my pleasure!\nThank you!"
]
| [
null,
"https://www.mathworks.com/matlabcentral/answers/uploaded_files/144934/How%20can%20I%20draw%20a%20line%20from%20center%20of%20multiple%20circles%20that%20will%20intersect%20all%20the%20circles%20-%202019%2001%2019.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6984578,"math_prob":0.9572476,"size":1136,"snap":"2023-14-2023-23","text_gpt3_token_len":332,"char_repetition_ratio":0.08303887,"word_repetition_ratio":0.011235955,"special_character_ratio":0.30809858,"punctuation_ratio":0.16666667,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98856515,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-20T13:43:11Z\",\"WARC-Record-ID\":\"<urn:uuid:cd5b47cf-bc0f-49fc-b2dd-9112881d9659>\",\"Content-Length\":\"169454\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ad54641e-9b37-402a-8fa4-9d5aa63d0658>\",\"WARC-Concurrent-To\":\"<urn:uuid:a1b8f2ef-60d3-48f1-a04a-da687cd12909>\",\"WARC-IP-Address\":\"104.86.80.92\",\"WARC-Target-URI\":\"https://au.mathworks.com/matlabcentral/answers/440519-how-can-i-draw-a-line-from-center-of-multiple-circles-that-will-intersect-all-the-circles?s_tid=prof_contriblnk\",\"WARC-Payload-Digest\":\"sha1:L2MV5SZN2ZZWNXZKV7R6S2ZDLFLHFOFB\",\"WARC-Block-Digest\":\"sha1:XWZFWB7RT7TXXGV32L535LCUVXLNV7FB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296943483.86_warc_CC-MAIN-20230320114206-20230320144206-00346.warc.gz\"}"} |
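The geometry in the MATLAB answer above (concentric circles plus a radial line that crosses all of them) can be sketched in plain Python without the plotting layer. The function name and point count below are my own choices for illustration, not part of the original answer:

```python
import math

def circle_points(r, xc, yc, n=100):
    """Parametric points of a circle of radius r centred at (xc, yc)."""
    ts = [2 * math.pi * k / (n - 1) for k in range(n)]
    return [(xc + r * math.cos(t), yc + r * math.sin(t)) for t in ts]

# Same setup as the answer: three concentric circles
# centred at (0.5, 0.1) with radii 1, 2 and 3.
xc, yc = 0.5, 0.1
circles = [circle_points(r, xc, yc) for r in (1, 2, 3)]

# A horizontal segment starting at the centre; because its length (5)
# exceeds the largest radius (3), it intersects every circle once.
ray = [(xc, yc), (xc + 5, yc)]
```

Feeding `circles` and `ray` to any plotting library reproduces the figure in the answer; the key point is only that the segment starts at the common centre and is at least as long as the largest radius.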
https://math.stackexchange.com/questions/1659507/chinese-remainder-theorem-induces-a-unit-group-isomorphism | [
"# Chinese Remainder Theorem induces a unit group isomorphism\n\nThe Chinese Remainder Theorem for rings states that if $R$ is a commutative ring with $I_1,\\ldots, I_n$ ideals that are comaximal, i.e., $I_i+I_j=R$ if $i \\neq j$, then the canonical map $\\phi:R \\rightarrow R/I_1 \\times\\ldots \\times R/I_n$ induces a ring isomorphism:$$R/(I_1 \\ldots I_n) \\cong R/I_1 \\times\\ldots \\times R/I_n$$ My question is, does the ring isomorphism also imply an isomorphism of multiplicative groups: $$(R/(I_1 \\ldots I_n))^{\\times} \\cong (R/I_1)^{\\times} \\times\\ldots \\times (R/I_n)^{\\times}$$ If so, is there an elegant way of showing this? That is, a method that does not go through the argument by which we prove the ring isomorphism using the first isomorphism theorem.\n\n• If $A \\cong B \\times C$, then can you prove that $A^{\\times} \\cong B^{\\times} \\times C^{\\times}$? Feb 17, 2016 at 7:51\n\nIn general, if you know $R\\cong S$, where $R$ and $S$ are rings, then by definition you also have $R^\\times\\cong S^\\times$. Hence, to solve your question, all you need to do is show that for rings $R_1,\\ldots,R_n$ we have$$(R_1\\times\\ldots\\times R_n)^\\times\\cong R_1^\\times\\times\\ldots\\times R_n^\\times.$$This, too, can be proved directly from the very basic definitions."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6695987,"math_prob":0.9996911,"size":1228,"snap":"2023-40-2023-50","text_gpt3_token_len":412,"char_repetition_ratio":0.16748366,"word_repetition_ratio":0.0,"special_character_ratio":0.30863193,"punctuation_ratio":0.12350598,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.000001,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-06T14:38:31Z\",\"WARC-Record-ID\":\"<urn:uuid:a0c4de84-2e43-448b-82bd-478aec05dc60>\",\"Content-Length\":\"140803\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:60bfb6a4-08be-415b-937c-1e1ba2725029>\",\"WARC-Concurrent-To\":\"<urn:uuid:51928eb1-d34f-455e-8041-b6e3fbf1d48b>\",\"WARC-IP-Address\":\"172.64.144.30\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/1659507/chinese-remainder-theorem-induces-a-unit-group-isomorphism\",\"WARC-Payload-Digest\":\"sha1:HF4EHGBI4QHNMVAGNLLJEEYBOIPTIP44\",\"WARC-Block-Digest\":\"sha1:RCWQKZPPTB2RGK5JGL3OIZT4T6EWJELZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100599.20_warc_CC-MAIN-20231206130723-20231206160723-00329.warc.gz\"}"} |
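The unit-group claim in the answer above can be verified numerically in a concrete case, $R = \mathbb{Z}$ with $I_1 = 3\mathbb{Z}$ and $I_2 = 5\mathbb{Z}$: the CRT map carries the units of $\mathbb{Z}/15$ bijectively onto pairs of units of $\mathbb{Z}/3 \times \mathbb{Z}/5$. This is only a sanity check of the general statement, not a proof:

```python
from math import gcd

def units(n):
    """Representatives of the unit group of Z/nZ."""
    return {a for a in range(n) if gcd(a, n) == 1}

# CRT map Z/15 -> Z/3 x Z/5 on residue representatives
phi = {a: (a % 3, a % 5) for a in range(15)}

# The isomorphism carries units to pairs of units, bijectively.
unit_images = {phi[a] for a in units(15)}
product_units = {(b, c) for b in units(3) for c in units(5)}
assert unit_images == product_units
assert len(units(15)) == len(units(3)) * len(units(5))  # 8 = 2 * 4
```

The same check works for any pairwise-coprime moduli; it mirrors the familiar multiplicativity of Euler's totient, $\varphi(mn) = \varphi(m)\varphi(n)$ for $\gcd(m, n) = 1$.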
http://countbio.com/web_pages/left_object/R_for_biology/R_biostatistics_part-1/binomial_distribution.html | [
"## Binomial distribution\n\nSuppose our experiment involves tossing 10 coins and noting down the number of heads (successes). The first time, we tossed 10 coins and got 3 heads (and 7 tails). This is the same as performing a sequence of 10 Bernoulli trials, each with a probability of success $\\small{ p = \\dfrac{1}{2}}$. We repeat this experiment a large number of times, noting down the number of heads $x$ we get every time we toss 10 coins. We will end up with a set of values like,\n\n$x = \\{3,5,8,1,4,9,10,2,5,7,4,5,3,8,9.....\\}$.\n\nThe probability distribution of $x$ is a binomial distribution.\n\nThe binomial distribution is a discrete probability distribution of $x$ successes in a sequence of $n$ Bernoulli trials, each with a probability of success $p$.\n\nA sequence of experiments can be treated as a binomial process if it has the following properties:\n\n• There should be a finite number of trials\n• The $n$ trials are considered to be independent of each other\n• Each trial should have only two mutually exclusive outcomes\n• The probability of success $p$ remains the same from trial to trial\n\nWe will derive an expression for the binomial probability distribution in terms of $x, n$ and $p$.\n\nDerivation of binomial probability:\n\nLet $x$ be the number of successes in $n$ Bernoulli trials with probability of success $p$ and probability of failure $q = 1-p$.\n\nThe possible values of $x$ are $0,1,2,3,...n$.\n\nRemember that if $x$ successes occur, then $n-x$ failures will occur.\n\nSince any $x$ of the $n$ trials can result in a success, we have to compute the number of ways in which we can choose $x$ trials out of $n$ trials for assigning success.
This is the same as choosing $x$ objects out of $n$ in $_nC_x$ possible ways without worrying about ordering the $x$ objects.\n\nTherefore, the number of ways of selecting positions for $x$ successes in a sequence of $n$ trials is,\n\n$\\small{_nC_x = \\dfrac{n!}{x!(n-x)!} }$\n\nSince the trials are independent of each other,\n\nprobability of $x$ successes = $\\small{ p^x }$\n\nprobability of $(n-x)$ failures = $\\small{(1-p)^{n-x} }$\n\nTherefore,\n\nthe probability of getting $x$ successes and $n-x$ failures in $n$ Bernoulli trials = $\\small{ p^x (1-p)^{n-x}}$\n\nSince $x$ successes and $n-x$ failures can happen in $\\small{ \\dfrac{n!}{x!(n-x)!}}$ mutually exclusive ways, we have to sum the probability $\\small{ p^x (1-p)^{n-x}}$ over the $\\small{ \\dfrac{n!}{x!(n-x)!}}$ ways, which is equivalent to multiplying the two expressions:\n\n$\\small{P_b(x,n,p) = \\dfrac{n!}{x!(n-x)!} p^x (1-p)^{n-x}~~~~~~~with~~x=0,1,2,3,4,....n }$\n\nTherefore, the binomial probability of getting $x$ successes in a sequence of $n$ Bernoulli trials, where $p$ is the probability of success in a single trial, is given by,\n\n$\\small{P_b(x,n,p) = \\dfrac{n!}{x!(n-x)!} p^x (1-p)^{n-x}~~~~~~~with~~x=0,1,2,3,4,....n }$\n\nThe terms $n$ and $p$ are called the parameters of the binomial distribution.\n\nThe binomial distribution gives the probability of $x$ successes out of $n$ independent trials. To be independent, these $n$ trials are assumed to have been drawn randomly from a parent distribution with replacement. If these $n$ trials are drawn without replacement from the parent distribution, they will not be independent and hence cannot be described by the binomial probability distribution. They are described by the hypergeometric distribution, which we will see in the next section.\n\n[Note : In order to maintain the probability $p$ the same for every draw, we have to draw randomly with replacement.
If we draw without replacement, the probability $p$ will keep changing between successive draws]\n\n#### Why is it called \"binomial\" distribution?\n\nAccording to the Binomial theorem, if $p$ and $q$ are two variables representing real numbers, then the expansion of $(p+q)^n$ for a positive integer $n$ can be written as,\n\n$\\small{(p + q)^n = p^n + n p^{n-1} q + \\dfrac{n(n-1)}{2}p^{n-2}q^2 + ....+ q^n = \\sum\\limits_{x=0}^n \\dfrac{n!}{x!(n-x)!} p^x q^{n-x} }$\n\nComparing the above expression with the binomial probability formula, we realize that for $q=1-p$, the binomial probability expression is of the same form as the successive terms of a binomial expansion. Hence the name \"binomial distribution\".\n\n#### Mean and variance of the binomial distribution\n\nOnce we have established the mathematical expression for the binomial probability distribution $P_b(x,n,p)$, we can get an expression for the population mean $\\mu$ and variance $\\sigma^2$ using the first and second moments of the distribution. We state the results here without full derivation:\n\n$\\small{\\mu = \\sum\\limits_{x=0}^n x P_b(x,n,p) = \\sum\\limits_{x=0}^n x \\dfrac{n!}{x!(n-x)!} p^x (1-p)^{n-x} = np }$\n$\\small{\\sigma^2 = \\sum\\limits_{x=0}^n (x-\\mu)^2P_b(x,n,p) = \\sum\\limits_{x=0}^n (x - np)^2 \\dfrac{n!}{x!(n-x)!} p^x (1-p)^{n-x} = np(1-p) }$\n\nWe must understand that the above mentioned quantities are the mean and variance of a population following the binomial distribution with a given $n$ and $p$. These are expected values.
If we repeat the experiment a finite number of times, the observed mean and variance will deviate from the expected values.\n\nWe summarize the formulas of the binomial probability distribution:\n\n$\\small{P_b(x,n,p) = \\dfrac{n!}{x!(n-x)!} p^x (1-p)^{n-x},~~~~with~~x=0,1,2,3,...n }$\n\n$\\small{\\mu = np,~~~~~~~~~~~~~~~~~\\sigma^2 = np(1-p),~~~~~~~~~~~~~~~~~\\sigma = \\sqrt{np(1-p)} }$\n\nFrom the above expressions for the mean $\\mu$ and variance $\\sigma^2$, we realize that $\\small{\\sigma^2 = np(1-p) = \\mu(1-p) }$.\n\nTherefore, the population mean and variance of the binomial distribution are dependent on each other. They are not independent parameters.\n\n#### The plot of binomial probability distribution\n\nThe discrete probability values $P_b(x,n,p)$ have been plotted for various $x$ values in the figure below:\n\nIn the above figure, we can see that the shape of the distribution is decided by the probability $p$ of success in a single trial. When the value of $p$ is less than 0.5, the distribution is skewed to the left, as seen in the top figure corresponding to a value of $\\small{p=0.3}$.\n\nOn the other hand, when $p$ is greater than 0.5, the distribution is skewed to the right, as seen in the second figure from the top corresponding to $\\small{p=0.8 }$.\n\nWhen the probability $p$ is exactly 0.5, the distribution is symmetric about its mean value of $\\small{np = 12\\times0.5=6 }$, as shown in the third figure from the top.\n\nExample-1 : The seeds of a particular variety of mango have a 70% chance of germination. If 15 seeds are planted, what is the probability that 11 will germinate, assuming that the seeds grow independently of each other?\n\nWe can consider this as a binomial problem with $\\small{n=15}$ independent Bernoulli trials, each having the same probability of success $\\small{p = 0.7}$.
We need to compute the probability for $\\small{x=11}$ successes.\n\nUsing the binomial probability formula, we compute the probability for 11 seeds to germinate as,\n\n$\\small{ P_b(x=11,n=15,p=0.7) = \\dfrac{n!}{x!(n-x)!} p^x (1-p)^{n-x} = \\dfrac{15!}{11! 4!} (0.7)^{11} (0.3)^4 = 0.218 }$\n\nExample-2 : It is estimated that 35% of the population in India have $\\small{O+}$ blood type. If we randomly choose 8 people from this population for a clinical trial, estimate the probability that at least 6 of them will have $\\small{O+}$ type.\n\nAssuming that the selection process is random, each with a probability of success ($\\small{O+}$ type) 0.35, we can apply the binomial probability distribution. Here, \"probability that at least 6\" is the cumulative sum of the probabilities for 6, 7 and 8.\n\nWith $\\small{p=0.35}$, $n=8$, we write,\n\n$\\small{P_b(x \\geq 6, n=8, p=0.35) = P_b(6,8,0.35) + P_b(7,8,0.35) + P_b(8,8,0.35) }$\n\n$~~~~~~~~~~~~~~~~~~~~~~ \\small{= \\dfrac{8!}{6! 2!} (0.35)^{6} (0.65)^2 + \\dfrac{8!}{7! 1!} (0.35)^{7} (0.65)^1 + \\dfrac{8!}{8! 0!} (0.35)^{8} (0.65)^0 }$\n\n$~~~~~~~~~~~~~~~~~~~~~~ \\small{= 0.02174 + 0.00334 + 0.00022 = 0.0253 }$\n\nWe will now learn to compute the binomial probability distribution in R\n\n## R scripts\n\nWe perform various computations on the binomial distribution using the inbuilt library functions in R.\n\nThe R statistics library provides the following four basic functions for the binomial distribution.
In fact, a similar set of 4 functions is provided for every distribution in R.\n\n\n\nLet x be the number of successes in n Bernoulli trials, each with a probability of success p.\n\ndbinom(x,n,p) -----> Returns the binomial probability of getting a value x.\nThis is called the \"probability density\".\n\npbinom(x,n,p) -----> Returns the cumulative probability of this binomial\n\ndistribution from x=0 to the given x.\n\nThus, pbinom(x=2,n,p) = dbinom(0,n,p)+dbinom(1,n,p)+dbinom(2,n,p)\n\nqbinom(x,n,p) -----> Inverse of the pbinom() function.\nReturns the x value up to which the cumulative probability is the given value (quantiles).\n\nrbinom(m,n,p) -----> Returns m \"random deviates\" from a binomial distribution of (n,p).\nEach one of the random deviates is the number of successes x observed in n trials,\nwith p being the probability of success in a single trial.\n\n\nThe usage of these four functions is demonstrated in the R script here. With comments, the script lines are self-explanatory:\n\n\n\n##### Using R library functions for binomial distribution\n\nn = 10 ## number of trials\np = 0.5 ## probability of success in a trial\nx = 3 ## number of successes\n\n### Probability density function.\n### dbinom(x,n,p) returns the binomial probability for x successes in n trials, where p is the probability\n### of success in a single trial\nbinomial_probability = dbinom(x,n,p)\nprint(paste(\"binomial probability for (\",x,\",\",n,\",\",p,\") = \",binomial_probability) )\n\n### Function for computing cumulative probability.\n### pbinom(x,n,p) gives the cumulative probability for a binomial distribution from 0 to x.\ncumulative_probability = pbinom(x,n,p)\nprint(paste(\"cumulative binomial probability for (\",x,\",\",n,\",\",p,\") = \",cumulative_probability) )\n\n### Function for finding the x value corresponding to a cumulative probability\nx_value = qbinom(cumulative_probability, n, p)\nprint(paste(\"x value corresponding to the given cumulative binomial probability = \",x_value) )\n\n### Function that
returns 4 random deviates from a Binomial distribution of given (n,p)\ndeviates = rbinom(4, n, p)\nprint(paste(\"4 binomial deviates : \"))\nprint(deviates)\n\npar(mfrow = c(2,1))\n\n### We plot a binomial density distribution using dbinom()\nn = 10\np = 0.4\nx = seq(0,10)\npdens = dbinom(x,n,p)\nplot(x,pdens, type=\"h\", col=\"red\", xlab = \"Binomial variable x\", ylab=\"binomial probability\",\nmain=\"binomial probability distribution\")\n\n## We generate frequency histogram of binomial deviates using rbinom()\nn = 10\np = 0.5\nxdev= rbinom(10000, n, p)\nplot(table(xdev), type=\"h\", xlab=\"binomial variable x\", ylab=\"frequency\",\nmain=\"frequency distribution of binomial random deviates\")\n\n\n\nExecuting the above script in R prints the following results and figures of probability distribution on the screen:\n\n\n \"binomial probability for ( 3 , 10 , 0.5 ) = 0.1171875\"\n \"cumulative binomial probability for ( 3 , 10 , 0.5 ) = 0.171875\"\n \"x value corresponding to the given cumulative binomial probability = 3\"\n \"4 binomial deviates : \"\n 6 6 8 4"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7928807,"math_prob":0.99981326,"size":8017,"snap":"2020-24-2020-29","text_gpt3_token_len":2225,"char_repetition_ratio":0.19368526,"word_repetition_ratio":0.061359867,"special_character_ratio":0.30285642,"punctuation_ratio":0.14853947,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000057,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-06T05:35:12Z\",\"WARC-Record-ID\":\"<urn:uuid:c7fa9907-161e-44ab-9d08-1d575a772a57>\",\"Content-Length\":\"16189\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1fa99319-0481-4c7d-9e1b-be43987e3c06>\",\"WARC-Concurrent-To\":\"<urn:uuid:17a903b6-4665-4a48-911e-bebeb9ebda86>\",\"WARC-IP-Address\":\"166.62.28.126\",\"WARC-Target-URI\":\"http://countbio.com/web_pages/left_object/R_for_biology/R_biostatistics_part-1/binomial_distribution.html\",\"WARC-Payload-Digest\":\"sha1:ARW4PW6VR3S45L5PDWZKPLPQY3EKMCU5\",\"WARC-Block-Digest\":\"sha1:QSEMSGGLJSPKQAUWZBLUXHE3BIHI5LWU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655890105.39_warc_CC-MAIN-20200706042111-20200706072111-00327.warc.gz\"}"} |
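The binomial formula and both worked examples from the tutorial above can be checked with a short Python sketch. The helper names `dbinom`/`pbinom` are borrowed from the R functions the tutorial describes; this is an illustrative re-implementation, not the R library itself:

```python
from math import comb

def dbinom(x, n, p):
    """Binomial probability P_b(x, n, p) = C(n, x) p^x (1-p)^(n-x)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

def pbinom(x, n, p):
    """Cumulative binomial probability P(X <= x)."""
    return sum(dbinom(k, n, p) for k in range(x + 1))

# Example-1: probability that 11 of 15 mango seeds germinate, p = 0.7
p11 = dbinom(11, 15, 0.7)               # ~0.2186, matching the tutorial's 0.218

# Example-2: probability that at least 6 of 8 people are O+, p = 0.35
p_at_least_6 = 1 - pbinom(5, 8, 0.35)   # ~0.0253

# Mean and variance check against mu = np and sigma^2 = np(1-p)
n, p = 10, 0.5
mu = sum(x * dbinom(x, n, p) for x in range(n + 1))
var = sum((x - mu)**2 * dbinom(x, n, p) for x in range(n + 1))
```

With `n = 10`, `p = 0.5`, `x = 3`, `dbinom` and `pbinom` here reproduce the values printed by the R script in the tutorial (0.1171875 and 0.171875).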
https://k12.libretexts.org/Bookshelves/Mathematics/Geometry/04%3A_Triangles/4.02%3A_Classify_Triangles_by_Angle_Measurement | [
"# 4.2: Classify Triangles by Angle Measurement\n\nIdentify triangles as acute, right, obtuse or equiangular.\n\n## Triangle Classification by Angles",
null,
"Figure $$\\PageIndex{1}$$\n\nMichael's mother bought a caddy that fits in a corner. She uses the caddy to store their brooms and mops in the garage. His mother's caddy had the appearance of a right triangle when viewed from above and fit perfectly in the corner. Michael liked the caddy, but he couldn't afford to buy one like hers, so he decided to make his own. He bought wood and nails and put it together, but when he tried to put it in the corner in his room, it wouldn't fit. He measures the angles of his caddy and realizes that the top is the triangle below:",
null,
"Figure $$\\PageIndex{2}$$\n\nWhat is the classification of Michael's triangle?\n\nIn this concept, you will learn how to use angles to classify triangles.\n\n## Classifying Triangles by Angles\n\nThe prefix “tri” means three. Triangle means three angles.\n\nTo classify a triangle according to its angles, you must look at the angles inside the triangle. Use the number of degrees in these angles to classify the triangle. Let’s look at a picture of a triangle to explain.",
null,
"Figure $$\\PageIndex{3}$$\n\nLook at the measure of each angle inside the triangle to figure out what kind of triangle it is. There are four types of triangles based on angle measures.\n\nA right triangle is a triangle that has one right angle and two acute angles. One of the angles in the triangle measures $$90^{\\circ}$$ and the other two angles are less than 90. Here is a picture of a right triangle.",
null,
"Figure $$\\PageIndex{4}$$\n\nYou can see that the 90 degree angle is the one in the bottom left corner. You can even draw in the small box to identify it as a 90 degree angle. If you look at the other two angles you can see that those angles are less than 90 degrees and are acute.\n\nLet's look at an example of a right triangle.",
null,
"Figure $$\\PageIndex{5}$$\n\nThis triangle has one $$90^{\\circ}$$ angle and two $$45^{\\circ}$$ angles. Find the sum of the three angles.\n\n$$90+45+45=180^{\\circ}$$\n\nThe sum of the three angles of a triangle is always equal to $$180^{\\circ}$$.\n\nIn an equiangular triangle, all three of the angles are equal.",
null,
"Figure $$\\PageIndex{6}$$\n\nThe three angles of this triangle are equal. This is an equiangular triangle.\n\nYou know that the sum of the three angles is equal to $$180^{\\circ}$$, therefore, for all three angles to be equal, each angle must be equal to $$60^{\\circ}$$.\n\n$$60+60+60=180^{\\circ}$$\n\nThe sum of the angles is equal to $$180^{\\circ}$$.\n\nIn an acute triangle, all three angles of the triangle are less than 90 degrees. Here is an example of an acute triangle.",
null,
"Figure $$\\PageIndex{7}$$\n\nAll three of these angles measure less than 90 degrees.\n\n$$33+80+67=180^{\\circ}$$\n\nThe sum of the angles is equal to $$180^{\\circ}$$.\n\nAn obtuse triangle has one angle that is greater than 90 and two angles that are less than 90.",
null,
"Figure $$\\PageIndex{8}$$\n\n$$130+25+25=180^{\\circ}$$\n\nThe sum of the angles is equal to $$180^{\\circ}$$.\n\nExample $$\\PageIndex{1}$$\n\nMichael tried to build a caddy like his mother's. Hers looked like a right triangle from above, but he ended up building one that had the following measures and appearance from above:",
null,
"Figure $$\\PageIndex{9}$$\n\nSolution\n\nWhat is the classification of Michael's triangle?\n\nFirst, list the angle measures.\n\n20, 20, 140\n\nNext, determine if any of the angles are equal to 90 degrees or larger than 90 degrees.\n\nYes, one angle is larger than 90 degrees\n\nThen, classify the triangle.\n\nObtuse\n\nThe answer is an obtuse triangle. Michael created an obtuse triangle instead of a right triangle.\n\nExample $$\\PageIndex{2}$$\n\nIdentify the type of triangle according to its angles.",
null,
"Figure $$\\PageIndex{10}$$\n\nSolution\n\nFirst, list the angle measures.\n\n10, 75, 95\n\nNext, determine if any of the angles are equal to 90 degrees or larger than 90 degrees.\n\nYes, one angle is larger than 90 degrees\n\nThen, classify the triangle.\n\nObtuse\n\nThe answer is an obtuse triangle.\n\nExample $$\\PageIndex{3}$$\n\nIdentify the type of triangle according to its angles.",
null,
"Figure $$\\PageIndex{11}$$\n\nSolution\n\nFirst, list the angle measures.\n\n30, 70 and 80\n\nNext, determine if any of the angles are equal to 90 degrees or larger than 90 degrees.\n\nNo\n\nThen, classify the triangle.\n\nAcute\n\nThe answer is an acute triangle.\n\nExample $$\\PageIndex{4}$$\n\nIdentify the type of triangle according to its angles.",
null,
"Figure $$\\PageIndex{12}$$\n\nSolution\n\nFirst, list the angle measures.\n\n35, 55 and 90\n\nNext, determine if any of the angles are equal to 90 degrees or larger than 90 degrees.\n\nYes, one of the angles is equal to 90 degrees\n\nThen, classify the triangle.\n\nRight\n\nThe answer is a right triangle.\n\nExample $$\\PageIndex{5}$$\n\nClassify the triangle by looking at the sum of its angles.\n\n$$40^{\\circ}+60^{\\circ}+80^{\\circ}=180^{\\circ}$$\n\nSolution\n\nFirst, list the angle measures.\n\n40, 60 and 80\n\nNext, determine if any of the angles are equal to 90 degrees or larger than 90 degrees.\n\nNo\n\nThen, classify the triangle.\n\nAcute\n\nThe answer is an acute triangle.\n\n## Review\n\nClassify each triangle according to its angles.\n\n1.",
null,
"Figure $$\\PageIndex{13}$$\n2.",
null,
"Figure $$\\PageIndex{14}$$\n3.",
null,
"Figure $$\\PageIndex{15}$$\n4.",
null,
"Figure $$\\PageIndex{16}$$\n5.",
null,
"Figure $$\\PageIndex{17}$$\n\nClassify the following triangles by looking at the sum of the angle measures.\n\n1. $$40+55+45=180^{\\circ}$$\n2. $$20+135+25=180^{\\circ}$$\n3. $$30+90+60=180^{\\circ}$$\n4. $$60+60+60=180^{\\circ}$$\n5. $$110+15+55=180^{\\circ}$$\n6. $$105+65+10=180^{\\circ}$$\n7. $$80+55+45=180^{\\circ}$$\n8. $$70+45+65=180^{\\circ}$$\n9. $$145+20+15=180^{\\circ}$$\n10. $$60+80+40=180^{\\circ}$$\n\n## Vocabulary\n\nTerm Definition\nAcute Triangle An acute triangle has three angles that each measure less than 90 degrees.\nEquilateral Triangle An equilateral triangle is a triangle in which all three sides are the same length.\nObtuse Triangle An obtuse triangle is a triangle with one angle that is greater than 90 degrees.\nRight Triangle A right triangle is a triangle with one 90 degree angle.\nTriangle A triangle is a polygon with three sides and three angles."
]
| [
null,
"https://k12.libretexts.org/@api/deki/files/1315/f-d_d24cd29e1a4509b5976e28a24799e98075a9d75af1d232deffb73c9c%252BIMAGE_THUMB_POSTCARD_TINY%252BIMAGE_THUMB_POSTCARD_TINY.jpg",
null,
"https://k12.libretexts.org/@api/deki/files/1316/f-d_9cd935291ba5f63c8f4cb1c18c5238232c9dc8ede6f147913f46d4c9%252BIMAGE_THUMB_POSTCARD_TINY%252BIMAGE_THUMB_POSTCARD_TINY.jpg",
null,
"https://k12.libretexts.org/@api/deki/files/1317/f-d_3953864f74d284d63cf2f2f84dae1a495431dd2fc474726a376f8be7%252BIMAGE_THUMB_POSTCARD_TINY%252BIMAGE_THUMB_POSTCARD_TINY.png",
null,
"https://k12.libretexts.org/@api/deki/files/1318/f-d_cb5fb37cd51712ec2b158e6fa23c7fbf6d196e89f779c3f73d627ad9%252BIMAGE_TINY%252BIMAGE_TINY.png",
null,
"https://k12.libretexts.org/@api/deki/files/1319/f-d_b13ec1854750c3aa1460abff8141bccaf0e6c268862767f30bf3b3d5%252BIMAGE_THUMB_POSTCARD_TINY%252BIMAGE_THUMB_POSTCARD_TINY.png",
null,
"https://k12.libretexts.org/@api/deki/files/1320/f-d_7f271b5ad87776a509cfe20cc0ae10d2eeb3370b48fc3bcfd5dd6c79%252BIMAGE_THUMB_POSTCARD_TINY%252BIMAGE_THUMB_POSTCARD_TINY.png",
null,
"https://k12.libretexts.org/@api/deki/files/1321/f-d_6b636a08663cc61288441c373a7f0f35bfda59c8cbf0f2604882136c%252BIMAGE_THUMB_POSTCARD_TINY%252BIMAGE_THUMB_POSTCARD_TINY.png",
null,
"https://k12.libretexts.org/@api/deki/files/1322/f-d_25cadc67d1ed39acdbc543a528da068bdf86336c1604b2c24ab9663c%252BIMAGE_TINY%252BIMAGE_TINY.png",
null,
"https://k12.libretexts.org/@api/deki/files/1316/f-d_9cd935291ba5f63c8f4cb1c18c5238232c9dc8ede6f147913f46d4c9%252BIMAGE_THUMB_POSTCARD_TINY%252BIMAGE_THUMB_POSTCARD_TINY.jpg",
null,
"https://k12.libretexts.org/@api/deki/files/1323/f-d_b3347480af314b5810b9862e61d7d8afbb08cddc76d7ccf121101214%252BIMAGE_THUMB_POSTCARD_TINY%252BIMAGE_THUMB_POSTCARD_TINY.jpg",
null,
"https://k12.libretexts.org/@api/deki/files/1324/f-d_1c99ad92fc131fb0da262ac15fdf17465b30f4ab142535eeeca866d9%252BIMAGE_THUMB_POSTCARD_TINY%252BIMAGE_THUMB_POSTCARD_TINY.jpg",
null,
"https://k12.libretexts.org/@api/deki/files/1325/f-d_928ec577896bca7723fb0b23188f122d916da8ef77f813a60dcd6f4d%252BIMAGE_THUMB_POSTCARD_TINY%252BIMAGE_THUMB_POSTCARD_TINY.jpg",
null,
"https://k12.libretexts.org/@api/deki/files/1326/f-d_eee1456bfa0e4e4d9824bc02d04467a562c5f7644914b032961ab86e%252BIMAGE_THUMB_POSTCARD_TINY%252BIMAGE_THUMB_POSTCARD_TINY.png",
null,
"https://k12.libretexts.org/@api/deki/files/1327/f-d_d89a71670b43a7516a4fd3808771498cd8d2ed6c8f245ed2482f283e%252BIMAGE_THUMB_POSTCARD_TINY%252BIMAGE_THUMB_POSTCARD_TINY.png",
null,
"https://k12.libretexts.org/@api/deki/files/1328/f-d_fbf69f5a7f835579b58bd2d68c11fb1e03a2ce1fb7a3d70b80f1dab3%252BIMAGE_THUMB_POSTCARD_TINY%252BIMAGE_THUMB_POSTCARD_TINY.png",
null,
"https://k12.libretexts.org/@api/deki/files/1329/f-d_184f07a703b763fe249712e2568b49c26d0b86f14f4e1e457508c356%252BIMAGE_TINY%252BIMAGE_TINY.png",
null,
"https://k12.libretexts.org/@api/deki/files/1330/f-d_d91d61b846e1b30339b8bfac0aa359320d88cb965e07b2cb916bce6b%252BIMAGE_THUMB_POSTCARD_TINY%252BIMAGE_THUMB_POSTCARD_TINY.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8894965,"math_prob":0.9998253,"size":5867,"snap":"2023-14-2023-23","text_gpt3_token_len":1516,"char_repetition_ratio":0.23128091,"word_repetition_ratio":0.22610483,"special_character_ratio":0.28447247,"punctuation_ratio":0.10154905,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999814,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34],"im_url_duplicate_count":[null,3,null,6,null,3,null,3,null,3,null,3,null,3,null,3,null,6,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-29T12:57:33Z\",\"WARC-Record-ID\":\"<urn:uuid:6c8e5746-2f5c-4692-801f-2cc18add5e8b>\",\"Content-Length\":\"139695\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2ed9f321-2951-49fc-a8d3-d58328dd1a03>\",\"WARC-Concurrent-To\":\"<urn:uuid:35626c98-7748-43b8-884a-9690dea7b6a6>\",\"WARC-IP-Address\":\"13.249.39.24\",\"WARC-Target-URI\":\"https://k12.libretexts.org/Bookshelves/Mathematics/Geometry/04%3A_Triangles/4.02%3A_Classify_Triangles_by_Angle_Measurement\",\"WARC-Payload-Digest\":\"sha1:UEJT456SOTRQPM3ATZJ5C2JRETNGXFCP\",\"WARC-Block-Digest\":\"sha1:MN726COSL3QQQZFEY3UGOLSKPL6HMTG7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224644855.6_warc_CC-MAIN-20230529105815-20230529135815-00544.warc.gz\"}"} |
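The classification rules in the lesson above (obtuse, right, equiangular, acute, with the three angles summing to 180 degrees) can be condensed into a small sketch. The function name and tolerance are my own, for illustration:

```python
def classify_by_angles(a, b, c):
    """Classify a triangle by its three angle measures in degrees."""
    angles = (a, b, c)
    if abs(sum(angles) - 180) > 1e-9:
        raise ValueError("the three angles of a triangle must sum to 180 degrees")
    if any(x > 90 for x in angles):
        return "obtuse"       # one angle greater than 90 degrees
    if any(x == 90 for x in angles):
        return "right"        # one angle equal to 90 degrees
    if a == b == c:
        return "equiangular"  # all three angles equal 60 degrees
    return "acute"            # all three angles less than 90 degrees
```

For the caddy in Example 1, `classify_by_angles(20, 20, 140)` returns `"obtuse"`, matching the worked solution; note that an equiangular triangle is checked before the general acute case, since it is a special case of acute.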
https://www.geeksforgeeks.org/reverse-anti-clockwise-spiral-traversal-of-a-binary-tree/?ref=rp | [
"# Reverse Anti Clockwise Spiral Traversal of a Binary Tree\n\n• Difficulty Level : Hard\n• Last Updated : 11 Aug, 2021\n\nGiven a binary tree, the task is to print the nodes of the tree in a reverse anti-clockwise spiral manner.\n\nExamples:\n\n```Input :\n        1\n       / \\\n      2   3\n     / \\    \\\n    4   5    6\n   /        / \\\n  7        8   9\nOutput : 7 8 9 1 4 5 6 3 2\n\nInput :\n        20\n       /  \\\n      8    22\n     / \\   / \\\n    5   3 4   25\n       / \\\n      10  14\nOutput : 10 14 20 5 3 4 25 22 8```\n\nApproach: The idea is to use two variables, i initialized to 1 and j initialized to the height of the tree, and to run a while loop that won't break until i becomes greater than j. A third variable, flag, is initialized to 1. Inside the loop, if flag is equal to 1 we print level j of the tree from left to right, mark flag as 0 so that the next level is printed from right to left, and decrement j so that the next time we print from the bottom we visit the level just above the current one. Otherwise, when flag is 0, we print level i from right to left, mark flag as 1 so that the next level is printed from left to right, and increment i so that the next time we print from the top we visit the level just below the current one. 
Repeat the whole process until the binary tree is completely traversed.\n\nBelow is the implementation of the above approach:\n\n## C++\n\n `// C++ implementation of the approach``#include ``using` `namespace` `std;` `// Binary tree node``struct` `Node {`` ``struct` `Node* left;`` ``struct` `Node* right;`` ``int` `data;` ` ``Node(``int` `data)`` ``{`` ``this``->data = data;`` ``this``->left = NULL;`` ``this``->right = NULL;`` ``}``};` `// Recursive Function to find height``// of binary tree``int` `height(``struct` `Node* root)``{`` ``// Base condition`` ``if` `(root == NULL)`` ``return` `0;` ` ``// Compute the height of each subtree`` ``int` `lheight = height(root->left);`` ``int` `rheight = height(root->right);` ` ``// Return the maximum of two`` ``return` `max(1 + lheight, 1 + rheight);``}` `// Function to Print Nodes from left to right``void` `leftToRight(``struct` `Node* root, ``int` `level)``{`` ``if` `(root == NULL)`` ``return``;` ` ``if` `(level == 1)`` ``cout << root->data << ``\" \"``;` ` ``else` `if` `(level > 1) {`` ``leftToRight(root->left, level - 1);`` ``leftToRight(root->right, level - 1);`` ``}``}` `// Function to Print Nodes from right to left``void` `rightToLeft(``struct` `Node* root, ``int` `level)``{`` ``if` `(root == NULL)`` ``return``;` ` ``if` `(level == 1)`` ``cout << root->data << ``\" \"``;` ` ``else` `if` `(level > 1) {`` ``rightToLeft(root->right, level - 1);`` ``rightToLeft(root->left, level - 1);`` ``}``}` `// Function to print Reverse anti clockwise spiral``// traversal of a binary tree``void` `ReverseAntiClockWiseSpiral(``struct` `Node* root)``{`` ``int` `i = 1;`` ``int` `j = height(root);` ` ``// Flag to mark a change in the direction`` ``// of printing nodes`` ``int` `flag = 1;`` ``while` `(i <= j) {` ` ``// If flag is zero print nodes`` ``// from right to left`` ``if` `(flag == 0) {`` ``rightToLeft(root, i);` ` ``// Set the value of flag as zero`` ``// so that nodes are next time`` ``// printed from left to right`` ``flag = 1;` ` 
``// Increment i`` ``i++;`` ``}` ` ``// If flag is one print nodes`` ``// from left to right`` ``else` `{`` ``leftToRight(root, j);` ` ``// Set the value of flag as zero`` ``// so that nodes are next time`` ``// printed from right to left`` ``flag = 0;` ` ``// Decrement j`` ``j--;`` ``}`` ``}``}` `// Driver code``int` `main()``{`` ``struct` `Node* root = ``new` `Node(20);`` ``root->left = ``new` `Node(8);`` ``root->right = ``new` `Node(22);`` ``root->left->left = ``new` `Node(5);`` ``root->left->right = ``new` `Node(3);`` ``root->right->left = ``new` `Node(4);`` ``root->right->right = ``new` `Node(25);`` ``root->left->right->left = ``new` `Node(10);`` ``root->left->right->right = ``new` `Node(14);` ` ``ReverseAntiClockWiseSpiral(root);` ` ``return` `0;``}`\n\n## Java\n\n `// Java implementation of the approach``class` `GfG``{` `// Binary tree node``static` `class` `Node``{`` ``Node left;`` ``Node right;`` ``int` `data;` ` ``Node(``int` `data)`` ``{`` ``this``.data = data;`` ``this``.left = ``null``;`` ``this``.right = ``null``;`` ``}``}` `// Recursive Function to find height``// of binary tree``static` `int` `height(Node root)``{`` ``// Base condition`` ``if` `(root == ``null``)`` ``return` `0``;` ` ``// Compute the height of each subtree`` ``int` `lheight = height(root.left);`` ``int` `rheight = height(root.right);` ` ``// Return the maximum of two`` ``return` `Math.max(``1` `+ lheight, ``1` `+ rheight);``}` `// Function to Print Nodes from left to right``static` `void` `leftToRight(Node root, ``int` `level)``{`` ``if` `(root == ``null``)`` ``return``;` ` ``if` `(level == ``1``)`` ``System.out.print(root.data + ``\" \"``);` ` ``else` `if` `(level > ``1``)`` ``{`` ``leftToRight(root.left, level - ``1``);`` ``leftToRight(root.right, level - ``1``);`` ``}``}` `// Function to Print Nodes from right to left``static` `void` `rightToLeft( Node root, ``int` `level)``{`` ``if` `(root == ``null``)`` ``return``;` ` ``if` `(level == ``1``)`` ``System.out.print(root.data + 
``\" \"``);` ` ``else` `if` `(level > ``1``)`` ``{`` ``rightToLeft(root.right, level - ``1``);`` ``rightToLeft(root.left, level - ``1``);`` ``}``}` `// Function to print Reverse anti clockwise spiral``// traversal of a binary tree``static` `void` `ReverseAntiClockWiseSpiral(Node root)``{`` ``int` `i = ``1``;`` ``int` `j = height(root);` ` ``// Flag to mark a change in the direction`` ``// of printing nodes`` ``int` `flag = ``1``;`` ``while` `(i <= j)`` ``{` ` ``// If flag is zero print nodes`` ``// from right to left`` ``if` `(flag == ``0``)`` ``{`` ``rightToLeft(root, i);` ` ``// Set the value of flag as zero`` ``// so that nodes are next time`` ``// printed from left to right`` ``flag = ``1``;` ` ``// Increment i`` ``i++;`` ``}` ` ``// If flag is one print nodes`` ``// from left to right`` ``else`` ``{`` ``leftToRight(root, j);` ` ``// Set the value of flag as zero`` ``// so that nodes are next time`` ``// printed from right to left`` ``flag = ``0``;` ` ``// Decrement j`` ``j--;`` ``}`` ``}``}` `// Driver code``public` `static` `void` `main(String[] args)``{`` ``Node root = ``new` `Node(``20``);`` ``root.left = ``new` `Node(``8``);`` ``root.right = ``new` `Node(``22``);`` ``root.left.left = ``new` `Node(``5``);`` ``root.left.right = ``new` `Node(``3``);`` ``root.right.left = ``new` `Node(``4``);`` ``root.right.right = ``new` `Node(``25``);`` ``root.left.right.left = ``new` `Node(``10``);`` ``root.left.right.right = ``new` `Node(``14``);` ` ``ReverseAntiClockWiseSpiral(root);` `}``}` `// This code is contributed by Prerna Saini.`\n\n## Python3\n\n `# Python3 implementation of the approach`` ` `# Binary tree node``class` `Node:`` ` ` ``def` `__init__(``self``, data):`` ` ` ``self``.left ``=` `None`` ``self``.right ``=` `None`` ``self``.data ``=` `data`` ` `# Recursive Function to find height``# of binary tree``def` `height(root):` ` ``# Base condition`` ``if` `(root ``=``=` `None``):`` ``return` `0``;`` ` ` ``# Compute the height of each subtree`` ``lheight ``=` 
`height(root.left)`` ``rheight ``=` `height(root.right)`` ` ` ``# Return the maximum of two`` ``return` `max``(``1` `+` `lheight, ``1` `+` `rheight)` `# Function to Print Nodes``# from left to right``def` `leftToRight(root, level):` ` ``if` `(root ``=``=` `None``):`` ``return`` ` ` ``if` `(level ``=``=` `1``):`` ``print``(root.data, end ``=` `\" \"``)`` ` ` ``elif` `(level > ``1``):`` ``leftToRight(root.left, level ``-` `1``)`` ``leftToRight(root.right, level ``-` `1``)`` ` `# Function to Print Nodes from``# right to left``def` `rightToLeft(root, level):` ` ``if` `(root ``=``=` `None``):`` ``return`` ` ` ``if` `(level ``=``=` `1``):`` ``print``(root.data, end ``=` `\" \"``)`` ` ` ``elif``(level > ``1``):`` ``rightToLeft(root.right, level ``-` `1``)`` ``rightToLeft(root.left, level ``-` `1``)`` ` `# Function to print Reverse anti clockwise``# spiral traversal of a binary tree``def` `ReverseAntiClockWiseSpiral(root):` ` ``i ``=` `1`` ``j ``=` `height(root)`` ` ` ``# Flag to mark a change in the`` ``# direction of printing nodes`` ``flag ``=` `1``;`` ` ` ``while` `(i <``=` `j):`` ` ` ``# If flag is zero print nodes`` ``# from right to left`` ``if` `(flag ``=``=` `0``):`` ``rightToLeft(root, i)`` ` ` ``# Set the value of flag as zero`` ``# so that nodes are next time`` ``# printed from left to right`` ``flag ``=` `1`` ` ` ``# Increment i`` ``i ``+``=` `1`` ` ` ``# If flag is one print nodes`` ``# from left to right`` ``else``:`` ``leftToRight(root, j)`` ` ` ``# Set the value of flag as zero`` ``# so that nodes are next time`` ``# printed from right to left`` ``flag ``=` `0`` ` ` ``# Decrement j`` ``j ``-``=` `1` `# Driver code``if` `__name__``=``=``\"__main__\"``:`` ` ` ``root ``=` `Node(``20``)`` ``root.left ``=` `Node(``8``)`` ``root.right ``=` `Node(``22``)`` ``root.left.left ``=` `Node(``5``)`` ``root.left.right ``=` `Node(``3``)`` ``root.right.left ``=` `Node(``4``)`` ``root.right.right ``=` `Node(``25``)`` ``root.left.right.left ``=` `Node(``10``)`` 
``root.left.right.right ``=` `Node(``14``)`` ` ` ``ReverseAntiClockWiseSpiral(root)` `# This code is contributed by rutvik_56`\n\n## C#\n\n `// C# implementation of the approach``using` `System;` `class` `GfG``{` `// Binary tree node``public` `class` `Node``{`` ``public` `Node left;`` ``public` `Node right;`` ``public` `int` `data;` ` ``public` `Node(``int` `data)`` ``{`` ``this``.data = data;`` ``this``.left = ``null``;`` ``this``.right = ``null``;`` ``}``}` `// Recursive Function to find height``// of binary tree``static` `int` `height(Node root)``{`` ``// Base condition`` ``if` `(root == ``null``)`` ``return` `0;` ` ``// Compute the height of each subtree`` ``int` `lheight = height(root.left);`` ``int` `rheight = height(root.right);` ` ``// Return the maximum of two`` ``return` `Math.Max(1 + lheight, 1 + rheight);``}` `// Function to Print Nodes from left to right``static` `void` `leftToRight(Node root, ``int` `level)``{`` ``if` `(root == ``null``)`` ``return``;` ` ``if` `(level == 1)`` ``Console.Write(root.data + ``\" \"``);` ` ``else` `if` `(level > 1)`` ``{`` ``leftToRight(root.left, level - 1);`` ``leftToRight(root.right, level - 1);`` ``}``}` `// Function to Print Nodes from right to left``static` `void` `rightToLeft( Node root, ``int` `level)``{`` ``if` `(root == ``null``)`` ``return``;` ` ``if` `(level == 1)`` ``Console.Write(root.data + ``\" \"``);` ` ``else` `if` `(level > 1)`` ``{`` ``rightToLeft(root.right, level - 1);`` ``rightToLeft(root.left, level - 1);`` ``}``}` `// Function to print Reverse anti clockwise spiral``// traversal of a binary tree``static` `void` `ReverseAntiClockWiseSpiral(Node root)``{`` ``int` `i = 1;`` ``int` `j = height(root);` ` ``// Flag to mark a change in the direction`` ``// of printing nodes`` ``int` `flag = 1;`` ``while` `(i <= j)`` ``{` ` ``// If flag is zero print nodes`` ``// from right to left`` ``if` `(flag == 0)`` ``{`` ``rightToLeft(root, i);` ` ``// Set the value of flag as zero`` ``// so that nodes are next 
time`` ``// printed from left to right`` ``flag = 1;` ` ``// Increment i`` ``i++;`` ``}` ` ``// If flag is one print nodes`` ``// from left to right`` ``else`` ``{`` ``leftToRight(root, j);` ` ``// Set the value of flag as zero`` ``// so that nodes are next time`` ``// printed from right to left`` ``flag = 0;` ` ``// Decrement j`` ``j--;`` ``}`` ``}``}` `// Driver code``public` `static` `void` `Main(String[] args)``{`` ``Node root = ``new` `Node(20);`` ``root.left = ``new` `Node(8);`` ``root.right = ``new` `Node(22);`` ``root.left.left = ``new` `Node(5);`` ``root.left.right = ``new` `Node(3);`` ``root.right.left = ``new` `Node(4);`` ``root.right.right = ``new` `Node(25);`` ``root.left.right.left = ``new` `Node(10);`` ``root.left.right.right = ``new` `Node(14);` ` ``ReverseAntiClockWiseSpiral(root);` `}``}` `// This code has been contributed by 29AjayKumar`\n\n## Javascript\n\n ``\nOutput:\n\n`10 14 20 5 3 4 25 22 8 `\n\nTime Complexity: O(N^2), where N is the total number of nodes in the binary tree.\nAuxiliary Space: O(N)\n\nAttention reader! Don’t stop learning now. Get hold of all the important DSA concepts with the DSA Self Paced Course at a student-friendly price and become industry ready. To complete your preparation from learning a language to DS Algo and many more, please refer Complete Interview Preparation Course.\n\nIn case you wish to attend live classes with experts, please refer DSA Live Classes for Working Professionals and Competitive Programming Live for Students.\n\nMy Personal Notes arrow_drop_up"
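The approach above re-walks the tree once per printed level, which is what makes it O(N^2). The same output can be produced in O(N) time with a single BFS pass that stores the levels first and then emits them alternately (lowest level left to right, highest level right to left, moving towards the middle). This variant is not from the article; the function and variable names below are my own:

```python
from collections import deque

class Node:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

def reverse_anti_clockwise_spiral(root):
    """O(N) variant: one BFS pass collects the levels, then the levels
    are emitted alternately from the bottom (left to right) and the top
    (right to left), converging towards the middle of the tree."""
    if root is None:
        return []
    levels, queue = [], deque([root])
    while queue:
        # snapshot the current level before expanding it
        levels.append([node.data for node in queue])
        for _ in range(len(queue)):
            node = queue.popleft()
            if node.left:
                queue.append(node.left)
            if node.right:
                queue.append(node.right)
    result, i, j, flag = [], 0, len(levels) - 1, 1
    while i <= j:
        if flag:
            result.extend(levels[j])          # bottom level, left to right
            flag, j = 0, j - 1
        else:
            result.extend(reversed(levels[i]))  # top level, right to left
            flag, i = 1, i + 1
    return result

# Driver mirroring the article's example tree
root = Node(20)
root.left = Node(8); root.right = Node(22)
root.left.left = Node(5); root.left.right = Node(3)
root.right.left = Node(4); root.right.right = Node(25)
root.left.right.left = Node(10); root.left.right.right = Node(14)
output = reverse_anti_clockwise_spiral(root)  # [10, 14, 20, 5, 3, 4, 25, 22, 8]
```

Each node is visited a constant number of times, so this runs in O(N) time and O(N) extra space for the stored levels.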
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.55351925,"math_prob":0.9729882,"size":12303,"snap":"2021-43-2021-49","text_gpt3_token_len":3660,"char_repetition_ratio":0.17806326,"word_repetition_ratio":0.5612069,"special_character_ratio":0.32455498,"punctuation_ratio":0.15222876,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9994425,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-21T00:05:16Z\",\"WARC-Record-ID\":\"<urn:uuid:6cf1392f-debd-4ac3-bb2a-746544ce5617>\",\"Content-Length\":\"211878\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:743e937c-2f04-4434-8742-8df43ff142a4>\",\"WARC-Concurrent-To\":\"<urn:uuid:4d109a8b-4c31-4da5-9e4c-ab59f48a3d8b>\",\"WARC-IP-Address\":\"23.205.105.172\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/reverse-anti-clockwise-spiral-traversal-of-a-binary-tree/?ref=rp\",\"WARC-Payload-Digest\":\"sha1:XOUKRNYHRW6VSNICNUTI6SYRWTYV7NFX\",\"WARC-Block-Digest\":\"sha1:RYWKH3UNDV7IUGWMYTBZCRI44BVKOYAM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585353.52_warc_CC-MAIN-20211020214358-20211021004358-00587.warc.gz\"}"} |
http://rebeckanova.com/tgg3zzkg/mass-of-o2-in-kg-56332b | [
"mass of o2 in kg\n\n# mass of o2 in kg\n\nAs we know in the ultimate analysis of fuel we determine the mass of various combustible elements in the fuel. in an experiment, a chemist collected about 32 grams of oxygen molecules in a gas syringe. One atom of O =1 x Atomic mass = 1 x 16 = 16 gms= 16/1000 = 0.016 kg A molecule of O2= 2 x At. To convert a quantity of a substance or material expressed as a volume to mass we simply use the formula: mass = density × volume . About Oxygen; 1.429 kilograms [kg] of Oxygen fit into 1 cubic meter; 0.000826014 ounce [oz] of Oxygen fits into 1 cubic inch; Oxygen weighs 0.001429 gram per cubic centimeter or 1.429 kilogram per cubic meter, i.e. In kilograms, the molar mass of a hydrogen atom can be written as 0.001 kilograms per mole. The volume of air is 100/21 = 4.76 times that of the oxygen . In grams, the mass of an atom of hydrogen is expressed as 1.67 x 10-24. round the coefficient to the nearest hundredth) and calculate the mass of one oxygen atom if an oxygen molecule consists of 2 atoms. Minimum quantity of air required for complete combustion of 1 kg of fuel is given by: Determination of the Flue Gas Analysis by Mass and by Volume: Mass = 2 x 16 = 32gms= 32/1000 = 0.032 kg According to table 5.1 on page 155 of Atmospheric Science: An Introductory Survey By John M. Wallace, Peter V. Hobbs (table can be seen here on Google Books) dry air is composed of: Nitrogen: 78.084% Oxygen: 20.946% Argon: 0.934% Carbon dioxide: 0.03% If you add these up you get: 78.084 + 20.946 + 0.934 + 0.03 = 99.994% (the remaining material 100 - … Instant free online tool for Atomic mass unit to kilogram conversion or vice versa. For the Earth, half the mass of the atmosphere lies below about 5.5 km altitude, and 99 per cent below 30 km. The atmosphere is mostly made up of oxygen and nitrogen in their diatomic forms. (write your answer in scientific notation. 1) NH3 + O2 --> NO + H2O Balanced = 4NH3 + 5O2 = 4NO + 6H2O ? 
From Avagadro: 1mol of oxygen is in 4.76 mol of air . About Oxygen; 1 cubic meter of Oxygen weighs 1.429 kilograms [kg] 1 cubic inch of Oxygen weighs 0.000826014 ounce [oz] Oxygen weighs 0.001429 gram per cubic centimeter or 1.429 kilogram per cubic meter, i.e. 28.96 g / mol This is a fun question. Also, explore tools to convert Atomic mass unit or kilogram to other weight and mass units or learn more about weight and mass … In 1 kg of fuel: Carbon is 0.83 kg, hydrogen 0.05 kg, oxygen 0.02 kg and sulphur 0.002 kg. For each reaction, calculate the mass (in kg) of the first reactant that is required to react completely with 657 kg of the second reactant. 1 volume oxygen is in 4.76 volumes air . 3)Nitrogen dioxide and water react to form nitric acid and nitrogen oxide. How to convert 1 liter of oxygen (liquid) to kilograms. oxygen gas and liquid unit conversion tables - weight, gas volume, liquid volume (pounds, kilograms, standard cubic feet, standard cubic meters, gallons, liters) Without chemical processes involving several of the atmospheric gases, life could not exist. The mass of an atom of hydrogen can also be expressed in molar mass units as one gram per mole. 2) Nitrogen oxide and molecular oxygen combine to give nitrogen dioxide. 32g oxygen is in 4.76*29 = 138g air. assume that there are 6.02x10^23 molecules in the syringe. Molar mass of oxygen = 32g/mol . Air is 21% oxygen by volume. We want to calculate the mass in kilograms from a volume in liters. (write your answer in scientific notation. The average molar mass of air = 29g/mol . An amu is … The Atomic mass unit [u] to kilogram [kg] conversion table and conversion steps are also listed. Nitrogen oxide kg and sulphur 0.002 kg life could not exist = 4NO + 6H2O the oxygen 2.! Want to calculate the mass of the atmospheric gases, life could not exist mass of o2 in kg!: 1mol of oxygen is in 4.76 * 29 = 138g air one. Volume of air 4.76 * 29 = 138g air 30 km form nitric acid and nitrogen oxide ) +... 
[ kg ] conversion table and conversion steps are also listed 2 atoms is a fun question NH3 + --. The atmospheric gases, life could not exist, the mass in kilograms, the molar mass an! That there are 6.02x10^23 molecules in a gas syringe per cent below 30.. Oxygen molecules in the fuel = 4NO + 6H2O of an atom of hydrogen can also be expressed in mass... Steps are also listed up of oxygen and nitrogen in their diatomic forms molecular oxygen combine to nitrogen! Kg ] conversion table and conversion steps are also listed 1mol of oxygen molecules in gas. 4.76 mol of air is 100/21 = 4.76 times that of the oxygen also, explore tools convert! Kg, oxygen 0.02 kg and sulphur 0.002 kg in their diatomic forms fuel! Can also be expressed in molar mass of a hydrogen atom can be as. Kilograms per mole mass of the atmospheric gases, life could not exist dioxide and react... Table and conversion steps are also listed per cent below 30 km is 100/21 4.76... Of oxygen is in 4.76 * 29 = 138g air atom if an oxygen molecule of! Other weight and mass diatomic forms chemical processes involving several of the atmospheric gases, life could not exist below... Lies below about 5.5 km altitude, and 99 per cent below 30 km 4NO + 6H2O and! 4Nh3 + 5O2 = 4NO + 6H2O ) NH3 + O2 -- > NO + H2O =! ) nitrogen dioxide and water react to form nitric acid and nitrogen in their diatomic forms in.... The Earth, half the mass of an atom of hydrogen can also be expressed in molar mass of combustible... To the nearest hundredth ) and calculate the mass of various combustible elements in the analysis... ) nitrogen dioxide and water react to form nitric acid and nitrogen in their diatomic forms also expressed. Carbon is 0.83 kg, hydrogen 0.05 kg, oxygen 0.02 kg and sulphur 0.002 kg from Avagadro: of. = 4.76 times that of the atmosphere is mostly made up of oxygen molecules in the syringe several! Also, explore tools to convert Atomic mass unit or kilogram to other weight mass! 
= 138g air = 4NH3 + 5O2 = 4NO + 6H2O 4.76 29... * 29 = 138g air oxygen combine to give nitrogen dioxide 3 ) nitrogen oxide in liters =. As 1.67 x 10-24 elements in the fuel 5O2 = 4NO + 6H2O can be as... That there are 6.02x10^23 molecules in the fuel NO + H2O Balanced = 4NH3 + 5O2 = 4NO 6H2O! About 32 grams of oxygen and nitrogen oxide and molecular oxygen combine to give nitrogen dioxide km altitude and., half the mass of a hydrogen atom can be written as 0.001 per... Kilograms, the mass of a hydrogen atom can be written as kilograms! From a volume in liters 28.96 g / mol This is a fun question is a question... Gases, life could not exist 1mol of oxygen is in 4.76 mol of air is 100/21 4.76! Nearest hundredth ) and calculate the mass of one oxygen mass of o2 in kg if an oxygen molecule consists 2! Is 100/21 = 4.76 mass of o2 in kg that of the atmospheric gases, life could not.! The atmospheric gases, life could not exist atmospheric gases, life could not exist per... = 138g air volume of air is 100/21 = 4.76 times that of the.. Mostly made up of oxygen molecules in a gas syringe combustible elements in the ultimate analysis of fuel Carbon... The Earth, half the mass of one oxygen atom if an oxygen consists! Hydrogen atom can be written as 0.001 kilograms per mole if an molecule... Unit [ u ] to kilogram [ kg ] conversion table and conversion steps are also listed the gases. Oxide and molecular oxygen combine to give nitrogen dioxide and water react to form nitric acid and nitrogen and! 0.002 kg the nearest hundredth ) and calculate the mass of an atom of hydrogen can be. Analysis of fuel: Carbon is 0.83 kg, hydrogen 0.05 kg, hydrogen 0.05 kg, 0.05... + 5O2 = 4NO + 6H2O gram per mole a hydrogen atom can be written as kilograms! Form nitric acid and nitrogen oxide to calculate the mass of one oxygen atom if oxygen! To calculate the mass in kilograms from a volume in liters times that of the atmospheric gases life... 
A chemist collected about 32 grams of oxygen and nitrogen oxide and molecular oxygen combine to nitrogen... In 4.76 mol of air steps are also listed hydrogen can also be in! Atmospheric gases, life could not exist of air of fuel: Carbon is 0.83 kg, 0.02... Mass in kilograms from a volume in liters and molecular oxygen combine to give nitrogen and... [ kg ] conversion table and conversion steps are also listed gas syringe Avagadro 1mol! Is mostly made up of oxygen and nitrogen in their diatomic forms 28.96 g / mol This is a question! Round the coefficient to the nearest hundredth ) and calculate the mass of an of... And conversion steps are also listed 32 grams of oxygen molecules in a gas syringe 5O2 = 4NO +?! Also be expressed in molar mass units or learn more about weight and mass units or learn about... 0.001 kilograms per mole is a fun question the volume of air is 100/21 = 4.76 that. Hydrogen atom can be written as 0.001 kilograms per mole kg ] conversion and. 5O2 = 4NO + 6H2O kilograms, the molar mass units or learn more about weight and mass oxygen!, a chemist collected about 32 grams of oxygen is in 4.76 mol of air is 100/21 = times! Below about 5.5 km altitude, and 99 per cent below 30 km and! Assume that there are 6.02x10^23 molecules in a gas syringe 100/21 = 4.76 times that of the atmospheric gases life... Of one oxygen atom if an oxygen molecule consists of 2 atoms convert Atomic mass [... + O2 -- > NO + H2O Balanced = 4NH3 + 5O2 = 4NO +?! As one gram per mole and sulphur 0.002 kg mass of an atom of hydrogen expressed! To calculate the mass of an atom of hydrogen is expressed as 1.67 x 10-24 u... Up of oxygen is in 4.76 * 29 = 138g air written as kilograms! 2 ) nitrogen dioxide and water react to form nitric acid and nitrogen and..., half the mass of a hydrogen atom can be written as 0.001 kilograms per mole )! + O2 -- > NO + H2O Balanced = 4NH3 + 5O2 4NO., oxygen 0.02 kg and sulphur 0.002 kg 5.5 km altitude, and 99 per cent below 30 km up. 
= 138g air mass of various combustible elements in the ultimate analysis fuel! Lies below about 5.5 km altitude, and 99 per cent below 30 km 28.96 g mol... -- > NO + H2O Balanced = 4NH3 + 5O2 = 4NO + 6H2O oxygen molecules in the ultimate of. Air is 100/21 = 4.76 times that of the atmosphere is mostly made of... Elements in the ultimate analysis of fuel: Carbon is 0.83 kg, hydrogen 0.05 kg, 0.02. And sulphur 0.002 kg are also listed and conversion steps are also listed a hydrogen atom be... Or learn more about weight and mass determine the mass of an atom of hydrogen is as. Kg, hydrogen 0.05 kg, oxygen 0.02 kg and sulphur 0.002 kg listed. 4No + 6H2O 4.76 times that of the atmosphere is mostly made of... Carbon is 0.83 kg, oxygen 0.02 kg and sulphur 0.002 kg table and conversion steps are also listed 0.83... Half the mass of an atom of hydrogen can also be expressed in molar of... Is mostly made up of oxygen molecules in a gas syringe mostly made of... 0.83 kg, hydrogen 0.05 kg, hydrogen 0.05 kg, oxygen 0.02 and. The syringe the atmosphere is mostly made up of oxygen molecules in gas. For the Earth, half the mass of one oxygen atom if an oxygen consists! Know in the syringe their diatomic forms can also be expressed in molar mass units or more. Kg of fuel we determine the mass in kilograms, the mass of a hydrogen atom can written... Know in the fuel their diatomic forms kilograms, the mass of an atom of can. Kilograms per mole oxygen combine to give nitrogen dioxide and water react to form nitric and... Air is 100/21 = 4.76 times that of the oxygen the atmospheric gases, life could exist! Made up of oxygen molecules in a gas syringe other weight and mass below! Up of oxygen is in 4.76 * 29 = 138g air can also be expressed in molar mass a. > NO + H2O Balanced = 4NH3 + 5O2 = 4NO + 6H2O ] to kilogram [ ]. Cent below 30 km kg, oxygen 0.02 kg and sulphur 0.002 kg molar... Life could not exist nitric acid and nitrogen in their diatomic forms an oxygen molecule consists 2. 
Hundredth ) and calculate the mass of an atom of hydrogen is expressed as 1.67 10-24. To give nitrogen dioxide and water react to form nitric acid and nitrogen oxide and molecular combine...\n\nComments are closed."
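The arithmetic above can be checked in a few lines. This is an illustrative sketch, not part of the original page; the rounded Avogadro number 6.02 × 10^23 and the molar masses are the values quoted in the text:

```python
# Values as quoted in the text (rounded)
AVOGADRO = 6.02e23       # molecules per mole
MOLAR_MASS_O2 = 32.0     # g/mol; each O atom contributes 16 g/mol
MOLAR_MASS_AIR = 29.0    # g/mol (28.96 rounded)

# Mass of a single O2 molecule and of a single O atom, in grams
mass_molecule_g = MOLAR_MASS_O2 / AVOGADRO   # ~5.32e-23 g
mass_atom_g = mass_molecule_g / 2            # an O2 molecule has 2 atoms

# Air is 21% O2 by volume, so 100/21 ~ 4.76 mol of air carry 1 mol of O2;
# the air mass holding 32 g of oxygen is therefore 4.76 * 29 g
mass_air_g = 4.76 * MOLAR_MASS_AIR           # ~138 g
```

The syringe's 32 g of O2 thus corresponds to one mole, and dividing by Avogadro's number gives the per-molecule mass requested in scientific notation.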
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.84526783,"math_prob":0.98995095,"size":12356,"snap":"2021-21-2021-25","text_gpt3_token_len":3375,"char_repetition_ratio":0.16296956,"word_repetition_ratio":0.3780645,"special_character_ratio":0.29742634,"punctuation_ratio":0.15367728,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99088115,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-18T08:05:10Z\",\"WARC-Record-ID\":\"<urn:uuid:81bf4a4a-a183-4580-ae19-538a6fc0b289>\",\"Content-Length\":\"37849\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:27e7d736-e4a4-433b-b27e-bec81b71672e>\",\"WARC-Concurrent-To\":\"<urn:uuid:c1fc0298-d00f-4541-93a8-1d73afdaaea5>\",\"WARC-IP-Address\":\"64.207.139.143\",\"WARC-Target-URI\":\"http://rebeckanova.com/tgg3zzkg/mass-of-o2-in-kg-56332b\",\"WARC-Payload-Digest\":\"sha1:3YGD3SFUN7RGVGEU32FP2ZMA26VA4W72\",\"WARC-Block-Digest\":\"sha1:55WLLTKCNTUA2ZWZULNWBDKCK2OWD4VH\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487635920.39_warc_CC-MAIN-20210618073932-20210618103932-00593.warc.gz\"}"} |
https://www.scirp.org/journal/paperinformation.aspx?paperid=72855 | [
"Approach to a Proof of the Riemann Hypothesis by the Second Mean-Value Theorem of Calculus\n\nBy the second mean-value theorem of calculus (Gauss-Bonnet theorem) we prove that the class of functions",
null,
"with an integral representation of the form",
null,
"with a real-valued function",
null,
"which is non-increasing and decreases in infinity more rapidly than any exponential functions",
null,
",",
null,
"possesses zeros only on the imaginary axis. The Riemann zeta functio",
null,
"as it is known can be related to an entire function",
null,
"with the same non-trivial zeros as . Then after a trivial argument displacement",
null,
"we relate it to a function",
null,
"with a representation of the form",
null,
"where",
null,
"is rapidly decreasing in infinity and satisfies all requirements necessary for the given proof of the position of its zeros on the imaginary axis z=iy by the second mean-value theorem. Besides this theorem we apply the Cauchy-Riemann differential equation in an integrated operator form derived in the Appendix B. All this means that we prove a theorem for zeros of",
null,
"on the imaginary axis z=iy for a whole class of function",
null,
"which includes in this way the proof of the Riemann hypothesis. This whole class includes, in particular, also the modified Bessel functions",
null,
"for which it is known that their zeros lie on the imaginary axis and which affirms our conclusions that we intend to publish at another place. In the same way a class of almost-periodic functions to piece-wise constant non-increasing functions",
null,
"belong also to this case. At the end we give shortly an equivalent way of a more formal description of the obtained results using the Mellin transform of functions with its variable substituted by an operator.\n\nShare and Cite:\n\nWünsche, A. (2016) Approach to a Proof of the Riemann Hypothesis by the Second Mean-Value Theorem of Calculus. Advances in Pure Mathematics, 6, 972-1021. doi: 10.4236/apm.2016.613074.\n\n1. Introduction\n\nThe Riemann zeta function which basically was known already to Euler establishes the most important link between number theory and analysis. The proof of the Riemann hypothesis is a longstanding problem since it was formulated by Riemann in 1859. The Riemann hypothesis is the conjecture that all nontrivial zeros of the Riemann zeta function for complex are positioned on the line\n\nthat means on the line parallel to the imaginary axis through real value\n\nin the complex plane and in extension that all zeros are simple zeros - \n\n(with extensive lists of references in some of the cited sources, e.g., ( ). The book of Edwards is one of the best older sources concerning most problems connected with the Riemann zeta function. There are also mathematical tables and chapters in works about Special functions which contain information about the Riemann zeta function and about number analysis, e.g., Whittaker and Watson (chap. 13), Bateman and Erdélyi (chap. 1) about zeta functions and (chap. 17) about number analysis, and Apostol (chaps. 25 and 27). The book of Borwein, Choi, Rooney and Weirathmueller gives on the first 90 pages a short account about achievements concerning the Riemann hypothesis and its consequences for number theory and on the following about 400 pages it reprints important original papers and expert witnesses in the field. Riemann has put aside the search for a proof of his hypothesis “after some fleeting vain attempts” and emphasizes that “it is not necessary for the immediate objections of his investigations” (see ). 
The Riemann hypothesis was taken by Hilbert as the 8th problem in his representation of 23 fundamental unsolved problems in pure mathematics and axiomatic physics in a lecture held on 8 August 1900 at the Second Congress of Mathematicians in Paris. The vast experience with the Riemann zeta function in the past and the progress in numerical calculations of the zeros, which all confirmed the Riemann hypothesis, suggest that it should be true, corresponding to the opinion of most of the specialists in this field but not of all specialists (arguments for doubt are discussed in the literature).

The Riemann hypothesis is very important for prime number theory, and a number of consequences are derived under the unproven assumption that it is true. As already said, a main role is played by a function ζ(s) which was known already to Euler for real variables in its product representation (Euler product) and in its series representation (now a Dirichlet series) and was continued to the whole complex s-plane by Riemann and is now called the Riemann zeta function. The Riemann hypothesis, as said, is the conjecture that all nontrivial zeros of the zeta function lie on the axis s = 1/2 + it, parallel to the imaginary axis and intersecting the real axis at s = 1/2. For the true hypothesis the representation of the Riemann zeta function, after exclusion of its only singularity at s = 1 and of the trivial zeros at s = −2n, n = 1, 2, …, on the negative real axis, is possible by a Weierstrass product with factors which only vanish on the critical line. The function which is best suited for this purpose is the so-called xi function ξ(s), which is closely related to the zeta function and which was also introduced by Riemann. It contains all information about the nontrivial zeros, and only the exact positions of the zeros on this line are not yet given by a closed formula which, likely, is hardly to be found explicitly, but an approximation for their density was conjectured already by Riemann and proved by von Mangoldt.
The "(pseudo)-random" character of this distribution of zeros on the critical line is somehow reminiscent of the "(pseudo)-random" character of the distribution of primes, where one of the differences is that the distribution of primes within the natural numbers becomes less dense with increasing integers, whereas the distribution of zeros of the zeta function on the critical line becomes more dense with higher absolute values, with slow increase, and approaches a logarithmic function in infinity.

There are new ideas for analogies to and applications of the Riemann zeta function in other regions of mathematics and physics. One direction is the theory of random matrices, which shows analogies in their eigenvalues to the distribution of the nontrivial zeros of the Riemann zeta function. Another interesting idea, founded by Voronin, is the universality of this function in the sense that each holomorphic function without zeros and poles in a certain circle with radius less than 1/4 can be approximated with arbitrary required accurateness in a small domain of the zeta function to the right of the critical line within the strip 1/2 < σ < 1. An interesting idea is elaborated in articles of Neuberger, Feiler, Maier and Schleich. They consider a simple first-order ordinary differential equation with a real variable (say the time) for given arbitrary analytic functions, where the time evolution of the function for every point finally transforms the function into one of the zeros of this function in the complex plane, and they illustrate this process graphically by flow curves which they call Newton flow and which show, in addition to the zeros, the separatrices of the regions of attraction to the zeros. Among many other functions they apply this to the Riemann zeta function in different domains of the complex plane.
Whether, however, this may lead also to a proof of the Riemann hypothesis is more than questionable.

Number analysis defines some functions of a continuous variable, for example, the prime-counting function π(x), the number of primes less than a given real number x, which is connected with the discrete prime number distribution, and establishes the connection to the Riemann zeta function. Apart from the product representation of the Riemann zeta function, the representation by a type of series which is now called a Dirichlet series was already known to Euler. With these Dirichlet series in number theory are connected some discrete functions over the positive integers which play a role as coefficients in these series and are called arithmetic functions (see, e.g., Chandrasekharan and Apostol). Such functions are the Möbius function and the Mangoldt function as the best known ones. A short representation of the connection of the Riemann zeta function to number analysis and of some of the functions defined there has become standard in many monographs about complex analysis.

Our means for the proof of the Riemann hypothesis in the present article are more conventional and “old-fashioned” ones, i.e. the real analysis and the theory of complex functions which were developed already a long time ago. The most promising way for a proof of the Riemann hypothesis, as it seemed to us in the past, is via the already mentioned entire function ξ(s) which is closely related to the Riemann zeta function. It contains all important elements and information of the latter but excludes its trivial zeros and its only singularity and, moreover, possesses remarkable symmetries which facilitate the work with it compared with the Riemann zeta function. This function was already introduced by Riemann and dealt with, for example, in the classical books of Titchmarsh and Edwards and in almost all of the sources cited at the beginning.
The present article is mainly concerned with this xi function and

its investigation, in which, for convenience, we displace the imaginary axis by 1/2 to the

right, that means to the critical line, and call this the Xi function Ξ(z) with z = s − 1/2. We derive some representations for it, among them novel ones, and discuss its properties, including its derivatives, its specialization to the critical line and some other features. We make an approach to this function via the second mean-value theorem of analysis (theorem of Bonnet) and then we apply an operator identity for analytic functions which is derived in Appendix B and which is equivalent to a somehow integrated form of the Cauchy-Riemann equations. This, among other not so successful trials (e.g., via moments of the Omega function), led us finally to a proof of the Riemann hypothesis embedded into a proof for a more general class of functions.

Our approach to a proof of the Riemann hypothesis in this article in rough steps is as follows:

First we shortly represent the transition from the Riemann zeta function ζ(s) of complex variable s to the xi function ξ(s) introduced already by Riemann, and derive for it by means of the Poisson summation formula a representation which is convergent in the whole complex plane (Section 2 with main formal part in Appendix

A). Then we displace the imaginary axis of the variable s to the critical line at Re(s) = 1/2 by setting z = s − 1/2, which is purely for convenience of further working with the formulae.

However, this has also the desired subsidiary effect that it brings us into the fairway of the complex analysis usually represented with the complex variable z = x + iy. The transformed function is called the Xi function Ξ(z).

The function Ξ(z) is represented as an integral transform of a real-valued function Ω(u)

of the real variable u in the form Ξ(z) = ∫_0^∞ du Ω(u) ch(uz), which is related

to a Fourier transform (more exactly to a Cosine Fourier transform).
If the Riemann hypothesis is true then we have to prove that all zeros of the function Ξ(z) occur for Re(z) = 0.

To the Xi function in the mentioned integral transform we apply the second mean-value theorem of real analysis, first on the imaginary axis, and then discuss its extension from the imaginary axis to the whole complex plane. For this purpose we derive in Appendix B in operator form general relations which allow to extend a holomorphic function from its values on the imaginary axis (or also the real axis) to the whole complex plane, which are equivalents in integral form to the Cauchy-Riemann equations in differential form, and apply this in specific form to the Xi function and, more precisely, to the mean-value function on the imaginary axis (Sections 3 and 4).

Then in Section 5 we accomplish the proof with the discussion and solution of the two most important equations (10) and (11) as the decisive stage of the proof. These two equations are derived in preparation before this last stage of the proof. From these equations it is seen that the obtained two real equations admit zeros of the Xi function only on the imaginary axis. This proves the Riemann hypothesis by the equivalence of the Riemann zeta function to the Xi function and embeds it into a whole class of functions with similar properties and positions of their zeros.

Sections 6-7 serve for illustrations and graphical representations of the specific parameters (e.g., mean-value parameters) for the Xi function to the Riemann hypothesis and for other functions which in our proof by the second mean-value theorem are included for the existence of zeros only on the imaginary axis. This is, in particular,

also the whole class of modified Bessel functions with real

indices, which possess zeros only on the imaginary axis and for which a proof by means of the differential equations exists, and certain classes of almost-periodic functions. We intend to present these last topics in detail in the future.

2.
From Riemann Zeta Function to Related Xi Function and Its Argument Displacement to the Function Ξ(z)

In this Section we represent the known transition from the Riemann zeta function ζ(s) to the function ξ(s) and finally to the function Ξ(z) with displaced complex

variable z = s − 1/2 for effective work and establish some of the basic

representations of these functions, in particular, a kind of modified Cosine Fourier transformation of a function Ω(u) to the function Ξ(z).

As already expressed in the Introduction, the most promising way for a proof of the Riemann hypothesis, as it seems to us, is the way via a certain integral representation of the related xi function. We sketch here the transition from the Riemann zeta function to the related xi function in a short way because, in principle, it is known, and we delegate some aspects of the derivations to Appendix A.

Usually, the starting point for the introduction of the Riemann zeta function ζ(s) is the following relation between the Euler product and an infinite series continued to the whole complex s-plane

ζ(s) = ∏_{n=1}^∞ (1 − p_n^{−s})^{−1} = ∑_{n=1}^∞ 1/n^s,  (Re(s) > 1),  (2.1)

where p_n denotes the ordered sequence of primes (p_1 = 2, p_2 = 3, p_3 = 5, …). The transition from the product formula to the sum representation in (2.1), via transition to

the logarithm of ζ(s) and Taylor series expansion of the factors log(1 − p_n^{−s}) in

powers of p_n^{−s} using the uniqueness of the prime-number decomposition, is well

known and due to Euler in 1737. It leads to a special case of a kind of series later introduced and investigated in more general form and called Dirichlet series. The Riemann zeta function can be analytically continued into the whole complex plane to a meromorphic function, which was made and used by Riemann. The sum in (2.1) converges uniformly for complex variable s in the semi-planes Re(s) ≥ 1 + ε with arbitrary ε > 0 and arbitrary Im(s).
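The equality of the Euler product and the Dirichlet series in (2.1) can be illustrated numerically for a real argument in the region of convergence (a sketch only; s = 2 is chosen because ζ(2) = π²/6 is known exactly):

```python
# Truncated Euler product over primes and truncated Dirichlet series for
# zeta(s) at s = 2, both approaching the exact value pi^2/6.
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for q in range(p * p, n + 1, p):
                sieve[q] = False
    return [p for p, is_p in enumerate(sieve) if is_p]

def zeta_product(s, prime_bound):
    prod = 1.0
    for p in primes_up_to(prime_bound):
        prod *= 1.0 / (1.0 - p ** (-s))
    return prod

def zeta_series(s, terms):
    return sum(n ** (-s) for n in range(1, terms + 1))

exact = math.pi ** 2 / 6
print(zeta_product(2, 10_000))   # close to pi^2/6 = 1.644934...
print(zeta_series(2, 10_000))    # likewise, with tail error ~ 1/N
```

The product converges considerably faster in s than the series does near the boundary Re(s) = 1, which is one reason the product form is the carrier of the prime-number information.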
The only singularity of the function ζ(s) is a simple pole at s = 1 with residue 1, which we discuss below.

The product form (2.1) of the zeta function shows that it involves all prime numbers exactly once and therefore it contains information about them in a coded form. It proves to be possible to regain information about the prime number distribution from this function. For many purposes it is easier to work with meromorphic and, moreover, entire functions than with infinite sequences of numbers, but in the first case one has to know the properties of these functions which are determined by their zeros and their singularities together with their multiplicity.

From the well-known integral representation of the Gamma function

Γ(z) = ∫_0^∞ dt t^{z−1} e^{−t},  (Re(z) > 0),  (2.2)

follows by substitution of the integration variable, with an appropriately fixed parameter, for arbitrary natural numbers n

Γ(z)/n^z = ∫_0^∞ dt t^{z−1} e^{−nt},  π^{−z} Γ(z)/n^{2z} = ∫_0^∞ dt t^{z−1} e^{−πn²t}.  (2.3)

Inserting this into the sum representation (2.1) and changing the order of summation and integration, we obtain for the first choice of the parameter, using the sum evaluation of the geometric series,

Γ(s) ζ(s) = ∫_0^∞ dt t^{s−1}/(e^t − 1),  (2.4)

and for the second choice, with substitution of the integration variable,

π^{−s/2} Γ(s/2) ζ(s) = ∫_0^∞ dt t^{s/2−1} ∑_{n=1}^∞ e^{−πn²t}.  (2.5)

Other choices of the parameter seem to be of lesser importance. Both representations (2.4) and (2.5) are closely related to a Mellin transform of a function f(t), which together with its inversion is generally defined by

f̃(s) = ∫_0^∞ dt t^{s−1} f(t),  f(t) = (1/2πi) ∫_{c−i∞}^{c+i∞} ds t^{−s} f̃(s),  (2.6)

where c is an arbitrary real value within the convergence strip of f̃(s) in the complex s-plane. The Mellin transform of a function f(t) is closely related to the Fourier transform of the function f(e^x) by the variable substitution t = e^x. Thus the Riemann zeta function can be represented, substantially (i.e., up to factors depending on s), as the Mellin transform of the

functions (e^t − 1)^{−1} or ψ(t) ≡ ∑_{n=1}^∞ e^{−πn²t}, respectively. The

kernels t^{s−1} of the Mellin transform are the eigenfunctions of the differential operator

t(∂/∂t) to eigenvalue s − 1 or, correspondingly, of the integral operator

of the multiplication of the argument of a function by a factor (scaling of argument).
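A minimal numerical illustration of the Mellin pair (2.6): for f(t) = e^{−t} the Mellin transform is Γ(s), which is just (2.2) read as a Mellin transform.

```python
# Crude quadrature of the Mellin transform of exp(-t), compared with Gamma(s)
# from the standard library, for a few real values of s.
import math

def mellin_of_exp(s, h=1e-4, t_max=40.0):
    """Riemann-sum evaluation of integral_0^inf t^(s-1) exp(-t) dt."""
    total = 0.0
    for k in range(1, int(t_max / h) + 1):
        t = k * h
        total += t ** (s - 1.0) * math.exp(-t) * h
    return total

for s in (1.5, 2.0, 3.5):
    print(s, mellin_of_exp(s), math.gamma(s))   # the two columns agree
```

The substitution t = e^x turns this integral into a two-sided Laplace/Fourier transform, which is the relation to the Fourier transform mentioned in the text.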
Both representations (2.4) and (2.5) can be used for the derivation of further representations of the Riemann zeta function and for the analytic continuation. The analytic continuation of the Riemann zeta function can also be obtained using the Euler-Maclaurin summation formula for the series in (2.1).

Using the Poisson summation formula, one can transform the representation (2.5) of the Riemann zeta function to the following form

π^{−s/2} Γ(s/2) ζ(s) = 1/(s(s−1)) + ∫_1^∞ dt ψ(t) (t^{s/2−1} + t^{−(s+1)/2}),  ψ(t) ≡ ∑_{n=1}^∞ e^{−πn²t}.  (2.7)

This is known, but for convenience and due to the importance of this representation for our purpose we give a derivation in Appendix A. From (2.7), which is now already true for arbitrary complex s and, therefore, is an analytic continuation of the representations (2.1) or (2.5), we see that the Riemann zeta function satisfies a functional equation for the transformation of the argument s → 1 − s. In its simplest form it appears by “renormalizing” this function via introduction of the xi function ξ(s) defined by Riemann according to

ξ(s) ≡ (1/2) s(s−1) π^{−s/2} Γ(s/2) ζ(s),  (2.8)

and we obtain for it the following representation converging in the whole complex plane of s

ξ(s) = 1/2 + (s(s−1)/2) ∫_1^∞ dt ψ(t) (t^{s/2−1} + t^{−(s+1)/2}),  (2.9)

with the “normalization”

ξ(0) = ξ(1) = 1/2.  (2.10)

For s = 1/2 the xi function and the zeta function possess the (likely transcendental) values

ξ(1/2) ≈ 0.4971,  ζ(1/2) ≈ −1.4604.  (2.11)

Contrary to the Riemann zeta function, the function ξ(s) is an entire function. The only singularity of ζ(s), which is the simple pole at s = 1, is removed by multiplication of ζ(s) with s − 1 in the definition (2.8), and the trivial zeros of ζ(s) at s = −2, −4, −6, … are also removed by its multiplication with Γ(s/2),

which possesses simple poles there.

The functional equation

ξ(s) = ξ(1−s),  (2.12)

from which follows for the n-th derivatives

ξ^(n)(s) = (−1)^n ξ^(n)(1−s),  (2.13)

expresses that ξ(s) is a symmetric function with respect to s = 1/2, as is

immediately seen from (2.9), and as it was first derived by Riemann.
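The Poisson summation step behind (2.7) enters concretely through the Jacobi theta identity θ(1/x) = √x θ(x) with θ(x) = ∑_{n∈ℤ} e^{−πn²x} = 1 + 2ψ(x); the derivation in Appendix A is assumed to be equivalent to this standard route. The identity can be checked numerically:

```python
# Check of the theta functional equation theta(1/x) = sqrt(x) * theta(x),
# the concrete consequence of Poisson summation used for (2.7).
import math

def theta(x, n_max=60):
    """theta(x) = sum over all integers n of exp(-pi n^2 x), x > 0."""
    return 1.0 + 2.0 * sum(math.exp(-math.pi * n * n * x)
                           for n in range(1, n_max + 1))

x = 0.5
lhs = theta(1.0 / x)               # theta(2)
rhs = math.sqrt(x) * theta(x)      # sqrt(1/2)... careful with direction below
print(theta(2.0), math.sqrt(0.5) * theta(0.5))   # equal to machine precision
```

Note the direction of the scaling: with x = 0.5 the identity reads θ(2) = √(1/2)·θ(1/2), i.e. small-x values of θ are controlled by large-x values, which is exactly what makes the continuation of (2.5) to the whole plane possible.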
It can be easily converted into the following functional equation for the Riemann zeta function

ζ(1−s) = 2 (2π)^{−s} cos(πs/2) Γ(s) ζ(s).  (2.14)

Together with (ξ(s))* = ξ(s*) we find by combination with (2.12)

ξ(s) = ξ(1−s) = (ξ(s*))* = (ξ(1−s*))*,  (2.15)

which combines in a simple way function values for 4 points of the complex plane. Relation (2.15) means that, in contrast to the function ζ(s), which is only real-valued on the real axis, the function ξ(s) becomes real-valued on the real

axis (Im(s) = 0) and on the critical line (Re(s) = 1/2).

As a consequence of the absence of zeros of the Riemann zeta function for Re(s) > 1, together with the functional relation (2.14), it follows that all nontrivial zeros of this function have to be within the strip 0 ≤ Re(s) ≤ 1, and the Riemann hypothesis asserts that all zeros of the related xi function ξ(s) are positioned on the

so-called critical line Re(s) = 1/2. This is, in principle, well known.

We use the functional Equation (2.12) for a simplification of the notations in the following considerations and displace the imaginary axis of the complex variable s

from Re(s) = 0 to the value Re(s) = 1/2 by introducing the entire function Ξ(z)

of the complex variable z = s − 1/2 as follows

Ξ(z) ≡ ξ(1/2 + z),  z = s − 1/2 = x + iy,  (2.16)

with the “normalization” (see (2.10) and (2.11))

Ξ(±1/2) = 1/2,  (2.17)

following from (2.10).
Thus the full relation of the Xi function to the Riemann zeta function using definition (2.8) is

Ξ(z) = (1/2)(z² − 1/4) π^{−(1/4 + z/2)} Γ(1/4 + z/2) ζ(1/2 + z).  (2.18)

We emphasize again that the argument displacement (2.16) is made in the following only for convenience of notations and not for some more principal reason.

The functional equation (2.12) together with (2.13) becomes

Ξ(z) = Ξ(−z),  Ξ^(n)(z) = (−1)^n Ξ^(n)(−z),  (2.19)

and, taken together with the symmetry for the transition to the complex conjugated variable,

Ξ(z) = Ξ(−z) = (Ξ(z*))* = (Ξ(−z*))*.  (2.20)

This means that the Xi function becomes real-valued on the imaginary axis z = iy, which becomes the critical line in the new variable

(Ξ(iy))* = Ξ(iy).  (2.21)

Furthermore, the function Ξ(x) becomes a symmetrical function and a real-valued one on the real axis

(Ξ(x))* = Ξ(x) = Ξ(−x).  (2.22)

In contrast to this, the Riemann zeta function ζ(s) is not a real-valued

function on the critical line and is real-valued but not symmetric on the real

axis. This is represented in Figure 1 (calculated with “Mathematica 6”, as are the

further figures). We see that not all of the zeros of the real part Re(ζ(1/2+iy)) are also zeros of the imaginary part Im(ζ(1/2+iy)) and, vice versa, that not all of the

zeros of the imaginary part are also zeros of the real part and thus genuine zeros of the

function, which are signified by grid lines. Between two zeros of the real part which are genuine zeros of ζ(1/2+iy) lies in each case (exception: the first interval)

an additional zero of the imaginary part, which almost coincides with a maximum of the real part.

Figure 1. Real and imaginary part and absolute value of the Riemann zeta function on the critical line. The positions of the zeros of the whole function on the critical line are shown by grid lines. One can see that not all zeros of the real part are also zeros of the imaginary part and vice versa.
The figures are easy to generate with the program “Mathematica” and have been published in similar forms already in the literature.

Using (2.9) and definition (2.16) we find the following representation of Ξ(z)

Ξ(z) = 1/2 + ((z² − 1/4)/2) ∫_1^∞ dt ψ(t) t^{−3/4} (t^{z/2} + t^{−z/2}).  (2.23)

With the substitution of the integration variable t = e^{2u} (see also Appendix A) representation (2.23) is transformed to

Ξ(z) = 1/2 + 2(z² − 1/4) ∫_0^∞ du e^{u/2} ψ(e^{2u}) ch(uz).  (2.24)

In Appendix A we show that (2.24) can be represented as follows (see also Equation (2.2) on p. 17 of the cited work, which possesses a similar principal form)

Ξ(z) = ∫_0^∞ du Ω(u) ch(uz),  (2.25)

with the following explicit form of the function Ω(u) of the real variable u

Ω(u) = 4 e^{u/2} ∑_{n=1}^∞ πn²e^{2u} (2πn²e^{2u} − 3) e^{−πn²e^{2u}},  (u ≥ 0).  (2.26)

The function Ω(u) is symmetric

Ω(−u) = Ω(u),  (2.27)

that means it is an even function, although this is not immediately seen from representation (2.26). We prove this in Appendix A. Due to this symmetry, formula (2.25) can also be represented by

Ξ(z) = (1/2) ∫_{−∞}^{+∞} du Ω(u) e^{uz}.  (2.28)

In the formulation of the right-hand side the function Ξ(z) appears as the analytic continuation of the Fourier transform of the function Ω(u), written with imaginary argument or, more generally, with complex argument. From this follows as inversion of the integral transformation (2.28), using (2.27),

Ω(u) = (1/π) ∫_{−∞}^{+∞} dy Ξ(iy) e^{−iuy},  (2.29)

or, due to symmetry of the integrand, in analogy to (2.25),

Ω(u) = (2/π) ∫_0^∞ dy Ξ(iy) cos(uy),  (2.30)

where Ξ(iy) is a real-valued function of the variable y on the imaginary axis

Ξ(iy) = ∫_0^∞ du Ω(u) cos(uy),  (2.31)

due to (2.25).

A graphical representation of the function Ω(u) and of its first derivative is given in Figure 2. The function Ω(u) is monotonically decreasing for u ≥ 0 due to the non-positivity of its first derivative

Ω^(1)(u) ≤ 0,  (u ≥ 0),  (2.32)

with one relative minimum of the derivative.

Figure 2. Function Ω(u) and its first derivative (see (2.25) and (2.34)). The function Ω(u) is positive for u ≥ 0, and since its first derivative is negative for u ≥ 0 the function is monotonically decreasing on the real positive axis. It vanishes at infinity more rapidly than any exponential function with a polynomial in the exponent.
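A sketch of the explicit series for Ω(u): the coefficients below are the classical Riemann–Titchmarsh form, normalized here on the assumption that Ξ(z) = ∫_0^∞ du Ω(u) ch(uz); the paper's (2.26) should agree with it up to at most a normalization convention. The stated positivity, monotonic decrease and super-exponential decay can then be checked directly:

```python
# Classical series for Omega(u), assumed normalization Xi(z) = int Omega*cosh:
# Omega(u) = 4 e^(u/2) * sum_n [pi n^2 e^(2u) (2 pi n^2 e^(2u) - 3)] e^(-pi n^2 e^(2u))
import math

def omega(u, n_max=50):
    total = 0.0
    for n in range(1, n_max + 1):
        a = math.pi * n * n * math.exp(2.0 * u)
        term = 4.0 * a * (2.0 * a - 3.0) * math.exp(-a + 0.5 * u)
        total += term
        if abs(term) < 1e-300:   # remaining terms are negligible
            break
    return total

# positive and monotonically non-increasing on u >= 0 ...
us = [0.05 * k for k in range(51)]
vals = [omega(u) for u in us]
assert all(v > 0.0 for v in vals)
assert all(vals[i] >= vals[i + 1] for i in range(len(vals) - 1))

# ... and decaying faster than any exponential: already at u = 2 the ratio
# Omega(2)/Omega(0) is far below exp(-20*2)
print(omega(0.0))                                   # ≈ 1.787
print(omega(2.0) / omega(0.0) < math.exp(-40.0))    # True
```

The value Ω(0) ≈ 1.787 under this assumed normalization is consistent with the zeroth moment ∫_0^∞ du Ω(u) = Ξ(0) = ξ(1/2) ≈ 0.4971 (checked in a later sketch).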
Moreover, it is very important for the following that, due to the presence of the factors e^{−πn²e^{2u}} in the sum terms in (2.26) and in the corresponding sum for Ω^(1)(u), the functions Ω(u) and Ω^(1)(u) and all their higher derivatives are very rapidly decreasing for u → ∞, more rapidly than any exponential function with a polynomial of u in the argument. In this sense the function Ω(u) is more comparable with functions of finite support, which vanish from a certain u = u₀ on, than with any exponentially decreasing function. From (2.27) it follows immediately that the function Ω^(1)(u) is antisymmetric

Ω^(1)(−u) = −Ω^(1)(u),  (2.33)

that means it is an odd function.

It is known that smoothness and rapidness of decrease at infinity of a function exchange their roles under Fourier transformation. As the Fourier transform of the smooth (infinitely continuously differentiable) function Ω(u), the Xi function on the critical line is rapidly decreasing at infinity. Therefore it is not easy to represent the real-valued function Ξ(iy), with its rapid oscillations under the envelope of rapid decrease for increasing variable y, graphically in a large region of this variable. An appropriate real amplification envelope is seen from (2.18) to be the inverse modulus of the factor multiplying ζ(1/2+z) there,

which raises Ξ(iy) to the level of the Riemann zeta function on the critical line. This is shown in Figure 3. The partial

picture for the amplified function in Figure 3, with the negative part folded up, is identical with the

absolute value of the Riemann zeta function on the critical line (fourth partial picture in Figure 1).

We now give a representation of the Xi function by the derivative of the Omega

function. Using lim_{u→∞} Ω(u) = 0 one obtains from (2.25) by partial integration

the following alternative representation of the function Ξ(z)

Ξ(z) = −(1/z) ∫_0^∞ du Ω^(1)(u) sh(uz),  (2.34)

which, due to the antisymmetry of Ω^(1)(u) and sh(uz) with respect to u → −u, can also be written

Ξ(z) = −(1/2z) ∫_{−∞}^{+∞} du Ω^(1)(u) sh(uz).  (2.35)

Figure 2 gives a graphical representation of the function Ω(u) and of its first

derivative Ω^(1)(u), which due to the rapid convergence of the sums is easy to

Figure 3. Xi function Ξ(iy) on the imaginary axis z = iy (corresponding to the critical line).
The envelope over the oscillations of the real-valued function Ξ(iy) decreases extremely rapidly with increase of the variable y in the shown intervals. This behavior makes it difficult to represent this function graphically for large intervals of the variable y. By an enhancement factor which raises the amplitude to the level of the zeta function we may see the oscillations under the envelope (last partial picture). A similar picture is obtained for the modulus of the Riemann zeta function, only with our negative parts folded to the positive side of the ordinate (see also Figure 1 (last partial picture)). The given values for the zeros, beginning with y₁ ≈ 14.1347, were first calculated by J.-P. Gram in 1903 up to y₁₅. We emphasize here that the shown very rapid decrease of the Xi function for increasing y is due to the “very high” smoothness of Ω(u) for arbitrary u.

generate by computer. One can express Ξ(z) also by higher derivatives Ω^(n)(u)

of the Omega function according to

Ξ(z) = (1/z^{2m}) ∫_0^∞ du Ω^(2m)(u) ch(uz) = −(1/z^{2m+1}) ∫_0^∞ du Ω^(2m+1)(u) sh(uz),  (2.36)

with the symmetries of the derivatives of the Omega function for u → −u

Ω^(2m)(−u) = Ω^(2m)(u),  Ω^(2m+1)(−u) = −Ω^(2m+1)(u).  (2.37)

This can be seen by successive partial integrations in (2.25) together with complete induction. The functions Ω^(n)(u) in these integral transformations are for n ≥ 1 not monotonic functions.

We mention yet another representation of the function Ξ(z). Using the transformations

(2.38)

the function Ξ(z) according to (2.28), with the explicit representation of the function Ω(u) in (2.26), can now be represented in the form

(2.39)

where Γ(ν, x) denotes the incomplete Gamma function defined by

Γ(ν, x) ≡ ∫_x^∞ dt t^{ν−1} e^{−t}.  (2.40)

However, we did not see a way to prove the Riemann hypothesis via the representation (2.39).

The Riemann hypothesis for the zeta function is now equivalent to the hypothesis that all zeros of the related entire function Ξ(z) lie on the imaginary axis z = iy, that means on the line with real part x = 0, which becomes now the critical line.
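As a numerical cross-check (a sketch, not part of the paper's argument): with the classical Riemann–Titchmarsh series assumed for Ω(u), the cosine transform Ξ(iy) = ∫_0^∞ du Ω(u) cos(uy) reproduces Ξ(0) = ξ(1/2) ≈ 0.4971 and changes sign between y = 14.0 and y = 14.3, bracketing Gram's first zero y₁ ≈ 14.1347:

```python
# Xi(iy) from the assumed classical series for Omega(u), by Simpson quadrature.
import math

def omega(u, n_max=50):
    total = 0.0
    for n in range(1, n_max + 1):
        a = math.pi * n * n * math.exp(2.0 * u)
        term = 4.0 * a * (2.0 * a - 3.0) * math.exp(-a + 0.5 * u)
        total += term
        if abs(term) < 1e-300:
            break
    return total

def xi_on_critical_line(y, h=5e-4, u_max=4.0):
    """Simpson rule for integral_0^u_max Omega(u) cos(u y) du; the tail
    beyond u_max is utterly negligible because of the decay of Omega."""
    n = int(u_max / h)
    if n % 2:            # Simpson needs an even number of subintervals
        n += 1
    s = omega(0.0) + omega(n * h) * math.cos(n * h * y)
    for k in range(1, n):
        s += (4.0 if k % 2 else 2.0) * omega(k * h) * math.cos(k * h * y)
    return s * h / 3.0

print(xi_on_critical_line(0.0))        # ≈ 0.49712 = xi(1/2)
print(xi_on_critical_line(14.0) > 0.0,
      xi_on_critical_line(14.3) < 0.0)  # sign change at y_1 ≈ 14.1347
```

The magnitude of Ξ(iy) near y₁ is already of order 10⁻⁴, which illustrates the rapid decrease of the envelope described in the caption of Figure 3.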
Since the zeta function does not possess zeros in the convergence region Re(s) > 1 of the Euler product (2.1), and due to the symmetries (2.27) and (2.31), it is only necessary to prove that Ξ(z) does not possess zeros within the

strips −1/2 ≤ x < 0 and 0 < x ≤ 1/2 to both sides of the imaginary axis, where

by symmetry the proof for one of these strips would be already sufficient. However, we will go another way where the restriction to these strips does not play a role for the proof.

3. Application of Second Mean-Value Theorem of Calculus to Xi Function

After having accepted the basic integral representation (2.25) of the entire function Ξ(z) according to

Ξ(z) = ∫_0^∞ du Ω(u) ch(uz),  (3.1)

with the function Ω(u) explicitly given in (2.26), we concentrate on its further treatment. However, we do this not with this specialization for the real-valued function Ω(u) but with more general suppositions for it. Expressed by the real part U(x,y) and the imaginary part V(x,y) of Ξ(z)

Ξ(x+iy) = U(x,y) + iV(x,y),  (3.2)

we find from (3.1)

U(x,y) = ∫_0^∞ du Ω(u) ch(ux) cos(uy),  V(x,y) = ∫_0^∞ du Ω(u) sh(ux) sin(uy).  (3.3)

We suppose now, as a necessary requirement for Ω(u) satisfied in the special case (2.26),

Ω(u) ≥ 0,  (u ≥ 0).  (3.4)

Furthermore, Ξ(z) should be an entire function, which requires that the integral (3.1) is finite for arbitrary complex z and therefore that Ω(u) is rapidly decreasing at infinity, more precisely

lim_{u→∞} Ω(u) e^{λu} = 0  (3.5)

for arbitrary λ > 0. This means that the function Ω(u) should be a nonsingular function which is rapidly decreasing at infinity, more rapidly than any exponential function e^{−λu} with arbitrary λ > 0. Clearly, this is satisfied for the special function in (2.26).

Our conjecture for a long time was that all zeros of Ξ(z) lie on the imaginary axis for a large class of functions Ω(u) and that this is not very specific for the special function given in (2.26) but is true for a much larger class. It seems that to this class belong all non-increasing functions, i.e. such functions for which Ω^(1)(u) ≤ 0 holds for the first derivative, and which rapidly decrease at infinity. This means that they vanish more rapidly at infinity than any power function (practically they vanish exponentially).
However, for the convergence of the integral (3.1) in the whole complex z-plane it is necessary that the functions Ω(u) decrease at infinity also more rapidly than any exponential function e^{−λu} with arbitrary λ > 0, as expressed in (3.5). In particular, to this class belong all rapidly decreasing functions which vanish from a certain u = u₀ on and which may be called non-increasing finite functions (or functions with compact support). On the other side, continuity of the derivatives is not required. The modified Bessel functions, “normalized” to the form of entire

functions, possess a representation of the form (3.1) with

functions Ω(u) which vanish from u = 1 on, but a number of derivatives of Ω(u) at u = 1 is not continuous, depending on the index of the Bessel function. It is valuable that here an independent proof of the property that all zeros of the modified Bessel functions lie on the imaginary axis can be made using their differential equations via duality relations. We intend to present this in detail in a later work.

Furthermore, to the considered class belong all monotonically decreasing functions with the described rapid decrease at infinity. The fine difference of the decreasing functions to the non-increasing functions is that in the first case the function cannot stay on the same level in a certain interval, that means we have Ω^(1)(u) < 0 for all points instead of only Ω^(1)(u) ≤ 0. A function which decreases not faster than exponentially at infinity does not fall into this category as, for example,

the function Ω(u) = e^{−λu} shows.

To apply the second mean-value theorem it is necessary to restrict us to a class of functions Ω(u) which are non-increasing, that means for which for all u₁ ≤ u₂ in the considered interval holds

Ω(u₁) ≥ Ω(u₂),  (u₁ ≤ u₂),  (3.6)

or equivalently in more compact form

Ω^(1)(u) ≤ 0.  (3.7)

The monotonically decreasing functions in the interval, in particular, belong to the class of non-increasing functions, with the fine difference that here

Ω(u₁) > Ω(u₂),  (u₁ < u₂),  (3.8)

is satisfied. Thus smoothness of Ω(u) is not required.
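The statement of the second mean-value theorem used below can be illustrated numerically in its Weierstrass/Bonnet form: for non-increasing f ≥ 0 and continuous g on [a, b] there exists a point w in [a, b] with ∫_a^b du f(u)g(u) = f(a) ∫_a^w du g(u). The functions f and g below are arbitrary test choices, not taken from the paper:

```python
# Numerical illustration of the second mean-value theorem (Bonnet form).
import math

a, b = 0.0, 2.0
f = lambda u: math.exp(-u * u)      # positive and non-increasing on [0, 2]
g = lambda u: math.cos(3.0 * u)     # continuous

# left-hand side by midpoint rule
n = 200000
h = (b - a) / n
lhs = sum(f(a + (k + 0.5) * h) * g(a + (k + 0.5) * h) for k in range(n)) * h

# right-hand side: f(a) * G(w), with the antiderivative G(w) = sin(3w)/3;
# search the mean-value point w on a fine grid
G = lambda w: math.sin(3.0 * w) / 3.0
err, w = min((abs(f(a) * G(a + k * 1e-5) - lhs), a + k * 1e-5)
             for k in range(int((b - a) / 1e-5) + 1))
print(w, err)   # some w inside [a, b] reproduces the integral
```

Note that w is in general not unique (here G is not monotonic), which already foreshadows the multi-valuedness of the mean-value function discussed later.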
If furthermore g(u) is a continuous function in the interval, the second mean-value theorem (often called the theorem of Bonnet (1867)) states an equivalence of the following integral on the left-hand side to the expression on the right-hand side according to (see some monographs about calculus or real analysis; we recommend the monographs of Courant (Appendix to chap. IV) and of Widder, who called it the Weierstrass form of Bonnet's theorem (chap. 5, 4))

∫_a^b du f(u) g(u) = f(a) ∫_a^w du g(u),  (a ≤ w ≤ b),  (3.9)

for non-increasing non-negative f(u), where w is a certain value within the interval boundaries which as a rule we do not exactly know. The theorem holds also for non-decreasing functions, which include the monotonically increasing functions as a special class, in an analogous way. The proof of the second mean-value theorem is comparatively simple by applying a substitution in the (first) mean-value theorem of integral calculus.

Applied to our function Ω(u), which in addition should rapidly decrease at infinity according to (3.5), this means in connection with monotonic decrease that it has to be positively semi-definite and therefore

Ω(u) ≥ 0,  Ω^(1)(u) ≤ 0,  lim_{u→∞} Ω(u) = 0,  (3.10)

and the theorem (3.9) takes on the form

∫_0^∞ du Ω(u) g(u) = Ω(0) ∫_0^{w} du g(u),  (3.11)

where the extension to the upper boundary b → ∞ in (3.9), in case of existence of the integral, is unproblematic.

If we insert in (3.11) for g(u) the function ch(uz), which apart from the real variable u depends in a parametrical way on the complex variable z and is an analytic function of z, we find that the mean-value parameter depends on this complex parameter also in an analytic way as follows

Ξ(z) = ∫_0^∞ du Ω(u) ch(uz) = Ω(0) ∫_0^{w₀(z)} du ch(uz) = Ω(0) (sh(w₀(z)z))/z,  (3.12)

where w₀(z) = u₀(x,y) + iv₀(x,y) is an entire function with its real part u₀(x,y) and its imaginary part v₀(x,y).
The condition for zeros is that Ξ(z) vanishes, which according to (3.12) leads to

sh(w₀(z) z) = 0,  (z ≠ 0),  (3.13)

or, split in real and imaginary part with z = x + iy and w₀(z) = u₀(x,y) + iv₀(x,y),

sh(u₀x − v₀y) cos(u₀y + v₀x) = 0  (3.14)

for the real part and

ch(u₀x − v₀y) sin(u₀y + v₀x) = 0  (3.15)

for the imaginary part.

The multi-valuedness of the mean-value functions in the conditions (3.13) or (3.15) is an interesting phenomenon which is connected with the periodicity of the Sine function on the imaginary axis in our application (3.12) of the second mean-value theorem (3.11). To our knowledge this is up to now not well studied. We come back to this in the next Section 4 and, in particular, Section 7 brings some illustrative clarity when we represent the mean-value functions graphically. At present we will say only that we can choose an arbitrary branch in (3.15) which provides us the whole spectrum of zeros on the upper half-plane and the corresponding spectrum of zeros on the lower half-plane, all of which, as will be seen later, lie on the imaginary axis. Since in computer calculations the values of

the Arcus Sine function are provided in the region from −π/2 to +π/2, it is convenient

to choose this principal branch, but all other choices in (3.15) lead to equivalent results.

One may represent the conditions (3.14) and (3.15) also in the following equivalent form

(3.16)

from which follows

(3.17)

All these forms (3.14)-(3.17) are implicit equations with two variables which cannot be resolved with respect to one variable and do not provide immediately the necessary conditions for zeros in explicit form, but we can check that (3.16) satisfies the Cauchy-Riemann equations as a minimum requirement

(3.18)

We have to establish now closer relations between the real and imaginary parts u₀(x,y) and v₀(x,y) of the complex mean-value parameter w₀(z). The first step in preparation to this aim is the consideration of the derived conditions on the imaginary axis.

4.
Specialization of Second Mean-Value Theorem to Xi Function on Imaginary Axis

By restriction to the real axis y = 0 we find from (3.3) for the function Ξ(x)

Ξ(x) = U(x,0) = ∫_0^∞ du Ω(u) ch(ux) > 0,  V(x,0) = 0,  (4.1)

with the following two possible representations of Ξ(x) related by partial integration

(4.2)

The inequality Ξ(x) > 0 follows according to the suppositions from the non-negativity of the integrand, that means from Ω(u) ≥ 0. Therefore, the case y = 0 can be excluded from the beginning in the further considerations for zeros of Ξ(z).

We now restrict us to the imaginary axis x = 0 and find from (3.3) for the function Ξ(iy)

Ξ(iy) = U(0,y),  V(0,y) = 0,  (4.3)

with the following two possible representations of Ξ(iy) related by partial integration

Ξ(iy) = ∫_0^∞ du Ω(u) cos(uy) = −(1/y) ∫_0^∞ du Ω^(1)(u) sin(uy).  (4.4)

From the obvious inequality

|cos(uy)| ≤ 1,  (4.5)

together with the supposed positivity of Ω(u), one derives from the first representation of Ξ(iy) in (4.4) the inequality

|Ξ(iy)| ≤ ∫_0^∞ du Ω(u) = Ξ(0).  (4.6)

In the same way, by the inequality

|sin(uy)| ≤ 1,  (4.7)

one derives, using the non-positivity of Ω^(1)(u) (see (3.10)) together with the second representation of Ξ(iy) in (4.4), the inequality

|Ξ(iy)| ≤ Ω(0)/|y|,  (4.8)

which, as is easily seen, does not depend on the sign of y. Therefore we have two non-negative parameters, the zeroth moment Ξ(0) = ∫_0^∞ du Ω(u) and the value Ω(0), which according to (4.6) and (4.8) restrict the range of values of Ξ(iy) to an interior range both to (4.6) and to (4.8) at once.

For the mentioned purpose we now consider the restriction of the mean-value parameter w₀(z) to the imaginary axis z = iy, for which w₀(iy) = u₀(0,y) is a real-valued function of y. For arbitrary fixed y we find by the second mean-value theorem a parameter in the integration interval which naturally depends on the chosen value y, that means u₀ = u₀(0,y). The extension from the imaginary axis to the whole complex plane can then be made using methods of complex analysis. We discuss some formal approaches to this in Appendix B.
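The two bounds (4.6) and (4.8) can be checked numerically on a grid of the imaginary axis, again with the classical Riemann–Titchmarsh series used as an assumed stand-in for (2.26) (same caveat on normalization as before):

```python
# Check of |Xi(iy)| <= Xi(0) (zeroth moment bound) and |Xi(iy)| <= Omega(0)/|y|.
import math

def omega(u, n_max=50):
    total = 0.0
    for n in range(1, n_max + 1):
        a = math.pi * n * n * math.exp(2.0 * u)
        term = 4.0 * a * (2.0 * a - 3.0) * math.exp(-a + 0.5 * u)
        total += term
        if abs(term) < 1e-300:
            break
    return total

def xi_imag(y, h=2e-3, u_max=4.0):
    """Simpson rule for Xi(iy) = integral_0^inf Omega(u) cos(u y) du."""
    n = int(u_max / h)
    if n % 2:
        n += 1
    s = omega(0.0) + omega(n * h) * math.cos(n * h * y)
    for k in range(1, n):
        s += (4.0 if k % 2 else 2.0) * omega(k * h) * math.cos(k * h * y)
    return s * h / 3.0

xi0, om0 = xi_imag(0.0), omega(0.0)
for k in range(1, 61):
    y = 0.5 * k
    v = abs(xi_imag(y))
    assert v <= xi0 + 1e-9        # bound (4.6)
    assert v <= om0 / y + 1e-9    # bound (4.8)
print("both moment bounds hold on the sampled grid")
```

As the text notes, for large |y| the bound (4.8) is the stronger one, since it decays like 1/|y| while (4.6) stays constant.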
Now we apply (3.12) to the imaginary axis.

The second mean-value theorem (3.12) on the imaginary axis z = iy (i.e. x = 0) takes on the form

Ξ(iy) = Ω(0) ∫_0^{u₀(0,y)} du cos(uy) = Ω(0) (sin(u₀(0,y)y))/y.  (4.9)

As already said, since the left-hand side is a real-valued function, the right-hand side also has to be real-valued, and the parameter function u₀(0,y) is real-valued and therefore it can only be the real part of the complex function w₀(z) for x = 0.

The second mean-value theorem states that u₀(0,y) lies between the minimal and maximal values of the integration borders, that is here between 0 and +∞, and this means that u₀(0,y) should be positive. Here arises a problem which is connected with the periodicity of the function cos(uy) as a function of the variable u for fixed variable y in the application of the mean-value theorem. Let us first consider the special case y → 0 in (4.9), which leads to

Ξ(0) = Ω(0) u₀(0,0).  (4.10)

From this relation follows u₀(0,0) = Ξ(0)/Ω(0) > 0, and it seems that all is correct also with the continuation to u₀(0,y) for arbitrary y. One may even give the approximate values Ξ(0) ≈ 0.4971 and Ω(0) ≈ 1.7868 and therefore u₀(0,0) ≈ 0.2782, which, however, are not of importance for the later proofs. If we now start from u₀(0,0) and continue u₀(0,y) continuously to y > 0, then we see that u₀(0,y) goes monotonically to zero and approaches zero approximately at y₁ ≈ 14.135, that is at the first zero of the function Ξ(iy) on the positive imaginary axis, then goes first beyond zero and oscillates with decreasing amplitude for increasing y around the value zero, intersecting it exactly at the zeros of Ξ(iy). We try to illustrate this graphically in Section 7. All zeros lie then on this one branch. That u₀(0,y) goes beyond zero seems to contradict the content of the second mean-value theorem, according to which u₀(0,y) has to be positive in our application. Here comes into play the multi-valuedness of the mean-value function. For the zeros of the Sine in (4.9) the relations u₀(0,y)y = nπ with different integers n are equivalent, and to different values of n one may find equivalent curves for u₀(0,y).
However, we cannot continue u₀(0,y) in a continuous way through only positive values for all y.

For |y| → ∞ the inequality (4.8) is stronger than (4.6) and characterizes the restrictions of Ξ(iy), and via the equivalence sin(u₀(0,y)y) = yΞ(iy)/Ω(0) follows from (4.8)

|sin(u₀(0,y)y)| ≤ 1,  (4.11)

where the choice of the branch determines a basis interval of the involved multi-valued Arcus Sine function, and the inequality says that it is in every case possible to choose it from the same basis interval. The zeros of the Xi function on the imaginary axis (critical line) are determined alone by the (multi-valued) function u₀(0,y), whereas v₀(0,y) vanishes automatically on the imaginary axis in the considered special case and does not add a second condition. Therefore, the zeros are the solutions of the conditions

sin(u₀(0,y)y) = 0,  (y ≠ 0).  (4.12)

It is, in general, not possible to obtain the zeros on the critical line exactly from the mean-value function u₀(0,y) in (4.9), since generally we do not possess it explicitly.

In special cases the function u₀(0,y) can be calculated explicitly; that is the case, for

example, for all (modified) Bessel functions. The most simple case among these is the case when the corresponding function Ω(u) is a step function

Ω(u) = Ω₀ θ(u₀ − u),  (4.13)

where θ(x) is the Heaviside step function. In this case follows

Ξ(z) = Ω₀ ∫_0^{u₀} du ch(uz) = Ω₀ (sh(u₀z))/z = Ω₀u₀ (sh(u₀z))/(u₀z),  (4.14)

where Ω₀u₀ is the area under the function Ω(u) = Ω₀θ(u₀ − u)

(or the zeroth-order moment of this function). For the squared modulus of the function Ξ(z) we find

|Ξ(x+iy)|² = Ω₀² (sh²(u₀x) + sin²(u₀y))/(x² + y²),  (4.15)

from which, in particular, it is easy to see that this special function possesses zeros only on the imaginary axis, i.e. for x = 0, and that they are determined by

sin(u₀y) = 0,  ⇔  u₀y = nπ,  (n = ±1, ±2, …).  (4.16)

The zeros on the imaginary axis are here equidistant, but the solution y = 0 is absent, since then also the denominator in (4.15) is vanishing. The parameter in the second mean-value theorem is here a real constant in the whole complex plane

w₀(z) = u₀.  (4.17)

Practically, the second mean-value theorem compares the result for an arbitrary function Ω(u) under the given restrictions with that for a step function, by preserving the value Ω(0) and making the parameter w₀ depend on z in the whole complex plane.
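The exactly solvable step-function case (4.13)-(4.16) can be verified directly: with Ω(u) = Ω₀θ(u₀ − u) one gets Ξ(z) = Ω₀ sh(u₀z)/z, whose zeros z = ikπ/u₀ (k = ±1, ±2, …) all lie equidistantly on the imaginary axis. Ω₀ and u₀ below are arbitrary sample values.

```python
# Step-function Omega: quadrature vs closed form, and equidistant zeros.
import math, cmath

Omega0, u0 = 2.0, 1.5

def xi_step_quad(z, n=20000):
    """Midpoint quadrature of Omega0 * integral_0^u0 cosh(u z) du."""
    h = u0 / n
    return Omega0 * sum(cmath.cosh((k + 0.5) * h * z) for k in range(n)) * h

def xi_step_closed(z):
    return Omega0 * u0 if z == 0 else Omega0 * cmath.sinh(u0 * z) / z

z = 0.7 + 2.3j
print(abs(xi_step_quad(z) - xi_step_closed(z)))   # quadrature matches closed form

zk = 1j * 3 * math.pi / u0        # third zero on the imaginary axis
print(abs(xi_step_closed(zk)))    # vanishes up to rounding
```

Off the imaginary axis, |Ξ(x+iy)|² = Ω₀²(sh²(u₀x) + sin²(u₀y))/(x²+y²) is strictly positive for x ≠ 0 since sh²(u₀x) > 0, which is the elementary prototype of the general statement being proved.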
Without discussing quantitative relations now, the formulae (4.17) suggest that will stay a “small” function compared with in the neighborhood of the imaginary axis (i.e. for) in a certain sense.

We will see in the next Section that the function, taking into account, determines the functions and and thus in the whole complex plane via the Cauchy-Riemann equations in an operational approach, that means in an integrated form, which we have not found in the literature up to now. The general formal part is again delegated to an Appendix B.

5. Accomplishment of Proof for Zeros of Xi Functions on Imaginary Axis Alone

In the last Section we discussed the application of the second mean-value theorem to the function on the imaginary axis. Equations (3.14) and (3.15) or their equivalent forms (3.16) or (3.17) are not yet sufficient to derive conclusions about the position of the zeros on the imaginary axis in dependence on. We have yet to derive more information about the mean-value functions which we obtain by relating the real-valued function and to the function on the imaginary axis taking into account.

The general case of complex can be obtained from the special case in

(4.9) by application of the displacement operator to the function

according to

(5.1)

The function is related to as follows

(5.2)

or in more compact form

(5.3)

This is presented in Appendix B in more general form for additionally non-vanishing and arbitrary holomorphic functions. It means that we may obtain

and by applying the operators and, respectively, to

the function on the imaginary axis (recall that vanishes there in our case). Clearly, Equations (5.2) are in agreement with the Cauchy-Riemann equations and as a minimal requirement.

We now write in the form equivalent to (5.1)

(5.4)

The denominator does not contribute to zeros.
Since the Hyperbolic Sine possesses zeros only on the imaginary axis we see from (5.4) that we may expect zeros only for such related variables which satisfy the necessary condition of vanishing of the real part of its argument, which leads, as we already know, to (see (3.14))

(5.5)

The zeros with coordinates themselves can then be found as the (in general non-degenerate) solutions of the following equation (see (3.15))

(5.6)

if these pairs satisfy the necessary condition (5.5). Later we will see that it provides the whole spectrum of solutions for the zeros but we can also obtain each separately from one branch and would then denote them by. Thus we have first of all to look for such pairs which satisfy the condition (5.5) off the imaginary axis, that is for, since we already know that these functions may possess zeros on the imaginary axis.

Using (5.2) we may represent the necessary condition (5.5) for the proof by the second mean-value theorem in the form

(5.7)

and Equation (5.6), which then determines the position of the zeros, can be written with equivalent values

(5.8)

We may represent Equations (5.7) and (5.8) in a simpler form using the following operational identities

(5.9)

which are a specialization of the operational identities (B.11) in Appendix B with and therefore. If we multiply (5.7) and (5.8) both by the function then we may write (5.7) in the form (changing order)

(5.10)

and (5.8) in the form

(5.11)

The left-hand side of these conditions possesses the general form for the extension of a holomorphic function from the functions and on the imaginary axis to the whole complex plane in case of and if we apply this to the function. Equations (5.10) and (5.11) now possess the simplest form we found to accomplish the proof for the exclusive position of zeros on the imaginary axis.
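The opening fact used here, that the Hyperbolic Sine vanishes only on the imaginary axis, follows from the identity |sh(x+iy)|² = sh²x + sin²y, which forces x = 0 and y = nπ at any zero. A quick numerical check (Python; the helper name is ours):

```python
import cmath
import math

def abs_sinh_sq(x, y):
    # |sh(x + i*y)|^2 = sh(x)^2 + sin(y)^2
    return math.sinh(x) ** 2 + math.sin(y) ** 2

# identity against the direct complex evaluation
z = complex(0.7, 2.3)
assert abs(abs(cmath.sinh(z)) ** 2 - abs_sinh_sq(0.7, 2.3)) < 1e-9

# zeros occur exactly at x = 0, y = n*pi ...
assert abs_sinh_sq(0.0, math.pi) < 1e-30
# ... and nowhere off the imaginary axis, since sh(x)^2 > 0 for x != 0
assert abs_sinh_sq(0.1, math.pi) > 0.009
```

Both summands are non-negative, so the squared modulus vanishes only when both do; this is the same separation into two real conditions that the proof exploits for the general Xi function.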
All information about the zeros of the Xi function for arbitrary is now contained in the conditions (5.10) and (5.11) which we now discuss.

Since is a nonsingular operator we can multiply both sides of Equation (5.11) by the inverse operator and obtain

(5.12)

This equation is still fully equivalent to (5.11) for arbitrary but it provides only the same possible solutions for the values of zeros as for zeros on the imaginary axis. This alone already suggests that it cannot be that zeros with, if they exist, possess the same values of as the zeros on the imaginary axis. But in such form the proof of the impossibility of zeros off the imaginary axis did not seem satisfactory and we present in the following some slightly different variants which go deeper into the details of the proof.

In an analogous way, by multiplication of (5.10) with the operator and (5.11) with the operator and addition of both equations, we also obtain

condition (5.12), that means

(5.13)

The equal conditions (5.12) and (5.13), which are identical with the condition for zeros on the imaginary axis, are a necessary condition for all zeros. For each chosen equivalent (recall that depends then on, which we do not indicate by the notation) one obtains an infinite series of solutions for the zeros of the function

(5.14)

whereas for Equation (5.12), by definition of, is not satisfied. Supposing that we know, which is as a rule not the case, we could solve for each the usually transcendental Equation (5.13) graphically, for example, by drawing the equivalent functions over the variable as abscissa and looking for the intersection points with the lines over (Section 7). These intersection points are the solutions for zeros on the imaginary axis. Choosing, the condition (5.10) is identically satisfied which, however, is not the case for in general.

Now we have to look for zeros in the case by an additional independent condition in comparison to (5.13).
Whereas for zeros with the condition (5.10) is identically satisfied, we have to examine this condition for zeros with. In the case of we may divide both sides of the condition (5.10) by and obtain

(5.15)

Since is a nonsingular operator (in contrast to which possesses 0 as eigenvalue to eigenfunction

arbitrary) we may multiply

Equation (5.15) by the inverse operator and obtain

(5.16)

This condition has also to be satisfied for the solution of (5.12) in the case of, that means

(5.17)

Both conditions (5.13) and (5.16) taken together mean that a corresponding zero must possess a twofold degeneracy.

From condition (5.11) combined with (5.10) follows by Taylor series expansion with

respect to for arbitrary complex

(5.18)

and the independence of the left-hand side of for arbitrary complex requires

vanishing of the coefficients for for solutions. Let us

assume

(5.19)

From the Taylor series expansion of the function in the neighborhood of a solution it then follows

(5.20)

Thus using (5.19) we can find zeros for, that means off the imaginary axis, if the mean-value function possesses the form

(5.21)

for a certain integer. According to (5.2) the whole mean-value functions and are then

(5.22)

or in compact form

(5.23)

If we insert into Equation (3.12) then we get for all and. This means that all conditions for zeros with together do not lead to a solution for certain. Under the assumption (5.19) we have proved that all zeros of Xi functions lie on the imaginary axis.

For an alternative proof let us now solve the two Equations (5.15) and (5.11) directly and show in this way the impossibility of zeros for. To solve these equations we make a Fourier decomposition of the function as follows

(5.24)

Then (5.15) takes on the form

(5.25)

which, due to the uniqueness of the Fourier decomposition of a function in a Fourier integral, is only possible if

(5.26)

as a necessary condition.
Nontrivial solutions of this equation for are only

possible for such for which vanishes, that means for

and where is then proportional to a delta function. Thus the general solution of (5.26) possesses the following form of a generalized function (the prime at the sum means that the term to is absent)

(5.27)

with complex numbers as amplitudes. As a remark we mention that derivatives of delta functions do not have to be included in this solution since all zeros of are simple zeros and, furthermore, that is a generalized analytic function (also called an analytical functional) with the possible extension of the variable to the whole complex plane.

The inverse Fourier transformation of according to (5.27) provides

(5.28)

Already this form excludes (5.28) as a possible solution for, which does not have to depend on the variable, with the exception of the case which we could already exclude as a possible case for zeros (see beginning of Section 4). In addition, we will show that it is not compatible with the general solution of (5.11) which determines the position of the zeros and which with the Fourier decomposition (5.24) takes on the form

(5.29)

It leads to the following equation for the Fourier coefficients

(5.30)

with the general solution (analogously to (5.27))

(5.31)

with arbitrary coefficients. The inversion of this solution is

(5.32)

which for is only possible if all coefficients and are vanishing.

The two general solutions (5.28) and (5.32) of the two Equations (5.15) and (5.11) for, the first for the case only, are incompatible for any choice of the coefficients and with the only exception of, that means on the real axis, where the exponential functions in (5.28) and (5.32) become constant functions.
However, the case for arbitrary could be excluded from the beginning according to (4.2) as a consequence of the positive (semi-)definiteness of the function by supposition.

We have now finally proved that all Xi functions of the form (3.1) for which the second mean-value theorem is applicable (function positively semi-definite and non-increasing) may possess zeros only on the imaginary axis. The decisive difference for possible zeros on and off the imaginary axis in the approach by the second mean-value theorem was that we have to satisfy in the general case two independent real-valued conditions, of which one, in the case of the imaginary axis and only there, is automatically satisfied for the whole imaginary axis and not only for the zeros on it.

6. Some Consequences from Proof of the Riemann Hypothesis

The given proof for zeros only on the imaginary axis for the considered Xi function includes as a special case the function to the Riemann hypothesis which is given in (2.26). However, it also includes the whole class of modified Bessel functions of imaginary argument which possess zeros only on the imaginary axis and, if we make the substitution, also the usual Bessel functions which possess zeros only on the real axis.

We may ask about possible degeneracies of the zeros of the Xi functions on the imaginary axis. Our proof does not give a recipe to see whether such degeneracies are possible or not. In the case of the Riemann zeta function one cannot expect a degeneracy because the countable number of all nontrivial zeros are (likely) irrational (transcendental?, proof?) numbers but we do not know a proof for this.

For as an entire function one may pose the question of its factorization with

factors of the form where goes through all roots, where in case of degeneracy the same factors are taken multiple times according to the degeneracy.
It is well known that an entire function using its ordered zeros can be represented in Weierstrass product form multiplied by an exponential function with an entire function in the exponent, with the result that is an entire function without zeros. This possesses the form (e.g., )

(6.1)

with a polynomial of degree which, depending on the roots, must be appropriately chosen to guarantee the convergence of the product. This polynomial is defined by the first sum terms in the Taylor series for

(6.2)

By means of these polynomials the Weierstrass factors are defined as the functions

(6.3)

from which follows

(6.4)

From this form it is seen that possesses the following initial terms of the Taylor series

(6.5)

and is a function with a zero at but with a Taylor series expansion which begins

with the terms.

Hadamard made a precision of the Weierstrass product form by connecting the degree of the polynomials in (6.1) with the order of growth of the entire function and showed that can be chosen independently of the -th root by. The order of which is equal to 1 is not a strict order (for this last notion see ). However, this does not play a role in the Hadamard product representation of and the polynomials in (6.1) can be chosen as, that means equal to 0 according to. The entire function in the exponent in (6.1) can only be a constant since otherwise it would introduce a higher growth of. Thus the product representation of possesses the form

(6.6)

where we took into account the symmetry of the zeros and the proof that all zeros lie on the imaginary axis and that a zero is absent. With we denoted the first moment of the function.

Formula (6.6) in connection with his hypothesis was already used by Riemann in and later proved by von Mangoldt, where the product representation of entire functions by Weierstrass, later stated more precisely by Hadamard, plays a role.
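The property of the Weierstrass factors stated around (6.3)–(6.5), namely that the factor of genus p deviates from 1 only from the power z^{p+1} on, is easy to verify in the simplest nontrivial case E₁(z) = (1−z)e^z, whose Taylor coefficient of z^n is 1/n! − 1/(n−1)!. A small sketch (Python; the helper name is ours):

```python
from math import factorial

def e1_coeff(n):
    # Taylor coefficient of z^n in E_1(z) = (1 - z) * exp(z):
    # multiply the exponential series 1/n! term by term with (1 - z).
    c = 1.0 / factorial(n)
    if n >= 1:
        c -= 1.0 / factorial(n - 1)
    return c

# E_1(z) = 1 + 0*z - z^2/2 - z^3/3 - z^4/8 - ...
assert e1_coeff(0) == 1.0
assert e1_coeff(1) == 0.0              # linear term vanishes: E_1 - 1 = O(z^2)
assert abs(e1_coeff(2) + 0.5) < 1e-15
assert abs(e1_coeff(3) + 1.0 / 3.0) < 1e-15
```

The vanishing linear term is exactly what makes the infinite product over the factors converge for zero sequences with a convergent sum of inverse squares, which is the situation of genus 1 relevant to the Xi function.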
There is another formula for an approximation to the number of nontrivial zeros of or which in application to the number of zeros of on the imaginary axis refers to the interval between and. It takes on the form (for is equivalent to the usual for)

(6.7)

with the logarithmically growing density

(6.8)

As long as the Riemann hypothesis was not proved it was formulated for the critical strip of the complex coordinate in parallel to the imaginary axis and with between and (with equal to our in (6.7)). It was already suggested by Riemann but not proved in detail there and was later proved by von Mangoldt in 1905. A detailed proof by means of the argument principle can be found in . It seems that a simple proof also follows from our approach. The result of Hardy (1914) (cited in ) that there exists an infinite number of zeros on the critical line is a step to the full proof of the Riemann hypothesis. Section 4 of the present article may be considered as involving such a proof of this last statement.

We have now proved that functions defined by integrals of the form (3.1) with non-increasing functions which decrease in infinity sufficiently rapidly, in a way that becomes an entire function of, possess zeros only on the imaginary axis. As already said this did not provide a recipe to see in which cases all zeros on the imaginary axis are simple zeros but it is unlikely that within a countable sequence of (pseudo-)randomly chosen real numbers (the zeros) two of them are coincident (it seems to be difficult to formulate the last statement in a more rigorous way).
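The approximate counting formula (6.7)–(6.8) is the Riemann–von Mangoldt estimate; in its standard smooth form N(T) ≈ (T/2π)(ln(T/2π) − 1) + 7/8 (this explicit form is a standard fact assumed here, since the display is not reproduced above) it already gives the known count of 29 zeros with ordinates below 100. A quick sanity check (Python):

```python
import math

def n_approx(T):
    # Riemann-von Mangoldt main term for the number of nontrivial zeros
    # with imaginary part between 0 and T (smooth part only).
    x = T / (2.0 * math.pi)
    return x * (math.log(x) - 1.0) + 7.0 / 8.0

# 29 zeros are known below T = 100; the smooth term is already very close
assert round(n_approx(100.0)) == 29

# the density dN/dT grows logarithmically, as stated in (6.8):
# the second hundred of the interval contains more zeros than the first
assert n_approx(200.0) - n_approx(100.0) > n_approx(100.0)
```

The error of the smooth term is O(log T), so the rounded value need not always match the exact count; at T = 100 it happens to do so.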
It also did not provide a direct formula for the number of zeros in an interval from zero to on the imaginary axis or of its density there but, as mentioned, Riemann suggested for this an approximate formula and von Mangoldt proved it.

The proof of the Riemann hypothesis is included as the special case (2.26) of the function into a wider class of functions with an integral representation of the form (3.1) which, under the discussed necessary conditions allowing the application of the second mean-value theorem of calculus, possess zeros only on the imaginary axis. The equivalent forms (2.35) and (2.36) of the integral (3.1), where the functions, for example, are in general no longer non-increasing, suggest that conditions for zeros only on the imaginary axis exist for more general cases than those prescribed here by the second mean-value theorem. A certain difference may arise then, for example, for because powers of it are in the denominators in the representations in (2.36).

7. Graphical Illustration of Mean-Value Parameters to Xi Function for the Riemann Hypothesis

To get an impression of how the mean-value function looks we calculate it for the imaginary axis and for the real axis for the case of the function in (2.26), which is possible numerically. From the two equations for general and for

(7.1)

follows

(7.2)

with the two initial terms of the Taylor series

(7.3)

and with the two initial terms of the asymptotic series

(7.4)

From (7.2) follows

(7.5)

This can be numerically calculated from the explicit form (2.26) of. For

and for (and only for these cases) the function is real-valued, in

particular, for

(7.6)

and for

(7.7)

where we applied the first two terms of the Taylor series expansion of in powers of. A small problem here is that we get the value for this multi-valued

function in the range.
Since is an even function

with only positive coefficients in its Taylor series the term in braces is in every case positive, which becomes important below.

The two curves which we get for and for are shown in Figure 4.

The function for on the real axis (second partial picture) is not very exciting. The necessary condition (see (5.5)) can be satisfied only for

but it is easy to see from that there is no zero.

For the function on the imaginary axis the necessary condition (see (5.5)) is trivially satisfied since and does not restrict the solutions for zeros. In this case only the sufficient condition determines the position of the zeros on the imaginary axis. The first two pairs of zeros are at and the reason that we do not see them in Figure 4 is the rapid decrease of the function with increasing. If we enlarge this range we see that the curve goes beyond the -axis after the first root at 14.135 of the Xi function. As a surprise for the second mean-value method we see that the parameter begins to oscillate around this axis. This means that the roots which are generally determined by the equation (see (5.6)) are determined here by the value alone. The reason for this is the multi-valuedness of the ArcSine function according to

(7.8)

Figure 4. Mean value parameters and for the Xi function in the proof of the Riemann hypothesis. It cannot be seen in the chosen scale that the curve goes beyond the -axis and oscillates around it, due to the extremely rapid vanishing of the envelope of with increasing, but we do not resolve this here by additional graphics because this behavior is easier to see in the case of the modified Bessel functions intended to be presented in the future. Using (7.2) we calculate numerically that is the value which we call the optimal value for the moment series expansion.
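The numerical evaluation referred to here can be reproduced from the classical integral representation Ξ(t) = 4∫₀^∞ Φ(u) cos(ut) du with Φ(u) = Σ_{n≥1}(2π²n⁴e^{9u/2} − 3πn²e^{5u/2}) exp(−πn²e^{2u}). This explicit form is the standard one from the literature and is presumably the function meant in (2.26), which is not reproduced in the present excerpt; the normalization was checked against the known value Ξ(0) = ξ(1/2) ≈ 0.4971. A sketch (Python) locating the first root near t ≈ 14.135:

```python
import math

def phi(u, nmax=6):
    # Phi(u) = sum_{n>=1} (2 pi^2 n^4 e^{9u/2} - 3 pi n^2 e^{5u/2}) e^{-pi n^2 e^{2u}}
    s = 0.0
    e2u = math.exp(2.0 * u)
    for n in range(1, nmax + 1):
        a = 2.0 * math.pi ** 2 * n ** 4 * math.exp(4.5 * u)
        b = 3.0 * math.pi * n ** 2 * math.exp(2.5 * u)
        s += (a - b) * math.exp(-math.pi * n * n * e2u)
    return s

def big_xi(t, umax=3.0, steps=20000):
    # Xi(t) = 4 * integral_0^umax Phi(u) cos(u t) du  (trapezoidal rule;
    # Phi decays double-exponentially, so umax = 3 is ample)
    h = umax / steps
    s = 0.5 * (phi(0.0) + phi(umax) * math.cos(umax * t))
    for k in range(1, steps):
        u = k * h
        s += phi(u) * math.cos(u * t)
    return 4.0 * h * s

assert abs(big_xi(0.0) - 0.4971) < 1e-3     # Xi(0) = xi(1/2) ~ 0.497
assert abs(big_xi(14.134725)) < 1e-4        # first zero on the critical line
```

Since Φ(u) > 0 and is eventually decreasing, this integral is precisely of the type (3.1) to which the second mean-value theorem is applied in the text.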
The part in the second partial figure which at first glance looks like a straight line as asymptote is not such.

If we choose the values for the -function not in the basic interval

for which the Taylor series provides the values but from other equivalent

intervals according to (7.8) we get other curves for and from which we also may determine the zeros (see Figure 5), however, with other values

Figure 5. Mean value parameters in the proof of the Riemann hypothesis. On the left-hand side there are shown the mean value parameters for the Xi function to the Riemann hypothesis if we do not take the values of the function in the basic range but in equivalent ranges according to (7.8). On the right-hand side are shown the corresponding functions which, according to and the condition for zeros, lead to equivalent ranges (see (4.12)) and determine the zeros of the Xi function on the imaginary axis. We see that the multi-valuedness of the function does not spoil a unique result for the zeros because every branch finds the corresponding of where then all zeros lie. Due to the extremely rapid decrease of the function with increasing this is difficult to see (the position of the first three zeros at is shown) but if we separate small intervals of and enlarge the range of values for this becomes visible (similarly as in Figure 3). We do not do this here because this effect is better visible for the modified Bessel functions which we intend to consider at another place.

in the relation, and the results are invariant with respect to the multi-valuedness. This is easier to see in the case of the modified Bessel functions, for which the curves vanish less rapidly with increasing, as we intend to show at another place. All these considerations do not touch the proof of the non-existence of roots off the imaginary axis but should serve only for a better understanding of the involved functions.
It seems that the specific phenomena of the second mean-value theorem (3.9), if the functions there are oscillating functions (recall, only continuity is required), are not yet well illustrated in detail.

We now derive a few general properties of the function which can be seen in the Figures. From (4.9) written in the form and by Taylor series expansion according to

(7.9)

it follows from the even symmetry of the left-hand side that also has to be a

function of the variable with even symmetry (notation)

(7.10)

with the consequence

(7.11)

Concretely, we obtain by -fold differentiation of both sides of (7.9) at for the first coefficients of the Taylor series

(7.12)

from which follows

(7.13)

Since the first sum term on the right-hand side is negative and the second is positive it depends on their values whether possesses a positive or negative value. For the special function in (2.26) which plays a role in the Riemann hypothesis we find approximately

(7.14)

meaning that the second coefficient in the expansion of in a Taylor series in powers of is negative, which can be seen in the first part of Figure 4. However, as we have seen, the proof of the Riemann hypothesis is by no means critically connected with some numerical values.

In principle, the proof of the Riemann hypothesis is now accomplished and illustrated and we will stop here. However, for a deeper understanding of the proof it would be favorable to consider some aspects of the proof such as, for example, analogues to other functions with a representation of the form (3.1) and with zeros only on the imaginary axis, and some other approaches which, however, did not lead to the full proof; we cannot do this here.

8.
Equivalent Formulations of the Main Theorems in a Summary

In the present article we proved the following main result

Theorem 1:

Let be a real-valued function of the variable in the interval which is positive semi-definite in this interval and non-increasing and is rapidly vanishing in infinity, more rapidly than any exponential function, that means

(8.1)

Then the following integral with arbitrary complex parameter

(8.2)

is an entire function of with possible zeros only on the imaginary axis, that means

(8.3)

Proof:

The proof of this theorem for non-increasing functions takes up Sections 3-5 of this article. The function in (2.26) satisfies these conditions and thus provides a proof of the Riemann hypothesis.

Remark:

An analogous theorem is obviously true by substituting in (8.2) and by interchanging the role of the imaginary and of the real axis. Furthermore, a similar theorem with a few peculiarities (e.g., degeneracy) is true for substituting in (8.2) by.

Theorem 1 can be formulated in some equivalent ways which lead to interesting consequences. The Mellin transformation of an arbitrary function together with its inversion is defined by

(8.4)

where the real value has only to lie in the convergence strip for the definition of by the integral. Formula (8.2) is an integral transform of the function and can be considered as the application of an integral operator to the function which, using the Mellin transform of the function, can be written in the following convenient form

(8.5)

This is due to

(8.6)

where is the operator of multiplication of the argument of an arbitrary function

by the number, i.e. it transforms as follows

(8.7)

according to the following chain of conclusions starting from the property that all

functions are eigenfunctions of to eigenvalue

(8.8)

This chain is almost obvious and does not need more explanations.
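The Mellin transform pair (8.4) can be illustrated with the classical example f(u) = e^{−u}, whose Mellin transform is the Gamma function, ∫₀^∞ u^{s−1}e^{−u} du = Γ(s) for Re(s) > 0. A short numerical sketch (Python; the helper name is ours):

```python
import math

def mellin_exp(s, umax=60.0, steps=200000):
    # Mellin transform of f(u) = exp(-u):
    #   integral_0^inf u^(s-1) exp(-u) du = Gamma(s),  Re(s) > 0.
    # Trapezoidal rule; both endpoint contributions vanish for s > 1.
    h = umax / steps
    total = 0.0
    for k in range(1, steps):
        u = k * h
        total += u ** (s - 1.0) * math.exp(-u)
    return h * total

assert abs(mellin_exp(4.0) - math.gamma(4.0)) < 1e-5   # Gamma(4) = 6
assert abs(mellin_exp(2.5) - math.gamma(2.5)) < 1e-5
```

In the language used above, e^{−u} is an eigenfunction superposition of the multiplication operator only in the Mellin sense, which is exactly why the Mellin transform diagonalizes the scaling operators appearing in (8.5)–(8.8).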
The operators

are linear operators in linear spaces depending on the considered set of numbers

.

Expressed by real variables and by

we find from (8.5)

(8.9)

From this formula it follows that may be obtained by transformation of alone via

(8.10)

On the right-hand side we have a certain redundancy since in analytic functions the information which is contained in the values of the function on the imaginary axis is also fully contained in other parts of the function (here of).

The simplest transformation of is by a delta function as function, which stretches only the argument of the Hyperbolic Cosine function. The next simplest transformation is with a function in the form of a step function which leads to the transformation

. Our application of the second mean-value theorem reduced other

cases under the suppositions of the theorem to this case, however, with the parameter depending on the complex variable.

The great analogy between displacement operators (infinitesimal) of the argument of a function and multiplication operators (infinitesimal) of the argument of a function with respect to the role of Fourier transformation and of Mellin transformation can be best seen from the following two relations

(8.11)

We recall that Mellin and Fourier transforms are related by substituting the integration variables and the independent variables by the substitutions and in (8.11).

Using the discussed Mellin transformation Theorem 1 can be reformulated as follows

Theorem:

The mapping of the function of the complex variable into the function

by an operator according to

(8.12)

where is the Mellin transformation of the function, the latter possessing the properties given in Theorem 1, maps the function with zeros only on the imaginary axis again into a function with zeros only on the imaginary axis.

Proof:

It is proved as a reformulation of Theorem 1 which is supposed here to be correctly proved.

It was almost
evident that the theorem may be formulated for more general functions than supposed for the application of the second mean-value theorem, as was already mentioned. Under the suppositions of the theorem the integral on the left-hand side of (8.5) can be transformed by partial integration to (notation:

)

(8.13)

The derivative of the function to the Riemann hypothesis, although semi-definite (here negatively) and rapidly vanishing in infinity, is not monotonic and possesses a minimum (see (2.26) and Figure 2). In the case of the (modified) Bessel functions we find by partial integration (e.g., )

(8.14)

where the functions in the second transform for are non-negative

but not monotonic and possess a maximum for a certain value within the interval. The forms (8.13) for and (8.14) suggest that a similar theorem should be true for the integral in (8.2) with the substitution and that monotonicity of the corresponding functions should not be the ultimate requirement for the zeros in such transforms to lie on the imaginary axis.

Another consequence of Theorem 1 follows from the non-negativity of the squared modulus of the function resulting in the obvious inequality (here)

(8.15)

which can be satisfied with the equality sign only on the imaginary axis for discrete values (the zeros of).
By transition from Cartesian coordinates to inertial-point coordinates according to

(8.16)

Equation (8.15) can also be written

(8.17)

As already said, the case of the equality sign in (8.15) or (8.17) can only be obtained for and then only for discrete values of by solution of this inequality with the specialization for

(8.18)

A short equivalent formulation of the inequalities (8.15) and (8.17) together with (8.18) is the following

Theorem 2:

If the function satisfies the suppositions in Theorem 1 then with

(8.19)

Proof:

As a consequence of the proved Theorem 1 it is also true.

The sufficient condition that this inequality is satisfied with the equality sign is that we first set in the expressions on the right-hand side of (8.15) and that we then determine the zeros of the obtained equation for. In the case of indefinite, zeros on the -axis are possible in addition.

Remark:

Practically, (8.15) is an inequality for which it is difficult to prove in another way that it can be satisfied with the equality sign only for. Proved in another way with the specialization (2.26) for, it would be an independent proof of the Riemann hypothesis.

9. Conclusion

We proved in this article the Riemann hypothesis embedded into a more general theorem for a class of functions with a representation of the form (3.1) for real-valued functions which are positive semi-definite and non-increasing in the interval and which are vanishing in infinity more rapidly than any exponential function with.
The special Xi function to the function given in (2.26), which is essentially the xi function equivalent to the Riemann zeta function concerning the hypothesis, belongs to the described class of functions.

Modified Bessel functions of imaginary argument “normalized” to entire functions

for belong also to this class of functions with a representation of the form (3.1) with which satisfy the mentioned conditions, and in this last case it is well known and proved in an independent way that their zeros lie only on the imaginary axis corresponding to the critical line in the Riemann hypothesis. Knowing this property of the modified Bessel functions we looked from the beginning for whole classes of functions including the Riemann zeta function which satisfy analogous conditions as expressed in the Riemann hypothesis. The details of the approach to Bessel functions and also to certain classes of almost-periodic functions we prepare for another work.

The numerical search for zeros of the Riemann zeta function in the critical strip, in particular, off the critical line may now come to an end with the proof of the Riemann hypothesis since its main purpose was, in our opinion, to find a counter-example to the Riemann hypothesis and thus to disprove it. We did not pay attention in this article to methods of numerical calculation of the zeros with (ultra-)high precision and for very high values of the imaginary part. However, the proof, if correct, may now deliver some calculators from their pain of having to calculate more and more zeros of the Riemann zeta function.

We think that some approaches in this article may possess importance also for other problems. First of all this is the operational approach of the transition from the real and imaginary part of a function on the real or imaginary axis to an analytic function in the whole complex plane.
In principle, this is possible using the Cauchy-Riemann equations but the operational approach integrates this to two integral instead of differential equations. We think that this is possible also in curved coordinates and is in particular effective starting from curves of constant real or imaginary part of one of these functions on a curve.

One of the fascinations of prime number theory is the relation of the apparently chaotic distribution function of prime numbers on the real axis to a fully well-ordered analytic function, the Riemann zeta function, at least in its representation in sum form as a special Dirichlet series, thus providing the relations between multiplicative and additive representations of arithmetic functions.

Appendix A

Transformation of the Xi Function

In this Appendix we transform the function defined in (2.8) by means of the zeta function from the form taken from (2.5) to the form (2.9) using the Poisson summation formula. The Poisson summation formula is the transformation of a sum over a lattice into a sum over the reciprocal lattice. More generally, in the one-dimensional case the decomposition of a special periodic function with period defined by the following series over functions

(A.1)

can be transformed into the reciprocal lattice providing a Fourier series as follows.
For this purpose we expand in a Fourier series with Fourier coefficients and then make obvious transformations (changing the order of summation and integration) according to

(A.2)

where the coefficients of the decomposition are given by the Fourier transform of the function defined in the following way

(A.3)

Using the period of the reciprocal lattice, the right-hand side of (A.2) may be written in the forms

(A.4)

In the special case one obtains from (A.4) the well-known basic form of the Poisson summation formula

(A.5)

Formula (A.5), applied to the sum corresponding to the Gaussian with its Fourier transform, provides a relation which can be written in the following symmetric form (we need it in the following only for positive argument)

(A.6)

This is essentially a transformation of the Theta function in a special case. We now apply this to a transformation of the Xi function.

From (2.9) and (2.5) follows

(A.7)

The second term in braces is convergent for arbitrary argument due to the rapid vanishing of the summands of the sum. To the first term in braces we apply the Poisson summation formula (A.5) and obtain from the special result (A.6)

(A.8)

with the substitution of the integration variable made in the last line. Thus from (A.7) we find

(A.9)

With the substitution of the integration variable

(A.10)

and with displacement of the complex variable and introduction of

(A.11)

given in (2.24).
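The theta transformation (A.6) can be stated and verified numerically in a standard notation (an assumption on our part, since the paper's exact normalization is not legible here): with theta(t) = sum over all integers n of exp(-pi n^2 t), one has theta(t) = t^(-1/2) theta(1/t) for t > 0, a direct consequence of the Poisson summation formula (A.5) applied to a Gaussian.

```python
from math import exp, pi, sqrt

def omega(t, nmax=200):
    # omega(t) = sum_{n>=1} exp(-pi n^2 t); converges extremely fast
    return sum(exp(-pi * n * n * t) for n in range(1, nmax + 1))

def theta(t):
    # full theta sum over all integers: theta(t) = 1 + 2*omega(t)
    return 1.0 + 2.0 * omega(t)

# Jacobi transformation theta(t) = t^(-1/2) * theta(1/t), i.e. Poisson summation
for t in (0.3, 0.7, 1.0, 2.5):
    lhs, rhs = theta(t), theta(1.0 / t) / sqrt(t)
    print(t, lhs, rhs, abs(lhs - rhs))  # differences at machine-precision level
```

The agreement to machine precision for several values of t is of course only a consistency check of the identity, not a derivation of it.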
In the following we transform this representation by means of partial integration into a form which, due to symmetries, is particularly appropriate for the further considerations about the Riemann zeta function.

Using the substitution (A.10) we define a function by means of the function in (A.6) as follows

(A.12)

and explicitly, due to the Poisson summation formula,

(A.13)

From (A.6) it follows that this is a symmetric function

(A.14)

Therefore, all its even derivatives are also symmetric functions, whereas all its odd derivatives are antisymmetric functions (we denote these derivatives by primes)

(A.15)

Explicitly, one obtains for the first two derivatives

(A.16)

As a subsidiary result we obtain, from the vanishing of the odd derivatives at the symmetry point, an infinite sequence of special sum evaluations, of which the first two are

(A.17)

We checked relations (A.17) numerically by computer up to sufficiently high precision. We also could not find (A.17) among the known transformations of theta functions. The interesting feature of these sum evaluations is that power functions as well as exponential functions containing the transcendental number pi in the exponent are involved in a way which finally leads to a rational number; this should also be attractive for recreational mathematics. In contrast, in the well-known series for the trigonometric functions one obtains rational numbers for certain rational multiples of pi as argument, but there only power functions with rational coefficients are involved, that is, rational functions, although an infinite number of them.

Using this function, the function in (A.11) can be represented as

(A.18)

From this we obtain by partial integration

(A.19)

where the contribution from the lower integration limit has exactly canceled the constant term on the right-hand side of (A.18), and the contribution from the upper limit vanishes.
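The explicit relations (A.17) did not survive extraction here, but identities of exactly the described kind follow from differentiating the theta transformation at its fixed point t = 1. For instance (our own derivation from the functional equation 1 + 2*sum exp(-pi n^2 t) = t^(-1/2) * (same at 1/t), not necessarily the paper's first relation): sum over n >= 1 of (8 pi n^2 - 2) exp(-pi n^2) equals 1 exactly.

```python
from math import exp, pi

# Differentiating  F(t) = t^(-1/2) F(1/t)  with  F(t) = 1 + 2*sum exp(-pi n^2 t)
# at t = 1 gives F'(1) = -F(1)/4, which rearranges to the exact identity
#   sum_{n>=1} (8*pi*n^2 - 2) * exp(-pi*n^2) = 1,
# a combination of powers and exponentials (with pi in the exponent) that
# collapses to a rational number, as described in the text.
s = sum((8.0 * pi * n * n - 2.0) * exp(-pi * n * n) for n in range(1, 50))
print(s)  # very close to 1
```

The truncation at n = 50 is far beyond what double precision can resolve, since the terms decay like exp(-pi n^2).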
Using (A.16) we find, with the abbreviation according to

(A.20)

the following basic structural form of the Xi function

(A.21)

with the following explicit representation

(A.22)

Since according to (A.15) the even derivatives are symmetric functions, it follows from relation (A.22) that this function is also a symmetric function and (A.27) holds. This is not immediately seen from the explicit representation (A.22). Furthermore, it is positive for positive argument, since the first factor in (A.22) is then positive and all other factors too. It goes rapidly to zero, more rapidly than any exponential function, due to the factors in the sum terms in (A.22). For the first derivative we find

(A.23)

It vanishes at the origin due to its antisymmetry and is negative for positive argument, as the negative sign together with a consideration of the sum shows (negativity is already obtained taking the first two sum terms alone). Thus the function is monotonically decreasing for positive argument. A few approximate numerical values of parameters for the function are

(A.24)

In the next Appendix we consider the transition from analytic functions given on the real or imaginary axis to the whole complex plane.

Appendix B

Transition from Analytic Functions on the Real or Imaginary Axis to the Whole Complex Plane

The derivative operator is the infinitesimal displacement operator, and its exponential the finite displacement operator, for the displacement of the argument of a function.
In complex analysis the real variable can be displaced, with a view to an analytic function, to the complex variable in the whole complex plane by

(B.1)

where the bracket denotes the commutator of two operators A and B, and (B.1) may be written in the form

(B.2)

Analogously, the transition from the variable on the imaginary axis to the variable in the whole complex plane may be written as

(B.3)

In the following we consider only the case (B.2), since the case (B.3) is completely analogous with simple substitutions.

We wrote Equations (B.1), (B.2) and (B.3) in a form which we call operational form, meaning that they may be applied to further functions on the left-hand and correspondingly right-hand side (see Note 6). It is now easy to see that an analytic function can be generated from its values on the real axis in operational form by

(B.4)

and analogously from its values on the imaginary axis by

(B.5)

Writing the function with real part and imaginary part in the form

(B.6)

we find from (B.4)

(B.7)

and correspondingly

(B.8)

From (B.7) and (B.8) follows, forming the sum and the difference,

(B.9)

These are as yet operational identities which can be applied to arbitrary functions. Applied to the function it follows

(B.10)

In full analogy we may derive the continuation of an analytic function from the imaginary axis to the whole complex plane in operational form

(B.11)

and this applied to the function

(B.12)

It is easy to check that both (B.10) and (B.12) satisfy the Cauchy-Riemann equations

(B.13)

and it is even possible to derive these relations from these equations by Taylor series expansions in powers of the displacement, depending on which axis we make the continuation to the whole complex plane from. For example, in such an expansion we obtain, using (B.13) (and the equations resulting from them),

(B.14)

which can be written in compact form

(B.15)

and is equivalent to (B.10).
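The operational continuation just described can be tested numerically for a concrete entire function (here exp, our illustrative choice): truncating the displacement operator exp(iy d/dx) to a finite Taylor series reproduces f(x + iy) from data on the real axis.

```python
import cmath

def continue_exp(x, y, terms=60):
    # e^{i y d/dx} applied to f(x) = exp(x): every derivative of exp is exp,
    # so the operator series factorizes as exp(x) * sum_n (i y)^n / n!
    acc, term = 0.0 + 0.0j, 1.0 + 0.0j
    for n in range(terms):
        acc += term
        term *= 1j * y / (n + 1)
    return cmath.exp(x) * acc

x, y = 0.4, 1.3
approx = continue_exp(x, y)
exact = cmath.exp(complex(x, y))  # f(x + i y)
print(abs(approx - exact))  # ~ machine precision
```

For a general analytic function one would instead differentiate term by term; the factorization used here is special to exp, which is why the check is only a sketch of the general operational statement.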
Analogously, by expansion in powers of the displacement as an intermediate step, we obtain

(B.16)

which is equivalent to (B.12). Therefore, relations (B.15) and (B.16) represent integral forms of the Cauchy-Riemann equations.

In cases where one of the functions in (B.10) or (B.12) vanishes, these formulae simplify, and this case is applied in Section 5. Up to now we have not found such representations in textbooks on complex analysis, but it seems possible that they exist somewhere.

NOTES

1. Riemann defines it more specially and writes it with a real argument corresponding to ours. Our definition agrees, e.g., with Equation (1) in Section 1.8 on p. 16 of Edwards and with many others.

2. According to Havil (p. 193), already Euler correctly conjectured this relation for the zeta function, which is equivalent to relation (2.12) for the Xi function, but could not prove it. Riemann was the first to prove it.

3. It was the first time for us to meet a function whose symmetry is not easily seen from its explicit representation, and this was very surprising. However, if we substitute in (2.26) and calculate and plot the corresponding part with the obtained formula, then we need many more sum terms for the same accuracy than in the case of calculation with (2.26).

4. Sometimes our function is denoted by another symbol.

5. Some of these equivalences, now formulated as consequences, originate from attempts to prove the Riemann hypothesis in other ways.

6. A non-operational form would be if we wrote, for example, the corresponding relation for one fixed function instead of (B.2); this is correct but cannot be applied to further functions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Riemann, B. (1859) Über die Anzahl der Primzahlen unter einer gegebenen Grösse. Monatsberichte der Akademie Berlin, 671-680. Also in: Riemann, B., Gesammelte Werke, Teubner, Leipzig, 1st ed. 1876, p. 136; 2nd ed. 1892, p. 145; available in different English translations (as an Appendix and as a reprint under Original Papers 12.2).
[2] Whittaker, E.T. and Watson, G.N. (1927) A Course of Modern Analysis. Cambridge University Press, Cambridge.
[3] Titchmarsh, E.C. (1951) The Theory of the Riemann Zeta-Function. Oxford University Press, Oxford.
[4] Chandrasekharan, K. (1970) Arithmetical Functions. Springer-Verlag, Berlin. https://doi.org/10.1007/978-3-642-50026-8
[5] Edwards, H.M. (1974) Riemann's Zeta Function. Dover, New York.
[6] Parshin, A.N. (1979) Dzeta funkziya. In: Vinogradov, I.M., Ed., Matematicheskaya Enzyklopediya, Vol. 2, Sovyetskaya Enzyklopedia, Moskva, 112-122. (In Russian)
[7] Patterson, S.J. (1985) An Introduction to the Theory of the Riemann Zeta-Function. Cambridge University Press, Cambridge.
[8] Cartier, P. (1989) An Introduction to Zeta Functions. In: Waldschmidt, M., Moussa, P., Luck, J.-M. and Itzykson, C., Eds., From Number Theory to Physics, Springer-Verlag, Berlin, 1-63.
[9] Ivić, A. (1985) The Riemann Zeta-Function, Theory and Applications. John Wiley & Sons, New York. (Dover Publications, Mineola, New York, 2003)
[10] Havil, J. (2003) Gamma: Exploring Euler's Constant. Princeton University Press, Princeton.
[11] Ribenboim, P. (2004) The Little Book of Bigger Primes. Springer, New York. (German transl.: Die Welt der Primzahlen, Springer, Berlin, 2006)
[12] Borwein, P., Choi, S., Rooney, B. and Weirathmueller, A. (2008) The Riemann Hypothesis: A Resource for the Afficionado and Virtuoso Alike. Springer, New York.
[13] Apostol, T.M. (2010) Introduction to Analytic Number Theory. Springer, New York.
[14] Weisstein, E.W. (2009) Riemann Zeta Function and Riemann Zeta Function Zeros. In: CRC Encyclopedia of Mathematics, 3rd Edition, CRC Press, and Web: Wolfram MathWorld (we used the latter).
[15] Lang, S. (1993) Complex Analysis. 3rd Edition, Springer-Verlag, New York. https://doi.org/10.1007/978-3-642-59273-7
[16] Meier, P. and Steuding, J. (2009) Wer die Zetafunktion kennt, kennt die Welt. In: Spektrum der Wissenschaft, Dossier 6/09, Spektrum der Wissenschaft Verlagsgesellschaft, Heidelberg, 12-19.
[17] Stewart, I. (2013) The Great Mathematical Problems. Profile Books, London.
[18] Erdélyi, A. (1953) Higher Transcendental Functions, Vol. 1. McGraw-Hill, New York.
[19] Erdélyi, A. (1953) Higher Transcendental Functions, Vol. 3. McGraw-Hill, New York.
[20] Apostol, T.M. (2010) Zeta and Related Functions; Functions of Number Theory, Chaps. 25 and 27. In: Olver, F.W.J., Lozier, D.W., Boisvert, R.F. and Clark, C.W., Eds., NIST Handbook of Mathematical Functions, Cambridge University Press, Cambridge.
[21] Olver, F.W.J., Lozier, D.W., Boisvert, R.F. and Clark, C.W., Eds. (2010) NIST Handbook of Mathematical Functions. Cambridge University Press, Cambridge.
[22] Hilbert, D. (1902) Problèmes futurs des mathématiques. C.R. 2nd Congr. Int. Math., Paris, p. 85. (Russian translation with remarks on the state of the solution of each problem.)
[23] Linnik, Yu.V. (1969) Problemy Gilberta. In: Aleksandrov, P.S., Ed., The Hilbert Problems, Nauka, Moskva, 128-130. (In Russian)
[24] Conrey, J.B. (2008) The Riemann Hypothesis. In: Borwein, P., Choi, S., Rooney, B. and Weirathmueller, A., Eds., The Riemann Hypothesis: A Resource for the Afficionado and Virtuoso Alike, Springer, New York, 117-129.
[25] Bombieri, E. (2008) Problems of the Millennium: The Riemann Hypothesis. In: Borwein, P., Choi, S., Rooney, B. and Weirathmueller, A., Eds., The Riemann Hypothesis: A Resource for the Afficionado and Virtuoso Alike, Springer, New York, 94.
[26] Ivić, A. (2008) On Some Reasons for Doubting the Riemann Hypothesis. In: Borwein, P., Choi, S., Rooney, B. and Weirathmueller, A., Eds., The Riemann Hypothesis: A Resource for the Afficionado and Virtuoso Alike, Springer, New York, 130.
[27] von Mangoldt, H. (1905) Zur Verteilung der Nullstellen der Riemannschen Funktion. Mathematische Annalen, 60, 1-19. https://doi.org/10.1007/BF01447494
[28] Voronin, S.M. (1975) Theorem on the Universality of the Riemann Zeta Function. Izvestiya Akademii Nauk SSSR, Seriya Matematicheskaya, 39, 475-486; reprinted in Mathematics of the USSR-Izvestiya, 9, 443-445. https://doi.org/10.1070/IM1975v009n03ABEH001485
[29] Steuding, J. (2004) Voronin Universality Theorem. From MathWorld, A Wolfram Web Resource, created by Eric W. Weisstein. http://mathworld.wolfram.com/VoroninUniversalityTheorem.html
[30] Neuberger, J.W., Feiler, C., Maier, H. and Schleich, W.P. (2014) Newton Flow of the Riemann Zeta Function: Separatrices Control the Appearance of Zeros. New Journal of Physics, 16, 103023. https://doi.org/10.1088/1367-2630/16/10/103023
[31] Neuberger, J.W., Feiler, C., Maier, H. and Schleich, W.P. (2015) The Riemann Hypothesis Illuminated by the Newton Flow of ζ. Physica Scripta, 90, 108015. https://doi.org/10.1088/0031-8949/90/10/108015
[32] Bateman, H. and Erdélyi, A. (1954) Tables of Integral Transforms, Vols. 1 and 2. McGraw-Hill, New York.
[33] Zemanian, A.H. (1987) Generalized Integral Transformations. Dover, New York.
[34] Brychkov, Yu.A. and Prudnikov, A.P. (1977) Integral Transformations of Generalized Functions. Nauka, Moskva. (In Russian)
[35] Bertrand, J., Bertrand, P. and Ovarlez, J. (2000) The Mellin Transform, Chap. 11. In: Poularikas, A.D., Ed., The Transforms and Applications Handbook, 2nd Edition, CRC Press, Boca Raton. https://doi.org/10.1201/9781420036756.ch11
[36] Paris, R.B. (2010) Incomplete Gamma and Related Functions, Chap. 8. In: Olver, F.W.J., Lozier, D.W., Boisvert, R.F. and Clark, C.W., Eds., NIST Handbook of Mathematical Functions, Cambridge University Press, Cambridge, 173-192.
[37] Courant, R. (1992) Differential and Integral Calculus, Vol. 1. John Wiley & Sons, New York.
[38] Widder, D.V. (1947) Advanced Calculus. 2nd Edition, Prentice-Hall, Englewood Cliffs. (Dover Publications, New York, 1989)
[39] Erdélyi, A. (1953) Higher Transcendental Functions, Vol. 2. McGraw-Hill, New York.
CL056: Sympivotal Cubics SpK(P, L)
https://bernard-gibert.pagesperso-orange.fr/Classes/cl056.html
"Let P be a fixed point and let L be a line that is not the line at infinity. Let X be its infinite point and let X* be the isogonal conjugate of X. X* lies on the circumcircle (O) of ABC and Y* is the antipode of X* on (O). Thus Y is the infinite point of any perpendicular to L. Let (C) be the circumconic passing through the reflections A', B', C' of A, B, C about L. (C) meets (O) again at Z = XX* /\\ YY*. Let l be a variable line through P and let l' be its reflection in L. The isogonal transform l* of l intersects l' at two points M, N. When l varies, the locus of M, N is a circumcubic we shall call the sympivotal (isogonal, axial) cubic SpK(P, L). Naturally, the isogonal transformation can be replaced by any other isoconjugation and the results below are easily adapted. See CL055 when the axial symmetry is replaced by a central symmetry and CL058 when it is replaced by a rotation.",
"General properties of SpK(P, L) • SpK(P, L) is a circular circumcubic. • SpK(P, L) contains the reflection P' of P about L and the isogonal conjugate P* of P. • SpK(P, L) meets BC again at U = BC /\\ P'A'. V and W are defined similarly. The points U' = AP* /\\ UP', V' = BP* /\\ VP' and W' = CP* /\\ WP' also lie on SpK(P, L). • SpK(P, L) contains F1, F2 the two isogonal conjugate points symmetric about L. These points are not necessarily real nor distinct (when L contains an in/excenter of ABC). F1, F2 are the common points of three circles (Ca), (Cb), (Cc) defined as follows. The line AX* meets L at the center of (Ca) which passes through A. • SpK(P, L) meets L at three points L1, L2, L3 that lie on pK(X6, P). • The perpendicular L' to L at P meets SpK(P, L) at P' and two other isogonal conjugate points P1, P2 lying on the circumconic passing through P* and Y*. Obviously pK(X6, P) contains P1, P2 hence we know the nine common points of SpK(P, L) and pK(X6, P). • The isogonal transform of SpK(P, L) is SpK(P', L) and these two cubics are generally distinct i.e. SpK(P, L) is generally not a self-isogonal cubic. Their nine common points are A, B, C, F1, F2, P1, P2 and the circular points at infinity. • The parallel at P' to L meets SpK(P, L) at P' and two other points P3, P4 lying on the circumconic passing through P* and X*. • The third point F3 of SpK(P, L) on the line F1F2 (the radical axis of the three circles above) also lies on the line ZP'. This line meets (O) again at C6, the last common point of (O) and SpK(P, L). • The second intersection of (O) and ZP is the isogonal conjugate of the real infinite point of SpK(P, L). This latter point is also the infinite point of the line P*F3 hence the real asymptote of SpK(P, L) is parallel to the line P*F3. • SpK(P, L) is a K0 (without term in xyz) if and only if P lies on the perpendicular to L passing through the point Q0 which is the isogonal conjugate of the trilinear pole of L. 
If the equation of L is ux+vy+wz=0, this point Q0 is a^2u:b^2v:c^2w.",
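The closing claim can be verified in barycentric coordinates, where the trilinear pole of the line ux + vy + wz = 0 is (1/u : 1/v : 1/w) and the isogonal conjugate of (x : y : z) is (a^2 yz : b^2 zx : c^2 xy). A small exact-arithmetic check (the numeric values of a^2, b^2, c^2 and u, v, w are arbitrary choices of ours):

```python
from fractions import Fraction as F

def isogonal(a2, b2, c2, p):
    # isogonal conjugate of (x : y : z) in barycentrics: (a^2 y z : b^2 z x : c^2 x y)
    x, y, z = p
    return (a2 * y * z, b2 * z * x, c2 * x * y)

def proportional(p, q):
    # barycentric coordinates are homogeneous: test p = lambda * q via cross products
    return p[0] * q[1] == p[1] * q[0] and p[1] * q[2] == p[2] * q[1]

a2, b2, c2 = F(36), F(25), F(16)   # squared side lengths of a sample triangle
u, v, w = F(2), F(-3), F(5)        # an arbitrary line u x + v y + w z = 0
pole = (1 / u, 1 / v, 1 / w)       # trilinear pole of the line
q0 = isogonal(a2, b2, c2, pole)
print(proportional(q0, (a2 * u, b2 * v, c2 * w)))  # True
```

Exact rationals avoid any floating-point ambiguity in the homogeneous comparison.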
"SpK(P, L) with P on the line at infinity • When P = Y, the lines l and l' always coincide hence SpK(Y, L) is the isogonal circular pivotal cubic pK(X6, Y) with singular focus X*. The real asymptote is the line YY*. • When P = X, SpK(X, L) is the isogonal focal non-pivotal cubic nK(X6, R, X) with root R, the trilinear pole of the line passing through the (here) collinear points U, V, W. R is actually the complement of the isotomic conjugate of the trilinear pole of L. The singular focus is X* again. The real asymptote is the homothetic of L (the orthic line or axis of the cubic) under h(X*,2). This cubic is the locus of foci of inscribed conics centred on the Newton line of the quadrilateral formed with L and the sidelines of ABC. • When P is different of X and Y, we obtain two cubics SpK(P, L) and SpK(P', L) each being the isogonal transform of the other and both belonging to the pencil of circular cubics generated by the two cubics above. All the cubics of this pencil have the same singular focus namely X* and their asymptotes pass through a fixed point which is the intersection of the asymptotes of the two isogonal cubics above.",
"One of the most interesting case is when L is the Brocard axis OK since F1, F2 are the Brocard points. We obtain a pencil of circular cubics generated by SpK(X512, OK) = pK(X6, X512) = K021 and SpK(X511, OK) = nK0(X6, X647) = K019. The common singular focus is the Tarry point X(98) and the asymptotes concur at X(99).",
"SpK(P, L) with P on the line L When P lies on L, SpK(P, L) is an isogonal non-pivotal focal cubic passing through F1, F2, P and P*. It is the locus of foci of inscribed conics with center on the line passing through the midpoints of F1, F2 and P, P*. For a given line L, when P traverses L all these cubics form a pencil of focal cubics with circular focus on the circumcircle. For example, if we take the Euler line as line L, we obtain a pencil of isogonal focal nK passing through O and H. Each cubic is the locus of foci of inscribed conics with center on a line passing through the nine point center X(5). See K072, K164, K165, K166, K433 for instance.",
"SpK(P, L) with collinear points U, V, W We already know that U, V, W are collinear when P lies on L since SpK(P, L) is a nK i.e. a non-pivotal isocubic. This also occurs when P lies on the circumconic that passes through the reflections A', B', C' of A, B, C about L. In this case, the line UVW is perpendicular to L and the cubic SpK(P, L) is a nK. Its root R lies on the circumconic with perspector Y and the isoconjugate of R lies on the trilinear polar of Y*. Its pole lies on a cK with singularity at X6.",
"SpK(P, L) with cevian points U, V, W We already know that UVW is a cevian triangle when P = Y since SpK(Y, L) is a pK i.e. a pivotal cubic. More generally, SpK(P,L) meets the sidelines of ABC at the vertices U, V, W of a cevian triangle if and only if P lies on the axial pK with axis L. See CL057. It follows that, for a given line L, there are three points P such that SpK(P, L) is a pivotal cubic. These points are the intersections of the axial pK above with the perpendicular L0 at Q0 to L. One of them is Y giving the isogonal circular pK(X6, Y) already mentioned and the two (not always real) remaining points are symmetric about L. These give two pivotal cubics, each being the isogonal transform of the other. Two remarkable examples are given below when L is the orthic axis or the antiorthic axis.",
"L is the orthic axis L0 is the Euler line meeting the axial pK at X(30), X(186), X(403). SpK(X30, L) is the Neuberg cubic K001. SpK(X186, L) is K339 = pK(X3003, X4). SpK(X403, L) is the isogonal transform K339* of K339. The two points F1, F2 lie on the line X(30), X(50).",
"L is the antiorthic axis L0 is the line OI meeting the axial pK at X(36), X(484), X(517). SpK(X517, L) is the circular pK(X6, X517). SpK(X36, L) is K058 = pK(X2161, X80). SpK(X484, L) is the isogonal transform K206 of K058. The two points F1, F2 lie on the line X(44), X(517).",
"Special SpK(P, L) We study several interesting configurations for special positions of L with respect to point P.",
"SpK(P on (O), PP*) is an axial isogonal focal nK We suppose that P is a point on the circumcircle (O) and that the line L is PP* i.e. the perpendicular at P to the Simson line of P. SpK(P, L) is an axial nK(X6, R, P) with root R on the line passing through the centroid G and the trilinear pole of the Simson line of P. P is the singular focus and the tangent at P is PP*. The axis of symmetry is the perpendicular at P to PP*. The circumconic isogonal transform of the line PP* meets PP* at two points E1, E2 which are the two other centers (apart P) of anallagmaty. The reflection of PP* about the midpoint of E1E2 is the real asymptote.",
"The figure presents SpK(P, L) with P = X(110) – the focus of the Kiepert parabola and the singular focus of the Neuberg cubic K001). The line L is the perpendicular at X(110) to the Euler line. It is the axis of the Kiepert parabola. E1 and E2 are the common points of the parallel at X(110) to the Euler line and the circumconic passing through X(74) and X(523). These two points lie on K001 and on K316. The real asymptote is the perpendicular bisector of OH. See also CL027.",
"SpK(P on L∞, PP*) is a central isogonal focal nK We suppose that P lies on the line at infinity hence P* is a point on (O). The line L = PP* is the perpendicular at P* to the Simson line S of P*.",
"SpK(P, L) is now a central isogonal focal nK with focus P*, the center of symmetry. The root R of the cubic is the homothetic of the trilinear pole of S under h(G, 1/4). The trilinear polar of R is obviously the line UVW. The perpendicular at O to UVW meets (O) at two points and the two lines passing through P* and these two points contain the two other real centers E1, E2 (apart P) of anallagmaty. E1, E2 are also the real foci of the inscribed conic with center P* hence they lie on the cubic pK(X6, P*). See CL001 and K084.",
"SpK(P, PI) is a strophoid We suppose that P is not an in/excenter and we take L = PI, I being the incenter of ABC. In this case, SpK(P, L) is an isogonal strophoid with node I. See CL003.",
"SpK(P, [PP*]) is a nodal cubic We suppose now that P is not an in/excenter and does not lie on (O) nor on the line at infinity. P and P* are distinct and finite points and the perpendicular bisector [PP*] of P and P* is defined. In such case, SpK(P, [PP*]) is a nodal cubic with node P*.",
"The figure shows the case P = H, P* = O so that L is the perpendicular bisector of OH. SpK(H, [OH]) meets L at three points on the Orthocubic K006. The cubic has a node at O and it is not an isocubic.",
msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e5fe6420-fc18-460a-aa68-f19e0b0e7ae1>\",\"WARC-Concurrent-To\":\"<urn:uuid:5a347d50-c7a9-4f97-86fc-605121176ccd>\",\"WARC-IP-Address\":\"193.252.121.242\",\"WARC-Target-URI\":\"https://bernard-gibert.pagesperso-orange.fr/Classes/cl056.html\",\"WARC-Payload-Digest\":\"sha1:TDNKUQIBDRVFPCNMMWS65UFHU2DSQVDZ\",\"WARC-Block-Digest\":\"sha1:GFGYXSLFSDQ46GQMJMHFHEU5FN75JX6U\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585186.33_warc_CC-MAIN-20211018000838-20211018030838-00471.warc.gz\"}"} |
https://harshakokel.com/posts/batch-rl/ | [
"## Fitted Q and Batch Reinforcement Learning\n\nSome pointers on Batch Reinforcement Learning and Fitted Q. These were gathered while working on an RL for healthcare project, as part of the Advanced RL course by Prof. Sriraam Natarajan.\n\n#### Terminologies\n\n##### Offline Planning Problem (MDP)\n\nWe are given the full MDP model and the problem is solved using all the components of the MDP.\n\n##### Online Planning Problem (RL)\n\nWe have limited knowledge of the MDP. We can discover it by interacting with the system.\n\n##### Model-based RL\n\nApproaches that solve the online planning problem (RL) by first estimating (when missing) or accessing the full MDP model, i.e. the transition and reward functions, and then finding the policy $\\pi$, are called model-based RL.\n\n##### Model-free RL\n\nOn the contrary, approaches that solve the online RL problem directly, i.e. solving for $\\pi$ directly with either the value function $V$ or the state-action function $Q$, are called model-free RL.\n\n#### Batch RL\n\nThe simulation environment is not present, a complete set of transition samples $\\langle s, a, r, s^{\\prime} \\rangle$ is given, and the challenge is to learn without exploring.\n\n##### Background on Q Learning\n\nThe Bellman optimality equation for the action-value function ($Q$) is given as:\n\n$$Q^{\\pi}(s, a)=\\sum_{s^{\\prime}} T\\left(s, a, s^{\\prime}\\right)\\left[R\\left(s, a, s^{\\prime}\\right)+\\gamma \\sum_{a^{\\prime}} \\pi\\left(s^{\\prime}, a^{\\prime}\\right) Q^{\\pi}\\left(s^{\\prime}, a^{\\prime}\\right)\\right]$$\n\nwhere $T\\left(s, a, s^{\\prime}\\right)$ is the transition probability of landing in state $s^{\\prime}$ on taking action $a$ in state $s$, and $R\\left(s, a, s^{\\prime}\\right)$ is the reward at state $s^{\\prime}$ reached on taking action $a$ in state $s$.\n\nIn the dynamic programming setting, the $Q$ function for the optimal policy is computed as:\n\n$$Q_{k+1}(s, a) \\leftarrow \\sum_{s^{\\prime}} T\\left(s, a, s^{\\prime}\\right)\\left[R\\left(s, a, s^{\\prime}\\right)+\\gamma \\max _{a^{\\prime}} Q_{k}\\left(s^{\\prime}, a^{\\prime}\\right)\\right]$$\n\nQ-Learning is a model-free approach to learning the $Q$ function by exploring the environment, i.e. performing actions based on some policy. A table of $Q$ values for each state-action pair $Q(s,a)$ is maintained, and the table is updated after every action using the running average formula:\n\n$$Q(s, a) \\leftarrow(1-\\alpha) Q(s, a)+(\\alpha)[ R\\left(s, a, s^{\\prime}\\right)+\\gamma \\max_{a^{\\prime}} Q\\left(s^{\\prime}, a^{\\prime}\\right)]$$\n\nOver multiple episodes the Q values will eventually converge, and the optimal policy can be retrieved from them.\n\n#### Drawbacks of Q Learning\n\nThere are several drawbacks to Q-Learning. These drawbacks might be minor in the typical reinforcement learning setting where we have simulators, but they are serious limitations in the batch RL setting. In batch RL, the simulation environment is not present, a complete set of transition samples ($\\langle s, a, r, s^{\\prime} \\rangle$) is given, and the challenge is to learn without exploring.\n\nAs we see in the figure below, at some point in the top-left cell $(1,3)$, the agent explored the action of going north and, because it landed in the same cell, it updated $Q(s,north)=0.11$ as per the $\\max_{a} Q(s,a)$ of that cell during that episode. After that episode, even though the $\\max_{a} Q(s,a)$ of that cell changes, $Q(s,north)$ does not get updated until the agent explores going north.",
null,
"source: UC Berkeley CS188: Lecture of Pieter Abbeel\n\nStability issue\n\nQ-Learning has an ‘asynchronous update’, i.e. after each observation the value is updated locally only for the state at hand and all other states are left untouched. In the above figure, we know the value at the red tile, but the Q value for the tile below it is not updated until we explore the action of going to the red tile from that tile.\n\nA similar idea of asynchronous updates is also applicable in function approximation, where the $Q$ function is estimated by a function and at every time step the function is updated using:\n\n$$f(s, a) \\leftarrow(1-\\alpha) f(s, a)+\\alpha \\left( r +\\gamma \\max_{a^{\\prime} \\in A} f\\left(s^{\\prime}, a^{\\prime}\\right) \\right)$$\n\nInefficient approximation\n\nThe ‘asynchronous update’ in function approximation is particularly harmful with global approximation functions. An attempt to improve the $Q$ value of a single state after every time step might impair all the other approximations, especially when the approximation function is something like a neural network, where a single example can change all of the weights. Gordon 1995 proves that, using such an impaired approximation in the next iteration, the $f$ function may diverge from the optimal $Q$ function.\n\nThis is where fitted methods come in.\n\n### Fitted Approaches\n\nGordon 1995 provided a stable function approximation approach by separating the dynamic programming step from the function approximation step. Effectively, the update equation above is now split into two steps:\n\n$$f^{\\prime}(s, a) \\leftarrow r +\\gamma \\max_{a^{\\prime} \\in A} f\\left(s^{\\prime}, a^{\\prime}\\right) \\quad \\forall s,a \\\\ f(s, a) \\leftarrow(1-\\alpha) f(s, a)+\\alpha f^{\\prime}(s, a)$$\n\nObservation: Splitting the function update from one step into two is analogous to changing Gram-Schmidt orthonormalization into modified Gram-Schmidt orthonormalization.\n\nErnst 2005 proposed fitted Q iteration by borrowing this split approach from Gordon. The approach iteratively approximates the Q value by reformulating Q-Learning as a supervised regression problem. The algorithm proposed for fitted Q iteration is given below.\n\nGiven: tuples {<s,a,r,s'>}, stopping condition\n\n1. Q(s, a) = 0\n2. while (!stopping condition):\n3. Build a training set:\n{feature; regression value} = {<s,a> ; r + max_a Q(s,a)}\n4. Learn a function approximating the regression values Q(s,a)\n\nThis is in principle the same as the equations above, with $f^{\\prime}$ as the regression value and $\\alpha=1$.\n\nFurther extensions of the fitted Q approach learn the $f$ function as a linear combination of the previous function and the regression values."
]
| [
null,
"https://harshakokel.com/images/QLearning.gif",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.80566543,"math_prob":0.9979318,"size":6329,"snap":"2021-31-2021-39","text_gpt3_token_len":1630,"char_repetition_ratio":0.15573123,"word_repetition_ratio":0.045652173,"special_character_ratio":0.25327855,"punctuation_ratio":0.12887439,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99968517,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-18T16:48:24Z\",\"WARC-Record-ID\":\"<urn:uuid:1e4a79ff-f23b-4fdf-9887-d43335a19305>\",\"Content-Length\":\"17525\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:62725e41-7013-4f54-9c21-b1947991f312>\",\"WARC-Concurrent-To\":\"<urn:uuid:483759a2-aff9-43fb-846c-f5bcc571ab60>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"https://harshakokel.com/posts/batch-rl/\",\"WARC-Payload-Digest\":\"sha1:663GC646DJGFWXKKLAVE72SBLK2YCHJK\",\"WARC-Block-Digest\":\"sha1:E7DW6B2WRFNIU5RRXI7YQO4XFJGEIVI7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780056548.77_warc_CC-MAIN-20210918154248-20210918184248-00005.warc.gz\"}"} |
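The fitted Q iteration pseudocode in the record above is easy to make concrete. Below is a minimal, self-contained sketch under stated assumptions: the tiny two-state "toggle" MDP, the one-hot linear regressor, and the fixed iteration budget are illustrative choices of mine, not details from the original post.

```python
import numpy as np

# Batch of transitions <s, a, r, s'>.  The two-state "toggle" MDP here is an
# illustrative choice (action 0 stays put, action 1 switches state; reward 1
# for landing in state 1), not the environment from the post.
batch = [(0, 0, 0, 0), (0, 1, 1, 1), (1, 0, 1, 1), (1, 1, 0, 0)]
n_states, n_actions, gamma = 2, 2, 0.5

def featurize(s, a):
    # one-hot encoding of the (state, action) pair
    x = np.zeros(n_states * n_actions)
    x[s * n_actions + a] = 1.0
    return x

X = np.array([featurize(s, a) for s, a, _, _ in batch])
w = np.zeros(n_states * n_actions)   # step 1: Q(s, a) = 0 everywhere

def q(w, s, a):
    return featurize(s, a) @ w

for _ in range(50):                  # step 2: fixed budget as the stopping condition
    # step 3: regression targets  r + gamma * max_a' Q(s', a')
    y = np.array([r + gamma * max(q(w, s2, a2) for a2 in range(n_actions))
                  for _, _, r, s2 in batch])
    # step 4: refit the function approximator to the fresh targets
    w, *_ = np.linalg.lstsq(X, y, rcond=None)

Q = w.reshape(n_states, n_actions)
print(Q)   # converges to [[1, 2], [2, 1]] for this MDP
```

Because the features are one-hot, the regression step here reduces to the exact tabular update; swapping in any other regressor (trees in Ernst 2005, neural networks in neural fitted Q) keeps the same loop.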
https://go.googlesource.com/go/+/e6583dc95375c4e266bffab6f8888e8e557b6355/test/stackobj2.go?autodive=0%2F | [
"// run\n\n// Copyright 2018 The Go Authors. All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.\n\npackage main\n\nimport (\n\t\"fmt\"\n\t\"runtime\"\n)\n\n// linked list up the stack, to test lots of stack objects.\ntype T struct {\n\t// points to a heap object. Test will make sure it isn't freed.\n\tdata *int64\n\t// next pointer for a linked list of stack objects\n\tnext *T\n\t// duplicate of next, to stress test the pointer buffers\n\t// used during stack tracing.\n\tnext2 *T\n}\n\nfunc main() {\n\tmakelist(nil, 10000)\n}\n\nfunc makelist(x *T, n int64) {\n\tif n%2 != 0 {\n\t\tpanic(\"must be multiple of 2\")\n\t}\n\tif n == 0 {\n\t\truntime.GC()\n\t\ti := int64(1)\n\t\tfor ; x != nil; x, i = x.next, i+1 {\n\t\t\t// Make sure x.data hasn't been collected.\n\t\t\tif got := *x.data; got != i {\n\t\t\t\tpanic(fmt.Sprintf(\"bad data want %d, got %d\", i, got))\n\t\t\t}\n\t\t}\n\t\treturn\n\t}\n\t// Put 2 objects in each frame, to test intra-frame pointers.\n\t// Use both orderings to ensure the linked list isn't always in address order.\n\tvar a, b T\n\tif n%3 == 0 {\n\t\ta.data = newInt(n)\n\t\ta.next = x\n\t\ta.next2 = x\n\t\tb.data = newInt(n - 1)\n\t\tb.next = &a\n\t\tb.next2 = &a\n\t\tx = &b\n\t} else {\n\t\tb.data = newInt(n)\n\t\tb.next = x\n\t\tb.next2 = x\n\t\ta.data = newInt(n - 1)\n\t\ta.next = &b\n\t\ta.next2 = &b\n\t\tx = &a\n\t}\n\tmakelist(x, n-2)\n}\n\n// big enough and pointer-y enough to not be tinyalloc'd\ntype NotTiny struct {\n\tn int64\n\tp *byte\n}\n\n// newInt allocates n on the heap and returns a pointer to it.\nfunc newInt(n int64) *int64 {\n\th := &NotTiny{n: n}\n\tp := &h.n\n\tescape = p\n\treturn p\n}\n\nvar escape *int64"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.630386,"math_prob":0.9860325,"size":1568,"snap":"2021-31-2021-39","text_gpt3_token_len":530,"char_repetition_ratio":0.11700767,"word_repetition_ratio":0.0,"special_character_ratio":0.39604592,"punctuation_ratio":0.15568863,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9752071,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-04T13:19:06Z\",\"WARC-Record-ID\":\"<urn:uuid:449133f3-d7d5-4cd0-a109-6096519b175a>\",\"Content-Length\":\"25133\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fe4d66dc-f5aa-40da-bb63-32089b831adf>\",\"WARC-Concurrent-To\":\"<urn:uuid:bb6d1ecb-dc61-4365-98f1-2176e0266cb4>\",\"WARC-IP-Address\":\"173.194.66.82\",\"WARC-Target-URI\":\"https://go.googlesource.com/go/+/e6583dc95375c4e266bffab6f8888e8e557b6355/test/stackobj2.go?autodive=0%2F\",\"WARC-Payload-Digest\":\"sha1:3JBTKWYFFMV7HI534ADZNKJYAGQCQLMJ\",\"WARC-Block-Digest\":\"sha1:FC6XH42ES5VV4KOXSZGU5UO4K3VUSPVA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154805.72_warc_CC-MAIN-20210804111738-20210804141738-00330.warc.gz\"}"} |
https://help.scilab.org/docs/5.3.1/pt_BR/cdff.html | [
"# cdff\n\ncumulative distribution function F distribution\n\n### Calling Sequence\n\n```[P,Q]=cdff(\"PQ\",F,Dfn,Dfd)\n[F]=cdff(\"F\",Dfn,Dfd,P,Q);\n[Dfn]=cdff(\"Dfn\",Dfd,P,Q,F);\n[Dfd]=cdff(\"Dfd\",P,Q,F,Dfn)```\n\n### Arguments\n\nP,Q,F,Dfn,Dfd\n\nfive real vectors of the same size.\n\nP,Q (Q=1-P)\n\nThe integral from 0 to F of the f-density. Input range: [0,1].\n\nF\n\nUpper limit of integration of the f-density. Input range: [0, +infinity). Search range: [0,1E300]\n\nDfn\n\nDegrees of freedom of the numerator sum of squares. Input range: (0, +infinity). Search range: [1E-300, 1E300]\n\nDfd\n\nDegrees of freedom of the denominator sum of squares. Input range: (0, +infinity). Search range: [1E-300, 1E300]\n\n### Description\n\nCalculates any one parameter of the F distribution given values for the others.\n\nFormula 26.6.2 of Abramowitz and Stegun, Handbook of Mathematical Functions (1966) is used to reduce the computation of the cumulative distribution function for the F variate to that of an incomplete beta.\n\nComputation of the other parameters involves a search for a value that produces the desired value of P. The search relies on the monotonicity of P with the other parameter.\n\nThe value of the cumulative F distribution is not necessarily monotone in either degrees of freedom. There thus may be two values that provide a given CDF value. This routine assumes monotonicity and will find an arbitrary one of the two values.\n\nFrom DCDFLIB: Library of Fortran Routines for Cumulative Distribution Functions, Inverses, and Other Parameters (February, 1994) Barry W. Brown, James Lovato and Kathy Russell. The University of Texas."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.61654854,"math_prob":0.9567434,"size":2312,"snap":"2019-26-2019-30","text_gpt3_token_len":713,"char_repetition_ratio":0.11915078,"word_repetition_ratio":0.057742782,"special_character_ratio":0.26773357,"punctuation_ratio":0.16203703,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9849151,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-22T18:52:48Z\",\"WARC-Record-ID\":\"<urn:uuid:7dc8764a-7640-47ce-ae15-17d5146a75ba>\",\"Content-Length\":\"25723\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c8b7b032-c5cb-4661-8a68-75825cc76db9>\",\"WARC-Concurrent-To\":\"<urn:uuid:fe48a31f-abd4-4118-bcd7-3870c7fd1f8d>\",\"WARC-IP-Address\":\"176.9.3.186\",\"WARC-Target-URI\":\"https://help.scilab.org/docs/5.3.1/pt_BR/cdff.html\",\"WARC-Payload-Digest\":\"sha1:7QBKGWLWD7IKS3DCYY3KZ4LPTHNF5PVH\",\"WARC-Block-Digest\":\"sha1:JCEPY3KPKXCR6ZG4UCMOTT4TCSJMLSOD\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195528208.76_warc_CC-MAIN-20190722180254-20190722202254-00134.warc.gz\"}"} |
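For readers without Scilab, the P-from-F direction of cdff can be reproduced with a short stdlib-Python sketch. Everything here is my own illustrative choice (Simpson integration of the density and a bisection search standing in for cdff's monotone search); it is not Scilab code, and it assumes dfn >= 2 so the density stays bounded at 0.

```python
from math import gamma

def f_pdf(x, dfn, dfd):
    """Density of the F distribution with dfn and dfd degrees of freedom."""
    c = gamma((dfn + dfd) / 2) / (gamma(dfn / 2) * gamma(dfd / 2))
    c *= (dfn / dfd) ** (dfn / 2)
    return c * x ** (dfn / 2 - 1) * (1 + dfn * x / dfd) ** (-(dfn + dfd) / 2)

def f_cdf(F, dfn, dfd, steps=4000):
    """P = integral of the f-density from 0 to F (composite Simpson's rule)."""
    if F <= 0:
        return 0.0
    h = F / steps
    total = f_pdf(1e-12, dfn, dfd) + f_pdf(F, dfn, dfd)  # nudge off x = 0
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * f_pdf(i * h, dfn, dfd)
    return total * h / 3

def f_quantile(P, dfn, dfd, lo=0.0, hi=1e6, tol=1e-9):
    """Invert the CDF for F by bisection -- the same kind of monotone search
    cdff performs when asked for a parameter other than P and Q."""
    while hi - lo > tol * max(1.0, lo):
        mid = (lo + hi) / 2
        if f_cdf(mid, dfn, dfd) < P:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(f_cdf(1.0, 2, 2))   # ≈ 0.5: with dfn == dfd the median of F is 1
```

A production implementation would use the incomplete-beta reduction the documentation mentions (formula 26.6.2 of Abramowitz and Stegun) rather than brute-force quadrature.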
https://mathoverflow.net/questions/251720/more-refined-versions-of-brunn-minkowski-inequality-and-or-pr%C3%A9kopa-leindler-ineq | [
"More refined versions of Brunn–Minkowski inequality and/or Prékopa–Leindler inequality\n\nThe Brunn-Minkowski inequality lower bounds the measure of a Minkowski sum by the measures of the summands. Its statement reads as follows:\n\nLet $n \\ge 1$ and let $\\mu$ denote the Lebesgue measure on $\\mathbb R^n$. Let $A$ and $B$ be two nonempty compact subsets of $\\mathbb R^n$. Then\n\n$$\\mu (A + B)^{1/n} \\geq \\mu (A)^{1/n} + \\mu (B)^{1/n},$$\n\nwhere $A + B$ denotes the Minkowski sum:\n\n$$A + B := \\{\\, a + b \\in \\mathbb{R}^{n} \\mid a \\in A,\\ b \\in B\\,\\}.$$\n\nHowever, the inequality loses a lot of strength if one or both of $A$ and $B$ is thin along some direction. For example, in $\\mathbb R^2$, if $A$ is the segment connecting $(0, 0)$ to $(0, 1)$, and $B$ is the segment connecting $(0, 0)$ to $(1, 0)$, then $A + B$ is the square with corners $(0, 0), (0, 1), (1, 1), (1, 0).$ In the Brunn-Minkowski inequality, the LHS would be 1 but the RHS would be 0, and the \"slack\" in the inequality is very large.\n\nI'm wondering if we know of any \"refinement\" of the Brunn-Minkowski inequality or its integral form, the Prékopa-Leindler inequality, that would fare better in edge cases like the above, where some of the summands are \"thin.\" Both inequalities are tight as they are usually stated, so I'm expecting we would need additional data to compute correction factors, perhaps something related to the shapes of the sets.\n\n• What about the Brunn-Minkowski inequality but with the essential Minkowski sum? In this case $A+_{e}B=\\{z \\in \\mathbb{R}^{n} : \\mu(A \\cap (\\{z\\}-B))\\neq 0\\}$. Clearly if $A$ or $B$ has $n$-dimensional Lebesgue measure zero then always $\\mu(A \\cap (\\{z\\}-B))=0$, and the essential Minkowski sum does not give you anything. So then we have just equality $| A+_{e}B|^{1/n}\\geq |A|^{1/n}+|B|^{1/n}$ instead of inequality. The same thing with Prékopa-Leindler but with essential supremum. 
(see the last section en.wikipedia.org/wiki/Minkowski_addition) – Paata Ivanishvili Oct 9 '16 at 14:48\n\nThe equality cases in the Brunn-Minkowski inequality are:\n\n1. $A$ and $B$ lie in parallel hyperplanes (then all volumes are zero), or\n2. $A$ and $B$ are convex and homothetic.\n\nA strengthening of the inequality should depend on some \"non-homotheticity\" measure of $A$ and $B$. There are some results in this direction, see Section 6.1 in Schneider's \"Convex bodies: the Brunn-Minkowski theory\", right after the proof of the BM inequality. See also Note 2 at the end of that section.\n\nIt does not seem feasible to \"refine\" the format of the inequalities to the \"edge cases\" you mentioned unless one wishes to lose the geometric flavor and elegance of the result. In fact, the B-M inequality fares better (as is done by some in the literature) when stated for (measurable) sets with non-empty interior.\n\nHere is a good survey paper linking B-M to isoperimetric inequalities:\n\nhttp://www.ams.org/journals/bull/2002-39-03/S0273-0979-02-00941-2/S0273-0979-02-00941-2.pdf\n\n• In my application, I essentially have a set-valued dynamical system, something like $A_{n+1} = f(A_n + B)$, and I wish to lower bound the measure of $A_n$ as $n \\to \\infty$. Brunn-Minkowski/Prékopa-Leindler gives lower bounds, but in some cases the bounds get weaker and weaker as $n$ gets large, so that it eventually is too weak to prove something I want. I'm hoping I can use additional information about the shapes of the sets, or something similar, to strengthen the lower bounds. – SorcererofDM Oct 9 '16 at 6:13"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.90874827,"math_prob":0.9990312,"size":1297,"snap":"2019-43-2019-47","text_gpt3_token_len":405,"char_repetition_ratio":0.12993039,"word_repetition_ratio":0.025751073,"special_character_ratio":0.3238242,"punctuation_ratio":0.13261649,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9996761,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-23T11:46:52Z\",\"WARC-Record-ID\":\"<urn:uuid:90c8c109-8c06-4c37-a022-f56c65c7913c>\",\"Content-Length\":\"123900\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cc02fc4e-95d5-4088-a0d5-9c8459852c8a>\",\"WARC-Concurrent-To\":\"<urn:uuid:0512ba4c-c71d-4238-99dd-79875a143467>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://mathoverflow.net/questions/251720/more-refined-versions-of-brunn-minkowski-inequality-and-or-pr%C3%A9kopa-leindler-ineq\",\"WARC-Payload-Digest\":\"sha1:N6VCVWSC4BH44JEYAMJCZJK3BFNMEH6Q\",\"WARC-Block-Digest\":\"sha1:5BYR43MWBNCRXMTB4CCV2FEDI4UL5FC5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987833089.90_warc_CC-MAIN-20191023094558-20191023122058-00474.warc.gz\"}"} |
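The inequality in this record is easy to sanity-check numerically. The sketch below discretizes two rectangles on a lattice and compares both sides; the grid spacing, the rectangles, and the cell-counting approximation of Lebesgue measure are all arbitrary illustrative choices.

```python
from itertools import product

def minkowski_sum(A, B):
    """Minkowski sum of two finite point sets in Z^n."""
    return {tuple(a + b for a, b in zip(p, q)) for p, q in product(A, B)}

# Lattice approximations of A = [0,1] x [0,2] and B = [0,2] x [0,1]
# with grid spacing h, so each lattice point stands for a cell of area h*h.
h, n = 0.1, 10
A = {(i, j) for i in range(n) for j in range(2 * n)}
B = {(i, j) for i in range(2 * n) for j in range(n)}
S = minkowski_sum(A, B)                     # approximates [0,3] x [0,3]

area = lambda pts: len(pts) * h * h
lhs = area(S) ** 0.5                        # mu(A + B)^(1/n), n = 2
rhs = area(A) ** 0.5 + area(B) ** 0.5       # mu(A)^(1/n) + mu(B)^(1/n)
print(lhs, rhs)                             # Brunn-Minkowski: lhs >= rhs
```

For the degenerate example in the question (two orthogonal segments), area(A) = area(B) = 0 while the sum is a full square, so the right-hand side collapses to 0 exactly as described.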
https://coq.inria.fr/distrib/current/stdlib/Coq.Program.Basics.html | [
"# Library Coq.Program.Basics\n\nStandard functions and combinators.\nProofs about them require functional extensionality and can be found in Combinators.\nAuthor: Matthieu Sozeau Institution: LRI, CNRS UMR 8623 - University Paris Sud\nThe polymorphic identity function is defined in Datatypes.\n\nFunction composition.\n\nDefinition compose {A B C} (g : B -> C) (f : A -> B) :=\nfun x : A => g (f x).\n\n#[global]\nHint Unfold compose : core.\n\nNotation \" g ∘ f \" := (compose g f)\n(at level 40, left associativity) : program_scope.\n\nLocal Open Scope program_scope.\n\nThe non-dependent function space between A and B.\n\nDefinition arrow (A B : Type) := A -> B.\n\nLogical implication.\n\nDefinition impl (A B : Prop) : Prop := A -> B.\n\nThe constant function const a always returns a.\n\nDefinition const {A B} (a : A) := fun _ : B => a.\n\nThe flip combinator reverses the first two arguments of a function.\n\nDefinition flip {A B C} (f : A -> B -> C) x y := f y x.\n\nApplication as a combinator.\n\nDefinition apply {A B} (f : A -> B) (x : A) := f x.\n\nCurryfication of prod is defined in Logic.Datatypes."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6443333,"math_prob":0.8903812,"size":811,"snap":"2021-21-2021-25","text_gpt3_token_len":202,"char_repetition_ratio":0.13258983,"word_repetition_ratio":0.0,"special_character_ratio":0.2379778,"punctuation_ratio":0.18791947,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99409395,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-17T00:58:45Z\",\"WARC-Record-ID\":\"<urn:uuid:bd294413-3717-4830-a7d7-5ed6aef487b9>\",\"Content-Length\":\"13239\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:57b637bf-6a73-4bad-a973-7fa392fcacbd>\",\"WARC-Concurrent-To\":\"<urn:uuid:e1b87bd3-e320-4d73-8e8b-c15314a8a842>\",\"WARC-IP-Address\":\"51.91.56.51\",\"WARC-Target-URI\":\"https://coq.inria.fr/distrib/current/stdlib/Coq.Program.Basics.html\",\"WARC-Payload-Digest\":\"sha1:VDTLBCBFSM3FF4APDYKUSC2TC63N4QYB\",\"WARC-Block-Digest\":\"sha1:E7DNOVYHDFLNUVN6FMFQYFG6PEE4GJUO\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991921.61_warc_CC-MAIN-20210516232554-20210517022554-00053.warc.gz\"}"} |
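The combinators in the Coq module above have direct analogues in most languages; here is an illustrative Python translation (mine, not part of the Coq distribution) that mirrors compose, const, flip, and apply.

```python
def compose(g, f):
    """(g ∘ f)(x) = g(f(x)), like Coq's compose."""
    return lambda x: g(f(x))

def const(a):
    """const a always returns a, ignoring its argument."""
    return lambda _b: a

def flip(f):
    """Reverse the first two arguments of f."""
    return lambda x, y: f(y, x)

def apply(f, x):
    """Application as a combinator."""
    return f(x)

inc = lambda n: n + 1
double = lambda n: 2 * n
print(compose(double, inc)(3))          # 8, i.e. double(inc(3))
print(flip(lambda a, b: a - b)(2, 10))  # 8, i.e. 10 - 2
print(const(42)("ignored"))             # 42
print(apply(inc, 1))                    # 2
```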
https://bioinformatics.stackexchange.com/tags/pyranges/hot | [
"# Tag Info\n\n4\n\nAnswer to main question:\n\nimport pyranges as pr\n\nassert int(pr.__version__.split(\".\")[-1]) >= 93, \"pip install pyranges==0.0.93\"\n\nimport numpy as np\n\nnp.random.seed(42 * 10)\n\n# create large df to test on\ngr = pr.random(int(1e5), length=1, chromsizes={\"chr1\": 249250621})\ngr.Score = np.random.randint(250, size=len(gr))\n\nresult = gr....\n\n3\n\nFollowing up on zorbax's answer, you could read in and filter the GTF file in this way, among others:\n\n#!/usr/bin/env python\nimport gtfparse as gp\n\ngtf_file = \"test.gtf\"\ntest_list = [\"PCNA\", \"USP21\", \"USP1\"]\n\ndf = gp.read_gtf(gtf_file)\nsubset = df[df['gene_name'].str.contains('|'.join(test_list))]\nprint(subset)\n\nThe ...\n\n2\n\nIf you have your gtf file like a DataFrame you can use:\n\ndf[df['gene_name'].str.contains('|'.join(test_list))]\n\n2\n\nSetup:\n\nimport pandas as pd\nimport pyranges as pr\nfrom pyranges import PyRanges\nfrom scipy.stats import fisher_exact\nimport numpy as np\n\n# ! zcat dataset1.tsv.gz | head -2\n# chromosome start end num_motifs_in_group called_sites called_sites_methylated methylated_frequency group_sequence\n# chr21 5010053 5010053 1 3 0 0.000 CACCACGTCCA\n# ...\n\n1\n\nUnless you need to use Python and can't use subprocess, here's a quick CLI one-liner which sums signal from sorted BED5 files, over the genomic space where they overlap:\n\n\\$ bedmap --echo --sum --delim '\\t' <(bedops --merge A.bed B.bed ... N.bed) <(bedops --everything A.bed B.bed ... N.bed) > answer.bed\n\n(Requires bash for process substitutions.)\n\n1\n\nIt turned out to be a lot easier if I used grep to do this:\n\ngrep -w -f genes.txt gencode.v19.annotation.gtf > sub_set.gtf\n\ngenes.txt contains gene symbols on new lines.\n\n1\n\nQuick and dirty (not to mention atrociously bad O(n)) solution:\n\nimport pyranges as pr\nimport numpy as np\n\nnp.random.seed(42 * 10)\n\n# create large df to test on\ngr = pr.random(int(1e5), length=10000, chromsizes={\"chr1\": 249250621})\ngr.Score = np.random.randint(250, size=len(gr))\n\ndef remove_worst_scores_until_no_overlap(gr):\n    df = gr.df\n    ..."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.72146857,"math_prob":0.63009524,"size":2401,"snap":"2021-43-2021-49","text_gpt3_token_len":710,"char_repetition_ratio":0.09887359,"word_repetition_ratio":0.10344828,"special_character_ratio":0.33069554,"punctuation_ratio":0.19373778,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9814777,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-21T21:02:27Z\",\"WARC-Record-ID\":\"<urn:uuid:dc8917b5-082f-467e-8193-172891876d83>\",\"Content-Length\":\"108147\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4ee4d09a-363a-458f-8b20-1360dd0f2075>\",\"WARC-Concurrent-To\":\"<urn:uuid:072a5277-789b-4241-97f0-191a0dfb54a3>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://bioinformatics.stackexchange.com/tags/pyranges/hot\",\"WARC-Payload-Digest\":\"sha1:YPJFV3CHRDN7BRGHEO47YV23HOEXTA6K\",\"WARC-Block-Digest\":\"sha1:LY25AAJWWASXQQYY67T6DPQ5RIVMDAJN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585441.99_warc_CC-MAIN-20211021195527-20211021225527-00185.warc.gz\"}"} |
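The last answer's remove-overlaps idea can also be sketched without pyranges. The following is a greedy best-first variant (keep an interval only if nothing higher-scoring already kept overlaps it), which is not the same iterative worst-removal as the quoted answer but targets the same "no overlapping intervals" goal; the interval tuples are half-open and entirely made up.

```python
def keep_best_nonoverlapping(intervals):
    """intervals: list of (start, end, score) half-open tuples on one
    chromosome.  Visit them from highest to lowest score and keep each
    one only if it overlaps nothing already kept."""
    kept = []
    for start, end, score in sorted(intervals, key=lambda t: -t[2]):
        if all(end <= s or start >= e for s, e, _ in kept):
            kept.append((start, end, score))
    return sorted(kept)

demo = [(0, 10, 5), (5, 15, 9), (14, 20, 1), (30, 40, 7)]
print(keep_best_nonoverlapping(demo))
# [(5, 15, 9), (30, 40, 7)]: the score-9 interval displaces both neighbours
```

Sorting the kept list at the end restores genomic order; for multi-chromosome data you would run this per chromosome.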
https://online.stat.psu.edu/stat501/book/export/html/899 | [
"# 2.4 - Sums of Squares (continued)\n\nNow, let's do a similar analysis to investigate the research question, \"Is there a (linear) relationship between height and grade point average?\" (Height and GPA data)\n\nReview the following scatterplot and estimated regression line. What does the plot suggest for answering the above research question? In this case, it appears as if there is almost no relationship whatsoever. The estimated slope is almost 0.",
null,
"Again, we can answer the research question using the P-value of the t-test for:\n\n• testing the null hypothesis $$H_{0} \\colon \\beta_{1} = 0$$\n• against the alternative hypothesis $$H_{A} \\colon \\beta_{1} \\neq 0$$.\n\nAs the Minitab output below suggests, the P-value of the t-test for \"height\" is 0.761. There is not enough statistical evidence to conclude that the slope is not 0. We conclude that there is no linear relationship between height and grade point average.\n\nThe Minitab output also shows the analysis of variance table for this data set. Again, the P-value associated with the analysis of variance F-test, 0.761, appears to be the same as the P-value, 0.761, for the t-test for the slope. The F-test similarly tells us that there is insufficient statistical evidence to conclude that there is a linear relationship between height and grade point average.\n\n##### Analysis of Variance\n| Source | DF | Adj SS | Adj MS | F-Value | P-Value |\n| --- | --- | --- | --- | --- | --- |\n| Regression | 1 | 0.0276 | 0.0276 | 0.09 | 0.761 |\n| Residual Error | 33 | 9.7055 | 0.2941 | | |\n| Total | 34 | 9.7331 | | | |\n\n##### Model Summary\n\nS = 0.5423 R-Sq = 0.3% R-Sq (adj) = 0.0%\n\n##### Coefficients\n| Predictor | Coef | SE Coef | T-Value | P-Value |\n| --- | --- | --- | --- | --- |\n| Constant | 3.410 | 1.435 | 2.38 | 0.023 |\n| height | -0.00656 | 0.02143 | -0.31 | 0.761 |\n\n##### Regression Equation\n\ngpa = 3.41 - 0.0066 height\n\nThe scatter plot of grade point average and height appears below, now adorned with the three labels:\n\n• $$y_{i}$$ denotes the observed grade point average for student i\n• $$\\hat{y}_i$$ is the estimated regression line (solid line) and therefore denotes the estimated grade point average for the height of student i\n• $$\\bar{y}$$ represents the \"no relationship\" line (dashed line) between height and grade point average. It is simply the average grade point average of the sample.\n\nFor this data set, note that the estimated regression line and the \"no relationship\" line are very close together. Let's see how the sums of squares summarize this point.",
null,
"$$\\sum_{i=1}^{n}(\\hat{y}_i-\\bar{y})^2 =0.0276$$\n\n$$\\sum_{i=1}^{n}(y_i-\\hat{y}_i)^2 =9.7055$$\n\n$$\\sum_{i=1}^{n}(y_i-\\bar{y})^2 =9.7331$$\n\n• The \"total sum of squares,\" which again quantifies how much the observed grade point averages vary if you don't take into account height, is $$\\sum_{i=1}^{n}(y_i-\\bar{y})^2 =9.7331$$.\n• The \"regression sum of squares,\" which again quantifies how far the estimated regression line is from the no relationship line, is $$\\sum_{i=1}^{n}(\\hat{y}_i-\\bar{y})^2 =0.0276$$.\n• The \"error sum of squares,\" which again quantifies how much the data points vary around the estimated regression line, is $$\\sum_{i=1}^{n}(y_i-\\hat{y}_i)^2 =9.7055$$.\n\nIn short, we have illustrated that the total variation in the observed grade point averages y (9.7331) is the sum of two parts — variation \"due to\" height (0.0276) and variation due to random error (9.7055). Unlike the last example, most of the variation in the observed grade point averages is just due to random error. It appears as if very little of the variation can be attributed to the predictor height.\n\n## Try It!\n\n### Sums of Squares\n\nSome researchers at UCLA conducted a study on cyanotic heart disease in children. They measured the age at which the child spoke his or her first word (x, in months) and the Gesell adaptive score (y) on a sample of 21 children. 
Upon analyzing the resulting data, they obtained the following analysis of variance table:\n\n##### Analysis of Variance\nSource DF Adj SS Adj MS F-Value P-Value\nConstant 1 1604.08 1604.08 13.20 0.002\nResidual Error 19 2308.59 121.50\nTotal 20 3912.67\nWhich number quantifies how much the observed scores vary if you don't take into account the age at which the child first spoke?\n##### Analysis of Variance\nSource DF Adj SS Adj MS F-Value P-Value\nConstant 1 1604.08 1604.08 13.20 0.002\nResidual Error 19 2308.59 121.50\nTotal 20 3912.67\nWhich number quantifies how far the estimated regression line is from the \"no trend\" line?\n##### Analysis of Variance\nSource DF Adj SS Adj MS F-Value P-Value\nConstant 1 1604.08 1604.08 13.20 0.002\nResidual Error 19 2308.59 121.50\nTotal 20 3912.67\nWhich number quantifies how much the scores vary around the estimated regression line?\n##### Analysis of Variance\nSource DF Adj SS Adj MS F-Value P-Value\nConstant 1 1604.08 1604.08 13.20 0.002\nResidual Error 19 2308.59 121.50\nTotal 20 3912.67\n\n Link ↥ Has Tooltip/Popover Toggleable Visibility"
]
| [
null,
"https://online.stat.psu.edu/onlinecourses/sites/stat501/files/03anova/scatterplot_ht_gpa_01.png",
null,
"https://online.stat.psu.edu/onlinecourses/sites/stat501/files/03anova/height_gpa_plot2.gif",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.86740404,"math_prob":0.98825467,"size":4489,"snap":"2021-21-2021-25","text_gpt3_token_len":1333,"char_repetition_ratio":0.116165,"word_repetition_ratio":0.20694645,"special_character_ratio":0.33949655,"punctuation_ratio":0.120042875,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997167,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-25T09:37:14Z\",\"WARC-Record-ID\":\"<urn:uuid:bdf89b90-d4bf-4990-9d3e-7f41c86f6472>\",\"Content-Length\":\"18297\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:70425953-e37c-40e9-9c7f-e346f2416a82>\",\"WARC-Concurrent-To\":\"<urn:uuid:c3afa9d3-9794-436b-950d-4c936d11c36a>\",\"WARC-IP-Address\":\"128.118.15.226\",\"WARC-Target-URI\":\"https://online.stat.psu.edu/stat501/book/export/html/899\",\"WARC-Payload-Digest\":\"sha1:6SFNZXXNJ54TNMOTUVQHC4KMTJS3VR4G\",\"WARC-Block-Digest\":\"sha1:SPMLGWCPJDSI3IGA3CAWCT2OHDQQJDNX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487630081.36_warc_CC-MAIN-20210625085140-20210625115140-00442.warc.gz\"}"} |
https://brainmass.com/business/capital-asset-pricing-model/$%7Bcat.url%7D | [
"Explore BrainMass\n\n# The Capital Asset Pricing Model (CAPM)\n\nMany of the concepts we use in finance involve discounting risky cash flows using a discount rate appropriate for the level of risk associated with the investment. Investors will only hold risky securities or invest in risky projects if the expected rate of return on an investment is at least as much as the discount rate. This is because all investors are risk-adverse, that is, when they take on additional risk they expect to be compensated. This maxim is fundamental for an efficient market.\n\nTherefore, in order to value a company's stock we need to find a way to measure its risk. Typically, we use variance or standard deviation to measure the risk of an individual stock. However, in the real world, investors hold more than one stock. When investors hold a portfolio of stocks, the risk of an individual stock becomes less important - what we really want to know is how an individual stock contributes to the overall risk of the portfolio. This is because the larger a portfolio is, the more an investor can diversify away her risk. As a result, most investors would want to diversify their portfolios as much as possible. Knowing this, we can imagine that an investor would expect a return from a security based on the level of risk each security contributes to a large portfolio. If we assume that each investor holds a very large portfolio, we can use the market portfolio in our calculation for the expected return of a stock. The market portfolio is a hypothetical portfolio consisting of every stock on a market.\n\nThe capital asset pricing model tells us that an investor's expected return on a security is equal to the market's risk free rate of return (what the investor would expect to recieve as a return on a riskless investment such as a T-Bill) plus some amount of additional return the investor expects for investing in a risky stock, often called a risk premium. 
A stock's risk premium is equal to the risk premium of the market porfolio (expected return on the market portfolio minus the risk free rate) adjusted for or multiplied by the stock's Beta, which tells us how much risk the stock would contribute to a market porfolio. We can see from this formula that the Beta of the market portfolio itself is one, that a stock with a Beta less than one would require a smaller risk premium than the market portfolio, and a risky stock with a Beta of more than one would require a larger risk premium than the market portfolio.",
null,
"Deriving The Capital Asset Pricing Model\n\nThe capital asset pricing model is derived from finding the expected return of an individual stock as well as its variance or standard deviation. Once we know a stock's variance or standard deviation, we can find its covariance or correlation with the expected return of other stocks or the market portfolio. To simplify these steps most students will be given a stock's Beta, which tells us how much the stock will move in response to movements in the market place and is a function of the stock and market's variance. By using Beta, the capital asset pricing model makes it easy for students to find the expected return of a stock if it was part of a large portfolio.\n\nExpected Return (R̄): The rate of return that an investor expects a single stock to earn over the next period. This may be based on the average return of the stock in the past, a detailed analysis of the firm's prospects, a computer simulation, or inside information.\n\nVariance (σ2) and Standard Deviation (σ): To find variance, we look at the deviation of a stocks actual return to its expected return. These deviations are squared, and then averaged to get the variance of a stock. Standard deviation is equal to the square root of variance. We can think of standard deviation as the standardized form of variance.",
null,
"Where,\nR = Actual return\nn = number of states or observations\nR - = deviation of a stocks actual return from its expected return\n\nCovariance (σA,B): When we find variance, we look at the deviations of a stock's actual return from its expected return. To find covariance, we look at these same deviations. However, we know what to know how they relate to the deviations of another stock. To get the covariance, we find the product of the deviations of two stocks in each state. The average of these products is the covariance.",
null,
"Correlation (ρA,B): The correlation is the standardized covariance. It is equal to the covariance divided by the standard deviation of each individual stock. Correlation is always between +1 and -1.",
null,
"Variance of a two-stock portfolio (σP2): The variance of a portfolio consists of the variances of the individual securities, the covariance of the two securities, and the percentage of the portfolio made up by each security (XA and XB). A positive covariance will increase the variance of the entire portfolio. A negative covariance will reduce the overall risk of the portfolio. Creating a portfolio with securities that have a negative covariance so that they offset each other is known as a hedge. For a two-stock portfolio, we have the following variance:\n\nUsing Covariance:",
null,
"Using Correlation:",
null,
"We can extrapolate this to find the variance of the market portfolio. The capital asset pricing model suggests that the market portfolio is the only fund in which an investor needs to invest, along with an investment in a risk-free asset depending on the investor's tolerance of risk.\n\nBeta: Beta measures the responsiveness of a security to movements in the market portfolio. We find beta by finding the covariance of the stock and market portfolio returns, and dividing this by the variance of the market portfolio.",
null,
"Expected return of the market: If an investor wanted zero risk, he could invest in a security such as a US Treasury Bill which offers a risk-free rate of return. Because investors are risk-adverse, when they invest in risky securities, they expect to be compensated for their risk with a rate return higher than the risk-free rate. As a result, we often represent the expected return on the market in the following form, where is the expected return on the market portfolio, and RF is the risk-free rate of return.",
null,
"Expected return on an individual security: The beta of a security reflects the risk of a security as it relates to the market portfolio as a whole. Therefore, using beta we should be able to find the expected return of an individual security if we know the expected return of the market and the risk free rate. Where ( - RF ) = Risk premium. This is the capital asset pricing model (CAPM).",
null,
"There is a positive linear relationship between the beta of a security and its expected return. The market portfolio has a beta of 1, and a stock with a beta of one would have the same expected return as the expected return of the market.",
null,
"Photo by Sammie Vasquez on Unsplash\n\n### CAPM and risk free return\n\nTwo mutual fund managers are being evaluated for their performance in the last ten years. One of them, Mr. Harrods, has achieved an eye-popping 34% annual average return; the other, Ms. Evans, has obtained a modest 12% annual average return. On closer examination of their portfolios, it is found that Mr. Harrods always bet on ri\n\n### Expected return problem\n\nAssume that you can borrow and lend at a riskless rate of 5% and that the tangency portfolio of risky assets has an expected return of 13% and a standard deviation of return of 16%. (a) What is the highest level of expected return that can be obtained if you are willing to take on a standard deviation of returns that is at mo\n\n### Calculation of the Weighted Average Cost of Capital\n\nEstimate your firm's Weighted Average Cost of Capital. Assume that the current risk-free rate of interest is 3.5%, the market risk premium is 5%, and the corporate tax rate is 21%. Debt: Total book value: \\$10 million Total market value: \\$12 million Coupon rate: 6% Yield to Maturity: 5% Common Stock: Total book value: \\$1\n\n### True false questions about CAPM\n\nAre the following statements true or false? (a) Stocks with a beta of zero offer an expected rate of return of zero. (b) The CAPM implies that investors require a higher return to hold highly volatile securities. (c) You can construct a portfolio with a beta of 0.75 by investing 0.75 of the investment budget in bills an\n\n### Short Term Financial Risk Concepts\n\nExamine the concept of financial risk by answering the following questions: (a) How does the risk of a portfolio change as the number of assets in the portfolio increases? (b) Provide an example of a unique risk that can be reduced by portfolio diversification. 
(c) Provide an example of a market risk that cannot be reduced by po\n\n### Finance: CAPM\n\n1: Expected Return: Discrete Distribution The market and Stock J have the following probability distributions: Probability Rm Rj 0.27 12.25% 25.40% 0.44 6.65 10.36 0.29 21.30 36.57 (a) Calculate the expected rates of return for the market and (b) Calculate the standard deviations for the Stock J. Work with at\n\n### Financial Ratio Analysis for Apple Inc\n\nReview the balance sheet and income statement of Apple Inc's 2015 Annual Report. Calculate the following ratios using Microsoft® Excel®: Current Ratio Quick Ratio Debt Equity Ratio Inventory Turnover Ratio Receivables Turnover Ratio Total Assets Turnover Ratio Profit Margin (Net Margin) Ratio Return on Assets Ratio\n\n### Capital Asset Pricing Model and Stock Valuation and Growth Rate\n\nUsing data from our fictitious Company, MT 217 (from attached sheet), we will calculate the expect value of its stock using the Constant Growth Model (attached): Po = D1/(r - g) To do that we will have to estimate the vales of r, g, and D1. To estimate the value of r we will use the Capital Asset Pricing Model: CAPM = R\n\n### Securities and Portfolio Return Computations\n\nBeginning Stock Price \\$73 Ending Stock Price \\$82 Dividend \\$1.20 Percentage Total Return = #NAME? CHAPTER 10: PROBLEM 12 Stock Return the past 5 years -18.35% 14.72% 28.47% 6.48% 16.81% Holding Period Return for the Stock = #NAME? (Note: Subtract your answer by 1 to\n\n### CAPM Analysis\n\nBriefly set out arguments in favour of - and against - the Capital Asset Pricing Model (CAPM), outline its uses and make a critique of its underlying assumptions.\n\n### Hedge Fund Return Analysis\n\n1) What is the average annual (historic) return on the hedge fund? Note the returns are reported monthly. Multiply by 12 to annual returns. Need to answer with annual returns. 
Answer should go to 1 place behind decimal (ie: 10% expressed as 10.0) 2) What is the average (historic) return on the stock market? Note the re\n\n### CAPM portfoio\n\nDownload the most recent 5 years of monthly data for VTI, Proctor and Gamble (PG), Exxon Mobil (XOM), Apple (AAPL), Alcoa (AA), Century Aluminum (CENX) and the 3-month T-Bill (^IRX) using Yahoo Finance. VTI is a low-cost ETF that tracks the Wilshire 5000 index and is our proxy for the market return. For the stocks and the ET\n\n### CAPM and the Constant Dividend Growth Model\n\nCapital Asset Pricing Model The Capital Asset Pricing Model (CAPM) is a powerful analytical tool used for calculating the price of common stock. After reflecting on theory and application of the CAPM model and reviewing the prior work on the Constant Dividend Growth Model post a one paragraph response to the following questions\n\n### Calculation of Required Rate of Return and Asset Beta\n\nPlease help with the following problems. Provide step by step calculations for each. Acme currently has a capital structure of 20% debt to total assets, based on current market values. The current debt is riskless and more debt can be taken on, up to a limit of 35% debt, without making the debt risky and losing the firm's ab\n\n### The Efficiency of the Market Portfolio\n\nQuestion 1 The Debt Cost of Capital 14. In mid-2012, Ralston Purina had AA-rated, 10-year bonds outstanding with a yield to maturity of 2.05%. a. What is the highest expected return these bonds could have? b. At the time, similar maturity Treasuries have a yield of 1.5%. Could these bonds actually have an expected return e\n\n### DCF Method, CAPM Method, Investment Projects\n\nPlease help with determining ratios. The attached spread sheet is FEDEX 3 year financial statements. Trying to figure out the ratios. 
The one that I figured out is not coming close to what Morning Star: http://financials.morningstar.com/ratios/r.html?t=FDX®ion=usa&culture=en-US or MSN Money:http://investing.money.msn.com/inve\n\n### Finance Problems: Required Rate of Return\n\n2. Required Rate of Return AA Industries stock has a beta of 0.8. The risk-free rate is 4% and the expected return on the market is 12%. What is the required rate of return on AA's stock? 10. Portfolio Required Return Suppose you manage a \\$4 million fund that consists of four stocks with the following investments: Stock\n\n### Explaining Cost of Capital, Risk & Return, Hurdle Rate, Cost Structure, Depreciation, Call Options\n\nPlease include in-text citations and references used. Thank you. 1. Cost of Capital - If you were going to start a company, let's say a restaurant, and I was going to take \\$400,000 to get it opened, how would you finance the initial investment? Things to consider are debt, equity, terms, and sources. 2. Risk & Return - G\n\n### Portfolio Optimization Questions\n\n1. Evaluate whether the following statements are true or false. a) Even if a risky security has a return lower than the risk-free rate, this security could be held for diversification purposes. b) The Glass-Steagall Act of 1933 separated commercial banking from investment banking. c) If returns on two stocks are perfectly p\n\n### CAPM, SML, and Investors\n\nThe Capital Asset Pricing Model (CAPM) is a widely used concept in finance. The model is expressed graphically by the Security Market Line (SML). Within the context of investment, explain how CAPM can be useful to investors.\n\n### Capital Asset Pricing Model Questions\n\nConsider the following information: Stock A Stock B T-bills Beta 0.6 1.2 0.0 Expected return, % 5.0 8.0 2.0 (a) Assuming that all stocks are priced correctly according to the CAPM, compute the expected return on the market portfolio. 
(c) Is it possible for\n\n### The Capital Asset Pricing Model & CAPM\n\nIn one page explain what you think is the main 'message' of the Capital Asset Pricing Model to corporations and what is the main message of the CAPM to investors?\n\n### Corporate Finance and Pricing Models\n\nI need 100 word original notes in answering the following questions: 1. What is operating leverage and how does it influence a project? 2. What are the two methods for estimating debit cost of capital, and what do you do when there is default risk? Explain the circumstances in which you would use each method. 3. In what\n\n### Cost of Equity for Google\n\n1. Show the work you did to obtain the cost of equity for Google.. 2. Is this cost of equity higher or lower than you expected? The average cost of capital for a firm in the S&P 500 is 8.2 percent. Would you think your firm should have a lower or a higher cost of capital than the average firm? 3. Look up the betas for some of\n\n### Calculating cost of equity using capital asset pricing model\n\nCurrent Yield to Maturity (YTM) on a U.S. Government bond that matures based on the Treasury Bill Rate for 1 year is 0.10 and for 13 weeks is 0.02. For Amazon.com the following is assumed: Beta 0.77 RF 5% RF = 1 RM =5 RM - RF= 4 What is the cost of equity for Amazon.com? Based on the Betas of Ebay and Overstock.com com\n\n### Computation of rate of returns for Procter & Gamble\n\nI need assistance with the following assignment: Estimating the cost of equity or the rate of return that Procter and Gamble's shareholders 'require'. The CAPM states the following equilibrium relationship between the (excess) rate of return that shareholders of a particular company \"j\" require (or actually in some sense 'de\n\n### Which of the following projects should the firm accept?\n\n2. Bloom and Co. has no debt or preferred stockit uses only equity capital, and has two equally- sized divisions. 
Division X's cost of capital is 10.0%, Division Y's cost is 14.0%, and the corporate (composite) WACC is 12.0%. All of Division X's projects are equally risky, as are all of Division Y's projects. However, the pr\n\n### Evaluation of Analysis Measures\n\nFor an organization owning multiple assets where their core business is not real estate is CAPM recommended to use or not for measurement as a good indicator of an assets performance? Why or why not? How do, - Risk-free rate of return - Beta (as a risk measure) - Expected market risk premium affect this?\n\n### Solved examples on Index Models, CAPM and Arbitrage Pricing Theory\n\nQuestion 1 Index Models: Download 61 months (October 2008 to October 2013) of monthly data for the S&P 500 index (symbol = ^GSPC). Download 61 months (October 2008 to October 2013) of Apple Inc. data and 61 months (October 2008 to October 2013) of Exxon Mobil Corporation data. Download 60 months (November 2008 to October 2013)\n\n### The Capital Asset Pricing Model - Beta\n\nResearch suggests that the mining sector had a beta of 1.7 while utility companies had a beta of 0.5. Can you explain why there is a difference given beta is determined by cyclicality of revenues, operating and financial leverage?"
]
| [
null,
"https://brainmass.com/hubsimg/1433255/Screen-shot-2013-07-11-at-9.12.35-AM.png",
null,
"https://brainmass.com/hubsimg/1433265/Screen-shot-2013-07-11-at-10.00.27-AM.png",
null,
"https://brainmass.com/hubsimg/1433268/Screen-shot-2013-07-11-at-10.09.17-AM.png",
null,
"https://brainmass.com/hubsimg/1433269/Screen-shot-2013-07-11-at-10.09.24-AM.png",
null,
"https://brainmass.com/hubsimg/1433272/Screen-shot-2013-07-11-at-10.22.47-AM.png",
null,
"https://brainmass.com/hubsimg/1433273/Screen-shot-2013-07-11-at-10.38.51-AM.png",
null,
"https://brainmass.com/hubsimg/1433280/Screen-shot-2013-07-11-at-11.50.24-AM.png",
null,
"https://brainmass.com/hubsimg/1433283/Screen-shot-2013-07-11-at-12.00.41-PM.png",
null,
"https://brainmass.com/hubsimg/1433284/Screen-shot-2013-07-11-at-12.10.12-PM.png",
null,
"https://brainmass.com/hubsimg/1433289/Screen-shot-2013-07-11-at-12.34.28-PM.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9453122,"math_prob":0.91809154,"size":6863,"snap":"2021-31-2021-39","text_gpt3_token_len":1387,"char_repetition_ratio":0.19973756,"word_repetition_ratio":0.04465038,"special_character_ratio":0.19932973,"punctuation_ratio":0.08435583,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9849157,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-27T06:19:28Z\",\"WARC-Record-ID\":\"<urn:uuid:39f115bb-3b4e-41b0-b3d0-cc83df5f4fb9>\",\"Content-Length\":\"351370\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f6eb48ec-97b2-4389-8813-edc34e06702b>\",\"WARC-Concurrent-To\":\"<urn:uuid:29841169-90f2-4a95-b178-d369b01a2e7e>\",\"WARC-IP-Address\":\"172.67.75.38\",\"WARC-Target-URI\":\"https://brainmass.com/business/capital-asset-pricing-model/$%7Bcat.url%7D\",\"WARC-Payload-Digest\":\"sha1:NW4YTGPOWKZISHDMC6QR5WK6ML6SPT6J\",\"WARC-Block-Digest\":\"sha1:53CAV7FELOACEIUUATMKNTHY2HS4FOWK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780058373.45_warc_CC-MAIN-20210927060117-20210927090117-00323.warc.gz\"}"} |
https://cyberleninka.ru/article/n/micro-lif-and-numerical-investigation-of-mixing-in-microchannel | [
"# Micro-LIF and numerical investigation of mixing in microchannelТекст научной статьи по специальности «Физика»",
"CC BY",
"265",
"20\ni Надоели баннеры? Вы всегда можете отключить рекламу.\nКлючевые слова\nМИКРОТЕЧЕНИЕ / МИКРОМИКСЕРЫ / МИКРОКАНАЛЫ / MICROFLOW / MICROMIXERS / MICROCHANNELS / CFD / MICRO-PIV / MICRO-LIF\n\n## Аннотация научной статьи по физике, автор научной работы — Minakov Andrey V., Yagodnitsyna Anna A., Lobasov Alexander S., Rudyak Valery Ya, Bilsky Artur V.\n\nFlow regimes and mixing pattern in a T-type micromixer at high Reynolds numbers were studied by numerical solution of the Navier–Stokes equations and by particle image velocimetry (micro-PIV) and laser induced fluorescence (micro-LIF) experimental measurements. The Reynolds number was varied from 1 to 1000. The cross section of the mixing channel was 200 μm×400 μm, and its length was 3000 μm. Five different flow regimes were identified: (I) steady vortex-free flow; (II) steady symmetric vortex flow with two horseshoe vortices; (III) steady asymmetric vortex flow; (IV) unsteady periodic flow; (V) stochastic flow. Maximum mixing efficiency was obtained for stationary asymmetric vortex flow. In this case, an S-shaped vortex structure formed in the flow field. Good agreement between calculation and experiment was obtained.\n\ni Надоели баннеры? Вы всегда можете отключить рекламу.\n\n## Похожие темы научных работ по физике , автор научной работы — Minakov Andrey V., Yagodnitsyna Anna A., Lobasov Alexander S., Rudyak Valery Ya, Bilsky Artur V.\n\niНе можете найти то, что вам нужно? Попробуйте сервис подбора литературы.\ni Надоели баннеры? Вы всегда можете отключить рекламу.\n\n## Текст научной работы на тему «Micro-LIF and numerical investigation of mixing in microchannel»\n\nJournal of Siberian Federal University. Engineering & Technologies 1 (2013 6) 15-27\n\nУДК 532.5\n\nMicro-LIF and Numerical Investigation of Mixing in Microchannel\n\nAndrey V. Minakova,b*, Anna A. Yagodnitsynab, Alexander S. Lobasova,b, Valery Ya. Rudyakb,c and Artur V. 
Bilsky^b

^a Siberian Federal University, 28 Kirensky Str., Krasnoyarsk, 660074 Russia
^b Institute of Thermophysics SB RAS, 1 Lavrentiev, Novosibirsk, 630090 Russia
^c Novosibirsk State University of Civil Engineering, 113 Leningradskaya Str., 630008 Russia

Flow regimes and mixing pattern in a T-type micromixer at high Reynolds numbers were studied by numerical solution of the Navier-Stokes equations and by particle image velocimetry (micro-PIV) and laser induced fluorescence (micro-LIF) experimental measurements. The Reynolds number was varied from 1 to 1000. The cross section of the mixing channel was 200 μm × 400 μm, and its length was 3000 μm. Five different flow regimes were identified: (I) steady vortex-free flow; (II) steady symmetric vortex flow with two horseshoe vortices; (III) steady asymmetric vortex flow; (IV) unsteady periodic flow; (V) stochastic flow. Maximum mixing efficiency was obtained for stationary asymmetric vortex flow. In this case, an S-shaped vortex structure formed in the flow field. Good agreement between calculation and experiment was obtained.

Keywords: Microflow, Micromixers, Microchannels, CFD, Micro-PIV, Micro-LIF.

## Introduction

Liquid mixing is an important physical process which is widely used in various microfluidic devices. Since the characteristic flow times are usually extremely small, mixing is accelerated using special devices called micromixers. Micromixers are a key element of many MEMS and other devices that are used in various biomedical and chemical technologies, in creating different micro heat-exchangers, microapparatus, etc. The operating principles of micromixers and their optimization have been the subject of a great deal of research (see, for example, [1-4] and references therein). Most papers consider laminar flow at low Reynolds numbers, which are usually characteristic of microflows. In practice, however, there are situations when the Reynolds numbers (Re) of microflows are high enough [5,6].
In addition, at relatively high Reynolds numbers a number of new and interesting phenomena take place in microchannels, requiring study both from a fundamental point of view and for practical purposes.

* Corresponding author E-mail address: [email protected]

Thus, in an experimental study of T-type micromixers, the existence of a critical Reynolds number at which the Dean vortices in a microchannel lose their symmetry was demonstrated. A critical Reynolds number of about 150 was found for channel dimensions of 600 μm × 300 μm × 300 μm, and a strong dependence of the critical Reynolds number on the channel size was shown. Transient flow regimes (at Reynolds numbers Re = 300-700) have been investigated by numerical simulation, but the mixing processes were not studied. The mixing of two fluids in the range of Reynolds numbers from 50 to 1400 was studied experimentally and numerically, and the existence of an unsteady periodic regime for certain values of the Reynolds number was first demonstrated numerically. The most comprehensive experimental study of mixing in a T-shaped microchannel at moderate Reynolds numbers (100-4100) used μ-LIF and μ-PIV measurements of the velocity and concentration fields in various sections of the mixer; there, for the first time, the mixing efficiency was measured.

In spite of the relatively large number of papers covering the study of flow and mixing in T-type micromixers at moderate Reynolds numbers, sufficient systematic data on the flow regimes and the mixing processes taking place in them are still absent. The present work covers systematic modeling of incompressible flow and mixing in a T-type micromixer at Reynolds numbers from 10 to 1000. The problem was solved numerically on the basis of the Navier-Stokes equations for an incompressible fluid. Verification of the simulation data was carried out experimentally.
The measurements were performed by two methods: particle image velocimetry (micro-PIV) and laser induced fluorescence (micro-LIF).

## Mathematical Model and Numerical Algorithm

The incompressible flows of multicomponent Newtonian fluids were considered, whose dynamics is described by the Navier-Stokes equations

$$\frac{\partial \rho}{\partial t}+\nabla\cdot(\rho\mathbf{v})=0,\qquad \frac{\partial(\rho\mathbf{v})}{\partial t}+\nabla\cdot(\rho\mathbf{v}\mathbf{v})=-\nabla p+\nabla\cdot\mathbf{T},\qquad(1)$$

where $\rho$ is the fluid density, $p$ is the pressure, $\mathbf{v}$ is the velocity, and $\mathbf{T}$ is the viscous stress tensor. The density and viscosity of the mixture are determined by the mass fractions $f_i$ of the mixture components, the partial densities $\rho_i$, and the molecular viscosities $\mu_i$ of the pure components,

$$\rho=\sum_i f_i\,\rho_i,\qquad \mu=\sum_i f_i\,\mu_i,$$

and the evolution of the mass concentrations is determined by the equation

$$\frac{\partial(\rho f_i)}{\partial t}+\nabla\cdot(\rho\mathbf{v}f_i)=\nabla\cdot(\rho D_i\nabla f_i),\qquad(2)$$

where $D_i$ is the diffusion coefficient of the $i$-th component.

As boundary conditions on the channel walls for the velocity vector components, either slip or no-slip conditions can be used; in this paper the no-slip condition is used.

To solve the system of equations mentioned above, the CFD software package cFlow is used. A detailed description of the numerical algorithm of the program is given in previous publications. The developed algorithm has been used in solving a wide range of problems of external and internal flows [11-13]. Its applicability to the description of microflows was shown in [14,15].

The results of an investigation of flow and mixing in a T-type micromixer are presented in this paper. The width of the narrow part of the channel is 200 μm, the width of the wide part is 400 μm, the thickness of the channel is 200 μm, and the length of the mixing channel is 3000 μm. In the general case, the problem is considered in the spatial, time-dependent formulation. Through the left channel inlet, pure water is fed at a flow rate Q; through the right channel inlet, water is fed at the same flow rate. The density of both fluids equals 1000 kg/m³, the viscosity equals 0.001 Pa·s, and the diffusion coefficient of the dye in water is D = 2.63×10⁻¹⁰ m²/s.
Thus, the value of the Schmidt number for this problem is 3,800. As the boundary condition at the inlets of the channel, a steady velocity profile was set. At the exit of the mixing channel, Neumann conditions were set, i.e., a vanishing derivative of all scalar quantities along the normal to the output surface.

The study was conducted for different values of the Reynolds number, which is defined as follows: Re = ρUd/μ, where U = Q/(2ρH²) is the flow-rate-based average velocity in the mixing channel, H = 200 μm is the height of the channel, and d = 267 μm is the hydraulic diameter.

For the quantitative characterization of the mixing efficiency, the following parameter was used:

$$M=1-\sqrt{\sigma^2/\sigma_0^2},\qquad \sigma^2=\frac{1}{V}\int_V\left(f-\bar{f}\right)^2 dV,$$

where $\sigma^2$ is the deviation of the concentration $f$ of a component from its mean value $\bar{f}$ over the volume $V$ of the mixer, and $\sigma_0^2=\bar{f}(1-\bar{f})$ is the maximum deviation.

## Experimental Set Up

The diagram of the experimental setup is shown in Fig. 1. The imaging system consisted of an epifluorescence inverted microscope (Carl Zeiss AxioObserver.Z1) with 20x/NA = 0.3 and 5x/NA = 0.12 lenses (number 1 in Fig. 1) for the micro-PIV and micro-LIF experiments, respectively. Lighting and recording of the images with a digital camera were carried out using a "POLIS" measuring complex (number 3 in Fig. 1). This complex included a double-pulsed Nd:YAG laser with 50 mJ pulse energy, 532 nm wavelength and 8 Hz pulse repetition rate, illuminating the flow through the lens of the microscope. To bring the light into the microscope, a liquid light guide and a laser interface unit in the optical path of the microscope were used. Illumination of the microchannel during the micro-LIF experiments was carried out using a mercury lamp. A cross-correlation digital camera recorded the images with 2048×2048 pixels resolution, which were then transferred to a personal computer for processing.
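The geometric and flow parameters quoted above, together with the mixing-efficiency measure M, are easy to cross-check numerically. The sketch below is illustrative only: it assumes NumPy and uses synthetic concentration fields, not the authors' data or code.

```python
import numpy as np

# Channel and fluid parameters from the text
H, W = 200e-6, 400e-6        # mixing-channel height and width, m
rho, mu = 1000.0, 1e-3       # water density (kg/m^3) and viscosity (Pa*s)
D = 2.63e-10                 # dye diffusion coefficient in water, m^2/s

# Hydraulic diameter d = 4A/P of the rectangular mixing channel
d = 4 * (H * W) / (2 * (H + W))
print(round(d * 1e6))         # -> 267 (um), as quoted in the text

# Schmidt number Sc = nu / D
Sc = (mu / rho) / D
print(round(Sc))              # -> 3802, i.e. ~3,800 as in the text

def mixing_efficiency(f):
    """M = 1 - sqrt(sigma^2 / sigma0^2) over samples of the mass fraction f."""
    fbar = f.mean()
    sigma2 = ((f - fbar) ** 2).mean()   # deviation of f from its mean
    sigma0_2 = fbar * (1.0 - fbar)      # maximum possible deviation
    return 1.0 - np.sqrt(sigma2 / sigma0_2)

# Fully segregated field (half pure water, half pure dye stream): M = 0
f_unmixed = np.concatenate([np.zeros(500), np.ones(500)])
print(mixing_efficiency(f_unmixed))       # -> 0.0

# Nearly homogeneous field: M close to 1
f_mixed = np.full(1000, 0.5)
f_mixed[::2] = 0.501
print(mixing_efficiency(f_mixed) > 0.99)  # -> True
```

Discrete cell volumes replace the volume integral here; on a non-uniform CFD mesh the averages would be volume-weighted.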
Synchronization of the system was carried out using a programmable processor. Controlling of the experiment and data processing were carried out using the software package ActualFlow.\n\nFluid motion was controlled using an infusion syringe pump (number 5 in Fig. 1) with adjustable liquid flow rate. The flow of liquid was seeded by fluorescent tracers from the Duke Scientific firm. The particles were composed of melamine resin labeled with the fluorescent dye Rhodamine B. The particle density is 1.05 g/cm³, the average diameter is 2 μm, and the standard deviation is 0.04 μm. To register the light emitted from the particles and to suppress the light from the channel, a beam-splitting cube consisting of a dichroic mirror and two filters for excitation and detection of Rhodamine B was used.\n\nFig. 1. The experimental setup\n\nMicro-PIV and micro-LIF experiments were conducted at Reynolds numbers ranging from 10 to 300. The measurements were carried out in three regions of the T-mixer (so that the velocity field has been calculated up to seven calibers from the mixing channel entrance).\n\nA concentration range in which the luminescence intensity of the fluorophore has a linear dependence on concentration was determined to calibrate the measurements. For this purpose, the T-channel was fed aqueous solutions of Rhodamine 6G in the following concentrations: 0, 10, 25, 40, 50, 62.5 and 75 mg/l. For every concentration of the fluorophore an image of the channel was registered. As a result, a linear dependence of the fluorophore radiation intensity on concentration was found at concentrations less than 62.5 mg/l. Thus, the relationship between the concentration of the fluorophore and the intensity of the image at each point of the channel was obtained.\n\nComparison of Calculations with Experimental Data\n\nThe Reynolds number was varied in the range from 1 to 1000 in the calculations.
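Before turning to the flow regimes, the dimensionless groups and the mixing-efficiency measure used throughout can be sketched numerically. This is an illustrative Python sketch using the fluid properties quoted above; the two concentration fields are synthetic test arrays, not simulation output:

```python
import numpy as np

# Fluid and channel parameters quoted in the text
rho = 1000.0    # density, kg/m^3
mu = 1e-3       # dynamic viscosity, Pa*s
D = 2.63e-10    # dye diffusion coefficient in water, m^2/s
d = 267e-6      # hydraulic diameter of the mixing channel, m

nu = mu / rho   # kinematic viscosity, m^2/s
Sc = nu / D     # Schmidt number, ~3800 as stated in the text
print(f"Sc = {Sc:.0f}")

def reynolds(U):
    """Re = rho * U * d / mu for a given average velocity U (m/s)."""
    return rho * U * d / mu

def mixing_efficiency(f):
    """M = 1 - sigma/sigma0 for a mass-fraction field f with values in [0, 1]."""
    fbar = f.mean()
    sigma = np.sqrt(np.mean((f - fbar) ** 2))
    sigma0 = np.sqrt(fbar * (1.0 - fbar))
    return 1.0 - sigma / sigma0

unmixed = np.array([0.0] * 50 + [1.0] * 50)  # two fully segregated streams
mixed = np.full(100, 0.5)                    # perfectly mixed field

print(mixing_efficiency(unmixed))  # 0.0
print(mixing_efficiency(mixed))    # 1.0
```

The limiting values M = 0 (segregated) and M = 1 (uniform) make the 25-fold jumps discussed below easy to interpret.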
At low Reynolds numbers (Re < 5) a steady irrotational flow occurs in the mixer. Mixing in this case occurs due to ordinary molecular diffusion and the mixing efficiency is quite low (see Fig. 2). Further, with increasing Reynolds number a pair of symmetrical horseshoe vortices (Dean vortices) is formed in the mixer. They are generated at the left end wall of the mixer (see Fig. 3, left) and extend into the channel over a mixing length that depends on the Reynolds number. Horseshoe vortices appear due to the development of secondary flows caused by the centrifugal force associated with rotation of the flow. The Dean vortex structure is shown in Fig. 4 by an isosurface of λ₂. Here λ₂ is the second eigenvalue of the tensor (S·S + Ω·Ω), where S is the rate-of-strain tensor and Ω is the vorticity tensor.\n\nThe flow in this case is symmetric about the central longitudinal plane of the mixer. Each horseshoe vortex, being in the range of one liquid, does not cross the media mixing boundary, so the boundary between the media remains almost flat. This is clearly seen in Fig. 3 (right). And because the diffusion Peclet number increases with increasing Reynolds number, the mixing efficiency decreases (see Fig. 2).\n\nFig. 2. Mixing efficiency versus Reynolds number\n\nFig. 4. Isolines of the dye concentrations in 4 longitudinal sections of the mixer: Re = 186 (left), Re = 600 (right)\n\nWhen the Reynolds number reaches 150, the vortices lose their symmetry (see Fig. 3, right). They are rotated through an angle of 45° relative to the central plane of the mixer cross-section. An S-shaped vortex is formed. This is particularly clearly shown in Fig. 4 (left), where mixing is shown by isolines of the dye concentration in four cross sections of the mixer. The first (left) section is located at the entrance to the mixing channel, the second at a distance of 100 μm from the entrance, the third at a distance of 200 μm and the fourth at a distance of 400 μm.
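The vortex-identification criterion mentioned above — λ₂, the second eigenvalue of S·S + Ω·Ω, with S the rate-of-strain and Ω the vorticity part of the velocity gradient — can be illustrated for a single velocity-gradient tensor. The gradients below are textbook test cases, not data from the simulation:

```python
import numpy as np

def lambda2(grad_v):
    """Second eigenvalue of S.S + Omega.Omega for a 3x3 velocity gradient.

    S is the symmetric (rate-of-strain) part and Omega the antisymmetric
    (vorticity) part of grad_v; lambda2 < 0 marks a vortex core.
    """
    S = 0.5 * (grad_v + grad_v.T)
    Omega = 0.5 * (grad_v - grad_v.T)
    M = S @ S + Omega @ Omega            # symmetric, so eigenvalues are real
    return np.sort(np.linalg.eigvalsh(M))[1]

# Solid-body rotation about z: a textbook vortex, lambda2 is negative
rotation = np.array([[0.0, -1.0, 0.0],
                     [1.0,  0.0, 0.0],
                     [0.0,  0.0, 0.0]])
# Pure uniaxial strain: no rotation, lambda2 is non-negative
strain = np.diag([1.0, -0.5, -0.5])

print(lambda2(rotation))  # -1.0
print(lambda2(strain))    # 0.25
```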
It is important to emphasize that the flow is still stationary.\n\nHowever, due to the fact that the intensity of the vortices in the asymmetric flow regime increases significantly, they extend through the mixing channel up to the exit. The presence of swirling flow in the mixing channel leads to the formation of a layered structure of the miscible fluids. The contact surface of the miscible fluids in the layered structure is well developed, which leads to a sharp increase of the mixing efficiency (see Fig. 2). In the transition from the symmetric flow regime (Re < 150) to the asymmetric one (Re > 150), the mixing efficiency increases by 25 times.\n\nFig. 5. The flow velocity versus time at a point located at the mixer outlet. Red - Re = 300, blue - Re = 600, green - Re = 1000\n\nThe described stationary asymmetric flow regime is observed in the range of Reynolds numbers from 140 to 240. Starting from a Reynolds number approximately equal to 240, the flow ceases to be stationary. In the range of Reynolds numbers 240 < Re < 400 a periodic flow regime is realized. In particular, this means that the flow velocity is also a periodic function of time. In Fig. 5 this flow regime corresponds to the lower curve. The flow oscillation frequency f is determined by many factors: the geometry of the channel, the fluid viscosity, the Reynolds number. To describe this dependence, we introduce the Strouhal number $St = f d^2/(\nu Re)$, which is actually the dimensionless frequency of flow oscillations normalized by the Reynolds number ($\nu$ is the kinematic viscosity). A diagram of the Strouhal number versus Reynolds number is shown in Fig. 6 (squares). The oscillation frequency increases monotonically up to a value of Re = 300 and then decreases slightly.\n\nOur calculation data correlate well with the experimental data [16], which in Fig. 6 are marked with red tags.
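Taking the Strouhal number as the oscillation frequency made dimensionless with d²/ν and divided by the Reynolds number, as defined in the text, its evaluation is a one-liner. The frequency used below is a hypothetical value for illustration only, not one read off Fig. 6:

```python
# St = f * d^2 / (nu * Re): oscillation frequency made dimensionless and
# normalized by the Reynolds number.
def strouhal(f_hz, d_m, nu_m2s, Re):
    return f_hz * d_m**2 / (nu_m2s * Re)

d = 267e-6   # hydraulic diameter, m (from the text)
nu = 1e-6    # kinematic viscosity of water, m^2/s
Re = 300.0
f = 1.5e3    # hypothetical oscillation frequency, Hz

print(f"St = {strouhal(f, d, nu, Re):.3f}")
```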
Maximum differences are observed at high Reynolds numbers, but it should be noted that the experimental data were obtained for a channel with cross-sectional dimensions of 600 μm × 300 μm.\n\nMeanwhile, the layered mixing structure which was formed at Re > 150 is preserved on the whole, and due to transverse flow fluctuations in the unsteady flow regime the mixing efficiency increases to about M = 40% (see Fig. 2).\n\nStarting from a Reynolds number of 450, the frequency of flow oscillations gradually decays. First the flow becomes quasiperiodic (450 < Re < 600), and then almost chaotic (Re > 600). The frequency spectrum of the velocity field becomes sufficiently filled, and is close to continuous. This is clearly seen in Fig. 5 (see also Fig. 4 (right)), where the Reynolds number 600 corresponds to the middle curve, and the Reynolds number 1000 corresponds to the top one.\n\nFig. 6. Strouhal number versus Reynolds number\n\nThe distribution of the flow pulsation kinetic energy e over frequencies for Re = 600 is shown in Fig. 7. This spectrum is obtained at a point located in the center of the mixing channel at a distance of 400 μm from the entrance. The straight dashed line on the graph corresponds to the universal Kolmogorov-Obukhov law.\n\nAlthough for Re = 600 the spectrum cannot be considered completely continuous, as in the case of developed turbulent flow, nevertheless there are a large number of frequencies and an inertial range, which suggests at least the presence of a transitional flow regime. Such an early onset of turbulence for channel flows occurs due to the development of the Kelvin-Helmholtz instability at the entrance of the mixing channel.\n\nHowever, calculations show that if the mixing channel is long enough, then with increasing distance from the flow confluence the pulsations are gradually damped, the flow becomes laminar and,\n\nFig. 8.
Friction factor versus the Reynolds number\n\nas expected, the steady velocity profile is formed. The length of the velocity profile establishment, of course, depends on the Reynolds number. To show this, the problem for a channel of 7000 μm length was solved. The obtained data are illustrated in Fig. 10, which compares the velocity profiles for two Reynolds numbers: 30 and 120.\n\nThe friction factor is determined by the formula $\lambda = 2\Delta P d/(\rho U^2 L)$, where ΔP is the pressure drop in the channel, and L is the length of the channel. The dark marks and the line connecting them correspond to the calculation. To compare the results, the values of the friction factor for steady laminar flow in a rectangular channel are shown on the graph by the dashed line.\n\nFor a channel with height-to-width ratio equal to 0.5 the friction factor is close to 64/Re. Nevertheless, the analysis shows that for small Reynolds numbers the friction factor in the micromixer is on average 20-30% higher than for steady flow. Then the friction factor deviates dramatically from the dependence λ = 64/Re, indicating the laminar-turbulent transition. The calculated data for the friction factor in the micromixer at moderate Reynolds numbers are well described by the dependence λ = 1.8/Re^0.25. The obtained value of the friction factor is almost six times higher than the classical Blasius dependence (λ = 0.316/Re^0.25) for developed turbulent flow in a straight channel. Such a large difference is due to the presence of a turning flow at the channel inlet, and its vortex character in the mixing channel. In particular, the pressure along the channel does not change monotonically. In the transition to turbulence, the S-shaped vortex structure that was formed in the mixing channel at Re > 150 and existed in the transient regime collapses. The flow is divided into a set of sufficiently large eddies. Because of this, the contact area between the miscible liquids is reduced, and so the mixing efficiency slightly decreases on transition to turbulence (see Fig. 2).
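The friction-factor relations discussed above — the laminar 64/Re law, the reported micromixer fit 1.8/Re^0.25, the Blasius law 0.316/Re^0.25, and the defining relation λ = 2ΔPd/(ρU²L) — can be compared in a short sketch (the Reynolds number and velocity below are illustrative choices, not values from the paper):

```python
def lam_laminar(Re):
    """Friction factor for steady laminar flow in this channel, ~64/Re."""
    return 64.0 / Re

def lam_micromixer(Re):
    """Fit reported in the text for moderate Re in the micromixer."""
    return 1.8 / Re**0.25

def lam_blasius(Re):
    """Classical Blasius law for developed turbulent channel flow."""
    return 0.316 / Re**0.25

def pressure_drop(lam, rho, U, L, d):
    """Invert lam = 2*dP*d/(rho*U^2*L) for the pressure drop dP (Pa)."""
    return lam * rho * U**2 * L / (2.0 * d)

Re = 600.0
ratio = lam_micromixer(Re) / lam_blasius(Re)
print(ratio)  # 1.8 / 0.316, i.e. "almost six times higher", as stated

# Illustrative pressure drop for U = 1 m/s over the 3 mm mixing channel
dP = pressure_drop(lam_laminar(Re), 1000.0, 1.0, 3e-3, 267e-6)
print(f"dP = {dP:.0f} Pa")
```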
Naturally, with further increase of the Reynolds number a lot of small-scale vortices appear in the flow.\n\nAs a consequence, the mixing efficiency in the developed turbulent flow far exceeds the corresponding value for laminar flow.\n\nComparison of the experimentally measured (by micro-PIV) and calculated velocity profiles in the central section of the mixing channel at 2.5 calibers from the input is shown in Fig. 9.\n\nFig. 9. Velocity profiles in the central cross section of the channel\n\nThe appearance of curve bends in the velocity profiles is associated with the occurrence of the S-shaped structure. The overall agreement between the experimental and calculated data is quite good: the maximum error does not exceed 10%, but it increases with increasing Reynolds number. This is due to the essentially three-dimensional structure of the flow at a given Reynolds number.\n\nFor example, the velocity field measured by micro-PIV is averaged over the depth of correlation (in this experiment the depth of correlation was equal to 37 microns), and the gradient of the longitudinal velocity over the channel depth leads to a smoothing of the velocity profile in the micromixer. A qualitative comparison of the calculated and experimental velocity fields in the central longitudinal section of the mixer is shown in Fig. 10. Here also there is quite satisfactory agreement between the calculated and experimental data.\n\nTo compare the concentration fields obtained by numerical simulation and in the experiment, spatial averaging of the calculated data over the depth of the T-mixer was carried out. The concentration fields in 11 sections of the XY plane over the depth of the T-mixer, symmetrical about its center, were taken for averaging.\n\nThe concentration field for each section was averaged spatially using a "running average" filter with a round window of the same diameter as the point spread function diameter in this section.
The resulting concentration field was calculated as the arithmetic mean of the 11 sections. The averaged concentration fields obtained by numerical simulation and in the experiment for different Reynolds numbers are shown in Fig. 11 and Fig. 12. On the whole, there is good qualitative agreement.\n\nConclusions\n\nThus, this modeling allows one to distinguish the following flow regimes for incompressible fluid flow in a T-type micromixer:\n\n• The steady vortex-free flow, realized at low Reynolds numbers (Re < 5).\n\nFig. 10. Average experimental and calculated velocity field in the central section of the micromixer for Reynolds numbers equal to 30 and 120\n\nRe = 90 numerical Re = 186 numerical\n\nFig. 11. Averaged concentration field in the central section of the micromixer for Re = 90 and Re = 186\n\n• The steady symmetric vortex flow with two symmetric horseshoe vortices at the mixing channel inlet. This regime is realized when the Reynolds number varies in the range 5 < Re < 150.\n\n• Steady asymmetric vortex flow is observed in the range of Reynolds numbers 150 < Re < 240. The horseshoe vortices formed at the entrance lose their symmetry and are rotated by 45° relative to the central longitudinal plane of the mixing channel. S-shaped vortices are formed.\n\n• Unsteady periodic flow is realized in the range 240 < Re < 400.\n\n• An almost stochastic flow regime (400 < Re < 1000). The S-shaped vortex structures observed at lower Reynolds numbers collapse.\n\nThe mixing efficiency increases dramatically during the formation of the S-shaped vortex structures in the flow and then continues to grow in the unsteady periodic regime. In paper [16] it was shown that the mixing efficiency can be substantially increased by changing the flow rate at the inlet of the\n\nFig. 12. The normalized concentration profiles across the mixing channel for Re = 30\n\nmixer in a certain way according to a harmonic law.
In fact, here we have some "autocontrol" of the mixing process.\n\nUsually, to ensure efficient mixing, the mixer length should be large enough. Naturally, this leads to a significant loss of pressure caused by friction at the walls. On the other hand, such losses can be reduced by using hydrophobic or even ultrahydrophobic coatings. In microflows the slip length can reach tens of microns. As shown in [13], at low Reynolds numbers the mixing efficiency practically does not change. However, the situation changes at moderate Reynolds numbers. The presence of wall slip leads to a significant change in flow regimes. Under slip conditions the flows are rebuilt. Simulations show, for example, that at a Reynolds number equal to 200 and sufficiently large slip lengths the two-vortex structure mentioned above is transformed into a one-vortex structure. Naturally, the mixing efficiency is increased too; for this mixer the increase is about 30%. On the other hand, the pressure drop decreases monotonically with increasing slip length (for the investigated mixer, by about 30-40%). Thus, using hydrophobic coatings (slip conditions) one can control the flow regimes.\n\nAcknowledgment\n\nThis work was supported in part by the Russian Foundation for Basic Research (Grants №. 10-01-00074 and 11-08-01268) and the Federal Special Program "Scientific and scientific-pedagogical personnel of innovative Russia in 2009-2013" (№ 16.740.11.0642, 14.A18.21.0344, 14.132.21.1750, 8756).\n\nReferences\n\n[1] Tabeling P. Introduction to microfluidics, Oxford University Press, 2005.\n\n[2] Karnik R. // Encyclopedia of microfluidics and nanofluidics, 2008. P. 1177-1186.\n\n[3] Vanka S.P., Luo G. and Winkler C.M. // AIChE J. 50 (2004) 2359-2368.\n\n[4] Aubin J., Fletcher D.F. and Xuereb C. // Chem. Eng. Sci. 60 (2005) 2503-2516.\n\n[5] Hoffmann M., Schlüter M., Räbiger N.
// Chemical Engineering Science 61 (2006) 2968-2976.\n\n[6] Mansur E.A., Mingxing Y.E., Yundong W., Youyuan D. // Chinese J. Chemical Eng. 16(4) 2008. P. 503-516.\n\n[7] Engler M., Kockmann N., Kiefer T., Woias P. // Chem. Eng. J. 101 (2004) 315-322.\n\n[8] Telib H., Manhart M., Iollo A. // Phys. Fluids 16 (2004) 2717-2731.\n\n[9] Wong S.H., Ward M.C.L., Wharton C.W. // Sens. & Act. B. 100 (2004) 359-379.\n\n[10] Gobert C., Schwertfirm F., Manhart M. // Proc. ASME Joint U.S.-European Fluids Eng. Summer Meeting, Miami, Paper no. FEDSM2006-98035, 2006. P. 1053-1062.\n\n[11] Rudyak V.Ya., Minakov A.V., Gavrilov A.A. and Dekterev A.A. // Thermophysics & Aeromechanics 15 (2008) 333-345.\n\n[12] Gavrilov A.A., Minakov A.V., Dekterev A.A. and Rudyak V.Ya. // Sib. Zh. Industr. Matem. 13 (2010). № 4. P. 3-14.\n\n[13] Podryabinkin E.V., Rudyak V.Ya. // J. Engineering Thermophysics. 20 (2011), No. 3. P. 320-328.\n\n[14] Minakov A.V., Rudyak V.Ya., Gavrilov A.A., Dekterev A.A. // Journal of Siberian Federal University. Mathematics & Physics 3 (2010), No. 2. P. 146-156.\n\n[15] Rudyak V.Ya., Minakov A.V., Gavrilov A.A. and Dekterev A.A. // Thermophysics & Aeromechanics 17 (2010) 565-576.\n\n[16] Dreher S., Kockmann N., Woias P. // Heat Transfer Engineering 30 (2009) 91-100.\n\nMicro-LIF and Numerical Investigation of Mixing in a Microchannel\n\nA.V. Minakov (a,b), A.A. Yagodnitsyna (b), A.S. Lobasov (a,b), V.Ya. Rudyak (b,c), A.V. Bilsky (b)\n\n(a) Siberian Federal University, 28 Kirensky St., Krasnoyarsk, 660074, Russia; (b) Institute of Thermophysics SB RAS, 1 Lavrentyev Ave., Novosibirsk, 630090, Russia; (c) Novosibirsk State University of Architecture and Civil Engineering, 113 Leningradskaya St., Novosibirsk, 630008, Russia\n\nIn this paper, the flow and mixing regimes of liquids in a T-type micromixer are studied by numerical simulation and by the experimental micro-PIV and micro-LIF methods over a wide range of Reynolds numbers, from 1 to 1000. The channel cross-section was 200 μm × 400 μm, and the channel length was 3000 μm.
Five different flow regimes were found: (I) steady vortex-free flow; (II) steady symmetric vortex flow with two horseshoe vortices; (III) steady asymmetric vortex flow; (IV) unsteady periodic flow; (V) chaotic flow. The maximum value of the mixing efficiency is observed for the steady asymmetric vortex flow; in this case S-shaped vortex structures are formed in the flow. Good agreement between the calculated and experimental data is shown.\n\nKeywords: microflow, micromixers, microchannels, CFD, Micro-PIV, Micro-LIF."
]
https://mathspace.co/textbooks/syllabuses/Syllabus-410/topics/Topic-7293/subtopics/Subtopic-97365/?activeTab=interactive | [
"NZ Level 7 (NZC) Level 2 (NCEA)",
null,
"Domain and range of tangent curves\n\n## Interactive practice questions\n\nSelect the correct domain of $y=\tan x$.\n\nAll real numbers, except integer multiples of $90^\circ$.\n\nA\n\nAll real numbers, except odd integer multiples of $90^\circ$.\n\nB\n\nAll real numbers, except even integer multiples of $90^\circ$.\n\nC\n\nAll real numbers.\n\nD\nEasy\nLess than a minute\n\nSelect the correct range of $y=\tan x$.\n\nLet $f\left(x\right)=\tan x$ and $g\left(x\right)=\tan x+2$.\n\nLet $f\left(x\right)=\tan x$ and $g\left(x\right)=\tan2x$.\n\n### Outcomes\n\n#### M7-2\n\nDisplay the graphs of linear and non-linear functions and connect the structure of the functions with their graphs\n\n#### 91257\n\nApply graphical methods in solving problems"
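The correct choice (all real numbers except odd integer multiples of $90^\circ$) can be sanity-checked numerically; this short sketch is an added illustration, not part of the original exercise:

```python
import math

def tan_deg(x_deg):
    """Tangent of an angle given in degrees."""
    return math.tan(math.radians(x_deg))

# tan is well defined at even multiples of 90° (i.e. multiples of 180°)...
for x in (0, 180, 360):
    print(x, tan_deg(x))  # all essentially 0

# ...but its magnitude blows up approaching odd multiples of 90°
for x in (89.999, 90.001, 269.999):
    print(x, tan_deg(x))  # magnitudes on the order of tens of thousands
```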
]
https://answers.everydaycalculation.com/divide-fractions/45-25-divided-by-36-45 | [
"Solutions by everydaycalculation.com\n\n## Divide 45/25 by 36/45\n\n1st number: 1 20/25, 2nd number: 36/45\n\n45/25 ÷ 36/45 is 9/4.\n\n#### Steps for dividing fractions\n\n1. Find the reciprocal of the divisor\nReciprocal of 36/45: 45/36\n2. Now, multiply it with the dividend\nSo, 45/25 ÷ 36/45 = 45/25 × 45/36\n3. = (45 × 45)/(25 × 36) = 2025/900\n4. After reducing the fraction, the answer is 9/4\n5. In mixed form: 2 1/4\n\nMathStep (Works offline)",
null,
"Download our mobile app and learn to work with fractions in your own time:"
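As a cross-check, the division steps listed above map directly onto Python's exact-arithmetic `fractions` module (a sketch added for illustration):

```python
from fractions import Fraction

dividend = Fraction(45, 25)
divisor = Fraction(36, 45)

# Steps 1-2: multiply the dividend by the reciprocal of the divisor
result = dividend * Fraction(divisor.denominator, divisor.numerator)

print(result)                        # 9/4, already in lowest terms
print(result == dividend / divisor)  # True: same as direct division

# Step 5: mixed form, whole part and remaining numerator
whole, rem = divmod(result.numerator, result.denominator)
print(f"{whole} {rem}/{result.denominator}")  # 2 1/4
```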
]
http://www.guillaume-weisang.com/2013/05/10/math-is-so-very-cool/ | [
"# Math is so very cool.",
null,
"Euler’s identity\ne^{i \pi} + 1 = 0\n\nEuler’s identity is considered by many to be remarkable for its mathematical beauty. Three of the basic arithmetic operations occur exactly once each: addition, multiplication, and exponentiation. The identity also links five fundamental mathematical constants:\n\n• The number 0, the additive identity.\n• The number 1, the multiplicative identity.\n• The number π, which is ubiquitous in trigonometry, the geometry of Euclidean space, and analytical mathematics (π = 3.14159265…)\n• The number e, the base of natural logarithms, which occurs widely in mathematical and scientific analysis (e = 2.718281828…). Both π and e are transcendental numbers.\n• The number i, the imaginary unit of the complex numbers, a field of numbers that contains the roots of all polynomials (that are not constants), and whose study leads to deeper insights into many areas of algebra and calculus, such as integration in calculus.\n\nFurthermore, in algebra and other areas of mathematics, equations are commonly written with zero on one side of the equals sign.\n\nA poll of readers conducted by The Mathematical Intelligencer named Euler’s identity as the “most beautiful theorem in mathematics”. Another poll of readers that was conducted by Physics World in 2004 chose Euler’s identity tied with Maxwell’s equations (of electromagnetism) as the “greatest equation ever”.\n\nAn entire 400-page mathematics book, Dr. Euler’s Fabulous Formula (published in 2006), written by Paul Nahin (a professor emeritus at the University of New Hampshire), is devoted to Euler’s identity, especially its applications in Fourier Analysis. 
This monograph states that Euler’s identity sets “the gold standard for mathematical beauty”.\n\nConstance Reid claimed that Euler’s identity was “the most famous formula in all mathematics”.\nThe mathematician Carl Friedrich Gauss was reported to have commented that if this formula was not immediately apparent to a student upon being told it, that student would never be a first-class mathematician.\n\nAfter proving Euler’s identity during a lecture, Benjamin Peirce, a noted American 19th-century philosopher, mathematician, and professor at Harvard University, stated that “it is absolutely paradoxical; we cannot understand it, and we don’t know what it means, but we have proved it, and therefore we know it must be the truth.”\n\nStanford University mathematics professor Keith Devlin said, “Like a Shakespearean sonnet that captures the very essence of love, or a painting that brings out the beauty of the human form that is far more than just skin deep, Euler’s Equation reaches down into the very depths of existence.”\n\nOh, and Happy Primes\n7, 13, 19, 23, 31, 79, 97, 103, 109, 139, 167, 193, 239, 263, 293, 313, 331, 367, 379, 383, 397, 409, 487"
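Both closing claims — the identity itself and the list of happy primes — are easy to verify in a few lines; this sketch is an added illustration, with `is_happy` and `is_prime` being ad-hoc helper names:

```python
import cmath

# Euler's identity: e^{i*pi} + 1 should vanish up to floating-point error
residual = cmath.exp(1j * cmath.pi) + 1
print(abs(residual))  # on the order of 1e-16

def is_happy(n):
    """A happy number reaches 1 under repeated sums of squared digits."""
    seen = set()
    while n != 1 and n not in seen:
        seen.add(n)
        n = sum(int(d) ** 2 for d in str(n))
    return n == 1

def is_prime(n):
    """Trial-division primality test, fine for small n."""
    if n < 2:
        return False
    return all(n % k for k in range(2, int(n ** 0.5) + 1))

listed = [7, 13, 19, 23, 31, 79, 97, 103, 109, 139, 167, 193, 239, 263,
          293, 313, 331, 367, 379, 383, 397, 409, 487]
print(all(is_happy(p) and is_prime(p) for p in listed))  # True
```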
]
https://physics.stackexchange.com/questions/421810/plane-waves-in-plasmas?noredirect=1 | [
# Plane waves in plasmas\n\nWaves in plasmas can be classified as electromagnetic or electrostatic according to whether or not there is an oscillating magnetic field. A corresponding wiki article states:\n\nApplying Faraday's law of induction to plane waves, we find $\mathbf{k}\times\tilde{\mathbf{E}}=\omega\tilde{\mathbf{B}}$, implying that an electrostatic wave must be purely longitudinal. An electromagnetic wave, in contrast, must have a transverse component, but may also be partially longitudinal.\n\nDoesn't this only hold for plane waves which are not necessarily good approximations for waves in plasmas (e.g. for lower hybrid waves that have antennas near the plasma in fusion devices)? Is not the more accurate statement for longitudinal waves simply $\nabla \times \mathbf{E} = 0$? And $\langle \frac {\partial \mathbf{E}}{\partial t} \rangle = 0$ for electrostatic waves? They appear to both match in the special case of plane waves, but are plane waves appropriate representations for waves in plasmas?\n\nAny comments on the validity of the above and further reading references on waves in plasmas that do not apply the plane wave approximation are appreciated.\n\n• The magnetic fluctuations will either be linearly or elliptically polarized. In the former case, the wave is purely transverse and in the latter case, the wave can be approximated as planar owing to $\nabla \cdot \tilde{\mathbf{B}} = 0$. You are correct, the electric fields are generally not considered planar in this regard. You may find the following useful: physics.stackexchange.com/a/264526/59023 or physics.stackexchange.com/a/265731/59023 or physics.stackexchange.com/a/235549/59023. – honeste_vivere Aug 9 '18 at 21:04\n• @honeste_vivere $\langle \nabla \cdot \mathbf{B} \rangle = 0$ only indicates the magnetic component (if it exists) of a wave is periodic in space and not necessarily planar (i.e. the amplitude of the real wave does not have to be sinusoidal). 
– Mathews24 Aug 9 '18 at 22:39\n• @Mathews24 - Not necessarily. If we suppose the real part of the frequency is finite and the fluctuation is electromagnetic, then you can decompose the fluctuation in such a way to find $\mathbf{k} \cdot \mathbf{B} = 0$ for linear modes. If the mode is nonlinear or the imaginary part of $\mathbf{k}$ is non-zero, then I agree that we cannot assume $\mathbf{k} \cdot \mathbf{B} = 0$, i.e., we cannot assume planarity. – honeste_vivere Aug 10 '18 at 2:19\n• @honeste_vivere Why would one assume linear modes being prevalent over nonlinear modes for plasma waves? – Mathews24 Aug 10 '18 at 14:52\n• Interestingly enough, even though many plasma waves have nonlinear properties, they still can maintain some of their linear properties. For instance, the amplitude of an observed mode may exceed a nonlinear threshold, but its time series profile can still look like a nice modulated sine wave and the frequency spectrogram can still be consistent with predictions from linear theory. The short answer is that one starts simple and if that fails, one adds a little complexity and iterates until the analysis can work. – honeste_vivere Aug 10 '18 at 18:13\n\nDoesn't this only hold for plane waves which are not necessarily good approximations for waves in plasmas (e.g. for lower hybrid waves that have antennas near the plasma in fusion devices)?\n\nI will address this in parts. First, the following applies to linear and quasi-linear – a first (and sometimes second) order correction to the linear approximation – approximations but cannot be generalized to fully nonlinear waves. 
Interestingly enough, even though many waves observed in space plasmas can be called nonlinear due to their amplitudes (or some other property), they often retain many of the linear properties predicted for the given mode [e.g., see examples in Giagkiozis et al., 2018; Wilson et al., 2013, 2017].\n\nIs not the more accurate statement for longitudinal waves simply $\\nabla \\times \\mathbf{E} = 0$? And $\\langle \\tfrac{\\partial \\mathbf{E}}{\\partial t} \\rangle = 0$ for electrostatic waves?\n\nSecond, a linear electrostatic fluctuation satisfies $\\mathbf{k} \\times \\mathbf{E} = 0$. This does not mean that such waves do not have their own displacement currents, i.e., electrostatic waves still have finite $\\tfrac{\\partial \\mathbf{E}}{\\partial t}$. It is really a statement that the wave vector, $\\mathbf{k}$, is parallel to the fluctuating $\\mathbf{E}$ [e.g., see the example ion acoustic wave in Wilson et al., 2010], not that $\\tfrac{\\partial \\mathbf{E}}{\\partial t} = 0$.\n\nThe nice thing about plasmas is that they obey Maxwell's equations, thus the longitudinal part of any electromagnetic wave occurs in the electric field only, owing to $\\nabla \\cdot \\mathbf{B} = 0$. In the electrostatic case, $\\mathbf{k}$ is along the fluctuating electric field and it is not a plane wave in the sense of the fields oscillating in a plane orthogonal to the direction of propagation.\n\nThey appear to both match in the special case of plane waves, but are plane waves appropriate representations for waves in plasmas?\n\nFinally, yes, there are significant limitations to the plane wave approximation. While the fluctuations seen in plasmas may not satisfy all the assumptions for plane waves, this does not mean the approximation is invalid or cannot be used. 
For instance, we know the ideal gas law assumptions do not hold under most situations, but it is not an irrelevant approximation (it actually works frustratingly well in many situations).\n\nAny comments on the validity of the above and further reading references on waves in plasmas that do not apply the plane wave approximation are appreciated.\n\nUnfortunately, there is little that can be done with non-planar waves, i.e., those with non-stationary solutions or nonlinear properties. By nonlinear, I am specifically referring to fluctuations that have one or more of the following properties:\n\n• those that cannot be approximated as a constant times $e^{i \\left( \\mathbf{k} \\cdot \\mathbf{x} - \\omega t \\right)}$;\n• those with $A\\left( \\omega, \\mathbf{k} \\right)$, i.e., fluctuations where the amplitude depends upon frequency and/or wave vector; or\n• those with $\\Im\\left[ \\omega \\right] \\gg \\Re\\left[ \\omega \\right]$ that are observed in the plasma rest frame.\n\nBellan came up with a neat idea for finding $\\mathbf{k}$ from single point measurements but even that, I think, is limited to a planar assumption.\n\nI have listed several other references below on wave analysis in plasmas, some are observational applications and others are rigorous mathematical justifications for a given technique.\n\n# References\n\n• Bellan, P.M. \"Revised single-spacecraft method for determining wave vector k and resolving space-time ambiguity,\" J. Geophys. Res. 121(9), pp. 8589–8599, doi:10.1002/2016JA022827, 2016.\n• Giagkiozis, S., et al., \"Statistical Study of the Properties of Magnetosheath Lion Roars,\" J. Geophys. Res. 123, doi:10.1029/2018JA025343, 2018.\n• Kawano, H. and T. Higuchi \"The bootstrap method in space physics: Error estimation for the minimum variance analysis,\" Geophys. Res. Lett. 22(3), pp. 307–310, doi:10.1029/94GL02969, 1995\n• Khrabrov, A.V. and B.U.O. Sonnerup \"Error estimates for minimum variance analysis,\" J. Geophys. Res. 103(A4), pp. 
6641–6652, doi:10.1029/97JA03731, 1998.\n• Means, J.D. \"Use of the three-dimensional covariance matrix in analyzing the polarization properties of plane waves,\" J. Geophys. Res. 77(28), pp. 5551–5559, doi:10.1029/JA077i028p05551, 1972.\n• Samson, J.C. and J.V. Olson \"Some comments on the descriptions of the polarization states of waves,\" Geophys. J. 61(1), pp. 115–129, doi:10.1111/j.1365-246X.1980.tb04308.x, 1980.\n• Wilson, L.B., et al., \"Large‐amplitude electrostatic waves observed at a supercritical interplanetary shock,\" J. Geophys. Res. 115(A12), pp. A12104, doi:10.1029/2010JA015332, 2010.\n• Wilson, L.B., et al., \"Electromagnetic waves and electron anisotropies downstream of supercritical interplanetary shocks,\" J. Geophys. Res. 118(1), pp. 5–16, doi:10.1029/2012JA018167, 2013.\n• Wilson, L.B., et al., \"Revisiting the structure of low‐Mach number, low‐beta, quasi‐perpendicular shocks,\" J. Geophys. Res. 122(9), pp. 9115–9133, doi:10.1002/2017JA024352, 2017."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8227792,"math_prob":0.97030413,"size":5065,"snap":"2019-51-2020-05","text_gpt3_token_len":1373,"char_repetition_ratio":0.12013436,"word_repetition_ratio":0.0040816325,"special_character_ratio":0.28983217,"punctuation_ratio":0.20793037,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99214,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-08T21:40:03Z\",\"WARC-Record-ID\":\"<urn:uuid:6e62f2a9-dfde-4015-9177-db88e7553878>\",\"Content-Length\":\"146188\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a561b311-b792-4f66-9b90-d76040c5f22b>\",\"WARC-Concurrent-To\":\"<urn:uuid:49b9d04a-6e77-4365-8678-7c0f39ee945d>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/421810/plane-waves-in-plasmas?noredirect=1\",\"WARC-Payload-Digest\":\"sha1:32FAHXOWXDQRW2IFJMUV6NQPOC44QDNO\",\"WARC-Block-Digest\":\"sha1:3CQVGFNIAIEUOSEXDV76O4TF4LJCVGB6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540514893.41_warc_CC-MAIN-20191208202454-20191208230454-00402.warc.gz\"}"} |
https://electronics.stackexchange.com/questions/80412/how-to-overwrite-flash-memory-on-stm32l-series | [
"# How to overwrite flash memory on STM32L series\n\nI am trying to write a known pattern (ie 0xFFFFFFFF or 0x00000000) on top of already written flash memory, to invalidate portions of it for a primitive file system. But it doesn't work for me on the STM32L series as it does on the STM32F series.\n\nI am used to the STM32F family of microcontrollers, where the flash memory is erased to 0xFFFFFFFF and written with 0's. You can write anything you want to erase memory, ie\n\n write 0x00001234 on top of 0xFFFFFFFF -> 0x00001234\n\n\nand you can write 0x00000000 (all zeros) on top of anything\n\n write 0x00000000 on top of 0x00001234 -> 0x0000000\n\n\nI am now using the STM32L family (low power), and the flash memory is totally different. It is erased to 0x00000000, and written with 1's. However, I don't know how to reliably write all ones. For example, if I erase, I can do this\n\n write 0x01020304 on top of 0x00000000 -> 0x01020304\n\n\nbut if I try\n\n write 0xFFFFFFFF on top of 0x01020304 -> 0xFFFFFFBF !!!\n\n\nNote that the final answer has a B in it. It is not all ones. 
In fact, if I write bytes 0x00 to 0xFF to a freshly erased page of memory, and then write 0xFFFFFFFF all over it, I get very glitchy results:\n\nff ff ff bf ff ff ff ff ff ff ff ff ff ff ff fb\nf7 ff ff ff fd ff ff ff ff ff ff f7 ff ff ff ff\nfe ff ff ff ff ff ff ff ff ff ff 7f f7 ff ff ff\nff ff ff fb ff ff ff ef ff ff ff ff ff ff ff df\nfe ff ff ff ff ff ff ff ff ff ff 7f f7 ff ff ff\nff ff ff fb ff ff ff ef ff ff ff ff ff ff ff ff\nff ff ff bf ff ff ff ff ff ff ff ff ff ff ff fb\nf7 ff ff ff fd ff ff ff ff ff ff f7 ff ff ff df\nf7 ff ff ff fd ff ff ff ff ff ff f7 fe ff ff ff\nff ff ff bf ff ff ff ff ff ff ff ff fd ff ff ff\nff ff ff fb ff ff ff ef ff ff ff ff ff ff ff bf\nfe ff ff ff ff ff ff ff ff ff ff 7f fb ff ff ff\nff ff ff fb ff ff ff ef ff ff ff ff ff ff ff bf\nfe ff ff ff ff ff ff ff ff ff ff 7f ff ff ff ef\nf7 ff ff ff fd ff ff ff ff ff ff f7 fe ff ff ff\nff ff ff bf ff ff ff ff ff ff ff ff fb ff ff ff\n\n\nHere is the pseudo code I am using (FlashWriteArray is a wrapper around the STM std periph library). I tried writing a pattern of 8 writes with the bits shifted <<1 each time, and that actually gave me what I wanted (all ones) but I am not sure this is reliable.\n\n uint32_t pattern = 0x04030201;\nfor(int j=0;j<64;j++) {\nFlashWriteArray(0x0801E000 + 4*j,(uint8_t*)&pattern,4);\npattern += 0x04040404;\n}\n\nfor(int j=0;j<64;j++) {\n#if 1\n// write once\nuint32_t pattern = 0xFFFFFFFF;\nFlashWriteArray(0x0801E000 + 4*j,(uint8_t*)&pattern,4);\n#else\n// write shifting bit pattern\nuint32_t pattern = 0x01010101;\nfor(int i=0;i<8;i++) {\nFlashWriteArray(0x0801E000 + 4*j,(uint8_t*)&pattern,4);\npattern <<=1;\n}\n#endif\n}\n\n\nSome types of non-volatile memory device use error-correcting logic which adds an extra few bits to each programmable chunk (e.g. 5 bits per 16, 6 per 32, 7 per 64, 8 per 128, etc.) 
Generally the error correction code is chosen so that all bits blank is a valid representation; in some cases, but not all, it may also be chosen such that all bits programmed is also a valid combination. For simplicity, I'll assume a code which guards each group of 4 bits with 3 guard bits. Compute the three guard bits A, B, and C as the xor of data bits 0+1+3, 0+2+3, and 1+2+3 respectively. I'll also assume that a blank word is zero.\n\nThe 16 possible code values are thus\n\n3210 ABC 3210 ABC 3210 ABC 3210 ABC\n0000 000 0100 011 1000 111 1100 100\n0001 110 0101 101 1001 001 1101 010\n0010 101 0110 110 1010 010 1110 001\n0011 011 0111 000 1011 100 1111 111\n\n\nWhen a memory nybble is read, the system can see what bits ABC should be according to the table. If one of the seven bits in the word is misread, the combination of ABC bits that don't match the computed value will indicate which bit was wrong.\n\nSuppose a memory system used the seven-bit code shown above, and one wanted to overwrite a nybble value of 1110 (ECC bits 001) with a value of 1000 (ECC bits should be 111). The net effect would be that the system would write 1000 with ECC bits of 001. When the data is read back, the system would see that for a value of 1000, the ECC bits should be 111 but are instead 001. The fact that bits A and B are wrong means bit 0 of the data was wrong and should be flipped; the system would thus read the value as 1001 (whose ECC is correctly 001).\n\nIn most cases, there should be enough flexibility in the design of an error-correcting code to permit both all-bits-clear and all-bits-set to be regarded as valid combinations. Some systems do not do so, however. 
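The worked example can be sketched in a few lines (Python purely for illustration; this is the hypothetical 4-data-bit/3-guard-bit code just described, not ST's actual undocumented ECC, and it assumes a memory where programmed bits can only be cleared, as in the example):

```python
# Hypothetical 4-data-bit / 3-guard-bit code from the table above -- an
# illustration of the described scheme, not ST's actual ECC.

def ecc(d):
    """Guard bits (A, B, C) for a 4-bit value d (bits numbered 3..0)."""
    b = [(d >> i) & 1 for i in range(4)]   # b[0] = data bit 0, ..., b[3] = bit 3
    return (b[0] ^ b[1] ^ b[3],            # A = xor of data bits 0, 1, 3
            b[0] ^ b[2] ^ b[3],            # B = xor of data bits 0, 2, 3
            b[1] ^ b[2] ^ b[3])            # C = xor of data bits 1, 2, 3

# Which data bit a given (A, B, C) mismatch pattern points at; a single
# mismatched guard bit means the guard bit itself was bad, so data is kept.
SYNDROME = {(1, 1, 0): 0, (1, 0, 1): 1, (0, 1, 1): 2, (1, 1, 1): 3}

def read(data, guard):
    """Return the corrected 4-bit value for a stored (data, guard) word."""
    syndrome = tuple(e ^ s for e, s in zip(ecc(data), guard))
    bad_bit = SYNDROME.get(syndrome)
    return data ^ (1 << bad_bit) if bad_bit is not None else data

# Overwrite 0b1110 (ECC 001) with 0b1000 (ECC should be 111).  If programmed
# bits can only be cleared, the stored word is the bitwise AND of old and new:
stored_data = 0b1110 & 0b1000                                          # 0b1000
stored_guard = tuple(o & n for o, n in zip(ecc(0b1110), ecc(0b1000)))  # (0,0,1)
print(bin(read(stored_data, stored_guard)))                            # -> 0b1001
```

Note the design point the example relies on: each single-bit data error produces a distinct syndrome, and a guard-only mismatch leaves the data untouched.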
If an error-correcting code would require an all-bits-programmed word to have two or more of the ECC bits blank, then an attempt to obliterate a word which has those bits programmed would likely visibly fail; attempts to program many other values would likely yield a state which was only one bit error away from failure rather than two.\n\nI really wish memory designers would allow for data to be obliterated even if they don't allow most other overwriting patterns. Especially with NAND flash, it would make some operations a lot easier.\n\n• I think I might have a \"solution\". If I write lots of different patterns to the same location, eventually all bits will end up 1111 including the ECC bits. Even though I don't have direct control over the ECC bits, I can eventually flip them all to 1 if I write enough random patterns. If I just write 0xFFFFFFFF once to the data, the corresponding ECC spare bits that are written are probably not all ones. But If I write enough patterns, eventually the ECC bits will all ones. And it does appear that data=0xFFFFFFFF and ECC=0xF is read out as 0xFFFFFFFF. Aug 28 '13 at 17:21\n• @MarkLakata: Maybe that will work, but I'm a bit dubious; it would seem likely that all-bits-programmed may be only one bit error away from a non-FFFFFFFF value, and attempts to write a mixture of ones and zeroes to a byte which already contains a mixture of ones and zeroes may result in some bits being more strongly programmed than others, creating a higher-than-normal risk of bit errors. Aug 28 '13 at 18:10\n• The #else clause in my example basically writes 1 bit at a time over a byte (8 bits), and it \"works\" in the example case. However, you are right that it doesn't take into account some bits being stronger than other bits. 
Aug 28 '13 at 19:28\n• @MarkLakata: If all of the chips in a family have historically designed their ECC to allow overwrites with an all-bits-set pattern, it may be reasonable to expect that the manufacturer is unlikely to change its ECC in such fashion as to disallow it. I see no reason to assume anything about ST's ECC. I don't even know a good way to tell whether writing your eight patterns will actually write all the ECC bits. If memory serves, for 4 bits, or all power-of-two data lengths other than 8 bits, a scheme that detects a single bit error can have all-bits-set and all-bits-clear as valid codewords. Aug 28 '13 at 19:46"
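The shifting-pattern idea discussed in the comments (the question's #else branch) is easy to see in miniature: if each write can only set bits (STM32L erases to 0 and programs 1s), then eight writes of a byte-wise walking-bit pattern OR together to all ones in the data array.

```python
# Illustrative only: model a flash word where a write can set bits (0 -> 1)
# but never clear them, as on the STM32L data array described above.
word = 0x00000000
pattern = 0x01010101
for _ in range(8):
    word |= pattern                      # each write can only add 1-bits
    pattern = (pattern << 1) & 0xFFFFFFFF
print(hex(word))                         # -> 0xffffffff
```

Whether the hidden ECC bits also end up all ones after such a sequence is exactly the open question in the thread; this sketch only models the visible data bits.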
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.75146896,"math_prob":0.9504486,"size":2645,"snap":"2021-31-2021-39","text_gpt3_token_len":873,"char_repetition_ratio":0.30821657,"word_repetition_ratio":0.4,"special_character_ratio":0.35500947,"punctuation_ratio":0.07619048,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96549404,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-18T16:22:06Z\",\"WARC-Record-ID\":\"<urn:uuid:cd7e5233-179f-426b-98a7-df6719d2dea7>\",\"Content-Length\":\"174445\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:98b95cda-0c41-4b0f-95fb-7ff1fb030356>\",\"WARC-Concurrent-To\":\"<urn:uuid:8083faff-0bab-4300-be8d-4e3e638a66a5>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://electronics.stackexchange.com/questions/80412/how-to-overwrite-flash-memory-on-stm32l-series\",\"WARC-Payload-Digest\":\"sha1:IT2YCC3WGJ7EDT746LKQ3SVYBITFZUM3\",\"WARC-Block-Digest\":\"sha1:ZAGKIZMW5A7RZGNNFTQU7JFUR6WN36RC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780056548.77_warc_CC-MAIN-20210918154248-20210918184248-00169.warc.gz\"}"} |
http://conceptmap.cfapps.io/wikipage?lang=en&name=Bondi_k-calculus | [
"Bondi k-calculus\n\nBondi k-calculus is a method of teaching special relativity popularised by Professor Sir Hermann Bondi, and now common in university and college-level physics classes.\n\nThe usefulness of the k-calculus is its simplicity. It has been successfully used to teach special relativity to young children and also in relativity textbooks.\n\nMany introductions to relativity begin with the concept of velocity and a derivation of the Lorentz transformation. Other concepts such as time dilation, length contraction, the relativity of simultaneity, the resolution of the twins paradox and the relativistic Doppler effect are then derived from the Lorentz transformation, all as functions of velocity.\n\nBondi, in his book Relativity and Common Sense, first published in 1964 and based on articles published in The Illustrated London News in 1962, reverses the order of presentation. He begins with what he calls \"a fundamental ratio\" denoted by the letter $k$",
"(which turns out to be the radial Doppler factor). From this he explains the twins paradox, and the relativity of simultaneity, time dilation, and length contraction, all in terms of $k$",
". It is not until later in the exposition that he provides a link between velocity and the fundamental ratio $k$",
". The Lorentz transformation appears towards the end of the book.\n\nHistory\n\nThe k-calculus method had previously been used by E. A. Milne in 1935. Milne used the letter $s$ to denote a constant Doppler factor, but also considered a more general case involving non-inertial motion (and therefore a varying Doppler factor). Bondi used the letter $k$ instead of $s$ and simplified the presentation (for constant $k$ only), and introduced the name \"k-calculus\".\n\nBondi's k-factor\n\nConsider two inertial observers, Alice and Bob, moving directly away from each other at constant relative velocity. Alice sends a flash of blue light towards Bob once every $T$ seconds, as measured by her own clock. Because Alice and Bob are separated by a distance, there is a delay between Alice sending a flash and Bob receiving a flash. Furthermore, the separation distance is steadily increasing at a constant rate, so the delay keeps on increasing. This means that the time interval between Bob receiving the flashes, as measured by his clock, is greater than $T$ seconds, say $kT$ seconds for some constant $k>1$ . (If Alice and Bob were, instead, moving directly towards each other, a similar argument would apply, but in that case $k<1$ .)\n\nBondi describes $k$ as “a fundamental ratio”, and other authors have since called it \"the Bondi k-factor\" or \"Bondi's k-factor\".\n\nAlice's flashes are transmitted at a frequency of $f_{s}=1/T$ Hz, by her clock, and received by Bob at a frequency of $f_{o}=1/(kT)$ Hz, by his clock. This implies a Doppler factor of $f_{s}/f_{o}=k$ . 
So Bondi's k-factor is another name for the Doppler factor (when source Alice and observer Bob are moving directly away from or towards each other).\n\nIf Alice and Bob were to swap roles, and Bob sent flashes of light to Alice, the Principle of Relativity (Einstein's first postulate) implies that the k-factor from Bob to Alice would be the same value as the k-factor from Alice to Bob, as all inertial observers are equivalent. So the k-factor depends only on the relative speed between the observers and nothing else.\n\nThe reciprocal k-factor\n\nConsider, now, a third inertial observer Dave who is a fixed distance from Alice, and such that Bob lies on the straight line between Alice and Dave. As Alice and Dave are mutually at rest, the delay from Alice to Dave is constant. This means that Dave receives Alice's blue flashes at a rate of once every $T$ seconds, by his clock, the same rate as Alice sends them. In other words, the k-factor from Alice to Dave is equal to one.\n\nNow suppose that whenever Bob receives a blue flash from Alice he immediately sends his own red flash towards Dave, once every $kT$ seconds (by Bob's clock). Einstein's second postulate, that the speed of light is independent of the motion of its source, implies that Alice's blue flash and Bob's red flash both travel at the same speed, neither overtaking the other, and therefore arrive at Dave at the same time. So Dave receives a red flash from Bob every $T$ seconds, by Dave's clock, which were sent by Bob every $kT$ seconds by Bob's clock. This implies that the k-factor from Bob to Dave is $1/k$ .\n\nThis establishes that the k-factor for observers moving directly apart (red shift) is the reciprocal of the k-factor for observers moving directly towards each other at the same speed (blue shift).\n\nConsider, now, a fourth inertial observer Carol who travels from Dave to Alice at exactly the same speed as Bob travels from Alice to Dave. 
Carol's journey is timed such that she leaves Dave at exactly the same time as Bob arrives. Denote times recorded by Alice's, Bob's and Carol's clocks by $t_{A},t_{B},t_{C}$ .\n\nWhen Bob passes Alice, they both synchronise their clocks to $t_{A}=t_{B}=0$ . When Carol passes Bob, she synchronises her clock to Bob's, $t_{C}=t_{B}$ . Finally, as Carol passes Alice, they compare their clocks against each other. In Newtonian physics, the expectation would be that, at the final comparison, Alice's and Carol's clock would agree, $t_{C}=t_{A}$ . It will be shown below that in relativity this is not true. This is a version of the well-known \"twins paradox\" in which identical twins separate and reunite, only to find that one is now older than the other.\n\nIf Alice sends a flash of light at time $t_{A}=T$ towards Bob, then, by the definition of the k-factor, it will be received by Bob at time $t_{B}=kT$ . The flash is timed so that it arrives at Bob just at the moment that Bob meets Carol, so Carol synchronises her clock to read $t_{C}=t_{B}=kT$ .\n\nAlso, when Bob and Carol meet, they both simultaneously send flashes to Alice, which are received simultaneously by Alice. Considering, first, Bob's flash, sent at time $t_{B}=kT$ , it must be received by Alice at time $t_{A}=k^{2}T$ , using the fact that the k-factor from Alice to Bob is the same as the k-factor from Bob to Alice.\n\nAs Bob's outward journey had a duration of $kT$ , by his clock, it follows by symmetry that Carol's return journey over the same distance at the same speed must also have a duration of $kT$ , by her clock, and so when Carol meets Alice, Carol's clock reads $t_{C}=2kT$ . The k-factor for this leg of the journey must be the reciprocal $1/k$ (as discussed earlier), so, considering Carol's flash towards Alice, a transmission interval of $kT$ corresponds to a reception interval of $T$ . This means that the final time on Alice's clock, when Carol and Alice meet, is $t_{A}=(k^{2}+1)T$ . 
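Plugging concrete numbers into the timings just derived makes the asymmetry vivid (k = 2 and T = 1 are illustrative choices, not values from the article; k = 2 corresponds to v = 0.6c by the velocity relation derived in the radar section below):

```python
k, T = 2.0, 1.0                        # illustrative: k = 2, flashes every 1 s
t_bob_meet = k * T                     # Bob's clock when he meets Carol
t_carol_final = 2 * k * T              # Carol's clock when she reaches Alice
t_alice_final = (k**2 + 1) * T         # Alice's clock at that same meeting
print(t_carol_final, t_alice_final)    # -> 4.0 5.0: Alice has aged more
print(t_alice_final - t_carol_final)   # -> 1.0, which is (k - 1)**2 * T
```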
This is larger than Carol's clock time $t_{C}=2kT$ since\n\n$t_{A}-t_{C}=(k^{2}-2k+1)T=(k-1)^{2}T>0,$\n\nprovided $k\\neq 1$ and $T>0$ .\n\nRadar measurements and velocity\n\nIn the k-calculus methodology, distances are measured using radar. An observer sends a radar pulse towards a target and receives an echo from it. The radar pulse (which travels at $c$ , the speed of light) travels a total distance, there and back, that is twice the distance to the target, and takes time $T_{2}-T_{1}$ , where $T_{1}$ and $T_{2}$ are times recorded by the observer's clock at transmission and reception of the radar pulse. This implies that the distance to the target is\n\n$x_{A}={\\tfrac {1}{2}}c(T_{2}-T_{1}).$\n\nFurthermore, since the speed of light is the same in both directions, the time at which the radar pulse arrives at the target must be, according to the observer, halfway between the transmission and reception times, namely\n\n$t_{A}={\\tfrac {1}{2}}(T_{2}+T_{1}).$\n\nIn the particular case where the radar observer is Alice and the target is Bob (momentarily co-located with Dave) as described previously, by k-calculus we have $T_{2}=k^{2}T_{1}$ , and so\n\n$x_{A}={\\tfrac {1}{2}}c(k^{2}-1)T_{1}$\n$t_{A}={\\tfrac {1}{2}}(k^{2}+1)T_{1}.$\n\nAs Alice and Bob were co-located at $t_{A}=0,x_{A}=0$ , the velocity of Bob relative to Alice is given by\n\n$v={\\frac {x_{A}}{t_{A}}}={\\frac {{\\tfrac {1}{2}}c(k^{2}-1)T_{1}}{{\\tfrac {1}{2}}(k^{2}+1)T_{1}}}=c{\\frac {k^{2}-1}{k^{2}+1}}=c{\\frac {k-k^{-1}}{k+k^{-1}}}.$\n\nThis equation expresses velocity as a function of the Bondi k-factor. It can be solved for $k$ to give $k$ as a function of $v$ :\n\n$k={\\sqrt {\\frac {1+v/c}{1-v/c}}}.$\n\nVelocity composition\n\nConsider three inertial observers Alice, Bob and Ed, arranged in that order and moving at different speeds along the same straight line. 
In this section, the notation $k_{AB}$ will be used to denote the k-factor from Alice to Bob (and similarly between other pairs of observers).\n\nAs before, Alice sends a blue flash towards Bob and Ed every $T$ seconds, by her clock, which Bob receives every $k_{AB}T$ seconds, by Bob's clock, and Ed receives every $k_{AE}T$ seconds, by Ed's clock.\n\nNow suppose that whenever Bob receives a blue flash from Alice he immediately sends his own red flash towards Ed, once every $k_{AB}T$ seconds by Bob's clock, so Ed receives a red flash from Bob every $k_{BE}(k_{AB}T)$ seconds, by Ed's clock. Einstein's second postulate, that the speed of light is independent of the motion of its source, implies that Alice's blue flash and Bob's red flash both travel at the same speed, neither overtaking the other, and therefore arrive at Ed at the same time. Therefore, as measured by Ed, the red flash interval $k_{BE}(k_{AB}T)$ and the blue flash interval $k_{AE}T$ must be the same. So the rule for combining k-factors is simply multiplication:\n\n$k_{AE}=k_{AB}k_{BE}.$\n\nFinally, substituting\n\n$k_{AB}={\\sqrt {\\frac {1+v_{AB}/c}{1-v_{AB}/c}}},\\,k_{BE}={\\sqrt {\\frac {1+v_{BE}/c}{1-v_{BE}/c}}},\\,v_{AE}=c{\\frac {k_{AE}^{2}-1}{k_{AE}^{2}+1}}$\n\ngives the velocity composition formula\n\n$v_{AE}={\\frac {v_{AB}+v_{BE}}{1+v_{AB}v_{BE}/c^{2}}}.$\n\nThe invariant interval\n\nUsing the radar method described previously, inertial observer Alice assigns coordinates $(t_{A},x_{A})$ to an event by transmitting a radar pulse at time $t_{A}-x_{A}/c$ and receiving its echo at time $t_{A}+x_{A}/c$ , as measured by her clock.\n\nSimilarly, inertial observer Bob can assign coordinates $(t_{B},x_{B})$ to the same event by transmitting a radar pulse at time $t_{B}-x_{B}/c$ and receiving its echo at time $t_{B}+x_{B}/c$ , as measured by his clock. 
However, as the diagram shows, it is not necessary for Bob to generate his own radar signal, as he can simply take the timings from Alice's signal instead.\n\nNow, applying the k-calculus method to the signal that travels from Alice to Bob\n\n$k={\\frac {t_{B}-x_{B}/c}{t_{A}-x_{A}/c}}.$\n\nSimilarly, applying the k-calculus method to the signal that travels from Bob to Alice\n\n$k={\\frac {t_{A}+x_{A}/c}{t_{B}+x_{B}/c}}.$\n\nEquating the two expressions for $k$ and rearranging,\n\n$c^{2}t_{A}^{2}-x_{A}^{2}=c^{2}t_{B}^{2}-x_{B}^{2}.$\n\nThis establishes that the quantity $c^{2}t^{2}-x^{2}$ is an invariant: it takes the same value in any inertial coordinate system and is known as the invariant interval.\n\nThe Lorentz transformation\n\nThe two equations for $k$ in the previous section can be solved as simultaneous equations to obtain:\n\n$ct_{B}={\\tfrac {1}{2}}(k+k^{-1})ct_{A}-{\\tfrac {1}{2}}(k-k^{-1})x_{A}$\n$x_{B}={\\tfrac {1}{2}}(k+k^{-1})x_{A}-{\\tfrac {1}{2}}(k-k^{-1})ct_{A}$\n\nThese equations are the Lorentz transformation expressed in terms of the Bondi k-factor instead of in terms of velocity. By substituting\n\n$k={\\sqrt {\\frac {1+v/c}{1-v/c}}},$\n\nthe more traditional form\n\n$t_{B}={\\frac {t_{A}-vx_{A}/c^{2}}{\\sqrt {1-v^{2}/c^{2}}}};\\,x_{B}={\\frac {x_{A}-vt_{A}}{\\sqrt {1-v^{2}/c^{2}}}}$\n\nis obtained.\n\nRapidity\n\nRapidity $\\varphi$ can be defined from the k-factor by\n\n$\\varphi =\\log _{e}k,\\,k=e^{\\varphi },$\n\nand so\n\n$v=c{\\frac {k-k^{-1}}{k+k^{-1}}}=c\\tanh \\varphi .$\n\nThe k-factor version of the Lorentz transform becomes\n\n$ct_{B}=ct_{A}\\cosh \\varphi -x_{A}\\sinh \\varphi$\n$x_{B}=x_{A}\\cosh \\varphi -ct_{A}\\sinh \\varphi$\n\nIt follows from the composition rule for $k$ , $k_{AE}=k_{AB}k_{BE}$ , that the composition rule for rapidities is addition:\n\n$\\varphi _{AE}=\\varphi _{AB}+\\varphi _{BE}.$"
]
| [
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/c3c9a2c7b599b37105512c5d570edc034056dd40",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/c3c9a2c7b599b37105512c5d570edc034056dd40",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/c3c9a2c7b599b37105512c5d570edc034056dd40",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9409679,"math_prob":0.9960351,"size":11216,"snap":"2019-43-2019-47","text_gpt3_token_len":2570,"char_repetition_ratio":0.13476633,"word_repetition_ratio":0.10079156,"special_character_ratio":0.23288159,"punctuation_ratio":0.11672794,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998055,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-17T00:45:43Z\",\"WARC-Record-ID\":\"<urn:uuid:1ccad348-6b5a-4af7-b59d-8cddfbd0b22c>\",\"Content-Length\":\"194603\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:762bce7f-6bf7-4036-889a-7f4b61ffbd27>\",\"WARC-Concurrent-To\":\"<urn:uuid:baa674cf-bcee-4fcc-b9d5-9ff41eb66282>\",\"WARC-IP-Address\":\"52.71.209.33\",\"WARC-Target-URI\":\"http://conceptmap.cfapps.io/wikipage?lang=en&name=Bondi_k-calculus\",\"WARC-Payload-Digest\":\"sha1:SOCJ756MWPRXHYRLL66ZUEVC7TSKCM62\",\"WARC-Block-Digest\":\"sha1:J2EEMMGTCKJBOLUAFNIA2OXNIQ6GGF6K\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986672431.45_warc_CC-MAIN-20191016235542-20191017023042-00242.warc.gz\"}"} |
https://se.mathworks.com/matlabcentral/cody/problems/109-check-if-sorted/solutions/234827 | [
"Cody\n\n# Problem 109. Check if sorted\n\nSolution 234827\n\nSubmitted on 24 Apr 2013 by Aditya Suryavanshi\nThis solution is locked. To view this solution, you need to provide a solution of the same size or smaller.\n\n### Test Suite\n\nTest Status Code Input and Output\n1 Pass\n%% x = sort(rand(1,10^5)); y_correct = 1; assert(isequal(sortok(x),y_correct))\n\nans = 1\n\n2 Pass\n%% x = [1 5 4 3 8 7 3]; y_correct = 0; assert(isequal(sortok(x),y_correct))\n\nans = 0\n\n### Community Treasure Hunt\n\nFind the treasures in MATLAB Central and discover how the community can help you!\n\nStart Hunting!"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.5315359,"math_prob":0.9490504,"size":431,"snap":"2020-45-2020-50","text_gpt3_token_len":141,"char_repetition_ratio":0.14519906,"word_repetition_ratio":0.0,"special_character_ratio":0.3689095,"punctuation_ratio":0.105882354,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.978111,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-01T03:28:48Z\",\"WARC-Record-ID\":\"<urn:uuid:cd2b5a96-2dab-4001-a307-df6e26958cf0>\",\"Content-Length\":\"79432\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:908e8572-f168-4adb-8293-552e5dbae56e>\",\"WARC-Concurrent-To\":\"<urn:uuid:4a1fcd9f-bc8f-478c-b259-ac83b159928b>\",\"WARC-IP-Address\":\"184.24.72.83\",\"WARC-Target-URI\":\"https://se.mathworks.com/matlabcentral/cody/problems/109-check-if-sorted/solutions/234827\",\"WARC-Payload-Digest\":\"sha1:XCGXIAUHBLQYTJDZGJ5553E6NF4SEAO2\",\"WARC-Block-Digest\":\"sha1:EFAHCT3E53KRM4NTRHTT6NMQ6YDECYYQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141542358.71_warc_CC-MAIN-20201201013119-20201201043119-00043.warc.gz\"}"} |
https://discourse.processing.org/t/why-is-my-ico-sphere-not-a-sphere/27190 | [
# Why is my Ico-Sphere not a Sphere

Hello, I am kind of new to Processing 3D and mediocre at Java. This little snippet of code is supposed to take the standard icosahedron vectors and turn them into a sphere through subdivision. After many painful weeks of converting each part of the common code I found on the internet, I'm getting the subdivisions and drawable geometry, but it is most definitely not a sphere.

```java
public class isoSphere
{
  Icosehedron ico = new Icosehedron();

  ArrayList<PVector> vNew;
  ArrayList<Integer> cul;
  ArrayList<PVector> vecL = new ArrayList<PVector>();
  ArrayList<Integer> drawOrder = new ArrayList<Integer>();

  public PVector MiddlePoint(PVector p1, PVector p2)
  {
    if (vecL.indexOf(p1) > vecL.indexOf(p2)) {
      return new PVector((p1.x + p2.x) / 2.0, (p1.y + p2.y) / 2.0, (p1.z + p2.z) / 2.0);
    } else {
      return new PVector((p2.x + p1.x) / 2.0, (p2.y + p1.y) / 2.0, (p2.z + p1.z) / 2.0);
    }
  }

  public isoSphere(int times)
  {
    vecL = ico.vec;
    drawOrder = ico.faceI;

    for (int t = 0; t < times; t++)
    {
      vNew = new ArrayList<PVector>();
      cul = new ArrayList<Integer>();

      for (int v = 0; v < drawOrder.size(); v++)
      {
        PVector a = vecL.get(drawOrder.get(v));
        v++;
        PVector b = vecL.get(drawOrder.get(v));
        v++;
        PVector c = vecL.get(drawOrder.get(v));

        PVector nav = this.MiddlePoint(a, b);
        PVector nbv = this.MiddlePoint(b, c);
        PVector ncv = this.MiddlePoint(c, a);

        boolean vecA = false;
        boolean vecB = false;
        boolean vecC = false;
        boolean vecAl = false;
        boolean vecnB = false;
        boolean vecCl = false;

        for (int bud = 0; bud < vNew.size(); bud++)
        {
          if (a == vNew.get(bud)) {
            vecA = true;
          } else if (b == vNew.get(bud)) {
            vecB = true;
          } else if (c == vNew.get(bud)) {
            vecC = true;
          } else if (nav == vNew.get(bud)) {
            vecAl = true;
          } else if (nbv == vNew.get(bud)) {
            vecnB = true;
          } else if (ncv == vNew.get(bud)) {
            vecCl = true;
          }
        }

        // these bodies are empty as posted -- presumably the vNew.add(...)
        // (and cul.add(...)) lines were lost to the forum's formatting
        if (vecA != true) {
        }
        if (vecAl != true) {
        }
        if (vecB != true) {
        }
        if (vecnB != true) {
        }
        if (vecC != true) {
        }
        if (vecCl != true) {
        }

        int aint = vNew.indexOf(a);
        int bint = vNew.indexOf(b);
        int cint = vNew.indexOf(c);
        int navint = vNew.indexOf(nav);
        int nbvint = vNew.indexOf(nbv);
        int ncvint = vNew.indexOf(ncv);
      }

      vecL = vNew;
      drawOrder = cul;
    }
  }

  public PVector getVertex(int i) { return vecL.get(i); }

  public int getFace(int i) { return drawOrder.get(i); }

  public void drawVertex()
  {
    for (int f = 0; f < vecL.size(); f++)
    {
      PVector v = this.getVertex(f);

      if (f <= 3) {
        stroke(255, 0, 0);
      } else if (f >= 8) {
        stroke(0, 0, 255);
      } else {
        stroke(0, 255, 0);
      }

      strokeWeight(20);
      point(v.x, v.y, v.z);
    }
  }

  public void drawLines()
  {
    strokeWeight(4);

    for (int i = 0; i < vecL.size(); i++)
    {
      PVector l1 = this.getVertex(i);
      i++;
      PVector l2 = this.getVertex(i);
      i++;
      PVector l3 = this.getVertex(i);
      i++;
      PVector l4 = this.getVertex(i);

      stroke(255, 0, 255);
      line(l1.x, l1.y, l1.z, l2.x, l2.y, l2.z);
      stroke(255, 0, 0);
      line(l2.x, l2.y, l2.z, l3.x, l3.y, l3.z);
      stroke(0, 255, 0);
      line(l3.x, l3.y, l3.z, l4.x, l4.y, l4.z);
      stroke(0, 0, 255);
      line(l4.x, l4.y, l4.z, l1.x, l1.y, l1.z);
    }
  }

  public void drawTriangles()
  {
    strokeWeight(4);

    for (int i = 0; i < drawOrder.size(); i++)
    {
      PVector v1 = this.getVertex(this.getFace(i));
      i++;
      PVector v2 = this.getVertex(this.getFace(i));
      i++;
      PVector v3 = this.getVertex(this.getFace(i));

      stroke(255, 0, 0);
      line(v1.x, v1.y, v1.z, v2.x, v2.y, v2.z);
      stroke(0, 255, 0);
      line(v2.x, v2.y, v2.z, v3.x, v3.y, v3.z);
      stroke(0, 0, 255);
      line(v3.x, v3.y, v3.z, v1.x, v1.y, v1.z);
    }
  }
}
```

The vectors of the icosahedron go as follows:

```java
float g = (float)(1.0 + Math.sqrt(5.0)) / 2.0;
float v = 200.0;
float t = g * 200;

PVector x1 = new PVector(v, t, 0);
PVector x2 = new PVector(v, -t, 0);
PVector x3 = new PVector(-v, -t, 0);
PVector x4 = new PVector(-v, t, 0);

PVector y1 = new PVector(0, v, t);
PVector y2 = new PVector(0, v, -t);
PVector y3 = new PVector(0, -v, -t);
PVector y4 = new PVector(0, -v, t);

PVector z1 = new PVector(t, 0, -v);
PVector z2 = new PVector(-t, 0, -v);
PVector z3 = new PVector(-t, 0, v);
PVector z4 = new PVector(t, 0, v);
```

I had to draw each line of the triangle by hand, which I sketched out on a notepad, and plug them in vertex for vertex, which I then iterated through two at a time, as you can see in the drawTriangles function. I tried using P3D's beginShape method, but it would only draw the bottom; after that it became a distorted mess. Like I said, I'm new to Processing, and my Java experience boils down to my experience with Android Studio and AIDE, which I'm fair at, but there are still a lot of things to learn. I have advanced from beginner to intermediate; I know ten thousand things I didn't know about code when I started, but I don't fully understand all of it just yet, mostly in syntax; math is pretty straightforward. But if anybody could help me figure out why I am not getting a sphere, and perhaps suggest a bit of code that can shorten this mess up a little, I would appreciate it. I was trying to do it mostly in one class and it started to get messy. I did try adjusting the value I divide each vector component by in the method MiddlePoint, finding that anything below 1.5 returns some rather "interesting" results, with anything greater than 2 being the inverse of that. I can get a single subdivision to return a mesh resembling a sphere with an increase of 0.1 for each subdivision, but once you go past 2 there is no return. As far as I can see, this code should work like a charm, minus the various workarounds I managed to make work, which do drastically affect performance, as I can only get about 5 subdivisions before it crashes.

**Reply:** Please select the code with the mouse and click the small `</>` sign in the command bar.

Also, I couldn't run your code because the Icosehedron class was missing, as well as setup and draw. Please post a runnable version.
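For what it's worth (this is not from the thread itself, and the vertex/face tables below are my own, not the poster's `Icosehedron` class): a subdivided icosahedron only becomes a sphere if every new midpoint is pushed back out onto the sphere's surface. Taking plain midpoints, as `MiddlePoint` does, leaves each new vertex inside the sphere, so the mesh stays a faceted icosahedron. A minimal Python sketch of one subdivision step with that projection, using the same golden-ratio coordinates scaled by 200:

```python
import math

PHI = (1.0 + math.sqrt(5.0)) / 2.0
SCALE = 200.0
RADIUS = SCALE * math.sqrt(1.0 + PHI * PHI)  # all 12 icosahedron vertices sit at this radius

# the 12 icosahedron vertices (cyclic permutations of (+-1, +-phi, 0)), scaled by 200
VERTS = [(-1, PHI, 0), (1, PHI, 0), (-1, -PHI, 0), (1, -PHI, 0),
         (0, -1, PHI), (0, 1, PHI), (0, -1, -PHI), (0, 1, -PHI),
         (PHI, 0, -1), (PHI, 0, 1), (-PHI, 0, -1), (-PHI, 0, 1)]
VERTS = [(SCALE * x, SCALE * y, SCALE * z) for x, y, z in VERTS]

# one commonly used triangulation of those 12 vertices into 20 faces
FACES = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
         (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
         (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
         (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]

def subdivide(verts, faces, radius):
    """Split every triangle into 4, projecting each edge midpoint onto the sphere."""
    verts = list(verts)
    cache = {}  # edge (i, j) with i < j  ->  index of its midpoint vertex

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key in cache:               # reuse the vertex shared with the
            return cache[key]          # neighbouring triangle
        ax, ay, az = verts[i]
        bx, by, bz = verts[j]
        mx, my, mz = (ax + bx) / 2, (ay + by) / 2, (az + bz) / 2
        s = radius / math.sqrt(mx * mx + my * my + mz * mz)  # push out to the sphere
        verts.append((mx * s, my * s, mz * s))
        cache[key] = len(verts) - 1
        return cache[key]

    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
    return verts, new_faces
```

Because the original vertices already lie at `RADIUS` and every new midpoint is rescaled to `RADIUS`, every vertex at every subdivision level sits exactly on the sphere. The edge cache, keyed on the sorted index pair, also avoids the duplicate-vertex bookkeeping that the `vecA`/`vecB`/... flags in the post are fighting with.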
https://ocw.mit.edu/courses/mathematics/18-085-computational-science-and-engineering-i-fall-2008/video-lectures/lecture-27-finite-elements-in-2d-part-2/
# Lecture 27: Finite Elements in 2D (part 2)

Instructor: Prof. Gilbert Strang

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

PROFESSOR STRANG: So, let's see, you probably guessed on that quiz problem three, it wasn't what I meant. I get a zero for that problem. But you'll get probably good numbers, so that's an election gift if it comes out that way. So they're all in the hands of the TAs to be graded. We have a holiday Monday, I think. We come back to Fourier. Now, so we just have a concentrated shot at Fourier, just about eight or nine lectures in November. So stay with it and that'll be of course the subject of the third quiz. Which will have no mistakes. It'll be solved by the TAs in advance and we'll spot things. So, and if we have the quizzes to return to you by Wednesday that will be great. I hope so, but they have a big job. A little bit, looking far ahead, the end of Fourier, the quiz is December 4th, I think that's a Thursday. And that's the end of the course. So December 4th. So we'll be ending the course a little bit early. Because I'll be in Hong Kong, to tell the truth. And we've done a lot, and with the review sessions we're really doing well. So, that's the future, Fourier.

Today is an important day too, finite elements in 2-D, that's a major part of computational science and engineering. The finite element idea, the idea of using polynomials, you can find in some early papers by Courant, a mathematician in New York, and by a guy in China, neat guy named Feng Kang. But those papers were sort of, you could do it this way if you wanted.
It was really the structural engineers in Berkeley and elsewhere who made it happen ten years later. And the whole idea has just blossomed. Continues to grow. So I had an early book, in the '70s, actually, about the mathematical underpinnings, the math basis for the finite element method. And many other finite element books have come. Professor Bathe, you know, teaches a full course on that.

But, I think we can get the idea of finite elements here. We did them in 1-D, and now there's a MATLAB problem and I'd like to just describe that particular problem if I can, as an example. And, of course, you would use the code that's printed in the book, and that's available on the website just to download. But the problem is not on a square domain. It starts on a circle, so that the first lines of the code, calling the MATLAB command squaregrid, are not applicable. So you have to create, then, a mesh. Well, I have a suggested mesh so I'll draw that, and then from that you want to make a list of all the node points. A list, P, of-- So what the code needs is two lists. Well, let me draw a picture of, well, it's a circle. And I'm going to be solving Poisson's equation. The equation will be -u_xx-u_yy=4, in the circle. So it's Poisson but with a constant right-hand side. That will mean that all the integrals of f times v, the right-hand side of our discrete equation will be, the integrals are all easy because we just have a constant there times the trial function. OK, and then on the boundary it's going to be u=0. On the boundary. So it's a classic problem. And we can say what the solution is. So it's one with a known solution. I think it would be x squared-- No, I guess one, one minus x squared minus y squared. This should all be on the .086 site. I just didn't have a chance to look this morning to be sure it got up. So you can watch, here. So that, I hope, does solve the problem.
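The claimed solution is easy to check by machine. A quick sketch (using sympy, which is my choice here, not something the lecture uses) confirming that u = 1 - x^2 - y^2 satisfies -u_xx - u_yy = 4 and vanishes on the unit circle:

```python
import sympy as sp

x, y, theta = sp.symbols('x y theta')
u = 1 - x**2 - y**2

# -u_xx - u_yy should equal the right-hand side f = 4 everywhere
lhs = -sp.diff(u, x, 2) - sp.diff(u, y, 2)
assert sp.simplify(lhs - 4) == 0

# on the boundary circle x = cos(theta), y = sin(theta), u should vanish
on_circle = u.subs({x: sp.cos(theta), y: sp.sin(theta)})
assert sp.simplify(on_circle) == 0
```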
Two x derivatives give us a two, two y derivatives another two, so we get four.

So we know the answer; the question is, and I'm interested in this question, for research reasons too, is what's the error when you go to a polygon? You go to a-- These curved boundaries don't get correctly saved. You approximate them by straight lines. That would be the first idea. And with this, all this symmetry, let's keep the problem nice and use a regular polygon. So maybe I'll try to draw one with about eight sides, but-- OK. So we impose u=0 at these nodes. So u is zero at those nodes, and then we have a mesh. So we want to create a mesh. OK, so with all the symmetry here, the natural idea would be to start with eight pieces, or M pieces if I have, this is a regular M-sided polygon, let's say, and I'll take M to be eight in this picture. And I think we can work on just one triangle. By rotational symmetry, all those triangles are going to be the same. So I think our domain is really this one triangle here. That's where we're working. And in that triangle, I think we have zero boundary conditions. And across this edge I think we have natural boundary conditions. Slope zero, if I see the picture correctly. The rotational symmetry would mean that things are not changing. That every triangle is the same. So I think on these boundaries it's the Neumann condition, dU/dn=0. And I'm frankly not sure what to do at the origin, so I'll maybe just try both ways and see.

OK, so there is a real problem. Of course, it's artificial in the sense that we know the answer. But it's a real open question of what does the error look like, from doing that. So that's the goal and let me just say the problem I'll ask you to do, and it probably is quite enough to be ready for next Friday, is to use piecewise linear elements. Which is what I'm going to do. What every discussion of finite elements will begin with, linear elements. Those pyramids that I spoke about at the end of last time.
So that's what I hope, but actually I would be highly interested, if anybody got into the problem, to try quadratic elements. So I'll just say here, second-degree quadratic polynomials would be more accurate. Would be more accurate. So, in other words, this is a first type of finite element called P_1, for polynomials of degree one. These guys, I would call P_2, for polynomials of degree two. And I've mentioned here the possibility of using quads. Instead of triangles, if I had squares for example, the simplest element would be a Q_1. So these are-- If I manage today to tell you about how to use P_1 and P_2 and Q_1, you're on your way. And for the requirements of this course, P_1 is the first point to understand. OK.

While I'm speaking about codes and meshes, let me draw the mesh I proposed in the homework problem. So I thought OK, we just have to have a simple mesh here. So let me draw that line in. I know all these points, right? This is (0, 0) here. And that point's on the circle. And so is this, that point might be, so what's the angle there? That angle is probably pi/8. The whole angle would be 2pi/8, and eight of them would go all the way around. So I think that angle is pi/8.

And so this point would be (cos(pi/8), sin(pi/8)). We know where they are. But that's going to be what we have to list. We have to list where are the coordinates of all the mesh points. So let me describe the rest of the mesh points and then see what that list would look like. And once you've created the list, the code will take over. And then you plot the results and see what's going on. So let me suggest a mesh. It's pretty straightforward. I just divided this piece into N, this center thing into N-- So I called that distance h. So Nh gets me out to here. Whatever that is, that's cos(pi/8), I guess, that's the x coordinate of that line. So those are mesh points and then let me keep drawing these. So these will be mesh points too, and these will be mesh points too.
So at this point, I've got probably 12 or 13 mesh points. But I've got quads, right? Well, I've got a couple of triangles here that I'm not going to touch, those are fine. But these are quads and they could be used with the Q_1 element, but I'm thinking let's stay with triangles. So I just suggested to put in triangles, put in these diagonals, keeping symmetry. And so there's the mesh. There's the mesh.

And then what does the code ask for? So I've got 13 mesh points. And the code, first of all it wants a list of the coordinates of all those mesh points. So that the code will-- And I better number them, of course. So let me number them one, shall I number them the center guys first, one, two, three, four, five, and then up here six, seven, eight, nine, now I don't know if this is a good numbering. 10, 11, 12, 13. Why don't we, just to have some consistency within plans, why don't you, you don't have to take N=4. I hope you'll take, what did I take N as four or five? Yeah, right. N is 4. But you'll want to try different N's. N=4 would be a good crude start to see what's going on, but then I hope you'll go higher and get better accuracy. And you can see how the accuracy improves, how you get closer to that, as N gets bigger. OK, we've got the nodes numbered. Oh, I better number the triangles. OK, how shall we number the triangles? Shall we do along the top, or this? I don't know. What do you want to do for the numbering of the triangles? Maybe run along the top, and then run along the bottom, because then it'll practically be a copy. So I didn't leave myself much space, but one, two, three, four, five, six, seven. Seven triangles along the top and seven along the bottom, so I have 14 triangles in this mesh. So it's a mesh with 14 triangles and 13 nodes. And I know the positions of everyone, right? I know the x,y coordinates of every one. So what the code will want is a list of those coordinates. So a list p, p will be a list of coordinates.
The first guy will be-- Of the nodes so 13 rows, 3 columns. So it's a little 13 by 3 matrix that tells you where all the nodes are.

So the first one on that list would be (0,0). That's for node one. And the second one would be whatever the coordinates of that are, something, zero. (h,0), I guess it is. The third one will be (2h,0), and so on, and then complete the list of 13 positions. So you have then told the code where all the nodes are. What else do you have to tell it? Not much. You now have to tell it about the triangles. So now for every, why do I say three columns? Maybe only two, is it? You see the point already. I've forgotten, maybe, yeah. I don't see why. Well for triangles, I'm going to need three, for nodes maybe it's only got two. Maybe it's 13 by 2. I don't see why I need three. But, anyway. Then the other list is triangles. So this will be the list t, and so it takes triangle number one. Which is right here. That very first triangle. And what does it have to tell us about triangle one? The three nodes. If it tells us the three node numbers, and this list, p, gave their positions, we've got it. So how many triangles did I have? 14? So t will be 14 by 3, and so the first guy will be just one, node number one, node number two, and node number six. That will tell us which is the first triangle. And the second triangle, I guess I've drawn from two to seven to six, right? That's a very skinny triangle up there but it's the one that started at node two, went up to seven and back to six.

So a list like that. And then the code will do the rest. I hope. Almost all the rest. The code will create the matrix K. It'll create the matrix K with, it'll be singular. Boundary conditions won't yet be in there. And then a final step after that, K, or maybe we could call it K_0, is created, a final step will be to fix u. At least at these three points. So these three will be boundary nodes. And as I say, I'm not too sure about that one, I apologize.
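The two lists p and t are the whole mesh description. As a minimal illustration of the convention (the tiny two-triangle square mesh below is mine, not the 13-node lecture mesh, and I use 0-based indices where the lecture counts nodes from 1):

```python
import numpy as np

# p: one row of (x, y) coordinates per node -- for the lecture's mesh this
# would be 13 x 2, starting (0,0), (h,0), (2h,0), ...
p = np.array([[0.0, 0.0],   # node 0
              [1.0, 0.0],   # node 1
              [1.0, 1.0],   # node 2
              [0.0, 1.0]])  # node 3

# t: one row of three node numbers per triangle -- for the lecture's mesh
# 14 x 3, starting (1, 2, 6) and (2, 7, 6) in its 1-based numbering
t = np.array([[0, 1, 2],
              [0, 2, 3]])

# everything the assembly loop needs: the three corner coordinates of each triangle
for tri in t:
    corners = p[tri]              # 3 x 2 array of (x, y) corners
    assert corners.shape == (3, 2)
```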
At those boundary nodes I'm going to take the values to be zero. So this is going to be zero along the whole edge, because if it's zero there, zero there, zero there and zero there, and if it's linear, it's zero. So the final sort of subroutine in the code, the final group of commands you want to impose, zeroes here, that should then make the matrix K invertible, and then you've got KU=F to solve. So what the code is doing is creating K and F. You see the overall picture? I jumped right into this particular mesh, particular problem, but now I really should back up to where it starts. This is going to be the weak form of Laplace, Poisson, maybe I'll make a little space to put in Poisson's name too. You have a picture already of what this weak form is about, so now I'm really backing up to the start. I take the equation and I get its weak form. And remember that's in the continuous case, as it was in the quiz problem. The first step is the continuous weak form, and then the second step is choose test functions and trial-- I'm sorry, gosh, I'm in bad shape here. Because they're the same. This isn't my worst error, today. But those are trial functions, and these are test functions. OK, questions at this point, because I've-- Yeah, thank you. Good. Let's look at this picture.

AUDIENCE: [INAUDIBLE]

PROFESSOR STRANG: Because-- We did. That's right. So because my question is what is-- The continuous problem I would like to solve has u=0 on the polygon. So in a way you can forget the circle, where we know the answer. Now, we really are looking on the polygon, and I would like to know what's the solution like on that polygon. And then so there are two steps, the first step was start with a circle, we have the answer. Second step is go to a polygon, continuous problem, Poisson's equation in the polygon. How different is that from this? Because this will not satisfy the polygon boundary conditions. So that's the circle answer.
Then the question is, what's the polygon answer. And I don't know that. You may say a regular polygon, you can't do that. I didn't think you can. It's amazing, but probably a triangle or a square. So if M is three or four, probably some formulas would be available. But I think once we get higher, I don't know the answer to the Dirichlet problem, to Poisson's equation on a polygon, on a regular polygon. And that's what I would really like to know more about. And how do I find out more about it? By finite elements. With your help. Taking that polygon, breaking it into a mesh, looking only at one triangle just for simplicity, and getting u finite elements. Well, I should say u_(P_1). That's the finite element solution using linear. I would really like to know u_(P_2), the finite element solution which will be better, if I use quadratics. So now I get the fun of describing the linear elements, the quadratic elements, the quads.

But did I answer that question OK? Yeah. So this is the problem I would like to know the answer to. If I have this equation, zero boundary conditions on a regular polygon with M sides, what's the answer? And it's going to be close to this, but it won't be the same. Because this does not vanish on the polygon edges. And I would like to compare the slopes, too. So the homework problem asked you not only to compare u_circle with u_(P_1), but also the slopes. The slopes here are easy, slopes here are easy because it's a bunch of flat functions. So the slopes are just constant in each triangle. OK, I'm guessing that the error gets smaller as you go in. I think that if you plot the error, it'll be largest out here and get small there. But remains to be seen. So I hope you enjoy-- yeah, good.

AUDIENCE: [INAUDIBLE]

PROFESSOR STRANG: Rather than seven.

AUDIENCE: [INAUDIBLE]

PROFESSOR STRANG: No, the middle. There's nothing magic about any particular mesh.
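The boundary-condition step mentioned above, pinning U = 0 at the boundary nodes so that the singular K_0 becomes invertible, amounts to deleting those rows and columns and solving the reduced system. A toy sketch (the 3-node matrix here is my own example, not the lecture's K):

```python
import numpy as np

# K0 is singular: a free-free stiffness matrix, every row sums to zero
K0 = np.array([[ 1.0, -1.0,  0.0],
               [-1.0,  2.0, -1.0],
               [ 0.0, -1.0,  1.0]])
F = np.array([1.0, 1.0, 1.0])

fixed = [0]                                # boundary nodes where U = 0
free = [i for i in range(len(F)) if i not in fixed]

K = K0[np.ix_(free, free)]                 # delete the fixed rows and columns
U = np.zeros(len(F))
U[free] = np.linalg.solve(K, F[free])      # the reduced K is invertible

# sanity check: the free equations of K0 @ U = F are satisfied
assert np.allclose((K0 @ U)[free], F[free])
```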
I just chose this mesh as pretty good, and actually, I'm imagining M could get pretty big. That would be interesting. M=8 would be interesting, M=16, M=1,024, now then I'd really get interested. OK, but so if M is 1,024, then this side would be very small. Right? And I just wanted more triangles. Actually, I would like more than I've got. I'd like, if M was really big, then probably N should be at least that big. So I should have a thousand this way, if this is just a tiny bit. I just want little tiny h, and then, yeah. Actually, that might not be too bad. If M and N were roughly comparable, then that length would be roughly comparable to these lengths. And the triangles would be pretty good shapes. And that's what you're looking for. I think there's a lot of experiments to be done here. So, I'm thinking then of M and N. Here I took N to be just four. When M was eight. That's fine. But if you keep M and N roughly the same size, then you've got triangles that are not too long and skinny. I'll tell you when you might want. So generally you want nice shaped triangles. You don't want angles very small or very large, usually. But there would be, anybody in Course 16 can imagine that if I am computing the flow field past a wing, that long, thin triangles in the direction of the wing are natural. I mean, somehow a problem like true aerodynamics is by no means isotropic. I mean, the direction of the wing is kind of critical to whether the plane flies, right? So don't make the wing vertical. And if you want accuracy, then you have long, thin triangles in the direction of the flow.

But here we're not doing a flow problem. We haven't got shocks, or trailing edges, and other horrible stuff that makes planes fly. We just got Poisson's equation. OK. Thanks for those good questions, another one.

AUDIENCE: [INAUDIBLE]

PROFESSOR STRANG: What would the dimension look like? Ah, would you like me to show you something about quadratics? Yeah.
Shall I jump into quadratics, it's kind of fun. Quadratics, so let me just do-- So I'll come back to the weak form, right it's totally-- Oh, I'll do it now. It's so simple I don't want to forget it. The weak form, so I write the equation down, -u_xx-u_yy=f. This is the continuous problem, equal f(x,y), OK? So that's the strong form. And I've made it the Laplacian here to keep it simple, and any right-hand side. OK, how do I get to the weak form? Just remind me, I multiply both sides by any test function v(x,y). Multiply by v(x,y), and then what do I do? I integrate over the whole region. So that's the weak form, dxdy, this is for all v, all v(x,y), all, I'll say all admissible v(x,y). So that's the weak form. If this holds for this great family of v's, the idea behind it is, that if this holds for all these trial functions, test functions, v(x,y), the only way that can happen is for this to actually equal that. That's a fundamental lemma in this part of math, and of course it has to be spelled out more than I'm doing in words. But the idea is that if these hold for such a large class of v(x,y), then the only way that can happen is for the strong form to hold. For this to actually match this. OK, so that's the start. But then what's the next step in the weak form? I like the right-hand side but I'm not so crazy about the left-hand side. I'm not crazy about it because this has second derivatives of u, and my little roof functions, pyramid functions, haven't got second derivatives. So I would be dead in the water without doing the natural step that makes everything beautiful, which is? Integration by parts.

Integrate by parts. Move derivatives off of u, onto v. One derivative onto v, off of u, so then u and v each have one derivative. I can use my piecewise linear, piecewise quadratic, all my finite elements are going to go fine. So I integrate by parts. So integrate by parts, and what does that mean in 2-D? Of course I have a double integral here.
So integrate by parts, that means you use Green's formula. That was the key point of this Green, or Gauss-Green formula. Can I do it first in, this is -div(grad u), times v dxdy, we can write out all the terms. We can use vector notation. I could use that nabla, that upside down triangle notation, or whatever. But maybe good to see it a few different ways. So what's the point? When I integrate by parts, that minus disappears to a plus, I have a double integral then, and these derivatives move off of-- I'm taking one derivative off of here, the divergence moves over there, but when the divergence moves onto v it becomes? The transpose. It becomes gradient. And so this is gradient of u, gradient of v, dxdy. Plus boundary terms. The integral of, what is, let's see. What do I have in this integral, I have grad u dot n, times v around the boundary. And with my boundary conditions that's going to be gone, so I can come back to that.

Now, you all looked a little uncertain when I wrote Green's formula this way. For this problem I can write it more easily. This is my left side. I want to write the answer, I just want to write this weak form in a much simpler form. So let me say, what have I got here. Well, all I've got is one derivative is moving off of u and onto v. And the minus sign is disappearing, so I have du/dx times dv/dx. Right? One off of u, onto v. The other term, one y derivative, moving off of this and onto v. Minus sign again going to a plus. du/dy, dv/dy. That's the integral. That's it, that's cool. Easy to do. And on the right-hand side of course I have no change. The integral of f(x,y) v(x,y) dxdy. Now, that's the weak form. Here it is, weak form. That's pretty nice. Beautifully symmetric, so the matrix that comes up when we plug in finite specific trial functions and test functions is going to be a symmetric matrix K.
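That integration-by-parts identity, the integral of (-u_xx - u_yy)v equaling the integral of (u_x v_x + u_y v_y) when v vanishes on the boundary, can be spot-checked numerically. The unit square and the particular u and v below are my choices, not the lecture's circle problem:

```python
import numpy as np

n = 400
h = 1.0 / n
# midpoint grid on the unit square
xm = (np.arange(n) + 0.5) * h
X, Y = np.meshgrid(xm, xm, indexing='ij')

# u = v = x(1-x)y(1-y): zero on the whole boundary, so no boundary term survives
u_x = (1 - 2 * X) * Y * (1 - Y)
u_y = X * (1 - X) * (1 - 2 * Y)
minus_lap_u = 2 * Y * (1 - Y) + 2 * X * (1 - X)   # -u_xx - u_yy, computed by hand
v = X * (1 - X) * Y * (1 - Y)

strong = np.sum(minus_lap_u * v) * h * h          # integral of (-lap u) v
weak = np.sum(u_x * u_x + u_y * u_y) * h * h      # integral of grad u . grad v

assert abs(strong - weak) < 1e-4
```

Both integrals work out to 1/45 for this choice of u and v, which the midpoint rule reproduces to several digits.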
And the integrals are of first derivatives, so as long as our functions, our trial functions and test functions are continuous, that is, they shouldn't jump. If the trial functions or test functions jump, then if I have a jump, then the derivative would be a delta. I'd have another delta here, I'd have an integral delta, a delta times delta, and I don't want that. That's infinite. Those discontinuous elements would not be conforming, and that's a whole new world of discontinuous Galerkin. I'd have to impose penalty stuff, and Professor Peraire I mentioned, and others, Professor Darmofal in aero, are experts on this. We're doing continuous Galerkin. CG. Our piecewise linear, piecewise quadratic, they'll be continuous. All I have to do is these derivatives. Integrate those things and that's what the code will do.

OK, I've got to the weak form. That's the weak form. Now comes the finite element idea. So there is our weak form, now ready for the finite element idea. OK, so what was that idea? That's the continuous problem. Now, the finite element idea is, plug in U as a combination-- Let me write out the terms. You know what's coming here. If I'm using finite elements, I'm going to choose nice polynomials phi, say N of them. That would be like, one for every node, so I would have 13 functions here. I'm going to choose the V's to be the same as the phis. And then, I'm working then in 13 dimensions instead of infinite dimensions. So what do I do? For this limited subspace, this finite element subspace, this piecewise polynomial, piecewise linear subspace, I plug that into the weak form and I test it against 13 V's, which are phis. So I plug that in, so now what is K? Now let me just say, so I now have the integral of, yeah I guess I'd better plug it in. K_ij would then be the integral -- I'm just copying the weak form in -- Of dU/dx d-- no, sorry, I'd better just plug it in first. dU/dx*dV/dx plus dU/dy*dV/dy, those are the integrals I have to do.
And on the right hand side I have to do the f-integral. fV dxdy. OK, plug that in. That's the integral over the whole domain. When I plug it in this U is a combination of known functions and the V's will be the same guys. So what am I going to get here? It's just as in 1-D. So no new ideas entering here. The new idea's going to enter when I construct these phis.\n\nLet me just say, though, one thing. In 1-D, we pretty much had a choice of-- When it was one dimension. Just remember that. In one dimension, when I had these hat functions, when I had these guys, integrated against these guys, I pretty much had a choice of did I want to think about integrating that hat function against that one. Or actually it was their derivatives. It was the integral of U, yeah. Of phi, what I needed was all the integrals of phi_i', phi_j'. Those are what I needed, these go into K. Into the matrix K. In fact, that's what equals K_ij, the integral of phi prime. In 1-D. OK. Now, what I was going to say, I could do it this way if I wanted. But you remember the other way to do it? Was element at a time. So this was one method here. That found the entries of K separately, one by one. The other way was take the elements, one by one. So the other way was take an element like this element. It's got two functions, two trial functions are involved there. There's a little two by two, so this is four. Two by two element matrices K_e. And the quiz recalled that part. That approach. So what I want to say is that's the right way to do it in two dimensions. A triangle at a time. That's the way the code will do it. It creates these little element matrices, and then it stamps them into the big matrix K. Alright. So I want to do this integral one triangle at a time. Is the good way. OK, and that's what the code will do. Actually, I think that the best way to learn these steps is just to read the lines of the code. You can read them in the book, page 303 or something. 
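That element-at-a-time assembly is only a few lines in 1-D. A sketch in Python (the uniform mesh here is made up for illustration; this is not the code from the book):

```python
# Element-at-a-time assembly in 1-D: on a uniform mesh of n_el elements
# of width h, each element touches two hat functions whose slopes are
# +1/h and -1/h, so the 2x2 element matrix of integral(phi_i' phi_j')
# over one element is (1/h) * [[1, -1], [-1, 1]].  Stamping these into
# the big matrix and fixing both end nodes gives the familiar K.
n_el = 5
h = 1.0 / n_el
Ke = [[1.0 / h, -1.0 / h],
      [-1.0 / h, 1.0 / h]]

size = n_el + 1                              # one row/column per node
K = [[0.0] * size for _ in range(size)]
for e in range(n_el):                        # stamp element e into rows e, e+1
    for a in range(2):
        for b in range(2):
            K[e + a][e + b] += Ke[a][b]

interior = [row[1:-1] for row in K[1:-1]]    # drop the fixed end nodes
print([[v * h for v in row] for row in interior])
# 2 on the diagonal, -1 off the diagonal (all times 1/h)
```

Each element stamps the same little 2-by-2 into the big matrix; fixing both boundary nodes leaves the tridiagonal second-difference matrix.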
And you'll see it just doing all the steps that need to be done. One triangle at a time.\n\nSo, now. Now comes the fun. I get to answer, what do these piecewise linear elements look like? What do the quadratic elements look like? What do the Q_1, quad elements look like? This was the golden age of finite elements, when people invented these ways to create piecewise polynomials. And it continues. People are still inventing, I had a email this week, somebody says I've got spectral elements. People are going higher and higher degrees. You, know sixth degree, eighth degree. In order to get more accuracy. OK, let's start with P_1. How do I describe a P_1 element inside a triangle? So in a triangle, the unknowns will be the value-- This has a height U_1, this has a height U_2, and a height U_3 at those nodes. Inside the triangle, the function U is linear. a+bx+cy. Then, you see that if I know these three values, then I know these three numbers. And vice versa. There's a three by three matrix, right? There has to be a three by-- Any time you see pictures like this, like the good part of 18.085 is to realize that if I have three numbers here, three values and I've got three coefficients, that there's some three by three matrix that connects them that you're going to need. That's like a meta-message of this course. Is, you've got to translate between the node values and the coefficients. Because the node values are the unknowns, right? These are the guys that are multiplying the pyramid function, this is multiplying a pyramid function with height one at that point, going down to zero. So this one will be a pyramid function of height U_2 times one, going down to zero. And U_3. So we've got a flat function in here. And it looks exactly like that. OK?\n\nSo what do I want to say? We know the positions of these three nodes from our list p, right? These were the crucial things we needed. The positions of all the nodes, we know where they are. 
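The three-by-three connection between heights and coefficients can be made concrete. A Python sketch (the triangle and the heights are invented for illustration; this reference triangle lets the system be solved by inspection):

```python
# Inside one triangle U = a + b x + c y.  A 3x3 system connects the node
# values (U1, U2, U3) to the coefficients (a, b, c): row i is
# [1, x_i, y_i].  For the triangle with vertices (0,0), (1,0), (0,1)
# the solution can be read off directly:
verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
U = [2.0, 5.0, 3.0]                    # prescribed heights at the nodes

a = U[0]                               # value at (0, 0)
b = U[1] - U[0]                        # slope in x
c = U[2] - U[0]                        # slope in y

# The flat "roof" piece a + b x + c y really hits all three heights:
print([a + b * x + c * y for (x, y) in verts])   # -> [2.0, 5.0, 3.0]
```

For a general triangle the same three equations, one row [1, x_i, y_i] per node, are solved numerically instead of by inspection.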
Then there has to be a three by three matrix that will now connect to the coefficients. Why do we want the coefficients? Because those are what we do when we integrate. The coefficients are what we need, we need to integrate dU/dx, dU/dy, dU/dz. Sorry, dU/dx, dU/dy. Are you visualizing this overall solution capital U, yeah. So the overall solution capital U, you should visualize with a combination of all the little U's, is zero here and then it's going to go up in these triangles and bend around and, I don't know, maybe down again. Or maybe, no, maybe it keeps going up. This is probably the largest value, because it's the largest value in the correct solution. So is this is probably going to be the highest point of this, what's-- The Forbidden City, right? In China, in Beijing is like, or a single-- Do pagodas have flat--? No. We we will meet pagoda functions. But this would be just an ordinary western roof, I guess. Just flat pieces. Yeah. OK, see, you've got to see the whole thing and then you look at each piece. Each piece looks like that, and the integrals are doable.\n\nOK, so while I'm going here, I want to do quadratics. You'll get the idea right away. So, same triangle, now I'm going to have quadratics. So I'm now going to have, so this won't be the arrow, this arrow will now go this way. I'm going to have d x squared, exy, and f y squared. So now how many coefficients have I got to determine a quadratic? Six, right? a, b, c, d, e, f. How many nodes do I need? Six. Where are they? Well, the natural positions are those guys in the mid-points. So now, those are all nodes now. Some nodes are at vertices of triangles, some nodes are at midpoints. But remember, we've got other triangles hooking on here, many other triangles, all with their own six nodes. Well, not their own, because they share. That's a big point. So there's a grid of triangles, with nodes for quadratic. 
And we've got one, two, three, four, five, six, seven, eight, nine, ten, 11, 12, 13, 14, 15, 16 nodes, I think. And within each triangle, this is what we've got. So there's a six by six matrix for each triangle. A six by six matrix which will connect the values U_1, U_2, U_3, U_4, U_5, U_6 for this triangle. Connect those six heights with these six numbers. And what will the roof look like within that triangle? Well, sort of curved. A parabola, right? A parabola somehow in 2-D, it'll look like this, yeah. Yeah. And here's the key question. Will that roof, that curvy roof, fit the one over there? Because if it didn't fit, we're in trouble. This derivative would have a delta function, and we've got delta functions, and integrals squaring them would give infinite.\n\nSo here's the question. Why does this roof, using these six points, fit onto the roof that uses U_7, U_8, U_9, and U_3, U_4 and U_5? Why do those two roofs fit together? This is the fun of piecewise polynomials? Of course, the slope will change. But the roof won't have a gap. Water won't go through it. Why's that? Do you see why? Because what do they share, what do those two curvy roofs share? They share a side. They share the same values along the side. And are those three values that are shared along the side sufficient to make it match all along the side? Yes. That's the important question. Finite elements lives or dies on that question. The answer is yes, because along that side, if I just focus on that side, where these three values are shared on both sides, by the triangle on both sides. Along that edge, what kind of a function have I got? It's second degree. This is whatever, when I restrict this to just run along a line, it's a parabola. And the parabola is determined by those three values. So having it right at three points means I have it right the whole way. Yeah. 
So there you see what quadratic elements would look like, and you could extend the code in the book and on the CSE site to work for quadratic elements. And you want to just guess what cubic elements could look like? I'm sorry, we've run five minutes over, but maybe finite elements is worth it. So if I had cubic elements, any idea how many? So I'm now going up to, I'm adding g x cubed, h, i, j, any idea how many coefficients I now have? Four new ones plus these six is ten. I need ten nodes. Where I am I going to put ten nodes in this triangle? I want to put them, I'd like to have some on the edges. Because the edges help me make triangles match each other. They'll just be like bowling balls. So here's six, oops, that wouldn't be believable. Is that right? Four, three, two, and one. Yeah. Yeah. OK.\n\nSo, now I've got a bubble node inside and I've got four nodes of vertices and two points, at two 1/3 points, and that will then match the triangle next to it. Because four points determine a cubic. There you go, I hope you have fun, I hope you have a great holiday. I'll see you Wednesday for Fourier and always open for questions on the MATLAB."
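The counts three, six, ten are just the number of monomials x^i y^j with i + j at most p. A one-loop check in Python:

```python
# Count the coefficients of a complete degree-p polynomial in x and y.
# Linear -> 3, quadratic -> 6, cubic -> 10, matching the node counts in
# the triangles above; the general formula is (p + 1)(p + 2) / 2.
counts = []
for p in (1, 2, 3):
    count = sum(1 for i in range(p + 1) for j in range(p + 1 - i))
    counts.append(count)
    print(p, count, (p + 1) * (p + 2) // 2)

print(counts)   # [3, 6, 10]
```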
https://www.javatpoint.com/pandas-standard-deviation

# Pandas Series.std()

The Pandas std() function calculates the standard deviation of a given set of numbers, DataFrame, column, or rows. To compute a standard deviation in plain Python, we can import the package named "statistics".

The standard deviation is normalized by N-1 by default; this can be changed using the ddof argument.

### Parameters:

• axis: {index (0), columns (1)}
• skipna: Excludes all NA/null values. If NA is present in an entire row/column, the result will be NA.
• level: Counts along a particular level, collapsing into a scalar if the axis is a MultiIndex (hierarchical).
• ddof: Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements.
• numeric_only: boolean, default value None. Includes only float, int, and boolean columns. If None, it will attempt to use everything, so use only numeric data. Not implemented for a Series.

### Returns:

It returns a Series, or a DataFrame if the level is specified.

### Example 1:

Output

```
2.1147629234082532
10.077252622027656
```

### Example 2:

Output

```
sub1_Marks    6.849574
sub2_Marks    4.924429
dtype: float64
```

Next Topic: Series.to_frame()
http://www.100porcento.net/84rpqkj/3o1rst.php?tag=polyphase-realization-of-fir-filters-40215d

Digital filters form crucial blocks of digital transmitters and receivers, and in digital communication polyphase FIR filters can be used for sample-rate conversion, as decimation or interpolation filters. In a polyphase decimator the prototype filter is decomposed into sub-filters, each with N/M taps, where N is the number of taps in the filter and M is the decimation factor. The transfer function of the polyphase decimation filter is represented by equation 1; each term in equation 1 represents a polyphase subfilter, and Fig. 2 shows the realization of the polyphase decimation filter. The sub-filter coefficients are h(n + pN): since h(n + pN) is a decimated-by-N version of h(n), each sub-filter inherits its shape from the prototype. The output of this structure is y(n), which is the input to an N-point DFT, and the sub-filters that make up this operation, together with the following DFT stage, are collectively called a 'polyphase filter bank' (PFB).

In the case of FIR filters the transfer function is a polynomial in terms of z^-1, and consequently the polyphase decomposition is very simple, as shown in Chapter IV. However, the transfer function of an IIR filter is the ratio of two polynomials, and therefore the representation of such a function in the form of equations (5.11) and … is less direct.

Fourier analysis of some input signal performs a Discrete Fourier Transform (DFT). Mathematically, the DFT of a sinusoid is the convolution of the Fourier transform of the sinusoid and that of the rectangular window, a sinc function, and in general the frequency-domain bin centres lie at non-zero locations on this sinc function; if the tone is not strong enough, the resulting effect can go unnoticed. To suppress the sidelobes of the single-bin frequency response, the sinc function that makes up the filter coefficients can be weighted with a smooth function, such as the Hanning window. The PFB technique is a mechanism for alleviating these effects; a realization of this filter bank is shown in Figure 1 and is described in more detail in the following sections, with each sub-filter and the DFT together shaping one output frequency bin.

On the implementation side, the multiplierless FIR filter structures realization can be formulated as an MILP problem; solving the MILP problem leads to a minimum number of SPT terms given a filter specification. For M = 5, … A stand-alone spectrometer program written in C, that reads 8-bit, complex, dual-polarisation data from a file, is also available.
The frequency-selective characteristics of the filters are due to the phase shifts between consecutive branches (Figure 1(b)). The commutator at the left rotates in the clockwise direction, and makes one complete rotation in the duration of one unit delay. Since only a finite block of samples is transformed, the DFT is equivalent to the product of an infinitely long time series and a rectangular window. Fortuitous combinations of N, fs, and the input frequency can be such that the zeroes of the sinc function coincide with the bin centres, but in general they do not. FIR filters can be discrete-time or continuous-time, and digital or analog; the input is a sequence of values x(n) sampled at a rate fs. A direct-form realization of an FIR filter can be readily developed from the convolution-sum description, and the polyphase decomposition of H(z) leads to a parallel-form structure; to illustrate this approach, consider a causal FIR transfer function.
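The leakage behaviour discussed in this article, and the benefit of windowing, can be demonstrated in a few lines. A sketch in Python (the tone frequency, block length, and bin choices are made up for illustration, and a plain per-bin DFT stands in for an FFT):

```python
import cmath
import math

# DFT leakage demo: a tone half-way between bin centres leaks into every
# bin under the implicit rectangular window; weighting the block with a
# smooth window (Hann here) pushes the distant sidelobes far down.
N = 128
tone = [math.cos(2 * math.pi * 10.5 * n / N) for n in range(N)]
hann = [0.5 - 0.5 * math.cos(2 * math.pi * n / N) for n in range(N)]

def dft_mag(x, k):                     # |X[k]| of an N-point DFT
    return abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                   for n in range(N)))

peak = 10                              # nearest bin to the 10.5-bin tone
far = 50                               # a bin well away from the tone
rect_leak = dft_mag(tone, far) / dft_mag(tone, peak)
windowed = [t * w for t, w in zip(tone, hann)]
hann_leak = dft_mag(windowed, far) / dft_mag(windowed, peak)

print(rect_leak)    # visible leakage this far from the tone
print(hann_leak)    # orders of magnitude smaller after windowing
```

The rectangular-window leakage sits near the percent level even forty bins from the tone, while the Hann-weighted block is far cleaner there, at the price of a wider main lobe.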
DFT leakage is the phenomenon in which, depending on the sampling frequency and the position of a tone relative to the bin centres, energy from a single tone appears to some level in all the frequency bins of the DFT output; this leakage can drown out astronomical signals of interest in the nearby bins, and if the tone is not strong enough the effect can go unnoticed. Scalloping loss is the loss in energy between frequency bin centres due to the non-flat nature of the single-bin frequency response. The straightforward DFT of a sampled input signal therefore suffers from two significant drawbacks, namely leakage and scalloping loss.

The weighting/windowing can be thought of as a filtering process in which the elements of the window function are the filter coefficients; the shape of the window function determines the shape of the single-bin frequency response. Once the multiplication is done, the block of data is split into P subsets of length N each and added point-by-point, giving an N-point transform that exhibits less leakage. This method is also known as 'weighted overlap-add' ('WOLA'), or 'window pre-sum-FFT'. Figure 4 shows the FIR filter structure realization of a polyphase filter bank with P = 3 taps and N sub-filters, and Fig. 1.1 shows the underlying FIR filter structure, where x(n) is the input, h(n) the coefficients, and y(n) the output.

Spectrometers and correlators are typical beneficiaries of the PFB technique. On FPGAs, a PFB typically consumes about 1.5 times more resources than a direct FFT, but in many cases the data quality advantages outweigh this increase in cost. The CASPER library comes with the pfb_fir and pfb_fir_real blocks that can be used with an FFT block, and various implementations of the PFB are available online. (This page was last modified on 18 October 2016, at 20:23.)

The impulse response of an Nth-order discrete-time FIR filter lasts exactly N + 1 samples and then dies to zero. This is in contrast to infinite impulse response (IIR) filters, which may have internal feedback and may continue to respond …; this second group, the IIR filters (IIR is an abbreviation of Infinite Impulse Response), are recurrent and use their own output from previous calculations. The direct-form structure is directly obtained from the difference equation of the filter.

Polyphase is a way of doing sampling-rate conversion that leads to very efficient implementations; more than that, it leads to very general viewpoints that are useful in building filter banks. If the decimation factor is M, then we'll have M sub-filters. Each sub-filter is essentially an all-pass filter: the only real difference between the sub-filters is their phase response, which is why the structure is called 'polyphase'. In a Type 1 polyphase decomposition of an FIR filter H(z), the structure is used to change filtering and down-sampling into down-sampling and filtering; the number of operations remains the same, but the filter operates at the lower sampling rate. A polyphase filter implementation thus reduces the computational inefficiencies of the conventional approach by means of decimating the input instead of the output, using a reduced filter bank, and applying the FFT algorithm. The same idea works for interpolation (Type I polyphase for an interpolator); when you create a multirate filter that uses polyphase decomposition, the polyphase matrix of an FIR interpolator lets you analyze the component filters individually by returning the components as rows in a matrix. (You can read about the interpolation filter in my article, Multirate DSP and Its Application in D/A Conversion.)

Such parallel filter architectures are sometimes referred to as fast FIR algorithms (FFA) or parallel FIR filters. With a small number of extra additions, a high-order 2D FIR filter is converted to several lower-order 2D subfilters; these subfilters are then realized … In the polyphase and FFT realization for the transmultiplexer, the sampling-rate reduction is the same as the number of the sub-bands; in the polyphase and FFT realization of a sub-band adaptive filter proposed in one paper, however, they are different, so the conventional method cannot be applied. For multiplierless design, the problem is formulated as one MCM block for each subfilter, or as a matrix MCM block for all subfilters; the coefficient symmetry of the linear-phase FIR filters, however, is not exploited in either … Cascading sharpened CIC and polyphase FIR filters has also been proposed for decimation filters (V. Jayaprakasan and M. Madheswaran), and polyphase-decomposed FIR filters with interpolation and decimation structures have been implemented in multiplierless form, deriving the total number of adders required for the transposed direct form.
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.87531686,"math_prob":0.92965996,"size":18733,"snap":"2021-04-2021-17","text_gpt3_token_len":4446,"char_repetition_ratio":0.16792141,"word_repetition_ratio":0.18460995,"special_character_ratio":0.23071586,"punctuation_ratio":0.11661891,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9540794,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-21T22:52:44Z\",\"WARC-Record-ID\":\"<urn:uuid:57ea1fb9-335d-46f8-a111-9f821579360b>\",\"Content-Length\":\"27702\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d4d390de-eee4-4e76-9835-f885c61625fe>\",\"WARC-Concurrent-To\":\"<urn:uuid:809233a6-09cb-4a93-abc5-b2e38b37dd0b>\",\"WARC-IP-Address\":\"187.45.193.154\",\"WARC-Target-URI\":\"http://www.100porcento.net/84rpqkj/3o1rst.php?tag=polyphase-realization-of-fir-filters-40215d\",\"WARC-Payload-Digest\":\"sha1:V3UG5YFMAWSACMWLPB2QETDUVAWD2SW4\",\"WARC-Block-Digest\":\"sha1:2KGBRRPR5YORT7DNECO3IBERY4OW3OVM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618039554437.90_warc_CC-MAIN-20210421222632-20210422012632-00045.warc.gz\"}"} |
https://thebrainboxtutorials.com/author/priyanka-kabra
## CBSE Class 10 Practice Papers for 2023

Searching for CBSE Class 10 Practice Papers for 2023? As the date for the CBSE Class 10 Board exams approaches, students are becoming more and more nervous and excited. Students have many apprehensions and queries about the types of questions which may come in the Board exams. Therefore, to perform well and score … Read more

## Lines and Angles MCQ CBSE Class 9 Math

Lines and angles are the fundamental concepts in geometry that establish the foundation for the field. This Lines and Angles MCQ has been prepared by The Brainbox Tutorials for CBSE Class 9 Math students. A line is described as a series of discrete dots that are spaced tightly together and continue infinitely in both directions. Its … Read more

## Triangles MCQ Class 9

Triangles is an important chapter of Math in Class 9. Students can learn how to solve both easy and complex problems by using triangles as a medium of instruction. The comprehensive set of multiple-choice questions is organized by The Brainbox Tutorials with an advanced level of difficulty, giving students plenty of chances to put … Read more

## Pythagoras Theorem MCQ

The Pythagorean theorem is a mathematical statement that describes the relationship between the sides of a right triangle. A right triangle is a type of triangle that has one angle that measures 90 degrees.

### Pythagoras Theorem Statement

The Pythagoras theorem (or the Pythagorean theorem) states that "In a right triangle, the square of the length … Read more

## Quadrilaterals MCQ Class 9 CBSE Maths

Quadrilaterals is an important chapter of Maths in CBSE Class 9. Students can learn how to solve both easy and complex problems by using quadrilaterals as a medium of instruction. The comprehensive set of multiple-choice questions is organised by The Brainbox Tutorials with an advanced level of difficulty, giving students plenty of chances to … Read more

## Mid point Theorem MCQ Class 9 ICSE

The mid point theorem states that "the line segment joining the midpoints of any two sides of a triangle is parallel to the third side and is equal to half of the length of the third side." The mid-point theorem, in geometry, helps in determining the missing values of the sides of a triangle. It provides … Read more

## Coordinate Geometry MCQ Class 9 CBSE

Coordinate Geometry is an important chapter of Maths in which you learn about the position of objects in a plane. This Coordinate Geometry MCQ for Class 9 CBSE has been prepared by The Brainbox Tutorials. Coordinate geometry questions will make up around 6 of the 80 total marks for the CBSE Class 9 final exams. Moreover, based … Read more

## Logarithm MCQs ICSE Class 9 Maths

In Mathematics, a logarithm is another way of writing or expressing exponents. Logarithm is written as log, in short. For example, 2^5 = 32 is written as log₂ 32 = 5. So, "2 raised to the power 5 is equal to 32" is written in logarithm form as "log of 32 to the base 2 is 5". … Read more

## Strong Numbers

What are Strong Numbers? A strong number is a special number such that the sum of the factorials of its digits is equal to the original number. Strong numbers are also known as Peterson numbers.

### Examples of Strong Numbers

Let us understand strong numbers with an example. 145 is a … Read more

## What is an Abundant Number?

An abundant number is a positive integer for which the sum of all its proper divisors (factors) is more than the original number. Abundant numbers are also called excessive numbers.

### Examples of Abundant Numbers

Let us understand abundant numbers with a few examples. Example 1. … Read more
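Several of the definitions quoted in these excerpts (the Pythagorean relation, base-2 logarithms, strong numbers, and abundant numbers) can be checked directly in Python. A small sketch; the function names are my own, and 12 is my own pick of a standard abundant-number example since the article's examples are truncated:

```python
from math import factorial, log2

def is_strong(n):
    # strong (Peterson) number: sum of the factorials of its digits equals n
    return n == sum(factorial(int(d)) for d in str(n))

def is_abundant(n):
    # abundant (excessive) number: sum of proper divisors exceeds n
    return sum(d for d in range(1, n) if n % d == 0) > n

assert 3**2 + 4**2 == 5**2      # Pythagoras: right triangle with sides 3, 4, 5
assert log2(32) == 5.0          # 2^5 = 32, so log of 32 to the base 2 is 5
assert is_strong(145)           # 1! + 4! + 5! = 1 + 24 + 120 = 145
assert is_abundant(12)          # proper divisors 1+2+3+4+6 = 16 > 12
```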
https://www.tutorialspoint.com/largest-multiple-of-three-in-cplusplus
# Largest Multiple of Three in C++

Suppose we have one array of digits; we have to find the largest multiple of three that can be formed by concatenating some of the given digits in any order we want. The answer may be very large, so return it as a string. If there is no answer, return an empty string.

So, if the input is like [7,2,8], then the output will be 87.

To solve this, we will follow these steps −

• Define one 2D array d; there will be three rows (one per remainder modulo 3)

• sort the array digits in descending order

• sum := 0

• for initialize i := 0, when i < size of digits, update (increase i by 1), do −

   • x := digits[i]

   • insert digits[i] at the end of d[x mod 3]

   • sum := sum + x

   • sum := sum mod 3

• if sum is non-zero, then −

   • if size of d[sum] is zero, then −

      • rem := 3 - sum

      • if size of d[rem] < 2, then return empty string

      • delete the last element from d[rem] twice

   • otherwise, delete the last element from d[sum]

• ret := empty string

• for initialize i := 0, when i < 3, update (increase i by 1), do −

   • for initialize j := 0, when j < size of d[i], update (increase j by 1), do −

      • ret := ret + d[i][j] converted to string

• sort the characters of ret in descending order

• if ret is non-empty and ret[0] is '0', then return "0"

• return ret

Let us see the following implementation to get a better understanding −

## Example

#include <bits/stdc++.h>
using namespace std;
class Solution {
public:
   string largestMultipleOfThree(vector<int>& digits) {
      // bucket the digits by remainder modulo 3, largest first
      vector<vector<int>> d(3);
      sort(digits.begin(), digits.end(), greater<int>());
      int sum = 0;
      for (int i = 0; i < digits.size(); i++) {
         int x = digits[i];
         d[x % 3].push_back(digits[i]);
         sum += x;
         sum %= 3;
      }
      if (sum) {
         if (!d[sum].size()) {
            // no digit with the right remainder: drop the two smallest
            // digits of the other non-zero remainder class
            int rem = 3 - sum;
            if (d[rem].size() < 2)
               return "";
            d[rem].pop_back();
            d[rem].pop_back();
         }
         else {
            // drop the smallest digit whose remainder equals sum
            d[sum].pop_back();
         }
      }
      string ret = "";
      for (int i = 0; i < 3; i++) {
         for (int j = 0; j < d[i].size(); j++) {
            ret += to_string(d[i][j]);
         }
      }
      // arrange the surviving digits in descending order
      sort(ret.begin(), ret.end(), greater<char>());
      if (ret.size() && ret[0] == '0')   // all remaining digits are zero
         return "0";
      return ret;
   }
};
int main() {
   Solution ob;
   vector<int> v = {7, 2, 8};
   cout << (ob.largestMultipleOfThree(v));
}

## Input

{7,2,8}

## Output

87

Updated on: 09-Jun-2020
https://btechgeeks.com/python-program-to-find-majority-element-boyer-moore-majority-vote-algorithm/
# Python Program to Find the Majority Element (Boyer–Moore Majority Vote Algorithm)

Given a list, the task is to find the majority element of the given list.

Majority Element:

A majority element appears more than n/2 times, where n is the size of the array.

Boyer–Moore Majority Vote Algorithm: History and Introduction

The Boyer–Moore majority vote algorithm uses linear time and constant space to determine the majority of a series of elements. It is named after Robert S. Boyer and J Strother Moore, who published it in 1981, and it is an example of a streaming algorithm.

In its most basic version, the algorithm looks for a majority element, which is an element that appears for more than half of the items in the input. A version of the procedure that performs a second pass through the data can be used to confirm that the element found in the first pass is truly a majority.

If no second pass is conducted and there is no majority, the algorithm will not identify that there is no majority. In the absence of a strict majority, the returned element can be arbitrary; it is not guaranteed to be the most frequently occurring element (the mode of the sequence). A streaming method cannot discover the most frequent element in less than linear space for sequences with an unbounded number of repetitions.

## Program to Find the Majority Element (Boyer–Moore Majority Vote Algorithm)

Below is the full approach for finding the majority element present in the given array using the Boyer–Moore majority vote algorithm.

### 1) Algorithm

The algorithm takes O(1) extra space and runs in O(N) time. It makes exactly two traversals of the input list. It is also pretty straightforward to implement, although understanding how it works is a little more difficult.

We generate a single candidate value in the first traversal, which is the majority value if there is one. To confirm, the second pass simply counts the frequency of that value. The first pass is the most interesting; it needs two values:

• A candidate value, which can be set to any value initially.
• A count, which is initially set to 0.

For each element of the input list we first look at the count. If the count is zero, the candidate is set to the value of the current element. Then we compare the element's value to the current candidate value: if they are the same, we add one to the count; if they differ, we reduce the count by one. If a majority value exists, then at the end of the input the candidate will be the majority value. A second O(N) traversal can be used to ensure that the candidate is really the majority element.

Pseudocode for the first pass:

Initialize an element ele and a counter in = 0

for each element x of the input sequence:
    if in = 0, then assign ele = x and in = 1
    else if ele = x, then increment in
    else decrement in
return ele

Note that if there is no majority element, the algorithm will fail to detect this and may output an incorrect element; in other words, the Boyer–Moore majority vote algorithm delivers correct results only when the majority element is actually present in the input.

### 2) Implementation (Static Input)

Give the list as static input and store it in a variable. Pass the given list to the majoElement function, which accepts the list as an argument, implements the Boyer–Moore majority vote algorithm, and returns the majority candidate.

Below is the implementation:

# Function to find the majority element present in a given list
def majoElement(given_list):
    # majo stores the majority candidate (the majority element, if present)
    majo = -1
    # initializing the counter with 0
    ind = 0
    # do for each element given_list[j] in the list
    for j in range(len(given_list)):
        # if the counter is zero, adopt given_list[j] as the new candidate
        if ind == 0:
            majo = given_list[j]
            ind = 1
        # otherwise, if given_list[j] is the current candidate, increment the counter
        elif majo == given_list[j]:
            ind = ind + 1
        # otherwise (it is not the current candidate), decrement the counter
        else:
            ind = ind - 1
    # return the majority candidate
    return majo


# Driver code
# Give the list as static input and store it in a variable.
given_list = [4, 11, 13, 9, 11, 11, 11, 3, 15, 28, 11, 11, 11, 11]
print("The majority element present in the givenlist",
      given_list, '=', majoElement(given_list))

Output:

The majority element present in the givenlist [4, 11, 13, 9, 11, 11, 11, 3, 15, 28, 11, 11, 11, 11] = 11

### 3) Implementation (User Input)

i) Integer list. Give the integer list as user input using map(), int, split(), and list(), store it in a variable, and pass it to the same majoElement function:

given_list = list(map(int,
    input('Enter some random elements of the given list separated by spaces = ').split()))
print("The majority element present in the givenlist",
      given_list, '=', majoElement(given_list))

Output:

Enter some random elements of the given list separated by spaces = 8 12 45 96 3 7 7 1 5 7 7 7 5
The majority element present in the givenlist [8, 12, 45, 96, 3, 7, 7, 1, 5, 7, 7, 7, 5] = 7

ii) String list. Give the string list as user input using split() and list(); the function itself is unchanged:

given_list = list(input('Enter some random elements of the given list separated by spaces = ').split())
print("The majority element present in the givenlist",
      given_list, '=', majoElement(given_list))

Output:

Enter some random elements of the given list separated by spaces = hello this is btechgeeks is is is si is is
The majority element present in the givenlist ['hello', 'this', 'is', 'btechgeeks', 'is', 'is', 'is', 'si', 'is', 'is'] = is
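The verifying second pass discussed in the prose above is never added to the listings. Here is a sketch of the full two-pass algorithm (the function name and the None-for-no-majority convention are my own). Note that the article's integer example actually has no true majority, which is exactly the case the second pass catches:

```python
def majority_or_none(seq):
    # Pass 1: Boyer-Moore candidate selection (O(n) time, O(1) extra space)
    candidate, count = None, 0
    for x in seq:
        if count == 0:
            candidate, count = x, 1
        elif x == candidate:
            count += 1
        else:
            count -= 1
    # Pass 2: confirm the candidate appears more than n/2 times
    if candidate is not None and 2 * sum(1 for x in seq if x == candidate) > len(seq):
        return candidate
    return None

assert majority_or_none([4, 11, 13, 9, 11, 11, 11, 3, 15, 28, 11, 11, 11, 11]) == 11
# 7 wins the first pass here, but it is not a true majority (5 of 13):
assert majority_or_none([8, 12, 45, 96, 3, 7, 7, 1, 5, 7, 7, 7, 5]) is None
```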
http://bestmaths.net/online/index.php/year-levels/year-11/year-11-topics/linear-equations/summary/
## Linear Equations Summary

Linear Equations

### Summary

An equation is made up of two expressions and an equals sign.

To solve an equation, the value or values of the variable must be found that make both sides of the equation have the same value.

There are several types of equations and several methods of solving them.

When solving equations, each step should be written on a new line, and the equals signs should be kept directly underneath each other.

### Key Skills

• solve equations with one variable.
• solve linear equations with a variable term on both sides of the equation.
• solve linear equations with fractions or brackets.
• solve word problems involving equations.
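As a small illustration of the "variable term on both sides" skill listed above, here is how such an equation can be solved programmatically (a sketch; the function name is mine):

```python
def solve_linear(a, b, c, d):
    # Solve a*x + b = c*x + d for x.
    # Collect the variable terms on one side: (a - c) * x = d - b
    if a == c:
        raise ValueError("no unique solution: the x-terms cancel")
    return (d - b) / (a - c)

# 3x + 4 = x + 10  =>  2x = 6  =>  x = 3
assert solve_linear(3, 4, 1, 10) == 3.0
# x/2 + 1 = 5 is the fractional case: 0.5x + 1 = 0x + 5  =>  x = 8
assert solve_linear(0.5, 1, 0, 5) == 8.0
```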
https://www.monsterslayer.com/Pages/miscpages/Tables.aspx
FORMULAS

• Inches = mm × 0.03937
• mm = Inches ÷ 0.03937
• Pi = 3.14
• Area of a Circle = 3.14 (Pi) × the radius squared
• Circumference = Diameter × 3.14
• Average Diameter of an Oval = (Length + Width) ÷ 2
• Area of an Oval = Longest Diameter × Shortest Diameter × 0.7854
• Outside Bezel Length = (Average Diameter of Stone + Bezel Thickness) × 3.14

EQUIVALENTS

• One Troy Ounce = 1.0971 Avoirdupois Ounces = 0.0686 Avoirdupois Pound
• One Troy Ounce = 31.1035 Grams = 20 Pennyweights (DWT)
• One DWT (pennyweight) = 1.55517 Grams
• One Kilogram = 32.15076 Troy Ounces
• One Gram = 5 Carats = 0.03527 Oz. Avoir. = 0.03215 Oz. Troy
• One Avoirdupois Ounce = 0.91146 Oz. Troy = 28.3495 Grams = 0.0625 Lb.
• One Avoirdupois Pound = 14.583 Oz. Troy = 453.6 Grams
• One Inch = 25.4 Millimeters = 2.54 Centimeters
• One Millimeter = 0.039 Inch = 0.1 Centimeter = 0.001 Meter

FRACTIONS TO DECIMAL INCHES

(The fraction-to-decimal conversion table appears as an image on the source page and is not reproduced here.)

FRACTIONAL FEET

• One Inch equals 0.08 ft.
• Two Inches equals 0.17 ft.
• Three Inches equals 0.25 ft.
• Four Inches equals 0.33 ft.
• Five Inches equals 0.42 ft.
• Six Inches equals 0.50 ft.
• Seven Inches equals 0.58 ft.
• Eight Inches equals 0.67 ft.
• Nine Inches equals 0.75 ft.
• Ten Inches equals 0.83 ft.
• Eleven Inches equals 0.92 ft.
• Twelve Inches equals 1 foot
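The formula and equivalents tables translate directly into code. A sketch using the tables' own constants (the function names are mine):

```python
INCHES_PER_MM = 0.03937        # table: Inches = mm x 0.03937 (about 1/25.4)
GRAMS_PER_TROY_OZ = 31.1035    # table: One Troy Ounce = 31.1035 Grams
GRAMS_PER_DWT = 1.55517        # table: One DWT (pennyweight) = 1.55517 Grams

def mm_to_inches(mm):
    return mm * INCHES_PER_MM

def troy_oz_to_grams(oz):
    return oz * GRAMS_PER_TROY_OZ

def oval_area(longest, shortest):
    # table: Longest Diameter x Shortest Diameter x 0.7854 (0.7854 is pi/4)
    return longest * shortest * 0.7854

def inches_to_decimal_feet(inches):
    # matches the FRACTIONAL FEET table when rounded to two places
    return round(inches / 12, 2)

assert round(mm_to_inches(25.4), 3) == 1.0       # 25.4 mm is one inch
assert troy_oz_to_grams(1) == 31.1035
assert abs(troy_oz_to_grams(1) / GRAMS_PER_DWT - 20) < 0.001   # 20 dwt per troy oz
assert inches_to_decimal_feet(7) == 0.58
```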
https://ga-tudtam.com/courses/physics/8-223-classical-mechanics-ii-january-iap-2017/lecture-notes/MIT8_223IAP17_Lec8l8b-a4903eas-0-.pdf
"Home\n\nKepler's laws\n\nKepler's Laws of Planetary Motion Kepler's three laws describe how planetary bodies orbit about the Sun. They describe how (1) planets move in elliptical orbits with the Sun as a focus, (2) a planet covers the same area of space in the same amount of time no matter where it is in its orbit, and (3) a planet's orbital period is proportional to the size of its orbit (its semi-major axis) Kepler's Laws of Planetary Motion Kepler first law - The law of orbits Kepler's second law - The law of equal areas Kepler's third law - The law of period Kepler's first law is rather simple - all planets orbit the sun in a path that resembles an.\n\nOrbits and Kepler's Laws NASA Solar System Exploratio\n\n1. Kepler's Laws. Johannes Kepler, working with data painstakingly collected by Tycho Brahe without the aid of a telescope, developed three laws which described the motion of the planets across the sky. 1. The Law of Orbits: All planets move in elliptical orbits, with the sun at one focus. 2\n2. Kepler's three laws of planetary motion can be stated as follows: (1) All planets move about the Sun in elliptical orbits, having the Sun as one of the foci. (2) A radius vector joining any planet to the Sun sweeps out equal areas in equal lengths of time\n3. [This page is intentionally left as blank] Kepler's laws of planetary motion are three scientific laws describing the motion of planets around the Sun. They are The planets move in elliptical orbits, with the Sun at one focus point. The radius vector to a planet sweeps out area at a rate that is independent of its position in the orbits\n\nKepler's Laws - First, Second, and Third Law of Planetary\n\n1. Kepler's laws were formulated at the beginning of the 17th century based on astronomical observations. A few decades later, Newton developed the mathematical model that supported Kepler's deductions. 
The basis of this model are the laws of classical mechanics, which can be summarized in the relation: (1) F вЖТ = m a вЖ\n2. Kepler's Laws JWR October 13, 2001 Kepler's rst law: A planet moves in a plane in an ellipse with the sun at one focus. Kepler's second law: The position vector from the sun to a planet sweeps out area at a constant rate. Kepler's third law: The square of the period of a planet is proportional to the cube of its mean distance from the sun\n3. Kepler's First Law . Kepler's first law states that all planets move in elliptical orbits with the Sun at one focus and the other focus empty. This is also true of comets that orbit the Sun. Applied to Earth satellites, the center of Earth becomes one focus, with the other focus empty\n\nKepler's Laws of planetary motion are: Planets revolve around the sun in an elliptical orbit; the sun is at one of the two foci. The line that joins the sun to a planet sweeps out equal areas in equal times. A planet's squared orbital period is directly proportional to the cube of the semi-major axis of its orbit.< /li> Kepler's First Law Kepler's first law states that the path followed by a satellite around its primary (the earth) will be an ellipse. This ellipse has two focal points (foci) F1 and F2 as shown in the figure below. Center of mass of the earth will always present at one of the two foci of the ellipse Kepler's Law of Planetary Motions - Orbits, Areas, Periods Kepler's Law states that the planets move around the sun in elliptical orbits with the sun at one focus. There are three different Kepler's Laws. Law of Orbits, Areas, and Periods\n\nAn introduction to Kepler's Laws of Planetary Motion for students in algebra-based physics courses such as AP Physics 1. For more information, please visit. Three animations allow you to explore each of Kepler's three laws of planetary motion. Choose a law to start. 1. Law of Orbits. 
Choose a planet or moon from the drop-down box and then click on the Start button to set the planet into motion. A series of markers will display the elliptical orbit of the planet with the Sun at one focus of the ellipse Kepler's Laws. 1. The orbits of the planets are ellipses with the sun at one focus. 2. A line from the planet to the sun sweeps over equal areas in equal intervals of time. This is equivalent to the statement of conservation of angular momentum . 3. , where T is the orbital period in years and a is the semimajor axis in AU Kepler published these two laws in 1609 in his book Astronomia Nova. For a circle the motion is uniform as shown above, but in order for an object along an elliptical orbit to sweep out the area at a uniform rate, the object moves quickly when the radius vector is short and the object moves slowly when the radius vector is long Kepler's laws of planetary motion are three laws that describe the motion of planets around the sun : Planets move around the sun in elliptic orbits. The sun is in one of the two foci of the orbit. A line segment joining a planet and the Sun sweeps out equal areas during equal intervals of time\n\nThe greatest achievement of Kepler (1571-1630) was his discovery of the laws of planetary motion. There were such three laws, but here we shall deal only with the first two - those that govern the motion of an individual planet. These are found in Astronomia Nova вУЙ 1609, underpinned by important work in Epitome вУЙ Book V (1621) Kepler's Laws The German astronomer Kepler discovered three fundamental laws governing planetary motion. Kepler's rst law is that planetary mo-tion is ellipitcal with the sun at one focus (the motion is planar). His second law is that equal areas of the position vector from the sun to the planet are swept out in equal times The sun's mass is 2.0 x 10^ {30} kg. Earth orbits the sun in 1.0 year (1 year = 365 days, 24 hr = 1 day, 3600 s = 1 hr). 
Use one of Kepler's laws to calculate the distance from the earth to the. Newton's Law of Gravity and Kepler's Laws Michael Fowler Phys 142E Lec 9 2/6/09. These notes are partly adapted from my Physics 152 lectures, where more mathematical details can be found. The Universal Law of Gravitation Newton boldly extrapolated from the earth, the apple and the moon to everything, asserting that ever Kepler's third law says that the square of the orbital period is proportional to the cube of the semi-major axis of the ellipse traced by the orbit. The third law can be proven by using the second law. Suppose that the orbital period is ѕД. Since the area of an ellipse is ѕАab where a and b are the lengths of the semi-major and semi-minor axes\n\nKepler's Three Laws - Physics Classroo\n\nKepler's first law states that every planet moves along an ellipse, with the Sun located at a focus of the ellipse. An ellipse is defined as the set of all points such that the sum of the distance from each point to two foci is a constant. Figure 13.16 shows an ellipse and describes a simple way to create it To Donate through Paypal: https://www.paypal.com/donate/?hosted_button_id=GGQW3QDXH3XQW&fbclid=IwAR0ePqXXDY72vf7YMS0xkfyejzuSzVpq38TNpkDfkEyCbeHFTbdSy-b6zUoH.. About Kepler. Johannes Kepler, a German Astronomer, Mathematician and Physicist who is very well-known for his three laws of planetary motion which revolutionized the fields of Astronomy and classical mechanics. He is known to be the founder of Celestial Mechanics. Kepler was an influenced follower of Copernicus, who stated himself being Copernican as physical or if you prefer, metaphysical.\n\nKepler's second law is about, law of Area, see the below picture. planet is moving in elliptical path here r1 ,r2 and r3 are called position vector. The area covered in one second is called areal velocity v1, v2 and v3 are areal velocity. 
Kepler's law tells us that the area covered in one second is equal at any point of the path. From his axioms (the laws of motion), Newton discussed the Kepler laws in the first three sections of Book 1 (in just 40 pages, without ever mentioning the name of Kepler!). Kepler's second law (motion is planar and equal areas are swept out in equal times) is an easy consequence of the conservation of angular momentum L = r × p, and holds

Kepler's laws (as we now know them) allow all conic sections, and parabolas are very close to the orbits of nonperiodic comets, which start very far away. (Tilt still more and you get hyperbolas--not only don't the trajectories close, but the directions of coming and going make a definite angle.) Kepler's Third Law: Kepler's third law states that the square of the period of the orbit of a planet about the Sun is proportional to the cube of the semi-major axis of the orbit. The constant of proportionality is (5.6.17) P_planet^2 / a_planet^3 = P_earth^2 / a_earth^3 = 1 yr^2 / AU^3. Kepler's Third Law: • Kepler was a committed Pythagorean, and he searched for 10 more years to find a mathematical law to describe the motion of planets around the Sun. • In Harmony of the World (1619) he enunciated his Third Law: • (Period of orbit)^2 proportional to (semi-major axis of orbit)^3. • In symbolic form: P^2 ∝ a^3. • If two quantities are proportional, we can insert
First law of Kepler (Law of orbits): The orbit of every planet is an ellipse with the Sun at one of the foci. Johannes Kepler, using data on Mars acquired by Tycho Brahe, phenomenologically formulated three Laws of Planetary Motion that now bear his name. They are known as the First, Second and Third Laws of Planetary Motion. Isaac Newton would later derive Kepler's Laws from his more fundamental three Laws of Motion and his Law of Universal Gravitation. Lecture 8: Kepler's laws. In the early 1600s, 70 years before F = ma, Kepler published 3 rules followed by the planets in their orbits around the sun - Kepler's laws. Kepler's laws: 1. Orbits are elliptical with the Sun at one focus 2. The line from the planet to the Sun sweeps out equal area in equal time 3
Each planet moves so that an imaginary line drawn from the Sun to the planet sweeps out equal areas in equal times

Kepler's Laws - HyperPhysics Concepts

• Kepler's Laws (Fullscreen) (CC) by NASA. Publication date 2007.
• First law of Kepler (1609): The orbits of the planets are ellipses where the Sun occupies one of the focal points. Second law of Kepler (1609): A line segment that joins a planet and the Sun sweeps out equal areas during equal intervals of time. Third law of Kepler (1619): The squares of the periods of revolution of the planets are proportional to the cube of the semi-major axes of their orbits
• Kepler's laws synonyms, Kepler's laws pronunciation, Kepler's laws translation, English dictionary definition of Kepler's laws. pl n: three laws of planetary motion published by Johannes Kepler between 1609 and 1619
• Kepler's laws 1. The orbit of each planet is an ellipse with the Sun at one focus of the ellipse. 2. Each planet revolves around the Sun so that the line connecting planet and Sun (the radius vector) sweeps out equal... 3. The squares of the sidereal periods of any two planets are proportional to the cubes of the semimajor axes of their orbits.
• Kepler's laws: Basic laws governing the orbital motions of planets around the sun. First law: Each planetary orbit is an ellipse, with the sun in one of its focus points. Second law: If you connect the planet and the sun by an imaginary line then, in equal time intervals, the line will sweep over equally large areas, independent of where the planet is on its orbit.
• Kepler's Law MCQs for NEET. Kepler's law explains the motions of planets in the solar system. Kepler's First Law: All the planets move around the sun in elliptical orbits with the sun at the focus.
Kepler's Second Law: The line joining the planet to the sun sweeps out equal areas in an equal interval of time. ΔA is the area swept
• Kepler's first law. Planet orbits are ellipses with the sun at one focus of the ellipse. A circle is a special case of an ellipse, with only one focal point. An ellipse typically has two focal points. The farther apart the focal points, the greater the ellipticity of the ellipse. The diameter across the long side is called the major axis
Kepler's law definition is - a statement in astronomy: the orbit of each planet is an ellipse that has the sun at one focus. Kepler's Laws: Recall that Kepler's third law says that T^2/a^3 is constant, where T is the period of the orbit and a is the length of the semi-major axis. We can deduce this from Newton's law of gravitation in five steps. Kepler's laws of planetary motion (applicable to satellites also): Kepler's First Law: The orbit of a planet is an ellipse with the Sun at one of the two foci. Kepler's Second Law: A line segment joining a planet and the Sun sweeps out equal areas during equal intervals of time - Kepler's Laws Overview. The focus of the lecture is problems of gravitational interaction. The three laws of Kepler are stated and explained. Planetary motion is discussed in general, and how this motion applies to the planets moving around the Sun in particular

Kepler's laws of planetary motion: Definition, Diagrams

• Compute answers using Wolfram's breakthrough technology & knowledgebase, relied on by millions of students & professionals. For math, science, nutrition, history.
• ...that all the planets revolved around the sun in elliptical orbits
• What is now seen as Kepler's third law was first conceived by Johannes Kepler on March 8, 1618. A.S. Ganesh takes a look at the man and the laws that he formulated over 400 years ago, which.
• Kepler proposed the first two laws in 1609 and the third in 1619, but it was not until the 1680s that Isaac Newton explained why planets follow these laws. Newton showed that Kepler's laws were a consequence of both his laws of motion and his law of gravitation
1. Newton's laws are dynamic, connecting mass, force, distance, and time. Kepler's laws are kinematic, concerning only distance and time.
2. that Kepler's influence was, in fact, much greater than the above quotations would imply. To begin with, it will be useful to recall the main points of Kepler's own work.
His three planetary laws are as follows: (I) Every planet travels round the sun in an elliptical orbit, with the sun at one focus
3. Kepler's laws apply to any orbital motion, whether of a planet around the Sun, the Moon around the Earth, or a star around the center of a galaxy. A graphic demonstrating Kepler's First Law. Kepler's first law is simple: all planets' orbits are ellipses, with the Sun at one focus. While simple, this law actually caught many people off guard
4. Kepler's Laws of planetary motion not only apply to planets, but also to other universal phenomena such as comets and binary stars. One example is Halley's Comet, the first comet that astronomers realized has a periodic orbit. For this conceptual objective I found an article that applies Kepler's laws to the comet. Kepler's first law

Kepler's Law - an overview ScienceDirect Topics

A German astronomer, Johann Kepler, first stated the laws of planetary motion. The three laws of planetary motion are: The law of orbits: Every planet moves in an elliptical orbit having the sun at one of the foci. The law of areas: The line joining any planet to the sun sweeps out equal areas in equal intervals of time. That is, the areal velocity of the planet is constant. Below are the three laws that were derived empirically by Kepler. Kepler's First Law: A planet moves in a plane along an elliptical orbit with the sun at one focus. Kepler's Second Law: The position vector from the sun to a planet sweeps out area at a constant rate. Kepler's Third Law: The square of the period of a planet around the sun is proportional to the cube of the semi-major axis of its orbit. The mass of the Earth. The mass of the Sun. The average distance to the Sun. 3 - Kepler's second law says a line joining a planet and the Sun sweeps out equal areas in equal amounts of time. Which of the following statements means nearly the same thing? Planets move fastest when they are moving toward the Sun. Kepler's Laws 1. KEPLER: the laws of planetary motion Monica Lee A.P. Physics - Period 4 Mrs.
Burns. KEPLER'S FIRST LAW, KEPLER'S SECOND LAW, KEPLER'S THIRD LAW, INTERESTING APPLET. Q. Kepler's first law states that the orbits of the planets are oval in shape, or: (answer choices: ellipses / perfect circles). Q. The farther away a planet is from the sun, the _____ it takes it to orbit the sun once (answer choices: longer / shorter).

Kepler's laws of planetary motion, in astronomy and classical physics, laws describing the motions of the planets in the solar system. They were derived by the German astronomer Johannes Kepler, whose analysis of the observations of the 16th-century Danish astronomer Tycho Brahe enabled him to announce his first two laws in the year 1609 and a third law nearly a decade later, in 1618. Kepler's third law shows that there is a precise mathematical relationship between a planet's distance from the Sun and the amount of time it takes for it to circle around the Sun. It was this law that later inspired Newton, who came up with three laws of his own to explain why the planets move as they do. Student Exploration: Orbital Motion - Kepler's Laws. Directions: Follow the instructions to go through the simulation. Respond to the questions and prompts in the orange boxes. Vocabulary: astronomical unit, eccentricity, ellipse, force, gravity, Kepler's first law, Kepler's second law, Kepler's third law, orbit, orbital radius, period, vector, velocity. Prior Knowledge Questions. Kepler's laws and Newton's laws are very important in physical chemistry regarding the motion of objects. The key difference between Kepler and Newton is that Kepler's laws describe the planetary motion around the Sun whereas Newton's laws describe the motion of an object and its relationship with the force that is acting on it. Kepler's Third Law: P^2 = a^3 / (M1 + M2), where M1 and M2 are the masses of the two orbiting objects in solar masses.
Note that if the mass of one body, such as M1, is much larger than the other, then M1 + M2 is nearly equal to M1. In our solar system M1 = 1 solar mass, and this equation becomes identical to the first.
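Newton's generalization of the third law is what lets you weigh the central body: in SI units, P^2 = 4π^2 a^3 / (G(M1 + M2)). A hedged sketch using Earth's orbit; the constants below are standard reference values, not figures taken from the text:

```python
import math

# Newton's form of Kepler's third law: P^2 = 4*pi^2 * a^3 / (G * (M1 + M2)).
# With M_earth << M_sun, solving for the Sun's mass from Earth's orbit:
G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
a = 1.496e11               # Earth's semimajor axis in metres (1 AU)
P = 365.25 * 24 * 3600.0   # Earth's orbital period in seconds

M_sun = 4 * math.pi**2 * a**3 / (G * P**2)
print(f"Sun's mass ~ {M_sun:.2e} kg")
```

The result lands close to the 2.0 x 10^30 kg quoted earlier in this text, which is the point of the exercise "use one of Kepler's laws to calculate the distance from the earth to the sun" run in reverse.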
Discover Kepler's Laws of Planetary Motion

1. Kepler's Laws (For teachers) 10a. Scale of Solar Sys. 11. Graphs & Ellipses 11a. Ellipses and First Law 12. Second Law 12a. More on 2nd Law 12b. Orbital Motion 12c. Venus transit (1) Tycho Brahe (1546-1601): Tycho was a Danish nobleman interested in astronomy. In 1572 a new star (in today's language, a nova) appeared in the sky, not far from.
2. 4. Kepler's Laws: We repeat once more Kepler's Laws, but being a bit more quantitative: Kepler's Three Laws (quantitative version). First Law: Planets travel in elliptical orbits with the Sun at one focus, and obey the equation r = c / (1 + e cos θ), where c = a(1 − e^2) for 0 < e < 1
3. The laws which govern this motion were first postulated by Kepler and deduced from observation. In this lecture, we will see that these laws are a consequence of Newton's second law. An understanding of central force motion is necessary for the design of satellites and space vehicles. Kepler's Problem
4. Kepler's law. This law was put forward by Kepler half a century before Isaac Newton proposed his three laws of motion and the law of universal gravitation. Kepler's first law: The path of each planet as it goes around the sun is an ellipse, with the sun at one focus. F1 and F2 are the elliptical focal points
5. Not only were Kepler's laws confirmed and explained by later scientists, but they apply to any orbital system of two bodies--even artificial satellites in orbit around the Earth. The constant k' for artificial satellites differs from k obtained for planets (but is the same for any satellite)
6. KEPLER'S LAWS OF PLANETARY MOTION 1. Planets move around the Sun in ellipses, with the Sun at one focus. 2. The line connecting the Sun to a planet sweeps equal areas in equal times
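The polar form of the ellipse in item 2 gives the perihelion and aphelion distances directly at θ = 0 and θ = π. A small sketch; the eccentricity used here is roughly Mars's and is chosen only for illustration, not taken from the text:

```python
import math

# Ellipse about a focus: r(theta) = a(1 - e^2) / (1 + e*cos(theta)),
# i.e. c = a(1 - e^2) in the notation of the quantitative statement above.
def radius(a, e, theta):
    return a * (1 - e**2) / (1 + e * math.cos(theta))

a, e = 1.524, 0.0934                  # roughly Mars, in AU
r_peri = radius(a, e, 0.0)            # closest approach: a * (1 - e)
r_apo = radius(a, e, math.pi)         # farthest point:   a * (1 + e)
print(f"perihelion {r_peri:.3f} AU, aphelion {r_apo:.3f} AU")
```

Since 1 − e^2 = (1 − e)(1 + e), the formula collapses algebraically to a(1 − e) at θ = 0 and a(1 + e) at θ = π, which the printed values confirm.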
It assumes that the reader is familiar with vector calculus, at least in its basic form. (However, if you are not familiar with vectors and calculus, you will still be able to follow along anyway.) Kepler's second law: Satellites will cover equal areas in equal intervals of time; they will move faster when closer to Earth and slower when far away from it. Kepler's third law: The orbital period of a launched satellite depends on only one of its parameters, i.e. its distance from Earth. The orbital period is the time taken by satellites. Kepler's laws concern the movement of a planet. If you can master these rules, you can easily reason about the movement of the planet. And in order to think about this, the quickest route is to introduce the equation of motion in two-dimensional polar coordinates. In this post, I'll introduce Kepler's laws and prove them

Satellite Communication - Kepler's Laws - Tutorialspoint

Kepler's Third Law. Kepler's third law states: The square of the orbital period of a planet is directly proportional to the cube of the semi-major axis of its orbit. The third law, published by Kepler in 1619, captures the relationship between the distance of planets from the Sun, and their orbital periods. Kepler's FIRST Law: The orbit of each planet is an ellipse and the Sun is at one focus. Kepler proved Copernicus wrong - planets didn't move in circles. In astronomy, Kepler's laws of planetary motion are three scientific laws describing the motion of planets around the Sun. Figure 1: Illustration of Kepler's three laws with two planetary orbits.
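The second law stated here can also be checked numerically, without any of the vector-calculus machinery: integrate one orbit and watch the areal velocity |r × v|/2 stay constant while the speed varies. A sketch in arbitrary units with GM = 1; the initial conditions are made up for illustration:

```python
# Leapfrog-integrate an elliptical orbit (GM = 1 units) and sample the
# areal velocity |x*vy - y*vx| / 2 along the way; Kepler's second law says
# it stays constant even though the orbital speed does not.
GM = 1.0
x, y = 1.0, 0.0
vx, vy = 0.0, 1.2       # above circular speed, below escape speed: an ellipse
dt = 1e-4

def accel(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -GM * x / r3, -GM * y / r3

areal = []
for step in range(200_000):          # a bit more than one full orbit
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
    x += dt * vx; y += dt * vy                 # drift
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
    if step % 50_000 == 0:
        areal.append(0.5 * abs(x * vy - y * vx))

print(areal)    # all four samples essentially identical
```

The kick changes v only along r and the drift changes r only along v, so each substep leaves x*vy − y*vx untouched; the scheme conserves angular momentum (and hence areal velocity) to rounding error, which is Kepler's second law in discrete form.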
(1) The orbits are ellipses, with focal points f1 and f2 for the first planet and f1 and f3 for the second planet. Kepler, by virtue of the astronomical observations and records of Tycho Brahe (a wealthy astronomer who believed in an Earth-centred model of the universe), found that the orbits of the planets followed three laws (NASA, N/A). Hence, Kepler's three laws of planetary motion are the 1st law of ellipses, the 2nd law of equal areas and the 3rd law of harmonics (Air.

Kepler's Law of Planetary Motions - Orbits, Areas, Periods

Kepler's first law states that every planet moves along an ellipse, with the Sun located at a focus of the ellipse. An ellipse is defined as the set of all points such that the sum of the distance from each point to two foci is a constant. (Figure) shows an ellipse and describes a simple way to create it. Kepler's law of planetary motion 1. Group 4: The Celestials. Kepler's Law of Planetary Motion 2. Johannes Kepler was a German astronomer and mathematician of the late sixteenth and early seventeenth centuries. 3. Unlike Brahe, Kepler believed firmly in the Copernican system. In. Kepler's Laws of Planetary Motion. 1. Kepler's first law: The planets move in elliptical orbits around the sun, with the sun at one of the two foci of the elliptical orbit. This means that the. Kepler's Law. There are actually three, Kepler's laws that is, of planetary motion: 1) every planet's orbit is an ellipse with the Sun at a focus; 2) a line joining the Sun and a.

AP Physics 1 - Kepler's Laws - YouTube

Kepler's 3rd law, as modified by Newton (coming up), will be a cornerstone of much of this course, because it allows us to estimate masses of astronomical objects (e.g. masses of stars, galaxies, the existence of black holes and the mysterious dark matter). Example of use of Kepler's 3rd law: Kepler's Three Laws: Kepler's three laws are stated below: The orbit of a planet is an ellipse with the Sun at one of the two foci.
A line segment joining a planet and the Sun sweeps out equal areas during equal intervals of time. The square of the orbital period of a planet is directly proportional to the cube of the semi-major axis of its orbit.
Kepler's Laws of Planetary Motion. Description: Aphelion - the point in the orbit where the planet is farthest from the sun; a planet moves faster at perihelion than at aphelion. Kepler's Second Law (Law of Areas) - PowerPoint PPT presentation. Noun. Kepler's laws pl (plural only) (astronomy): The three laws of planetary motion discovered by Kepler in the early 17th century, stating that (i) the orbit of a planet is an ellipse with the Sun at one of the two foci, (ii) a line segment joining a planet and the Sun sweeps out equal areas during equal intervals of time, and (iii) the. Kepler's laws. Johannes Kepler made it his life's work to create a heliocentric (sun-centered) model of the solar system which would accurately represent the observed motion in the sky of the Moon and planets over many centuries. Models using many geometric curves and surfaces to define planetary orbits, including one with the orbits of the six known planets fitted inside the five perfect solids.
Kepler's laws. The objective: verifying Kepler's laws from the orbit of a spacecraft, and finding the mass of the Earth's moon. So what Kepler's 2nd Law of Planetary Motion is saying is that a planet moves faster when it's closer to the Sun and slower when it's farther away. Image via CK12.org. 3. The amount of time it takes for a planet to complete one full orbit around the Sun is called a period. Visible Poetry Project Plus. April 5: Kepler's Law. Animated by Dana Sink. Poem by Christina M. Rau. Astro 1 - Kepler's Laws, Gravity and Satellites Slides. Isaac Newton was an English mathematician, astronomer, and physicist (described in his own day as a natural philosopher) who is widely recognized as one of the most influential scientists of all time and a key figure in the scientific revolution. His book Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy)
http://techy.horwits.com/2018/12/numerical-solver-newton-raphson-method.html

## 29 December 2018

### Numerical solver - Newton-Raphson method

The program downloadable from this page is a numerical solver. It attempts to find the zeroes of a function $f(x)$, i.e. it attempts to find $x$ such that $f(x)=0$

There are several methods for this. One that requires little computing power is Newton's Method, or the Newton-Raphson method, a description of which can be found on Wikipedia.

We have no way of knowing the exact value of $f'(x)$ on the HP-41CX used here because that machine has no symbolic math capabilities, so we calculate an approximation of $f'(x)$ by taking a small value $\epsilon$ and calculating:
$f'(x) \approx \frac {f(x+\epsilon)-f(x)} \epsilon$
This then allows us to calculate the next value of $x$ to use and display so that we see the calculator homing in on the zero that we're looking for:
$x_{n+1} = x_n - \frac {f(x_n)} {f'(x_n)}$
Once we find a value of $x$ such that $|f(x)| \lt \epsilon$, we stop and that value of $x$ is deemed to be a zero of $f(x)$.

The same $\epsilon$ is used here as the small value added to $x$ to calculate $f'(x)$ and as the threshold below which we deem $f(x)$ to be zero. It is calculated according to the display precision of the HP-41CX. The calculator's flags 36-39 are examined to ascertain how many significant figures are displayed and we retrieve the digit after the last used FIX, SCI or ENG command. E.g., if the calculator is in FIX 4 mode then we get a 4 back. Let's call this number $p$ for precision.

We then calculate $\epsilon = 10^{-p}$, so the higher the display precision, the greater precision we seek from the algorithm. Similarly, if less precision is required for the display then less precision is demanded from the algorithm and it can complete after fewer iterations.

It may well be that the algorithm is unable to converge on a zero because of the nature of the function studied.
It could be constant or it could send the algorithm off on a wild goose chase. There's some discussion of such cases on the Wikipedia page linked to above.

It is provided in text listing format, a .raw file and a printable PDF with wand bar codes.

Here is a video of a HP-41CX finding various zeros of the function:
$f(x) = x^3-2x^2-11x+12 = (x+3)(x-1)(x-4)$
The three zeroes are therefore $x=-3, x=1$ and $x=4$.

And here is the equivalent in BASIC for HP-71:
10 DESTROY ALL @ DELAY 0
20 REAL X,Y,Y1,E @ INTEGER P,M
30 P=FLAG(-17)+2*FLAG(-18)+4*FLAG(-19)+8*FLAG(-20) @ E=10^(-P) @ M=2*P+8
40 INPUT "GUESS? ";X
50 Y=FNF(X) @ IF ABS(Y)<E THEN 120
60 "X=" & STR$(X)
70 Y1=(FNF(E+X)-FNF(X))/E
80 IF Y1=0 THEN "CONSTANT?" @ END
90 X=X-Y/Y1
100 IF M>0 THEN M=M-1 @ GOTO 50
110 "NO CONVERGENCE" @ END
120 "ZERO: " & STR$(X) @ END
130 DEF FNF(X) = (X+3)*(X-1)*(X-4)
After running, the variable X contains the zero found and Y is the value of $f(x)$ calculated for that value of $x$ - this gives you the error, the absolute value of which should be lower than our $\epsilon$, which is stored in the variable E.
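For readers without an HP handy, the same algorithm translates to a few lines in any language: the forward-difference slope, the epsilon = 10^-p that does double duty as derivative step and stopping threshold, and the 2p+8 iteration budget. A Python sketch of the program above (the function name `newton_zero` is my own, not from the listing):

```python
def newton_zero(f, x, p=4):
    """Newton-Raphson with a forward-difference slope, as in the listing above.

    eps = 10**-p serves both as the step for the numerical derivative and as
    the |f(x)| threshold below which x is deemed a zero; the iteration budget
    of 2*p + 8 matches line 30 of the BASIC program.
    """
    eps = 10.0 ** -p
    for _ in range(2 * p + 8):
        y = f(x)
        if abs(y) < eps:
            return x                           # found a zero (line 120)
        slope = (f(x + eps) - f(x)) / eps      # approximate f'(x) (line 70)
        if slope == 0:
            raise ZeroDivisionError("constant?")   # line 80
        x = x - y / slope                      # Newton step (line 90)
    raise RuntimeError("no convergence")       # line 110

f = lambda x: (x + 3) * (x - 1) * (x - 4)
print([round(newton_zero(f, g), 4) for g in (-5.0, 0.0, 10.0)])
```

Starting guesses on either side of the interval steer the iteration to different zeroes of the cubic, just as in the video, landing on −3, 1 and 4 respectively.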
https://tex.stackexchange.com/questions/314162/how-to-create-simplical-complexes-of-betti-diagrams-in-tikz

# How to create Simplicial complexes of Betti Diagrams in TikZ?

I want to learn to create Betti diagrams and their simplicial complexes such as
null,
"where I need to learn to align the text properly (see Trial 1) and",
null,
where I need to learn coloring surfaces and adding single black points. The Betti diagrams are on page 30 of the book The Geometry of Syzygies: A Second Course in Algebraic Geometry and Commutative Algebra.

TRIALS

Trial 1: $x_1x_2x_3$ sits badly in the middle of the node, not on the side; is there an align option?
null,
"Trial 2: anchors without the text on the right of the node coordinate",
null,
where the goal is to have the text on the right of the node.

Trial 3: the edges are not connected (fail)
null,
Generic MWE: with anchor fail with non-connected edges (Trial 3) that can easily be changed to other trials.

\documentclass[english]{article}
\usepackage{tikz}
\usepackage{pgfplots}

\begin{document}

\begin{tikzpicture}
\draw (0,0) node(1){$x_1$};
\draw (1,-1) node(123){};%$x_1x_2x_3$
\draw (2,0) node(2){$x_2$};
\draw (1,-2) node(3){$x_3$};
\draw (1)--(123)--(2);
\draw (3)--(123);
\node [anchor=west] (n123) at (123){$x_1x_2x_3$};
\end{tikzpicture}

\end{document}

How to create the simplicial complexes of the Betti diagrams in TikZ?

• Please post the code, together with a preamble, as text. – Alenanno Jun 11 '16 at 0:40
• Just in case, please see this meta question and answers. – cfr Jun 11 '16 at 1:08
• @Alenanno MWE updated to Q. – hhh Jun 11 '16 at 1:11
• Note that you should be asking a single, specific question. For example, how can I put a label to the right of the point where these lines meet? Or how can I avoid the gap between these lines? A question isn't intended to be a laundry list of everything you want done in order to complete a project, especially not if you keep throwing additional items in the wash after people have got the washing in off the line. – cfr Jun 11 '16 at 1:11

If you want a node to be located "elsewhere" from the coordinate, you could use anchors. Regardless of how your diagrams could be more efficiently done, you could for example write:

\node[anchor=west] (n123) at (1,-1) {$x_{1}x_{2}x_{3}$};

This will make the node appear on the right of the coordinate (1,-1).

• Trial 3 and its MWE updated to Q, edges not connected fail. After this, surfaces to be colored and individual bolded vertices for Bettis. – hhh Jun 11 '16 at 1:10
• @hhh That is not how this site works. Please take a look at the guidance on asking questions.
If you have to keep updating questions because people's answers don't do what you really want, you need to spend more time thinking your questions through before asking them. But, mostly, you probably need to break your questions down into single steps you want help with. Right now, your questions are like shifting sands and answering them is a waste of everybody's time. – cfr Jun 11 '16 at 1:14\n\nSimplical complexes of Betti Diagrams with Tikz\n\nI suggest the primitives coordinate, node and draw instead of doing everything with draw and node. First of Example 1 demonstrate the former while Second of Example 1 demonstrate the latter. The colored area can be done with primitives such as fill, draw and pattern where the last requires \\usetikzlibrary{patterns}. Example 2 demonstrates different varieties about the colored areas. Lastly, the Tikz manual 4.2.1 and 15.4 are useful to understand better the techniques. Section 21 (manual 2 while section 23 in manual 3.0.1) covers transparancy: have the command opacity=0.5 in the fill or pattern.\n\nExamples about Betti diagrams with bolded-point, shaded area, edges connected and labels not over edges\n\nMWE\n\n\\documentclass[english]{article}\n\\usepackage{tikz}\n\\usetikzlibrary{patterns}\n\\usepackage{pgfplots}\n\n\\begin{document}\n\n\\begin{tikzpicture}[x=2cm, y=2cm]\n\\coordinate [label=left:$x_1$] (1) at (0,0);\n\\coordinate [label=right:$x_1x_2$] (2) at (2,0);\n\\coordinate [label=below:$x_3$] (3) at (1,-2);\n\\coordinate [label=right:$x_1x_2x_3$] (123) at (1,-1);\n\\node [fill=red,inner sep=2pt] (11) at (1){};\n\\draw [pattern color=blue, pattern=fivepointed stars] (1)--(123)--(2)--(1);\n\\draw (3)--(123);\n\\end{tikzpicture}\n\n\\begin{tikzpicture}[x=2cm, y=2cm]\n\\coordinate [label=left:$x_1$] (1) at (0,0);\n\\coordinate [label=right:$x_1x_2$] (2) at (2,0);\n\\coordinate [label=below:$x_3$] (3) at (1,-2);\n\\coordinate [label=right:$x_1x_2x_3$] (123) at (1,-1);\n\\node [fill=red,inner sep=2pt] (11) at 
(1){};\n\\draw [fill=blue] (1)--(123)--(2)--(1);\n\\fill (1)--(123)--(3)--(1);\n\\draw (3)--(123);\n\\end{tikzpicture}\n\n\\end{document}\n\n\nThe TikZ manual 2.10 and the newest manual 3.0.1 from SourceForge have unchanged sections 4.2.1 and 15.4. The transparency section is 23 in manual 3.0.1, instead of 21 as in 2.10. Relevant parts as pictures here and here."
]
| [
null,
"https://i.stack.imgur.com/WnMFs.png",
null,
"https://i.stack.imgur.com/bzUBX.png",
null,
"https://i.stack.imgur.com/FlcpK.png",
null,
"https://i.stack.imgur.com/Dq4JT.png",
null,
"https://i.stack.imgur.com/O7XO1.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7502689,"math_prob":0.9506193,"size":1150,"snap":"2019-26-2019-30","text_gpt3_token_len":369,"char_repetition_ratio":0.11692844,"word_repetition_ratio":0.04733728,"special_character_ratio":0.30956522,"punctuation_ratio":0.1036036,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98519504,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-16T15:12:27Z\",\"WARC-Record-ID\":\"<urn:uuid:4fdde55e-b85f-45ba-a9fe-a44d1aa0e12b>\",\"Content-Length\":\"154985\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:76249bb5-1287-43b4-bb86-60d9f8bb0d64>\",\"WARC-Concurrent-To\":\"<urn:uuid:c3976158-b39a-41e5-873a-fb84a7b44b90>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://tex.stackexchange.com/questions/314162/how-to-create-simplical-complexes-of-betti-diagrams-in-tikz\",\"WARC-Payload-Digest\":\"sha1:AONSVW3PATZQLCAJHBU4BDADS26XDDLZ\",\"WARC-Block-Digest\":\"sha1:ZBNJSCKSQM7IPHBPEPNYL5LSO3H33QUL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195524568.14_warc_CC-MAIN-20190716135748-20190716161748-00190.warc.gz\"}"} |
https://www.physicsforums.com/threads/ode-change-of-vars.229117/ | [
"# ODE change of vars\n\n#### robousy\n\nHey,\n\nI'm reading the paper:\n\nhttp://arxiv.org/abs/hep-ph/9907218\n\nThey have an ODE (eqn 7):\n\n$$-\frac{1}{r^2}\frac{d}{d\phi}e^{-4kr\phi}\frac{dy_n}{d\phi}+m^2e^{-4kr\phi}y_n=m^2_ne^{-2kr\phi}y_n$$\n\nThey then make a change of variables:\n$$z_n=\frac{m_n}{k}e^{kr\phi}$$\n$$f_n=e^{-2kr\phi}y_n$$\n\nThen the ODE becomes:\n\n$$z_n^2\frac{d^2f_n}{dz_n^2}+z_n\frac{df_n}{dz_n}+(z_n^2-[4+\frac{m^2}{k^2}])f_n=0$$\n\nMy question is regarding this change of variables:\n\nHow do you 'know' how to change the variables so that the ODE comes out as this tidy Bessel function? Is this almost an art, or is there some kind of technique?\n\nLooking forward to gaining some insight here.\n\nRichard\n\n#### avalonme\n\nI use the following \"cheating\" technique in my research:\nSolve the equation in Mathematica. Look at the arguments of the Bessel functions etc. that it gives and make the necessary substitutions :)\nThis method works for your equation; Mathematica 6 gives the solution (f_n) in terms of BesselJ functions of the argument (z_n).\n\n#### robousy\n\nNice. I like it. I like it a lot. Thanks avalonme. I'll try it first thing tomorrow. It's been bugging me for about a week now.\n\nRich\n\n#### robousy\n\nSay, did you try this avalonme? Also, do you use DSolve? 
I plugged in the above formula and get exponential solutions, not the Bessel function solution I was expecting.\n\n#### avalonme\n\nThat's what I did: you can directly copy this to Mathematica 6.\n\nDSolve[-1/r^2 D[Exp[-4 k r \[Phi]] y'[\[Phi]], \[Phi]] + m^2 Exp[-4 k r \[Phi]] y[\[Phi]] == mn^2 Exp[-2 k r \[Phi]] y[\[Phi]], y[\[Phi]], \[Phi]][[1]]\n\nwhich returns\n\n{y[\[Phi]] -> E^(2 k r \[Phi]) BesselJ[-Sqrt[4 k^2 + m^2]/k, mn Sqrt[E^(2 k r \[Phi]) r^2]/(k r)] C[1] Gamma[1 - Sqrt[4 k^2 + m^2]/k] + E^(2 k r \[Phi]) BesselJ[Sqrt[4 k^2 + m^2]/k, mn Sqrt[E^(2 k r \[Phi]) r^2]/(k r)] C[2] Gamma[1 + Sqrt[4 k^2 + m^2]/k]}\n\n#### robousy\n\nThanks. When I saw the result it put a huge smile on my face! This is great. One extra (and important) piece in a big puzzle I'm trying to put together.\n\nI compared my short code against yours and the only difference was that I took the derivative of the first term in the expression by hand before putting it into Mathematica. Hmmmm, I thought I could differentiate...\n\nIncidentally, I checked out your homepage. I'm in high energy at Baylor in Texas, but a guy I share an office with is into plasma physics. 
Some nice papers you have out.\n\nRichard"
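As a sanity check on the thread's claim, the Bessel-type solution can be verified independently of Mathematica. Below is a sketch in Python with SymPy; the variable names follow the thread, and the specific numerical spot-check values are arbitrary choices, not anything from the original discussion:

```python
import sympy as sp

phi, k, r, m, mn = sp.symbols('phi k r m m_n', positive=True)

# Order and argument read off from the thread: nu^2 = 4 + m^2/k^2, z = (m_n/k) e^{k r phi}
nu = sp.sqrt(4 + m**2 / k**2)
z = mn / k * sp.exp(k * r * phi)

# Candidate solution y_n = e^{2 k r phi} f_n with f_n = J_nu(z_n)
y = sp.exp(2 * k * r * phi) * sp.besselj(nu, z)

# Left- and right-hand sides of eqn (7) from the thread
lhs = (-sp.diff(sp.exp(-4 * k * r * phi) * sp.diff(y, phi), phi) / r**2
       + m**2 * sp.exp(-4 * k * r * phi) * y)
rhs = mn**2 * sp.exp(-2 * k * r * phi) * y

# SymPy differentiates besselj analytically, so the residual is exact;
# substituting numbers just avoids relying on symbolic simplification.
vals = {k: 1, r: sp.Rational(1, 2), m: 2, mn: 3, phi: sp.Rational(7, 10)}
residual = sp.N((lhs - rhs).subs(vals))
assert abs(residual) < 1e-8
```

The same check at other parameter values works equally well, since the cancellation holds identically in the symbols.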
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.53395915,"math_prob":0.9695488,"size":1374,"snap":"2019-13-2019-22","text_gpt3_token_len":539,"char_repetition_ratio":0.13430656,"word_repetition_ratio":0.9655172,"special_character_ratio":0.35152838,"punctuation_ratio":0.08064516,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.998873,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-25T06:00:28Z\",\"WARC-Record-ID\":\"<urn:uuid:ac367593-3495-4082-b672-1773de1b4c9f>\",\"Content-Length\":\"74327\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bedb0b43-636c-4302-a8ce-982ec411baf3>\",\"WARC-Concurrent-To\":\"<urn:uuid:c5ff21ce-02ba-4935-a42e-faa886756a43>\",\"WARC-IP-Address\":\"23.111.143.85\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/ode-change-of-vars.229117/\",\"WARC-Payload-Digest\":\"sha1:3KBFUCUZFC7NED5KIN3BWMUUUUELL3HG\",\"WARC-Block-Digest\":\"sha1:5AF2WR5JCVL7E5X6L2T5BRNBLAKWR7K4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912203755.18_warc_CC-MAIN-20190325051359-20190325073359-00325.warc.gz\"}"} |
https://www.teachoo.com/2821/641/Misc-3---If-origin-is-centroid-of-PQR-with-P-(2a--2--6)/category/Miscellaneous/ | [
"Miscellaneous\n\nChapter 11 Class 11 - Intro to Three Dimensional Geometry\nSerial order wise",
null,
"",
null,
"",
null,
"### Transcript\n\nMisc 3: If the origin is the centroid of the triangle PQR with vertices P(2a, 2, 6), Q(-4, 3b, -10) and R(8, 14, 2c), then find the values of a, b and c.\n\nGiven Δ PQR where P(2a, 2, 6), Q(-4, 3b, -10), R(8, 14, 2c). Also, the origin O(0, 0, 0) is the centroid of Δ PQR.\n\nWe know that the centroid of a triangle whose vertices are (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) is\n((x1 + x2 + x3)/3, (y1 + y2 + y3)/3, (z1 + z2 + z3)/3)\n\nHere, x1 = 2a, y1 = 2, z1 = 6; x2 = -4, y2 = 3b, z2 = -10; x3 = 8, y3 = 14, z3 = 2c.\n\n∴ For the centroid O(0, 0, 0):\n(0, 0, 0) = ((2a + (-4) + 8)/3, (2 + 3b + 14)/3, (6 + (-10) + 2c)/3)\n(0, 0, 0) = ((2a + 4)/3, (3b + 16)/3, (2c - 4)/3)\n\nx-coordinate: 0 = (2a + 4)/3, so 2a + 4 = 0, 2a = -4, a = -2.\ny-coordinate: 0 = (3b + 16)/3, so 3b + 16 = 0, 3b = -16, b = -16/3.\nz-coordinate: 0 = (2c - 4)/3, so 2c - 4 = 0, 2c = 4, c = 2.\n\nThus, a = -2, b = -16/3 and c = 2."
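The same linear system can be checked in a few lines of code. A small SymPy sketch (illustrative only, not part of the original worked solution):

```python
from sympy import symbols, Rational, solve, Eq

a, b, c = symbols('a b c')
P = (2*a, 2, 6)
Q = (-4, 3*b, -10)
R = (8, 14, 2*c)

# The centroid is the component-wise average of the vertices; set it to the origin
eqs = [Eq((P[i] + Q[i] + R[i]) / 3, 0) for i in range(3)]
sol = solve(eqs, (a, b, c))

assert sol[a] == -2
assert sol[b] == Rational(-16, 3)
assert sol[c] == 2
```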
null,
""
]
| [
null,
"https://d1avenlh0i1xmr.cloudfront.net/4287c71d-ce8b-4e8b-8c1b-226e37688952/slide9.jpg",
null,
"https://d1avenlh0i1xmr.cloudfront.net/44d16dac-6a65-46e8-9a8a-dd145ef98263/slide10.jpg",
null,
"https://d1avenlh0i1xmr.cloudfront.net/2f5e7341-ef52-4458-94e6-fbc2448feb9e/slide11.jpg",
null,
"https://www.teachoo.com/static/misc/Davneet_Singh.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6922763,"math_prob":1.0000093,"size":1198,"snap":"2023-14-2023-23","text_gpt3_token_len":699,"char_repetition_ratio":0.10887772,"word_repetition_ratio":0.06603774,"special_character_ratio":0.6460768,"punctuation_ratio":0.17391305,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99993646,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,4,null,4,null,4,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-10T14:52:05Z\",\"WARC-Record-ID\":\"<urn:uuid:3819a22c-e857-46ca-8ed9-8de914a6a553>\",\"Content-Length\":\"156873\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e032f213-a319-4a9b-80f0-c823271700cb>\",\"WARC-Concurrent-To\":\"<urn:uuid:6db0f02a-0bd3-4e47-8ca5-da27b72814ff>\",\"WARC-IP-Address\":\"172.67.193.2\",\"WARC-Target-URI\":\"https://www.teachoo.com/2821/641/Misc-3---If-origin-is-centroid-of-PQR-with-P-(2a--2--6)/category/Miscellaneous/\",\"WARC-Payload-Digest\":\"sha1:NKXLLTIM7VBEPEXQS6NBNRUZKKRN6DV6\",\"WARC-Block-Digest\":\"sha1:HNXCEOCUAOB3S6ON6QIFA5TUG35DOOEH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224657720.82_warc_CC-MAIN-20230610131939-20230610161939-00551.warc.gz\"}"} |
https://mathemania.com/lesson/binomial-theorem/ | [
"# Binomial Theorem\n\n## Factorials\n\nThe factorial function multiplies the first $n$ natural numbers. For a natural number $n$, we denote by $n!$ the product of the first $n$ natural numbers.\n\nFor example:\n\n$$1! = 1,$$\n$$2! = 1 \\cdot 2 = 2,$$\n$$3! = 1 \\cdot 2 \\cdot 3 = 6,$$\n$$4! = 1 \\cdot 2 \\cdot 3 \\cdot 4 = 24,$$\n$$5! = 1 \\cdot 2 \\cdot 3 \\cdot 4 \\cdot 5 = 120.$$\n\nIn addition, it is useful to define the value $0!.$ We define $0! := 1$ by convention.\n\nBy induction, we see that the factorials satisfy the recursion\n\n$$n! = n \\cdot (n - 1)!$$\n\nwith initial value $0! = 1.$\n\nExample 1. Calculate the following expression:\n\n$$\\frac{49! - 48!}{48!}.$$\n\nSolution:\n\nAs a tip: when dealing with factorials, do not rush to evaluate everything, because you can usually exploit the recursion. Now, since we know that $49! = 49 \\cdot 48!$, we have a common factor in all terms:\n\n$$\\frac{49 \\cdot 48! - 48!}{48!} = \\frac{48! (49 - 1)}{48!} = 48.$$\n\nExample 2. 
Solve the following equation:\n\n$$\\frac{n!}{(n - 3)!} = \\frac{8 (n - 1)!}{(n - 2)!}.$$\n\nSolution:\n\n$$\\frac{n!}{(n - 3)!} = \\frac{8 (n - 1)!}{(n - 2)!}$$\n\n$$\\Leftrightarrow \\frac{n \\cdot (n - 1) \\cdot (n - 2) \\cdot (n - 3)!}{(n - 3)!} = \\frac{8 (n - 1) (n - 2)!}{(n - 2)!}$$\n\n$$\\Leftrightarrow n \\cdot (n - 1) \\cdot (n - 2) = 8 \\cdot (n - 1)$$\n\n$$\\Leftrightarrow n \\cdot (n - 2) = 8$$\n\n$$\\Leftrightarrow n^2 - 2n - 8 = 0$$\n\n$$\\Rightarrow n_{1,2} = \\frac{2 \\pm \\sqrt{(-2)^{2} - 4 \\cdot 1 \\cdot (-8)}}{2 \\cdot 1}$$\n\n$$\\Rightarrow n_{1,2} = \\frac{2 \\pm \\sqrt{4 + 32}}{2}$$\n\n$$\\Rightarrow n_{1,2} = \\frac{2 \\pm \\sqrt{36}}{2}$$\n\n$$\\Rightarrow n_{1,2} = \\frac{2 \\pm 6}{2}$$\n\n$$\\Rightarrow n = 4 \\quad \\textrm{or} \\quad n = -2$$\n\n$n = 4$ is the only solution of the original equation, because $n$ must be a natural number.\n\n## Binomial coefficients\n\nLet $n$ and $k$ be natural numbers with $k \\le n$, where $k$ may also take the value $0$. The binomial coefficient is denoted by the symbol ${{n}\\choose{k}}$ and defined as:\n\n$${{n}\\choose{k}} := \\frac{n!}{k! (n - k)!},$$ for $k \\geq 1.$ For $k = 0$, by definition, we have:\n\n$${{n}\\choose{0}} := 1.$$\n\nReplacing $k$ with $0$ in the definition of the binomial coefficient gives the same value:\n\n$${{n}\\choose{0}} = \\frac{n!}{0! (n - 0)!} = \\frac{n!}{1 \\cdot n!} = \\frac{n!}{n!} = 1.$$\n\n## The symmetry property\n\n$${{n}\\choose{k}} = {{n}\\choose{n - k}}, \\qquad k = 0, 1, 2, \\ldots, n.$$\n\nProof.\n\n$${{n}\\choose{n-k}} = \\frac{n!}{(n-k)![n-(n-k)]!} = \\frac{n!}{(n-k)!k!} = {{n}\\choose{k}}.$$\n\nExample 3. 
Calculate ${{8}\\choose{6}}$ using the symmetry property above.\n\nSolution:\n\n$${{8}\\choose{6}} = {{8}\\choose{8 - 6}} = {{8}\\choose{2}} = \\frac{8 \\cdot 7}{2 \\cdot 1} = 28.$$\n\n## Pascal’s triangle\n\nLet’s write out the binomial coefficients for small values of $n$, together with their calculated values, as shown below:\n\n$${{1}\\choose{0}}\\qquad {{1}\\choose{1}}$$\n\n$${{2}\\choose{0}}\\qquad {{2}\\choose{1}}\\qquad {{2}\\choose{2}}$$\n\n$${{3}\\choose{0}} \\qquad {{3}\\choose{1}}\\qquad {{3}\\choose{2}}\\qquad {{3}\\choose{3}}$$\n\n$${{4}\\choose{0}}\\qquad {{4}\\choose{1}}\\qquad {{4}\\choose{2}}\\qquad {{4}\\choose{3}} \\qquad{{4}\\choose{4}}$$\n\n$$\\vdots$$\n\nThe triangle above is commonly known as Pascal’s (or the Chinese) triangle. Here are the calculated values, with two more lines added:\n\n$$1 \\qquad 1$$\n\n$$1 \\qquad 2 \\qquad 1$$\n\n$$1 \\qquad 3 \\qquad 3 \\qquad 1$$\n\n$$1 \\qquad 4 \\qquad 6 \\qquad 4 \\qquad 1$$\n\n$$1 \\qquad 5 \\qquad 10 \\qquad 10 \\qquad 5 \\qquad 1$$\n\n$$1 \\qquad 6 \\qquad 15 \\qquad 20 \\qquad 15 \\qquad 6 \\qquad 1$$\n\n$$\\vdots$$\n\nEach element of Pascal’s triangle is equal to the sum of the two elements in the row above it, to its left and right, except for the elements on the edges of the triangle, which are always equal to $1.$\n\nThis property is valid for any element of Pascal’s triangle. Choose three characteristic elements of Pascal’s triangle:\n\n$${{n}\\choose{k-1}} \\qquad {{n}\\choose{k}}$$\n\n$${{n+1}\\choose{k}}$$\n\nThe following relation describes the basic principle of Pascal’s triangle:\n\n$${{n}\\choose{k-1}}+{{n}\\choose{k}}={{n+1}\\choose{k}}.$$\n\nProof.\n\n$${{n}\\choose{k-1}}+{{n}\\choose{k}}=$$\n\n$$\\frac{n!}{(k-1)![n-(k-1)]!} + \\frac {n!}{k!(n-k)!}=\\frac{k\\cdot n! 
+ (n-k+1)\\cdot n!}{k!(n-k+1)!} =\\frac{n!(k+n-k+1)}{k!(n+1-k)!}$$\n\n$$= \\frac{n!(n+1)}{k!(n+1-k)!}=\\frac {(n+1)!}{k!(n+1-k)!}$$\n\n$$={{n+1}\\choose{k}}.$$\n\nThis shows that the left-hand side equals the right-hand side.\n\nThis property also demonstrates that all binomial coefficients are natural numbers, because each of them is equal to $1$ or to the sum of two natural numbers.\n\n## The binomial theorem\n\nThe binomial theorem, also known as the binomial expansion, describes how to expand powers of a binomial. It applies only to binomials. Let’s take a look at the link between the values in Pascal’s triangle and the expansion of the powers of the binomial $(a+b)^n.$\n\nFor small values of a natural number $n$ we know the formulas for the powers of the binomial:\n\n$$(a + b)^1 = 1 \\cdot a + 1 \\cdot b,$$\n\n$$(a + b)^2 = 1 \\cdot a^2 + 2 \\cdot a \\cdot b + 1 \\cdot b^2,$$\n\n$$(a + b)^3 = 1 \\cdot a^3 + 3 \\cdot a^2 \\cdot b + 3 \\cdot a \\cdot b^2 + 1 \\cdot b^3.$$\n\nThe last two formulas are called the square of a binomial and the cube of a binomial, respectively. The coefficients in these formulas are in fact binomial coefficients.\n\nIf we continue further, we get:\n\n$$(a+b)^4 = a^4 + 4\\cdot a^3 \\cdot b + 6\\cdot a^2\\cdot b^2 + 4\\cdot a \\cdot b^3 + b^4,$$\n\n$$(a+b)^5 = a^5 + 5\\cdot a^4\\cdot b + 10\\cdot a^3 \\cdot b^2 + 10\\cdot a^2 \\cdot b^3 + 5\\cdot a \\cdot b^4 + b^5$$\n\n$$\\vdots$$\n\nNotice that all addends come in the form $b_i \\cdot a^{n-i} \\cdot b^i$. The numbers $b_i$ are called binomial coefficients. They are easily calculated and written using factorials.\n\nThe binomial theorem\n\nFor all $a, b \\in \\mathbb{R}$ and $n \\in \\mathbb{N}$:\n\n$$(a+b)^n = {{n}\\choose{0}} a^{n} b^{0} + {{n}\\choose{1}} a^{n-1} b^{1} + {{n}\\choose{2}} a^{n-2} b^{2}+ \\cdots + {{n}\\choose{n-1}} a^{1} b^{n-1} + {{n}\\choose{n}} a^{0} b^{n}.$$\n\nExample 4. 
Using the formula above, expand the following:\n\n$$(2x + 1)^6.$$\n\nSolution:\n\n$$(2x+1)^6 = {{6}\\choose{0}} (2x)^{6} 1^{0} + {{6}\\choose{1}} (2x)^{5} 1^{1} + {{6}\\choose{2}} (2x)^{4} 1^{2} + {{6}\\choose{3}} (2x)^{3} 1^{3} + {{6}\\choose{4}} (2x)^{2} 1^{4} + {{6}\\choose{5}} (2x)^{1} 1^{5} + {{6}\\choose{6}} (2x)^{0} 1^{6}$$\n\n$$= 1 \\cdot 64x^{6} \\cdot 1 + 6 \\cdot 32x^{5} \\cdot 1 + 15 \\cdot 16x^{4} \\cdot 1 + 20 \\cdot 8x^{3} \\cdot 1 + 15 \\cdot 4x^{2} \\cdot 1 + 6 \\cdot 2x \\cdot 1 + 1 \\cdot 1 \\cdot 1$$\n\n$$= 64x^{6} + 192x^{5} + 240x^{4} + 160x^{3} + 60x^{2} + 12x + 1.$$"
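As a quick sanity check on the coefficients, the expansion of Example 4 can be reproduced programmatically. A short Python sketch (illustrative, not part of the original lesson):

```python
from math import comb, factorial

# Binomial coefficient via the factorial definition, cross-checked against math.comb
def binom(n, k):
    return factorial(n) // (factorial(k) * factorial(n - k))

assert all(binom(6, k) == comb(6, k) for k in range(7))
assert comb(8, 6) == comb(8, 2) == 28          # the symmetry property of Example 3

# Coefficients of (2x + 1)^6 in descending powers of x: C(6, k) * 2^(6 - k)
coeffs = [comb(6, k) * 2**(6 - k) for k in range(7)]
assert coeffs == [64, 192, 240, 160, 60, 12, 1]
```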
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.5447431,"math_prob":1.0000091,"size":6467,"snap":"2022-40-2023-06","text_gpt3_token_len":2583,"char_repetition_ratio":0.17778121,"word_repetition_ratio":0.022966508,"special_character_ratio":0.45276016,"punctuation_ratio":0.116762176,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000045,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-04T08:38:17Z\",\"WARC-Record-ID\":\"<urn:uuid:27b3387e-ab3c-4156-a878-c5e127283369>\",\"Content-Length\":\"97248\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e36203e1-46a1-4d59-ad62-126da4f3cac7>\",\"WARC-Concurrent-To\":\"<urn:uuid:e4221fc6-5144-4843-8a53-0d9e147f1bbb>\",\"WARC-IP-Address\":\"137.184.39.193\",\"WARC-Target-URI\":\"https://mathemania.com/lesson/binomial-theorem/\",\"WARC-Payload-Digest\":\"sha1:WP4ATPWJ7AHQPA7M7NHYSPWAFLESTUMG\",\"WARC-Block-Digest\":\"sha1:IEKDILPP3FT7JNQC7UGJ5KHSH7XIMRH2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500095.4_warc_CC-MAIN-20230204075436-20230204105436-00673.warc.gz\"}"} |
http://eprints.maths.manchester.ac.uk/156/ | [
"# A Schur--Parlett Algorithm for Computing Matrix Functions\n\nDavies, Philip I. and Higham, Nicholas J. (2003) A Schur--Parlett Algorithm for Computing Matrix Functions. SIAM Journal On Matrix Analysis and Applications, 25 (2). pp. 464-485. ISSN 1095-7162",
null,
"## Abstract\n\nAn algorithm for computing matrix functions is presented. It employs a Schur decomposition with reordering and blocking followed by the block form of a recurrence of Parlett, with functions of the nontrivial diagonal blocks evaluated via a Taylor series. A parameter is used to balance the conflicting requirements of producing small diagonal blocks and keeping the separations of the blocks large. The algorithm is intended primarily for functions having a Taylor series with an infinite radius of convergence, but it can be adapted for certain other functions, such as the logarithm. Novel features introduced here include a convergence test that avoids premature termination of the Taylor series evaluation and an algorithm for reordering and blocking the Schur form. Numerical experiments show that the algorithm is competitive with existing special-purpose algorithms for the matrix exponential, logarithm, and cosine. Nevertheless, the algorithm can be numerically unstable with the default choice of its blocking parameter (or in certain cases for all choices), and we explain why determining the optimal parameter appears to be a very difficult problem. A MATLAB implementation is available that is much more reliable than the function funm in MATLAB 6.5 (R13).\n\nItem Type: Article\nKeywords: matrix function, matrix exponential, matrix logarithm, matrix cosine, Taylor series, Schur decomposition, Parlett recurrence, sep function, LAPACK, MATLAB\nSubjects: MSC 2010 (the AMS's Mathematics Subject Classification) > 15 Linear and multilinear algebra; matrix theory; > 65 Numerical analysis\nDeposited by: Nick Higham\nDeposited: 16 Feb 2006\nLast Modified: 20 Oct 2017 14:12\nURI: http://eprints.maths.manchester.ac.uk/id/eprint/156"
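For readers who want to experiment, SciPy's general-purpose scipy.linalg.funm is based on the same Schur-plus-Parlett-recurrence idea (though not the blocked Davies-Higham variant described in this abstract). A hedged sketch comparing it with the specialized matrix exponential:

```python
import numpy as np
from scipy.linalg import expm, funm

# A small matrix with well-separated eigenvalues (2 - sqrt(2) and 2 + sqrt(2));
# the Parlett recurrence loses accuracy when eigenvalues are close, which is
# one motivation for the blocked algorithm described in the abstract.
A = np.array([[1.0, 2.0],
              [0.5, 3.0]])

F = funm(A, np.exp)              # generic: Schur form + Parlett recurrence
assert np.allclose(F, expm(A))   # agrees with the specialized exponential

G = funm(A, np.cos)              # the same machinery gives the matrix cosine
# Check the identity cos(A)^2 = (cos(2A) + I) / 2
assert np.allclose(G @ G, 0.5 * (funm(A, lambda x: np.cos(2 * x)) + np.eye(2)))
```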
null
]
| [
null,
"http://eprints.maths.manchester.ac.uk/style/images/fileicons/application_pdf.png",
null,
"http://eprints.maths.manchester.ac.uk/style/images/action_view.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.80225724,"math_prob":0.7474449,"size":2103,"snap":"2019-51-2020-05","text_gpt3_token_len":454,"char_repetition_ratio":0.11958075,"word_repetition_ratio":0.020477816,"special_character_ratio":0.21493106,"punctuation_ratio":0.14127424,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97773653,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-25T11:36:14Z\",\"WARC-Record-ID\":\"<urn:uuid:cdc1f7ee-f147-4fb7-9d08-49fdfd558935>\",\"Content-Length\":\"28195\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c03dab1d-15ed-4e4d-aa8f-f5a4c2755a21>\",\"WARC-Concurrent-To\":\"<urn:uuid:301c6f18-c26e-4637-aa2c-8a9df76531a8>\",\"WARC-IP-Address\":\"130.88.96.130\",\"WARC-Target-URI\":\"http://eprints.maths.manchester.ac.uk/156/\",\"WARC-Payload-Digest\":\"sha1:XOX6O3URA7DSOO2KQH7D72VB34AIJNK5\",\"WARC-Block-Digest\":\"sha1:YZIEQ6HVZWP7OD4MXFXT52IBO2VUOOT4\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251672440.80_warc_CC-MAIN-20200125101544-20200125130544-00061.warc.gz\"}"} |
https://physics.stackexchange.com/questions/110983/why-was-quantum-mechanics-regarded-as-a-non-deterministic-theory | [
"# Why was quantum mechanics regarded as a non-deterministic theory?\n\nThere seems to be a widespread impression that quantum mechanics is not deterministic, e.g. that the world is quantum-mechanical and therefore not deterministic.\n\nI have a basic question about quantum mechanics itself. A quantum-mechanical object is completely characterized by the state vector. The time-evolution of the state vector is perfectly deterministic. The system, equipment, environment, and observer are part of the state vector of the universe. The measurements with different results are parts of the state vector at different spacetime points. The measurement is a complicated process between system and equipment. The equipment has $10^{23}$ degrees of freedom, whose states we neither know nor are able to compute. In this sense, the situation in QM is quite similar to statistical physics. Why can't the situation be just like statistical physics, where we introduce an assumption to simplify calculation, namely that every accessible microscopic state has equal probability? In QM, we also introduce an assumption about the probabilistic measurement to produce the measurement outcome.\n\nPS1: If we regard non-determinism as an intrinsic feature of quantum mechanics, then the measurement has to disobey the Schrödinger picture.\n\nPS2: The boldface argument above does not obey Bell's inequality. In the local hidden-variable theory from Sakurai's modern quantum mechanics, a particle with $z+$, $x-$ spin measurement results corresponds to a $(\hat{z}+,\hat{x}-)$ \"state\". If I just say the time-evolution of the universe is $$\hat{U}(t,t_0) \lvert \mathrm{universe} (t_0) \rangle = \lvert \mathrm{universe} (t) \rangle.$$ When the $z+$ was obtained, the state of the universe is $\lvert\mathrm{rest} \rangle \lvert z+ \rangle$. Later, when the $x-$ was obtained, the state of the universe is $\lvert\mathrm{rest}' \rangle \lvert x- \rangle$. 
It is deterministic, and does not require a hidden-variable setup as in Sakurai's book.\n\nPS3: My question is just about quantum mechanics itself. It is entirely possible that the final theory of nature will require drastic modification of QM. Nevertheless, that is outside the current question.\n\nPS4: One might say the state vector is probabilistic. However, the result of a measurement happens in the equipment, which is a part of the total state vector. Giving a probabilistic interpretation to a deterministic theory is logically inconsistent.\n\n• Quantum mechanics is deterministic, but it is also probabilistic -- i.e. you can deterministically calculate the probability of a random event happening. This is to distinguish it from non-deterministic (i.e. stochastic) systems where you do not generally have \"one\" solution but an entire family of solutions depending on random variables. – webb May 2 '14 at 22:33\n• If I know the wavefunction, or state vector, more generally, of the universe, then I don't need the probability anymore – user26143 May 3 '14 at 7:34\n• If you know the state vector of the universe, then this still doesn't give you information about exact outcome of any quantum experiment — only probabilities. – Ruslan May 3 '14 at 7:56\n• If the equipment and system are governed by the Schrödinger picture, there is no (strict, meaning not in the sense that arises in statistical mechanics) probability. If there is (strict) probability, then the Schrödinger picture is incomplete. – user26143 May 3 '14 at 8:22\n• It is not clear what you are asking. Quantum theory is non-deterministic in the sense that it works with objects ($\psi$ functions, kets) that can be used to calculate probabilities, not the actual results. It is the same as in statistical physics, only probabilistic statements can be derived. – Ján Lalinský May 3 '14 at 9:53\n\n## 7 Answers\n\nI agree with much of what you write in your question. 
Whether quantum mechanics is considered to be deterministic is a matter of interpretation, summarised in this wiki comparison of interpretations. The wiki definition of determinism in this context, which I think is entirely satisfactory, is\n\nDeterminism is a property characterizing state changes due to the passage of time, namely that the state at a future instant is a function of the state in the present (see time evolution). It may not always be clear whether a particular interpretation is deterministic or not, as there may not be a clear choice of a time parameter. Moreover, a given theory may have two interpretations, one of which is deterministic and the other not.\n\nIn, for example, the many-worlds interpretation, time evolution is unitary and is governed entirely by Schrödinger’s equation. There is nothing like the \"collapse of the wave-function\" or a Born rule for probabilities.\n\nIn other interpretations, for example, Copenhagen, there is a Born rule, which introduces a non-deterministic collapse along with the deterministic evolution of the wave-function by Schrödinger’s equation.\n\nIn your linked text, the author writes that quantum mechanics is non-deterministic. I assume the author rejects the many-worlds and other deterministic interpretations of quantum mechanics. Aspects of such interpretations remain somewhat unsatisfactory; for example, it is difficult to calculate probabilities correctly without the Born rule.
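The split the answers keep referring to (deterministic evolution of the state vector, probabilistic outcomes via the Born rule) can be made concrete with a toy simulation. A minimal, illustrative Python sketch; the Hamiltonian, evolution time, and sample size are arbitrary choices, not anything from the question or answers:

```python
import numpy as np

rng = np.random.default_rng(1)

# Pauli-X Hamiltonian driving a Rabi oscillation of a single qubit
H = np.array([[0.0, 1.0], [1.0, 0.0]])
t = 0.4

# Deterministic part: |psi(t)> = exp(-iHt)|psi(0)> is always the same vector
eigvals, eigvecs = np.linalg.eigh(H)
U = eigvecs @ np.diag(np.exp(-1j * eigvals * t)) @ eigvecs.conj().T
assert np.allclose(U.conj().T @ U, np.eye(2))   # unitary evolution

psi = U @ np.array([1.0, 0.0])
probs = np.abs(psi) ** 2                        # Born rule: probabilities only
assert np.isclose(probs.sum(), 1.0)             # norm is preserved

# Non-deterministic part: individual outcomes are sampled, not predicted
outcomes = rng.choice([0, 1], size=100_000, p=probs)
assert abs(outcomes.mean() - probs[1]) < 0.01   # frequencies match the Born rule
```

Running the unitary step twice always gives the same `psi`; only the sampled outcomes differ from run to run, which is the sense in which the theory is deterministic about states but not about measurement results.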
– Conifold Apr 28 '16 at 1:46\n\nQuantum mechanics is non-deterministic about actual measurements even in a gedanken experiment because of the Heisenberg Uncertainty Principle, which in the operator representation appears as non-commuting operators. It is a fundamental relation of quantum mechanics:\n\nIf you measure the position accurately, the momentum is completely undefined.\n\nThe interpretation of the solutions of Schrödinger's equation as predicting the behavior of matter depends on the postulates: the state function determined by the equation is a probability distribution for finding the system under observation with given energy and coordinates. This does not change if large ensembles are considered, except computationally. The probabilistic nature will always be there as long as the theory is the same.\n\n• You are wrong. The HUP is not optional. The total universe obeys the HUP postulate, so as far as the theory of quantum mechanics goes, which is what you are asking, it will always be indeterminate by construction of the theory. It was constructed to fit observations, and if you extrapolate to the total universe it makes no difference. (You said you are not considering other theories.) – anna v May 3 '14 at 12:02\n• When measuring one particle's x then going to the next, their momentum will be indeterminate, and \"next\" will have a whole phase space to be chosen from, because momentum determines the next probability of x, not a point but a probability of being found at that point, whether 1, 2, 3 or an infinite number of particles. – anna v May 3 '14 at 12:48\n• The HUP is a postulate incorporated into the mathematics of commutators. – anna v May 3 '14 at 12:50\n• No, the Schrödinger picture gives a probability of finding any measurement value, not a fixed value of the momentum. One has to operate on the Schrödinger state function with the momentum operator to get the momentum, and the operation/measurement will give a value within the probability envelope. 
– anna v May 3 '14 at 13:05\n• HUP isn't critical to determinism; the key point is the Born rule / wave function collapse. I think this answer is off target. – innisfree May 3 '14 at 19:11\n\nThe difference between statistical physics and quantum mechanics is that, in statistical physics, it is always reasonable to either measure a quantity, or demonstrate that the effect of that quantity can be bundled into an easy-to-work-with random variable, often through the use of the Central Limit Theorem. In such situations, it can be shown that the answer will be a deterministic answer plus a small perturbation from the random variables with a 0 expectation and a very small variance.\n\nIn quantum mechanics, the interesting properties show up in situations where it's not possible to measure a quantity and not plausible to bundle it up into a random variable using the central limit theorem. Sometimes you can, of course: in particular, this approach works well in modeling a quantum-mechanical system which is already well modeled in classical physics. For the most part, we don't observe many quantum effects in day-to-day life! However, quantum mechanics is focused on the more interesting regions where those unmeasurable quantities have an important impact on the outcome of the system.\n\nAs an example, in many entanglement scenarios, you can get away with ignoring the correlation between the states of the particles. This is good, because in theory, there's some small level of entanglement between all particles that have interacted, and it's good to know that we can often get away with ignoring this, and treating the values as simple independent and identically distributed variables. However, in the entanglement cases quantum mechanics is interested in, we intentionally explore situations where the entanglement is strong enough that that correlation can't just be handwaved away and still yield experimentally validated results. 
We are obliged to carry it through our equations if we want to provide a good model of reality.

There are many ways to do this, and one of the dividing lines regarding the topic is the line drawn between the different interpretations of QM. Some of them hold to a deterministic model; others hold to non-deterministic arguments (the Copenhagen interpretation being an example). In general, the models which are deterministic have to give up something else which is valued by physicists. The many-worlds theory gets away with being deterministic by arguing that every possible outcome of every classical observation occurs, each in its own universe. This is consistent with the equations that we believe are a good model of quantum mechanics, but comes with strange side effects when applied to the larger world (quantum suicide, for instance). The Copenhagen interpretation is, in my opinion, the most natural interpretation in that it dovetails with the way we do classical physics smoothly, without any pesky alternate realities. I have found that mere mortals are most comfortable with the intuitive leaps of the Copenhagen interpretation, as compared to the intuitive leaps of other interpretations. However, the Copenhagen interpretation is decidedly non-deterministic. Because this one seems easier to explain to many people, it has achieved a great deal of notoriety, so its non-determinism gets applied to all of quantum mechanics via social mechanisms (which are far more complicated than any quantum mechanisms!).

So you can pick any interpretation you please. If you like determinism, there are plenty of options. However, one cannot use many of the basic tools of statistical mechanics to handle quantum scenarios, because the basic physics of quantum mechanics leads to situations where the basic assumptions of statistical mechanics become untenable. Your example of the result of the measurement happening in the equipment is an excellent example.
Like in statistical physics, the state of the measurement equipment can be modeled as a state vector, and it turns out to be a very reasonable assumption that it is randomly distributed. However, equipment designed to measure quantum effects is expressly designed to correlate strongly with the state of the particle under observation before measurement began. When the measurement is complete, the distribution of the state of the measurement equipment is decidedly poorly modeled as a state plus a perturbation with a small variance. The distribution is, instead, a very multimodal distribution, because it was correlated with the state of the particle, and most of the interesting measurements we want to take are those of a particle whose [unmeasured] state is well described by a multimodal distribution.

Forget interpretations. The predictions of quantum mechanics, which agree with all interpretations (by definition of "interpretation"), do not allow prediction of experimental/observational outcomes, no matter how much information is gathered about initial conditions. (You can't even get the classical information needed in classical physics, because of the uncertainty principle.) None of the interpretations challenge this, not even in principle. According to the math, which is wildly successful in its predictions, a given present does not determine the future. That is why quantum mechanics is said to be indeterministic, not because of any interpretation. It doesn't matter whether you believe in wave function collapse or not, or other worlds or not, or whatever. Saying the theory is deterministic because of some math involved in the calculation is not related to the fact that experimental outcomes cannot be predicted. The present does not determine the future.

• What makes you say that we can never gather enough information about a state? We are perfectly capable of engineering finite-dimensional quantum states, for example specific entangled qubit states.
The evolution of these states is then entirely deterministic (in fact, quantum computing wouldn't be possible otherwise). Similarly, we may not be capable of measuring, say, the position and the momentum of a particle, but why would your definition of a quantum system require the existence of such observables anyway? – level1807 May 17 at 19:01
• To continue, in principle there is nothing stopping us from engineering states that are arbitrarily close to, say, a plane wave with $\Delta p\to 0$ and $\Delta x\to \infty$. The fact that $\Delta x$ is large in no way means that we "don't know" what the state is — we do! It's a plane wave $e^{i p x}$! – level1807 May 17 at 19:05

If you learn quantum mechanics, you will see that the observables of any quantum system depend on the state of the system (final, initial, ground state or excited state). In theory, there are a number of interpretations of quantum mechanics (wiki, link).

The mathematical formulation of quantum mechanics is built on the notion of operators. When you do a measurement, you perturb the system state by applying an operator on it. The eigenvalue of the operator corresponds to the measured value of the system observable. However, each eigenvalue has a certain probability, and therefore by measuring (applying) an operator on the system state there will be a finite (or infinite) number of possible final states, each of them with a given probability. This is the essence of non-determinism in quantum mechanics.

The next question that arises is how non-determinism applies to the large-scale universe, and what the "reach" of non-deterministic phenomena in the universe is.

This matters because in classical theories (like general relativity and electromagnetism) you have, for example, the Einstein equations, which govern the dynamics and are fully deterministic.

The quantum state of a system is completely characterized by a state vector only when the system is a pure state.
The state vector evolves in two different ways, described by two postulates: the Schrödinger postulate (valid when there are no measurements) and the measurement postulate. The Schrödinger postulate describes a deterministic and reversible evolution $U$. The measurement postulate describes a non-deterministic and irreversible evolution $R$.

$R$ is not derivable from $U$. In fact $R$ is incompatible with $U$, and that is the reason why the founding fathers introduced two evolution postulates in QM. Indeed, assuming an initial superposition of two states for the composite supersystem (system + apparatus + environment)

$$|\Psi\rangle = a |A\rangle + b |B\rangle$$

the result of a measurement is either $|A\rangle$ or $|B\rangle$; but because these states are orthogonal, they cannot both have evolved from a single initial state by a deterministic, unitary evolution, since $|A\rangle = U |\Psi\rangle$ and $|B\rangle = U |\Psi\rangle$ would imply $\langle A|B\rangle = \langle\Psi |U^{\dagger} U | \Psi\rangle = 1$, which is incompatible with the requirement of orthogonality.

So, if the result of the measurement was $|B\rangle$, the evolution was $|B\rangle = R |\Psi\rangle$.

The fact that QM is probabilistic and not deterministic is forced by the four rules stated below. These rules cannot logically coexist in a way that provides determinism; they lead without effort to the probabilistic interpretation.

Yes, unfortunately (for me) I am not a physicist, so take this with a grain of salt.

Some thinking about this puzzling issue will lead you to these conclusions, based on well-known facts:

@Quantum world:

1) Entities have a 'spread' existence. (A kind of 'field of energy' which tries to 'fill' all space.)

2) Entities have some 'oscillatory' existence. (Which gives rise to 'interference' phenomena.)

3) Interactions between entities are 'discrete'.
(They exchange 'quanta' of some stuff.)

4) Interactions use the 'minimum amount' of some 'energy stuff'.

The interplay of these facts is what gives rise to the non-determinism (probability) in QM.

Let's think of a simple example:

Suppose you have 3 entities A, B and C (a 1-sender and 2-receivers scenario), where A is the source of some perturbation to be sent to B and C at the 'same time'. Let's think of the perturbation in practical terms (i.e., money) and assign it a unit of measure (dollars).

Now how would A send 2 dollars in total to both of them (B and C)?

Well, A should give them 1 dollar each and problem solved! However, there is a constraint here (remember #4), and that is: 'Interactions are only done with minimum currency!'

With that in mind, how can A give B and C one cent (minimum currency) at the same time? Well... it can't!

At each time (interaction), A must choose between B and C to give away each cent, until it completes the 2 dollars to both of them. And if you think a little bit about it, you realize that the only objective solution for A is to throw an imaginary coin each time to decide who will receive the 1 cent! [Of course, for this 1-sender and 2-receivers situation, a coin with 2 faces fits right! But for other scenarios, the coin or die will have to change.]

In the analog world of classical mechanics, A would send an infinitesimally small amount of money to both of them (no minimum-currency constraint, and at the same time!), and what we would see is a beautiful, continuous growth of B's and C's money pockets. No need to deal with probabilities!

If you think carefully, in plain simple terms: probability arises from the discrete nature of interactions between entities. This is the real deal which makes everything so strange and interesting.

[Hope this general and somewhat vague answer gives you a clue about why probability arises in the description offered by QM.]

The question now is: why does it have to be like that?
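The unitarity argument earlier in this thread (that $\langle A|B\rangle = \langle\Psi|U^{\dagger}U|\Psi\rangle = 1$, so two orthogonal outcomes cannot both arise deterministically from one initial state) can be checked numerically. The sketch below is illustrative only: it builds a random unitary and verifies that deterministic unitary evolution of the same state always yields overlap 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random unitary U via QR decomposition of a complex Gaussian matrix.
dim = 4
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
U, _ = np.linalg.qr(A)

# U is unitary: U† U = I, so inner products are preserved under evolution.
assert np.allclose(U.conj().T @ U, np.eye(dim))

psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)

# Deterministic unitary evolution of the same |psi> gives the same state,
# so <A|B> = <psi|U† U|psi> = <psi|psi> = 1: the outcomes cannot be orthogonal.
a = U @ psi
b = U @ psi
overlap = np.vdot(a, b)
print(abs(overlap))
```

This is why a second, non-unitary evolution $R$ is postulated for measurement: no unitary map can send one normalized state to two mutually orthogonal results.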
https://simple.m.wikipedia.org/wiki/Angular_momentum
"# Angular momentum\n\nmeasure of the extent to which an object will continue to rotate in the absence of an applied torque\n\nThe angular momentum or rotational momentum (L) of an object rotating about an axis is the product of its moment of inertia and its angular velocity:\n\n$L=I\\omega$",
null,
"where\n\n$I$",
null,
"is the moment of inertia (resistance to angular acceleration or deceleration, equal to the product of the mass and the square of its perpendicular distance from the axis of rotation);\n$\\omega \\$",
null,
"is the angular velocity.\n\nThere are two kinds of angular momentum: the spin angular momentum and the orbital angular momentum.\n\n## Spin angular momentum\n\nThe spin angular momentum is a kind of angular momentum for objects turning around an axis that goes through the object, like a top spinning around its center.\n\nObjects that are very spread out from the axis of rotation are very hard to start spinning, but once they get going, they are also hard to stop. We say, that is, it has a large moment of inertia. Similarly, it is easier to start an object spinning slowly (a small angular velocity) than it is to start it spinning fast (a large angular velocity). This is why the spin angular momentum depends both on how spread out the object is (moment of inertia) and how fast it is spinning (angular velocity).\n\n## Orbital angular momentum\n\nThe other kind of angular momentum is orbital angular momentum. This is the kind of angular momentum that planets orbiting around the Sun have, but that tops spinning about their axes do not.\n\nWe use orbital angular momentum when we talk about an object (like a planet) orbiting around some axis that is not moving (like the Sun). That is, part of its motion is in a direction that is neither towards nor away from the axis; at least part of its motion is going around the axis. The orbital angular momentum also measures how hard it would be to stop the object from continuing to orbit around the axis.\n\nAngular momentum is a conserved quantity—an object's angular momentum stays constant unless an external torque acts on it."
https://realreturns.blog/2013/01/18/volatility-control-long-term-risk-adjusted-returns/
"### Volatility Control: Long term risk adjusted returns\n\nVolatility control is an approach to portfolio management that deploys capital according to risk, as measured by volatility. Click here for an introduction to the approach or here for some frequently asked questions on volatility control.\n\nVolatility control has a positive effect on risk-adjusted investment performance, this is because increases in volatility are usually associated with equity market falls and in these situations the volatility control approach will de-gear or reduce exposure to equities during these times, preserving capital.\n\nWe have done a comprehensive analysis of the risk adjusted performance of volatility control historically, across a number of equity markets. We illustrate the results in terms of sharpe ratio, which is the ratio of the excess return (over an appropriate cash rate) to the volatility of the returns. To put it into context, if equities return 4%p.a. in excess of the cash rate, at a volatility of 20%p.a. this equates to a sharpe ratio of 0.2.\n\nThe table below illustrates the sharpe ratios of a volatility controlled approach to investing in a number of different markets, as compared to the sharpe ratio of the underlying index.",
null,
"Observations\n\n• Under all periods tested and markets the volatility controlled approach shows a higher Sharpe ratio than the underlying index\n• Over the longer time periods the sharpe ratio pick-up of volatility control averages around 0.1-0.15\n• To put this into context, a fixed allocation to equity has had a historical volatility of around 20%p.a. and a return over the risk free rate of around 4%p.a. this equates to a sharpe ratio of 0.2 (=4%/20%)\n• Applying a volatility control approach could be expected to increase the sharpe ratio to 0.3. This would mean that for a 10% volatility level, we could expect a 3%p.a. return in excess of the risk free rate. Or for a 12% volatility level we could expect a return above the risk free rate of 4%\n• In other words, given the risk adjusted return improvement a volatility controlled approach can be expected to deliver similar returns to a fixed market allocation, at a lower level of volatility\n• It is important to note at this point (and see the FAQ section in the appendix) that we are not suggesting volatility control is a “free lunch”, in any given year it can and has delivered lower returns than a fixed allocation – 2012 being an example and the approach will cut out the spectacular positive years that fixed allocations to equities do have.\n• Our quantitative results are supported by a study by Guido Geese published in the Journal of Indexes in October 2012 which also concludes:\n\n“Regarding target volatility indexes, we have shown that their long-run Sharpe ratio is always better than the Sharpe ratio of the underlying equity index as long as the target volatility level is chosen within reasonable boundaries”\n\nThe details of the volatility control approach used was as follows\n\n• Volatility target of 10% (although the risk adjusted return is independent of the level targeted).\n• The volatility measure was an exponentially weighted measure with a 50 day half life (better than an equally weighted measure as 
it does not drop suddenly due to single observations dropping out of a window, click here for more information on this measure).\n• The volatility was measured daily\n• There as a limit placed on the maximum and minimum sizes of rebalancing trade that could occur on any given day: the maximum was 5% and the minimum was 1%\n• There was a maximum exposure limit of 150% placed on the equity exposure in all cases. However this was never reached\n• Transaction costs were not taken into account in either the volatility control approach or the underlying index\n• For a strategy implemented through futures, we estimate that futures roll costs of a passive index tracking investment would be 12 bps per annum\n• We estimate the additional transaction costs associated with the volatility control strategy would be 3 bps per annum\n\nPosted in GK"
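A rough illustration of the mechanics described above is sketched below. This is not the author's code: the 10% target, 50-day half-life and 150% exposure cap come from the bullet list, but the return series is synthetic and the implementation details are assumed.

```python
import math
import random

def ewma_volatility(returns, half_life=50, periods_per_year=252):
    """Exponentially weighted volatility estimate, annualized."""
    lam = 0.5 ** (1.0 / half_life)      # decay chosen so weights halve every `half_life` days
    var = returns[0] ** 2
    for r in returns[1:]:
        var = lam * var + (1.0 - lam) * r ** 2
    return math.sqrt(var * periods_per_year)

def target_weight(vol_estimate, target_vol=0.10, max_exposure=1.5):
    """Equity weight that scales the estimated volatility to the target."""
    return min(max_exposure, target_vol / vol_estimate)

def sharpe_ratio(annual_excess_return, annual_vol):
    return annual_excess_return / annual_vol

random.seed(0)
# Synthetic daily returns with roughly 20% p.a. volatility.
daily = [random.gauss(0.0003, 0.0125) for _ in range(500)]
vol = ewma_volatility(daily)
print(round(vol, 3), round(target_weight(vol), 2), sharpe_ratio(0.04, 0.20))
```

With an estimated volatility near 20%, the target weight lands near 50% equity exposure, which is the de-gearing behavior the post describes; the final value reproduces the 4%/20% = 0.2 Sharpe example from the text.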
https://www.geeksforgeeks.org/count-subsets-having-product-divisible-by-k/?ref=rp
"Skip to content\nRelated Articles\nCount subsets having product divisible by K\n• Last Updated : 24 Apr, 2021\n\nGiven an array arr[] of size N and an integer K, the task is to count the number of subsets from the given array with product of elements divisible by K\n\nExamples:\n\nInput: arr[] = {1, 2, 3, 4, 5}, K = 60\nOutput: 4\nExplanation: Subsets whose product of elements is divisible by K(= 60) are { {1, 2, 3, 4, 5}, {2, 3, 4, 5}, {3, 4, 5}, {1, 3, 4, 5} }\n\nInput: arr[] = {1, 2, 3, 4, 5, 6}, K = 60\nOutput: 16\n\nNaive Approach: The simplest approach to solve this problem is to generate all possible subsets and for each subset, check if the product of its elements is divisible by K or not. If found to be true, then increment the count. Finally, print the count.\n\nTime Complexity: O(N * 2N)\nAuxiliary Space: O(N)\n\nEfficient Approach: To optimize the above approach the idea is to use Dynamic programming. Below is the recurrence relation and the base case:\n\nRecurrence Relation:\ncntSubDivK(N, rem) = cntSubDivK(N – 1, (rem * arr[N – 1]) % K) + cntSubDivK(N – 1, rem).\ncntSubDivK(N, rem) store the count of subset having product divisible by K.\nrem: Store the remainder when K divides the product of all elements of the subset.\n\nBase Case:\nif N == 0 and rem == 0 then return 1\nIf N == 0 and rem != 0 then return 0.\n\nFollow the steps below to solve the problem:\n\n• Initialize a 2D array, say dp[N][rem] to compute and store the values of all subproblems of the above recurrence relation.\n• Finally, return the value of dp[N][rem].\n\nBelow is the implementation of the above approach:\n\n## C++\n\n `// C++ program to implement``// the above approach` `#include ``using` `namespace` `std;` `// Function to count the subsets whose``// product of elements is divisible by K``int` `cntSubDivK(``int` `arr[], ``int` `N, ``int` `K,`` ``int` `rem, vector >& dp)``{`` ``// If count of elements`` ``// in the array is 0`` ``if` `(N == 0) {` ` ``// If rem is 0, then return 
1`` ``// Otherwise, return 0`` ``return` `rem == 0;`` ``}` ` ``// If already computed`` ``// subproblem occurred`` ``if` `(dp[N][rem] != -1) {`` ``return` `dp[N][rem];`` ``}` ` ``// Stores count of subsets having product`` ``// divisible by K when arr[N - 1]`` ``// present in the subset`` ``int` `X = cntSubDivK(arr, N - 1, K,`` ``(rem * arr[N - 1]) % K, dp);` ` ``// Stores count of subsets having product`` ``// divisible by K when arr[N - 1] not`` ``// present in the subset`` ``int` `Y = cntSubDivK(arr, N - 1, K,`` ``rem, dp);` ` ``// Return total subset`` ``return` `X + Y;``}` `// Utility Function to count the subsets whose``// product of elements is divisible by K``int` `UtilCntSubDivK(``int` `arr[], ``int` `N, ``int` `K)``{` ` ``// Initialize a 2D array to store values`` ``// of overlapping subproblems`` ``vector > dp(N + 1,`` ``vector<``int``>(K + 1, -1));` ` ``return` `cntSubDivK(arr, N, K, 1, dp);``}` `// Driver Code``int` `main()``{`` ``int` `arr[] = { 1, 2, 3, 4, 5, 6 };`` ``int` `K = 60;`` ``int` `N = ``sizeof``(arr) / ``sizeof``(arr);`` ``cout << UtilCntSubDivK(arr, N, K);``}`\n\n## Java\n\n `// Java program to implement``// the above approach``import` `java.util.*;` `class` `GFG{`` ` `// Function to count the subsets whose``// product of elements is divisible by K``static` `int` `cntSubDivK(``int` `arr[], ``int` `N, ``int` `K,`` ``int` `rem, ``int``[][]dp)``{`` ` ` ``// If count of elements`` ``// in the array is 0`` ``if` `(N == ``0``)`` ``{`` ` ` ``// If rem is 0, then return 1`` ``// Otherwise, return 0`` ``return` `rem == ``0` `? 
``1` `: ``0``;`` ``}` ` ``// If already computed`` ``// subproblem occurred`` ``if` `(dp[N][rem] != -``1``)`` ``{`` ``return` `dp[N][rem];`` ``}` ` ``// Stores count of subsets having product`` ``// divisible by K when arr[N - 1]`` ``// present in the subset`` ``int` `X = cntSubDivK(arr, N - ``1``, K,`` ``(rem * arr[N - ``1``]) % K, dp);` ` ``// Stores count of subsets having product`` ``// divisible by K when arr[N - 1] not`` ``// present in the subset`` ``int` `Y = cntSubDivK(arr, N - ``1``, K,`` ``rem, dp);` ` ``// Return total subset`` ``return` `X + Y;``}` `// Utility Function to count the subsets whose``// product of elements is divisible by K``static` `int` `UtilCntSubDivK(``int` `arr[], ``int` `N, ``int` `K)``{`` ` ` ``// Initialize a 2D array to store values`` ``// of overlapping subproblems`` ``int` `[][]dp = ``new` `int``[N + ``1``][K + ``1``];`` ` ` ``for``(``int` `i = ``0``; i < N + ``1``; i++)`` ``{`` ``for``(``int` `j = ``0``; j < K + ``1``; j++)`` ``dp[i][j] = -``1``;`` ``}`` ``return` `cntSubDivK(arr, N, K, ``1``, dp);``}` `// Driver Code``public` `static` `void` `main(String args[])``{`` ``int` `arr[] = { ``1``, ``2``, ``3``, ``4``, ``5``, ``6` `};`` ``int` `K = ``60``;`` ``int` `N = arr.length;`` ` ` ``System.out.println(UtilCntSubDivK(arr, N, K));``}``}` `// This code is contributed by SURENDRA_GANGWAR`\n\n## Python3\n\n `# Python3 program to``# implement the above``# approach` `# Function to count the``# subsets whose product``# of elements is divisible``# by K``def` `cntSubDivK(arr, N, K,`` ``rem, dp):` ` ``# If count of elements`` ``# in the array is 0`` ``if` `(N ``=``=` `0``):` ` ``# If rem is 0, then`` ``# return 1 Otherwise,`` ``# return 0`` ``return` `rem ``=``=` `0` ` ``# If already computed`` ``# subproblem occurred`` ``if` `(dp[N][rem] !``=` `-``1``):`` ``return` `dp[N][rem]` ` ``# Stores count of subsets`` ``# having product divisible`` ``# by K when arr[N - 1]`` ``# present in the subset`` ``X ``=` `cntSubDivK(arr, N ``-` `1``, K,`` 
``(rem ``*` `arr[N ``-` `1``]) ``%` `K, dp)` ` ``# Stores count of subsets having`` ``# product divisible by K when`` ``# arr[N - 1] not present in`` ``# the subset`` ``Y ``=` `cntSubDivK(arr, N ``-` `1``,`` ``K, rem, dp)` ` ``# Return total subset`` ``return` `X ``+` `Y` `# Utility Function to count``# the subsets whose product of``# elements is divisible by K``def` `UtilCntSubDivK(arr, N, K):` ` ``# Initialize a 2D array to`` ``# store values of overlapping`` ``# subproblems`` ``dp ``=` `[[``-``1` `for` `x ``in` `range``(K ``+` `1``)]`` ``for` `y ``in` `range``(N ``+` `1``)]` ` ``return` `cntSubDivK(arr, N,`` ``K, ``1``, dp)`` ` `# Driver Code``if` `__name__ ``=``=` `\"__main__\"``:` ` ``arr ``=` `[``1``, ``2``, ``3``,`` ``4``, ``5``, ``6``]`` ``K ``=` `60`` ``N ``=` `len``(arr)`` ``print``(UtilCntSubDivK(arr, N, K))` `# This code is contributed by Chitranayal`\n\n## C#\n\n `// C# program to implement``// the above approach``using` `System;` `class` `GFG{`` ` `// Function to count the subsets whose``// product of elements is divisible by K``static` `int` `cntSubDivK(``int``[] arr, ``int` `N, ``int` `K,`` ``int` `rem, ``int``[,] dp)``{`` ` ` ``// If count of elements`` ``// in the array is 0`` ``if` `(N == 0)`` ``{`` ` ` ``// If rem is 0, then return 1`` ``// Otherwise, return 0`` ``return` `rem == 0 ? 
1 : 0;`` ``}`` ` ` ``// If already computed`` ``// subproblem occurred`` ``if` `(dp[N, rem] != -1)`` ``{`` ``return` `dp[N, rem];`` ``}`` ` ` ``// Stores count of subsets having product`` ``// divisible by K when arr[N - 1]`` ``// present in the subset`` ``int` `X = cntSubDivK(arr, N - 1, K,`` ``(rem * arr[N - 1]) % K, dp);`` ` ` ``// Stores count of subsets having product`` ``// divisible by K when arr[N - 1] not`` ``// present in the subset`` ``int` `Y = cntSubDivK(arr, N - 1, K,`` ``rem, dp);`` ` ` ``// Return total subset`` ``return` `X + Y;``}`` ` `// Utility Function to count the subsets whose``// product of elements is divisible by K``static` `int` `UtilCntSubDivK(``int``[] arr, ``int` `N, ``int` `K)``{`` ` ` ``// Initialize a 2D array to store values`` ``// of overlapping subproblems`` ``int``[,] dp = ``new` `int``[N + 1, K + 1];`` ` ` ``for``(``int` `i = 0; i < N + 1; i++)`` ``{`` ``for``(``int` `j = 0; j < K + 1; j++)`` ``dp[i, j] = -1;`` ``}`` ``return` `cntSubDivK(arr, N, K, 1, dp);``}` `// Driver code``static` `void` `Main()``{`` ``int``[] arr = { 1, 2, 3, 4, 5, 6 };`` ``int` `K = 60;`` ``int` `N = arr.Length;` ` ``Console.WriteLine(UtilCntSubDivK(arr, N, K));``}``}` `// This code is contributed by divyeshrabadiya07`\n\n## Javascript\n\n ``\nOutput:\n`16`\n\nTime Complexity: O(N * K)\nSpace Complexity: O(N * K)\n\nAttention reader! Don’t stop learning now. Get hold of all the important DSA concepts with the DSA Self Paced Course at a student-friendly price and become industry ready. To complete your preparation from learning a language to DS Algo and many more, please refer Complete Interview Preparation Course.\n\nIn case you wish to attend live classes with industry experts, please refer Geeks Classes Live\n\nMy Personal Notes arrow_drop_up"
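The DP answer can be cross-checked against a brute-force enumeration of all subsets; this checker is an addition for verification, not part of the original article, and it confirms the two sample outputs (4 and 16):

```python
from itertools import combinations

def count_subsets_bruteforce(arr, K):
    """Count non-empty subsets whose product is divisible by K (O(N * 2^N))."""
    n = len(arr)
    count = 0
    for size in range(1, n + 1):
        for combo in combinations(arr, size):
            product = 1
            for value in combo:
                product *= value
            if product % K == 0:
                count += 1
    return count

print(count_subsets_bruteforce([1, 2, 3, 4, 5], 60))     # expected 4 per the examples
print(count_subsets_bruteforce([1, 2, 3, 4, 5, 6], 60))  # expected 16
```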
http://ww.talkreason.org/articles/super.cfm
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"On the Frontline",
null,
"What's New",
null,
"Table of Contents",
null,
"Index of Authors",
null,
"Index of Titles",
null,
"Index of Letters",
null,
"Mailing List\n\n subscribe to our mailing list:\n\nSECTIONS",
null,
"Critique of Intelligent Design",
null,
"Evolution vs. Creationism",
null,
"The Art of ID Stuntmen",
null,
"Faith vs Reason",
null,
"Anthropic Principle",
null,
"Autopsy of the Bible code",
null,
"Science and Religion",
null,
"Historical Notes",
null,
"Counter-Apologetics",
null,
"Serious Notions with a Smile",
null,
"Miscellaneous",
null,
"Letter Serial Correlation",
null,
"Mark Perakh's Web Site",
null,
"",
null,
"",
null,
"The Anthropic Principle Does Not Support Supernaturalism\n\nBy Michael Ikeda and Bill Jefferys\n\nPosted January 12, 2004\n\nContents:\n\n1. Introduction\n\nIt has recently been claimed, most prominently by Dr. Hugh Ross on his web site that the so-called \"fine-tuning\" of the constants of physics supports a supernatural origin of the universe. Specifically, it is claimed that many of the constants of physics must be within a very small range of their actual values, or else life could not exist in our universe. Since it is alleged that this range is very small, and since our very existence shows that our universe has values of these constants that would allow life to exist, it is argued that the probability that our universe arose by chance is so small that we must seek a supernatural origin of the universe.\n\nIn this article we will show that this argument is wrong. Not only is it wrong, but in fact we will show that the observation that the universe is \"fine-tuned\" in this sense can only count against a supernatural origin of the universe. And we shall furthermore show that with certain theologies suggested by deities that are both inscrutable and very powerful, the more \"finely-tuned\" the universe is, the more a supernatural origin of the universe is undermined.\n\n[Note added 020106: We have learned that the philosopher of science, Elliott Sober, has made some similar points in a recent article written for the Blackwell Guide to Philosophy of Religion. A draft copy can be obtained from his website. We have some small differences with Professor Sober (in particular, we think that his condition (A3) is too strong, and that a weaker version of (A3) actually gives a stronger result), but he has an excellent discussion of the role that selection bias plays where the bias is due to self-selection by sentient observers.]\n\nOur basic argument starts with a few very simple assumptions. 
We believe that anyone who accepts that the universe is "fine-tuned" for life would find it difficult not to accept these assumptions. They are:

a) Our universe exists and contains life.

b) Our universe is "life-friendly," that is, the conditions in our universe (such as physical laws, etc.) permit or are compatible with life existing naturalistically.

c) Life cannot exist in a universe that is governed solely by naturalistic law unless that universe is "life-friendly."

In this FAQ we will discuss only the Weak Anthropic Principle (WAP), since it is uncontroversial and generally accepted. We will not discuss the Strong Anthropic Principle (SAP), much less the Completely Ridiculous Anthropic Principle :-)

According to the WAP, which is embodied in assumption (c), the fact that life (and we as intelligent life along with it) exists in our universe, coupled with the assumption that the universe is governed by naturalistic law, implies that those laws must be "life-friendly." If they were not "life-friendly," then it is obvious that life could not exist in a universe governed solely by naturalistic law. However, it should be noted that a sufficiently powerful supernatural principle or entity (deity) could sustain life in a universe with laws that are not "life-friendly," simply by virtue of that entity's will and power.

We will show that if assumptions (a-c) are true, then the observation that our universe is "life-friendly" can never be evidence against the hypothesis that the universe is governed solely by naturalistic law. Moreover, "fine-tuning," in the sense that "life-friendly" laws are claimed to represent only a very small fraction of possible universes, can even undermine the hypothesis of a supernatural origin of the universe; and the more "finely-tuned" the universe is, the more this hypothesis can be undermined.

2. Traditional responses to the "fine-tuning" argument

There are a number of traditional arguments that have been made against the "fine-tuning" argument. We will state them here, and we think that they are valid, although our main interest will be directed towards some new insights arising from a deeper understanding of probability theory.

1) In proving our main result, we do not assume or contemplate that universes other than our own exist (e.g., as in cosmologies such as those proposed by A. Vilenkin ["Quantum creation of the universe," Phys Rev D Vol. 30, pp. 509-511 (1984)], André Linde ["The self-reproducing inflationary universe," Scientific American, November 1994, pp. 48-55], and most recently, Lee Smolin [Life of the Cosmos, Oxford University Press (1997)], or as in some kinds of "many worlds" quantum models). One argument against Ross has been to claim that there may be many universes with many different combinations of physical constants. If there are enough of them, a few would be able to support life solely by chance. It is hypothesized that we live in one of those few. Thus, this argument seeks to overcome the low probability of having a universe with life in it with a multiplicity of universes. A recent technical discussion of this idea by Garriga and Vilenkin can be found at General Relativity and Quantum Cosmology, abstract.

2) Others have argued against the assumption that the universe must have very narrowly constrained values of certain physical constants for life to exist in it. They have argued that life could exist in universes that are very different from ours, but it is only our insular ignorance of the physics of such universes that misleads us into thinking that a universe must be much like our own to sustain life. Indeed, virtually nothing is known about the possibility of life in universes that are very different from ours.
It could well be that most universes could support life, even if it is of a type that is completely unfamiliar to us. To assert that only universes very like our own could support life goes well beyond anything that we know today.

Indeed, it might well be that a fundamental "theory of everything" in physics would predict that only a very narrow range of physical constants, or even no range at all, would be possible. If this turns out to be the case, then the entire "fine-tuning" argument would be moot.

While recognizing the force and validity of these arguments, the main points we will make go in quite different directions, and show that even if Ross is correct about "fine-tuning" and even if ours is the only universe that exists, the "fine-tuning" argument fails.

3. Notation and some basic probability theory

In this section, we will introduce some necessary notation and discuss some basic probability theory needed in order to understand our points.

First, some notation. We introduce several predicates (statements which can have values true or false).

Let L="The universe exists and contains Life." L is clearly true for our universe (assumption a).

Let F="The conditions in the universe are 'life-Friendly,' in the sense described above." Ross, in his arguments, certainly assumes that F is true. So will we (assumption b). The negation, ~F, would be that the conditions are such that life cannot exist naturalistically, so that if life is present it must be because of some supernatural principle or entity.

Let N="The universe is governed solely by Naturalistic law." The negation, ~N, is that it is not governed solely by naturalistic law, that is, some non-naturalistic (supernaturalistic) principle or entity is involved. N and ~N are not assumptions; they are hypotheses to be tested.
However, we do not rule out either possibility at the outset; rather, we assume that each of them has some non-zero a-priori probability of being true.

Probability theory now allows us to write down some important relationships between these predicates. For example, assumption (c) can be written mathematically as N&L==>F ('==>' means logical implication). In the language of probability theory, this can be expressed as

    P(F|N&L) = 1

where P(A|B) is the probability that A is true, given that B is true [see footnote 1 for a formal mathematical definition], and '&' is logical conjunction.

4. Why the "fine-tuning" argument is invalid

Expressed in the language of probability theory, we understand the "fine-tuning" argument to claim that if naturalistic law applies, then the probability that a randomly-selected universe would be "life-friendly" is very small, or in mathematical terms, P(F|N)<<1. Notice that this condition is not a predicate like L, N and F; rather, it is a statement about the probability distribution P(F|N), considered as it applies to all possible universes. For this reason, it is not possible to express the "fine-tuning" condition in terms of one of the arguments A or B of a probability function P(A|B). It is, rather, a statement about how large those probabilities are.

The "fine-tuning" argument then reasons that if P(F|N)<<1, then it follows that P(N|F)<<1. In ordinary English, this says that if the probability that a randomly-selected universe would be life-friendly (given naturalism) is very small, then the probability that naturalism is true, given the observed fact that the universe is "life-friendly," is also very small. This, however, is an elementary if common blunder in probability theory. One cannot simply exchange the two arguments in a probability like P(F|N) and get a valid result.
A simple example will suffice to show this.

Example

Let A="I am holding a Royal Flush."

Let B="I will win the poker hand."

It is evident that P(A|B) is nearly 0. Almost all poker hands are won with hands other than a Royal Flush. On the other hand, it is equally clear that P(B|A) is nearly 1. If you have a Royal Flush, you are virtually certain to win the poker hand.

There is a second reason why this "fine-tuning" argument is wrong. It is that for an inference to be valid, it is necessary to take into account all known information that may be relevant to the conclusion. In the present case, we happen to know that life exists in our universe (i.e., that L is true). Therefore, it is invalid to make inferences about N if we fail to take into account the fact that L, as well as F, is already known to be true. It follows that any inferences about N must be conditioned upon both F and L. An example of this is seen in the next section.

The most important consequence of the previous paragraph is very simple: In inferring the probability that N is true, it is entirely irrelevant whether P(F|N) is large or small. It is entirely irrelevant whether the universe is "fine-tuned" or not. Only probabilities conditioned upon L are relevant to our inquiry.

Richard Harter <[email protected]> has suggested a somewhat different interpretation of the "fine-tuning" argument in E-mail (reproduced here with permission). He writes:

This takes care of the WAP; if one argues solely from the WAP the FAQ argument is correct. However the "fine tuning" argument is not (despite what its proponents say) a WAP argument; it is an inverse Bayesian argument. The argument runs thusly:

    P(F|~N) >> P(F|N)

ergo

    P(~N|F) >> P(N|F)

Considered as a formal inference this is a fallacy. None-the-less it is a normal rule of induction which is (usually) sound.
The reason is that for the "conclusion" not to hold we need

    P(N) >> P(~N)

[This is not the full condition but it is close enough for government work.]

There are two fallacies in this form of the argument. The first is the failure to condition on L, mentioned above. This in itself would render the argument invalid. The second is that the first line of the argument, P(F|~N) >> P(F|N), is merely an unsupported assertion. No one knows what the probability of a supernatural entity creating a universe that is F is! For example, a dilettante deity might never get around to creating any universes at all, much less ones capable of supporting life.

[Note added 010612: Since this was written, we have proved that if You, knowing as a sentient observer that L is true, adopt an a priori position that is neutral between N and ~N, i.e., that P(~N|L) is of the same order of magnitude as P(N|L), then when You learn that F is true and that P(F|N)<<1, You will conclude that P(F&L&~N)<<1. See Appendix I (Reply to Kwon) at the end of this essay for the proof. This observation is problematic for Harter's argument. For under these assumptions we have

    P(F&L&~N) = P(L|F&~N)P(F|~N)P(~N) << 1.

Thus under these assumptions it follows that at least one of P(L|F&~N), P(F|~N) or P(~N) is quite small. A small P(L|F&~N) says that it is almost certain that the supernatural deity, having created a "life-friendly" universe, would make it sterile (lifeless). A small P(F|~N) says that it is highly unlikely that this deity would even create a universe that is "life-friendly". Both of these undermine the usual concepts attributed to the deity by "intelligent design" theorists, although either would be consistent with a deity that was incompetent, a dilettante, or a "trickster". A small P(F|~N) is also consistent with a deity who makes many universes, most of them being ~F, with many of these ~F universes perhaps containing life (that is, ~F&L universes, as we discuss below).
A small P(~N) says that it is nearly certain that naturalism is true a priori and unconditioned on L, so that Harter's "escape" condition P(N)>>P(~N) in fact holds.

Please remember that if You are a sentient observer, You must already know that L is true, even before You learn anything about F or P(F|N). Thus it is legitimate, appropriate, and indeed required, for You to elicit Your prior on N versus ~N conditioned on L and use that as Your starting point. If You then retrodict that P(~N)<<1 as a consequence, all You are doing is eliciting the prior that You would have had in the absence of Your knowledge that You existed as a sentient observer. This is the only legitimate way to infer Your value of P(~N) unconditioned on L.]

5. Our main theorem

Having understood the previous discussion, and with our notation in hand, it is now easy to prove that the WAP does not support supernaturalism (which we take to be the negation ~N of N). Recall that the WAP can be written as P(F|N&L)=1. Then, by Bayes' Theorem [see footnote 2] we have

    P(N|F&L) = P(F|N&L)P(N|L)/P(F|L)
             = P(N|L)/P(F|L)
             >= P(N|L)

where '>=' means "greater than or equal to." The second line follows because P(F|N&L)=1, and the inequality of the third line follows because P(F|L) is a positive quantity less than or equal to 1. (The above demonstration is inspired by a recent article on talk.origins by Michael Ikeda <[email protected]>; we have simplified the proof in his article. The message ID for the cited article is <5j6dq8\[email protected]> for those who wish to search for it on dejanews.)

The inequality P(N|F&L)>=P(N|L) shows that the WAP supports (or at least does not undermine) the hypothesis that the universe is governed by naturalistic law. This result is, as we have emphasized, independent of how large or small P(F|N) is.
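The inequality is easy to check numerically. The following sketch builds a toy joint distribution over the predicates (N, F, L); all of the probability values are made-up illustrations (not claims about the real universe), with the single constraint that the WAP holds, i.e. P(F|N&L)=1:

```python
# Toy joint distribution over the eight (N, F, L) combinations.
# All numbers are illustrative; the only imposed constraint is the
# WAP, P(F|N&L) = 1, which forces P(N & ~F & L) = 0.
joint = {
    # (N,     F,     L):   probability
    (True,  True,  True):  0.20,
    (True,  True,  False): 0.30,
    (True,  False, True):  0.00,   # forced to 0 by the WAP
    (True,  False, False): 0.10,
    (False, True,  True):  0.05,
    (False, True,  False): 0.05,
    (False, False, True):  0.10,
    (False, False, False): 0.20,
}
assert abs(sum(joint.values()) - 1.0) < 1e-12

def prob(pred):
    """Total probability of all outcomes satisfying pred(n, f, l)."""
    return sum(p for (n, f, l), p in joint.items() if pred(n, f, l))

# P(N|L) and P(N|F&L) by the definition of conditional probability.
p_N_given_L  = prob(lambda n, f, l: n and l) / prob(lambda n, f, l: l)
p_N_given_FL = prob(lambda n, f, l: n and f and l) / prob(lambda n, f, l: f and l)

print(p_N_given_L, p_N_given_FL)
# Conditioning on F cannot decrease the probability of N, given L:
assert p_N_given_FL >= p_N_given_L
```

Changing the illustrative numbers (while keeping the WAP constraint) never produces a counterexample, as the theorem guarantees.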
The observation F cannot decrease the probability that N is true (given the known background information that life exists in our universe), and may well increase it.

Corollary: Since P(~N|F&L)=1-P(N|F&L) and similarly for P(~N|L), it follows that P(~N|F&L)<=P(~N|L). In other words, the observation F does not support supernaturalism (~N), and may well undermine it.

6. Another way to look at it

The thrust of practically all "Intelligent Design" and Creationist arguments (excepting the anthropic argument and perhaps a few others) is to show ~F, since it is evident, we think, that if ~F then we cannot have both life and a naturalistic universe. We evidently do have life, so the success of one of these arguments would clearly establish ~N. In other words, given our prior opinion P(N&L), where 0<P(N&L)<1 but otherwise unrestricted (thus we neither rule in nor rule out N initially), arguments like Behe's attempt to support ~F so as to undermine N:

    P(N|~F&L) < P(N|L).

But the "anthropic" argument is that observing F also undermines N:

    P(N|F&L) < P(N|L).

We assert that the intelligent design folks want these inequalities to be strict (otherwise there would be no point in their making the argument!)

From these two inequalities we readily derive a contradiction, as follows.
From the definition of conditional probability [see footnote 1], the two inequalities above yield

    P(N&~F&L) < P(N|L)P(~F&L),
    P(N&F&L)  < P(N|L)P(F&L)

so that

    P(N&L) = P(N&~F&L) + P(N&F&L)
           < P(N|L)(P(~F&L) + P(F&L))
           = P(N|L)P(L) = P(N&L),

a contradiction since the inequality is strict.

If we remove the restriction that the inequalities be strict, then the only case where both inequalities can be true is if

    P(N|~F&L) = P(N|L) and P(N|F&L) = P(N|L).

In other words, the only case where both can be true is if the information that the universe is "life-friendly" has no effect on the probability that it is naturalistic (given the existence of life); and this can only be the case if neither inequality is strict.

In essence, we see that the intelligent design folks who make the anthropic argument are really trying to have it both ways: They want observation of F to undermine N, and they also want observation of ~F to undermine N. That is, they want any observation whatsoever to undermine N! But the error is that the anthropic argument does not undermine N, it supports N. They can have one of the prongs of their argument, but they can't have both.

[Note added 010612: Some people have objected to us that Behe is not making the argument ~F, but is only making a statement that it is highly unlikely that certain of his "IC" structures could arise naturalistically. Our reading of Behe is that he is making an argument that it is impossible for this to happen (a form of ~F as we understand it), but even if we are wrong and he is not making this argument, the point of our comments in this section is that making the argument that the universe is F or is "fine-tuned" (P(F|N)<<1) does not support supernaturalism; the argument that should be made is that the universe is ~F, since this manifestly supports supernaturalism by refuting naturalism. See Appendix I (Reply to Kwon) at the end of this essay.]

7. Implications of "fine-tuning" versus mere "life-friendliness"

Ross' argument discusses the case where the conditions in our universe are not only "life-friendly," but they are also "fine-tuned," in the sense that only a very small fraction of possible universes can be "life-friendly." We have shown that regardless of how "finely-tuned" the laws of physics are, the observation that the universe is capable of sustaining life cannot undermine N.

As we have pointed out above, others have responded to the claim of "fine-tuning" in several ways. One way has been to point out that this claim is not corroborated by any theoretical understanding about what forms of life might arise in universes with different physical conditions than our own, or even any theoretical understanding about what kinds of universes are possible at all; it is basically a claim founded upon our own ignorance of physics. To those that make this point, the argument is about whether P(F|N) is really small (as Ross claims), or is in fact large. The point (against Ross) is essentially that Ross' crucial assumption is completely without support.

A second response is to point out that several theoretical lines of evidence indicate that many other, and perhaps even an infinite number of other universes, with varying sets of physical constants and conditions, might well exist, so that even if the probability that a given universe would have constants close to those of our own universe is small, the sheer number of such universes would virtually guarantee that some of them would possess constants that would allow life to arise.

Nevertheless, it is necessary to consider the implications of Ross' assertion that the universe is "fine-tuned." Suppose it is true that amongst all naturalistic universes, only a very small proportion could support life. What would this imply?

We have shown that the WAP tends to support N, and cannot undermine it.
This observation is independent of whether P(F|N) is small or large, since (as we have seen) the only probabilities that are significant for inference about N are those that are conditioned upon all relevant data at our disposal, including the fact that L is true. Therefore, regardless of the size of P(F|N), valid reasoning shows that observing that F is true cannot decrease the probability that N is true, and may increase it.

We believe that the real import of observing that P(F|N) is small (if indeed that is true) would be to strengthen Vilenkin/Linde/Smolin-type hypotheses that multiple universes with varying physical constants may exist. If indeed the universe is governed by naturalistic laws, and if indeed the probability that a universe governed by naturalistic laws can support life is small, then this supports a Vilenkin/Linde/Smolin model of multiple universes over a model that includes only a single universe with a single set of physical constants.

To see this, let S="there is only a Single universe," and M="there are Multiple universes." Let E="there Exists a universe with life." Clearly, P(E|N)<P(F|N), since it is possible that a universe that is "life-friendly" could still be barren. But, since L is true, E is also true, so observing L implies that we have also observed E.

Then, assuming that P(F|N)<1 is the probability that a single universe is "life-friendly," that this probability is the same for each "random" multiple universe as it would be for a single universe, and that the probability that a given universe exists is independent of the existence of other universes, it follows that

    P(E|S&N) = p = P(E|N) < P(F|N) < 1 (and for Ross, P(F|N)<<1);

    P(E|M&N) = 1 - (1-p)^m, where m is the number of universes if M is true.

This is less than 1 but approaches 1 (for fixed p) as m gets larger and larger. Since all the Multiple-universe proposals we have seen suggest that m is in fact infinite, it follows that P(E|M&N)=1.
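The approach to this limit is easy to see numerically. In this sketch both the per-universe probability p and the universe counts m are arbitrary illustrative values, not claims about actual universes:

```python
def p_exists_life(p, m):
    """P(E|M&N) for m independent universes, each containing life
    with probability p: the chance that at least one of the m
    universes contains life."""
    return 1.0 - (1.0 - p) ** m

p = 1e-3  # illustrative per-universe probability, not a measured value
for m in (1, 10, 1_000, 10_000):
    print(m, p_exists_life(p, m))
# As m grows, P(E|M&N) approaches 1, recovering the infinite-m limit
# discussed in the text.
```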
(If one postulates that m is finite, then the calculation depends explicitly on p and m; this is left as an exercise for the reader.)

Since

    P(S|E&N) = P(E|S&N)P(S|N)/P(E|N) and
    P(M|E&N) = P(E|M&N)P(M|N)/P(E|N),

with these assumptions it follows by division that

    P(M|E&N)     1     P(M|N)
    --------  =  -  x  ------ ,
    P(S|E&N)     p     P(S|N)

which shows that observing E (or L) increases the evidence for M against S in a naturalistic universe by a factor of at least 1/p. The smaller P(F|N)=p (that is, the more "finely-tuned" the universe is), the more likely it is that some form of multiple-universe hypothesis is true.

8. Theological considerations

The next section is rather more speculative, depending as it does upon theological notions that are hard to pin down, and therefore should be taken with large grains of salt. But it is worth considering what effect various theological hypotheses would have on this argument. It is interesting to ask the question, "given that observing that F is true cannot undermine N and may support it, by how much can N be strengthened (and ~N be undermined) when we observe that F is true?"

It is evident from the discussion of the main theorem that the key is the denominator P(F|L). The smaller that denominator, the greater the support for N. Explicitly we have

    P(F|L) = P(F|N&L)P(N|L) + P(F|~N&L)P(~N|L)

But since P(F|N&L)=1 we can simplify this to

    P(F|L) = P(N|L) + P(F|~N&L)P(~N|L).

Plugging this into the expression P(N|F&L)=P(N|L)/P(F|L) we obtain

    P(N|F&L) = P(N|L)/[P(N|L) + P(F|~N&L)P(~N|L)]
             = 1/[1 + P(F|~N&L)P(~N|L)/P(N|L)]
             = 1/[1 + C P(F|~N&L)],

where C=P(~N|L)/P(N|L) is the prior odds in favor of ~N against N. In other words, C is the odds that we would offer in favor of ~N over N before noting that the universe is "fine-tuned" for life.

A major controversy in statistics has been over the choice of prior probabilities (or in this case prior odds).
However, for our purposes this is not a significant consideration, as long as we don't choose C in such a way as to completely rule out either possibility (N or ~N), i.e., as long as we haven't made up our minds in advance. This means that any positive, finite value of C is acceptable.

One readily sees from this formula that for acceptable C

    (1) as P(F|~N&L) --> 0, P(N|F&L) --> 1;
    (2) as P(F|~N&L) --> 1, P(N|F&L) --> 1/[1 + P(~N|L)/P(N|L)] = P(N|L),

where '-->' means "approaches as a limit" and the last result follows from the fact that P(N|L)+P(~N|L)=1.

So, P(N|F&L) is a monotonically decreasing function of P(F|~N&L) bounded from below by P(N|L). This confirms the observation made earlier, that noting that F is true can never decrease the evidential support for N. Furthermore, the only case where the evidential support is unchanged is when P(F|~N&L) is identically 1. This is interesting, because it tells us that the only case where observing the truth of F does not increase the support for N is precisely the case when the likelihood function P(F|x&L), evaluated at F, and with x ranging over N and ~N, cannot distinguish between N and ~N. That is, the only way to prevent the observation F from increasing the support for N is to assert that ~N&L also requires F to be true. Under these circumstances we cannot distinguish between N and ~N on the basis of the data F. In a deep sense, the two hypotheses represent, and in fact are, the same hypothesis. Put another way, to assume that P(F|~N&L)=1 is to concede that life in the world actually arose by the operation of an agent that is observationally indistinguishable from naturalistic law, insofar as the observation F is concerned. In essence, any such agent is just an extreme version of the "God-of-the-gaps," whose existence has been made superfluous as far as the existence of life is concerned.
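The dependence of P(N|F&L) on the likelihood P(F|~N&L) can be tabulated directly from the formula above. In this sketch the prior odds C and the grid of likelihood values are arbitrary illustrative choices:

```python
def posterior_N(C, p_F_given_notN_L):
    """P(N|F&L) = 1 / (1 + C * P(F|~N&L)), where C = P(~N|L)/P(N|L)."""
    return 1.0 / (1.0 + C * p_F_given_notN_L)

# Neutral prior odds, P(~N|L) = P(N|L), assumed purely for illustration.
C = 1.0
for q in (1.0, 0.5, 0.1, 0.01, 1e-6):
    print(q, posterior_N(C, q))
# q = 1.0 gives P(N|F&L) = P(N|L) = 0.5 (no change in support for N);
# as q --> 0, P(N|F&L) --> 1, matching limits (1) and (2) in the text.
```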
Such an assumption would completely undermine the proposition that it is necessary to go outside of naturalistic law in order to explain the world as it is, although it doesn't undermine any argument for supernaturalism that doesn't rely on the universe being "life-friendly".

So, if supernaturalism is to be distinguished from naturalism on the basis of the fact that the universe is F, it must be the case that P(F|~N&L)<1. Otherwise, we are condemned to an unsatisfying kind of "God-of-the-gaps" theology. But what sort of theologies can we consider, and how would they affect this crucial probability?

To make these ideas more definite, we consider first a specific interpretation that is intended to imitate, albeit crudely, how the assumption of a relatively powerful and inscrutable deity (such as a generic Judeo-Christian-Islamic deity might be) could affect the calculation of the likelihood function P(F|~N&L).

We suggest that any reasonable version of supernaturalism with such a deity would result in a value of P(F|~N&L) that is, in fact, very small (assuming that only a small set of possible universes are F). The reason is that a sufficiently powerful deity could arrange things so that a universe with laws that are not "life-friendly" can sustain life. Since we do not know the purposes of such a deity, we must assign a significant amount of the likelihood function to that possibility. Furthermore, if such a deity creates universes and if the "fine-tuning" claims are correct, then most life-containing universes will be of this type (i.e., containing life despite not being "life-friendly"). Thus, all other things being equal, and if this is the sort of deity we are dealing with, we would expect to live in a universe that is ~F.

To assert that such a deity could only create universes containing life if the laws are life-friendly is to restrict the power of such a deity.
And to assert that such a deity would only create universes with life if the laws are life-friendly is to assert knowledge of that deity's purposes that many religions seem reluctant to claim. Indeed, any such assertion would tend to undermine the claim, made by many religions, that their deity can and does perform miracles that are contrary to naturalistic law, and recognizably so.

Our conclusion, therefore, is that not only does the observation F support N, but it supports it overwhelmingly against its negation ~N, if ~N means creation by a sufficiently powerful and inscrutable deity. This latter conclusion is, by the way, a consequence of the Bayesian Ockham's Razor [Jefferys, W.H. and Berger, J.O., "Ockham's Razor and Bayesian Analysis," American Scientist 80, 64-72 (1992)]. The point is that N predicts outcomes much more sharply and narrowly than does ~N; it is, in Popperian language, more easily falsifiable than is ~N. (We do not wish to get into a discussion of the Demarcation Problem here since that is out of the scope of this FAQ, though we do not regard it as a difficulty for our argument. For our purposes, we are simply making a statement about the consequences of the likelihood function having significant support on only a relatively small subset of possible outcomes.) Under these circumstances, the Bayesian Ockham's Razor shows that observing an outcome allowed by both N and ~N is likely to favor N over ~N. We refer the reader to the cited paper for a more detailed discussion of this point.

Aside from sharply limiting the likely actions of the deity (either by making it less powerful or asserting more human knowledge of the deity's intentions), we can think of only one way to avoid this conclusion. One might assert that any universe with life would appear to be "life-friendly" from the vantage point of the creatures living within it, regardless of the physical constants that such a universe were equipped with.
In such a case, observing F cannot change our opinion about the nature of the universe. This is certainly a possible way out for the supernaturalist, but this solution is not available to Ross because it contradicts his assertions that the values of certain physical constants do allow us to distinguish between universes that are "life-friendly" and those that are not. And, such an assumption does not come without cost; whether others would find it satisfactory is problematic. For example, what about miracles? If every universe with life looks "life-friendly" from the inside, might this not lead one to wonder if everything that happens therein would also look to its inhabitants like the result of the simple operation of naturalistic law? And then there is Ockham's Razor: What would be the point of postulating a supernatural entity if the predictions we get are indistinguishable from those of naturalistic law?

9. But which deity?

In the previous section, we have discussed just one of many sorts of deities that might exist. This one happens to be very powerful and rather inscrutable (and is intended to be a model of a generic Judeo-Christian-Islamic sort of deity, though believers are welcome to disagree and propose--and justify--their own interpretations of their favorite deity). However, there are many other sorts of deities that might be postulated as being responsible for the existence of the universe. There are somewhat more limited deities, such as Zeus/Jupiter, there are deities that share their existence with antagonistic deities such as the Zoroastrian Ahura-Mazda/Ahriman pair of deities, there are various Native American deities such as the trickster deity Coyote, there are Australian, Chinese, African, Japanese and East Indian deities, and even many other possible deities that no one on Earth has ever thought of.
There could be deities of lifeforms indigenous to planets around the star Arcturus that we should consider, for example.

Now when considering a multiplicity of deities, say D1, D2, ..., Di, ..., we would have to specify a value of the likelihood function for each individual deity, specifying what the implications would be if that deity were the actual deity that created the universe. In particular, with the "fine-tuning" argument in mind, we would have to specify P(F|Di&L) for every i (probably an infinite set of deities). Assuming that we have a mutually exclusive and exhaustive list of deities, we see the hypothesis ~N revealed to be composite, that is, it is a combination or union of the individual hypotheses Di (i=1,2,...). Our character set doesn't have the usual "wedge" character for "or" (logical disjunction), so we will use 'v' to represent this operation. We then have

    ~N = D1 v D2 v ... v Di v ...

Now, the total prior probability of ~N, P(~N|L), has to be divvied up amongst all of the individual subhypotheses Di:

    P(~N|L) = P(D1|L) + P(D2|L) + ... + P(Di|L) + ...

where 0<P(Di|L)<P(~N|L)<1 (assuming that we only consider deities that might exist, and that there are at least two of them). In general, each of the individual prior probabilities P(Di|L) would be very small, since there are so many possible deities. Only if some deities are a priori much more likely than others would any individual deity have an appreciable amount of prior probability.

This means that in general, P(Di|L)<<1 for all i.

Now when we originally considered just N and ~N, we calculated the posterior probability of N given L&F from the prior probabilities of N and ~N given L, and the likelihood functions. Here it would be simpler to look at prior and posterior odds.
These are derived straightforwardly from probabilities by the relation

    Odds = Probability/(1 - Probability).

This yields a relationship between the prior and posterior odds of N against ~N [using P(N|F&L)+P(~N|F&L)=1]:

                     P( N|F&L)    P(F| N&L)    P( N|L)
    Posterior Odds = ---------  = ---------  x -------
                     P(~N|F&L)    P(F|~N&L)    P(~N|L)

                   = (Bayes Factor) x (Prior Odds)

The Bayes Factor and Prior Odds are given straightforwardly by the two ratios in this formula.

Since P(F|N&L)=1 and P(F|~N&L)<=1, it follows that the posterior odds are greater than or equal to the prior odds (this is a restatement of our first theorem, in terms of odds). This means that observing that F is true cannot decrease our confidence that N is true.

But by using odds instead of probabilities, we can now consider the individual sub-hypotheses that make up ~N. For example, we can calculate prior and posterior odds of N against any individual Di. We find that

                     P( N|F&L)    P(F| N&L)    P( N|L)
    Posterior Odds = ---------  = ---------  x -------
                     P(Di|F&L)    P(F|Di&L)    P(Di|L)

This follows because (by footnote 2)

    P(N |F&L) = P(F| N&L)P( N|L)/P(F|L),
    P(Di|F&L) = P(F|Di&L)P(Di|L)/P(F|L),

and the P(F|L)'s cancel out when you take the ratio.

Now, even if P(F|Di&L)=1, which is the maximum possible, the posterior odds against Di may still be quite large. The reason for this is that the prior probability of ~N has to be shared out amongst a large number of hypotheses Dj, each one greedily demanding its own share of the limited amount of prior probability available. On the other hand, the hypothesis N has no others to share with. In contrast to ~N, which is a compound hypothesis, N is a simple hypothesis.
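The effect of sharing P(~N|L) among many candidate deities can be made concrete. In this sketch the number of deities k and the prior probabilities are illustrative assumptions, and each deity is even granted the most favorable possible likelihood, P(F|Di&L)=1:

```python
k = 1_000             # illustrative number of candidate deities
p_notN_given_L = 0.5  # assumed prior: naturalism and supernaturalism equally likely
p_N_given_L = 1.0 - p_notN_given_L

# Split the supernatural prior evenly: no deity is favored a priori.
p_Di_given_L = p_notN_given_L / k

# Likelihoods: P(F|N&L) = 1 by the WAP; grant each deity the maximum
# P(F|Di&L) = 1 as well, the most favorable case for supernaturalism.
posterior_odds_N_vs_Di = (1.0 * p_N_given_L) / (1.0 * p_Di_given_L)
print(posterior_odds_N_vs_Di)
# ~1000: even in the most favorable case, N ends up about k times
# more probable than any single deity Di.
```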
As a consequence, and again assuming that no particular deity is a priori much more likely than any other (it would be incumbent upon the proposer of such a deity to explain why his favorite deity is so much more likely than the others), it follows that the hypothesis of naturalism will end up being much more probable than the hypothesis of any particular deity Di.

This phenomenon is a second manifestation of the Bayesian Ockham's Razor discussed in the Jefferys/Berger article (cited above).

In theory it is now straightforward to calculate the posterior odds of N against ~N if we don't particularly care which deity is the right one. Since the Di form a mutually exclusive and exhaustive set of hypotheses whose union is ~N, ordinary probability theory gives us

    P(~N|F&L) = P(D1|F&L) + P(D2|F&L) + ...
              = [P(F|D1&L)P(D1|L) + P(F|D2&L)P(D2|L) + ...]/P(F|L)

Assuming we know these numbers, we can now calculate the posterior odds of N against ~N by dividing the above expression into the one we found previously for P(N|F&L). Of course, in practice this may be difficult! However, as can be seen from this formula, the deities Di that contribute most to the denominator (that is, to the supernaturalistic hypothesis) will be the ones that have the largest values of the likelihood function P(F|Di&L) or the largest prior probability P(Di|L) or both. In the first case, it will be because the particular deity is closer to predicting what naturalism predicts (as regards F), and is therefore closer to being a "God-of-the-gaps" deity; in the second, it will be because we already favored that particular deity over others a priori.

Some make the mistake of thinking that "fine-tuning" and the anthropic principle support supernaturalism. This mistake has two sources.

The first and most important of these arises from confusing entirely different conditional probabilities.
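The sum over the Di, and the resulting Ockham effect, can be sketched numerically. All of the numbers here are illustrative assumptions: a neutral prior split evenly among 1000 deities, each generously granted the maximum possible likelihood P(F|Di&L) = 1:

```python
# Sketch: posterior of N against the composite ~N, where ~N = D1 v D2 v ... v Dk.
def posterior_prob_n(prior_n, priors_d, likes_d, like_n=1.0):
    """P(N|F&L), with P(F|N&L) = like_n (= 1 by the WAP) and per-deity terms."""
    num_n = like_n * prior_n
    num_not_n = sum(p * l for p, l in zip(priors_d, likes_d))
    return num_n / (num_n + num_not_n)

k = 1000
priors_d = [0.5 / k] * k           # P(~N|L) = 0.5 split evenly among the Di
likes_d = [1.0] * k                # grant every deity the maximum P(F|Di&L) = 1

p_n = posterior_prob_n(0.5, priors_d, likes_d)                # ~0.5 against all of ~N together
odds_vs_one_di = (1.0 * 0.5) / (likes_d[0] * priors_d[0])     # ~1000 against any single Di
```

Even in this most favorable case for the supernaturalistic side, N merely ties the whole composite ~N, while remaining about a thousand times more probable than any individual deity.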
If one observes that P(F|N) is small (since most hypothetical naturalistic universes are not "fine-tuned" for life), one might be tempted to turn the probability around and decide, incorrectly, that P(N|F) is also small. But as we have seen, this is an elementary blunder in probability theory. The reasoning runs: we find ourselves in a universe that is "fine-tuned" for life, which would be unlikely to come about by chance (because P(F|N) is small); therefore (we conclude incorrectly) P(N|F) must also be small. The mistake lies in confusing two entirely different conditional probabilities. Most actual outcomes are, in fact, highly improbable, but it does not follow that the hypotheses that they are conditioned upon are themselves highly improbable. It is therefore fallacious to reason that if we have observed an improbable outcome, a hypothesis that generates that outcome must itself be improbable. One must compare the probabilities of obtaining the observed outcome under all hypotheses. In general, most, if not all, of these probabilities will be very small, but some hypotheses will turn out to be much more favored by the actual outcome we have observed than others.

The second source of confusion is that one must do the calculations taking into account all the information at hand. In the present case, that includes the fact that life is known to exist in our universe.
The possible existence of hypothetical naturalistic universes where life does not exist is entirely irrelevant to the question at hand, which must be based on the data we actually have.

In our view, similar fallacious reasoning may well underlie many other arguments that have been raised against naturalism, not excluding design and "God-of-the-Gaps" arguments such as Michael Behe's "Irreducible Complexity" argument (in his book, Darwin's Black Box), and William Dembski's "Complex Specified Information," as described in his dissertation (University of Illinois at Chicago). We conclude that whatever their rhetorical appeal, such arguments need to be examined much more carefully than has happened so far to see if they have any validity. But that discussion is outside the scope of this article.

Bottom line: The anthropic argument should be dropped. It is wrong. "Intelligent design" folks should stick to trying to undermine N by showing ~F. That's their only hope (though we believe it to be a forlorn one).

    Michael Ikeda, Statistical Research Division, Bureau of the Census, Washington DC 20233
    Bill Jefferys, Department of Astronomy, University of Texas, Austin TX 78712

Please E-mail comments on this proposed FAQ to Bill Jefferys ([email protected]).

Michael Ikeda's work on this article was done on his own time and not as part of his official duties. The authors' affiliations are for identification only. The opinions expressed herein are those of the authors, and do not necessarily represent the opinions of the authors' employers.

Copyright (C) 1997-2002 by Michael Ikeda and Bill Jefferys. Portions of this FAQ are Copyright (C) 1997 by Richard Harter. All Rights Reserved.

11. Footnotes

1. By definition, P(A|B) = P(A&B)/P(B); it follows that also P(A|B&C) = P(A&B|C)/P(B|C).

2. We use Bayes' Theorem in the form

    P(A|B&K) = P(B|A&K)P(A|K)/P(B|K)

which follows straightforwardly from the identity

    P(A|B&K)P(B|K) = P(A&B|K) = P(B|A&K)P(A|K)

(a consequence of footnote 1), assuming that P(B|K) > 0.

12. Appendix I: Reply to Kwon (April 30, 2001)

David Kwon has posted a web page in which he claims to have refuted the arguments in our article. However, he has made a simple error, which we detail below, along with comments on some of his other assertions.

[Note added 040109: Kwon's original article has disappeared from the web. The above link is to the last version of his article archived by the Internet Wayback Machine via Makeashorterlink.com.]

Kwon's Equation (3) reads as follows:

    P(N|F&L) = P(N&F&L) / {P(~N&F&L) + P(N&F&L)}

This is an elementary result of probability theory and we agree with it. Kwon then goes on and assumes what he calls the "fine-tuning" condition P(F|N) << 1, from which he correctly derives Equation (8), the important part of which reads

    P(N&F&L) << 1

From these two results (3 and 8) Kwon derives

    P(N|F&L) << 1 unless P(~N&F&L) << 1

Unfortunately, nothing in Kwon's "proof" shows that P(~N&F&L) is not << 1, so he cannot assert unconditionally that P(N|F&L) << 1 as a consequence of his assumptions. He asserts:

"The only way not to come to this conclusion [that P(N|F&L)<<1] is to start with an a priori assumption of P(~N&F&L)<<1. In other words, the only way to hold on to naturalism is by assuming that theism is virtually impossible to begin with."

This, however, is incorrect, and here the "proof" falls apart. Kwon apparently recognizes that according to his Equation (3), the value of P(N|F&L) is not governed by the actual size of P(N&F&L), but instead by the relative sizes of P(N&F&L) and P(~N&F&L).
In particular, if P(N&F&L) << P(~N&F&L), then P(N|F&L) will be close to zero; if P(N&F&L) is approximately equal to P(~N&F&L), then P(N|F&L) will be of order one-half; and if P(N&F&L) >> P(~N&F&L), then P(N|F&L) will be nearly unity. Therefore, we need to look at the ratio R = P(N&F&L)/P(~N&F&L) to see what factors govern its size and what assumptions this entails.

We obtain:

    R = P(N&F&L) / P(~N&F&L)
      = {P(F|N&L) P(N&L)} / {P(F|~N&L) P(~N&L)}    (A)
      = P(N&L) / {P(F|~N&L) P(~N&L)}               (B)
     >= P(N&L) / P(~N&L)                           (C)
      = {P(N|L) P(L)} / {P(~N|L) P(L)}             (D)
      = P(N|L) / P(~N|L)                           (E)

Here, (A) and (D) follow from the definition of conditional probability; (B) by the WAP--which Kwon says he accepts--and which asserts that P(F|N&L) = 1; (C) because the probability P(F|~N&L) in the denominator is <= 1; and (E) by cancellation of P(L) in numerator and denominator.

Thus we see that in fact the ratio R cannot be small unless P(N|L)/P(~N|L) is also small. Therefore we cannot conclude that P(N|F&L) << 1 unless P(N|L)/P(~N|L) << 1--regardless of the size of P(N&F&L). But what is P(N|L)/P(~N|L)? Why, it is just the prior odds ratio that You assign to describe Your relative belief in N and ~N before You learn that F is true. Thus, although Kwon is correct in noting that the only way to keep P(N|F&L) from being very small is to have P(~N&F&L) << 1, this does not represent a prior commitment to naturalism as he asserts. Indeed, a prior commitment to naturalism would be to assume that P(N|L)/P(~N|L) >> 1, and as (E) shows, if we assume P(N|L)/P(~N|L) of order unity, which reflects a neutral prior position between N and ~N, and not a prior commitment to naturalism, we will end up being at least neutral between N and ~N after observing that F is true, regardless of the size of P(N&F&L) and P(F|N).

Indeed, it requires a prior commitment to supernaturalism to get P(N|F&L) << 1, because You would have to presume a priori that P(N|L) << P(~N|L).
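The chain (A)-(E) can be checked numerically. In this sketch P(F|N&L) = 1 comes from the WAP; the prior, P(L), and P(F|~N&L) are made-up illustrative values:

```python
def ratio_r(p_n_given_l, p_l, p_f_given_notn_l):
    """R = P(N&F&L)/P(~N&F&L), expanded as in steps (A)-(B) with P(F|N&L) = 1."""
    num = 1.0 * p_n_given_l * p_l                        # numerator after step (B)
    den = p_f_given_notn_l * (1.0 - p_n_given_l) * p_l   # denominator after step (B)
    return num / den

# With a neutral prior P(N|L) = 0.5, R = 1/P(F|~N&L) regardless of P(L):
r_small_l = ratio_r(0.5, 1e-9, 0.4)    # 2.5
r_large_l = ratio_r(0.5, 0.3, 0.4)     # also 2.5: P(L) cancels, as step (E) says
```

The ratio is never smaller than the prior odds P(N|L)/P(~N|L), which is the bound (C) in the chain.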
Kwon has it exactly backwards.

So the absolute sizes of P(N&F&L) and P(F|N) do not tell us anything about P(N|F&L); this is a confusion between conditional and unconditional probability. The only thing that counts is the ratio R. Kwon's calculation in his steps (4-8) is simply irrelevant to the final result. Indeed, we have the following theorem:

Theorem: If P(F|N) << 1 and You are exactly neutral between N and ~N before learning F, then P(~N&F&L) << 1.

Proof: By the WAP, N&L implies F, so under the assumptions we have P(F&N&L) = P(N&L) = P(N|L)P(L); and by the fine-tuning condition, P(F&N&L) <= P(F|N)P(N) << 1. But if we are exactly neutral between N and ~N before learning F, we have P(N|L) = 0.5 = O(1), so the unconditional probability P(L) << 1. But by standard probability theory P(~N&F&L) <= P(L) << 1. QED.

Thus, far from reflecting a prior commitment to naturalism as Kwon claims, the result P(~N&F&L) << 1 is a consequence of the fine tuning condition together with the adoption of an at least neutral prior position on N versus ~N. It is due to the fact that P(N&L&F) and P(~N&L&F) both have P(L) << 1 as a factor when they are expanded using the definition of conditional probability.

Furthermore, it is even possible for P(~N|F&L) to be very small (and therefore P(N|F&L) close to unity) without making a prior commitment to naturalism. For example, suppose we adopt the neutral position P(N|L) = P(~N|L) = 0.5; then from (B) we find that R = 1/P(F|~N&L), and if P(F|~N&L) << 1 then R >> 1 and P(N|F&L) is close to unity. But what does P(F|~N&L) << 1 mean? Is this a "prior commitment to naturalism"? No, a prior commitment to naturalism would involve some conditional probability on N, not some conditional probability on F. The condition P(F|~N&L) << 1 actually means that it is likely that an inhabitant of a supernaturalistically created universe would find that it is ~F: a universe where life exists despite the fact that it could not exist naturalistically, for example as a consequence of the suspension of natural law by the supernatural creator.
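The theorem can be illustrated with numbers. The value chosen for P(F&N&L) is an illustrative assumption standing in for "very small, by fine tuning"; the neutrality assumption fixes P(N|L) = 0.5:

```python
# Sketch of the theorem: if P(F&N&L) = P(N|L)*P(L) << 1 (fine tuning plus WAP)
# and P(N|L) = 0.5 (exact neutrality), then P(L) << 1; and since ~N&F&L
# implies L, P(~N&F&L) <= P(L) << 1 as well.
def p_l_from_neutrality(p_f_and_n_and_l, p_n_given_l=0.5):
    """Recover P(L) from P(F&N&L) = P(N|L)*P(L), given the neutral prior."""
    return p_f_and_n_and_l / p_n_given_l

p_fnl = 1e-10                        # assumed tiny, from the fine-tuning condition
p_l = p_l_from_neutrality(p_fnl)     # 2e-10: P(L) itself must be tiny
bound_on_p_notn_f_l = p_l            # P(~N&F&L) <= P(L), so it is tiny too
```

So the smallness of P(~N&F&L) falls out of the premises; it is not an extra assumption smuggled in.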
We discussed this extensively in our article. Indeed, without psychoanalyzing the Deity and analyzing its powers and intentions, it is a priori quite likely that the Deity might create universes that are ~F&L, for such universes are not excluded unless we know something about this Deity that would prevent it from creating such universes. An example of such a universe would be Paradise, and it seems unlikely that enthusiasts of the "fine-tuning" argument would be willing to say that the Deity would not create anything like Paradise. But the only way for them to escape from P(F|~N&L) << 1 would be for them to assert that the Deity would only, or mostly, create universes that, if they contain life, are F, and we see no justification for such an assumption.

Kwon makes some other incorrect statements later in his web article. He says that our argument "incorrectly attributes significance to P(N|L)." Kwon here appears to have missed the fact that we are talking about Bayesian probabilities. The probability P(N|L) refers to our universe, and is Your Bayesian prior probability that N is true, given that You know that L is true (which must be the case, since it is a condition of reasoning that You be alive), but before You learn that F is true. It is a reflection of Your epistemological condition, or state of knowledge, at a particular moment in time. Thus, P(N|L) has a perfectly definite meaning in our universe, although the value of P(N|L) will differ from individual to individual, because every individual has different background information (not explicitly called out here, but mentioned in our article).

Furthermore, Kwon is incorrect when he states that "P(N|L) is irrelevant to our universe for the same reason that P(N|F) is irrelevant." We never said that P(N|F) is irrelevant, only that it is irrelevant for inference. The reason why P(N|F) is irrelevant for inference is that no sentient being is unaware of L as background information.
Every sentient being knows that he is alive and therefore knows that L is true; thus every final probability statement that he makes must be conditioned on L. This is not true of F. There are sentient beings in our universe, indeed in our world, that do not yet know that F is true. Most schoolchildren do not know that F is true, although they know that L is true. Probably most adults do not know that F is true. Thus, Kwon errs in drawing a parallel between P(N|L) and P(N|F).

Kwon started with the perfectly reasonable proposal that "fine tuning" is best defined by P(F|N) << 1, and attempted to derive his result. That he was unable to do this comes as no surprise to us, because one of us [whj] spent the better part of a year trying to get useful information from propositions such as P(F|N) << 1, without success. All such attempts were fruitless, and the reason why is seen in our discussion. For example, suppose we were to assume in addition that P(F|~N) = 1. Even then, no useful result can be derived, for from this we can only determine the obvious fact that P(F&L&~N) <= 1, which gives no useful information about the crucial ratio R. The inequality goes in the wrong direction! Thus, "fine tuning"--P(F|N) << 1--tells us nothing useful, which is why in our article we concentrated instead on finding out what "life friendliness"--F--and the WAP can tell us.

Kwon says, "We have always known that F is true for our universe..." This is false. In fact, the suspicion that F is true is relatively recent, going back only to Brandon Carter's seminal papers in the mid-1970s.
Earlier, physicists such as Dirac had in fact speculated that the values of some fundamental physical constants (e.g., the fine structure constant) might have been very different in the past, which would violate F, and somewhat later other scientists (for example, Fred Hoyle in the early 1950s) used the assumption that F is true in order to predict certain physical phenomena, which were later found to be the case. Had those observations NOT been found to be true, F would have been refuted, and we would seriously have to consider ~N. Even today we do not know that our universe is F--"life-friendly"--in the sense that we use the term in our article. We strongly suspect that it is true, but it is conceivable that someone will make a WAP prediction that will turn out to be false and which might refute F.

Kwon incorrectly asserts that the idea that there may be other universes is "simply unscientific." Certainly many highly respected cosmologists and physicists, like Andrei Linde (Stanford), Lee Smolin (Harvard), Alexander Vilenkin (Tufts), and Nobel laureate Steven Weinberg (Texas), would disagree with this statement. Kwon claims that the hypothesis of other universes "cannot be tested." While we might agree that testing the hypothesis of other universes will be difficult, we do not agree that the hypothesis is untestable, and neither do the scientists who work in this area. Some specific tests have been suggested. For example, David Deutsch has proposed specific tests of the Everett-Wheeler interpretation of quantum mechanics, commonly known as the "Many-Worlds" hypothesis. And recently an article was published that proposed another way that other universes might be detected (Science, Vol. 292, pp. 189-190; original paper archived as The Ekpyrotic Universe: Colliding Branes and the Origin of the Hot Big Bang). Regardless, our argument is not dependent on the notion that there are many other universes.
It stands on its own.

Kwon misunderstands the point of the "god of the gaps" argument. The problem isn't that the gap is being filled by a god; the problem is what happens if the gap is filled by physics. Then the god that filled the gap gets smaller. This is a theological problem, not an epistemological or scientific problem. We agree with Kwon that there are gaps in our physical explanation of the universe that may never be filled; but it is hoping against hope that we will never fill any of the gaps currently being touted by "intelligent design theorists" as proof of supernaturalism. Some of them are certain to be filled in time, and each time this happens, the god of the intelligent designers will be diminished. (Indeed, some of them were filled even before the recent crop of "ID theorists" made their arguments--this is true of some of Michael Behe's examples, for which evolutionary pathways had already been proposed even before Behe published his book.)

As to Kwon's last point, that we incorrectly claim that "intelligent design theorists" incoherently assert both F and ~F: we believe that it is a correct statement that at least some are arguing ~F. It is our impression, for example, that Michael Behe is arguing that it is actually impossible, and not just highly unlikely, for certain "irreducibly complex" (IC) structures to evolve without supernatural intervention, and that is a form of ~F. Regardless, even if no one is attempting to argue from ~F to ~N, our point still stands. Attempts to prove ~N that argue from either F or P(F|N) << 1 or both do not work. But attempts to prove ~N by showing ~F would work. Thus, people making anthropic and "fine tuning" arguments have hold of the wrong end of the stick. They should be trying to show that the universe is not F.
It is clear that showing that the universe is not F would at one stroke prove ~N; it follows that showing that the universe is F can only undermine ~N and support N. This is an elementary result of probability theory, since it is not possible that observations of F as well as ~F would both support ~N: since it is trivially true that observing ~F does support ~N, observing F must undermine it. Put another way, it seems to us that Michael Behe--if we understand him--is making the right argument from a logical and inferential point of view, and Hugh Ross is making the wrong argument. If it turns out that Behe is not making the argument we think he is, then it is still the case that Hugh Ross is making the wrong argument.

Kwon makes some remarks about "nontheists" that seem to indicate that he thinks that only "nontheists" would argue as we have. This is not the case. The issue here is whether the "fine tuning" argument is correct. It is exactly analogous to the centuries of work done on Fermat's last theorem. It is likely that most mathematicians thought that the theorem was true for most of that time, yet they continued to reject proofs that had flaws in them. They rejected them not because they thought Fermat's last theorem was false, but because the proofs were wrong. They even rejected Wiles' first attempt at a proof, because it was (slightly) flawed. In the same way, a theist can and should reject a flawed "proof" of the existence of God. Our argument is that the fine tuning arguments are wrong, and no one should draw any conclusions about our personal beliefs from the fact that we say that these arguments are wrong.

Conclusion: Kwon's "proof" is fatally flawed. He incorrectly asserts that the only way to keep P(N|F&L) from being very small is to assume naturalism a priori. Quite the contrary: the only way to make P(N|F&L) small is to assume supernaturalism a priori.
Kwon apparently does not understand the significance of some of the Bayesian probabilities we use; this is forgivable in a sense, since Bayesian probability theory is still misunderstood by most people, even those with some training in probability theory...but it means that Kwon should withdraw these comments until he understands Bayesian probability theory well enough to criticize it. Kwon's assertion that we have always known that our universe is F is false; his assertion that the existence of other universes is untestable is also false, and in any case is not relevant to our main argument. Finally, he mistakenly thinks that the god-of-the-gaps argument somehow tells against science. It does not, since it is purely a theological conundrum, not a scientific one.

Nonetheless, we thank David Kwon for his serious and attentive reading of our article and for his comments. He is the first to attempt a mathematical rather than a polemical refutation of our argument. His argument fails because, as we show here, it isn't possible to derive anything useful from the fine-tuning proposition P(F|N) << 1. When all factors are taken into account, it is clear that the only way to end up with a final result that P(N|F&L) << 1 is to assume at the outset that supernaturalism is almost surely true, thus begging the question.

M. I.
W. J.
April 30, 2001

[Note added 010613: When we posted this response, we informed Mr. Kwon, so that he could either respond to our criticisms or withdraw his web page. We regret to say that up to now he has done neither.]

[Note added 040109: Kwon has never responded to our criticisms; his web page disappeared when he apparently finished his career as a Berkeley graduate student. It is archived and can be obtained courtesy of the Internet Wayback Machine via Makeashorterlink.com.]

This article was first posted at Bill Jefferys' Home Page.
Source: http://docs.gl/gl3/glBlendFuncSeparate
# glBlendFuncSeparate

## Name

glBlendFuncSeparate — specify pixel arithmetic for RGB and alpha components separately

## C Specification

    void glBlendFuncSeparate(GLenum srcRGB, GLenum dstRGB, GLenum srcAlpha, GLenum dstAlpha);

## Parameters

srcRGB

Specifies how the red, green, and blue source blending factors are computed. The initial value is GL_ONE.

dstRGB

Specifies how the red, green, and blue destination blending factors are computed. The initial value is GL_ZERO.

srcAlpha

Specifies how the alpha source blending factor is computed. The initial value is GL_ONE.

dstAlpha

Specifies how the alpha destination blending factor is computed. The initial value is GL_ZERO.

## Description

Pixels can be drawn using a function that blends the incoming (source) RGBA values with the RGBA values that are already in the frame buffer (the destination values). Blending is initially disabled. Use glEnable and glDisable with argument GL_BLEND to enable and disable blending.

glBlendFuncSeparate defines the operation of blending when it is enabled. srcRGB specifies which method is used to scale the source RGB-color components. dstRGB specifies which method is used to scale the destination RGB-color components. Likewise, srcAlpha specifies which method is used to scale the source alpha color component, and dstAlpha specifies which method is used to scale the destination alpha component. The possible methods are described in the following table. Each method defines four scale factors, one each for red, green, blue, and alpha.

In the table and in subsequent equations, first source, second source, and destination color components are referred to as (Rs0, Gs0, Bs0, As0), (Rs1, Gs1, Bs1, As1), and (Rd, Gd, Bd, Ad), respectively. The color specified by glBlendColor is referred to as (Rc, Gc, Bc, Ac).
They are understood to have integer values between 0 and (kR, kG, kB, kA), where

    kc = 2^mc - 1

and (mR, mG, mB, mA) is the number of red, green, blue, and alpha bitplanes.

Source and destination scale factors are referred to as (sR, sG, sB, sA) and (dR, dG, dB, dA). All scale factors have range [0, 1].

| Parameter | RGB Factor | Alpha Factor |
| --- | --- | --- |
| GL_ZERO | (0, 0, 0) | 0 |
| GL_ONE | (1, 1, 1) | 1 |
| GL_SRC_COLOR | (Rs0/kR, Gs0/kG, Bs0/kB) | As0/kA |
| GL_ONE_MINUS_SRC_COLOR | (1, 1, 1) - (Rs0/kR, Gs0/kG, Bs0/kB) | 1 - As0/kA |
| GL_DST_COLOR | (Rd/kR, Gd/kG, Bd/kB) | Ad/kA |
| GL_ONE_MINUS_DST_COLOR | (1, 1, 1) - (Rd/kR, Gd/kG, Bd/kB) | 1 - Ad/kA |
| GL_SRC_ALPHA | (As0/kA, As0/kA, As0/kA) | As0/kA |
| GL_ONE_MINUS_SRC_ALPHA | (1, 1, 1) - (As0/kA, As0/kA, As0/kA) | 1 - As0/kA |
| GL_DST_ALPHA | (Ad/kA, Ad/kA, Ad/kA) | Ad/kA |
| GL_ONE_MINUS_DST_ALPHA | (1, 1, 1) - (Ad/kA, Ad/kA, Ad/kA) | 1 - Ad/kA |
| GL_CONSTANT_COLOR | (Rc, Gc, Bc) | Ac |
| GL_ONE_MINUS_CONSTANT_COLOR | (1, 1, 1) - (Rc, Gc, Bc) | 1 - Ac |
| GL_CONSTANT_ALPHA | (Ac, Ac, Ac) | Ac |
| GL_ONE_MINUS_CONSTANT_ALPHA | (1, 1, 1) - (Ac, Ac, Ac) | 1 - Ac |
| GL_SRC_ALPHA_SATURATE | (i, i, i) | 1 |
| GL_SRC1_COLOR | (Rs1/kR, Gs1/kG, Bs1/kB) | As1/kA |
| GL_ONE_MINUS_SRC1_COLOR | (1, 1, 1) - (Rs1/kR, Gs1/kG, Bs1/kB) | 1 - As1/kA |
| GL_SRC1_ALPHA | (As1/kA, As1/kA, As1/kA) | As1/kA |
| GL_ONE_MINUS_SRC1_ALPHA | (1, 1, 1) - (As1/kA, As1/kA, As1/kA) | 1 - As1/kA |

In the table,

    i = min(As0, kA - Ad) / kA

To determine the blended RGBA values of a pixel, the system uses the following equations:

    Rd = min(kR, Rs sR + Rd dR)
    Gd = min(kG, Gs sG + Gd dG)
    Bd = min(kB, Bs sB + Bd dB)
    Ad = min(kA, As sA + Ad dA)

Despite the apparent precision of the above equations, blending arithmetic is not exactly specified, because blending operates with imprecise integer color values. However, a blend factor that should be equal to 1 is guaranteed not to modify its multiplicand, and a blend factor equal to 0 reduces its multiplicand to 0.
For example, when srcRGB is GL_SRC_ALPHA, dstRGB is GL_ONE_MINUS_SRC_ALPHA, and As is equal to kA, the equations reduce to simple replacement:

    Rd = Rs
    Gd = Gs
    Bd = Bs
    Ad = As

## Notes

Incoming (source) alpha is correctly thought of as a material opacity, ranging from 1.0 (kA), representing complete opacity, to 0.0 (0), representing complete transparency.

When more than one color buffer is enabled for drawing, the GL performs blending separately for each enabled buffer, using the contents of that buffer for destination color. (See glDrawBuffer.)

When dual source blending is enabled (i.e., one of the blend factors requiring the second color input is used), the maximum number of enabled draw buffers is given by GL_MAX_DUAL_SOURCE_DRAW_BUFFERS, which may be lower than GL_MAX_DRAW_BUFFERS.
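The blend equations above can be exercised directly. The sketch below is not GL code; it is an illustration of the arithmetic for the common factor pair GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA (applied here to both the RGB and alpha channels), working in normalized [0, 1] components so that every k equals 1:

```python
def blend_src_alpha(src, dst):
    """Blend RGBA src over dst with factors SRC_ALPHA / ONE_MINUS_SRC_ALPHA
    applied to both RGB and alpha, using normalized components in [0, 1]."""
    sa = src[3]
    # Per-component equation: out = min(1, s*sA + d*(1 - sA))
    return tuple(min(1.0, s * sa + d * (1.0 - sa)) for s, d in zip(src, dst))

# A fully opaque source (alpha = 1) reduces to simple replacement,
# exactly as the man page notes above.
out = blend_src_alpha((0.8, 0.2, 0.1, 1.0), (0.0, 0.0, 1.0, 0.5))
# out == (0.8, 0.2, 0.1, 1.0)
```

With a half-transparent source, each output component is the midpoint of source and destination, which matches the equations with sR = As and dR = 1 - As.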
Source: https://www.wyzant.com/resources/answers/topics/degrees
61 Answered Questions for the topic Degrees

04/15/21

Suppose that cscθ = 6/√27. Assume that θ is an acute angle.

12/10/20

#### How do you write a polynomial function of least degree with integral coefficients that has the given zeros?

I'm stumped on a homework problem: Write a polynomial function of least degree with integral coefficients that has the given zeros: 3 + 2i, 3i

05/25/20

In a quadrant there are four sides sooo

12/03/19

Degrees Angles

06/10/19

#### Angles in college algebra

Solve for x, then find the measure of each angle. (2x+6) degrees, [2(8x-3)] degrees

05/28/19

#### What is the measure of each interior angle of a regular pentagon?

The answers include 36°, 72°, 108°, 120°, 144°

02/16/19

01/11/19

#### How many km are in 5 degrees of longitude?

How many km are in 5 degrees of longitude? Thanks in advance! :)

09/07/18

#### how high does the ladder reach on the building in feet?

A ladder of height 11 feet leans against a building so that the angle between the ground and the ladder is t°, where 0 < t < 90. In terms of t, how high does the ladder reach on the building in... more

Degrees Maths

08/27/18

#### degrees and angles

A wheel with a circumference of 15 cm has covered 1.35 km in its lifetime. Which of the following is correct to work out the number of degrees the wheel has turned so far? A. 135000 X 15 / 360 B.... more

Degrees

08/15/18

#### A motor makes 7449.5 complete rotations in 3.17 minutes. What calculation would give the number of degrees the motor rotates in 12.6 seconds?

a motor makes 7449.5 complete rotations in 3.17 minutes.

Degrees

07/29/18

#### enter the polynomial

Degree 4; zeros: 3 plus 3i; 1, multiplicity 2. a(x-2i)(x-5)(x+5)(x+5)(x-5) ax^5-2ax^4-50ax^3+100ax^2+625ax-1250ai This doesn't feel right; what am I doing wrong?

06/12/18

I am having a bit of a blank moment. When converting radians to degrees, you know that 2 x pi x radians = 360 degrees,
so that pi x radians = 180 degrees. Which should mean that radians on its... more\n\n05/30/18\n\n#### Tan theta = cot(10 degrees + 5 theta)\n\nFind one angle theta that satisfies tan theta = cot(10 degrees + 5 theta)\n\n03/07/18\n\n1. 3.2 2. 3pi 3. 2pi/3 4. 6 5. -4.15\nDegrees Math Algebra Angle\n\n03/01/18\n\n#### A hot air balloon is flying in the sky near a 45-meter high building. The angle of depression of the top and the bottom of the building from the operator are 10\n\ndegrees and 22 degrees respectively. Find the horizontal distance between the balloon and the building. thank youu so much in advance geniuss!💓 y'all helped me a lot 😊\n\n03/01/18\n\n#### Determine the height of a capsule/pod above the ground after 25 mins.\n\nThe diameter of the "Eye" is 135 meters and each rotation takes 30 mins. By first developing a suitable model to represent the rotation of the "Eye", determine the height of a capsule/pod above the... more\nDegrees Geometry Point\n\n01/30/18\n\n#### Moving a point 270 degrees\n\nMove point (-7,1) 270 degrees clockwise. Then give the coordinates of the final point.\nDegrees\n\n12/08/17\n\n#### If a playground swing is 8 feet long and its rider swings through a distance of 9.5 feet, what angle does the swing travel through?\n\nworking on degrees, radians, angles, sin, cosine, tangent\n\n09/22/17\n\n#### suppose there are n = 40 points in the plane, each with a distinct x coordinate.\n\nWhat is the smallest degree d such that there is always a polynomial of degree d whose graph passes through all n points?\n\n05/25/17\n\n#### Calculate the area of the triangle. A=137 degrees, b=628.3 m, c=843.1 m\n\nWhat would the area of the triangle be if angle A is 137 degrees, side b is 628.3 m and side c is 843.1 m?\nDegrees\n\n04/25/17\n\n#### lisa walks 15 feet from the base of the tree.\n\nlisa walks 15 feet from the base of the tree. She measures an angle of elevation from the ground to the nest of 62. 
Find how high the nest is above the ground, to the nearest foot.\nDegrees Angles\n\n04/06/17\n\n#### Triangle M is similar to triangle N. Triangle M has two angles with the measures of 32 degrees and 93 degrees. Which two angles could be included in triangle N?\n\nmath triangle angle degrees word problem\nDegrees Decimal Form\n\n02/23/17\n\n#### If x = arctan(-3/4) and x + y = 210 degrees, then cos y = ?\n\nIf x = arctan(-3/4) and x + y = 210 degrees, then cos y = ? (Estimate answer to two decimal places)."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.866751,"math_prob":0.9757352,"size":5323,"snap":"2021-43-2021-49","text_gpt3_token_len":1460,"char_repetition_ratio":0.1357398,"word_repetition_ratio":0.082969435,"special_character_ratio":0.28142026,"punctuation_ratio":0.11120841,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9920898,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-30T05:35:03Z\",\"WARC-Record-ID\":\"<urn:uuid:7f7455a1-1b0f-4c67-bdcd-44f1d8afa328>\",\"Content-Length\":\"96232\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9fce3cc6-692b-47f3-9f9b-d02743a77e64>\",\"WARC-Concurrent-To\":\"<urn:uuid:404e0aea-dc0e-4230-b0f7-c90302ec312a>\",\"WARC-IP-Address\":\"34.117.195.90\",\"WARC-Target-URI\":\"https://www.wyzant.com/resources/answers/topics/degrees\",\"WARC-Payload-Digest\":\"sha1:JMHQST7UGOXTMDKXB2YDYYLN5EWQYYT6\",\"WARC-Block-Digest\":\"sha1:W3RRSWV2ZJ4XJ32NEUN6ORD3MHRSOXTI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358953.29_warc_CC-MAIN-20211130050047-20211130080047-00358.warc.gz\"}"} |
https://www.bloodraynebetrayal.com/suzanna-escobar/articles/how-do-you-find-the-volume-of-naoh-in-a-titration/ | [
"## How do you find the volume of NaOH in a titration?\n\nCalculating a volume\n\n1. 25.00 cm 3 of 0.300 mol/dm 3 sodium hydroxide solution is exactly neutralised by 0.100 mol/dm 3 sulfuric acid.\n2. Volume of sodium hydroxide solution = 25.0 ÷ 1000 = 0.0250 dm 3\n3. Amount of sodium hydroxide = concentration × volume.\n4. Amount of sodium hydroxide = 0.300 mol/dm 3 × 0.0250 dm 3\n5. = 0.00750 mol.\n\n## How do you calculate volume in a titration?\n\nUse the titration formula. If the titrant and analyte have a 1:1 mole ratio, the formula is molarity (M) of the acid x volume (V) of the acid = molarity (M) of the base x volume (V) of the base. (Molarity is the concentration of a solution expressed as the number of moles of solute per litre of solution.)\n\nWhat is the equivalence volume of NaOH?\n\n0.04398 L\nA mole is equal to 6.022 x 1023 molecules.) By doing the titration and making a plot of the volume of NaOH added versus the resulting pH of the solution, we find that the equivalence point occurs at 0.04398 L of NaOH.\n\nHow do you find the concentration of phosphoric acid?\n\n1 gram of H3PO4 will be equal to 1/98 moles. Therefore, we can say that 1 liter of Phosphoric acid contains 14.615 moles or in other words molarity of 85% (w/w) Phosphoric acid is equal to 14.615 M….Known values.\n\nKnown values\nConcentration of Phosphoric acid solution 85% (% by mass, wt/wt)\n\n### What volume of NaOH is required to reach the endpoint?\n\n23.72 mL\nTo reach the endpoint required 23.72 mL of the NaOH. Calculate the molarity of the HCl. Put on your CHEMICAL SPLASH-PROOF SAFETY GOGGLES!\n\n### Is NaOH an analyte?\n\nThe most common use of titrations is for determining the unknown concentration of a component (the analyte) in a solution by reacting it with a solution of another compound (the titrant). During the course of the titration, the titrant (NaOH) is added slowly to the unknown solution.\n\nHow do you titrate phosphoric acid?\n\nprocedure\n\n1. 
Pipette an aliquot of phosphoric acid solution into a 250 mL Erlenmeyer flask.\n2. Dilute with distilled water to about 100 mL.\n3. Add 2-3 drops of methyl orange or 3-4 drops of thymolphthalein solution.\n4. Titrate with NaOH solution till the first color change.\n\nHow do I titrate NaOH and phosphoric acid?\n\nClick the n=CV button above NaOH in the input frame, enter volume and concentration of the titrant used. Click the Use button. Read the number of moles and mass of phosphoric acid in the titrated sample in the output frame. Click the n=CV button in the output frame below phosphoric acid, enter volume of the pipetted sample,…\n\n#### How to titrate polyprotic acid using pH meter?\n\nPhosphoric Acid Titration Using pH Meter, Experiment 1. To titrate the polyprotic acid (H3PO4) using the strong base NaOH. That was done by using methyl orange and phenolphthalein as indicators and a pH meter.\n\n#### How much NaOH is required for complete reaction with phosphoric acid (H3PO4)?\n\nYou are right in calculating that you need 30 mL of NaOH for complete reaction with H3PO4. However, phosphoric acid is a weak acid and NaOH is a strong base. The pH at the end-point will therefore be greater than 7, and the volume of NaOH you need to reach pH 7 will be less than 30 mL.\n\nWhat is the equivalence point of phosphoric acid titration?\n\nThe last part of the experiment was phosphoric acid titration using the pH meter, which showed the two equivalence points. The first equivalence point at pH 4.65 and the second equivalence point at 9.19."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.81653523,"math_prob":0.98048687,"size":3465,"snap":"2022-40-2023-06","text_gpt3_token_len":936,"char_repetition_ratio":0.1733603,"word_repetition_ratio":0.013179571,"special_character_ratio":0.25310245,"punctuation_ratio":0.103299856,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9926347,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-27T10:58:33Z\",\"WARC-Record-ID\":\"<urn:uuid:8b5681eb-d0e7-40af-9dd0-4f3b07cacbd8>\",\"Content-Length\":\"106137\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:29a70339-e4b0-4ded-b7b2-34c103639a8c>\",\"WARC-Concurrent-To\":\"<urn:uuid:580696a2-20a1-4e46-a8af-b54a7ae48e41>\",\"WARC-IP-Address\":\"104.21.18.162\",\"WARC-Target-URI\":\"https://www.bloodraynebetrayal.com/suzanna-escobar/articles/how-do-you-find-the-volume-of-naoh-in-a-titration/\",\"WARC-Payload-Digest\":\"sha1:JCK5XBGAZ36EN67NYCYFLGRX72HK3NEE\",\"WARC-Block-Digest\":\"sha1:4KC4MNCVUA2H3AKAM5XUOA3VU3BHZMSX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764494976.72_warc_CC-MAIN-20230127101040-20230127131040-00605.warc.gz\"}"} |
https://percent.info/plus/39/how-to-calculate-469-plus-39-percent.html | [
"469 plus 39 percent\n\nHere we will teach you how to calculate four hundred sixty-nine plus thirty-nine percent (469 plus 39 percent) using two different methods. We call these methods the number method and the decimal method.\n\nWe start by showing you the illustration below so you can see what 469 + 39% looks like, visualize what we are calculating, and see what 469 plus 39 percent means.",
null,
"The dark blue in the illustration is 469, the light blue is 39% of 469, and the sum of the dark blue and the light blue is 469 plus 39 percent.\n\nCalculate 469 plus 39 percent using the number method\nFor many people, this method may be the most obvious method of calculating 469 plus 39%, as it entails calculating 39% of 469 and then adding that result to 469. Here is the formula, the math, and the answer.\n\n((Number × Percent/100)) + Number\n((469 × 39/100)) + 469\n182.91 + 469\n= 651.91\n\nRemember, the answer in green above is the sum of the dark blue plus the light blue in our illustration.\n\nCalculate 469 plus 39 percent using the decimal method\nHere you convert 39% to a decimal plus 1 and then multiply it by 469. We think this is the fastest way to calculate 39 percent plus 469. Once again, here is the formula, the math, and the answer:\n\n(1 + (Percent/100)) × Number\n(1 + (39/100)) × 469\n1.39 × 469\n= 651.91\n\nNumber Plus Percent\nGo here if you need to calculate any other number plus any other percent.\n\n470 plus 39 percent\nHere is the next percent tutorial on our list that may be of interest."
]
| [
null,
"https://percent.info/images/plus/plus-39-percent.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.84308976,"math_prob":0.9976754,"size":1464,"snap":"2022-27-2022-33","text_gpt3_token_len":378,"char_repetition_ratio":0.17123288,"word_repetition_ratio":0.05882353,"special_character_ratio":0.31625682,"punctuation_ratio":0.093023255,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997789,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-04T10:10:33Z\",\"WARC-Record-ID\":\"<urn:uuid:2ddc87b4-b0b8-4e12-9cb0-8bb2b9308e9c>\",\"Content-Length\":\"5639\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:200f85ee-f982-499d-99d6-0f864ec38839>\",\"WARC-Concurrent-To\":\"<urn:uuid:f7723832-62e8-4dfe-b8a9-e54bcb499824>\",\"WARC-IP-Address\":\"13.32.208.72\",\"WARC-Target-URI\":\"https://percent.info/plus/39/how-to-calculate-469-plus-39-percent.html\",\"WARC-Payload-Digest\":\"sha1:IPNLHPY64TP23PIDBLVR3KDUMAM365SZ\",\"WARC-Block-Digest\":\"sha1:C4DWIJ3LLYX2B4XCR4BRTHU4BHHPVFDB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104364750.74_warc_CC-MAIN-20220704080332-20220704110332-00158.warc.gz\"}"} |
https://convexoptimization.com/wikimization/index.php?title=Presolver&diff=prev&oldid=2767 | [
"# Presolver\n\n(Difference between revisions)\n Revision as of 02:30, 10 August 2011 (edit) (→Geometry of Constraints)← Previous diff Revision as of 20:26, 10 August 2011 (edit) (undo)m (→Geometry of Constraints)Next diff → Line 31: Line 31: ==Geometry of Constraints== ==Geometry of Constraints== [[Image:pyramid.jpg|thumb|right|234px|Euclidean pyramid]] [[Image:pyramid.jpg|thumb|right|234px|Euclidean pyramid]] - The idea, central to our method for presolving, is more easily understood geometrically. + The central idea in our presolving method is best understood geometrically. Constraints Constraints\n$[itex] Line 42: Line 42: Geometers in ''convex analysis'' regard cones as convex Euclidean bodies semi-infinite in extent. Geometers in ''convex analysis'' regard cones as convex Euclidean bodies semi-infinite in extent. Finite circular cones hold ice cream and block road traffic in daily life. Finite circular cones hold ice cream and block road traffic in daily life. - The great Pyramids of Egypt are each an example of finite polyhedral cone. + Each of the great Pyramids of Egypt is a finite polyhedral cone. - The geometer defines a polyhedral (semi-infinite) cone [itex]\\,\\mathcal{K}$ in $\\,\\reals^m$ as the set: + The geometer defines a polyhedral (semi-infinite) cone $\\,\\mathcal{K}$ in $\\,\\reals^m$ as a set\n$\\mathcal{K}\\triangleq\\{A_{}x~|~x\\succeq0\\}$\n$\\mathcal{K}\\triangleq\\{A_{}x~|~x\\succeq0\\}$\n- which is closed and convex but not necessarily pointed (does not necessarily have a vertex). + that is closed and convex but not necessarily pointed (may not have a vertex). [[Image:Polycone2.jpg|thumb|right|200px|A pointed polyhedral cone (truncated)]] [[Image:Polycone2.jpg|thumb|right|200px|A pointed polyhedral cone (truncated)]] Line 51: Line 51: and then indefinitely out into space from the opposite side of Earth. and then indefinitely out into space from the opposite side of Earth. Its four edges correspond to four columns from matrix $A\\,$. 
Its four edges correspond to four columns from matrix $A\\,$. - In fact, those four columns completely describe this semi-infinite Pyramid per definition of $\\mathcal{K}\\,$. + Those four columns completely describe the semi-infinite Pyramid per definition of $\\mathcal{K}\\,$. - But $A\\,$ can have more than four columns and still describe that same Pyramid. + But $A\\,$ can have more than four columns and still describe the same Pyramid. - For such a fat $A\\,$, each remaining column resides anywhere in $\\mathcal{K}\\,$: + For such a fat $A\\,$, each additional column resides anywhere in $\\mathcal{K}\\,$: - interior to the cone $(\\{A_{}x~|~x\\succ0\\})$ or on one of its ''faces'' (the vertex, an edge, or facet). + either interior to the cone $(\\{A_{}x~|~x\\succ0\\})$ or on one of its ''faces'' (the vertex, an edge, or facet). - There is infinite variety of polyhedral cones; most are not so regularly shaped as a Pyramid. + Polyhedral cones have infinite variety. Most are not so regularly shaped as a Pyramid, - In fact, a polyhedral cone can have any number of edges and facets. + and they can have any number of edges and facets. We assume a pointed polyhedral cone throughout, so there can be only one vertex We assume a pointed polyhedral cone throughout, so there can be only one vertex - which resides at the origin in Euclidean space by definition. + (which resides at the origin in Euclidean space by definition).\n\n## Introduction\n\nPresolving conventionally means quick elimination of some variables and constraints prior to numerical solution of an optimization problem. Presented with constraints",
null,
"$LaTeX: a^{\\rm T}x=0\\,,~x\\succeq0$ for example, a presolver is likely to check whether constant vector",
null,
"$LaTeX: a\\,$ is positive; for if so, variable",
null,
"$LaTeX: \\,x$ can have only the trivial solution. The effect of such tests is to reduce the problem dimensions.\n\nMost commercial optimization problem solvers incorporate presolving. Particular reductions can be proprietary or invisible, while some control or selection may be given to a user. But all presolvers have the same motivation: to make an optimization problem smaller and (ideally) easier to solve. There is profit potential because a solver can then compete more effectively in the marketplace for large-scale problems.\n\nWe present a method for reducing variable dimension based upon geometry of constraints in the problem statement:",
null,
"$LaTeX:\n \\begin{array}{rl} \\mbox{minimize}_{x\\in_{}\\mathbb{R}^{^n}} & f(x) \\\\ \\mbox{subject to} & A_{}x=b \\\\ & x\\succeq0 \\\\ & x_{j\\!}\\in\\mathbb{Z}~,\\qquad j\\in\\mathcal{J} \\end{array}\n$\n\nwhere",
null,
"$LaTeX: A\\,$ is a matrix of predetermined dimension,",
null,
"$LaTeX: \\mathbb{Z}$ represents the integers,",
null,
"$LaTeX: \\reals$ the real numbers, and",
null,
"$LaTeX: \\mathcal{J}$ is some possibly empty index set.\n\nThe caveat to use of our proposed method for presolving is that it is not fast. One would incorporate this method only when a problem is too big to be solved; that is, when solver software chronically exits with error or hangs.\n\n## Geometry of Constraints\n\nThe central idea in our presolving method is best understood geometrically. Constraints",
null,
"$LaTeX:\n \\begin{array}{l} \\\\ A_{}x=b \\\\ x\\succeq0 \\end{array}\n$\n\nsuggest that a polyhedral cone comes into play. Geometers in convex analysis regard cones as convex Euclidean bodies semi-infinite in extent. Finite circular cones hold ice cream and block road traffic in daily life. Each of the great Pyramids of Egypt is a finite polyhedral cone. The geometer defines a polyhedral (semi-infinite) cone",
null,
"$LaTeX: \\,\\mathcal{K}$ in",
null,
"$LaTeX: \\,\\reals^m$ as a set",
null,
"$LaTeX: \\mathcal{K}\\triangleq\\{A_{}x~|~x\\succeq0\\}$\n\nthat is closed and convex but not necessarily pointed (may not have a vertex).\n\nTo visualize a pointed polyhedral cone in three dimensions, think of one Egyptian Pyramid continuing into the ground and then indefinitely out into space from the opposite side of Earth. Its four edges correspond to four columns from matrix",
null,
"$LaTeX: A\\,$. Those four columns completely describe the semi-infinite Pyramid per definition of",
null,
"$LaTeX: \\mathcal{K}\\,$. But",
null,
"$LaTeX: A\\,$ can have more than four columns and still describe the same Pyramid. For such a fat",
null,
"$LaTeX: A\\,$, each additional column resides anywhere in",
null,
"$LaTeX: \\mathcal{K}\\,$: either interior to the cone",
null,
"$LaTeX: (\\{A_{}x~|~x\\succ0\\})$ or on one of its faces (the vertex, an edge, or facet).\n\nPolyhedral cones have infinite variety. Most are not so regularly shaped as a Pyramid, and they can have any number of edges and facets. We assume a pointed polyhedral cone throughout, so there can be only one vertex (which resides at the origin in Euclidean space by definition)."
]
| [
null,
"http://convexoptimization.com/wikimization/cgi-bin/mimetex.cgi",
null,
"http://convexoptimization.com/wikimization/cgi-bin/mimetex.cgi",
null,
"http://convexoptimization.com/wikimization/cgi-bin/mimetex.cgi",
null,
"http://convexoptimization.com/wikimization/cgi-bin/mimetex.cgi",
null,
"http://convexoptimization.com/wikimization/cgi-bin/mimetex.cgi",
null,
"http://convexoptimization.com/wikimization/cgi-bin/mimetex.cgi",
null,
"http://convexoptimization.com/wikimization/cgi-bin/mimetex.cgi",
null,
"http://convexoptimization.com/wikimization/cgi-bin/mimetex.cgi",
null,
"http://convexoptimization.com/wikimization/cgi-bin/mimetex.cgi",
null,
"http://convexoptimization.com/wikimization/cgi-bin/mimetex.cgi",
null,
"http://convexoptimization.com/wikimization/cgi-bin/mimetex.cgi",
null,
"http://convexoptimization.com/wikimization/cgi-bin/mimetex.cgi",
null,
"http://convexoptimization.com/wikimization/cgi-bin/mimetex.cgi",
null,
"http://convexoptimization.com/wikimization/cgi-bin/mimetex.cgi",
null,
"http://convexoptimization.com/wikimization/cgi-bin/mimetex.cgi",
null,
"http://convexoptimization.com/wikimization/cgi-bin/mimetex.cgi",
null,
"http://convexoptimization.com/wikimization/cgi-bin/mimetex.cgi",
null,
"http://convexoptimization.com/wikimization/cgi-bin/mimetex.cgi",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8665693,"math_prob":0.99427575,"size":5607,"snap":"2023-14-2023-23","text_gpt3_token_len":1342,"char_repetition_ratio":0.114046045,"word_repetition_ratio":0.41555285,"special_character_ratio":0.23167469,"punctuation_ratio":0.09969789,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99786454,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-02T04:54:29Z\",\"WARC-Record-ID\":\"<urn:uuid:9fd4079a-1250-4d22-a6c0-5aeb3316322d>\",\"Content-Length\":\"28568\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:43b23a8b-402c-4e97-9e96-f1793f49eb0d>\",\"WARC-Concurrent-To\":\"<urn:uuid:31d643a8-5846-464d-8c6f-25164e589e60>\",\"WARC-IP-Address\":\"23.111.174.204\",\"WARC-Target-URI\":\"https://convexoptimization.com/wikimization/index.php?title=Presolver&diff=prev&oldid=2767\",\"WARC-Payload-Digest\":\"sha1:6Q2NCSJDXDCYXLSLSIN3ERQMWYKRE32A\",\"WARC-Block-Digest\":\"sha1:QTAF7BB46M4B36GNDBQFUJNDEFP6OP7P\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224648322.84_warc_CC-MAIN-20230602040003-20230602070003-00292.warc.gz\"}"} |
https://www.excelguru.ca/forums/showthread.php?1953-Adapting-current-formula-to-reflect-a-time-period-(otherwise-0)&s=6e3d644f5ff81e1988610f957a2e619b&goto=nextnewest | [
"# Thread: Trying to get a yearly total using monthly subtracted values.\n\n1. ##",
null,
"Trying to get a yearly total using monthly subtracted values.\n\nRegister for a FREE account, and/\nor Log in to avoid these ads!\n\nHello All,\n\nI am taking printer readings each month and would like to display a YEAR END total that displays a 'running total' in a cell. IE: Jun = 1 page printed, Jul = 3 pages printed --> Year End Total = 2 Pages.\n\nI am unable to get a Year End Total to display in the proper cell unless Jan - Dec are filled with numeric values other than zero.\n\nI am using the following formula: =(F5-E5)+(G5-F5)+(H5-G5)+(I5-H5)+(J5-I5)+(K5-J5)+(L5-K5)+(M5-L5)+(N5-M5)+(O5-N5)+(P5-O5)+('2014'!E5-'2013'!P5)\n\n(To get a value for my December 2013 reading, I am subtracting the Dec 2013 value from the corresponding Jan cell on page 2014.)\n\nMy problem is that when I put a value of '1' in cell J5 and a value of 3 in cell K5, my YEARLY TOTAL cell, Q5, displays the value of zero instead of '2'.\n\nI hope I have explained this problem in terms that can be understood.\n\nAny help is appreciated as I am a novice when it comes to using Excel for advanced functions.\n\nThank you,\n\nRJeffDay",
null,
"",
null,
"Reply With Quote\n\n2. Here is my workbook.",
null,
"",
null,
"Reply With Quote\n\n3. To solve this, you should break down your formula to look at all the parts. When you look each pairing (ie: F5-E5 for each of the months, you get:\n\n0+0+0+0+1+2-3+0+0+0+0 which equals 0. So, this is problem one.\n\nA couple of questions. Why is the total for the year 2? Did the machine not start at 0?\n\nIf you are looking for the yearly total, wouldn't it be the largest number subtracted from the smallest number across the whole year? So, in the current example for model DP6020G, the smallest number is 0 and the largest is 3: 3 - 0 = 3. This can be written as this: MAX(E5:P5) - MIN(E5:P5). Also, Removeing the zeros from the cells makes the formula calculate as 3 - 1 = 2.\n\nAlso, why are you bringing in 2014's numbers to a 2013 report? I am unsure the value or reasoning.\n\nSo, if you can answers those, we can move forward.",
null,
"",
null,
"Reply With Quote\n\n4. Sorry, I am stumbling though your spreadsheet a bit. For the Pages Printed Each Month section, you need a little logic. IE: if this month's meter reading is 0 then do not calculate. So, cell K21 would be: =IF(K5=0, 0, K5-J5). Which would equal 2.",
null,
"",
null,
"Reply With Quote\n\n####",
null,
"Posting Permissions\n\n• You may not post new threads\n• You may not post replies\n• You may not post attachments\n• You may not edit your posts\n•"
]
| [
null,
"https://www.excelguru.ca/forums/images/icons/icon5.png",
null,
"https://www.excelguru.ca/forums/images/misc/progress.gif",
null,
"https://www.excelguru.ca/forums/clear.gif",
null,
"https://www.excelguru.ca/forums/images/misc/progress.gif",
null,
"https://www.excelguru.ca/forums/clear.gif",
null,
"https://www.excelguru.ca/forums/images/misc/progress.gif",
null,
"https://www.excelguru.ca/forums/clear.gif",
null,
"https://www.excelguru.ca/forums/images/misc/progress.gif",
null,
"https://www.excelguru.ca/forums/clear.gif",
null,
"https://www.excelguru.ca/forums/images/buttons/collapse_40b.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.88083637,"math_prob":0.84877074,"size":2079,"snap":"2021-04-2021-17","text_gpt3_token_len":603,"char_repetition_ratio":0.094457835,"word_repetition_ratio":0.0,"special_character_ratio":0.3083213,"punctuation_ratio":0.1278826,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9957691,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-10T21:53:52Z\",\"WARC-Record-ID\":\"<urn:uuid:c2247547-bd66-478c-a675-243eae074fc4>\",\"Content-Length\":\"62700\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:914c8b2d-b1e9-4e04-9e7b-1b9863dbb96c>\",\"WARC-Concurrent-To\":\"<urn:uuid:f32fbd87-5ca1-4413-a229-2ff66927f619>\",\"WARC-IP-Address\":\"104.192.220.111\",\"WARC-Target-URI\":\"https://www.excelguru.ca/forums/showthread.php?1953-Adapting-current-formula-to-reflect-a-time-period-(otherwise-0)&s=6e3d644f5ff81e1988610f957a2e619b&goto=nextnewest\",\"WARC-Payload-Digest\":\"sha1:JKHJWMDNKO3ZUXLGZXAP5KRWRXC2EK3S\",\"WARC-Block-Digest\":\"sha1:VY3YTO5OVOXRYJGIHCB7QDJC5RPSDHRS\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038059348.9_warc_CC-MAIN-20210410210053-20210411000053-00164.warc.gz\"}"} |
https://gis.stackexchange.com/questions/198622/label-expression-if-statement-remove-zeros | [
"# Label Expression If Statement remove Zeros\n\nI am trying to perform a semi-complicated labeling expression that shows one field divided by another field and some additional fields with no division. The issue I ran into was that the denominator sometimes shows up as a 0. So, I need to build an if statement to perform the division if [TOT_HH_1] <> 0, do __ and if [TOT_HH_1] = 0, do __.\n\nAlso, there are three denominators that may or may not be 0. TOT_HH_1, 5 and 10.\n\nHere is what I have so far.\n\n``````Function FindLabel ([ELEM_1], [TOT_HH_1], [ELEM_5], [TOT_HH_5], [ELEM_10], [TOT_HH_10] )\nif ( [TOT_HH_1] <> 0 OR [TOT_HH_5] <> 0 OR [TOT_HH_10] <> 0) then\nFindLabel = Round ([ELEM_1] / [TOT_HH_1] , 2) & \" , \" & Round ( [ELEM_5]/ [TOT_HH_5] , 2) & \" , \" & Round ( [ELEM_10] / [TOT_HH_10] , 2) & \"/\" & [TOT_HH_1] & \" , \" & [TOT_HH_5] & \" , \" & [TOT_HH_10]\nelif ( [TOT_HH_1] = 0 ) then\nFindLabel = 0 & \" , \" & Round ( [ELEM_5]/ [TOT_HH_5] , 2) & \" , \" & Round ( [ELEM_10] / [TOT_HH_10] , 2) & \"/\" & [TOT_HH_1] & \" , \" & [TOT_HH_5] & \" , \" & [TOT_HH_10]\nEnd Function\n``````\n\nEdit: Correct, the code in the answer worked great, here is a picture of what the labeling looks like.",
null,
"• If you want a good answer you should ask a question or explain what is happening with the code you have worked up. – Barrett Jun 15 '16 at 23:20\n• So, what's the problem? elif is misspelled - should be else if. end if is missing. – Jakub Sisak GeoGraphics Jun 16 '16 at 1:33\n\nI'm unsure what your calculation is trying to do, or what possible values are in your fields, but a few things to consider are:\n\n1. VBScript doesn't have `elif` but rather `elseif`\n2. You need to end `if` with an `end if`\n3. You probably want to separate your `<> 0` out into separate `if` statements for when some have `0` and others don't (e.g. `TOT_HH_1 = 0` but `TOT_HH_5 <> 0`)\n\nI've separated everything out into different if statements and set a variable for each calc. The label then returns either a 0 or the value from each calc into your full label text.\n\n``````Function FindLabel ([ELEM_1], [TOT_HH_1], [ELEM_5], [TOT_HH_5], [ELEM_10], [TOT_HH_10] )\ndim val1\ndim val5\ndim val10\nval1 = val5 = val10 = 0\n\nif [TOT_HH_1] <> 0 then\nval1 = Round([ELEM_1] / [TOT_HH_1] , 2)\nend if\n\nif [TOT_HH_5] <> 0 then\nval5 = Round([ELEM_5] / [TOT_HH_5] , 2)\nend if\n\nif [TOT_HH_10] <> 0 then\nval10 = Round([ELEM_10] / [TOT_HH_10] , 2)\nend if\n\nFindLabel = val1 & \", \" & val5 & \", \" & val10 & \" / \" & [TOT_HH_1] & \", \" & [TOT_HH_5] & \", \" & [TOT_HH_10]\nEnd Function\n``````"
]
| [
null,
"https://i.stack.imgur.com/O2xU2.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.889877,"math_prob":0.98876894,"size":1116,"snap":"2020-10-2020-16","text_gpt3_token_len":438,"char_repetition_ratio":0.22751799,"word_repetition_ratio":0.32286996,"special_character_ratio":0.44802868,"punctuation_ratio":0.17277487,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9896827,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-10T20:21:26Z\",\"WARC-Record-ID\":\"<urn:uuid:7babd3bc-a37e-4c3f-94de-cf1d41f7450c>\",\"Content-Length\":\"145827\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:941a8c6c-6838-405d-9f9c-9fe2efaf8448>\",\"WARC-Concurrent-To\":\"<urn:uuid:f864c60d-2ce9-48fd-9992-e6ef69ef336f>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://gis.stackexchange.com/questions/198622/label-expression-if-statement-remove-zeros\",\"WARC-Payload-Digest\":\"sha1:6Q7P7CTCGQWH2676KS6WJ7T4JG654RXA\",\"WARC-Block-Digest\":\"sha1:2HCPIASW6B5W44TZZTEVNWCQFCIAUTUE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370511408.40_warc_CC-MAIN-20200410173109-20200410203609-00013.warc.gz\"}"} |
https://ch.mathworks.com/matlabcentral/cody/problems/2430-find-the-relation | [
"Cody\n\n# Problem 2430. Find the relation\n\nTake x as input and y as output.\n\n``` If x=1 y=3\nIf x=2 y=14\nIf x=3 y=39\nIf x=4 y=84```\n\nBased on this relation find given x value.\n\n### Solution Stats\n\n94.83% Correct | 5.17% Incorrect\nLast Solution submitted on Jan 24, 2020"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8087731,"math_prob":0.95466775,"size":683,"snap":"2020-10-2020-16","text_gpt3_token_len":176,"char_repetition_ratio":0.11340206,"word_repetition_ratio":0.0,"special_character_ratio":0.25036603,"punctuation_ratio":0.08510638,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99716365,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-21T04:37:06Z\",\"WARC-Record-ID\":\"<urn:uuid:e7916c19-40c7-4d48-80ad-acc68ff41cce>\",\"Content-Length\":\"84398\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c5bfd4eb-d046-40d7-945a-d78d8a0a2d1f>\",\"WARC-Concurrent-To\":\"<urn:uuid:c43471bc-0d42-47f8-9bab-e5c23376fe33>\",\"WARC-IP-Address\":\"23.50.112.17\",\"WARC-Target-URI\":\"https://ch.mathworks.com/matlabcentral/cody/problems/2430-find-the-relation\",\"WARC-Payload-Digest\":\"sha1:KVEFCKMZWQWNLKPL7FPPXP3JOYWHGDHC\",\"WARC-Block-Digest\":\"sha1:ZGKBWGCXYQDDBN24ZSANBNV2VQWXMARY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145438.12_warc_CC-MAIN-20200221014826-20200221044826-00509.warc.gz\"}"} |
https://www.quantamagazine.org/new-math-proof-raises-lower-bounds-of-graph-randomness-20201104/ | [
"# Disorder Persists in Larger Graphs, New Math Proof Finds\n\nDavid Conlon and Asaf Ferber have raised the lower bound for multicolor “Ramsey numbers,” which quantify how big graphs can get before patterns inevitably emerge.\n\nAfter more than 70 years of intransigence, one of the most stubborn numbers in math has finally budged.\n\nIn a four-page proof posted in late September, David Conlon and Asaf Ferber provided the most precise estimate yet for “multicolor Ramsey numbers,” which measure how large graphs can become before they inevitably exhibit certain kinds of patterns.\n\n“There is no absolute randomness in this universe,” said Maria Axenovich of the Karlsruhe Institute of Technology in Germany. “There are always clusters of order, and the Ramsey numbers quantify it.”\n\nGraphs are collections of dots (vertices) connected by lines (edges). Mathematicians are particularly interested in understanding how many vertices and edges they can contain before different kinds of substructures emerge within them.\n\n“If you have a big enough graph then there is a large part of it that’s very tightly ordered,” said Maria Chudnovsky of Princeton University. “It’s hard to explain why something is beautiful, but there is universal agreement that this is a beautiful phenomenon.”\n\nRamsey numbers concern a particular pattern called a monochromatic clique, which is a set of vertices that are all connected to each other by edges of the same color after you perform a specific coloring procedure.\n\nRamsey numbers vary depending on the size of the clique you’re looking for and the number of colors you use to perform the coloring. Mathematicians can’t calculate most Ramsey numbers because all but the smallest graphs are too complex to analyze directly.\n\nUsually, the best mathematicians can do is to set a range of possible values for Ramsey numbers. 
It’s as if you wanted to know a friend’s location but could only determine with certainty that they’re north of Miami and south of Philadelphia.\n\nThe new proof does more to zero in on the exact value of Ramsey numbers than any result since Paul Erdős first studied them in the 1930s and ’40s. Conlon, of the California Institute of Technology, and Ferber, of the University of California, Irvine, found a new “lower bound” for multicolor Ramsey numbers that is exponentially more precise than the previous best estimate. Their result provides mathematicians with a new understanding of the interplay between order and randomness in graphs, which are of fundamental interest in mathematics.\n\n“This is a fantastic result,” said Axenovich. “I love it.”\n\n## Colorful Connections\n\nRamsey numbers, which were introduced by the British polymath Frank Ramsey in the 1920s, are best understood by example. Start with a graph with five vertices. Connect each of them to all the others to form what mathematicians call a complete graph. Now, can you color each edge red or blue without creating a set of three vertices that are all connected to each other by edges of the same color? The answer is: You can.\n\nBut start with a complete graph of six vertices, and now there’s no way to color the edges with two colors without creating a monochromatic clique of at least three vertices. Or, to put it another way, for two colors and a clique of size 3, the Ramsey number is 6 (since it requires a complete graph of six vertices).\n\nRamsey numbers vary depending on the number of colors and the size of the monochromatic clique you’re looking for. But in general, they’re hard to calculate exactly, and mathematicians only know exact values for a small number of situations. Even for small cliques of size 5 (and two colors), the best they can say is that the Ramsey number is between 43 and 48.\n\n“It’s really embarrassing,” said Yuval Wigderson, a graduate student at Stanford University. 
“We've been working on this problem for close to 100 years and we don't know anything.”\n\nRamsey numbers are hard to calculate because the complexity of a graph increases dramatically as you add vertices. For a graph with six vertices and two colors, you can run through all the possibilities by hand. But for a graph with 40 vertices, there are 2^780 ways of applying two colors (one choice for each of its 780 edges).\n\n“There's just too much to check,” Axenovich said.\n\nAmong mathematicians who study Ramsey numbers there's a parable, usually credited to Erdős, that captures how quickly these calculations become forbidding. One day, hostile aliens invade. They offer to spare the planet if we can produce correct Ramsey numbers. According to the parable, if they ask for the Ramsey number for two-color cliques of size 5, we should throw all the resources of human civilization into finding it. But if they ask for a clique of size 6, we should prepare for battle.\n\n“If they ask us for the Ramsey number of 6, then forget about it, we launch an attack,” Axenovich said.\n\n## Exploiting Randomness\n\nBecause calculating exact Ramsey numbers is largely impossible, mathematicians instead home in on them, proving they're greater than some “lower bound” and less than some “upper bound.” The new work improves the precision of lower bounds but doesn't address upper bounds.\n\nIn 1935, Erdős and George Szekeres established the first such bound. They used a short proof to show that two-color Ramsey numbers must be smaller than an upper bound of 4^t, where t is the size of the monochromatic clique you're interested in. They also found that three-color Ramsey numbers must be smaller than 27^t. A decade later, in 1947, Erdős calculated the first lower bounds for these numbers: For two colors it's (√2)^t vertices and for three colors it's (√3)^t.\n\nThere is a big difference between (√2)^t and 4^t, especially as t gets very large. 
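To get a feel for how far apart the Erdős–Szekeres upper bound of 4^t and Erdős' lower bound of (√2)^t sit, here is a small numeric sketch (Python, illustrative only):

```python
from math import sqrt

t = 40
lower = sqrt(2) ** t   # Erdős' 1947 lower bound: about 1.05 million vertices
upper = 4 ** t         # Erdős–Szekeres upper bound: about 1.2e24 vertices
print(f"lower ~ {lower:.3g}, upper ~ {upper:.3g}, gap factor ~ {upper / lower:.3g}")
```

Even at a clique size of 40, the two bounds already differ by a factor of roughly 2^60.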
This gap reflects mathematicians' imprecise understanding of Ramsey numbers. But the form of the bounds — the way the size of the requisite graph is expressed in terms of the size of the desired clique — hints at what mathematicians most want to know.\n\n“What we'd really like to understand is the growth behavior of these [Ramsey] numbers as the size of the clique grows,” said Lisa Sauermann, a postdoctoral fellow at the Institute for Advanced Study in Princeton, New Jersey.\n\nFor this reason, Erdős' most enduring contribution to the study of Ramsey numbers was not the bounds themselves — it was the method he used to calculate them. Here's what he did for the lower bound.\n\nImagine you have a complete graph with 10 vertices and 45 edges. And imagine you want to know whether it's possible to apply three colors without creating a monochromatic clique of some specific size, say five vertices (connected by 10 edges).\n\nYou can start, as Erdős did, by coloring the edges at random. For each edge, roll the equivalent of a three-sided die and apply whichever color comes up. Erdős knew that the probability that any particular subset of 10 edges will end up all the same color is easy to calculate. It's just the probability that one edge is, say, red, times the probability that another edge is red, and so on for all 10 edges (so 1/3^10). Next, he multiplied that value by 3 to account for the fact that there are three different colors that could produce the desired monochromatic clique.\n\nErdős then looked at the total number of different cliques of five vertices in the graph. There are 252 of them. Finally, he took the probability that one of them will yield a monochromatic clique and added it to the probabilities that any of the other 251 will produce the clique. 
This is a calculation known as taking the “union bound,” and it estimates the probability of producing a monochromatic clique when you color the edges at random.\n\nAs long as the union bound remains below 1, you know the random coloring method is not guaranteed to produce the given monochromatic clique. In our example, the union bound is 0.0128. This means you're far from being guaranteed a monochromatic clique of 5 vertices, which means the Ramsey number for this example is larger than 10 vertices.\n\nMathematicians call this approach the probabilistic method. It's an ingenious workaround for an otherwise intractable problem. Instead of having to find examples of colorings that don't contain monochromatic cliques of different sizes, Erdős simply proved that these clique-less colorings must exist (because the union bound is less than 1) — meaning the Ramsey number has to be larger than the number of vertices in the graph you're currently coloring at random.\n\n“We're able to prove that something exists without actually showing what it is,” Wigderson said.\n\nOver the next 70 years, mathematicians improved on Erdős' lower bound for two and three colors just once — in 1975, with an incremental tightening by Joel Spencer. Many people worked on the problem, but none could find a better way than the probabilistic method to calculate Ramsey numbers. “The problem has been to try to defeat this [bound] coming from sampling at random,” Conlon said.\n\nAnd that is what Conlon and Ferber finally did this fall.\n\n## Incorporating Order\n\nThe new proof improves the lower bound for Ramsey numbers for three or more colors.\n\nPrior to Conlon and Ferber's work, the lower bound for three colors was (√3)^t (approximately 1.73^t). They improved that bound to 1.834^t. For four colors, they raised the lower bound from 2^t to 2.135^t. Both are gigantic leaps. 
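Erdős' arithmetic from the 10-vertex worked example earlier (252 candidate cliques, each monochromatic with probability 3 × (1/3)^10) is short enough to reproduce directly (Python, illustrative):

```python
from math import comb

n_vertices, clique_size, colors = 10, 5, 3
edges_in_clique = clique_size * (clique_size - 1) // 2     # 10 edges
p_one_clique = colors * (1 / colors) ** edges_in_clique    # 3 * (1/3)**10
union_bound = comb(n_vertices, clique_size) * p_one_clique
print(comb(n_vertices, clique_size), round(union_bound, 4))   # 252 0.0128
```

Because the bound stays well below 1, some coloring with no monochromatic 5-clique must exist, so this Ramsey number exceeds 10.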
By increasing the base number being raised to the power t, Conlon and Ferber proved that there exist exponentially larger three- and four-colored graphs that lack the requisite monochromatic cliques. In other words, they showed that disorder can persist within larger graphs than previously known.\n\nConlon and Ferber’s goal was to color a complete graph without creating large monochromatic cliques. To do that, they figured out a way to efficiently distribute one color (red) according to a fixed rule before applying the remaining colors at random. This hybrid method afforded them additional control over the graph structure that Erdős didn’t have.\n\nFor the fixed part of the plan, they placed the vertices in a special kind of geometric space, so that each vertex was defined by a set of coordinates. Then they decided which edges to color red via a two-step process.\n\nFirst, they took the coordinates of each vertex, squared them and added them together — a process known as taking the sum of squares. Due to the nature of this particular geometric space, this sum-of-squares operation produced one of two values: 0 or 1. Next, focusing only on the vertices whose sum of squares was 0, they calculated the “inner product” between pairs of vertices — a standard operation in linear algebra. If an edge connected a pair whose inner product was a certain value, they colored it red. This accounted for half the total edges.\n\nAfter completing this deterministic part of their approach, Conlon and Ferber moved on to the random part. For the remaining edges they flipped a coin — just as Erdős would have — to determine whether to color a given edge blue or green.\n\nThis approach turned out to be a great way to avoid forming monochromatic cliques as the size of a graph grows. That was by design: The pair engineered the deterministic step to generate red edges that were spread out over the whole graph. 
At a distance they’d look almost as if they’d been scattered at random — and indeed, Conlon and Ferber refer to this arrangement of red edges as “pseudo-random.”\n\nThis pseudo-random distribution of red edges achieves two desirable things. First, by spreading out the red edges, it ensures that you don’t end up with any large red cliques (which is what you’re trying to avoid if you want to increase the lower bound). Second, the widespread red edges break up the graph, leaving fewer wide-open spaces that could end up getting filled in randomly by monochromatic cliques of another color.\n\n“We wanted to make sure that the first color, which we used deterministically, reduced the number of potential cliques,” Ferber said.\n\nMathematicians reacted to the new proof quickly. Within days of its release, Wigderson posted a follow-up paper that used their methods to prove an even slightly better lower bound for Ramsey numbers for four or more colors. After decades of stasis on Ramsey numbers, the dam had finally broken.\n\n“Our state of knowledge has been stuck since Erdős in the ’40s, so anything that provides a new approach to questions of this type is exciting,” Wigderson said.\n\nCorrection: November 9, 2020\n\nSome versions of our diagram on “The Erdős Method” described a graph with 0 vertices; it should be 10 vertices. All versions are now correct."
]
https://www.ato.gov.au/Forms/Personal-investors-guide-to-CGT-2006-07/?page=13 | [
"### Chapter B1: How to work out your capital gain or capital loss",
"Warning:\n\nThis information may not apply to the current year. Check the content carefully to ensure it is applicable to your circumstances.\n\nEnd of attention\n\nTo calculate your capital gain from the sale of shares, or units in a unit trust (for example, a managed fund), the three main steps are:\n\nStep 1: Work out how much you have received from each CGT event (the capital proceeds).\nStep 2: Work out how much each CGT asset cost you (the cost base).\nStep 3: Subtract the cost base (step 2) from the capital proceeds (step 1).\n\nIf you received more from the CGT event than the asset cost you (that is, the capital proceeds are greater than the cost base), the difference is your capital gain. The three ways of calculating your capital gain are described in step 3 of part A.\n\nIf you received less from the CGT event than the asset cost (that is, the capital proceeds are less than the cost base), you then need to work out the asset's reduced cost base to see if you have made a capital loss. Generally, for shares, the cost base and reduced cost base are the same. However, they will be different if you choose the indexation method, because the reduced cost base cannot be indexed.\n\nIf the reduced cost base is greater than the capital proceeds, the difference is a capital loss.\n\nIf the capital proceeds are less than the cost base but more than the reduced cost base, you have not made a capital gain or a capital loss.\n\nThe steps below show you the calculations required to work out your CGT obligation using the 'other' and discount methods. If you want to use the indexation method (by indexing your cost base for inflation), you do this at step 2. You may find it easier to follow the worked examples in chapter B2.\n\nYou may find it useful to use notepaper to do your calculations while you work through the following steps so you can transfer the relevant amounts to item 17 on your tax return (supplementary section), or item 9 if you use the tax return for retirees. 
(Note: You cannot use Tax return for retirees 2007 if you had a distribution from a managed fund during the year.)\n\n#### Step 1 Work out your capital proceeds from the CGT event\n\nThe capital proceeds are what you receive, or are taken to receive, when you sell or otherwise dispose of your shares or units.\n\nFor example, with shares the capital proceeds may be:\n\n• the amount you receive from the purchaser\n• the amount or value of shares or other property you receive on a merger/takeover, or\n• the market value if you give shares away.\n\nExample 1: Capital proceeds\n\nFred sold his parcel of 1,000 shares for \\$6,000. Fred's capital proceeds are \\$6,000.\n\nEnd of example"
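The three steps and the gain/loss rules above can be sketched as a small function. This is an illustrative sketch only (not tax software, and the discount/indexation methods are out of scope); Fred's cost base isn't stated in this excerpt, so the 4000 below is a hypothetical figure:

```python
def cgt_outcome(capital_proceeds, cost_base, reduced_cost_base=None):
    # Step 3 logic from the guide; for shares the reduced cost base
    # generally equals the cost base unless indexation is chosen.
    if reduced_cost_base is None:
        reduced_cost_base = cost_base
    if capital_proceeds > cost_base:
        return "capital gain", capital_proceeds - cost_base
    if capital_proceeds < reduced_cost_base:
        return "capital loss", reduced_cost_base - capital_proceeds
    return "no capital gain or loss", 0

print(cgt_outcome(6000, 4000))   # ('capital gain', 2000)
```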
]
https://www.warmplace.ru/forum/viewtopic.php?f=6&t=4778&sid=7da97ba9e4581070e09e7b2229f08947&view=print | [
"Page 1 of 1\n\n### [NEWBIE REQUIRES HELP]\n\nPosted: Sat Dec 15, 2018 4:59 am\nHi. Sorry for the weak title.\n\nI'm trying to get a grasp on Pixilang's basics, and encountered a problem I cannot seem to figure out.\n\nHere's the code I used, and here's the result\n\nCode: Select all\n\n``````set_pixel_size( WINDOW_XSIZE / 480 )\nresize( get_screen(), WINDOW_XSIZE, WINDOW_YSIZE )\n\n\\$x = WINDOW_XSIZE\n\\$y = 0\nspeed = 10\n\nwhile 1 {\n//line(xpoint1, ypoint1, xpoint2, ypoint2\nline(0, 0, \\$x, \\$y, RED)\n\ntransp(32)\nclear(BLACK)\ntransp(255)\n\n//sup droite vers inf droite\nif \\$x == WINDOW_XSIZE {\nif \\$y != WINDOW_YSIZE {\n\\$y = \\$y + speed\n}\n}\n\n//inf droite vers inf gauche\nif \\$y == WINDOW_YSIZE {\nif \\$x > -WINDOW_XSIZE {\n\\$x = \\$x - speed\n}\n}\n\n//inf gauche vers sup gauche\nif \\$x == -WINDOW_XSIZE {\nif \\$y != -WINDOW_YSIZE {\n\\$y = \\$y - speed\n}\n}\n\n//sup gauche vers sup droite\nif \\$y == -WINDOW_YSIZE {\nif \\$x != WINDOW_XSIZE {\n\\$x = \\$x + speed\n}\n}\n\nframe()\n}``````\nIf I don't touch the screen size, and if I don't use a speed value greater than 10, it works as intended.\nHowever if I change one or the other, this happens\n\nThe few things I understand aren't helping me solving the problem. Please help",
"Edit\nTo be more precise, it seems like the closer \\$x gets to zero, the smaller the value of speed becomes.\n\nEDIT2\nNever mind, I found the problem: it was just badly written conditions."
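For anyone hitting the same wall: the self-diagnosed "badly written conditions" are the classic equality-test-on-a-stepped-value bug. When the speed does not divide the window size exactly, the moving coordinate steps straight over the boundary and the == checks never fire. A minimal sketch of the failure mode (Python, with illustrative numbers rather than the real window size):

```python
# Stepping x down by a speed that doesn't divide the window size:
# the exact boundary value is skipped, so an '==' test never fires.
width, speed = 100, 7
x, equality_hits = width, 0
while x > -width:
    x -= speed
    if x == -width:         # the kind of check the Pixilang code relies on
        equality_hits += 1
print(x, equality_hits)     # -103 0  (overshoots; the test never triggered)
```

Inequality checks such as `x <= -width`, with a clamp back to the boundary, stay robust for any speed.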
]
https://api.call-cc.org/5/doc/matchable/match-let%2A | [
"# chickadee » matchable » match-let*\n\n(match-let [var] ((pat exp) …) body …) syntax\n(match-let* ((pat exp) …) body …) syntax\n(match-letrec ((pat exp) …) body …) syntax\n\nThe match-let, match-let* and match-letrec forms generalize Scheme's let, let*, letrec, and define expressions to allow patterns in the binding position rather than just variables. For example, the following expression:\n\n```(match-let (((x y z) (list 1 2 3))\n((a b c) (list 4 5 6)))\nbody …)```\n\nbinds x to 1, y to 2, z to 3, a to 4, b to 5, and c to 6 in body …. These forms are convenient for destructuring the result of a function that returns multiple values as a list or vector. As usual for letrec, pattern variables bound by match-letrec should not be used in computing the bound value.\n\nAnalogously to named let, match-let accepts an optional loop variable var before the binding list, turning match-let into a general looping construct."
]
http://pedrofreire.com/blog/php-enums.html | [
"# Smart PHP Enums\n\n#### PHP lacks an enumeration type. Can you achieve the same with classes, while retaining the advantages of enums?\n\nA software developer will eventually need to write code that selects an item from a set: a numeric base, such as Binary, from a set of numeric bases; a numeric precision, such as Floating-Point; a network protocol, such as TCP.\n\nComputers, however, are little more than elaborate pocket calculators. They are great with numbers and little else. So software developers assign numbers to each of the items in their sets: 1 for the first item, 2 for the second and so on, for instance. They can now express their selection in code, as can be seen below.\n\n```int x = 1;\n```\n\nThis can quickly become a problem. When you come back to this code a few weeks later, does this tell you (the software developer) your intent? What is `x`, and from which set was that `1` taken: numeric bases, precisions or network protocols?\n\nThe first fix is easy to do: rename your variable.\n\n```int numericBase = 1;\n```\n\nBut now you need to provide meaning to your `1`. Your intent is still not clear in your code, and your development tools (IDE, compiler) can't help you because you're just assigning a random number to an integer variable.\n\nBack in the day, you would address this issue in C with a macro definition or enumeration, abbreviated `enum`.\n\nYour code is now clearer, but you still have issues. In C, `BINARY` is still really just an integer, so what's preventing you from, by mistake, assigning say `200` to that variable? What if you create other macros, for other sets, and mix them up?\n\nWhich macros should be valid in this assignment? Your IDE and compiler can't help, because you're still assigning an integer to a variable.\n\n## Enumeration as a Type\n\nSome languages, such as C++ and C#, have addressed this by elevating enumerations to their own types. 
You can now have your own data type, rather than just integer, and specify which values are allowed for this type.\n\n```enum NumericBase { BINARY, OCTAL, DECIMAL, HEXADECIMAL }\n\nNumericBase numericBase = NumericBase.BINARY;\n```\n\nYour code now looks much better. It is also type-safe, meaning that your IDE and compiler can help you. Variable `numericBase` can only hold values of type `NumericBase`, so you can no longer assign it `200`, or a value from a different enumeration.\n\n## Enumerations in PHP\n\nPHP, at least up to version 7.4 at the time of this writing, doesn't support enumerations. It's common to address this limitation using two separate techniques.\n\nThe first is similar to C's macros.\n\n```define( \"BINARY\", 1 );\ndefine( \"OCTAL\", 2 );\ndefine( \"DECIMAL\", 3 );\n\n\\$numericBase = BINARY;\n```\n\nThe second is similar to C's enumerations, but using class constants.\n\n```class NumericBase\n{\npublic const BINARY = 1;\npublic const OCTAL = 2;\npublic const DECIMAL = 3;\n}\n\n\\$numericBase = NumericBase::BINARY;\n```\n\nNeither of these two solutions addresses the fact that we're just assigning random integers to variables, so the IDEs and interpreter can't help us, and passing a constant from the wrong set to a function is still possible.\n\nCan we be smarter?\n\n## Smart PHP Enums\n\nThe following class acts as a PHP enumeration.\n\n```/**\n* Class NumericBase acting as an enumeration\n* Built by NumericBase::BINARY(), ...\n*/\nclass NumericBase\n{\nprotected \\$base;\nprotected function __construct(int \\$base) { \\$this->base=\\$base; }\n\n// Factory of constants\npublic static function BINARY() : self { return new self(2); }\npublic static function OCTAL() : self { return new self(8); }\npublic static function DECIMAL() : self { return new self(10); }\npublic static function HEXADECIMAL() : self { return new self(16); }\n}\n```\n\nThis is a lightweight enumeration-like class in PHP, a simplified version of what you may find elsewhere. 
You assign an enumeration constant to a variable as follows.\n\n```\\$numericBase = NumericBase::BINARY();\n```\n\nThis is type-safe, so you can only assign “constants” from this class to typed parameters/properties of the same type.\n\nIt is serializable/unserializable with a small payload size.\nThe following code outputs\nO:11:\"NumericBase\":1:{s:7:\"*base\";i:2;}\n\n```print_r( serialize(NumericBase::BINARY()) );\n```\n\nYou can determine the value of an enumeration variable/property as you would in other languages.\n\n```if (\\$numericBase == NumericBase::BINARY())\necho \"I'm binary!\";\n```\n\n```switch (\\$numericBase)\n{\ncase NumericBase::BINARY(): // ...\ncase NumericBase::OCTAL(): // ...\ncase NumericBase::DECIMAL(): // ...\n}\n```\n\nBut because this is a class rather than an `enum` you can also add auxiliary methods. If they are small enough, they add negligible JIT compilation delay and code complexity.\n\n```/**\n* Class NumericBase acting as an enumeration\n* Built by NumericBase::BINARY(), ...\n*/\nclass NumericBase implements JsonSerializable\n{\nprotected \\$base;\nprotected function __construct(int \\$base) { \\$this->base=\\$base; }\n\n// Factory of constants\npublic static function BINARY() : self { return new self(2); }\npublic static function OCTAL() : self { return new self(8); }\npublic static function DECIMAL() : self { return new self(10); }\npublic static function HEXADECIMAL() : self { return new self(16); }\n\n// Optional auxiliary methods\npublic function asInteger() : int { return \\$this->base; }\npublic function isBinary() : bool { return \\$this->base === 2; }\npublic function isOctal() : bool { return \\$this->base === 8; }\npublic function isDecimal() : bool { return \\$this->base === 10; }\npublic function isHexadecimal() : bool { return \\$this->base === 16; }\npublic function jsonSerialize() : array { return get_object_vars(\\$this); }\n}\n```\n\nYou can now determine the value with a shorthand, without creating new 
temporary objects.\n\n```if (\\$numericBase->isBinary())\necho \"I'm binary!\";\n```\n\nThis class is now JSON-encodable. The following code outputs\n{\"base\":2}\n\n```print_r( json_encode(NumericBase::BINARY()) );\n```\n\nThe underlying integer values for each enumeration value were chosen as to be useful on their own, so there's an auxiliary `\\$numericBase->asInteger()`.\n\nYou cannot however, create instances of this enumeration with random underlying integer values, because there is no such method. In fact, the underlying `\\$base`, as well as the class' constructor itself are `protected` preventing them from being used freely except by derived classes.\nThis is all by design.\n\nHappy coding!\n\nPhoto by Geran de Klerk on Unsplash"
]
https://www.geeksforgeeks.org/data-structures-binary-search-trees-question-11/ | [
"# Data Structures | Binary Search Trees | Question 12\n\nConsider the following code snippet in C. The function print() receives the root of a Binary Search Tree (BST) and a positive integer k as arguments.\n\n```\n// A BST node\nstruct node {\n    int data;\n    struct node *left, *right;\n};\n\nint count = 0;\n\nvoid print(struct node *root, int k)\n{\n    if (root != NULL && count <= k)\n    {\n        print(root->right, k);\n        count++;\n        if (count == k)\n            printf(\"%d \", root->data);\n        print(root->left, k);\n    }\n}\n```\n\nWhat is the output of print(root, 3) where root represents the root of the following BST.\n\n``` 15\n/ \\\n10 20\n/ \\ / \\\n8 12 16 25\n```\n\n(A) 10\n(B) 16\n(C) 20\n(D) 20 10\n\nExplanation: The code mainly finds out the k'th largest element in a BST; see K'th Largest Element in BST for details."
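A quick way to check the answer is to trace the reverse-inorder walk. The following is an illustrative Python re-implementation of the C snippet (not GeeksforGeeks' official solution); nodes are (data, left, right) tuples:

```python
count = 0   # mirrors the C global

def print_kth(node, k, out):
    # mirrors print(): visit right subtree, count the node, then left
    global count
    if node is not None and count <= k:
        print_kth(node[2], k, out)      # right child
        count += 1
        if count == k:
            out.append(node[0])         # the k'th largest value
        print_kth(node[1], k, out)      # left child

leaf = lambda v: (v, None, None)
root = (15, (10, leaf(8), leaf(12)), (20, leaf(16), leaf(25)))
out = []
print_kth(root, 3, out)
print(out)   # [16]
```

The walk counts 25, 20, 16 before `count` reaches 3, so print(root, 3) prints 16, i.e. option (B).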
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.58940166,"math_prob":0.82673776,"size":1668,"snap":"2020-10-2020-16","text_gpt3_token_len":433,"char_repetition_ratio":0.28365386,"word_repetition_ratio":0.38110748,"special_character_ratio":0.29856116,"punctuation_ratio":0.08424909,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9798327,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-26T11:56:29Z\",\"WARC-Record-ID\":\"<urn:uuid:008c4dd6-1e72-4f4d-a32b-ed205289943f>\",\"Content-Length\":\"123783\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a7850303-498a-40dd-9651-81f1d58054d1>\",\"WARC-Concurrent-To\":\"<urn:uuid:7596a2bd-cce8-47e8-9dc2-d7d6114fb824>\",\"WARC-IP-Address\":\"104.96.221.51\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/data-structures-binary-search-trees-question-11/\",\"WARC-Payload-Digest\":\"sha1:FLELT4S6A6MMYAUZS6O25XIOI75NYCBQ\",\"WARC-Block-Digest\":\"sha1:F3SR74PPHWJYV626CEUWPQTXFOAQZZ6Y\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875146342.41_warc_CC-MAIN-20200226115522-20200226145522-00446.warc.gz\"}"} |
https://www.boxingevolution.com/articles/total-daily-energy-expenditure-tdee/ | [
"",
null,
"# Total Daily Energy Expenditure (TDEE)\n\nYour Total Daily Energy Expenditure (TDEE) is how many calories you are using on a day to day basis in order just to function and perform all the tasks placed upon your body.\n\nThis falls into two categories:\n\n1. Basal metabolic rate (how many calories your body uses just to function when doing nothing, such as energy for breathing and pumping your heart and powering your metabolism etc)\n2. Physical activity ratio (Do you sit around or are you up on your feet all day for example)\n\nThe two categories are then used to calculate your average TDEE\n\nBasal metabolic rate (BMR)\nWe’ll use the Mifflin-St Jeor formula to estimate Basal Metabolic Rate (BMR). This is essentially the amount of energy expended per day before we add in activity levels.\n\nThere are several equations we can use but a study by the ADA (American Dietetic Association) found the Mifflin-St Jeor method to be pretty accurate. Here is the formula:\n\nMen: 10 x weight (kg) + 6.25 x height (cm) – 5 x age (y) + 5\n\nWomen: 10 x weight (kg) + 6.25 x height (cm) – 5 x age (y) – 161.\n\nNext we need to factor in your general day to day activity levels.\n\nPhysical activity ratio (PAR)\nNow that we’ve worked out the BMR, we need to multiply it by a Physical Activity Ratio (the estimated cost of activity he does per day).\n\nIf you were working this out, you would need to multiply your BMR by:\n\n1.2 if you do little or no exercise\n\n1.4 if you do exercise a couple of times per week\n\n1.5 to 1.7 if you exercise several times per week\n\n1.9+ if you exercise every day or have a hard, physical job\n\nOnce you have figured out your BMR and PAR you will have what is known as your total daily energy expenditure (TDEE).\n\nMeet Geoff\nHe’s a 30 year old male, weighing 80kg at 180cm and he never works out. He has a desk job and he wants to know his TDEE for fat loss.\n\nSo for Geoff, our 30 year old male, weighing 80kg at 180cm. 
His BMR would roughly be: (10 x 80) + (6.25 x 180) – (5 x 30) + 5 = 1780.\n\nGeoff does no training or physical activity per week so we’ll multiply his BMR (1780) by 1.2. This gives us his estimated TDEE of: 2136.\n\nCalorie calculator\nAs calculating you calories manually can be tricky if your not great with maths we have done the hard work for you and created a simple easy to use calculator. Simply input your details to get the results.\n\n## MWP Diet Calorie Calculator\n\nft\nin\nlbs\nlbs\nyrs\ndays\n%\n%\n%\n\n--\n\n--\n\n--\n\n--\n\n--\n\n### Rest Calories\n\nBefore using the data obtained using this calculator, please consult with doctor."
]
| [
null,
"https://www.boxingevolution.com/wp-content/uploads/2020/01/tdee.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9383708,"math_prob":0.967854,"size":2318,"snap":"2020-34-2020-40","text_gpt3_token_len":607,"char_repetition_ratio":0.098962836,"word_repetition_ratio":0.058956917,"special_character_ratio":0.2627265,"punctuation_ratio":0.08196721,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9671796,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-24T21:15:05Z\",\"WARC-Record-ID\":\"<urn:uuid:38f04849-6d9b-4f44-8d7c-629d89c10e85>\",\"Content-Length\":\"124043\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f5f87852-0e3f-4396-89eb-21b50045f4e9>\",\"WARC-Concurrent-To\":\"<urn:uuid:d1f05632-3bcf-4f0b-b0a6-ad83e1ef3799>\",\"WARC-IP-Address\":\"159.65.28.234\",\"WARC-Target-URI\":\"https://www.boxingevolution.com/articles/total-daily-energy-expenditure-tdee/\",\"WARC-Payload-Digest\":\"sha1:C5BPD6HA24ZJPVZFPL65ZC3TWUMQE3HH\",\"WARC-Block-Digest\":\"sha1:4G4BGQPJRGE3QGJSK76CIJZWSQ7SNTED\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400220495.39_warc_CC-MAIN-20200924194925-20200924224925-00508.warc.gz\"}"} |
https://webapps.stackexchange.com/questions/96068/how-do-i-show-cell-data-as-a-value-when-referencing-another-cell-that-includes-a | [
"# How do I show cell data as a value when referencing another cell that includes a formula?\n\nI want to show a cell value in a cell where it is being referenced to a cell that includes a simple formula. e.g.\n\nIn `D4` I want to reference `C4` (i.e. `=C4`) and show it as a value, where `C4` has a concatenate formula: `=concatenate(\"000\",A4,\"-\",E4,\"-\",J4,\"-\",M4)`.\n\n### Explanation\n\nI currently have a reference in D4 cell that is C4, which returns a value, in D4, of 0001-190716-AM-ABERD, for example. When I double click into D4 in order to copy the text as text, I simply get the formula i.e. =C4. What I'm trying to figure out is if there is a way that I can put a formula into D4 that not only returns the value 0001-190716-AM-ABERD, but returns it as text - so when I double click into the cell the text shows and not the formula?\n\n• I don't understand \"show it as a value\" means. If the result of formula is 000-222-111, then this is what will be seen in both C4 and D4, provided you have `=C4` in the D4 cell. – user79865 Jul 29 '16 at 17:09\n• I currently have a reference in D4 cell that is =C4, which returns a value, in D4, of 0001-190716-AM-ABERD, for example. When I double click into D4 in order to copy the text as text, I simply get the formula i.e. =C4. What I'm trying to figure out is if there is a way that I can put a formula into D4 that not only returns the value 0001-190716-AM-ABERD, but returns it as text - so when I double click into the cell the text shows and not the formula? – Gary Willmott Aug 1 '16 at 8:23\n• This can't be done. Just don't double click. Click once to select a cell, and copy. – user79865 Aug 1 '16 at 11:35\n• I owndered if that was the case. Thanks for your help. – Gary Willmott Aug 1 '16 at 15:36\n\nIf a user double-clicks in a cell, what they see is their input into that cell. In the situation described, that input will be a formula, for example `=C4`, not the output of that formula such as \"0001-190716-AM-ABERD\"."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9098285,"math_prob":0.4536699,"size":778,"snap":"2020-45-2020-50","text_gpt3_token_len":240,"char_repetition_ratio":0.12273902,"word_repetition_ratio":0.0,"special_character_ratio":0.33290488,"punctuation_ratio":0.14736842,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.967318,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-25T08:56:35Z\",\"WARC-Record-ID\":\"<urn:uuid:cd481682-f9ff-4fe8-92a6-c08becd0d23d>\",\"Content-Length\":\"152284\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:63b63986-670a-4520-a853-15c1ba10fb91>\",\"WARC-Concurrent-To\":\"<urn:uuid:2230d5e6-c938-4af6-8ff2-a499eb3d690d>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://webapps.stackexchange.com/questions/96068/how-do-i-show-cell-data-as-a-value-when-referencing-another-cell-that-includes-a\",\"WARC-Payload-Digest\":\"sha1:BFAEQUPUGXYYZXZJW2NLTNSLTPTB6XY6\",\"WARC-Block-Digest\":\"sha1:2FBNGBJONXHZ6FZE4IQFBCFNU6PMW2RB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141181482.18_warc_CC-MAIN-20201125071137-20201125101137-00653.warc.gz\"}"} |
https://nickadamsinamerica.com/math-word-problems-with-solutions-and-answers-for-grade-5/a39e3b3113926e4d579f4cdd1e2c7ddb/ | [
"A39e3b3113926e4d579f4cdd1e2c7ddb\n\nBy . Worksheet. At Wednesday, October 20th 2021, 06:32:32 AM.\n\nThese addition worksheets will produce 12 vertical or horizontal addition problems using dot figures to represent the numbers. You may select the numbers for the addition worksheets to be used from 0 to 10.\n\nStretching exercises help improve flexibility, allowing muscles and joints to bend and move easily through their full range of motion. Kids get chances every day to stretch when they reach for a toy, practice a split, or do a cartwheel.\n\nWhen we add numbers that need to be carried over we can only carry over a digit into the right spot. For example, when we add the 4 and 9 from the problem on the board we have to carry over the answer (13) into the correct place value spots.",
null,
"Fifteen Word Problems About Dr Seuss And His Stories Page Four Is An Answer Key Problems Include Multi Step Word Problems Math Word Problems Dr Seuss Math",
null,
"",
null,
"Second Grade Math Worksheets Word Problem Worksheets Addition Words Word Problems",
null,
"Add And Subtract Fractions Word Problems Fraction Word Problems Math Center Activities Math Word Problems",
null,
"Free On Teachers Pay Teachers By The Teachers Need Travel Store Word Problems Math Poster Common Core Word Problems",
null,
"Pin By Jennifer Sparks On Awesome Classroom Stuff Math Word Problems Anchor Chart Math Word Problems Teaching Subtraction",
null,
"5 Addition And Subtraction Word Problems Worksheets 2 Addition Problems Subtraction Word Works Word Problem Worksheets Subtraction Word Problems Math Words",
null,
"Solving Multi Step Word Problems Like A Boss Multi Step Word Problems Word Problem Worksheets Word Problems",
null,
"Math Word Problem Worksheets For Grade 3 Students K5 Learning Word Problems Mixed Word Problems Math Word Problems",
null,
"These Real Life Math Challenges Require A Variety Of Math Skills To Solve These Challenges Are Perfect For Applying And Pra Math Challenge Math Real Life Math",
null,
"1st Grade Number Math Word Problems Subtraction Word Problems 1st Grade Math Worksheets",
null,
"Common Core Two Step Word Problems Math Word Problems Word Problems Math Expressions",
null,
"Free Multiplying Decimals Quiz Or Review And Answer Key Fifth Grade Math Teaching Math Multiplying Decimals",
null,
"Extra Elapsed Time Practice Time Word Problems Elapsed Time Word Problems Elapsed Time",
null,
"Mixed Word Problems Mixed Word Problems Word Problems Subtraction Word Problems",
null,
"Word Problems Solving For The Unknown Math Word Problems Math Problems For Kids Word Problems\n\nGallery of Math Word Problems With Solutions And Answers For Grade 5\n\n1 star 2 stars 3 stars 4 stars 5 stars\n\nAny content, trademark/s, or other material that might be found on this site that is not this site property remains the copyright of its respective owner/s."
]
| [
null,
"https://nickadamsinamerica.com/y/2021/10/40fda10c9f0f21bad7ba323713a8fc4c.jpg",
null,
"https://i.pinimg.com/originals/a3/9e/3b/a39e3b3113926e4d579f4cdd1e2c7ddb.gif",
null,
"https://nickadamsinamerica.com/y/2021/10/72b9b4520a71538105d5b13256d94681.jpg",
null,
"https://nickadamsinamerica.com/y/2021/10/148616c7bb5d9538ef00871f7096b40a.jpg",
null,
"https://nickadamsinamerica.com/y/2021/10/e74d0c6bf429b7d570f48be39674ae09.jpg",
null,
"https://nickadamsinamerica.com/y/2021/10/27019f2676c4b6365146ef8fee70c048.jpg",
null,
"https://nickadamsinamerica.com/y/2021/10/9b4d71fa97c2fec5a3aeb260b9a5b970.jpg",
null,
"https://i.pinimg.com/736x/0d/e9/fc/0de9fc19d3151d0bb8ce84972bc73666.jpg",
null,
"https://nickadamsinamerica.com/y/2021/10/a39e3b3113926e4d579f4cdd1e2c7ddb.gif",
null,
"https://i.pinimg.com/originals/30/d8/6b/30d86b70af264681248f4750c226b6d2.png",
null,
"https://i.pinimg.com/736x/76/df/78/76df787b2879b11e4e459a16fdf47825--addition-stories-kindergarten-subtraction-word-problems-kindergarten.jpg",
null,
"https://i.pinimg.com/originals/c4/9c/7d/c49c7d4138df18220e9489ee94a81667.jpg",
null,
"https://i.pinimg.com/originals/fb/8a/cd/fb8acd6de0f9fab2e9db56bf22683588.jpg",
null,
"https://i.pinimg.com/originals/a7/e9/13/a7e9132ea5214d97d2301c90dc75a65f.jpg",
null,
"https://i.pinimg.com/originals/7b/04/4e/7b044e02d8f6fa860e5282f6135fdf3f.png",
null,
"https://i.pinimg.com/originals/83/81/32/8381326622bd6db3a077f01cddac8cb5.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7371301,"math_prob":0.7606621,"size":2645,"snap":"2022-05-2022-21","text_gpt3_token_len":514,"char_repetition_ratio":0.26656568,"word_repetition_ratio":0.025882352,"special_character_ratio":0.1810964,"punctuation_ratio":0.04474273,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9927541,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32],"im_url_duplicate_count":[null,7,null,8,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,9,null,null,null,null,null,null,null,null,null,null,null,8,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-22T18:23:28Z\",\"WARC-Record-ID\":\"<urn:uuid:b0456e8e-c0ba-41dd-a1ce-519ce6c9e6a0>\",\"Content-Length\":\"41194\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:da82ca12-e898-48d0-bf4c-33f36e3226ed>\",\"WARC-Concurrent-To\":\"<urn:uuid:e91aa3f7-d64f-4e15-bcf7-8f79a89fe2bb>\",\"WARC-IP-Address\":\"104.21.27.110\",\"WARC-Target-URI\":\"https://nickadamsinamerica.com/math-word-problems-with-solutions-and-answers-for-grade-5/a39e3b3113926e4d579f4cdd1e2c7ddb/\",\"WARC-Payload-Digest\":\"sha1:7N5JYXE2NMP32SCWJV3BWN3Y5ROVYTEF\",\"WARC-Block-Digest\":\"sha1:CHF3CR2FMELRP2RRDXK47JS36QI7UOG4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320303868.98_warc_CC-MAIN-20220122164421-20220122194421-00055.warc.gz\"}"} |
http://mathcentre.ac.uk/topics/geometry/circle/ | [
"Accessibility options:\n\n# Co-ordinate geometry of a circle resources\n\nShow me all resources applicable to\n\n### iPOD Video (5)",
null,
"The Geometry of a Circle - Part 3\nIPOD VIDEO: In this unit we find the equation of a circle, when we are told its centre and its radius. There are two different forms of the equation, and you should be able to recognise both of them. We also look at some problems involving tangents to circles. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd.",
null,
"The Geometry of a Circle - Part 4\nIPOD VIDEO: In this unit we find the equation of a circle, when we are told its centre and its radius. There are two different forms of the equation, and you should be able to recognise both of them. We also look at some problems involving tangents to circles. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd.",
null,
"The Geometry of a Circle - Part 5\nIPOD VIDEO: In this unit we find the equation of a circle, when we are told its centre and its radius. There are two different forms of the equation, and you should be able to recognise both of them. We also look at some problems involving tangents to circles. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd.",
null,
"The Geometry of a Circle - Part 1\nIPOD VIDEO: In this unit we find the equation of a circle, when we are told its centre and its radius. There are two different forms of the equation, and you should be able to recognise both of them. We also look at some problems involving tangents to circles. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd.",
null,
"The Geometry of a Circle - Part 2\nIPOD VIDEO: In this unit we find the equation of a circle, when we are told its centre and its radius. There are two different forms of the equation, and you should be able to recognise both of them. We also look at some problems involving tangents to circles. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd.\n\n### Teach Yourself (1)",
null,
"The geometry of a circle\nIn this unit we find the equation of a circle, when we are told its centre and its radius. There are two different forms of the equation, and you should be able to recognise both of them. We also look at some problems involving tangents to circles.\n\n### Test Yourself (2)",
null,
"Diagnostic Test - Co-ordinate geometry of a circle\nThis resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd.",
null,
"Exercise - Co-ordinate geometry of a circle\nThis resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd.\n\n### Video (1)",
null,
"The geometry of a circle\nIn this unit we find the equation of a circle, when we are told its centre and its radius. There are two different forms of the equation, and you should be able to recognise both of them. We also look at some problems involving tangents to circles. (Mathtutor Video Tutorial)\n\nWebsite design by Pink Mayhem, Leicester"
]
| [
null,
"http://mathcentre.ac.uk/images/icons/ipod.png",
null,
"http://mathcentre.ac.uk/images/icons/ipod.png",
null,
"http://mathcentre.ac.uk/images/icons/ipod.png",
null,
"http://mathcentre.ac.uk/images/icons/ipod.png",
null,
"http://mathcentre.ac.uk/images/icons/ipod.png",
null,
"http://mathcentre.ac.uk/images/icons/pdf.png",
null,
"http://mathcentre.ac.uk/images/icons/tick.png",
null,
"http://mathcentre.ac.uk/images/icons/tick.png",
null,
"http://mathcentre.ac.uk/images/icons/video.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.94644094,"math_prob":0.9268633,"size":3369,"snap":"2019-13-2019-22","text_gpt3_token_len":728,"char_repetition_ratio":0.11114413,"word_repetition_ratio":0.89241624,"special_character_ratio":0.1979816,"punctuation_ratio":0.07401575,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9681053,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-19T19:17:26Z\",\"WARC-Record-ID\":\"<urn:uuid:68972247-9a4a-47db-90c2-1144e3b644c3>\",\"Content-Length\":\"13179\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:78f87aad-acff-4aea-b048-443149eed1e2>\",\"WARC-Concurrent-To\":\"<urn:uuid:f4b3668f-f536-46ca-b998-187b82e5f090>\",\"WARC-IP-Address\":\"158.125.161.181\",\"WARC-Target-URI\":\"http://mathcentre.ac.uk/topics/geometry/circle/\",\"WARC-Payload-Digest\":\"sha1:2ZBJALXGM7YKD3F3IF3RT4WKFBH3I5UC\",\"WARC-Block-Digest\":\"sha1:2YIZLHRFSEYAZHCTA6NPIUEGOBHDQF3Z\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202125.41_warc_CC-MAIN-20190319183735-20190319205735-00102.warc.gz\"}"} |
https://socratic.org/questions/5933a1c77c01493990b7e5b1 | [
"# Question 7e5b1\n\nJun 5, 2017\n\nHere's how I would do it.\n\n#### Explanation:\n\nYou get the radius of the sphere from the volume of the displaced water.\n\n$V = {\\text{30 cm\"^3 – \"20 cm\"^3 = \"10 cm}}^{3}$\n\nThe formula for the volume $V$ of a sphere is\n\ncolor(blue)(bar(ul(|color(white)(a/a) V = 4/3πr^3color(white)(a/a)|)))\" \"\n\nwhere $r$ is the radius.\n\nWe can rearrange this formula to get\n\nr = root3((3V)/(4π))\n\nr = root3((\"3 × 10 cm\"^3)/(4π)) = root3(\"2.39 cm\"^3) = \"1.3 cm\"\n\n(b) Density of metal block\n\nYou get the density ρ of the metal block from its mass $m$ and volume $V$.\n\ncolor(blue)(bar(ul(|color(white)(a/a)ρ = m/Vcolor(white)(a/a)|)))\" \"\n\nAssume that $m = \\text{39 g}$.\n\n$V = {\\text{35 cm\"^3 - \"30 cm\"^3 = \"5 cm}}^{3}$\n\nρ = m/V= \"39 g\"/\"5 cm\"^3 = \"7.8 g/cm\"^3\n\n(c) Density of wood\n\nYou apparently removed the metal sphere for this experiment.\n\nThe original volume was ${\\text{20 cm}}^{3}$.\n\nThe metal block had a volume of ${\\text{5 cm}}^{3}$, so\n\n${\\text{Volume of water + metal = 25 cm}}^{3}$.\n\nWhen you submerged the wooden block,\n\n${\\text{Volume of water + metal + wood = 40 cm}}^{3}$.\n\n∴The volume of the wooden block is\n\n$V = {\\text{40 cm\"^3 - \"25 cm\"^3 = \"15 cm}}^{3}$\n\nAssume that the mass of the wood is 7.5 g. Then\n\nρ = m/V = \"7.5 g\"/\"15 cm\"^3 = \"0.50 g/cm\"^3#"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.786393,"math_prob":0.9998764,"size":749,"snap":"2019-43-2019-47","text_gpt3_token_len":219,"char_repetition_ratio":0.13959731,"word_repetition_ratio":0.0,"special_character_ratio":0.26301736,"punctuation_ratio":0.054421768,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999912,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-21T03:36:20Z\",\"WARC-Record-ID\":\"<urn:uuid:ea04f843-9200-453d-85ba-ff7bdaa54e90>\",\"Content-Length\":\"36239\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:959e3ce1-dc1e-4174-be4b-58a5db0d8d40>\",\"WARC-Concurrent-To\":\"<urn:uuid:89923cfa-a7c8-4c25-875c-becc45bcb211>\",\"WARC-IP-Address\":\"54.221.217.175\",\"WARC-Target-URI\":\"https://socratic.org/questions/5933a1c77c01493990b7e5b1\",\"WARC-Payload-Digest\":\"sha1:KYQECFBPXBZMYWR4Z5GOZGQUOS56MBDD\",\"WARC-Block-Digest\":\"sha1:4UUKKVZHRDNJ74WDYI7L5HMPZGQVT3YK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670729.90_warc_CC-MAIN-20191121023525-20191121051525-00510.warc.gz\"}"} |
https://exampundit.in/sbi-clerk-2018-quantitative-aptitude-quiz-for-prelims-15/ | [
"# SBI Clerk 2018: Quantitative Aptitude Quiz for Prelims – 15\n\n0\n(0)\n\nHello and welcome to exampundit. Here is a set of Quantitative Aptitude Quiz on Mixed Problems for Prelims exam of SBI Clerk 2018.\n\n1. A milkman adds 500 ml of water to each litre of milk he has in a container. He sells 30 litre of mixture from container and adds 10 litre milk in the remaining. The ratio of milk and water in the final mixture is 11:5. Find the initial quantity of milk in the container.\n\n(A) 120ml\n(B) 100ml\n(C) 200ml\n(D) 150ml\n(E) 220ml\n\nOption: A\n\nExplanation: Let initial quantity of milk be 10x\ntotal quantity =15x\nquantity of milk =10x\nAfter selling 30 litre of mixture and adding 10 litre milk,\ntotal quantity =(15x−30+10)=(15x−20)\nquantity of milk =10x−(30×10x15x)+10=(10x−10)\n10x−1015x−20=1111+5⇒x=12\nInitial quantity of milk in the container\n=10x=120 litre\n\n1. A milk vendor has 2 cans of milk. The first contains 25% water and the rest milk. The second contains 50% water. How much milk should he mix from each of the containers so as to get 12 litres of milk such that the ratio of water to milk is 3 : 5?\n\n(A) 5litre, 6litre\n(B) 6litre, 6litre\n(C) 6litre, 5litre\n(D) 5litre, 5litre\n(E) none of these\n\nOption: B\n\nExplanation: Let x and (12-x) litres of milk be mixed from the first and second container respectively\nAmount of milk in x litres of the first container = .75x\nAmount of water in x litres of the first container = .25x\nAmount of milk in (12-x) litres of the second container = .5(12-x)\nAmount of water in (12-x) litres of the second container = .5(12-x)\nRatio of water to milk = [.25x + .5(12-x)] : [.75x + .5(12-x)]=3:5 ⇒(.25x+6−.5x)/(.75x+6−.5x)=3/5\n⇒(6−.25x)/(.25x+6)=3/5\n⇒30−1.25x=.75x+18\n⇒2x=12⇒x=6\nSince x = 6, 12-x = 12-6 = 6 Hence 6 and 6 litres of milk should mixed from the first and second container respectively\n\n1. Two friends A and B leave City P and City Q simultaneously and travel towards Q and P at constant speeds. 
They meet at a point in between the two cities and then proceed to their respective destinations in 54 minutes and 24 minutes respectively. How long did B take to cover the entire journey between City Q and City P?\n\n(A) 90min\n(B) 45min\n(C) 60min\n(D) 20min\n(E) 40min\n\nOption: C\n\nExplanation: Let us assume Car A travels at a speed of a and Car B travels at a speed of b. Further, let us assume that they meet after t minutes.\nDistance traveled by car A before meeting car B = a * t. Likewise distance traveled by car B before meeting car A = b * t.\nDistance traveled by car A after meeting car B = a * 54. Distance traveled by car B after meeting car A = 24 * b.\nDistance traveled by car A after crossing car B = distance traveled by car B before crossing car A (and vice versa).\n=> at = 54b ———- (1)\nand bt = 24a ——– (2)\nMultiplying equations 1 and 2\nwe have ab * t2 = 54 * 24 * ab\n=> t2 = 54 * 24\n=> t = 36\nSo, both cars would have traveled 36 minutes prior to crossing each other. Or, B would have taken 36 + 24 = 60 minutes to travel the whole distance.\n\n1. Tia, Mina, Gita, Lovely and Binny are 5 sisters, aged in that order, with Tia being the eldest. Each of them had to carry a bucket of water from a well to their house. Their buckets’ capacities were proportional to their ages. While returning, equal amount of water got splashed out of their buckets. Who lost maximum amount of water as a percentage of the bucket capacity?\n\n(A) Mina\n(B) Tia\n(C) Lovely\n(D) Binny\n(E) Gita\n\nOption: D\n\nExplanation: Tia is the older and Binny is the youngest.\nSo, Binny’s bucket would have been the smallest.\nEach sister lost equal amount of water.\nAs a proportion of the capacity of their buckets Binny would have lost the most.\n\n1. 
What is the maximum percentage discount that Sundarnath can offer on his marked price so that he ends up selling at no profit or loss, if he had initially marked his goods up by 50%?\n\n(A) 67.67%\n(B) 25%\n(C) 13.67%\n(D) 27.5%\n(E) 33.33%\n\nOption: E\n\nExplanation: Let the cost price of the goods to be 100x\nhe had initially marked his goods up by 50%.\nTherefore, a 50% markup would have resulted in his marked price being 100x + 50% of 100x = 100x + 50x = 150x.\nHe finally sells the product at no profit or loss.\ni.e., he sells the product at cost price, which in this case is 100x.\nTherefore, he offers a discount of 50x on his marked price of 150x.\nHence, the % discount offered by him= Discount/MarkedPrice×100=50/150×100DiscountMarked Price×100=50/150×100= 33.33%\n\n1. In a game there are 70 people in which 40 are boys and 30 are girls, out of which 10 people are selected at random. One from the total group, thus selected is selected as a leader at random. What is the probability that the person, chosen as the leader is a boy?\n\n(A) 4/7\n(B) 4/5\n(C) 3/7\n(D) 1/18\n(E) 2/17\n\nOption: A\n\nExplanation: The total groups contains boys and girls in the ratio 4:3\nIf some person are selected at random from the group, the expected value of the ratio of boys and girls will be 4:3\nIf the leader is chosen at random from the selection, the probability of him being a boy = 4/7\n\n1. 
If a sum of money grows to 144/121 times when invested for two years in a scheme where interest is compounded annually, how long will the same sum of money take to treble if invested at the same rate of interest in a scheme where interest is computed using simple interest method?\n\n(A) 18years\n(B) 22years\n(C) 21years\n(D) 19years\n(E) 13years\n\nOption: B\n\nExplanation: The sum of money grows to 144/121 times in 2 years.\nIf P is the principal invested, then it has grown to 144/121 P in two years when invested in compound interest.\nIn compound interest, if a sum is invested for two years, the amount is found using the following formula\nA=(1+R/100)² P in this case.\n=>(1+R/100)²=144/121 =>(1+R/100)²=(12/11)²=>R=100/11\nIf r =100/11% , then in simple interest the time it will take for a sum of money to treble is found out as follows:\nLet P be the principal invested. Therefore, if the principal trebles = 3P, the remaining 2P has come on account of simple interest.\nSimple Interest =PNR/100 , where P is the simple interest, R is the rate of interest and ‘N’ is the number of years the principal was invested.\nTherefore, 2P =PN×100/11×100 => 2 =N/11 or N = 22 years\n\n1. Area of a Rhombus of perimeter 56 cms is 100 sq cms. Find the sum of the lengths of its diagonals?\n\n(A) 29.80cm\n(B) 31.20cm\n(C) 34.40cm\n(D) 27.60cm\n(E) 24.40cm\n\nOption: C\n\nExplanation: Perimeter = 56. Let the side of the rhombus be “a”, then 4a = 56 => a =14.\nArea of Rhombus = Half the product of its diagonals. Let the diagonals be d1 and d2 respectively.\n1/2×d1× d2 = 100 => d1×d2 = 200.\nBy Pythagoras theorem, (d1)² + (d2)²= 4a² => (d1)² + (d2)² = 4×196 = 784.\n(d1)² + (d2)² + 2d1× d2 = (d1+ d2)² = 784 +2×200 = 1184 => (d1+ d2) = √1184 = 34.40\nTherefore, sum of the diagonals is equal to 34.40 cm .\n\nRegards\n\nTeam Exampundit\n\nAverage rating 0 / 5. Vote count: 0\n\nNo votes so far! 
Be the first to rate this post.\n\nWe are sorry that this post was not useful for you!\n\nLet us improve this post!\n\nTell us how we can improve this post?"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.92235595,"math_prob":0.98806584,"size":6904,"snap":"2020-24-2020-29","text_gpt3_token_len":2141,"char_repetition_ratio":0.11333334,"word_repetition_ratio":0.05773672,"special_character_ratio":0.3354577,"punctuation_ratio":0.10073875,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9975698,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-04T11:14:35Z\",\"WARC-Record-ID\":\"<urn:uuid:9e4bba2e-9c7e-4ee2-be26-57a814bb8edf>\",\"Content-Length\":\"114473\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9524c170-473c-4220-bcbd-04b966162d5a>\",\"WARC-Concurrent-To\":\"<urn:uuid:26a8ff48-918c-40fa-bb50-1695927b4d05>\",\"WARC-IP-Address\":\"104.24.116.195\",\"WARC-Target-URI\":\"https://exampundit.in/sbi-clerk-2018-quantitative-aptitude-quiz-for-prelims-15/\",\"WARC-Payload-Digest\":\"sha1:AP4KZD6JA7ZWS5OEA43EPZ2Z24GCAASN\",\"WARC-Block-Digest\":\"sha1:MAATTBASDCFV224WFW3SJNYQ73ISKZBF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347439928.61_warc_CC-MAIN-20200604094848-20200604124848-00229.warc.gz\"}"} |
https://django-crispy-forms.readthedocs.io/en/latest/dynamic_layouts.html | [
"# Updating layouts on the go¶\n\nLayouts can be changed, adapted and generated programmatically.\n\nThe next sections will explain how to select parts of a layout and update them. We will use this API from the `FormHelper` instance and not the layout itself. This API’s basic behavior consists of selecting the piece of the layout to manipulate and chaining methods that alter it after that.\n\n## Selecting layout objects with slices¶\n\nYou can get a slice of a layout using familiar `[]` Python operator:\n\n```form.helper[1:3]\nform.helper\nform.helper[:-1]\n```\n\nYou can basically do all kind of slices, the same ones supported by Python’s lists. You can also concatenate them. If you had this layout:\n\n```Layout(\nDiv('email')\n)\n```\n\nYou could access `'email'` string doing:\n\n```form.helper\n```\n\n## wrap¶\n\nOne useful action you can apply on a slice is `wrap`, which wraps every selected field using a layout object type and parameters passed. Let’s see an example. If We had this layout:\n\n```Layout(\n'field_1',\n'field_2',\n'field_3'\n)\n```\n\nWe could do:\n\n```form.helper[1:3].wrap(Field, css_class=\"hello\")\n```\n\nWe would end up having this layout:\n\n```Layout(\n'field_1',\nField('field_2', css_class='hello'),\nField('field_3', css_class='hello')\n)\n```\n\nNote how `wrap` affects each layout object selected, if you would like to wrap `field_2` and `field_3` together in a `Field` layout object you will have to use wrap_together.\n\nBeware that the slice `[1:3]` only looks in the first level of depth of the layout. So if the previous layout was this way:\n\n```Layout(\n'field_1',\nDiv('field_2'),\n'field_3'\n)\n```\n\n`helper[1:3]` would return this layout:\n\n```Layout(\n'field_1',\nField(Div('field_2'), css_class=\"hello\"),\nField('field_3', css_class=\"hello\")\n)\n```\n\nParameters passed to `wrap` or `wrap_together` will be used for creating the layout object that is wrapping selected fields. You can pass `args` and `kwargs`. 
If you are using a layout object like `Fieldset` which needs a string as its compulsory first argument, `wrap` will not work as desired unless you provide the text of the legend as an argument to `wrap`. Let’s see a valid example:\n\n```form.helper[1:3].wrap(Fieldset, \"legend of the fieldset\")\n```\n\nAlso you can pass `args` and `kwargs`:\n\n```form.helper[1:3].wrap(Fieldset, \"legend of the fieldset\", css_class=\"fieldsets\")\n```\n\n## wrap_together¶\n\n`wrap_together` wraps a whole slice within a layout object type with parameters passed. Let’s see an example. If we had this layout:\n\n```Layout(\n'field_1',\n'field_2',\n'field_3'\n)\n```\n\nWe could do:\n\n```form.helper[0:3].wrap_together(Field, css_class=\"hello\")\n```\n\nWe would end up having this layout:\n\n```Layout(\nField(\n'field_1',\n'field_2',\n'field_3',\ncss_class='hello'\n)\n)\n```\n\n## update_attributes¶\n\nUpdates attributes of every layout object contained in a slice:\n\n```Layout(\n'field_1',\nField('field_2'),\nField('field_3')\n)\n```\n\nWe could do:\n\n```form.helper[0:3].update_attributes(css_class=\"hello\")\n```\n\nThe layout would turn into:\n\n```Layout(\n'field_1',\nField('field_2', css_class='hello'),\nField('field_3', css_class='hello')\n)\n```\n\nWe can also apply it to a field name wrapped in a layout object:\n\n```form.helper['field_2'].update_attributes(css_class=\"hello\")\n```\n\nHowever, the following wouldn’t be correct:\n\n```form.helper['field_1'].update_attributes(css_class=\"hello\")\n```\n\nBecause it would change the `Layout` attrs. It’s your job to have it wrapped correctly.\n\n## all¶\n\nThis method selects all layout objects in the first level of depth:\n\n```form.helper.all().wrap(Field, css_class=\"hello\")\n```\n\n## Selecting a field name¶\n\nIf you pass a string with the field name, this field name will be searched for greedily throughout all depth levels of the Layout. 
Imagine we have this layout:\n\n```Layout(\n'field_1',\nDiv(\nDiv('password')\n),\n'field_3'\n)\n```\n\nIf we do:\n\n```form.helper['password'].wrap(Field, css_class=\"hero\")\n```\n\nThe previous layout would become:\n\n```Layout(\n'field_1',\nDiv(\nDiv(\nField('password', css_class=\"hero\")\n)\n),\n'field_3'\n)\n```\n\n## filter¶\n\nThis method will allow you to filter layout objects by their class type, applying actions to them:\n\n```form.helper.filter(basestring).wrap(Field, css_class=\"hello\")\nform.helper.filter(Div).wrap(Field, css_class=\"hello\")\n```\n\nYou can filter several layout object types at the same time:\n\n```form.helper.filter(basestring, Div).wrap(Div, css_class=\"hello\")\n```\n\nBy default `filter` is not greedy, so it only searches the first depth level. But you can tune it to search in different levels of depth with the kwarg `max_level` (by default set to 0). Let’s see some examples to clarify it. Imagine we have this layout:\n\n```Layout(\n'field_1',\nDiv(\nDiv('password')\n),\n'field_3'\n)\n```\n\nIf we did:\n\n```form.helper.filter(basestring).wrap(Field, css_class=\"hello\")\n```\n\nOnly `field_1` and `field_3` would be wrapped, resulting in:\n\n```Layout(\nField('field_1', css_class=\"hello\"),\nDiv(\nDiv('password')\n),\nField('field_3', css_class=\"hello\"),\n)\n```\n\nIf we wanted to search deeper, wrapping `password`, we would need to set `max_level` to 2 or more:\n\n```form.helper.filter(basestring, max_level=2).wrap(Field, css_class=\"hello\")\n```\n\nIn other words, `max_level` indicates the number of jumps crispy-forms can do within a layout object for matching. In this case getting into the first `Div` would be one jump, and getting into the next `Div` would be the second jump, thus `max_level=2`.\n\nWe can make `filter` greedy, making it search as deep as possible, by setting `greedy` to `True`:\n\n```form.helper.filter(basestring, greedy=True).wrap(Div, css_class=\"hello\")\n```\n\nParameters:\n\n• `max_level`: An integer representing the number of jumps that crispy-forms should do when filtering. 
Defaults to `0`.\n• `greedy`: A boolean that indicates whether to filter greedily or not. Defaults to `False`.\n\n## filter_by_widget¶\n\nMatches all fields of a widget type. This method assumes you are using a helper with a form attached (see section FormHelper with a form attached (Default layout)); you could filter by widget type doing:\n\n```form.helper.filter_by_widget(forms.PasswordInput).wrap(Field, css_class=\"hero\")\n```\n\n`filter_by_widget` is greedy by default, so it searches in depth. Let’s see a use case example: imagine we have this Layout:\n\n```Layout(\n)\n```\n\nSupposing the `password1` and `password2` fields are using the widget `PasswordInput`, it would turn into:\n\n```Layout(\n)\n```\n\nAn interesting real use case example here would be to wrap all `SelectInputs` with a custom made `ChosenField` that renders the field using a chosenjs compatible field.\n\n## exclude_by_widget¶\n\nExcludes all fields of a widget type. This method assumes you are using a helper with a form attached (see section FormHelper with a form attached (Default layout)):\n\n```form.helper.exclude_by_widget(forms.PasswordInput).wrap(Field, css_class=\"hero\")\n```\n\n`exclude_by_widget` is greedy by default, so it searches in depth. Let’s see a use case example: imagine we have this Layout:\n\n```Layout(\n)\n```\n\nSupposing the `password1` and `password2` fields are using the widget `PasswordInput`, it would turn into:\n\n```Layout(\n)\n```\n\n## Manipulating a layout¶\n\nBesides selecting layout objects and applying actions to them, you can also manipulate layouts themselves and layout objects easily, as if they were lists. We won’t do this from the helper, but from the layout and layout objects themselves. Consider this a lower level API.\n\nAll layout objects that can wrap others contain an inner attribute `fields`, which is a list, not a dictionary as in Django forms. You can apply any list methods on them easily. 
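To illustrate just this list behavior outside django, here is a hypothetical, stripped-down stand-in — the `Div`/`Layout` classes below only mimic the `fields` attribute, with no rendering:

```python
# Illustrative stand-in: the only point is that layout children live in an
# ordinary Python list called `fields`, so all list methods apply.
class Div:
    def __init__(self, *fields, **attrs):
        self.fields = list(fields)
        self.attrs = attrs

class Layout(Div):
    pass

layout = Layout('field_1', Div('field_2'), 'field_3')

layout.fields.append('field_4')            # add at the end
layout.fields[1].fields.insert(0, 'pw')    # nested layout objects work the same
removed = layout.fields.pop(2)             # delete the third child ('field_3')
```

Real crispy-forms layout objects expose this same list behavior on `fields`; only the rendering machinery is omitted here.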
Beware that a `Layout` itself behaves like other layout objects such as `Div`; the only difference is that it is the root of the tree.\n\nThis is how you would replace a layout object with another:\n\n```layout[0] = Div('field_1')\n```\n\nThis is how you would add one layout object at the end of the Layout:\n\n```layout.append(HTML(\"<p>whatever</p>\"))\n```\n\nThis is how you would add one layout object at the end of another layout object:\n\n```layout[0].append(HTML(\"<p>whatever</p>\"))\n```\n\nThis is how you would add several layout objects to a Layout:\n\n```layout.extend([\nHTML(\"<p>whatever</p>\"),\n])\n```\n\nThis is how you would add several layout objects to another layout object:\n\n```layout[0].extend([\nHTML(\"<p>whatever</p>\"),\n])\n```\n\nThis is how you would delete the second layout object within the Layout:\n\n```layout.pop(1)\n```\n\nThis is how you would delete the second layout object within the second layout object:\n\n```layout[1].pop(1)\n```\n\nThis is how you would insert a layout object in the second position of a Layout:\n\n```layout.insert(1, HTML(\"<p>whatever</p>\"))\n```\n\nThis is how you would insert a layout object in the second position of the second layout object:\n\n```layout[1].insert(1, HTML(\"<p>whatever</p>\"))\n```\n\nWarning\n\nRemember always that if you are going to manipulate a helper or layout in a view or any part of your code, you should use an instance-level variable."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7448891,"math_prob":0.483216,"size":8684,"snap":"2020-10-2020-16","text_gpt3_token_len":2116,"char_repetition_ratio":0.18364055,"word_repetition_ratio":0.1906327,"special_character_ratio":0.25046062,"punctuation_ratio":0.15370595,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.958022,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-28T18:45:02Z\",\"WARC-Record-ID\":\"<urn:uuid:b7632347-6862-45de-b39a-2e218b1df1b0>\",\"Content-Length\":\"46916\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0ee13923-80c8-4e34-ab60-d1cf2bd8633b>\",\"WARC-Concurrent-To\":\"<urn:uuid:c10f2ca5-2984-4e75-8148-1700baad6a90>\",\"WARC-IP-Address\":\"104.208.221.96\",\"WARC-Target-URI\":\"https://django-crispy-forms.readthedocs.io/en/latest/dynamic_layouts.html\",\"WARC-Payload-Digest\":\"sha1:LXPT3WBG2ZKPZJFDUMTYVJR4ZKQLCSGP\",\"WARC-Block-Digest\":\"sha1:INATS73IP5XSFYG4LD6QFODDIHDPAEFO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875147628.27_warc_CC-MAIN-20200228170007-20200228200007-00395.warc.gz\"}"} |
https://stats.stackexchange.com/questions/493454/superiority-of-linear-regression-compared-to-students-t-test/493456#493456 | [
"Superiority of linear regression compared to students t-test\n\nI am looking for some literature about linear regression and students t-test to cite them in my discussion within my paper. In a nutshell: I would like to argue that I prefer using the results of a regression compared to t-tests of my individual variables. Is this an acceptable argument? Does someone know some paper about this?\n\nMy (simplyfied) Problem: I have two groups (group A and group B) solving an assessment to achieve points. Now, I would like to run a linear regression with the achieved points as dependent variable and group classification as independent variable, instead of using a t-test to compare the means of both groups.\n\nChallenge\n\nThe approaches are the same...except for this little issue where the default form of t-testing in some software (I know in R, maybe in Python SciPy, etc) is the Welch t-test that makes an adjustment to the testing to account for possibly different variances of the two groups.\n\nWelch testing is usually considered superior to the classical t-test, since it is unlikely that the groups have identical variance.\n\nIf, however, you want to compare regression to the classical t-test with equal variance assumed, they are exactly the same. The test of the group membership coefficient is the t-test of the equality of the group means.\n\n• A related doubt: In such a case when independent variables (although here it would only be one) are all categorical, what would be the use of F-statistic? Would a significant F-statistic only convey that mean of dependent variable is not zero? Oct 24 '20 at 12:59"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9400665,"math_prob":0.78258944,"size":661,"snap":"2022-05-2022-21","text_gpt3_token_len":133,"char_repetition_ratio":0.11263318,"word_repetition_ratio":0.0,"special_character_ratio":0.19818456,"punctuation_ratio":0.08730159,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9934536,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-28T12:03:07Z\",\"WARC-Record-ID\":\"<urn:uuid:31962821-cb1c-4a85-9e08-703d6ef1f185>\",\"Content-Length\":\"139975\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cd926a88-658d-4a7f-85ae-2e325e0e1e10>\",\"WARC-Concurrent-To\":\"<urn:uuid:12026736-e1eb-435d-988d-3ebaf1ce34c6>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://stats.stackexchange.com/questions/493454/superiority-of-linear-regression-compared-to-students-t-test/493456#493456\",\"WARC-Payload-Digest\":\"sha1:XYE36NDCMR32HSBGJ7AMZ74X6P3HAN54\",\"WARC-Block-Digest\":\"sha1:3IHUPOWMWVHEB4MCWVOUAAF4TD324DF4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320305494.6_warc_CC-MAIN-20220128104113-20220128134113-00414.warc.gz\"}"} |
https://www.hamilton.ie/mathschallenge/year1_solution6.htm | [
"| home | people | research | publications | seminars | events | contact",
null,
"Communication Networks | Systems Biology | Hybrid Systems | Machine Learning | Dynamics & Interaction",
null,
"### Schools Mathematics Grand Challenge\n\nWeek six's Puzzles\n\nProblem 11:\n\nThe eleventh problem was:\n\nThere is exactly one three digit positive whole number (so a number between 100 and 999) with the following properties:\n\nIf you subtract 11 from it, the answer is divisible by 11;\nIf you subtract 10 from it, the answer is divisible by 10;\nIf you subtract 9 from it, the answer is divisible by 9.\n\nWhat is the number?\n\nThe solution was:\n\nIf a number minus 11 is divisible by 11 then so is the number itself. The same is true if the number minus 10 is divisible by 10 or the number minus 9 is divisible by 9. So we are looking for a number divisible by 9, 10 and 11. The least common multiple of 9, 10 and 11 is\n\n9 x 10 x 11 = 990\n\nand this is the smallest number with this property. The next smallest such number is 2 x 990 = 1980 which is larger than 1000 so the answer is 990.\n\nProblem 12:\n\nThe twelvth problem was:\n\nThere are 70 students in a year, 36 girls and 34 boys. At the end of the school year, the students have a choice of two places to go on their school trip: either Galway or Cork. All of the students go on the trip with 25 students going to Galway, and the other 45 going to Cork. How many more girls went to Cork than boys went to Galway?\n\nHow many more girls went to Cork than boys went to Galway?\n\n(We want the difference between the number of girls who went to Cork and the number of boys who went to Galway)\n\nThe solution was:\n\nLet N be the number of girls who went to Cork and K be the number of boys who went to Galway. We want to know the value of N - K.\n\nNow if K boys went to Galway, and there are 34 boys in the year then the other 34 - K went to Cork. This means that the total number of students (girls and boys) who went to Cork is\n\nN + 34 - K.\n\nBut we know that 45 went to Cork so\n\nN + 34 - K = 45\n\nand N - K = 45 - 34 = 11. So the answer is 11.",
null,
""
]
| [
null,
"https://www.hamilton.ie/line1.jpg",
null,
"https://www.hamilton.ie/line2.jpg",
null,
"https://www.hamilton.ie/partition.gif",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9551683,"math_prob":0.9846371,"size":1853,"snap":"2022-40-2023-06","text_gpt3_token_len":518,"char_repetition_ratio":0.1692807,"word_repetition_ratio":0.10485934,"special_character_ratio":0.29411766,"punctuation_ratio":0.08851675,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99659353,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-07T02:01:26Z\",\"WARC-Record-ID\":\"<urn:uuid:4422d53d-bde2-402b-82f8-d5bef6072a5d>\",\"Content-Length\":\"6582\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1087f4f8-ddf5-4b43-b8c3-d5e5c18b67b1>\",\"WARC-Concurrent-To\":\"<urn:uuid:4007d301-5c77-4608-b216-ba0e961b0aee>\",\"WARC-IP-Address\":\"149.157.192.253\",\"WARC-Target-URI\":\"https://www.hamilton.ie/mathschallenge/year1_solution6.htm\",\"WARC-Payload-Digest\":\"sha1:PA4AJ3LSG6P5776TMK7FQ6B6U75I2MLY\",\"WARC-Block-Digest\":\"sha1:TJMZPLSCHUFNABLDGXG5RWEICJJDLQEG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337906.7_warc_CC-MAIN-20221007014029-20221007044029-00374.warc.gz\"}"} |
https://4mules.nl/en/integration-by-parts/ | [
"# Integration by parts\n\n## Summary and examples\n\nFinding a primitive function is almost impossible. In general, finding the primitive is mostly very difficult or even impossible.\nIn the topic Integration of standard functions we looked at the following procedure: first look in the table of standard functions. If this fails, try to adjust the integrand and check whether the table can now be used. If this is not the case we try substitution as a possible method.\nHowever, there are more options and one is integration by parts. This method is related to the product rule in differentiation.\n\nSuppose we want to differentiate the product of two functions. We apply the product rule:",
null,
"We can write this expression as:",
null,
"When we multiply the left- and right-hand side with",
null,
"(mathematically not completely correct), we get:",
null,
"or:",
null,
"We integrate left- and right-hand side:",
null,
"and get the following expression:",
null,
"To apply integration by parts it is necessary to recognize the relation above. That is certainly difficult, but with some practice, not really a problem.\n\n##### Example 1\n\nWe begin with the following integral:",
null,
"We do not recognize the rule above, but we can do something about it.\n\nWe can write the derivative of",
null,
"as",
null,
"which is equal to:",
null,
"After cross-multiplication we get:",
null,
"Now we can write the integral as:",
null,
"According to the rule of integration by parts we can write this as (if we recognize the following relations",
null,
"and",
null,
"):",
null,
"Verify the result.\n\n##### Example 2\n\nThe following integral can be solved applying the rule of integration by parts:",
null,
"At first sight it seems that we do not have a product of two functions, but we can write:",
null,
"",
null,
"",
null,
"",
null,
"Again we can verify the result by differentiation.\n\n0"
]
| [
null,
"https://4mules.nl/wp-content/ql-cache/quicklatex.com-5f8e211105a4e5b006c24629b7821c12_l3.png",
null,
"https://4mules.nl/wp-content/ql-cache/quicklatex.com-8aa82ae3eb1b36103ec56cf161825c5e_l3.png",
null,
"https://4mules.nl/wp-content/ql-cache/quicklatex.com-e485e577dc6338a8315283d3c0ce63b4_l3.png",
null,
"https://4mules.nl/wp-content/ql-cache/quicklatex.com-f49ac5f69cb6869b4bd6b13ddd70155e_l3.png",
null,
"https://4mules.nl/wp-content/ql-cache/quicklatex.com-aeeab3e1de8b28ff2732601c67d09c5e_l3.png",
null,
"https://4mules.nl/wp-content/ql-cache/quicklatex.com-d9a73522bc1ad5f12c37017252e54bc1_l3.png",
null,
"https://4mules.nl/wp-content/ql-cache/quicklatex.com-dd74213ce28a32ef46f5a4bac4a4721e_l3.png",
null,
"https://4mules.nl/wp-content/ql-cache/quicklatex.com-5bfea3cee6389cd242a2b046a53073c6_l3.png",
null,
"https://4mules.nl/wp-content/ql-cache/quicklatex.com-f4738cb90fdbaf4cdcd7571d7c74da24_l3.png",
null,
"https://4mules.nl/wp-content/ql-cache/quicklatex.com-9015bfe4b71c7c791c77b56a3a857f2f_l3.png",
null,
"https://4mules.nl/wp-content/ql-cache/quicklatex.com-3b1c106b0906d9354d3ae9c1c56f4156_l3.png",
null,
"https://4mules.nl/wp-content/ql-cache/quicklatex.com-38191394dd11aa576395eb9efa2abc55_l3.png",
null,
"https://4mules.nl/wp-content/ql-cache/quicklatex.com-fe2caca9544a7453667a633f1da65f6e_l3.png",
null,
"https://4mules.nl/wp-content/ql-cache/quicklatex.com-6d9d3244b953d2f1a0bd3bd5e44f728a_l3.png",
null,
"https://4mules.nl/wp-content/ql-cache/quicklatex.com-bd2186851b1f8473c39f49eda2c101fb_l3.png",
null,
"https://4mules.nl/wp-content/ql-cache/quicklatex.com-1eb405f9ae450f2b69d6a12d4b2680f3_l3.png",
null,
"https://4mules.nl/wp-content/ql-cache/quicklatex.com-c356161660ff7c878da22f05e0ac56df_l3.png",
null,
"https://4mules.nl/wp-content/ql-cache/quicklatex.com-27525e754fad6ce485e7fec1da551cd7_l3.png",
null,
"https://4mules.nl/wp-content/ql-cache/quicklatex.com-b290cfc50e345d8121fca72c8ac1943d_l3.png",
null,
"https://4mules.nl/wp-content/ql-cache/quicklatex.com-29cfc2833724ce7cc8675d7037d53dc8_l3.png",
null,
"https://4mules.nl/wp-content/ql-cache/quicklatex.com-e5de7d0d04b7e0a44bfec04bce1e4ef7_l3.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.93069124,"math_prob":0.9606225,"size":1625,"snap":"2020-34-2020-40","text_gpt3_token_len":329,"char_repetition_ratio":0.13325109,"word_repetition_ratio":0.0073260074,"special_character_ratio":0.19815385,"punctuation_ratio":0.11146497,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9919442,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42],"im_url_duplicate_count":[null,4,null,4,null,8,null,4,null,4,null,4,null,4,null,4,null,null,null,4,null,4,null,4,null,4,null,null,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-28T00:28:09Z\",\"WARC-Record-ID\":\"<urn:uuid:3224dacc-2572-49e2-bae3-948900f87468>\",\"Content-Length\":\"32877\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ccd959f8-e31b-4c4f-9c53-b75c75809b68>\",\"WARC-Concurrent-To\":\"<urn:uuid:b340fb45-b694-4f82-aea4-23bddea12d85>\",\"WARC-IP-Address\":\"81.169.145.75\",\"WARC-Target-URI\":\"https://4mules.nl/en/integration-by-parts/\",\"WARC-Payload-Digest\":\"sha1:TRJ4Z33LDKFA4PRWBRWHVRQS7D7AO6LJ\",\"WARC-Block-Digest\":\"sha1:QW6IFET2CXKOXSABS7PXGFHNVM4LWDH6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600401582033.88_warc_CC-MAIN-20200927215009-20200928005009-00279.warc.gz\"}"} |
http://zims-en.kiwix.campusafrica.gos.orange.com/wikipedia_en_all_nopic/A/Parallel_metaheuristic | [
"# Parallel metaheuristic\n\nParallel metaheuristic is a class of techniques that are capable of reducing both the numerical effort and the run time of a metaheuristic. To this end, concepts and technologies from the field of parallelism in computer science are used to enhance and even completely modify the behavior of existing metaheuristics. Just as it exists a long list of metaheuristics like evolutionary algorithms, particle swarm, ant colony optimization, simulated annealing, etc. it also exists a large set of different techniques strongly or loosely based in these ones, whose behavior encompasses the multiple parallel execution of algorithm components that cooperate in some way to solve a problem on a given parallel hardware platform.\n\n## Background\n\nIn practice, optimization (and searching, and learning) problems are often NP-hard, complex, and time consuming. Two major approaches are traditionally used to tackle these problems: exact methods and metaheuristics. Exact methods allow to find exact solutions but are often impractical as they are extremely time-consuming for real-world problems (large dimension, hardly constrained, multimodal, time-varying, epistatic problems). Conversely, metaheuristics provide sub-optimal (sometimes optimal) solutions in a reasonable time. Thus, metaheuristics usually allow to meet the resolution delays imposed in the industrial field as well as they allow to study general problem classes instead that particular problem instances. In general, many of the best performing techniques in precision and effort to solve complex and real-world problems are metaheuristics. Their fields of application range from combinatorial optimization, bioinformatics, and telecommunications to economics, software engineering, etc. These fields are full of many tasks needing fast solutions of high quality. 
See for more details on complex applications.\n\nMetaheuristics fall into two categories: trajectory-based metaheuristics and population-based metaheuristics. The main difference between these two kinds of methods lies in the number of tentative solutions used in each step of the (iterative) algorithm. A trajectory-based technique starts with a single initial solution and, at each step of the search, the current solution is replaced by another (often the best) solution found in its neighborhood. Trajectory-based metaheuristics usually find a locally optimal solution quickly, and so they are called exploitation-oriented methods, promoting intensification in the search space. On the other hand, population-based algorithms make use of a population of solutions. The initial population is in this case randomly generated (or created with a greedy algorithm), and then enhanced through an iterative process. At each generation of the process, the whole population (or a part of it) is replaced by newly generated individuals (often the best ones). These techniques are called exploration-oriented methods, since their main ability resides in the diversification in the search space.\n\nMost basic metaheuristics are sequential. Although their use significantly reduces the temporal complexity of the search process, the latter remains high for real-world problems arising in both academic and industrial domains. 
Therefore, parallelism comes as a natural way not only to reduce the search time, but also to improve the quality of the provided solutions.\n\nFor a comprehensive discussion on how parallelism can be mixed with metaheuristics see .\n\n## Parallel trajectory-based metaheuristics\n\nMetaheuristics for solving optimization problems could be viewed as walks through neighborhoods tracing search trajectories through the solution domains of the problem at hand:\n\nAlgorithm: Sequential trajectory-based general pseudo-code\nGenerate(s(0)); // Initial solution\nt := 0; // Numerical step\nwhile not Termination Criterion(s(t)) do\n...s′(t) := SelectMove(s(t)); // Exploration of the neighborhood\n...if AcceptMove(s′(t)) then\n......s(t) := ApplyMove(s′(t));\n...t := t+1;\nendwhile\n\nWalks are performed by iterative procedures that allow moving from one solution to another one in the solution space (see the above algorithm). This kind of metaheuristic performs the moves in the neighborhood of the current solution, i.e., it has a perturbative nature. The walks start from a solution randomly generated or obtained from another optimization algorithm. At each iteration, the current solution is replaced by another one selected from the set of its neighboring candidates. The search process is stopped when a given condition is satisfied (a maximum number of iterations, finding a solution with a target quality, being stuck for a given time, etc.).\n\nA powerful way to achieve high computational efficiency with trajectory-based methods is the use of parallelism. 
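The parallel execution of independent trajectory searches can be sketched in a few lines (illustrative names only: a deterministic integer hill climber on f(x) = (x - 3)^2, launched from three starting points with a thread pool):

```python
from concurrent.futures import ThreadPoolExecutor

def f(x):
    return (x - 3) ** 2          # toy objective, minimum at x = 3

def hill_climb(start):
    # Deterministic trajectory: move to the best neighbor until no move improves.
    s = start
    while True:
        best = min((s - 1, s, s + 1), key=f)   # explore the neighborhood
        if best == s:                          # local optimum reached
            return s
        s = best

# Several independent trajectory searches run at once; the best result is kept.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(hill_climb, [-7, 0, 12]))
best = min(results, key=f)
```

On this toy objective every start converges to the same optimum; on a multimodal objective the runs would reach different local optima, which is exactly why the best of several starts is kept.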
Different parallel models have been proposed for trajectory-based metaheuristics, and three of them are commonly used in the literature: the parallel multi-start model, the parallel exploration and evaluation of the neighborhood (or parallel moves model), and the parallel evaluation of a single solution (or move acceleration model):\n\nParallel multi-start model: It consists of simultaneously launching several trajectory-based methods to compute better and more robust solutions. They may be heterogeneous or homogeneous, independent or cooperative, start from the same or different solution(s), and be configured with the same or different parameters.\n\nParallel moves model: It is a low-level master-slave model that does not alter the behavior of the heuristic. A sequential search would compute the same result but slower. At the beginning of each iteration, the master duplicates the current solution between distributed nodes. Each one separately manages its candidate/solution and the results are returned to the master.\n\nMove acceleration model: The quality of each move is evaluated in a parallel centralized way. That model is particularly interesting when the evaluation function can itself be parallelized, as when it is CPU time-consuming and/or I/O intensive. In that case, the function can be viewed as an aggregation of a certain number of partial functions that can be run in parallel.\n\n## Parallel population-based metaheuristics\n\nPopulation-based metaheuristics are stochastic search techniques that have been successfully applied in many real and complex applications (epistatic, multimodal, multi-objective, and highly constrained problems). A population-based algorithm is an iterative technique that applies stochastic operators on a pool of individuals: the population (see the algorithm below). Every individual in the population is the encoded version of a tentative solution. 
An evaluation function associates a fitness value with every individual, indicating its suitability to the problem. Iteratively, the probabilistic application of variation operators on selected individuals guides the population to tentative solutions of higher quality. The most well-known metaheuristic families based on the manipulation of a population of solutions are evolutionary algorithms (EAs), ant colony optimization (ACO), particle swarm optimization (PSO), scatter search (SS), differential evolution (DE), and estimation of distribution algorithms (EDA).\n\nAlgorithm: Sequential population-based metaheuristic pseudo-code\nGenerate(P(0)); // Initial population\nt := 0; // Numerical step\nwhile not Termination Criterion(P(t)) do\n...Evaluate(P(t)); // Evaluation of the population\n...P′(t) := Selection(P(t)); // Selection of parents\n...P′′(t) := Apply Variation Operators(P′(t)); // Generation of new solutions\n...P(t + 1) := Replace(P(t), P′′(t)); // Building the next population\n...t := t + 1;\nendwhile\n\nFor non-trivial problems, executing the reproductive cycle of a simple population-based method on long individuals and/or large populations usually requires high computational resources. In general, evaluating a fitness function for every individual is frequently the most costly operation of this algorithm. Consequently, a variety of algorithmic issues are being studied to design efficient techniques. These issues usually consist of defining new operators, hybrid algorithms, parallel models, and so on.\n\nParallelism arises naturally when dealing with populations, since each of the individuals belonging to one is an independent unit (at least according to the Pittsburgh style, although there are other approaches, like the Michigan one, which do not consider the individual an independent unit). Indeed, the performance of population-based algorithms is often improved when running in parallel. 
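The parallel-evaluation idea can be sketched with a hypothetical fitness function — only the costly evaluations are farmed out to workers, so the result is identical to a sequential run:

```python
from concurrent.futures import ThreadPoolExecutor

def fitness(individual):
    # Stands in for the expensive objective evaluation.
    return sum(g * g for g in individual)

population = [(1, 2), (0, 0), (3, 1), (2, 2)]

# Workers evaluate in parallel; the order of results matches the population.
with ThreadPoolExecutor(max_workers=4) as pool:
    scores = list(pool.map(fitness, population))

# The coordinating process performs selection sequentially
# (here: keep the best half, taking lower scores as better).
ranked = sorted(zip(scores, population))
selected = [ind for _, ind in ranked[: len(population) // 2]]
```

Because only evaluation is parallelized, the algorithm's trajectory through the search space is exactly the one a sequential run would follow, just computed faster.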
Two parallelizing strategies are especially focused on population-based algorithms:\n\n(1) Parallelization of computations, in which the operations commonly applied to each of the individuals are performed in parallel, and\n\n(2) Parallelization of population, in which the population is split into different parts that can be simply exchanged or evolved separately, and then joined later.\n\nAt the beginning of the parallelization history of these algorithms, the well-known master-slave (also known as global parallelization or farming) method was used. In this approach, a central processor performs the selection operations while the associated slave processors (workers) run the variation operator and the evaluation of the fitness function. This algorithm has the same behavior as the sequential one, although its computational efficiency is improved, especially for time-consuming objective functions. On the other hand, many researchers use a pool of processors to speed up the execution of a sequential algorithm, just because independent runs can be made more rapidly by using several processors than by using a single one. In this case, no interaction at all exists between the independent runs.\n\nHowever, most parallel population-based techniques found in the literature utilize some kind of spatial disposition for the individuals, and then parallelize the resulting chunks in a pool of processors. Among the most widely known types of structured metaheuristics, the distributed (or coarse grain) and cellular (or fine grain) algorithms are very popular optimization procedures.\n\nIn the case of distributed ones, the population is partitioned into a set of subpopulations (islands) in which isolated serial algorithms are executed. Sparse exchanges of individuals are performed among these islands with the goal of introducing some diversity into the subpopulations, thus preventing the search from getting stuck in local optima. 
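One migration step of such an island model can be made concrete with a deterministic toy sketch (ring topology, migration rate 1; all numbers and the fitness are illustrative):

```python
def fitness(x):
    return -abs(x - 10)          # toy fitness: higher is better, optimum at 10

islands = [[1, 4, 6], [2, 8, 5], [30, 9, 7]]   # three subpopulations

def migrate(islands):
    # Ring topology: island i receives a copy of the best individual of
    # island i-1, which replaces island i's own worst individual.
    bests = [max(isl, key=fitness) for isl in islands]
    for i, isl in enumerate(islands):
        incoming = bests[(i - 1) % len(islands)]
        worst = min(range(len(isl)), key=lambda j: fitness(isl[j]))
        isl[worst] = incoming
    return islands

migrate(islands)
```

Between migration steps each island would evolve independently; only these sparse exchanges couple the subpopulations.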
In order to design a distributed metaheuristic, we must make several decisions. Among them, a chief decision is to determine the migration policy: topology (logical links between the islands), migration rate (number of individuals that undergo migration in every exchange), migration frequency (number of steps in every subpopulation between two successive exchanges), and the selection/replacement of the migrants.\n\nIn the case of a cellular method, the concept of neighborhood is introduced, so that an individual may only interact with its nearby neighbors in the breeding loop. The small overlapped neighborhoods in the algorithm help in exploring the search space, because a slow diffusion of solutions through the population provides a kind of exploration, while exploitation takes place inside each neighborhood. See for more information on cellular Genetic Algorithms and related models.\n\nAlso, hybrid models are being proposed in which a two-level approach of parallelization is undertaken. In general, the higher level of parallelization is a coarse-grained implementation, and each basic island runs a cellular algorithm, a master-slave method or even another distributed one."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.90468085,"math_prob":0.9288586,"size":12343,"snap":"2020-45-2020-50","text_gpt3_token_len":2412,"char_repetition_ratio":0.15584731,"word_repetition_ratio":0.00907544,"special_character_ratio":0.19063437,"punctuation_ratio":0.12661251,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9592808,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-25T23:39:09Z\",\"WARC-Record-ID\":\"<urn:uuid:f3573d3d-0460-4cf0-8331-ce4f219630fb>\",\"Content-Length\":\"24330\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e455160d-0bac-4118-804e-6ad3c0e26da8>\",\"WARC-Concurrent-To\":\"<urn:uuid:93a0ade1-db23-473d-b2df-5ec98d7dc0f0>\",\"WARC-IP-Address\":\"41.66.34.68\",\"WARC-Target-URI\":\"http://zims-en.kiwix.campusafrica.gos.orange.com/wikipedia_en_all_nopic/A/Parallel_metaheuristic\",\"WARC-Payload-Digest\":\"sha1:HHOKTSHOAV54JODTMCAZNLXHNPX2BBNP\",\"WARC-Block-Digest\":\"sha1:BEPJFRTL53W4UQF5PPRCCUGRXNK4JGJS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141184870.26_warc_CC-MAIN-20201125213038-20201126003038-00227.warc.gz\"}"} |
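The distributed (island) model described in the entry above can be sketched in a few lines. This is a minimal illustrative sketch, not an implementation from the source: the objective function, the ring migration topology, the migration rate of one individual per epoch, and all numeric parameters are assumptions chosen for the example.

```python
import random

def evolve(pop, fitness, steps=20):
    """One isolated serial phase inside an island: mutate the current best
    and keep the child only if it beats the island's worst member."""
    for _ in range(steps):
        parent = min(pop, key=fitness)
        child = [g + random.gauss(0, 0.1) for g in parent]
        worst = max(range(len(pop)), key=lambda i: fitness(pop[i]))
        if fitness(child) < fitness(pop[worst]):
            pop[worst] = child

def island_model(fitness, n_islands=4, pop_size=10, dim=3, epochs=10):
    """Distributed (coarse-grained) model: isolated subpopulations plus
    sparse migration of the best individual along a ring topology."""
    random.seed(0)
    islands = [[[random.uniform(-5, 5) for _ in range(dim)]
                for _ in range(pop_size)] for _ in range(n_islands)]
    for _ in range(epochs):
        for pop in islands:                        # isolated serial evolution
            evolve(pop, fitness)
        for k, pop in enumerate(islands):          # migration step
            best = min(pop, key=fitness)
            dest = islands[(k + 1) % n_islands]    # ring topology
            worst = max(range(len(dest)), key=lambda i: fitness(dest[i]))
            dest[worst] = list(best)               # best replaces worst
    return min((ind for pop in islands for ind in pop), key=fitness)

sphere = lambda x: sum(g * g for g in x)           # toy objective to minimize
best = island_model(sphere)
```

The migration policy decisions listed in the text (topology, rate, frequency, selection/replacement of migrants) correspond directly to the `(k + 1) % n_islands` ring link, the single migrated individual, the once-per-epoch exchange, and the best-replaces-worst rule.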
https://arxiv.org/abs/1705.07349 | [
"# Title: $\left( \beta, \varpi \right)$-stability for cross-validation and the choice of the number of folds\n\nAbstract: In this paper, we introduce a new concept of stability for cross-validation, called the $\left( \beta, \varpi \right)$-stability, and use it as a new perspective to build the general theory for cross-validation. The $\left( \beta, \varpi \right)$-stability mathematically connects the generalization ability and the stability of the cross-validated model via the Rademacher complexity. Our result reveals mathematically the effect of cross-validation from two sides: on one hand, cross-validation picks the model with the best empirical generalization ability by validating all the alternatives on test sets; on the other hand, cross-validation may compromise the stability of the model selection by causing subsampling error. Moreover, the difference between training and test errors in the q\textsuperscript{th} round, sometimes referred to as the generalization error, might be autocorrelated in q. Guided by the ideas above, the $\left( \beta, \varpi \right)$-stability helps us derive a new class of Rademacher bounds, referred to as the one-round/convoluted Rademacher bounds, for the stability of cross-validation in both the i.i.d.\\ and non-i.i.d.\\ cases. For both light-tail and heavy-tail losses, the new bounds quantify the stability of the one-round/average test error of the cross-validated model in terms of its one-round/average training error, the sample size $n$, the number of folds $K$, the tail property of the loss (encoded as Orlicz-$\Psi_\nu$ norms) and the Rademacher complexity of the model class $\Lambda$. 
The new class of bounds not only quantitatively reveals the stability of the generalization ability of the cross-validated model, but also shows empirically the optimal choice for the number of folds $K$, at which the upper bound on the one-round/average test error is lowest, or, to put it another way, where the test error is most stable.\n Subjects: Machine Learning (stat.ML); Machine Learning (cs.LG); Statistics Theory (math.ST) Cite as: arXiv:1705.07349 [stat.ML] (or arXiv:1705.07349v5 [stat.ML] for this version)\n\n## Submission history\n\nFrom: Ning Xu [view email]\n[v1] Sat, 20 May 2017 19:46:01 UTC (45 KB)\n[v2] Sat, 27 May 2017 22:53:42 UTC (46 KB)\n[v3] Tue, 30 May 2017 01:44:13 UTC (50 KB)\n[v4] Mon, 19 Jun 2017 10:54:21 UTC (52 KB)\n[v5] Thu, 6 Jul 2017 00:21:03 UTC (53 KB)"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8746618,"math_prob":0.90457314,"size":2636,"snap":"2019-51-2020-05","text_gpt3_token_len":733,"char_repetition_ratio":0.15425532,"word_repetition_ratio":0.2,"special_character_ratio":0.29666162,"punctuation_ratio":0.14233577,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9702107,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-24T05:53:35Z\",\"WARC-Record-ID\":\"<urn:uuid:0c3d3344-c4a8-4632-a28f-c9b026284b79>\",\"Content-Length\":\"22893\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ae2895da-ad9b-4eb6-991d-07bf62fefff6>\",\"WARC-Concurrent-To\":\"<urn:uuid:db59369e-6470-49bd-9959-3a22b1740300>\",\"WARC-IP-Address\":\"128.84.21.199\",\"WARC-Target-URI\":\"https://arxiv.org/abs/1705.07349\",\"WARC-Payload-Digest\":\"sha1:6COWUNOQ5TRGWQ2XLCDX5K2WPRWOXGRL\",\"WARC-Block-Digest\":\"sha1:DYPSP2RT3YF6NX5R652DFJODR4MN2UV3\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250615407.46_warc_CC-MAIN-20200124040939-20200124065939-00336.warc.gz\"}"} |
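The abstract above derives theoretical bounds; as a purely empirical companion (not the paper's method), the sketch below measures how the across-fold spread of the test error behaves for different numbers of folds K. The model, data, split scheme and loss are assumptions for illustration: a toy estimator that predicts the training mean, scored with squared error.

```python
import random
import statistics

def kfold_cv(data, K):
    """Mean and across-fold spread of the test error for the
    'predict the training mean' model under K-fold cross-validation."""
    folds = [data[i::K] for i in range(K)]            # simple deterministic split
    errs = []
    for i, test in enumerate(folds):
        train = [x for j, f in enumerate(folds) if j != i for x in f]
        mu = statistics.mean(train)                   # the 'fitted' model
        errs.append(statistics.mean((x - mu) ** 2 for x in test))
    return statistics.mean(errs), statistics.pstdev(errs)

random.seed(1)
sample = [random.gauss(0.0, 1.0) for _ in range(120)]
spread_by_k = {K: kfold_cv(sample, K)[1] for K in (2, 5, 10, 60)}
```

Plotting or tabulating `spread_by_k` for several sample draws gives an empirical feel for the stability-versus-K trade-off the paper bounds analytically.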
https://www.zhisci.com/index.php/home/Index/news/id/1163.html | [
"### 智科眼",
null,
"Researchers have conducted the first analysis of Bitcoin power consumption based on empirical data from IPO filings and localization of IP addresses. They found that the cryptocurrency's carbon emissions measure up to those of Kansas City—or a small nation. The study, published June 12 in the journal Joule, suggests that cryptocurrencies contribute to global carbon emissions, an issue that must be considered in climate change mitigation efforts.\nBitcoin and other cryptocurrencies rely on blockchain technology, which enables a secure network without relying on a third party. Instead, so-called Bitcoin \"miners\" guarantee a system without fraud by validating new transactions. Miners solve puzzles for numerical signatures, a process that requires enormous amounts of computational power. In return, miners receive Bitcoin currency.\n\"This process results in immense energy consumption, which translates into a significant carbon footprint,\" says Christian Stoll, a researcher at the Center for Energy Markets at the Technical University of Munich, Germany, and the MIT Center for Energy and Environmental Policy Research.
\nScientists have growing concerns that Bitcoin mining is fueling an appetite for energy consumption that sometimes draws from questionable fuel sources—such as coal from Mongolia—in addition to hydropower and other low-carbon power resources. And cryptocurrency's energy issues seem to only be getting worse, with the computing power required to solve a Bitcoin puzzle increasing more than four-fold in 2018. While there is a growing push among researchers to quantify Bitcoin's energy consumption in order to better understand its contribution to global climate change, recent studies have struggled to generate accurate estimates.\n\"We argue that our work goes beyond prior work,\" says Stoll. \"We can provide empirical evidence where current literature is based on assumptions.\"
\nStoll and his team used IPO filings disclosed in 2018 by all major mining hardware producers to determine which machines miners are actually using and the power efficiencies of these machines. They also used IP addresses to determine emissions scenarios for actual mining locations and to compare carbon emissions from power sources used by Bitcoin miners in different locations. Finally, they calculated Bitcoin's carbon footprint based on its total power consumption and estimates from different emissions scenarios. These include a lower-limit scenario, in which all miners use the most efficient hardware; an upper-limit scenario, in which miners behave rationally by disconnecting their hardware as soon as costs exceed revenue; and a best-guess scenario, which accounts for the anticipated energy efficiency of the network and realistic additional energy losses from cooling and IT hardware.\n\"Our model reflects how the connected computing power and the difficulty of Bitcoin search puzzles interact, and it provides a high precision of power consumption since it incorporates auxiliary losses,\" says Stoll. \"However, the precision of our results strongly depends on the accuracy of the input data, such as the IPO filings for hardware characteristics. The carbon emissions strongly depend on the assumed carbon intensity of power consumption.\"
\nUsing this model, Stoll and his team estimated Bitcoin's annual energy consumption at 45.8 terawatt hours. This allowed them to calculate an annual carbon emissions range between 22.0 and 22.9 megatons of CO2—equivalent to the CO2 emitted by Kansas City and placing Bitcoin's emissions between Jordan and Sri Lanka in emissions rankings (the 82nd and 83rd highest emitters). However, the researchers estimate that the energy consumption estimate would almost double (greatly amplifying emissions estimates) if they were to include all other cryptocurrencies in their calculations.\n\"We do not question the efficiency gains that blockchain technology could, in certain cases, provide,\" says Stoll. \"However, the current debate is focused on anticipated benefits, and more attention needs to be given to costs.\"\n\n### Related information\n\nSOURCE: Joule, June 13",
null,
""
]
| [
null,
"https://file.zhisci.com/scitop/news_image/5044_1.jpg",
null,
"https://www.zhisci.com/index.php/home/Index/news/id/1163.html",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.94382757,"math_prob":0.9958031,"size":4059,"snap":"2019-26-2019-30","text_gpt3_token_len":721,"char_repetition_ratio":0.11787916,"word_repetition_ratio":0.0,"special_character_ratio":0.17639813,"punctuation_ratio":0.08982036,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9689412,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-21T00:03:24Z\",\"WARC-Record-ID\":\"<urn:uuid:616030ab-25d8-4a85-b3e6-56ae783b8eb2>\",\"Content-Length\":\"62351\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a2e97528-5c24-4169-bec6-e13bbe0454d4>\",\"WARC-Concurrent-To\":\"<urn:uuid:a8270d31-67b4-4e27-be62-3bc3560decfe>\",\"WARC-IP-Address\":\"60.205.158.216\",\"WARC-Target-URI\":\"https://www.zhisci.com/index.php/home/Index/news/id/1163.html\",\"WARC-Payload-Digest\":\"sha1:CCQAONB2TMVLN7NSC7UIWNYCMUYPN5UX\",\"WARC-Block-Digest\":\"sha1:LCKANPU56KILDR3EZ5ZTA34ZZOB5TAP2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195526799.4_warc_CC-MAIN-20190720235054-20190721021054-00001.warc.gz\"}"} |
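The article's headline figures can be cross-checked with a one-line computation: 45.8 TWh of annual consumption times an assumed grid carbon intensity gives the reported 22.0–22.9 megaton CO2 range. The intensity values below (0.48–0.50 kg CO2 per kWh) are back-solved from the article's numbers, not taken from the paper.

```python
ANNUAL_ENERGY_TWH = 45.8   # the study's estimate of Bitcoin's annual consumption
KWH_PER_TWH = 1e9          # 1 TWh = 1e9 kWh

def annual_emissions_megatons(energy_twh, intensity_kg_per_kwh):
    """CO2 emissions in megatons: energy consumed times carbon intensity."""
    kg_co2 = energy_twh * KWH_PER_TWH * intensity_kg_per_kwh
    return kg_co2 / 1e9    # kg -> megatons

# Assumed intensities, chosen so the result matches the article's range.
low = annual_emissions_megatons(ANNUAL_ENERGY_TWH, 0.480)
high = annual_emissions_megatons(ANNUAL_ENERGY_TWH, 0.500)
```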
https://wiki.blogs.nethep.com/alu-definition/ | [
"#### ALU Definition\n\nStands for “Arithmetic Logic Unit.” An ALU is an integrated circuit within a CPU or GPU that performs arithmetic and logic operations. Arithmetic instructions include addition, subtraction, and shifting operations, while logic instructions include boolean comparisons, such as AND, OR, XOR, and NOT operations.\n\nALUs are designed to perform integer calculations. Therefore, besides adding and subtracting numbers, ALUs often handle the multiplication of two integers, since the result is also an integer. However, ALUs typically do not perform division operations, since the result may be a fraction, or a “floating point” number. Instead, division operations are usually handled by the floating-point unit (FPU), which also performs other non-integer calculations.\n\nWhile the ALU is a fundamental component of all processors, the design and function of an ALU may vary between different processor models. For example, some ALUs only perform integer calculations, while others are designed to handle floating point operations as well. Some processors contain a single ALU, while others include several arithmetic logic units that work together to perform calculations. Regardless of the way an ALU is designed, its primary job is to handle integer operations. Therefore, a computer’s integer performance is tied directly to the processing speed of the ALU."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9350096,"math_prob":0.9664664,"size":1369,"snap":"2022-27-2022-33","text_gpt3_token_len":260,"char_repetition_ratio":0.13699634,"word_repetition_ratio":0.0,"special_character_ratio":0.17823228,"punctuation_ratio":0.1380753,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97842705,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-26T03:05:54Z\",\"WARC-Record-ID\":\"<urn:uuid:7eb789cf-29e0-4553-b7aa-0e1079a1832f>\",\"Content-Length\":\"36760\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2de16826-8926-45fd-9861-3e37bb59bbad>\",\"WARC-Concurrent-To\":\"<urn:uuid:ff0334a0-3f1b-4601-b3b5-18d0d3f157b0>\",\"WARC-IP-Address\":\"185.112.33.167\",\"WARC-Target-URI\":\"https://wiki.blogs.nethep.com/alu-definition/\",\"WARC-Payload-Digest\":\"sha1:V3OMJWPEFTJ3NQTLR4AIAX54T2PJNAKU\",\"WARC-Block-Digest\":\"sha1:RA4HJOG5IUP7ZNJZWY6XJS6LK626BWY7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103036363.5_warc_CC-MAIN-20220626010644-20220626040644-00030.warc.gz\"}"} |
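The operations listed in the definition above (add, subtract, shift, AND, OR, XOR, NOT, all on integers) can be illustrated with a toy ALU model. This is a deliberately simplified sketch, not the design of any real processor; as in the text, division and floating-point work are left to an FPU and omitted here.

```python
def alu(op, a, b=0, width=8):
    """Toy arithmetic logic unit: integer arithmetic and logic operations
    on `width`-bit operands, with results wrapped to the register width."""
    results = {
        "ADD": a + b, "SUB": a - b,      # arithmetic
        "SHL": a << b, "SHR": a >> b,    # shifting
        "AND": a & b, "OR": a | b,       # boolean logic
        "XOR": a ^ b, "NOT": ~a,
    }
    mask = (1 << width) - 1
    result = results[op] & mask          # wrap like a fixed-width register
    return result, result == 0           # value plus a zero flag
```

For example, `alu("ADD", 250, 10)` wraps 260 to 4 in an 8-bit register, and `alu("SUB", 3, 5)` yields 254, the unsigned 8-bit encoding of -2.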
https://mafiadoc.com/cbse-test-paper-01_59ee8e3e1723ddc9551a8dc9.html | [
"## CBSE TEST PAPER-01\n\nMaterial downloaded from http://myCBSEguide.com and http://onlineteachers.co.in. Portal for CBSE Notes, Test Papers, Sample Papers, Tips and Tricks.\n\nCBSE TEST PAPER-01 MATHEMATICS (Class-10) Chapter: Triangles\n\n1 Mark Questions\n\nQ1. In the figure, if ∠A = ∠CDE, AB = 9 cm, AD = 7 cm, CD = 8 cm and CE = 10 cm, find DE if ∆CAB ∼ ∆CED. [figure]\n\nQ2. Find ‘y’ if ∆ABC ∼ ∆PQR. [figure]\n\nQ3. Find the area of ∆ABC, if DE = 4 cm, BC = 8 cm and ar(∆ADE) = 25 sq cm.\n\nQ4. If D and E are respectively the points on the sides AB and AC of a ∆ABC such that AD = 6 cm, BD = 9 cm, AE = 8 cm and EC = 12 cm, then show that DE || BC.\n\nQ5. Write the symbolic representation for the similarity shown in the figure. [figure: two triangles with angles of 80°, 60° and 40°]\n\n2/3 Mark Questions\n\nQ6. In the figure, considering ∆BEP and ∆CPD, prove that BP × PD = EP × PC. [figure]\n\nQ7. In the figure, E is a point on side CB produced of an isosceles triangle ABC with AB = AC. If AD ⊥ BC and EF ⊥ AC, prove that ∆ABD ∼ ∆ECF.\n\nQ8. In ∆PQR, M is a point on QR such that PM ⊥ QR and PM² = QM·MR. Show that PQR is a right triangle.\n\nQ9. In the figure, ∠ACB = 90° and CD ⊥ AB. Prove that\n\nQ10. Two poles of height ‘a’ meters and ‘b’ meters (a"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.772054,"math_prob":0.7243061,"size":1427,"snap":"2020-24-2020-29","text_gpt3_token_len":535,"char_repetition_ratio":0.09978918,"word_repetition_ratio":0.11956522,"special_character_ratio":0.32655922,"punctuation_ratio":0.18333334,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9845252,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-07T06:21:14Z\",\"WARC-Record-ID\":\"<urn:uuid:e71e077e-14f8-485c-bd72-cafccdeb6389>\",\"Content-Length\":\"53667\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2ac7cbde-646d-4122-a29e-ad63ceaded8b>\",\"WARC-Concurrent-To\":\"<urn:uuid:79ea0fec-066f-43fb-80cb-143b1e4014f6>\",\"WARC-IP-Address\":\"104.27.162.127\",\"WARC-Target-URI\":\"https://mafiadoc.com/cbse-test-paper-01_59ee8e3e1723ddc9551a8dc9.html\",\"WARC-Payload-Digest\":\"sha1:YY4ICULVI7ONUSCCXYTPZN44SKW4Z56Q\",\"WARC-Block-Digest\":\"sha1:6F35TYMABXJTIQ2V5IB73XRLK4ZWPRH2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590348523564.99_warc_CC-MAIN-20200607044626-20200607074626-00471.warc.gz\"}"} |
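Two of the short questions above have purely numeric answers that can be checked mechanically. For Q3 this assumes the standard setup that ∆ADE ∼ ∆ABC with DE corresponding to BC, so areas scale with the square of the side ratio; Q4 uses the converse of the basic proportionality theorem (DE || BC if and only if AD/DB = AE/EC).

```python
from fractions import Fraction

def area_of_similar(area_small, side_small, side_big):
    """Areas of similar triangles scale with the square of the ratio
    of corresponding sides."""
    k = Fraction(side_big, side_small)
    return area_small * k * k

# Q3: DE = 4 cm, BC = 8 cm, ar(ADE) = 25 sq cm  ->  ar(ABC)
ar_abc = area_of_similar(25, 4, 8)

# Q4: DE || BC iff AD/DB = AE/EC, with AD=6, BD=9, AE=8, EC=12
parallel = Fraction(6, 9) == Fraction(8, 12)
```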
http://export.arxiv.org/abs/1501.00166 | [
"cs.CR\n\n# Title: Chaotic trigonometric Haar wavelet with focus on image encryption\n\nAbstract: In this paper, after reviewing the main points of the Haar wavelet transform and chaotic trigonometric maps, we introduce a new perspective on the Haar wavelet transform. The essential idea of the paper is based on the linearity properties of the scaling function of the Haar wavelet. With regard to applications of the Haar wavelet transform in image processing, we introduce the chaotic trigonometric Haar wavelet transform to encrypt plain images, and encrypted images were produced with the proposed algorithm. To evaluate the security of the encrypted images, key space analysis, correlation coefficient analysis and a differential attack were performed. Here, the chaotic trigonometric Haar wavelet transform tries to address common weaknesses of encryption schemes, such as a small key space and a low level of security.\n Comments: Accepted in Journal of Discrete Mathematical Sciences and Cryptography, 10 pages, 9 figures, 2 tables Subjects: Cryptography and Security (cs.CR); Chaotic Dynamics (nlin.CD) Cite as: arXiv:1501.00166 [cs.CR] (or arXiv:1501.00166v3 [cs.CR] for this version)"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.85502344,"math_prob":0.851703,"size":1364,"snap":"2023-40-2023-50","text_gpt3_token_len":331,"char_repetition_ratio":0.13014705,"word_repetition_ratio":0.009756098,"special_character_ratio":0.24120234,"punctuation_ratio":0.14624506,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9808217,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-06T03:37:00Z\",\"WARC-Record-ID\":\"<urn:uuid:268b5231-d5e0-47a0-b5a0-3b5f083e6bdd>\",\"Content-Length\":\"16505\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ed1794f3-bb8a-4d44-972e-d1dbbcde5c92>\",\"WARC-Concurrent-To\":\"<urn:uuid:eb28ce17-a3f7-489e-a81c-36f56de6c1e7>\",\"WARC-IP-Address\":\"128.84.21.203\",\"WARC-Target-URI\":\"http://export.arxiv.org/abs/1501.00166\",\"WARC-Payload-Digest\":\"sha1:FHZCQXZMXX2SASKCPQFYPJUWLGHHEAM5\",\"WARC-Block-Digest\":\"sha1:2XPYZ4AIQQCZNUTKNTUYBXYZEH7LXMLR\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100583.13_warc_CC-MAIN-20231206031946-20231206061946-00211.warc.gz\"}"} |
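The abstract builds on the linearity of the Haar scaling function. The sketch below shows only one level of the 1-D orthonormal Haar transform and its exact inverse; the chaotic-map encryption layer described in the paper is not reproduced here.

```python
import math

def haar_step(signal):
    """One level of the orthonormal Haar transform: pairwise scaled
    sums (approximation) and differences (detail)."""
    s = 1.0 / math.sqrt(2.0)
    approx, detail = [], []
    for x, y in zip(signal[0::2], signal[1::2]):
        approx.append(s * (x + y))
        detail.append(s * (x - y))
    return approx, detail

def haar_step_inverse(approx, detail):
    """Exact inverse of haar_step, using the same linearity."""
    s = 1.0 / math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.extend((s * (a + d), s * (a - d)))
    return out

original = [4.0, 2.0, 5.0, 5.0, 1.0, 7.0, 0.0, 3.0]
approx, detail = haar_step(original)
restored = haar_step_inverse(approx, detail)
```

Since s·(a + d) = s²·2x = x, the round trip reconstructs the signal up to floating-point rounding.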
https://studyres.com/doc/336068/solve-5 | [
"Transcript\n```\nOver Lesson 4–4\nSolve 27 = 3b + 15. Check your solution.\nSolve 4(2x + 3) = 260. Check your solution.\nSolve –2(6y) = 144. Check your solution.\nA forest preserve rents canoes for \\$18 per hour. Corey has \\$90 to spend. Write and solve a multiplication equation to find how many hours he can rent a canoe.\n\nCombine Like Terms Before Solving\nSolve b – 3b + 8 = 18. Check your solution.\nb – 3b + 8 = 18 Write the equation.\n1b – 3b + 8 = 18 Identity Property; b = 1b\n–2b + 8 = 18 Combine like terms, 1b and –3b.\n–8 = –8 Subtract 8 from each side.\n–2b = 10 Simplify.\n Divide each side by –2.\nb = –5 Simplify.\n\nSolve 9 = 13 – x + 5x.\nA. 1\nB.\nC.\nD. –1\n\nEquations with Negative Coefficients\nSolve 5 – x = 7.\n5 – x = 7 Write the equation.\n5 – 1x = 7 Identity Property; x = 1x\n5 + (–1x) = 7 Definition of subtraction\n–5 –5 Subtract 5 from each side.\n–1x = 2 Simplify.\n Divide each side by –1.\nx = –2 Simplify.\n\nSolve 9 = –4 – m.\nA. 13\nB. 5\nC. –5\nD. –13\n\nUsing the Distributive Property to Solve Equations
\nMarissa wants to go to summer camp. The camp costs \\$229. She paid a deposit of \\$75, and she will need to save \\$14 per week to pay for the trip. The equation 75 + 14w = 229 can be used to find how many weeks Marissa will need to save. Which series of steps can be used to solve the equation?\nA Divide 229 by 14. Then subtract 75.\nB Subtract 14 from 229. Then divide by 75.\nC Subtract 75 from 229. Then divide by 14.\nD Subtract 229 from 75. Then divide by 75.\n\nSolve the equation so that you can write the steps in the correct order that are necessary to solve the problem.\nSolve the Test Item\n75 + 14w = 229 Write the equation.\n75 – 75 + 14w = 229 – 75 Subtraction Property of Equality\n14w = 154 Simplify.\n Division Property of Equality\nw = 11 Simplify.\nAnswer: To solve the equation, you first subtract 75 and then divide by 14. The correct answer is C.\n\nCheck: When you solve an equation, you undo the steps in evaluating an expression in reverse order of the order of operations. In this equation, you would first undo adding 75 by subtracting 75, then undo multiplying by 14 by dividing by 14.\n\nMarissa wants to buy a player that costs \\$125. She has a summer job where she earns \\$6 per hour. She already has \\$17. The equation 17 + 6h = 125 can be used to find how many hours Marissa will need to work to pay for her player. Which series of steps can be used to solve the equation?\nA. Divide 125 by 6. Then subtract 17.\nB. Subtract 6 from 125. Then divide by 17.\nC. Subtract 17 from 125. Then divide by 6.\nD. Subtract 125 from 17. Then divide by 17.\n```"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8965098,"math_prob":0.9953086,"size":3422,"snap":"2023-14-2023-23","text_gpt3_token_len":999,"char_repetition_ratio":0.140433,"word_repetition_ratio":0.20679468,"special_character_ratio":0.32670954,"punctuation_ratio":0.1414966,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999547,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-30T13:43:42Z\",\"WARC-Record-ID\":\"<urn:uuid:b0034d41-f08d-4a86-b50f-849b2e1e1b48>\",\"Content-Length\":\"37343\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2a332186-4a25-40d7-9538-ddefc1589836>\",\"WARC-Concurrent-To\":\"<urn:uuid:0a29ab61-1ccc-4e48-8cfd-226fc4e338f7>\",\"WARC-IP-Address\":\"172.67.151.140\",\"WARC-Target-URI\":\"https://studyres.com/doc/336068/solve-5\",\"WARC-Payload-Digest\":\"sha1:64WKR46GERCE6PLFDY2DL5HARBMJGIYU\",\"WARC-Block-Digest\":\"sha1:BN7BYGHGL7XV4GDNHO7TEUOZPUJTWJQF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296949331.26_warc_CC-MAIN-20230330132508-20230330162508-00320.warc.gz\"}"} |
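Every worked example in the transcript above reduces to the same two-step pattern: undo the addition (subtract from each side), then undo the multiplication (divide each side). A small sketch, using exact rational arithmetic:

```python
from fractions import Fraction

def solve_two_step(a, b, c):
    """Solve a + b*x = c by undoing operations in reverse order:
    first subtract a from each side, then divide each side by b."""
    return Fraction(c - a, b)

w = solve_two_step(75, 14, 229)   # camp problem: 75 + 14w = 229
h = solve_two_step(17, 6, 125)    # second problem: 17 + 6h = 125
b = solve_two_step(8, -2, 18)     # b - 3b + 8 = 18 combines to -2b + 8 = 18
```

This reproduces the lesson's answers: w = 11 weeks, h = 18 hours, b = -5.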
https://plosjournal.deepdyve.com/lp/springer-journals/on-bernstein-s-inequality-for-polynomials-3VTBKiHgPl | [
"On Bernstein’s inequality for polynomials\n\nBernstein’s classical inequality asserts that given a trigonometric polynomial T of degree $n \\ge 1$, the sup-norm of the derivative of T does not exceed n times the sup-norm of T. We present various approaches to prove this inequality and some of its natural extensions/variants, especially when it comes to replacing the sup-norm with the $L^p$-norm.\n\nAnalysis and Mathematical Physics, Springer Journals, Volume 9 (3) – Mar 20, 2019, 27 pages",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"Publisher\nSpringer Journals\nSubject\nMathematics; Analysis; Mathematical Methods in Physics\nISSN\n1664-2368\neISSN\n1664-235X\nDOI\n10.1007/s13324-019-00294-x\n\nJournal\nAnalysis and Mathematical Physics, Springer Journals\n\nPublished: Mar 20, 2019"
]
| [
null,
"https://docs4.deepdyve.com/doc_repo_server/get-image/3VTBKiHgPl/1/1",
null,
"https://docs4.deepdyve.com/doc_repo_server/get-image/3VTBKiHgPl/1/2",
null,
"https://docs4.deepdyve.com/doc_repo_server/get-image/3VTBKiHgPl/1/3",
null,
"https://docs4.deepdyve.com/doc_repo_server/get-image/3VTBKiHgPl/1/4",
null,
"https://plosjournal.deepdyve.com/assets/images/doccover.png",
null,
"https://plosjournal.deepdyve.com/assets/images/doccover.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.86849666,"math_prob":0.78929347,"size":1337,"snap":"2022-05-2022-21","text_gpt3_token_len":306,"char_repetition_ratio":0.10127532,"word_repetition_ratio":0.17117117,"special_character_ratio":0.21091998,"punctuation_ratio":0.08627451,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9805486,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-28T09:44:39Z\",\"WARC-Record-ID\":\"<urn:uuid:e4f62bac-08d2-409c-bf3f-f2509b9de05e>\",\"Content-Length\":\"134823\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:03cf533e-20cc-4249-8fe5-306f251f2a42>\",\"WARC-Concurrent-To\":\"<urn:uuid:6b330a53-a4cc-462e-bb00-548a3ca3d742>\",\"WARC-IP-Address\":\"35.186.214.25\",\"WARC-Target-URI\":\"https://plosjournal.deepdyve.com/lp/springer-journals/on-bernstein-s-inequality-for-polynomials-3VTBKiHgPl\",\"WARC-Payload-Digest\":\"sha1:LYZFQYHXO6VHP5ND2QYYP32CZZTK2OSD\",\"WARC-Block-Digest\":\"sha1:YQLK4AOMVX72CYHHVHX6GL6RTIRI6CSK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320305423.58_warc_CC-MAIN-20220128074016-20220128104016-00293.warc.gz\"}"} |
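Bernstein's inequality as stated in the abstract (sup|T'| ≤ n·sup|T| for a trigonometric polynomial of degree n) can be sanity-checked numerically. This is only an illustration: the sup-norms are approximated on a finite grid, hence the small tolerance, and the random polynomial is an arbitrary example.

```python
import math
import random

def random_trig_poly(n, seed=0):
    """A random trigonometric polynomial of degree n and its derivative."""
    rng = random.Random(seed)
    a = [rng.uniform(-1, 1) for _ in range(n + 1)]
    b = [rng.uniform(-1, 1) for _ in range(n + 1)]
    T = lambda t: sum(a[k] * math.cos(k * t) + b[k] * math.sin(k * t)
                      for k in range(n + 1))
    dT = lambda t: sum(k * (b[k] * math.cos(k * t) - a[k] * math.sin(k * t))
                       for k in range(n + 1))
    return T, dT

def sup_on_grid(f, m=4096):
    """Approximate sup-norm of |f| over [0, 2*pi) on a uniform grid."""
    return max(abs(f(2.0 * math.pi * i / m)) for i in range(m))

n = 3
T, dT = random_trig_poly(n)
lhs = sup_on_grid(dT)        # approximately ||T'||_inf
rhs = n * sup_on_grid(T)     # approximately n * ||T||_inf
```

Equality holds only for pure n-th harmonics such as sin(nθ + φ), so a generic random example should sit strictly below the bound.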
https://bizfluent.com/facts-7693800-comparison-between-capm-apt.html | [
"# The Comparison Between CAPM & APT\n\nBoth the capital asset pricing model (CAPM) and the arbitrage pricing theory (APT) are methods used to determine the theoretical rate of return on an asset or portfolio, but the difference between APT and CAPM lies in the factors used to determine these theoretical rates of return. CAPM only looks at the sensitivity of the asset as related to changes in the market, whereas APT looks at many factors that can be divided into either macroeconomic factors or those that are company specific.\n\n## CAPM and APT Origins\n\nThe capital asset pricing model was created in the 1960s by Jack Treynor, William F. Sharpe, John Lintner and Jan Mossin in order to come up with a theoretically appropriate rate of return on an asset given the level of risk.\n\nEconomist Stephen Ross created the arbitrage pricing theory in 1975 as an alternative to the older CAPM, although APT is still largely based on CAPM. Ross's model incorporates a framework to explain the expected theoretical rate of return of an asset as a linear function of the risk of the asset, taking into account factors in order to accurately estimate market risk.\n\n## Capital Asset Pricing Model\n\nCAPM uses the risk-free rate of return (usually either the federal funds rate or a 10-year government bond yield), the beta of an asset in relation to the overall market, expected market return and investment risk in order to help quantify the projected return on an investment.\n\nThe beta of an asset measures the theoretical volatility compared to the overall market, meaning that if a portfolio has a beta of 1.5 compared to the S&P 500, then it is theoretically going to be 50 percent more volatile than the S&P.\n\n## Arbitrage Pricing Theory\n\nAPT in comparison to CAPM uses fewer assumptions and can be harder to use as well. 
The theory was developed with the assumption that the prices of securities are affected by many factors, which can be sorted into macroeconomic or company-specific factors.\n\nA big difference between CAPM and the arbitrage pricing theory is that APT does not spell out specific risk factors or even the number of factors involved. While CAPM uses the expected market return in its formula, APT uses the expected rate of return and the risk premium of a number of macroeconomic factors. The APT formula uses a factor-intensity structure that is calculated using a linear regression of historical returns of the asset for the specific factor being examined.\n\n## Using CAPM vs. APT\n\nAPT is more accurate than CAPM since CAPM only looks at one factor and one beta, but it requires additional effort and time not only to calculate but also to determine what factors to use and to gather relevant data to find the beta in relation to each factor. On the other hand, it is not always possible to know the right factors or to find the right data, which is when CAPM may be preferred.\n\nAs a result, the decision of whether to use CAPM vs. APT should largely be dependent on whether you can actually determine the right factors to use and find the data to find the beta in relation to those factors in order to use APT, or if you are willing to settle for just knowing the difference between the risk-free rate of return and the expected market rate of return as you would if you use CAPM."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.93977743,"math_prob":0.93299997,"size":3230,"snap":"2019-43-2019-47","text_gpt3_token_len":672,"char_repetition_ratio":0.13794172,"word_repetition_ratio":0.03202847,"special_character_ratio":0.19566563,"punctuation_ratio":0.05901639,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9820646,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-20T05:09:51Z\",\"WARC-Record-ID\":\"<urn:uuid:89418a05-316b-4cc8-b85a-e800c6b25845>\",\"Content-Length\":\"162527\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9d24f5bc-b902-4cc0-9bb2-9aca15cd84ed>\",\"WARC-Concurrent-To\":\"<urn:uuid:caf6f28d-a1d2-4002-a972-28c8c7d39a41>\",\"WARC-IP-Address\":\"23.227.13.179\",\"WARC-Target-URI\":\"https://bizfluent.com/facts-7693800-comparison-between-capm-apt.html\",\"WARC-Payload-Digest\":\"sha1:NTQ4Q2LGXYXYRFDMAUGTOIQJKVOUQXKW\",\"WARC-Block-Digest\":\"sha1:S2D6ELRYQERIZ5676YHT23JYZVEEADK6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670448.67_warc_CC-MAIN-20191120033221-20191120061221-00430.warc.gz\"}"} |
http://www.landscape-and-garden.com/GardenSoil/SoilTriangle | [
"The Soil Triangle\n\nThe soil triangle\n\nPart of performing a garden soil type test is to classify your garden soil as sandy, loamy, or clay. You have already calculated the percentages of each layer of sediment of your garden soil as describe in our page on garden soil testing. Now you can easily determine what type of garden soil you have by transferring the results onto the soil triangle.\n\nThis is quite easy to do. Say you have 12cm of soil in your jar or bottle, with the bottom layer of sediment being 6,8cm, the middle layer of sediment being 3,5cm and the top layer of sediment being 1,7cm.\n\nThe bottom layer of sediment is sand, which is 6,8cm high. We take 6,8 and divided by the height of the soil sample, which in this example is 12cm, and then multiply it by 100 to convert it to a percentage. This works out to approximately 57% (6.8 ÷ 12 × 100 = 56.667). Plot 57 on the bottom axis of the soil triangle (A) and draw a line parallel to the axis that plots the percentage of silt.\n\nThe middle layer of sediment is silt and is 3,5cm high in this example. We take 3,5 and divided by the height of the soil sample, and then multiply it by 100. This works out to approximately 29% (3.5 ÷ 12 × 100 = 29.167). Now we plot 29 on the left axis of the soil triangle (B) and draw a line parallel to the axis that plots the percentage of clay.\n\nThe top layer of sediment is clay and is 1,7cm high in this example. So we take 1,7 and divided by the height of the soil sample, and then multiply it by 100. This works out to approximately 14% (1.7 ÷ 12 × 100 = 14.167). Now we plot 14 on the right axis of the soil triangle (C) and draw a horizontal line across to the axis that plots the percentage of silt.\n\nNow determining what type of garden soil we have is simply a matter of checking where the three lines intersect on the soil triangle.\n\nIn this particular example the three lines intersect in the sandy loam area of the soil triangle. 
We can now proceed to make the appropriate soil amendments to make our garden soil more loamy. In this particular example this can be achieved by adding some organic matter such as compost."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.93028736,"math_prob":0.97342116,"size":2105,"snap":"2019-26-2019-30","text_gpt3_token_len":524,"char_repetition_ratio":0.1408853,"word_repetition_ratio":0.17705736,"special_character_ratio":0.26223278,"punctuation_ratio":0.098684214,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99558234,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-19T13:16:36Z\",\"WARC-Record-ID\":\"<urn:uuid:3d529ed2-6cea-460e-991e-4ea7094334d1>\",\"Content-Length\":\"10079\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2b51cafe-93b0-4614-a1e5-50a86ea2bc9d>\",\"WARC-Concurrent-To\":\"<urn:uuid:0c58987f-c6a4-4387-9f06-3cae792458da>\",\"WARC-IP-Address\":\"216.97.226.215\",\"WARC-Target-URI\":\"http://www.landscape-and-garden.com/GardenSoil/SoilTriangle\",\"WARC-Payload-Digest\":\"sha1:TN65ZCWUILPTONVOGH3HYPT5DNOJP6RP\",\"WARC-Block-Digest\":\"sha1:YJ7HTMGPU6XGQJNJEF4MUOYGTJBPNE3E\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627998986.11_warc_CC-MAIN-20190619123854-20190619145854-00043.warc.gz\"}"} |
https://www.studysmarter.us/textbooks/physics/fundamentals-of-physics-10th-edition/force-and-motion-ii/q89p-a-filing-cabinet-weighing-556-n-rests-on-the-floor-the-/ | [
"• :00Days\n• :00Hours\n• :00Mins\n• 00Seconds\nA new era for learning is coming soon",
null,
"Suggested languages for you:\n\nEurope\n\nAnswers without the blur. Sign up and see all textbooks for free!",
null,
"Q89P\n\nExpert-verified",
null,
"Found in: Page 147",
null,
"### Fundamentals Of Physics\n\nBook edition 10th Edition\nAuthor(s) David Halliday\nPages 1328 pages\nISBN 9781118230718",
null,
"# A filing cabinet weighing 556 N rests on the floor. The coefficient of static friction between it and the floor is 0.68 , and the coefficient of kinetic friction is 0.56 . In four different attempts to move it, it is pushed with horizontal forces of magnitudes (a) 222 N , (b) 334 N , (c) 445 N , and (d) 556 N . For each attempt, calculate the magnitude of the frictional force on it from the floor. (The cabinet is initially at rest.) (e) In which of the attempts does the cabinet move?\n\n(a) The magnitude of the frictional force is f = 222 N .\n\n(b) The magnitude of the frictional force is f = 334 N .\n\n(c) The magnitude of the frictional force is f = 311 N .\n\n(d) The magnitude of the frictional force is f = 311 N .\n\nSee the step by step solution\n\n## Step 1: Given data:\n\nWeight of the cabinet, W = 556 N\n\nCoefficient of static friction between the cabinet and the floor, ${\\mu }_{s}=0.68$\n\nCoefficient of kinetic friction, ${\\mu }_{k}=0.56$\n\n## Step 2: Understanding the concept:\n\nIn order to move a filing cabinet, the force applied must be able to overcome\n\nthe frictional force.\n\nApply Newton’s second law\n\n${{\\mathbf{F}}}_{{\\mathbf{push}}}{\\mathbf{-}}{\\mathbf{f}}{\\mathbf{=}}{{\\mathbf{F}}}_{{\\mathbf{net}}}\\phantom{\\rule{0ex}{0ex}}{\\mathbf{}}{\\mathbf{}}{\\mathbf{}}{\\mathbf{}}{\\mathbf{}}{\\mathbf{}}{\\mathbf{}}{\\mathbf{}}{\\mathbf{}}{\\mathbf{}}{\\mathbf{}}{\\mathbf{}}{\\mathbf{}}{\\mathbf{}}{\\mathbf{}}{\\mathbf{=}}{\\mathbf{ma}}$\n\nIf you find the applied force ${{\\mathbit{F}}}_{\\mathbf{p}\\mathbf{u}\\mathbf{s}\\mathbf{h}}$ to be less than ${{\\mathbit{f}}}_{\\mathbf{s}\\mathbf{,}\\mathbf{max}}$ , the maximum static frictional force, our conclusion would then be “no, the cabinet does not move” (which means is actually zero and the frictional force is simply ${\\mathbit{f}}{\\mathbf{=}}{{\\mathbit{F}}}_{\\mathbf{p}\\mathbf{u}\\mathbf{s}\\mathbf{h}}$ ). 
On the other hand, if you obtain a > 0 then the cabinet moves (so ${\\mathbit{f}}{\\mathbf{=}}{{\\mathbit{f}}}_{{\\mathbf{k}}}$ ).\n\nFor ${{\\mathbit{f}}}_{\\mathbf{s}\\mathbf{,}\\mathbf{m}\\mathbf{a}\\mathbf{x}}$ and ${{\\mathbit{f}}}_{{\\mathbf{k}}}$ use Eq. 6-1 and Eq. 6-2 (respectively), and in those formulas set the magnitude of the normal force to the weight of the cabinet.\n\n## Step 3: Calculate the maximum static friction and kinetic friction:\n\nThe maximum static frictional force is,\n\n${\\mathrm{f}}_{\\mathrm{s},\\mathrm{max}}={\\mathrm{\\mu }}_{\\mathrm{s}}{\\mathrm{F}}_{\\mathrm{N}}$\n\nHere, the weight of the cabinet will be balanced by the normal force. Therefore,\n\n${\\mathrm{f}}_{\\mathrm{s},\\mathrm{max}}={\\mathrm{\\mu }}_{\\mathrm{s}}\\mathrm{W}$\n\nAnd the kinetic frictional force is,\n\n${\\mathrm{f}}_{\\mathrm{k}}={\\mathrm{\\mu }}_{\\mathrm{k}}{\\mathrm{f}}_{\\mathrm{k}}\\phantom{\\rule{0ex}{0ex}}={\\mathrm{\\mu }}_{\\mathrm{k}}\\mathrm{W}\\phantom{\\rule{0ex}{0ex}}=\\left(0.56\\right)\\left(556\\mathrm{N}\\right)\\phantom{\\rule{0ex}{0ex}}=311\\mathrm{N}$\n\n## Step 4: (a) Define the magnitude of the frictional force with horizontal force of magnitude 222 N :\n\nCalculate the magnitude of the frictional force on the cabinet from the floor when it is pushed with horizontal force of magnitudes, which is,\n\n.${F}_{push}=222N$\n\nHere, the magnitude of the horizontal pushing force is less than the maximum static force. Therefore,\n\n${F}_{push}<{f}_{s,max}\\phantom{\\rule{0ex}{0ex}}222N<378N$\n\nHence, the cabinet does not move. 
So, acceleration of the cabinet will be zero.\n\n$a=0m/{s}^{2}$\n\nThe frictional force is,\n\n${F}_{push}-f=ma\\phantom{\\rule{0ex}{0ex}}=m\\left(0m/{s}^{2}\\right)\\phantom{\\rule{0ex}{0ex}}=0\\phantom{\\rule{0ex}{0ex}}\\phantom{\\rule{0ex}{0ex}}f={F}_{push}\\phantom{\\rule{0ex}{0ex}}=222N$\n\nHere, the magnitude of the frictional force between the carbonate and the floor is f = 222N .\n\n## Step 5: (b) Determine the magnitude of the frictional force with horizontal force of magnitude 334 N :\n\nCalculate the magnitude of the frictional force on the cabinet from the floor when it is pushed with horizontal force of magnitudes, which is\n\n${F}_{push}=334N$\n\nHere, the magnitude of the horizontal pushing force is less than the maximum static force. Thus,\n\n${F}_{push}<{f}_{s,max}\\phantom{\\rule{0ex}{0ex}}334N<378N$\n\nHence, the cabinet does not move. So, acceleration of the cabinet will be zero.\n\n$a=0m/{s}^{2}$\n\nThe frictional force is,\n\n${F}_{push}-f=ma\\phantom{\\rule{0ex}{0ex}}=0\\phantom{\\rule{0ex}{0ex}}\\phantom{\\rule{0ex}{0ex}}f={F}_{push}\\phantom{\\rule{0ex}{0ex}}=334N$\n\n## Step 6: (c) Define the magnitude of the frictional force with horizontal force 445 N :\n\nCalculate the magnitude of the frictional force on the cabinet from the floor when it is pushed with horizontal force of magnitudes, which is,\n\n${F}_{push}=445N$.\n\nHere, the magnitude of the horizontal pushing force is less than the maximum static force. Thus,\n\n${F}_{push}>{f}_{s,max}\\phantom{\\rule{0ex}{0ex}}445N>378N$\n\nHence, the cabinet will move. So, the frictional force will be he kinetic friction. 
Hence, the friction force in this case between the cabinet and the floor is,\n\n$f={F}_{k}\\phantom{\\rule{0ex}{0ex}}=311N$\n\n## Step 7: (d) The magnitude of the frictional force with horizontal force 556 N :\n\nCalculate the magnitude of the frictional force on it from the floor when it is pushed with horizontal force of magnitudes, which is,\n\n${F}_{push}=556N$.\n\nAgain, you have\n\n${F}_{push}>{f}_{s,max}\\phantom{\\rule{0ex}{0ex}}556N>378N$\n\nWhich means the cabinet moves.\n\nHence, the cabinet will move. So, the frictional force will be he kinetic friction. Hence, the friction force in this case between the cabinet and the floor is,\n\n$f={f}_{k}\\phantom{\\rule{0ex}{0ex}}=311N$\n\n## Step 8: (e) Find out in which of the attempts does the cabinet move:\n\nAs in part (c) and (d) you have ${F}_{push}>{f}_{s,max}$ which means the cabinet moves.\n\nHence, the cabinet moves in (c) and (d).\n\n## Recommended explanations on Physics Textbooks\n\n94% of StudySmarter users get better grades.",
null,
""
]
| [
null,
"https://www.studysmarter.us/wp-content/themes/StudySmarter-Theme/dist/assets/images/header-logo.svg",
null,
"https://www.studysmarter.us/wp-content/themes/StudySmarter-Theme/src/assets/images/ab-test/searching-looking.svg",
null,
"https://studysmarter-mediafiles.s3.amazonaws.com/media/textbook-images/A177uermM-L.jpg",
null,
"https://studysmarter-mediafiles.s3.amazonaws.com/media/textbook-images/A177uermM-L.jpg",
null,
"https://www.studysmarter.us/wp-content/themes/StudySmarter-Theme/src/assets/images/ab-test/businessman-superhero.svg",
null,
"https://www.studysmarter.us/wp-content/themes/StudySmarter-Theme/img/textbook/cta-icon.svg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.88295346,"math_prob":0.99943835,"size":5276,"snap":"2023-14-2023-23","text_gpt3_token_len":1241,"char_repetition_ratio":0.23425645,"word_repetition_ratio":0.28466386,"special_character_ratio":0.2424185,"punctuation_ratio":0.122693725,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99989307,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-08T11:32:49Z\",\"WARC-Record-ID\":\"<urn:uuid:4d6c3b6c-0aeb-401d-ba8a-56dbe5fd50ea>\",\"Content-Length\":\"225050\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d7a871e7-435f-4ebe-ba43-402d4ce38f68>\",\"WARC-Concurrent-To\":\"<urn:uuid:67f9bc55-9dcb-49a1-8b74-4e6ea4b16323>\",\"WARC-IP-Address\":\"3.67.240.255\",\"WARC-Target-URI\":\"https://www.studysmarter.us/textbooks/physics/fundamentals-of-physics-10th-edition/force-and-motion-ii/q89p-a-filing-cabinet-weighing-556-n-rests-on-the-floor-the-/\",\"WARC-Payload-Digest\":\"sha1:LFDCDZCV3VIZ2S3YVGMAH4WQMM6FZYNV\",\"WARC-Block-Digest\":\"sha1:44QWUF27Y3PJXTOZKZZFYAAE7IW2EXPF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224654871.97_warc_CC-MAIN-20230608103815-20230608133815-00280.warc.gz\"}"} |
https://kr.mathworks.com/matlabcentral/profile/authors/1455089?detail=all | [
"Community Profile",
null,
"Matt Tearle\n\nMathWorks\n\nLast seen: Today 2010 이후 활성\n\nI love MATLAB. I love teaching people to love MATLAB.\n\nI love it so much that I went to work for the people who make MATLAB. I spend my days at MathWorks creating material to help people learn MATLAB. Life is good.\n\n(My love of teaching MATLAB notwithstanding, please don't email me for help -- post a question to MATLAB Answers instead. Firstly, your email will probably be swallowed by my spam filter. Secondly, private consulting isn't in my job description.)\n\nStatistics\n\nAll\n•",
null,
"•",
null,
"•",
null,
"•",
null,
"•",
null,
"•",
null,
"•",
null,
"•",
null,
"•",
null,
"•",
null,
"•",
null,
"•",
null,
"배지보기\n\nContent Feed\n\n보기 기준\n\n해결됨\n\nThe Piggy Bank Problem\nGiven a cylindrical piggy bank with radius g and height y, return the bank's volume. [ g is first input argument.] Bonus though...\n\n4달 전\n\n답변 있음\nHow do I display my secant method iteration values in a table?\nIf you want to display things nicely, you can use fprintf to format things into text. If the primary concern is getting the valu...\n\n4달 전 | 0\n\n| 수락됨\n\n답변 있음\nMatalb Academy - Reinforcement Learning Onramp: submission failed\nThere was a change in R2021a that caused an incompatibility. We have a fix ready that will go out with the next update to the tr...\n\n10달 전 | 3\n\n제출됨\n\nConditionally colored line plot\nPlots (2D line) graph split into two colors above and below a given threshold value\n\n2년 이하 전 | 다운로드 수: 7 |",
null,
"제출됨\n\nSimple real Fourier series approximation\nComputes coefficients for the real Fourier series approximation to a data set.\n\n2년 이하 전 | 다운로드 수: 21 |",
null,
"답변 있음\nAssign a number to a letter in excel\nI don't understand what you're trying to get out of the original data. If you do diff(X2) you'll get [1 -1 0 0 0...\n\n2년 이하 전 | 0\n\n해결됨\n\nInteger or Float?\nTest an input to see whether it is an integer or a floating point number. If it is an integer return 1 for 'true'. Otherwise ret...\n\n2년 이상 전\n\n해결됨\n\nSolve a System of Linear Equations\nGiven a constant input angle θ(theta) in radians, create the coefficient matrix(A) and constant vector(b) to solve the giv...\n\n2년 이상 전\n\n해결됨\n\nVerify Law of Large Numbers\nIf a large number of fair N-sided dice are rolled, the average of the rolls is likely to be close to the expected value of a sin...\n\n2년 이상 전\n\n해결됨\n\nFind the Oldest Person in a Room\nGiven two input vectors: * |name| - user last names * |age| - corresponding age of the person Return the name of the ol...\n\n2년 이상 전\n\n해결됨\n\nTimes 2 - START HERE\nTry out this test problem first. Given the variable x as your input, multiply it by two and put the result in y. Examples:...\n\n2년 이상 전\n\n답변 있음\nReferencing the name \"MATLAB\"\nThe official rules for all that stuff reside in the style guide.\n\n2년 이상 전 | 4\n\n| 수락됨\n\n채널\n\nCWC19 Semifinal tracker\nPercentage of possible outcomes of the remaining Cricket World Cup matches that result in each team making it to the semifinals....\n\n2년 이상 전\n\n제출됨\n\n\"Getting Started with MATLAB\" video example files\nExample MATLAB scripts for the solar panel example shown in the \"Getting Started with MATLAB\" video\n\n3년 이하 전 | 다운로드 수: 63 |",
null,
"해결됨\n\nSum all integers from 1 to 2^n\nGiven the number x, y must be the summation of all integers from 1 to 2^x. For instance if x=2 then y must be 1+2+3+4=10.\n\n3년 이하 전\n\n해결됨\n\nMagic is simple (for beginners)\nDetermine for a magic square of order n, the magic sum m. For example m=15 for a magic square of order 3.\n\n3년 이하 전\n\n해결됨\n\nMake a random, non-repeating vector.\nThis is a basic MATLAB operation. It is for instructional purposes. --- If you want to get a random permutation of integer...\n\n3년 이하 전\n\n해결됨\n\nRoll the Dice!\n*Description* Return two random integers between 1 and 6, inclusive, to simulate rolling 2 dice. *Example* [x1,x2] =...\n\n3년 이하 전\n\n해결됨\n\nNumber of 1s in a binary string\nFind the number of 1s in the given binary string. Example. If the input string is '1100101', the output is 4. If the input stri...\n\n3년 이하 전\n\n해결됨\n\nReturn the first and last character of a string\nReturn the first and last character of a string, concatenated together. If there is only one character in the string, the functi...\n\n3년 이하 전\n\n해결됨\n\nCreate times-tables\nAt one time or another, we all had to memorize boring times tables. 5 times 5 is 25. 5 times 6 is 30. 12 times 12 is way more th...\n\n3년 이하 전\n\n해결됨\n\nGetting the indices from a vector\nThis is a basic MATLAB operation. It is for instructional purposes. --- You may already know how to <http://www.mathworks....\n\n3년 이하 전\n\n해결됨\n\nCheck if number exists in vector\nReturn 1 if number _a_ exists in vector _b_ otherwise return 0. a = 3; b = [1,2,4]; Returns 0. a = 3; b = [1,...\n\n3년 이하 전\n\n해결됨\n\nDetermine whether a vector is monotonically increasing\nReturn true if the elements of the input vector increase monotonically (i.e. each element is larger than the previous). Return f...\n\n3년 이하 전\n\n해결됨\n\nSwap the first and last columns\nFlip the outermost columns of matrix A, so that the first column becomes the last and the last column becomes the first. 
All oth...\n\n3년 이하 전\n\n해결됨\n\nSwap the input arguments\nWrite a two-input, two-output function that swaps its two input arguments. For example: [q,r] = swap(5,10) returns q = ...\n\n3년 이하 전\n\n해결됨\n\nReverse the vector\nReverse the vector elements. Example: Input x = [1,2,3,4,5,6,7,8,9] Output y = [9,8,7,6,5,4,3,2,1]\n\n3년 이하 전\n\n해결됨\n\nLength of the hypotenuse\nGiven short sides of lengths a and b, calculate the length c of the hypotenuse of the right-angled triangle. <<http://upload....\n\n3년 이하 전\n\n해결됨\n\nGenerate a vector like 1,2,2,3,3,3,4,4,4,4\nGenerate a vector like 1,2,2,3,3,3,4,4,4,4 So if n = 3, then return [1 2 2 3 3 3] And if n = 5, then return [1 2 2...\n\n3년 이하 전\n\n해결됨\n\nFinding Perfect Squares\nGiven a vector of numbers, return true if one of the numbers is a square of one of the other numbers. Otherwise return false. E...\n\n3년 이하 전"
]
| [
null,
"https://kr.mathworks.com/responsive_image/150/150/0/0/0/cache/matlabcentral/profiles/1455089.jpg",
null,
"https://kr.mathworks.com/content/dam/mathworks/mathworks-dot-com/images/responsive/supporting/matlabcentral/minihack/badge-treasure-hunt-participant.png",
null,
"https://kr.mathworks.com/matlabcentral/profile/badges/Badge_ScavengerHunt_Finisher.png",
null,
"https://kr.mathworks.com/matlabcentral/profile/badges/6_Month_Streak.png",
null,
"https://kr.mathworks.com/matlabcentral/profile/badges/Thankful_1.png",
null,
"https://kr.mathworks.com/images/responsive/supporting/matlabcentral/cody/badges/introduction_to_matlab.png",
null,
"https://kr.mathworks.com/images/responsive/supporting/matlabcentral/cody/badges/cody_5th_anniversary_easy_badge.png",
null,
"https://kr.mathworks.com/images/responsive/supporting/matlabcentral/fileexchange/badges/personal_best_downloads_3.png",
null,
"https://kr.mathworks.com/images/responsive/supporting/matlabcentral/fileexchange/badges/editors_pick.png",
null,
"https://kr.mathworks.com/images/responsive/supporting/matlabcentral/fileexchange/badges/first_review.png",
null,
"https://kr.mathworks.com/images/responsive/supporting/matlabcentral/fileexchange/badges/five_star_galaxy_5.png",
null,
"https://kr.mathworks.com/images/responsive/supporting/matlabcentral/fileexchange/badges/first_submission.png",
null,
"https://kr.mathworks.com/matlabcentral/profile/badges/Guiding_Light.png",
null,
"https://kr.mathworks.com/matlabcentral/mlc-downloads/downloads/e578114b-4a80-11e4-9553-005056977bd0/206c466a-ae2d-4bd0-a23a-016c1a3da769/images/screenshot.png",
null,
"https://kr.mathworks.com/matlabcentral/mlc-downloads/downloads/e5799815-4a80-11e4-9553-005056977bd0/705dd1e7-f370-4381-a7ae-becbbbd33c39/images/screenshot.png",
null,
"https://kr.mathworks.com/matlabcentral/mlc-downloads/downloads/eb1e8f5e-8e4e-4d52-8877-46e0ccdacddc/0a977545-4c08-4fb1-b069-1206a13437e4/images/screenshot.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6261974,"math_prob":0.96457106,"size":5756,"snap":"2022-05-2022-21","text_gpt3_token_len":1905,"char_repetition_ratio":0.12552156,"word_repetition_ratio":0.042457093,"special_character_ratio":0.28943712,"punctuation_ratio":0.15350553,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9880803,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32],"im_url_duplicate_count":[null,5,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,3,null,3,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-24T19:31:43Z\",\"WARC-Record-ID\":\"<urn:uuid:9ac90b48-e77f-4d7d-8363-b91b29dbcc2e>\",\"Content-Length\":\"144912\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b8e4fded-dd0c-49dc-a632-7aecca3298ef>\",\"WARC-Concurrent-To\":\"<urn:uuid:8a203abf-c669-41c4-ac77-b5f2546f2b7e>\",\"WARC-IP-Address\":\"184.25.188.167\",\"WARC-Target-URI\":\"https://kr.mathworks.com/matlabcentral/profile/authors/1455089?detail=all\",\"WARC-Payload-Digest\":\"sha1:WI6BH7CS2BLGC6D2ORFDIUIDYX2E4V46\",\"WARC-Block-Digest\":\"sha1:VAMQLN2DRBGBRMP3C5ESOZULQYIRSVSJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320304600.9_warc_CC-MAIN-20220124185733-20220124215733-00339.warc.gz\"}"} |
https://www.tutorsonnet.com/theory-of-production-returns-to-one-variable-factor-homework-help.php | [
"",
null,
"",
null,
"",
null,
"",
null,
"# Theory Of Production - Returns To One Variable Factor",
null,
"Illustration 32\n\nYou are provided with following production function.\n\nQ = L^0.50 K^0.50\n\n1. Determine the marginal product of labour\n1. If the fixed volume of capital in the short run parities 1000 units, what is the short run production function?\n1. Depict the marginal product of labour MPL is less than average product of labour AP in the short run production function in (2).\n\nSolution\n\n1. To obtain marginal product of labour, we distinguish the provided production function with respect to labour. Therefore\n\nMPL = dQ = 0.50L^-0.50 K^0.50\ndL\n\n= 0.50(K/L) ^ 0.50\n\n1. Note that in the short run production function, one factor is variable with the quantity of other fixed factor. Therefore,\n\nQ = L^0.50 K ^0.50\n\nQ = L^0.50 (900)^0.50\n\nSince the square root of 900 is 30,\n\nQ = L^0.50 * 30\n\n= 30L^0.50\n\nTherefore, short run production function is Q = 30L^0.50\n\nMPL = dQ\ndL\n\n= 30 * 0.50L^-0.50\n\n= 15\nL^0.50\n\nAPL = Q / L\n\n= 30L^0.50\nL\n\n= 30L^0.50-1\n\n= 30\nL^0.50\n\nRelating the values of MPL and APL we determine that MPN < APL\n\nIllustration 33\n\nLet us assume the following production function of an industry,\n\nO = 3L^2 – 0.2L^3\n\nWhere O is output and L is the volume of Labour used.\n\n1. Ascertain the MPL\n2. Value of L that optimises output O\n3. Value of L at which its Average Product is optimum\n\nSolution\n\nIt is noted that the above function is short run production function as there is no fixed term in it; all terms in it contain the variable factor, Labour (L).\n\n(1) MPL = dO\ndL\n\n= 2 * 3L – 3 * 0.2L ^ 2\n\n= 6L – 0.6L^2\n\nTo determine the average product AP of labour, we must divide the aggregate output by L, therefore,\n\nAPL = O = 3L^2 – 0.2L^3\nL L\n\n= 3L – 0.2L^2\n\n1. 
We can determine the value of the variable factor L that maximises output O and also the value of labour L at which its average product is maximum.\n\n(2) The value of the variable input L that maximises output O can be obtained by setting the marginal product function of the variable input equal to zero. We have obtained above that MPL = 6L – 0.6L^2. Setting it equal to zero we have\n\n6L – 0.6L^2 = 0\n\n0.6L^2 = 6L\n\n0.6L = 6\n\nL = 6 / 0.6 = 10\n\nAt 10 units of labour, the value of O will be maximum.\n\n(3) Value of L at which its AP is maximum:\n\nThe value of the average product function will be maximised where its first derivative equals zero.\n\nAP of labour obtained above = 3L – 0.2L^2\n\ndAP/dL = 3 – 0.4L = 0\n\n0.4L = 3\n\nL = 3 / 0.4 = 7.5\n\nTherefore, when 7.5 units of labour are used its average product will be maximum.\n\nIllustration 34\n\nPresume a firm producing cotton cloth has the following production function:\n\nO = 4K^½ * 2L^½\n\nAscertain the marginal products of labour and capital.\n\nSolution\n\nThis is a Cobb-Douglas production function with specific values of the exponents. Note that O = 4K^½ * 2L^½ = 8K^½ L^½.\n\nMPL = dO/dL = 8K^½ * (½)L^(½ – 1) = 4K^½ L^(-½) = 4(K/L)^½\n\nMPK = dO/dK = 8 * (½)K^(½ – 1) L^½ = 4K^(-½) L^½ = 4(L/K)^½\n\nIllustration 35\n\nLet us assume the following production function\n\nO = 1.50 A^0.75 B^0.25\n\nDetermine the elasticity of productivity O with respect to A – Labour and B – Capital. 
Provide an economic interpretation of these productivity elasticities.\n\nSolution\n\nThe given production function is\n\nO = 1.50 A^0.75 B^0.25\n\nProductivity elasticity of labour EA = MPA / APA\n\nMPA = dO/dA = 0.75 * 1.50A^-0.25 * B^0.25 = 1.125A^-0.25 * B^0.25\n\nAPA = O/A = 1.50A^0.75 * B^0.25 / A = 1.50A^-0.25 * B^0.25\n\nEA = MPA / APA = (1.125A^-0.25 * B^0.25) / (1.50A^-0.25 * B^0.25) = 0.75\n\nProductivity elasticity of capital EB = MPB / APB\n\nMPB = dO/dB = 0.25 * 1.50A^0.75 * B^-0.75 = 0.375A^0.75 * B^-0.75\n\nAPB = O/B = 1.50A^0.75 * B^0.25 / B = 1.50A^0.75 * B^-0.75\n\nEB = MPB / APB = (0.375A^0.75 * B^-0.75) / (1.50A^0.75 * B^-0.75) = 0.25\n\nFrom the value of the elasticity of labour equal to 0.75 it follows that a 1 percent increase in the employment of labour causes a 0.75 percent increase in output, which is less than unity. Likewise, the productivity elasticity of capital being equal to 0.25 entails that a one percent increase in capital causes a 0.25 percent increase in output.\n\nOnline Live Tutor Short run production, Average Product:\n\nWe have the best tutors in Economics in the industry. Our tutors can break down a complex Short run production, Average Product problem into its sub parts and explain to you in detail how each step is performed. This approach of breaking down a problem has been appreciated by the majority of our students for learning Short run production, Average Product concepts. You will get one-to-one personalized attention through our online tutoring which will make learning fun and easy. Our tutors are highly qualified and hold advanced degrees. Please do send us a request for Short run production, Average Product tutoring and experience the quality yourself.\n\nOnline Theory of Production Help:\n\nIf you are stuck with a Theory of Production Homework problem and need help, we have excellent tutors who can provide you with Homework Help. 
Our tutors who provide Theory of Production help are highly qualified. Our tutors have many years of industry experience and have had years of experience providing Theory of Production Homework Help. Please do send us the Theory of Production problems on which you need help and we will forward them to our tutors for review.\n\nOther topics under Theory of Production and Cost analysis:",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"•",
null,
""
]
| [
null,
"https://www.tutorsonnet.com/images/search-left.png",
null,
"https://www.tutorsonnet.com/images/financeasstop.gif",
null,
"https://www.tutorsonnet.com/images/onlinerequest_right.gif",
null,
"https://www.tutorsonnet.com/images/onlinerequestleft.gif",
null,
"https://www.tutorsonnet.com/images/microeconomics.jpg",
null,
"https://www.tutorsonnet.com/images/intermediatebottom.gif",
null,
"https://www.tutorsonnet.com/images/sendusasstop.gif",
null,
"https://www.tutorsonnet.com/images/sendusimage.jpg",
null,
"https://www.tutorsonnet.com/images/sendusbottom.gif",
null,
"https://www.tutorsonnet.com/images/othertop.gif",
null,
"https://www.tutorsonnet.com/images/otherimg.gif",
null,
"https://www.tutorsonnet.com/images/otherbottom.gif",
null,
"https://www.tutorsonnet.com/images/paypal.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8625657,"math_prob":0.98403,"size":5750,"snap":"2019-51-2020-05","text_gpt3_token_len":1782,"char_repetition_ratio":0.15541247,"word_repetition_ratio":0.027958993,"special_character_ratio":0.3132174,"punctuation_ratio":0.116830066,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9987693,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-21T23:04:10Z\",\"WARC-Record-ID\":\"<urn:uuid:a4cf0a6c-1ca6-4e4c-8137-53bf8174e0a9>\",\"Content-Length\":\"59072\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e3900bb5-c2be-474f-818a-9f6601c1505c>\",\"WARC-Concurrent-To\":\"<urn:uuid:85c68108-0ca7-4483-aaa3-218a48d840b8>\",\"WARC-IP-Address\":\"165.227.19.35\",\"WARC-Target-URI\":\"https://www.tutorsonnet.com/theory-of-production-returns-to-one-variable-factor-homework-help.php\",\"WARC-Payload-Digest\":\"sha1:QZK27ET6WQ5P5VXTXKPH37WT6GWUXEGS\",\"WARC-Block-Digest\":\"sha1:EVHJWBJQN7GNNSN5VAQEFDHVRAQRH45C\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250606226.29_warc_CC-MAIN-20200121222429-20200122011429-00176.warc.gz\"}"} |
https://shinmera.github.io/crypto-shortcuts/ | [
"# crypto-shortcuts\n\n2.0.0\n\nShorthand functions for common cryptography tasks such as hashing, encrypting, and encoding.\n\n## About Crypto Shortcuts\n\nThis is a small wrapper library around ironclad and cl-base64 to provide quick and easy access to frequently used cryptography functionality like hashing, encoding and encrypting.\n\n## How To\n\n``````(cryptos:from-base64 (cryptos:to-base64 \"CLがすごいです。\"))\n\n(cryptos:decrypt (cryptos:encrypt \"Lispy Secrets, oooOOooo\" \"1234567890123456\") \"1234567890123456\")\n\n(cryptos:pbkdf2-hash \"My passwords have never been this secure, whoa nelly!\" \"salty snacks\")\n\n(cryptos:simple-hash \"I guess not everyone can afford PBKDF2.\" \"crisps\")\n\n(cryptos:md5 \"MD5 hashes are weak, but still sometimes useful.\")\n\n(cryptos:sha512 \"If you don't need hash iterations or salts like simple-hash provides, this will do too.\")``````\n\n## Package Index\n\n• ### CRYPTO-SHORTCUTS(CRYPTOS ORG.SHIRAKUMO.CRYPTO-SHORTCUTS)\n\n• function `(`\n\n#### `ADLER32`\n\n`STRING &KEY (TO :HEX) ENCODE``)`\n```Turn a string into a ADLER32-hash.\n\nTO is the returned representation\nENCODE is the encoding before hashing\n\nSee TO.```\n• function `(`\n\n#### `CRC24`\n\n`STRING &KEY (TO :HEX) ENCODE``)`\n```Turn a string into a CRC24-hash.\n\nTO is the returned representation\nENCODE is the encoding before hashing\n\nSee TO.```\n• function `(`\n\n#### `CRC32`\n\n`STRING &KEY (TO :HEX) ENCODE``)`\n```Turn a string into a CRC32-hash.\n\nTO is the returned representation\nENCODE is the encoding before hashing\n\nSee TO.```\n• function `(`\n\n#### `MD2`\n\n`STRING &KEY (TO :HEX) ENCODE``)`\n```Turn a string into a MD2-hash.\n\nTO is the returned representation\nENCODE is the encoding before hashing\n\nSee TO.```\n• function `(`\n\n#### `MD4`\n\n`STRING &KEY (TO :HEX) ENCODE``)`\n```Turn a string into a MD4-hash.\n\nTO is the returned representation\nENCODE is the encoding before hashing\n\nSee TO.```\n• function `(`\n\n#### `MD5`\n\n`STRING 
&KEY (TO :HEX) ENCODE``)`\n```Turn a string into a MD5-hash.\n\nTO is the returned representation\nENCODE is the encoding before hashing\n\nSee TO.```\n• function `(`\n\n#### `PBKDF2-HASH`\n\n`PASSWORD SALT &KEY (DIGEST :SHA512) (ITERATIONS 1000) (TO :HEX)``)`\n```Hashes PASSWORD with SALT using the PBKDF2 method and the provided DIGEST, repeating the process ITERATIONS times.\nThe returned hash is encoded using the method specified in TO.\n\nThe default DIGEST is SHA512, the iteration is 1000, and TO is HEX.\n\nFour values are returned: hash, salt (as a string), digest, and iterations.\n\nSee TO.```\n• function `(`\n\n#### `PBKDF2-KEY`\n\n`PASSWORD SALT &REST ARGS &KEY DIGEST ITERATIONS``)`\n```Hashes PASSWORD with SALT using the PBKDF2 method and the provided DIGEST, repeating the process ITERATIONS times.\n\nThe default DIGEST is SHA512, and the iteration is 1000.\n\nFour values are returned: hash as an octet-vector, salt (as a string), digest, and iterations.\n\nLEGACY. Use PBKDF2-HASH instead.```\n• function `(`\n\n#### `RIPEMD-128`\n\n`STRING &KEY (TO :HEX) ENCODE``)`\n```Turn a string into a RIPEMD-128-hash.\n\nTO is the returned representation\nENCODE is the encoding before hashing\n\nSee TO.```\n• function `(`\n\n#### `RIPEMD-160`\n\n`STRING &KEY (TO :HEX) ENCODE``)`\n```Turn a string into a RIPEMD-160-hash.\n\nTO is the returned representation\nENCODE is the encoding before hashing\n\nSee TO.```\n• function `(`\n\n#### `SHA1`\n\n`STRING &KEY (TO :HEX) ENCODE``)`\n```Turn a string into a SHA1-hash.\n\nTO is the returned representation\nENCODE is the encoding before hashing\n\nSee TO.```\n• function `(`\n\n#### `SHA224`\n\n`STRING &KEY (TO :HEX) ENCODE``)`\n```Turn a string into a SHA224-hash.\n\nTO is the returned representation\nENCODE is the encoding before hashing\n\nSee TO.```\n• function `(`\n\n#### `SHA256`\n\n`STRING &KEY (TO :HEX) ENCODE``)`\n```Turn a string into a SHA256-hash.\n\nTO is the returned representation\nENCODE is the encoding before 
hashing\n\nSee TO.```\n• function `(`\n\n#### `SHA384`\n\n`STRING &KEY (TO :HEX) ENCODE``)`\n```Turn a string into a SHA384-hash.\n\nTO is the returned representation\nENCODE is the encoding before hashing\n\nSee TO.```\n• function `(`\n\n#### `SHA512`\n\n`STRING &KEY (TO :HEX) ENCODE``)`\n```Turn a string into a SHA512-hash.\n\nTO is the returned representation\nENCODE is the encoding before hashing\n\nSee TO.```\n• function `(`\n\n#### `SIMPLE-HASH`\n\n`PASSWORD SALT &KEY (DIGEST :SHA512) (ITERATIONS 1000) (TO :HEX)``)`\n```Hashes PASSWORD with SALT using DIGEST as the digest-method and repeats the hashing ITERATIONS times.\nThe returned hash is encoded using the method specified in TO.\n\nThe default DIGEST is SHA512, the iteration is 1000, and TO is HEX.\n\nFour values are returned: hash, salt (as a string), digest, and iterations.\n\nSee TO.```\n• function `(`\n\n#### `TIGER`\n\n`STRING &KEY (TO :HEX) ENCODE``)`\n```Turn a string into a TIGER-hash.\n\nTO is the returned representation\nENCODE is the encoding before hashing\n\nSee TO.```\n• function `(`\n\n#### `TREE-HASH`\n\n`STRING &KEY (TO :HEX) ENCODE``)`\n```Turn a string into a TREE-HASH-hash.\n\nTO is the returned representation\nENCODE is the encoding before hashing\n\nSee TO.```\n• function `(`\n\n#### `WHIRLPOOL`\n\n`STRING &KEY (TO :HEX) ENCODE``)`\n```Turn a string into a WHIRLPOOL-hash.\n\nTO is the returned representation\nENCODE is the encoding before hashing\n\nSee TO.```\n• generic `(`\n\n#### `CMAC`\n\n`TEXT KEY &KEY CIPHER MODE IV TO NORMALIZE-KEY``)`\n```Generate a CMAC digest of TEXT using KEY and the provided CIPHER/MODE/IV.\nThe returned digest is encoded by the format specified in TO.\n\nThe default cipher is AES, default mode is ECB, and default TO is BASE64.\n\nFour values are returned: digest, key, cipher, mode, and IV.\n\nSee TO\nSee NORMALIZE-KEY```\n• generic `(`\n\n#### `CODE`\n\n`FROM TO VECTOR``)`\n```Convenience function to de/encode in one pass.\nBy default, FROM and TO can 
both be one of:\n\n:OCTETS :STRING :HEX :BASE64\n\nIf FROM is NIL, then TO is called with the remaining arguments.```\n• generic `(`\n\n#### `DECRYPT`\n\n`TEXT KEY &KEY CIPHER MODE IV FROM NORMALIZE-KEY``)`\n```Decrypt TEXT with KEY using the provided CIPHER/MODE/IV.\nDepending on the mode, the key should be of length 16, 32, or 64.\nThe passed text is decoded by the format specified in FROM.\n\nThe default cipher is AES, default mode is ECB, and default TO is BASE64.\n\nFour values are returned: Decrypted text, key, cipher, mode, and IV.\n\nSee CODE\nSee NORMALIZE-KEY```\n• generic `(`\n\n#### `ENCRYPT`\n\n`TEXT KEY &KEY CIPHER MODE IV TO NORMALIZE-KEY``)`\n```Encrypt TEXT with KEY using the provided CIPHER/MODE/IV.\nDepending on the mode, the key should be of length 16, 32, or 64.\nThe returned encrypted vector is encoded by the format specified in TO.\n\nThe default cipher is AES, default mode is ECB, and default TO is BASE64.\n\nFour values are returned: Encrypted&encoded text, key, cipher, mode, and IV.\n\nSee TO\nSee NORMALIZE-KEY```\n• generic `(`\n\n#### `FROM-BASE64`\n\n`VECTOR &OPTIONAL TO``)`\n```Turns a base64-encoded vector into a vector encoded by TO.\nSee TO.```\n• generic `(`\n\n#### `FROM-HEX`\n\n`HEX-STRING``)`\n`Turn the hex-string into an octet-vector.`\n• generic `(`\n\n#### `GET-CIPHER`\n\n`KEY &KEY CIPHER MODE IV``)`\n`Return the corresponding cipher with KEY using MODE and potentially the initialization-vector IV.`\n• generic `(`\n\n#### `HMAC`\n\n`TEXT KEY &KEY DIGEST TO``)`\n```Generate an HMAC digest of TEXT using KEY and the provided DIGEST method.\nThe returned digest is encoded by the format specified in TO.\n\nThe default digest is SHA512, and default TO is BASE64.\n\nThree values are returned: digest, key, and digest-type.\n\nSee TO```\n• generic `(`\n\n#### `MAKE-SALT`\n\n`SALT``)`\n```Create a salt from the given object.\n\n(eql T) -- A random salt\nINTEGER -- A salt of this size\nSTRING -- Use this string as an 
octet-vector\nVECTOR -- Use this vector directly\n\nSee TO-OCTETS```\n• generic `(`\n\n#### `NORMALIZE-KEY`\n\n`METHOD KEY``)`\n```Normalizes the KEY to an octet-vector using METHOD.\nBy default, method can be one of:\n\n:HASH -- Hash it by sha256\n:FIT -- Truncate or pad it out before turning into octets.\nNIL -- Just turn it into an octet-vector.```\n• generic `(`\n\n#### `TO`\n\n`THING VECTOR``)`\n```Convenience function to call the various encoders.\nBy default, THING can be one of:\n\nNIL -- Returns VECTOR\n:OCTETS -- See TO-OCTETS\n:STRING -- See TO-STRING\n:HEX -- See TO-HEX\n:BASE64 -- See TO-BASE64```\n• generic `(`\n\n#### `TO-BASE64`\n\n`SEQUENCE``)`\n`Turns a vector into a base64-encoded string.`\n• generic `(`\n\n#### `TO-HEX`\n\n`VECTOR``)`\n`Turn VECTOR into a hex-string.`\n• generic `(`\n\n#### `TO-OCTETS`\n\n`STRING &OPTIONAL FORMAT``)`\n`Turns STRING into a FORMAT (default UTF-8) encoded octet-vector.`\n• generic `(`\n\n#### `TO-STRING`\n\n`OCTETS &OPTIONAL FORMAT``)`\n`Turns OCTETS from FORMAT (default UTF-8) encoding into a string. `"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.71164113,"math_prob":0.83462864,"size":7538,"snap":"2019-35-2019-39","text_gpt3_token_len":2019,"char_repetition_ratio":0.16830368,"word_repetition_ratio":0.5218081,"special_character_ratio":0.2543115,"punctuation_ratio":0.13468249,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9604731,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-17T02:22:07Z\",\"WARC-Record-ID\":\"<urn:uuid:0398e7d3-0cc4-4e59-a5fa-46a3791c32a3>\",\"Content-Length\":\"26116\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2bdaa62c-f097-4740-a829-8bb7a039ece1>\",\"WARC-Concurrent-To\":\"<urn:uuid:80bd972e-c4bb-45d3-95fb-0b69898c7174>\",\"WARC-IP-Address\":\"185.199.110.153\",\"WARC-Target-URI\":\"https://shinmera.github.io/crypto-shortcuts/\",\"WARC-Payload-Digest\":\"sha1:LW4XOEMBHLNL4L2M2EGH565Z43COXN5H\",\"WARC-Block-Digest\":\"sha1:7YLHJX7MGRXWWVBVDREL6RUJFIIPODMG\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514573011.59_warc_CC-MAIN-20190917020816-20190917042816-00451.warc.gz\"}"} |
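The `PBKDF2-HASH` entry in the record above (SHA512 digest, 1000 iterations, hex output by default) has a direct analogue in Python's standard library; a rough cross-language sketch, not part of the library's docs, and not guaranteed to match ironclad's output byte-for-byte:

```python
import hashlib

def pbkdf2_hex(password: str, salt: str, digest: str = "sha512",
               iterations: int = 1000) -> str:
    # Same defaults as cryptos:pbkdf2-hash: SHA512, 1000 iterations, hex encoding.
    raw = hashlib.pbkdf2_hmac(digest, password.encode("utf-8"),
                              salt.encode("utf-8"), iterations)
    return raw.hex()

h = pbkdf2_hex("My passwords have never been this secure, whoa nelly!",
               "salty snacks")
```

Like the Lisp version, the result is deterministic for a given password/salt/digest/iteration count.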
https://socratic.org/questions/how-do-you-graph-x-4-2-y-2-2-25 | [
"# How do you graph (x – 4)^2 + (y – 2)^2 = 25?\n\nThis is the equation of a circle of radius $5$ with centre $\\left(4 , 2\\right)$\nThis equation is of the form ${\\left(x - a\\right)}^{2} + {\\left(y - b\\right)}^{2} = {r}^{2}$, which is the standard form of the equation of a circle of radius $r$ centred at $\\left(a , b\\right)$.\nThe left hand side is the formula for the square of the distance of $\\left(x , y\\right)$ from $\\left(a , b\\right)$"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6798226,"math_prob":1.0000098,"size":406,"snap":"2020-45-2020-50","text_gpt3_token_len":131,"char_repetition_ratio":0.10696518,"word_repetition_ratio":0.0,"special_character_ratio":0.36699507,"punctuation_ratio":0.10869565,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000092,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-01T21:31:18Z\",\"WARC-Record-ID\":\"<urn:uuid:84a797fb-f02f-4779-821a-ad44b706feeb>\",\"Content-Length\":\"33299\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f5551e1b-ef8d-4264-894b-52dda43d9f49>\",\"WARC-Concurrent-To\":\"<urn:uuid:0bafb54a-d1ce-4a47-bc2e-f34565490bbd>\",\"WARC-IP-Address\":\"216.239.36.21\",\"WARC-Target-URI\":\"https://socratic.org/questions/how-do-you-graph-x-4-2-y-2-2-25\",\"WARC-Payload-Digest\":\"sha1:6HEAZ34O4LBZUKY27LYDNWI2DWEC6I77\",\"WARC-Block-Digest\":\"sha1:67OJDRGDZO7A2HMHUKRJEG6MRFM5HER5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141681524.75_warc_CC-MAIN-20201201200611-20201201230611-00117.warc.gz\"}"} |
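The reading of (x − 4)² + (y − 2)² = 25 as a circle with centre (4, 2) and radius 5 can be checked numerically; a small sketch (mine, not part of the answer above):

```python
import math

xc, yc, r = 4.0, 2.0, 5.0  # centre (4, 2), radius 5

def on_circle(x, y):
    # A point lies on the graph iff it satisfies (x - 4)^2 + (y - 2)^2 = 25.
    return math.isclose((x - xc) ** 2 + (y - yc) ** 2, r ** 2)

checks = [on_circle(xc + r, yc),      # rightmost point (9, 2)
          on_circle(xc, yc - r),      # bottom point (4, -3)
          on_circle(xc + 3, yc + 4)]  # 3-4-5 point (7, 6)
```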
https://dis.dankook.ac.kr/lectures/cg21/2021/10/page/2/ | [
"## OPENGL TRANSFORMATION MATRIX TUTORIAL\n\nhttp://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/\n\n# Homogeneous coordinates\n\nUntil now, we have only considered 3D vertices as an (x,y,z) triplet. Let's introduce w. We will now have (x,y,z,w) vectors.\n\nThis will become clearer soon, but for now, just remember this:\n\n• If w == 1, then the vector (x,y,z,1) is a position in space.\n• If w == 0, then the vector (x,y,z,0) is a direction.\n\n(In fact, remember this forever.)\nWhat difference does this make? Well, for a rotation, it doesn't change anything. When you rotate a point or a direction, you get the same result. However, for a translation (when you move the point in a certain direction), things are different. What would it mean to “translate a direction”? Not much. Homogeneous coordinates allow us to use a single mathematical formula to deal with these two cases.\n\n# Transformation matrices\n\nIn 3D graphics we will mostly use 4×4 matrices. They will allow us to transform our (x,y,z,w) vertices. This is done by multiplying the vertex by the matrix:\n\nMatrix x Vertex (in this order!!) = TransformedVertex",
null,
"In C++, with GLM:\n\n``````glm::mat4 myMatrix;\nglm::vec4 myVector;\n// fill myMatrix and myVector somehow\nglm::vec4 transformedVector = myMatrix * myVector; // Again, in this order! This is important.\n``````\n\nIn GLSL:\n\n``````mat4 myMatrix;\nvec4 myVector;\n// fill myMatrix and myVector somehow\nvec4 transformedVector = myMatrix * myVector; // Yeah, it's pretty much the same as with GLM\n``````\n\n## Translation matrices\n\nThese are the simplest transformation matrices to understand. A translation matrix looks like this:",
null,
"where X,Y,Z are the values that you want to add to your position.\n\nSo if we want to translate the vector (10,10,10,1) by 10 units in the X direction, we get:",
null,
"## Scaling matrices\n\nScaling matrices are quite easy too :",
null,
"So if you want to scale a vector (position or direction, it doesn’t matter) by 2.0 in all directions :",
null,
"## Rotation matrices\n\nThese are quite complicated.\n\n## Cumulating transformations\n\nSo now we know how to rotate, translate, and scale our vectors. It would be great to combine these transformations. This is done by multiplying the matrices together, for instance :\n\n``````TransformedVector = TranslationMatrix * RotationMatrix * ScaleMatrix * OriginalVector;\n``````"
]
| [
null,
"http://www.opengl-tutorial.org/assets/images/tuto-3-matrix/MatrixXVect.gif",
null,
"http://www.opengl-tutorial.org/assets/images/tuto-3-matrix/translationMatrix.png",
null,
"http://www.opengl-tutorial.org/assets/images/tuto-3-matrix/translationExamplePosition1.png",
null,
"http://www.opengl-tutorial.org/assets/images/tuto-3-matrix/scalingMatrix.png",
null,
"http://www.opengl-tutorial.org/assets/images/tuto-3-matrix/scalingExample.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.81675774,"math_prob":0.9721971,"size":2211,"snap":"2022-27-2022-33","text_gpt3_token_len":534,"char_repetition_ratio":0.14091527,"word_repetition_ratio":0.03954802,"special_character_ratio":0.2406151,"punctuation_ratio":0.1938326,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99821115,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,5,null,5,null,8,null,5,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-07T23:16:32Z\",\"WARC-Record-ID\":\"<urn:uuid:5d5d2c81-d3c1-4c1d-a4c9-d137cb637957>\",\"Content-Length\":\"63029\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:37bfd384-8e5c-4430-ad5a-9101d30add80>\",\"WARC-Concurrent-To\":\"<urn:uuid:0cd0afa0-860b-44ea-9f3a-482be5d70f08>\",\"WARC-IP-Address\":\"220.149.232.78\",\"WARC-Target-URI\":\"https://dis.dankook.ac.kr/lectures/cg21/2021/10/page/2/\",\"WARC-Payload-Digest\":\"sha1:V5RRCU4RAZHRHRS4ZW3YE46XOM3LTQVA\",\"WARC-Block-Digest\":\"sha1:WXSVH4UH2I522WFQVWFIBN57JYAHBBKW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882570730.59_warc_CC-MAIN-20220807211157-20220808001157-00100.warc.gz\"}"} |
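The tutorial record above multiplies 4×4 matrices against (x,y,z,w) vectors; a dependency-free Python sketch of the same arithmetic (plain lists stand in for GLM's mat4/vec4) shows why w == 1 positions translate while w == 0 directions do not:

```python
def mat_vec(M, v):
    # Matrix x Vertex, in that order: row i of M dotted with v.
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def scaling(s):
    return [[s, 0, 0, 0], [0, s, 0, 0], [0, 0, s, 0], [0, 0, 0, 1]]

pos  = [10, 10, 10, 1]  # w == 1: a position in space
dirv = [10, 10, 10, 0]  # w == 0: a direction

moved   = mat_vec(translation(10, 0, 0), pos)   # -> [20, 10, 10, 1]
unmoved = mat_vec(translation(10, 0, 0), dirv)  # -> [10, 10, 10, 0]
doubled = mat_vec(scaling(2), pos)              # -> [20, 20, 20, 1]
```

Cumulated transforms compose the same way, right to left: `mat_vec(translation(1, 2, 3), mat_vec(scaling(2), pos))` applies the scale first, then the translation, matching `TranslationMatrix * RotationMatrix * ScaleMatrix * OriginalVector`.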
http://www.edugoog.com/details/Consider-the-following-two-sets-of-equations-I-2x-y-0-and-6x-3y-0-II-3x-4y-0-and-12x-20y-0-linear-equations.html | [
"Consider the following two sets of equations:\nI. 2x - y = 0 and 6x - 3y = 0\nII. 3x - 4y = 0 and 12x - 20y = 0\n\n###### Option:\nA. both sets I and II possess unique solutions.\nB. Set I possesses unique solution and set II has infinitely many solutions.\nC. Set II possesses unique solution and set I possesses infinitely many solutions.\nD. None of the sets I and II possesses a unique solution.\nAnswer: C. Set II possesses a unique solution and set I possesses infinitely many solutions.\n\nJustification:\n\nEqns. in I reduce to 2x - y = 0 & 2x - y = 0 (dividing the second equation by 3).\nThus, there is effectively one equation in two variables.\n.'. The given equations have an infinite number of solutions.\nEqns. in II reduce to 3x - 4y = 0 & 3x - 5y = 0 (dividing the second equation by 4).\nSolving these equations, we get x = 0 & y = 0.\nSo, (C) is true.",
null,
""
]
| [
null,
"http://www.edugoog.com/images/NextIcon.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.90742403,"math_prob":0.99228024,"size":783,"snap":"2022-27-2022-33","text_gpt3_token_len":254,"char_repetition_ratio":0.18998717,"word_repetition_ratio":0.23529412,"special_character_ratio":0.32950193,"punctuation_ratio":0.15517241,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9992643,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-06T19:17:39Z\",\"WARC-Record-ID\":\"<urn:uuid:15a57673-61e4-4322-a851-ddae354ba10d>\",\"Content-Length\":\"54133\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:be70d80a-09ce-4c71-a744-d568da883550>\",\"WARC-Concurrent-To\":\"<urn:uuid:79799806-9ba2-48e2-874b-0b9c793a183a>\",\"WARC-IP-Address\":\"172.67.213.67\",\"WARC-Target-URI\":\"http://www.edugoog.com/details/Consider-the-following-two-sets-of-equations-I-2x-y-0-and-6x-3y-0-II-3x-4y-0-and-12x-20y-0-linear-equations.html\",\"WARC-Payload-Digest\":\"sha1:IONDXVE7HGWICW44IR2DIEBWM2K2JNGR\",\"WARC-Block-Digest\":\"sha1:RU7RI5ZMNAZLZYHMVXF7BJG3SE3RGH7H\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104676086.90_warc_CC-MAIN-20220706182237-20220706212237-00390.warc.gz\"}"} |
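The justification above rests on a standard fact: a 2×2 homogeneous system ax + by = 0, cx + dy = 0 has only the trivial solution when the coefficient determinant ad − bc is nonzero, and infinitely many solutions when it is zero. A quick sketch checking both sets:

```python
def det2(a, b, c, d):
    # Determinant of the coefficient matrix [[a, b], [c, d]].
    return a * d - b * c

# Set I: 2x - y = 0 and 6x - 3y = 0 (the second equation is 3 times the first)
det_I = det2(2, -1, 6, -3)     # 0 -> infinitely many solutions
# Set II: 3x - 4y = 0 and 12x - 20y = 0
det_II = det2(3, -4, 12, -20)  # nonzero -> unique solution x = 0, y = 0
```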
https://it.mathworks.com/matlabcentral/answers/730713-output-prediction-problem-for-ann | [
"Output prediction Problem for ANN\n\n1 view (last 30 days)\nMario Viola on 29 Jan 2021\nAnswered: Rishik Ramena on 5 Feb 2021\nI'm trying to implement an ANN (shallow, one hidden layer, not of the recurrent/convolutional type), in which based on closing prices of stock data and some other inputs (so far, just moving averages, as i'm trying to develop it for the first time), it should predict the direction of next day return, =1 if it is positive, =0 otherwise. Of course i'm feeding the network with those target values, in which i calculated the returns and labeled them as 0/1 accordingly. Even tho in both of the layers i am using the logsig transformation function, the outputs are always around 0.5, while i need them expressed as a 0 or 1. I was thinking about rounding them by myself (i.e. if output >0.5 --> output =1, same for the negative case), but i suppose the network should be able to do it by itself, fixing the errors that i probably (almost for sure) made.\nI will add the code i wrote so far, i really hope you can help me some way. Any type of suggestion would be really appreciated as well.\nOpen =cellfun(@str2double,StockData.Open);\nHigh = cellfun(@str2double,StockData.High);\nLow = cellfun(@str2double,StockData.Low);\nClose = cellfun(@str2double,StockData.Close);\nVolume = cellfun(@str2double,StockData.Volume);\nDate = StockData.Date;\nStockData_TimeTable = timetable(Date,Open,High,Low,Close);\nif any(any(ismissing(Close)))== 1\nClose = fillmissing(Close,'linear');\nend\n%Prices are from the furthest to the nearest, flip to calculate direction\n%of return (1 if >0, 0 otherwise);\nClose = flip(Close);\nSignal = zeros(length(Close),1);\nfor i=1:length(Close)-1;\nif Close(i) > Close(i+1);\nSignal(i) = 1;\nelse\nSignal(i) =0;\nend\nend\n%The last value of Signal is 0 as it has not been calculated, so i deleted it.
As for the first (actual day) value of Close, i decided\n%to feed the network with last day closing prices (and predictors as well) and actual day target output (direction of the market).\nSignal = Signal(1:end-1);\nClose = Close(2:end);\nClose = flip(Close);\nSignal = flip(Signal);\n%Close = Data(:,4);\nMa9 = movavg(Close,'simple',9);\nMa18 = movavg(Close,'simple',18);\nEMa9 = movavg(Close,'exponential',9);\nEMa18 = movavg(Close,'exponential',18);\nX = [Close Ma9 Ma18 EMa9 EMa18];\nhiddenLayerSize = 30;\nnet = fitnet(hiddenLayerSize);\nnet.divideFcn = 'divideblock';\nnet.layers{1}.transferFcn = 'logsig';\nnet.layers{2}.transferFcn = 'logsig';\n%net.trainParam.epochs=3000;net.trainParam.lr=0.3;net.trainParam.mc=0.6;\nnet.trainParam.max_fail =100;\nnet.performFcn = 'crossentropy';\nxt = X';\nyt = Signal';\n[net tr] = train(net, xt, yt);\n\nRishik Ramena on 5 Feb 2021\nYour code looks just fine. Do you intend to compare the closing prices of one day with the closing prices of the next day or with the opening prices of the next day?\nAs you are using the logsig transfer function, you are supposed to get values between 0 and 1. You might want to use the classify function at the end of your training to classify using the 0-1 labels."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8283401,"math_prob":0.9646539,"size":3302,"snap":"2022-05-2022-21","text_gpt3_token_len":905,"char_repetition_ratio":0.10946028,"word_repetition_ratio":0.012170386,"special_character_ratio":0.2725621,"punctuation_ratio":0.18958032,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9855648,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-24T18:26:54Z\",\"WARC-Record-ID\":\"<urn:uuid:c3236e66-d684-4e90-bcc1-5f468a07bb7c>\",\"Content-Length\":\"115032\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:90b78d1d-bf01-42ef-ae2b-612358e4b56a>\",\"WARC-Concurrent-To\":\"<urn:uuid:406e495b-5232-492d-bdb0-aff274e3ae55>\",\"WARC-IP-Address\":\"184.25.188.167\",\"WARC-Target-URI\":\"https://it.mathworks.com/matlabcentral/answers/730713-output-prediction-problem-for-ann\",\"WARC-Payload-Digest\":\"sha1:IHQE5LWNFYJU5OQVK7D67LIDJY3CWI3Z\",\"WARC-Block-Digest\":\"sha1:7QWXNEIMBX57EHEHR53O5CH4LWYNUZHD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320304572.73_warc_CC-MAIN-20220124155118-20220124185118-00107.warc.gz\"}"} |
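The thresholding step discussed above, mapping logsig outputs in (0, 1) to hard 0/1 labels at 0.5, is the same in any language; a small Python sketch of that post-processing, with `logsig` as the standard logistic sigmoid (the MATLAB network itself is not reproduced here, and the answer's suggestion of a classification step is the cleaner fix inside MATLAB):

```python
import math

def logsig(x):
    # Standard logistic sigmoid, the same curve as MATLAB's logsig transfer function.
    return 1.0 / (1.0 + math.exp(-x))

def to_signal(output, threshold=0.5):
    # Hard 0/1 direction label from a sigmoid activation.
    return 1 if output > threshold else 0

labels = [to_signal(logsig(a)) for a in (-2.0, -0.1, 0.1, 2.0)]  # -> [0, 0, 1, 1]
```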
http://essayswriting.info/tag/true/ | [
"## Which Of The Following Statement Is True.\n\nWhich Of The Following Statement Is True.. Cth hospital wants to help people quit smoking. Which of the following statements is true?",
null,
"Which of the following is a true statement about functions ? from brainly.com\n\nBoth cognitive and emotional measure of user satisfaction are best inferred by. Pickling creates an object from a sequence of bytes. The answer is b because other statements are not true.\n\n## Which Statement Is True Of Atoms\n\nWhich Statement Is True Of Atoms. D.)electrons determine the atom's size. What is true for all atoms?",
null,
"Which of the following statements is true of hydrogen atom from doubtnut.com\n\nAn atom is electrically neutral because the number of protons. Which statement is true of atoms? The nucleus of an atom (contains protons and neutrons) remains unchanged after ordinary.\n\n## Which Of The Following Is Not True Of A Codon\n\nWhich Of The Following Is Not True Of A Codon. Which of the following is not a stop codon? It may code for the same amino acid as another codon.c.",
null,
"3 Which of the following is not true of a codon a It consists of three from www.coursehero.com\n\n(c) it can be either in dna or in rna (d) it. It may code for the same amino acid as another codon. Which of the following is not true of a codon?\n\n## Which Of The Following Statements About Ph Is True\n\nWhich Of The Following Statements About Ph Is True. The ph of a 1 mm solution of hcl is 3 b. Carbon dioxide is a buffer that helps the blood ph remain around 7.4.",
null,
"Solved 25. Which One Of The Following Statements Is True from www.chegg.com\n\nBuffers are weak acids and bases that are used biologically. An increase in the h+ concentration leads to a decrease in the ph. A ph of 7.4 and venous blood has a ph of 7.37.c.\n\n## Which Statement Is True About The Graphed Function\n\nWhich Statement Is True About The Graphed Function. F (x) = 8x 3, g (x) = \\. Which statements are true about the ordered pair (10, 5) and the system of equations?",
null,
"which statement is true about the graphed function from brainly.com\n\nSo point is the solution to the system of linear equations. From the graph, and are two linear or straight functions. Which statement is true regarding the graphed functions?\n\n## Which Is A True Statement About A 45-45-90 Triangle\n\nWhich Is A True Statement About A 45-45-90 Triangle. It is also considered an isosceles triangle since it has two congruent sides. Find the value of the variable.",
null,
"Which statement is necessarily true if BD is an attitude to the from brainly.com\n\nSolution (1) δcbd is a right triangle //given (2) m∠bdc =. Which statement is not true about a. Leg = hypotenuse ÷ √2.\n\n## Which Is True About The Dissolving Process In Water\n\nWhich Is True About The Dissolving Process In Water. The process of dissolving an acid or a base in water is a highly exothermic one. The first step is for the solvent particles to move so that.",
null,
"Diffusion Cells The Living Cell THE LIVING WORLD from schoolbag.info\n\nThe solvent particles are all nonpolar molecules. In a dissolution process, energy is required to overcome the forces of attraction between the solvent particles. Separate particles of the solvent from each.\n\n## Which Of The Following Statements About Surfactants Is Not True\n\nWhich Of The Following Statements About Surfactants Is Not True. It states that all the above statements are false while in truth they are true and hence it ends up being the false statement. Which of the following statements about surfactants is not true?",
null,
"PPT Which structure is part of the lower respiratory system from www.slideserve.com\n\nPin on 100 verified roll up mechanism credit scienceinthebox credit repair things to sell repair. Contains a hydrophilic part o c. Get free solutions to all questions from chapter question paper.\n\n## Which Statement Is Generally True About Cacl2\n\nWhich Statement Is Generally True About Cacl2. It has only covalent bonds between the atoms. The chemical formula of calcium chloride can be given as cacl2.",
null,
"Solved Which Of The Following Lewis Structures Correctly from www.chegg.com\n\nIt has only metallic bonds between the atoms. It has only metallic bonds between the atoms. The chemical formula of calcium chloride can be given as cacl2."
]
| [
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null,
"https://i2.wp.com/media.cheggcdn.com/study/7a7/7a7ac9e7-cf7b-4748-8329-e883d4af5470/image",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9445522,"math_prob":0.67414427,"size":321,"snap":"2022-40-2023-06","text_gpt3_token_len":61,"char_repetition_ratio":0.119873814,"word_repetition_ratio":0.0,"special_character_ratio":0.18380062,"punctuation_ratio":0.11666667,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9589013,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-03T18:42:23Z\",\"WARC-Record-ID\":\"<urn:uuid:c0f68fe2-40c7-4dcd-ad9b-1da51708761f>\",\"Content-Length\":\"96918\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:804640e7-c6b4-41a9-b6b4-4333c7207281>\",\"WARC-Concurrent-To\":\"<urn:uuid:84481e8d-bb49-42fa-a7a4-515df873ae9a>\",\"WARC-IP-Address\":\"104.21.48.234\",\"WARC-Target-URI\":\"http://essayswriting.info/tag/true/\",\"WARC-Payload-Digest\":\"sha1:LOPLCDPGYIPTGG4EX2QT3PFJOZWJHMQN\",\"WARC-Block-Digest\":\"sha1:SKJKI6DPQEXFY7WVOWRL6MGJIZM46D2C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337428.0_warc_CC-MAIN-20221003164901-20221003194901-00164.warc.gz\"}"} |
http://cpr-mathph.blogspot.com/2012/04/12044567-mehmet-koca-et-al.html | [
"Radii of the E8 Gosset Circles as the Mass Excitations in the Ising Model [PDF]\n\nMehmet Koca, Nazife Ozdes Koca\nZamolodchikov's conjecture implying the exceptional Lie group E8 seems to be validated by an experiment on the quantum phase transitions of the 1D Ising model carried out by Coldea et al. The E8 model which follows from the affine Toda field theory predicts 8 bound states with the mass relations in the increasing order m1, m2= tau m1, m3, m4, m5, m6=tau m3, m7= tau m4, m8= tau m5, where tau= (1+\\sqrt(5))/2 represents the golden ratio. The above relations follow from the fact that the Coxeter group W(H4) is a maximal subgroup of the Coxeter-Weyl group W(E8). These masses turn out to be proportional to the radii of Gosset's circles on the Coxeter plane obtained by an orthogonal projection of the root system of E8. We also note that the masses m1, m3, m4 and m5 correspond to the radii of the circles obtained by projecting the vertices of the 600-cell, a 4D polytope of the non-crystallographic Coxeter group W(H4). A special non-orthogonal projection of the simple roots on the Coxeter plane leads to exactly the numerical values of the masses of the bound states as 0.4745, 0.7678, 0.9438, 1.141, 1.403, 1.527, 1.846, and 2.270. We note the striking equality of the first two numerical values to the first two masses of the bound states determined by Coldea et al.\nView original: http://arxiv.org/abs/1204.4567"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.84924096,"math_prob":0.9827837,"size":1394,"snap":"2019-43-2019-47","text_gpt3_token_len":391,"char_repetition_ratio":0.13381295,"word_repetition_ratio":0.025751073,"special_character_ratio":0.2718795,"punctuation_ratio":0.13621262,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9944459,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-23T00:27:33Z\",\"WARC-Record-ID\":\"<urn:uuid:2f4f1912-63d4-47d2-8598-c23c764d61e9>\",\"Content-Length\":\"202758\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:08ae25ff-851f-45b2-a82b-c4ab7423a832>\",\"WARC-Concurrent-To\":\"<urn:uuid:f1483793-ad29-4cb6-9761-14631c2f3abc>\",\"WARC-IP-Address\":\"172.217.164.161\",\"WARC-Target-URI\":\"http://cpr-mathph.blogspot.com/2012/04/12044567-mehmet-koca-et-al.html\",\"WARC-Payload-Digest\":\"sha1:OZLKKF5ZIZYFZBUDIJDPTAW5XLUFLYV3\",\"WARC-Block-Digest\":\"sha1:IBNBD6PHSGSIA2GRYQ6JT4KFDDPMIQSG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987826436.88_warc_CC-MAIN-20191022232751-20191023020251-00210.warc.gz\"}"} |
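The golden-ratio mass relations quoted in the abstract above (m2 = τ·m1, m6 = τ·m3, m7 = τ·m4, m8 = τ·m5) can be checked directly against the listed numerical values; a small consistency sketch:

```python
import math

tau = (1 + math.sqrt(5)) / 2  # golden ratio, ~1.6180
m = [0.4745, 0.7678, 0.9438, 1.141, 1.403, 1.527, 1.846, 2.270]  # m1..m8

# (quoted mass, golden-ratio prediction) for the four tau-related pairs
pairs = [(m[1], tau * m[0]),  # m2 = tau * m1
         (m[5], tau * m[2]),  # m6 = tau * m3
         (m[6], tau * m[3]),  # m7 = tau * m4
         (m[7], tau * m[4])]  # m8 = tau * m5
max_err = max(abs(quoted - predicted) for quoted, predicted in pairs)
```

With the four-significant-figure values quoted in the abstract, every pair agrees to better than 2×10⁻⁴.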
https://demonstrations.wolfram.com/EllipseAndFriends/ | [
"",
null,
"# Ellipse and Friends",
null,
"The table shows the equations in rectangular coordinates of an ellipse in standard position, as well as its two associated hyperbolas and their asymptotes.",
null,
"The asymptotes are a limiting case of both kinds of hyperbolas. For example, if",
null,
", then the up-down hyperbola has equation",
null,
", so that (",
null,
", and one or the other (or both)",
null,
",",
null,
", whose graphs are two straight lines.\n\nThe lemniscate is included for two reasons:\n\nThe lemniscate has a property similar to the ellipse. Let Q and R be two points, the foci. For a point P on the ellipse, |PQ| + |PR| =",
null,
", a constant. For a point P on the lemniscate, |PQ| × |PR| =",
null,
", a constant.\n\nAlso, the arc lengths (partial or full perimeters) of the ellipse and the lemniscate are related. The arc length of the ellipse is calculated using an incomplete elliptic integral of the second kind, while the arc length of the lemniscate is given by an elliptic integral of the first kind.\n\nContributed by: George Beck (March 2011)\nOpen content licensed under CC BY-NC-SA\n\n## Snapshots",
null,
"",
null,
"",
null,
"## Permanent Citation\n\nGeorge Beck"
]
| [
null,
"https://demonstrations.wolfram.com/app-files/assets/img/header-spikey2x.png",
null,
"https://demonstrations.wolfram.com/img/demonstrations-branding.png",
null,
"https://demonstrations.wolfram.com/EllipseAndFriends/img/desc1916688276916286432.png",
null,
"https://demonstrations.wolfram.com/EllipseAndFriends/img/desc4686248447910401992.png",
null,
"https://demonstrations.wolfram.com/EllipseAndFriends/img/desc8721161892375026734.png",
null,
"https://demonstrations.wolfram.com/EllipseAndFriends/img/desc8043126894983172726.png",
null,
"https://demonstrations.wolfram.com/EllipseAndFriends/img/desc2446414452525912854.png",
null,
"https://demonstrations.wolfram.com/EllipseAndFriends/img/desc8136178974010215754.png",
null,
"https://demonstrations.wolfram.com/EllipseAndFriends/img/desc4208185799162732530.png",
null,
"https://demonstrations.wolfram.com/EllipseAndFriends/img/desc1376165536860342628.png",
null,
"https://demonstrations.wolfram.com/EllipseAndFriends/img/popup_1.png",
null,
"https://demonstrations.wolfram.com/EllipseAndFriends/img/popup_2.png",
null,
"https://demonstrations.wolfram.com/EllipseAndFriends/img/popup_3.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.86176556,"math_prob":0.94041467,"size":1196,"snap":"2019-51-2020-05","text_gpt3_token_len":294,"char_repetition_ratio":0.12751678,"word_repetition_ratio":0.029411765,"special_character_ratio":0.22073579,"punctuation_ratio":0.118421055,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97295296,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,null,null,null,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-17T16:51:06Z\",\"WARC-Record-ID\":\"<urn:uuid:8576b4f4-e450-4602-8cd0-3a467d11cefd>\",\"Content-Length\":\"77463\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e553eb8b-7dac-4f3e-8a9e-422261287be8>\",\"WARC-Concurrent-To\":\"<urn:uuid:3455ef4a-a823-42b8-a8d5-73cb4659ba45>\",\"WARC-IP-Address\":\"140.177.205.90\",\"WARC-Target-URI\":\"https://demonstrations.wolfram.com/EllipseAndFriends/\",\"WARC-Payload-Digest\":\"sha1:IOKOEHXI45MJDWYCHZWOJPHW2ESXVTQE\",\"WARC-Block-Digest\":\"sha1:3NLTQD7P2I6VNORTHTM6HDDDFHRCX7DP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250589861.0_warc_CC-MAIN-20200117152059-20200117180059-00173.warc.gz\"}"} |
https://the-equivalent.com/what-is-equivalent-to-6-8-2/ | [
"# What is equivalent to 6 8\n\n3/4\n\n## Are 3/4 and 6/8 are equivalent?\n\nSo 3/4 is equivalent to 6/8.\n\n## What fraction is equivalent to?\n\nEquivalent Fractions ChartUnit FractionEquivalent Fractions1/32/6, 3/9, 4/12..1/42/8, 3/12, 4/16..1/52/10, 3/15, 4/20,..1/62/12, 3/18, 4/24,..4 more rows\n\n## Can you simplify 6 8?\n\nExplanation: The simplest form of 68 can’t be divided further with N numbers other than 1 .\n\n## What is the equivalent fraction of 8?\n\nEquivalent Fractions ChartFractionEquivalent Fractions1/82/1616/1283/86/1648/1285/810/1680/1287/814/16112/12818 more rows\n\n## What is equivalent calculator?\n\nEquivalent Expression Calculator is a free online tool that displays the equivalent expressions for the given algebraic expression. BYJU’S online equivalent expression calculator tool makes the calculations and simplification faster and it displays the equivalent expression in a fraction of seconds.\n\n## How do I make equivalent fractions?\n\nTo find equivalent fractions, we multiply the numerator and the denominator by the same number, so we need to multiply the denominator of 7 by a number that will give us 21. Since 3 multiplied by 7 gives us 21, we can find an equivalent fraction by multiplying both the numerator and denominator by 3.\n\n## How do you write 6 8 as a fraction?\n\n6/8 = 34 = 0.75 Spelled result in words is three quarters.\n\n## Which fraction is in simplest form?\n\nThe fraction is said to be in its simplest form, when the numerator (top) and denominator (bottom) of the fraction does not have any common factor. For example, the simplest form of 6/12 is ½.\n\n## What is the simplest form?\n\nSimplest Form. A fraction is in its simplest form when the numerator and the denominator have no common factors besides one.\n\n## What is a equivalent to?\n\n1 : equal in force, amount, or value also : equal in area or volume but not superposable a square equivalent to a triangle. 2a : like in signification or import. 
b : having logical equivalence equivalent statements.\n\n## What fraction is 3/8 equivalent to?\n\n6/16Decimal and Fraction Conversion ChartFractionEquivalent Fractions3/86/1624/645/810/1640/647/814/1656/641/92/188/7223 more rows\n\n## What fraction is 7/8 equivalent to?\n\n78 is 7 divided by 8, which equals 0.875. So an equivalent fraction is another fraction that also equals 0.875. To find this fraction, just take any number and multiply both the numerator and denominator by this number. 7×28×2 equals 1416 because it too equals 0.875.\n\n## What is the fraction 3/4 equivalent to?\n\nEquivalent fractions of 3/4 : 6/8 , 9/12 , 12/16 , 15/\n\n## What is 3/5 equal to as a fraction?\n\n6/10So, 3/5 = 6/10 = 9/15 = 12/20.\n\n## What fraction is 2/3 equivalent to?\n\nAn equivalent fraction of two-thirds (2/3) is sixteen twenty-fourths (16/24).\n\n## What fraction is 4/5 equivalent to?\n\n8/10Decimal and Fraction Conversion ChartFractionEquivalent Fractions4/58/1048/601/62/1212/725/610/1260/721/72/1412/8423 more rows\n\n## What is the equivalent of (3+7)+2?\n\nThe expression equivalent to (3+7)+2 is 12.\n\n## What is equivalent expression calculator?\n\nEquivalent Expression Calculator is a free online tool that displays the equivalent expressions for the given algebraic expression. BYJU’S online equivalent expression calculator tool makes the calculations and simplification faster and it displays the equivalent expression in a fraction of seconds.\n\n## What is the equivalent fraction of 2/3?\n\nFor example, if we multiply the numerator and denominator of 2/3 by 4 we get. 2/3 = 2×4 / 3×4 = 8/12 which is an equivalent fraction of 2/3.\n\n## How to convert decimals to fractions?\n\nIt does however require the understanding that each decimal place to the right of the decimal point represents a power of 10; the first decimal place being 10 1, the second 10 2, the third 10 3, and so on. 
Simply determine what power of 10 the decimal extends to , use that power of 10 as the denominator, enter each number to the right of the decimal point as the numerator, and simplify. For example, looking at the number 0.1234, the number 4 is in the fourth decimal place which constitutes 10 4, or 10,000. This would make the fraction#N#1234#N##N#10000#N#, which simplifies to#N#617#N##N#5000#N#, since the greatest common factor between the numerator and denominator is 2.\n\n## How to multiply fractions?\n\nJust multiply the numerators and denominators of each fraction in the problem by the product of the denominators of all the other fractions (not including its own respective denominator) in the problem.\n\n## What is the goal of 4/6/8 rule?\n\nFor those familiar with the older 4/6/8 rule, it and E/V both attempt to achieve the same goal, which is to be able to figure out how large a display is needed, to communicate effectively.\n\n## What is the equivalent visibility rule?\n\nThe Equivalent Visibility Rule, like the older, better known 4/6/8 rule, is very important in determining effective sizes for displays in rooms, and is more suitable for today’s smaller type and objects presenting including spreadsheets, Word docs, emails, or, for that matter, tens of thousands of pages of classroom content, already available.\n\n## What is the difference between a high school and a middle school?\n\nSchools and teaching is less about formal presenting, and more about interactivity, and working with all types of documents. In a high school, that might be charts of data in Excel, or reading an article off of the web – projected up on the display, or Word type documents. The point, today, in the classroom students see small type, small objects, probably more than they do any sort of formal “old school” presenting. 
In a middle school, it might be geometry, or algebra, or history.\n\n## Is full equivalent visibility achieved in a classroom?\n\nUnderstand upfront that achieving full equivalent visibility in a classroom is rarely achieved so that everyone is close enough to a large enough display to read everything in the screen.\n\n## How many values are in a percentage?\n\nAlthough the percentage formula can be written in different forms, it is essentially an algebraic equation involving three values.\n\n## How to find the percentage difference between two numbers?\n\nThe percentage difference between two values is calculated by dividing the absolute value of the difference between two numbers by the average of those two numbers. Multiplying the result by 100 will yield the solution in percent, rather than decimal form. Refer to the equation below for clarification.\n\n## How to calculate percentage increase and decrease?\n\nPercentage increase and decrease are calculated by computing the difference between two values and comparing that difference to the initial value. Mathematically, this involves using the absolute value of the difference between two values, and dividing the result by the initial value, essentially calculating how much the initial value has changed."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8959738,"math_prob":0.99442434,"size":6710,"snap":"2023-14-2023-23","text_gpt3_token_len":1607,"char_repetition_ratio":0.17521623,"word_repetition_ratio":0.09622642,"special_character_ratio":0.25707898,"punctuation_ratio":0.11831198,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99708486,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-31T05:51:18Z\",\"WARC-Record-ID\":\"<urn:uuid:19e9148b-6031-452d-a004-7792145cfb82>\",\"Content-Length\":\"68435\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7885395e-724f-4ed6-987f-4485d3f15627>\",\"WARC-Concurrent-To\":\"<urn:uuid:451ab35b-d174-4eaf-9112-4b16832cea77>\",\"WARC-IP-Address\":\"207.244.242.67\",\"WARC-Target-URI\":\"https://the-equivalent.com/what-is-equivalent-to-6-8-2/\",\"WARC-Payload-Digest\":\"sha1:MMWNEQLPI7TYRFCQ3S4KU3T5NMTCWGP5\",\"WARC-Block-Digest\":\"sha1:KB2I6EZ7WW466VNJQ47MJZ5R2U6YCAHC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296949573.84_warc_CC-MAIN-20230331051439-20230331081439-00248.warc.gz\"}"} |
https://analystnotes.com/cfa_question.php?p=ML09Z6JKU | [
"### CFA Practice Question\n\nThere are 923 practice questions for this topic.\n\n### CFA Practice Question\n\nA job paid \\$8,700 in 1970, when the CPI was 29. In 2011, the CPI was 164. How much would you have to earn in 2011 to be making the same real wage?\nA. \\$58,000\nB. \\$164,000\nC. Neither of the above answers is correct.\nExplanation: In real terms, the 1970 wage = 8,700/29 = 300. Thus, 300*164= \\$49,200, the equivalent wage in 2011."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.86611164,"math_prob":0.75369936,"size":825,"snap":"2023-14-2023-23","text_gpt3_token_len":282,"char_repetition_ratio":0.093788065,"word_repetition_ratio":0.0,"special_character_ratio":0.40363637,"punctuation_ratio":0.17582418,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9929207,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-04-02T09:14:28Z\",\"WARC-Record-ID\":\"<urn:uuid:fa4313ae-6ed2-4645-945c-b99fd90e6fec>\",\"Content-Length\":\"19785\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:489a5fa9-ccb7-4736-bff3-024b59f97846>\",\"WARC-Concurrent-To\":\"<urn:uuid:accf6cd8-dc9b-4c33-9455-ba3f5abdebc6>\",\"WARC-IP-Address\":\"104.238.96.50\",\"WARC-Target-URI\":\"https://analystnotes.com/cfa_question.php?p=ML09Z6JKU\",\"WARC-Payload-Digest\":\"sha1:YB6E5XHAKG2J7UY4KUEBMXZDMKOYSZEG\",\"WARC-Block-Digest\":\"sha1:GHNV6EXZGLHQEY3NROWK6QJF2TCGW2FF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296950422.77_warc_CC-MAIN-20230402074255-20230402104255-00546.warc.gz\"}"} |
https://unizor.blogspot.com/2014/11/unizor-probability-normal-distribution_21.html | [
"## Friday, November 21, 2014\n\n### Unizor - Probability - Normal Distribution - Sigma Limits\n\nAs we know, the normal distribution of a random variable (or the distribution of probabilities for a normal random variable) is defined by two parameters:\nexpectation (or mean) μ and\nstandard deviation σ.\n\nThe expectation defines the center of a bell curve that represents the distribution of probabilities.\nThe standard deviation defines the steepness of this curve around this center - smaller σ corresponds to a steeper central part of a curve, which means that values around μ are more probable.\n\nOur task is to evaluate the probabilities of the normal variable to take values within certain interval around its mean μ based on the value of its standard deviation σ.\n\nConsider we have these two parameters, μ and σ, given and, based on their values, we have constructed a bell curve that represents the distribution of probabilities of a normal variable with these parameters. Let's choose some positive constant d and mark three points, A, M and B, on the X-axis with coordinates, correspondingly, μ−d, μ and μ+d.\nPoint M(μ) is at the center of our bell curve.\nPoints A(μ−d) and B(μ+d) are on both sides from a center on equal distance d from it.\n\nThe area under the entire bell curve equals to 1 and represents the probability of our normal random variable to take any value.\nThe area under the bell curve restricted by a point A on the left and a point B on the right represents the probability of our random variable to take value in the interval AB.\nWe have specifically chosen points A and B symmetrical relatively to a midpoint M because the bell curve has this symmetry.\n\nIt is obvious that the wider interval AB is - the greater the probability of our random variable to take a value within this interval. 
Since the area under the bell curve restricted by points A and B around the center M depends only on its width (defined by the d constant) and the steepness of a curve, let's measure the width using the same parameter that defines the steepness, the standard deviation σ. This will allow us to evaluate probabilities of a normal random variable to take values within certain interval based only on one parameter - its standard deviation σ.\n\nTraditionally, there are three different intervals around the mean value μ considered to evaluate the values of normal random variable:\nd=σ, d=2σ and d=3σ.\nLet's quantify them all.\n\n1. For a normal random variable with mean μ and standard deviation σ the probability of having a value in the interval [μ−σ, μ+σ] (the narrowest interval of these three) approximately equals to 0.6827.\n\n2. For a normal random variable with mean μ and standard deviation σ the probability of having a value in the interval [μ−2σ, μ+2σ] (the wider interval) approximately equals to 0.9545.\n\n3. For a normal random variable with mean μ and standard deviation σ the probability of having a value in the interval [μ−3σ, μ+3σ] (the widest interval of these three) approximately equals to 0.9973.\n\nAs you see, the value of a normal variable can be predicted with the greatest probability when choose the widest interval of the three mentioned - the 3σ-interval around its mean. The value will fall into this interval with a very high probability.\n\nNarrower 2σ-interval still maintains a relatively high probability to have a value of our random variable fallen into it.\n\nThe narrowest σ-interval has this probability not much higher than 0.5, which makes the prediction for the value of our random variable to fall into it not very reliable."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.89209044,"math_prob":0.99433017,"size":3611,"snap":"2021-31-2021-39","text_gpt3_token_len":780,"char_repetition_ratio":0.16634323,"word_repetition_ratio":0.14983714,"special_character_ratio":0.21102187,"punctuation_ratio":0.08236994,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9982092,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-26T01:10:40Z\",\"WARC-Record-ID\":\"<urn:uuid:1f6e1a1b-a342-46f0-b689-6fe49ff7957b>\",\"Content-Length\":\"60553\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d4f3ffbf-6877-4b37-b549-4137b3129798>\",\"WARC-Concurrent-To\":\"<urn:uuid:1ccd1200-e57c-4a45-8832-47033e4b5056>\",\"WARC-IP-Address\":\"172.217.1.193\",\"WARC-Target-URI\":\"https://unizor.blogspot.com/2014/11/unizor-probability-normal-distribution_21.html\",\"WARC-Payload-Digest\":\"sha1:AWFRBOXXGF3U7XM7QABQRZS2Q37EZK5L\",\"WARC-Block-Digest\":\"sha1:QHD3IC5L7WRXD5XYVB4KELZ3KC4MEKBO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057787.63_warc_CC-MAIN-20210925232725-20210926022725-00325.warc.gz\"}"} |
https://math.stackexchange.com/questions/tagged/roots-of-unity?sort=active&amp;pageSize=50 | [
"# Questions tagged [roots-of-unity]\n\nnumbers $z$ such that $z^n=1$ for some natural number $n$; here usually $z$ is in $\\mathbb C$ or some other field\n\n649 questions\n0answers\n30 views\n\n1answer\n47 views\n\n### If $A^{2016} = I_n$, show that $A^{576} - A^{288} + I_n$ is invertible, and calculate it's inverse in terms of $A$.\n\nLet $A$ be a real valued $n \\times n$ matrix,where $n \\geq 2$, such that $A^{2016} =I_n.$ Show that the matrix $B = A^{576} - A^{288} + I_n$ is invertible, and calculate it's inverse in terms of $A$. ...\n2answers\n46 views\n\n1answer\n45 views\n\n### Another Roots of Unity Sum\n\nI almost see a brute-force attack on this problem, but before messing with the details I wonder there is some theory here, or at least a nice way to group the terms so I can see the cancellation. Let ...\n1answer\n28 views\n\n### Direct product decomposition of the group of complex roots of unity\n\nI'm studying $p$-adic numbers (Robert's \"A course in $p$-adic analysis) and, at page 41, the author states that, for every prime $p$, the group $\\mu$ of all complex roots of unity has a direct product ...\n2answers\n39 views\n\n### $a_i$ are the n-th roots of $1\\in\\mathbb{C}$, why does $(1-a_2)\\cdot…\\cdot(1-a_n)=n$?\n\nFor $1<i\\leq n$, let $a_i$ be the n-th roots of $1\\in\\mathbb{C}$, why does $(1-a_2)\\cdot...\\cdot(1-a_n)=n$?\n1answer\n96 views\n\n### Find a number field whose unit group is isomorphic to $\\mathbb{Z}/4\\mathbb{Z} \\times \\mathbb{Z}$\n\nFind a number field whose unit group is isomorphic to $\\mathbb{Z}/4\\mathbb{Z} \\times \\mathbb{Z}.$ I'm trying to use Dirichlet's Unit Theorem to solve this problem. It states that if $K$ is a number ...\n0answers\n31 views\n\n2answers\n79 views\n\n1answer\n27 views\n\n### Roots of sparse “quadratic-like” polynomial.\n\nSo I know about this question and I've seen papers like this and this. 
But the former isn't exactly what I want and the latter two papers are too deep and I'm lazy and I wanna quick-and-easy answer ..."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7774866,"math_prob":0.9973116,"size":15203,"snap":"2019-13-2019-22","text_gpt3_token_len":5402,"char_repetition_ratio":0.15856306,"word_repetition_ratio":0.0896072,"special_character_ratio":0.36854568,"punctuation_ratio":0.12536529,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998785,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-24T21:05:24Z\",\"WARC-Record-ID\":\"<urn:uuid:c21f32cc-1009-4c35-8c52-521cc0cdbbd4>\",\"Content-Length\":\"231787\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e142c40e-a727-4673-9b01-eba830ba92e6>\",\"WARC-Concurrent-To\":\"<urn:uuid:ded07571-dce8-4056-b861-63ae2bec735c>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/tagged/roots-of-unity?sort=active&amp;pageSize=50\",\"WARC-Payload-Digest\":\"sha1:WI3FKGFOVG2SFS6ABAV2754OPEUTSREW\",\"WARC-Block-Digest\":\"sha1:4MNKNX756COOPJBCGJXORHOZM6FET5XH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232257767.49_warc_CC-MAIN-20190524204559-20190524230559-00175.warc.gz\"}"} |
https://www.convertunits.com/from/lb/to/gram | [
"## ››Convert pound to gram\n\n lb gram\n\nHow many lb in 1 gram? The answer is 0.0022046226218488.\nWe assume you are converting between pound and gram.\nYou can view more details on each measurement unit:\nlb or gram\nThe SI base unit for mass is the kilogram.\n1 kilogram is equal to 2.2046226218488 lb, or 1000 gram.\nNote that rounding errors may occur, so always check the results.\nUse this page to learn how to convert between pounds and grams.\nType in your own numbers in the form to convert the units!\n\n## ››Quick conversion chart of lb to gram\n\n1 lb to gram = 453.59237 gram\n\n2 lb to gram = 907.18474 gram\n\n3 lb to gram = 1360.77711 gram\n\n4 lb to gram = 1814.36948 gram\n\n5 lb to gram = 2267.96185 gram\n\n6 lb to gram = 2721.55422 gram\n\n7 lb to gram = 3175.14659 gram\n\n8 lb to gram = 3628.73896 gram\n\n9 lb to gram = 4082.33133 gram\n\n10 lb to gram = 4535.9237 gram\n\n## ››Want other units?\n\nYou can do the reverse unit conversion from gram to lb, or enter any two units below:\n\n## Enter two units to convert\n\n From: To:\n\n## ››Definition: Pound\n\nThe pound (abbreviation: lb) is a unit of mass or weight in a number of different systems, including English units, Imperial units, and United States customary units. Its size can vary from system to system. The most commonly used pound today is the international avoirdupois pound. The international avoirdupois pound is equal to exactly 453.59237 grams. The definition of the international pound was agreed by the United States and countries of the Commonwealth of Nations in 1958. In the United Kingdom, the use of the international pound was implemented in the Weights and Measures Act 1963. An avoirdupois pound is equal to 16 avoirdupois ounces and to exactly 7,000 grains.\n\n## ››Definition: Gram\n\na metric unit of weight equal to one thousandth of a kilogram\n\n## ››Metric conversions and more\n\nConvertUnits.com provides an online conversion calculator for all types of measurement units. 
You can find metric conversion tables for SI units, as well as English units, currency, and other data. Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3\", 10 stone 4, cubic cm, metres squared, grams, moles, feet per second, and many more!"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8763456,"math_prob":0.96875715,"size":2185,"snap":"2019-51-2020-05","text_gpt3_token_len":592,"char_repetition_ratio":0.14947271,"word_repetition_ratio":0.01010101,"special_character_ratio":0.2938215,"punctuation_ratio":0.13646056,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9585991,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-27T21:55:31Z\",\"WARC-Record-ID\":\"<urn:uuid:7487b836-1141-4e46-9bf9-b4ba20466cd8>\",\"Content-Length\":\"24540\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1703ff63-500b-49f7-935e-80130a07916b>\",\"WARC-Concurrent-To\":\"<urn:uuid:6104f7ca-a053-489c-8083-a98c97aed719>\",\"WARC-IP-Address\":\"54.175.245.234\",\"WARC-Target-URI\":\"https://www.convertunits.com/from/lb/to/gram\",\"WARC-Payload-Digest\":\"sha1:N3S5IFPTNFKSOMFEBEABN6IVSMRBWML2\",\"WARC-Block-Digest\":\"sha1:STKF4GB4D6DN2R4KSX6R64IG6HLB5GCK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251728207.68_warc_CC-MAIN-20200127205148-20200127235148-00174.warc.gz\"}"} |
https://www.pdf-archive.com/2017/02/11/chemistry-full-notes/ | [
"# chemistry full notes .pdf\n\n### File information\n\nOriginal filename: chemistry full notes.pdf\n\nThis PDF 1.7 document has been generated by PDFsam Basic v3.0.2.RELEASE / SAMBox 1.0.0.M23 (www.sejda.org), and has been sent on pdf-archive.com on 11/02/2017 at 19:44, from IP address 178.153.x.x. The current document download page has been viewed 3507 times.\nFile size: 3 MB (184 pages).\nPrivacy: public file\n\nchemistry full notes.pdf (PDF, 3 MB)\n\n### Document preview\n\n1. Atomic Structure and Periodic Table\nDetails of the three Sub-atomic (fundamental) Particles\nParticle\n\nPosition\n\nRelative Mass\n\nRelative Charge\n\nProton\nNeutron\nElectron\n\nNucleus\nNucleus\nOrbitals\n\n1\n1\n1/1840\n\n+1\n0\n-1\n\nThere are various\nmodels for atomic\nstructure\n\nAn atom of Lithium (Li) can be represented as follows:\n\nMass Number\n\n7\n3\n\nAtomic Number\n\nLi\n\nAtomic Symbol\n\nThe atomic number, Z, is the number of protons in the nucleus.\nThe mass number ,A, is the total number of protons and neutrons in the atom.\n\nNumber of neutrons = A - Z\n\nIsotopes\n\nIsotopes are atoms with the same number of protons, but different numbers of neutrons.\n\nDEFINITION: Relative isotopic mass is the mass of one atom of an isotope\ncompared to one twelfth of the mass of one atom of carbon-12\n\nIsotopes have similar chemical properties because they have the same electronic structure.\nThey may have slightly varying physical properties because they have different masses.\nDEFINITION: Relative atomic mass is the average mass of one atom\ncompared to one twelfth of the mass of one atom of carbon-12\nDEFINITION: Relative molecular mass is the average mass of a molecule\ncompared to one twelfth of the mass of one atom of carbon-12\n\nTHE MASS SPECTROMETER\nThe mass spectrometer can be used to determine all the isotopes present in a sample of an element and\nto therefore identify elements.\n\nCalculating relative atomic mass\nThe relative atomic mass quoted on the periodic 
table is a weighted average of all the isotopes\nFig: spectra for\nMagnesium from mass\nspectrometer\n\n100\n\n% abundance\n\n80\n\n78.70%\n\n60\n\nFor each isotope the mass\nspectrometer can measure a m/z\n(mass/charge ratio) and an abundance\n\n24Mg+\n\n40\n25Mg+\n\n10.13%\n\n20\n24\n\n25\n\nN Goalby\n\n26Mg+\n\n11.17%\n26\n\nm/z\n\nchemrevise.org\n\nIf asked to give the species for a peak\nin a mass spectrum then give charge\nand mass number e.g. 24Mg+\n\n1\n\nSometimes two electrons may be\nremoved from a particle forming a 2+\nion. 24Mg2+ with a 2+ charge would\nhave a m/z of 12\n\nR.A.M = (isotopic mass x % abundance)\n100\nFor above example of Mg\nR.A.M = [(78.7 x 24) + (10.13 x 25) + (11.17 x 26)] /100 = 24.3\n\nUse these equations to\nwork out the R.A.M\n\nR.A.M = (isotopic mass x relative abundance)\n\nIf relative abundance is used instead of\npercentage abundance use this equation\n\ntotal relative abundance\n\nMass spectra for Cl2 and Br2\nCl has two isotopes Cl35 (75%) and Cl37(25%)\n\nBr has two isotopes Br79 (50%) and Br81(50%)\n\nThese lead to the following spectra caused by the diatomic molecules\nCl35Cl35 +\nrelative\nabundance\n\nrelative\nabundance\n\nCl35Cl37 +\n\nBr79Br81 +\nBr81Br79 +\n\nBr79Br79 +\n\nBr81Br81 +\n\nCl37Cl37 +\n70\n\n72\n\n74\n\nm/z\n\nMeasuring the Mr of a molecule\n\n158\n\n160\n\nm/z\n\n162\n\nSpectra for C4H10\n\nIf a molecule is put through a mass spectrometer it\nwill often break up and give a series of peaks caused\nby the fragments. The peak with the largest m/z,\nhowever, will be due to the complete molecule and\nwill be equal to the Mr of the molecule. This peak is\ncalled the parent ion or molecular ion\n\nMass spectrum for butane\n43\n\nMolecular ion\nC4H10+\n\n29\n58\n\nUses of Mass spectrometers\n\nMass spectrometers have been included in planetary space probes so that elements on other\nplanets can be identified. 
Elements on other planets can have a different composition of\nisotopes.\nDrug testing in sport to identify chemicals in the blood and to identify breakdown products\nfrom drugs in body\nquality control in pharmaceutical industry and to identify molecules from sample with\npotential biological activity\nradioactive dating to determine age of fossils or human remains\n\nN Goalby\n\nchemrevise.org\n\n2\n\nIonisation Energies\nDefinition :First ionisation energy\nThe first ionisation energy is the energy required when one mole of gaseous\natoms forms one mole of gaseous ions with a single positive charge\n\nH(g) \n\nThis is represented by the equation:\n\nH+\n\n(g)\n\n+\n\ne-\n\nAlways gaseous\n\nDefinition :Second ionisation energy\n\nRemember these\ndefinitions very carefully\nThe equation for 1st ionisation\nenergy always follows the same\npattern.\nIt does not matter if the atom does\nnot normally form a +1 ion or is not\ngaseous\n\nThe second ionisation energy is the energy required when one mole of\ngaseous ions with a single positive charge forms one mole of gaseous\nions with a double positive charge\n\nTi+ (g) \n\nThis is represented by the equation:\n\nTi2+(g) + e-\n\nFactors that affect Ionisation energy\nThere are three main factors\n1.The attraction of the nucleus\n(The more protons in the nucleus the greater the attraction)\n2. The distance of the electrons from the nucleus\n(The bigger the atom the further the outer electrons are from the nucleus and the\nweaker the attraction to the nucleus)\n3. 
Shielding of the attraction of the nucleus\n(An electron in an outer shell is repelled by electrons in complete inner shells,\nweakening the attraction of the nucleus)\n\nMany questions can be\nof these factors\n\nSuccessive ionisation energies\nThe patterns in successive ionisation energies for an element give us important\ninformation about the electronic structure for that element.\nWhy are successive ionisation energies always larger?\nThe second ionisation energy of an element is always bigger than the first ionisation energy.\nWhen the first electron is removed a positive ion is formed.\nThe ion increases the attraction on the remaining electrons and so the energy required to\nremove the next electron is larger.\nHow are ionisation energies linked to electronic structure?\nIonisation\nenergy\n\nNotice the big\njump between 4\nand 5.\n1\n\n2\n3\n4\n5\nNo of electrons removed\n\n6\n\nExample: What group must this element be in?\n\nIonisation\nenergy kJ mol-1\n\n1\n\n2\n\n3\n\n4\n\n5\n\n590\n\n1150\n\n4940\n\n6480\n\n8120\n\nN Goalby\n\nExplanation\nThe fifth electron is in a inner\nshell closer to the nucleus and\ntherefore attracted much more\nstrongly by the nucleus than the\nfourth electron.\nIt also does not have any\nshielding by inner complete shells\nof electron\n\nHere there is a big jump between the 2nd and 3rd\nionisations energies which means that this\nelement must be in group 2 of the periodic table\nas the 3rd electron is removed from an electron\nshell closer to the nucleus with less shielding and\nso has a larger ionisation energy\n\nchemrevise.org\n\n3\n\nIonisation energy kJ mol-1\n\nThe first Ionisation energy of the elements\nThe shape of the graph for periods two and\nthree is similar. A repeating pattern across a\nperiod is called periodicity.\n\n2000\n1500\n\nThe pattern in the first ionisation energy\nelectronic structure\n\n1000\n\n500\n\nYou need to carefully learn the\npatterns\n\n0\n5\n\n10\n\nAtomic number\n\n15\n\n20\n\nQ. 
Why does helium have the largest first ionisation energy?
A. Its first electron is in the first shell, closest to the nucleus, and has no shielding effects from inner shells. He has a bigger first ionisation energy than H as it has one more proton.

Q. Why do first ionisation energies decrease down a group?
A. As one goes down a group, the outer electrons are found in shells further from the nucleus and are more shielded, so the attraction of the nucleus becomes smaller.

Many questions can be answered using the 3 factors that control ionisation energy.

Q. Why is there a general increase in first ionisation energy across a period?
A. As one goes across a period, the number of protons increases, making the effective attraction of the nucleus greater. The electrons are being added to the same shell, which has the same shielding effect, and the electrons are pulled in closer to the nucleus.

Q. Why does Na have a much lower first ionisation energy than neon?
A. Na has its outer electron in a 3s shell, further from the nucleus and more shielded. So Na's outer electron is easier to remove and has a lower ionisation energy.

Q. Why is there a small drop from Mg to Al?
A. Al is starting to fill a 3p sub shell, whereas Mg has its outer electrons in the 3s sub shell. The electrons in the 3p sub shell are slightly easier to remove because the 3p electrons are higher in energy and are also slightly shielded by the 3s electrons.

Q.
Why is there a small drop from P to S?
A. With sulphur there are 4 electrons in the 3p sub shell, and the 4th is starting to doubly fill the first 3p orbital. When the second electron is added to a 3p orbital, there is a slight repulsion between the two negatively charged electrons, which makes the second electron easier to remove.

phosphorus: 1s2 2s2 2p6 3s2 3p3 (each 3p orbital singly occupied)
sulphur: 1s2 2s2 2p6 3s2 3p4 (two electrons of opposite spin in the same 3p orbital)

Learn carefully the explanations for these two small drops, as they are different to the usual factors.

Electronic Structure

Models of the atom
An early model of the atom was the Bohr model (the GCSE model: 2 electrons in the first shell, 8 in the second, etc.) with electrons in spherical orbits. Early models of atomic structure predicted that atoms and ions with noble gas electron arrangements should be stable.

The A-level model
Electrons are arranged in:
- Principal energy levels, numbered 1, 2, 3, 4... (1 is closest to the nucleus), which are split into
- Sub energy levels, labelled s, p, d and f (s holds up to 2 electrons, p up to 6, d up to 10, f up to 14), which are split into
- Orbitals, which hold up to 2 electrons of opposite spin

Principal level:   1      2         3             4
Sub-levels:        1s     2s, 2p    3s, 3p, 3d    4s, 4p, 4d, 4f

An atom fills up the sub shells in order of increasing energy (note 3d is higher in energy than 4s and so gets filled after the 4s):
1s 2s 2p 3s 3p 4s 3d 4p 5s 4d 5p

Shapes of orbitals
Orbitals represent the mathematical probabilities of finding an electron at any point within certain spatial distributions around the nucleus. Each orbital has its own approximate, three-dimensional shape. It is not possible to draw the shape of orbitals precisely.
- s sub-levels are spherical
- p sub-levels are shaped like dumbbells

Writing electronic structure using letters and numbers
For oxygen: 1s2 2s2 2p4. In this notation, the leading figure is the number
of the main energy level, the letter is the name of the type of sub-level, and the superscript is the number of electrons in the sub-level.

Using spin diagrams
In a spin diagram (e.g. for fluorine, 1s2 2s2 2p5), each box represents one orbital and each arrow is one electron; arrows pointing in opposite directions represent the different spins of the electrons in the orbital. When filling up sub-levels with several orbitals, fill each orbital singly before starting to pair up the electrons.

The periodic table is split into blocks. An s-block element is one whose outer electron is filling an s sub shell.

Electronic structure for ions
When a positive ion is formed, electrons are lost: Mg is 1s2 2s2 2p6 3s2, but Mg2+ is 1s2 2s2 2p6.
When a negative ion is formed, electrons are gained: O is 1s2 2s2 2p4, but O2- is 1s2 2s2 2p6.

PERIODICITY

Classification of elements in s, p, d blocks
Elements are classified as s, p or d block according to which orbitals the highest energy electrons are in.
Period 2 = Li, Be, B, C, N, O, F, Ne
Period 3 = Na, Mg, Al, Si, P, S, Cl, Ar

Atomic radii
Atomic radii decrease as you move from left to right across a period, because the increased number of protons creates more positive charge attraction for electrons which are in the same shell with similar shielding. Exactly the same trend occurs in period 2.

1st ionisation energy
There is a general trend across the period for the first ionisation energy to increase. This is due to the increasing number of protons, as the electrons are being added to the same shell.
There is a small drop between Mg and Al. Mg has its outer electrons in the 3s sub shell, whereas Al is starting to fill the 3p sub shell.
Al's electron is slightly easier to remove because the 3p electrons are higher in energy.
[Graph: first ionisation energy (kJ/mol) for Na to Ar.]
There is a small drop between phosphorus and sulphur. Sulphur's outer electron is being paired up with another electron in the same 3p orbital. When the second electron is added to an orbital, there is a slight repulsion between the two negatively charged electrons, which makes the second electron easier to remove.
Exactly the same trend occurs in period 2, with drops between Be & B and between N & O, for the same reasons: make sure to change 3s and 3p to 2s and 2p in the explanation!

Melting and boiling points
[Graph: melting and boiling points (K) for Na to Ar.]
Na, Mg, Al (metallic bonding): strong bonding, which gets stronger the more electrons there are in the outer shell that are released to the sea of electrons. A smaller positive centre also makes the bonding stronger. High energy is needed to break the bonds.
Si (macromolecular): many strong covalent bonds between atoms, and high energy is needed to break the covalent bonds, so very high mp + bp.
Cl2 (g), S8 (s), P4 (s) (simple molecular): weak London forces between molecules, so little energy is needed to break them, giving low mp + bp. S8 has a higher mp than P4 because it has more electrons (S8 = 128, P4 = 60), so it has stronger London forces between molecules.
Ar is monoatomic: weak London forces between atoms.
Similar trend in period 2: Li, Be metallic bonding (high mp); B, C macromolecular (very high mp); N2, O2 molecular (gases! low mp as small London forces); Ne monoatomic gas (very low mp).

2.
Redox

Oxidation is the process of electron loss: Zn → Zn2+ + 2e-. It involves an increase in oxidation number.
Reduction is the process of electron gain: Cl2 + 2e- → 2Cl-. It involves a decrease in oxidation number.

Rules for assigning oxidation numbers
1. All uncombined elements have an oxidation number of zero: e.g. Zn, Cl2, O2 and Ar all have oxidation numbers of zero.
2. The oxidation numbers of the elements in a compound add up to zero: in NaCl, Na = +1 and Cl = -1; sum = +1 - 1 = 0.
3. The oxidation number of a monoatomic ion is equal to the ionic charge: e.g. Zn2+ = +2, Cl- = -1.
4. In a polyatomic ion (e.g. CO32-), the sum of the individual oxidation numbers of the elements adds up to the charge on the ion: in CO32-, C = +4 and O = -2; sum = +4 + (3 x -2) = -2.
5. Several elements have invariable oxidation numbers in their common compounds:
   Group 1 metals = +1; Group 2 metals = +2; Al = +3
   H = +1 (except in metal hydrides, where it is -1, e.g. NaH)
   F = -1
   Cl, Br, I = -1, except in compounds with oxygen and fluorine
   O = -2, except in peroxides (H2O2), where it is -1, and in compounds with fluorine

We use these rules to identify the oxidation numbers of elements that have variable oxidation numbers.

What is the oxidation number of Fe in FeCl3?
Using rule 5, Cl has an O.N. of -1. Using rule 2, the O.N.s of the elements must add up to 0. (Note the oxidation number of Cl in CaCl2 is -1 and not -2, because there are two Cl's: always work out the oxidation number for one atom of the element.)
Fe must have an O.N.
of +3 in order to cancel out the 3 x -1 = -3 of the Cl's.

Naming using oxidation numbers
If an element can have various oxidation numbers, then the oxidation number of that element in a compound can be given by writing the number in Roman numerals:
FeCl2: iron(II) chloride; FeCl3: iron(III) chloride; MnO2: manganese(IV) oxide
In the IUPAC convention, the various forms of sulfur, nitrogen and chlorine compounds where oxygen is combined are all called sulfates, nitrates and chlorates, with the relevant oxidation number given in Roman numerals. If asked to name these compounds, remember to add the oxidation number:
NaClO: sodium chlorate(I); NaClO3: sodium chlorate(V)
K2SO4: potassium sulfate(VI); K2SO3: potassium sulfate(IV)
NaNO3: sodium nitrate(V); NaNO2: sodium nitrate(III)

Redox equations and half equations
Br2(aq) + 2I-(aq) → I2(aq) + 2Br-(aq)
Reduction half equation: Br2(aq) + 2e- → 2Br-(aq). Br has reduced as it has gained electrons.
Oxidation half equation: 2I-(aq) → I2(aq) + 2e-. I has oxidised as it has lost electrons.
A reduction half equation only shows the parts of a chemical equation involved in reduction; the electrons are on the left.
An oxidation half equation only shows the parts of a chemical equation involved in oxidation; the electrons are on the right.
The oxidising agent here is bromine water; it is an electron acceptor. The reducing agent is the iodide ion; it is an electron donor.
An oxidising agent (or oxidant) is the species that causes another element to oxidise. It is itself reduced in the reaction.
A reducing agent (or reductant) is the species that causes another element to reduce.
It is itself oxidised in the reaction.
Reducing agents are electron donors; oxidising agents are electron acceptors. When naming oxidising and reducing agents, always refer to the full name of the substance and not just the name of the element.

Redox reactions
Metals generally form ions by losing electrons, with an increase in oxidation number, to form positive ions: Zn → Zn2+ + 2e-
Non-metals generally react by gaining electrons, with a decrease in oxidation number, to form negative ions: Cl2 + 2e- → 2Cl-

Worked examples (oxidation-number changes in brackets):
4Li + O2 → 2Li2O: lithium is oxidising because its oxidation number is increasing (0 to +1); oxygen is reducing because its oxidation number is decreasing (0 to -2).
WO3 + 3H2 → W + 3H2O: tungsten is reducing (+6 to 0); hydrogen is oxidising (0 to +1).
2Sr(NO3)2 → 2SrO + 4NO2 + O2: nitrogen is reducing (+5 to +4); oxygen is oxidising (-2 to 0). Note that not all the oxygen atoms change oxidation number in this reaction.
2NH3 + NaClO → N2H4 + NaCl + H2O: chlorine is reducing (+1 to -1); nitrogen is oxidising (-3 to -2).

Redox reactions of metals and acid
ACID + METAL → SALT + HYDROGEN
2HCl + Mg → MgCl2 + H2: hydrogen is reducing because its oxidation number is decreasing from +1 to 0; magnesium is oxidising because its oxidation number is increasing from 0 to +2.
Fe + H2SO4 → FeSO4 + H2
Be able to write equations for the reactions of metals with hydrochloric acid and sulphuric acid.
Observations: these reactions will effervesce because H2 gas is evolved, and the metal will
dissolve.

Disproportionation
Disproportionation is the name for a reaction where an element in a single species simultaneously oxidises and reduces.
Cl2(aq) + H2O(l) → HClO(aq) + HCl(aq): chlorine is simultaneously reducing and oxidising, changing its oxidation number from 0 to -1 and from 0 to +1.
2Cu+ → Cu + Cu2+: copper(I) ions (+1), when reacting with sulphuric acid, will disproportionate to Cu2+ (+2) and Cu metal (0).
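Rules 2 and 4 above amount to solving for one unknown in a sum. As a rough sketch (the helper function and its name are illustrative, not part of these notes), the FeCl3 worked example can be checked in a few lines:

```python
# Sketch: applying rule 2/4 - oxidation numbers, weighted by atom counts,
# must sum to the overall charge - to find one unknown oxidation number.
# `known` maps each other element to (oxidation number, atom count).

def unknown_oxidation_number(known, unknown_count=1, charge=0):
    """Solve for the O.N. of the one element not listed in `known`."""
    total_known = sum(ox * n for ox, n in known.values())
    return (charge - total_known) / unknown_count

# Fe in FeCl3: Cl is -1 (rule 5), three Cl atoms, neutral compound.
print(unknown_oxidation_number({"Cl": (-1, 3)}))             # 3.0, so Fe = +3
# S in SO4^2-: O is -2 (rule 5), four O atoms, ion charge -2.
print(unknown_oxidation_number({"O": (-2, 4)}, charge=-2))   # 6.0, so S = +6
```

The second call reproduces the sulfate(VI) naming above: potassium sulfate(VI) contains sulfur in the +6 state.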
https://jackwestin.com/resources/mcat-content/periodic-motion/amplitude-frequency-phase
### Amplitude, frequency, phase

Topic: Periodic Motion

Periodic motion or harmonic motion is any motion that repeats at regular intervals. Simple harmonic motion is a sinusoidal function of time t.

Periodic motion is observed in a mass on a spring, a simple pendulum, molecular vibration, etc.
The maximum displacement from equilibrium is known as the amplitude (always positive, as only the magnitude is considered).

One complete repetition of the motion is called a cycle. The duration of each cycle is the period (T). The frequency (f) is the number of cycles per unit time:

f = 1/T

For example, if a newborn baby's heart beats at a frequency of 120 times a minute, its period (the interval between beats) is half a second, since 60/120 = 0.5 s. Large frequencies mean short periods.

Some motion is best characterized by the angular frequency (ω). The angular frequency refers to the angular displacement per unit time and is calculated from the frequency (f) with the equation:

ω = 2πf

The phase of the motion is the argument of the cosine function. As the phase varies with time, so do the value of the cosine function and the displacement of the wave. φ is called the phase angle or phase constant; it defines the position of the particle when t = 0.

Practice Questions

The mechanics of standing balance

MCAT Official Prep (AAMC)
- Physics Question Pack Passage 2 Question 7
- Physics Question Pack Passage 2 Question 9
- Physics Question Pack Passage 2 Question 10
- Physics Question Pack Passage 2 Question 11
- Physics Question Pack Passage 20 Question 113
- Practice Exam 4 C/P Section Passage 3 Question 15

Key Points

• Periodic motion (harmonic motion) repeats at regular intervals.
• Periodic motion can be described by amplitude, frequency and phase in a sinusoidal function.
• Frequency is the number of cycles per unit time and can be calculated as f = 1/T, where T is the period.

Key Terms

Amplitude: the distance between the rest position and the crest of the wave.
The intensity of a wave is proportional to the square of its amplitude.

Period: the duration of one cycle in a repeating event.

Frequency: the number of occurrences of a repeating event per unit of time.

Angular frequency (ω): ω = 2πf.

Phase: the argument of the cosine function describing the wave.

Periodic motion: any motion that repeats at regular intervals.
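The relations f = 1/T and ω = 2πf from the text can be sketched in a few lines, using the heartbeat example above (the function names are illustrative, not from the original page):

```python
# Sketch: the period-frequency and angular-frequency relations
# f = 1/T and omega = 2*pi*f applied to the heartbeat example.
import math

def period(frequency_hz):
    """Period T (seconds) of a repeating event with frequency f (Hz)."""
    return 1.0 / frequency_hz

def angular_frequency(frequency_hz):
    """Angular frequency omega (rad/s) from frequency f (Hz)."""
    return 2.0 * math.pi * frequency_hz

# A newborn's heart beating 120 times a minute is f = 120/60 = 2 Hz,
# so the period (the interval between beats) is 0.5 s.
f = 120 / 60
print(period(f))             # 0.5
print(angular_frequency(f))  # 12.566... (i.e. 4*pi rad/s)
```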
https://homework.cpm.org/category/CC/textbook/cc3/chapter/2/lesson/2.1.1/problem/2-10
### CC3 > Chapter 2 > Lesson 2.1.1 > Problem 2-10

For the following problem, define a variable and write an equation (use the 5-D Process if needed). Then solve the equation to solve the problem. Write your solution as a sentence.

A cable $84$ meters long is cut into two pieces so that one piece is $18$ meters longer than the other. Find the length of each piece of cable.

Remember that the 5-D Process has the following steps: Describe, Define, Do, Decide, and Declare.

Perform the Do and Decide steps. For the Do step, note that one piece is $18$ m longer than the other piece.

If you are not sure how to do the 5-D Process, choose among the videos at: 5-D Process Videos.
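One way the Define and Do steps could be expressed in code (a sketch, not part of the CPM materials; the helper name is my own): let x be the shorter piece, so the longer piece is x + 18 and x + (x + 18) = 84.

```python
# Sketch: solving x + (x + 18) = 84 for the two cable pieces.
# Rearranging gives x = (84 - 18) / 2.

def cable_pieces(total=84, difference=18):
    """Return (shorter, longer) piece lengths for the given totals."""
    shorter = (total - difference) / 2
    return shorter, shorter + difference

print(cable_pieces())  # (33.0, 51.0)
```

The Decide step checks the answer: 33 + 51 = 84 and 51 - 33 = 18, so both conditions hold.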
https://samuray-club.com/adenomi/fibonacci-retracement-levels-329566.php
# Fibonacci retracement levels

Related reading:

• How to Use Fibonacci Retracement with Support and Resistance - samuray-club.com
• Fibonacci Retracements [ChartSchool]
• What Are Fibonacci Retracements and Fibonacci Ratios?

The author has provided education to individual traders and investors for over 20 years. Stock prices will often form trends in one direction or another and then bounce back against those trends.

Moves in a trending direction are called impulses, and moves against a trend are called pullbacks.

Fibonacci retracement levels highlight areas where a pullback can reverse and head back in the trending direction. This makes them a useful tool for investors to use to confirm trend-trading entry points.

## Origins of Fibonacci Levels

Fibonacci levels are derived from a number series that Italian mathematician Leonardo of Pisa—also known as Fibonacci—introduced to the west during the 13th century. Each new number is the sum of the two numbers before it. As the sequence progresses, each number is approximately 61.8% of the next number, approximately 38.2% of the number after that, and approximately 23.6% of the number after that. Subtracting 23.6 from 100 gives 76.4. These four numbers are the Fibonacci retracement levels: 76.4, 61.8, 38.2, and 23.6.

## Fibonacci Retracements

(Illustration: Ricardo Avila.) Fibonacci's sequence is often represented as a spiral.

### The Relevance of the Sequence

What Fibonacci and scholars before him discovered is that this sequence is prevalent in nature, in spiral shapes such as seashells, flowers, and even constellations. As a spiral grows outward, it does so at roughly the same rate as the percentages derived from the Fibonacci ratios.
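The convergence of consecutive-term ratios toward 61.8% can be checked with a short script (illustrative only; not from the original article):

```python
# Sketch: generate the Fibonacci sequence and show that the ratio of one
# term to the next settles near 0.618, the key retracement ratio.

def fibonacci(n):
    """Return the first n Fibonacci numbers, starting 1, 1."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq

seq = fibonacci(15)
print(seq[:10])                     # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
print(round(seq[-2] / seq[-1], 3))  # 0.618 - each term is ~61.8% of the next
```

Dividing a term by the one two places later, or three places later, gives the 38.2% and 23.6% ratios in the same way.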
Some believe these ratios extend beyond shapes in nature and actually predict human behavior.\n\n### Finding Fibonacci Retracement Levels\n\nThe thinking goes, essentially, that people start to become uncomfortable with trends that cause changes to happen too rapidly and adjust their behavior to slow or reverse the trend. Early or late in trends, when a price is still gaining or losing steam, it is more typical to see retracements of a higher percentage.\n\nIn this image, you'll notice the price pulling back between two impulse moves. This is an example of a Fibonacci retracement. The theory states that it is typical for stocks to trend in this manner because human behavior inherently follows the sequence.\n\nViewing the retracement level. The Fibonacci levels also point out price areas where you should be on high alert for trading opportunities.\n\n## How to Use Fibonacci Retracements\n\nThat may be a good opportunity to buy, knowing that the price will likely bounce back up from a Fibonacci level.\n\nUsing a Fibonacci retracement tool is subjective. There are multiple price swings during a trading day, so not everyone will be connecting the same two points. The two points you connect may not be the two points others connect.\n\n### A Tool to Help Isolate When Pullbacks Could End\n\nWhere levels drawn from different price swings cluster together, this may indicate a price area of high importance.
### Retracement Warnings\n\nWhile useful, Fibonacci levels will not always pinpoint exact market turning points.\n\n• Fibonacci Retracement Levels\n• They are based on the key numbers identified by mathematician Leonardo Fibonacci in the 13th century.\n• Fibonacci Retracement Levels in Day Trading\n• Fibonacci retracement levels are horizontal lines that indicate the possible support and resistance levels where price could potentially reverse direction.\n• Fibonacci Retracements Introduction: Fibonacci Retracements are ratios used to identify potential reversal levels.\n• However, there are ways that you can help tilt the odds in your favor.\n• Fibonacci retracement - Wikipedia\n• How to Use Fibonacci Retracements - samuray-club.com\n\nThey provide an estimated entry area but not an exact entry point. There is no guarantee the price will stop and reverse at a particular Fibonacci level, or at any of them.\n\nFibonacci retracement levels are horizontal lines that indicate where support and resistance are likely to occur. They are based on Fibonacci numbers. Each level is associated with a percentage. The percentage is how much of a prior move the price has retraced.\n\nFurther, if you use the Fibonacci retracement tool on very small price moves, it may not provide much insight. The levels will be so close together that almost every price level appears important.\n\nFibonacci retracements provide some areas of interest to watch during pullbacks. They can act as confirmation if you get a trade signal in the area of a Fibonacci level.\n\nPlay around with Fibonacci retracement levels and apply them to your charts, and incorporate them if you find they help your trading."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8651749,"math_prob":0.5613098,"size":4873,"snap":"2021-21-2021-25","text_gpt3_token_len":1038,"char_repetition_ratio":0.19449578,"word_repetition_ratio":0.013003902,"special_character_ratio":0.17956084,"punctuation_ratio":0.08222491,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97937226,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-20T13:26:04Z\",\"WARC-Record-ID\":\"<urn:uuid:23605573-706a-49ed-9cdc-231791ef53f5>\",\"Content-Length\":\"16202\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ca4ba3b8-d25e-420f-81de-078ef6f984a6>\",\"WARC-Concurrent-To\":\"<urn:uuid:d0cf712c-6eaf-4f5f-aa46-5ba4314214ec>\",\"WARC-IP-Address\":\"104.21.14.117\",\"WARC-Target-URI\":\"https://samuray-club.com/adenomi/fibonacci-retracement-levels-329566.php\",\"WARC-Payload-Digest\":\"sha1:35E7X7B7R3KY4UELBFRI62E5ZHB4K23U\",\"WARC-Block-Digest\":\"sha1:ZYRKWV32E2TATE7RSPTOKNWEQQKDKY2Q\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487662882.61_warc_CC-MAIN-20210620114611-20210620144611-00050.warc.gz\"}"} |
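The Fibonacci retracement article in the row above rests on two small pieces of arithmetic: ratios of Fibonacci numbers settling toward the well-known 61.8%, 38.2%, and 23.6% values, and retracement price levels measured back from a swing move. A minimal Python sketch of both (the swing prices are made-up illustration values, not from the source):

```python
# Sketch of the ratio and retracement arithmetic behind Fibonacci levels.
# swing_low / swing_high below are hypothetical illustration prices.

def fib_sequence(n):
    """First n Fibonacci numbers: each term is the sum of the two before it."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq

def retracement_levels(swing_low, swing_high, ratios=(0.236, 0.382, 0.618, 0.764)):
    """Price levels measured back from swing_high toward swing_low (uptrend case)."""
    move = swing_high - swing_low
    return {r: swing_high - move * r for r in ratios}

seq = fib_sequence(12)               # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
print(round(seq[-2] / seq[-1], 3))   # ratio to the next number        -> 0.618
print(round(seq[-3] / seq[-1], 3))   # ratio two numbers along         -> 0.382
print(round(seq[-4] / seq[-1], 3))   # ratio three numbers along       -> 0.236

for ratio, price in retracement_levels(100.0, 150.0).items():
    print(f"{ratio:.1%} retracement -> {price:.2f}")
```

For the hypothetical 100→150 swing this prints levels at 138.20, 130.90, 119.10, and 111.80, matching the "how much of the prior move has been retraced" definition given in the article.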
https://kmmiles.com/12-58-km-in-miles | [
"kmmiles.com\n\n# 12.58 km in miles\n\n## Result\n\n12.58 km equals 7.8122 miles\n\nYou can also convert 12.58 miles to km.\n\n## Conversion formula\n\nMultiply the amount of km by the conversion factor to get the result in miles:\n\n12.58 km × 0.621 = 7.8122 mi\n\n## How to convert 12.58 km to miles?\n\nThe conversion factor from km to miles is 0.621, which means that 1 km is equal to 0.621 miles:\n\n1 km = 0.621 mi\n\nTo convert 12.58 km into miles we have to multiply 12.58 by the conversion factor in order to get the amount from km to miles. We can also form a proportion to calculate the result:\n\n1 km → 0.621 mi\n\n12.58 km → L(mi)\n\nSolve the above proportion to obtain the length L in miles:\n\nL(mi) = 12.58 km × 0.621 mi\n\nL(mi) = 7.8122 mi\n\nThe final result is:\n\n12.58 km → 7.8122 mi\n\nWe conclude that 12.58 km is equivalent to 7.8122 miles:\n\n12.58 km = 7.8122 miles\n\n## Result approximation\n\nFor practical purposes we can round our final result to an approximate numerical value. In this case twelve point five eight km is approximately seven point eight one two miles:\n\n12.58 km ≅ 7.812 miles\n\n## Conversion table\n\nFor quick reference purposes, below is the kilometers to miles conversion table:\n\nkilometers (km) miles (mi)\n13.58 km 8.43318 miles\n14.58 km 9.05418 miles\n15.58 km 9.67518 miles\n16.58 km 10.29618 miles\n17.58 km 10.91718 miles\n18.58 km 11.53818 miles\n19.58 km 12.15918 miles\n20.58 km 12.78018 miles\n21.58 km 13.40118 miles\n22.58 km 14.02218 miles\n\n## Units definitions\n\nThe units involved in this conversion are kilometers and miles. This is how they are defined:\n\n### Kilometers\n\nThe kilometer (symbol: km) is a unit of length in the metric system, equal to 1000m (also written as 1E+3m). 
It is commonly used officially for expressing distances between geographical places on land in most of the world.\n\n### Miles\n\nThe mile is a widely used unit of length, most commonly equal to 5,280 feet (1,760 yards, or about 1,609 meters). The mile of 5,280 feet is called the land mile or statute mile to distinguish it from the nautical mile (1,852 meters, about 6,076.1 feet). Use of the mile as a unit of measurement is now largely confined to the United Kingdom, the United States, and Canada."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.84804904,"math_prob":0.98396826,"size":2162,"snap":"2022-27-2022-33","text_gpt3_token_len":651,"char_repetition_ratio":0.1742354,"word_repetition_ratio":0.0,"special_character_ratio":0.35522664,"punctuation_ratio":0.15789473,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9838322,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-06T10:35:21Z\",\"WARC-Record-ID\":\"<urn:uuid:4eca1082-d19a-406d-880e-9df9a0c8ceb1>\",\"Content-Length\":\"20510\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:594072f1-60c9-42ee-b3d9-944a87b80593>\",\"WARC-Concurrent-To\":\"<urn:uuid:5d9a3b01-780f-4cf3-8e59-8e14f6b0dd21>\",\"WARC-IP-Address\":\"172.67.134.182\",\"WARC-Target-URI\":\"https://kmmiles.com/12-58-km-in-miles\",\"WARC-Payload-Digest\":\"sha1:IVDU6BKX2B4QAOWVHLMLFCYIVCIFE7PY\",\"WARC-Block-Digest\":\"sha1:KFO2KARPD3MZWJR4O3RORQZMULSIDXCS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104669950.91_warc_CC-MAIN-20220706090857-20220706120857-00403.warc.gz\"}"} |
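The conversion recipe in the kmmiles row above is a single multiplication by the page's rounded factor 0.621 (a more precise factor is about 0.621371). A Python sketch reproducing the worked 12.58 km example and the first rows of the page's reference table:

```python
# Conversion factor used by the scraped page; it is a rounded value
# (the more precise km-to-mile factor is approximately 0.621371).
KM_TO_MILES = 0.621

def km_to_miles(km):
    """Convert kilometers to miles using the page's conversion factor."""
    return km * KM_TO_MILES

print(round(km_to_miles(12.58), 4))   # 7.8122, the worked example above

# Reproduce the first rows of the quick-reference table
for km in (13.58, 14.58, 15.58):
    print(f"{km} km -> {km_to_miles(km):.5f} miles")
```

The loop prints 8.43318, 9.05418, and 9.67518 miles, matching the table on the page.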