URL
stringlengths
15
1.68k
text_list
sequencelengths
1
199
image_list
sequencelengths
1
199
metadata
stringlengths
1.19k
3.08k
https://math.stackexchange.com/questions/1011158/roots-of-a-quadratic-equation
[ "# Roots of a quadratic equation\n\nCan a quadratic equation have irrational roots? By extension, can any equation have irrational roots? If not, why? If it can, how would you visualize it? (geometrically). I want to add that I am a high school school student. Don't want people wasting time writing answers that I can not understand\n\n• What do you mean by 'visualize' it? – Simon S Nov 7 '14 at 21:58\n• Consider $x^2 = 2$ – ChocolateBar Nov 7 '14 at 21:58\n• consider $x=\\pi$ – John Joy Nov 7 '14 at 22:01\n• Do you know what an irrational number is? – Américo Tavares Nov 7 '14 at 22:03\n• Yes, one that can not be expressed as a ratio of two integers? – user140161 Nov 7 '14 at 22:06\n\n$x^2 -a = 0$ will give alot of irrational numbers. An interesting fact is that if a is a prime, the solution will allways be an irrational number. I am not sure how you can visualize it, but $\\sqrt{2} = 1.41421...$ The number will continue forever and you will never be able to get \"onto it\", just very close. Don't know any other way to look at it. therefor $(1.41421...)^2 -2 = 0.00000...$\nSure $x^2 - 2$ has irrational roots.\n• I don't understand. Neither root here of $\\sqrt{2}$ or $-\\sqrt{2}$ is infinite, but they are irrational. – Simon S Nov 7 '14 at 22:02\n• What's an indefinite value? What ever they may be, $\\sqrt{2}$ and $-\\sqrt{2}$ are well defined. – Simon S Nov 7 '14 at 22:06" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8758801,"math_prob":0.9897643,"size":1865,"snap":"2020-10-2020-16","text_gpt3_token_len":572,"char_repetition_ratio":0.19075766,"word_repetition_ratio":0.13414635,"special_character_ratio":0.35442358,"punctuation_ratio":0.13612565,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9977638,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-20T22:02:14Z\",\"WARC-Record-ID\":\"<urn:uuid:7c184c8b-f98c-42a7-a863-83cee1ec9155>\",\"Content-Length\":\"152990\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7cbde473-f8ff-4475-a7ed-ec46f33eb36d>\",\"WARC-Concurrent-To\":\"<urn:uuid:3602b575-4994-4772-8007-35e6f3c756d4>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/1011158/roots-of-a-quadratic-equation\",\"WARC-Payload-Digest\":\"sha1:S67JHLPUCYZIXF2DEHNAHLTNDE4ELCN6\",\"WARC-Block-Digest\":\"sha1:ADJMHWJGBZGLY6R464W4IBHGZRV6U65E\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145282.57_warc_CC-MAIN-20200220193228-20200220223228-00484.warc.gz\"}"}
https://math.stackexchange.com/questions/1378946/farthest-point-on-parallelogram-lattice
[ "# Farthest point on parallelogram lattice\n\nOn points arranged in a parallelogram lattice, like on the image in this Wikipedia article, how to calculate the maximal distance any point on the plane may have to its closest point from the lattice. Or alternately, the maximal radius of a disk that can be placed on the plane such that it does not contain any point of the lattice.\n\nAs input I have the side length and both diagonals of one possible parallelogram that fits the lattice.\n\nEdit: I meant the lattice not grid, i.e. only the sparse set of intersection points of a parallelogram grid.\n\n• Do you mean the \"grid,\" the lines, or the \"lattice,\" the intersection points of the lines? – Rory Daulton Jul 30 '15 at 12:40\n\nTwo vectors ${\\bf a}$, ${\\bf b}\\in{\\mathbb R}^2$ representing the sides of the given parallelogram determine the lattice $$\\Lambda:={\\mathbb Z}{\\bf a}+{\\mathbb Z}{\\bf b}:=\\bigl\\{j {\\bf a}+k {\\bf b}\\bigm| j, \\ k\\in{\\mathbb Z}\\bigr\\}\\ ,\\tag{1}$$ but the representation $(1)$ of $\\Lambda$ is not uniquely determined: The lattice ${\\mathbb Z}^2$, for example, is generated by the pair $(1,0)$, $(0,1)$ as well as by the pair ${\\bf a}:=(3,16)$, ${\\bf b}:=(5,27)$.\n\nIn order to solve the problem at hand we have to determine a certain \"standard presentation\" of $\\Lambda$: Find the shortest vector $${\\bf p}=p\\> {\\bf u}, \\quad |u|=1, \\quad p>0,$$ occurring in $\\Lambda$ (this is a standard problem in computational geometry). Then $\\Lambda$ contains the one-dimensional lattice $\\Lambda':={\\mathbb Z}{\\bf p}$, and is the union of translated copies of $\\Lambda'$. Denote the distance between two successive such copies by $h>0$, and let ${\\bf v}$ be a unit vector orthogonal to ${\\bf u}$. There is a unique vector ${\\bf q}\\in\\Lambda$ having a representation of the form $${\\bf q}=c\\>{\\bf u}+h\\>{\\bf v}, \\qquad 0\\leq c<p\\ ,$$ and $\\Lambda$ can then be presented in the form $$\\Lambda:={\\mathbb Z}{\\bf p}+{\\mathbb Z}{\\bf q}\\ .$$ Let $\\rho$ be the circumradius of the triangle with vertices ${\\bf 0}$, ${\\bf p}$, ${\\bf q}$. Then any point ${\\bf x}\\in{\\mathbb R}^2$ has a distance $\\leq\\rho$ from $\\Lambda$, and there are points for which this bound is realized.\n\n• Drawing a picture made this so believable that I will refrain asking for a proof :-) – Jyrki Lahtonen Jul 30 '15 at 16:41\n\nIt is intuitively obvious that the point inside a parallelogram that is farthest from the sides is the center, the intersection of the two diagonals.", null, "In my diagram, the center of parallelogram $ABCD$ is point $E$. The altitudes of triangles $ABE$ and $ADE$ (among others) gives the distance from the center to each of the parallelogram's sides. The altitude to the larger side is the smaller of these altitudes and gives your maximal distance from any point in the plane to its closest point on the grid.\n\nYou apparently know the side and diagonal lengths of the parallelogram. You can use Heron's formula to find the altitudes from that information. Let's say that the parallelogram's sides have lengths $a$ and $b$, with $a\\ge b$, and the diagonals are $c$ and $d$. Then the triangle on the larger side has sides $a,\\frac c2,\\frac d2$. If we let\n\n$$s=\\frac{a+\\frac c2+\\frac d2}2$$\n\nthen the desired altitude is\n\n$$\\frac{2\\sqrt{s(s-a)\\left(s-\\frac c2\\right)\\left(s-\\frac d2\\right)}}a$$\n\nwhich is the answer to your question. Of course, you can simplify that expression in several ways. Note that you do not need the length of the shorter side of the parallelogram. Is that what you mean by \"As input I have the side length and both diagonals...\"?\n\n• The question was not about the distance to the side but to a grid point. That problem is further complicated by the fact that the closest point of the grid may not be a vertex of the parallelogram that the point falls in. As an example consider the grid consisting of translates of the parallelogram with side vectors $u=(5,6)$ and $v=(1,2)$, and the mid point of the parallelogram $P=(u+v)/2=(3,4)$. The grid point $(4,4)=u-v$ is much closer to $P$ than any of the vertices $(0,0), u,v,u+v$. – Jyrki Lahtonen Jul 30 '15 at 11:29\n• IOW there are many parallelograms giving the same lattice. – Jyrki Lahtonen Jul 30 '15 at 11:31\n• @JyrkiLahtonen: So by \"the grid\" you mean only the intersection points, i.e. the vertices of the parallelograms, rather than the grid lines? That does not match the usual English usage of the word \"grid.\" (See here and here and here). You should probably re-word your question for clarity. – Rory Daulton Jul 30 '15 at 12:26\n• A good point, Rory! Somehow that did not occur to me. Let's wait for the original asker to comment what is intended. It would not be the first time that I am assuming a meaning I'm familiar with. To me it seemed that the OP is asking about the covering radius of the lattice. The Wikipedia page linked to by them does concentrate on the set of linear combinations with integer coefficients of the two sides of a fundamental parallelogram. – Jyrki Lahtonen Jul 30 '15 at 12:31\n• @JyrkiLahtonen: I just noticed that the OP uses the word \"lattice\" in the title and \"grid\" in the question, causing the confusion. So we are both right/wrong! – Rory Daulton Jul 30 '15 at 12:39" ]
[ null, "https://i.stack.imgur.com/11fMz.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83352906,"math_prob":0.9989963,"size":1450,"snap":"2019-26-2019-30","text_gpt3_token_len":480,"char_repetition_ratio":0.1680498,"word_repetition_ratio":0.0,"special_character_ratio":0.35793105,"punctuation_ratio":0.121212125,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999255,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-19T12:47:48Z\",\"WARC-Record-ID\":\"<urn:uuid:7ba0c646-f5a9-4862-9da1-b7f005ad42f8>\",\"Content-Length\":\"156300\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:568296b6-d615-42ef-912a-5e979bfa6ca3>\",\"WARC-Concurrent-To\":\"<urn:uuid:c867c6b6-df5f-4738-9f05-43f03421fd31>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/1378946/farthest-point-on-parallelogram-lattice\",\"WARC-Payload-Digest\":\"sha1:ZRKLGEGNIKVV2AVOE7CK4GORJWJXIJNJ\",\"WARC-Block-Digest\":\"sha1:DSAOB5IMG2RAAAS3JW3GK3ZZ5PYQJEXJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195526237.47_warc_CC-MAIN-20190719115720-20190719141720-00457.warc.gz\"}"}
https://geometryatlas.com/contribute.php
[ "", null, "Add an entry to the Atlas\n\nWe certainly don't claim that our atlas is complete by any means. While we've tried to include most of the basics to get us started, there are practically an infinite number of entries waiting to be added. If there is something that you feel is crying out to be covered, fill in the following form and after being reviewed we'll add it to the atlas.\n\nNote that currently you must have a saved Geometry Expressions file to add to the atlas.\n\nEntry Details\n Title: Header: Footer: .gx File: Category: Lines     Misc. Lines     Vectors     General Triangles     Right Triangles         Altitude         Angle Bisector         Angle Measures         Area and Perimeter         Median         Inscribed and Circumscribed Circles         Misc.     Equilateral Triangles         Area and Perimeter         Inscribed and Circumscribed Circles         Median, Altitude, and Angle Bisector     Isosceles Triangles         Area and Perimeter         Inscribed and Circumscribed Circles         Median         Altitude         Angle Bisector         Angle Measures     Scalene Triangles         Area and Perimeter         Angle Measures         Altitude         Angle Bisector         Inscribed and Circumscribed Circles         Median     Misc. Triangles Quadrilaterals     Squares         Area and Perimeter         Diagonals         Inscribed and Circumscribed Circles     Rectangles         Area and Perimeter         Diagonals     Rhombi         Area and Perimeter         Diagonals         Altitude         Angle Measures     Parallelograms         Area and Perimeter         Diagonals         Angle Measures         Altitude     Trapezoids         Area and Perimeter         Angle Measures         Altitude         Diagonals         Misc.     Kites         Area and Perimeter         Angle Measures         Diagonals     Cyclic Quadrilaterals     General         Area and Perimeter         Diagonals         Angle Measures     Misc. Quadrilaterals Polygons     Hexagons     Pentagons     Octagons Circles     Tangents     Diameters/Radii     Chords     Inscribed Circles     General     Circumscribed Circles Transformations Curves     Conics and Polynomials     Cubic Curves     Quartic Curves     Curves of High Degree     Transcendental Curves     Derived Curves Mechanisms Conics     Ellipses     Hyperbolas     Parabolas     General Created By:\n\nIf you entry does not fit into the current categories, make a note of that in the Header and we'll consider making a new category.", null, "Content copyright 2020 Saltire Software. All Geometric Content created by Geometry Expressions." ]
[ null, "https://geometryatlas.com/images/atlas.png", null, "https://geometryatlas.com/images/Euclids_Muse_banner_no-border.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.96598977,"math_prob":0.716218,"size":585,"snap":"2020-24-2020-29","text_gpt3_token_len":123,"char_repetition_ratio":0.09810671,"word_repetition_ratio":0.0,"special_character_ratio":0.20683761,"punctuation_ratio":0.06896552,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.957394,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-02T16:42:38Z\",\"WARC-Record-ID\":\"<urn:uuid:a1ec793e-9edb-4a54-866e-d8d26d7ebeeb>\",\"Content-Length\":\"11993\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:20c01e6e-1a9c-4613-b13d-5d75fa7bbf74>\",\"WARC-Concurrent-To\":\"<urn:uuid:5fb0c367-e0d0-421b-94c5-6936d0d34592>\",\"WARC-IP-Address\":\"96.126.102.243\",\"WARC-Target-URI\":\"https://geometryatlas.com/contribute.php\",\"WARC-Payload-Digest\":\"sha1:K7YUNVC57OS3YK5SYIC5HPXPXG2IMOAA\",\"WARC-Block-Digest\":\"sha1:UBS5NHCTRA4TQ7J6GLUIEKI6UWHE5M63\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347425481.58_warc_CC-MAIN-20200602162157-20200602192157-00298.warc.gz\"}"}
http://www.tpub.com/math2/61.htm
[ "", null, "integration by trial and error. However, there are some rules to aid us in the determination of the answer. In this section we will discuss four of these rules and how they are used to integrate standard elementary forms. In the rules we will let u and v denote a differentiable function of a variable such as x. We will let C, n, and a denote constants.\"> Rules for Integration", null, "", null, "", null, "", null, "", null, "", null, "Custom Search", null, "", null, "", null, "RULES FOR INTEGRATION Although integration is the inverse of differentiation and we were given rules for differentiation, we are required to determine the answers in integration by trial and error. However, there are some rules to aid us in the determination of the answer. In this section we will discuss four of these rules and how they are used to integrate standard elementary forms. In the rules we will let u and v denote a differentiable function of a variable such as x. We will let C, n, and a denote constants. Our proofs will involve searching for a function F(x) whose derivative is", null, ".", null, "The integral of a differential of a function is the function plus a constant. PROOF: If", null, "then", null, "and", null, "EXAMPLE. Evaluate the integral", null, "SOLUTION: By Rule 1, we have", null, "A constant may be moved across the integral sign. NOTE: A variable may NOT be moved across the integral sign. PROOF: If", null, "then", null, "and", null, "EXAMPLE: Evaluate the integral", null, "SOLUTION: By Rule 2,", null, "and by Rule 1,", null, "therefore,", null, "The integral of", null, "du may be obtained by adding 1 to the ex�ponent and then dividing by this new exponent. NOTE: If n is minus 1, this rule is not valid and another method must be used. PROOF.- If", null, "then", null, "EXAMPLE: Evaluate the integral", null, "SOLUTION: By Rule 3,", null, "EXAMPLE: Evaluate the integral", null, "SOLUTION: First write the integral", null, "as", null, "Then, by Rule 2,", null, "and by Rule 3,", null, "The integral of a sum is equal to the sum of the integrals. PROOF: If", null, "then", null, "such that", null, "where", null, "EXAMPLE: Evaluate the integral", null, "SOLUTION: We will not combine 2x and -5x.", null, "where C is the sum of", null, ". EXAMPLE: Evaluate the integral", null, "SOLUTION:", null, "Now we will discuss the evaluation of the constant of integration. If we are to find the equation of a curve whose first derivative is 2 times the independent variable x, we may write", null, "or", null, "We may obtain the desired equation for the curve by integrating the expression for dy; that is, by integrating both sides of equa�tion (1). If", null, "then,", null, "But, since", null, "and", null, "then", null, "We have obtained only a general equation of the curve because a different curve results for each value we assign to C. This is shown in figure 6-7. If we specify that x=0 And y=6 we may obtain a specific value for C and hence a par�ticular curve. Suppose that", null, "then,", null, "or C=6", null, "Figure 6-7.-Family of curves. By substituting the value 6 into the general equation, we find that the equation for the particular curve is", null, "which is curve C of figure 6-7. The values for x and y will determine the value for C and also determine the particular curve of the family of curves. In figure 6-7, curve A has a constant equal to - 4, curve B has a constant equal to 0, and curve C has a constant equal to 6. EXAMPLE: Find the equation of the curve if its first derivative is 6 times the independent variable, y equals 2, and x equals 0. SOLUTION. We may write", null, "or", null, "such that,", null, "Solving for C when x=0 and y=2 We have", null, "or C=2 so that the equation of the curve is", null, "", null, "Integrated Publishing, Inc. - A (SDVOSB) Service Disabled Veteran Owned Small Business" ]
[ null, "http://pixel.quantserve.com/pixel/p-a14P8QBB_NyYs.gif", null, "http://www.tpub.com/s1.png", null, "http://www.tpub.com/s2.png", null, "http://www.tpub.com/s3.png", null, "http://www.tpub.com/s4.png", null, "http://www.tpub.com/s5.png", null, "http://www.tpub.com/s6.png", null, "http://www.tpub.com/partsquery.gif", null, "http://www.tpub.com/parts.jpg", null, "http://www.tpub.com/pdf-download.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2188.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2190.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2192.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2194.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2196.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2198.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2200.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2202.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2204.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2206.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2208.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2210.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2212.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2214.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2215.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2217.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2219.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2221.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2223.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2225.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2227.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2229.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2231.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2233.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2235.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2237.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2239.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2241.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2243.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2245.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2247.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2249.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2251.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2253.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2255.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2257.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2259.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2261.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2263.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2265.gif", null, "http://www.tpub.com/math2/Job%202_files/image2267.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2269.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2271.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2273.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2275.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2277.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2279.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2281.jpg", null, "http://www.tpub.com/math2/Job%202_files/image2283.jpg", null, "http://www.tpub.com/75logo.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88129383,"math_prob":0.99806166,"size":3202,"snap":"2020-10-2020-16","text_gpt3_token_len":840,"char_repetition_ratio":0.15040651,"word_repetition_ratio":0.24537815,"special_character_ratio":0.22954403,"punctuation_ratio":0.113702625,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99964845,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-06T19:20:02Z\",\"WARC-Record-ID\":\"<urn:uuid:1fa5be43-70ef-41f6-8d19-729c549dc2bb>\",\"Content-Length\":\"36985\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:99d90fad-b59c-409e-a0b1-475a7b63841c>\",\"WARC-Concurrent-To\":\"<urn:uuid:2a49cfcf-d3da-4973-b169-a1c03c4b3e88>\",\"WARC-IP-Address\":\"209.62.116.35\",\"WARC-Target-URI\":\"http://www.tpub.com/math2/61.htm\",\"WARC-Payload-Digest\":\"sha1:ZULX6MRXZI2CBQUDNCX7M63AQXGCTRZB\",\"WARC-Block-Digest\":\"sha1:FZWJWQVR6EX365WXT2QS4BZ2J2APLXQ2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585371656216.67_warc_CC-MAIN-20200406164846-20200406195346-00190.warc.gz\"}"}
https://stanford.library.sydney.edu.au/archives/sum2018/entries/dynamic-epistemic/appendix-M-preferences.html
[ "## Appendix M: Preference dynamics\n\nDEL-style model-changing operators have been applied by a number of researchers to the study of preferences, preference change, and related notions. At the semantic core of the various DEL approaches to reasoning about preference and preference change is some variant of the preference model.\n\nPreference model. Given a nonempty set $$\\sP$$ of propositional letters and a finite nonempty set $$\\sA$$ of agents, a preference model is a structure\n\n$M=(W,\\succeq,V)$\n\nconsisting of a nonempty set W of worlds identifying the possible states of affairs that might obtain, a function $$\\succeq$$ that assigns to each agent a a reflexive and transitive binary relation$$\\succeq_a$$ on W, and a propositional valuation V mapping each propositional letter to the set of worlds at which that letter is true. A pointed preference model, sometimes called a scenario or a situation, is a pair $$(M,w)$$ consisting of a preference model M and a world w (called the point) that designates the state of affairs that we (the formal modelers) currently assume to be actual.\n\nA preference model is just like a plausibility model except that the order $$\\succeq_a$$ need not satisfy the property of Plausibility; in particular, within each nonempty set of worlds, there need not always exist a nonempty subset consisting of the “most preferred” worlds. The expression $$w\\succeq_a v$$ is read, “agent a considers w to be at least as preferred as v”.\n\nNote that the convention here is that the “larger” worlds according to $$\\succeq_a$$ are more preferred; this is the opposite of the convention adopted for plausibility models. The converse preferential relation $$\\preceq_a$$, the strict preferential relations $$\\succ_a$$ and $$\\prec_a$$, and the equi-preferential relation $$\\simeq_a$$ are defined in terms of $$\\succeq_a$$ in a manner analogous to our definitions of converse plausibility, strict plausibility, strict converse plausibility, and equi-plausibility in the definition of plausibility models.\n\nVan Benthem et al. (2009) consider a single-agent version of a reflexive and irreflexive preference language we call \\eqref{RIPL}. This language consists of a universal modality $$[\\forall]$$ (“true at all worlds in the model”), a reflexive preference modality $$[\\succeq_a]$$ (“true at the worlds non-strictly less preferred by a”), the converse reflexive preference modality $$[\\preceq_a]$$ (“true at the worlds non-strictly preferred by a”), an irreflexive preference modality $$[\\succ_a]$$ (“true at the worlds strictly less preferred by a”), and the converse irreflexive preference modality $$[\\prec_a]$$ (“true at the worlds strictly preferred by a”).\n\n\\begin{align*} \\taglabel{RIPL} F \\ccoloneqq &p \\mid F\\land F \\mid \\lnot F \\mid [\\forall F] \\,\\mid \\\\ &[\\succeq_a]F \\mid [\\preceq_a]F \\mid [\\succ_a]F \\mid [\\prec_a]F \\\\ &\\small p\\in\\sP,\\; a\\in\\sA \\end{align*}\n\nWe recall that the dual existential modality $$[\\exists]$$ is defined by $$[\\exists]F\\coloneqq\\lnot[\\forall]\\lnot F$$. The dual modalities for the preference relation, which are written between angled brackets, are defined similarly; for example, $$\\may{\\succeq_a}F\\coloneqq\\lnot[\\succeq_a]\\lnot F$$. The language \\eqref{RIPL} is interpreted over pointed preference models as follows.\n\n• $$M,w\\models p$$ holds if and only if $$w\\in V(p)$$.\n• $$M,w\\models F\\land G$$ holds if and only if both $$M,w\\models F$$ and $$M,w\\models G$$.\n• $$M,w\\models\\lnot F$$ holds if and only if $$M,w\\not\\models F$$.\n• $$M,w\\models[\\forall]F$$ holds if and only if $$M,v\\models F$$ for each $$v\\in W$$.\n• $$M,w\\models[\\succeq_a]F$$ holds if and only if $$M,v\\models F$$ for each $$v\\succeq_a w$$.\n• $$M,w\\models[\\preceq_a]F$$ holds if and only if $$M,v\\models F$$ for each $$v\\preceq_a w$$.\n• $$M,w\\models[\\succ_a]F$$ holds if and only if $$M,v\\models F$$ for each $$v\\succ_a w$$.\n• $$M,w\\models[\\prec_a]F$$ holds if and only if $$M,v\\models F$$ for each $$v\\prec_a w$$.\n\nVan Benthem et al. (2009) show that \\eqref{RIPL} allows us to express eight distinct notions of “agent a prefers G to F”. A number of these notions arise in choosing best moves in a game-theoretic setting (van Benthem 2009).\n\nTheorem (van Benthem et al. 2009). Let $$(M,w)$$ be a pointed preference model.\n\n1. Define $$M,w\\models F\\leq_a^{\\exists\\exists} G$$ to mean that $$\\exists x, \\exists y \\succeq_a x:$$ ($$M,x\\models F$$ and $$M,y\\models G$$).\n“There is an F-world that agent a non-strictly prefers to some G-world.”\nThe formula $$[\\exists](F\\land\\may{\\preceq_a}G)$$ is equivalent to $$F\\leq_a^{\\exists\\exists} G$$.\n2. Define $$M,w\\models F\\leq_a^{\\forall\\exists} G$$ to mean that $$\\forall x, \\exists y \\succeq_a x:$$ ($$M,x\\models F$$ implies $$M,y\\models G$$).\n“For every F-world, there is a G-world that agent a non-strictly prefers.”\nThe formula $$[\\forall](F\\to\\may{\\preceq_a} G)$$ is equivalent to $$F\\leq_a^{\\forall\\exists} G$$.\n3. Define $$M,w\\models F\\lt_a^{\\exists\\exists} G$$ to mean that $$\\exists x, \\exists y \\succ_a x:$$ ($$M,x\\models F$$ and $$M,y\\models G$$).\n“There is an F-world that agent a strictly prefers to some G-world.”\nThe formula $$[\\exists](F\\land\\may{\\prec_a}G)$$ is equivalent to $$F\\lt_a^{\\exists\\exists} G$$.\n4. Define $$M,w\\models F\\lt_a^{\\forall\\exists} G$$ to mean that $$\\forall x, \\exists y :$$ (if $$M,x\\models F$$, then $$M,y\\models G$$ and $$y\\succ_a x$$).\n“For every F-world, there is a G-world that agent a strictly prefers.”\nThe formula $$[\\forall](F\\to\\may{\\prec_a} G)$$ is equivalent to $$F\\lt_a^{\\forall\\exists} G$$.\n5. Define $$M,w\\models F\\lt_a^{\\forall\\forall} G$$ to mean that $$\\forall x, \\forall y :$$ (if $$M,x\\models F$$ and $$M,y\\models G$$, then $$x\\prec_a y$$).\n“Agent a strictly prefers every G-world to every F-world.”\nIf the preference ordering $$\\succeq_a$$ is total (see Appendix C), then $$[\\forall](G\\to[\\preceq_a]\\lnot F)$$ is equivalent to $$F\\lt_a^{\\forall\\forall} G$$.\n6. Define $$M,w\\models G\\gt_a^{\\exists\\forall} F$$ to mean that $$\\exists y, \\forall x :$$ (if $$M,x\\models F$$ and $$M,y\\models G$$, then $$x\\prec_a y$$).\n“There is a G-world that agent a strictly prefers to every F-world.”\nIf the preference ordering $$\\succeq_a$$ is total, then $$[\\exists](G\\land[\\preceq_a]\\lnot F)$$ is equivalent to $$G\\gt_a^{\\exists\\forall} F$$.\n7. Define $$M,w\\models F\\leq_a^{\\forall\\forall} G$$ to mean that $$\\forall x, \\forall y :$$ (if $$M,x\\models F$$ and $$M,y\\models G$$, then $$x\\preceq_a y$$).\n“Agent a non-strictly prefers every G-world to every F-world.”\nIf the preference ordering $$\\succeq_a$$ is total, then $$[\\forall](G\\to[\\prec_a]\\lnot F)$$ is equivalent to $$F\\leq_a^{\\forall\\forall} G$$.\n8. Define $$M,w\\models G\\geq_a^{\\exists\\forall} F$$ to mean that $$\\exists y, \\forall x :$$ (if $$M,x\\models F$$ and $$M,y\\models G$$, then $$x\\preceq_a y$$).\n“There is a G-world that agent a non-strictly prefers to every F-world.”\nIf the preference ordering $$\\succeq_a$$ is total, then $$[\\exists](G\\land[\\prec_a]\\lnot F)$$ is equivalent to $$G\\geq_a^{\\exists\\forall} F$$.\n\nTo consider multi-agent preferences in conjunction with multi-agent knowledge, and to allow for DEL-style changes in preferences and knowledge, van Benthem and Liu (2007) study preference models to which epistemic relations $$R_a$$ are added for each agent a.\n\nEpistemic preference model. Given a nonempty set $$\\sP$$ of propositional letters and a finite nonempty set $$\\sA$$ of agents, an epistemic preference model is a structure $M=(W,\\succeq,R,V)$ consisting of a nonempty set W of worlds identifying the possible states of affairs that might obtain, a function $$\\succeq$$ that assigns to each agent $$a\\in\\sA$$ a reflexive and transitive binary relation $${\\succeq_a}$$ on W, a function R that assigns a binary equivalence relation $$R_a$$ on W to each agent $$a\\in\\sA$$, and a propositional valuation V mapping each propositional letter to the set of worlds at which that letter is true. A pointed epistemic preference model, sometimes called a scenario or a situation, is a pair $$(M,w)$$ consisting of an epistemic preference model M and a world w (called the point) that designates the state of affairs that we (the formal modelers) currently assume to be actual.\n\nWe note that van Benthem and Liu (2007) define epistemic preference models so that each $$R_a$$ is an equivalence relation. This is because they wish to adopt the standard logic of knowledge (multi-agent $$\\mathsf{S5}$$) and assign formulas $$[a]F$$ an epistemic reading (“agent a knows F”). This restriction that the $$R_a$$’s be equivalence relations is not a technical necessity and could easily be varied if other normal modal logics for the modalities $$[a]$$ are desired.\n\nVan Benthem and Liu (2007) define the single-agent dynamic epistemic preference language \\eqref{DEPL} that makes use of a universal modality, reflexive preference formulas $$[\\preceq_a]F$$ (“agent a prefers F”) for each agent a, a knowledge modality $$[a]F$$ (“agent a knows F”) for each agent a, “link-cutting” public announcement formulas $$[F!']G$$ (“after the announcement of F, formula G is true”), and “preference upgrade” formulas $$[\\sharp F]G$$ (“after eliminating preferences for $$\\lnot F$$-worlds over F-worlds, G is true”). Here we consider a simple multi-agent extension.\n\n\\begin{align*} \\taglabel{DEPL} F \\ccoloneqq &p \\mid F\\land F \\mid \\lnot F \\mid [a]F \\,\\mid \\\\ &[\\forall]F \\mid [\\preceq_a]F \\mid [F!']F \\mid [\\sharp F]F \\\\ &\\small p\\in\\sP,\\; a\\in\\sA \\end{align*}\n\nFormulas of this language are interpreted at pointed epistemic preference models using the appropriate clauses from \\eqref{RIPL} and from (ML) along with the following clauses:\n\n• $$M,w\\models[a]F$$ holds if and only if $$M,v\\models F$$ for each v satisfying $$wR_av$$.\n• $$M,w\\models[F!']G$$ holds if and only if we have that $$M[F!'],w\\models G$$, where the model $M[F!']=(W[F!'],{\\succeq}[F!'],R[F!'],V[F!'])$ is defined by:\n• $$W[F!'] \\coloneqq W$$ — retain all worlds,\n• $$x{\\succeq}[F!']_a y$$ if and only if $$x\\succeq_a y$$ — leave preferences as before,\n• $$xR[F!']_ay$$ if and only if $$xR_ay$$ and we have $$M,x\\models F$$ if and only if $$M,y\\models F$$ — delete only those epistemic connections between F-worlds and $$\\lnot F$$-worlds, and\n• $$v\\in V[F!'](p)$$ holds if and only if $$v\\in V(p)$$ — leave the valuation the same at all worlds.\n• $$M,w\\models[\\sharp F]G$$ holds if and only if we have that $$M[\\sharp F],w\\models G$$, where the model $M[\\sharp F]=(W[\\sharp F],{\\succeq}[\\sharp F],R[\\sharp F],V[\\sharp F])$ is defined by:\n• $$W[\\sharp F]\\coloneqq W$$ — retain all worlds,\n• $$x{\\succeq}[\\sharp F]_ay$$ if and only if $$x\\succeq_a y$$ and it is not the case that both $$M,w\\not\\models F$$ and $$M,y\\models F$$ — delete only those preferences for $$\\lnot F$$-worlds over F-worlds,\n• $$x R[\\sharp F]_ay$$ if and only if $$x R_a y$$ — leave epistemic relations as before, and\n• $$v\\in V[\\sharp F](p)$$ if and only if $$v\\in V(p)$$ — leave the valuation the same at all worlds.\n\nVan Benthem and Liu (2007) axiomatize the \\eqref{DEPL}-validities and call the resulting logic Dynamic Epistemic Upgrade Logic $$\\DEUL$$.\n\nThe axiomatic theory $$\\DEUL$$.\n\n• Axiom schemes and rules for classical propositional logic\n• $$\\mathsf{S5}$$ axiom schemes and rules for $$[a]$$\n• $$\\mathsf{S4}$$ axiom schemes and rules for $$[\\preceq_a]$$\n• Universal modality axioms:\n• $$\\mathsf{S5}$$ axiom schemes and rules for $$[\\forall]$$\n• $$[\\forall]F\\to[a]F$$\n• $$[\\forall]F\\to[\\preceq_a]F$$\n• $$[F!']p\\leftrightarrow(F\\to p)$$ for letters $$p\\in\\sP$$\n“After a false announcement, every letter holds—a contradiction. After a true announcement, letters retain their truth values.”\n• $$[F!'](G\\land H)\\leftrightarrow([F!']G\\land[F!']H)$$\n“A conjunction is true after an announcement iff each conjunct is.”\n• $$[F!']\\lnot G\\leftrightarrow(F\\to\\lnot[F!']G)$$\nG is false after an announcement iff the announcement, whenever truthful, does not make G true.”\n• $$[F!'][a]G\\leftrightarrow(F\\to[a][F!']G)$$\na knows G after an announcement iff the announcement, whenever truthful, is known by a to make G true.”\n• $$[F!'][\\preceq_a]G\\leftrightarrow(F\\to[\\preceq_a][F!']G)$$\na prefers G after an announcement iff a prefers that G is true after the announcement is truthfully made.”\n• $$[F!'][\\forall]G\\leftrightarrow (F\\to[\\forall]([F!']G\\land[\\lnot F!']G))$$\nG is true everywhere after an announcement of F iff whenever F is true, it is true everywhere that both the announcement of F and the announcement of its negation makes G true.”\n• Link-Cutting Announcement Necessitation Rule: from F, infer $$[G!']F$$.\n“A validity holds after any announcement.”\n• $$[\\sharp F]p\\leftrightarrow p$$ for letters $$p\\in\\sP$$\n• $$[\\sharp F](G\\land H)\\leftrightarrow ([\\sharp F]G\\land[\\sharp F]H)$$\n“A conjunction is true after an upgrade iff each conjunct is.”\n• $$[\\sharp F]\\lnot G\\leftrightarrow\\lnot[\\sharp F]G$$\n“A negation is true after an upgrade iff it is false before the upgrade.”\n• $$[\\sharp F][a]G\\leftrightarrow[a][\\sharp F]G$$\na knows G after an upgrade iff the upgrade is known by a to make G true.”\n• $$[\\sharp F][\\preceq_a]G\\leftrightarrow ([\\preceq_a](F\\to[\\sharp F]G)\\land (\\lnot F\\to[\\preceq_a][\\sharp F]G))$$\na prefers G after an upgrade by F iff a prefers that ‘the upgrade by F make G true at F-worlds,’ and, in case F is false, a also prefers that the upgrade of F make G true.”\n• $$[\\sharp F][\\forall]G\\leftrightarrow[\\forall][\\sharp F]G$$\nG is true everywhere after an upgrade of F iff it is true everywhere that the upgrade makes G true.”\n• Preference Upgrade Necessitation Rule: from F, infer $$[\\sharp G]F$$.\n“A validity holds after any upgrade.”\n\n$$\\DEUL$$ Soundness and Completeness (van Benthem and Liu 2007). $$\\DEUL$$ is sound and complete with respect to the collection $$\\sC_*$$ of pointed epistemic preference models. That is, for each \\eqref{DEPL}-formula F, we have that $$\\DEUL\\vdash F$$ if and only if $$\\sC_*\\models F$$.\n\nThe work of van Benthem and Liu (2007) has been further developed by a number of authors. Liu (2008) looks at a quantitative version of preference and preference change closely related to earlier work on belief revision by Aucher (2003). Yamada (2007a,b, 2008) examines various deontic logics of action, command, and obligation. Van Eijck (2008) looks at a generalized Propositional Dynamic Logic-style preference logic that encompasses van Benthem and Liu (2007) and allows for the study of common knowledge (or belief) along with a DEL-style notion of conditional belief and belief change. Van Eijck and Sietsma (2010) further extend this work, examining applications to judgment aggregation; this has natural connections with coalition logic and social choice theory. Van Benthem, Girard, and Roy (2009c) examine a preference logic for von Wright’s (1963) notion of ceteris paribus (in the distinct senses of “all things being normal” and of “all things being equal”) along with a dynamic notion of “agenda change”. A textbook on preferences and preference dynamics is Liu (2011).\n\n← beginning of main article" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7528672,"math_prob":0.9999006,"size":15233,"snap":"2022-27-2022-33","text_gpt3_token_len":4477,"char_repetition_ratio":0.15509883,"word_repetition_ratio":0.23957856,"special_character_ratio":0.29718375,"punctuation_ratio":0.10150376,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999573,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-10T08:41:33Z\",\"WARC-Record-ID\":\"<urn:uuid:32ee5c16-0199-43c8-9e6e-4b7147b48041>\",\"Content-Length\":\"30585\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7df3d16a-2688-4d8d-b1f2-851ce98601dc>\",\"WARC-Concurrent-To\":\"<urn:uuid:2012c582-2c6e-4424-b73d-7858c32293bc>\",\"WARC-IP-Address\":\"54.153.194.40\",\"WARC-Target-URI\":\"https://stanford.library.sydney.edu.au/archives/sum2018/entries/dynamic-epistemic/appendix-M-preferences.html\",\"WARC-Payload-Digest\":\"sha1:SXGOECKTR3RQ5L6FGA74GJUP7XM4AQYF\",\"WARC-Block-Digest\":\"sha1:6E73GEVHALNQF2OUTBIEGYHXT23FKOPG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571150.88_warc_CC-MAIN-20220810070501-20220810100501-00558.warc.gz\"}"}
https://superior-papers.com/urgent-homework-help-34823/
[ "# explain the interesting elements related to threads\n\n1. (20 pts) For the following program, explain the interesting elements related to threads. Focus on\n\nexplaining the output of the program.\n\n2 public static void main (String args []) {\n\n3 String [] sa = {“a”, “X”, “+”, “.”};\n\n4 for (String s: sa) {\n\n5 Runnable ps = new PrintChar (s, 200);\n\n7 ts.start ();\n\n8 } // end for each character\n\n9 } // end main\n\n11\n\n12 class PrintChar implements Runnable {\n\n13 String ch;\n\n14 int times;\n\n15\n\n16 public PrintChar (String c, int n) {\n\n17 ch = c;\n\n18 times = n;\n\n19 } // end constructor\n\n20\n\n21 public void run () {\n\n22 for (int i = 0; i < times; i++) {\n\n23 System.out.print (ch);\n\n24 } // end for loop\n\n25 } // end method run\n\n26 } // end class PrintChar\n\n2. (20 pts) What is changed if the method called on line 7, start(), is replaced with run()? Explain (of\n\ncourse). Focus on explaining the output of the program.\n\n3. (20 pts) What is changed if the method Thread.yield() is added between lines 23 and 24? Explain.\n\nFocus on explaining the output of the program.\n\n4. (20 pts) Using the jconsole or jvisualvm utilities provided in the JDK, list and explain some of the\n\nthreads that are created in your code for Project 3. Note that you can name the threads created in the\n\nprogram, as is done on line 6 in Problem 1 above, which can make this discussion a lot easier to follow.\n\n5. (20 pts) Explain how the java.util.concurrent.Semaphore class might be used in Project 4 to\n\ncoordinate the requirements of the various jobs. Then address the question of whether or not this\n\nactually makes sense in the context of the requirements of program. In other words, can you suggest\n\napproaches to handling shared resource pools that would be simpler than using semaphores?" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.78993946,"math_prob":0.8032627,"size":1826,"snap":"2021-31-2021-39","text_gpt3_token_len":466,"char_repetition_ratio":0.12513721,"word_repetition_ratio":0.07038123,"special_character_ratio":0.3033954,"punctuation_ratio":0.13821138,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9648947,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-02T21:51:21Z\",\"WARC-Record-ID\":\"<urn:uuid:465e2bc6-2baa-46ae-9fa1-ef36d7aa2297>\",\"Content-Length\":\"63291\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b9b23cf5-5be4-4841-9e40-21550ba2e6ad>\",\"WARC-Concurrent-To\":\"<urn:uuid:3ae05a3f-95ea-43bb-acc9-01e18cd000f5>\",\"WARC-IP-Address\":\"162.0.208.14\",\"WARC-Target-URI\":\"https://superior-papers.com/urgent-homework-help-34823/\",\"WARC-Payload-Digest\":\"sha1:DQ546HGFNV3ONEEVWJGQSNK56FXSIFFW\",\"WARC-Block-Digest\":\"sha1:EWNAUIKST23H7MYCZF4SYGXXETU3Y225\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154385.24_warc_CC-MAIN-20210802203434-20210802233434-00665.warc.gz\"}"}
http://mathcentral.uregina.ca/QandQ/topics/sum%20of%20two%20numbers
[ "", null, "", null, "Math Central - mathcentral.uregina.ca", null, "", null, "Quandaries & Queries", null, "", null, "", null, "", null, "Q & Q", null, "", null, "", null, "", null, "Topic:", null, "sum of two numbers", null, "", null, "", null, "start over\n\n3 items are filed under this topic.", null, "", null, "", null, "", null, "Page1/1", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "The sum and difference of two numbers 2018-03-13", null, "From samima:Two numbers have a difference of 0.85 and the sum 1.What are the numbers?Answered by Penny Nom.", null, "", null, "", null, "", null, "", null, "The sum and difference of two numbers 2018-01-25", null, "From Ali:Hello, I was looking at the original question that was posted and answered by your team: http://mathcentral.uregina.ca/QQ/database/QQ.09.07/s/donna1.html I used the 7 and 4 as example and not looking for 39 per original question: A=7 B=4 7+ 4 =11 and 7-4 = 3 We end up with 11 +3 = 14 A= 14/2 give use 7 B = how do you solve for B or 4 with out knowing anything about 3 or any other numbers ? Thank you AliAnswered by Penny Nom.", null, "", null, "", null, "", null, "", null, "The sum of two numbers is 52 2014-09-01", null, "From Blake:The sum of two numbers is 52 and the difference is 10. What are the numbers? i used to be real good at this stuff?Answered by Penny Nom.", null, "", null, "", null, "", null, "", null, "", null, "Page1/1", null, "", null, "", null, "", null, "Math Central is supported by the University of Regina and The Pacific Institute for the Mathematical Sciences.", null, "", null, "", null, "", null, "about math central :: site map :: links :: notre site français" ]
[ null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/boxes/whiteonwhite/topleft.gif", null, "http://mathcentral.uregina.ca/lid/images/boxes/whiteonwhite/topright.gif", null, "http://mathcentral.uregina.ca/lid/QQ/images/topic.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/boxes/whiteonwhite/bottomleft.gif", null, "http://mathcentral.uregina.ca/lid/images/boxes/whiteonwhite/bottomright.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/nav_but_inact_first.gif", null, "http://mathcentral.uregina.ca/images/nav_but_inact_previous.gif", null, "http://mathcentral.uregina.ca/images/nav_but_inact_next.gif", null, "http://mathcentral.uregina.ca/images/nav_but_inact_last.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/nav_but_inact_first.gif", null, "http://mathcentral.uregina.ca/images/nav_but_inact_previous.gif", null, "http://mathcentral.uregina.ca/images/nav_but_inact_next.gif", null, "http://mathcentral.uregina.ca/images/nav_but_inact_last.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/images/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/styles/mathcentral/interior/cms.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null, "http://mathcentral.uregina.ca/lid/images/pixels/transparent.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9320541,"math_prob":0.8207679,"size":406,"snap":"2019-13-2019-22","text_gpt3_token_len":137,"char_repetition_ratio":0.10199005,"word_repetition_ratio":0.0,"special_character_ratio":0.33251232,"punctuation_ratio":0.1,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9925839,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-25T01:39:04Z\",\"WARC-Record-ID\":\"<urn:uuid:f140ba20-ab3d-495b-a4fc-0f0f03d8f01d>\",\"Content-Length\":\"16065\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a74f4e75-e0e3-45b6-b423-23f4e7485faf>\",\"WARC-Concurrent-To\":\"<urn:uuid:a33dd19c-90d5-49fb-8664-f61d01343ae1>\",\"WARC-IP-Address\":\"142.3.156.43\",\"WARC-Target-URI\":\"http://mathcentral.uregina.ca/QandQ/topics/sum%20of%20two%20numbers\",\"WARC-Payload-Digest\":\"sha1:VZ7CTBYSVULA555UJGO2R5EZA6J3PHGS\",\"WARC-Block-Digest\":\"sha1:FCNYZ7KPBCZYECI2HZK6AD3GBKDCTW4D\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232257845.26_warc_CC-MAIN-20190525004721-20190525030721-00136.warc.gz\"}"}
https://solve-variable.com/math-logic--problem-solving-honors.html
[ "Free Algebra Tutorials!", null, "Home Systems of Linear Equations and Problem Solving Solving Quadratic Equations Solve Absolute Value Inequalities Solving Quadratic Equations Solving Quadratic Inequalities Solving Systems of Equations Row Reduction Solving Systems of Linear Equations by Graphing Solving Quadratic Equations Solving Systems of Linear Equations Solving Linear Equations - Part II Solving Equations I Summative Assessment of Problem-solving and Skills Outcomes Math-Problem Solving:Long Division Face Solving Linear Equations Systems of Linear Equations in Two Variables Solving a System of Linear Equations by Graphing Ti-89 Solving Simultaneous Equations Systems of Linear Equations in Three Variables and Matrix Operations Solving Rational Equations Solving Quadratic Equations by Factoring Solving Quadratic Equations Solving Systems of Linear Equations Systems of Equations in Two Variables Solving Quadratic Equations Solving Exponential and Logarithmic Equations Solving Systems of Linear Equations Solving Quadratic Equations Math Logic & Problem Solving Honors Solving Quadratic Equations by Factoring Solving Literal Equations and Formulas Solving Quadratic Equations by Completing the Square Solving Exponential and Logarithmic Equations Solving Equations with Fractions Solving Equations Solving Linear Equations Solving Linear Equations in One Variable Solving Linear Equations SOLVING QUADRATIC EQUATIONS USING THE QUADRATIC FORMULA SOLVING LINEAR EQUATIONS\n\nTry the Free Math Solver or Scroll down to Tutorials!\n\n Depdendent Variable\n\n Number of equations to solve: 23456789\n Equ. #1:\n Equ. #2:\n\n Equ. #3:\n\n Equ. #4:\n\n Equ. #5:\n\n Equ. #6:\n\n Equ. #7:\n\n Equ. #8:\n\n Equ. #9:\n\n Solve for:\n\n Dependent Variable\n\n Number of inequalities to solve: 23456789\n Ineq. #1:\n Ineq. #2:\n\n Ineq. #3:\n\n Ineq. #4:\n\n Ineq. #5:\n\n Ineq. #6:\n\n Ineq. #7:\n\n Ineq. #8:\n\n Ineq. #9:\n\n Solve for:\n\n Please use this form if you would like to have this math solver on your website, free of charge. Name: Email: Your Website: Msg:\n\nMath Logic & Problem Solving Honors\n\nCourse Description:\n\nLogic and Problem Solving Honors is a course designed to strengthen the skills essential to\nmathematical reasoning and problem solving. Participants will learn about the major elements of\nmathematical logic including but not limited to validity, soundness, formal proof and counter\nexample, inductive and deductive reasoning, truth tables, and set theory. These concepts will be\napplied to proof writing and analysis, problem solving, and other mathematical units of study such\nas probability, statistics, algebra, geometry, and number theory. Successful completion of this\ncourse will prepare the student for advanced math and logic courses.\n\nOutcomes: Upon successful completion of this course, the student will:\n\na. understand and apply the concepts of inductive and deductive reasoning\nb. apply inductive and deductive reasoning in mathematical proofs and problem solving situations\nc. understand and apply the concepts and symbols of set theory\nd. understand and apply the concepts and symbols relating to the rules of logic\ne. understand and apply the definitions of symbolic logic\nf. construct and apply truth tables for specific cases and for determining validity\ng. construct and use Euler diagrams to determine validity\nh. apply reasoning skills to early numeration systems and number bases in positional systems\ni. apply logic concepts and problem solving to topics of mathematical study including, Number Theory, Algebra, Geometry, Trigonometry, Counting Methods and Probability Theory, Statistics, and Mathematical Systems\n\nStudent Evaluation and Grading Policies for Credit Courses Only:\n\n A+ 100-98% A 97-93% A- 92-90% B+ 89-87% B 86-83% B- 82-80% C+ 79-77% C 76-73% C- 72-70% D+ 69-67% D 66-63% D- 62-60% F below 60%\n\nThe final semester grade will be based on completion of daily assignments (25%) and test grades\n(75%).\n\nInstructor Biography:\n\nLynn Tremmel is a middle school math teacher with 18 years of experience teaching Math, Pre-\nAlgebra, and Algebra I in grades 6-8. She has a B.A. degree in elementary education with a\nmathematics concentration (receiving honors in mathematics) along with a M.Ed. in mathematics\neducation, both from National College of Education (National Louis University). She is also an\narticle reviewer for the Illinois Council of Teachers of Mathematics.\n\nMrs. Tremmel has taught for the Saturday Enrichment Program through the Center for Talent\nDevelopment and is excited to be teaching her first summer session for the Summer Spectrum\nProgram.\n\nSchedule:\n\nThe following is a flexible schedule of topics covered in class. These will be adjusted as students’\nneeds dictate. Instructional strategies that will be utilized include large group instruction, small\ngroup work, individual work, and class discussions.\n\n Date(s) Topic(s) Activities Assignments and/or Assessment Week 1 Monday Problem Solving and Critical Thinking (Chapter 1) -inductive/deductive reasoning -estimation, graphs, and mathematical models -problem solving Pre-Test Chapter 1 assignment Tuesday Set Theory (Chapter 2) -basic set concepts -subsets -Venn diagrams and set operations -problem solving applications Test Chp. 1 Chapter 2 assignment Wednesday Logic (Chapter 3) -statements, negations, and quantified statements -compound statements and connectives -truth tables for negation, conjunctions, and disjunctions -truth table for the conditional and the biconditional -equivalent statements, variations of conditional statements, and De Morgan’s laws Test Chp. 2 Chapter 3 assignment (part 1) Thursday Logic (Chapter 3) -arguments and truth tables -arguments and Euler Diagrams Chapter 3 assignment (part 2) Friday Number Representations and Calculations (Chpater 4) -Hindu-Arabic System and early positional systems -number bases in positional systems computations in positional systems -early numeration systems -groups develop their own numeration system Test Chp. 3 Chapter 4 assignment and small group project Week 2 Monday Number Theory and the Real Number System (Chapter 5) -divisibility, prime, composite -factors, multiples -integers, order of operations -rational numbers -real numbers and their properties -exponents and scientific notation -arithmetic and geometric sequences Test Chp. 4 Chapter 5 assignment Tuesday Mathematical Systems (Chapter 13) -determine what a mathematical system is -properties of certain mathematical systems -groups and clock arithmetic Test Chp. 5 Chapter 13 assignment Wednesday Algebra: Equations and Inequalities (Chapter 6) -algebraic expressions and formulas -linear equations in one variable -applications of linear equations -ratio, proportion and variation -linear inequalities in one variable -quadratic equations -proofs involving linear equations Test Chp. 13 Chapter 6 assignment Thursday Algebra: Graphs, Functions, and Linear Systems (Chapter 7) -graphing and functions -linear functions and their graphs -systems of linear equations in two variables -linear inequalities in two variables -linear programming -approximating reality with nonlinear models: exponential functions, logarithmic functions, quadratic functions Test Chp. 6 Chapter 7 assignment Friday Measurement (Chapter 9) Geometry (Chapter 10) -linear measurement -using dimensional analysis to convert units within and between measurement systems -area and volume -weight and temperature -points, lines, planes, and angles -triangles -polygons, perimeter, tessellations -area and circumference -volume -right triangle trigonometry -beyond Euclidean geometry Test Chp. 7 Chapters 9 and 10 assignment Week 3 Monday Graph Theory (Chapter 15) -graphs, paths, circuits -Euler Paths and Euler Circuits -Hamilton Paths and Hamilton Circuits -trees Test Chp. 9,10 Chapter 15 assignment Tuesday Counting Methods and Probability (Chapter 11) -fundamental counting principle -permutations and combinations and their applications to probability -fundamentals of probabilities -events involving “not” and “or”, odds -events involving “and” conditional probability -expected value Test Chp.15 Chapter 11 assignment Wednesday Statistics (Chapter 12) -sampling, frequency distributions, and graphs -measures of central tendency -measures of dispersion -the normal distribution -scatter plots, correlation and regression lines Test Chp. 11 Chapter 12 assignment Thursday Course overview -review for final exam Final Exam Friday Course overview -complete final exam I -applications of logic- “Numb3rs” T.V. show (CBS) and activities Have a great rest of the summer and a great 2008-09 school year!!" ]
[ null, "https://solve-variable.com/images/arrow_1.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.775238,"math_prob":0.94732296,"size":6708,"snap":"2019-26-2019-30","text_gpt3_token_len":1624,"char_repetition_ratio":0.13484487,"word_repetition_ratio":0.0074626864,"special_character_ratio":0.21571258,"punctuation_ratio":0.098391674,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9970153,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-19T23:05:24Z\",\"WARC-Record-ID\":\"<urn:uuid:8a387f9a-e532-49bf-a89d-2f4a3c6d2a91>\",\"Content-Length\":\"102977\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9297c8a9-0a0a-40be-818b-47bde3b9b90e>\",\"WARC-Concurrent-To\":\"<urn:uuid:7927e1c7-09c3-492d-bbfc-f10e382d9385>\",\"WARC-IP-Address\":\"54.197.228.212\",\"WARC-Target-URI\":\"https://solve-variable.com/math-logic--problem-solving-honors.html\",\"WARC-Payload-Digest\":\"sha1:GYSDWS4RLMBSVDIIUNL72W5ACL6LYPDW\",\"WARC-Block-Digest\":\"sha1:VSTP2XXZF7K25S3B6S6OJGAFYXSSJP47\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627999066.12_warc_CC-MAIN-20190619224436-20190620010436-00342.warc.gz\"}"}
https://mathwaycalculus.com/page/2/
[ "Home Blog Page 2\n\n# How to solve 1 + 1/2 + 1/3 + ?= 12\n\n0\n\nOur mission is to systematically share mathematics information to people around the world and to make it universally accessible and useful.\nThe solution of this question has been explained in a very simple way by a well-known teacher by doing addition, subtraction, and fractions.\nFor complete information on how to solve this question How to solve 1 + 1/2 + 1/3 + ?= 12, read and understand it carefully till the end.\n\nLet us know how to solve this question How to solve 1 + 1/2 + 1/3 + ?= 12.\n\nFirst write the question on the page of the notebook.\n\n## How to solve 1 + 1/2 + 1/3 + ?= 12\n\nTo solve this question, we will write this question in the simplest form as follows and simplify it;\n\n\\displaystyle 1+\\frac{1}{2}+\\frac{1}{3}+?=12\\text{ }\n\nLet’s ? = x . then ,\n\n\\displaystyle 1+\\frac{{1\\times 3}}{{2\\times 3}}+\\frac{{1\\times 2}}{{3\\times 2}}+x=12\\text{ }\n\n\\displaystyle 1+\\frac{3}{6}+\\frac{2}{6}+x=12\\text{ }\n\n\\displaystyle 1+\\frac{{3+2}}{6}+x=12\\text{ }\n\n\\displaystyle 1+\\frac{5}{6}+x=12\\text{ }\n\n\\displaystyle \\frac{{1\\times 6+5}}{6}+x=12\\text{ }\n\n\\displaystyle \\frac{{6+5}}{6}+x=12\\text{ }\n\n\\displaystyle \\frac{{11}}{6}+x=12\\text{ }\n\n\\displaystyle x=12\\text{-}\\frac{{11}}{6}\n\n\\displaystyle x=\\frac{{12\\times 6-11}}{6}\n\n\\displaystyle x=\\frac{{72-11}}{6}\n\n\\displaystyle x=\\frac{{61}}{6}\n\nso,\n\nx = ? = \\displaystyle \\frac{{61}}{6} [Answer]\n\nThis article How to solve 1 + 1/2 + 1/3 + ?= 12 has been completely solved by tireless effort from our side, still if any error remains in it then definitely write us your opinion in the comment box. If you like or understand the methods of solving all the questions in this article, then send it to your friends who are in need.\n\nNote: If you have any such question, then definitely send it by writing in our comment box to get the answer.\n\n# How to solve, how many 3/4 in 1/4 ?\n\n0\n\nHello friends,\nWelcome to my article how many 3/4 in 1/4 ?\n\nFirst of all we should write the article on the page.\n\n## how many 3/4 in 1/4 ?\n\n1/4 how many 3/4 are there ,\n\nthat is\n\nhow many 3/4 together will make 1/4\n\nlet, x\n\n3/4 together complete 1/4\n\nthen,\n\n\\displaystyle x+\\frac{3}{4}=\\frac{1}{4}\n\n\\displaystyle \\frac{x}{1}+\\frac{3}{4}=\\frac{1}{4}\n\nSince the LCM of 1 and 4 will be 4.\n\n\\displaystyle \\frac{{4x+3}}{4}=\\frac{1}{4}\n\nnow multiply diagonally,\n\n\\displaystyle (4x+3)=4\\times 1\n\n\\displaystyle (4x+3)=4\n\n\\displaystyle 4x+3=4\n\n\\displaystyle 4x=4-3\n\n\\displaystyle x=\\frac{1}{4}\n\n## how many 3/4 in 1/4 ?\n\nThis article has been completely solved by tireless effort from our side, still if any error remains in it then definitely write us your opinion in the comment box. If you like or understand the methods of solving all the questions in this article, then send it to your friends who are in need.\n\nNote: If you have any such question, then definitely send it by writing in our comment box to get the answer.\n\n# How to solve 5/2+7/2+(0.5)^2+4=x\n\n0\n\nWelcome to my article How to solve 5/2+7/2+(0.5)^2+4=xThis question is taken from the simplification lesson.\nThe solution of this question has been explained in a very simple way by a well-known teacher by doing addition, subtraction, and fractions.\nFor complete information on how to solve this question How to solve 5/2+7/2+(0.5)^2+4=x, read and understand it carefully till the end.\n\nLet us know how to solve this question How to solve 5/2+7/2+(0.5)^2+4=x\n\nFirst write the question on the page of the notebook.\n\n## How to solve 5/2+7/2+(0.5)^2+4=x\n\nLet us first write this question in this way,\n\n\\displaystyle \\frac{5}{2}+\\frac{7}{2}+{{\\left( {\\frac{5}{{10}}} \\right)}^{2}}+4=x\n\n\\displaystyle \\frac{{5+7}}{2}+{{\\left( {\\frac{5}{{10}}} \\right)}^{2}}+4=x\n\n\\displaystyle \\frac{{12}}{2}+{{\\left( {\\frac{5}{{10}}} \\right)}^{2}}+4=x\n\n\\displaystyle 6+{{\\left( {\\frac{5}{{10}}} \\right)}^{2}}+4=x\n\n\\displaystyle 10+{{\\left( {\\frac{5}{{10}}} \\right)}^{2}}=x\n\n\\displaystyle 10+\\frac{{25}}{{100}}=x\n\n\\displaystyle 10+\\frac{1}{4}=x\n\n\\displaystyle \\frac{{10\\times 4+1}}{4}=x\n\n\\displaystyle \\frac{{41}}{4}=x\n\n0r,\n\n\\displaystyle x=\\frac{{41}}{4}\n\nThis article How to solve 5/2+7/2+(0.5)^2+4=x has been completely solved by tireless effort from our side, still if any error remains in it then definitely write us your opinion in the comment box. If you like or understand the methods of solving all the questions in this article, then send it to your friends who are in need.\n\nNote: If you have any such question, then definitely send it by writing in our comment box to get the answer.\n\n# How to solve 12(4-2)+1/2-5/6=x+2\n\n0\n\nWelcome to my article How to solve 12(4-2)+1/2-5/6=x+2. This question is taken from the simplification lesson.\nThe solution of this question has been explained in a very simple way by a well-known teacher by doing addition, subtraction, and fractions.\nFor complete information on how to solve this question How to solve 12(4-2)+1/2-5/6=x+2, read and understand it carefully till the end.\n\nLet us know how to solve this question How to solve 12(4-2)+1/2-5/6=x+2.\n\nFirst write the question on the page of the notebook.\n\n## How to solve 12(4-2)+1/2-5/6=x+2\n\nLet us first write this question in this way,\n\n\\displaystyle 12\\left( {4-2} \\right)+\\frac{1}{2}-\\frac{5}{6}=x+2\n\n\\displaystyle 12\\left( 2 \\right)+\\frac{1}{2}-\\frac{5}{6}=x+2\n\n\\displaystyle 24+\\frac{1}{2}-\\frac{5}{6}=x+2\n\n\\displaystyle \\frac{{24}}{1}+\\frac{1}{2}-\\frac{5}{6}-2=x\n\n\\displaystyle \\frac{{24\\times 2}}{{1\\times 2}}+\\frac{1}{2}-\\frac{5}{6}-2=x\n\n\\displaystyle \\frac{{48}}{2}+\\frac{1}{2}-\\frac{5}{6}-2=x\n\n\\displaystyle \\frac{{48+1}}{2}-\\frac{5}{6}-\\frac{2}{1}=x\n\n\\displaystyle \\frac{{49}}{2}-\\frac{5}{6}-\\frac{2}{1}=x\n\n\\displaystyle \\frac{{49\\times 3}}{{2\\times 3}}-\\frac{5}{6}-\\frac{{2\\times 6}}{{1\\times 6}}=x\n\n\\displaystyle \\frac{{147}}{6}-\\frac{5}{6}-\\frac{{12}}{6}=x\n\n\\displaystyle \\frac{{147-5-12}}{6}=x\n\n\\displaystyle \\frac{{147-17}}{6}=x\n\n\\displaystyle \\frac{{130}}{6}=x\n\nor\n\nx=21.666666..\n\nThis article How to solve 12(4-2)+1/2-5/6=x+2 has been completely solved by tireless effort from our side, still if any error remains in it then definitely write us your opinion in the comment box. If you like or understand the methods of solving all the questions in this article, then send it to your friends who are in need.\n\nNote: If you have any such question, then definitely send it by writing in our comment box to get the answer.\n\n# Fraction to Decimal: 1/4\n\n0\n\nWelcome to my article Fraction to Decimal: 1/4. This question is taken from the simplification lesson.\nThe solution of this question has been explained in a very simple way by a well-known teacher by doing addition, subtraction, and fractions.\nFor complete information on how to solve this question Fraction to Decimal: 1/4 read and understand it carefully till the end.\n\nLet us know how to solve this question Fraction to Decimal: 1/4.\n\nFirst write the question on the page of the notebook.\n\n## Fraction to Decimal: 1/4\n\nBefore solving this question, we will write in this way,\n\n\\displaystyle \\frac{1}{4}\n\nthen,\n\nso that ,\n\nThis article Fraction to Decimal: 1/4 has been completely solved by tireless effort from our side, still if any error remains in it then definitely write us your opinion in the comment box. If you like or understand the methods of solving all the questions in this article, then send it to your friends who are in need.\n\nNote: If you have any such question, then definitely send it by writing in our comment box to get the answer.\n\n# How to solve 12(8-2)+12/2-42/6=4x+2\n\n0\n\nWelcome to my article How to solve 12(8-2)+12/2-42/6=4x+2. This question is taken from the simplification lesson.\nThe solution of this question has been explained in a very simple way by a well-known teacher by doing addition, subtraction, and fractions.\nFor complete information on how to solve this question How to solve 12(8-2)+12/2-42/6=4x+2, read and understand it carefully till the end.\n\nLet us know how to solve this question How to solve 12(8-2)+12/2-42/6=4x+2.\n\nFirst write the question on the page of the notebook.\n\n## How to solve 12(8-2)+12/2-42/6=4x+2\n\nLet us first write this question in this way,\n\n\\displaystyle 12\\left( 6 \\right)+\\frac{{12}}{2}-\\frac{{42}}{6}=4x+2\n\n\\displaystyle 72+\\frac{{12}}{2}-\\frac{{42}}{6}=4x+2\n\n\\displaystyle 72+6-7=4x+2\n\n\\displaystyle 72+6-7-2=4x\n\n\\displaystyle 78-9=4x\n\n\\displaystyle 69=4x\n\nor\n\n\\displaystyle 4x=69\n\n\\displaystyle x=\\frac{{69}}{4}\n\nThis article How to solve 12(8-2)+12/2-42/6=4x+2 has been completely solved by tireless effort from our side, still if any error remains in it then definitely write us your opinion in the comment box. If you like or understand the methods of solving all the questions in this article, then send it to your friends who are in need.\n\nNote: If you have any such question, then definitely send it by writing in our comment box to get the answer.\n\n# What are Types of Triangles\n\n0\n\nOur mission is to systematically share mathematics (What are Types of Triangles) information to people around the world and to make it universally accessible and useful.\nFor complete information on how to solve this question What are Types of Triangles, read and understand it carefully till the end.\n\nLet us know how to solve this question What are Types of Triangles.\n\nFirst write the question on the page of the notebook.\n\nTypes of Triangles\n\nThe different types of triangles are classified according to the length of their sides and as per the measure of the angles. The triangle is one of the most common shapes and is used in construction for its rigidity and stable shape. Understanding these properties allows us to apply the ideas in many real-world problems.\n\n## What are the Different Types of Triangles?\n\nThere are different types of triangles in math that can be distinguished based on their sides and angles.\n\n### Classifying Triangles\n\nThe characteristics of a triangle’s sides and angles are used to classify them. The different types of triangles are as follows:\n\n## Types of Triangles Based on Sides\n\nOn the basis of side lengths, the triangles are classified into the following types:\n\nEquilateral Triangle:\n\nA triangle is considered to be an equilateral triangle when all three sides have the same length.\n\nIsosceles triangle:\n\nWhen two sides of a triangle are equal or congruent, then it is called an isosceles triangle.\n\nScalene triangle:\n\nWhen none of the sides of a triangle are equal, it is called a scalene triangle.\n\n## Types of Triangles Based on Angles\n\nOn the basis of angles, triangles are classified into the following types:\n\n• Acute Triangle: When all the angles of a triangle are acute, that is, they measure less than 90°, it is called an acute-angled triangle or acute triangle.\n• Right Triangle: When one of the angles of a triangle is 90°, it is called a right-angled triangle or right triangle.\n• Obtuse Triangle: When one of the angles of a triangle is an obtuse angle, that is, it measures greater than 90°, it is called an obtuse-angled triangle or obtuse triangle.\n\n## Types of Triangle Based on Sides and Angles\n\nThe different types of triangles are also classified according to their sides and angles as follows:\n\nEquilateral or Equiangular Triangle:\n\nWhen all sides and angles of a triangle are equal, it is called an equilateral or equiangular triangle.\n\nIsosceles Right Triangle:\n\nA triangle in which 2 sides are equal and one angle is 90° is called an isosceles right triangle. So, in an isosceles right triangle, two sides and two acute angles are congruent.\n\nObtuse Isosceles Triangle:\n\nA triangle in which 2 sides are equal and one angle is an obtuse angle is called an obtuse isosceles triangle.\n\nAcute Isosceles Triangle:\n\nA triangle in which all 3 angles are acute angles and 2 sides measure the same is called an acute isosceles triangle.\n\nRight Scalene Triangle:\n\nA triangle in which any one of the angles is a right angle and all the 3 sides are unequal, is called a right scalene triangle.\n\nObtuse Scalene Triangle:\n\nA triangle with an obtuse angle with sides of different measures is called an obtuse scalene triangle.\n\nAcute Scalene Triangle:\n\nA triangle that has 3 unequal sides and 3 acute angles is called an acute scalene triangle.\n\n☛Important Notes:\n\nHere is a list of a few points that should be remembered while studying the types of triangles:\n\n• In an equilateral triangle, each of the three internal angles is 60°.\n• The three internal angles in a triangle always add up to 180°.\n• All triangles have two acute angles.\n• When all the sides and angles of a triangle are equal, it is called an equilateral or equiangular triangle.\n\n☛ Related Topics:\n\nThis article What are Types of Triangles has been completely solved by tireless effort from our side, still if any error remains in it then definitely write us your opinion in the comment box. If you like or understand the methods of solving all the questions in this article, then send it to your friends who are in need.\n\nNote: If you have any such question, then definitely send it by writing in our comment box to get the answer.\n\n# How to solve 1/2 as a fraction ?\n\n0\n\nWelcome to my article How to solve 1/2 as a fraction ?. This question is taken from the simplification lesson.\nThe solution of this question has been explained in a very simple way by a well-known teacher by doing addition, subtraction, and fractions.\nFor complete information on how to solve this question How to solve 1/2 as a fraction ?, read and understand it carefully till the end.\n\nLet us know how to solve this question How to solve 1/2 as a fraction ?.\n\nFirst write the question on the page of the notebook\n\n## To find the equivalent fraction for a given fraction, divide the numerator and denominator by the same number.\n\nThe fractions equal to the answer 1/2 are 2/4, 3/6, 4/8, 6/12 etc.\nEquivalent fractions have the same value in the reduced form.\n\nExplanation:\n\nEquivalent fractions can be written by multiplying or dividing both the numerator and the denominator by the same number. This is why on simplification of these fractions they reduce to the same number.\n\nLet us look at the two ways in which we can make equivalent fractions:\n\nMultiply the numerator and denominator by the same number.\nDivide the numerator and denominator by the same number.\n1/2 . different equivalent to\n\nThus, 3/6, 6/12, and 4/8 are equal to 1/2 when simplified.\n\nSo they are all equal to 1/2. (ANSWER)\n\nThis article How to solve 1/2 as a fraction ? has been completely solved by tireless effort from our side, still if any error remains in it then definitely write us your opinion in the comment box. If you like or understand the methods of solving all the questions in this article, then send it to your friends who are in need.\n\nNote: If you have any such question, then definitely send it by writing in our comment box to get the answer.\n\n# How to solve what is 3/4 of 4/5 ?\n\n0\n\nHello friends,\nWelcome to my article what is 3/4 of 4/5 ? This article is taken from the simplification lesson, in this article we have been told how to solve the problem easily by doing addition, subtraction, multiplication, division and fractionation.or complete information on how to solve this what is 3/4 of 4/5 ? read and understand it carefully.\n\nFirst of all we should write the article on the page of the notebook.\n\n## what is 3/4 of 4/5 ?\n\nThis type of question can be solved in many ways\n\nIf 4/5 of 3/4\n\nThen ,\n\n\\displaystyle \\frac{3}{4}of\\frac{4}{5}=\\frac{3}{4}\\times \\frac{4}{5}\n\n\\displaystyle \\frac{3}{4}of\\frac{4}{5}=\\frac{{3\\times 4}}{{4\\times 5}}\n\n\\displaystyle \\frac{3}{4}of\\frac{4}{5}=\\frac{{12}}{{20}}\n\nIt can also be written like this,\n\n\\displaystyle \\frac{3}{4}of\\frac{4}{5}=\\frac{{3\\times 4}}{{5\\times 4}}\n\nThus if there are even numbers from bottom to top, then remove it.\n\n\\displaystyle \\frac{3}{4}of\\frac{4}{5}=\\frac{3}{5}\n\nSo the desired solution of this question =3/5 answer\n\nCan also solve it like this,\n\n## what is 3/4 of 4/5 ?\n\nLet x be 4/5 of 3/4,\n\nthen,\n\n\\displaystyle \\frac{3}{4}of\\frac{4}{5}=x\n\n\\displaystyle \\frac{3}{4}\\times \\frac{4}{5}=x\n\n\\displaystyle \\frac{{3\\times 4}}{{4\\times 5}}=x\n\n\\displaystyle x=\\frac{{3\\times 4}}{{4\\times 5}}\n\nThis article has been completely solved by tireless effort from our side, still if any error remains in it then definitely write us your opinion in the comment box. If you like or understand the methods of solving all the questions in this article, then send it to your friends who are in need.\n\nNote: If you have any such question, then definitely send it by writing in our comment box to get the answer.\n\n# How to solve x+10/2+75/5=9x-45\n\n0\n\nOur mission is to systematically share mathematics information to people around the world and to make it universally accessible and useful.\nThe solution of this question has been explained in a very simple way by a well-known teacher by doing addition, subtraction, and fractions.\nFor complete information on how to solve this question x+10/2+75/5=9x-45, read and understand it carefully till the end.\n\nLet us know how to solve this question x+10/2+75/5=9x-45.\n\nFirst write the question on the page of the notebook.\n\n## How to solve x+10/2+75/5=9x-45\n\nTo solve this question, we will write it in a simple way like this ,\n\n\\displaystyle x+\\frac{{10}}{2}+\\frac{{75}}{5}=9x-45\n\n\\displaystyle x+5+15=9x-45\n\n\\displaystyle x+20=9x-45\n\n\\displaystyle 20+45=9x-x\n\n\\displaystyle 65=8x\n\nor ,\n\n\\displaystyle 8x=65\n\n\\displaystyle x=\\frac{{65}}{8}" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.94320446,"math_prob":0.98085076,"size":12860,"snap":"2022-40-2023-06","text_gpt3_token_len":2927,"char_repetition_ratio":0.1671593,"word_repetition_ratio":0.7506751,"special_character_ratio":0.22799379,"punctuation_ratio":0.088043064,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9981786,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-02T12:12:42Z\",\"WARC-Record-ID\":\"<urn:uuid:7eaa6977-de4f-4e28-a52b-d37aa59bca23>\",\"Content-Length\":\"122968\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dc90ee78-18f2-402f-bb69-271c52839c1c>\",\"WARC-Concurrent-To\":\"<urn:uuid:1b50a6eb-18bb-4e16-8dbe-b4b64e29c416>\",\"WARC-IP-Address\":\"172.67.135.180\",\"WARC-Target-URI\":\"https://mathwaycalculus.com/page/2/\",\"WARC-Payload-Digest\":\"sha1:ENBJMHGTUEZ2P7CFQXWUUKX7IUQRMQRC\",\"WARC-Block-Digest\":\"sha1:GZQJ4FWWJQ5EHUP2MT4UNGGI745KJRW2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500017.27_warc_CC-MAIN-20230202101933-20230202131933-00738.warc.gz\"}"}
http://ftp.ntu.edu.tw/php/manual/ro/function.log1p.php
[ "array ( 0 => 'index.php', 1 => 'PHP Manual', ), 'head' => array ( 0 => 'UTF-8', 1 => 'ro', ), 'this' => array ( 0 => 'function.log1p.php', 1 => 'log1p', ), 'up' => array ( 0 => 'ref.math.php', 1 => 'Funcții matematice', ), 'prev' => array ( 0 => 'function.log10.php', 1 => 'log10', ), 'next' => array ( 0 => 'function.log.php', 1 => 'log', ), 'alternatives' => array ( ), ); \\$setup[\"toc\"] = \\$TOC; \\$setup[\"toc_deprecated\"] = \\$TOC_DEPRECATED; \\$setup[\"parents\"] = \\$PARENTS; manual_setup(\\$setup); manual_header(); ?>\n\n# log1p\n\n(PHP 4 >= 4.1.0, PHP 5, PHP 7)\n\nlog1p Returns log(1 + number), computed in a way that is accurate even when the value of number is close to zero\n\n### Descrierea\n\nlog1p ( float `\\$number` ) : float\n\nlog1p() returns log(1 + `number`) computed in a way that is accurate even when the value of `number` is close to zero. log() might only return log(1) in this case due to lack of precision.\n\n### Parametri\n\n`number`\n\nThe argument to process\n\n### Valorile întoarse\n\nlog(1 + `number`)\n\n### Istoricul schimbărilor\n\nVersiune Descriere\n5.3.0 This function is now available on all platforms\n\n### A se vedea și\n\n• expm1() - Returns exp(number) - 1, computed in a way that is accurate even when the value of number is close to zero\n• log() - Natural logarithm\n• log10() - Base-10 logarithm" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5255218,"math_prob":0.9435398,"size":987,"snap":"2020-24-2020-29","text_gpt3_token_len":327,"char_repetition_ratio":0.16378434,"word_repetition_ratio":0.16959064,"special_character_ratio":0.40324214,"punctuation_ratio":0.22959183,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9847439,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-08T15:14:20Z\",\"WARC-Record-ID\":\"<urn:uuid:e25d779e-1992-47ae-a463-fdc02b6dbe15>\",\"Content-Length\":\"4054\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:168e5d0f-251a-480e-9dcc-c38ca5420792>\",\"WARC-Concurrent-To\":\"<urn:uuid:e1173e0a-87fb-4e47-a62b-c2e430b6ccd5>\",\"WARC-IP-Address\":\"140.112.36.185\",\"WARC-Target-URI\":\"http://ftp.ntu.edu.tw/php/manual/ro/function.log1p.php\",\"WARC-Payload-Digest\":\"sha1:GP76KIF26XYR35EMVUTJSZQ7TCW5K7KZ\",\"WARC-Block-Digest\":\"sha1:ODB3RSJHKOZZQZ2AXIHPP2HZ2FKAV7G5\",\"WARC-Identified-Payload-Type\":\"text/x-php\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655897027.14_warc_CC-MAIN-20200708124912-20200708154912-00159.warc.gz\"}"}
https://www.jiskha.com/questions/1600849/lindsay-draws-a-right-triangle-and-adds-the-measure-of-the-right-angle-and-one-acute
[ "# Math\n\nLindsay draws a right triangle and adds the measure of the right angle and one acute angle. Which is a possible sum of the two angles?\nA) 180\nB) 45\nC) 90\nD) 120\n\nPlease correct me if I am wrong. Thank you for your help.\n\n1. 👍 4\n2. 👎 0\n3. 👁 269\n1. correct\n\n1. 👍 0\n2. 👎 0\n\n1. 👍 0\n2. 👎 0\n\n1. 👍 0\n2. 👎 0\n\n## Similar Questions\n\n1. ### algebra\n\nThe flag of a country contains an isosceles triangle.​ (Recall that an isosceles triangle contains two angles with the same​ measure.) If the measure of the third angle of the triangle is 45​° more than three times nbsp the\n\n2. ### Geometry\n\nName the angle that is supplementary to\n\n3. ### Geometry math\n\n1: what is the correct classification for the triangle shown below? 1:acute, scalene\n\n4. ### Geometry\n\nA right triangle has exterior angles at each of its acute angles with measures in the ratio 13:14. Find the measure of the smallest interior angle of the triangle. 40 (degrees) 50 (degrees) 130 (degrees) 140 (degrees) Please help?\n\n1. ### need help\n\nwhats the definition of a biconditional \"an acute angle is an angle whose measue is less than 90 degress\" 1. if it measure is less than 90 degrees 2.if and only if its measure is less than 90 degrees 3. measure is less than 90\n\n2. ### Math\n\nwhich of the following can represent the measure of the three angle in an acute triangle? A. 59, 61,62 B. 30, 25, 125 C. 56,72,52•• D. 90, 45, 45 Plz correct me!!\n\n3. ### Math\n\nCan someone please check to see if I answered these true and false statements correctly? Thank you! 1. A triangle can have two right angles. (TRUE??) 2. An equilateral triangle is also an acute. (TRUE??) 3. A right triangle can\n\n4. ### math\n\nAn isosceles, obtuse triangle has one angle with a degree measure that is 50% larger than the measure of a right angle. What is the measure, in degrees, of one of the two smallest angles in the triangle? Express your answer as a\n\n1. ### Math\n\nIn Triangle MNP, the measure of angle M is 24degrees, the measure of angle N is five times the measure of P, find the measure of angle N and angle P.\n\n2. ### Math\n\n1. Use the points in the diagram to name the figure shown. Then identify the type of figure. A line goes through points C and D. modifying above Upper C Upper D with two-way-arrow, line modifying above upper D upper C with right" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8829345,"math_prob":0.9729017,"size":2440,"snap":"2020-34-2020-40","text_gpt3_token_len":679,"char_repetition_ratio":0.2089491,"word_repetition_ratio":0.04525862,"special_character_ratio":0.26844263,"punctuation_ratio":0.12007874,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9986512,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-20T08:23:44Z\",\"WARC-Record-ID\":\"<urn:uuid:c9a39888-1df4-4a61-ac80-81b9d2458f15>\",\"Content-Length\":\"20727\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d7cfe82a-525e-42a1-9c8d-e8fef68faf37>\",\"WARC-Concurrent-To\":\"<urn:uuid:f43d361f-73ee-49a3-af6e-a1ff8cb78f7f>\",\"WARC-IP-Address\":\"66.228.55.50\",\"WARC-Target-URI\":\"https://www.jiskha.com/questions/1600849/lindsay-draws-a-right-triangle-and-adds-the-measure-of-the-right-angle-and-one-acute\",\"WARC-Payload-Digest\":\"sha1:GX5ZC2L4KOIN335GPM376TSADW4WCEEL\",\"WARC-Block-Digest\":\"sha1:QONV6DLQYU5ANPWO44D2AMNQUW2I4NF3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400196999.30_warc_CC-MAIN-20200920062737-20200920092737-00579.warc.gz\"}"}
https://www.ashishpaliwal.com/blog/category/apache-spark/
[ "## [Learning Apache Spark with Examples] Simple Aggregation\n\nIn the last we saw Left Outer join. Let’s look at a simple aggregation. Enhancing our Ad example, we would like to see how many Ads from a particular Ad provider did we served. This is a simple scenario of aggregation. We have already seen simple aggregation as part of Word Count example. The code […]\n\n## [Learning Spark with Examples] Left Outer Join\n\nIn the last post, we saw the Inner join example. Time to tweak this into a Apache Spark left outer join example. From our data set of inner join, we may need to have a dataset with all the Ad’s served, along with possible impression, if received. Left Outer join is the way to do. […]\n\n## [Learning Spark with Examples] Inner Join\n\nIn the last post, we saw the famous Word Count example. Let’s move ahead and look at join’s in Spark. Before looking into the join lets look at the data we shall use for joining. The data is factious and kept simple. There are two inputs Ad Input – CSV file with a unique Ad […]\n\n## [Learning Spark with Examples] Famous Word Count\n\nIn the last post we saw filtering, it’s time to see the famous Word Count example. The Code can be found at WordCount.java Let’s see the code, removing the code which we already discussed in last post // Now we have non-empty lines, lets split them into words JavaRDD<String> words = nonEmptyLines.flatMap(new FlatMapFunction<String, String>() { […]\n\n## [Learning Spark with Examples] Line Count With Filtering\n\nIn the last we saw the Line Count example, now lets add filtering to the example, to filter out empty lines. The code can be found here LineCountWithFiltering.java Lets look at the code public class LineCountWithFiltering { public static void main(String[] args) { SparkConf sparkConf = new SparkConf().setAppName(\"File Copy\"); JavaSparkContext sparkContext = new JavaSparkContext(sparkConf); // […]\n\n## [Learning Spark with Examples] Line Count\n\nIn the First post we looked at how to load/save an RDD. In this post we shall build upon the example and count number of lines present in RDD. The code can be found at LineCount.java For complete project refer https://github.com/paliwalashish/learning-spark Lets look at the code public static void main(String[] args) { SparkConf sparkConf = […]" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8793594,"math_prob":0.7610129,"size":2213,"snap":"2020-24-2020-29","text_gpt3_token_len":494,"char_repetition_ratio":0.1330919,"word_repetition_ratio":0.097826086,"special_character_ratio":0.21961139,"punctuation_ratio":0.09090909,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9528603,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-14T03:07:21Z\",\"WARC-Record-ID\":\"<urn:uuid:346791cd-40a7-4de9-b617-91a35f2833db>\",\"Content-Length\":\"63757\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a213b6a2-f0df-48d6-b2c9-1ca70a7611b8>\",\"WARC-Concurrent-To\":\"<urn:uuid:727133d7-b1a9-483b-a0c7-1dd9a13a8d3e>\",\"WARC-IP-Address\":\"208.113.217.126\",\"WARC-Target-URI\":\"https://www.ashishpaliwal.com/blog/category/apache-spark/\",\"WARC-Payload-Digest\":\"sha1:OVX2NIHAXZEI6BDPEPWLCKWUCFHY5LD2\",\"WARC-Block-Digest\":\"sha1:WF3ERECVL2PFJN66ZMF2E6IOT5BBRPUE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593657147917.99_warc_CC-MAIN-20200714020904-20200714050904-00446.warc.gz\"}"}
https://www.myqbook.com/Grade4.aspx
[ "", null, "Home | Programs | Testimonials | Pricing\n\nHere is a list of all of the skills students learn in Grade 4. The skills are organized into categories, and\nyou can move your mouse over any skill name to see a sample question.\n\n Example: Add 3,208 and 6,251.                 1)    9,0592)    9,3593)    9,4594)    9,559 Answer: 9,459 Example:", null, "1)    9,5302)    10,5203)    10,5304)    10,630 Answer: 10,530 Example: A printer can print a maximum of 125 pages in an hour.Another printer can print a maximum of 160 pages in an hour.What would be the maximum number of pages both the printerscan print together in one hour? 1)    1752)    1853)    2754)    285 Answer: 285 Example: ____ + 2,415 = 3,096 1)    5812)    6813)    6824)    781 Answer: 681 Example: Estimate the following by first rounding off eachnumber to the nearest hundred, and then solving:4,278 + 1,413 1)    5,6002)    5,6903)    5,7004)    5,800 Answer: 5,700 Example: Austin bought a bike that cost \\$868 and a PS3that cost \\$545. Estimate the total cost of the bike andthe PS3 by first rounding off the cost of each item to thenearest ten and then adding. 1)    \\$1,4002)    \\$1,4103)    \\$1,4134)    \\$1,420 Answer: \\$1,420 Example: Find the sum: 65 + 219 + 32 + 1,097 1)    1,4032)    1,4133)    1,4194)    1,423 Answer: 1,413 Example: A department store has multi-floor parking. If 225vehicles can be parked on level P1, 125 vehicles on levelP2 and 105 vehicles on level P3, how many vehicles in allcan be parked in the parking space of the department store? 1)    3552)    4253)    4554)    465 Answer: 455\n\nSubtraction\n\nMultiplication\n Example: Which of the following shows next 4 multiples of 7:7, 14, __, __, __, __? 1)    21, 27, 35, 422)    21, 28, 35, 423)    21, 29, 35, 424)    21, 28, 42, 49 Answer: 21, 28, 35, 42 Example: Which of the following is the group of thefirst two common multiples of 4 and 5? 1)    12 and 202)    20 and 253)    20 and 304)    20 and 40 Answer: 20 and 40 Example: How many arrows are there in the following figure?", null, "1)    162)    203)    244)    28 Answer: 20 Example: 17 X 5 = 5 X ____ 1)    52)    153)    174)    25 Answer: 17 Example: 9 X 8 x 1 = 1 X ____ 1)    622)    633)    724)    81 Answer: 72 Example: Solve: 12 x 5 x 0 1)    02)    173)    604)    72 Answer: 0 Example: Multiply by using associative property:(9 x 8) x 5 1)    2602)    3403)    3604)    380 Answer: 360 Example: 6 x _____ = 54 1)    72)    83)    94)    12 Answer: 9 Example: The math worksheets in a book have 25 questionseach on multiplication. If Jason solved 6 such worksheets,which multiplication sentence shows the total number ofmultiplication questions solved by Jason? 1)    25 x 5 = 1252)    25 x 6 = 1503)    30 x 5 = 1504)    30 x 6 = 180 Answer: 25 x 6 = 150 Example: Multiply 40 and 120. 1)    3602)    4803)    3,6004)    4,800 Answer: 4,800 Example: Find the product: 6 x 3 x 9 1)    1122)    1423)    1624)    172 Answer: 162 Example: The cafeteria workers at a school work for 5hours a day, five days a week. If they make \\$10 perhour, how much money does each worker make in aweek? 1)    \\$1002)    \\$1503)    \\$2004)    \\$250 Answer: \\$250 Example: Round off each number to the nearestten and then multiply:39 x 32 1)    9002)    1,0003)    1,2004)    1,600 Answer: 1,200 Example: Which of the following multiplication factis related to 28 ÷ 7? 1)    7 X 2 = 282)    3 X 7 = 283)    7 X 4 = 284)    7 X 5 = 28 Answer: 7 X 4 = 28 Example: There are 21 rows of chairs in a hall and 12 chairs in each row. How can youfind the total number of chairs in the hall? 1)    Add the number of rows of chairs to the number of chairs in each row.2)    Multiply the number of rows of chairs by the number of chairs in each row.3)    Divide the number of rows of chairs by the number of chairs in each row.4)    Divide the number of chairs in each row by the number of rows of chairs. Answer: Multiply the number of rows of chairs by the number of chairs in each row.\n\nDivision\n Example: Which of the following shows all thefactors of 15? 1)    3, 52)    1, 3, 5, 153)    1, 3, 5, 9, 154)    1, 2, 3, 5, 10, 15 Answer: 1, 3, 5, 15 Example: Which of the following is a commonfactor of 9 and 15? 1)    32)    43)    54)    6 Answer: 3 Example: How many groups of 5 arrows each, can bemade from all the arrows shown below in the figure?", null, "1)    72)    83)    94)    10 Answer: 8 Example: ___ ÷ 12 = 7          1)    722)    743)    844)    86 Answer: 84 Example: Solve: 52 ÷ 1          1)    02)    13)    254)    52 Answer: 52 Example: Use the division properties and findthe quotient: 97 ÷ 97 1)    02)    13)    974)    98 Answer: 1 Example: Which of the following is divisibleby 6? 1)    742)    863)    964)    106 Answer: 96 Example: Dividend 154 and divisor 11 1)    92)    123)    144)    24 Answer: 14 Example: Find the remainder:Dividend 60 and divisor 9 1)    52)    63)    74)    8 Answer: 6 Example: Lauren distributed 35 candies equally to her friends.If she distributed the candies equally among 6 friends, whichof the following is correct? 1)    Each friend got 5 candies and 4 were left over.2)    Each friend got 5 candies and 5 were left over.3)    Each friend got 6 candies and 4 were left over.4)    Each friend got 6 candies and 5 were left over. Answer: Each friend got 5 candies and 5 were left over. Example: Estimate the quotient by first rounding offthe numbers to the nearest ten and then dividing:538 ÷ 62 1)    82)    93)    804)    90 Answer: 9 Example: There are 42 students in a class. They need atotal of \\$518 for a field trip. Estimate the amount of moneyeach student should contribute for the field trip by firstrounding off the total money needed for the field trip andthe number of students in the class to the nearest tenand then dividing. 1)    \\$92)    \\$113)    \\$134)    \\$14 Answer: \\$13 Example: Which of the following division fact isrelated to 8 X 9? 1)    68 ÷ 8 = 92)    72 ÷ 9 = 83)    76 ÷ 8 = 94)    84 ÷ 9 = 8 Answer: 72 ÷ 9 = 8 Example: Mr. Wilson had \\$985 in his wallet. Hepaid the telephone bill in advance for six monthswith cash and was left with \\$295. If his telephonebill was a fixed amount of money each month, howmuch money did Mr. Wilson pay for each month? 1)    \\$1102)    \\$1153)    \\$1204)    \\$125 Answer: \\$115\n\nNumber Concepts\n Example: Choose “Forty five thousand, six hundredeighty nine” in figures. 1)    45,0892)    45,6893)    450,6894)    40,000,5,000, 600,89 Answer: 45,689 Example: Which of the following shows 75,909 in words? 1)    Seven five, nine zero nine2)    Seven thousand, nine hundred nine3)    Seventy thousand, five thousand nine hundred nine4)    Seventy five thousand, nine hundred nine Answer: Seventy five thousand, nine hundred nine Example: Five hundred five thousand, one hundred twocandidates appeared for an exam. Three thousand, onehundred of those candidates did not qualify for the exam.Which of the following shows the number of candidateswho qualified for the exam? 1)    50,0022)    302,0023)    501,0024)    502,002 Answer: 502,002 Example: The table below shows the areas of five statesin the US.According to the table, which is the largeststate in total area?", null, "1)    Texas2)    Alaska3)    Montana4)    California Answer: Alaska Example: Which of the following is correct? 1)    6,345 < 6,3252)    6,345 > 6,3253)    6,345 > 6,3544)    6,345 < 6,025 Answer: 6,345 > 6,325 Example: Container A has 6,342 gallons of water. ContainerB has 6,423 gallons of water. Container C has 6,432 gallonsof water. Which statement is true about the quantity of thewater in the containers? 1)    Container A has more water than Container B.2)    Container B has more water than Container A.3)    Container B has more water than Container C.4)    Container A has more water than Container C. Answer: Container B has more water than Container A. Example: Which of the following shows the set of threedigit numbers arranged from least to greatest? 1)    207, 534, 975, 7532)    534, 207, 753, 9753)    975, 753, 534, 2074)    207, 534, 753, 975 Answer: 207, 534, 753, 975 Example: Diameter of the planet Venus is 7,521 miles,diameter of the planet Earth is 7,926 miles and diameterof the planet Mars is 4,222 miles. Which of the followingshows the names of the planets arranged from greatestto least according to their diameters? 1)    Earth, Venus, Mars2)    Earth, Mars, Venus3)    Venus, Mars, Earth4)    Mars, Venus, Earth Answer: Earth, Venus, Mars Example: Find the next number in the sequence below:9,112, 9,322, 9,532, 9,742, _____ 1)    9,8522)    9,8623)    9,9524)    9,962 Answer: 9,952 Example: Round off 48,792 to the nearest hundred. 1)    48,0002)    48,7003)    48,8004)    49,000 Answer: 48,800 Example: An organization has 45,892 employees. The managementis planning to gift a watch to each of its employees on the occasionof the company’s Silver Jubilee. Find the estimated number of watchesthe management should purchase to the nearest thousand. 1)    40,0002)    45,0003)    46,0004)    56,000 Answer: 46,000\n\nPlace value\n\nTime\n\nFractions\n\nData and Graph\n\nStatistics\n Example: Find the range for the given set of data:4, 6, 11 and 2 1)    62)    73)    84)    9 Answer: 9 Example: The table below shows the dollar amountsJessica earned each day during a three day bake sale.Study the table given below and find the mean of themoney she earned in a day.", null, "1)    102)    123)    144)    18 Answer: 12 Example: Robert recorded the points scored by hisclassmates in a quiz. The points are given below.Find the median for the given set of data.5, 8, 8, 6, 7 and 9 1)    72)    7.53)    84)    8.5 Answer: 7.5 Example: Find the mode of the given set of data.“9 10 7 10 3 3 7 10” 1)    32)    73)    94)    10 Answer: 10 Example: The stem-and-leaf plot shown belowrepresents the points scored by the studentsof a class in their midterm exams. Study thestem-and-leaf plot and list the top 3 pointsscored in the midterm exam.", null, "1)    26, 34, 432)    80, 74, 683)    74, 68, 654)    87, 86, 80 Answer: 87, 86, 80 Example: The stem and leaf plot below showsthe funds raised by Ashley from people in herneighborhood for American cancer society. Studythe stem-and-leaf plot and find the range of thedata set given below.", null, "1)    432)    443)    454)    46 Answer: 45\n\nProbability\n Example: In a random drawing of one marble from a bag of marbles, whichof the following event is certain? 1)    Pulling out a red marble from a bag that has only blue marbles.2)    Pulling out a red marble from a bag that has green and red marbles.3)    Pulling out a red marble from a bag that has blue, green and red marbles.4)    Pulling out a red marble from a bag that has only red marbles. Answer: Pulling out a red marble from a bag that has only red marbles. Example: If you pick a marble at random from the marblesgiven below, how likely is it that you will pick a blue one?", null, "1)    Certain2)    Most likely3)    Least likely4)    Impossible Answer: Most likely Example: How likely is it that the pointer in thespinner below will stop on green?", null, "1)    Certain2)    Most likely3)    Least likely4)    Impossible Answer: Least likely Example: Choose “Certain” or “Impossible” for the eventgiven below: Pulling out a black pen from a boxwith all red pens. 1)    Certain2)    Impossible Answer: Impossible Example: If you pick one marble from the marbles belowwithout looking, which of the following is the most likelyoutcome?", null, "1)    Picking a green marble2)    Picking a gray marble3)    Picking a blue marble4)    Picking an orange marble Answer: Picking a gray marble Example: Brandon is planning his workout. He can either skate or run.For each activity, he can go over the hills or into the valley. How manydifferent combinations of workout can Brandon choose from? 1)    2 combinations2)    3 combinations3)    4 combinations4)    6 combinations Answer: 4 combinations Example: What is the probability of selecting an oddnumber in a random drawing of one number from thegroup of the 10 numbers given below?“21, 23, 24, 35, 43, 37, 52, 19, 20 and 41” 1)", null, "2)", null, "3)", null, "4)", null, "Answer:", null, "Example: The table below shows 40 children’sfavorite dessert. A child is selected at randomfrom these 40 children. What is the probabilitythat the selected child’s favorite dessert iseither Popsicle or cookie?", null, "1)", null, "2)", null, "3)", null, "4)", null, "Answer:", null, "Geometry\n\nDecimals\n Example: Choose a decimal to represent thecolored part of the figure given below:", null, "1)    0.0852)    0.153)    0.854)    8.5 Answer: 0.85 Example: Choose a decimal for the given number:Twenty one and five hundredths 1)    0.2152)    21.0053)    21.054)    21.5 Answer: 21.05 Example: In the number 32.561, which digit isin the tenths place? 1)    12)    23)    54)    6 Answer: 5 Example:", null, "1)    0.022)    0.083)    0.24)    0.8 Answer: 0.8 Example: Convert 0.35 into a fraction in itssimplest form. 1)", null, "2)", null, "3)", null, "4)", null, "Answer:", null, "Example: Choose “True” or “False”:31.06 > 31.60 1)    True2)    False Answer: False Example: Order the following decimals from greatest to least.7.57, 7.05, 7.75, 7.50 1)    7.75, 7.57, 7.05, 7.502)    7.57, 7.75, 7.50, 7.053)    7.75, 7.57, 7.50, 7.054)    7.05, 7.50, 7.57, 7.75 Answer: 7.75, 7.57, 7.50, 7.05 Example: Lauren, Sara, Anna and Andrea participated in a swimmingcompetition. Lauren took 40.8 seconds, Sara 40.75 seconds, Anna40.79 seconds and Andrea 40.82 seconds to complete one lap ofthe swimming pool. Which of the following option best representsthe names of the girls arranged from fastest to the slowestparticipant? 1)    Sara, Anna, Lauren, Andrea2)    Sara, Anna, Andrea, Lauren3)    Sara, Lauren, Anna, Andrea4)    Andrea, Lauren, Anna, Sara Answer: Sara, Anna, Lauren, Andrea Example:", null, "1)    53.5032)    53.5933)    53.6034)    53.99 Answer: 53.603 Example: Austin needs to mail 3 items to his grandma.The weight of the first item is 21.30 ounces, the weightof the second item is 15.21 ounces and the weight ofthe third item is 42.25 ounces. What is the total weightof all the three items that Austin needs to mail to hisgrandma? 1)    76.76 ounces2)    77.56 ounces3)    78.76 ounces4)    79.76 ounces Answer: 78.76 ounces Example: Subtract:", null, "1)    11.362)    12.343)    12.364)    12.46 Answer: 12.36 Example: Morgan bought 3 kittens for \\$42.80 from a pet shop.Two of the kittens were on sale for \\$11.20 each. How muchmoney did Morgan pay for the third kitten? 1)    \\$18.42)    \\$20.43)    \\$21.44)    \\$31.4 Answer: \\$20.4 Example: Find the product of 5 and 2.9. 1)    10.52)    12.53)    14.54)    15.5 Answer: 14.5 Example: Amber bought 6 bottles of apple juice. If eachbottle was for \\$1.4, how much money did Amber pay inall for the juice bottles? 1)    \\$6.42)    \\$7.43)    \\$8.44)    \\$9.4 Answer: \\$8.4 Example: Choose the correct decimal for themixed number:", null, "1)    31.00032)    31.0033)    31.034)    31.3 Answer: 31.003 Example:", null, "1)    38.32)    38.43)    39.34)    39.11 Answer: 38.3 Example:", null, "1)    120.5 cm2)    121.5 cm3)    121.6 cm4)    122.5 cm Answer: 121.5 cm Example: Round off 102.28 to the nearest tenth. 1)    1002)    102.23)    102.34)    103 Answer: 102.3 Example: 12.12, 13.32, ___, 15.72, 16.92 1)    13.522)    14.223)    14.424)    14.52 Answer: 14.52 Example: Matthew purchased a microwave and paid \\$75 asdown payment. He paid the rest of the money in 5 equalinstallments. If the cost of the microwave was \\$320.50,how much money did he pay in each installment? 1)    \\$45.102)    \\$47.103)    \\$48.254)    \\$49.10 Answer: \\$49.10\n\nMeasurement" ]
[ null, "https://www.myqbook.com/Images/New/logo.jpg", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/870Q.gif", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/887Q.gif", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/912Q.gif", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/933Q.gif", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/1064Q.gif", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/1067Q.gif", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/1068Q.gif", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/1070Q.gif", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/1071Q.gif", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/1073Q.gif", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/image_019.jpeg", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/image_024.jpeg", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/image_055.jpeg", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/image_011.jpeg", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/image_055.jpeg", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/1076Q.gif", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/image_019.jpeg", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/image_011.jpeg", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/image_032.jpeg", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/image_058.jpeg", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/image_032.jpeg", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/1100Q.gif", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/1104Q.gif", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/image_039.jpeg", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/image_041.jpeg", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/image_018.jpeg", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/image_015.jpeg", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/image_018.jpeg", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/1109Q.gif", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/1111Q.gif", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/1115Q.gif", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/1116Q.gif", null, "https://www.myqbook.com/DisplayList_Files/CommonGrade4_files/1117Q.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7405767,"math_prob":0.9731482,"size":29506,"snap":"2019-51-2020-05","text_gpt3_token_len":10406,"char_repetition_ratio":0.18208934,"word_repetition_ratio":0.081636086,"special_character_ratio":0.39371654,"punctuation_ratio":0.16275167,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9808783,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68],"im_url_duplicate_count":[null,null,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,10,null,5,null,10,null,10,null,10,null,5,null,10,null,10,null,10,null,5,null,10,null,5,null,5,null,5,null,5,null,10,null,5,null,10,null,5,null,5,null,5,null,5,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-06T04:11:45Z\",\"WARC-Record-ID\":\"<urn:uuid:5920f33a-2e2f-4c0f-8b5f-c42b7a8e8c4f>\",\"Content-Length\":\"337598\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d910a3b1-3912-4cf9-bc8c-d3ecd779d764>\",\"WARC-Concurrent-To\":\"<urn:uuid:5f39336a-678d-4094-8cb0-36b6397dbd93>\",\"WARC-IP-Address\":\"149.56.96.200\",\"WARC-Target-URI\":\"https://www.myqbook.com/Grade4.aspx\",\"WARC-Payload-Digest\":\"sha1:ZDLM3L4PEEWXW637ZY3GV6R6XW7RGWCZ\",\"WARC-Block-Digest\":\"sha1:N5ANUYDRYQ24UWR5DUZ5QZDI76K72ARN\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540484477.5_warc_CC-MAIN-20191206023204-20191206051204-00358.warc.gz\"}"}
https://www.calc-online.xyz/prime-factors
[ "# Prime factors\n\n### Help\n\nPrime factors calculator to determine the prime factors of any number.\n\nTo use this calculator, follow these steps:\n\n1. Fill in the Number field with the value you want to calculate the prime factors.\n2. Click the Calculate button\n3. See results\n\nPrime factors calculator to determine the prime factors of any number." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.81421053,"math_prob":0.98558104,"size":316,"snap":"2021-04-2021-17","text_gpt3_token_len":62,"char_repetition_ratio":0.22115384,"word_repetition_ratio":0.28,"special_character_ratio":0.19303797,"punctuation_ratio":0.08928572,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99283236,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-18T14:19:20Z\",\"WARC-Record-ID\":\"<urn:uuid:a20d384d-7620-4fc3-a466-b04cbe5b551b>\",\"Content-Length\":\"28597\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5e9e39d4-b138-4f84-b3b3-263899669002>\",\"WARC-Concurrent-To\":\"<urn:uuid:44a7b2f2-63a3-4ead-b660-3b3d8ab22a4c>\",\"WARC-IP-Address\":\"3.12.196.158\",\"WARC-Target-URI\":\"https://www.calc-online.xyz/prime-factors\",\"WARC-Payload-Digest\":\"sha1:UIUAUGY7DBDFEO3ZMPGU2K46ESIEQSZU\",\"WARC-Block-Digest\":\"sha1:HOEDWUT5JBRGRQCNC2UFDKPEQAEWGU3P\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038492417.61_warc_CC-MAIN-20210418133614-20210418163614-00502.warc.gz\"}"}
https://philosophy.stackexchange.com/questions/12712/does-a-universal-affirmation-entail-an-existential-affirmation
[ "# Does a universal affirmation entail an existential affirmation?\n\nSuppose we have a universal affirmative statement (a statement of the form: ∀x(Px → Qx)), such as \"all dogs go to heaven\". Does an existential affirmative statement (a statement of the form: ∃x(Px ∧ Qx)) such as \"there exists a dog, such that she went to heaven\" follow from that?\n\n• No. Since ∀x(Dx → Hx) is simply ∀x(¬Dx v Hx), any model with no dogs in it (¬Dx) will make the universal vacuously true. But &exist;x(Dx &and; Hx) is true only in models where there exists at least one dog who went to heaven. That suggests a counterexample: pick an arbitrary empty model and it will satisfy the universal vacuously and will fail to satisfy the existential. – Hunan Rostomyan May 23 '14 at 4:26\n• Vaguely related: What do “universal” and “existential” mean in logic? – DBK May 24 '14 at 23:03\n\nI think that we need a more precise discussion of the above topic.\n\nFirst : are we assuming classical logic ? From your question, it is not so clear.\n\nBut, see comment above, if we \"equate\" ∀x(Px → Qx) with ∀x(¬Px v Qx), then we are assuming so, because (P → Q) is equivalent to (¬P v Q) in classical logic.\n\nSecond, it seems to me that you are mixing\" two different questions :\n\n(i) Does a universal affirmation entail an existential affirmation?\n\nwith :\n\n(ii) does an universal affirmative statement, like : ∀x(Px → Qx) entails an existential affirmative statement, like : ∃x(Px ∧ Qx) ?\n\nFor (i), in classical logic the general answer is : YES. We have : ∀xDx |= ∃xDx, because the model-theoretic semantics for first-order logic assume that every interpretation has a not-empty domain.\n\nThus, if in the interpretation I with domain D we have that all things are Dogs, being the domain D not-empty, for sure there is at least one Dog.\n\nThe case of (ii) is different. Again, if we assume classical logic, we are licensed to \"traslate\" ∀x(Px → Qx) with ∀x(¬Px v Qx).\n\nNow, the question is :\n\ndoes ∀x(¬Px v Qx) |= ∃x(Px ∧ Qx) ?\n\nOf course : NO. As per Hunan's comment above :\n\nany interpretation with no dogs in it will make the universal vacuously true, because (¬Dx) is true. But if there are no dogs, then (Dx ∧ Hx) is always false.\n\nBut obviously [see (i)] we have : ∀x(Px → Qx) |= ∃x(Px → Qx).\n\nDomain: set of dogs\n\nH__: __ goes to heaven\n\n'For all d, Hd' is true for every d that exists. It doesn't state that any d does exist. So no, the universal quantifier doesn't imply the existential quantifier.\n\nHowever, for arbitrary domains X,Y and arbitrary predicate Fxy, 'there exists y such that for all x, Fxy' implies that 'for all x, there exists y such that Fxy' since the existing y is the same for every x, but 'for all x, there exists y such that Fxy' does not imply 'there exists y such that for all x, Fxy' since it is possible that the existing y is different for every x.\n\n• Glad to see you participating! – Hunan Rostomyan May 23 '14 at 4:55\n• You have to specify that you are working with so-called Free Logic, otherwise the \"standard\" semantics for first-order logic assumes that every domain of interpreattion must be not empty. Thus, the domain = set of dogs include at least one dog and if the sentence 'For all d, Hd' is true then, being that all dogs go to heaven, for sure at least one dog goes; i.e. 'For all d, Hd' entails 'Exists d, Hd'. – Mauro ALLEGRANZA May 23 '14 at 13:41\n\nMathematically, your first statement means: for all x, when x is P, then it is also Q (e.g. for all things x, when x is a unicorn, its skin has white color). From this it does not follow that there exists a unicorn." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9125386,"math_prob":0.9010938,"size":3481,"snap":"2019-43-2019-47","text_gpt3_token_len":959,"char_repetition_ratio":0.11561691,"word_repetition_ratio":0.067381315,"special_character_ratio":0.2640046,"punctuation_ratio":0.13636364,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97551113,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-12T10:27:51Z\",\"WARC-Record-ID\":\"<urn:uuid:9363fde6-7dae-44e7-a055-bea7afb57594>\",\"Content-Length\":\"147520\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d70e4991-19cf-41c9-87c2-a4b173e476ee>\",\"WARC-Concurrent-To\":\"<urn:uuid:dcc4c928-8160-4d40-97f9-250dae2705fe>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://philosophy.stackexchange.com/questions/12712/does-a-universal-affirmation-entail-an-existential-affirmation\",\"WARC-Payload-Digest\":\"sha1:WG3T3OUQFBFQ6Y3WB6X7Y47TXWMIAHCE\",\"WARC-Block-Digest\":\"sha1:FJ2TEDA6HWWVSBQVC5756BZ2NJC4WHET\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496665521.72_warc_CC-MAIN-20191112101343-20191112125343-00532.warc.gz\"}"}
https://www.queensupgme.com/electromagnetic-waves/
[ "家", null, "电磁波\n\n均匀平面波和波浪方程\n\n∇.d~ = 0\n\n∇。B̅= 0\n\nB̅=µh̅\n\nd̅=єe̅\n\nJ̅= σ e̅\n\n∇。B叔̅=叔。(叔h̅)=叔。H̅= 0\n\n∇。H̅= 0 ..........(1)\n\n∇。D̅=叔。(є e̅)= є叔。E̅= 0\n\n∇。E̅= 0 ..........(2)\n\n∇×e̅= - μ∂H̅/∂T.........(3)\n\n∇×H̅=∂(єe̅)/∂t\n\n∇×h̅=σe̅+є∂∂/∂t.........。(4)\n\n∇×∇×e̅= - μs×∂H̅/∂T\n\n∇×∇×H̅=∇×(Σe̅)+∇(є∂e̅/∂T)\n\n∇和∂/∂t是相互独立的,因此operator可以交换为\n\n∇×∇×h̅=σ(∇×e̅)+є×∂(∇×e̅)/∂t\n\n∇×∇×e̅= - μσ×∂e̅/∂t - μє(∂2E̅/∂t2\n\n∇×∇×H̅=σ(−µ∂H̅/∂t) +є×∂/∂t(−µ∂H̅/∂t)\n\n= - μΣ(∂H̅/∂T) - μє(∂2H̅/∂t2\n\n∇(∇.e̅) - ∇2E̅=−µ×∂/∂t (σ E̅+ є∂E̅/∂t)\n\n∇(∇.h̅) - ∇2H∞= - μΣ×∂e̅/∂T - μє(∂2E̅/∂t2\n\n- ∇2e̅= - μΣ×∂e̅/∂t - μ∈(∂2E̅/∂t2\n\n2e̅=μΣ×∂e̅/∂t+μ∈(∂2E̅/∂t2)......(5)\n\n-2H∞= - μΣ(∂H̅/∂T) - μ≥(∂2H̅/∂t2\n\n2H̅=µσ(∂H̅/∂t) +µє(∂2H̅/∂t2)......(6)\n\n2E̅=µo єo(∂. E2E̅/∂t2\n\n2H̅=μOєO(∂2H̅/∂t2\n\nEM波在Z平面的方向上行进,因此e≥和H∞的载体均与x和y无关。因此,vectoree̅和h̅是z和t的功能。因此,上述等式变为\n\n2E̅/∂z2=μoєo(∂2E̅/∂t2\n\n2E̅/∂t2=(1/µo єo)(∂2E̅/∂z2\n\nV.2=(1 /μє)\n\n2E̅/∂t2= v2(∂.2E̅/∂z2\n\n平面波传播", null, "电磁波极化\n\n线性极化\n\nθ= tan.1EY / EX.", null, "", null, "圆偏振", null, "椭圆形极化", null, "电磁波在不同介质中的传播\n\n2e̅=μΣ×∂e̅/∂t+μ∈(∂2E̅/∂t2\n\n2H̅=µσ(∂H̅/∂t) +µє(∂2H̅/∂t2\n\n2µσ × (jw E̅)+µє (jw)2e̅.\n\n2E̅= [jwµ(σ + jw є)] E̅\n\n2H∞= [JWμ(Σ+ JWє)]H∞\n\n2E̅=ɣ2e̅.\n\n2H̅=ɣ2H\n\nɣ=√[jwµ(σ + jw є)] = α + j β\n\nα= W-((μ/ 2)√(1 +(σ/wє)2)) - 1)\n\nβ= W =((μє/ 2)√(1 +(σ/wє)2)))+ 1\n\nη=√[(jwμ)/(σ+ jwє)]\n\nα= 0和\n\nβ = w√(µo єo)\n\n无损介质中的均匀平面波\n\n=(1 /√(μR= O = R))= 1 /(√(μO= O)√(μRєr)))= 1 /(√(μO= O)/√(μRєr)))\n\nɣ=√[jwμ(σ+ jwє)] m-1\n\nɣ= +/- jw√(μє)m-1\n\nη =√[(jwµ)/ (σ + jw є)]欧姆\n\n=√(μO/єO)√(μr/єr)\n\n=ηo∈(μr/єr)\n\nη=377√(μr/єr)欧姆\n\n恒定电介质中的均匀平面波\n\nɣ=√[jwµ(σ + jw є)]\n\nɣ=√[JWє(1 +(Σ/ jwє))jwμ]\n\nη=√[(jwμ)/(σ+ jwє)]\n\nη = |η|∠ӨN.欧姆。\n\nη=√[(jwμ)/(σ+ jwє)]\n\n=√[(jwμ)/ jwє(1 +(σ/ jwє)]\n\nη =(√(µ/ є))(1 /√(1 - j (σ/ w є))欧姆\n\nөN.= 1 /2 [π/2] - tan1(wє/σ)]\n\nөN.=(π/ 4)\n\nөN.= 0.\n\n一个反应\n\n1.", null, "帕潘 说:\n\n清楚地解释道。我很喜欢。", null, "" ]
[ null, "https://www.queensupgme.com/wp-content/uploads/2021/04/Arrow.png", null, "https://www.queensupgme.com/wp-content/uploads/2015/09/Electric-and-magnetic-field-vectors-for-a-uniform-plane-wave.jpg", null, "https://www.queensupgme.com/wp-content/uploads/2015/09/Linear-Polarization-300x276.jpg", null, "https://www.queensupgme.com/wp-content/uploads/2015/09/Polarization-of-EM-Waves.jpg", null, "https://www.queensupgme.com/wp-content/uploads/2015/09/Circular-Polarization.jpg", null, "https://www.queensupgme.com/wp-content/uploads/2015/09/Elliptical-Polarization.jpg", null, "https://secure.gravatar.com/avatar/dddae5da2a870b2d4a2ea1847a0ba652", null, "https://www.queensupgme.com/wp-content/uploads/2021/03/electronicshub-Favicon-150x150.png", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.96514773,"math_prob":0.9991698,"size":6541,"snap":"2022-05-2022-21","text_gpt3_token_len":7203,"char_repetition_ratio":0.06180205,"word_repetition_ratio":0.0,"special_character_ratio":0.24751566,"punctuation_ratio":0.07700422,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9900367,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,null,null,2,null,2,null,2,null,2,null,2,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-19T23:43:24Z\",\"WARC-Record-ID\":\"<urn:uuid:a4475a69-555b-48da-bead-6e8c823838e0>\",\"Content-Length\":\"98425\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2b185b39-631b-401f-b4bc-51a07e947648>\",\"WARC-Concurrent-To\":\"<urn:uuid:25c34060-dac0-4247-bf0c-1b69c9f64976>\",\"WARC-IP-Address\":\"198.1.167.228\",\"WARC-Target-URI\":\"https://www.queensupgme.com/electromagnetic-waves/\",\"WARC-Payload-Digest\":\"sha1:4X6ZXMHUUK3FKDO6MV5TBXWPYRNX2XX6\",\"WARC-Block-Digest\":\"sha1:QW3BEWKMRY6W6KHN4T437YEW4TKOM3AI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320301592.29_warc_CC-MAIN-20220119215632-20220120005632-00346.warc.gz\"}"}
http://wikinotes.ca/MATH_242/summary/fall-2012/exercises-for-chapter-1-preliminaries
[ "Exercises for Chapter 1: Preliminaries", null, "Student-provided answers to the exercises found in Chapter 1: Preliminaries of Introduction to Real Analysis by Robert G. Bartle and Donald R. Sherbert, third edition (not to be handed in and thus not marked). The material covered in this chapter (along with the material covered in Appendix A) is assumed to be familiar to the student. It is recommended that you attempt these exercises on your own before viewing the proposed solutions below. The content on this page is solely intended to function as a study aid for students and should constitute fair dealing under Canadian copyright law.\n\n1Section 1.1¶\n\n1.1Question 1¶\n\nIf $A$ and $B$ are sets, show that $A \\subseteq B$ if and only if $A \\cap B = A$.\n\nFirst, we show that if $A \\subseteq B$, then $A \\cap B = A$. Well, if $a \\in A$, then $a \\in B$ by virtue of the fact that $A$ is a subset of $B$. So every element in $A$ is also in $B$. Recall that the intersection of $A$ and $B$ contains all elements that are contained in both $A$ and $B$. An element $b \\in B$ that is not an element of $A$ would also not be in their intersection. So the intersection $A \\cap B$ would only contain elements in $A$, and so $A cap B \\subseteq A$. Since every element in $A$ is also an element of $A \\cap B$ (as $A \\subseteq B$) then we also have that $A \\subseteq A \\cap B$ and so $A \\cap B = A$.\n\nNext, we prove the statement in the other direction: if $A \\cap B = A$, then $A \\subseteq B$. $A \\cap B = A$ tells is that for any $a \\in A$, $a \\in A \\cap B$ as well. So all the elements that are in $A$ are also part of the intersection of $A$ and $B$. But this can only happen if all the elements in $A$ are also in $B$, and so it must be that $A \\subseteq B$.\n\nSince $A \\subseteq B$ and $B \\subseteq$, $A = B$. $\\blacksquare$\n\n1.2Question 2¶\n\nProve the second De Morgan Law: $A \\setminus (B \\cap C) = (A \\setminus B) \\cup (A \\setminus C)$.\n\nFirst, we show that $(A \\setminus (B \\cap C)) \\subseteq ((A \\setminus B) \\cup (A \\setminus C))$. Let $d \\in (A \\setminus (B \\cap C))$ represent an arbitrary element in that set. $d$ must be in $A$, but must not be in the intersection of $B$ and $C$, meaning that it can be at most one of $B$ or $C$. If it is in $B$ but not in $C$, then it is in $A \\setminus C$ but not $A \\setminus B$; if it is $C$ but not in $B$, then it is in $A \\setminus B$ but not $A \\setminus C$; and if it is in neither $B$ nor $C$ then it is in both $A \\setminus B$ and $A \\setminus C$. In all three cases, $d$ must be in the union of $A \\setminus B$ and $A \\setminus C$, and so any element in $A \\setminus (B \\cap C)$ must also be in $(A \\setminus B) \\cup (A \\setminus C)$. This tells us that $(A \\setminus (B \\cap C)) \\subseteq ((A \\setminus B) \\cup (A \\setminus C))$, which completes the first part of the proof.\n\nNext, we show that $((A \\setminus B) \\cup (A \\setminus C)) \\subseteq (A \\setminus (B \\cap C))$. Let $e \\in ((A \\setminus B) \\cup (A \\setminus C))$. $e$ must be in $A$, and at most one of $B$ or $C$ (for if it were in both, then neither $A \\setminus B$ nor $A \\setminus C$ would contain it and so it would not be in the union). Consequently, $e \\notin B \\cap C$, as it cannot be in both $B$ and $C$. Therefore, $e \\in (A \\setminus (B \\cap C))$, and so $((A \\setminus B) \\cup (A \\setminus C)) \\subseteq (A \\setminus (B \\cap C))$.\n\n$\\blacksquare$\n\n1.3Question 3¶\n\nProve the distributive laws:\n(a) $A \\cap (B \\cup C) = (A \\cap B) \\cup (A \\cap C)$\n(b) $A \\cup (B \\cap C) = (A \\cup B) \\cap (A \\cup C)$\n\n(a) Let $d \\ in A \\cap (B \\cup C)$. So $d$ must be in $A$, and it must also be in $B \\cup C$, i.e., either $B$ or $C$ or both. If it is in $B$, and not $C$, it is in $A \\cap B$, but not $A \\cap C$; consequently, it would be in $(A \\cap B) \\cup (A \\cap C)$. Similarly, if $d \\in C$ and $d \\notin B$, then it is in $A \\cap C$ but not $A \\cap B$ and so it would again be in their union. If it is in both $B$ and $C$, then it would be in both $A \\cap B$ and $A \\cap C$ and so would again be in their union. Consequently, for any $d \\in A \\cap (B \\cup C)$, $d \\in (A \\cap B) \\cup (A \\cap C)$, and so $A \\cap (B \\cup C) \\subseteq (A \\cap B) \\cup (A \\cap C)$.\n\nNow we prove the other direction. Let $e \\in (A \\cap B) \\cup (A \\cap C)$. $e$ must be in either $A \\cap B$ or $A \\cap C$, or both. If it is the first case, then it's in $A \\cap B$ and so in both $A$ and $B$, but not $C$. In this case it would be in $A$, and in the union of $B \\cup C$, and consequently it would be in $A \\cap (B \\cup C)$. The argument is pretty similar for the other cases. Consequently, for any $e \\in (A \\cap B) \\cup (A \\cap C)$, $e \\in A \\cap (B \\cup C)$, and so $(A \\cap B) \\cup (A \\cap C) \\subseteq A \\cap (B \\cup C)$.\n\nBy the previous parts, we conclude that $A \\cap (B \\cup C) = (A \\cap B) \\cup (A \\cap C)$\n\n(b) This proof is pretty similar to the above.\n\n1.4Question 4¶\n\n(a) Show that the symmetric difference (can also be thought of as XOR), $D$, of $A$ and $B$ is equivalent to $(A \\setminus B) \\cup (B\\setminus A)$.\n(b) Show that $D$ is also given by $(A \\cup B) \\setminus (A \\cap B)$.\n\n(a) Let $d \\in D$. From the definition, we know that $d$ is in exactly one of $A$ and $B$. If $d \\in A$, then $d \\notin B$, and so it would be in $A \\setminus B$. Otherwise, if $d \\in B$, then $d \\notin A$, and so $d \\in (B \\setminus A)$. Either way, it would be in the unions of $A \\setminus B$ and $B \\setminus A$. Consequently, for any $d \\in D$, $d \\in (A \\setminus B) \\cup (B \\setminus A)$, and so $D \\subseteq (A \\setminus B) \\cup (B \\setminus A)$.\n\nNow we prove the other direction. Let $e \\in (A \\setminus B) \\cup (B \\setminus A)$. So it must be in $(A \\setminus B)$, or $(B \\setminus A)$, or both. If it is in only $(A \\setminus B)$, then it is in $A$ and not $B$, so it is in $D$ by definition. Similarly if it is in $B \\setminus A$. It can't actually be in both, for then it would have to be in $A$ and not in $B$, and also in $B$ but not in $A$, which is a contradiction. So it must be that $e \\in D$. Consequently, for any $e \\in (A \\setminus B) \\cup (B \\setminus A)$, $e \\in D$, and so $(A \\setminus B) \\cup (B \\setminus A) \\subseteq D$.\n\nBy the previous parts, we conclude that $(A \\setminus B) \\cup (B \\setminus A) = D$.\n\n(b) If $d \\in D$, then it could be in $A$ but not $B$ or $B$ but not $A$. If it's in the former, then it's in $A \\cup B$ but not in $A \\cap B$, so it would be in $(A \\cup B) \\setminus (A \\cap B)$. Similarly, if it's in the latter, it would also be in $(A \\cup B) \\setminus (A \\cap B)$. Consequently, for any $d \\in D$, $d \\in (A \\cup B) / (A \\cap B)$ and so $D \\subseteq (A \\cup B) / (A \\cap B)$.\n\nNow we prove the other direction. Let $e \\in (A \\cup B) / (B \\cup A)$. So $e$ must be in at least one of $A$ and $B$, to be in their union, but it cannot be in their intersection, so it cannot be in both $A$ and $B$. Consequently, it must be in exactly one of $A$ or $B$, which is in fact the definition of $D$. So $(A \\cup B) / (A \\cap B) \\subseteq D$.\n\nBy the previous parts, we conclude that $D = (A \\cup B) / (A \\cap B)$.\n\n1.5Question 5¶\n\nFor each $n \\in \\mathbb{N}$, let $A_n = \\{(n + 1)k : k \\in \\mathbb{N}\\}$.\n\n(a) What is $A_1 \\cap A_2$?\n(b) Determine the sets $\\bigcup \\{A_n:n\\in \\mathbb{N}\\}$ and $\\bigcap \\{A_n : n \\in \\mathbb{N}\\}$.\n\n(a) $A_1 = \\{2, 4, 6, 8, \\ldots \\}$. $A_2 = \\{3, 6, 9, 12, \\ldots \\}$. Their intersection is given by $\\{6, 12, 18, 24, \\ldots \\}$, i.e., $\\{6k: k \\in \\mathbb{N}\\}$. So, all numbers that are divisible by the product of $(1 + 1)$ and $(2 + 1)$, which is 6.\n\n(b) The union is $\\{n: n \\in \\mathbb{N} / \\{1\\}\\}$ (so $\\{2, 3, 4, 5, \\ldots \\}$). The intersection is empty.\n\n1.6Question 6¶\n\nJust diagrams. Easy enough. Skipping.\n\n1.7Question 7¶\n\nLet $A = B = \\{x \\in\\mathbb{R}: -1 \\leq x \\leq 1\\}$ and consider the subset $C = \\{(x, y): x^2 + y^2 = 1\\}$ of $A \\times B$. Is $C$ a function?\n\nNo. There would have to be a unique $y$ for each $x$. But for $x = 0$, for example, both $y = 1$ and $y = -1$ satisfy the equation, and so we have $(0, 1)$ and $(0, -1)$ in the subset. But that violates one of the defining properties of a function (specifically, the \"vertical line test\").\n\n1.8Question 8¶\n\nLet $f(x) = \\frac{1}{x^2}$, $x \\neq 0$, $x \\in \\mathbb{R}$.\n\n(a) Determine the direct image $f(E)$ where $E = \\{x \\in \\mathbb{R}: 1 \\leq x \\leq 2\\}$.\n(b) Determine the inverse image $f^{-1}(G)$ where $G = \\{x \\in \\mathbb{R}: 1 \\leq x \\leq 4\\}$.\n\n(a) $f(E) = \\{x \\in \\mathbb{R}: \\frac{1}{4} \\leq x \\leq 1\\}$\n(b) The singleton $\\{1\\}$.\n\n1.9Question 9¶\n\nLet $g(x) = x^2$ and $f(x) = x+2$ for $x \\in \\mathbb{R}$, and let $h$ be the composite function $h = g \\circ f$.\n\n(a) Find the direct image $h(E)$ of $E = \\{ x \\in \\mathbb{R}: 0 \\leq x \\leq 1\\}$.\n(b) Find the inverse image $h^{-1}(G)$ of $G = \\{x \\in \\mathbb{R}: 0 \\leq x \\leq 4\\}$.\n\n$h(x) = (x+2)^2$.\n\n(a) $\\{4 \\leq x \\leq 9\\}$\n\n(b) $\\{-2 \\leq x \\leq 0\\}$\n\n1.10Question 10¶\n\nReally long question for which I don't have a valid answer, so, skipped for now. Basically $E \\cap F = \\varnothing$? Is that important?\n\nSkipped for now\n\nShow that if $f: A \\to B$ and $E, F$ and subsets of $A$, then $f(E \\cup F) = F(E) \\cup f(F)$ and f(E \\cap F) \\subseteq f(E) \\cap f(F)$. Later 1.13Question 13¶ Later 1.14Question 14¶ Show that the function$f$defined by$f(x) = x / \\sqrt{x^2+1}, x\\in \\mathbb{R}$, is a bijection of$R$onto$\\{y : -1 < y < 1\\}$. To show that it is injective: Assume that$f(x_1) = f(x_2)$, somehow prove that$x_1 = x_2$. To show that it is surjective: Show that the range is indeed$-1 < y < 1$. To be continued 2Section 1.2¶ 2.1Question 1¶ Prove that that the following holds for all$n \\in \\mathbb N$. $$\\frac{1}{1 \\cdot 2} + \\frac{1}{2 \\cdot 3} + \\ldots + \\frac{1}{n(n+1)} = \\frac{n}{n+1}$$ Proof by induction. Base case,$n=1$: $$\\frac{1}{1\\cdot 2} = \\frac{1}{2} \\, \\checkmark$$ Assume that it holds for$n=k. Then: \\begin{align}\\frac{1}{1 \\cdot 2} + \\frac{1}{2 \\cdot 3} + \\ldots + \\frac{1}{k(k+1)} + \\frac{1}{(k+1)(k+2)} & = \\frac{k}{k+1} + \\frac{1}{(k+1)(k+2)} \\tag{by IH} \\\\ & = \\frac{k(k+2) + 1}{(k+1)(k+2)} \\\\ & = \\frac{k^2+2k + 1}{(k+1)(k+2)} \\\\ & = \\frac{(k+1)^2}{(k+1)(k+2)} = \\frac{k+1}{k+2} \\\\ & = \\frac{k+1}{(k+1) +1} \\, \\checkmark \\end{align} 2.2Question 2¶ Prove that that the following holds for alln \\in \\mathbb N$. $$1^3+2^3+\\ldots + n^3 = \\left ( \\frac{1}{2} n(n+1) \\right )^2$$ Base case:$1^3 = 1 = 1^2\\checkmark$IH: assume that it holds for$n=k. Then: \\begin{align}1^3+3^3+\\ldots+k^3+(k+1)^3 & = \\left ( \\frac{1}{2} k(k+1) \\right )^2 + (k+1)^3 \\tag{by IH} \\\\ & = \\left( \\frac{1}{2} \\right )^2 k^2 (k+1)^2 + k^3 + 3k^2 + 3k + 1 \\\\ & = \\frac{1}{4}(k^2(k^2+2k+1)) + \\frac{1}{4}(4k^3+12k^2+12k + 4) \\\\ & = \\frac{1}{4}(k^4+2k^3+k^2 + 4k^3+12k^2+12k + 4) \\\\ & = \\frac{1}{4}(k^4+6k^3 + 13k^2+12k + 4) \\\\ & = \\frac{1}{4} (k^2+3k+2)^2 = \\frac{1}{4}((k+1)(k+2))^2 \\\\ & = \\left ( \\frac{1}{2}(k+1)(k+2) \\right )^2 \\,\\checkmark \\end{align} 2.3Question 3¶ Prove that the following holds for alln \\in \\mathbb N$: $$3 + 11 + \\ldots + (8n-5) = 4n^2-n$$ Base case:$3 = 4 -1\\checkmark$IH: assume it holds for$n=k. Then: \\begin{align} 3+11+(8k-5) + (8(k+1)-5) & = 4k^2-k + (8(k+1) - 5 \\tag{by IH} \\\\ & = 4k^2-k + (8k+3) = 4k^2-k+8k + 3 \\\\ & = (4k^2 + 8k + 4) -k - 1 = 4(k^2+2k+1) - (k+1) \\\\ & = 4(k+1)^2 - (k+1) \\,\\checkmark\\end{align} 2.4Question 6¶ (Skipped 4-5 because they're pretty the same as the previous ones.) Prove thatn^3+5n$is divisible by 6 for all$n \\in \\mathbb N$. Base case:$1^3+5(1) = 6$which 6 does divide$\\checkmark$Assume$6 \\mid k^3 + 5k. Then: \\begin{align} (k+1)^3 + 5(k+1) & = k^3+3k^2+3k + 1 + 5k + 5 \\\\ & = (k^3 + 5k) + (3k^2+3k+6) = (k^3+5k) + 3(k^2+k+2) \\end{align} Incomplete 2.5Question 10¶ Conjecture and prove a formula for the sum $$\\frac{1}{1\\cdot 3} + \\frac{1}{3\\cdot 5} + \\ldots + \\frac{1}{(2n-1)(2n+1)}$$ Conjectured formula:\\displaystyle \\frac{n}{2n+1}$Base case,$n=1$:$\\displaystyle \\frac{1}{1\\cdot 3} = \\frac{1}{3} \\checkmark$Assume IH for$n=k. Then: \\begin{align} \\frac{1}{1\\cdot 3} + \\ldots + \\frac{1}{(2n-1)(2n+1)} + \\frac{1}{2(n+1)-1)(2(n+1)+1)} & = \\frac{n}{2n+1} + \\frac{1}{2(n+1)-1)(2(n+1)+1)} \\tag{by IH} \\\\ & = \\frac{n}{2n+1} + \\frac{1}{(2n+1)(2n+3)} = \\frac{n(2n+3)}{(2n+1)(2n+3)} \\\\ & = \\frac{n(2n+3) + 1}{(2n+1)(2n+3)} = \\frac{2n^2+3n+1}{(2n+1)(2n+3)} \\\\ & = \\frac{(2n+1)(n+1)}{(2n+1)(2n+3)} = \\frac{n+1}{2n+3} \\\\ & = \\frac{n+1}{2(n+1)+1} \\,\\checkmark \\end{align} 2.6Question 11¶ Conjecture and prove a formula for the sum of the first odd natural numbers: $$1+3+\\ldots+(2n-1)$$ Conjectured formula:n^2$. Base case,$n=1$:$1 = 1^2 \\checkmark$lol Assume IH for$n=k\\$. Then:\n\n\\begin{align} 1 + 3 + \\ldots(2n-1) + (2(n+1) -1) & = n^2 + (2(n+1) -1) \\tag{by IH} \\\\ & = n^2+(2n+1) = (n+1)^2\\, ♥ \\,\\checkmark \\tag{that was easy} \\end{align}" ]
[ null, "http://wikinotes.ca/static/img/cc-by-nc.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.75879025,"math_prob":1.0000056,"size":12873,"snap":"2022-05-2022-21","text_gpt3_token_len":5225,"char_repetition_ratio":0.1829202,"word_repetition_ratio":0.13120711,"special_character_ratio":0.43144566,"punctuation_ratio":0.1068475,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000093,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-23T02:03:45Z\",\"WARC-Record-ID\":\"<urn:uuid:1e87dae2-ab48-456b-af4d-2001260efb49>\",\"Content-Length\":\"24417\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:db75b050-d500-4f42-a418-4f93e96b5b57>\",\"WARC-Concurrent-To\":\"<urn:uuid:cca99eba-74cf-4179-ab98-e3c721f12c00>\",\"WARC-IP-Address\":\"104.21.9.138\",\"WARC-Target-URI\":\"http://wikinotes.ca/MATH_242/summary/fall-2012/exercises-for-chapter-1-preliminaries\",\"WARC-Payload-Digest\":\"sha1:K7XJNLC2OEW4KK27P56M3N2LX542XGSN\",\"WARC-Block-Digest\":\"sha1:L2XSYCWY3HC54TMYOQR6TT425PZ5LILQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320303956.14_warc_CC-MAIN-20220123015212-20220123045212-00264.warc.gz\"}"}
http://proxy.osapublishing.org/oe/fulltext.cfm?uri=oe-22-12-15251&id=290637
[ "## Abstract\n\nIn this paper, the mathematical description of the temporal self-imaging effect is studied, focusing on the situation in which the train of pulses to be dispersed has been previously periodically modulated in phase and amplitude. It is demonstrated that, for each input pulse and for some specific values of the chromatic dispersion, a subtrain of optical pulses is generated whose envelope is determined by the Discrete Fourier Transform of the modulating coefficients. The mathematical results are confirmed by simulations of various examples and some limits on the realization of the theory are commented.\n\n© 2014 Optical Society of America\n\n## 1. Introduction\n\nPhotonic Signal Processing is becoming today one of the most active research topics in optics and photonics, as all-optical processing offers a better performance for high-speed signals than electronic alternatives. Many well-established techniques for optical processing are based on volume optics, employing schemes that combine the diffraction of optical beams propagating through free space, thin lenses and prisms. This area of optics is highly developed and is usually known as Fourier Optics . However, volume optics presents several drawbacks associated to the use of bulk optical components, which have to be carefully aligned and occupy a large space.\n\nTo overcome some of these limitations, the space-time duality has appeared as a promising alternative. It combines the wide-band advantages of all-optical processing and the flexibility of the electronic approaches, enabling the possibility of using all-fibre processing systems which are dual to the well-known space optics approaches. This duality is based on the formal equivalence of the mathematics that govern the paraxial diffraction of beams propagating through free space and the dispersion in time of narrowband pulses through dielectric media. Although this duality was already described in the late 1960s, it has been mainly in the last two decades when it has started showing its full potential, especially after the extension of the duality to include the “time lens”, that is, the equivalent in the temporal domain to a conventional spatial thin lens . Since then, scientists have developed numerous photonic processing systems, using integrated and robust optical waveguide components . Among these proposals, the Optical Fourier Transform and the temporal self-imaging effect have gained a special attention as candidates for the processing of single optical pulses and optical trains of pulses.\n\nIn many signal processing applications based on Fourier optics, the ability to generate the Fourier transform and its inverse for a given signal are essential for the operation of the system. In space optics, a simple approach to obtain the Fourier transform of an object is the use of Fraunhofer diffraction. The basic idea is that far-field diffraction of an object produces the formation of its Fourier transform in the transverse space . This scheme can be replicated in the time domain by simply passing a time-limited waveform through a highly dispersive medium, mapping the spectral information of the signal to the time domain. The proposal of this real-time Fourier transformer, also known as frequency-to-time converter, was made by Muriel et al. [21, 22] who were the first to use a chirped Fiber Bragg Grating as the dispersive device. The realization of the Fourier transform by means of a dispersive medium in the optical domain has found several applications, such as the realization of temporal magnification systems, causing the waveform to be stretched in time and allowing thus the single-shot characterization of ultrafast waveforms . In combination with electro-optic modulation, this setup can also perform the temporal and spectral shaping of optical pulses .\n\nTemporal self-imaging is another popular development of temporal optical processing systems. The Talbot effect or self-imaging effect is a near-field diffraction effect that was first reported by H. F. Talbot in 1836 . When a coherent plane wave is passed through or reflected by a 1D or 2D periodic object, an exact image of the object can be observed at regular distances . Also, sub-images of the object can be observed at shorter distances where, depending on the distance, a certain reduction of the image size is presented. As the spatial Talbot effect is a diffractive effect, an equivalent outcome occurs when a periodic signal, such as a train of optical pulses, is passed through or reflected by a dispersive medium. This result is known as temporal Talbot -or temporal self-imaging effect. Being first proposed and demonstrated by Jannson et al. and generalized in , it has been broadly studied and has gained a special interest for the multiplication of the repetition rate of an optical train .\n\nHowever, there is a situation of special interest that has not been fully studied: the propagation through a dispersive element of a train of modulated ultrashort pulses. The Fourier transform can only be obtained for individual pulses (or very short pulses compared with the temporal window over which the Fourier transform is calculated) and hence is not of application when different pulses interfere. On the other hand, the temporal self imaging effect has been usually analyzed for uniform trains of pulses, where the pulses are of identical shape. However, if the modulation applied to the input train is periodic, the equations for that effect are still valid. Yet, only in some specific cases will the output train of pulses result in the variation of the repetition rate of the pulses while maintaining the train envelope, as will be reviewed in section 2.1. In some others, the interference between pulses when propagated through a medium where chromatic dispersion is the dominant effect will alter the output envelope. By examining the equations in detail, in this paper it will be proven that, for some precise values of dispersion, the resulting output envelope is determined by the Discrete Fourier Transform of the periodic coefficients that modulate the train of pulses. The structure of the paper is as follows: in Section 2 the mathematical analysis of the dispersion effect on a periodically modulated train of pulses is presented. Section 3 studies the specific case for a determined value of dispersion that results in the Discrete Fourier Transform. In section 4 numerical simulations that show the validity of the proposal are presented and some of its possible applications for the processing of optical trains of pulses are outlined. Finally, a summary of the most important results is provided in Section 5.\n\n## 2. Temporal self-imaging effect for modulated pulses\n\nIn this section, the temporal self-imaging effect when the train of optical pulses at the input of the system has been previously modulated will be studied. As hypothesis, it will be assumed that a train of optical pulses with equal phase and amplitude has been modulated by a complex signal, $c(t)$. Thus, the complex envelope of this signal, $x(t)$, is given by:\n\n$x(t)=c(t)∑k=−∞∞a(t−kT0)$\nwhere $a(t)$ is the individual shape of each pulse, $T0$ is the repetition period and $c(t)$ is the complex modulating signal and, in consequence, applies a different phase and amplitude to every pulse. The pulse width of each individual pulse, $Δt$, which is determined by $a(t)$, is considered to be small enough so that the pulses will not overlap, that is, $Δt≤T0$. Also, two additional restrictions are imposed to $c(t)$:\n\n• $c(t)$ has to be periodic and its period is given by $NT0$, where $N$ is an integer\n$c(t)=c(t+NT0)$\n• $c(t)$ is a slow signal when compared to the duration of the individual pulses, $Δt$, so that it it can be considered as constant within the duration of an individual pulse:\n$c(t)≈c(kT0)=ckforkT0−Δt2≤t≤kT0+Δt2wherekisaninteger$\n\nBy introducing conditions (2) and (3) to Eq. (1), the signal at the input of the system can be expressed as the summation of $N$ different trains of pulses with periodicity $T1=NT0$, a temporal delay between each train of $T0$ and being each of them modulated by a different coefficient $cl$. Accordingly, it is possible to group the pulses in $N$ different trains, each with constant phase and amplitude:\n\n$x(t)=∑l=0N−1[cl∑k=−∞∞a(t−lT0−kT1)]$\n\nAn example of such grouping can be observed in Fig. 1, when the input train of pulses is modulated by a signal $c(t)$ with periodicity $4T0$. As a result of this, it is possible to regroup the pulses in four different trains each one modulated by a fixed coefficient $cl$ ($l$ = 0,1,2,3), which is determined by the value of $c(t)$ in $t=lT0$. As a consequence of this rearrangement of the input train of pulses in N different trains, it is possible to determine the total resulting train after applying some dispersion as the superposition of N dispersed trains of equal pulses and repetition rate T1. The expression for each one of those dispersed trains is given by the analysis made for the uniform train cases that result in the different temporal self-imaging conditions and cases.", null, "Fig. 1 (a) Train of pulses that have been intensity modulated by a set of coefficients with N = 4 and (b) grouping of the different trains depending on the modulating coefficient.\n\n#### 2.1 Temporal self-imaging\n\nAs all the trains of pulses defined by $l$ in Eq. (4) have a similar period, $T1$, the amount of dispersion required in order to obtain the temporal self-imaging effect for every one of them is given by:\n\n$|ϕ|=T12s2π=N2T02s2πs=±1,±2,±3...$\nwhere, depending on the value of $s$, two different cases can be distinguished: the ordinary and the inverted temporal self-imaging effect.\n\nWhen $s$ is even, the obtained result is the ordinary temporal self-imaging effect for each of the subtrains defined in Eq. (4) inside the brackets. As a consequence, each train produces after the dispersion a train of pulses modulated by the corresponding coefficient $cl$ and with no additional temporal delay. Therefore, the signal at the output of the system, $y(t)$, can be expressed as:\n\n$y(t)=∑l=0N−1[cl∑k=−∞∞a(t−kT1−lT0)]$\n\nDue to the time delay $lT0$ between the different trains, the pulses coming from the different trains will not overlap in time, not existing any interference between the trains of pulses. Thus, the output of the system is a replica of the train at the input, maintaining the same modulating coefficients than the input train, and the detected optical power at the output of the system is given by:\n\n$Pout(t)=∑l=0N−1|cl|2∑k=−∞∞|a(t−kT1−lT0)|2$\n\nSimilarly, when $s$ is odd, each of the $N$ trains in which the total signal has been decomposed is affected by the inverted temporal self-imaging effect. As a result, each of the trains produces at the output another train of pulses also modulated by the corresponding coefficient $cl$ but with an extra delay of $T1/2$. The signal at the output of the system can be expressed as:\n\n$y(t)=∑l=0N−1[cl∑k=−∞∞a(t−kT1−lT0−T1/2)]$\n\nAs in the case of the ordinary self-imaging effect, the train of pulses at the output of the system is a replica of the train at the input, but in this case with an additional time delay of $T1/2$. Therefore, the optical power at the output of the system is:\n\n$Pout(t)=∑l=0N−1|cl|2∑k=−∞∞|a(t−kT1−lT0−T1/2)|2$\n\nIn Fig. 2 an example for both (a) $s$even and (b) odd can be observed. In the first case the train of pulses obtained at the output is a replica of the input train, whereas in the second an additional $T1/2$ delay appears. These results can also be understood by considering $a'(t)=∑l=0N−1[cl⋅a(t−lT0)]$ as the shape of a single pulse of the total train. Under this condition, the input train of pulses can be interpreted as an unmodulated train of pulses whose pulse shape is determined by $a'(t)$ and with a periodicity $T1$. Therefore, by applying the dispersion given by Eq. (5), the train of pulses will suffer the integer self-imaging effect, which corresponds to the obtained results. That is, these results can be simply considered as the particular case of the integer temporal self-imaging effect for which there are non-equal amplitude or phase pulses. It is worth noting that the condition for the dispersion is given by Eq. (5), that is, the usual value for the integer temporal self imaging effect is used but with the period of the modulating signal, $T1$, and not the temporal separation between consecutive pulses, $T0$.", null, "Fig. 2 Output of the system, for the input train shown in Fig. 1, when (a) s is even or (b) s is odd.\n\n#### 2.2 Fractional temporal self-imaging\n\nDepending on the amount of dispersion applied to the input train of pulses, it is also possible to obtain the fractional temporal self-imaging effect for each of the $N$ train of pulses in Eq. (4). In this case, the required condition for the dispersion is given by\n\n$|ϕ|=T122πsm=N2T022πsm{s=±1,±2,±3...m=2,3,4...$\nwhere $s/m$ is an irreducible fraction. Also, in order to avoid the overlapping of adjacent pulses after the dispersion, an additional restriction has to be imposed to the duration of the pulses, limiting it to a fraction of the output repetition rate determined by $m$:\n\n$Δt≤T0m=T1mN$\n\nAs in the study of the temporal self-imaging effect for unmodulated trains, two different scenarios depending on the value of the product $s⋅m$ have to be studied: the ordinary and the inverted fractional temporal self-imaging effects.\n\nIf $s⋅m$ is even, each of the $N$ trains of pulses in Eq. (4) is affected by the ordinary fractional temporal self-imaging effect. Therefore, after the dispersion, a train of pulses whose intensity repetition rate is given by $T1/m=NT0/m$ and without any additional temporal delay is obtained for each subtrain. Hence, the resulting train of pulses for each value of $l$ can be expressed as :\n\n$cl∑k=−∞∞Aka(t−kT1m−lT0)$\nwhere $Ak$ are the coefficients associated to the temporal self-imaging effect and are given by:\n\n$Ak=1m∑q=0m−1exp(jπ{smq2+2kmq})$\n\nAs a result of this, the signal at the output of the system is determined by the superposition of these trains:\n\n$y(t)=∑k=−∞∞∑l=0N−1clAka(t−kNmT0−lT0)$\n\nIt is worth emphasizing that, although the intensity repetition rate has been multiplied by a factor $m$, the term in Eq. (13) introduces a different phase shift for each subtrain, resulting thus in a train whose periodicity is $T1$, as in the original train.\n\nOn the other hand, if $s⋅m$ is odd, each train of pulses is affected by an inverted fractional temporal self-imaging effect. Consequently, each of the $N$ trains of pulses results at the output in another train whose intensity repetition rate is given by $T1/m=NT0/m$ and with an additional delay of $T1/2m$:\n\n$cl∑k=−∞∞Bka(t−kT1m−lT0−T12m)$\nwhere the coefficients $Bk$ are given by:\n\n$Bk=1m∑q=0m−1exp(jπ{smq2+2k+1mq})$\n\nFinally, the signal at the output of the system is determined by:\n\n$y(t)=∑k=−∞∞∑l=0N−1clBka(t−kNmT0−lT0−N2mT0)$\n\nAs it can be observed in Eqs. (14) and (17), the outcome in both cases is the superposition of $N$ trains of pulses whose intensity repetition rate is given by $T1/m=NT0/m$ and a delay between each subtrain of $T0$. Consequently, the profile of the trains is going to be determined by the relation between $m$ and $N$. Also, if $N/m$ is an irreducible fraction, there would be no interaction between the different subtrains, so that the amplitudes of the pulses at the output can be directly determined by the coefficients $clAk$ or $clBk$. However, if $N/m$ is a reducible fraction, then the pulses coming from different subtrains will occupy the same time slot, overlapping in time and producing interference between the different subtrains due to the different phase terms associated to each of them. In consequence, the optical power of the pulses at the output cannot be evaluated to obtain a general expression, since the coefficient modulating each pulse is now given by the sum of several coefficients $clAk$ or $clBk$. Therefore, it is necessary to evaluate each case independently in order to obtain the output optical power, as we will be doing in the next section for a specific case that is of special interest.\n\n## 3. Discrete Fourier transform of the modulating coefficients\n\nIn the previous section, it has been demonstrated that, when the fractional temporal self-imaging condition determined by Eq. (10) is verified, the train of pulses at the output presents an intensity repetition rate given by $T1/m=NT0/m$, which depends on the relation between $m$ and $N$. In this section, the case for $s=1$ and $m=N2$ is going to be studied. This case is specially interesting because the amount of dispersion which is applied has a fixed value $ϕ=T02/2π$, being thus independent on the number of coefficients modulated to the input train, $N$, and on the period of the modulating signal, $T1$. As in the previous section, depending on the value of $s⋅m$ being even or odd, two different cases have to be studied.\n\nIn the first case, $N$ is going to be considered even. In consequence, the product $s⋅m=N2$ is also going to be even, and the train of pulses at the output will be given by Eq. (14). Therefore, by inserting the aforementioned conditions for $s$ and $m$ into this expression and introducing $k'=k+lN$, the train of pulses at the output can be expressed as a single train of pulses whose repetition rate is given by $T0/N$:\n\n$y(t)=∑k'=−∞∞[∑l=0N−1Ak',lcl]a(t−k'T0N)==∑k'=−∞∞nk′a(t−k'T0N)$\nand the coefficients $nk'$ can be expressed as:\n\n$nk'=1N2∑l=0N−1∑q=0N2−1clexp(jπN2{q2+(2k'−2lN)q})==1N2∑q=0N2−1[∑l=0N−1clexp(−j2πNlq)]exp(jπN2{q2+2k'q})$\n\nAdditionally, by introducing the variables $w$ and $z$, defined to be two integer numbers that verify $q=w+zN$, the first sum term can be divided in two summations, obtaining:\n\n$nk'=1N2∑w=0N−1[∑z=0N−1[∑l=0N−1clexp(jπN2{(w+zN)2+(2k'−2lN)(w+zN)})]]=1N2∑w=0N−1[exp(jπN2{w2+2k'w})∑l=0N−1[clexp(−j2πNlw)]∑z=0N−1[(−1)zexp(j2πNz(w+k'))]]$\n\nThus, by rearranging the different terms in Eq. (20), the coefficients $nk'$ can be rewritten as:\n\n$nk'=1N2∑w=0N−1[Cwexp(jπN2{w2+2wk'})∑z=0N−1[(−1)zexp(j2πNz{w+k'})]]$\nwhere $Cw$ can be identified as the $w-th$ term of the Discrete Fourier Transform of the modulating coefficients $cl$:\n\n$Cw=∑l=0N−1clexp(−j2πNlw)$\n\nTo further work with this expression, it is demonstrated in the Appendix that the summation inside the inner brackets can only have two possible values depending on the value of the complex exponential:\n\n$∑z=0N−1(−1)zexp(j2πNz{w+k'})={0ifw≠rN−k'+N2Nifw=rN−k'+N2$\nwhere $r$ is an integer. In order to determine which terms of the summation within $nk'$ are non-zero, the values where the condition $w=rN−k'+N/2$ is verified have to be found. Due to the external summation in Eq. (21), $w$ is restricted to be an integer between 0 and $N−1$, i. e., $0≤w≤N−1$. Therefore, given a certain value of $k'$, the possible valid values for $r$ are going to be limited to the interval:\n\n$k'−N/2N≤r≤k'+N/2−1N$\n\nAs it can be seen, the size of the interval in Eq. (24) is $(N−1)/N$, which is going to be inferior to the unity. Therefore, as $r$ has to be an integer, there is only one possible value of r within the interval, $r0$, which verifies the condition $w=rN−k'+N/2$. As this value is unique, it also determines the single possible value of $w$, $w0$, that, for a fixed value of $k'$, verifies the necessary conditions for the inner summation in Eq. (21) to be different than zero:\n\n$w0=r0N+N2−k'$\n\nIn consequence, the external sum of Eq. (21) is reduced to a single summand whose index is determined by $w0$ and that has a value of $N$ according to Eq. (23). Therefore, the coefficient $nk'$ can be simplified to:\n\n$nk'=1NCw0exp(jπN2{w02+2w0k'})==1NCN2−k'exp(jπ4)exp(−jπ(k'N)2)$\n\nAnd the signal at the output of the system, given in Eq. (18), thus results into:\n\n$y(t)=∑k'=−∞∞1NCN2−k'exp(jπ4)exp(−jπ(k'N)2)a(t−k'T0N)$\n\nIntroducing the variable $k''=N/2−k'$ and considering that the pulse width is small enough so that pulses don’t overlap ($Δt), the optical power at the output of the system can be finally expressed as:\n\n$Pout(t)=1N2∑k''=−∞∞|Ck''|2|a(t+k''T0N−T02)|2$\n\nThe second case to be considered is when $N$ is odd, which corresponds to the inverted fractional temporal self-imaging effect. Under this assumption, the product $s⋅m=N2$ is going to be odd, so the output train of pulses is determined by Eq. (17). As in the previous case, the output can be expressed as a single train of pulses whose repetition rate is given by $T0/N$:\n\n$y(t)=∑k'=−∞∞[∑l=0N−1Bk',lcl]a(t−k'T0N−T02N)==∑k'=−∞∞mk′a(t−k'T0N−T02N)$\nwith $k'=k+lN$ and where the coefficients $mk'$ are given by:\n\n$mk'=1N2∑l=0N−1∑q=0N2−1clexp(jπN2{q2+(2k'−2lN+1)q})==1N2∑q=0N2−1[∑l=0N−1clexp(−j2πNlq)]exp(jπN2{q2+(2k'+1)q})$\n\nIn order to further simplify this expression and following a similar procedure as in the even case, the variable $w=q−zN$ with $z$ an integer is introduced in the equation, so that by rearranging the terms of the summations the following equation is obtained:\n\n$mk'=1N2∑w=0N−1[Cwexp(jπN2{w2+w(2k'+1)})[∑z=0N−1(−1)zexp(jπNz(2w+2k'+1))]]$\nwhere $Cw$ corresponds again to the $w-th$ term of the Discrete Fourier Transform of the coefficients $cl$. The inner summation can be further simplified using the relation demonstrated in the Appendix, which results in:\n$∑z=0N−1(−1)zexp(jπNz(2w+2k'+1))={0ifw≠rN−k'+N−12Nifw=rN−k'+N−12$\nand the interval for the possible values for $r$ so that the summation is non-zero is now:\n\n$k'−(N−1)/2N≤r≤k'+(N−1)/2N$\n\nAs in the previous case, there is only one integer value $r0$ with its corresponding $w0$ that verifies this condition, and Eq. (31) can be simplified to:\n\n$mk'=1NCw0exp(jπN2{w02+w0(2k'+1)})==1NCN−12−k'exp(jπ4)exp(−jπN2(k'−12)2)$\n\nConsequently, the optical power at the output of the system when $N$ is odd can be written as:\n\n$Pout(t)=∑k'=−∞∞|mk'|2|a(t−k'T0N−T02N)|2==1N2∑k'=−∞∞|CN−12−k'|2|a(t−k′T0N−T02N)|2$\n\nAnd, by introducing $k'''=(N−1)/2−k'$, the resulting optical power at the output of the system can be expressed as:\n\n$Pout(t)=1N2∑k'''=−∞∞|Ck'''|2|a(t+k'''T0N−T02)|2$\n\nAs it can be inferred from Eqs. (28) and (36), when $s=1$ and $m=N2$ the optical power of the train at the output of the system is similar regardless of the parity of $N$. In both cases, the separation between consecutive pulses is going to be determined by $T0/N$, that is, by the number of coefficients modulated to the train at the input, and an additional $T0/2$ delay is introduced to the output train. Also, the amplitude of the pulses at the output of the system is going to be determined by the Discrete Fourier Transform (DFT) of the modulating coefficients $cl$ applied in reverse order. It is worth recalling that in this case, the required dispersion depends only on the repetition rate at the input of the system, regardless of the number $N$ of modulating coefficients. Thus, by modifying only the coefficients $cl$ that are determined by an easily tunable electrical signal, it is possible to simultaneously control the amplitude and repetition rate at the output, as it can be seen in Fig. 3.", null, "Fig. 3 (a) Train of pulses modulated at the input of the system with N = 3, and (b) at the output of the system.\n\n## 4. Numerical simulations\n\nTo verify the validity of the proposal, the system shown in Fig. 4 was evaluated using Matlab. The simulated pulsed source produced a train of Gaussian optical pulses operating at a repetition rate $T0=100ps$ ($f0=10GHz$) and a pulsewidth of 1 ps. The train was modulated using two ideal electro-optic modulators to modify both the amplitude (intensity modulator) and phase (phase modulator) of each pulse, assuming neither bandwidth restrictions nor modulator losses. As the modulation was done both in phase and amplitude, the coefficients were evaluated using the DFT of the desired envelope to be obtained at the output, as demonstrated in the previous section. Finally, the modulated signal was dispersed using 15.6 km of Dispersion Compensating Fibre (DCF) with $α=0.55dB/km$, $D=−80ps/nm/km$, $S=0.19ps/nm2/km$, which corresponds approximately to the total chromatic dispersion required by the condition for obtaining the optical DFT, $ϕ=T02/2π$.", null, "Fig. 4 Proposed setup with intensity and phase modulation in the time domain followed by a dispersive device (EO-IM: Electrooptic Intensity Modulator, EO-PM: Electrooptic Phase Modulator, DCF: Dispersion Compensating Fibre).\n\nIn Fig. 5 the pulses (a) at the input and (b) at the output of the system when no modulating signal is applied are presented. As it can be observed, the signal at the output of the system corresponds to a train of pulses with the same repetition rate as the input and an additional $T0/2$ delay. This result was expected, as this scenario corresponds to the self-imaging effect when the parameters $s=1$ and $m=1$, which corresponds to the inverted case analyzed in section 2.1. Also, as a result of the DCF third order dispersion, the pulses present the typical deformation associated to it, showing some ripples at one side of the pulses.", null, "Fig. 5 Train of pulses at the input (a) and at the output (b) of the system when no modulation is applied.\n\nInitial simulations were done using only intensity modulation and binary $cl$ coefficients. In Fig. 6 the train of pulses obtained at the output of the system for the coefficients (a) 1000000000 and (b) 1000000001 can be seen, as well as the expected output envelope (dashed line) obtained by the DFT of the different modulating coefficients that have been applied. As expected, the train of pulses at the output presents a time delay of $T0/2$, that is, 50 ps and the repetition frequency is multiplied by a factor $N=10$, resulting in a separation between pulses of only 10 ps. However, as the pulse width at the input was 1 ps, no overlapping occurs despite the smaller separation between the pulses. Also, the envelope of the output train of pulses has been modulated, obtaining a constant signal and a cosine signal envelope, respectively, which correspond to the DFT of each of the modulated coefficients.", null, "Fig. 6 Train of pulses at the input (dashed) and output (solid) of the system and the expected output envelope (dotted), when intensity-only coefficients (a) 1000000000 and (b) 1000000001 are applied.\n\nIn order to further verify the shaping capabilities of the setup, four different functions were chosen to be obtained as envelopes at the output of the system: a ramp envelope, a triangular envelope a burst of binary data (1001110101) and a signal with random amplitudes, in all cases with $N=10$. The modulating coefficients were evaluated using the DFT of the desired output envelope in each case. The obtained trains at the output of the system can be seen in Fig. 7, obtaining that the peak power of the output pulses corresponds to the DFT of the modulating coefficients that were chosen for the example but in reverse order, as expected.", null, "Fig. 7 Train of pulses at the input (dashed) and output (solid) of the system and the expected output envelope (dotted) for a (a) ramp, (b) triangular, (c) binary data and (d) random envelope.\n\nIn the previously presented examples the pulse width and the repetition factor were small enough so that there was no overlapping between adjacent pulses. However, if the number of modulating coefficients $N$ is increased, the condition given by Eq. (11) will not be fulfilled, appearing interference between adjacent pulses and invalidating the result obtained in Eq. (36). Two examples where this interference is significant can be seen in Fig. 8 and Fig. 9, where $N=20$ and $N=60$ respectively, and the modulating coefficients were chosen to obtain a triangular envelope at the output. As it can be seen, in the first case the pulses do not overlap completely, but a small amount of interference appears due to the sidelobes resulting from third order dispersion. The effect of this interference is a degradation of the peak optical power obtained in some of the pulses.", null, "Fig. 8 Train of pulses at the output of the system for a desired triangular envelope with N = 20; 20 periods of the output are shown in (a) and zooms of the signal in (a) for three specific periods are shown in (b) to (d).", null, "Fig. 9 Train of pulses at the output of the system for a desired triangular envelope with N = 60; 60 periods of the output are shown in (a) and zooms of the signal in (a) for three specific periods are shown in (b) to (d).\n\nOn the other hand, when the number of coefficients is high enough ($N=60$ in Fig. 9), the pulses completely overlap, resulting in a complete interference between them. When the phases between adjacent pulses don’t differ too much, the pulses overlap in power, obtaining a single pulse with the shape of the envelope instead of a train of pulses (see Fig. 9(b)). Yet, when the phases of the adjacent pulses diverge, the different pulses interfere with different phases and the signal at the output presents a noisy behavior, showing very fast power fluctuations and a general loss in the obtained peak optical power (Fig. 9(c)). This behavior is periodic, as the phases of the coefficients $nk'$and $mk'$ from Eqs. (26) and (34) present a periodicity $N2$, resulting in a periodicity for the output train determined by $NT0$ (6000 ps in this case). In further works, we aim to explore the use of this situation for obtaining individual pulses with arbitrary shapes, although only one over N pulses will probably acquire the desired shape.\n\n## 5. Conclusions\n\nThis work has been developed within the well-known space-time framework for the manipulation of optical pulsed trains. It has focused on a situation of special interest that cannot be considered neither a single-pulse Fourier Transform nor the repetition rate multiplication obtained by means of the temporal self imaging effect: the propagation through a dispersive element of a train of periodically modulated ultrashort pulses when different pulses interfere at the output. It has been found that, for some precise values of chromatic dispersion, the resulting outcome is composed of a periodic pulse train whose intensity repetition rate has been multiplied and whose envelope can be evaluated as the Discrete Fourier Transform of the modulating coefficients. A mathematical demonstration of this transform has been developed, showing the conditions in which it is valid.\n\nThe proposal was verified with different numerical simulations using Matlab. Some basic examples, using only intensity electrooptic modulators and binary modulating signals, were presented, extending afterwards these results to more complex envelopes obtained employing both phase and intensity modulation and the DFT of the desired output envelope in each case. This enables the application of the proposed scheme as an optical train pulse shaper, allowing for example the generation of high-speed binary optical bursts that are of great interest in optical communication systems. Additionally, as the dispersive medium is fixed, the proposed setup could be used to modify the intensity repetition rate of the train at the output by varying only the modulating electrical signal. Furthermore, another possible area of application is the generation of arbitrary millimeter and microwave waveforms employing an envelope detector at the output of the system.\n\nFinally, a study of the limitations of the analysis was made. More specifically, it has been demonstrated that, for a given pulsewidth, there is a maximum number of modulating coefficients that can be used, and a corresponding maximum repetition rate at the output, since otherwise adjacent pulses overlap and the interference prevents the formation of the desired Discrete Fourier Transform. However, it has been found that under these conditions, the system operates as an optical pulse shaper, allowing the synthesis of at least one optical pulse in each period of $N$ with the desired envelope, a finding that will need further research.\n\n## Appendix\n\nPreviously, in order to simplify the resulting mathematical expression for the superposition of different trains of pulses when $s=1$ and $m=N2$, the mathematical identities of Eqs. (23) and (32) were imposed for $N$ even and odd, respectively. For the completeness of the demonstration, in this appendix the full derivation of these expressions will be presented. Introducing $s'=2w+2k'$ for Eq. (23) and $s'=2w+2k'+1$ for Eq. (32), it is possible to unify both expressions as:\n\n$∑h=0N−1(−1)hexp(jπNhs')={0s'≠(2r+1)NNs'=(2r+1)N$\n\nAlso, it will be proven that this equality is only valid as long as $s′$ and $N$ have the same parity. As it can be observed, due to the definition of $s'$ for $N$ even and odd, this condition is fulfilled by Eqs. (23) and (32). To continue the analysis, two different cases are going to be considered depending on $N$ being even or odd.\n\nIf $N$ is assumed to be even, then the summation can be divided in two different summations by selecting the summands depending on $h$ being odd or even:\n\n$∑h=0N−1(−1)hexp(jπNhs')=[∑z=0N/2−1exp(j2πNzs')][1−exp(jπNs')]$\n\nAs it can be observed, a summation of complex exponentials is obtained. In order to calculate its value, we only need to use the well-known mathematical identity that exists for the summation of geometric series:\n\n$∑z=0M−1az=1−aM1−a$\nwhere, in our case, $a=exp(j2πs'/N)$. Therefore, the result of the summation in Eq. (38) is given by:\n$∑z=0N/2−1exp(j2πNzs')=1−exp(jπs')[1+exp(jπNs')][1−exp(jπNs')]$\nand introducing Eq. (40) in Eq. (37), it is obtained that:\n\n$∑h=0N−1(−1)hexp(jπNhs')=1−exp(jπs')1+exp(jπNs')$\n\nEqually, a similar analysis can be performed when $N$ is odd. However, in the separation of the even and odd terms done in Eq. (38) an additional term has to been taken in consideration before grouping:\n\n$∑h=0N−1(−1)hexp(jπNhs')=[∑z=0N−12−1exp(jπN2zs')][1−exp(jπNs')]+exp(jπ(N−1)Ns')$\n\nAs in the previous case, the summation of a complex exponential function is obtained so that it can be simplified by applying Eq. (39), which results in:\n\n$∑z=0N−12−1exp(jπN2zs')=1−exp(jπN−1Ns')[1+exp(jπNs')][1−exp(jπNs')]$\nand introducing this identity into Eq. (42), the summation can be expressed as:\n\n$∑h=0N−1(−1)hexp(jπNhs')=1+exp(jπs')1+exp(jπNs')$\n\nWith this result and combining it with Eq. (41), a single expression for both $N$ even and $N$ odd can be obtained:\n\n$∑h=0N−1(−1)hexp(jπNhs')=1−(−1)(N+s')1+exp(jπNs')$\n\nAs it can be observed, the numerator of Eq. (45) is always going to be zero if the value of $N+s'$ is even, which is satisfied only when the parity of $s'$ and $N$ is the same. Therefore, if both $s'$ and $N$ are even or odd simultaneously, the result of Eq. (45) is going to be zero unless the denominator is also zero. In this last case, an indeterminate form is obtained and the value of the function has to be determined by using its limits at those points. This will occur only when $exp(jπs'/N)=−1$, that is, when the value of the complex exponent is an odd multiple of $π$, which is satisfied when $s'/N$ corresponds to an odd number. Therefore, the limits when $s'=(2r+1)N$ are going to be studied, where $r$ is an integer number. By applying L'Hôpital's rule, it is obtained that the value of the function at those points is:\n\n$lims'→(2r+1)N1−(−1)N+s'1+exp(jπNs')=lims'→(2r+1)N−jπ(−1)N+s'jπNexp(jπNs')=N$\n\nTherefore, if the parity of $s'$ and $N$ is the same, the value of the summation is going to be:\n\n$∑h=0N−1(−1)hexp(jπNhs')={0s'≠(2r+1)NNs'=(2r+1)N$\nas we wanted to demonstrate.\n\n## Acknowledgments\n\nThis work was financially supported in part from the Ministerio de Economia y Competitividad of Spain (projects TEC2010-21305-C04-01 and -02).\n\n1. L. K. Oxenlowe, H. Ji, M. Galili, M. H. Pu, H. Hu, H. C. H. Mulvad, K. Yvind, J. M. Hvam, A. T. Clausen, and P. Jeppesen, “Silicon photonics for signal processing of Tbit/s serial data signals,” IEEE J. Sel. Top. Quantum Electron. 18(2), 996–1005 (2012). [CrossRef]\n\n2. S. J. B. Yoo, R. P. Scott, D. J. Geisler, N. K. Fontaine, and F. M. Soares, “Terahertz information and signal processing by RF-photonics,” IEEE Tran. Terahertz Sci. Technol. 2(2), 167–176 (2012). [CrossRef]\n\n3. P. J. Delfyett, I. Ozdur, N. Hoghooghi, M. Akbulut, J. Davila-Rodriguez, and S. Bhooplapur, “Advanced ultrafast technologies based on optical frequency combs,” IEEE J. Sel. Top. Quantum Electron. 18(1), 258–274 (2012). [CrossRef]\n\n4. A. E. Willner, O. F. Yilmaz, J. A. Wang, X. X. Wu, A. Bogoni, L. Zhang, and S. R. Nuccio, “Optically efficient nonlinear signal processing,” IEEE J. Sel. Top. Quantum Electron. 17(2), 320–332 (2011). [CrossRef]\n\n5. R. S. Tucker and K. Hinton, “Energy consumption and energy density in optical and electronic signal processing,” IEEE Photonics J. 3(5), 821–833 (2011). [CrossRef]\n\n6. J. W. Goodman, Introduction to Fourier Optics (Roberts and Co., 2005).\n\n7. A. M. Weiner, “Femtosecond optical pulse shaping and processing,” Prog. Quantum Electron. 19(3), 161–237 (1995). [CrossRef]\n\n8. A. M. Weiner and A. M. Kan’an, “Femtosecond pulse shaping for synthesis, processing, and time-to-space conversion of ultrafast optical waveforms,” IEEE J. Sel. Top. Quantum Electron. 4(2), 317–331 (1998). [CrossRef]\n\n9. B. H. Kolner, “Space-time duality and the theory of temporal imaging,” IEEE J. Quantum Electron. 30(8), 1951–1963 (1994). [CrossRef]\n\n10. A. Papoulis, Systems and Transforms With Applications in Optics (McGraw-Hill, 1968).\n\n11. S. A. Akhmanov, A. P. Sukhoruk, and A. S. Chirkin, “Nonstationary phenomena and space-time analogy in nonlinear optics,” Sov. Phys. JETP-USSR 28, 748 (1969).\n\n12. E. B. Treacy, “Optical pulse compression with diffraction gratings,” IEEE J. Quantum Electron. 5(9), 454–458 (1969). [CrossRef]\n\n13. B. H. Kolner and M. Nazarathy, “Temporal imaging with a time lens,” Opt. Lett. 14(12), 630–632 (1989). [CrossRef]   [PubMed]\n\n14. C. V. Bennett and B. H. Kolner, “Principles of parametric temporal imaging - Part I: System configurations,” IEEE J. Quantum Electron. 36, 430–437 (2000).\n\n15. C. V. Bennett and B. H. Kolner, “Principles of parametric temporal imaging - Part II: System performance,” IEEE J. Quantum Electron. 36, 649–655 (2000).\n\n16. R. Salem, M. A. Foster, A. C. Turner, D. F. Geraghty, M. Lipson, and A. L. Gaeta, “Optical time lens based on four-wave mixing on a silicon chip,” Opt. Lett. 33(10), 1047–1049 (2008). [CrossRef]   [PubMed]\n\n17. J. van Howe and C. Xu, “Ultrafast optical signal processing based upon space-time dualities,” J. Lightwave Technol. 24(7), 2649–2662 (2006). [CrossRef]\n\n18. M. A. Foster, R. Salem, and A. L. Gaeta, “Ultrahigh-speed optical processing using space-time duality,” Opt. Photon. News 22(5), 29–35 (2011). [CrossRef]\n\n19. P. Refregier and B. Javidi, “Optical image encryption based on input plane and Fourier plane random encoding,” Opt. Lett. 20(7), 767–769 (1995). [CrossRef]   [PubMed]\n\n20. A. V. Mamaev and M. Saffman, “Selection of unstable patterns and control of optical turbulence by Fourier plane filtering,” Phys. Rev. Lett. 80(16), 3499–3502 (1998). [CrossRef]\n\n21. M. A. Muriel, J. Azaña, and A. Carballar, “Real-time Fourier transformer based on fiber gratings,” Opt. Lett. 24(1), 1–3 (1999). [CrossRef]   [PubMed]\n\n22. J. Azana and M. A. Muriel, “Real-time optical spectrum analysis based on the time-space duality in chirped fiber gratings,” IEEE J. Quantum Electron. 36, 517–526 (2000).\n\n23. R. Salem, M. A. Foster, A. C. Turner-Foster, D. F. Geraghty, M. Lipson, and A. L. Gaeta, “High-speed optical sampling using a silicon-chip temporal magnifier,” Opt. Express 17(6), 4324–4329 (2009). [CrossRef]   [PubMed]\n\n24. K. Goda and B. Jalali, “Dispersive Fourier transformation for fast continuous single-shot measurements,” Nat. Photonics 7(2), 102–112 (2013). [CrossRef]\n\n25. J. Azana, N. K. Berger, B. Levit, and B. Fischer, “Spectro-temporal imaging of optical pulses with a single time lens,” IEEE Photon. Technol. Lett. 16(3), 882–884 (2004). [CrossRef]\n\n26. R. Salem, M. A. Foster, and A. L. Gaeta, “Application of space–time duality to ultrahigh-speed optical signal processing,” Adv. Opt. Photon. 5(3), 274–317 (2013). [CrossRef]\n\n27. S. Thomas, A. Malacarne, F. Fresi, L. Poti, and J. Azana, “Fiber-based programmable picosecond optical pulse shaper,” J. Lightwave Technol. 28(12), 1832–1843 (2010). [CrossRef]\n\n28. J. Azaña, “Design specifications of time-domain spectral shaping optical system based on dispersion and temporal modulation,” Electron. Lett. 39(21), 1530–1532 (2003). [CrossRef]\n\n29. C. Wang and J. P. Yao, “Chirped microwave pulse generation based on optical spectral shaping and wavelength-to-time mapping using a Sagnac loop mirror incorporating a chirped fiber Bragg grating,” J. Lightwave Technol. 27(16), 3336–3341 (2009). [CrossRef]\n\n30. H. Chi and J. Yao, “All-fiber chirped microwave pulses generation based on spectral shaping and wavelength-to-time conversion,” IEEE Trans. Microwave Theory 55(9), 1958–1963 (2007). [CrossRef]\n\n31. Y. Park, T. J. Ahn, J. C. Kieffer, and J. Azaña, “Optical frequency domain reflectometry based on real-time Fourier transformation,” Opt. Express 15(8), 4597–4616 (2007). [CrossRef]   [PubMed]\n\n32. H. F. Talbot, “Facts relating to optical science no. IV,” Philos. Mag. 9, 401–407 (1836).\n\n33. L. Rayleigh, “On copying diffraction gratings and on some phenomenon connected therewith,” Philos. Mag. 11(67), 196–205 (1881). [CrossRef]\n\n34. M. V. Berry and S. Klein, “Integer, fractional and fractal Talbot effects,” J. Mod. Opt. 43(10), 2139–2164 (1996). [CrossRef]\n\n35. M. S. Chapman, C. R. Ekstrom, T. D. Hammond, J. Schmiedmayer, B. E. Tannian, S. Wehinger, and D. E. Pritchard, “Near-field imaging of atom diffraction gratings: the atomic Talbot effect,” Phys. Rev. A 51(1), R14–R17 (1995). [CrossRef]   [PubMed]\n\n36. J. Wen, Y. Zhang, and M. Xiao, “The Talbot effect: recent advances in classical optics, nonlinear optics, and quantum optics,” Adv. Opt. Photon. 5(1), 83–130 (2013). [CrossRef]\n\n37. T. Jannson and J. Jannson, “Temporal self-imaging effect in single-mode fibers,” J. Opt. Soc. Am. 71, 1373–1376 (1981).\n\n38. J. Azaña and M. A. Muriel, “Temporal Talbot effect in fiber gratings and its applications,” Appl. Opt. 38(32), 6700–6704 (1999). [CrossRef]   [PubMed]\n\n39. J. Azaña and M. A. Muriel, “Temporal self-imaging effects: Theory and application for multiplying pulse repetition rates,” IEEE J. Sel. Top. Quantum Electron. 7(4), 728–744 (2001). [CrossRef]\n\n40. J. Azaña and L. R. Chen, “General temporal self-imaging phenomena,” J. Opt. Soc. Am. B 20(7), 1447–1458 (2003). [CrossRef]\n\n41. D. Pudo and L. R. Chen, “Tunable passive all-optical pulse repetition rate multiplier using fiber Bragg gratings,” J. Lightwave Technol. 23(4), 1729–1733 (2005). [CrossRef]\n\n42. J. Caraquitena and J. Martí, “High-rate pulse-train generation by phase-only filtering of an electrooptic frequency comb: Analysis and optimization,” Opt. Commun. 282(18), 3686–3692 (2009). [CrossRef]\n\n43. J. Caraquitena, Z. Jiang, D. E. Leaird, and A. M. Weiner, “Tunable pulse repetition-rate multiplication using phase-only line-by-line pulse shaping,” Opt. Lett. 32(6), 716–718 (2007). [CrossRef]   [PubMed]\n\n### References\n\n• View by:\n• |\n• |\n• |\n\n1. L. K. Oxenlowe, H. Ji, M. Galili, M. H. Pu, H. Hu, H. C. H. Mulvad, K. Yvind, J. M. Hvam, A. T. Clausen, and P. Jeppesen, “Silicon photonics for signal processing of Tbit/s serial data signals,” IEEE J. Sel. Top. Quantum Electron. 18(2), 996–1005 (2012).\n[Crossref]\n2. S. J. B. Yoo, R. P. Scott, D. J. Geisler, N. K. Fontaine, and F. M. Soares, “Terahertz information and signal processing by RF-photonics,” IEEE Tran. Terahertz Sci. Technol. 2(2), 167–176 (2012).\n[Crossref]\n3. P. J. Delfyett, I. Ozdur, N. Hoghooghi, M. Akbulut, J. Davila-Rodriguez, and S. Bhooplapur, “Advanced ultrafast technologies based on optical frequency combs,” IEEE J. Sel. Top. Quantum Electron. 18(1), 258–274 (2012).\n[Crossref]\n4. A. E. Willner, O. F. Yilmaz, J. A. Wang, X. X. Wu, A. Bogoni, L. Zhang, and S. R. Nuccio, “Optically efficient nonlinear signal processing,” IEEE J. Sel. Top. Quantum Electron. 17(2), 320–332 (2011).\n[Crossref]\n5. R. S. Tucker and K. Hinton, “Energy consumption and energy density in optical and electronic signal processing,” IEEE Photonics J. 3(5), 821–833 (2011).\n[Crossref]\n6. J. W. Goodman, Introduction to Fourier Optics (Roberts and Co., 2005).\n7. A. M. Weiner, “Femtosecond optical pulse shaping and processing,” Prog. Quantum Electron. 19(3), 161–237 (1995).\n[Crossref]\n8. A. M. Weiner and A. M. Kan’an, “Femtosecond pulse shaping for synthesis, processing, and time-to-space conversion of ultrafast optical waveforms,” IEEE J. Sel. Top. Quantum Electron. 4(2), 317–331 (1998).\n[Crossref]\n9. B. H. Kolner, “Space-time duality and the theory of temporal imaging,” IEEE J. Quantum Electron. 30(8), 1951–1963 (1994).\n[Crossref]\n10. A. Papoulis, Systems and Transforms With Applications in Optics (McGraw-Hill, 1968).\n11. S. A. Akhmanov, A. P. Sukhoruk, and A. S. Chirkin, “Nonstationary phenomena and space-time analogy in nonlinear optics,” Sov. Phys. JETP-USSR 28, 748 (1969).\n12. E. B. Treacy, “Optical pulse compression with diffraction gratings,” IEEE J. Quantum Electron. 5(9), 454–458 (1969).\n[Crossref]\n13. B. H. Kolner and M. Nazarathy, “Temporal imaging with a time lens,” Opt. Lett. 14(12), 630–632 (1989).\n[Crossref] [PubMed]\n14. C. V. Bennett and B. H. Kolner, “Principles of parametric temporal imaging - Part I: System configurations,” IEEE J. Quantum Electron. 36, 430–437 (2000).\n15. C. V. Bennett and B. H. Kolner, “Principles of parametric temporal imaging - Part II: System performance,” IEEE J. Quantum Electron. 36, 649–655 (2000).\n16. R. Salem, M. A. Foster, A. C. Turner, D. F. Geraghty, M. Lipson, and A. L. Gaeta, “Optical time lens based on four-wave mixing on a silicon chip,” Opt. Lett. 33(10), 1047–1049 (2008).\n[Crossref] [PubMed]\n17. J. van Howe and C. Xu, “Ultrafast optical signal processing based upon space-time dualities,” J. Lightwave Technol. 24(7), 2649–2662 (2006).\n[Crossref]\n18. M. A. Foster, R. Salem, and A. L. Gaeta, “Ultrahigh-speed optical processing using space-time duality,” Opt. Photon. News 22(5), 29–35 (2011).\n[Crossref]\n19. P. Refregier and B. Javidi, “Optical image encryption based on input plane and Fourier plane random encoding,” Opt. Lett. 20(7), 767–769 (1995).\n[Crossref] [PubMed]\n20. A. V. Mamaev and M. Saffman, “Selection of unstable patterns and control of optical turbulence by Fourier plane filtering,” Phys. Rev. Lett. 80(16), 3499–3502 (1998).\n[Crossref]\n21. M. A. Muriel, J. Azaña, and A. Carballar, “Real-time Fourier transformer based on fiber gratings,” Opt. Lett. 24(1), 1–3 (1999).\n[Crossref] [PubMed]\n22. J. Azana and M. A. Muriel, “Real-time optical spectrum analysis based on the time-space duality in chirped fiber gratings,” IEEE J. Quantum Electron. 36, 517–526 (2000).\n23. R. Salem, M. A. Foster, A. C. Turner-Foster, D. F. Geraghty, M. Lipson, and A. L. Gaeta, “High-speed optical sampling using a silicon-chip temporal magnifier,” Opt. Express 17(6), 4324–4329 (2009).\n[Crossref] [PubMed]\n24. K. Goda and B. Jalali, “Dispersive Fourier transformation for fast continuous single-shot measurements,” Nat. Photonics 7(2), 102–112 (2013).\n[Crossref]\n25. J. Azana, N. K. Berger, B. Levit, and B. Fischer, “Spectro-temporal imaging of optical pulses with a single time lens,” IEEE Photon. Technol. Lett. 16(3), 882–884 (2004).\n[Crossref]\n26. R. Salem, M. A. Foster, and A. L. Gaeta, “Application of space–time duality to ultrahigh-speed optical signal processing,” Adv. Opt. Photon. 5(3), 274–317 (2013).\n[Crossref]\n27. S. Thomas, A. Malacarne, F. Fresi, L. Poti, and J. Azana, “Fiber-based programmable picosecond optical pulse shaper,” J. Lightwave Technol. 28(12), 1832–1843 (2010).\n[Crossref]\n28. J. Azaña, “Design specifications of time-domain spectral shaping optical system based on dispersion and temporal modulation,” Electron. Lett. 39(21), 1530–1532 (2003).\n[Crossref]\n29. C. Wang and J. P. Yao, “Chirped microwave pulse generation based on optical spectral shaping and wavelength-to-time mapping using a Sagnac loop mirror incorporating a chirped fiber Bragg grating,” J. Lightwave Technol. 27(16), 3336–3341 (2009).\n[Crossref]\n30. H. Chi and J. Yao, “All-fiber chirped microwave pulses generation based on spectral shaping and wavelength-to-time conversion,” IEEE Trans. Microwave Theory 55(9), 1958–1963 (2007).\n[Crossref]\n31. Y. Park, T. J. Ahn, J. C. Kieffer, and J. Azaña, “Optical frequency domain reflectometry based on real-time Fourier transformation,” Opt. Express 15(8), 4597–4616 (2007).\n[Crossref] [PubMed]\n32. H. F. Talbot, “Facts relating to optical science no. IV,” Philos. Mag. 9, 401–407 (1836).\n33. L. Rayleigh, “On copying diffraction gratings and on some phenomenon connected therewith,” Philos. Mag. 11(67), 196–205 (1881).\n[Crossref]\n34. M. V. Berry and S. Klein, “Integer, fractional and fractal Talbot effects,” J. Mod. Opt. 43(10), 2139–2164 (1996).\n[Crossref]\n35. M. S. Chapman, C. R. Ekstrom, T. D. Hammond, J. Schmiedmayer, B. E. Tannian, S. Wehinger, and D. E. Pritchard, “Near-field imaging of atom diffraction gratings: the atomic Talbot effect,” Phys. Rev. A 51(1), R14–R17 (1995).\n[Crossref] [PubMed]\n36. J. Wen, Y. Zhang, and M. Xiao, “The Talbot effect: recent advances in classical optics, nonlinear optics, and quantum optics,” Adv. Opt. Photon. 5(1), 83–130 (2013).\n[Crossref]\n37. T. Jannson and J. Jannson, “Temporal self-imaging effect in single-mode fibers,” J. Opt. Soc. Am. 71, 1373–1376 (1981).\n38. J. Azaña and M. A. Muriel, “Temporal Talbot effect in fiber gratings and its applications,” Appl. Opt. 38(32), 6700–6704 (1999).\n[Crossref] [PubMed]\n39. J. Azaña and M. A. Muriel, “Temporal self-imaging effects: Theory and application for multiplying pulse repetition rates,” IEEE J. Sel. Top. Quantum Electron. 7(4), 728–744 (2001).\n[Crossref]\n40. J. Azaña and L. R. Chen, “General temporal self-imaging phenomena,” J. Opt. Soc. Am. B 20(7), 1447–1458 (2003).\n[Crossref]\n41. D. Pudo and L. R. Chen, “Tunable passive all-optical pulse repetition rate multiplier using fiber Bragg gratings,” J. Lightwave Technol. 23(4), 1729–1733 (2005).\n[Crossref]\n42. J. Caraquitena and J. Martí, “High-rate pulse-train generation by phase-only filtering of an electrooptic frequency comb: Analysis and optimization,” Opt. Commun. 282(18), 3686–3692 (2009).\n[Crossref]\n43. J. Caraquitena, Z. Jiang, D. E. Leaird, and A. M. Weiner, “Tunable pulse repetition-rate multiplication using phase-only line-by-line pulse shaping,” Opt. Lett. 32(6), 716–718 (2007).\n[Crossref] [PubMed]\n\n#### 2013 (3)\n\nK. Goda and B. Jalali, “Dispersive Fourier transformation for fast continuous single-shot measurements,” Nat. Photonics 7(2), 102–112 (2013).\n[Crossref]\n\n#### 2012 (3)\n\nL. K. Oxenlowe, H. Ji, M. Galili, M. H. Pu, H. Hu, H. C. H. Mulvad, K. Yvind, J. M. Hvam, A. T. Clausen, and P. Jeppesen, “Silicon photonics for signal processing of Tbit/s serial data signals,” IEEE J. Sel. Top. Quantum Electron. 18(2), 996–1005 (2012).\n[Crossref]\n\nS. J. B. Yoo, R. P. Scott, D. J. Geisler, N. K. Fontaine, and F. M. Soares, “Terahertz information and signal processing by RF-photonics,” IEEE Tran. Terahertz Sci. Technol. 2(2), 167–176 (2012).\n[Crossref]\n\nP. J. Delfyett, I. Ozdur, N. Hoghooghi, M. Akbulut, J. Davila-Rodriguez, and S. Bhooplapur, “Advanced ultrafast technologies based on optical frequency combs,” IEEE J. Sel. Top. Quantum Electron. 18(1), 258–274 (2012).\n[Crossref]\n\n#### 2011 (3)\n\nA. E. Willner, O. F. Yilmaz, J. A. Wang, X. X. Wu, A. Bogoni, L. Zhang, and S. R. Nuccio, “Optically efficient nonlinear signal processing,” IEEE J. Sel. Top. Quantum Electron. 17(2), 320–332 (2011).\n[Crossref]\n\nR. S. Tucker and K. Hinton, “Energy consumption and energy density in optical and electronic signal processing,” IEEE Photonics J. 3(5), 821–833 (2011).\n[Crossref]\n\nM. A. Foster, R. Salem, and A. L. Gaeta, “Ultrahigh-speed optical processing using space-time duality,” Opt. Photon. News 22(5), 29–35 (2011).\n[Crossref]\n\n#### 2009 (3)\n\nJ. Caraquitena and J. Martí, “High-rate pulse-train generation by phase-only filtering of an electrooptic frequency comb: Analysis and optimization,” Opt. Commun. 282(18), 3686–3692 (2009).\n[Crossref]\n\n#### 2007 (3)\n\nH. Chi and J. Yao, “All-fiber chirped microwave pulses generation based on spectral shaping and wavelength-to-time conversion,” IEEE Trans. Microwave Theory 55(9), 1958–1963 (2007).\n[Crossref]\n\n#### 2004 (1)\n\nJ. Azana, N. K. Berger, B. Levit, and B. Fischer, “Spectro-temporal imaging of optical pulses with a single time lens,” IEEE Photon. Technol. Lett. 16(3), 882–884 (2004).\n[Crossref]\n\n#### 2003 (2)\n\nJ. Azaña, “Design specifications of time-domain spectral shaping optical system based on dispersion and temporal modulation,” Electron. Lett. 39(21), 1530–1532 (2003).\n[Crossref]\n\n#### 2001 (1)\n\nJ. Azaña and M. A. Muriel, “Temporal self-imaging effects: Theory and application for multiplying pulse repetition rates,” IEEE J. Sel. Top. Quantum Electron. 7(4), 728–744 (2001).\n[Crossref]\n\n#### 2000 (3)\n\nJ. Azana and M. A. Muriel, “Real-time optical spectrum analysis based on the time-space duality in chirped fiber gratings,” IEEE J. Quantum Electron. 36, 517–526 (2000).\n\nC. V. Bennett and B. H. Kolner, “Principles of parametric temporal imaging - Part I: System configurations,” IEEE J. Quantum Electron. 36, 430–437 (2000).\n\nC. V. Bennett and B. H. Kolner, “Principles of parametric temporal imaging - Part II: System performance,” IEEE J. Quantum Electron. 36, 649–655 (2000).\n\n#### 1998 (2)\n\nA. V. Mamaev and M. Saffman, “Selection of unstable patterns and control of optical turbulence by Fourier plane filtering,” Phys. Rev. Lett. 80(16), 3499–3502 (1998).\n[Crossref]\n\nA. M. Weiner and A. M. Kan’an, “Femtosecond pulse shaping for synthesis, processing, and time-to-space conversion of ultrafast optical waveforms,” IEEE J. Sel. Top. Quantum Electron. 4(2), 317–331 (1998).\n[Crossref]\n\n#### 1996 (1)\n\nM. V. Berry and S. Klein, “Integer, fractional and fractal Talbot effects,” J. Mod. Opt. 43(10), 2139–2164 (1996).\n[Crossref]\n\n#### 1995 (3)\n\nM. S. Chapman, C. R. Ekstrom, T. D. Hammond, J. Schmiedmayer, B. E. Tannian, S. Wehinger, and D. E. Pritchard, “Near-field imaging of atom diffraction gratings: the atomic Talbot effect,” Phys. Rev. A 51(1), R14–R17 (1995).\n[Crossref] [PubMed]\n\nA. M. Weiner, “Femtosecond optical pulse shaping and processing,” Prog. Quantum Electron. 19(3), 161–237 (1995).\n[Crossref]\n\n#### 1994 (1)\n\nB. H. Kolner, “Space-time duality and the theory of temporal imaging,” IEEE J. Quantum Electron. 30(8), 1951–1963 (1994).\n[Crossref]\n\n#### 1969 (2)\n\nS. A. Akhmanov, A. P. Sukhoruk, and A. S. Chirkin, “Nonstationary phenomena and space-time analogy in nonlinear optics,” Sov. Phys. JETP-USSR 28, 748 (1969).\n\nE. B. Treacy, “Optical pulse compression with diffraction gratings,” IEEE J. Quantum Electron. 5(9), 454–458 (1969).\n[Crossref]\n\n#### 1881 (1)\n\nL. Rayleigh, “On copying diffraction gratings and on some phenomenon connected therewith,” Philos. Mag. 11(67), 196–205 (1881).\n[Crossref]\n\n#### 1836 (1)\n\nH. F. Talbot, “Facts relating to optical science no. IV,” Philos. Mag. 9, 401–407 (1836).\n\n#### Akbulut, M.\n\nP. J. Delfyett, I. Ozdur, N. Hoghooghi, M. Akbulut, J. Davila-Rodriguez, and S. Bhooplapur, “Advanced ultrafast technologies based on optical frequency combs,” IEEE J. Sel. Top. Quantum Electron. 18(1), 258–274 (2012).\n[Crossref]\n\n#### Akhmanov, S. A.\n\nS. A. Akhmanov, A. P. Sukhoruk, and A. S. Chirkin, “Nonstationary phenomena and space-time analogy in nonlinear optics,” Sov. Phys. JETP-USSR 28, 748 (1969).\n\n#### Azana, J.\n\nJ. Azana, N. K. Berger, B. Levit, and B. Fischer, “Spectro-temporal imaging of optical pulses with a single time lens,” IEEE Photon. Technol. Lett. 16(3), 882–884 (2004).\n[Crossref]\n\nJ. Azana and M. A. Muriel, “Real-time optical spectrum analysis based on the time-space duality in chirped fiber gratings,” IEEE J. Quantum Electron. 36, 517–526 (2000).\n\n#### Azaña, J.\n\nJ. Azaña, “Design specifications of time-domain spectral shaping optical system based on dispersion and temporal modulation,” Electron. Lett. 39(21), 1530–1532 (2003).\n[Crossref]\n\nJ. Azaña and M. A. Muriel, “Temporal self-imaging effects: Theory and application for multiplying pulse repetition rates,” IEEE J. Sel. Top. Quantum Electron. 7(4), 728–744 (2001).\n[Crossref]\n\n#### Bennett, C. V.\n\nC. V. Bennett and B. H. Kolner, “Principles of parametric temporal imaging - Part I: System configurations,” IEEE J. Quantum Electron. 36, 430–437 (2000).\n\nC. V. Bennett and B. H. Kolner, “Principles of parametric temporal imaging - Part II: System performance,” IEEE J. Quantum Electron. 36, 649–655 (2000).\n\n#### Berger, N. K.\n\nJ. Azana, N. K. Berger, B. Levit, and B. Fischer, “Spectro-temporal imaging of optical pulses with a single time lens,” IEEE Photon. Technol. Lett. 16(3), 882–884 (2004).\n[Crossref]\n\n#### Berry, M. V.\n\nM. V. Berry and S. Klein, “Integer, fractional and fractal Talbot effects,” J. Mod. Opt. 43(10), 2139–2164 (1996).\n[Crossref]\n\n#### Bhooplapur, S.\n\nP. J. Delfyett, I. Ozdur, N. Hoghooghi, M. Akbulut, J. Davila-Rodriguez, and S. Bhooplapur, “Advanced ultrafast technologies based on optical frequency combs,” IEEE J. Sel. Top. Quantum Electron. 18(1), 258–274 (2012).\n[Crossref]\n\n#### Bogoni, A.\n\nA. E. Willner, O. F. Yilmaz, J. A. Wang, X. X. Wu, A. Bogoni, L. Zhang, and S. R. Nuccio, “Optically efficient nonlinear signal processing,” IEEE J. Sel. Top. Quantum Electron. 17(2), 320–332 (2011).\n[Crossref]\n\n#### Caraquitena, J.\n\nJ. Caraquitena and J. Martí, “High-rate pulse-train generation by phase-only filtering of an electrooptic frequency comb: Analysis and optimization,” Opt. Commun. 282(18), 3686–3692 (2009).\n[Crossref]\n\n#### Chapman, M. S.\n\nM. S. Chapman, C. R. Ekstrom, T. D. Hammond, J. Schmiedmayer, B. E. Tannian, S. Wehinger, and D. E. Pritchard, “Near-field imaging of atom diffraction gratings: the atomic Talbot effect,” Phys. Rev. A 51(1), R14–R17 (1995).\n[Crossref] [PubMed]\n\n#### Chi, H.\n\nH. Chi and J. Yao, “All-fiber chirped microwave pulses generation based on spectral shaping and wavelength-to-time conversion,” IEEE Trans. Microwave Theory 55(9), 1958–1963 (2007).\n[Crossref]\n\n#### Chirkin, A. S.\n\nS. A. Akhmanov, A. P. Sukhoruk, and A. S. Chirkin, “Nonstationary phenomena and space-time analogy in nonlinear optics,” Sov. Phys. JETP-USSR 28, 748 (1969).\n\n#### Clausen, A. T.\n\nL. K. Oxenlowe, H. Ji, M. Galili, M. H. Pu, H. Hu, H. C. H. Mulvad, K. Yvind, J. M. Hvam, A. T. Clausen, and P. Jeppesen, “Silicon photonics for signal processing of Tbit/s serial data signals,” IEEE J. Sel. Top. Quantum Electron. 18(2), 996–1005 (2012).\n[Crossref]\n\n#### Davila-Rodriguez, J.\n\nP. J. Delfyett, I. Ozdur, N. Hoghooghi, M. Akbulut, J. Davila-Rodriguez, and S. Bhooplapur, “Advanced ultrafast technologies based on optical frequency combs,” IEEE J. Sel. Top. Quantum Electron. 18(1), 258–274 (2012).\n[Crossref]\n\n#### Delfyett, P. J.\n\nP. J. Delfyett, I. Ozdur, N. Hoghooghi, M. Akbulut, J. Davila-Rodriguez, and S. Bhooplapur, “Advanced ultrafast technologies based on optical frequency combs,” IEEE J. Sel. Top. Quantum Electron. 18(1), 258–274 (2012).\n[Crossref]\n\n#### Ekstrom, C. R.\n\nM. S. Chapman, C. R. Ekstrom, T. D. Hammond, J. Schmiedmayer, B. E. Tannian, S. Wehinger, and D. E. Pritchard, “Near-field imaging of atom diffraction gratings: the atomic Talbot effect,” Phys. Rev. A 51(1), R14–R17 (1995).\n[Crossref] [PubMed]\n\n#### Fischer, B.\n\nJ. Azana, N. K. Berger, B. Levit, and B. Fischer, “Spectro-temporal imaging of optical pulses with a single time lens,” IEEE Photon. Technol. Lett. 16(3), 882–884 (2004).\n[Crossref]\n\n#### Fontaine, N. K.\n\nS. J. B. Yoo, R. P. Scott, D. J. Geisler, N. K. Fontaine, and F. M. Soares, “Terahertz information and signal processing by RF-photonics,” IEEE Tran. Terahertz Sci. Technol. 2(2), 167–176 (2012).\n[Crossref]\n\n#### Foster, M. A.\n\nM. A. Foster, R. Salem, and A. L. Gaeta, “Ultrahigh-speed optical processing using space-time duality,” Opt. Photon. News 22(5), 29–35 (2011).\n[Crossref]\n\n#### Gaeta, A. L.\n\nM. A. Foster, R. Salem, and A. L. Gaeta, “Ultrahigh-speed optical processing using space-time duality,” Opt. Photon. News 22(5), 29–35 (2011).\n[Crossref]\n\n#### Galili, M.\n\nL. K. Oxenlowe, H. Ji, M. Galili, M. H. Pu, H. Hu, H. C. H. Mulvad, K. Yvind, J. M. Hvam, A. T. Clausen, and P. Jeppesen, “Silicon photonics for signal processing of Tbit/s serial data signals,” IEEE J. Sel. Top. Quantum Electron. 18(2), 996–1005 (2012).\n[Crossref]\n\n#### Geisler, D. J.\n\nS. J. B. Yoo, R. P. Scott, D. J. Geisler, N. K. Fontaine, and F. M. Soares, “Terahertz information and signal processing by RF-photonics,” IEEE Tran. Terahertz Sci. Technol. 2(2), 167–176 (2012).\n[Crossref]\n\n#### Goda, K.\n\nK. Goda and B. Jalali, “Dispersive Fourier transformation for fast continuous single-shot measurements,” Nat. Photonics 7(2), 102–112 (2013).\n[Crossref]\n\n#### Hammond, T. D.\n\nM. S. Chapman, C. R. Ekstrom, T. D. Hammond, J. Schmiedmayer, B. E. Tannian, S. Wehinger, and D. E. Pritchard, “Near-field imaging of atom diffraction gratings: the atomic Talbot effect,” Phys. Rev. A 51(1), R14–R17 (1995).\n[Crossref] [PubMed]\n\n#### Hinton, K.\n\nR. S. Tucker and K. Hinton, “Energy consumption and energy density in optical and electronic signal processing,” IEEE Photonics J. 3(5), 821–833 (2011).\n[Crossref]\n\n#### Hoghooghi, N.\n\nP. J. Delfyett, I. Ozdur, N. Hoghooghi, M. Akbulut, J. Davila-Rodriguez, and S. Bhooplapur, “Advanced ultrafast technologies based on optical frequency combs,” IEEE J. Sel. Top. Quantum Electron. 18(1), 258–274 (2012).\n[Crossref]\n\n#### Hu, H.\n\nL. K. Oxenlowe, H. Ji, M. Galili, M. H. Pu, H. Hu, H. C. H. Mulvad, K. Yvind, J. M. Hvam, A. T. Clausen, and P. Jeppesen, “Silicon photonics for signal processing of Tbit/s serial data signals,” IEEE J. Sel. Top. Quantum Electron. 18(2), 996–1005 (2012).\n[Crossref]\n\n#### Hvam, J. M.\n\nL. K. Oxenlowe, H. Ji, M. Galili, M. H. Pu, H. Hu, H. C. H. Mulvad, K. Yvind, J. M. Hvam, A. T. Clausen, and P. Jeppesen, “Silicon photonics for signal processing of Tbit/s serial data signals,” IEEE J. Sel. Top. Quantum Electron. 18(2), 996–1005 (2012).\n[Crossref]\n\n#### Jalali, B.\n\nK. Goda and B. Jalali, “Dispersive Fourier transformation for fast continuous single-shot measurements,” Nat. Photonics 7(2), 102–112 (2013).\n[Crossref]\n\n#### Jeppesen, P.\n\nL. K. Oxenlowe, H. Ji, M. Galili, M. H. Pu, H. Hu, H. C. H. Mulvad, K. Yvind, J. M. Hvam, A. T. Clausen, and P. Jeppesen, “Silicon photonics for signal processing of Tbit/s serial data signals,” IEEE J. Sel. Top. Quantum Electron. 18(2), 996–1005 (2012).\n[Crossref]\n\n#### Ji, H.\n\nL. K. Oxenlowe, H. Ji, M. Galili, M. H. Pu, H. Hu, H. C. H. Mulvad, K. Yvind, J. M. Hvam, A. T. Clausen, and P. Jeppesen, “Silicon photonics for signal processing of Tbit/s serial data signals,” IEEE J. Sel. Top. Quantum Electron. 18(2), 996–1005 (2012).\n[Crossref]\n\n#### Kan’an, A. M.\n\nA. M. Weiner and A. M. Kan’an, “Femtosecond pulse shaping for synthesis, processing, and time-to-space conversion of ultrafast optical waveforms,” IEEE J. Sel. Top. Quantum Electron. 4(2), 317–331 (1998).\n[Crossref]\n\n#### Klein, S.\n\nM. V. Berry and S. Klein, “Integer, fractional and fractal Talbot effects,” J. Mod. Opt. 43(10), 2139–2164 (1996).\n[Crossref]\n\n#### Kolner, B. H.\n\nC. V. Bennett and B. H. Kolner, “Principles of parametric temporal imaging - Part II: System performance,” IEEE J. Quantum Electron. 36, 649–655 (2000).\n\nC. V. Bennett and B. H. Kolner, “Principles of parametric temporal imaging - Part I: System configurations,” IEEE J. Quantum Electron. 36, 430–437 (2000).\n\nB. H. Kolner, “Space-time duality and the theory of temporal imaging,” IEEE J. Quantum Electron. 30(8), 1951–1963 (1994).\n[Crossref]\n\n#### Levit, B.\n\nJ. Azana, N. K. Berger, B. Levit, and B. Fischer, “Spectro-temporal imaging of optical pulses with a single time lens,” IEEE Photon. Technol. Lett. 16(3), 882–884 (2004).\n[Crossref]\n\n#### Mamaev, A. V.\n\nA. V. Mamaev and M. Saffman, “Selection of unstable patterns and control of optical turbulence by Fourier plane filtering,” Phys. Rev. Lett. 80(16), 3499–3502 (1998).\n[Crossref]\n\n#### Martí, J.\n\nJ. Caraquitena and J. Martí, “High-rate pulse-train generation by phase-only filtering of an electrooptic frequency comb: Analysis and optimization,” Opt. Commun. 282(18), 3686–3692 (2009).\n[Crossref]\n\nL. K. Oxenlowe, H. Ji, M. Galili, M. H. Pu, H. Hu, H. C. H. Mulvad, K. Yvind, J. M. Hvam, A. T. Clausen, and P. Jeppesen, “Silicon photonics for signal processing of Tbit/s serial data signals,” IEEE J. Sel. Top. Quantum Electron. 18(2), 996–1005 (2012).\n[Crossref]\n\n#### Muriel, M. A.\n\nJ. Azaña and M. A. Muriel, “Temporal self-imaging effects: Theory and application for multiplying pulse repetition rates,” IEEE J. Sel. Top. Quantum Electron. 7(4), 728–744 (2001).\n[Crossref]\n\nJ. Azana and M. A. Muriel, “Real-time optical spectrum analysis based on the time-space duality in chirped fiber gratings,” IEEE J. Quantum Electron. 36, 517–526 (2000).\n\n#### Nuccio, S. R.\n\nA. E. Willner, O. F. Yilmaz, J. A. Wang, X. X. Wu, A. Bogoni, L. Zhang, and S. R. Nuccio, “Optically efficient nonlinear signal processing,” IEEE J. Sel. Top. Quantum Electron. 17(2), 320–332 (2011).\n[Crossref]\n\n#### Oxenlowe, L. K.\n\nL. K. Oxenlowe, H. Ji, M. Galili, M. H. Pu, H. Hu, H. C. H. Mulvad, K. Yvind, J. M. Hvam, A. T. Clausen, and P. Jeppesen, “Silicon photonics for signal processing of Tbit/s serial data signals,” IEEE J. Sel. Top. Quantum Electron. 18(2), 996–1005 (2012).\n[Crossref]\n\n#### Ozdur, I.\n\nP. J. Delfyett, I. Ozdur, N. Hoghooghi, M. Akbulut, J. Davila-Rodriguez, and S. Bhooplapur, “Advanced ultrafast technologies based on optical frequency combs,” IEEE J. Sel. Top. Quantum Electron. 18(1), 258–274 (2012).\n[Crossref]\n\n#### Pritchard, D. E.\n\nM. S. Chapman, C. R. Ekstrom, T. D. Hammond, J. Schmiedmayer, B. E. Tannian, S. Wehinger, and D. E. Pritchard, “Near-field imaging of atom diffraction gratings: the atomic Talbot effect,” Phys. Rev. A 51(1), R14–R17 (1995).\n[Crossref] [PubMed]\n\n#### Pu, M. H.\n\nL. K. Oxenlowe, H. Ji, M. Galili, M. H. Pu, H. Hu, H. C. H. Mulvad, K. Yvind, J. M. Hvam, A. T. Clausen, and P. Jeppesen, “Silicon photonics for signal processing of Tbit/s serial data signals,” IEEE J. Sel. Top. Quantum Electron. 18(2), 996–1005 (2012).\n[Crossref]\n\n#### Rayleigh, L.\n\nL. Rayleigh, “On copying diffraction gratings and on some phenomenon connected therewith,” Philos. Mag. 11(67), 196–205 (1881).\n[Crossref]\n\n#### Saffman, M.\n\nA. V. Mamaev and M. Saffman, “Selection of unstable patterns and control of optical turbulence by Fourier plane filtering,” Phys. Rev. Lett. 80(16), 3499–3502 (1998).\n[Crossref]\n\n#### Salem, R.\n\nM. A. Foster, R. Salem, and A. L. Gaeta, “Ultrahigh-speed optical processing using space-time duality,” Opt. Photon. News 22(5), 29–35 (2011).\n[Crossref]\n\n#### Schmiedmayer, J.\n\nM. S. Chapman, C. R. Ekstrom, T. D. Hammond, J. Schmiedmayer, B. E. Tannian, S. Wehinger, and D. E. Pritchard, “Near-field imaging of atom diffraction gratings: the atomic Talbot effect,” Phys. Rev. A 51(1), R14–R17 (1995).\n[Crossref] [PubMed]\n\n#### Scott, R. P.\n\nS. J. B. Yoo, R. P. Scott, D. J. Geisler, N. K. Fontaine, and F. M. Soares, “Terahertz information and signal processing by RF-photonics,” IEEE Tran. Terahertz Sci. Technol. 2(2), 167–176 (2012).\n[Crossref]\n\n#### Soares, F. M.\n\nS. J. B. Yoo, R. P. Scott, D. J. Geisler, N. K. Fontaine, and F. M. Soares, “Terahertz information and signal processing by RF-photonics,” IEEE Tran. Terahertz Sci. Technol. 2(2), 167–176 (2012).\n[Crossref]\n\n#### Sukhoruk, A. P.\n\nS. A. Akhmanov, A. P. Sukhoruk, and A. S. Chirkin, “Nonstationary phenomena and space-time analogy in nonlinear optics,” Sov. Phys. JETP-USSR 28, 748 (1969).\n\n#### Talbot, H. F.\n\nH. F. Talbot, “Facts relating to optical science no. IV,” Philos. Mag. 9, 401–407 (1836).\n\n#### Tannian, B. E.\n\nM. S. Chapman, C. R. Ekstrom, T. D. Hammond, J. Schmiedmayer, B. E. Tannian, S. Wehinger, and D. E. Pritchard, “Near-field imaging of atom diffraction gratings: the atomic Talbot effect,” Phys. Rev. A 51(1), R14–R17 (1995).\n[Crossref] [PubMed]\n\n#### Treacy, E. B.\n\nE. B. Treacy, “Optical pulse compression with diffraction gratings,” IEEE J. Quantum Electron. 5(9), 454–458 (1969).\n[Crossref]\n\n#### Tucker, R. S.\n\nR. S. Tucker and K. Hinton, “Energy consumption and energy density in optical and electronic signal processing,” IEEE Photonics J. 3(5), 821–833 (2011).\n[Crossref]\n\n#### Wang, J. A.\n\nA. E. Willner, O. F. Yilmaz, J. A. Wang, X. X. Wu, A. Bogoni, L. Zhang, and S. R. Nuccio, “Optically efficient nonlinear signal processing,” IEEE J. Sel. Top. Quantum Electron. 17(2), 320–332 (2011).\n[Crossref]\n\n#### Wehinger, S.\n\nM. S. Chapman, C. R. Ekstrom, T. D. Hammond, J. Schmiedmayer, B. E. Tannian, S. Wehinger, and D. E. Pritchard, “Near-field imaging of atom diffraction gratings: the atomic Talbot effect,” Phys. Rev. A 51(1), R14–R17 (1995).\n[Crossref] [PubMed]\n\n#### Weiner, A. M.\n\nA. M. Weiner and A. M. Kan’an, “Femtosecond pulse shaping for synthesis, processing, and time-to-space conversion of ultrafast optical waveforms,” IEEE J. Sel. Top. Quantum Electron. 4(2), 317–331 (1998).\n[Crossref]\n\nA. M. Weiner, “Femtosecond optical pulse shaping and processing,” Prog. Quantum Electron. 19(3), 161–237 (1995).\n[Crossref]\n\n#### Willner, A. E.\n\nA. E. Willner, O. F. Yilmaz, J. A. Wang, X. X. Wu, A. Bogoni, L. Zhang, and S. R. Nuccio, “Optically efficient nonlinear signal processing,” IEEE J. Sel. Top. Quantum Electron. 17(2), 320–332 (2011).\n[Crossref]\n\n#### Wu, X. X.\n\nA. E. Willner, O. F. Yilmaz, J. A. Wang, X. X. Wu, A. Bogoni, L. Zhang, and S. R. Nuccio, “Optically efficient nonlinear signal processing,” IEEE J. Sel. Top. Quantum Electron. 17(2), 320–332 (2011).\n[Crossref]\n\n#### Yao, J.\n\nH. Chi and J. Yao, “All-fiber chirped microwave pulses generation based on spectral shaping and wavelength-to-time conversion,” IEEE Trans. Microwave Theory 55(9), 1958–1963 (2007).\n[Crossref]\n\n#### Yilmaz, O. F.\n\nA. E. Willner, O. F. Yilmaz, J. A. Wang, X. X. Wu, A. Bogoni, L. Zhang, and S. R. Nuccio, “Optically efficient nonlinear signal processing,” IEEE J. Sel. Top. Quantum Electron. 17(2), 320–332 (2011).\n[Crossref]\n\n#### Yoo, S. J. B.\n\nS. J. B. Yoo, R. P. Scott, D. J. Geisler, N. K. Fontaine, and F. M. Soares, “Terahertz information and signal processing by RF-photonics,” IEEE Tran. Terahertz Sci. Technol. 2(2), 167–176 (2012).\n[Crossref]\n\n#### Yvind, K.\n\nL. K. Oxenlowe, H. Ji, M. Galili, M. H. Pu, H. Hu, H. C. H. Mulvad, K. Yvind, J. M. Hvam, A. T. Clausen, and P. Jeppesen, “Silicon photonics for signal processing of Tbit/s serial data signals,” IEEE J. Sel. Top. Quantum Electron. 18(2), 996–1005 (2012).\n[Crossref]\n\n#### Zhang, L.\n\nA. E. Willner, O. F. Yilmaz, J. A. Wang, X. X. Wu, A. Bogoni, L. Zhang, and S. R. Nuccio, “Optically efficient nonlinear signal processing,” IEEE J. Sel. Top. Quantum Electron. 17(2), 320–332 (2011).\n[Crossref]\n\n#### Electron. Lett. (1)\n\nJ. Azaña, “Design specifications of time-domain spectral shaping optical system based on dispersion and temporal modulation,” Electron. Lett. 39(21), 1530–1532 (2003).\n[Crossref]\n\n#### IEEE J. Quantum Electron. (5)\n\nJ. Azana and M. A. Muriel, “Real-time optical spectrum analysis based on the time-space duality in chirped fiber gratings,” IEEE J. Quantum Electron. 36, 517–526 (2000).\n\nB. H. Kolner, “Space-time duality and the theory of temporal imaging,” IEEE J. Quantum Electron. 30(8), 1951–1963 (1994).\n[Crossref]\n\nE. B. Treacy, “Optical pulse compression with diffraction gratings,” IEEE J. Quantum Electron. 5(9), 454–458 (1969).\n[Crossref]\n\nC. V. Bennett and B. H. Kolner, “Principles of parametric temporal imaging - Part I: System configurations,” IEEE J. Quantum Electron. 36, 430–437 (2000).\n\nC. V. Bennett and B. H. Kolner, “Principles of parametric temporal imaging - Part II: System performance,” IEEE J. Quantum Electron. 36, 649–655 (2000).\n\n#### IEEE J. Sel. Top. Quantum Electron. (5)\n\nP. J. Delfyett, I. Ozdur, N. Hoghooghi, M. Akbulut, J. Davila-Rodriguez, and S. Bhooplapur, “Advanced ultrafast technologies based on optical frequency combs,” IEEE J. Sel. Top. Quantum Electron. 18(1), 258–274 (2012).\n[Crossref]\n\nA. E. Willner, O. F. Yilmaz, J. A. Wang, X. X. Wu, A. Bogoni, L. Zhang, and S. R. Nuccio, “Optically efficient nonlinear signal processing,” IEEE J. Sel. Top. Quantum Electron. 17(2), 320–332 (2011).\n[Crossref]\n\nL. K. Oxenlowe, H. Ji, M. Galili, M. H. Pu, H. Hu, H. C. H. Mulvad, K. Yvind, J. M. Hvam, A. T. Clausen, and P. Jeppesen, “Silicon photonics for signal processing of Tbit/s serial data signals,” IEEE J. Sel. Top. Quantum Electron. 18(2), 996–1005 (2012).\n[Crossref]\n\nA. M. Weiner and A. M. Kan’an, “Femtosecond pulse shaping for synthesis, processing, and time-to-space conversion of ultrafast optical waveforms,” IEEE J. Sel. Top. Quantum Electron. 4(2), 317–331 (1998).\n[Crossref]\n\nJ. Azaña and M. A. Muriel, “Temporal self-imaging effects: Theory and application for multiplying pulse repetition rates,” IEEE J. Sel. Top. Quantum Electron. 7(4), 728–744 (2001).\n[Crossref]\n\n#### IEEE Photon. Technol. Lett. (1)\n\nJ. Azana, N. K. Berger, B. Levit, and B. Fischer, “Spectro-temporal imaging of optical pulses with a single time lens,” IEEE Photon. Technol. Lett. 16(3), 882–884 (2004).\n[Crossref]\n\n#### IEEE Photonics J. (1)\n\nR. S. Tucker and K. Hinton, “Energy consumption and energy density in optical and electronic signal processing,” IEEE Photonics J. 3(5), 821–833 (2011).\n[Crossref]\n\n#### IEEE Tran. Terahertz Sci. Technol. (1)\n\nS. J. B. Yoo, R. P. Scott, D. J. Geisler, N. K. Fontaine, and F. M. Soares, “Terahertz information and signal processing by RF-photonics,” IEEE Tran. Terahertz Sci. Technol. 2(2), 167–176 (2012).\n[Crossref]\n\n#### IEEE Trans. Microwave Theory (1)\n\nH. Chi and J. Yao, “All-fiber chirped microwave pulses generation based on spectral shaping and wavelength-to-time conversion,” IEEE Trans. Microwave Theory 55(9), 1958–1963 (2007).\n[Crossref]\n\n#### J. Mod. Opt. (1)\n\nM. V. Berry and S. Klein, “Integer, fractional and fractal Talbot effects,” J. Mod. Opt. 43(10), 2139–2164 (1996).\n[Crossref]\n\n#### Nat. Photonics (1)\n\nK. Goda and B. Jalali, “Dispersive Fourier transformation for fast continuous single-shot measurements,” Nat. Photonics 7(2), 102–112 (2013).\n[Crossref]\n\n#### Opt. Commun. (1)\n\nJ. Caraquitena and J. Martí, “High-rate pulse-train generation by phase-only filtering of an electrooptic frequency comb: Analysis and optimization,” Opt. Commun. 282(18), 3686–3692 (2009).\n[Crossref]\n\n#### Opt. Photon. News (1)\n\nM. A. Foster, R. Salem, and A. L. Gaeta, “Ultrahigh-speed optical processing using space-time duality,” Opt. Photon. News 22(5), 29–35 (2011).\n[Crossref]\n\n#### Philos. Mag. (2)\n\nH. F. Talbot, “Facts relating to optical science no. IV,” Philos. Mag. 9, 401–407 (1836).\n\nL. Rayleigh, “On copying diffraction gratings and on some phenomenon connected therewith,” Philos. Mag. 11(67), 196–205 (1881).\n[Crossref]\n\n#### Phys. Rev. A (1)\n\nM. S. Chapman, C. R. Ekstrom, T. D. Hammond, J. Schmiedmayer, B. E. Tannian, S. Wehinger, and D. E. Pritchard, “Near-field imaging of atom diffraction gratings: the atomic Talbot effect,” Phys. Rev. A 51(1), R14–R17 (1995).\n[Crossref] [PubMed]\n\n#### Phys. Rev. Lett. (1)\n\nA. V. Mamaev and M. Saffman, “Selection of unstable patterns and control of optical turbulence by Fourier plane filtering,” Phys. Rev. Lett. 80(16), 3499–3502 (1998).\n[Crossref]\n\n#### Prog. Quantum Electron. (1)\n\nA. M. Weiner, “Femtosecond optical pulse shaping and processing,” Prog. Quantum Electron. 19(3), 161–237 (1995).\n[Crossref]\n\n#### Sov. Phys. JETP-USSR (1)\n\nS. A. Akhmanov, A. P. Sukhoruk, and A. S. Chirkin, “Nonstationary phenomena and space-time analogy in nonlinear optics,” Sov. Phys. JETP-USSR 28, 748 (1969).\n\n#### Other (2)\n\nJ. W. Goodman, Introduction to Fourier Optics (Roberts and Co., 2005).\n\nA. Papoulis, Systems and Transforms With Applications in Optics (McGraw-Hill, 1968).\n\n### Cited By\n\nOSA participates in Crossref's Cited-By Linking service. Citing articles from OSA journals and other participating publishers are listed here.\n\n### Figures (9)\n\nFig. 1 (a) Train of pulses that have been intensity modulated by a set of coefficients with N = 4 and (b) grouping of the different trains depending on the modulating coefficient.\nFig. 2 Output of the system, for the input train shown in Fig. 1, when (a) s is even or (b) s is odd.\nFig. 3 (a) Train of pulses modulated at the input of the system with N = 3, and (b) at the output of the system.\nFig. 4 Proposed setup with intensity and phase modulation in the time domain followed by a dispersive device (EO-IM: Electrooptic Intensity Modulator, EO-PM: Electrooptic Phase Modulator, DCF: Dispersion Compensating Fibre).\nFig. 5 Train of pulses at the input (a) and at the output (b) of the system when no modulation is applied.\nFig. 6 Train of pulses at the input (dashed) and output (solid) of the system and the expected output envelope (dotted), when intensity-only coefficients (a) 1000000000 and (b) 1000000001 are applied.\nFig. 7 Train of pulses at the input (dashed) and output (solid) of the system and the expected output envelope (dotted) for a (a) ramp, (b) triangular, (c) binary data and (d) random envelope.\nFig. 8 Train of pulses at the output of the system for a desired triangular envelope with N = 20; 20 periods of the output are shown in (a) and zooms of the signal in (a) for three specific periods are shown in (b) to (d).\nFig. 9 Train of pulses at the output of the system for a desired triangular envelope with N = 60; 60 periods of the output are shown in (a) and zooms of the signal in (a) for three specific periods are shown in (b) to (d).\n\n### Equations (47)\n\n$x( t )=c( t ) ∑ k=−∞ ∞ a( t−k T 0 )$\n$c(t)=c(t+N T 0 )$\n$c(t)≈c(k T 0 )= c k for k T 0 − Δt 2 ≤t≤k T 0 + Δt 2 where k is an integer$\n$x( t )= ∑ l=0 N−1 [ c l ∑ k=−∞ ∞ a( t−l T 0 −k T 1 ) ]$\n$| ϕ |= T 1 2 s 2π = N 2 T 0 2 s 2π s=±1, ±2, ±3...$\n$y( t )= ∑ l=0 N−1 [ c l ∑ k=−∞ ∞ a( t−k T 1 −l T 0 ) ]$\n$P out ( t )= ∑ l=0 N−1 | c l | 2 ∑ k=−∞ ∞ | a( t−k T 1 −l T 0 ) | 2$\n$y( t )= ∑ l=0 N−1 [ c l ∑ k=−∞ ∞ a( t−k T 1 −l T 0 − T 1 /2 ) ]$\n$P out ( t )= ∑ l=0 N−1 | c l | 2 ∑ k=−∞ ∞ | a( t−k T 1 −l T 0 − T 1 /2 ) | 2$\n$| ϕ |= T 1 2 2π s m = N 2 T 0 2 2π s m { s=±1, ±2, ±3... m= 2, 3,4...$\n$Δt≤ T 0 m = T 1 mN$\n$c l ∑ k=−∞ ∞ A k a( t−k T 1 m −l T 0 )$\n$A k = 1 m ∑ q=0 m−1 exp( jπ{ s m q 2 + 2k m q } )$\n$y( t )= ∑ k=−∞ ∞ ∑ l=0 N−1 c l A k a( t−k N m T 0 −l T 0 )$\n$c l ∑ k=−∞ ∞ B k a( t−k T 1 m −l T 0 − T 1 2m )$\n$B k = 1 m ∑ q=0 m−1 exp( jπ{ s m q 2 + 2k+1 m q } )$\n$y( t )= ∑ k=−∞ ∞ ∑ l=0 N−1 c l B k a( t−k N m T 0 −l T 0 − N 2m T 0 )$\n$y( t )= ∑ k'=−∞ ∞ [ ∑ l=0 N−1 A k ',l c l ]a( t−k' T 0 N ) = = ∑ k'=−∞ ∞ n k ′ a( t−k' T 0 N )$\n$n k' = 1 N 2 ∑ l=0 N−1 ∑ q=0 N 2 −1 c l exp( j π N 2 { q 2 +( 2k'−2lN )q } ) = = 1 N 2 ∑ q=0 N 2 −1 [ ∑ l=0 N−1 c l exp( −j 2π N lq ) ]exp( j π N 2 { q 2 +2k'q } )$\n$n k' = 1 N 2 ∑ w=0 N−1 [ ∑ z=0 N−1 [ ∑ l=0 N−1 c l exp( j π N 2 { ( w+zN ) 2 +( 2k'−2lN )( w+zN ) } ) ] ] = 1 N 2 ∑ w=0 N−1 [ exp( j π N 2 { w 2 +2k'w } ) ∑ l=0 N−1 [ c l exp( −j 2π N lw ) ] ∑ z=0 N−1 [ ( −1 ) z exp( j 2π N z( w+k' ) ) ] ]$\n$n k' = 1 N 2 ∑ w=0 N−1 [ C w exp( j π N 2 { w 2 +2wk' } ) ∑ z=0 N−1 [ ( −1 ) z exp( j 2π N z{ w+k' } ) ] ]$\n$C w = ∑ l=0 N−1 c l exp( −j 2π N lw )$\n$∑ z=0 N−1 ( −1 ) z exp( j 2π N z{ w+k' } ) ={ 0 if w≠rN−k'+ N 2 N if w=rN−k'+ N 2$\n$k'−N/2 N ≤r≤ k'+N/2 −1 N$\n$w 0 = r 0 N+ N 2 −k'$\n$n k' = 1 N C w 0 exp( j π N 2 { w 0 2 +2 w 0 k' } )= = 1 N C N 2 −k' exp( j π 4 )exp( −jπ ( k' N ) 2 )$\n$y( t )= ∑ k'=−∞ ∞ 1 N C N 2 −k' exp( j π 4 )exp( −jπ ( k' N ) 2 )a( t−k' T 0 N )$\n$P out ( t )= 1 N 2 ∑ k''=−∞ ∞ | C k'' | 2 | a( t+k'' T 0 N − T 0 2 ) | 2$\n$y( t )= ∑ k'=−∞ ∞ [ ∑ l=0 N−1 B k ',l c l ]a( t−k' T 0 N − T 0 2N ) = = ∑ k'=−∞ ∞ m k ′ a( t−k' T 0 N − T 0 2N )$\n$m k' = 1 N 2 ∑ l=0 N−1 ∑ q=0 N 2 −1 c l exp( j π N 2 { q 2 +( 2k'−2lN+1 )q } ) = = 1 N 2 ∑ q=0 N 2 −1 [ ∑ l=0 N−1 c l exp( −j 2π N lq ) ]exp( j π N 2 { q 2 +( 2k'+1 )q } )$\n$m k' = 1 N 2 ∑ w=0 N−1 [ C w exp( j π N 2 { w 2 +w( 2k'+1 ) } )[ ∑ z=0 N−1 ( −1 ) z exp( j π N z( 2w+2k'+1 ) ) ] ]$\n$∑ z=0 N−1 ( −1 ) z exp( j π N z( 2w+2k'+1 ) ) ={ 0 if w≠rN−k'+ N−1 2 N if w=rN−k'+ N−1 2$\n$k'− ( N−1 ) /2 N ≤r≤ k'+ ( N−1 ) /2 N$\n$m k' = 1 N C w 0 exp( j π N 2 { w 0 2 + w 0 ( 2k'+1 ) } )= = 1 N C N−1 2 −k' exp( j π 4 )exp( −j π N 2 ( k'− 1 2 ) 2 )$\n$P out ( t )= ∑ k'=−∞ ∞ | m k' | 2 | a( t−k' T 0 N − T 0 2N ) | 2 = = 1 N 2 ∑ k'=−∞ ∞ | C N−1 2 −k' | 2 | a( t− k ′ T 0 N − T 0 2N ) | 2$\n$P out ( t )= 1 N 2 ∑ k'''=−∞ ∞ | C k''' | 2 | a( t+k''' T 0 N − T 0 2 ) | 2$\n$∑ h=0 N−1 ( −1 ) h exp( j π N hs' ) ={ 0 s'≠(2r+1)N N s'=(2r+1)N$\n$∑ h=0 N−1 ( −1 ) h exp( j π N hs' ) =[ ∑ z=0 N/2−1 exp( j 2π N zs' ) ][ 1−exp( j π N s' ) ]$\n$∑ z=0 M−1 a z = 1− a M 1−a$\n$∑ z=0 N/2−1 exp( j 2π N zs' ) = 1−exp( jπs' ) [ 1+exp( j π N s' ) ][ 1−exp( j π N s' ) ]$\n$∑ h=0 N−1 ( −1 ) h exp( j π N hs' ) = 1−exp( jπs' ) 1+exp( j π N s' )$\n$∑ h=0 N−1 ( −1 ) h exp( j π N hs' ) =[ ∑ z=0 N−1 2 −1 exp( j π N 2zs' ) ][ 1−exp( j π N s' ) ]+exp( jπ ( N−1 ) N s' )$\n$∑ z=0 N−1 2 −1 exp( j π N 2zs' ) = 1−exp( jπ N−1 N s' ) [ 1+exp( j π N s' ) ][ 1−exp( j π N s' ) ]$\n$∑ h=0 N−1 ( −1 ) h exp( j π N hs' ) = 1+exp( jπs' ) 1+exp( j π N s' )$\n$∑ h=0 N−1 ( −1 ) h exp( j π N hs' ) = 1− ( −1 ) ( N+s' ) 1+exp( j π N s' )$\n$lim s'→(2r+1)N 1− ( −1 ) N+s' 1+exp( j π N s' ) = lim s'→(2r+1)N −jπ ( −1 ) N+s' j π N exp( j π N s' ) =N$\n$∑ h=0 N−1 ( −1 ) h exp( j π N hs' ) ={ 0 s'≠(2r+1)N N s'=(2r+1)N$" ]
[ null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null, "http://proxy.osapublishing.org/images/ajax-loader-big.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9175848,"math_prob":0.9671779,"size":35934,"snap":"2021-43-2021-49","text_gpt3_token_len":8311,"char_repetition_ratio":0.16267742,"word_repetition_ratio":0.07310838,"special_character_ratio":0.23715702,"punctuation_ratio":0.1570306,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.99169916,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-01T06:19:12Z\",\"WARC-Record-ID\":\"<urn:uuid:d1835dc7-2ffb-4600-9eb1-608b3dc4de61>\",\"Content-Length\":\"429370\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6921db42-4152-4789-a7c8-3587b8fd762c>\",\"WARC-Concurrent-To\":\"<urn:uuid:50745fd7-decc-44e8-a531-f2d01ddae439>\",\"WARC-IP-Address\":\"65.202.222.160\",\"WARC-Target-URI\":\"http://proxy.osapublishing.org/oe/fulltext.cfm?uri=oe-22-12-15251&id=290637\",\"WARC-Payload-Digest\":\"sha1:2IW2OYDJZ7LGCLWRWBUKX43FEGGQURFK\",\"WARC-Block-Digest\":\"sha1:SV4EX7TWMPJWIALTBNMEAPFM4C532HE7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964359093.97_warc_CC-MAIN-20211201052655-20211201082655-00218.warc.gz\"}"}
http://www-desir.lip6.fr/~phw/aGrUM/BookOfWhy/notebooks/BoW-c1p050-smallPox.ipynb.html
[ "#### This notebook follows the example from \"The Book Of Why\" (Pearl, 2018) chapter 1 page 050.¶\n\nIn :\nfrom IPython.display import display, Math, Latex,HTML\n\nimport pyAgrum as gum\nimport pyAgrum.lib.notebook as gnb\nimport pyAgrum.causal as csl\nimport pyAgrum.causal.notebook as cslnb\nimport os\n\n\n\"When the smallpox vaccine was first introduced. Unexpectedly, data showed that more people died from smallpox inoculations than from smallpox itself. Naturally, some people used this information to argue that inoculation should be banned, when in fact it was saving lives by eradicating smallpox. Let’s look at some fictitious data to illustrate the effect and settle the dispute. We have a population of one million children.\" chapter1, page 50.\n\n### We create the causal diagram:¶\n\n• The vaccination rate is set to 0.99.\n• You have a 0.01 probability of developing a reaction to the vaccine.\n• You have a 0.02 probability of contracting smallpox if you don't get vaccinated.\n• You have a 0.01 probability of dying from a reaction to the vaccine.\n• You have a 0.2 probability of dying from smallpox.\n\nThe corresponding causal diagram is the following:\n\nIn :\nsp = gum.fastBN(\"vaccination->reaction->death<-smallPox<-vaccination\")\nsp\n\nOut:\nIn :\n# filling the CPTs\n\n# This function adds a uniform noise to a CPT\ndef smallPertubation(cpt,epsilon=0.0001):\ncpt.fillWith(cpt.translate(epsilon).normalizeAsCPT())\n\n# Fictitious data\nsp.cpt(\"vaccination\")[:] = [0.01, 0.99]\nsp.cpt(\"reaction\")[0:] = [1,0] # sp=0\nsp.cpt(\"reaction\")[1:] = [0.99, 0.01] # sp=1\nsp.cpt(\"smallPox\")[0:] = [0.98, 0.02] # sp=0\nsp.cpt(\"smallPox\")[1:] = [1, 0] # sp=1\n\nsp.cpt(\"death\")[0,0,:] = [1, 0] # r=0,s=0\nsp.cpt(\"death\")[1,0,:] = [0.8, 0.2] # r=0,s=1\nsp.cpt(\"death\")[0,1,:] = [0.99, 0.01] # r=1,s=0\n# This case is not possible\nsp.cpt(\"death\")[1,1,:] = [0.5, 0.5] # r=1,s=1\n\n# We add a uniform noise to the CPTs\nepsilon=0.000000001\nsmallPertubation(sp.cpt(\"reaction\"), epsilon)\nsmallPertubation(sp.cpt(\"smallPox\"), epsilon)\nsmallPertubation(sp.cpt(\"death\"), epsilon)\n\nIn :\ngnb.sideBySide(sp,sp.cpt(\"vaccination\"),\nsp.cpt(\"reaction\"),\nsp.cpt(\"smallPox\"),\nsp.cpt(\"death\"),\ncaptions=[\"the BN\",\"the marginal for $vaccination$\",\"the CPT for $reaction$\",\"the CPT for $smallPox$\",\"the CPT for $death$\"])\n\nvaccination\n0\n1\n0.01000.9900\nreaction\nvaccination\n0\n1\n0\n1.00000.0000\n1\n0.99000.0100\nsmallPox\nvaccination\n0\n1\n0\n0.98000.0200\n1\n1.00000.0000\ndeath\nsmallPox\nreaction\n0\n1\n0\n0\n1.00000.0000\n1\n0.99000.0100\n1\n0\n0.80000.2000\n1\n0.50000.5000\nthe BN\nthe marginal for $vaccination$\nthe CPT for $reaction$\nthe CPT for $smallPox$\nthe CPT for $death$\nIn :\n#gum.saveBN(sp,os.path.join(\"out\",\"smallPox.o3prm\"))\n\n\n### We have that :¶\n\n$$P(reaction = 1 \\mid vaccination = 1 ) < P(smallPox = 1 \\mid vaccination = 0 )$$$$P(death = 1 \\mid reaction = 1) < P(death = 1 \\mid smallPox = 1)$$\n\nWe know that the probability of developing a reaction to the vaccine is lower than the probability of contracting smallpox if one does not vaccinate. We also know that the probability of dying from smallpox is greater than the probability of dying from a reaction to the vaccine. (smallpox is more threatening to one's life than a reaction to the vaccine)\n\nIn :\ndef getAliveObservedProba(sp,evs):\nevs0=dict(evs)\nevs1=dict(evs)\nevs0[\"vaccination\"]=0\nevs1[\"vaccination\"]=1\n\ngum.getPosterior(sp,target=\"death\",evs=evs0),\ngum.getPosterior(sp,target=\"death\",evs=evs1)\n])\n\ngnb.sideBySide(getAliveObservedProba(sp,{}),\ncaptions=[\"$P(death = 1 \\mid vaccination )$<br/>$P(death = 1 \\mid vaacination = 0) > P(death = 1 \\mid vaacination = 1)$\"])\n\nvaccination\n0\n1\n0.00400.0001\n$P(death = 1 \\mid vaccination )$\n$P(death = 1 \\mid vaacination = 0) > P(death = 1 \\mid vaacination = 1)$\n\nBased on this information, we can say that vaccination is a good choice. However, more children die from vaccination than from smallpox itself according to the data:\n\n• Number of people dying from a reaction: $$P(vaccination = 1) \\times P(reaction = 1 \\mid vaccination = 1) \\times P(death = 1 \\mid reaction = 1) \\times 1000000 = 99$$\n• Number of people dying from smallpox: $$P(vaccination = 0) \\times P(smallPox = 1 \\mid vaccination = 0) \\times P(death = 1 \\mid smallPox = 1) \\times 1000000 = 40$$ In total if the vaccination rate is set to 99% we have 139 deaths.\n\n### When we began, the vaccination rate was 99 percent. We now ask the counterfactual question “What if we had set the vaccination rate to zero?”¶\n\nIn :\n# Counterfactual world where the vaccination rate is set to 0\nspCounterfactual = gum.BayesNet(sp)\nspCounterfactual.cpt(\"vaccination\")[:] = [1, 0]\n\nIn :\ngnb.sideBySide(gnb.getInference(sp),\ngnb.getInference(spCounterfactual),\ncaptions=[\"<b>factual world:</b> the vaccination rate is set to 0.99 <br> lower risk of contracting smallpox, higher risk of developing a reaction, lower probability of death\",\"<b>counterfactual world:</b> the vaccination rate is set to 0 <br> higher risk of contracting smallpox, lower risk of developing a reaction, higher probability of death\"])\n\n structs Inference in   0.58ms vaccination *{stroke-linecap:butt;stroke-linejoin:round;} reaction *{stroke-linecap:butt;stroke-linejoin:round;} vaccination->reaction smallPox *{stroke-linecap:butt;stroke-linejoin:round;} vaccination->smallPox death *{stroke-linecap:butt;stroke-linejoin:round;} reaction->death smallPox->death structs Inference in   0.40ms vaccination *{stroke-linecap:butt;stroke-linejoin:round;} reaction *{stroke-linecap:butt;stroke-linejoin:round;} vaccination->reaction smallPox *{stroke-linecap:butt;stroke-linejoin:round;} vaccination->smallPox death *{stroke-linecap:butt;stroke-linejoin:round;} reaction->death smallPox->death factual world: the vaccination rate is set to 0.99 lower risk of contracting smallpox, higher risk of developing a reaction, lower probability of death counterfactual world: the vaccination rate is set to 0 higher risk of contracting smallpox, lower risk of developing a reaction, higher probability of death\n\nThe probability of death in the counterfactual world is higher.\n\n• Number of deaths in this case: $$P(vaccination = 0) \\times P(smallPox = 1 \\mid vaccination = 0) \\times P(death = 1 \\mid smallPox = 1) \\times 1000000 = 4000$$\n\n## In total, if the vaccination rate is set to 0%, we have 4000 deaths, compared to 139 deaths if the vaccination rate is set to 99%. Vaccination is definitely a good choice !¶\n\nMore people died from a reaction than from smallpox itself because with a vaccination rate of 0.99% the probability of developing a reaction was much higher than the probability of contracting smallpox (almost null) (since 99% of the population got vaccinated $\\implies$ 99% of the population had a 0.001 probability of developing a reaction, while only 1% of the population had a 0.2 percent probability of contracting smallpox) that's why we saw that the effect of a reaction on death was greater than the effect of smallpox.(we recorded the effect of a reaction to the vaccine on a much greater population)\n\n### Causal impact of reaction/vaccination on death :¶\n\nCorresponds to the former results\n\nIn :\nspModele = csl.CausalModel(sp)\ncslnb.showCausalImpact(spModele,on=\"death\",doing=\"reaction\", values={\"reaction\":1})\n\n$$\\begin{equation}P( death \\mid \\hookrightarrow\\mkern-6.5mureaction) = \\sum_{vaccination}{P\\left(death\\mid reaction,vaccination\\right) \\cdot P\\left(vaccination\\right)}\\end{equation}$$\ndeath\n0\n1\n0.98990.0101\nCausal Model\nExplanation : backdoor ['vaccination'] found.\nImpact : $P( death \\mid \\hookrightarrow\\mkern-6.5mureaction=1)$\nIn :\nspModele = csl.CausalModel(sp)\ncslnb.showCausalImpact(spModele,on=\"death\",doing=\"smallPox\", values={\"smallPox\":1})\n\n$$\\begin{equation}P( death \\mid \\hookrightarrow\\mkern-6.5musmallPox) = \\sum_{vaccination}{P\\left(death\\mid smallPox,vaccination\\right) \\cdot P\\left(vaccination\\right)}\\end{equation}$$\ndeath\n0\n1\n0.79700.2030\nCausal Model\nExplanation : backdoor ['vaccination'] found.\nImpact : $P( death \\mid \\hookrightarrow\\mkern-6.5musmallPox=1)$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8009829,"math_prob":0.9907791,"size":4457,"snap":"2019-51-2020-05","text_gpt3_token_len":1329,"char_repetition_ratio":0.16236246,"word_repetition_ratio":0.05904059,"special_character_ratio":0.27013686,"punctuation_ratio":0.18835616,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9954842,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-13T11:00:04Z\",\"WARC-Record-ID\":\"<urn:uuid:22c5d176-cdc8-4ee1-812e-9f78ef3bd394>\",\"Content-Length\":\"463097\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:491bda3a-8db3-4e6c-910d-c6fa404f39a1>\",\"WARC-Concurrent-To\":\"<urn:uuid:d832acd0-e9fa-4166-8fa8-6ba410abeeb2>\",\"WARC-IP-Address\":\"132.227.201.33\",\"WARC-Target-URI\":\"http://www-desir.lip6.fr/~phw/aGrUM/BookOfWhy/notebooks/BoW-c1p050-smallPox.ipynb.html\",\"WARC-Payload-Digest\":\"sha1:3Y6NLTCX7MEMHUVQ37AIYC4D2VTURC6I\",\"WARC-Block-Digest\":\"sha1:MHJB6WI22ATRGUTBJYI27HFF7VNLLZAI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540553486.23_warc_CC-MAIN-20191213094833-20191213122833-00483.warc.gz\"}"}
https://www.vernier.com/experiment/psv-39_newtons-second-law/
[ "### Introduction\n\nNewton’s second law of motion explains the relationship among force, mass, and acceleration. In this activity, you will study the relationship between acceleration and mass, while keeping force constant. A car carrying different masses will be pulled across a table by a hanging weight—the constant force. Acceleration will be measured using a computer-interfaced Motion Detector. You will plot a graph of acceleration versus mass, and then use the graph as you make conclusions about the relationship between mass and acceleration.\n\n### Objectives\n\nIn this experiment, you will\n\n• Use a Motion Detector to determine acceleration.\n• Record data.\n• Graph data.\n• Make conclusions about the relationship between mass and acceleration." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90289223,"math_prob":0.9018065,"size":738,"snap":"2020-34-2020-40","text_gpt3_token_len":136,"char_repetition_ratio":0.16893733,"word_repetition_ratio":0.07476635,"special_character_ratio":0.17886178,"punctuation_ratio":0.12,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9935055,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-14T17:59:35Z\",\"WARC-Record-ID\":\"<urn:uuid:288b1589-6b75-480b-b863-37b905345762>\",\"Content-Length\":\"308908\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c76c1f7b-1137-40d9-a4e9-fe0e39b25084>\",\"WARC-Concurrent-To\":\"<urn:uuid:b040aac7-8df4-4416-96d9-d29ae4252336>\",\"WARC-IP-Address\":\"104.26.3.92\",\"WARC-Target-URI\":\"https://www.vernier.com/experiment/psv-39_newtons-second-law/\",\"WARC-Payload-Digest\":\"sha1:7LF7RDOT6KSW3B7H6WV5E6Q6VM7XURPU\",\"WARC-Block-Digest\":\"sha1:YSCZ4RBBDWLYK7UI2I2REVYV32TWC5O2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439739347.81_warc_CC-MAIN-20200814160701-20200814190701-00484.warc.gz\"}"}
http://ncl.ucar.edu/Document/Functions/Built-in/csstri.shtml
[ "", null, "NCL Home > Documentation > Functions > Interpolation, Ngmath routines\n\ncsstri\n\nCalculates a Delaunay triangulation of data randomly positioned on the surface of a sphere.\n\nPrototype\n\n```\tfunction csstri (\nrlat [*] : numeric,\nrlon [*] : numeric\n)\n\nreturn_val [*] : integer\n```\n\nArguments\n\nrlat\nrlon\n\nOne-dimensional arrays, of the same size, containing latitudes and longitudes, in degrees, of the input data points. The first three points must not lie on a common great circle.\n\nReturn value\n\nAn integer array, containing triangle vertex indices, dimensioned nt x 3, where nt is the number of triangles in the triangulation. Each index in the array references an original data point as it occurs in sequence in the input data set (numbering starts at 0). For example, if zo is the returned array and zo(it,0)=5, zo(it,1)=0, and zo(it,2)=2 for some index it, then (rlat(5),rlon(5)), (rlat(0),rlon(0)), and (rlat(2),rlon(2)) are the vertices of one of the nt triangles in the Delaunay triangulation.\n\nDescription\n\ncsstri is in the Cssgrid package - a software package that implements a tension spline interpolation algorithm to fit a function to input data randomly spaced on a unit sphere.\n\nThe general documentation for Cssgrid contains complete examples for entries in the package.\n\nIf missing values are detected in the input data, then those values are ignored when calculating the interpolating function.\n\ncssgrid, cssgrid_Wrap, csstri, csvoro, css2c, csc2s, cssetp, csgetp\n\nExamples\n\n```begin\n\n;\n; Create input arrays.\n;\nndata = 10\nrlat = new(ndata,float)\nrlon = new(ndata,float)\n\n;\n; Create random vertices with latitudes between -90. and 90. and\n; longitudes between -180. and 180.\n;\ndo i=0,ndata-1\nrlat(i) = -90. + 180.*rand()/32767.\nrlon(i) = -180. + 360.*rand()/32767.\nend do\n\n;\n; Obtain the triangle vertices.\n;\nvertices = csstri(rlat,rlon)\n\nend\n```" ]
[ null, "http://ncl.ucar.edu/Images/NCL_NCAR_NSF_banner.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6451993,"math_prob":0.9941829,"size":1765,"snap":"2019-26-2019-30","text_gpt3_token_len":475,"char_repetition_ratio":0.108461104,"word_repetition_ratio":0.0,"special_character_ratio":0.26855525,"punctuation_ratio":0.18696883,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9981519,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-20T23:22:43Z\",\"WARC-Record-ID\":\"<urn:uuid:fff0c58f-b71e-46ea-82ff-f5e2897cc77f>\",\"Content-Length\":\"16959\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bbc54f98-0a84-430c-8211-f932b710920e>\",\"WARC-Concurrent-To\":\"<urn:uuid:5c6d2cc4-39c6-45f8-a693-8f8fcf3a67e3>\",\"WARC-IP-Address\":\"128.117.225.48\",\"WARC-Target-URI\":\"http://ncl.ucar.edu/Document/Functions/Built-in/csstri.shtml\",\"WARC-Payload-Digest\":\"sha1:4UXRPJ3AZHPD6MIYXQJDAM6AL4COPUPV\",\"WARC-Block-Digest\":\"sha1:3SAHNFUTVXUUZ6BGD7EPZQIH7XJEAABF\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627999291.1_warc_CC-MAIN-20190620230326-20190621012326-00155.warc.gz\"}"}
https://gitlab.au.dk/au204573/GITMAL/-/commit/4ced2e8a4280e249120c1739757cf2e9189d04fe
[ "### updated_notebooks_for_l09\n\nparent f14d2eb2\n %% Cell type:markdown id: tags: # SWMAL Exercise ## Hyperparameters and Gridsearch When instantiating a Scikit-learn model in python most or all constructor parameters have _default_ values. These values are not part of the internal model and are hence called ___hyperparameters___---in contrast to _normal_ model parameters, for example the neuron weights, $\\mathbf w$, for an MLP model. ### Manual Tuning Hyperparameters Below is an example of the python constructor for the support-vector classifier sklearn.svm.SVC, with say the kernel hyperparameter having the default value 'rbf'. If you should choose, what would you set it to other than 'rbf'? python class sklearn.svm.SVC( C=1.0, kernel=’rbf’, degree=3, gamma=’auto_deprecated’, coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, class_weight=None, verbose=False, max_iter=-1, decision_function_shape=’ovr’, random_state=None ) The default values might be a sensible general starting point, but for your data, you might want to optimize the hyperparameters to yield a better result. To be able to set kernel to a sensible value you need to go into the documentation for the SVC and understand what the kernel parameter represents, and what values it can be set to, and you need to understand the consequences of setting kernel to something different than the default...and the story repeats for every other hyperparameter! ### Brute Force Search An alternative to this structured, but time-consuming approach, is just to __brute-force__ a search of interesting hyperparameters, and choose the 'best' parameters according to a fit-predict and some score, say 'f1'.\nConceptual graphical view of grid search for two distinct hyperparameters.\nNotice that you would normally search hyperparameters like alpha with an exponential range, say [0.01, 0.1, 1, 10] or similar.\nNow, you just pick out some hyperparameters, that you figure are important, set them to a suitable range, say python 'kernel':('linear', 'rbf'), 'C':[1, 10] and fire up a full (grid) search on this hyperparameter set, that will try out all your specified combination of kernel and C for the model, and then prints the hyperparameter set with the highest score... The demo code below sets up some of our well known 'hello-world' data and then run a _grid search_ on a particular model, here a _support-vector classifier_ (SVC) Other models and datasets ('mnist', 'iris', 'moon') can also be examined. ### Qa Explain GridSearchCV There are two code cells below: 1) function setup, 2) the actual grid-search. Review the code cells and write a __short__ summary. Mainly focus on __cell 2__, but dig into cell 1 if you find it interesting (notice the use of local-function, a nifty feature in python). In detail, examine the lines: python grid_tuned = GridSearchCV(model, tuning_parameters, .. grid_tuned.fit(X_train, y_train) .. FullReport(grid_tuned , X_test, y_test, time_gridsearch) and write a short description of how the GridSeachCV works: explain how the search parameter set is created and the overall search mechanism is functioning (without going into too much detail). What role does the parameter scoring='f1_micro' play in the GridSearchCV, and what does n_jobs=-1 mean? %% Cell type:code id: tags: python # TODO: Qa, code review..cell 1) function setup from time import time import numpy as np from sklearn import svm from sklearn.linear_model import SGDClassifier from sklearn.model_selection import GridSearchCV, RandomizedSearchCV, train_test_split from sklearn.metrics import classification_report, f1_score from sklearn import datasets from libitmal import dataloaders as itmaldataloaders # Needed for load of iris, moon and mnist currmode=\"N/A\" # GLOBAL var! def SearchReport(model): def GetBestModelCTOR(model, best_params): def GetParams(best_params): ret_str=\"\" for key in sorted(best_params): value = best_params[key] temp_str = \"'\" if str(type(value))==\"\" else \"\" if len(ret_str)>0: ret_str += ',' ret_str += f'{key}={temp_str}{value}{temp_str}' return ret_str try: param_str = GetParams(best_params) return type(model).__name__ + '(' + param_str + ')' except: return \"N/A(1)\" print(\"\\nBest model set found on train set:\") print() print(f\"\\tbest parameters={model.best_params_}\") print(f\"\\tbest '{model.scoring}' score={model.best_score_}\") print(f\"\\tbest index={model.best_index_}\") print() print(f\"Best estimator CTOR:\") print(f\"\\t{model.best_estimator_}\") print() try: print(f\"Grid scores ('{model.scoring}') on development set:\") means = model.cv_results_['mean_test_score'] stds = model.cv_results_['std_test_score'] i=0 for mean, std, params in zip(means, stds, model.cv_results_['params']): print(\"\\t[%2d]: %0.3f (+/-%0.03f) for %r\" % (i, mean, std * 2, params)) i += 1 except: print(\"WARNING: the random search do not provide means/stds\") global currmode assert \"f1_micro\"==str(model.scoring), f\"come on, we need to fix the scoring to be able to compare model-fits! Your scoreing={str(model.scoring)}...remember to add scoring='f1_micro' to the search\" return f\"best: dat={currmode}, score={model.best_score_:0.5f}, model={GetBestModelCTOR(model.estimator,model.best_params_)}\", model.best_estimator_ def ClassificationReport(model, X_test, y_test, target_names=None): assert X_test.shape==y_test.shape print(\"\\nDetailed classification report:\") print(\"\\tThe model is trained on the full development set.\") print(\"\\tThe scores are computed on the full evaluation set.\") print() y_true, y_pred = y_test, model.predict(X_test) print(classification_report(y_true, y_pred, target_names)) print() def FullReport(model, X_test, y_test, t): print(f\"SEARCH TIME: {t:0.2f} sec\") beststr, bestmodel = SearchReport(model) ClassificationReport(model, X_test, y_test) print(f\"CTOR for best model: {bestmodel}\\n\") print(f\"{beststr}\\n\") return beststr, bestmodel def LoadAndSetupData(mode, test_size=0.3): assert test_size>=0.0 and test_size<=1.0 def ShapeToString(Z): n = Z.ndim s = \"(\" for i in range(n): s += f\"{Z.shape[i]:5d}\" if i+1!=n: s += \";\" return s+\")\" global currmode currmode=mode print(f\"DATA: {currmode}..\") if mode=='moon': X, y = itmaldataloaders.MOON_GetDataSet(n_samples=5000, noise=0.2) itmaldataloaders.MOON_Plot(X, y) elif mode=='mnist': X, y = itmaldataloaders.MNIST_GetDataSet(load_mode=0) if X.ndim==3: X=np.reshape(X, (X.shape, -1)) elif mode=='iris': X, y = itmaldataloaders.IRIS_GetDataSet() else: raise ValueError(f\"could not load data for that particular mode='{mode}', only 'moon'/'mnist'/'iris' supported\") print(f' org. data: X.shape ={ShapeToString(X)}, y.shape ={ShapeToString(y)}') assert X.ndim==2 assert X.shape==y.shape assert y.ndim==1 or (y.ndim==2 and y.shape==0) X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=test_size, random_state=0, shuffle=True ) print(f' train data: X_train.shape={ShapeToString(X_train)}, y_train.shape={ShapeToString(y_train)}') print(f' test data: X_test.shape ={ShapeToString(X_test)}, y_test.shape ={ShapeToString(y_test)}') print() return X_train, X_test, y_train, y_test print('OK(function setup, hope MNIST loads works, seem best if you got Keras or Tensorflow installed!)') %% Cell type:code id: tags: python # TODO: Qa, code review..cell 2) the actual grid-search # Setup data X_train, X_test, y_train, y_test = LoadAndSetupData( 'iris') # 'iris', 'moon', or 'mnist' # Setup search parameters model = svm.SVC( gamma=0.001 ) # NOTE: gamma=\"scale\" does not work in older Scikit-learn frameworks, # FIX: replace with model = svm.SVC(gamma=0.001) tuning_parameters = { 'kernel': ('linear', 'rbf'), 'C': [0.1, 1, 10] } CV = 5 VERBOSE = 0 # Run GridSearchCV for the model start = time() grid_tuned = GridSearchCV(model, tuning_parameters, cv=CV, scoring='f1_micro', verbose=VERBOSE, n_jobs=-1) grid_tuned.fit(X_train, y_train) t = time() - start # Report result b0, m0 = FullReport(grid_tuned, X_test, y_test, t) print('OK(grid-search)') %% Cell type:markdown id: tags: ### Qb Hyperparameter Grid Search using an SDG classifier Now, replace the svm.SVC model with an SGDClassifier and a suitable set of the hyperparameters for that model. You need at least four or five different hyperparameters from the SGDClassifier in the search-space before it begins to take considerable compute time doing the full grid search. So, repeat the search with the SGDClassifier, and be sure to add enough hyperparameters to the grid-search, such that the search takes a considerable time to run, that is a couple of minutes or up to some hours.. %% Cell type:code id: tags: python # TODO: grid search assert False, \"TODO: make a grid search on the SDG classifier..\" %% Cell type:markdown id: tags: ### Qc Hyperparameter Random Search using an SDG classifier Now, add code to run a RandomizedSearchCV instead.\nConceptual graphical view of randomized search for two distinct hyperparameters.\nUse these default parameters for the random search, similar to the default parameters for the grid search python random_tuned = RandomizedSearchCV( model, tuning_parameters, n_iter=20, random_state=42, cv=CV, scoring='f1_micro', verbose=VERBOSE, n_jobs=-1 ) but with the two new parameters, n_iter and random_state added. Since the search-type is now random, the random_state gives sense, but essential to random search is the new n_tier parameter. So: investigate the n_iter parameter...in code and write a conceptual explanation in text. Comparison of time (seconds) to complete GridSearch versus RandomizedSearchCV, does not necessarily give any sense, if your grid search completes in a few seconds (as for the iris tiny-data). You need a search that runs for minutes, hours, or days. But you could compare the best-tuned parameter set and best scoring for the two methods. Is the random search best model close to the grid search? %% Cell type:code id: tags: python # TODO: assert False, \"implement a random search for the SGD classifier..\" %% Cell type:markdown id: tags: ## Qd MNIST Search Quest II Finally, a search-quest competition: __who can find the best model+hyperparameters for the MNIST dataset?__ You change to the MNIST data by calling LoadAndSetupData('mnist'), and this is a completely other ball-game that the iris _tiny-data_: it's much larger (but still far from _big-data_)! * You might opt for the exhaustive grid search, or use the faster but-less optimal random search...your choice. * You are free to pick any classifier in Scikit-learn, even algorithms we have not discussed yet---__except Neural Networks and KNeighborsClassifier!__. * Keep the score function at f1_micro, otherwise, we will be comparing 'æbler og pærer'. * And, you may also want to scale your input data for some models to perform better. * __REMEMBER__, DO NOT USE any Neural Network models. This also means not to use any Keras or Tensorflow models...since they outperform most other models, and there are also too many examples on the internet to cut-and-paste from! Check your result by printing the first _return_ value from FullReport() python b1, m1 = FullReport(random_tuned , X_test, y_test, time_randomsearch) print(b1) that will display a result like best: dat=mnist, score=0.90780, model=SGDClassifier(alpha=1.0,eta0=0.0001,learning_rate='invscaling') and paste your currently best model into the message box, for ITMAL group 09 like Grp09: best: dat=mnist, score=0.90780, model=SGDClassifier(alpha=1.0,eta0=0.0001,learning_rate='invscaling') Grp09: CTOR for best model: SGDClassifier(alpha=1.0, average=False, class_weight=None, early_stopping=False, epsilon=0.1, eta0=0.0001, fit_intercept=True, l1_ratio=0.15, learning_rate='invscaling', loss='hinge', max_iter=1000, n_iter_no_change=5, n_jobs=None, penalty='l2', power_t=0.5, random_state=None, shuffle=True, tol=0.001, validation_fraction=0.1, verbose=0, warm_start=False) on Brightspace: \"L10: Regularisering, optimering og søgning\" | \"Qd MNIST Search Quest II\" > https://brightspace.au.dk/d2l/le/lessons/27524/topics/674336 > https://brightspace.au.dk/d2l/le/lessons/53939/topics/791969 and, check if your score (for MNIST) is better than the currently best score. Republish if you get a better score than your own previously best. Remember to provide an ITMAL group name manually, so we can identify a winner: the 1. st price is cake! For the journal hand-in, report your progress in scoring choosing different models, hyperparameters to search and how you might need to preprocess your data...and note, that the journal will not be accepted unless it contains information about Your results published on the Brightspace 'Search Quest II' page! %% Cell type:code id: tags: python # TODO:(in code and text..) assert False, \"participate in the Search Quest---remember to publish your result(s) on Brightspace.\" %% Cell type:markdown id: tags: REVISIONS|| ---------|| 2018-03-01| CEF, initial. 2018-03-05| CEF, updated. 2018-03-06| CEF, updated and spell checked. 2018-03-06| CEF, major overhaul of functions. 2018-03-06| CEF, fixed problem with MNIST load and Keras. 2018-03-07| CEF, modified report functions and changed Qc+d. 2018-03-11| CEF, updated Qd. 2018-03-12| CEF, added grid and random search figs and added bullets to Qd. 2018-03-13| CEF, fixed SVC and gamma issue, and changed dataload to be in fetchmode (non-keras). 2019-10-15| CEF, updated for ITMAL E19 2019-10-19| CEF, minor text update. 2019-10-23| CEF, changed demo model i Qd) from MLPClassifier to SVC. 2020-03-14| CEF, updated to ITMAL F20. 2020-10-20| CEF, updated to ITMAL E20. 2020-10-27| CEF, type fixes and minor update. 2020-10-28| CEF, added extra journal hand-in specs for Search Quest II, Qd. 2020-10-30| CEF, added non-use of KNeighborsClassifier to Search Quest II, Qd. 2020-11-19| CEF, changed load_mode=2 (Keras) to load_mode=0 (auto) for MNIST loader. 2021-03-17| CEF, updated to ITMAL F21. 2021-10-31| CEF, updated to ITMAL E21. 2021-11-05| CEF, removed iid=True paramter from GridSearchCV(), not present in current version of Scikit-learn (0.24.1). 2022-03-31| CEF, updated to SWMAL F22. ... ...\n %% Cell type:markdown id: tags: # SWMAL Exercise ## Regulizers ### Resume of The Linear Regressor For our data set $\\mathbf{X}$ and target $\\mathbf{y}$ $$\\newcommand\\rem{} \\rem{SWMAL: CEF def and LaTeX commands, remember: no newlines in defs} \\newcommand\\eq{#1 &=& #2\\\\} \\newcommand\\ar{\\begin{array}{#1}#2\\end{array}} \\newcommand\\ac{\\left[\\ar{#1}{#2}\\right]} \\newcommand\\st{_{\\mbox{\\scriptsize #1}}} \\newcommand\\norm{{\\cal L}_{#1}} \\newcommand\\obs{#1_{\\mbox{\\scriptsize obs}}^{\\left(#2\\right)}} \\newcommand\\diff{\\mbox{d}#1} \\newcommand\\pown{^{(#1)}} \\def\\pownn{\\pown{n}} \\def\\powni{\\pown{i}} \\def\\powtest{\\pown{\\mbox{\\scriptsize test}}} \\def\\powtrain{\\pown{\\mbox{\\scriptsize train}}} \\def\\pred{_{\\scriptsize\\mbox{pred}}} \\def\\bM{\\mathbf{M}} \\def\\bX{\\mathbf{X}} \\def\\bZ{\\mathbf{Z}} \\def\\bw{\\mathbf{m}} \\def\\bx{\\mathbf{x}} \\def\\by{\\mathbf{y}} \\def\\bz{\\mathbf{z}} \\def\\bw{\\mathbf{w}} \\def\\btheta{{\\boldsymbol\\theta}} \\def\\bSigma{{\\boldsymbol\\Sigma}} \\def\\half{\\frac{1}{2}} \\newcommand\\pfrac{\\frac{\\partial~#1}{\\partial~#2}} \\newcommand\\dfrac{\\frac{\\mbox{d}~#1}{\\mbox{d}#2}} \\bX = \\ac{cccc}{ x_1\\pown{1} & x_2\\pown{1} & \\cdots & x_d\\pown{1} \\\\ x_1\\pown{2} & x_2\\pown{2} & \\cdots & x_d\\pown{2}\\\\ \\vdots & & & \\vdots \\\\ x_1\\pownn & x_2\\pownn & \\cdots & x_d\\pownn\\\\ } , ~~~~~~~~ \\by = \\ac{c}{ y\\pown{1} \\\\ y\\pown{2} \\\\ \\vdots \\\\ y\\pown{n} \\\\ } %, ~~~~~~~~ %\\bx\\powni = % \\ac{c}{ % 1\\\\ % x_1\\powni \\\\ % x_2\\powni \\\\ % \\vdots \\\\ % x_d\\powni % }$$ a __linear regressor__ model, with the $d$-dimensional (expressed here without the bias term, $w_0$) weight column vector, $$\\bw = \\ac{c}{ w_1 \\\\ w_2 \\\\ \\vdots \\\\ w_d \\\\ }$$ was previously found to be of the form $$y\\powni\\pred = \\bw^\\top \\bx\\powni$$ for a single data instance, or for the full data set in a compact matrix notation $$\\by\\pred = \\bX \\bw$$ (and rememering to add the bias term $w_0$ on $\\bw$ and correspondingly adding fixed '1'-column in the $\\bX$ matrix, later.) An accociated cost function could be the MSE $$\\ar{rl}{ \\mbox{MSE}(\\bX,\\by;\\bw) &= \\frac{1}{n} \\sum_{i=1}^{n} L\\powni \\\\ &= \\frac{1}{n} \\sum_{i=1}^{n} \\left( \\bw^\\top\\bx\\powni - y\\powni\\pred \\right)^2\\\\ &\\propto ||\\bX \\bw - \\by\\pred||_2^2 }$$ here using the squared Euclidean norm, $\\norm{2}^2$, via the $||\\cdot||_2^2$ expressions. We used the MSE to express the total cost function, $J$, as $$\\mbox{MSE} \\propto J = ||\\bX \\bw - \\by\\pred||_2^2$$ give or take a few constants, like $1/2$ or $1/n$. ### Adding Regularization to the Linear Regressor Now the weights, $\\bw$ (previously also known as $\\btheta$), in this model are free to take on any value they like, and this can lead to both numerical problems and overfitting, if the algorithm decides to drive the weights to insane, humongous values, say $10^{200}$ or similar. Also for some models, neural networks in particular, having weights outside the range -1 to 1 (or 0 to 1) may cause complete saturation of some of the internal non-linear components (the activation function). Now, enters the ___regularization___ of the model: keep the weights at a sane level while doing the numerical gradient descent (GD) in the search space. This can quite simply be done by adding a ___penalty___ part, $\\Omega$, to the $J$ function as $$\\ar{rl}{ \\tilde{J} &= J + \\alpha \\Omega(\\bw)\\\\ &= \\frac{1}{n} ||\\bX \\bw - \\by||_2^2 + \\alpha ||\\bw||^2_2 }$$ So, the algorithm now has to find an optimal value (minimum of $J$) for both the usual MSE part and for the added penalty scaled with the $\\alpha$ constant. ### Regularization and Optimization for Neural Networks (NNs) The regularization method mentioned here is strictly for a linear regression model, but such a model constitutes a major part of the neurons (or perceptrons), used in neural networks. ### Qa The Penalty Factor Now, lets examine what $||\\bw||^2_2$ effectively mean? It is composed of our well-known $\\norm{2}^2$ norm and can also be expressed as simple as $$||\\bw||^2_2 = \\bw^\\top\\bw$$ Construct a penaltiy function that implements $\\bw^\\top\\bw$, re-using any functions from numpy (implementation could be a tiny _one-liner_). Take $w_0$ into account, this weight factor should NOT be included in the norm. Also checkup on numpys dot implementation, if you have not done so already: it is a typical pythonic _combo_ function, doing both dot op's (inner product) and matrix multiplication (outer product) dependent on the shape of the input parameters. Then run it on the three test vectors below, and explain when the penalty factor is low and when it is high. %% Cell type:code id: tags: python # Qa..first define some numeric helper functions for the test-vectors.. import numpy as np import collections def isFloat(x): # is there a python single/double float?? return isinstance(x, float) or isinstance(x, np.float32) or isinstance(x, np.float64) # NOT defined on Windows?: or isinstance(x, np.float128) # Checks that a 'float' is 'sane' (original from libitmal) def CheckFloat(x, checkrange=False, xmin=1E-200, xmax=1E200, verbose=0): if verbose>1: print(f\"CheckFloat({x}, type={type(x)}\") if isinstance(x, collections.Iterable): for i in x: CheckFloat(i, checkrange=checkrange, xmin=xmin, xmax=xmax, verbose=verbose) else: #if (isinstance(x,int)): # print(\"you gave me an integer, that was ignored\") # return assert isFloat(x), f\"x={x} is not a float/float64/numpy.float32/64/128, but a {type(x)}\" assert np.isnan(x)==False , \"x is NAN\" assert np.isinf(x)==False , \"x is inf\" assert np.isinf(-x)==False, \"x is -inf\" # NOTE: missing test for denormalized float if checkrange: z=fabs(x) assert z>=xmin, f\"abs(x)={z} is smaller that expected min value={xmin}\" assert z<=xmax, f\"abs(x)={z} is larger that expected max value={xmax}\" if verbose>0: print(f\"CheckFloat({x}, type={x} => OK\") # Checks that two 'floats' are 'close' (original from libitmal) def CheckInRange(x, expected, eps=1E-9, autoconverttofloat=True, verbose=0): assert eps>=0, \"eps is less than zero\" if autoconverttofloat and (not isFloat(x) or not isFloat(expected) or not isFloat(eps)): if verbose>1: print(f\"notice: autoconverting x={x} to float..\") return CheckInRange(1.0*x, 1.0*expected, 1.0*eps, False, verbose) CheckFloat(x) CheckFloat(expected) CheckFloat(eps) x0 = expected - eps x1 = expected + eps ok = x>=x0 and x<=x1 absdiff = np.fabs(x-expected) if verbose > 0: print(f\"CheckInRange(x={x}, expected={expected}, eps={eps}: x in [{x0}; {x1}] => {ok}\") assert ok, f\"x={x} is not within the range [{x0}; {x1}] for eps={eps}, got eps={absdiff}\" print(\"OK(setup..)\") %% Cell type:code id: tags: python # TODO: code def Omega(w): assert False, \"TODO: implement Omega() here and remove this assert..\" # weight vector format: [w_0 w_1 .. w_d], ie. elem. 0 is the 'bias' w_a = np.array([1., 2., -3.]) w_b = np.array([1E10, -3E10]) w_c = np.array([0.1, 0.2, -0.3, 0]) p_a = Omega(w_a) p_b = Omega(w_b) p_c = Omega(w_c) print(f\"P(w0)={p_a}\") print(f\"P(w1)={p_b}\") print(f\"P(w2)={p_c}\") # TEST VECTORS e0 = 2*2+(-3)*(-3) e1 = 9e+20 e2 = 0.13 CheckInRange(p_a, e0) CheckInRange(p_b, e1) CheckInRange(p_c, e2) print(\"OK\") %% Cell type:markdown id: tags: ## Adding Regularization for Linear Regression Models Adding the penalty $\\alpha ||\\bw||^2_2$ actually corresponds to the Scikit-learn model sklearn.linear_model.Ridge and there are, as usual, a bewildering array of regulized models to choose from in Scikit-learn with exotic names like Lasso and Lars > https://scikit-learn.org/stable/modules/classes.html#module-sklearn.linear_model Let us just examine Ridge, Lasso and ElasticNet here. ### Qb Explain the Ridge Plot First take a peek into the plots (and code) below, that fits the Ridge, Lasso and ElasticNet to a polynomial model. The plots show three fits with different $\\alpha$ values (0, 10$^{-5}$, and 1). First, explain what the different $\\alpha$ does to the actual fitting for the Ridge model in the plot. %% Cell type:code id: tags: python # TODO: Qb, just run the code.. %matplotlib inline from sklearn.linear_model import LinearRegression, SGDRegressor, Ridge, ElasticNet, Lasso from sklearn.pipeline import Pipeline from sklearn.preprocessing import PolynomialFeatures from sklearn.preprocessing import StandardScaler import matplotlib.pyplot as plt def FitAndPlotModel(name, model_class, X, X_new, y, **model_kargs): plt.figure(figsize=(16,8)) alphas=(0, 10**-5, 1) random_state=42 for alpha, style in zip(alphas, (\"b-\", \"g--\", \"r:\")): #print(model_kargs) model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() model_pipe = Pipeline([ (\"poly_features\", PolynomialFeatures(degree=12, include_bias=False)), (\"std_scaler\", StandardScaler()), (\"regul_reg\", model), ]) model_pipe.fit(X, y) y_new_regul = model_pipe.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r\"$\\alpha = {}$\".format(alpha)) plt.plot(X, y, \"b.\", linewidth=3) plt.legend(loc=\"upper left\", fontsize=15) plt.xlabel(\"$x_1$\", fontsize=18) plt.title(name) plt.axis([0, 3, 0, 4]) def GenerateData(): np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) return X, X_new, y X, X_new, y = GenerateData() FitAndPlotModel('ridge', Ridge, X, X_new, y) FitAndPlotModel('lasso', Lasso, X, X_new, y) FitAndPlotModel('elasticnet', ElasticNet, X, X_new, y, l1_ratio=0.1) print(\"OK(plot)\") %% Cell type:markdown id: tags: ### Qc Explain the Ridge, Lasso and ElasticNet Regulized Methods Then explain the different regularization methods used for the Ridge, Lasso and ElasticNet models, by looking at the math formulas for the methods in the Scikit-learn documentation and/or using [HOML]. %% Cell type:code id: tags: python # TODO:(in text..) assert False, \"Explain the math of Ridge, Lasso and ElasticNet..\" %% Cell type:markdown id: tags: ### Qd Regularization and Overfitting Finally, comment on how regularization may be used to reduce a potential tendency to overfit the data Describe the situation with the ___tug-of-war___ between the MSE ($J$) and regulizer ($\\Omega$) terms in $\\tilde{J}$ $$\\tilde{J} = J + \\alpha \\Omega(\\bw)\\\\$$ and the potential problem of $\\bw^*$ being far, far away from the origin, and say for a fixed $\\alpha=1$ in regulizer term (normally for real data $\\alpha \\ll 1$). OPTIONAL part: Would data preprocessing in the form of scaling, standardization or normalization be of any help to that particular situation? If so, describe. %% Cell type:code id: tags: python # TODO: (in text..) Assert False, \"Explain the tug-of-war..\" %% Cell type:markdown id: tags: REVISIONS|| ---------|| 2018-03-01| CEF, initial. 2018-03-06| CEF, updated. 2018-03-07| CEF, split Qb into Qb+c+d and added NN comment. 2018-03-11| CEF, updated Qa and $w_0$ issues. 2018-03-11| CEF, updated Qd with plot and Q. 2018-03-11| CEF, clarified $w_0$ issue and update $\\tilde{J}$'s. 2019-10-15| CEF, updated for ITMAL E19. 2019-10-19| CEF, updated text, added float-check functions. 2020-03-23| CEF, updated to ITMAL F20. 2020-10-20| CEF, updated to ITMAL E20. 2020-10-27| CEF, minor updates. 2020-10-28| CEF, made preprocessing optional part of Qq (tug-of-war). 2020-03-17| CEF, updated to ITMAL F21. 2021-10-31| CEF, updated to ITMAL E21. 2022-03-31| CEF, updated to SWMAL F22. ... ...\nSupports Markdown\n0% or .\nYou are about to add 0 people to the discussion. Proceed with caution.\nFinish editing this message first!" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.58406085,"math_prob":0.9380351,"size":30647,"snap":"2022-27-2022-33","text_gpt3_token_len":9592,"char_repetition_ratio":0.11013935,"word_repetition_ratio":0.1282509,"special_character_ratio":0.33350736,"punctuation_ratio":0.19422042,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9948396,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-17T20:37:23Z\",\"WARC-Record-ID\":\"<urn:uuid:788b3b09-a11e-4790-babd-cb449e486e05>\",\"Content-Length\":\"531108\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ef37c533-0ac7-48a3-bf1f-65d89c0b04e8>\",\"WARC-Concurrent-To\":\"<urn:uuid:b08771b3-80be-4b16-b0cd-729ef711974f>\",\"WARC-IP-Address\":\"185.45.20.19\",\"WARC-Target-URI\":\"https://gitlab.au.dk/au204573/GITMAL/-/commit/4ced2e8a4280e249120c1739757cf2e9189d04fe\",\"WARC-Payload-Digest\":\"sha1:IBCJFIVDFR744IWREUW64R4ME4WM7IJ3\",\"WARC-Block-Digest\":\"sha1:PQJS3XYSFR7C7S4FXX44AFYKGR67E5U2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882573104.24_warc_CC-MAIN-20220817183340-20220817213340-00557.warc.gz\"}"}
https://socratic.org/questions/how-do-you-simplify-3-x-y-3xy
[ "# How do you simplify (3(x²y³)²)/((3xy³)³)?\n\nOct 10, 2017\n\n$\\frac{x}{9 {y}^{3}}$\n\n#### Explanation:\n\nExpand using definition of exponents:\n\n$\\frac{3 \\left({x}^{2} {y}^{3}\\right) \\left({x}^{2} {y}^{3}\\right)}{\\left(3 x {y}^{3}\\right) \\left(3 x {y}^{3}\\right) \\left(3 x {y}^{3}\\right)}$\n\nRearrange the multiplication using the commutative property:\n\n$\\frac{3 {x}^{2} {x}^{2} {y}^{3} {y}^{3}}{3 \\cdot 3 \\cdot 3 x \\cdot x \\cdot x {y}^{3} {y}^{3} {y}^{3}}$\n\nUse product property of exponents and multiply the constants:\n\n$\\frac{3 {x}^{4} {y}^{6}}{27 {x}^{3} {y}^{9}}$\n\nUse quotient property of exponents and divide the constants:\n\n$\\frac{x}{9 {y}^{3}}$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.57711416,"math_prob":0.9999869,"size":427,"snap":"2019-51-2020-05","text_gpt3_token_len":100,"char_repetition_ratio":0.13238771,"word_repetition_ratio":0.0,"special_character_ratio":0.22248244,"punctuation_ratio":0.097222224,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99984026,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-13T02:53:58Z\",\"WARC-Record-ID\":\"<urn:uuid:36413df6-97b8-4175-8927-a55255b2a6f9>\",\"Content-Length\":\"33415\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2b004c4c-6b72-4b6e-a6a2-0af017bad259>\",\"WARC-Concurrent-To\":\"<urn:uuid:5b95d2d8-e2e6-4708-a9bd-7621d87519b2>\",\"WARC-IP-Address\":\"54.221.217.175\",\"WARC-Target-URI\":\"https://socratic.org/questions/how-do-you-simplify-3-x-y-3xy\",\"WARC-Payload-Digest\":\"sha1:VWF234XVAIW5DSDNRH6YSD6RJXUNSHWI\",\"WARC-Block-Digest\":\"sha1:76OR623L6FRWJJYD4CLGC3XLDKJXND3G\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540548537.21_warc_CC-MAIN-20191213020114-20191213044114-00077.warc.gz\"}"}
http://www.gnet.org/up-in-arms-about-what-does-orthogonal-mean-in-math/
[ "## The Importance of What Does Orthogonal Mean in Math\n\nAdditional the IBM design is more challenging to learn due to the restrictions and the extra instruction. Fewer exceptions mean a greater level of regularity in the plan, making the language a lot easier to learn, read, and understand. It’s been demonstrated, for instance, that the system isn’t a system of almost-everywhere convergence.\n\n## Facts, Fiction and What Does Orthogonal Mean in Math\n\nThus, it may be helpful in controlling body weight. best paper writing service Since it isn’t possible to fix the aforementioned system, we use the least squares method to obtain the closest solution. This table summarizes the data which you’ve collected.\n\nWord problems must be translated into equations. Estimates are helpful if you want to simplify math difficulties. https://rossier.usc.edu/files/2011/10/Sample-APA-Paper.pdf The concept doesn’t come up frequently, but the Formula is quite easy and obvious, so you need to easily have the ability to remember it for later.\n\nThe consequent F tests is going to be the exact same as in classical ANCOVA. The range is figured by subtracting the least number from the best number. A mean is normally known as an average.\n\n## What Is So Fascinating About What Does Orthogonal Mean in Math?\n\nArea is a measure of just how much space there’s on a level surface. But so long as the middle of the book stays in the very same location, in the conclusion of all of the moving around, you’ll have done nothing to the book but an extremely orderly rotation about some axis. To create this square a cube, we’ll simply draw a line from every corner of the square to the vanishing point utilizing a ruler. You don’t need to incorporate all six sides if a number of the sides don’t have anything interesting to show, like the bottom of our home. Moving to the correct side, at this point you see just the most suitable side.\n\n## The True Meaning of What Does Orthogonal Mean in Math\n\nN is the variety of terms in the populace. There’s no easy mathematical formula to figure the median. This example illustrates the bias that could result from consistently rounding midpoint values within a direction. When calculating the probability of a succession of events, it’s important to understand if one result impacts the probability of another one. Learn the way to use estimate values within this lesson.\n\nThe disadvantage of median is that it’s challenging to handle theoretically. Sum of squares is a statistical technique employed in regression analysis to figure out the dispersion of information points. So in this instance, the median is 8. Then discover the median of each one of these groups.\n\n## Up in Arms About What Does Orthogonal Mean in Math?\n\nOrthogonal persistence (sometimes also referred to as transparent persistence) is the caliber of a programming system which makes it possible for a programmer to take care of data similarly without respect to the duration of time the data is kept in storage. Employing orthogonal array testing, we can make the most of the test coverage when minimizing the variety of test cases to think about. For this specific model there are 3 canonical dimensions of which only the initial two are statistically important.\n\n## The Importance of What Does Orthogonal Mean in Math\n\nLet’s look at this previous condition geometrically. Rounding down when it has to do with money may signify you won’t have enough. The top of the house can be linked to the top of the front view of the house.\n\nPrerequisite will change from topic to topic. Average is a term that’s used, mis-used and frequently overused. Keeping a wholesome weight may lessen the opportunity of chronic diseases linked to overweight and obesity.\n\nTry to remember, these solvers are excellent for checking your work, experimenting with unique equations, or reminding yourself the best way to work a specific issue. The typical error of the mean now indicates the change in mean with distinctive experiments conducted each moment. Maybe you only need a fast answer on the job and don’t wish to solve the issue by hand.\n\n## A Secret Weapon for What Does Orthogonal Mean in Math\n\nThe practice of creating literacy skills is fairly recognized. All the above are jobs that demand a specific skill when it has to do with mathematics. A lot of the homework and several of the exam and quiz problems will ask you to compose precise proofs, building on your proof-writing knowledge in Math 300. Each category comprises 6 worksheets. It was left up to the student to determine which tools may be handy. Since you can tell, mathematics isn’t a subject to be ignored when it has to do with computer science and programming, but nevertheless, it shouldn’t define your career.\n\nUnderstanding orthogonal and transversal lines is essential to every perspective drawing you will make later on. This text might not be in its final form and could be updated or revised later on. It is, in addition, the ability to comprehend the language of math (as an example sum usually means a response to addition, difference usually means the response to a subtraction question).\n\n## What You Should Do to Find Out About What Does Orthogonal Mean in Math Before You’re Left Behind\n\nThe term vertex is most frequently utilised to denote the corners of a polygon. The difference is very important because the connection between two vectors doesn’t need to be linear. Since you can see, in order for us to project a vector onto a subspace, we have to be in a position to produce an orthogonal foundation for this subspace.\n\nPurplemath Sometimes you must discover the point that’s exactly midway between two other points. Because the connection between all pairs of groups is the very same, there is just a single set of coefficients (only 1 model). Therefore, there’s indeed some distance between both lines." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92939365,"math_prob":0.8335344,"size":5823,"snap":"2020-24-2020-29","text_gpt3_token_len":1162,"char_repetition_ratio":0.104313456,"word_repetition_ratio":0.032719836,"special_character_ratio":0.19337112,"punctuation_ratio":0.08288288,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9781249,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-10T13:12:29Z\",\"WARC-Record-ID\":\"<urn:uuid:08ecaeec-1131-4090-9af0-dd33c281c32b>\",\"Content-Length\":\"58433\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:07b19740-9fff-4735-a7e3-b60595a6f258>\",\"WARC-Concurrent-To\":\"<urn:uuid:56deef9b-609d-4296-a06b-cacfa76a737a>\",\"WARC-IP-Address\":\"142.11.244.238\",\"WARC-Target-URI\":\"http://www.gnet.org/up-in-arms-about-what-does-orthogonal-mean-in-math/\",\"WARC-Payload-Digest\":\"sha1:O6CR674CYFTNCMCXKDQZFJPPYWPY7DZJ\",\"WARC-Block-Digest\":\"sha1:G62ONORKGBGS6WUCWJVWTHFRPD7CCJV4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655908294.32_warc_CC-MAIN-20200710113143-20200710143143-00491.warc.gz\"}"}
https://www.physicsforums.com/threads/difficult-calculus-problems.79427/
[ "# Difficult Calculus Problems\n\n#### aetatis\n\nHey I have a course project where I have to find/develop 15 slightly difficult calculus problems and solve them. Any suggestions?\n(I'm not asking for proofs but if you want to share those as well I really really don't object", null, ")\n\nOh, and this is first/second year calculus not multivariable.\n\n#### Cyrus\n\naetatis said:\nHey I have a course project where I have to find/develop 15 slightly difficult calculus problems and solve them. Any suggestions?\n(I'm not asking for proofs but if you want to share those as well I really really don't object", null, ")\n\nOh, and this is first/second year calculus not multivariable.\nIf I were in your shoes I would pick 15 problems that involve the trig identities. Those were the hardest for me, because you have to remember the formulas which you can use as tricks to solve the problems. If your taking calc 2, Id do problems with trig substitution, those are kind of hard and will help you out on tests and stuff.\n\n#### berkeman\n\nMentor\nHere's kind of a fun site from Harvey Mudd College with tutorials and quizzes about calculus. You can go into the tutorials for 1st year calculus subjects, go to the harder subjects, and at the end of some of the tutorial pages, they'll have an \"Explore\" page where you can play with graphing or other stuff. Check out the Taylor Series subject and its Exploration page -- very cool.\n\nhttp://www.math.hmc.edu/calculus/\n\n#### mattmns\n\n$$\\int \\sqrt{tan\\theta} d\\theta$$\n\nI remember this problem from calc 2", null, "#### whozum\n\nProve that\n\n$$\\lim_{x\\rightarrow 0} \\frac{\\sin x}{x} = 1$$\n\n#### The Guru Kid\n\nwhozum said:\nProve that\n\n$$\\lim_{x\\rightarrow 0} \\frac{\\sin x}{x} = 1$$\nActually that wouldn't be too hard.\n\nIgnoring the limit and rearranging the equation gives you sinx = x. That's only true when x = 0.\n\nThe only reason the limit exists in the original equation is because you cannot divide by zero.\n\np.s. first post, hope i'm not wrong", null, "#### amcavoy\n\n$$\\lim_{x\\rightarrow 0} \\frac{\\sin x}{x} = 1$$\n\nTo prove this I would set up a unit circle and find the area of the triangle with legs sin(x) and cos(x), area of the sector of angle x, and area of triangle with legs 1 and tan(x). Now rearrange and set up and equation with these which will lead you to the fact that $$1\\leq \\lim_{x\\to 0}\\frac{\\sin x}{x}\\leq 1$$\n\n#### steven187\n\nhello there\n\nwell i remember this from high school, but i know that some of the people i go to uni with, were not able to do it\n$$\\int \\sec\\theta d\\theta$$\nthere would be two ways of doing it, a long and short way, i come to look at this now its pretty simple anyway good luck with solving\n\nby the way it would be a great idea to list some topics that you have went though in your calculus class?\n\nSteven\n\n#### fourier jr\n\n$$\\int_{0}^{\\pi/2} \\frac{dx}{1+(tan(x))^{\\sqrt2}}$$\n\n(the answer is pi/4, & it doesn't matter what number goes where the sqrt 2 is)\n\n#### shmoe\n\nHomework Helper\nThe Guru Kid said:\nIgnoring the limit and rearranging the equation gives you sinx = x. That's only true when x = 0.\nYou can't just ignore the limit or declare it to be 1 since sin(x)=x at x=0. Notice sin(x)=x^(1/3) only at x=0 as well, but $$\\lim_{x\\rightarrow 0} \\frac{\\sin x}{x^{1/3}} \\neq 1$$. You have to treat indeterminant limits like this with care.\n\n#### The Guru Kid\n\nshmoe said:\nYou can't just ignore the limit or declare it to be 1 since sin(x)=x at x=0. Notice sin(x)=x^(1/3) only at x=0 as well, but $$\\lim_{x\\rightarrow 0} \\frac{\\sin x}{x^{1/3}} \\neq 1$$. You have to treat indeterminant limits like this with care.\nWell i guess you have to understand why we took the limit in the first place. suppose we just had the equation w/o the limit. Then solving for x you get 0. But in current mathematics, you cannot do that since it gives you division by 0 (in the times befor Sir Wallis, mathematicians like Fermat did just that).\n\nThe reason we're saying sin(x)=x and not anything else is because we don't want to change the original equation.\n\nIn your equation, the limit would be x^(-2/3) which is solved in exactly the same manner.\n\n#### wisredz\n\nThe limit involving sinx/x is prooved by drawing a unit circle. What you told is wrong because what you do is intersecting the x=y line and y=sinx curve. sinx/x has a totally different curve.\nUsing the areas in the unit circle, it can be proved.\n\nNot so difficult but try prooving that as x approaches 0, sinkx/x=k.\n\n#### The Guru Kid\n\nI'm sorry i still don't see how my method is wrong. If you want to use unit circles, that's fine, but my method works unless it's one of those math coincidences.\n\nIf someone could explain why it's wrong, i would be grateful.", null, "#### whozum\n\nI think shmoe did a good job of that. I guess the better problem as wisredz suggested was find\n\n$$\\lim_{x\\rightarrow 0} \\frac{\\sin kx}{x}$$\n\n#### whozum\n\nAnother good one would be using some tabular integration to find a reduction formula for\n\n$$\\int \\sin ^n x dx$$ for even integers 'n'.\n\n#### shmoe\n\nHomework Helper\nThe Guru Kid said:\nIn your equation, the limit would be x^(-2/3) which is solved in exactly the same manner.\nAre you saying $$\\lim_{x\\rightarrow 0} \\frac{\\sin x}{x^{1/3}}=\\frac{1}{x^{2/3}}$$? This wouldn't make any sense at all, could you please clarify?\n\nBy your method if we ignore the limit in the 'equation'\n\n$$\\lim_{x\\rightarrow 0} \\frac{\\sin x}{x^{1/3}}=1$$\n\nand rearrange we get $$\\sin x=x^{1/3}$$ which is true only when x=0 so this limit is correct (of course it's actually wrong).\n\nPlease explain to me how this is different from what you've proposed for sin(x)/x.\n\n#### fourier jr\n\nhere's another one:\n\nmaximize $$f(x) = \\frac{1}{2^{x}} + \\frac{1}{2^{1/x}}$$ for x>0.\n\n#### aetatis\n\nwow\n\nwow what a great response thanks so much guys", null, "#### aetatis\n\noh and some of the topics we covered in that class are pretty basic:\nimproper integrals\nparametric eq\ntaylor mclauren series\nlength of a curve/along a path\nbut i've also done multivariable calc.\n\n#### riddick\n\n$$\\int_{0}^{\\pi/2} \\frac{dx}{1+(tan(x))^{\\sqrt2}}$$\n\n(the answer is pi/4, & it doesn't matter what number goes where the sqrt 2 is)\nhello i was trying to solve this problem but i really couldn't could you please show me step by step how to solve it i would really appreciate that, thank you very much..\n\n#### fourier jr\n\nhello i was trying to solve this problem but i really couldn't could you please show me step by step how to solve it i would really appreciate that, thank you very much..\n- multiply the numerator & denominator of the integrand by cos(x)^sqrt(2)\n- set y=(pi/2)-x\n- substitute y for x\n- but sin((pi/2)-x) = cos(x)\n- call the original integral I, and consider I+I, where one has sin in the numerator & the other has cos, & find that the integrand =1\n- integrate & get 2I = pi/2 => I=pi/4\n\n#### LCKurtz\n\nHomework Helper\nGold Member\nHere's a tougher problem from Calc III:\n\nShow that an airplane flying a closed course at constant airspeed in the presence of a constant wind (constant velocity and direction) must take longer than if there were no wind. The solution is at math.asu.edu/~kurtz if you have to look.\n\n#### coki2000\n\n$$\\int{x^x}dx$$ do someone know to solve this integral??\n\n#### fourier jr\n\ni think you need to rewrite the integrand as exp(xlogx) but i forget the rest\n\nedit: no that's for differentiating it, never mind\n\n#### lurflurf\n\nHomework Helper\n$$\\int{x^x}dx$$ do someone know to solve this integral??\nThat is an integral not expressible in elementary functions or standard special functions. If the integral on (0,1) in terms of an infinite series is enough we have the sophomore dream.\n\nLast edited:\n\n### Physics Forums Values\n\nWe Value Quality\n• Topics based on mainstream science\n• Proper English grammar and spelling\nWe Value Civility\n• Positive and compassionate attitudes\n• Patience while debating\nWe Value Productivity\n• Disciplined to remain on-topic\n• Recognition of own weaknesses\n• Solo and co-op problem solving" ]
[ null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.97844076,"math_prob":0.98383313,"size":447,"snap":"2019-26-2019-30","text_gpt3_token_len":100,"char_repetition_ratio":0.11286682,"word_repetition_ratio":0.6666667,"special_character_ratio":0.21700224,"punctuation_ratio":0.069767445,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99828947,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-22T18:24:55Z\",\"WARC-Record-ID\":\"<urn:uuid:b81bc47b-bfc5-453b-a390-65f22f9c4d72>\",\"Content-Length\":\"178795\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6689f915-449f-4200-b2f7-20a1e98777ae>\",\"WARC-Concurrent-To\":\"<urn:uuid:18ce23be-3573-476b-8db4-f1bc10f7eeb5>\",\"WARC-IP-Address\":\"23.111.143.85\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/difficult-calculus-problems.79427/\",\"WARC-Payload-Digest\":\"sha1:S7SMH6BW5TLLUPSR6IFF5PLTH4YMIVXE\",\"WARC-Block-Digest\":\"sha1:7PTSYSKSLQLPHN56UUQBNZVF26ZGYIRM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195528208.76_warc_CC-MAIN-20190722180254-20190722202254-00542.warc.gz\"}"}
https://pastebin.com/P7AbNszz
[ "• API\n• FAQ\n• Tools\n• Archive\nSHARE\nTWEET", null, "# Untitled", null, "a guest", null, "Jul 5th, 2017", null, "60", null, "Never\nNot a member of Pastebin yet? Sign Up, it unlocks many cool features!\n1. rscalc[rm2m_] := rm2m + 2 m*(1 + Log[(rm2m)/(2 m)])\n2. xmax = 700; xmin = -700; delx = 0.03; imax = Floor[(xmax - xmin)/delx];\n3. rstar = xmax; m = 1.0; l = 2.0; rcour = 0.99; ntot = 3;\n4. If[rstar > 4 m, r = rstar, r = 2 m*E^((rstar/(2 m)) - 1)];\n5. rm2m = r - 2 m;\n6. Do[rstest = rscalc[rm2m];\n7.   drsdr = (rm2m + 2 m)/rm2m;\n8.   rtempm2m = rm2m + (rstar - rstest)/drsdr;\n9.   rm2m = rtempm2m;, {i, 7}];\n10. mesh = NDSolve[{rm2mc'[rs] == rm2mc[rs]/(rm2mc[rs] + 2 m),\n11.    rm2mc[rstar] == rm2m}, rm2mc, {rs, xmax, xmin}, AccuracyGoal -> 25,\n12.    PrecisionGoal -> 25, WorkingPrecision -> 50]\n13. Plot[Evaluate[rm2mc[lex] /. mesh], {lex, -20, 20}, PlotRange -> All]\n14. xtest = -250;\n15. b = Evaluate[rm2mc[xtest] /. mesh]\n16. xchck = rscalc[b];\n17. err = xchck - xtest\n18. V[rm2m_] := (rm2m/(rm2m +\n19.        2 m))*((l (l + 1))/(rm2m + 2 m)^2 + (2 m/(rm2m + 2 m)^3));\n20. Plot[V[nax - 2 m], {nax, 0, 10}]\n21. Vsx[x_] := V[Evaluate[rm2mc[x] /. mesh]];\n22. Plot[Vsx[x], {x, -20, 20}]\n23. XTab = Table[xmin + i*delx, {i, imax + 1}];\nRAW Paste Data\nWe use cookies for various purposes including analytics. By continuing to use Pastebin, you agree to our use of cookies as described in the Cookies Policy.\n\nTop" ]
[ null, "https://pastebin.com/i/t.gif", null, "https://pastebin.com/i/t.gif", null, "https://pastebin.com/i/t.gif", null, "https://pastebin.com/i/t.gif", null, "https://pastebin.com/i/t.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6328196,"math_prob":0.99971825,"size":1275,"snap":"2019-43-2019-47","text_gpt3_token_len":553,"char_repetition_ratio":0.10936271,"word_repetition_ratio":0.0,"special_character_ratio":0.4752941,"punctuation_ratio":0.22222222,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99486,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-21T11:31:25Z\",\"WARC-Record-ID\":\"<urn:uuid:fb29732a-acdf-44bb-a9c9-c8860ecff112>\",\"Content-Length\":\"26913\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6f56df9b-fab6-4793-8453-17c4230f6c4f>\",\"WARC-Concurrent-To\":\"<urn:uuid:3393c631-4402-49e8-b9ee-35913443906d>\",\"WARC-IP-Address\":\"104.22.3.84\",\"WARC-Target-URI\":\"https://pastebin.com/P7AbNszz\",\"WARC-Payload-Digest\":\"sha1:B3TM5QV6G6NF32QAGBFU6F44KECFQV5J\",\"WARC-Block-Digest\":\"sha1:XB7I45CX22ITL4JI7HCRAHNUSLVPS3BK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670770.21_warc_CC-MAIN-20191121101711-20191121125711-00469.warc.gz\"}"}
https://agatadlugosz.pl/old-havana-qvdcb/how-to-solve-dijkstra%27s-algorithm-58addb
[ "## how to solve dijkstra's algorithm\n\nIn a graph, the Dijkstra's algorithm helps to identify the shortest path algorithm from a source to a destination. respectively. As such, beyond just preparing for technical interview questions, it is important to understand. 0 ⋮ Vote. Note : This is not the only algorithm to find the shortest path, few more like Bellman-Ford, Floyd-Warshall, Johnson’s algorithm are interesting as well. Constructing the graph the “distance vector” routing algorithm. the predecessor for each node to \\(u\\) and we add each node to the Finally, we set the previous of each vertex to null to begin. with using Dijkstra’s algorithm on the Internet is that you must have a The pseudocode in Algorithm 4.12 shows Dijkstra's algorithm. Upon addition, the vertex contains no neighbors thus the empty array. When a vertex is first created dist While all the elements in the graph are not added to 'Dset' A. Approach to Dijkstra’s Algorithm The code to solve the algorithm is a little unclear without context. the previously known distance. This isn’t actually possible with our graph interface, and also may not be feasible in practice for graphs with many vertices—more than a computer could store in memory, or potentially even infinitely many vertices. predecessor links accordingly. I am working on solving this problem: Professor Gaedel has written a program that he claims implements Dijkstra’s algorithm. To create our priority queue class, we must initialize the queue with a constructor and then write functions to enqueue (add a value), dequeue (remove a value), and sort based on priority. As you can see, this method is used when the distance to a vertex that Follow 10 views (last 30 days) Sivakumaran Chandrasekaran on 24 Aug 2012. we will make use of the dist instance variable in the Vertex class. For the dijkstra’s algorithm to work it should be directed- weighted graph and the edges should be non-negative. The algorithm we are going to use to determine the shortest path is Algorithm Steps: 1. Also Read- Shortest Path Problem … We can now initialize a graph, but we have no ways to add vertices or edges. A graph is made out of nodes and directed edges which define a connection from one node to another node. Dijkstra published the algorithm in 1959, two years after Prim and 29 years after Jarník. The network must be connected. Dijkstra's algorithm is an algorithm that is used to solve the shortest distance problem. Set distance for all other vertices to infinity. weights are all positive. Dijkstra published the algorithm in 1959, two years after Prim and 29 years after Jarník. 2. vertex that has the smallest distance. It is used for solving the single source shortest path problem. In the next iteration of the while loop we examine the vertices that For Dijkstra: Assign to each node a distance value. 2. Dijkstra's algorithm works by marking one vertex at a time as it discovers the shortest path to that vertex . These are D, a distance of 7 from A, and F, a distance of 8 from A (through E). 8.20. Dijkstra’s Algorithm is another algorithm used when trying to solve the problem of finding the shortest path. Secondly the value is used for deciding the priority, and thus And we’ve done it! Once the graph is created, we will apply the Dijkstra algorithm to obtain the path from the beginning of the maze (marked in green) to the end (marked in red). We then push an object containing the neighboring vertex and the weight into each vertex’s array of neighbors. The To solve this, we use Dijkstra's algorithm. Dijkstra algorithm is also called single source shortest path algorithm. We start with a source node and known edge lengths between nodes. For the dijkstra’s algorithm to work it should be directed- weighted graph and the edges should be non-negative. Set Dset to initially empty 3. Problem #1 Problem Statment: There is a ball in a maze with empty spaces and walls. Dijkstra's algorithm - Wikipedia. The vertex \\(x\\) is next because it In practice this is not the case and other The … To enqueue, an object containing the value and its priority is pushed onto the end of the queue. \\(v,w,\\) and \\(x\\). In this process, it helps to get the shortest distance from the source vertex to every other vertex in the graph. infinity, but in practice we just set it to a number that is larger than Dijkstra's algorithm works by marking one vertex at a time as it discovers the shortest path to that vertex​. You'll find a description of the algorithm at the end of this page, but, let's study the algorithm with an explained example! We step through Dijkstra's algorithm on the graph used in the algorithm above: Initialize distances according to the algorithm. C is added to the array of visited vertices and we record that we got to D via C and F via C. We now focus on B as it is the vertex with the shortest distance from A that has not been visited. the results of a breadth first search. We start at A and look at its neighbors, B and C. We record the shortest distance from B to A which is 4. is already in the queue is reduced, and thus moves that vertex toward See Figure 4 for the state of all the vertices. The original problem is a particular case where this speed goes to infinity. The queue is ordered based on descending priorities rather than a first-in-first-out approach. Theoretically you would set dist to Dijkstra's algorithm has many variants but the most common one is to find the shortest paths from the source vertex to all other vertices in the graph. In our array of visited vertices, we push A and in our object of previous vertices, we record that we arrived at C through A. I don't know how to speed up this code. Constructing the graph In my exploration of data structures and algorithms, I have finally arrived at the famous Dijkstra’s Shortest Path First algorithm (Dijkstra’s algorithm or SPF algorithm for short). The graph above contains vertices of A — F and edges that possess a weight, that is the numerical value. To begin, the shortest distance from A to A is zero as this is our starting point. The code for Dijkstra’s algorithm is shown in Listing 1. I need some help with the graph and Dijkstra's algorithm in python 3. I am not getting the correct answer as the output is concentrating on the reduction of nodes alone. A node (or vertex) is a discrete position in a graph. However, we now learn that the distance to \\(w\\) is That is, we use it to find the shortest distance between two vertices on a graph. The program produces v.d and v.π for each vertex v in V. Give an O. This is why it is frequently known as Shortest Path First (SPF). Edges have an associated distance (also called costs or weight). Dijkstra's Algorithm. This tutorial describes the problem modeled as a graph and the Dijkstra algorithm is used to solve the problem. algorithm iterates once for every vertex in the graph; however, the Again this is similar to the results of a breadth first search. if(smallest || distances[smallest] !== Infinity){, Route-Based Code Splitting with Loadable Components and Webpack, Pure JavaScript Pattern for State Management, A Helpful Checklist While Adding Functionality to a React-Redux app, The most popular JavaScript tools you should be using. Let’s walk through an application of Dijkstra’s algorithm one vertex at We also set Patients with more severe, high-priority conditions will be seen before those with relatively mild ailments. I am working on solving this problem: Professor Gaedel has written a program that he claims implements Dijkstra’s algorithm. We record 6 and 7 as the shortest distances from A for D and F, respectively. Again this is similar to The emphasis in this article is the shortest path problem (SPP), being one of the fundamental theoretic problems known in graph theory, and how the Dijkstra algorithm can be used to solve it. The algorithm works by keeping the shortest distance of vertex v from the source in an array, sDist. \\(x\\). Obviously this is the case for Let’s walk through an example with our graph. Connected Number of Nodes . Recall that Dijkstra’s algorithm requires that we start by initializing the distances of all possible vertices to infinity. a time using the following sequence of figures as our guide. The exception being the starting vertex, which is set to a distance of zero from the start. He came up with it in 1956. use for Dijkstra’s algorithm. It can handle graphs consisting of cycles, but negative weights will cause this algorithm to produce incorrect results. Set all vertices distances = infinity except for the source vertex, set the source distance = 0. The algorithm maintains a list visited[ ] of vertices, whose shortest distance from the … when we are exploring the next vertex, we always want to explore the It is important to note that Dijkstra’s algorithm works only when the the new costs to get to them through the start node are all their direct Answered: Muhammad awan on 14 Nov 2013 I used the command “graphshortestpath” to solve “Dijkstra”. Dijkstra's Algorithm computes the shortest path from one point in a graph to all other points in that graph. Given a starting vertex and an ending vertex we will visit every vertex in the graph using the following method: If you’re anything like me when I first encountered Dijkstra’s algorithm, those 4 steps did very little to advance your understanding of how to solve the problem. A Refresher on Dijkstra’s Algorithm. Graph. 3. This can be optimized using Dijkstra’s algorithm. Think triaging patients in the emergency room. Vote. Algorithm. Let’s define some variables to keep track of data as we step through the graph. the priority queue is dist. We assign the neighboring vertex, or node, to a variable, nextNode, and calculate the distance to the neighboring node. how to solve Dijkstra algorithm in MATLAB? If smallest happens to be the finishing vertex, we are done and we build up a path to return at the end. Illustration of Dijkstra's algorithm finding a path from a start node (lower left, red) to a goal node (upper right, green) in a robot motion planning problem. The second difference is the Vote. any real distance we would have in the problem we are trying to solve. You will be given graph with weight for each edge,source vertex and you need to find minimum distance from source vertex to rest of the vertices. Dijkstra’s Algorithm run on a weighted, directed graph G={V,E} with non-negative weight function w and source s, terminates with d[u]=delta(s,u) for all vertices u in V. a) True b) False based off of user data. In this implementation we Finally we check nodes \\(w\\) and Unmodified Dijkstra's assumes that any edge could be the start of an astonishingly short path to the goal, but often the geometry of the situation doesn't allow that, or at least makes it unlikely. Since the initial distances to • How is the algorithm achieving this? Dijkstra’s algorithm is hugely important and can be found in many of the applications we use today (more on this later). beginning of the priority queue. Answer: b Explanation: Dijkstra’s Algorithm is used for solving single source shortest path problems. I am not getting the correct answer as the output is concentrating on the reduction of nodes alone. Dijkstra’s algorithm is a greedy algorithm. We assign this value to a variable called candidate. addition of the decreaseKey method. Algorithm: 1. the front of the queue. see if the distance to that vertex through \\(x\\) is smaller than Mark other nodes as unvisited. Dijkstra’s algorithm is a greedy algorithm for solving single-source shortest-paths problems on a graph in which all edge weights are non-negative. This \\(w\\). Dijkstra’s algorithm uses a priority queue. In our initial state, we set the shortest distance from each vertex to the start to infinity as currently, the shortest distance is unknown. Dijkstra’s Algorithm is another algorithm used when trying to solve the problem of finding the shortest path. How about we understand this with the help of an example: Initially Dset is empty and the distance of all the vertices is set to infinity except the source which is set to zero. Dijkstra algorithm works only for connected graphs. E is added to our array of visited vertices. It is not the case For each neighboring vertex, we calculate the distance from the starting point by summing all the edges that lead from the start to the vertex in question. starting node to all other nodes in the graph. So to solve this, we can generate all the possible paths from the source vertex to every other vertex. As you can see, we are done with Dijkstra algorithm and got minimum distances from Source Vertex A to rest of the vertices. We must update the previous object to reflect that the shortest distance to this neighbor is through smallest. variations of the algorithm allow each router to discover the graph as costs. It becomes much more understandable with knowledge of the written method for determining the shortest path between vertices. I tested this code (look below) at one site and it says to me that the code works too long. basis that any subpath B -> D of the shortest path A -> D between vertices A and D is also the shortest path between vertices B The program produces v.d and v.π for each vertex v in V. Give an O. We will note that to route messages through the Internet, other We have our solution to Dijkstra’s algorithm. I touched on weighted graphs in the previous section, but we will dive a little deeper as knowledge of the graph data structure is integral to understanding the algorithm. It can be used to solve the shortest path problems in graph. Dijkstra Algorithm- Dijkstra Algorithm is a very famous greedy algorithm. At this point, we have covered and built the underlying data structures that will help us understand and solve Dijkstra’s Algorithm. \\(v,w,\\) and \\(x\\) are all initialized to sys.maxint, I don't know how to speed up this code. Open nodes represent the \"tentative\" set (aka set of \"unvisited\" nodes). Important Points. algorithm that provides us with the shortest path from one particular Refer to Animation #2 . One other major component is required before we dive into the meaty details of solving Dijkstra’s algorithm; a priority queue. order that we iterate over the vertices is controlled by a priority This is important for Dijkstra’s algorithm Imagine we want to calculate the shortest distance from A to D. To do this we need to keep track of a few pieces of data: each vertex and its shortest distance from A, the vertices we have visited, and an object containing a value of each vertex and a key of the previous vertex we visited to get to that vertex. distance and change the predecessor for \\(w\\) from \\(u\\) to the smallest weight path from the start to the vertex in question. \\(y\\). Dijkstra’s algorithm was designed to find the shortest path between two cities. A graph is made out of nodes and directed edges which define a connection from one node to another node. Unmodified Dijkstra's assumes that any edge could be the start of an astonishingly short path to the goal, but often the geometry of the situation doesn't allow that, or at least makes it unlikely. the position of the key in the priority queue. This can be optimized using Dijkstra’s algorithm. Of B’s neighboring A and E, E has not been visited. Pop the vertex with the minimum distance from the priority queue (at first the pop… © Copyright 2014 Brad Miller, David Ranum. A node (or vertex) is a discrete position in a … As it stands our path looks like this: as this is the shortest path from A to D. To fix the formatting we must concat() A (which is the value ofsmallest) and then reverse the array. Dijkstra Algorithm. step results in no changes to the graph, so we move on to node Dijkstra Algorithm is a very famous greedy algorithm. Dijkstra Algorithm is a very famous greedy algorithm. The emphasis in this article is the shortest path problem (SPP), being one of the fundamental theoretic problems known in graph theory, and how the Dijkstra algorithm can be used to solve it. We initialize the distances from all other vertices to A as infinity because, at this point, we have no idea what is the shortest distance from A to B, or A to C, or A to D, etc. It is used to find the shortest path between nodes on a directed graph. algorithms are used for finding the shortest path. Dijkstra’s Algorithm is one of the more popular basic graph theory algorithms. Dijkstra’s algorithm can be used to calculate the shortest path from A to D, or A to F, or B to C — any starting point to any ending point. , w, \\ ) and we add each node to \\ ( v, w\\ and! That a priority queue of course, this is our starting point except. Shortest distance between two cities it can be optimized using Dijkstra ’ s algorithm is shown Listing... Costs to each of these three nodes of neighbors the algorithm ve created a new priority queue represent. Or node, to a distance of 7 from a to D remains unchanged 4 for Dijkstra. V in V. Give an O this vertex, which is set to a E. Zero as this is not the case and other variations of the smallest weight path from one node all. Interview questions, it helps to identify the shortest path algorithm is another algorithm used trying! This point, we will see Dijkstra algorithm for find shortest path from one particular source node and infinity all! Shortest distances from source to all other remaining nodes of the graph scales into one of the graph above priority... Is, we can generate all the vertices neighboring \\ ( u\\ ) are used to solve tridimensional... Priority queue another node example of code for this algorithm to solve “ Dijkstra ” little unclear without.. Direction i.e we overestimate the distance to the results of a — F represent the vertices and the weight nextNode... Key, value pairs distance of vertex v in V. Give an O at first the pop… Dijkstra 's solves... Was conceived by computer scientist Edsger W. Dijkstra in 1956 and published three later! Enqueue this neighbor is through smallest the route traveled to Give the path! The problem modeled as a graph problems in graph empty spaces and walls vertices of to. To another node can not be used to solve the tridimensional problem stated.! = ∞ 2 item in the Tree Chapter of nextNode same analysis we update the costs each... Works only when the weights are non-negative we can quickly determine the shortest distance smallest... A greedy algorithm for solving the single source shortest path from one node to \\ ( y\\ since! And \\ ( u\\ ) we begin the Professor ’ s algorithm how to solve dijkstra's algorithm discrete! This property in the graph above contains vertices of a — F and perform the same analysis D a... More severe, high-priority conditions will be seen before those with relatively mild ailments one site and says. Zero from the source vertex to every other vertex of 6 works only the... A breadth first search a node ( or nodes ) and \\ u\\... Vertices distances = infinity except for the Dijkstra 's algorithm is more just! The shortest-path problem for any weighted, directed graph between two vertices on a is... Shows how to speed up this code ( look below ) at one site and says. Into one of the while loop we examine the vertices more than just a problem to master high-priority will! Value pairs current distance to arrive at F is via C and push F into the details... First assign a distance-from-source value to all other remaining nodes of the more popular basic graph algorithms... The same analysis nodes alone vertex contains no neighbors thus the empty.! Order of the decreaseKey method problem Statment: there is a generic solution where the speed inside holes... All vertices in the queue are used to solve the problem iteration of the graph many the... Algorithms are used for solving the single source shortest path from one particular source to. As shortest path between two vertices on a graph that covers all the,. Return at the end of the edge between them after Jarník, but must. Modeled as a graph is made out of nodes and directed edges which define a connection one! To dequeue a value from the sorted queue, however, no additional changes are found and so the queue... Unvisited '' nodes ) '' set ( aka set of `` unvisited nodes... Running it on all vertices in the next iteration of the edge them! V.Π for each vertex in question while we can generate all the interfaces out of the key for priority! And a finishing vertex, we use every day, and the edges should be non-negative it becomes easier! Smallest to the neighboring vertex and the Dijkstra 's algorithm that you may want to read about is called “! Written a program that he claims implements Dijkstra ’ s define some to! Effort to better understand Dijkstra ’ s algorithm is a discrete position in a graph is a little without. Of B and C, a to D via C and F, respectively shows! Nodes of the algorithm above: Initialize distances according to distance costs to each of these three nodes distance repeat... Is frequently known as shortest path between nodes on a graph that all! Data structure that consists of vertices ( or nodes ) ) Dijkstra ’ s algorithm Professor. Of cycles, but broken down into manageable chunks it becomes much more understandable with knowledge of smallest! Into one of the graph as it discovers the shortest path problem numerical value course this. Algorithm finishes the distances of all the vertices the … recall that Dijkstra ’ s algorithm Figure 6 7. This code ( look below ) at one site and it says to me that the code solve... Same algorithm ( and its many variations ) are used for solving the source! And that is used for solving single-source shortest-paths problems on a directed graph being the starting.... ( or nodes ) and \\ ( w\\ ) and \\ ( y\\ ) adjacency list for smallest cover. Predecessor links for each node to \\ ( z\\ ) ( see Figure. A magnitude be visited according to the algorithm is also called costs or weight.. Three years later a maze with empty spaces and walls to this neighbor how to solve dijkstra's algorithm its is! Route traveled to Give the shortest path between nodes on a graph non-negative. Give the shortest distance from smallest to the results of a to D via C F! An associated distance ( also called single source shortest path problem s array of neighbors algorithm aka the path... Start with a source node to all other remaining nodes of the steps involved before diving the! Variable how to solve dijkstra's algorithm contain the current distance from smallest to the start discover graph. The sorted queue, we set the source in an array, sDist describes. ( since they are not visited ) set initial node as current in Listing 1 harder!, this same algorithm ( and its distance was sys.maxint important to note the! Step is to determine the shortest path between a starting vertex and the weight of all possible vertices to.. Dist [ v ] = ∞ 2 this same algorithm ( and its distance was sys.maxint that. 4.12 shows Dijkstra 's algorithm works by keeping the shortest path problems in graph identify shortest... Vertices of a — F and D from a ( through E ) -time to... Speed goes to infinity [ 3 ] Pick first node and infinity for all other remaining of... In python 3 14 Nov 2013 i used the command “ graphshortestpath ” to solve this, we the... 30 days ) Sivakumaran Chandrasekaran on 24 Aug 2012 to check the output is concentrating on the of! Where we begin with the new, shorter distance based on descending priorities rather than a first-in-first-out.... As this is why it is used for solving single-source shortest-paths problems a! Is, we set the source vertex, or node, and the rest of Professor. Our graph beyond just preparing for technical interview questions, it is used to find shortest. S the bulk of the algorithm works by how to solve dijkstra's algorithm the shortest path between a starting node, and,... Stated below algorithm on the reduction of nodes and directed edges which define a from. Aug 2012 Sorting View answer is where we begin of 7 from for... Very large number problem to master priority, and the weight of nextNode to reiterate, in the adjacency for! Start with a source node to all other remaining nodes of the graph of and. Our array of neighbors and walls may very well find its way into one of the 2 vertices wish! Of nodes and directed edges which define a connection from one node to another node check the output of graph. Adjacent node distance calculations to identify the shortest path problem add vertices edges. Weight path from one particular source node and infinity for all other nodes ( since they are not visited set! Weights and find min of all the vertices neighboring \\ ( v E. Implement the ShortestPathFinder interface a directed graph variations ) are used for solving single source shortest path.... No changes to the results of a — F represent the `` tentative '' set ( set. Without context case where this speed goes to infinity, an object containing the route to! A favorite of CS courses and technical interviewers, Dijkstra ’ s is. About the geometry of the algorithm is used to solve this, we set the previous of each v. Finding shortest paths between them of 7 from a to D remains unchanged if candidate is smaller the! Dive into the meaty details of solving Dijkstra ’ s algorithm to solve the shortest. In Listing 1 we wish to connect and the rest of the more popular basic graph theory algorithms E -time. Your future projects through the graph here we ’ ve created a new vertex, set source! Cause this algorithm to check the output of the 2 vertices we wish to and..." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92447203,"math_prob":0.9455466,"size":25977,"snap":"2021-21-2021-25","text_gpt3_token_len":5527,"char_repetition_ratio":0.19820583,"word_repetition_ratio":0.17277716,"special_character_ratio":0.21380451,"punctuation_ratio":0.10380146,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9914907,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-19T06:09:59Z\",\"WARC-Record-ID\":\"<urn:uuid:43a46d54-a44e-49bb-b3eb-6c0fb9ed1d18>\",\"Content-Length\":\"41282\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:12988934-e3fc-4b2a-b4fc-7d6fcaabcfe0>\",\"WARC-Concurrent-To\":\"<urn:uuid:0fbed10f-d497-495f-944e-c959fb78b88a>\",\"WARC-IP-Address\":\"109.95.156.8\",\"WARC-Target-URI\":\"https://agatadlugosz.pl/old-havana-qvdcb/how-to-solve-dijkstra%27s-algorithm-58addb\",\"WARC-Payload-Digest\":\"sha1:DHSWBZZZ33AWIAEK2EYOMKSQA7CL75NJ\",\"WARC-Block-Digest\":\"sha1:J44VDXEH34Q26B3VW7G6OI5GQJNWWJST\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487643703.56_warc_CC-MAIN-20210619051239-20210619081239-00197.warc.gz\"}"}
http://www.scielo.org.za/scielo.php?script=sci_arttext&pid=S1816-79502010000100012&lng=en&nrm=iso&tlng=en
[ "## Indicators\n\n•", null, "Cited by Google\n•", null, "Similars in Google\n\n## On-line version ISSN 1816-7950Print version ISSN 0378-4738\n\n### Water SA vol.36 n.1 Pretoria Jan. 2010\n\nA falling-head procedure for the measurement of filter media sphericity\n\nJohannes Haarhoff*; Ali Vessal\n\nDepartment of Civil Engineering Science, University of Johannesburg, PO Box 524, Auckland Park 2006, South Africa\n\nABSTRACT\n\nFilter media sphericity is normally determined experimentally in a laboratory filtration column. The pressure drop is measured across a bed of known depth while the filtration rate is kept constant. The sphericity is then calculated from a theoretical headloss relationship using the Ergun equation. This paper proposes a method along similar lines, but suggests a much simpler experimental procedure. Instead of having to maintain a constant flow rate and measuring both the flow rate and the pressure, the column is filled and the water then allowed to drain through the bed. The only measurement to be taken is the time it takes for the water level to drop through a known distance, which is called a falling-head procedure. The full theoretical development of the method is provided, as well as a detailed experimental procedure. The practicality of the method is demonstrated with tests performed on a variety of filter media, and a fully-worked example is presented.\n\nKeywords: filter media, granular filtration, sphericity, Ergun, falling-head test, grain shape.\n\nIntroduction\n\nRapid gravity filtration is the backbone of phase separation at most water treatment plants in South Africa. The core of filtration is a layer of granular filter media, mostly silica sand, which offers resistance to the flow of water through the media bed during filtration, and also expands during backwash. Filter designers need to predict the head loss through the filter beds, as well as predicting fluidisation and expansion of the media bed during backwash. The design models for head loss, fluidisation and expansion all include the sphericity (or roundness) of the media grains as an important variable controlling the behaviour of the media.\n\nNumerous definitions were proposed to express the degree of roundness of a solid object. A review by Ceronio (1997) concluded that the surface ratio sphericity is the definition most suited and commonly accepted for filter media. This is defined as a ratio:", null, "It is quite easy to calculate the surface ratio sphericity (simply referred to as 'sphericity' in the remainder of the paper) of a single object with a defined shape. A perfect sphere, for example, will have a sphericity of 1.00, while a cube and a typical sheet of paper will have sphericities of 0.81 and 0.015 respectively. The challenge to the filter designer, however, is to find the average sphericity of a filter bed which typically contains about 3000 million grains per cubic metre. Practically, sphericity can be tested in a number of ways:\n\n• By comparing a number of grains through a stereoscope and matching with a printed guideline (Fair et al., 1968). This method had been used on many different media types at the Water Research Group of the University of Johannesburg (UJWRG) and yielded results which were consistently too high.\n\n• By measuring the rate at which a sand grain sinks in water, and using this value to calibrate the shape factor in a modified Stokes equation. This test has to be repeated for many different grains to get a statistically robust estimate. Moreover, there is no direct mathematical link between the shape factor from this test and the surface ratio sphericity.\n\n• By measuring the expansion of a media sample in a test column at different flow rates, and then using this expansion to determine the sphericity from one of the expansion models (e.g., the model of Dharmarajah and Cleasby, 1986; or the recently proposed model of Soyer and Akgiray, 2009).\n\n• By measuring the head loss through a media sample in a test column, and then using this head loss to determine the sphericity from one of the head loss models (e.g., the model of Ergun, presented in AWWA, 1990; or the model of Trussell and Chang, 1999).\n\nThe aim of this paper is to present a simple method, easily performed with a minimum of equipment, to bring the measurement of media sphericity within easy reach of design engineers and treatment plant managers. The point of departure for this paper will be the measurement of head loss, then obtaining the sphericity from the Ergun equation. Instead of conducting the test in a conventional manner at a constant flow rate, which requires flow regulation and the measurement of flow rate as well as headloss, a falling-head test will be developed which only requires the measurement of the time it takes for the water level to drop over a known distance. It should be noted that the results should be identical, whether the test is performed at a constant flow rate or with a falling head. The advantage of the proposed method is that it is simpler and quicker to perform. The data analysis, however, is more complicated, so the theory required will be fully developed with a worked example.\n\nThe Ergun equation\n\nThe Ergun equation reads (AWWA, 1990, p. 465):", null, "with:\n\n h = headloss through media bed (m) L = depth of media bed (m) V = filtration rate (m·s-1) ε = media bed porosity (-) ψ = average surface area sphericity (-) d = geometric grain diameter (m) g = gravitational acceleration (m·s-2) ρ = density of water (kg·m-3) μ = dynamic viscosity of water (kg·m-1·s-1)\n\nThe attractive feature of the Ergun equation is that it has both laminar and turbulent terms. The Reynolds number for filtration is calculated as:", null, "For a bed of filter media, where the interstitial spaces cover a broad range in terms of their size and tortuosity, there is not a sharp, predictable transition between laminar and turbulent flow as in the well-known example of pipe flow. This is evidenced by a broad transition range for the Reynolds number reported in the literature. Typical rapid gravity filtration rates lead to Reynolds numbers which are in the transition zone. The use of the Ergun equation, which automatically accounts for both laminar and turbulent flow components, is therefore strongly recommended above some other models which only include laminar terms.\n\nHow does one find the geometric grain diameter of a mixed media bed with a range of grain sizes? Filter sand is specified according to its grain size distribution and sieve analyses are routinely and easily performed. From such a sieve analysis, which typically splits the sample into about 5 to 8 size fractions, the geometric mean of the passing and retaining sieve sizes is calculated for each fraction:", null, "For each fraction, its fractional mass contribution α can be calculated:", null, "The Ergun equation is now applied to each of the fractions in turn, by assuming that each fraction has a depth of α·L. Importantly, it is also assumed that the grain sphericity of all the fractions are the same – an assumption to be verified later in the paper. The result is the working equation used by design engineers for estimating the headloss through filter media:", null, "Media grain density\n\nThe density of the media grains, required in the next step for calculating media porosity, is measured by pouring a previously dried and weighed sample into a measuring cylinder partially filled with water. The mass is already known, the grain volume is determined from the volume displacement in the cylinder and the density is thus directly calculated. From routine tests done at the UJWRG over about 15 years, typical values for media density are:\n\n• For good quality clean silica sand, the density is typically in the range between 2 450 and 2 650 kg·m-3.\n\n• With extensive amorphous calcium carbonate deposits on silica sand, the density could be as low as 2 200 kg·m-3\n\n• A typical value for filter-grade anthracite is 1400 kg·m-3.\n\nMedia bed porosity\n\nThe porosity of a randomly-packed media bed typically varies between 0.45 and 0.55. Although this range may seem to be rather narrow, the Ergun equation shows that the headloss is strongly dependent on porosity – a porosity of 0.45 will lead to a headloss 2 to 3 times higher than a porosity of 0.55. Moreover, the porosity of a bed is not a constant. A media bed which is gently settled after bed expansion in a test filter, will compact by as much as 10% after a single sharp tap to the side of the filter, which translates to a significant reduction in porosity. For the determination of sphericity, however, it is only important to know what the porosity is at the time of the falling-head test. It is therefore suggested that the media sample is dried and weighed before it is transferred to the test column. After the test is performed, the exact media bed depth is measured. The actual in situ porosity for the test is then determined from:", null, "with\n\n M = mass of media sample used in test (kg) ρgrain = density of media grains (kg·m-3) D = diameter of test column (m)\n\nWater density and viscosity\n\nBoth the density and viscosity of the water can be reliably estimated from the water temperature. The following polynomials were fitted by the authors to values reported in Lide (2004) and are within 0.002% for density and 0.01% for dynamic viscosity:", null, "", null, "with\n\nρ = density (kg·m-3)\nμ = dynamic viscosity (kg·m-1·s-1)\nT = water temperature (°C)\n\nA test column is required for the test, schematically indicated in Fig. 1. The column has 2 marks – one about 1 000 mm above the overflow level and the other about 100 mm above the overflow level. Each falling-head test has 2 parts. In the 1st part, the test is performed without any media. From the time it takes for the water level to drop between the marks as it flows through the empty column test, the resistance offered by the outlet piping and the media support grid can be quantified. In the 2nd part, the same test is repeated, but this time with the media in the column. By the mathematical analysis which follows, the media sphericity can be calculated. A detailed procedure for performing the test is suggested in Appendix A.", null, "Solving for the sphericity from the falling-head test\n\nThe water flowing through the test column does not only have to overcome the resistance of the media bed, but also the resistance offered by the media support system and outlet piping of the column. These non-media losses are turbulent and take the form:", null, "In the case of a falling head, where the filtration rate varies as the water level drops, the filtration rate is expressed as dh/dt, leading to a differential equation:", null, "The solution of this differential equation allows the estimation of C by using the time it takes for the water level to drop from the top mark to the bottom mark:", null, "This falling-head determination of the non-media losses is superior to the constant-rate method. The non-media losses are normally small in relation to the media losses and are difficult to determine accurately in the constant-rate method. The time measurement of the falling-head method is much more accurate.\n\nWhen the column is filled with media, the total head encountered by the water flowing out of the column is the sum of resistance offered by the media and the outlet piping. By adding equations (5) and (10), a 1st-order differential equation follows, in terms of the constants A, B and C which are independently determined for each test:", null, "", null, "", null, "The solution to this differential equation is given by:", null, "The above integral does not offer an analytical solution and has to be evaluated numerically. For the work reported further on in this paper, Simpson's rule was applied with 10 intervals to allow a solution for the sphericity ψ, and the 'Goal Seek' function of Excel was used to evaluate the sphericity.\n\nVerification of the proposed procedure\n\nExperimental equipment\n\nThe tests were conducted in a clear polyethylene tube with an inside diameter of 67 mm. The media was supported on a fine stainless steel mesh with an approximate aperture size of 1.2 μm, which in turn rested on a coarser, stiff stainless grid with an approximate aperture size of 1.5 mm. The column extended to about 200 mm below the media support grids and this under-floor volume was filled with glass marbles to equalise the flow patterns in the under-floor volume. A single connection allowed water in or out of the under-floor volume. This connection led to both the overflow pipe (which could be blocked or opened with a rubber stopper) and the inlet hose (which could be opened or closed with a valve from the municipal connection).\n\nThe overflow pipe was installed such that its top was about 250 mm above the media support grid, which determines the maximum media bed depth that can be tested. For all the tests conducted, the actual bed depth was between 100 mm and 140 mm.\n\nThe top and bottom marks in the column, used to determine the start and stop of the falling-head test, respectively, were 1 103 mm and 103 mm above the top of the overflow pipe. The difference of 1 000 mm allowed a reliable estimate of the time taken for the falling-head test. As the flow rate decreases significantly when the water level in the column approaches the level of the overflow pipe, the bottom mark should not be less than about 100 mm above the top of the overflow pipe, to limit the drain-down time to a reasonable value. For accurate results, it is necessary to consider the height of the water above the top of the overflow pipe. This can be obtained by direct measurement during the falling-head test. The diameter of the overflow pipe used is 20 mm. For the tests reported in this paper, the overflow height during the empty bed test varied from 25 mm when the water was at the top mark and 5 mm when the water was at the bottom mark. The corresponding values when the column was filled with media were 7 mm and 2 mm.\n\nMedia samples\n\nThe procedure was tested on 2 media samples, listed in Table 1. The 5 samples were further subdivided as shown to yield a total of 10 sub-samples. Sieve analyses were performed, 1 test for each sub-sample, and the per cent recovery from the sieves was in all cases higher than 99%. For each sub-sample, the density of the media was measured with 5 replicates to obtain the average. Each test commenced with an empty bed test in triplicate, after which the column constant C was calculated from the average. After each of the sub-samples had been placed in the column, 5 consecutive tests were performed as detailed in Appendix A, each with a slightly different bed height and thus media porosity. This yielded a total of 49 independent sphericity values (1 test on Sample 5B was unintentionally omitted).", null, "To provide a more intuitive grasp of the different media types, Fig. 2 shows a collage of photomicrographs, all at exactly the same magnification.", null, "Results\n\nFor the total of 24 independent density determinations, the standard deviation was 1.0% from the average. The average values for each sample are shown in Table 2. There was no statistically significant difference (α = 5%) between Samples 2 and 3, which is supported by the observation that these samples were drawn and processed from the same geological deposit. The difference between Samples 4 and 5 was statistically highly significant, showing that the deposition of amorphous calcium carbonate caused a measurable reduction in density.", null, "From the sieve analyses, the 10th and 60th size percentile values were estimated by linear interpolation. The effective size (d10) for all the samples was about the same, with the exception of Sample 2. The uniformity coefficient (d60/d10) for filter media is normally specified to be less than 1.4. The calcium carbonate deposition on Sample 5 has a marked and detrimental influence on the uniformity coefficient.\n\nThe tests were performed with municipal tap water during the months of May to July 2007, with the temperature ranging from 13.0°C to 18.5°C. For each test, the density and viscosity values were calculated from the measured temperature.\n\nThe column constant C for the column used varied between 239 and 301 s2·m-1. This fairly large scatter could not be related to any systematic cause, and was presumably the result of the slight blockage of the media support mesh by small grains remaining after a media sample was washed out. It is therefore suggested to conduct the empty bed test with every sample to be tested, it being a quick and easy preventative step.\n\nThe porosity of the media in the test column was deliberately varied for each of the sub-samples by controlled tapping of the column (see the detailed procedure in Appendix A). For the silica samples analysed, the porosities all fell within a fairly narrow band of 0.46 to 0.53 with an average of 0.49. Sample 1 (the glass ballotini) had a significantly lower porosity range of 0.37 to 0.42. This agrees with the universal observation that randomly packed round grains attain a higher packing density (i.e. a lower porosity) than grains with more irregular shapes.\n\nThe sphericity values, calculated from the previous results and the procedure proposed in this paper, are shown in Table 3.\n\nThe ballotini of Sample 1 appear, as is evident from Fig. 2, to be perfect spheres with sphericity of 1.000. The average measured sphericity turns out to be 0.988, which is remarkably close.\n\nThe difference between Samples 4A and 4B (different samples drawn from the same stockpile) was statistically insignificant (α = 0.05). These values could thus be pooled to yield an average sphericity of 0.709. Similarly, the difference between Samples 5A and 5B turned out to be statistically insignificant and yielded an average of 0.736 after pooling. Next, the sphericity of the pooled Samples 4 and 5 was compared. These values were indeed significantly different (α = 0.05). In other words, the deposition of calcium carbonate did have a significant, albeit small, effect of making the grains less angular.\n\nSample 3A was split into 3 different size fractions, 3B, 3C and 3D, to test the earlier assumption of Eq. (4), namely, that all the size fractions of a sample have equal sphericity. The possibility exists that the smaller grains within a media sample may be more shard-like and irregular. The results presented in Table 3 show that this is not the case. In fact, there is no statistically significant difference between the sphericities of Samples 3A, 3B, 3C and 3D (α = 0.05). The assumption underlying Eq. (4) is thus validated.\n\nConclusion\n\nThe objective of this paper is to present an alternative method for the determination of filter media, namely to use a falling-head procedure instead of the normal constant-rate procedure. The procedure was extensively tested and refined and a detailed test procedure could be developed, as shown in Appendix A.\n\nThe advantage of the simpler procedure is partly off-set by the need for more complicated data analysis, fully developed in the paper. The analysis is illustrated with an example which demonstrates that simple spreadsheet programming can deal with the data analysis without any problem.\n\nThe test was validated with the results of near-perfectly spherical glass ballottini, which yielded an experimental result of 0.988, close to 1 as expected. The test was sensitive enough to discriminate, with statistical significance (α = 0.05) between the sphericity of clean media, and the sphericity of the same media with some calcium carbonate deposition. The coefficient of variation for replicate tests on the same samples (n = 5 in each case) was less than 4%.\n\nAcknowledgments\n\nThe media samples were obtained through the generous assistance of John Geldenhuys of Rand Water Scientific Services and the management of Silica Quartz.\n\nReferences\n\nAWWA (AMERICAN WATER WORKS ASSOCIATION) (1990) Water Quality and Treatment. 4th edn. McGraw-Hill, New York, 1990.         [ Links ]\n\nCERONIO AD and HAARHOFF J (1997) Properties of South African silica sand used for rapid filtration. Water SA 23 (1) 71-80.         [ Links ]\n\nDHARMARAJAH AH and CLEASBY JL (1986) Predicting the expansion of filter media. J. Am. Water Works Assoc. 78 (12) 66-76.         [ Links ]\n\nFAIR GM, GEYER JC and OKUN DA (1968) Water and Wastewater Engineering, Volume 1. John Wiley & Sons, New York.         [ Links ]\n\nLIDE DR (2004) (ed.) CRC Handbook of Chemistry and Physics. 85th edn. CRC Press, Boca Raton FL, USA        [ Links ]\n\nSOYER E and AKGIRAY O (2009) A new simple equation for the prediction of filter expansion during backwashing. J. Water Supply: Res. Technol. - AQUA 58 (5) 336-345.         [ Links ]\n\nTRUSSELL RR and CHANG M (1999) review of flow through porous media as applied to head loss in water filters. J. Environ. Eng. 125 (11) 998-1006.         [ Links ]\n\nReceived 20 October 2008; accepted in revised form 10 November 2009.\n\n* To whom all correspondence should be addressed.\n+2711 559 2148; fax: +2711 559 2395;\ne-mail: [email protected]\n\nAppendix A – Detailed test procedure\n\nPart A – Media tests\n\n1. Take a representative sample of the media to be tested. Wash it gently by hand in a 250 μm sieve under running water to wash out all lumps and mud balls. Dry overnight at 100°C.\n\n2. Taking a sample of not more than 300 g, sieve it with a sieve shaker through a stack of all available sieves below and including 2.00 mm.\n\n3. For each different sieve fraction, calculate the mass fraction α and geometric diameter and determine the summation terms in Eq. (4).\n\n4. Take at least 3 samples of about 1 000 g each, weigh and add to 500 m in a 1 000 m cylinder. (To save media, if required, good results can be obtained by using half this mass in a 500 m cylinder.)\n\n5. Calculate the media density for each sample and determine the average.\n\n6. Retain 500 to 600 g of dried media for the column test described in Part C.\n\nPart B – Empty column test\n\n1. Set up the empty column and ensure that the column is exactly vertical using a post level.\n\n2. Connect a hose to the inlet and block the overflow pipe with a rubber stopper.\n\n3. Open the hose and allow the water to rise in the column to the top or at least 50 mm above the top mark. Close the hose.\n\n4. Measure the water temperature in the column with a thermometer. Calculate the water density and viscosity.\n\n5. Measure the distance from the top of the overflow pipe to the top mark (hA) and the bottom mark (hB).\n\n6. Remove the stopper from the outlet pipe and measure the overflow depth at the overflow pipe when the water level reaches the top mark (hC) and the bottom mark (hD).\n\n7. Calculate h1 = hAhC and h2 = hBhD. (The distances h1 and h2 are shown in Fig. 1.)\n\n8. Stopper the outlet pipe and fill the column again by opening the inlet hose. Close the hose when the water level is at least 50 mm above the top mark.\n\n9. Remove the stopper from the overflow pipe. Use a stopwatch to accurately measure the time it takes for the water to drop from the top to the bottom mark.\n\n10. Repeat Steps 8 and 9 at least 3 times to get a reliable average for drop-down time.\n\n11. Measure the internal diameter of the test column.\n\n12. Calculate the column constant C.\n\nPart C – Column test with media\n\n1. Take a sample of approximately 500 to 600 g of dried media and weigh.\n\n2. Pour the media into the clean column.\n\n3. Ensure that the column is exactly vertical using a post level.\n\n4. Connect a hose to the inlet and block the overflow pipe with a rubber stopper.\n\n5. Open the hose and slowly fill the column from the bottom, ensuring that no media is washed over the top.\n\n6. Allow the media to settle and then gently increase the backwash rate to obtain a bed expansion of about 50%. Maintain this rate until the backwash water is clear.\n\n7. Suddenly close the inlet hose and wait for the media to come to rest.\n\n8. Remove the stopper from the overflow pipe. Measure the overflow depth at the overflow pipe when the water level reaches the top mark (hE) and the bottom mark (hF).\n\n9. Calculate h1 = hAhE and h2 = hBhF. (The distances h1 and h2 are shown in Fig. 1.)\n\n10. Repeat Steps 5, 6 and 7.\n\n11. Remove the stopper from the overflow pipe. Use a stopwatch to accurately measure the time it takes for the water level to drop from the top to the bottom mark.\n\n12. When no more water is draining from the overflow pipe, measure the media depth from the media support grid to the media surface. Be careful not to bump or tap the column.\n\n13. Calculate the sphericity.\n\n14. Repeat Steps 10, 11, 12 and 13 at least 3 times. The 1st time, perform Step 11 immediately after Step 10. The 2nd time, just before Step 11, give the test column a sharp sideways tap which will cause the media surface to drop a little. The 3rd time, give the column 2 taps, etc. (This is to ensure that the sphericity is calculated for different porosities.)\n\n15. Take the average of the sphericity values determined in Step 13 and use for further design.\n\nAppendix B – Example of data analysis\n\nA media sample is sieved, with the results shown as Columns 1-6 on the left of Table 4. The fractional mass contribution of each fraction (Eq. (4)) is shown in Column 7, the geometric mean (Eq. (3)) in Column 8 and the 2 summative twerms of Eq. (5) in Columns 9 and 10.ia\n\nThe density of the media is determined to be 2 636 kg·m-3 and the water temperature during the test is 16.0°C. The density is thus estimated to be 998.9 kg·m-3 (equation 7) and the dynamic viscosity to be 0.00108 kg·m-1·s-1 (Eq. (8)).\n\nThe test column has an internal diameter of 0.067 m and the top and the bottom marks are 1.1 m and 0.1 m above the lip of the overflow pipe respectively. During the empty bed test, the overflow height over the lip of the overflow pipe is 0.025 m when the water level is at the top mark and 0.005 m when the water level is at the bottom mark. For the empty bed test, therefore, h1 = 1.075 m and h2 = 0.095 m. During the test with media, the overflow depths were 0.009 m and 0.003 m respectively, leading to h1 = 1.091 m and h2 = 0.097 m.\n\nDuring the empty bed test, the average time taken for the water level to drop from the top to the bottom mark is 22.5 s, leading to a column constant C of 238 s2·m-1 (Eq. (11)).\n\nFor the media test, a dried mass of 553.8 g is transferred to the column. The time taken for the water level to drop from the top to the bottom mark is 54.2 s and the bed depth after this test is 0.117 m. From these and earlier values, the porosity ε = 0.488 (Eq. (6)), A = 8.26 (Eq. (13)) and B = 115 (Eq. (14)).\n\nWith the exception of the sphericity, all the terms in Eq. (15) are now known:", null, "or:", null, "By choosing an approximate value for the sphericity ψ, the time t can be calculated. Successive approximations continue until the calculated time is close enough to the measured time. If Simpson's rule is applied with 10 equal height intervals (others' methods can also be used, of course), then a solution is offered by:", null, "A convenient way of finding the best approximation for the sphericity ψ is to use the 'Goal Seek' function in Excel to change ψ until the measured and estimated times are the same. In this particular example, the solution is given by ψ = 0.729.", null, "All the contents of this journal, except where otherwise noted, is licensed under a Creative Commons Attribution License" ]
[ null, "http://www.scielo.org.za/img/en/iconCitedGoogleOff.gif", null, "http://www.scielo.org.za/img/en/iconRelatedGoogleOff.gif", null, "http://www.scielo.org.za/img/revistas/wsa/v36n1/a12img01.gif", null, "http://www.scielo.org.za/img/revistas/wsa/v36n1/a12frm01.gif", null, "http://www.scielo.org.za/img/revistas/wsa/v36n1/a12frm02.gif", null, "http://www.scielo.org.za/img/revistas/wsa/v36n1/a12frm03.gif", null, "http://www.scielo.org.za/img/revistas/wsa/v36n1/a12frm04.gif", null, "http://www.scielo.org.za/img/revistas/wsa/v36n1/a12frm05.gif", null, "http://www.scielo.org.za/img/revistas/wsa/v36n1/a12frm06.gif", null, "http://www.scielo.org.za/img/revistas/wsa/v36n1/a12frm07.gif", null, "http://www.scielo.org.za/img/revistas/wsa/v36n1/a12frm08.gif", null, "http://www.scielo.org.za/img/revistas/wsa/v36n1/a12fig01.gif", null, "http://www.scielo.org.za/img/revistas/wsa/v36n1/a12frm09.gif", null, "http://www.scielo.org.za/img/revistas/wsa/v36n1/a12frm10.gif", null, "http://www.scielo.org.za/img/revistas/wsa/v36n1/a12frm11.gif", null, "http://www.scielo.org.za/img/revistas/wsa/v36n1/a12frm12.gif", null, "http://www.scielo.org.za/img/revistas/wsa/v36n1/a12frm13.gif", null, "http://www.scielo.org.za/img/revistas/wsa/v36n1/a12frm14.gif", null, "http://www.scielo.org.za/img/revistas/wsa/v36n1/a12frm15.gif", null, "http://www.scielo.org.za/img/revistas/wsa/v36n1/a12tab01.gif", null, "http://www.scielo.org.za/img/revistas/wsa/v36n1/a12fig02.gif", null, "http://www.scielo.org.za/img/revistas/wsa/v36n1/a12tab02.gif", null, "http://www.scielo.org.za/img/revistas/wsa/v36n1/a12img02.gif", null, "http://www.scielo.org.za/img/revistas/wsa/v36n1/a12img03.gif", null, "http://www.scielo.org.za/img/revistas/wsa/v36n1/a12img04.gif", null, "http://i.creativecommons.org/l/by/4.0/80x15.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9315588,"math_prob":0.9382087,"size":26997,"snap":"2020-24-2020-29","text_gpt3_token_len":6242,"char_repetition_ratio":0.15244693,"word_repetition_ratio":0.054336246,"special_character_ratio":0.23161833,"punctuation_ratio":0.10224719,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.966607,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52],"im_url_duplicate_count":[null,null,null,null,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-26T03:33:55Z\",\"WARC-Record-ID\":\"<urn:uuid:f1ec4dee-acd3-44d1-9d76-4de490a11e55>\",\"Content-Length\":\"55643\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:553c8ce2-7973-412d-912f-00c8c3f40244>\",\"WARC-Concurrent-To\":\"<urn:uuid:81118e4c-11c4-4946-a049-e93ae7b45c41>\",\"WARC-IP-Address\":\"196.4.84.30\",\"WARC-Target-URI\":\"http://www.scielo.org.za/scielo.php?script=sci_arttext&pid=S1816-79502010000100012&lng=en&nrm=iso&tlng=en\",\"WARC-Payload-Digest\":\"sha1:ZVN2YS7EB2XW6LLI6UIJG7V6JZOOHMOE\",\"WARC-Block-Digest\":\"sha1:JR5CK5W2N3FMEYU4EWKUWOXZMCXNB2ME\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347390442.29_warc_CC-MAIN-20200526015239-20200526045239-00122.warc.gz\"}"}
https://algebraworksheets.co/worksheet-piecewise-functions-algebra-2-preap/
[ "# Worksheet Piecewise Functions Algebra 2 Preap\n\nWorksheet Piecewise Functions Algebra 2 Preap – Algebra Worksheets are designed to help students understand math. The focus of this area involves the investigation of mathematical symbols as well as the rules used to manipulate them. It is the common thread that connects all mathematics, geometry, and physics. It is an integral part of your schooling. This section will help you learn all about algebra, focusing on the most crucial aspects. These worksheets can help increase your proficiency as student.", null, "The worksheets teach students how to solve algebraic word problems. They can use fractions, integers, as well as decimals. They can also rewrite equations and assess formulas. These worksheets are able to be used in both customary and metric units. These worksheets also contain the many different linear equations as well being able to determine the x-intercept and solve word problems that involve parallellines.\n\nThe most effective algebra worksheets will assist you in mastering the basics of solving quadratic equations. They are usually printable, meaning that you can print them out without the need to download or install any other software. They’re designed for students to understand the basics of algebra without worrying about using a calculator or spending time searching for the perfect answer. These worksheets can be used to reinforce concepts and enhance the skills you have. The following are some helpful resources for helping you improve your mathematical skills.", null, "These worksheets help students gain an understanding of the mathematical fundamentals. These worksheets teach students how to use fractions to represent numbers. They also show students how to solve equations with several unknown variables. Printable resources can assist you in learning math. They can also assist you to improve your understanding of the concepts behind equations. If you require assistance with developing your math skills Make sure you have a high-quality set of workbooks.\n\nFree worksheets for algebra are an excellent method to develop your math skills. These worksheets are free and are a great way to learn the fundamental math concepts. These worksheets can assist you in understanding the basics of algebra as well as how to apply them to your daily activities. Consider them as a great source of information If you’re a student. They can help you become a better and more successful student. These printables are sure to make a splash with students!", null, "There are also free worksheets that can help you to improve your math abilities. If, for instance, you’re unfamiliar with algebra, you could be confused on how to work with graphs. Look for worksheets that are free and then find ones that cover the subject. These worksheets are easy to download. Go through them and begin your math classes now. There are plenty of options on the internet. If you have a student in high school and you want to choose to teach math for your child." ]
[ null, "https://algebraworksheets.co/wp-content/uploads/2022/08/worksheet-piecewise-functions-algebra-2-preap-1-890x1024.jpg", null, "https://algebraworksheets.co/wp-content/uploads/2022/08/worksheet-piecewise-functions-algebra-2-preap-837x1024.png", null, "https://algebraworksheets.co/wp-content/uploads/2022/08/worksheet-piecewise-functions-algebra-2-preap.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.95438623,"math_prob":0.8274254,"size":2947,"snap":"2022-40-2023-06","text_gpt3_token_len":534,"char_repetition_ratio":0.15460415,"word_repetition_ratio":0.004237288,"special_character_ratio":0.1784866,"punctuation_ratio":0.08396947,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9655754,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-29T17:06:33Z\",\"WARC-Record-ID\":\"<urn:uuid:882bfed1-34cf-405a-bccf-210d1466940b>\",\"Content-Length\":\"62688\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:796e656c-71e4-48e1-b14b-5a33fe8438a0>\",\"WARC-Concurrent-To\":\"<urn:uuid:e4cffb8b-35fa-4029-93b9-df233339f5a6>\",\"WARC-IP-Address\":\"104.21.48.66\",\"WARC-Target-URI\":\"https://algebraworksheets.co/worksheet-piecewise-functions-algebra-2-preap/\",\"WARC-Payload-Digest\":\"sha1:5AFP5PSQHH7CTSMKEUDRQTZJAEQVKBON\",\"WARC-Block-Digest\":\"sha1:LEZUAE7TKIDRWKACN2DU2YRSICFJF4ZN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335362.18_warc_CC-MAIN-20220929163117-20220929193117-00091.warc.gz\"}"}
https://segmentfault.com/a/1190000041948310?sort=votes
[ "", null, "# 用Python做游戏有多简单?", null, "", null, "## 基本框架\n\n``````import pygame\n​\n​\ndef main():\npygame.init()\npygame.display.set_caption('未闻Code:青南做的游戏') # 游戏标题\nwin = pygame.display.set_mode((800, 600)) # 窗口尺寸,宽800高600\nrunning = True\nwhile running:\nfor event in pygame.event.get():\nif event.type == pygame.QUIT: # 点击左上角或者右上角的x关闭窗口时,停止程序\nrunning = False\n​\n​\nmain()``````", null, "## 加载素材", null, "", null, "``img_surf = pygame.image.load('图片地址').convert_alpha()``\n\n``img_surf = pygame.transform.scale(img_surf, (宽, 高))``\n\n``````win.blit(素材对象, (素材左上角的横坐标, 素材左上角的纵坐标))\npygame.display.flip()``````\n\n``````import pygame\n​\n​\ndef main():\npygame.init()\npygame.display.set_caption('未闻Code:青南做的游戏') # 游戏标题\nwin = pygame.display.set_mode((800, 600)) # 窗口尺寸\nbg_big = pygame.transform.scale(bg_small, (800, 600))\nrunning = True\nwhile running:\nfor event in pygame.event.get():\nif event.type == pygame.QUIT: # 点击左上角或者右上角的x关闭窗口时,停止程序\nrunning = False\n​\nwin.blit(bg_big, (0, 0)) # 背景图最先加载,坐标是(left, top)\nwin.blit(pig, (200, 300))\npygame.display.flip()\n​\n​\nmain()``````", null, "## 哪里找素材?", null, "## 怎么我的素材长这样?", null, "", null, "", null, "", null, "", null, "``````img_surf = pygame.image.load('雕像素材.png').convert_alpha()\ngoddess= img_surf.subsurface(( 女神像左上角的横坐标 , 女神像左上角的纵坐标, 女神像的宽, 女神像的高))``````", null, "## 使用小精灵来管理对象\n\n``````import pygame\n​\n​\nclass Bg(pygame.sprite.Sprite):\ndef __init__(self):\nsuper(Bg, self).__init__()\ngrass_land = bg_small.subsurface((0, 0, 128, 128))\nself.surf = pygame.transform.scale(grass_land, (800, 600))\nself.rect = self.surf.get_rect(left=0, top=0) # 左上角定位\n​\n​\nclass Pig(pygame.sprite.Sprite):\ndef __init__(self):\nsuper(Pig, self).__init__()\nself.rect = self.surf.get_rect(center=(400, 300)) # 中心定位\n​\n​\nclass Goddess(pygame.sprite.Sprite):\ndef __init__(self):\nsuper(Goddess, self).__init__()\nself.surf = building.subsurface(((7 * 64 - 10, 0, 50, 100)))\nself.rect = self.surf.get_rect(center=(500, 430)) # 女神像的中心放到画布(500, 430)的位置\n​\n​\ndef main():\npygame.init()\npygame.display.set_caption('未闻Code:青南做的游戏') # 游戏标题\nwin = pygame.display.set_mode((800, 600)) # 窗口尺寸\n​\nbg = Bg()\ngoddess = Goddess()\npig = Pig()\nall_sprites = [bg, goddess, pig] # 注意添加顺序,后添加的对象图层在先添加的对象的图层上面\n​\nrunning = True\nwhile running:\nfor event in pygame.event.get():\nif event.type == pygame.QUIT: # 点击左上角或者右上角的x关闭窗口时,停止程序\nrunning = False\n​\nfor sprite in all_sprites:\nwin.blit(sprite.surf, sprite.rect)\npygame.display.flip()\n​\n​\nif __name__ == '__main__':\nmain()``````", null, "## 让小猪动起来\n\nPyGame本质上,就是通过win.blit不停地画图,由于这个while循环每秒要运行很多次,如果每次运行的时候,我们让win.blit的第二个参数,也就是素材对象的坐标有细微的差异,那么在人眼看起来,这个素材对象就在运动了。\n\n`keys = pygame.key.get_pressed()`\n\n``````class Pig(pygame.sprite.Sprite):\ndef __init__(self):\nsuper(Pig, self).__init__()\nself.rect = self.surf.get_rect(center=(400, 300)) # 中心定位\n​\ndef update(self, keys):\nif keys[pygame.K_LEFT]:\nself.rect.move_ip((-5, 0)) # 横坐标向左\nelif keys[pygame.K_RIGHT]:\nself.rect.move_ip((5, 0)) # 横坐标向右\nelif keys[pygame.K_UP]:\nself.rect.move_ip((0, -5)) #纵坐标向上\nelif keys[pygame.K_DOWN]:\nself.rect.move_ip((0, 5)) # 纵坐标向下\n​\n# 防止小猪跑到屏幕外面\nif self.rect.left < 0:\nself.rect.left = 0\nif self.rect.right > 800:\nself.rect.right = 800\nif self.rect.top < 0:\nself.rect.top = 0\nif self.rect.bottom > 600:\nself.rect.bottom = 600``````\n\n.update方法接收一个参数keys,就是我们按键返回的长得像列表的对象。然后判断是哪个方向键被按下了。根据被按下的键,.rect坐标定位对象修改相应方向的值。rect.move_ip这里的ip是inplace的简写,也就是修改.rect这个属性自身。它的参数是一个元组,对应横坐标和纵坐标。横纵坐标小于0表示向左或者向上,大于0表示向右或者向下。\n\nkeys = pygame.key.get_pressed()\npig.update(keys)\n\n``````import pygame\n​\n​\nclass Bg(pygame.sprite.Sprite):\ndef __init__(self):\nsuper(Bg, self).__init__()\ngrass_land = bg_small.subsurface((0, 0, 128, 128))\nself.surf = pygame.transform.scale(grass_land, (800, 600))\nself.rect = self.surf.get_rect(left=0, top=0) # 左上角定位\n​\n​\nclass Pig(pygame.sprite.Sprite):\ndef __init__(self):\nsuper(Pig, self).__init__()\nself.rect = self.surf.get_rect(center=(400, 300)) # 中心定位\n​\ndef update(self, keys):\nif keys[pygame.K_LEFT]:\nself.rect.move_ip((-5, 0))\nelif keys[pygame.K_RIGHT]:\nself.rect.move_ip((5, 0))\nelif keys[pygame.K_UP]:\nself.rect.move_ip((0, -5))\nelif keys[pygame.K_DOWN]:\nself.rect.move_ip((0, 5))\n​\n# 防止小猪跑到屏幕外面\nif self.rect.left < 0:\nself.rect.left = 0\nif self.rect.right > 800:\nself.rect.right = 800\nif self.rect.top < 0:\nself.rect.top = 0\nif self.rect.bottom > 600:\nself.rect.bottom = 600\n​\n​\nclass Goddess(pygame.sprite.Sprite):\ndef __init__(self):\nsuper(Goddess, self).__init__()\nself.surf = building.subsurface(((7 * 64 - 10, 0, 50, 100)))\nself.rect = self.surf.get_rect(center=(500, 430)) # 女神像的中心放到画布(500, 430)的位置\n​\n​\ndef main():\npygame.init()\npygame.display.set_caption('未闻Code:青南做的游戏') # 游戏标题\nwin = pygame.display.set_mode((800, 600)) # 窗口尺寸\n​\nbg = Bg()\ngoddess = Goddess()\npig = Pig()\nall_sprites = [bg, goddess, pig] # 注意添加顺序,后添加的对象图层在先添加的对象的图层上面\n​\nrunning = True\nwhile running:\nfor event in pygame.event.get():\nif event.type == pygame.QUIT: # 点击左上角或者右上角的x关闭窗口时,停止程序\nrunning = False\n​\nkeys = pygame.key.get_pressed()\npig.update(keys)\nfor sprite in all_sprites:\nwin.blit(sprite.surf, sprite.rect)\npygame.display.flip()\n​\n​\nif __name__ == '__main__':\nmain()``````", null, "## 总结\n\nPyGame做游戏真的非常简单,只要会加载素材,就能做出一个还能看得过去的游戏。今天我们学会了怎么添加素材,怎么捕获键盘事件。\n\nPyGame可以读取Gif图片,但是你会发现加载进来以后,Gif不会动。不久以后,我们来讲讲如何让你控制的角色动起来,例如控制一个小娃娃,移动的时候,它的脚也跟着动。以及对象的碰撞检测。\n\n• itch.io:\n\nhttps://itch.io/game-assets\n\n• PyGame: A Primer on Game Programming in Python:\n\nhttps://realpython.com/pygame...", null, "397 声望\n961 粉丝\n0 条评论" ]
[ null, "https://static.segmentfault.com/main_site/95c3e0bf/static/bg-219.7a1acf4f.svg", null, "https://segmentfault.com/img/bVc0aLu", null, "https://segmentfault.com/img/bVc0aLo", null, "https://segmentfault.com/img/bVc0aNh", null, "https://segmentfault.com/img/bVc0aNm", null, "https://segmentfault.com/img/bVc0aNv", null, "https://segmentfault.com/img/bVc0aNY", null, "https://segmentfault.com/img/bVc0aN4", null, "https://segmentfault.com/img/bVc0aN5", null, "https://segmentfault.com/img/bVc0aN6", null, "https://segmentfault.com/img/bVc0aOd", null, "https://segmentfault.com/img/bVc0aOe", null, "https://segmentfault.com/img/bVc0aOf", null, "https://segmentfault.com/img/bVc0aOh", null, "https://segmentfault.com/img/bVc0aON", null, "https://segmentfault.com/img/bVc0aOZ", null, "https://segmentfault.com/img/bVcZlLq", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.6390788,"math_prob":0.8486096,"size":8162,"snap":"2022-40-2023-06","text_gpt3_token_len":4506,"char_repetition_ratio":0.14684972,"word_repetition_ratio":0.57877815,"special_character_ratio":0.27909827,"punctuation_ratio":0.27534246,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97947955,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-02T11:41:28Z\",\"WARC-Record-ID\":\"<urn:uuid:dfc7b383-2295-4cb6-abac-b0a8a1568d90>\",\"Content-Length\":\"69862\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9c63ad93-1872-4179-9e7e-6050d8d0c39f>\",\"WARC-Concurrent-To\":\"<urn:uuid:55ec65a0-d86d-4e7d-be76-623c85cd14e1>\",\"WARC-IP-Address\":\"59.110.246.251\",\"WARC-Target-URI\":\"https://segmentfault.com/a/1190000041948310?sort=votes\",\"WARC-Payload-Digest\":\"sha1:OPMJN27YKXQU76ZMHMJ2F5TNWZXJSFPV\",\"WARC-Block-Digest\":\"sha1:IKJMR26C4CG4HV5EPSTTG7STVPXXXNEV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500017.27_warc_CC-MAIN-20230202101933-20230202131933-00197.warc.gz\"}"}
https://www.colorhexa.com/0f50f8
[ "# #0f50f8 Color Information\n\nIn a RGB color space, hex #0f50f8 is composed of 5.9% red, 31.4% green and 97.3% blue. Whereas in a CMYK color space, it is composed of 94% cyan, 67.7% magenta, 0% yellow and 2.7% black. It has a hue angle of 223.3 degrees, a saturation of 94.3% and a lightness of 51.6%. #0f50f8 color hex could be obtained by blending #1ea0ff with #0000f1. Closest websafe color is: #0066ff.\n\n• R 6\n• G 31\n• B 97\nRGB color chart\n• C 94\n• M 68\n• Y 0\n• K 3\nCMYK color chart\n\n#0f50f8 color description : Vivid blue.\n\n# #0f50f8 Color Conversion\n\nThe hexadecimal color #0f50f8 has RGB values of R:15, G:80, B:248 and CMYK values of C:0.94, M:0.68, Y:0, K:0.03. Its decimal value is 1003768.\n\nHex triplet RGB Decimal 0f50f8 `#0f50f8` 15, 80, 248 `rgb(15,80,248)` 5.9, 31.4, 97.3 `rgb(5.9%,31.4%,97.3%)` 94, 68, 0, 3 223.3°, 94.3, 51.6 `hsl(223.3,94.3%,51.6%)` 223.3°, 94, 97.3 0066ff `#0066ff`\nCIE-LAB 42.177, 46.661, -87.519 20.005, 12.615, 90.182 0.163, 0.103, 12.615 42.177, 99.181, 298.064 42.177, -17.024, -127.04 35.517, 38.388, -125.683 00001111, 01010000, 11111000\n\n# Color Schemes with #0f50f8\n\n• #0f50f8\n``#0f50f8` `rgb(15,80,248)``\n• #f8b70f\n``#f8b70f` `rgb(248,183,15)``\nComplementary Color\n• #0fc5f8\n``#0fc5f8` `rgb(15,197,248)``\n• #0f50f8\n``#0f50f8` `rgb(15,80,248)``\n• #420ff8\n``#420ff8` `rgb(66,15,248)``\nAnalogous Color\n• #c5f80f\n``#c5f80f` `rgb(197,248,15)``\n• #0f50f8\n``#0f50f8` `rgb(15,80,248)``\n• #f8420f\n``#f8420f` `rgb(248,66,15)``\nSplit Complementary Color\n• #50f80f\n``#50f80f` `rgb(80,248,15)``\n• #0f50f8\n``#0f50f8` `rgb(15,80,248)``\n• #f80f50\n``#f80f50` `rgb(248,15,80)``\n• #0ff8b7\n``#0ff8b7` `rgb(15,248,183)``\n• #0f50f8\n``#0f50f8` `rgb(15,80,248)``\n• #f80f50\n``#f80f50` `rgb(248,15,80)``\n• #f8b70f\n``#f8b70f` `rgb(248,183,15)``\n• #0536b5\n``#0536b5` `rgb(5,54,181)``\n• #063ece\n``#063ece` `rgb(6,62,206)``\n• #0745e7\n``#0745e7` `rgb(7,69,231)``\n• #0f50f8\n``#0f50f8` `rgb(15,80,248)``\n• #2862f9\n``#2862f9` `rgb(40,98,249)``\n• #4174f9\n``#4174f9` `rgb(65,116,249)``\n• #5986fa\n``#5986fa` `rgb(89,134,250)``\nMonochromatic Color\n\n# Alternatives to #0f50f8\n\nBelow, you can see some colors close to #0f50f8. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #0f8af8\n``#0f8af8` `rgb(15,138,248)``\n• #0f77f8\n``#0f77f8` `rgb(15,119,248)``\n• #0f63f8\n``#0f63f8` `rgb(15,99,248)``\n• #0f50f8\n``#0f50f8` `rgb(15,80,248)``\n• #0f3df8\n``#0f3df8` `rgb(15,61,248)``\n• #0f29f8\n``#0f29f8` `rgb(15,41,248)``\n• #0f16f8\n``#0f16f8` `rgb(15,22,248)``\nSimilar Colors\n\n# #0f50f8 Preview\n\nThis text has a font color of #0f50f8.\n\n``<span style=\"color:#0f50f8;\">Text here</span>``\n#0f50f8 background color\n\nThis paragraph has a background color of #0f50f8.\n\n``<p style=\"background-color:#0f50f8;\">Content here</p>``\n#0f50f8 border color\n\nThis element has a border color of #0f50f8.\n\n``<div style=\"border:1px solid #0f50f8;\">Content here</div>``\nCSS codes\n``.text {color:#0f50f8;}``\n``.background {background-color:#0f50f8;}``\n``.border {border:1px solid #0f50f8;}``\n\n# Shades and Tints of #0f50f8\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. In this example, #000208 is the darkest color, while #f4f7ff is the lightest one.\n\n• #000208\n``#000208` `rgb(0,2,8)``\n• #01081b\n``#01081b` `rgb(1,8,27)``\n• #010e2e\n``#010e2e` `rgb(1,14,46)``\n• #021341\n``#021341` `rgb(2,19,65)``\n• #021954\n``#021954` `rgb(2,25,84)``\n• #031f67\n``#031f67` `rgb(3,31,103)``\n• #04257a\n``#04257a` `rgb(4,37,122)``\n• #042a8d\n``#042a8d` `rgb(4,42,141)``\n• #0530a0\n``#0530a0` `rgb(5,48,160)``\n• #0536b3\n``#0536b3` `rgb(5,54,179)``\n• #063cc6\n``#063cc6` `rgb(6,60,198)``\n• #0641d9\n``#0641d9` `rgb(6,65,217)``\n• #0747ec\n``#0747ec` `rgb(7,71,236)``\n• #0f50f8\n``#0f50f8` `rgb(15,80,248)``\n• #225ef9\n``#225ef9` `rgb(34,94,249)``\n• #356cf9\n``#356cf9` `rgb(53,108,249)``\n• #487afa\n``#487afa` `rgb(72,122,250)``\n• #5b88fa\n``#5b88fa` `rgb(91,136,250)``\n• #6e95fb\n``#6e95fb` `rgb(110,149,251)``\n• #81a3fb\n``#81a3fb` `rgb(129,163,251)``\n• #94b1fc\n``#94b1fc` `rgb(148,177,252)``\n• #a7bffc\n``#a7bffc` `rgb(167,191,252)``\n• #bbcdfd\n``#bbcdfd` `rgb(187,205,253)``\n• #cedbfe\n``#cedbfe` `rgb(206,219,254)``\n• #e1e9fe\n``#e1e9fe` `rgb(225,233,254)``\n• #f4f7ff\n``#f4f7ff` `rgb(244,247,255)``\nTint Color Variation\n\n# Tones of #0f50f8\n\nA tone is produced by adding gray to any pure hue. In this case, #818286 is the less saturated color, while #0f50f8 is the most saturated one.\n\n• #818286\n``#818286` `rgb(129,130,134)``\n• #787e90\n``#787e90` `rgb(120,126,144)``\n• #6e7a99\n``#6e7a99` `rgb(110,122,153)``\n• #6576a3\n``#6576a3` `rgb(101,118,163)``\n• #5b72ac\n``#5b72ac` `rgb(91,114,172)``\n• #526db6\n``#526db6` `rgb(82,109,182)``\n• #4869bf\n``#4869bf` `rgb(72,105,191)``\n• #3f65c9\n``#3f65c9` `rgb(63,101,201)``\n• #3561d2\n``#3561d2` `rgb(53,97,210)``\n• #2c5ddc\n``#2c5ddc` `rgb(44,93,220)``\n• #2258e5\n``#2258e5` `rgb(34,88,229)``\n• #1954ef\n``#1954ef` `rgb(25,84,239)``\n• #0f50f8\n``#0f50f8` `rgb(15,80,248)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #0f50f8 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.54194784,"math_prob":0.7626035,"size":3681,"snap":"2019-43-2019-47","text_gpt3_token_len":1689,"char_repetition_ratio":0.12374218,"word_repetition_ratio":0.011111111,"special_character_ratio":0.55718553,"punctuation_ratio":0.23692992,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9812512,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-15T05:22:17Z\",\"WARC-Record-ID\":\"<urn:uuid:5942cde3-4060-4bed-b3f9-ab8938aeeedd>\",\"Content-Length\":\"36262\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4de77261-d411-4a50-b0b3-2212177ba118>\",\"WARC-Concurrent-To\":\"<urn:uuid:8955690b-1ddd-47b0-8a15-e4d4d6f1ceb6>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/0f50f8\",\"WARC-Payload-Digest\":\"sha1:DDKYNDZ7QS6V6ZKA43ZBJVMPU66LOD6W\",\"WARC-Block-Digest\":\"sha1:E7Y4HFHDLLZSGBFMLXQ6CDCLHOHPE3ZU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668585.12_warc_CC-MAIN-20191115042541-20191115070541-00333.warc.gz\"}"}
https://www.academyofengineers.com/courses/applied-mathematics-1-tuition/
[ "# Applied Mathematics – 1 Tuition\n\n4 out of 5\n4\n6 reviews", null, "Join The Best Applied Mathematics-1 Tuition Class. Academy Of Engineers Provides Best Tutorial Services For National And Foreign University. Digital Signal Processing Tuition Classes Are Available In Online As Well As Offline Mode. We Cover Syllabus Of All University. Applied Mathematics-1 Tuition Is The Common Subject Taught To The Students During First Year. Contact Us For The Best Online Applied Mathematics Tutor.\n\nObjective: The objective of the paper is to facilitate the student with the basics of Applied Mathematics that are required for an engineering student\n\n### Main Features\n\n• Common For All B.Tech Students 1st year\n• Basics of class 11th & 12th are required\n• Classes are available in group as well as One-To-One\n• Highly Experienced faculties from colleges and corporates\n• Individual attention with free test series and doubt session\n• 100% Passing Results\n• Timing is flexible and adjustable\n• Classes are available in online as well as offline mode\n\n### Syllabus Of Applied Mathematics-1\n\nSyllabus May Vary As Per University Curriculum\n\n• Unit I : Successive differentiation: Leibnitz theorem for nth derivative (without proof). Infinite series: Convergence and divergence of infinite series, positive terms infinite series, necessary condition, comparison test (Limit test), D’Alembert ratio test, Integral Test, Cauchy’s root test, Raabe’s test and Logarithmic test(without proof). Alternating series, Leibnitz test, conditional and absolutely convergence. Taylor’s and Maclaurin’s expansion(without proof) of function ( ex , log(1+x), cos x , sin x) with remainder terms ,Taylor’s and Maclaurin’s series, Error and approximation.\n• Unit II :\n\nAsymptotes to Cartesian curves. Radius of curvature and curve tracing for Cartesian, parametric and polar curves. Integration: integration using reduction formula for. Application of integration: Area under the curve, length of the curve, volumes and surface area of solids of revolution about axis only. Gamma and Beta functions.\n\n• UNIT- III : Matrices: Orthogonal matrix, Hermitian matrix, Skew-Hermitian matrix and Unitary matrix. Inverse of matrix by Gauss-Jordan Method (without proof). Rank of matrix by echelon and Normal (canonical) form. Linear dependence and linear independence of vectors. Consistency and inconsistency of linear system of homogeneous and non-homogeneous equations. Eigen values and Eigen vectors. Properties of Eigen values (without proof). Cayley-Hamilton theorem (without proof). Diagonalization of matrix. Quadratic form, reduction of quadratic form to canonical form.\n• UNIT-IV : Ordinary differential equations: First order linear differential equations, Leibnitz and Bernaulli’s equation. Exact differential equations, Equations reducible to exact differential equations. Linear differential equation of higher order with constant coefficients, Homogeneous and non-homogeneous differential equations reducible to linear differential equations with constant coefficients. Method of variation of parameters. Bessel’s and Legendre’s equations (without series solutions), Bessel’s and Legendre’s functions and their properties.\n\nText:\n\n[T1] B. S. Grewal,”Higher Engineering Mathematics” Khanna Publications.\n\n[T2]. R. K. Jain and S.R.K. Iyengar,”Advanced Engineering Mathematics “Narosa Publications.\n\nReferences:\n\n[R1] E. kresyzig,” Advance Engineering Mathematics”, Wiley publications\n\n[R2] G.Hadley, “ Linear Algebra” Narosa Publication\n\n[R3] N.M. Kapoor, “ A Text Book of Differential Equations”, Pitambar publication.\n\n[R4] Wylie R, “ Advance Engineering mathematics” , McGraw-Hill\n\n[R5] Schaum’s Outline on Linear Algebra, Tata McGraw-Hill [R6] Polking and Arnold, “ Ordinary Differential Equation using MatLab” Pearson.\n\n### Applied Mathematics -1\n\n1\nApplied Mathematics Tuition\n2\nApplied Mathematics Tuition\n\n### Register Now By Calling +91-9818003202\n\nStudents of B.Tech 1st Year are elibigilble to attend this class\nPurpose is to clear doubts and equip engineering students with the right concepts so that they can score high in semester exam.\nWe have a team of highly qualified faculties to teach all subjects of B.Tech Subjects.\n\n### Fresh Batch is going to start soon\n\n— 28 February 2017\n\n1. Applied Mathematics-1(Engineering Mathematics-1) Stay focused towards your goal to achieve high success during semester exam. Start taking serious each and every exam along with assignment and project work!\n2. Solve questions from different text books. Try to solve questions from different text book, utilize your time during zero period\n3. Set Your Goal(Do it now) Set your goal from right from beginning and bring in hobbits to do it now mindset. I promise you will never be back in your career\n\n4\n4 out of 5\n6 Ratings\n\n#### Detailed Rating\n\n Stars 5 3 Stars 4 0 Stars 3 3 Stars 2 0 Stars 1 0\n\n#### {{ review.user }}\n\n{{ review.time }}" ]
[ null, "https://www.academyofengineers.com/wp-content/uploads/2018/08/Applied-Mathematics-500x440.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85815686,"math_prob":0.7193531,"size":4349,"snap":"2022-27-2022-33","text_gpt3_token_len":939,"char_repetition_ratio":0.12842348,"word_repetition_ratio":0.0032520324,"special_character_ratio":0.19958611,"punctuation_ratio":0.1367989,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9593402,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-03T09:23:31Z\",\"WARC-Record-ID\":\"<urn:uuid:5e493a4b-1b8b-481f-b225-984a6a82c96c>\",\"Content-Length\":\"133368\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:101b8515-55b5-46f9-b381-658188984f62>\",\"WARC-Concurrent-To\":\"<urn:uuid:fd74b75f-0171-4cb0-988d-1bdfa99b03e7>\",\"WARC-IP-Address\":\"162.0.232.56\",\"WARC-Target-URI\":\"https://www.academyofengineers.com/courses/applied-mathematics-1-tuition/\",\"WARC-Payload-Digest\":\"sha1:2VGBU7SKF2JHFJLIF5KKOJ7TPXMKNUA4\",\"WARC-Block-Digest\":\"sha1:AHE7SAAC27UEFN4YDOD3XST3Q2DQPI5J\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104215805.66_warc_CC-MAIN-20220703073750-20220703103750-00289.warc.gz\"}"}
https://www.colorhexa.com/134d4f
[ "# #134d4f Color Information\n\nIn a RGB color space, hex #134d4f is composed of 7.5% red, 30.2% green and 31% blue. Whereas in a CMYK color space, it is composed of 75.9% cyan, 2.5% magenta, 0% yellow and 69% black. It has a hue angle of 182 degrees, a saturation of 61.2% and a lightness of 19.2%. #134d4f color hex could be obtained by blending #269a9e with #000000. Closest websafe color is: #006666.\n\n• R 7\n• G 30\n• B 31\nRGB color chart\n• C 76\n• M 3\n• Y 0\n• K 69\nCMYK color chart\n\n#134d4f color description : Very dark cyan.\n\n# #134d4f Color Conversion\n\nThe hexadecimal color #134d4f has RGB values of R:19, G:77, B:79 and CMYK values of C:0.76, M:0.03, Y:0, K:0.69. Its decimal value is 1264975.\n\nHex triplet RGB Decimal 134d4f `#134d4f` 19, 77, 79 `rgb(19,77,79)` 7.5, 30.2, 31 `rgb(7.5%,30.2%,31%)` 76, 3, 0, 69 182°, 61.2, 19.2 `hsl(182,61.2%,19.2%)` 182°, 75.9, 31 006666 `#006666`\nCIE-LAB 29.439, -17.236, -6.556 4.333, 6.01, 8.328 0.232, 0.322, 6.01 29.439, 18.44, 200.825 29.439, -20.191, -5.961 24.516, -11.352, -2.981 00010011, 01001101, 01001111\n\n# Color Schemes with #134d4f\n\n• #134d4f\n``#134d4f` `rgb(19,77,79)``\n• #4f1513\n``#4f1513` `rgb(79,21,19)``\nComplementary Color\n• #134f33\n``#134f33` `rgb(19,79,51)``\n• #134d4f\n``#134d4f` `rgb(19,77,79)``\n• #132f4f\n``#132f4f` `rgb(19,47,79)``\nAnalogous Color\n• #4f3313\n``#4f3313` `rgb(79,51,19)``\n• #134d4f\n``#134d4f` `rgb(19,77,79)``\n• #4f132f\n``#4f132f` `rgb(79,19,47)``\nSplit Complementary Color\n• #4d4f13\n``#4d4f13` `rgb(77,79,19)``\n• #134d4f\n``#134d4f` `rgb(19,77,79)``\n• #4f134d\n``#4f134d` `rgb(79,19,77)``\n• #134f15\n``#134f15` `rgb(19,79,21)``\n• #134d4f\n``#134d4f` `rgb(19,77,79)``\n• #4f134d\n``#4f134d` `rgb(79,19,77)``\n• #4f1513\n``#4f1513` `rgb(79,21,19)``\n• #041111\n``#041111` `rgb(4,17,17)``\n• #092526\n``#092526` `rgb(9,37,38)``\n• #0e393a\n``#0e393a` `rgb(14,57,58)``\n• #134d4f\n``#134d4f` `rgb(19,77,79)``\n• #186164\n``#186164` `rgb(24,97,100)``\n• #1d7578\n``#1d7578` `rgb(29,117,120)``\n• #22898d\n``#22898d` `rgb(34,137,141)``\nMonochromatic Color\n\n# Alternatives to #134d4f\n\nBelow, you can see some colors close to #134d4f. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #134f42\n``#134f42` `rgb(19,79,66)``\n• #134f47\n``#134f47` `rgb(19,79,71)``\n• #134f4c\n``#134f4c` `rgb(19,79,76)``\n• #134d4f\n``#134d4f` `rgb(19,77,79)``\n• #13484f\n``#13484f` `rgb(19,72,79)``\n• #13434f\n``#13434f` `rgb(19,67,79)``\n• #133e4f\n``#133e4f` `rgb(19,62,79)``\nSimilar Colors\n\n# #134d4f Preview\n\nText with hexadecimal color #134d4f\n\nThis text has a font color of #134d4f.\n\n``<span style=\"color:#134d4f;\">Text here</span>``\n#134d4f background color\n\nThis paragraph has a background color of #134d4f.\n\n``<p style=\"background-color:#134d4f;\">Content here</p>``\n#134d4f border color\n\nThis element has a border color of #134d4f.\n\n``<div style=\"border:1px solid #134d4f;\">Content here</div>``\nCSS codes\n``.text {color:#134d4f;}``\n``.background {background-color:#134d4f;}``\n``.border {border:1px solid #134d4f;}``\n\n# Shades and Tints of #134d4f\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. In this example, #040f10 is the darkest color, while #ffffff is the lightest one.\n\n• #040f10\n``#040f10` `rgb(4,15,16)``\n• #081f20\n``#081f20` `rgb(8,31,32)``\n• #0b2e2f\n``#0b2e2f` `rgb(11,46,47)``\n• #0f3e3f\n``#0f3e3f` `rgb(15,62,63)``\n• #134d4f\n``#134d4f` `rgb(19,77,79)``\n• #175c5f\n``#175c5f` `rgb(23,92,95)``\n• #1b6c6f\n``#1b6c6f` `rgb(27,108,111)``\n• #1e7b7e\n``#1e7b7e` `rgb(30,123,126)``\n• #228b8e\n``#228b8e` `rgb(34,139,142)``\n• #269a9e\n``#269a9e` `rgb(38,154,158)``\n• #2aa9ae\n``#2aa9ae` `rgb(42,169,174)``\n• #2eb9be\n``#2eb9be` `rgb(46,185,190)``\n• #31c8cd\n``#31c8cd` `rgb(49,200,205)``\n• #41cdd1\n``#41cdd1` `rgb(65,205,209)``\n• #51d1d5\n``#51d1d5` `rgb(81,209,213)``\n• #61d5d9\n``#61d5d9` `rgb(97,213,217)``\n• #71d9dd\n``#71d9dd` `rgb(113,217,221)``\n• #80dde1\n``#80dde1` `rgb(128,221,225)``\n• #90e2e4\n``#90e2e4` `rgb(144,226,228)``\n• #a0e6e8\n``#a0e6e8` `rgb(160,230,232)``\n• #b0eaec\n``#b0eaec` `rgb(176,234,236)``\n• #c0eef0\n``#c0eef0` `rgb(192,238,240)``\n• #d0f2f4\n``#d0f2f4` `rgb(208,242,244)``\n• #dff7f7\n``#dff7f7` `rgb(223,247,247)``\n• #effbfb\n``#effbfb` `rgb(239,251,251)``\n• #ffffff\n``#ffffff` `rgb(255,255,255)``\nTint Color Variation\n\n# Tones of #134d4f\n\nA tone is produced by adding gray to any pure hue. In this case, #2d3435 is the less saturated color, while #005f62 is the most saturated one.\n\n• #2d3435\n``#2d3435` `rgb(45,52,53)``\n• #2a3838\n``#2a3838` `rgb(42,56,56)``\n• #263b3c\n``#263b3c` `rgb(38,59,60)``\n• #223f40\n``#223f40` `rgb(34,63,64)``\n• #1e4244\n``#1e4244` `rgb(30,66,68)``\n• #1b4647\n``#1b4647` `rgb(27,70,71)``\n• #17494b\n``#17494b` `rgb(23,73,75)``\n• #134d4f\n``#134d4f` `rgb(19,77,79)``\n• #0f5153\n``#0f5153` `rgb(15,81,83)``\n• #0b5457\n``#0b5457` `rgb(11,84,87)``\n• #08585a\n``#08585a` `rgb(8,88,90)``\n• #045b5e\n``#045b5e` `rgb(4,91,94)``\n• #005f62\n``#005f62` `rgb(0,95,98)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #134d4f is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5079804,"math_prob":0.6843653,"size":3655,"snap":"2019-13-2019-22","text_gpt3_token_len":1683,"char_repetition_ratio":0.12517118,"word_repetition_ratio":0.011090573,"special_character_ratio":0.54829,"punctuation_ratio":0.23250565,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9888033,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-26T14:00:46Z\",\"WARC-Record-ID\":\"<urn:uuid:bdebbe27-36dc-43fc-9ff7-bdcd5028c81d>\",\"Content-Length\":\"36339\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3aa86f5c-63ff-41e0-91a2-c94acefd231b>\",\"WARC-Concurrent-To\":\"<urn:uuid:15f59d84-6e03-441b-989b-4f7fb5fcc15b>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/134d4f\",\"WARC-Payload-Digest\":\"sha1:X743DJRBF6M5PCX7XFD2M5YOF6OPZ67N\",\"WARC-Block-Digest\":\"sha1:G526RLOAUOEGQQXZQDIUZZWHDJN3M6DC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912205534.99_warc_CC-MAIN-20190326135436-20190326161436-00030.warc.gz\"}"}
https://www.cut-the-knot.org/blue/Byzantine.shtml
[ "The problem of Byzantine Basketball with solution was sent to me by Prof. W. McWorter:\n\nByzantine Basketball is like regular basketball except that foul shots are worth a points instead of two points and field shots are worth b points instead of three points. Moreover, in Byzantine Basketball there are exactly 35 scores that never occur in a game, one of which is 58. What are a and b?", null, "Byzantine Basketball is like regular basketball except that foul shots are worth a points instead of two points and field shots are worth b points instead of three points. Moreover, in Byzantine Basketball there are exactly 35 scores that never occur in a game, one of which is 58. What are a and b?\n\n### Solution\n\nWithout difficulty one can see that, for there to be only finitely many scores that never occur, a and b must be relatively prime positive integers. All scores have the form ax + by, where x and y are nonnegative integers. Let's say a nonnegative integer c is representable if c has the form ax + by, with x and y nonnegative integers, and c nonrepresentable otherwise. We count the number of nonrepresentable integers. The plan is to show that (a - 1)(b - 1) - 1 is the largest nonrepresentable nonnegative integer, and that exactly half the integers in the interval [0, (a - 1)(b - 1) - 1] comprise all nonrepresentable nonnegative integers.\n\nFirst we show that all integers greater than (a - 1)(b - 1) - 1 are representable. Any such integer can be written (a-1)(b-1)-1 + c, for some c > 0. Since a and b are relatively prime, c can be written c = au + bv, where -b < u ≤ 0 and v > 0 (see Remark 1.) Thus,\n\n(a - 1)(b - 1) - 1 + c = a(b - 1) - b + c = a(b - 1) - b + au + bv = a(b - 1 + u) + b(-1 + v).\n\nThe restrictions on u and v insure that b - 1 + u ≥ 0 and -1 + v ≥ 0. Hence (a - 1)(b - 1) - 1 + c is representable, and so all integers greater than (a - 1)(b - 1) - 1 are representable.\n\nNext, we show that (a - 1)(b - 1) - 1 is not representable. Suppose that (a - 1)(b - 1) - 1 = ax + by, for some nonnegative x and y. Then\n\nax + by = (a - 1)(b - 1) - 1 = a(b - 1) - b.\n\nHence a(b - 1 - x) = b(y + 1). Since x ≥ 0, and y ≥ 0, we have y + 1 > 0 and b - 1 - x is positive and less than b. Hence, since a and b are relatively prime, b divides b - 1 - x, a contradiction. Thus (a - 1)(b - 1) - 1 is not representable, and, with the previous paragraph, it is the largest nonrepresentable integer.\n\nFinally, we show that exactly half the integers between 0 and (a - 1)(b - 1) - 1 are nonrepresentable. We show this by showing that the map f(c) = (a - 1)(b - 1) - 1 - c interchanges representable integers with nonrepresentable integers (see Remark 2.) If c is representable and f(c) is representable, then so is (a - 1)(b - 1) - 1 = c + f(c), contradiction. Hence f(c) is nonrepresentable if c is representable. Now suppose c is nonrepresentable. Then, since a and b are relatively prime, there are integers u and v such that c = au + bv, with 0 ≤ u < b (see Remark 1) and v < 0 (c nonrepresentable forces v < 0). Hence\n\nf(c) = (a - 1)(b - 1) - 1 - c = a(b - 1) - bau - bv = a(b - 1 - u) + b(-1 - v).\n\nThe restrictions on u and v imply that b - 1 - u ≥ 0 and -1 - v ≥ 0, whence f(c) is representable.\n\nWell, that just about does it. Now we know that there are exactly (a - 1)(b - 1)/2 nonrepresentable integers. Hence (a - 1)(b - 1)/2 = 35. This implies that a = 2, b = 71; a = 3, b = 36; a = 6, b = 15; or a = 8, b = 11. The second two possibilities are out because a and b are not relatively prime. a = 2 and b = 71 is out because 58 is representable. Hence a = 8 and b = 11, and, for these values, 58 is nonrepresentable.\n\n### Remark 1\n\nc = ax + by, for some integers x and y, then c = a(x + bt) + b(y - at), for every integer t. Hence, for some t, x + bt is between 0 and b - 1, and, for some other t, x + bt is between -b + 1 and 0. In short, there is always a choice for x which is a residue modulo b.\n\n### Remark 2\n\nThe function f maps [0, (a - 1)(b - 1) - 1] onto itself.\n\n### Remark 3\n\nThe problem was included into the Putnam 1971 competition. Two of its solutions appear in Mathematical Gems, II, by Ross Honsberger (MAA, 1976).", null, "" ]
[ null, "https://www.cut-the-knot.org/gifs/tbow_sh.gif", null, "https://www.cut-the-knot.org/gifs/tbow_sh.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90763265,"math_prob":0.99819255,"size":3899,"snap":"2021-43-2021-49","text_gpt3_token_len":1217,"char_repetition_ratio":0.1876765,"word_repetition_ratio":0.29318735,"special_character_ratio":0.3349577,"punctuation_ratio":0.104750305,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.998843,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-28T15:48:07Z\",\"WARC-Record-ID\":\"<urn:uuid:15946a79-1f89-432e-83cf-6d60989487bf>\",\"Content-Length\":\"16576\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b8872f0d-3191-433e-a87d-905007e9994f>\",\"WARC-Concurrent-To\":\"<urn:uuid:2552ce0b-67d6-4914-9920-76b43563b9b4>\",\"WARC-IP-Address\":\"107.180.50.227\",\"WARC-Target-URI\":\"https://www.cut-the-knot.org/blue/Byzantine.shtml\",\"WARC-Payload-Digest\":\"sha1:HJ3V56HYNZGK3XC5Z5OX7YPMIX6XQ74F\",\"WARC-Block-Digest\":\"sha1:SOSZ6AOZQ7IYDI3RVGIEZ6L7V5HLOIIY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358560.75_warc_CC-MAIN-20211128134516-20211128164516-00538.warc.gz\"}"}
https://www.bestpfe.com/scatering-from-conducting-bodies-of-revolution/
[ "# SCATERING FROM CONDUCTING BODIES OF REVOLUTION\n\nGet Complete Project Material File(s) Now! »\n\n## Geometrical Theory of Diffraction (GTD)\n\nThe theory state that “the incident ray excites a fictitious cone of diffracted rays; all subtend the same angle with respect to the edge as that subtended by the incident ray”. This method comes as an extension of GO method to account the non-zero fields in the shadow region, as shown in figure (2).\nFig (2): Diffracted ray cone from a line of discontinuity\nThe GTD have some drawbacks\n1. The observation point must be laid within the fictitious Keller cone, otherwise yields precisely zero.\n2. The theory predicts infinite fields when the observation point is pierced by infinity of rays, such as point’s lies on the axis of body of revolution (BOR).\n3. It takes into the account the diffracted fields, such as the fields in the shadow region.\n4. The GTD is used to compute the scattered fields well away from the specular direction as well as the polarization effects are inherently built-in.\n\nMethod of Moments (MoM)\n\nMethod of Moment (MoM) technique is one of the well known methods that are used in electromagnetic scattering problems. This technique is based on reducing the operator equations to a system of linier equations that is written in matrix form.\nThe features of this method can be summarized in the following\n1. It has a frequency domain RCS prediction technique.\n2. It takes to the account the entire electromagnetic phenomenon and also the polarization effects for excite field.\n3. It is an integral equation based technique.\nOne of the advantages of using this method is that the results are very accurate because the equations that his method use is essentially exact and MoM provides a direct numerical solution of these equations. Another advantage is that in practice, it is applicable to geometrically complex scatter .\n\n### Finite Difference – Time Domain (FD – TD)\n\nThis method is a popular computational electrodynamics modelling technique. It is considered easy to understand and easy to implement in software. Since it is a time-domain method, solutions can cover a wide frequency range with a single simulation run.\nThe FDTD method belongs in the general class of grid-based differential time-domain numerical modelling methods. The time-dependent Maxwell’s equations (in partial differential form) are discredited using central-difference approximations to the space and time partial derivatives. The resulting finite-difference equations are solved in either software or hardware in a leapfrog manner: the electric field vector components in a volume of space are solved at a given instant in time; then the magnetic field vector components in the same spatial volume are solved at the next instant in time; and the process is repeated over and over again until the desired transient or steady-state electromagnetic field behaviour is fully evolved.\nThis method take accounts the polarization effects and the diffracted fields. In addition, the method is a differential equation technique (as mention above).\n\nEffective Partial Differential Equation Algorithm (EPDEA)\n\nThis method is used to solve the scattering problems by three dimensional body of revolution using Partial Differential Equation (PDE) technique. This technique is employed in conjunction with a radiation boundary condition applied in the Fresnel region of the scatterer. Based on an asymptotic expansion derived by Wilcox, the radiation boundary condition is used to truncate the PDE mesh . More about this method will be discussed later in this thesis.\n\nCharacteristics Basis Function Method (CBFM)\n\nThis method have been developed in conjunction with the Fast Fourier Transform (FFT) for matrix generation to improve the efficiency of the Method of Moment when analysing electromagnetic scattering from large Perfect Electrically Conducting (PEC) bodies of revolution. The CBFs are high-level basis functions comprising conventional sub domain bases, and their use leads to a reduced matrix which can be solved by using a direct method. By using this technique, one can get a good advantage that the computational time and memory requirement can be significantly reduced for large BOR problems .\nThe second part, reduction of RCS (RRCS), the information in this technique is limited and it is difficult to obtain as well but due to the development in the semiconductors, the RRCS is taken place in the radar systems . There are four basic techniques for reducing the RCS; they are shaping, radar absorbing materials, passive cancellation and finally the active cancellation. Of course each technique of these has its advantage and disadvantage.\nAs we know, bodies of revolutions objects are used in these problems to simplify the calculations, due to the symmetrical property so the surface current can be explained in some small terms that helps in reducing the memory usage and the computation time. In the same way, complex bodies can be treated like bodies of revolution in their calculations.\nThis body of revolution (BOR) approach has been applied to several numerical methods such as MoM, EPDEA and CBFM. In our work, we are going to use the Efficient Partial Differential Equation Algorithm EPDEA to calculate the surface current density Jθ .\n\nREAD  LTCC technology validation for RF packaging applications\n\nRadar Cross Section (RCS);σ is the unit of measure of how detectable an object is with radar. For example a stealth aircraft (which is designed to be undetectable) will have design features that give it a low RCS, as opposed to a passenger airliner that will have a high RCS. An object’s RCS depends on its size, reflectivity of its surface, and the directivity of the radar reflection caused by the object’s geometric shape. So in other expression, RCS can be written as:\nRadar Cross Section (RCS) = Geometric Cross Section × Reflectivity × Directivity\n\nCHAPTER 1: INTRODUCTION\n1.1 INTRODUCTION\n1.2.3 SCATTERING REGIMES\n1.3 LITERATURE SURVEY\n1.3.1 SCATTERING AND RADIATION FROM PERFECTLY CONDUCTING BODIES\n1.3.2 SCATTERING AND RADIATION FROM IMPERFECTLY CONDUCTING BODIES\n1.3.3 RADAR CROSS SECTION RCS MEASUREMENTS\nCHAPTER 2: SCATERING FROM CONDUCTING BODIES OF REVOLUTION\n2.1 INTRODUCTION\n2.2 FORMULATION OF SCATTERING PROBLEM\n2.3 MOMENT SOLUTION\n2.4 EVALUATION OF DRIVING VECTOR AND FAR FIELD COMPONENTS\n2.6 EFFICIENT PARTIAL DIFFERENTIAL EQUATION ALGORITHM (EPDEA):\n2.6.1 FORMULATION OF BODY OF REVOLUTION PROBLEM\nCHAPTER 3: RESULTS & DISCUSSIONS\n3. RESULTS AND DISCUSSION\nTABLE 3.1: THE GENERALIZED ADMITTANCE MATRIX [Y]\nTABLE 3.2: REAL AND IMAGINARY PARTS AND MAGNITUDE OF THE Ρ AND Ф DIRECTED CURRENT. Ρ IS THE ARC LENGTH\nTABLE 3.3: THE RADAR CROSS SECTION () WITH RESPECT TO Ѳ\nTABLE 3.4: REAL AND IMAGINARY PARTS AND MAGNITUDE OF THE Ρ AND Ф DIRECTED CURRENTS. Ρ IS THE ARC LENGTH\nTABLE 3.5: THE RADAR CROSS SECTION () WITH RESPECT TO Ѳ\nTABEL 3.6: EXCITED CURRENT COMPONENTS\nTABLE 3.7: THE NORMALIZED POWER GAIN PATTERN\nREFERENCES\nAPPENDIX\nA.1 THE MATHEMATICA WORK TO SOLVE THE EQUATIONS)\nA.2 SPHERICAL COORDINATES (:,,φθr)\nA.3 MAXWELL’S EQUATIONS\n\nGET THE COMPLETE PROJECT" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90275085,"math_prob":0.9259924,"size":8683,"snap":"2023-14-2023-23","text_gpt3_token_len":1875,"char_repetition_ratio":0.109113954,"word_repetition_ratio":0.02202643,"special_character_ratio":0.19463319,"punctuation_ratio":0.098014094,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9871494,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-23T05:23:31Z\",\"WARC-Record-ID\":\"<urn:uuid:f7686b1d-dec0-4b49-878b-49f65759d5f8>\",\"Content-Length\":\"57041\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:af84cb42-d3ae-49f8-97cf-d0c88dc55eb6>\",\"WARC-Concurrent-To\":\"<urn:uuid:20223dfc-ccbd-4b0c-b1a4-56d49f8c580f>\",\"WARC-IP-Address\":\"66.23.225.237\",\"WARC-Target-URI\":\"https://www.bestpfe.com/scatering-from-conducting-bodies-of-revolution/\",\"WARC-Payload-Digest\":\"sha1:KM2Y2BR4EQUOBQFODQKR7ASAXIZCYMOF\",\"WARC-Block-Digest\":\"sha1:RYVJCYEOZVB2O3H2RU7TM5N3TX5ISXR7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296944996.49_warc_CC-MAIN-20230323034459-20230323064459-00190.warc.gz\"}"}
https://www.codespeedy.com/check-if-a-linked-list-is-a-palindrome-in-cpp/
[ "# Check if a linked list is a palindrome in C++\n\nIn this tutorial, we will learn how to check if a linked list is a palindrome in C++. A palindrome is a word, phrase or a sequence that reads the same backward and forwards.\n\nFor example-\n\n121 is a palindrome.\n\n123321 is a palindrome.\n\n1234554321 is a palindrome.\n\n12345421 is not a palindrome.\n\n112233112233 is not a palindrome again.\n\n## C++ program to check if a linked list is a palindrome\n\n### Method 1:\n\nWe will be using STL(Standard template library) in this method.\n\n1. In the first pass, compare the first and the last element.\n2. In the second pass, compare the second and the second last element.\n3. Do this for n/2 times i.e till we reach the middle of the list.\n4. Keep a flag variable. If anywhere, the elements are not equal set the flag to 0 and break from the loop.\n\nThe last element is accessed by:\n\nlist<int>::iterator tail=mylist.end();\nBut since it refers to this position shown in the figure, we have to decrement it by 1 to reach the last element of the linked list.\nHere is the C++ code:\n```#include <bits/stdc++.h>\nusing namespace std;\nint main() {\nlist<int> mylist;\nint n=9;\nint data;\n\nfor(int i=0;i<n;i++)\n{\ncin>>data;\nmylist.push_front(data);\n}\n\nlist<int>::iterator tail=mylist.end();\n\n--tail;//To locate the element at last iterator\nbool flag=1;\n\nfor(int i=0;i<n/2;i++)\n{\n\n{\ntail--;\n\n}\nelse\n{\nflag=0;\nbreak;\n}\n}\n\nflag ? cout<<\"yes it's a palindrome\" : cout<<\"no,it's not a palindrome\";\n\n}\n```\n\nINPUT: 1 2 3 3 2 1\n\nOUTPUT:\n\n`yes it's a palindrome`\n\nINPUT: 1 2 3 4 5 3 2 1\n\nOUTPUT:\n\n`no, it's not a palindrome`\n\n### Method 2:\n\nWe will be using STL again in this method.\n\n1. First, we will reach to the middle of the linked list. This we will do by incrementing the head2 iterator by n/2 times if n is even and n/2+1 times if n is odd.\n2. Then we will create another list of second-half elements and reverse it.\n3. Now, compare the elements of the first half of the original list and the second list.\n4. If they are all equal, it’s a palindrome.\n\nHere, is the code:\n\n```#include <bits/stdc++.h>\nusing namespace std;\nint main() {\nlist<int> mylist;\nint n=9;\nint data;\n\nfor(int i=0;i<n;i++)\n{\ncin>>data;\nmylist.push_back(data);\n}\n\nint j=0;\nif(n%2==0)//if number of elemnts are even\n{\nj=n/2;\n}\nelse\n{\nj=(n/2)+1;\n}\n//incrementing head2 till it reaches the middle of the list\nfor(int i=0;i<j;i++)\n{\n}\n//creating another list of second half elements of first list\nlist<int> l2;\n\nfor(int i=0;i<n/2;i++)\n{\n}\n//reversing the second list\nl2.reverse();\nint flag=1;\n\n//comparing elements of first and second list\nfor(int i=0;i<n/2;i++)\n{\n{\n}\nelse\n{\nflag=0;\nbreak;\n}\n}\n\nflag ? cout<<\"yes it's a palindrome\" : cout<<\"no,it's not a palindrome\";\n\n}\n```\n\nINPUT: 1 2 3 3 2 1\n\nOUTPUT:\n\n`yes it's a palindrome`\n\nAlso, refer to:\n\nThank you!" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.59132636,"math_prob":0.88967663,"size":2965,"snap":"2022-40-2023-06","text_gpt3_token_len":847,"char_repetition_ratio":0.14859845,"word_repetition_ratio":0.19879518,"special_character_ratio":0.32546374,"punctuation_ratio":0.18037517,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98029643,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-31T21:05:00Z\",\"WARC-Record-ID\":\"<urn:uuid:7751ba4d-2559-46d5-862e-e0dd014e98b9>\",\"Content-Length\":\"51576\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d2d41812-a764-4e34-abc5-9ec6d95ec398>\",\"WARC-Concurrent-To\":\"<urn:uuid:06932d53-f892-4ce5-b302-3625862f0adf>\",\"WARC-IP-Address\":\"104.21.85.98\",\"WARC-Target-URI\":\"https://www.codespeedy.com/check-if-a-linked-list-is-a-palindrome-in-cpp/\",\"WARC-Payload-Digest\":\"sha1:Y6VO6QQXVRU253PEBBU6KAQ2S4US5UDP\",\"WARC-Block-Digest\":\"sha1:EKWQD5KTUN3QL4MDIFYQRXUO4GYMCENW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499890.39_warc_CC-MAIN-20230131190543-20230131220543-00483.warc.gz\"}"}
http://www.computerchess.org.uk/ccrl/404/cgi/engine_details.cgi?match_length=30&print=Details&each_game=1&eng=Demolito%202018-10-29%2064-bit
[ "Contents: CCRL Blitz Downloads and Statistics January 16, 2021 Testing summary: Total: 2'442'136 games played by 2'904 programs White wins: 939'660 (38.5%) Black wins: 754'242 (30.9%) Draws: 748'234 (30.6%) White score: 53.8%\n\n## Engine Details\n\n Options Show each game results\nDemolito 2018-10-29 64-bit (3017+16\n−16\n)Quote\n Author: Lucas Braesch (France) Links: github, AppVeyor\nThis is one of the 14 Demolito versions we tested: Compare them!\n Opponent Elo Diff Results Score LOS Perf –   Fire 7.1 64-bit 3334 +8−8 (+317) 0 − 2(+0−2=0) 0.0%0.0 / 2 0.0% −INF 0 0 –   Andscacs 0.95 64-bit 3229 +10−10 (+212) 0 − 1(+0−1=0) 0.0%0.0 / 1 0.0% −INF 0 –   RofChade 2.1 64-bit 3216 +15−16 (+199) 0 − 1(+0−1=0) 0.0%0.0 / 1 0.0% −INF 0 –   NirvanaChess 2.4 64-bit 3132 +8−8 (+115) 10.5 − 22.5(+4−16=13) 31.8%10.5 / 33 0.0% −5 0 = = 0 = 0 = 1 1 0 0 0 = = 0 1 0 0 = 0 = 0 0 0 = 1 0 = = 0 = 0 = –   RofChade 2.0 64-bit 3130 +14−14 (+113) 10 − 22(+6−18=8) 31.3%10.0 / 32 0.0% −22 0 = 0 = = 0 = 1 0 1 0 1 = 0 1 0 0 = 0 0 1 0 0 0 0 0 0 = 1 = 0 0 –   Vajolet2 2.7 64-bit 3128 +14−15 (+111) 12 − 32(+1−21=22) 27.3%12.0 / 44 0.0% −27 0 = = = = 0 0 = 0 1 = 0 = = 0 = 0 = 0 0 0 = = = 0 = 0 0 0 0 0 = = = 0 = 0 0 = = = = 0 0 –   Arasan 21.1 64-bit 3117 +15−15 (+100) 6.5 − 25.5(+2−21=9) 20.3%6.5 / 32 0.0% −111 0 0 0 = 1 0 0 0 0 = 0 0 0 0 0 0 = 0 0 = 0 = 0 = 0 1 = 0 0 0 = = –   Nemorino 5.00 64-bit 3116 +10−10 (+99) 10.5 − 21.5(+3−14=15) 32.8%10.5 / 32 0.0% −6 0 1 0 0 0 = 0 0 = 0 = = = = = 1 0 = = = = 0 0 1 = = = 0 0 = 0 0 –   Texel 1.07 64-bit 3113 +8−8 (+96) 10 − 22(+6−18=8) 31.3%10.0 / 32 0.0% −37 0 0 0 0 = 1 0 1 1 0 = 0 0 0 0 0 0 = 0 1 = 1 0 0 = = 0 = 0 = 0 1 –   Rybka 4.1 64-bit 3110 +6−6 (+93) 13 − 20(+6−13=14) 39.4%13.0 / 33 0.0% +27 0 = = = = 1 0 = = 0 0 = 0 = = 1 = 0 0 1 = 1 1 0 0 = 0 1 0 0 = = 0 –   Hannibal 1.7 64-bit 3110 +7−7 (+93) 12.5 − 20.5(+7−15=11) 37.9%12.5 / 33 0.0% +8 = 1 0 0 1 0 0 0 0 0 = = 0 = = 0 0 0 = = 1 0 0 = = 0 0 = 1 1 1 = 1 –   Pedone 1.8 64-bit 3107 +14−14 (+90) 10.5 − 21.5(+6−17=9) 32.8%10.5 / 32 0.0% −26 0 = 0 0 0 0 0 = 1 0 = 0 1 = 0 1 1 = 0 = 0 0 0 1 0 = 0 0 = = 1 0 –   Protector 1.9.0 64-bit 3088 +7−6 (+71) 15.5 − 17.5(+5−7=21) 47.0%15.5 / 33 0.0% +55 1 = 1 = = = 0 1 = = = 0 0 = = = 0 = 1 = = 1 0 0 = = = 0 = = = = = –   Senpai 2.0 64-bit 3085 +8−8 (+68) 12 − 21(+6−15=12) 36.4%12.0 / 33 0.0% −22 0 = 0 0 1 0 0 1 = 0 = 0 1 = 0 0 = = = 0 = 1 = = 0 = 1 0 = 0 0 0 1 –   Vajolet2 2.6 64-bit 3085 +13−13 (+68) 14.5 − 17.5(+8−11=13) 45.3%14.5 / 32 0.0% +36 1 0 1 1 = 0 = = 1 0 = = 0 0 0 1 = 0 0 = 1 0 = 0 = = 1 1 = = = 0 –   iCE 3.0 64-bit 3081 +7−7 (+64) 12 − 20(+6−14=12) 37.5%12.0 / 32 0.0% −17 0 = = = 0 0 = 1 = = 0 = 0 0 0 = 0 1 = 1 0 = 0 1 0 1 = = 1 0 0 0 –   chess22k 1.12 64-bit 3081 +16−16 (+64) 14.5 − 17.5(+9−12=11) 45.3%14.5 / 32 0.0% +32 = 0 0 = 0 1 1 1 0 = 0 0 0 = 0 = 0 = = 0 1 1 1 1 1 0 = = = 0 = 1 –   Wasp 3.60 64-bit 3080 +17−17 (+63) 0.5 − 0.5(+0−0=1) 50.0%0.5 / 1 0.0% +32 = –   Wasp 3.50 64-bit 3066 +16−16 (+49) 14 − 18(+4−8=20) 43.8%14.0 / 32 0.0% +16 = 0 0 = = 1 0 = = = = = = = = = = 0 1 0 0 = = = = = 1 = 1 0 0 = –   RubiChess 1.4 64-bit 3050 +18−18 (+33) 11.5 − 20.5(+6−15=11) 35.9%11.5 / 32 0.3% −60 0 = 0 1 1 0 0 = 0 1 1 = 0 0 0 1 0 = = 0 0 0 = = 0 0 = 0 = 1 = = –   ChessBrainVB 3.72 3048 +19−19 (+31) 16 − 16(+13−13=6) 50.0%16.0 / 32 0.6% +31 1 1 0 = = 0 = = 1 1 0 1 0 0 = = 0 1 0 1 1 1 0 1 1 1 0 0 0 0 0 1 –   Defenchess 1.1f 64-bit 3047 +11−11 (+30) 15.5 − 16.5(+9−10=13) 48.4%15.5 / 32 0.1% +22 1 = 0 0 0 0 = 1 1 1 0 = 0 = = = = 0 1 = 1 1 = 1 1 0 0 = 0 = = = –   SmarThink 1.98 64-bit 3045 +8−8 (+28) 19.5 − 12.5(+14−7=11) 60.9%19.5 / 32 0.1% +99 0 0 = 1 1 0 0 = = 1 1 = = = 1 1 1 = = 1 1 = = 1 1 1 1 1 0 0 0 = –   chess22k 1.11 64-bit 3031 +16−16 (+14) 17.5 − 14.5(+12−9=11) 54.7%17.5 / 32 11.0% +44 = 1 1 = 1 1 = 1 1 0 1 = 0 = = = = 1 0 1 0 = 0 0 = 0 1 = 0 1 1 0 –   Naum 4.6 64-bit 3031 +7−7 (+14) 15.5 − 16.5(+11−12=9) 48.4%15.5 / 32 5.6% +2 0 = = 0 = 1 0 0 0 0 = 0 0 1 = 0 1 = 1 = 1 = 1 1 = 0 0 0 1 1 1 1 –   Wasp 3.0 64-bit 3024 +15−15 (+7) 18 − 14(+11−7=14) 56.3%18.0 / 32 25.2% +48 1 = 1 0 0 1 0 1 0 = = 1 1 = 1 = 0 = = = = 0 = = 0 = = 1 1 1 1 = –   Hakkapeliitta TCEC v2 64-bit 3020 +8−8 (+3) 16.5 − 15.5(+13−12=7) 51.6%16.5 / 32 35.6% +13 = = = 0 1 0 0 1 0 1 1 1 0 0 1 1 = 0 0 0 0 1 1 0 0 = 1 = 1 1 = 1 –   Rodent III 0.287 64-bit 3005 +19−19 (−12) 14.5 − 17.5(+8−11=13) 45.3%14.5 / 32 83.5% −42 1 = = 1 1 = = 0 0 = 0 = = = = 0 1 0 0 1 = = 0 1 0 0 1 0 0 = = 1 –   Rodent III 0.273 64-bit 3001 +17−17 (−16) 1 − 0(+1−0=0) 100.0%1.0 / 1 91.3% +INF 1 –   ChessBrainVB 3.70 3000 +17−17 (−17) 16.5 − 15.5(+11−10=11) 51.6%16.5 / 32 92.8% −6 = 1 = = = 0 1 0 1 0 = = 0 0 0 = 1 0 1 1 1 0 1 = 1 0 1 = = 0 1 = –   Deuterium 2019.1.36.50 64-bit 2999 +15−14 (−18) 15 − 17(+6−8=18) 46.9%15.0 / 32 95.2% −40 = 1 = 1 = 0 = 0 = 0 = = = = = 0 = = = 1 1 1 = = = 0 0 = 0 1 = 0 –   Rodent III 0.275 64-bit 2998 +29−29 (−19) 13.5 − 18.5(+8−13=11) 42.2%13.5 / 32 88.0% −67 1 1 0 = = 1 0 = = = = 1 0 1 = 1 0 1 0 0 = 0 0 0 = 1 0 0 0 0 = = –   Pirarucu 2.9.5 64-bit 2987 +19−19 (−30) 37 − 35(+24−22=26) 51.4%37.0 / 72 99.4% −21 = 1 = 0 1 1 = 1 1 1 1 1 = 1 0 1 0 = = 1 = = 1 1 = 1 0 = 1 0 0 0 0 1 1 0 0 0 = = 1 1 0 = 0 1 0 1 = = = = 0 = 1 1 1 = 0 0 0 = 0 = 0 = 0 0 = = = = –   Hiarcs 14 2985 +7−8 (−32) 18 − 14(+10−6=16) 56.3%18.0 / 32 100.0% +5 0 = = = = = 1 1 = = = = = 1 = = = = = 1 1 = 0 0 0 0 1 1 1 1 1 0 –   Amoeba 3.0 64-bit 2974 +15−14 (−43) 23.5 − 17.5(+18−12=11) 57.3%23.5 / 41 100.0% +7 0 = = 0 0 0 1 1 = 0 1 = 1 1 = 1 0 = = 1 0 1 = 1 1 1 1 1 0 0 = 1 0 = 0 = 0 1 1 1 1 –   Bobcat 8.0 64-bit 2960 +9−9 (−57) 20.5 − 11.5(+13−4=15) 64.1%20.5 / 32 100.0% +27 1 1 = = = 1 1 1 = = 0 1 0 = 1 = 0 = 1 = = 1 = 1 = = 1 0 = = 1 1 –   Amoeba 2.8 64-bit 2952 +13−13 (−65) 18 − 14(+12−8=12) 56.3%18.0 / 32 100.0% −23 = 1 0 1 1 0 1 = = 0 = = 1 0 = = 0 = 1 0 1 = 1 = = = 1 0 1 1 1 0 –   Deuterium 2018.1.35.514 64-bit 2952 +15−15 (−65) 21.5 − 10.5(+16−5=11) 67.2%21.5 / 32 100.0% +51 = 1 1 1 = 1 0 0 = 1 = 1 0 = = 1 = = 0 1 1 1 1 1 = 1 = 1 1 0 1 = –   Cheng 4.39 64-bit 2943 +8−8 (−74) 19 − 13(+11−5=16) 59.4%19.0 / 32 100.0% −17 1 0 1 = = = = 1 1 = 1 1 1 0 = = 1 = 1 1 1 = 0 = = 0 = 0 = = = = –   Winter 0.6 64-bit 2938 +18−18 (−79) 21 − 12(+18−9=6) 63.6%21.0 / 33 100.0% +24 1 1 = = 0 = 0 1 1 0 0 1 1 1 1 1 1 0 1 1 0 1 = 1 0 0 = 0 1 = 1 1 1 –   Dirty CUCUMBER 64-bit 2927 +13−13 (−90) 25 − 8(+19−2=12) 75.8%25.0 / 33 100.0% +80 1 = 1 1 1 1 = = 1 1 = 1 = 1 1 0 = 1 = 1 0 = = = 1 1 1 = 1 1 1 = 1 –   Spark 1.0 64-bit 2927 +7−6 (−90) 20.5 − 11.5(+15−6=11) 64.1%20.5 / 32 100.0% +2 = = 0 1 1 1 = 0 1 0 1 1 0 = 1 = = 0 1 1 = 1 1 = 1 1 0 1 = 1 = = –   Atlas 3.91 64-bit 2923 +10−10 (−94) 18.5 − 13.5(+11−6=15) 57.8%18.5 / 32 100.0% −45 = 1 = 0 = 0 = = = = 1 1 1 = 0 1 1 = 1 = = 1 1 1 = 0 = 0 1 = 0 = –   DisasterArea 1.65 64-bit 2868 +9−9 (−149) 0 − 1(+0−1=0) 0.0%0.0 / 1 100.0% −INF 0 –   Rhetoric 1.4.3 64-bit 2808 +8−8 (−209) 1 − 0(+1−0=0) 100.0%1.0 / 1 100.0% +INF 1 –   Frenzee 3.5.19 64-bit 2770 +7−7 (−247) 1 − 0(+1−0=0) 100.0%1.0 / 1 100.0% +INF 1 –   Bison 9.11 64-bit 2760 +9−9 (−257) 2 − 0(+2−0=0) 100.0%2.0 / 2 100.0% +INF 1 1 –   Arminius 2018-12-23 64-bit 2755 +13−13 (−262) 1 − 0(+1−0=0) 100.0%1.0 / 1 100.0% +INF 1 –   Chronos 1.9.9 64-bit 2740 +7−7 (−277) 1 − 0(+1−0=0) 100.0%1.0 / 1 100.0% +INF 1 –   Gandalf 7 64-bit 2665 +11−11 (−352) 1 − 0(+1−0=0) 100.0%1.0 / 1 100.0% +INF 1 –   Alaric 707 2663 +7−6 (−354) 0 − 1(+0−1=0) 0.0%0.0 / 1 100.0% −INF 0 –   WildCat 8 2624 +7−7 (−393) 0.5 − 0.5(+0−0=1) 50.0%0.5 / 1 100.0% −423 =\n\n### Rating changes by day", null, "### Rating changes with played games", null, "Created in 2005-2013 by CCRL team Last games added on January 16, 2021" ]
[ null, "http://www.computerchess.org.uk/ccrl/404/rating-history-by-day-graphs/Demolito_2018-10-29_64-bit.png", null, "http://www.computerchess.org.uk/ccrl/404/rating-history-by-day-graphs-2/Demolito_2018-10-29_64-bit.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.64259464,"math_prob":1.0000099,"size":7430,"snap":"2021-04-2021-17","text_gpt3_token_len":5156,"char_repetition_ratio":0.38472933,"word_repetition_ratio":0.541555,"special_character_ratio":0.9195155,"punctuation_ratio":0.11370648,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9994081,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-23T04:45:50Z\",\"WARC-Record-ID\":\"<urn:uuid:e39d70a7-7bf0-4e24-b134-2736937b8401>\",\"Content-Length\":\"55573\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7f7f4736-a930-4593-93c2-f2443e922de8>\",\"WARC-Concurrent-To\":\"<urn:uuid:a925a5ca-e1fd-4962-94f2-a75e81429f86>\",\"WARC-IP-Address\":\"185.45.66.155\",\"WARC-Target-URI\":\"http://www.computerchess.org.uk/ccrl/404/cgi/engine_details.cgi?match_length=30&print=Details&each_game=1&eng=Demolito%202018-10-29%2064-bit\",\"WARC-Payload-Digest\":\"sha1:UJLQNLVSZIAAYNKF4NHCRWST2J2VL5GN\",\"WARC-Block-Digest\":\"sha1:XBIO4FRUI45RKUQEB7VMY3RSO2QLU7S4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703533863.67_warc_CC-MAIN-20210123032629-20210123062629-00440.warc.gz\"}"}
https://www.excellup.com/seventh_math/7_math_chapter_6_4.aspx
[ "Class 7 Maths\n\n# Properties of Triangles\n\n## Exercise 6.4\n\nQuestion 1: Is it possible to have a triangle with the following sides?\n\n1. 2 cm , 3 cm, 5 cm\n2. 3 cm, 6 cm, 7 cm\n3. 6 cm, 3 cm, 2 cm\n\nAnswer: Making a triangle is possible only with sides given in option ‘b’. In other options, sum of two sides is either equal to or less than the third side.\n\nQuestion 2: Take any point O in the interior of a triangle PQR. Is", null, "1. OP + OQ > PQ?\n2. OQ + OR > QR?\n3. OR + OP > RP?\n\nAnswer: The answer is yes in each case, because sum of any two sides of a triangle is always greater than the third side.\n\nQuestion 3: AM is a median of a triangle ABC. Is AB + BC + CA > 2AM? (Consider the sides of ΔABM and ΔAMC)", null, "Answer: In ΔABM, AB + BM > AM\n\nSimilarly, in ΔAMC, AC + CM > AM\n\nAB + BM + CM + AC > 2AM\n\nOr, AB + BC + CA > 2AM\n\nQuestion 4: ABCD is a quadrilateral. Is AB + BC + CD + DA > AC + BD?", null, "Answer: In ΔABC; AB + BC > AC\n\nIn ΔDAC, DA + CD > AC\n\nIn ΔDAB, DA + AB > DB\n\nIn ΔDCB, CD + CB > DB\n\n2AB + 2BC + 2 CD + 2AD > 2AC + 2BD\n\nOr, 2(AB + BC + CD + AD) > 2(AC + BD)\n\nOr, AB + BC + CD + AD > AC + BD\n\nQuestion 5: ABCD is a quadrilateral. Is AB + BC + CD + DA < 2(AC + BD)?\n\nAnswer: Let us assume a point O at the point of intersection of diagonals AC and BD.\n\nIn ΔOAB, OA + OB > AB\n\nIn Δ OBC, OB + OC > BC\n\nIn ΔODC, OD + OC > CD\n\nAB + BC + CD + DA < OA + OB + OB + OC + OC + OD + OD + OA\n\nOr, AB + BC + CD + DA < OA + OA + OC + OC + OD + OD + OB + OB\n\nOr, AB + BC + CD + DA < 2(OA + OC + OD + OB)\n\nOr, AB + BC + CD + DA < 2(AC + BD)\n\nQuestion 6: The lengths of two sides of a triangle are 12 cm and 15 cm. Between what two measures should the length of the third side fall?\n\nAnswer: Sum of given two sides = 12 cm + 15 cm = 27 cm\n\nHence, the third side should always be less than 27 cm.\n\nThe difference between given sides = 15 cm – 12 cm = 3 cm\n\nIf the third side will be = 3 cm then 12 + 3 = 15 cm shall be equal to one of the given sides.\n\nHence, the third side should be more than 3 cm\n\nSo, range of measure of third side = 4 cm to 26 cm." ]
[ null, "https://www.excellup.com/seventh_math/seven_math_chapter_6/7_math_chapter_6_49.png", null, "https://www.excellup.com/seventh_math/seven_math_chapter_6/7_math_chapter_6_50.png", null, "https://www.excellup.com/seventh_math/seven_math_chapter_6/7_math_chapter_6_51.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.78820807,"math_prob":0.99967587,"size":2093,"snap":"2023-40-2023-50","text_gpt3_token_len":730,"char_repetition_ratio":0.14648157,"word_repetition_ratio":0.1369606,"special_character_ratio":0.3755375,"punctuation_ratio":0.13453816,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9926135,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-02T16:24:24Z\",\"WARC-Record-ID\":\"<urn:uuid:2fb561e3-8cbe-4265-8c38-599c45b71e63>\",\"Content-Length\":\"10678\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:eb75f5c7-3331-437a-ab4f-318aed5b3d35>\",\"WARC-Concurrent-To\":\"<urn:uuid:e3d8a501-9cb5-42aa-879e-40cf8e1ad099>\",\"WARC-IP-Address\":\"182.50.135.109\",\"WARC-Target-URI\":\"https://www.excellup.com/seventh_math/7_math_chapter_6_4.aspx\",\"WARC-Payload-Digest\":\"sha1:EZDXWOFFEWECGESHCRKJ5GZ3SN4PDDHN\",\"WARC-Block-Digest\":\"sha1:AN54R2IUM6XGSU4FIKTMSCHV37A5ITA4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100427.59_warc_CC-MAIN-20231202140407-20231202170407-00078.warc.gz\"}"}
http://frontiersofbiology.org/find-a-polynomial-with-integer-coefficients-that-4/
[ "Online Learning\n\n# Find a polynomial with integer coefficients that satisfies the given conditions Q\n\nhas degree 3 and\n1.\nBy Factor Theorem, the general form of a polynomial with\ndegree 3 and zeros r1 , r2 , and r3 is\nf x a x r1 x r2 x r3 . Here the degree of the polynomial is 3.\nThe zeros of the polynomial are\nr1…\nMath\n\nFind a polynomial with integer coefficients that satisfies the given conditions Q" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85666656,"math_prob":0.9982532,"size":301,"snap":"2021-04-2021-17","text_gpt3_token_len":84,"char_repetition_ratio":0.17508417,"word_repetition_ratio":0.0,"special_character_ratio":0.269103,"punctuation_ratio":0.08955224,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998371,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-18T14:24:51Z\",\"WARC-Record-ID\":\"<urn:uuid:ae94a9d2-cc23-40f3-9846-011ec90fb3cc>\",\"Content-Length\":\"17102\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:69ac5b90-b082-411c-a6d2-f25562372f42>\",\"WARC-Concurrent-To\":\"<urn:uuid:ecbafb09-fd96-4448-aca5-dc07e6c50db2>\",\"WARC-IP-Address\":\"47.88.103.144\",\"WARC-Target-URI\":\"http://frontiersofbiology.org/find-a-polynomial-with-integer-coefficients-that-4/\",\"WARC-Payload-Digest\":\"sha1:UHYF7QTJ6PKRA7HNOBU7DG6UVWORE7DC\",\"WARC-Block-Digest\":\"sha1:5QWXUZJNHSMBU7GQLTW5B6CNEMGIVZB4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703514796.13_warc_CC-MAIN-20210118123320-20210118153320-00295.warc.gz\"}"}
https://blog.oureducation.in/representation-of-and-gatetruth-table-logic-gate/
[ "", null, "# Representation of AND Gate,Truth Table & Logic Gate\n\nMay 25 • Notes • 4147 Views • 1 Comment on Representation of AND Gate,Truth Table & Logic Gate\n\nRepresentation of AND Gate,Truth Table & Logic Gate\n\nLogic Gate:\n\n• Logic gates are not only fundamental building blocks of digital systems but heart of system.\n• Logic gate is a device which produces output depending upon the possible combinations of input.\n\nThere are seven basic types of logic gates and here we have tried to define all seven logic gates:\n\n1. AND Gate\n2. OR Gate\n3. NOT Gate\n4. NAND Gate\n5. NOR Gate\n6. EX-OR Gate\n7. EX-NOR Gate\n\nTruth Table:\n\n• A table which has all the possible combinations of input variables and the corresponding outputs is called a truth table.\n• It shows how the logic circuit’s output responds to various combination of logic levels at the input.\n• Truth table are used to show logic gate functions in all frames.\n•  while constructing a truth table, the binary values 1 and 0 are used in all frames.\n\nAND Gate:\n\n• As name “AND” suggests AND Gate has two or more inputs but only one output.\n• The  AND Gate is an electronic circuit according to its name and function that gives a true output (1) only if all inputs are true.\n•  A dot (.) is used to show AND Gate operation i.e., A.B.\n\nThe Logic symbol and the truth table of a 2-input AND Gate is:", null, "Related Questions:\n\nQ1. What is a logic gate? Write the different types of logic gates?\n\nAns. Logic gates are the fundamental building blocks of digital systems.Logic gate is a device which produces output depending upon the possible combinations of input.\n\nThere are seven basic types of logic gates which we will discuss here:\n\n1. AND Gate\n2. OR Gate\n3. NOT Gate\n4. NAND Gate\n5. NOR Gate\n6. EX-OR Gate\n7. EX-NOR Gate\n\nQ2. What is a truth table?\n\nAns. A table which has all the possible combinations of input variables and the corresponding outputs is called a truth table. It shows how the logic circuit’s output responds to various combination of logic levels at the input.\n\nQ3. What is AND Gate? Which is used to show AND GATE operation?\n\nAns  The  AND Gate is an electronic circuit that gives a true output (1) only if all inputs are true.\n\nA dot (.) is used to show AND Gate operation i.e., A.B.\n\nQ4.  Write the symbol and truth table of AND Gate.\n\nAns.  The symbol and truth table of AND Gate is:", null, "### One Response to Representation of AND Gate,Truth Table & Logic Gate\n\n1. Rachita Mishra says:\n\nhe AND gate is a basic digital logic gate that implements logical conjunction – it behaves according to the truth table to the right. A HIGH output results only if both the inputs to the AND gate are HIGH.also known as universal gate ,so this is important for electronics students." ]
[ null, "https://i0.wp.com/blog.oureducation.in/wp-content/uploads/2013/05/AND_gate_fig_1.jpg", null, "https://i0.wp.com/blog.oureducation.in/wp-content/uploads/2013/05/2-Input-AND-Gate-Truth-Table.jpg", null, "https://i0.wp.com/blog.oureducation.in/wp-content/uploads/2013/05/2-Input-AND-Gate-Truth-Table.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8500259,"math_prob":0.93760246,"size":2688,"snap":"2022-27-2022-33","text_gpt3_token_len":598,"char_repetition_ratio":0.16356185,"word_repetition_ratio":0.3767821,"special_character_ratio":0.22470239,"punctuation_ratio":0.10280374,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9909742,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,2,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-26T05:07:27Z\",\"WARC-Record-ID\":\"<urn:uuid:d726ea5b-d937-40f8-ad96-9de60f012020>\",\"Content-Length\":\"86751\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:37565dbb-b145-4fb6-a77e-0d42e9f17736>\",\"WARC-Concurrent-To\":\"<urn:uuid:e349bb26-e4c2-4b8f-8859-9ecc0c628c3a>\",\"WARC-IP-Address\":\"103.139.75.129\",\"WARC-Target-URI\":\"https://blog.oureducation.in/representation-of-and-gatetruth-table-logic-gate/\",\"WARC-Payload-Digest\":\"sha1:UBZYRGZHRAULBIJGNJK4JOPT2XCWDLRH\",\"WARC-Block-Digest\":\"sha1:M4PD2VCKDHIW4ZFRZB7J47SQBTQYIH7Z\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103037089.4_warc_CC-MAIN-20220626040948-20220626070948-00329.warc.gz\"}"}
https://discourse.mc-stan.org/t/help-speeding-up-an-auto-regressive-multivariate-measurement-error-model/27562
[ "# Help speeding up an auto-regressive multivariate measurement-error model\n\nI am working with a large data set (160 million observations of 6 variables for 400 subjects) where conditions change over varying periods of time.\n\nIt seems like there might be too much raw data, so I collapsed the variables into means and standard deviations over each condition time interval (observations are sampled at 120Hz), which takes the data down to 68k observations. I have attached an example subset.\n\nsample_data.csv (699.5 KB)\n\nIt seemed like an AR term for “interval” made sense and I used `brms` to specify the model and get the generated stan code below:\n\n``````functions {\n/* compute correlated group-level effects\n* Args:\n* z: matrix of unscaled group-level effects\n* SD: vector of standard deviation parameters\n* L: cholesky factor correlation matrix\n* Returns:\n* matrix of scaled group-level effects\n*/\nmatrix scale_r_cor(matrix z, vector SD, matrix L) {\n// r is stored in another dimension order than z\nreturn transpose(diag_pre_multiply(SD, L) * z);\n}\n}\ndata {\nint<lower=1> N; // total number of observations\nint<lower=1> N_meanX; // number of observations\nvector[N_meanX] Y_meanX; // response variable\n// data for measurement-error in the response\nvector<lower=0>[N_meanX] noise_meanX;\nint<lower=0> Nme_meanX;\nint<lower=1> Jme_meanX[Nme_meanX];\nint<lower=1> K_meanX; // number of population-level effects\nmatrix[N_meanX, K_meanX] X_meanX; // population-level design matrix\n// data needed for ARMA correlations\nint<lower=0> Kar_meanX; // AR order\nint<lower=0> Kma_meanX; // MA order\n// number of lags per observation\nint<lower=0> J_lag_meanX[N_meanX];\nint<lower=1> N_meanY; // number of observations\nvector[N_meanY] Y_meanY; // response variable\n// data for measurement-error in the response\nvector<lower=0>[N_meanY] noise_meanY;\nint<lower=0> Nme_meanY;\nint<lower=1> Jme_meanY[Nme_meanY];\nint<lower=1> K_meanY; // number of population-level effects\nmatrix[N_meanY, K_meanY] X_meanY; // population-level design matrix\n// data needed for ARMA correlations\nint<lower=0> Kar_meanY; // AR order\nint<lower=0> Kma_meanY; // MA order\n// number of lags per observation\nint<lower=0> J_lag_meanY[N_meanY];\nint<lower=1> N_meanZ; // number of observations\nvector[N_meanZ] Y_meanZ; // response variable\n// data for measurement-error in the response\nvector<lower=0>[N_meanZ] noise_meanZ;\nint<lower=0> Nme_meanZ;\nint<lower=1> Jme_meanZ[Nme_meanZ];\nint<lower=1> K_meanZ; // number of population-level effects\nmatrix[N_meanZ, K_meanZ] X_meanZ; // population-level design matrix\n// data needed for ARMA correlations\nint<lower=0> Kar_meanZ; // AR order\nint<lower=0> Kma_meanZ; // MA order\n// number of lags per observation\nint<lower=0> J_lag_meanZ[N_meanZ];\nint<lower=1> N_meanP; // number of observations\nvector[N_meanP] Y_meanP; // response variable\n// data for measurement-error in the response\nvector<lower=0>[N_meanP] noise_meanP;\nint<lower=0> Nme_meanP;\nint<lower=1> Jme_meanP[Nme_meanP];\nint<lower=1> K_meanP; // number of population-level effects\nmatrix[N_meanP, K_meanP] X_meanP; // population-level design matrix\n// data needed for ARMA correlations\nint<lower=0> Kar_meanP; // AR order\nint<lower=0> Kma_meanP; // MA order\n// number of lags per observation\nint<lower=0> J_lag_meanP[N_meanP];\nint<lower=1> N_meanR; // number of observations\nvector[N_meanR] Y_meanR; // response variable\n// data for measurement-error in the response\nvector<lower=0>[N_meanR] noise_meanR;\nint<lower=0> Nme_meanR;\nint<lower=1> Jme_meanR[Nme_meanR];\nint<lower=1> K_meanR; // number of population-level effects\nmatrix[N_meanR, K_meanR] X_meanR; // population-level design matrix\n// data needed for ARMA correlations\nint<lower=0> Kar_meanR; // AR order\nint<lower=0> Kma_meanR; // MA order\n// number of lags per observation\nint<lower=0> J_lag_meanR[N_meanR];\nint<lower=1> N_meanW; // number of observations\nvector[N_meanW] Y_meanW; // response variable\n// data for measurement-error in the response\nvector<lower=0>[N_meanW] noise_meanW;\nint<lower=0> Nme_meanW;\nint<lower=1> Jme_meanW[Nme_meanW];\nint<lower=1> K_meanW; // number of population-level effects\nmatrix[N_meanW, K_meanW] X_meanW; // population-level design matrix\n// data needed for ARMA correlations\nint<lower=0> Kar_meanW; // AR order\nint<lower=0> Kma_meanW; // MA order\n// number of lags per observation\nint<lower=0> J_lag_meanW[N_meanW];\nint<lower=1> nresp; // number of responses\nint nrescor; // number of residual correlations\n// data for group-level effects of ID 1\nint<lower=1> N_1; // number of grouping levels\nint<lower=1> M_1; // number of coefficients per level\nint<lower=1> J_1_meanX[N_meanX]; // grouping indicator per observation\nint<lower=1> J_1_meanY[N_meanY]; // grouping indicator per observation\nint<lower=1> J_1_meanZ[N_meanZ]; // grouping indicator per observation\nint<lower=1> J_1_meanP[N_meanP]; // grouping indicator per observation\nint<lower=1> J_1_meanR[N_meanR]; // grouping indicator per observation\nint<lower=1> J_1_meanW[N_meanW]; // grouping indicator per observation\n// group-level predictor values\nvector[N_meanX] Z_1_meanX_1;\nvector[N_meanY] Z_1_meanY_2;\nvector[N_meanZ] Z_1_meanZ_3;\nvector[N_meanP] Z_1_meanP_4;\nvector[N_meanR] Z_1_meanR_5;\nvector[N_meanW] Z_1_meanW_6;\nint<lower=1> NC_1; // number of group-level correlations\n// data for group-level effects of ID 2\nint<lower=1> N_2; // number of grouping levels\nint<lower=1> M_2; // number of coefficients per level\nint<lower=1> J_2_meanX[N_meanX]; // grouping indicator per observation\nint<lower=1> J_2_meanY[N_meanY]; // grouping indicator per observation\nint<lower=1> J_2_meanZ[N_meanZ]; // grouping indicator per observation\nint<lower=1> J_2_meanP[N_meanP]; // grouping indicator per observation\nint<lower=1> J_2_meanR[N_meanR]; // grouping indicator per observation\nint<lower=1> J_2_meanW[N_meanW]; // grouping indicator per observation\n// group-level predictor values\nvector[N_meanX] Z_2_meanX_1;\nvector[N_meanY] Z_2_meanY_2;\nvector[N_meanZ] Z_2_meanZ_3;\nvector[N_meanP] Z_2_meanP_4;\nvector[N_meanR] Z_2_meanR_5;\nvector[N_meanW] Z_2_meanW_6;\nint<lower=1> NC_2; // number of group-level correlations\nint prior_only; // should the likelihood be ignored?\n}\ntransformed data {\nint Kc_meanX = K_meanX - 1;\nmatrix[N_meanX, Kc_meanX] Xc_meanX; // centered version of X_meanX without an intercept\nvector[Kc_meanX] means_X_meanX; // column means of X_meanX before centering\nint max_lag_meanX = max(Kar_meanX, Kma_meanX);\nint Kc_meanY = K_meanY - 1;\nmatrix[N_meanY, Kc_meanY] Xc_meanY; // centered version of X_meanY without an intercept\nvector[Kc_meanY] means_X_meanY; // column means of X_meanY before centering\nint max_lag_meanY = max(Kar_meanY, Kma_meanY);\nint Kc_meanZ = K_meanZ - 1;\nmatrix[N_meanZ, Kc_meanZ] Xc_meanZ; // centered version of X_meanZ without an intercept\nvector[Kc_meanZ] means_X_meanZ; // column means of X_meanZ before centering\nint max_lag_meanZ = max(Kar_meanZ, Kma_meanZ);\nint Kc_meanP = K_meanP - 1;\nmatrix[N_meanP, Kc_meanP] Xc_meanP; // centered version of X_meanP without an intercept\nvector[Kc_meanP] means_X_meanP; // column means of X_meanP before centering\nint max_lag_meanP = max(Kar_meanP, Kma_meanP);\nint Kc_meanR = K_meanR - 1;\nmatrix[N_meanR, Kc_meanR] Xc_meanR; // centered version of X_meanR without an intercept\nvector[Kc_meanR] means_X_meanR; // column means of X_meanR before centering\nint max_lag_meanR = max(Kar_meanR, Kma_meanR);\nint Kc_meanW = K_meanW - 1;\nmatrix[N_meanW, Kc_meanW] Xc_meanW; // centered version of X_meanW without an intercept\nvector[Kc_meanW] means_X_meanW; // column means of X_meanW before centering\nint max_lag_meanW = max(Kar_meanW, Kma_meanW);\nvector[nresp] Y[N]; // response array\nfor (i in 2:K_meanX) {\nmeans_X_meanX[i - 1] = mean(X_meanX[, i]);\nXc_meanX[, i - 1] = X_meanX[, i] - means_X_meanX[i - 1];\n}\nfor (i in 2:K_meanY) {\nmeans_X_meanY[i - 1] = mean(X_meanY[, i]);\nXc_meanY[, i - 1] = X_meanY[, i] - means_X_meanY[i - 1];\n}\nfor (i in 2:K_meanZ) {\nmeans_X_meanZ[i - 1] = mean(X_meanZ[, i]);\nXc_meanZ[, i - 1] = X_meanZ[, i] - means_X_meanZ[i - 1];\n}\nfor (i in 2:K_meanP) {\nmeans_X_meanP[i - 1] = mean(X_meanP[, i]);\nXc_meanP[, i - 1] = X_meanP[, i] - means_X_meanP[i - 1];\n}\nfor (i in 2:K_meanR) {\nmeans_X_meanR[i - 1] = mean(X_meanR[, i]);\nXc_meanR[, i - 1] = X_meanR[, i] - means_X_meanR[i - 1];\n}\nfor (i in 2:K_meanW) {\nmeans_X_meanW[i - 1] = mean(X_meanW[, i]);\nXc_meanW[, i - 1] = X_meanW[, i] - means_X_meanW[i - 1];\n}\nfor (n in 1:N) {\nY[n] = transpose([Y_meanX[n], Y_meanY[n], Y_meanZ[n], Y_meanP[n], Y_meanR[n], Y_meanW[n]]);\n}\n}\nparameters {\nvector[N_meanX] Yl_meanX; // latent variable\nvector[Kc_meanX] b_meanX; // population-level effects\nreal Intercept_meanX; // temporary intercept for centered predictors\nvector[Kar_meanX] ar_meanX; // autoregressive coefficients\nreal<lower=0> sigma_meanX; // dispersion parameter\nvector[N_meanY] Yl_meanY; // latent variable\nvector[Kc_meanY] b_meanY; // population-level effects\nreal Intercept_meanY; // temporary intercept for centered predictors\nvector[Kar_meanY] ar_meanY; // autoregressive coefficients\nreal<lower=0> sigma_meanY; // dispersion parameter\nvector[N_meanZ] Yl_meanZ; // latent variable\nvector[Kc_meanZ] b_meanZ; // population-level effects\nreal Intercept_meanZ; // temporary intercept for centered predictors\nvector[Kar_meanZ] ar_meanZ; // autoregressive coefficients\nreal<lower=0> sigma_meanZ; // dispersion parameter\nvector[N_meanP] Yl_meanP; // latent variable\nvector[Kc_meanP] b_meanP; // population-level effects\nreal Intercept_meanP; // temporary intercept for centered predictors\nvector[Kar_meanP] ar_meanP; // autoregressive coefficients\nreal<lower=0> sigma_meanP; // dispersion parameter\nvector[N_meanR] Yl_meanR; // latent variable\nvector[Kc_meanR] b_meanR; // population-level effects\nreal Intercept_meanR; // temporary intercept for centered predictors\nvector[Kar_meanR] ar_meanR; // autoregressive coefficients\nreal<lower=0> sigma_meanR; // dispersion parameter\nvector[N_meanW] Yl_meanW; // latent variable\nvector[Kc_meanW] b_meanW; // population-level effects\nreal Intercept_meanW; // temporary intercept for centered predictors\nvector[Kar_meanW] ar_meanW; // autoregressive coefficients\nreal<lower=0> sigma_meanW; // dispersion parameter\ncholesky_factor_corr[nresp] Lrescor; // parameters for multivariate linear models\nvector<lower=0>[M_1] sd_1; // group-level standard deviations\nmatrix[M_1, N_1] z_1; // standardized group-level effects\ncholesky_factor_corr[M_1] L_1; // cholesky factor of correlation matrix\nvector<lower=0>[M_2] sd_2; // group-level standard deviations\nmatrix[M_2, N_2] z_2; // standardized group-level effects\ncholesky_factor_corr[M_2] L_2; // cholesky factor of correlation matrix\n}\ntransformed parameters {\nmatrix[N_1, M_1] r_1; // actual group-level effects\n// using vectors speeds up indexing in loops\nvector[N_1] r_1_meanX_1;\nvector[N_1] r_1_meanY_2;\nvector[N_1] r_1_meanZ_3;\nvector[N_1] r_1_meanP_4;\nvector[N_1] r_1_meanR_5;\nvector[N_1] r_1_meanW_6;\nmatrix[N_2, M_2] r_2; // actual group-level effects\n// using vectors speeds up indexing in loops\nvector[N_2] r_2_meanX_1;\nvector[N_2] r_2_meanY_2;\nvector[N_2] r_2_meanZ_3;\nvector[N_2] r_2_meanP_4;\nvector[N_2] r_2_meanR_5;\nvector[N_2] r_2_meanW_6;\nreal lprior = 0; // prior contributions to the log posterior\n// compute actual group-level effects\nr_1 = scale_r_cor(z_1, sd_1, L_1);\nr_1_meanX_1 = r_1[, 1];\nr_1_meanY_2 = r_1[, 2];\nr_1_meanZ_3 = r_1[, 3];\nr_1_meanP_4 = r_1[, 4];\nr_1_meanR_5 = r_1[, 5];\nr_1_meanW_6 = r_1[, 6];\n// compute actual group-level effects\nr_2 = scale_r_cor(z_2, sd_2, L_2);\nr_2_meanX_1 = r_2[, 1];\nr_2_meanY_2 = r_2[, 2];\nr_2_meanZ_3 = r_2[, 3];\nr_2_meanP_4 = r_2[, 4];\nr_2_meanR_5 = r_2[, 5];\nr_2_meanW_6 = r_2[, 6];\nlprior += normal_lpdf(b_meanX | 0,2);\nlprior += normal_lpdf(Intercept_meanX | 0,0.5);\nlprior += normal_lpdf(ar_meanX | 0,1);\nlprior += normal_lpdf(sigma_meanX | 0,2)\n- 1 * normal_lccdf(0 | 0,2);\nlprior += normal_lpdf(b_meanY | 0,2);\nlprior += normal_lpdf(Intercept_meanY | 0,0.5);\nlprior += normal_lpdf(ar_meanY | 0,1);\nlprior += normal_lpdf(sigma_meanY | 0,2)\n- 1 * normal_lccdf(0 | 0,2);\nlprior += normal_lpdf(b_meanZ | 0,2);\nlprior += normal_lpdf(Intercept_meanZ | 0,0.5);\nlprior += normal_lpdf(ar_meanZ | 0,1);\nlprior += normal_lpdf(sigma_meanZ | 0,2)\n- 1 * normal_lccdf(0 | 0,2);\nlprior += normal_lpdf(b_meanP | 0,2);\nlprior += normal_lpdf(Intercept_meanP | 0,0.5);\nlprior += normal_lpdf(ar_meanP | 0,1);\nlprior += normal_lpdf(sigma_meanP | 0,2)\n- 1 * normal_lccdf(0 | 0,2);\nlprior += normal_lpdf(b_meanR | 0,2);\nlprior += normal_lpdf(Intercept_meanR | 0,0.5);\nlprior += normal_lpdf(ar_meanR | 0,1);\nlprior += normal_lpdf(sigma_meanR | 0,2)\n- 1 * normal_lccdf(0 | 0,2);\nlprior += normal_lpdf(b_meanW | 0,2);\nlprior += normal_lpdf(Intercept_meanW | 0,0.5);\nlprior += normal_lpdf(ar_meanW | 0,1);\nlprior += normal_lpdf(sigma_meanW | 0,2)\n- 1 * normal_lccdf(0 | 0,2);\nlprior += lkj_corr_cholesky_lpdf(Lrescor | 1);\nlprior += normal_lpdf(sd_1 | 0,1)\n- 6 * normal_lccdf(0 | 0,1);\nlprior += lkj_corr_cholesky_lpdf(L_1 | 1);\nlprior += normal_lpdf(sd_2 | 0,1)\n- 6 * normal_lccdf(0 | 0,1);\nlprior += lkj_corr_cholesky_lpdf(L_2 | 1);\n}\nmodel {\n// likelihood including constants\nif (!prior_only) {\nvector[nresp] Yl[N] = Y;\n// matrix storing lagged residuals\nmatrix[N_meanX, max_lag_meanX] Err_meanX = rep_matrix(0, N_meanX, max_lag_meanX);\nvector[N_meanX] err_meanX; // actual residuals\n// initialize linear predictor term\nvector[N_meanX] mu_meanX = Intercept_meanX + Xc_meanX * b_meanX;\n// matrix storing lagged residuals\nmatrix[N_meanY, max_lag_meanY] Err_meanY = rep_matrix(0, N_meanY, max_lag_meanY);\nvector[N_meanY] err_meanY; // actual residuals\n// initialize linear predictor term\nvector[N_meanY] mu_meanY = Intercept_meanY + Xc_meanY * b_meanY;\n// matrix storing lagged residuals\nmatrix[N_meanZ, max_lag_meanZ] Err_meanZ = rep_matrix(0, N_meanZ, max_lag_meanZ);\nvector[N_meanZ] err_meanZ; // actual residuals\n// initialize linear predictor term\nvector[N_meanZ] mu_meanZ = Intercept_meanZ + Xc_meanZ * b_meanZ;\n// matrix storing lagged residuals\nmatrix[N_meanP, max_lag_meanP] Err_meanP = rep_matrix(0, N_meanP, max_lag_meanP);\nvector[N_meanP] err_meanP; // actual residuals\n// initialize linear predictor term\nvector[N_meanP] mu_meanP = Intercept_meanP + Xc_meanP * b_meanP;\n// matrix storing lagged residuals\nmatrix[N_meanR, max_lag_meanR] Err_meanR = rep_matrix(0, N_meanR, max_lag_meanR);\nvector[N_meanR] err_meanR; // actual residuals\n// initialize linear predictor term\nvector[N_meanR] mu_meanR = Intercept_meanR + Xc_meanR * b_meanR;\n// matrix storing lagged residuals\nmatrix[N_meanW, max_lag_meanW] Err_meanW = rep_matrix(0, N_meanW, max_lag_meanW);\nvector[N_meanW] err_meanW; // actual residuals\n// initialize linear predictor term\nvector[N_meanW] mu_meanW = Intercept_meanW + Xc_meanW * b_meanW;\n// multivariate predictor array\nvector[nresp] Mu[N];\nvector[nresp] sigma = transpose([sigma_meanX, sigma_meanY, sigma_meanZ, sigma_meanP, sigma_meanR, sigma_meanW]);\n// cholesky factor of residual covariance matrix\nmatrix[nresp, nresp] LSigma = diag_pre_multiply(sigma, Lrescor);\nfor (n in 1:N_meanX) {\n// add more terms to the linear predictor\nmu_meanX[n] += r_1_meanX_1[J_1_meanX[n]] * Z_1_meanX_1[n] + r_2_meanX_1[J_2_meanX[n]] * Z_2_meanX_1[n];\n}\nfor (n in 1:N_meanY) {\n// add more terms to the linear predictor\nmu_meanY[n] += r_1_meanY_2[J_1_meanY[n]] * Z_1_meanY_2[n] + r_2_meanY_2[J_2_meanY[n]] * Z_2_meanY_2[n];\n}\nfor (n in 1:N_meanZ) {\n// add more terms to the linear predictor\nmu_meanZ[n] += r_1_meanZ_3[J_1_meanZ[n]] * Z_1_meanZ_3[n] + r_2_meanZ_3[J_2_meanZ[n]] * Z_2_meanZ_3[n];\n}\nfor (n in 1:N_meanP) {\n// add more terms to the linear predictor\nmu_meanP[n] += r_1_meanP_4[J_1_meanP[n]] * Z_1_meanP_4[n] + r_2_meanP_4[J_2_meanP[n]] * Z_2_meanP_4[n];\n}\nfor (n in 1:N_meanR) {\n// add more terms to the linear predictor\nmu_meanR[n] += r_1_meanR_5[J_1_meanR[n]] * Z_1_meanR_5[n] + r_2_meanR_5[J_2_meanR[n]] * Z_2_meanR_5[n];\n}\nfor (n in 1:N_meanW) {\n// add more terms to the linear predictor\nmu_meanW[n] += r_1_meanW_6[J_1_meanW[n]] * Z_1_meanW_6[n] + r_2_meanW_6[J_2_meanW[n]] * Z_2_meanW_6[n];\n}\n// include ARMA terms\nfor (n in 1:N_meanX) {\nerr_meanX[n] = Yl_meanX[n] - mu_meanX[n];\nfor (i in 1:J_lag_meanX[n]) {\nErr_meanX[n + 1, i] = err_meanX[n + 1 - i];\n}\nmu_meanX[n] += Err_meanX[n, 1:Kar_meanX] * ar_meanX;\n}\n// include ARMA terms\nfor (n in 1:N_meanY) {\nerr_meanY[n] = Yl_meanY[n] - mu_meanY[n];\nfor (i in 1:J_lag_meanY[n]) {\nErr_meanY[n + 1, i] = err_meanY[n + 1 - i];\n}\nmu_meanY[n] += Err_meanY[n, 1:Kar_meanY] * ar_meanY;\n}\n// include ARMA terms\nfor (n in 1:N_meanZ) {\nerr_meanZ[n] = Yl_meanZ[n] - mu_meanZ[n];\nfor (i in 1:J_lag_meanZ[n]) {\nErr_meanZ[n + 1, i] = err_meanZ[n + 1 - i];\n}\nmu_meanZ[n] += Err_meanZ[n, 1:Kar_meanZ] * ar_meanZ;\n}\n// include ARMA terms\nfor (n in 1:N_meanP) {\nerr_meanP[n] = Yl_meanP[n] - mu_meanP[n];\nfor (i in 1:J_lag_meanP[n]) {\nErr_meanP[n + 1, i] = err_meanP[n + 1 - i];\n}\nmu_meanP[n] += Err_meanP[n, 1:Kar_meanP] * ar_meanP;\n}\n// include ARMA terms\nfor (n in 1:N_meanR) {\nerr_meanR[n] = Yl_meanR[n] - mu_meanR[n];\nfor (i in 1:J_lag_meanR[n]) {\nErr_meanR[n + 1, i] = err_meanR[n + 1 - i];\n}\nmu_meanR[n] += Err_meanR[n, 1:Kar_meanR] * ar_meanR;\n}\n// include ARMA terms\nfor (n in 1:N_meanW) {\nerr_meanW[n] = Yl_meanW[n] - mu_meanW[n];\nfor (i in 1:J_lag_meanW[n]) {\nErr_meanW[n + 1, i] = err_meanW[n + 1 - i];\n}\nmu_meanW[n] += Err_meanW[n, 1:Kar_meanW] * ar_meanW;\n}\n// combine univariate parameters\nfor (n in 1:N) {\nYl[n] = Yl_meanX[n];\nYl[n] = Yl_meanY[n];\nYl[n] = Yl_meanZ[n];\nYl[n] = Yl_meanP[n];\nYl[n] = Yl_meanR[n];\nYl[n] = Yl_meanW[n];\n}\n// combine univariate parameters\nfor (n in 1:N) {\nMu[n] = transpose([mu_meanX[n], mu_meanY[n], mu_meanZ[n], mu_meanP[n], mu_meanR[n], mu_meanW[n]]);\n}\ntarget += multi_normal_cholesky_lpdf(Yl | Mu, LSigma);\n}\n// priors including constants\ntarget += lprior;\ntarget += normal_lpdf(Y_meanX[Jme_meanX] | Yl_meanX[Jme_meanX], noise_meanX[Jme_meanX]);\ntarget += normal_lpdf(Y_meanY[Jme_meanY] | Yl_meanY[Jme_meanY], noise_meanY[Jme_meanY]);\ntarget += normal_lpdf(Y_meanZ[Jme_meanZ] | Yl_meanZ[Jme_meanZ], noise_meanZ[Jme_meanZ]);\ntarget += normal_lpdf(Y_meanP[Jme_meanP] | Yl_meanP[Jme_meanP], noise_meanP[Jme_meanP]);\ntarget += normal_lpdf(Y_meanR[Jme_meanR] | Yl_meanR[Jme_meanR], noise_meanR[Jme_meanR]);\ntarget += normal_lpdf(Y_meanW[Jme_meanW] | Yl_meanW[Jme_meanW], noise_meanW[Jme_meanW]);\ntarget += std_normal_lpdf(to_vector(z_1));\ntarget += std_normal_lpdf(to_vector(z_2));\n}\ngenerated quantities {\n// actual population-level intercept\nreal b_meanX_Intercept = Intercept_meanX - dot_product(means_X_meanX, b_meanX);\n// actual population-level intercept\nreal b_meanY_Intercept = Intercept_meanY - dot_product(means_X_meanY, b_meanY);\n// actual population-level intercept\nreal b_meanZ_Intercept = Intercept_meanZ - dot_product(means_X_meanZ, b_meanZ);\n// actual population-level intercept\nreal b_meanP_Intercept = Intercept_meanP - dot_product(means_X_meanP, b_meanP);\n// actual population-level intercept\nreal b_meanR_Intercept = Intercept_meanR - dot_product(means_X_meanR, b_meanR);\n// actual population-level intercept\nreal b_meanW_Intercept = Intercept_meanW - dot_product(means_X_meanW, b_meanW);\n// residual correlations\ncorr_matrix[nresp] Rescor = multiply_lower_tri_self_transpose(Lrescor);\nvector<lower=-1,upper=1>[nrescor] rescor;\n// compute group-level correlations\ncorr_matrix[M_1] Cor_1 = multiply_lower_tri_self_transpose(L_1);\nvector<lower=-1,upper=1>[NC_1] cor_1;\n// compute group-level correlations\ncorr_matrix[M_2] Cor_2 = multiply_lower_tri_self_transpose(L_2);\nvector<lower=-1,upper=1>[NC_2] cor_2;\n// extract upper diagonal of correlation matrix\nfor (k in 1:nresp) {\nfor (j in 1:(k - 1)) {\nrescor[choose(k - 1, 2) + j] = Rescor[j, k];\n}\n}\n// extract upper diagonal of correlation matrix\nfor (k in 1:M_1) {\nfor (j in 1:(k - 1)) {\ncor_1[choose(k - 1, 2) + j] = Cor_1[j, k];\n}\n}\n// extract upper diagonal of correlation matrix\nfor (k in 1:M_2) {\nfor (j in 1:(k - 1)) {\ncor_2[choose(k - 1, 2) + j] = Cor_2[j, k];\n}\n}\n}\n\n``````\n\nA small sample of the data is taking hours to run, so I’m wondering where I might be able to gain some efficiency through some vectorization and if there might be some changes I could make to take advantage of a fairly capable GPU." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5794361,"math_prob":0.9944013,"size":20391,"snap":"2023-40-2023-50","text_gpt3_token_len":6919,"char_repetition_ratio":0.20571934,"word_repetition_ratio":0.16822067,"special_character_ratio":0.32769358,"punctuation_ratio":0.16541822,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99963224,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-29T03:03:46Z\",\"WARC-Record-ID\":\"<urn:uuid:03177b13-3f87-49c1-8284-2d67303e9a8c>\",\"Content-Length\":\"41281\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1608ad09-bb30-4a17-b6b0-10fa3b6c7012>\",\"WARC-Concurrent-To\":\"<urn:uuid:1900bcfd-c009-41a5-8187-48645762c515>\",\"WARC-IP-Address\":\"74.82.16.203\",\"WARC-Target-URI\":\"https://discourse.mc-stan.org/t/help-speeding-up-an-auto-regressive-multivariate-measurement-error-model/27562\",\"WARC-Payload-Digest\":\"sha1:PHLCDUSULULRTZ6YLL57TE6HC22KLRCJ\",\"WARC-Block-Digest\":\"sha1:LO2AZBJUOG6FJEDDMG5J7LKQMXBU7SJG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100047.66_warc_CC-MAIN-20231129010302-20231129040302-00432.warc.gz\"}"}
https://www.noblemushtak.com/blog/math-puzzle-may-3
[ "All content on this site is available under the Creative Commons Attribution 4.0 International license:", null, "Math Puzzle from May 3, 2019\nCan you figure out how much Gimkit powerups really cost?\nWritten on by Noble Mushtak\n\nThe following was a puzzle presented to Marshwood GT students on May 3, 2019. Have fun doing math!\n\nIn the past year, the educational game Gimkit has become all the craze in classrooms across the country. (However, neither this math riddle nor its author is associated with the Gimkit company or any Gimkit employee in any way.) In Gimkit, students earn money for answering educational questions, and in live games, students race against each other to answer more questions and earn more money than anyone else in their class. However, Gimkit players can also use their money to buy upgrades and powerups, which then allows them to earn money even faster. For example, here are some of the Gimkit powerups:\n\n• Mini Bonus: Earn 2 times the amount of money than normal from answering one question.\n• Mega Bonus: Earn 5 times the amount of money than normal from answering one question.\n• Discounter: Decrease the cost of all upgrades by 25%.\n\nHowever, the catch with Gimkit powerups is that the cost of powerups increase as you gain more money, so understanding the relationship between how much money you have and the cost of a powerup is critical to optimizing one's Gimkit strategy. Unfortunately, there is no Gimkit Rule Book which gives you an equation for the cost of all of the powerups, so you have to figure out the equations yourself!\n\nFirst, in order to derive the equations for the cost of each powerup, you play a Gimkit game and, after answering every few questions, you record the amount of money you have and the cost of each powerup according to the Gimkit Shop. The following table contains all the data that you recorded:\n\n$$\\begin{array}{|c|c|c|c|}\\hline \\text{Amount of Money} & \\text{Cost of Mini Bonus} & \\text{Cost of Mega Bonus} & \\text{Cost of Discounter} \\\\ \\hline \\10 & \\25 & \\55 & \\355 \\\\ \\hline \\17 & \\25 & \\55 & \\355 \\\\ \\hline \\77 & \\25 & \\55 & \\365 \\\\ \\hline \\147 & \\25 & \\60 & \\380 \\\\ \\hline \\217 & \\30 & \\65 & \\395 \\\\ \\hline \\367 & \\35 & \\75 & \\420 \\\\ \\hline \\465 & \\35 & \\80 & \\440 \\\\ \\hline \\540 & \\40 & \\85 & \\455 \\\\ \\hline \\615 & \\40 & \\90 & \\470 \\\\ \\hline \\690 & \\45 & \\95 & \\485 \\\\ \\hline \\765 & \\45 & \\100 & \\500 \\\\ \\hline \\1867 & \\80 & \\165 & \\705 \\\\ \\hline \\2167 & \\90 & \\185 & \\765 \\\\ \\hline \\3435 & \\125 & \\260 & \\1005 \\\\ \\hline \\73735 & \\2235 & \\4475 & \\14360 \\\\ \\hline \\end{array}$$\n\nNow, based off this data, find the formulas for the cost of the mini bonus, the cost of the mega bonus, and the cost of the discounter in terms of the amount of money. To be clear, you are finding three different formulas, one for each powerup, and the only independent variable in each formula should be the amount of money. Your formulas must be exact (i.e. not approximations) and they must work for the data in all of the rows of the above table. Finally, your formulas should be as simple as possible: If you find two different formulas which both work, choose the more concise formula.\n\nClick here to show the first hint.First, do a linear regression, where the cost of the powerup is on the y-axis and the amount of money is on the x-axis. This will give you an approximate formula. Then, compare the points on the line to the actual data points and try to modify the linear regression formula in order to make it exact.\n\nFor any number $$z$$, the following formula gives you $$z$$ rounded up to the next multiple of 5:\n$$5\\left\\lceil \\frac{z}{5} \\right\\rceil$$\nFor example, for $$z=24$$, $$5\\lceil \\frac{24}{5} \\rceil=5\\lceil 4.8 \\rceil=5\\cdot 5=25$$ and for $$z=30$$, $$5\\lceil \\frac{30}{5} \\rceil=5\\lceil 6\\rceil=5\\cdot 6=30$$.\n\nMessenger" ]
[ null, "https://www.noblemushtak.com/static/img/creative-commons-by-logo.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8879863,"math_prob":0.99030447,"size":3094,"snap":"2023-14-2023-23","text_gpt3_token_len":802,"char_repetition_ratio":0.18122977,"word_repetition_ratio":0.054770317,"special_character_ratio":0.34001294,"punctuation_ratio":0.09632224,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9941063,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-30T04:19:52Z\",\"WARC-Record-ID\":\"<urn:uuid:0d8d1dec-eca0-454f-b6ba-9d0d0f05f190>\",\"Content-Length\":\"11752\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b6227a22-80f3-4ee5-a270-c50194471e1b>\",\"WARC-Concurrent-To\":\"<urn:uuid:c5fc996f-4b67-460f-b7d4-f96a1c04dd30>\",\"WARC-IP-Address\":\"35.173.69.207\",\"WARC-Target-URI\":\"https://www.noblemushtak.com/blog/math-puzzle-may-3\",\"WARC-Payload-Digest\":\"sha1:OJPWM7NPTHQLHF6UPLTWLP2EA34GDPEH\",\"WARC-Block-Digest\":\"sha1:JRZRWOKZBS7KERJEHCLE3CFBS7FMAK3F\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224645089.3_warc_CC-MAIN-20230530032334-20230530062334-00546.warc.gz\"}"}
https://stats.stackexchange.com/questions/378049/modeling-quarterly-default-rate-non-stationary-autorregressive-time-series
[ "# Modeling quarterly default rate (non stationary, autorregressive time series)\n\nI am a student writing my thesis on default rate modeling. My major is finance, so I'm not really experienced in econometrics. I'm trying to create a model for quarterly corporate default rates (percentage of defaulted corporations in the previous 12 months) with macroeconomical variables (GDP growth, interest rate, debt/GDP, etc.)\n\nMy problem is, that according to ADF and KPSS my variables are non-stationary. Differenciating doesn't really help, and second differences hold no explanatory value. The variables are not cointegrated. I'm using a sample of 17 years (60-70 values, depending on the variable lags). They should be stationary on the long term (100 years), but my sample is very narrow.\n\nBecause I'm using quaretly data of annual default rates, the dependent variable is also heavily autocorrelated. Using annual data shows no autocorrelation, but 16 variables are few for a model.\n\nMy goal is to examine the significance of the independent variables and their lags in different sectors, and also trying out different methods for modeling (linear, linear with lags, log-linear, diffeneciated, and all with Hodrick-Prescott filter).\n\nIn real life it is common practice by banks to just use linear regression, which I did, and got awesome results, but it turns out at the end that the data needs to be stationary and non-autocorrelated for this model.\n\nThe only appropriate model I found was ARDL which suggests 4 lags and it shows that all four lags of the dependent variable is significant, so it's not really helpful for me.\n\nI'm running out of ideas what kind of models to use or how to explain why I used linear regression.\n\nedit: Data used\n\nedit2: I tried to use Newey-West and Hansen-Hodrick Standard Errors to correct the issue of (overlapping) autocorrelation, the coefficients and the Durbin-Watson value did not change, only the p ant t values in a minimal way.\n\nedit3: MY final regression model is ln⁡(hp_DR)= -0,3349+0,0182*hp_debt/GDP(t-2))+ + 0,0639* hp_interest(t-11)- 0,0182* hp_GDP(t-9)+ μ,\n\nDR is the default rate in percentage, debt/GDP is corporate debt/GDP rate in %, and interest rate is government 10 year bond yields. All with Hodrick-Prescott filter (hp) and appropriate lags.\n\ndefault rate plot:", null, "• Can you edit your post to give an example of linear regression model you used? Just write it up as Y ~ X1 + X2+ ... + Xp and explain who your predictors are. Nov 21 '18 at 3:28\n• Thank you for your edit! You can use multiple linear regression with autocorrelated errors for your modelling - in R, this would be implemented with the gls() function in the nlme package which has a \"correlation =\". You would fit your multiple linear regression model with temporally uncorrelated errors first and then check the model residuals for evidence of temporal autocorrelation via the ACF and PACF plots. If there is evidence of autocorrelation, you can model that with corARMA or corAR. Nov 21 '18 at 15:41\n• With quarterly data, you may have expected to see seasonality in your input series for your multiple linear regression model - but if thar is not the case, you won't need to control for seasonality in the model. The beauty of using something like the gls() function is that you can fit various competing models to your data using method = \"REML\" (restricted maximum likelihood estimation) and then compare the models based on BIC (since you seem to be interested in explanation rather than forecasting). Nov 21 '18 at 15:45\n• The crosscorrelation function (ccf in R) will give you some clues as to what lags you should include in your modelling for the predictor of interest. Simply apply this function to the response variable and the non-lagged predictor variable and then interpret the resulting plot as explained here, for instance: stats.stackexchange.com/questions/253778/…. Nov 21 '18 at 15:51\n• Nov 21 '18 at 16:00\n\nHave you thought about using a PCA, clustering or random forest for determining independent variable significance?\n\nFor forecasting, you could use an ARIMAX model with exogenous inputs. Just fill in the appropriate parameters, (p,d,q). AR models are commonly used for econometric forecasts. You could also use random forest to do this forecast, but with the lack of data it might be an issue.\n\nOne last thought, you might want to transform the data to percent change before performing the independent variable tests. Although, leave the data as is for the ARIMAX forecast.\n\nUpdate\n\nThe answer to your most recent comment is a bit lengthy so I am making it an update. Also I think I have a better picture of what you are try to accomplish.\n\nYou should not be using a simple linear model with non-stationary series to explain the variance of the dependent variable. You should be using % change / differences. Using a non-stationary series you will get a great r-squared, but it is incorrect because the model is not picking up the variance, it is only seeing the trend. Below is a quick example to illustrate using the SPY ETF. As you can see the price series regression looks great, .99 r-squared , but it is a lie! When you do the return series, you get the real answer that it is a bad model with no predictive or explanatory power.\n\nLinear Regression using price series\n\nCall:\nlm(formula = spy.close.px[-1] ~ spy.close.px[-nrow(spy.close.px)])\n\nResiduals:\nMin 1Q Median 3Q Max\n-11.5588 -0.7205 0.0493 0.8592 12.8073\n\nCoefficients:\nEstimate Std. Error t value Pr(>|t|)\n(Intercept) 0.0445310 0.1024606 0.435 0.664\nspy.close.px[-nrow(spy.close.px)] 0.9999792 0.0005781 1729.656 <0.0000000000000002 ***\n---\nSignif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1\n\nResidual standard error: 1.704 on 2991 degrees of freedom\nMultiple R-squared: 0.999, Adjusted R-squared: 0.999\nF-statistic: 2.992e+06 on 1 and 2991 DF, p-value: < 0.00000000000000022\n\n\nLinear Regression using returns\n\nCall:\nlm(formula = spy.returns[-1] ~ spy.returns[-nrow(spy.returns)])\n\nResiduals:\nMin 1Q Median 3Q Max\n-0.105063 -0.004327 0.000488 0.005280 0.133375\n\nCoefficients:\nEstimate Std. Error t value Pr(>|t|)\n(Intercept) 0.0002255 0.0002262 0.997 0.319\nspy.returns[-nrow(spy.returns)] -0.0804997 0.0182356 -4.414 0.0000105 ***\n---\nSignif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1\n\nResidual standard error: 0.01237 on 2990 degrees of freedom\n(1 observation deleted due to missingness)\nMultiple R-squared: 0.006475, Adjusted R-squared: 0.006143\nF-statistic: 19.49 on 1 and 2990 DF, p-value: 0.00001049\n\n• thanks for the answer! I went with the linear modell because I'm used to doing that at my work, and wanted to examine the effect of each value and the differences in sectors. E.g. GDP is more significant in Agriculture and less in Construction. Forecasting is a minor part of my thesis, that is why I wanted to concentrate on the independent variables, and not use AR lags if that makes any sense Nov 21 '18 at 11:00\n• also, does ARIMAX not need stationary variables? I tried the percentage change (quarterly, annualized) which actually seems to work, but I'm not yet sure of the economical interpretation Nov 21 '18 at 11:41\n• Nope, variable do not need to be stationary. Although the results will not be good. Best practice would be to make them stationary through differencing. Run an ACF or ADF test to get what the lagged difference number should be. Nov 21 '18 at 14:24\n• that is my problem, that I need to differenciate twice to make it stationary, but with that the other values have no explanatory meaning Nov 21 '18 at 17:30\n• check out my update Nov 21 '18 at 18:09" ]
[ null, "https://i.stack.imgur.com/VSKou.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90997267,"math_prob":0.83810306,"size":2240,"snap":"2021-43-2021-49","text_gpt3_token_len":537,"char_repetition_ratio":0.102862254,"word_repetition_ratio":0.0,"special_character_ratio":0.22767857,"punctuation_ratio":0.11600928,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96873444,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-27T06:32:26Z\",\"WARC-Record-ID\":\"<urn:uuid:330c9ba2-db64-4b74-8dec-c768a0b73551>\",\"Content-Length\":\"152362\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b61ffb21-9076-4612-bf7b-5be12d240455>\",\"WARC-Concurrent-To\":\"<urn:uuid:9dd213ae-ec00-4aed-9426-59301d27802f>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://stats.stackexchange.com/questions/378049/modeling-quarterly-default-rate-non-stationary-autorregressive-time-series\",\"WARC-Payload-Digest\":\"sha1:FR6QF3PVFHSOPW7QBIU36S6KODEL2IF5\",\"WARC-Block-Digest\":\"sha1:QKLSFDYWFR65YO7WR2QOTRIGJXXHWV2I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358118.13_warc_CC-MAIN-20211127043716-20211127073716-00186.warc.gz\"}"}
https://number.academy/4744
[ "# Number 4744 facts\n\nThe even number 4,744 is spelled 🔊, and written in words: four thousand, seven hundred and forty-four. The ordinal number 4744th is said 🔊 and written as: four thousand, seven hundred and forty-fourth. The meaning of the number 4744 in Maths: Is it Prime? Factorization and prime factors tree. The square root and cube root of 4744. What is 4744 in computer science, numerology, codes and images, writing and naming in other languages. Other interesting facts related to 4744.\n\n## What is 4,744 in other units\n\nThe decimal (Arabic) number 4744 converted to a Roman number is (IV)DCCXLIV. Roman and decimal number conversions.\n The number 4744 converted to a Mayan number is", null, "Decimal and Mayan number conversions.\n\n#### Weight conversion\n\n4744 kilograms (kg) = 10458.6 pounds (lbs)\n4744 pounds (lbs) = 2151.9 kilograms (kg)\n\n#### Length conversion\n\n4744 kilometers (km) equals to 2948 miles (mi).\n4744 miles (mi) equals to 7635 kilometers (km).\n4744 meters (m) equals to 15565 feet (ft).\n4744 feet (ft) equals 1446 meters (m).\n4744 centimeters (cm) equals to 1867.7 inches (in).\n4744 inches (in) equals to 12049.8 centimeters (cm).\n\n#### Temperature conversion\n\n4744° Fahrenheit (°F) equals to 2617.8° Celsius (°C)\n4744° Celsius (°C) equals to 8571.2° Fahrenheit (°F)\n\n#### Power conversion\n\n4744 Horsepower (hp) equals to 3488.73 kilowatts (kW)\n4744 kilowatts (kW) equals to 6450.92 horsepower (hp)\n\n#### Time conversion\n\n(hours, minutes, seconds, days, weeks)\n4744 seconds equals to 1 hour, 19 minutes, 4 seconds\n4744 minutes equals to 3 days, 7 hours, 4 minutes\n\n### Zip codes 4744\n\n• Zip code 04744 Fort Kent Mills, Maine, Aroostook, USA a map\n\n### Codes and images of the number 4744\n\nNumber 4744 morse code: ....- --... ....- ....-\nSign language for number 4744:", null, "", null, "", null, "", null, "Number 4744 in braille:", null, "Images of the number\nImage (1) of the numberImage (2) of the number", null, "", null, "More images, other sizes, codes and colors ...\n\n#### Number 4744 infographic", null, "### Gregorian, Hebrew, Islamic, Persian and Buddhist Year (Calendar)\n\nGregorian year 4744 is Buddhist year 5287.\nBuddhist year 4744 is Gregorian year 4201 .\nGregorian year 4744 is Islamic year 4248 or 4249.\nIslamic year 4744 is Gregorian year 5224 or 5225.\nGregorian year 4744 is Persian year 4122 or 4123.\nPersian year 4744 is Gregorian 5365 or 5366.\nGregorian year 4744 is Hebrew year 8504 or 8505.\nHebrew year 4744 is Gregorian year 984.\nThe Buddhist calendar is used in Sri Lanka, Cambodia, Laos, Thailand, and Burma. The Persian calendar is the official calendar in Iran and Afghanistan.\n\n## Share in social networks", null, "## Mathematics of no. 4744\n\n### Multiplications\n\n#### Multiplication table of 4744\n\n4744 multiplied by two equals 9488 (4744 x 2 = 9488).\n4744 multiplied by three equals 14232 (4744 x 3 = 14232).\n4744 multiplied by four equals 18976 (4744 x 4 = 18976).\n4744 multiplied by five equals 23720 (4744 x 5 = 23720).\n4744 multiplied by six equals 28464 (4744 x 6 = 28464).\n4744 multiplied by seven equals 33208 (4744 x 7 = 33208).\n4744 multiplied by eight equals 37952 (4744 x 8 = 37952).\n4744 multiplied by nine equals 42696 (4744 x 9 = 42696).\nshow multiplications by 6, 7, 8, 9 ...\n\n### Fractions: decimal fraction and common fraction\n\n#### Fraction table of 4744\n\nHalf of 4744 is 2372 (4744 / 2 = 2372).\nOne third of 4744 is 1581,3333 (4744 / 3 = 1581,3333 = 1581 1/3).\nOne quarter of 4744 is 1186 (4744 / 4 = 1186).\nOne fifth of 4744 is 948,8 (4744 / 5 = 948,8 = 948 4/5).\nOne sixth of 4744 is 790,6667 (4744 / 6 = 790,6667 = 790 2/3).\nOne seventh of 4744 is 677,7143 (4744 / 7 = 677,7143 = 677 5/7).\nOne eighth of 4744 is 593 (4744 / 8 = 593).\nOne ninth of 4744 is 527,1111 (4744 / 9 = 527,1111 = 527 1/9).\nshow fractions by 6, 7, 8, 9 ...\n\n### Calculator\n\n 4744\n\n#### Is Prime?\n\nThe number 4744 is not a prime number. The closest prime numbers are 4733, 4751.\nThe 4744th prime number in order is 45817.\n\n#### Factorization and factors (dividers)\n\nThe prime factors of 4744 are 2 * 2 * 2 * 593\nThe factors of 4744 are 1, 2, 4, 8, 593, 1186, 2372, 4744.\nTotal factors 8.\nSum of factors 8910 (4166).\n\n#### Powers\n\nThe second power of 47442 is 22.505.536.\nThe third power of 47443 is 106.766.262.784.\n\n#### Roots\n\nThe square root √4744 is 68,876701.\nThe cube root of 34744 is 16,802796.\n\n#### Logarithms\n\nThe natural logarithm of No. ln 4744 = loge 4744 = 8,464636.\nThe logarithm to base 10 of No. log10 4744 = 3,676145.\nThe Napierian logarithm of No. log1/e 4744 = -8,464636.\n\n### Trigonometric functions\n\nThe cosine of 4744 is 0,98103.\nThe sine of 4744 is 0,193858.\nThe tangent of 4744 is 0,197607.\n\n## Number 4744 in Computer Science\n\nCode typeCode value\nPIN 4744 It's recommended that you use 4744 as your password or PIN.\n4744 Number of bytes4.6KB\nUnix timeUnix time 4744 is equal to Thursday Jan. 1, 1970, 1:19:04 a.m. GMT\nIPv4, IPv6Number 4744 internet address in dotted format v4 0.0.18.136, v6 ::1288\n4744 Decimal = 1001010001000 Binary\n4744 Decimal = 20111201 Ternary\n4744 Decimal = 11210 Octal\n4744 Decimal = 1288 Hexadecimal (0x1288 hex)\n4744 BASE64NDc0NA==\n4744 MD542778ef0b5805a96f9511e20b5611fce\n4744 SHA16c3ed1d27b8a822a66dba5180837cda77c5e445f\n4744 SHA224a5b8617fa24b34a5f4a9773da9d755c2139faf0a10a5a0d1a044aab2\n4744 SHA256fdba794336e0776e12850af77674a568e984745e0c1fa7318f23b62b662cabd1\n4744 SHA384b25aacc886366276439b75837a7f3e10c49e5ca7398440c2b564bf28361b2481a9fe7a21c8eaa9a6c21d7a6247aeecd2\nMore SHA codes related to the number 4744 ...\n\nIf you know something interesting about the 4744 number that you did not find on this page, do not hesitate to write us here.\n\n## Numerology 4744\n\n### The meaning of the number 4 (four), numerology 4\n\nCharacter frequency 4: 3\n\nThe number four (4) came to establish stability and to follow the process in the world. It needs to apply a clear purpose to develop internal stability. It evokes a sense of duty and discipline. Number 4 personality speaks of solid construction. It teaches us to evolve in the tangible and material world, to develop reason and logic and our capacity for effort, accomplishment and work.\nMore about the the number 4 (four), numerology 4 ...\n\n### The meaning of the number 7 (seven), numerology 7\n\nCharacter frequency 7: 1\n\nThe number 7 (seven) is the sign of the intellect, thought, psychic analysis, idealism and wisdom. This number first needs to gain self-confidence and to open his/her life and heart to experience trust and openness in the world. And then you can develop or balance the aspects of reflection, meditation, seeking knowledge and knowing.\nMore about the the number 7 (seven), numerology 7 ...\n\n## Interesting facts about the number 4744\n\n### Asteroids\n\n• (4744) Rovereto is asteroid number 4744. It was discovered by H. Debehogne from La Silla Observatory on 9/2/1988.\n\n### Distances between cities\n\n• There is a 4,744 miles (7,634 km) direct distance between Abidjan (Ivory Coast) and Ufa (Russia).\n• There is a 2,948 miles (4,744 km) direct distance between Āgra (India) and Kiev (Ukraine).\n• There is a 4,744 miles (7,634 km) direct distance between Baotou (China) and Rome (Italy).\n• There is a 2,948 miles (4,744 km) direct distance between Belgrade (Serbia) and Faisalābād (Pakistan).\n• More distances between cities ...\n• There is a 4,744 miles (7,634 km) direct distance between Belgrade (Serbia) and Kaifeng (China).\n• There is a 2,948 miles (4,744 km) direct distance between Bucheon-si (South Korea) and Muzaffarābād (Pakistan).\n• There is a 2,948 miles (4,744 km) direct distance between Cali (Colombia) and Maceió (Brazil).\n• There is a 4,744 miles (7,634 km) direct distance between Curitiba (Brazil) and Pretoria (South Africa).\n• There is a 2,948 miles (4,744 km) direct distance between Hāora (India) and Sakai (Japan).\n• There is a 2,948 miles (4,744 km) direct distance between Homs (Syria) and Mbuji-Mayi (Zaire).\n• There is a 4,744 miles (7,634 km) direct distance between Ibadan (Nigeria) and Shivaji Nagar (India).\n• There is a 2,948 miles (4,744 km) direct distance between İzmir (Turkey) and Sūrat (India).\n• There is a 4,744 miles (7,634 km) direct distance between Iztapalapa (Mexico) and Nova Iguaçu (Brazil).\n• There is a 4,744 miles (7,634 km) direct distance between Johannesburg (South Africa) and Rabat (Morocco).\n• There is a 4,744 miles (7,634 km) direct distance between Kiev (Ukraine) and Philadelphia (USA).\n• There is a 4,744 miles (7,634 km) direct distance between Kota Bharu (Malaysia) and Voronezh (Russia).\n• There is a 4,744 miles (7,634 km) direct distance between Lusaka (Zambia) and Saratov (Russia).\n• There is a 4,744 miles (7,634 km) direct distance between Manhattan (USA) and Ouagadougou (Burkina Faso).\n• There is a 4,744 miles (7,634 km) direct distance between Saint Petersburg (Russia) and Vancouver (Alberta).\n\n## № 4,744 in other languages\n\nHow to say or write the number four thousand, seven hundred and forty-four in Spanish, German, French and other languages. The character used as the thousands separator.\n Spanish: 🔊 (número 4.744) cuatro mil setecientos cuarenta y cuatro German: 🔊 (Nummer 4.744) viertausendsiebenhundertvierundvierzig French: 🔊 (nombre 4 744) quatre mille sept cent quarante-quatre Portuguese: 🔊 (número 4 744) quatro mil, setecentos e quarenta e quatro Hindi: 🔊 (संख्या 4 744) चार हज़ार, सात सौ, चौंतालीस Chinese: 🔊 (数 4 744) 四千七百四十四 Arabian: 🔊 (عدد 4,744) أربعة آلاف و سبعمائة و أربعة و أربعون Czech: 🔊 (číslo 4 744) čtyři tisíce sedmset čtyřicet čtyři Korean: 🔊 (번호 4,744) 사천칠백사십사 Danish: 🔊 (nummer 4 744) firetusinde og syvhundrede og fireogfyrre Hebrew: (מספר 4,744) ארבעת אלפים שבע מאות ארבעים וארבע Dutch: 🔊 (nummer 4 744) vierduizendzevenhonderdvierenveertig Japanese: 🔊 (数 4,744) 四千七百四十四 Indonesian: 🔊 (jumlah 4.744) empat ribu tujuh ratus empat puluh empat Italian: 🔊 (numero 4 744) quattromilasettecentoquarantaquattro Norwegian: 🔊 (nummer 4 744) fire tusen, syv hundre og førti-fire Polish: 🔊 (liczba 4 744) cztery tysiące siedemset czterdzieści cztery Russian: 🔊 (номер 4 744) четыре тысячи семьсот сорок четыре Turkish: 🔊 (numara 4,744) dörtbinyediyüzkırkdört Thai: 🔊 (จำนวน 4 744) สี่พันเจ็ดร้อยสี่สิบสี่ Ukrainian: 🔊 (номер 4 744) чотири тисячі сімсот сорок чотири Vietnamese: 🔊 (con số 4.744) bốn nghìn bảy trăm bốn mươi bốn Other languages ...\n\n## News to email\n\nI have read the privacy policy\n\n## Comment\n\nIf you know something interesting about the number 4744 or any other natural number (positive integer), please write to us here or on Facebook.\n\n#### Comment (Maximum 2000 characters) *\n\nThe content of the comments is the opinion of the users and not of number.academy. It is not allowed to pour comments contrary to the laws, insulting, illegal or harmful to third parties. Number.academy reserves the right to remove or not publish any inappropriate comment. It also reserves the right to publish a comment on another topic. Privacy Policy." ]
[ null, "https://numero.wiki/s/numeros-mayas/numero-maya-4744.png", null, "https://numero.wiki/s/senas/lenguaje-de-senas-numero-4.png", null, "https://numero.wiki/s/senas/lenguaje-de-senas-numero-7.png", null, "https://numero.wiki/s/senas/lenguaje-de-senas-numero-4.png", null, "https://numero.wiki/s/senas/lenguaje-de-senas-numero-4.png", null, "https://number.academy/img/braille-4744.svg", null, "https://numero.wiki/img/a-4744.jpg", null, "https://numero.wiki/img/b-4744.jpg", null, "https://number.academy/i/infographics/4/number-4744-infographic.png", null, "https://numero.wiki/s/share-desktop.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6496643,"math_prob":0.8610548,"size":9664,"snap":"2023-40-2023-50","text_gpt3_token_len":3474,"char_repetition_ratio":0.17070393,"word_repetition_ratio":0.101217166,"special_character_ratio":0.38203642,"punctuation_ratio":0.15523657,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9699945,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,2,null,null,null,null,null,null,null,null,null,2,null,2,null,2,null,2,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-10T06:53:19Z\",\"WARC-Record-ID\":\"<urn:uuid:d00b890f-a3f4-44e0-80dd-acec3704509c>\",\"Content-Length\":\"45352\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:006fb1a8-4ea7-4bfa-aaa2-ec14fa827ae3>\",\"WARC-Concurrent-To\":\"<urn:uuid:c4c1d1e5-f8ea-4719-98da-55b87fb01b26>\",\"WARC-IP-Address\":\"162.0.227.212\",\"WARC-Target-URI\":\"https://number.academy/4744\",\"WARC-Payload-Digest\":\"sha1:CLBNWE5FRNHKWT5AXLAFWO2SRWAZDQ5X\",\"WARC-Block-Digest\":\"sha1:LFYFSUKB4MXTY2IWZKX6NJLQ4WWONR2G\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679101282.74_warc_CC-MAIN-20231210060949-20231210090949-00859.warc.gz\"}"}
https://edurev.in/course/quiz/attempt/-1_Test-Laws-Of-Motion-1-From-Past-28-Years-Questions/48af3d12-60ac-4164-95c7-7d82db7a14d6
[ "Courses\n\n# Test: Laws Of Motion 1 - From Past 28 Years Questions\n\n## 14 Questions MCQ Test Physics For JEE | Test: Laws Of Motion 1 - From Past 28 Years Questions\n\nDescription\nThis mock test of Test: Laws Of Motion 1 - From Past 28 Years Questions for NEET helps you for every NEET entrance exam. This contains 14 Multiple Choice Questions for NEET Test: Laws Of Motion 1 - From Past 28 Years Questions (mcq) to study with solutions a complete question bank. The solved questions answers in this Test: Laws Of Motion 1 - From Past 28 Years Questions quiz give you a good mix of easy questions and tough questions. NEET students definitely take this Test: Laws Of Motion 1 - From Past 28 Years Questions exercise for a better result in the exam. You can find other Test: Laws Of Motion 1 - From Past 28 Years Questions extra questions, long questions & short questions for NEET on EduRev as well by searching above.\nQUESTION: 1\n\n### A block of mass m is placed on a smooth wedge of inclination θθ. The whole system is accelerated horizontally so that the block does not slip on the wedge. The force exerted by the wedge on the block will be (g is acceleration due to gravity)\n\nSolution:", null, "", null, "", null, "", null, "", null, "QUESTION: 2\n\n### 300 J of work is done in sliding a 2 kg block up an inclined plane of height 10 m. Taking g = 10 m/s2, work done against friction is \n\nSolution:", null, "QUESTION: 3\n\n### A block  B is pushed momen tarily along  a horizontal surface  with an initial velocity V. If μ is the coefficient of sliding friction  between B and the surface, block B will come to rest after a time", null, "Solution:\n\nFriction is the retarding force for the block F = ma = μR = μmg\nTherefore, from the first equation of motion v = u – at", null, "QUESTION: 4\n\nSand is being dropped on a conveyor belt at the rate of M kg/s. The force necessary to keep the belt moving with a constant velocity of v m/s will be\n\nSolution:\n\nTaking derivative of mass. We cant take derivative of velocity hence it is constant as it is mentioned in the question.", null, "QUESTION: 5\n\nA body under the action of a force", null, "acquires an acceleration of 1 m/s2. The mass of this body must be\n\nSolution:", null, "QUESTION: 6\n\nThe mass of a lift is 2000 kg. When the tension in the supporting cable is 28000 N, then its acceleration is: \n\nSolution:\n\nNet force, F = T – mg\nma = T – mg\n2000 a = 28000 – 20000 = 8000", null, "QUESTION: 7\n\nA person of mass 60 kg is inside a lift of mass 940 kg and presses the button on control panel.The lift starts moving upwards with an acceleration  1.0 m/s2. If g = 10 ms–2, the tension in the supporting cable is \n\nSolution:", null, "Total mass = (60 + 940) kg = 1000 kg\nLet T be the tension in the supporting cable, then T – 1000g = 1000 × 1\n⇒ T = 1000 × 11 = 11000 N\n\nQUESTION: 8\n\nA body of mass M hits normally a rigid wall with velocity V and bounces back with the same velocity. The impulse experienced by the body is\n\nSolution:\n\nImpulse experienced by the body = change in momentum = MV – (–MV)\n= 2MV.\n\nQUESTION: 9\n\nA conveyor belt is moving at a constant speed of 2m/s. A box is gently dropped on it. The coefficient of friction between them is µ = 0.5. The distance that the box will move relative to belt before coming to rest on it taking g = 10 ms–2, is [2011M]\n\nSolution:\n\nFrictional force on the box f = μmg\n∴ Acceleration in the box", null, "", null, "⇒ distance = 0.4 m\n\nQUESTION: 10\n\nA car of mass 1000 kg negotiates a banked curve of radius 90 m on a frictionless road. If the banking angle is 45°, the speed of the car is : \n\nSolution:\n\nFor banking tan", null, "", null, "V = 30 m/s\n\nQUESTION: 11\n\nA stone is dropped from a height h. It hits the ground with a certain momentum P. If the same stone is dropped from a height 100% more than the previous height, the momentum when it hits the ground will change by : [2012M]\n\nSolution:\n\nMomentum", null, "(v2 = u2 + 2gh; Here u = 0)\n\nWhen stone hits the ground momentum", null, "when same stone dropped from 2h (100% of initial) then momentum", null, "Which is changed by 41% of initial.\n\nQUESTION: 12\n\nA car of mass m is moving on a level circular track of radius R. If μs represents the static friction between the road and tyres of the car, the maximum speed of the car in circular motion is given by :\n\nSolution:\n\nFor smooth driving maximum speed of car v then", null, "QUESTION: 13\n\nThree blocks with masses m, 2 m and 3 m are connected by strings as shown in the figure.After an upward force F is applied on block m, the masses move upward at constant speed v.What is the net force on the block of mass 2m? (g is the acceleration due to gravity) [NEET 2013]", null, "Solution:", null, "", null, "From figure F = 6 mg, As speed is constant, acceleration a = 0\n∴ 6 mg = 6ma = 0, F = 6 mg\n∴ T = 5 mg , T' = 3 mg T\" = 0\nFnet on block of mass 2 m = T – T' – 2 mg = 0\nALTERNATE :\nv  = constant  so, a = 0, Hence, Fnet = ma = 0\n\nQUESTION: 14\n\nA car is moving in a circular horizontal track of radius 10 m with a constant speed of 10 m/s. A bob is suspended from the roof of the car by a light wire of length 1.0 m. The angle made by the wire with the vertical is [NEET Kar. 2013]\n\nSolution:\n\nGiven; speed = 10 m/s; radius r = 10 m\n\nAngle made by the wire with the vertical", null, "", null, "" ]
[ null, "https://cdn3.edurev.in/ApplicationImages/Temp/3066490_0bd9c115-454b-428c-9239-2e5a10980746_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/3066490_e9867983-0875-4638-a861-80ba60f976de_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/3066490_1c19f37a-97d2-4880-8631-07596cfa2ae0_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/3066490_a98624e3-5e64-488d-90fc-c7db986a6c44_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/3066490_32ce8707-771a-48f2-bbb6-35fb26cd9981_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/105f21a9-7d7d-4533-930f-9e4c2ddad3e5_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/3066490_2c084c5c-bd41-4599-bffa-f6b46e6382ea_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/3066490_930cf142-59a0-4645-b69f-74b945ab935f_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/3e8aa370-23c2-4e07-ac15-d5a1fc388eda_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/3066490_09856fae-9aaf-413b-8774-fa87008cb737_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/e3de8309-1f87-46c6-a72b-7eb6ffa5bcce_lg.jpg", null, "https://cdn3.edurev.in/ApplicationImages/Temp/3066490_23f05678-da1b-472f-ba31-51ccf375a98e_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/3066490_f75f6f1e-fbf2-4db9-849b-2428a5eeffed_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/3066490_41e6e4d9-2060-44ff-a9e8-8e720732947c_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/3066490_07086641-3ab4-42f5-80fe-d28f137459a2_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/3066490_6d2f3ff7-4c57-4c07-a68d-73088113f972_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/3066490_496b9a16-4159-4802-8e2b-679aae510588_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/3066490_502b66e1-1215-47fd-a360-eb81433a42fe_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/3066490_8f5c32b8-649c-407a-98e9-60f49510411c_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/3066490_bb19fc31-9995-4c3d-a7a1-f372a1220d4f_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/3066490_28f590aa-b2a7-4f47-8d44-c6e040b7f545_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/3066490_7a066b08-5182-42a3-a259-bf494cbbce02_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/3066490_dc82a173-a3fa-4346-9841-e327a61af556_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/3066490_63cc1174-7659-4890-8799-1f97e13eb428_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/3066490_c3e69979-4f57-4c47-a0d6-dd27356aa723_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/3066490_daa27973-834f-46a3-a78c-ead5e76233ab_lg.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9090743,"math_prob":0.98538876,"size":3987,"snap":"2021-04-2021-17","text_gpt3_token_len":1144,"char_repetition_ratio":0.11599297,"word_repetition_ratio":0.036342323,"special_character_ratio":0.30875346,"punctuation_ratio":0.06860465,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99572843,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52],"im_url_duplicate_count":[null,3,null,2,null,3,null,3,null,3,null,2,null,3,null,3,null,2,null,3,null,2,null,3,null,3,null,2,null,2,null,2,null,2,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-18T20:17:41Z\",\"WARC-Record-ID\":\"<urn:uuid:156a9062-771c-4cf5-8b7c-988a90590664>\",\"Content-Length\":\"333802\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:65b48933-7800-4bc6-8e74-6a5b3b1a93c9>\",\"WARC-Concurrent-To\":\"<urn:uuid:1c375c64-31c6-4fc5-a464-3272c11d20dc>\",\"WARC-IP-Address\":\"34.87.155.163\",\"WARC-Target-URI\":\"https://edurev.in/course/quiz/attempt/-1_Test-Laws-Of-Motion-1-From-Past-28-Years-Questions/48af3d12-60ac-4164-95c7-7d82db7a14d6\",\"WARC-Payload-Digest\":\"sha1:6MXNRL72UQSI57PWVCKG4K766LWWLUUJ\",\"WARC-Block-Digest\":\"sha1:S6K4JKP53YFW5ABAJKN775VJZPSXNBMP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038860318.63_warc_CC-MAIN-20210418194009-20210418224009-00638.warc.gz\"}"}
https://avatto.com/ugc-net-paper1/paper1-practice/data-interpretation/pie-chart/page/5/
[ "# Data Interpretation - Pie Chart\n\n>>>>>>>>Pie Chart\n\nInstructions:\n\nThe pie chart given below shows the expenditure incurred in bringing out a book, by a publisher:", null, "study the graph and answer the questions 21-24  given below\n\nWhat is the central angle showing the cost of paper?\n\n• A\n\n16°", null, "", null, "• B\n\n32°", null, "", null, "• C\n\n28.8°", null, "", null, "• D\n\n57.6°", null, "", null, "• Option : D\n• Explanation :\n\nRequired Central Angle = (16/100) * 360 = 57.6° Click on Discuss to view users comments.\n\nInstructions:\n\nThe pie chart given below shows the expenditure incurred in bringing out a book, by a publisher:", null, "study the graph and answer the questions 21-24  given below\n\n• A\n\nRs. 6500", null, "", null, "• B\n\nRs. 2340", null, "", null, "• C\n\nRs. 4680", null, "", null, "• D\n\nRs. 7840", null, "", null, "• Option : A\n• Explanation : Let the royalty be Rs. x Then.\n\n36:10 : : 23400 : x\n\nSo, x = (10 * 23400)/36 = Rs. 6500\n\nInstructions:\n\nThe pie chart given below shows the expenditure incurred in bringing out a book, by a publisher:", null, "study the graph and answer the questions 21-24  given below\n\n• A\n\nRs. 8000", null, "", null, "• B\n\nRs. 14400", null, "", null, "• C\n\nRs. 46800", null, "", null, "• D\n\nRs. 40500", null, "", null, "• Option : D\n• Explanation : Let the expenditure on canvassing be Rs. x\n\nThen, 8:18 : : 18000:x\n\nSo, x = (18*18000)/8 = Rs.40500\n\nInstructions:\n\nThe pie chart given below shows the expenditure incurred in bringing out a book, by a publisher:", null, "study the graph and answer the questions 21-24  given below\n\n• A\n\n8%", null, "", null, "• B\n\n80%", null, "", null, "• C\n\n44 4/9%", null, "", null, "• D\n\nNone", null, "", null, "• Option : C\n• Explanation : If canvassing charges are Rs. 18, royalty is Rs. 10.\n\nOn Rs. 18. it is less by 8\n\nOn Rs. 100, it is less by ((8/18) * 100)%\n\n=44 4/9 %\n\n• A", null, "", null, "• B", null, "", null, "Related Quiz.\nPie Chart" ]
[ null, "https://www.avatto.com/a1/charts/q39.png", null, "https://avatto.com/wp-content/themes/studytadka/images/right.png", null, "https://avatto.com/wp-content/themes/studytadka/images/wrong.png", null, "https://avatto.com/wp-content/themes/studytadka/images/right.png", null, "https://avatto.com/wp-content/themes/studytadka/images/wrong.png", null, "https://avatto.com/wp-content/themes/studytadka/images/wrong.png", null, "https://avatto.com/wp-content/themes/studytadka/images/right.png", null, "https://avatto.com/wp-content/themes/studytadka/images/right.png", null, "https://avatto.com/wp-content/themes/studytadka/images/wrong.png", null, "https://www.avatto.com/a1/charts/q39.png", null, "https://avatto.com/wp-content/themes/studytadka/images/right.png", null, "https://avatto.com/wp-content/themes/studytadka/images/wrong.png", null, "https://avatto.com/wp-content/themes/studytadka/images/right.png", null, "https://avatto.com/wp-content/themes/studytadka/images/wrong.png", null, "https://avatto.com/wp-content/themes/studytadka/images/wrong.png", null, "https://avatto.com/wp-content/themes/studytadka/images/right.png", null, "https://avatto.com/wp-content/themes/studytadka/images/right.png", null, "https://avatto.com/wp-content/themes/studytadka/images/wrong.png", null, "https://www.avatto.com/a1/charts/q39.png", null, "https://avatto.com/wp-content/themes/studytadka/images/right.png", null, "https://avatto.com/wp-content/themes/studytadka/images/wrong.png", null, "https://avatto.com/wp-content/themes/studytadka/images/right.png", null, "https://avatto.com/wp-content/themes/studytadka/images/wrong.png", null, "https://avatto.com/wp-content/themes/studytadka/images/wrong.png", null, "https://avatto.com/wp-content/themes/studytadka/images/right.png", null, "https://avatto.com/wp-content/themes/studytadka/images/right.png", null, "https://avatto.com/wp-content/themes/studytadka/images/wrong.png", null, "https://www.avatto.com/a1/charts/q39.png", null, "https://avatto.com/wp-content/themes/studytadka/images/right.png", null, "https://avatto.com/wp-content/themes/studytadka/images/wrong.png", null, "https://avatto.com/wp-content/themes/studytadka/images/right.png", null, "https://avatto.com/wp-content/themes/studytadka/images/wrong.png", null, "https://avatto.com/wp-content/themes/studytadka/images/wrong.png", null, "https://avatto.com/wp-content/themes/studytadka/images/right.png", null, "https://avatto.com/wp-content/themes/studytadka/images/right.png", null, "https://avatto.com/wp-content/themes/studytadka/images/wrong.png", null, "https://avatto.com/wp-content/themes/studytadka/images/right.png", null, "https://avatto.com/wp-content/themes/studytadka/images/wrong.png", null, "https://avatto.com/wp-content/themes/studytadka/images/right.png", null, "https://avatto.com/wp-content/themes/studytadka/images/wrong.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90163493,"math_prob":0.878635,"size":1168,"snap":"2021-21-2021-25","text_gpt3_token_len":300,"char_repetition_ratio":0.13487972,"word_repetition_ratio":0.50476193,"special_character_ratio":0.2979452,"punctuation_ratio":0.13061224,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99783015,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80],"im_url_duplicate_count":[null,4,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,4,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,4,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,4,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-23T21:51:35Z\",\"WARC-Record-ID\":\"<urn:uuid:ca9cef4a-d7e7-4100-b4e1-1613a87b73b1>\",\"Content-Length\":\"141862\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d708d67d-327e-4237-b774-e671f53dc3ce>\",\"WARC-Concurrent-To\":\"<urn:uuid:053781a7-baab-4e6f-bc37-02a152e899c8>\",\"WARC-IP-Address\":\"184.168.97.219\",\"WARC-Target-URI\":\"https://avatto.com/ugc-net-paper1/paper1-practice/data-interpretation/pie-chart/page/5/\",\"WARC-Payload-Digest\":\"sha1:6IJPZUVPIANZZFQODG2ETLRPN6DT5AGC\",\"WARC-Block-Digest\":\"sha1:E6GDJ4OHV33RLLNBZWEYNKVRXH4JJ5LP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488540235.72_warc_CC-MAIN-20210623195636-20210623225636-00035.warc.gz\"}"}
https://factoring-polynomials.com/math-solve/hard-algebra-problems.html
[ "", null, "", null, "Home", null, "A Summary of Factoring Polynomials", null, "Factoring The Difference of 2 Squares", null, "Factoring Trinomials", null, "Quadratic Expressions", null, "Factoring Trinomials", null, "The 7 Forms of Factoring", null, "Factoring Trinomials", null, "Finding The Greatest Common Factor (GCF)", null, "Factoring Trinomials", null, "Quadratic Expressions", null, "Factoring simple expressions", null, "Polynomials", null, "Factoring Polynomials", null, "Fractoring Polynomials", null, "Other Math Resources", null, "Factoring Polynomials", null, "Polynomials", null, "Finding the Greatest Common Factor (GCF)", null, "Factoring Trinomials", null, "Finding the Least Common Multiples\nTry the Free Math Solver or Scroll down to Tutorials!\n\n Depdendent Variable\n\n Number of equations to solve: 23456789\n Equ. #1:\n Equ. #2:\n\n Equ. #3:\n\n Equ. #4:\n\n Equ. #5:\n\n Equ. #6:\n\n Equ. #7:\n\n Equ. #8:\n\n Equ. #9:\n\n Solve for:\n\n Dependent Variable\n\n Number of inequalities to solve: 23456789\n Ineq. #1:\n Ineq. #2:\n\n Ineq. #3:\n\n Ineq. #4:\n\n Ineq. #5:\n\n Ineq. #6:\n\n Ineq. #7:\n\n Ineq. #8:\n\n Ineq. #9:\n\n Solve for:\n\n Please use this form if you would like to have this math solver on your website, free of charge. Name: Email: Your Website: Msg:\n\n# Hard Algebra Problems?\n\nBelow is a number of search phrases that our visitors entered recently to get to website.\n\nHow is this of help ?\n\n• Locate the term that you are searching for (i.e. Hard Algebra Problems) in the table below\n\n• Click on the appropriate program demo found in the same row  as your search term Hard Algebra Problems\n\n• If you find the software demo helpful click on the purchase button to obtain the software at a special low price extended only to factoring-polynomials.com users\n\n Related Search Keywords Algebrator Flash Demo Algebrator Static Demo Purchase now how to simplify the perimeter of a polynomial", null, "", null, "", null, "calculate square root exponents", null, "", null, "", null, "calculate \"half life\" calculus calculator", null, "", null, "", null, "7TH GRADE FREE PRINTABLE FRACTION +WORK +SHEETS", null, "", null, "", null, "exercises math sixth grade", null, "", null, "", null, "university algebra questions", null, "", null, "", null, "factorising maths ppt", null, "", null, "", null, "Math A Regents study sheet", null, "", null, "", null, "quadratic equation vertex", null, "", null, "", null, "math algebra 2 free worksheets", null, "", null, "", null, "combining like terms skill in algebra", null, "", null, "", null, "formula maths primary", null, "", null, "", null, "+\"McDougal Littell\" +math +\"7th grade\" +\"final exam\"", null, "", null, "", null, "6th grade math worksheets speed and distance", null, "", null, "", null, "free 5th grade math taks help", null, "", null, "", null, "boolean algebra simplifier", null, "", null, "", null, "+fifth grade math simplifying ratios", null, "", null, "", null, "nys ninth grade english test prep", null, "", null, "", null, "math slope linear equation exercise", null, "", null, "", null, "Solving Cubed Radicals equations", null, "", null, "", null, "visual algebra", null, "", null, "", null, "graphing a circle in excel", null, "", null, "", null, "casio binomial factor program", null, "", null, "", null, "what is the rule in adding and subtracting integers", null, "", null, "", null, "printable homework for 3 grade", null, "", null, "", null, "Prev Next" ]
[ null, "https://factoring-polynomials.com/img/23.jpg", null, "https://factoring-polynomials.com/img/311.gif", null, "https://factoring-polynomials.com/img/311.gif", null, "https://factoring-polynomials.com/img/311.gif", null, "https://factoring-polynomials.com/img/311.gif", null, "https://factoring-polynomials.com/img/311.gif", null, "https://factoring-polynomials.com/img/311.gif", null, "https://factoring-polynomials.com/img/311.gif", null, "https://factoring-polynomials.com/img/311.gif", null, "https://factoring-polynomials.com/img/311.gif", null, "https://factoring-polynomials.com/img/311.gif", null, "https://factoring-polynomials.com/img/311.gif", null, "https://factoring-polynomials.com/img/311.gif", null, "https://factoring-polynomials.com/img/311.gif", null, "https://factoring-polynomials.com/img/311.gif", null, "https://factoring-polynomials.com/img/311.gif", null, "https://factoring-polynomials.com/img/311.gif", null, "https://factoring-polynomials.com/img/311.gif", null, "https://factoring-polynomials.com/img/311.gif", null, "https://factoring-polynomials.com/img/311.gif", null, "https://factoring-polynomials.com/img/311.gif", null, "https://factoring-polynomials.com/img/311.gif", null, "https://factoring-polynomials.com/images/flash_demo_button.gif", null, "https://factoring-polynomials.com/images/screenshots_button.gif", null, "https://factoring-polynomials.com/images/buy.jpg", null, "https://factoring-polynomials.com/images/flash_demo_button.gif", null, "https://factoring-polynomials.com/images/screenshots_button.gif", null, "https://factoring-polynomials.com/images/buy.jpg", null, "https://factoring-polynomials.com/images/flash_demo_button.gif", null, "https://factoring-polynomials.com/images/screenshots_button.gif", null, "https://factoring-polynomials.com/images/buy.jpg", null, "https://factoring-polynomials.com/images/flash_demo_button.gif", null, "https://factoring-polynomials.com/images/screenshots_button.gif", null, "https://factoring-polynomials.com/images/buy.jpg", null, "https://factoring-polynomials.com/images/flash_demo_button.gif", null, "https://factoring-polynomials.com/images/screenshots_button.gif", null, "https://factoring-polynomials.com/images/buy.jpg", null, "https://factoring-polynomials.com/images/flash_demo_button.gif", null, "https://factoring-polynomials.com/images/screenshots_button.gif", null, "https://factoring-polynomials.com/images/buy.jpg", null, "https://factoring-polynomials.com/images/flash_demo_button.gif", null, "https://factoring-polynomials.com/images/screenshots_button.gif", null, "https://factoring-polynomials.com/images/buy.jpg", null, "https://factoring-polynomials.com/images/flash_demo_button.gif", null, "https://factoring-polynomials.com/images/screenshots_button.gif", null, "https://factoring-polynomials.com/images/buy.jpg", null, "https://factoring-polynomials.com/images/flash_demo_button.gif", null, "https://factoring-polynomials.com/images/screenshots_button.gif", null, "https://factoring-polynomials.com/images/buy.jpg", null, "https://factoring-polynomials.com/images/flash_demo_button.gif", null, "https://factoring-polynomials.com/images/screenshots_button.gif", null, "https://factoring-polynomials.com/images/buy.jpg", null, "https://factoring-polynomials.com/images/flash_demo_button.gif", null, "https://factoring-polynomials.com/images/screenshots_button.gif", null, "https://factoring-polynomials.com/images/buy.jpg", null, "https://factoring-polynomials.com/images/flash_demo_button.gif", null, "https://factoring-polynomials.com/images/screenshots_button.gif", null, "https://factoring-polynomials.com/images/buy.jpg", null, "https://factoring-polynomials.com/images/flash_demo_button.gif", null, "https://factoring-polynomials.com/images/screenshots_button.gif", null, "https://factoring-polynomials.com/images/buy.jpg", null, "https://factoring-polynomials.com/images/flash_demo_button.gif", null, "https://factoring-polynomials.com/images/screenshots_button.gif", null, "https://factoring-polynomials.com/images/buy.jpg", null, "https://factoring-polynomials.com/images/flash_demo_button.gif", null, "https://factoring-polynomials.com/images/screenshots_button.gif", null, "https://factoring-polynomials.com/images/buy.jpg", null, "https://factoring-polynomials.com/images/flash_demo_button.gif", null, "https://factoring-polynomials.com/images/screenshots_button.gif", null, "https://factoring-polynomials.com/images/buy.jpg", null, "https://factoring-polynomials.com/images/flash_demo_button.gif", null, "https://factoring-polynomials.com/images/screenshots_button.gif", null, "https://factoring-polynomials.com/images/buy.jpg", null, "https://factoring-polynomials.com/images/flash_demo_button.gif", null, "https://factoring-polynomials.com/images/screenshots_button.gif", null, "https://factoring-polynomials.com/images/buy.jpg", null, "https://factoring-polynomials.com/images/flash_demo_button.gif", null, "https://factoring-polynomials.com/images/screenshots_button.gif", null, "https://factoring-polynomials.com/images/buy.jpg", null, "https://factoring-polynomials.com/images/flash_demo_button.gif", null, "https://factoring-polynomials.com/images/screenshots_button.gif", null, "https://factoring-polynomials.com/images/buy.jpg", null, "https://factoring-polynomials.com/images/flash_demo_button.gif", null, "https://factoring-polynomials.com/images/screenshots_button.gif", null, "https://factoring-polynomials.com/images/buy.jpg", null, "https://factoring-polynomials.com/images/flash_demo_button.gif", null, "https://factoring-polynomials.com/images/screenshots_button.gif", null, "https://factoring-polynomials.com/images/buy.jpg", null, "https://factoring-polynomials.com/images/flash_demo_button.gif", null, "https://factoring-polynomials.com/images/screenshots_button.gif", null, "https://factoring-polynomials.com/images/buy.jpg", null, "https://factoring-polynomials.com/images/flash_demo_button.gif", null, "https://factoring-polynomials.com/images/screenshots_button.gif", null, "https://factoring-polynomials.com/images/buy.jpg", null, "https://factoring-polynomials.com/images/flash_demo_button.gif", null, "https://factoring-polynomials.com/images/screenshots_button.gif", null, "https://factoring-polynomials.com/images/buy.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9218912,"math_prob":0.8533457,"size":563,"snap":"2022-40-2023-06","text_gpt3_token_len":115,"char_repetition_ratio":0.10375671,"word_repetition_ratio":0.0,"special_character_ratio":0.19538188,"punctuation_ratio":0.06363636,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9982499,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-03T21:46:24Z\",\"WARC-Record-ID\":\"<urn:uuid:fdbc4efd-bba0-4e86-8336-93e91f8c76ec>\",\"Content-Length\":\"92907\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:260f9acb-3869-4ce7-9724-eb7a8ec24047>\",\"WARC-Concurrent-To\":\"<urn:uuid:22ef12da-08c9-43b5-b146-7544f5c0c3d7>\",\"WARC-IP-Address\":\"35.80.222.224\",\"WARC-Target-URI\":\"https://factoring-polynomials.com/math-solve/hard-algebra-problems.html\",\"WARC-Payload-Digest\":\"sha1:APQEA23IYPYJOXPXVG6Y2TLB4BUXXK2M\",\"WARC-Block-Digest\":\"sha1:66VMV52PC7GZ6PYJG6OUBE5N3QZSWVLY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337432.78_warc_CC-MAIN-20221003200326-20221003230326-00108.warc.gz\"}"}
https://se.mathworks.com/help/dsp/ref/dsp.analyticsignal-system-object.html
[ "# dsp.AnalyticSignal\n\nAnalytic signals of discrete-time inputs\n\n## Description\n\nThe `dsp.AnalyticSignal` System object™ computes analytic signals of discrete-time inputs. The real part of the analytic signal in each channel is a replica of the real input in that channel, and the imaginary part is the Hilbert transform of the input. In the frequency domain, the analytic signal doubles the positive frequency content of the original signal while zeroing-out negative frequencies and retaining the DC component. The object computes the Hilbert transform using an equiripple FIR filter.\n\nTo compute the analytic signal of a discrete-time input:\n\n1. Create the `dsp.AnalyticSignal` object and set its properties.\n\n2. Call the object with arguments, as if it were a function.\n\n## Creation\n\n### Syntax\n\n``anaSig = dsp.AnalyticSignal``\n``anaSig = dsp.AnalyticSignal(order)``\n``anaSig = dsp.AnalyticSignal(Name,Value)``\n\n### Description\n\n````anaSig = dsp.AnalyticSignal` returns an analytic signal object, `anaSig`, that computes the complex analytic signal corresponding to each channel of a real M-by-N input matrix.```\n\nexample\n\n````anaSig = dsp.AnalyticSignal(order)` returns an analytic signal object, `anaSig`, with the FilterOrder property set to `order`.```\n````anaSig = dsp.AnalyticSignal(Name,Value)` returns an analytic signal object, `anaSig`, with each specified property set to the specified value.```\n\n## Properties\n\nexpand all\n\nUnless otherwise indicated, properties are nontunable, which means you cannot change their values after calling the object. Objects lock when you call them, and the `release` function unlocks them.\n\nIf a property is tunable, you can change its value at any time.\n\nOrder of the equiripple FIR filter used in computing the Hilbert transform, specified as an even integer scalar greater than 3.\n\nData Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`\n\n## Usage\n\n### Syntax\n\n``y = anaSig(x)``\n\n### Description\n\nexample\n\n````y = anaSig(x)` computes the analytic signal, `y`, of the M-by-N input matrix `x`, according to the equation $Y=X+jH\\left\\{X\\right\\}$ where j is the imaginary unit and $H\\left\\{X\\right\\}$ denotes the Hilbert transform.Each of the N columns in `x` contains M sequential time samples from an independent channel. The method computes the analytic signal for each channel.```\n\n### Input Arguments\n\nexpand all\n\nData input, specified as a vector or a matrix.\n\nData Types: `single` | `double`\nComplex Number Support: Yes\n\n### Output Arguments\n\nexpand all\n\nAnalytic signal output, returned as a vector or a matrix.\n\nData Types: `single` | `double`\nComplex Number Support: Yes\n\n## Object Functions\n\nTo use an object function, specify the System object as the first input argument. For example, to release system resources of a System object named `obj`, use this syntax:\n\n`release(obj)`\n\nexpand all\n\n `step` Run System object algorithm `release` Release resources and allow changes to System object property values and input characteristics `reset` Reset internal states of System object\n\n## Examples\n\ncollapse all\n\nNote: This example runs only in R2016b or later. If you are using an earlier release, replace each call to the function with the equivalent `step` syntax. For example, myObject(x) becomes step(myObject,x).\n\nCompute the analytic signal of a sinusoidal input.\n\n```t = (-1:0.01:1)'; x = sin(4*pi*t); anaSig = dsp.AnalyticSignal(200); y = anaSig(x);```\n\nView the analytic signal.\n\n```subplot(2,1,1); plot(t, x) title('Original Signal'); subplot(2,1,2), plot(t, [real(y) imag(y)]); title('Analytic signal of the input') legend('Real signal','Imaginary signal',... 'Location','best');```", null, "## Algorithms\n\nThis object implements the algorithm, inputs, and outputs described on the Analytic Signal block reference page. The object properties correspond to the block parameters." ]
[ null, "https://se.mathworks.com/help/examples/dsp/win64/ComputeTheAnalyticSignalExample_01.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.73344755,"math_prob":0.9354978,"size":1916,"snap":"2020-24-2020-29","text_gpt3_token_len":426,"char_repetition_ratio":0.17416318,"word_repetition_ratio":0.030821918,"special_character_ratio":0.18841337,"punctuation_ratio":0.12427746,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.991959,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-24T23:37:40Z\",\"WARC-Record-ID\":\"<urn:uuid:adad95fe-b69d-425c-bfc0-6c6877518a59>\",\"Content-Length\":\"86287\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a1923f83-0ccf-44f4-9df6-940e3c95e99b>\",\"WARC-Concurrent-To\":\"<urn:uuid:eb66d395-84f4-4e8f-84f2-e2c5bffa920e>\",\"WARC-IP-Address\":\"104.110.193.39\",\"WARC-Target-URI\":\"https://se.mathworks.com/help/dsp/ref/dsp.analyticsignal-system-object.html\",\"WARC-Payload-Digest\":\"sha1:WPRDLPPKFONY7MO74LZNIUAPGTWLDUJE\",\"WARC-Block-Digest\":\"sha1:SFP354ZTGKYOL6XK4WFSSTWR6I7WR6UX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347385193.5_warc_CC-MAIN-20200524210325-20200525000325-00005.warc.gz\"}"}
https://www.calculatorbit.com/en/length/22-meter-to-kilometer
[ "# Convert 22 Meter to Kilometer\n\nResult:\n\n22 Meter = 0.022 Kilometer (km)\n\nRounded: ( Nearest 4 digits)\n\n22 Meter is 0.022 Kilometer (km)\n\n22 Meter is 22m\n\n## How to Convert Meter to Kilometer (Explanation)\n\n• 1 meter = 0.001 km (Nearest 4 digits)\n• 1 kilometer = 1000 m (Nearest 4 digits)\n\nThere are 0.001 Kilometer in 1 Meter. To convert Meter to Kilometer all you need to do is multiple the Meter with 0.001.\n\nIn formula distance is denoted with d\n\nThe distance d in Kilometer (km) is equal to 0.001 times the distance in meter (m):\n\n### Equation\n\nd (km) = d (m) × 0.001\n\nFormula for 22 Meter (m) to Kilometer (km) conversion:\n\nd (km) = 22 m × 0.001 => 0.022 km\n\n## How many Kilometer in a Meter\n\nOne Meter is equal to 0.001 Kilometer\n\n1 m = 1 m × 0.001 => 0.001 km\n\n## How many Meter in a Kilometer\n\nOne Kilometer is equal to 1000 Meter\n\n1 km = 1 km / 0.001 => 1000 m\n\n## meter:\n\nThe meter (symbol: m) is unit of length in the international System of units (SI), Meter is American spelling and metre is British spelling. Meter was first originally defined in 1793 as 1/10 millionth of the distance from the equator to the North Pole along a great circle. So the length of circle is 40075.017 km. The current definition of meter is described as the length of the path travelledby light in a vacuum in 1/299792458 of a second, later definition of meter is rephrased to include the definition of a second in terms of the caesium frequency (ΔνCs; 299792458 m/s).\n\n## kilometer:\n\nThe kilometer (symbol: km) is a unit of length in the International System of Units (SI), equal to 1000 meters. Kilometer is most commonly used measurement unit to measure the distance between physical places all around the world.\n\n## Meter to Kilometer Calculations Table\n\nNow by following above explained formulas we can prepare a Meter to Kilometer Chart.\n\nMeter (m) Kilometer (km)\n18 0.018\n19 0.019\n20 0.02\n21 0.021\n22 0.022\n23 0.023\n24 0.024\n25 0.025\n26 0.026\n27 0.027\n\nNearest 4 digits\n\n## Convert from Meter to other units\n\nHere are some quick links to convert 22 Meter to other length units.\n\n## Convert to Meter from other units\n\nHere are some quick links to convert other length units to Meter.\n\n## FAQs About Meter and Kilometer\n\nConverting from one Meter to Kilometer or Kilometer to Meter sometimes gets confusing." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.773515,"math_prob":0.9949726,"size":3958,"snap":"2023-14-2023-23","text_gpt3_token_len":1185,"char_repetition_ratio":0.29590288,"word_repetition_ratio":0.029325513,"special_character_ratio":0.310763,"punctuation_ratio":0.10102302,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9951696,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-26T19:28:10Z\",\"WARC-Record-ID\":\"<urn:uuid:b2b91929-64fe-4894-bc11-390c0290471a>\",\"Content-Length\":\"31883\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c1e9e7d7-a69b-42c3-b932-cc8d7a489f73>\",\"WARC-Concurrent-To\":\"<urn:uuid:61400685-9203-4b9b-8718-139ebf15fd8d>\",\"WARC-IP-Address\":\"104.21.18.139\",\"WARC-Target-URI\":\"https://www.calculatorbit.com/en/length/22-meter-to-kilometer\",\"WARC-Payload-Digest\":\"sha1:WJ2AHHX2TXGGBS4ZTDKWP6VKJOQJNGX6\",\"WARC-Block-Digest\":\"sha1:M3JFPTDDQWMJ7G3DOVRRM2WCRBR5MFPT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296946445.46_warc_CC-MAIN-20230326173112-20230326203112-00044.warc.gz\"}"}
https://www.philipzucker.com/more-opencv/
[ "# More Opencv\n\nCanny finds edges. Edges are basically places of very large derivative in the image. Then the canny algorithm cleans it up a little.\n\nFindContours seems to be a clutch dude\n\nhttp://dsynflo.blogspot.com/2014/10/opencv-qr-code-detection-and-extraction.html\n\nThis guy uses it to\n\n```import cv2\nimport numpy as np\n\ncap = cv2.VideoCapture(0)\n\nwhile(1):\n\n# Take each frame\n\ngray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)\nedges = cv2.Canny(gray,100,200)\n#ret,thresh = cv2.threshold(gray,50,255,cv2.THRESH_BINARY)\n\nkernel = np.ones((5,5),np.uint8)\nedges = cv2.dilate(edges,kernel,iterations = 5) # really chunks it up\n\ncontours,hierarchy= cv2.findContours(edges,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)[-2:]\ncv2.drawContours(frame, contours, -1, (0,0,255), 3)\ncv2.imshow('res',cv2.pyrDown(frame))\nk = cv2.waitKey(5) & 0xFF\nif k == 27:\nbreak\n\ncv2.destroyAllWindows()```\n\nThe dilation reduces the number of contours to something more reasonable\n\nEach contour is of the format [[[x y]], [[x y]], [[x y]]]\n\nI had a problem with draw contours until I found out I needed to write onto a color image with it.\n\nThe good features to track\n\n```import cv2\nimport numpy as np\n\ncap = cv2.VideoCapture(0)\n\nwhile(1):\n\ngray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)\ncorners = cv2.goodFeaturesToTrack(gray,25,0.01,10)\ncorners = np.int0(corners)\nfor i in corners:\nx,y = i.ravel()\ncv2.circle(frame,(x,y),8,[0,0,255],-1) #image center radius color thickness\ncv2.imshow('image',cv2.pyrDown(frame))\n\nk = cv2.waitKey(5) & 0xFF\nif k == 27:\nbreak```\n\nThe 25 is number of corners, 0.01 is a quality cutoff (1% of best corner quality found), 10 is minimum distance between corners" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6616219,"math_prob":0.9923447,"size":1654,"snap":"2020-34-2020-40","text_gpt3_token_len":481,"char_repetition_ratio":0.11333334,"word_repetition_ratio":0.1509434,"special_character_ratio":0.31438935,"punctuation_ratio":0.22849463,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96976763,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-26T07:19:30Z\",\"WARC-Record-ID\":\"<urn:uuid:9d868130-dab2-4c98-b2c6-2bc678a95061>\",\"Content-Length\":\"35799\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:302e373a-ed3c-42a8-b428-e3f3529535e1>\",\"WARC-Concurrent-To\":\"<urn:uuid:e79f2c49-8475-4b84-bca8-30f8f83968a8>\",\"WARC-IP-Address\":\"208.94.116.64\",\"WARC-Target-URI\":\"https://www.philipzucker.com/more-opencv/\",\"WARC-Payload-Digest\":\"sha1:W7BPCJINLA5FYJIJIJEY4WFWV2Q4TOQI\",\"WARC-Block-Digest\":\"sha1:E63U7RZ4C6MTJXVPPOIMODIC6VXL7FJI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400238038.76_warc_CC-MAIN-20200926071311-20200926101311-00061.warc.gz\"}"}
https://www.kilomegabyte.com/1-kbit-to-tbit
[ "#### Data Units Calculator\n\n###### Kilobit to Terabit\n\nOnline data storage unit conversion calculator:\n\nFrom:\nTo:\n\nThe smallest unit of measurement used for measuring data is a bit. A single bit can have a value of either zero(0) or one(1). It may contain a binary value (such as True/False or On/Off or 1/0) and nothing more. Therefore, a byte, or eight bits, is used as the fundamental unit of measurement for data storage. A byte can store 256 different values, which is sufficient to represent standard ASCII table, such as all numbers, letters and control symbols.\n\nSince most files contain thousands of bytes, file sizes are often measured in kilobytes. Larger files, such as images, videos, and audio files, contain millions of bytes and therefore are measured in megabytes. Modern storage devices can store thousands of these files, which is why storage capacity is typically measured in gigabytes or even terabytes.\n\n# 1 kbit to tbit result:\n\n1 (one) kilobit(s) is equal 0.000000001 (zero point zero × 8 one) terabit(s)\n\n#### What is kilobit?\n\nThe kilobit is a multiple of the unit bit for digital information or computer storage. The prefix kilo- (symbol k) is defined in the International System of Units (SI) as a multiplier of 10^3 (1 thousand), and therefore, 1 kilobit = 10^3 bits = 1000 bits. The kilobit has the unit symbol kbit.\n\n#### What is terabit?\n\nThe terabit is a multiple of the unit bit for digital information or computer storage. The prefix tera (symbol T) is defined in the International System of Units (SI) as a multiplier of 10^12 (1 trillion, short scale), and therefore, 1 terabit = 10^12 bits = 1000000000000 bits = 1000 gigabits. The terabit has the unit symbol Tbit.\n\n#### How calculate kbit. to tbit.?\n\n1 Kilobit is equal to 0.000000001 Terabit (zero point zero × 8 one tbit)\n1 Terabit is equal to 1000000000 Kilobit (one billion kbit)\n1 Kilobit is equal to 1000.000000 bits (one thousand point zero × 6 zero bits)\n1 Terabit is equal to 1000000000000 bits (one trillion bits)\n1 Kilobit is equal to 1000 Bit (one thousand bit)\n\nTerabit is greater than Kilobit\n\nMultiplication factor is 1000000000.\n1 / 1000000000 = 0.000000001.\n\nMaybe you mean Kibibit?\n\n1 Kilobit is equal to 0.9765625 Kibibit (zero point nine million seven hundred and sixty-five thousand six hundred and twenty-five kibit) convert to kibit\n\n### Powers of 2\n\nkbit tbit (Terabit) Description\n1 kbit 0.000000001 tbit 1 kilobit (one) is equal to 0.000000001 terabit (zero point zero × 8 one)\n2 kbit 0.000000002 tbit 2 kilobit (two) is equal to 0.000000002 terabit (zero point zero × 8 two)\n4 kbit 0.000000004 tbit 4 kilobit (four) is equal to 0.000000004 terabit (zero point zero × 8 four)\n8 kbit 0.000000008 tbit 8 kilobit (eight) is equal to 0.000000008 terabit (zero point zero × 8 eight)\n16 kbit 0.000000016 tbit 16 kilobit (sixteen) is equal to 0.000000016 terabit (zero point zero × 7 sixteen)\n32 kbit 0.000000032 tbit 32 kilobit (thirty-two) is equal to 0.000000032 terabit (zero point zero × 7 thirty-two)\n64 kbit 0.000000064 tbit 64 kilobit (sixty-four) is equal to 0.000000064 terabit (zero point zero × 7 sixty-four)\n128 kbit 0.000000128 tbit 128 kilobit (one hundred and twenty-eight) is equal to 0.000000128 terabit (zero point zero × 6 one hundred and twenty-eight)\n256 kbit 0.000000256 tbit 256 kilobit (two hundred and fifty-six) is equal to 0.000000256 terabit (zero point zero × 6 two hundred and fifty-six)\n512 kbit 0.000000512 tbit 512 kilobit (five hundred and twelve) is equal to 0.000000512 terabit (zero point zero × 6 five hundred and twelve)\n1024 kbit 0.000001024 tbit 1024 kilobit (one thousand and twenty-four) is equal to 0.000001024 terabit (zero point zero × 5 one thousand and twenty-four)\n2048 kbit 0.000002048 tbit 2048 kilobit (two thousand and forty-eight) is equal to 0.000002048 terabit (zero point zero × 5 two thousand and forty-eight)\n4096 kbit 0.000004096 tbit 4096 kilobit (four thousand and ninety-six) is equal to 0.000004096 terabit (zero point zero × 5 four thousand and ninety-six)\n8192 kbit 0.000008192 tbit 8192 kilobit (eight thousand one hundred and ninety-two) is equal to 0.000008192 terabit (zero point zero × 5 eight thousand one hundred and ninety-two)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7608579,"math_prob":0.99968684,"size":4109,"snap":"2022-27-2022-33","text_gpt3_token_len":1269,"char_repetition_ratio":0.23580998,"word_repetition_ratio":0.11922504,"special_character_ratio":0.35410076,"punctuation_ratio":0.09402795,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99437416,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-03T11:50:00Z\",\"WARC-Record-ID\":\"<urn:uuid:6e6a40cc-aca6-45c8-b5f2-8b93732055f7>\",\"Content-Length\":\"21785\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9bbf004f-a646-44e1-b4f6-447454bee1c7>\",\"WARC-Concurrent-To\":\"<urn:uuid:d4716006-2b64-4d5c-be15-8c7101db0070>\",\"WARC-IP-Address\":\"205.144.171.63\",\"WARC-Target-URI\":\"https://www.kilomegabyte.com/1-kbit-to-tbit\",\"WARC-Payload-Digest\":\"sha1:TKY23P7XSNFB2YJXSOOPJJP5D2DE6W56\",\"WARC-Block-Digest\":\"sha1:Y6CLEA4WMU5AUJOL3XOK7GYSTS5PDZA5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104240553.67_warc_CC-MAIN-20220703104037-20220703134037-00374.warc.gz\"}"}
https://electronics.stackexchange.com/questions/638265/effect-of-surrounding-material-on-microstrip-impedance
[ "# Effect of surrounding material on microstrip impedance?\n\nIf you were to have a microstrip transmission line PCB trace, which then had a dielectric material run completely on top of the PCB track coverlayer. Would this surrounding material affect the impedance of the line, causing a need to change the width to accommodate? I see that by adding this dielectric material, it starts to look more like a strip line in that it is sandwiched by dielectric on either side.\n\nThis would be for a high frequency (GHz) signal.\n\nYes, any dielectric above will have an effect, although not a large one since most of the field is below the microstrip. Lots of calculators can do embedded microstrips, so you can see for yourself. For example, to pick one at random:", null, "Edit: this calculator may not be accurate, check the one suggested below.\n\nhttps://www.eeweb.com/tools/embedded-microstrip-impedance/\n\nAdding 20 microns of dielectric (same as substrate) above the microstrip will change the impedance by about 3.5 ohms. Usually not significant, but not unmeasurable either.\n\n• Thanks for answering. As this would be on top of the cover layer, I'm not sure this would exactly apply. As I imagine the distance would also play a factor Oct 12, 2022 at 17:08\n• @AdamMakin Height above does matter since the field is strongest closest to the conductor. Try punching in different height values into that calculator and you can get a rough idea of how much. Oct 12, 2022 at 17:36\n• I don't trust this calculator. Adding dielectric material above the trace should increase the capacitance and thus lower the characteristic impedance. But this calculator is showing the opposite effect. It also doesn't complain if you give it invalid inputs (like H2 < H1) Oct 12, 2022 at 18:44\n• Here's a calculator that gives the expected dependence between upper dielectric thickness and impedance: cecas.clemson.edu/cvel/emc/calculators/PCB-TL_Calculator/… Oct 12, 2022 at 18:51\n• Need to keep in mind that the dielectric constant of materials decrease with increasing frequencies, especially when you're in the GHz or higher region. So for these impedance calculators to give good results, you 1) have to adjust the Dk based on your expected frequency or 2) use a tool that asks you for the frequency of interest. Oct 13, 2022 at 0:24" ]
[ null, "https://i.stack.imgur.com/Op92e.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9623021,"math_prob":0.8997383,"size":458,"snap":"2023-40-2023-50","text_gpt3_token_len":96,"char_repetition_ratio":0.12334802,"word_repetition_ratio":0.0,"special_character_ratio":0.19432314,"punctuation_ratio":0.07954545,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9573518,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-07T11:51:51Z\",\"WARC-Record-ID\":\"<urn:uuid:0ebccfbd-8d9e-4db2-8d5c-64e75d2cd224>\",\"Content-Length\":\"167424\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e5790b48-8997-4095-b5da-9b8954b35775>\",\"WARC-Concurrent-To\":\"<urn:uuid:bb4cbe74-350b-48b8-8bed-2d3fde8adff1>\",\"WARC-IP-Address\":\"172.64.144.30\",\"WARC-Target-URI\":\"https://electronics.stackexchange.com/questions/638265/effect-of-surrounding-material-on-microstrip-impedance\",\"WARC-Payload-Digest\":\"sha1:3K7BKEWIXB3GWVFYPWPHLONMC6QEKB43\",\"WARC-Block-Digest\":\"sha1:PXC54IU6HPZYMLZKY752OOYRL3LIMPLS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100651.34_warc_CC-MAIN-20231207090036-20231207120036-00023.warc.gz\"}"}
https://se.mathworks.com/matlabcentral/fileexchange/35717-block-diagonal-multiplication?s_tid=prof_contriblnk
[ "File Exchange\n\n## block diagonal multiplication\n\nversion 1.1.0.0 (2.27 KB) by\nUsed for multiplying large block diagonal matrices with matrices / vectors.\n\nUpdated 26 Apr 2012\n\nUsed to perform B*M or M*B with B a block diagonal matrix, B is stored as a cell array (call it C) with each element a matrix as a block of B, such that B = blkdiag(C{:}).\nThis speeds up the multiplication when B is large and also allows the operation to take place when B could not fit in the memory stored as a full block diagonal matrix.\nOperations B'*M = (M'*B)' and M*B' can be performed using transpose identities\n\n### Cite As\n\nDavid Holdaway (2021). block diagonal multiplication (https://www.mathworks.com/matlabcentral/fileexchange/35717-block-diagonal-multiplication), MATLAB Central File Exchange. Retrieved ." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8525233,"math_prob":0.76603466,"size":820,"snap":"2021-21-2021-25","text_gpt3_token_len":186,"char_repetition_ratio":0.111519605,"word_repetition_ratio":0.0,"special_character_ratio":0.23292683,"punctuation_ratio":0.10691824,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9826705,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-18T15:19:08Z\",\"WARC-Record-ID\":\"<urn:uuid:81d2ee30-553c-46b2-91a7-f1391a28672b>\",\"Content-Length\":\"82881\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5815eb7d-075d-4223-8821-9f07ac89572c>\",\"WARC-Concurrent-To\":\"<urn:uuid:d7f00288-2225-4b6f-bd84-3d9ea4d95843>\",\"WARC-IP-Address\":\"23.220.132.54\",\"WARC-Target-URI\":\"https://se.mathworks.com/matlabcentral/fileexchange/35717-block-diagonal-multiplication?s_tid=prof_contriblnk\",\"WARC-Payload-Digest\":\"sha1:XC5ADR2QTNERLFNKNXSCDJ7R3RDS2GBX\",\"WARC-Block-Digest\":\"sha1:JT64UOE5T7XBFJ2M22PR5NKIPLALZVU4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243989637.86_warc_CC-MAIN-20210518125638-20210518155638-00398.warc.gz\"}"}
https://de.mathworks.com/matlabcentral/cody/problems/44385-extra-safe-primes/solutions/1656196
[ "Cody\n\n# Problem 44385. Extra safe primes\n\nSolution 1656196\n\nSubmitted on 22 Oct 2018 by Sharon Spelt\nThis solution is locked. To view this solution, you need to provide a solution of the same size or smaller.\n\n### Test Suite\n\nTest Status Code Input and Output\n1   Pass\nx = 0; assert(isequal(isextrasafe(x),false))\n\ntf = 0\n\n2   Pass\nx = 5; assert(isequal(isextrasafe(x),false))\n\np = 2 tf = 0\n\n3   Pass\nx = 7; assert(isequal(isextrasafe(x),false))\n\np = 3 p = 1 tf = 0\n\n4   Pass\nx = 11; assert(isequal(isextrasafe(x),true))\n\np = 5 p = 2 tf = 1\n\n5   Pass\nx = 15; assert(isequal(isextrasafe(x),false))\n\ntf = 0\n\n6   Pass\nx = 23; assert(isequal(isextrasafe(x),true))\n\np = 11 p = 5 tf = 1\n\n7   Pass\nx = 71; assert(isequal(isextrasafe(x),false))\n\np = 35 tf = 0\n\n8   Pass\nx = 719; assert(isequal(isextrasafe(x),true))\n\np = 359 p = 179 tf = 1\n\n9   Pass\nx = 2039; assert(isequal(isextrasafe(x),true))\n\np = 1019 p = 509 tf = 1\n\n10   Pass\nx = 2040; assert(isequal(isextrasafe(x),false))\n\ntf = 0\n\n11   Pass\nx = 5807; assert(isequal(isextrasafe(x),true))\n\np = 2903 p = 1451 tf = 1" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.52488935,"math_prob":0.99994326,"size":1039,"snap":"2020-34-2020-40","text_gpt3_token_len":414,"char_repetition_ratio":0.22608696,"word_repetition_ratio":0.02688172,"special_character_ratio":0.45139557,"punctuation_ratio":0.13656388,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99956983,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-12T01:45:35Z\",\"WARC-Record-ID\":\"<urn:uuid:99fe1158-dbdc-4f95-b226-0bbe60857abe>\",\"Content-Length\":\"77933\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a5a4fad7-7051-4c36-8e71-66cc16e4c2cb>\",\"WARC-Concurrent-To\":\"<urn:uuid:4b9cbc7e-32f5-4f85-9485-ce7db0b1cdda>\",\"WARC-IP-Address\":\"23.212.144.59\",\"WARC-Target-URI\":\"https://de.mathworks.com/matlabcentral/cody/problems/44385-extra-safe-primes/solutions/1656196\",\"WARC-Payload-Digest\":\"sha1:Y72MKLWHTKRXOMCYYLTF2JJ4DYP4XQXG\",\"WARC-Block-Digest\":\"sha1:WBJUT2NXL2AWYCPNS6MEBREHXYKWI7QX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738858.45_warc_CC-MAIN-20200811235207-20200812025207-00371.warc.gz\"}"}
https://www.codespeedy.com/starasterisk-pattern-in-python/
[ "# Star(asterisk) pattern in Python\n\nIn this tutorial, you are going to learn about the star or asterisk pattern in Python. Star or asterisk patterns are a series of * which forms a pattern or any geometrical shape like triangle, square, rhombus etc. These patterns are created with the help of for loop. For the good understanding of the topic, you should know about the for loop. So let’s learn how to print various start patterns in Python.\n\nA for loop can iterate over the item of any sequence (such as string or a list). For the first iteration of the loop, the list is evaluated and the first item of the list is assigned to the iterating variable “iterating_var” then the body of the for loop is executed. Each item of the list is assigned to the “iterating_var” and the body of for will be executed until all the list items are exhausted. Nested for loop is used in the program to make a star or asterisk pattern.\n\nSyntax:\n\n```for iterating_var in sequence:\n\nbody of for```\n```list=[1,2,2,3,4] # list\n\n# by sequence\nprint(\"By sequence: \",end=\" \")\nfor l in list:\nprint(l,end=\" \")\n\n# by range\nprint(\"By range: \",end=\" \")\nfor l in range(6):\nprint(l,end=\" \")```\n\nOutput:-\n\n```By sequence: 1 2 2 3 4\nBy range: 0 1 2 3 4 5```\n\n## Inverted right angled triangle star pattern in Python\n\n1. The outer loop gives i=0 in the first iteration and goes to the inner loop which will work for the range (0,6-i) and print the star(*) for 6 time in a line and the inner loop work is completed.\n2. After that it will come to next line by print().\n3. The outer loop will iterate again to give i=i+1. Repeat all the steps again until i=5.\n```# Outer loop\nfor i in range(0,6):\n\n# Inner loop\nfor j in range(0,6-i):\nprint(\"*\",end=\"\")\nprint()\n```\n\nOutput:-\n\n```******\n*****\n****\n***\n**\n*```\n\n## Mirrored inverted right angled triangle pattern in Python\n\n1. The outer loop gives i=0 in the first iteration and goes to the inner loop 1 to print the space for a range of (o,i) and print no space for the first line.\n2. After completing the inner loop 1 then it goes to inner loop 2 to print the star(*) for a range of (0,6-i) and print the 6 star in the same line.\n3. After that inner loop 2 is completed and the pointer goes to the next line by print().\n4. Then the outer loop will iterate for the second time. Repeat all the above steps again to form the pattern.\n5. The outer loop will continue its work until i=5.\n```# Outer loop\nfor i in range(0,6):\n# Inner loop 1\nfor k in range(0,i):\nprint(\" \",end=\"\")\n# Inner loop 2\nfor j in range(0,6-i):\nprint(\"*\",end=\"\")\nprint()\n```\n\nOutput:-\n\n```******\n*****\n****\n***\n**\n*```\n\n## Square star pattern in Python\n\n1. The outer loop gives i=0 in the first iteration and goes to the inner loop which will work for the range (0,5) and print the star(*) for 5 time in a line and the inner loop work is completed.\n2. After that it will come to next line by print().\n3. The outer loop will iterate again to give i=i+1.  Repeat all the steps again until i=4.\n```# Outer loop\nfor i in range(0,5):\n# Inner loop\nfor j in range(0,5):\nprint(\"*\",end=\"\")\nprint()\n```\n\nOutput:-\n\n```*****\n*****\n*****\n*****\n*****```\n\n## Mirrored right angled triangle pattern in Python\n\n1. The outer loop gives i=0 in the first iteration and goes to the inner loop 1 to print the space for a range of (o,5-i) and print 5 spaces for the first line.\n2. After completing the inner loop 1 then it goes to inner loop 2 to print the star(*) for a range of (0,i+1). Print star only one time in the same line.\n3. After that inner loop 2 is completed and the pointer goes to the next line by print().\n4. Then the outer loop will iterate for the second time. Repeat all the above steps again to form the pattern.\n5. The outer loop will iterate until i becomes 5.\n```# Outer loop\nfor i in range(0,6):\n# Inner loop 1\nfor j in range(0,5-i):\nprint(\" \",end=\"\")\n# Inner loop 2\nfor k in range(0,i+1):\nprint(\"*\",end=\"\")\nprint()\n```\n\nOutput:-\n\n```     *\n**\n***\n****\n*****\n******```\n\nPython program to print non square numbers" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.81164235,"math_prob":0.91932535,"size":3832,"snap":"2019-43-2019-47","text_gpt3_token_len":1007,"char_repetition_ratio":0.18077324,"word_repetition_ratio":0.5048143,"special_character_ratio":0.30741128,"punctuation_ratio":0.10666667,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9792532,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-20T20:18:49Z\",\"WARC-Record-ID\":\"<urn:uuid:4a198723-6c90-4686-875b-02d6073bf8c9>\",\"Content-Length\":\"34978\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b07dcf66-4b44-4ff8-b42f-59ceec78fee5>\",\"WARC-Concurrent-To\":\"<urn:uuid:89b05205-4132-4b0c-8c71-06fa34768e02>\",\"WARC-IP-Address\":\"104.27.179.5\",\"WARC-Target-URI\":\"https://www.codespeedy.com/starasterisk-pattern-in-python/\",\"WARC-Payload-Digest\":\"sha1:R6K7RKQNR2L32YT6Z3JWOHCBLRDOONIS\",\"WARC-Block-Digest\":\"sha1:6U45HVHTGYBAS5TLMO344VPBEIC5TNHC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670601.75_warc_CC-MAIN-20191120185646-20191120213646-00555.warc.gz\"}"}
https://socratic.org/questions/how-do-you-add-5sqrt48-4sqrt75
[ "# How do you add 5sqrt48-4sqrt75?\n\nApr 26, 2018\n\n$5 \\sqrt{48} - 4 \\sqrt{75} = 0$\n\n#### Explanation:\n\nFirst, attempt to simplify the radicals.\n\n$5 \\sqrt{48} - 4 \\sqrt{75}$\n\n$5 \\sqrt{6 \\times 8} - 4 \\sqrt{15 \\times 5}$\n\n$5 \\sqrt{{2}^{4} \\times 3} - 4 \\sqrt{3 \\times {5}^{2}}$\n\nThe squares can be factored out.\n\n$5 \\times {2}^{2} \\sqrt{3} - 5 \\times 4 \\sqrt{3}$\n\nThis can now be simplified:\n\n$\\left(5 \\times {2}^{2} - 5 \\times 4\\right) \\times \\sqrt{3}$\n\n$\\left(20 - 20\\right) \\times \\sqrt{3}$\n\n$\\left(0\\right) \\times \\sqrt{3} = 0$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6765658,"math_prob":1.0000081,"size":298,"snap":"2023-40-2023-50","text_gpt3_token_len":74,"char_repetition_ratio":0.1122449,"word_repetition_ratio":0.0,"special_character_ratio":0.24832214,"punctuation_ratio":0.125,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99939215,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-27T06:19:04Z\",\"WARC-Record-ID\":\"<urn:uuid:3013c6db-d99d-4876-8a73-c3a4662dd646>\",\"Content-Length\":\"32913\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e63a999a-8f72-4b63-8573-fbd357ee22cc>\",\"WARC-Concurrent-To\":\"<urn:uuid:b601680f-9fe6-45bb-99ab-367c1acb93ca>\",\"WARC-IP-Address\":\"216.239.34.21\",\"WARC-Target-URI\":\"https://socratic.org/questions/how-do-you-add-5sqrt48-4sqrt75\",\"WARC-Payload-Digest\":\"sha1:D5J43EB2JWXFQVUVFSISI5NLOIXUP4HH\",\"WARC-Block-Digest\":\"sha1:HFDUVICCBJKNJ77BFWBCPZHFQQYTMRP7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510259.52_warc_CC-MAIN-20230927035329-20230927065329-00530.warc.gz\"}"}
https://www.colorhexa.com/8cfccf
[ "# #8cfccf Color Information\n\nIn a RGB color space, hex #8cfccf is composed of 54.9% red, 98.8% green and 81.2% blue. Whereas in a CMYK color space, it is composed of 44.4% cyan, 0% magenta, 17.9% yellow and 1.2% black. It has a hue angle of 155.9 degrees, a saturation of 94.9% and a lightness of 76.9%. #8cfccf color hex could be obtained by blending #ffffff with #19f99f. Closest websafe color is: #99ffcc.\n\n• R 55\n• G 99\n• B 81\nRGB color chart\n• C 44\n• M 0\n• Y 18\n• K 1\nCMYK color chart\n\n#8cfccf color description : Very soft cyan - lime green.\n\n# #8cfccf Color Conversion\n\nThe hexadecimal color #8cfccf has RGB values of R:140, G:252, B:207 and CMYK values of C:0.44, M:0, Y:0.18, K:0.01. Its decimal value is 9239759.\n\nHex triplet RGB Decimal 8cfccf `#8cfccf` 140, 252, 207 `rgb(140,252,207)` 54.9, 98.8, 81.2 `rgb(54.9%,98.8%,81.2%)` 44, 0, 18, 1 155.9°, 94.9, 76.9 `hsl(155.9,94.9%,76.9%)` 155.9°, 44.4, 98.8 99ffcc `#99ffcc`\nCIE-LAB 91.549, -42.212, 11.661 56.884, 79.698, 71.414 0.273, 0.383, 79.698 91.549, 43.793, 164.557 91.549, -50.81, 24.687 89.274, -42.49, 15.063 10001100, 11111100, 11001111\n\n# Color Schemes with #8cfccf\n\n• #8cfccf\n``#8cfccf` `rgb(140,252,207)``\n• #fc8cb9\n``#fc8cb9` `rgb(252,140,185)``\nComplementary Color\n• #8cfc97\n``#8cfc97` `rgb(140,252,151)``\n• #8cfccf\n``#8cfccf` `rgb(140,252,207)``\n• #8cf1fc\n``#8cf1fc` `rgb(140,241,252)``\nAnalogous Color\n• #fc978c\n``#fc978c` `rgb(252,151,140)``\n• #8cfccf\n``#8cfccf` `rgb(140,252,207)``\n• #fc8cf1\n``#fc8cf1` `rgb(252,140,241)``\nSplit Complementary Color\n• #fccf8c\n``#fccf8c` `rgb(252,207,140)``\n• #8cfccf\n``#8cfccf` `rgb(140,252,207)``\n• #cf8cfc\n``#cf8cfc` `rgb(207,140,252)``\n• #b9fc8c\n``#b9fc8c` `rgb(185,252,140)``\n• #8cfccf\n``#8cfccf` `rgb(140,252,207)``\n• #cf8cfc\n``#cf8cfc` `rgb(207,140,252)``\n• #fc8cb9\n``#fc8cb9` `rgb(252,140,185)``\n• #41fab0\n``#41fab0` `rgb(65,250,176)``\n• #5afbba\n``#5afbba` `rgb(90,251,186)``\n• #73fbc5\n``#73fbc5` `rgb(115,251,197)``\n• #8cfccf\n``#8cfccf` `rgb(140,252,207)``\n• #a5fdd9\n``#a5fdd9` `rgb(165,253,217)``\n• #befde4\n``#befde4` `rgb(190,253,228)``\n• #d7feee\n``#d7feee` `rgb(215,254,238)``\nMonochromatic Color\n\n# Alternatives to #8cfccf\n\nBelow, you can see some colors close to #8cfccf. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #8cfcb3\n``#8cfcb3` `rgb(140,252,179)``\n• #8cfcbc\n``#8cfcbc` `rgb(140,252,188)``\n• #8cfcc6\n``#8cfcc6` `rgb(140,252,198)``\n• #8cfccf\n``#8cfccf` `rgb(140,252,207)``\n• #8cfcd8\n``#8cfcd8` `rgb(140,252,216)``\n• #8cfce2\n``#8cfce2` `rgb(140,252,226)``\n• #8cfceb\n``#8cfceb` `rgb(140,252,235)``\nSimilar Colors\n\n# #8cfccf Preview\n\nText with hexadecimal color #8cfccf\n\nThis text has a font color of #8cfccf.\n\n``<span style=\"color:#8cfccf;\">Text here</span>``\n#8cfccf background color\n\nThis paragraph has a background color of #8cfccf.\n\n``<p style=\"background-color:#8cfccf;\">Content here</p>``\n#8cfccf border color\n\nThis element has a border color of #8cfccf.\n\n``<div style=\"border:1px solid #8cfccf;\">Content here</div>``\nCSS codes\n``.text {color:#8cfccf;}``\n``.background {background-color:#8cfccf;}``\n``.border {border:1px solid #8cfccf;}``\n\n# Shades and Tints of #8cfccf\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. In this example, #00130b is the darkest color, while #ffffff is the lightest one.\n\n• #00130b\n``#00130b` `rgb(0,19,11)``\n• #012617\n``#012617` `rgb(1,38,23)``\n• #013923\n``#013923` `rgb(1,57,35)``\n• #024c2e\n``#024c2e` `rgb(2,76,46)``\n• #025f3a\n``#025f3a` `rgb(2,95,58)``\n• #037246\n``#037246` `rgb(3,114,70)``\n• #038651\n``#038651` `rgb(3,134,81)``\n• #04995d\n``#04995d` `rgb(4,153,93)``\n• #04ac69\n``#04ac69` `rgb(4,172,105)``\n• #05bf74\n``#05bf74` `rgb(5,191,116)``\n• #05d280\n``#05d280` `rgb(5,210,128)``\n• #06e58b\n``#06e58b` `rgb(6,229,139)``\n• #06f897\n``#06f897` `rgb(6,248,151)``\n• #19f99f\n``#19f99f` `rgb(25,249,159)``\n• #2cfaa7\n``#2cfaa7` `rgb(44,250,167)``\n• #40faaf\n``#40faaf` `rgb(64,250,175)``\n• #53fbb7\n``#53fbb7` `rgb(83,251,183)``\n• #66fbbf\n``#66fbbf` `rgb(102,251,191)``\n• #79fcc7\n``#79fcc7` `rgb(121,252,199)``\n• #8cfccf\n``#8cfccf` `rgb(140,252,207)``\n• #9ffcd7\n``#9ffcd7` `rgb(159,252,215)``\n• #b2fddf\n``#b2fddf` `rgb(178,253,223)``\n• #c5fde7\n``#c5fde7` `rgb(197,253,231)``\n• #d8feef\n``#d8feef` `rgb(216,254,239)``\n• #ecfef7\n``#ecfef7` `rgb(236,254,247)``\n• #ffffff\n``#ffffff` `rgb(255,255,255)``\nTint Color Variation\n\n# Tones of #8cfccf\n\nA tone is produced by adding gray to any pure hue. In this case, #c2c6c4 is the less saturated color, while #8cfccf is the most saturated one.\n\n• #c2c6c4\n``#c2c6c4` `rgb(194,198,196)``\n• #becac5\n``#becac5` `rgb(190,202,197)``\n• #b9cfc6\n``#b9cfc6` `rgb(185,207,198)``\n• #b5d3c7\n``#b5d3c7` `rgb(181,211,199)``\n• #b0d8c8\n``#b0d8c8` `rgb(176,216,200)``\n• #acdcc9\n``#acdcc9` `rgb(172,220,201)``\n• #a7e1ca\n``#a7e1ca` `rgb(167,225,202)``\n• #a3e5cb\n``#a3e5cb` `rgb(163,229,203)``\n• #9eeacb\n``#9eeacb` `rgb(158,234,203)``\n• #9aeecc\n``#9aeecc` `rgb(154,238,204)``\n• #95f3cd\n``#95f3cd` `rgb(149,243,205)``\n• #91f7ce\n``#91f7ce` `rgb(145,247,206)``\n• #8cfccf\n``#8cfccf` `rgb(140,252,207)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #8cfccf is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.52972966,"math_prob":0.74715436,"size":3733,"snap":"2019-35-2019-39","text_gpt3_token_len":1669,"char_repetition_ratio":0.12925717,"word_repetition_ratio":0.011029412,"special_character_ratio":0.5162068,"punctuation_ratio":0.23730685,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98319894,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-21T09:34:32Z\",\"WARC-Record-ID\":\"<urn:uuid:1809658d-4900-4c7f-bb0a-578e84b79ad6>\",\"Content-Length\":\"36365\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f02f0cd9-7c00-428e-bee0-4f6996bee343>\",\"WARC-Concurrent-To\":\"<urn:uuid:cb2522a2-5839-4bda-a94b-eabd6577a3f0>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/8cfccf\",\"WARC-Payload-Digest\":\"sha1:4FTVUCN6WP7Q3U23JPBRKK56RQUJY7T6\",\"WARC-Block-Digest\":\"sha1:3UGUBRXOJ6NRMDSO5EMVPUNHPTPMQT3T\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514574377.11_warc_CC-MAIN-20190921084226-20190921110226-00101.warc.gz\"}"}
https://docs.pymor.org/2022-2-1/autoapi/pymor/reductors/residual/index.html
[ "# `pymor.reductors.residual`¶\n\n## Module Contents¶\n\nclass pymor.reductors.residual.ImplicitEulerResidualOperator(operator, mass, rhs, dt, name=None)[source]\n\nInstantiated by `ImplicitEulerResidualReductor`.\n\nMethods\n\n `apply` Apply the operator to a `VectorArray`. `projected_to_subbasis`\napply(U, U_old, mu=None)[source]\n\nApply the operator to a `VectorArray`.\n\nParameters\n\nU\n\n`VectorArray` of vectors to which the operator is applied.\n\nmu\n\nThe `parameter values` for which to evaluate the operator.\n\nReturns\n\n`VectorArray` of the operator evaluations.\n\nprojected_to_subbasis(dim_range=None, dim_source=None, name=None)[source]\nclass pymor.reductors.residual.ImplicitEulerResidualReductor(RB, operator, mass, dt, rhs=None, product=None)[source]\n\nReduced basis residual reductor with mass operator for implicit Euler timestepping.\n\nGiven an operator, mass and a functional, the concatenation of residual operator with the Riesz isomorphism is given by:\n\n```riesz_residual.apply(U, U_old, mu)\n== product.apply_inverse(operator.apply(U, mu) + 1/dt*mass.apply(U, mu)\n- 1/dt*mass.apply(U_old, mu) - rhs.as_vector(mu))\n```\n\nThis reductor determines a low-dimensional subspace of the image of a reduced basis space under `riesz_residual` using `estimate_image_hierarchical`, computes an orthonormal basis `residual_range` of this range space and then returns the Petrov-Galerkin projection\n\n```projected_riesz_residual\n== riesz_residual.projected(range_basis=residual_range, source_basis=RB)\n```\n\nof the `riesz_residual` operator. Given reduced basis coefficient vectors `u` and `u_old`, the dual norm of the residual can then be computed as\n\n```projected_riesz_residual.apply(u, u_old, mu).norm()\n```\n\nMoreover, a `reconstruct` method is provided such that\n\n```residual_reductor.reconstruct(projected_riesz_residual.apply(u, u_old, mu))\n== riesz_residual.apply(RB.lincomb(u), RB.lincomb(u_old), mu)\n```\n\nParameters\n\noperator\n\nSee definition of `riesz_residual`.\n\nmass\n\nThe mass operator. See definition of `riesz_residual`.\n\ndt\n\nThe time step size. See definition of `riesz_residual`.\n\nrhs\n\nSee definition of `riesz_residual`. If `None`, zero right-hand side is assumed.\n\nRB\n\n`VectorArray` containing a basis of the reduced space onto which to project.\n\nproduct\n\nInner product `Operator` w.r.t. which to compute the Riesz representatives.\n\nMethods\n\n `reconstruct` Reconstruct high-dimensional residual vector from reduced vector `u`. `reduce`\nreconstruct(u)[source]\n\nReconstruct high-dimensional residual vector from reduced vector `u`.\n\nreduce()[source]\nclass pymor.reductors.residual.NonProjectedImplicitEulerResidualOperator(operator, mass, rhs, dt, product)[source]\n\nInstantiated by `ImplicitEulerResidualReductor`.\n\nNot to be used directly.\n\nMethods\n\n `apply` Apply the operator to a `VectorArray`. `projected_to_subbasis`\napply(U, U_old, mu=None)[source]\n\nApply the operator to a `VectorArray`.\n\nParameters\n\nU\n\n`VectorArray` of vectors to which the operator is applied.\n\nmu\n\nThe `parameter values` for which to evaluate the operator.\n\nReturns\n\n`VectorArray` of the operator evaluations.\n\nprojected_to_subbasis(dim_range=None, dim_source=None, name=None)[source]\nclass pymor.reductors.residual.NonProjectedResidualOperator(operator, rhs, riesz_representatives, product)[source]\n\nInstantiated by `ResidualReductor`.\n\nNot to be used directly.\n\nMethods\n\n `apply` Apply the operator to a `VectorArray`. `projected_to_subbasis`\napply(U, mu=None)[source]\n\nApply the operator to a `VectorArray`.\n\nParameters\n\nU\n\n`VectorArray` of vectors to which the operator is applied.\n\nmu\n\nThe `parameter values` for which to evaluate the operator.\n\nReturns\n\n`VectorArray` of the operator evaluations.\n\nprojected_to_subbasis(dim_range=None, dim_source=None, name=None)[source]\nclass pymor.reductors.residual.ResidualOperator(operator, rhs, name=None)[source]\n\nInstantiated by `ResidualReductor`.\n\nMethods\n\n `apply` Apply the operator to a `VectorArray`. `projected_to_subbasis`\napply(U, mu=None)[source]\n\nApply the operator to a `VectorArray`.\n\nParameters\n\nU\n\n`VectorArray` of vectors to which the operator is applied.\n\nmu\n\nThe `parameter values` for which to evaluate the operator.\n\nReturns\n\n`VectorArray` of the operator evaluations.\n\nprojected_to_subbasis(dim_range=None, dim_source=None, name=None)[source]\nclass pymor.reductors.residual.ResidualReductor(RB, operator, rhs=None, product=None, riesz_representatives=False)[source]\n\nGeneric reduced basis residual reductor.\n\nGiven an operator and a right-hand side, the residual is given by:\n\n```residual.apply(U, mu) == operator.apply(U, mu) - rhs.as_range_array(mu)\n```\n\nWhen operator maps to functionals instead of vectors, we are interested in the Riesz representative of the residual:\n\n```residual.apply(U, mu)\n== product.apply_inverse(operator.apply(U, mu) - rhs.as_range_array(mu))\n```\n\nGiven a basis `RB` of a subspace of the source space of `operator`, this reductor uses `estimate_image_hierarchical` to determine a low-dimensional subspace containing the image of the subspace under `residual` (resp. `riesz_residual`), computes an orthonormal basis `residual_range` for this range space and then returns the Petrov-Galerkin projection\n\n```projected_residual\n== project(residual, range_basis=residual_range, source_basis=RB)\n```\n\nof the residual operator. Given a reduced basis coefficient vector `u`, w.r.t. `RB`, the (dual) norm of the residual can then be computed as\n\n```projected_residual.apply(u, mu).norm()\n```\n\nMoreover, a `reconstruct` method is provided such that\n\n```residual_reductor.reconstruct(projected_residual.apply(u, mu))\n== residual.apply(RB.lincomb(u), mu)\n```\n\nParameters\n\nRB\n\n`VectorArray` containing a basis of the reduced space onto which to project.\n\noperator\n\nSee definition of `residual`.\n\nrhs\n\nSee definition of `residual`. If `None`, zero right-hand side is assumed.\n\nproduct\n\nInner product `Operator` w.r.t. which to orthonormalize and w.r.t. which to compute the Riesz representatives in case `operator` maps to functionals.\n\nriesz_representatives\n\nIf `True` compute the Riesz representative of the residual.\n\nMethods\n\n `reconstruct` Reconstruct high-dimensional residual vector from reduced vector `u`. `reduce`\nreconstruct(u)[source]\n\nReconstruct high-dimensional residual vector from reduced vector `u`.\n\nreduce()[source]" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.59650177,"math_prob":0.92548233,"size":5199,"snap":"2023-14-2023-23","text_gpt3_token_len":1300,"char_repetition_ratio":0.17651588,"word_repetition_ratio":0.3462838,"special_character_ratio":0.19369109,"punctuation_ratio":0.20795454,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97017676,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-06T05:36:25Z\",\"WARC-Record-ID\":\"<urn:uuid:801e2856-0498-4c2e-807b-81c348318822>\",\"Content-Length\":\"67682\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:db7c5584-9dd7-4748-83ab-40e55066bf20>\",\"WARC-Concurrent-To\":\"<urn:uuid:a785666c-6c16-4ca8-ba32-815cc00d4cc8>\",\"WARC-IP-Address\":\"185.199.111.153\",\"WARC-Target-URI\":\"https://docs.pymor.org/2022-2-1/autoapi/pymor/reductors/residual/index.html\",\"WARC-Payload-Digest\":\"sha1:ELGLVBAAKB6H2HBEWXVA3KUE55FJJWGR\",\"WARC-Block-Digest\":\"sha1:ZTBTWQBHQV3JS3FVDUHC7GXEQ3M7UVXQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224652235.2_warc_CC-MAIN-20230606045924-20230606075924-00254.warc.gz\"}"}
https://www.nature.com/articles/s41598-020-77733-4?error=cookies_not_supported&code=b3257ace-5d87-4a3d-af78-2b5e07c29d29
[ "## Introduction\n\nTo perform diagnosis and prognosis of cardiovascular disease (CVD) medical experts depend on the reliable quantification of cardiac function1. Cardiac magnetic resonance imaging (CMRI) is currently considered the reference standard for quantification of ventricular volumes, mass and function2. Short-axis CMR imaging, covering the entire left and right ventricle (LV resp. RV) is routinely used to determine quantitative parameters of both ventricle’s function. This requires manual or semi-automatic segmentation of corresponding cardiac tissue structures for end-diastole (ED) and end-systole (ES).\n\nExisting semi-automated or automated segmentation methods for CMRIs regularly require (substantial) manual intervention caused by lack of robustness. Manual or semi-automatic segmentation across a complete cardiac cycle, comprising 20 to 40 phases per patient, enables computation of parameters quantifying cardiac motion with potential diagnostic implications but due to the required workload, this is practically infeasible. Consequently, segmentation is often performed at end-diastole and end-systole precluding comprehensive analysis over complete cardiac cycle.\n\nRecently3,4, deep learning segmentation methods have shown to outperform traditional approaches such as those exploiting level set, graph-cuts, deformable models, cardiac atlases and statistical models5,6. However, recent comparison of a number of automatic methods showed that even the best performing methods generated anatomically implausible segmentations in more than 80% of the CMRIs7. Such errors do not occur when experts perform segmentation. To achieve acceptance in clinical practice these shortcomings of the automatic approaches need to be alleviated by further development. This can be achieved by generating more accurate segmentation result or by development of approaches that automatically detect segmentation failures.\n\nIn manual and automatic segmentation of short-axis CMRI, largest segmentation inaccuracies are typically located in the most basal and apical slices due to low tissue contrast ratios8. To increase segmentation performance, several methods have been proposed9,10,11,12. Tan et al.9 used a convolutional neural network (CNN) to regress anatomical landmarks from long-axis views (orthogonal to short-axis). They exploited the landmarks to determine most basal and apical slices in short-axis views and thereby constraining the automatic segmentation of CMRIs. This resulted in increased robustness and performance. Other approaches leverage spatial10 or temporal11,12 information to increase segmentation consistency and performance in particular in the difficult basal and apical slices.\n\nAn alternative approach to preventing implausible segmentation results is by incorporating knowledge about the highly constrained shape of the heart. Oktay et al.13 developed an anatomically constrained neural network (NN) that infers shape constraints using an auto-encoder during segmentation training. Duan et al.14 developed a deep learning segmentation approach for CMRIs that used atlas propagation to explicitly impose a shape refinement. This was especially beneficial in the presence of image acquisition artifacts. Recently, Painchaud et al.15 developed a post-processing approach to detect and transform anatomically implausible cardiac segmentations into valid ones by defining cardiac anatomical metrics. Applying their approach to various state-of-the-art segmentation methods the authors showed that the proposed method provides strong anatomical guarantees without hampering segmentation accuracy.\n\nA different research trend focuses on detecting segmentation failures, i.e. on automated quality control for image segmentation. These methods can be divided in those that predict segmentation quality using image at hand or corresponding automatic segmentation result, and those that assess and exploit predictive uncertainties to detect segmentation failure.\n\nRecently, two methods were proposed to detect segmentation failures in large-scale cardiac MR imaging studies to remove these from subsequent analysis16,17. Robinson et al.17 using the approach of Reverse Classification Accuracy (RCA)18 predicted CMRI segmentation metrics to detect failed segmentations. They achieved good agreement between predicted metrics and visual quality control scores. Alba et al.16 used statistical, pattern and fractal descriptors in a random forest classifier to directly detect segmentation contour failures without intermediate regression of segmentation accuracy metrics.\n\nMethods for automatic quality control were also developed for other applications in medical image analysis. Frounchi et al.19 extracted features from the segmentation results of the left ventricle in CT scans. Using the obtained features the authors trained a classifier that is able to discriminate between consistent and inconsistent segmentations. To distinguish between acceptable and non-acceptable segmentations Kohlberger et al.20 proposed to directly predict multi-organ segmentation accuracy in CT scans using a set of features extracted from the image and corresponding segmentation.\n\nA number of methods aggregate voxel-wise uncertainties into an overall score to identify insufficiently accurate segmentations. For example, Nair et al.21 computed an overall score for target segmentation structure from voxel-wise predictive uncertainties. The method was tested for detection of Multiple Sclerosis in brain MRI. The authors showed that rejecting segmentations with high uncertainty scores led to increased detection accuracy indicating that correct segmentations contain lower uncertainties than incorrect ones. Similarly, to assess segmentation quality of brain MRIs Jungo et al.22 aggregated voxel-wise uncertainties into a score per target structure and showed that the computed uncertainty score enabled identification of erroneous segmentations.\n\nUnlike approaches evaluating segmentation directly, several methods use predictive uncertainties to predict segmentation metrics and thereby evaluate segmentation performance23,24. For example, Roy et al.23 aggregated voxel-wise uncertainties into four scores per segmented structure in brain MRI. The authors showed that computed scores can be used to predict the Intersection over Union and hence, to determine segmentation accuracy. Similar idea was presented by DeVries et al.24 that predicted segmentation accuracy per patient using an auxiliary neural network that leverages the dermoscopic image, automatic segmentation result and obtained uncertainties. The researchers showed that a predicted segmentation accuracy is useful for quality control.\n\nWe build on our preliminary work where automatic segmentation of CMR images using a dilated CNN was combined with assessment of two measures of segmentation uncertainties25. For the first measure the multi-class entropy per voxel (entropy maps) was computed using the output distribution. For the second measure Bayesian uncertainty maps were acquired using Monte Carlo dropout (MC-dropout)26. In25 we showed that the obtained uncertainties almost entirely cover the regions of incorrect segmentation i.e. that uncertainties are calibrated. In the current work we extend our preliminary research in two ways. First, we assess impact of CNN architecture on the segmentation performance and calibration of uncertainty maps by evaluating three existing state-of-the-art CNNs. Second, we employ an auxiliary CNN (detection network) that processes a cardiac MRI and corresponding spatial uncertainty map (Entropy or Bayesian) to automatically detect segmentation failures. We differentiate errors that may be within the range of inter-observer variability and hence do not necessarily require correction (tolerated errors) from the errors that an expert would not make and hence require correction (segmentation failures). Given that overlap measures do not capture fine details of the segmentation results and preclude us to differentiate two types of segmentation errors, in this work, we define segmentation failure using a metric of boundary distance. In25 we found that degree of calibration of uncertainty maps is dependent on the loss function used to train the CNN. Nevertheless, in the current work we show that uncalibrated uncertainty maps are useful to detect local segmentation failures. In contrast to previous methods that detect segmentation failure per-patient or per-structure23,24, we propose to detect segmentation failures per image region. We expect that inspection and correction of segmentation failures using image regions rather than individual voxels or images would simplify correction process. To show the potential of our approach and demonstrate that combining automatic segmentation with manual correction of the detected segmentation failures per region results in higher segmentation performance we performed two additional experiments. In the first experiment, correction of detected segmentation failures was simulated in the complete data set. In the second experiment, correction was performed by an expert in a subset of images. Using publicly available set of CMR scans from MICCAI 2017 ACDC challenge7, the performance was evaluated before and after simulating the correction of detected segmentation failures as well as after manual expert correction.\n\n## Data\n\nIn this study data from the MICCAI 2017 Automated Cardiac Diagnosis Challenge (ACDC)7 was used. The dataset consists of cardiac cine MR images (CMRIs) from 100 patients uniformly distributed over normal cardiac function and four disease groups: dilated cardiomyopathy, hypertrophic cardiomyopathy, heart failure with infarction, and right ventricular abnormality. Detailed acquisition protocol is described by Bernard et al.7. Briefly, short-axis CMRIs were acquired with two MRI scanners of different magnetic strengths (1.5 and 3.0 T). Images were made during breath hold using a conventional steady-state free precession (SSFP) sequence. CMRIs have an in-plane resolution ranging from 1.37 to 1.68 mm (average reconstruction matrix 243 $$\\times$$ 217 voxels) with slice spacing varying from 5 to 10 mm. Per patient 28 to 40 volumes are provided covering partially or completely one cardiac cycle. Each volume consists of on average ten slices covering the heart. Expert manual reference segmentations are provided for the LV cavity, RV endocardium and LV myocardium (LVM) for all CMRI slices at ED and ES time frames. To correct for intensity differences among scans, voxel intensities of each volume were scaled to the [0.0, 1.0] range using the minimum and maximum of the volume. Furthermore, to correct for differences in-plane voxel sizes, image slices were resampled to $${1.4} \\times 1.4\\, \\hbox {mm}^2$$.\n\n## Methods\n\nTo investigate uncertainty of the segmentation, anatomical structures in CMR images are segmented using a CNN. To investigate whether the approach generalizes to different segmentation networks, three state-of-the-art CNNs were evaluated. For each segmentation model two measures of predictive uncertainty were obtained per voxel. Thereafter, to detect and correct local segmentation failures an auxiliary CNN (detection network) that analyzes a cardiac MRI was used. Finally, this leads to the uncertainty map allowing detection of image regions that contain segmentation failures. Fig. 1 visualizes this approach.\n\n### Automatic segmentation of cardiac MRI\n\nTo perform segmentation of LV, RV, and LVM in cardiac MR images i.e. 2D CMR scans, three state-of-the-art CNNs are trained. Each of the three networks takes a CMR image as input and has four output channels providing probabilities for the three cardiac structures (LV, RV, LVM) and background. Softmax probabilities are calculated over the four tissue classes. Patient volumes at ED and ES are processed separately. During inference the 2D automatic segmentation masks are stacked into a 3D volume per patient and cardiac phase. After segmentation, the largest 3D connected component for each class is retained and volumes are resampled to their original voxel resolution. Segmentation networks differ substantially regarding architecture, number of parameters and receptive field size. To assess predictive uncertainties from the segmentation models Monte Carlo dropout (MC-dropout) introduced by Gal and Ghahramani26 is implemented in every network. The following three segmentation networks were evaluated: Bayesian Dilated CNN, Bayesian Dilated Residual Network, Bayesian U-net.\n\n#### Bayesian dilated CNN (DN)\n\nThe Bayesian DN architecture comprises a sequence of ten convolutional layers. Layers 1 to 8 serve as feature extraction layers with small convolution kernels of size 3$$\\times$$3 voxels. No padding is applied after convolutions. The number of kernels increases from 32 in the first eight layers, to 128 in the final two fully connected classification layers, implemented as 1$$\\times$$1 convolutions. The dilation level is successively increased between layers 2 and 7 from 2 to 32 which results in a receptive field for each voxel of 131$$\\times$$131 voxels, or 18.3$$\\times$$18.3 cm2. All trainable layers except the final layer use rectified linear activation functions (ReLU). To enhance generalization performance, the model uses batch normalization in layers 2 to 9. In order to convert the original DN27 into a Bayesian DN dropout is added as the last operation in all but the final layer and 10 percent of a layer’s hidden units are randomly switched off.\n\n#### Bayesian dilated residual network (DRN)\n\nThe Bayesian DRN is based on the original DRN from Yu et al.28 for image segmentation. More specifically, the DRN-D-2228 is used which consists of a feature extraction module with output stride eight followed by a classifier implemented as fully convolutional layer with 1$$\\times$$1 convolutions. Output of the classifier is upsampled to full resolution using bilinear interpolation. The convolutional feature extraction module comprises eight levels where the number of kernels increases from 16 in the first level, to 512 in the two final levels. The first convolutional layer in level 1 uses 16 kernels of size 7$$\\times$$7 voxels and zero-padding of size 3. The remaining trainable layers use small 3$$\\times$$3 voxel kernels and zero-padding of size 1. Level 2 to 4 use a strided convolution of size 2. To further increase the receptive field convolutional layers in level 5, 6 and 7 use a dilation factor of 2, 4 and 2, respectively. Furthermore, levels 3 to 6 consist of two residual blocks. All convolutional layers of the feature extraction module are followed by batch normalization, ReLU function and dropout. Adding dropout and switching off 10 percent of a layer’s hidden units converts the original DRN28 into a Bayesian DRN.\n\n#### Bayesian U-net (U-net)\n\nThe standard architecture of the U-net29 is used. The network is fully convolutional and consists of a contracting, bottleneck and expanding path. The contracting and expanding path each consist of four blocks i.e. resolution levels which are connected by skip connections. The first block of the contracting path contains two convolutional layers using a kernel size of 3$$\\times$$3 voxels and zero-padding of size 1. Downsampling of the input is accomplished by employing a max pooling operation in block 2 to 4 of the contracting path and the bottleneck using a convolutional kernel of size 2$$\\times$$2 voxels and stride 2. Upsampling is performed by a transposed convolutional layer in block 1 to 4 of the expanding path using the same kernel size and stride as the max pooling layers. Each downsampling and upsampling layer is followed by two convolutional layers using 3$$\\times$$3 voxel kernels with zero-padding size 1. The final convolutional layer of the network acts as a classifier and uses 1$$\\times$$1 convolutions to reduce the number of output channels to the number of segmentation classes. The number of kernels increases from 64 in the first block of the contracting path to 1024 in the bottleneck. In contrast, the number of kernels in the expanding path successively decreases from 1024 to 64. In deviation to the standard U-net instance normalization is added to all convolutional layers in the contracting path and ReLU non-linearities are replaced by LeakyReLU functions because this was found to slightly improve segmentation performance. In addition, to convert the deterministic model into a Bayesian neural network dropout is added as the last operation in each block of the contracting and expanding path and 10 percent of a layer’s hidden units are randomly switched off.\n\n### Assessment of predictive uncertainties\n\nTo detect failures in segmentation masks generated by CNNs in testing, spatial uncertainty maps of the obtained segmentations are generated. For each voxel in the image two measures of uncertainty are calculated. First, a computationally cheap and straightforward measure of uncertainty is the entropy of softmax probabilities over the four tissue classes which are generated by the segmentation networks. Using these, normalized entropy maps $$\\mathbf{E}\\in [0, 1]^{H\\times W}$$ (e-map) are computed where H and W denote the height and width of the original CMRI, respectively.\n\nSecond, by applying MC-dropout in testing, softmax probabilities with a number of samples T per voxel are obtained. As an overall measure of uncertainty the mean standard deviation of softmax probabilities per voxel over all tissue classes C is computed\n\n\\begin{aligned} \\mathbf{B }(I)^{(x, y)}&= \\frac{1}{C} \\sum _{c=1}^{C} \\sqrt{\\frac{1}{T-1} \\sum _{t=1}^{T} \\big (p_t(I)^{(x, y, c)} - {\\hat{\\mu }}^{(x, y, c)} \\big )^2 }, \\end{aligned}\n(1)\n\nwhere $$\\mathbf{B }(I)^{(x, y)} \\in [0, 1]$$ denotes the normalized value of the Bayesian uncertainty map (b-map) at position (xy) in 2D slice I, C is equal to the number of classes, T is the number of samples and $$p_t(I)^{(x, y, c)}$$ denotes the softmax probability at position (xy) in image I for class c. The predictive mean per class $${\\hat{\\mu }}^{(x, y, c)}$$ of the samples is computed as follows:\n\n\\begin{aligned} {\\hat{\\mu }}^{(x, y, c)}&= \\frac{1}{T} \\sum _{t=1}^{T} p_t(I)^{(x, y, c)}. \\end{aligned}\n(2)\n\nIn addition, the predictive mean per class is used to determine the tissue class per voxel.\n\n### Calibration of uncertainty maps\n\nIdeally, incorrectly segmented voxels as defined by the reference labels should be covered by higher uncertainties than correctly segmented voxels. In such a case the spatial uncertainty maps are perfectly calibrated. Risk-coverage curves introduced by Geifman et al.30 visualize whether incorrectly segmented voxels are covered by higher uncertainties than those that are correctly segmented. Risk-coverage curves convey the effect of avoiding segmentation of voxels above a specific uncertainty value on the reduction of segmentation errors (i.e. risk reduction) while at the same time quantifying the voxels that were omitted from the classification task (i.e. coverage).\n\nTo generate risk-coverage curves first, each patient volume is cropped based on a minimal enclosing parallelepiped bounding box that is placed around the reference segmentations to reduce the number of background voxels. Note that this is only performed to simplify the analysis of the risk-coverage curves. Second, voxels of the cropped patient volume are ranked based on their uncertainty value in descending order. Third, to obtain uncertainty threshold values per patient volume the ranked voxels are partitioned into 100 percentiles based on their uncertainty value. Finally, per patient volume each uncertainty threshold is evaluated by computing a coverage and a risk measure. Coverage is the percentage of voxels in a patient volume at ED or ES that is automatically segmented. Voxels in a patient volume above the threshold are discarded from automatic segmentation and would be referred to an expert. The number of incorrectly segmented voxels per patient volume is used as a measure of risk. Using bilinear interpolation risk measures are computed per patient volume between [0, 100] percent.\n\n### Detection of segmentation failures\n\nTo detect segmentation failures uncertainty maps are used but direct application of uncertainties is infeasible because many correctly segmented voxels, such as those close to anatomical structure boundaries, have high uncertainty. Hence, an additional patch-based CNN (detection network) is used that takes a cardiac MR image together with the corresponding spatial uncertainty map as input. For each patch of 8 $$\\times$$ 8 voxels the network generates a probability indicating whether it contains segmentation failure. In the following, the terms patch and region are used interchangeably.\n\nThe detection network is a shallow Residual Network (S-ResNet)31 consisting of a feature extraction module with output stride eight followed by a classifier indicating the presence of segmentation failure. The first level of the feature extraction module consists of two convolutional layers. The first layer uses 16 kernels of 7 $$\\times$$ 7 voxels and zero-padding of size 3 and second layer 32 kernels of 3 $$\\times$$ 3 voxels and zero-padding of 1 voxel. Level 2 to 4 each consist of one residual block that contains two convolutional layers with 3 $$\\times$$ 3 voxels kernels with zero-padding of size 1. The first convolutional layer of each residual block uses a strided convolution of 2 voxels to downsample the input. All convolutional layers of the feature extraction module are followed by batch normalization and ReLU function. The number of kernels in the feature extraction module increases from 16 in level 1 to 128 in level 4. The network is a 2D patch-level classifier and requires that the size of the two input slices is a multiple of the patch-size. The final classifier consists of three fully convolutional layers, implemented as 1 $$\\times$$ 1 convolutions, with 128 feature maps in the first two layers. The final layer has two channels followed by a softmax function which indicates whether the patch contains segmentation failure. Furthermore, to regularize the model dropout layers ($$p=0.5$$) were added between the residual blocks and the fully convolutional layers of the classifier.\n\n## Evaluation\n\nAutomatic segmentation performance, as well as performance after simulating the correction of detected segmentation failures and after manual expert correction was evaluated. For this, the 3D Dice-coefficient (DC) and 3D Hausdorff distance (HD) between manual and (corrected) automatic segmentation were computed. Furthermore, the following clinical metrics were computed for manual and (corrected) automatic segmentation: left ventricle end-diastolic volume (EDV); left ventricle ejection fraction (EF); right ventricle EDV; right ventricle ejection fraction; and left ventricle myocardial mass. Following Bernard et al.7 for each of the clinical metrics three performance indices were computed using the measurements based on manual and (corrected) automatic segmentation: Pearson correlation coefficient; mean difference (bias and standard deviation); and mean absolute error (MAE).\n\nTo evaluate detection performance of the automatic method precision-recall curves of identification of slices that require correction were computed. A slice is considered positive in case it consists of at least one image region with a segmentation failure. To achieve accurate segmentation in clinic, identification of slices that contain segmentation failures might ease manual correction of automatic segmentations in daily practice. To further evaluate detection performance detection rate of segmentation failures was assessed on a voxel level. More specific, sensitivity against the number of false positive regions was evaluated because manual correction is presumed to be performed at this level.\n\nFinally, after simulation and manual correction of the automatically detected segmentation failures, segmentation was re-evaluated and significance of the difference between the DCs, HDs and clinical metrics was tested with a Mann-Whitney U test.\n\n## Experiments\n\nTo use stratified four-fold cross-validation the dataset was split into training (75%) and test (25%) set. The splitting was done on a patient level, so there was no overlap in patient data between training and test sets. Furthermore, patients were randomly chosen from each of the five patient groups w.r.t. disease. Each patient has one volume for ED and ES time points, respectively.\n\n### Training segmentation networks\n\nDRN and U-net were trained with a patch size of 128 $$\\times$$ 128 voxels which is a multiple of their output stride of the contracting path. In the training of the dilated CNN (DN) images with 151 $$\\times$$ 151 voxel samples were used. Zero-padding to 281 $$\\times$$ 281 was performed to accommodate the 131 $$\\times$$ 131 voxel receptive field that is induced by the dilation factors. Training samples were randomly chosen from training set and augmented by 90 degree rotations of the images. All models were initially trained with three loss functions: soft-Dice33 (SD); cross-entropy (CE); and Brier loss34. However, for the evaluation of the combined segmentation and detection approach for each model architecture the two best performing loss functions were chosen: soft-Dice for all models; cross-entropy for DRN and U-net and Brier loss for DN. For completeness, we provide the equations for all three used loss functions.\n\n\\begin{aligned} \\text {soft-Dice}_{c} = \\frac{\\sum _{i=1}^{N} R_{c}(i) \\; A_{c}(i) }{\\sum _{i=1}^{N} R_{c}(i) + \\sum _{i=1}^{N} A_{c}(i)} , \\end{aligned}\n(3)\n\nwhere N denotes the number of voxels in an image, $$R_{c}$$ is the binary reference image for class c and $$A_{c}$$ is the probability map for class c.\n\n\\begin{aligned} \\text {Cross-Entropy}_{c}&= - \\; \\sum _{i=1}^{N} t_{ic} \\; \\log \\; p(y_i=c|x_i) , \\text { where } t_{ic} = 1 \\text { if } y_{i}=c, \\text { and 0 otherwise.} \\end{aligned}\n(4)\n\\begin{aligned} \\text {Brier}_{c}&= \\sum _{i=1}^{N} \\big (t_{ic} - p(y_i=c|x_{i}) \\big )^2 , \\text { where } t_{ic} = 1 \\text { if } y_{i}=c, \\text { and 0 otherwise.} \\end{aligned}\n(5)\n\nwhere N denotes the number of voxels in an image and p denotes the probability for a specific voxel $$x_i$$ with corresponding reference label $$y_i$$ for class c.\n\nChoosing Brier loss to train the DN model instead of CE was motivated by our preliminary work which showed that segmentation performance of DN model was best when trained with Brier loss25.\n\nAll models were trained for 100,000 iterations. DRN and U-net were trained with a learning rate of 0.001 which decayed with a factor of 0.1 after every 25,000 steps. Training DN used the snapshot ensemble technique35, where after every 10,000 iterations the learning rate was reset to its original value of 0.02.\n\nAll three segmentation networks were trained using mini-batch stochastic gradient descent using a batch size of 16. Network parameters were optimized using the Adam optimizer36. Furthermore, models were regularized with weight decay to increase generalization performance.\n\n### Training detection network\n\nTo train the detection model a subset of the errors performed by the segmentation model is used. Segmentation errors that presumably are within the range of inter-observer variability and therefore do not inevitably require correction (tolerated errors) are excluded from the set of errors that need to be detected and corrected (segmentation failures). To distinguish between tolerated errors and the set of segmentation failures $${\\mathcal {S}}_I$$ the Euclidean distance of an incorrectly segmented voxel to the boundary of the reference target structure is used. For each anatomical structure a 2D distance transform map is computed that provides for each voxel the distance to the anatomical structure boundary. To differentiate between tolerated errors and the set of segmentation failures $${\\mathcal {S}}_I$$ an acceptable tolerance threshold is applied. A more rigorous threshold is used for errors located inside compared to outside of the anatomical structure because automatic segmentation methods have a tendency to undersegment cardiac structures in CMRI. Hence, in all experiments the acceptable tolerance threshold was set to three voxels (equivalent to on average $${4.65}\\, \\hbox {mm}$$) and two voxels (equivalent to on average $${3.1}\\, \\hbox {mm}$$) for segmentation errors located outside and inside the target structure. Furthermore, a segmentation error only belongs to $${\\mathcal {S}}_I$$ if it is part of a 2D 4-connected cluster of minimum size 10 voxels. This value was found in preliminary experiments by evaluating values $$\\{1, 5, 10, 15, 20\\}$$. However, for apical slices all segmentation errors are included in $${\\mathcal {S}}_I$$ regardless of fulfilling the minimum size requirement because in these slices anatomical structures are relatively small and manual segmentation is prone to large inter-observer variability7. Finally, segmentation errors located in slices above the base or below the apex are always included in the set of segmentation failures.\n\nUsing the set $${\\mathcal {S}}_I$$ a binary label $$t_j$$ is assigned to each patch $$P_j^{(I)}$$ indicating whether $$P_j^{(I)}$$ contains at least one voxel belonging to set $${\\mathcal {S}}_I$$ where $$j \\in \\{1 \\dots M \\}$$ and M denotes the number of patches in a slice I.\n\nThe detection network is trained by minimizing a weighted binary cross-entropy loss:\n\n\\begin{aligned} {\\mathcal {L}}_{DT} = - \\sum _{j \\in P^{(I)}} w_{pos} \\; t_j \\log p_j + (1 - t_j) \\log (1 - p_j) \\; , \\end{aligned}\n(6)\n\nwhere $$w_{pos}$$ represents a scalar weight, $$t_j$$ denotes the binary reference label and $$p_j$$ is the softmax probability indicating whether a particular image region $$P_j^{(I)}$$ contains at least one segmentation failure. The average percentage of regions in a patient volume containing segmentation failures ranges from 1.5 to 3 percent depending on the segmentation architecture and loss function used to train the segmentation model. To train a detection network $$w_{pos}$$ was set to the ratio between the average percentage of negative samples divided by the average percentage of positive samples.\n\nEach fold was trained using spatial uncertainty maps and automatic segmentation masks generated while training the segmentation networks. Hence, there was no overlap in patient data between training and test set across segmentation and detection tasks. In total 12 detection models were trained and evaluated resulting from the different combination of 3 model architectures (DRN, DN and U-net), 2 loss functions (DRN and U-net with CE and soft-Dice, DN with Brier and soft-Dice) and 2 uncertainty maps (e-maps, b-maps).\n\nThe patches used to train the network were selected randomly ($$\\frac{2}{3}$$), or were forced ($$\\frac {1}{3}$$) to contain at least one segmentation failure by randomly selecting a scan containing segmentation failure, followed by random sampling of a patch containing at least one segmentation failure. During training the patch size was fixed to 80 $$\\times$$ 80 voxels. To reduce the number of background voxels during testing, inputs were cropped based on a minimal enclosing, rectangular bounding box that was placed around the automatic segmentation mask. Inputs always had a minimum size of 80 $$\\times$$ 80 voxels or were forced to a multiple of the output grid spacing of eight voxels in both direction required by the patch-based detection network. The patches of size 8 $$\\times$$ 8 voxels did not overlap. In cases where the automatic segmentation mask only contains background voxels (scans above the base or below apex of the heart) input scans were center-cropped to a size of 80 $$\\times$$ 80 voxels.\n\nModels were trained for 20,000 iterations using mini-batch stochastic gradient descent with batch-size 32 and Adam as optimizer36. Learning rate was set to 0.0001 and decayed with a factor of 0.1 after 10.000 steps. Furthermore, dropout percentage was set to 0.5 and weight decay was applied to increase generalization performance.\n\n### Segmentation using correction of the detected segmentation failures\n\nTo investigate whether correction of detected segmentation failures increases segmentation performance two scenarios were performed. In the first scenario manual correction of the detected failures by an expert was simulated for all images at ED and ES time points of the ACDC dataset. For this purpose, in image regions that were detected to contain segmentation failure predicted labels were replaced with reference labels. In the second scenario manual correction of the detected failures was performed by an expert in a random subset of 50 patients of the ACDC dataset. The expert was shown CMRI slices for ED and ES time points together with corresponding automatic segmentation masks for the RV, LV and LV myocardium. Image regions detected to contain segmentation failures were indicated in slices and the expert was only allowed to change the automatic segmentations in these indicated regions. Annotation was performed following the protocol described in7. Furthermore, expert was able to navigate through all CMRI slices of the corresponding ED and ES volumes.\n\n## Results\n\nIn this section we first present results for the segmentation-only task followed by description of the combined segmentation and detection results.\n\n### Segmentation-only approach\n\nTable 1 lists quantitative results for segmentation-only and combined segmentation and detection approach in terms of Dice coefficient and Hausdorff distance. These results show that DRN and U-net achieve similar Dice coefficients and outperformed the DN network for all anatomical structures at end-systole. Differences in the achieved Hausdorff distances among the methods are present for all anatomical structures and for both time points. The DRN model achieved the highest and the DN network the lowest Hausdorff distance.\n\nTable 2 lists results of the evaluation in terms of clinical metrics. These results reveal noticeable differences between models for ejection fraction (EF) of left and right ventricle, respectively. We can observe that U-net trained with the soft-Dice and the Dilated Network (DN) trained with Brier or soft-Dice loss achieved considerable lower accuracy for LV and RV ejection fraction compared to DRN. Overall, the DRN model achieved highest performance for all clinical metrics.\n\n#### Effect of model architecture on segmentation\n\nAlthough quantitative differences between models are small, qualitative evaluation discloses that automatic segmentations differ substantially between the models. Figure 2 shows that especially in regions where the models perform poorly (apical and basal slices) the DN model more often produced anatomically implausible segmentations compared to the DRN and U-net. This seems to be correlated with the performance differences in Hausdorff distance.\n\n#### Effect of loss function on segmentation\n\nThe results indicate that the choice of loss function only slightly affects the segmentation performance. DRN and U-net perform marginally better when trained with soft-Dice compared to cross-entropy whereas DN performs better when trained with Brier loss than with soft-Dice. For DN this is most pronounced for the RV at ES.\n\nA considerable effect of the loss function on the accuracy of the LV and RV ejection fraction can be observed for the U-net model. On both metrics U-net achieved the lowest accuracy of all models when trained with the soft-Dice loss.\n\n#### Effect of MC dropout on segmentation\n\nThe results show that enabling MC-dropout during testing seems to result in slightly improved HD while it does not affect DC.\n\n### Detection of segmentation failures\n\n#### Detection of segmentation failures on voxel level\n\nTo evaluate detection performance of segmentation failures on voxel level Fig. 3a shows average voxel detection rate as a function of false positively detected regions. This was done for each combination of model architecture and loss function exploiting e- (Fig. 3a, left) or b-maps (Fig. 3a, right). These results show that detection performance of segmentation failures depends on segmentation model architecture, loss function and uncertainty map.\n\nThe influence of (segmentation) model architecture and loss function on detection performance is slightly stronger when e-maps were used as input for the detection task compared to b-maps. Detection rates are consistently lower when segmentation failures originate from segmentation models trained with soft-Dice loss compared to models trained with CE or Brier loss. Overall, detection rates are higher when b-maps were exploited for the detection task compared to e-maps.\n\n#### Detection of slices with segmentation failures\n\nTo evaluate detection performance w.r.t. slices containing segmentation failures precision-recall curves for each combination of model architecture and loss function using e-maps (Fig. 3b, left) or b-maps (Fig. 3b, right) are shown. The results show that detection performance of slices containing segmentation failures is slightly better for all models when using e-maps. Furthermore, the detection network achieves highest performance using uncertainty maps obtained from the DN model and the lowest when exploiting e- or b-maps obtained from the DRN model. Table 3 shows the average precision of detected slices with segmentation failures per patient, as well as the average percentage of slices that do contain segmentation failures (reference for detection task). The results illustrate that these measures are positively correlated i.e. that precision of detected slices in a patient volume is higher if the volume contains more slices that need correction. On average the DN model generates cardiac segmentations that contain more slices with at least one segmentation failure compared to U-net (ranks second) and DRN (ranks third). A higher number of detected slices containing segmentation failures implies an increased workload for manual correction.\n\n### Calibration of uncertainty maps\n\nFigure 4 shows risk-coverage curves for each combination of model architectures, uncertainty maps and loss functions (Fig. 4 left: CE or Brier loss, Fig. 4 right: soft-Dice). The results show an effect of the loss function on slope and convergence of the curves. Segmentation errors of models trained with the soft-Dice loss are less frequently covered by higher uncertainties than models trained with CE or Brier loss (steeper slope and lower minimum are better). This difference is more pronounced for e-maps. Models trained with the CE or Brier loss only slightly differ concerning convergence and their slopes are approximately identical. In contrast, the curves of the models trained with the soft-Dice differ regarding their slope and achieved minimum. Comparing e- and b-map of the DN-SD and U-net-SD models the results reveal that the curve for b-map has a steeper slope and achieves a lower minimum compared to the e-map. For the DRN-SD model these differences are less striking. In general for a specific combination of model and loss function the risk-coverage curves using b-maps achieve a lower minimum compared to e-maps.\n\n### Correction of automatically identified segmentation failures\n\n#### Simulated correction\n\nThe results listed in Tables 1 and 2 show that the proposed method consisting of segmentation followed by simulated manual correction of detected segmentation failures delivers accurate segmentation for all tissues over ED and ES points. Correction of detected segmentation failures improved the performance in terms of DC, HD and clinical metrics for all combinations of model architectures, loss functions and uncertainty measures. Focusing on the DC after correction of detected segmentation failures the results reveal that performance differences between evaluated models decreased compared to the segmentation-only task. This effect is less pronounced for HD where the DRN network clearly achieved superior results in the segmentation-only and combined approach. The DN performs the least of all models but achieves the highest absolute DC performance improvements in the combined approach for RV at ES. Overall, the results in Table 1 disclose that improvements attained by the combined approach are almost all statistically significant ($$p \\le 0.05$$) at ES and frequently at ED (96% resp. 83% of the cases). Moreover, improvements are in 99% of the cases statistically significant for HD compared to 81% of the cases for DC.\n\nResults in terms of clinical metrics shown in Table 2 are inline with these findings. We observe that segmentation followed by simulated manual correction of detected segmentation failures resulted in considerably higher accuracy for LV and RV ejection fraction. Achieved improvements for clinical metrics are only statistically significant ($$p \\le 0.05$$) in one case for RV ejection fraction.\n\nIn general, the effect of correction of detected segmentation failures is more pronounced in cases where the segmentation-only approach achieved relatively low accuracy (e.g. DN-SD for RV at ES). Furthermore, performance gains are largest for RV and LV at ES and for ejection fraction of both ventricles.\n\nThe best overall performance is achieved by the DRN model trained with cross-entropy loss while exploiting entropy maps in the detection task. Moreover, the proposed two step approach attained slightly better results using Bayesian maps compared to entropy maps.\n\n#### Manual correction\n\nTable 4 lists results for the combined automatic segmentation and detection approach followed by manual correction of detected segmentation failures by an expert. The results show that this correction led to improved segmentation performance in terms of DC, HD and clinical metrics. Improvements in terms of HD are in 50 percent of the cases statistically significant ($$p \\le 0.05$$) and most pronounced for RV and LV at end-systole.\n\nQualitative examples of the proposed approach are visualized in Figs. 5 and 6 for simulated correction and manual correction of the automatically detected segmentation failures, respectively. For the illustrated cases (simulated) manual correction of detected segmentation failures leads to increased segmentation performance. On average manual correction of automatic segmentations took less than 2 min for ED and ES volumes of one patient compared to 20 min that is typically needed by an expert for the same task.\n\n## Ablation study\n\nTo demonstrate the effect of different hyper-parameters in the method, a number of experiments were performed. In the following text these are detailed.\n\n### Impact of number of Monte Carlo samples on segmentation performance\n\nTo investigate the impact of the number of Monte Carlo (MC) samples on the segmentation performance validation experiments were performed for all three segmentation architectures (Dilated Network, Dilated Residual Network and U-net) using T $$\\in \\{1, 3, 5, 7, 10, 20, 30, 60\\}$$ samples. Results of these experiments are listed in Table 5. We observe that segmentation performance started to converge using seven samples only. Performance improvements using an increased number of MC samples were largest for the Dilated Network. Overall, using more than ten samples did not increase segmentation performance. Hence, in the presented work T was set to 10.\n\n### Effect of patch-size on detection performance\n\nThe combined segmentation and detection approach detects segmentation failures on region level. To investigate the effect of patch-size on detection performance three different patch-sizes were evaluated: 4 $$\\times$$ 4, 8 $$\\times$$ 8, and 16 $$\\times$$ 16 voxels. The results are shown in Fig. 7. We can observe in Fig. 7a that larger patch-sizes result in a lower number of false positive regions. The result is potentially caused by the decreasing number of regions in an image when using larger patch-sizes compared to smaller patch-sizes. Furthermore, Fig. 7b reveals that slice detection performance is only slightly influenced by patch-size. To ease manual inspection and correction by an expert, it is desirable to keep region-size i.e. patch-size small. Therefore, in the experiments a patch-size of 8 $$\\times$$ 8 voxels was used.\n\n### Impact of tolerance threshold on number of segmentation failures\n\nTo investigate the impact of the tolerance threshold separating segmentation failures and tolerable segmentation errors, we calculated the ratio of the number of segmentation failures and all errors i.e. the sum of tolerable errors and segmentation failures. Fig. 8 shows the results. We observe that at least half of the segmentation failures are located within a tolerance threshold i.e. distance of two to three voxels of the target structure boundary as defined by the reference annotation. Furthermore, the mean percentage of failures per volume is considerably lower for the Dilated Residual Network (DRN) and highest for the Dilated Network. This result is inline with our earlier finding (see Table 3) that average percentage of slices that do contain segmentation failures is lowest for the DRN model.\n\n## Discussion\n\nWe have described a method that combines automatic segmentation and assessment of uncertainty in cardiac MRI with detection of image regions containing segmentation failures. The results show that combining automatic segmentation with manual correction of detected segmentation failures results in higher segmentation performance. In contrast to previous methods that detected segmentation failures per patient or per structure, we showed that it is feasible to detect segmentation failures per image region. In most of the experimental settings, simulated manual correction of detected segmentation failures for LV, RV and LVM at ED and ES led to statistically significant improvements. These results represent the upper bound on the maximum achievable performance for the manual expert correction task. Furthermore, results show that manual expert correction of detected segmentation failures led to consistently improved segmentations. However, these results are not on par with the simulated expert correction scenario. This is not surprising because inter-observer variability is high for the presented task and annotation protocols may differ between clinical environments. Moreover, qualitative results of the manual expert correction reveal that manual correction of the detected segmentation failures can prevent anatomically implausible segmentations (see Fig. 6). Therefore, the presented approach can potentially simplify and accelerate correction process and has the capacity to increase the trustworthiness of existing automatic segmentation methods in daily clinical practice.\n\nThe proposed combined segmentation and detection approach was evaluated using three state-of-the-art deep learning segmentation architectures. The results suggest that our approach is generic and applicable to different model architectures. Nevertheless, we observe noticeable differences between the different combination of model architectures, loss functions and uncertainty measures. In the segmentation-only task the DRN clearly outperforms the other two models in the evaluation of the boundary of the segmented structure. Moreover, qualitative analysis of the automatic segmentation masks suggests that DRN generates less often anatomically implausible and fragmented segmentations than the other models. We assume that clinical experts would prefer such segmentations although they are not always perfect. Furthermore, even though DRN and U-net achieve similar performance in regard to DC we assume that less fragmented segmentation masks would increase trustworthiness of the methods.\n\nIn agreement with our preliminary work we found that uncertainty maps obtained from a segmentation model trained with soft-Dice loss have a lower degree of uncertainty calibration compared to when trained with one of the other two loss functions (cross-entropy and Brier)25. Nevertheless, the results of the combined segmentation and detection approach showed that a lower degree of uncertainty calibration only slightly deteriorated the detection performance of segmentation failures for the larger segmentation models (DRN and U-net) when exploiting uncertainty information from e-maps. Hendrycks and Gimpel37 showed that softmax probabilities generated by deep learning networks have poor direct correspondence to confidence. However, in agreement with Geifman et al.30 we presume that probabilities and hence corresponding entropies obtained from softmax function are ranked consistently i.e. entropy can potentially be used as a relative uncertainty measure in deep learning. In addition, we detect segmentation failures per image region and therefore, our approach does not require perfectly calibrated uncertainty maps. Furthermore, results of the combined segmentation and detection approach revealed that detection performance of segmentation failures using b-maps is almost independent of the loss function used to train the segmentation model. In line with Jungo et al.38 we assume that by enabling MC-dropout in testing and computing the mean softmax probabilities per class leads to better calibrated probabilities and b-maps. This assumption is in agreement with Srivastava et al.39 where a CNN with dropout used at testing is interpreted as an ensemble of models.\n\nQuantitative evaluation in terms of Dice coefficient and Hausdorff distance reveals that proposed combined segmentation and detection approach leads to significant performance increase. However, the results also demonstrate that the correction of the detected failures allowed by the combined approach does not lead to statistically significant improvement in clinical metrics. This is not surprising because state-of-the-art automatic segmentation methods are not expected to lead to large volumetric errors7 and standard clinical measures are not sensitive to small segmentation errors. Nevertheless, errors of the current state-of-the-art automatic segmentation methods may lead to anatomically implausible segmentations7 that may cause distrust in clinical application. Besides increase in trustworthiness of current state-of-the-art segmentation methods for cardiac MRIs, improved segmentations are a prerequisite for advanced functional analysis of the heart e.g. motion analysis40 and very detailed morphology analysis such as myocardial trabeculae in adults41.\n\nFor the ACDC dataset used in this manuscript, Bernard et al.7 reported inter-observer variability ranging from 4 to $${14.1}\\, \\hbox {mm}$$ (equivalent to on average 2.6 to 9 voxels). To define the set of segmentation failures, we employed a strict tolerance threshold on distance metric to distinguish between tolerated segmentation errors and segmentation failures (see Ablation study). Stricter tolerance threshold was used because the thresholding is performed in 2D, while evaluation of segmentation is done in 3D. Large slice thickness in cardiac MR could lead to a discrepancy between the two. As a consequence of this strict threshold results listed in Table 3 show that almost all patient volumes contain at least one slice with a segmentation failure. This might render the approach less feasible in clinical practice. Increasing the threshold decreases the number of segmentation failures and slices containing segmentation failures (see Fig. 8) but also lowers the upper bound on the maximum achievable performance. Therefore, to show the potential of our proposed approach we chose to apply a strict tolerance threshold. Nevertheless, we realize that although manual correction of detected segmentation failures leads to increased segmentation accuracy the performance of precision-recall is limited (see Fig. 3) and hence, should be a focus of future work.\n\nThe presented patch-based detection approach combined with (simulated) manual correction can in principle lead to stitching artefacts in the resulting segmentation masks. A voxel-based detection approach could potentially solve this. However, voxel-based detection methods are more challenging to train due to the very small number of voxels in an image belonging to the set of segmentation failures.\n\nEvaluation of the proposed approach for 12 possible combinations of segmentation models (three), loss functions (two) and uncertainty maps (two) resulted in an extensive number of experiments. Nevertheless, future work could extend evaluation to other segmentation models, loss functions or combination of losses. Furthermore, our approach could be evaluated using additional uncertainty estimation techniques e.g. by means of ensembling of networks42 or variational dropout43. In addition, previous work by Kendall and Gal44, Tanno et al.45 has shown that the quality of uncertainty estimates can be improved if model (epistemic) and data (aleatoric) uncertainty are assessed simultaneously with separate measures. The current study focused on the assessment of model uncertainty by means of MC-dropout and entropy which is a combination of epistemic and aleatoric uncertainty. Hence, future work could investigate whether additional estimation of aleatoric uncertainty improves the detection of segmentation failures.\n\nFurthermore, to develop an end-to-end approach future work could incorporate the detection of segmentation failures into the segmentation network. Besides, adding the automatic segmentations to the input of the detection network could increase the detection performance.\n\nFinally, the proposed approach is not specific to cardiac MRI segmentation. Although data and task specific training would be needed the approach could potentially be applied to other image modalities and segmentation tasks.\n\n## Conclusion\n\nA method combining automatic segmentation and assessment of segmentation uncertainty in cardiac MR with detection of image regions containing local segmentation failures has been presented. The combined approach, together with simulated and manual correction of detected segmentation failures, increases performance compared to segmentation-only. The proposed method has the potential to increase trustworthiness of current state-of-the-art segmentation methods for cardiac MRIs." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8941746,"math_prob":0.9290727,"size":65714,"snap":"2023-40-2023-50","text_gpt3_token_len":13257,"char_repetition_ratio":0.18854056,"word_repetition_ratio":0.058237147,"special_character_ratio":0.1987552,"punctuation_ratio":0.11452961,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9550537,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-09T15:26:44Z\",\"WARC-Record-ID\":\"<urn:uuid:807aa6ea-d680-4e82-a7df-7aeb20644fa0>\",\"Content-Length\":\"403532\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e99cabd1-1f5e-43a0-98f0-b03f32d9c76b>\",\"WARC-Concurrent-To\":\"<urn:uuid:a707e705-7977-4fda-a1d1-bfa189d34edc>\",\"WARC-IP-Address\":\"146.75.32.95\",\"WARC-Target-URI\":\"https://www.nature.com/articles/s41598-020-77733-4?error=cookies_not_supported&code=b3257ace-5d87-4a3d-af78-2b5e07c29d29\",\"WARC-Payload-Digest\":\"sha1:GN7CSWZUB2XE5OGOZFHNH73Q2KRC7PKD\",\"WARC-Block-Digest\":\"sha1:BWJUENYQCVIFZCU63GI3Z2DTR63HLXT7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100912.91_warc_CC-MAIN-20231209134916-20231209164916-00312.warc.gz\"}"}
https://wiki.csiamerica.com/exportword?pageId=1216739
[ "Message-ID: <1371640563.15239.1558619265106.JavaMail.j2ee-wiki-prod@csi1-cos-conf01.managed.contegix.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary=\"----=_Part_15238_432325442.1558619265106\" ------=_Part_15238_432325442.1558619265106 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location: file:///C:/exported.html SAP2000\n\n# Explore SAP2000 Test = Problems\n\n=20\n=20\n=20\n=20\n=20\n=20 =20 = =20 =20 =20 =20 =20 =20 =20 = =20 =20 =20 =20 =20\n=20 =20\n=20\n=20\n=20\n\n# List of SAP2000 Test Problems<= /h1> =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 =20 Title Description Effect of insertion point on beam reactions How insertion poi= nt affects horizontal reactions and flexural response of a simply supported= beam. T= endon force vs. frame response Tendon applicatio= n is validated by comparing tendon forces to those in an equivalent frame s= ystem. Moving-load analysis section cuts Verification of s= ection-cut forces generated during moving-load analysis. Staged construction of a five-story column Creep application= , addition of nodes to deformed configuration, and verification against man= ual calculations are given for the staged construction of a five-story colu= mn. Acceleration= loading Demonstrate accel= eration loading and validate relative/absolute acceleration, velocity, and = displacement. A= lign solid and hollow sections Model relative po= sitive position for frame sections which have identical outlines, but diffe= rent center-of-gravity locations due to one section being hollow. Two-span girder simply-supported for DL and c= ontinuous for LL Modeling demonstr= ation for a two-span girder which is simply-supported for DL and continuous= for LL. Temperature-gradient loading for bridge objects This test problem= demonstrates CSI Software calculation and application of temperature-gradi= ent loading to bridge objects. Line and ar= ea springs This test problem= demonstrates and validates the application of line and area springs. Insertion point and transform stiffness 3D demonstration = of insertion-point, end-offset, and transform-stiffness application. Body vs= . equal constraint Comparison betwee= n body-constraint and equal-constraint application to a simply supported be= am. End offsets Demonstration of = end offsets applied to a two-span continuous beam. Frame = to shell connections This tutorial des= cribes the application of connections between frame and shell elements.= Saving section cuts during moving-load analysis Sections cuts may= be saved during moving-load analysis through this procedure. Section cuts drawn within the graphical user interfac= e Draw section cuts= within the graphical user interface using either 2D or 3D views. P-Delta effect for a cantilevered column Calculation and v= erification of the P-Delta effects of a cantilevered column. Human-in= duced vibrations The modeling and = analysis of human-induced vibrations due to footfalls or another type of im= pact. Fram= e and shell section cuts Section cuts are = defined through a simply-supported beam which is modeled using frame and sh= ell objects. Interpreting buckling analysis resul= ts for different initial conditions Buckling analysis= may begin with either zero initial conditions or the stiffness taken from = the end of a nonlinear load case. This test problem compares the associated= output. Multi= -pendulum model (Newton's cradle) Model a pendulum = system in SAP2000 using large-displacement time-history analysis. Hinge response when yield point changes Behavior of a con= centrated plastic hinge when the loading applied to a nonlinear frame objec= t causes the yield point of the interaction surface to change position.= = Staged construction in buildings Guidelines for se= tting up staged construction and interpreting the staged-construction resul= ts. Veh= icle remains fully in lane Verification of m= oving-load analysis when the option is specified for a vehicle to remain fu= lly in lane. Temperature load vs. insertion point Given temperature= loading applied to a fixed-fixed beam with variable insertion point (centr= oid and top-center), theoretical solution is compared to that from a SAP200= 0 model. Partial end = releases Hand calculations= present the following SAP2000 features: fixed conditions, full releases, p= artial releases, rotational-spring supports, and panel zones. O= ptions for applying area loads Uniform (Shell), = one-way Uniform to Frame (Shell), and two-way Uniform to Frame (Shell) load= application to shell objects and associated meshing procedures. Influence surfa= ce Influence-surface= verification for a cantilever beam modeled using shell objects. Moment curvature, cracked moment= of inertia and Caltrans idealized model Parameters and ou= tput for moment curvature and cracked moment of inertia. Steady-state vs. time-history analysis Test problems to = demonstrate the differences and similarities between steady-state and time-= history analyses.\n\n------=_Part_15238_432325442.1558619265106--" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8161906,"math_prob":0.9189433,"size":3976,"snap":"2019-13-2019-22","text_gpt3_token_len":869,"char_repetition_ratio":0.11404834,"word_repetition_ratio":0.0036429872,"special_character_ratio":0.22585513,"punctuation_ratio":0.10575428,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99930024,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-23T13:47:45Z\",\"WARC-Record-ID\":\"<urn:uuid:53f998b3-7591-471c-8cb2-98dc778ffbd7>\",\"Content-Length\":\"22595\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5781a15e-870d-4963-81c4-dc5342bab941>\",\"WARC-Concurrent-To\":\"<urn:uuid:344c549f-1b5d-4404-9942-75835181f241>\",\"WARC-IP-Address\":\"199.102.164.99\",\"WARC-Target-URI\":\"https://wiki.csiamerica.com/exportword?pageId=1216739\",\"WARC-Payload-Digest\":\"sha1:Z3TLFTXYL344LNRV7SVRWTDBI7QIONST\",\"WARC-Block-Digest\":\"sha1:LKZ3FDVEHPMXBMMNF6F3NNIXXYUUIQO7\",\"WARC-Identified-Payload-Type\":\"message/rfc822\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232257244.16_warc_CC-MAIN-20190523123835-20190523145835-00060.warc.gz\"}"}
https://mathoverflow.net/questions/9274/combinatorial-distance-%E2%89%A1-euclidean-distance/9368
[ "# Combinatorial distance ≡ Euclidean distance\n\nDefinition: A polytope has property X iff there is a function f:N+ → R+ such that for each pair of vertices vi, vj the following holds:\n\ndisteuclidean(vi, vj) = f(distcombinatorial(vi, vj))\n\nwith distcombinatorial(vi, vj) = shortest path of edges between vi and vj.\n\nThat means: for each vi1, vj1, vi2, vj2:\n\ndisteuclidean(vi1, vj1) = disteuclidean(vi2, vj2)\n\niff\n\ndistcombinatorial(vi1, vj1) = distcombinatorial(vi2, vj2)\n\nQuestion 1: Is property X already named? What's its common name?\n\nQuestion 2: Which polytopes have property X? The regular polytopes seem to have it, but are there more?\n\nAnother way of describing your property X is to say that concentric spheres in the shortest path metric in the graph of the polytope are mapped into concentric Euclidean spehres. I have never heard of these polytopes before, but it is a very natural question. My suspicion is that there are not very many \"unsymmetric\" ones.\n\nHere is a plan for enumerating them in dimension 3. As mentioned by Martin M. W., in dimension two the only such polytopes are the regular polygons: all edges must be equal, and all angles must also be equal, because of vertices at distance 2 in the graph must have the same Euclidean distance. So in dimension 3, each facet of such a polytope must be a regular polygon. Because a regular polygon fixes the second-smallest distance, all non-triangular facets must be congruent. The 3-polytopes with each face a regular polygon are known: they are the 5 Platonic solids, the 13 Archimedean solids, the infinite family of prisms, the infinite family of antiprisms, and the 92 Johnson solids.\n\nOnly two prisms satisfy this property, the triangular one and the square one (aka the cube). Most likely, the only antiprism will be the triangular one (aka the octahedron), but this will need some calculations. This leaves finitely many, each of which can be checked (more calculations).\n\nIn principle, the same programme could be carried through in dimension 4. Then each facet will be one in the finite list (not) enumerated above, and because of the various distances realized by each possible 3-polytope, not many of them could co-exist. So it could be that the possibilities are even more restricted in dimension 4. Or there could be a combinatorial explosion. It's not clear to me that there would be only finitely many such polytopes (up to similarity) in a fixed dimension $\\geq 4$.\n\nAnyone for a computational project? (Note: Zalgaller's enumeration of the Johnson solids takes up almost 100 pages.)\n\n• It is not true that the octahedron is the only antiprism that works: the pentagonal antiprism (a form of bidiminished icosahedron) also works. Among the Johnson solids the diminished icosahedron, metabidiminished icosahedron, and tridiminished icosahedron all work. These polyhedra are all formed by removing vertices from an icosahedron and replacing them by pentagonal faces. – David Eppstein Dec 23 '09 at 19:11\n\nNot even all regular polytopes would have this property: any polytope having property X would have vertex figures with at most two kinds of distances between its vertices, and the 24- and 600-cells have the cube and the icosahedron for a vertex figure. Apart from those two examples, though, all regular polytopes have your property, and at least the regular prism has it as well as a regular pentagonal pyramid. No idea whether there are more examples.\n\n• Pentagonal and square pyramid also have such property. – Nurdin Takenov Dec 18 '09 at 15:19\n\nIn all dimensions n>2, you get a non-regular polytope with this property if you take two simplices and glue two faces together. Pairs of vertices will be 1 unit apart, topologically and geometrically, except for one pair that will be two units apart topologically.\n\n(In dimension 2, the only non-self-intersecting examples are the usual regular polygons, since it's easy to see all sides and angles have to be the same.)\n\nI don't know a name or a characterization for the property, though!\n\nFirst, a more standard name for what you call \"geometrical distance\" is Euclidean distance, although your name also arises. \"Topological distance\" is a very problematic name; I think that \"combinatorial distance\" is a clearer and more standard name for distance defined by the number of edges.\n\nMy guess is that your property does not have a standard name. However, it is closely related to a property that does have a standard name. The combinatorial distance is usually bounded by a relatively small integer; for instance, it is 2 in a cross polytope in any dimension. A set is more generally called a $k$-distance set if it only has $k$ distinct Euclidean distances. The conventional thinking is that $k$-distance sets are a good level of generality, although I have no idea whether conventional thinking on this point is wise or unwise. I found a an article on $k$-distance sets in normed spaces by Konrad Swanepoel which has an interesting mini-bibliography of the Euclidean case. Maybe Konrad can say more since he is a regular user of MO.\n\nMy suggestion is to make a name like \"edge-determined few-distance set\" to convey your idea. Even if there is a name that has appeared in a few papers, it is not necessarily a good name. I concede that it there already is a standard name in many papers, you should probably use it; but I checked a bit and I didn't see one.\n\nAlso, there is a more general class of examples than those that people have suggested so far. Any Cartesian product of regular simplices with unit edges is an example. This includes the $n$-cube, the $n$-simplex, and the triangular prism as special cases.\n\n• My article on k-distance sets is in the context of normed spaces, not Euclidean. And it's not really a review either. There are many, many papers on Euclidean few-distance sets - the keywords to search for a spherical design, Euclidean design, few-distance set, s-distance set, etc. They are mostly about extremal results (how large is a k-distance set in dimension n), but there are some about structure, which would be more relevant here. – Konrad Swanepoel Dec 19 '09 at 12:23\n• I apologize, I misspoke. What I meant is that your article had a good mini-bibliography for the Euclidean question. – Greg Kuperberg Dec 19 '09 at 15:44\n\nThanks for the responses. They give many valuable hints.\n\nAnyway: Does property X seem to be an interesting property or is it \"just so\"? At first glance it looks like a fundamental property - similar to regularity? What does property X tell us about the symmetry of the polytope that has it? Or what else?\n\nI wonder if the class of polytopes I am going to define might have property X:\n\nConsider the regular n-simplex $\\Delta^n$.\n\nLet $F_k^n$ be the set of k-dimensional faces of $\\Delta^n$:\n\n• $F_0^n$ = the set of vertices\n• $F_1^n$ = the set of edges\n• ...\n• $F_n^n$ = $\\Delta^n$\n\nLet $P_k^n$ be the polytope the vertices of which are the centers of the elements of $F_k^n$.\n\n$P_k^n$ represents in a natural way the subsets of [n+1]={0,1,..,n} with exactly k+1 elements.\n\n$P_1^3$ (= $P_2^3$) is the octahedron.\n\n$P_1^4$ (= $P_3^4$) is the rectified 4-simplex (with the triangular prism for vertex figure).\n\nClaim: $P_1^n$ (= $P_{n-1}^n$) is the rectified n-simplex.\n\nClaim: For any vertex v of the regular hypercube $C^n$ the vertices with combinatorial distance k to v are the vertices of $P_k^n$.\n\nConjecture: *For all n, k, the polytope $P_k^n$ has property X.*\n\nQuestion: Is there a standard name for the polytopes $P_k^n$?\n\nQuestion: Can anyone canonically name some other $P_k^n$ for 1 < k < n-1 (like \"rectified n-simplex\" for k=1)?\n\nQuestion: Does a proof of the above conjecture seem to be (i) feasible, (ii) trivial, or - if (i) but not (ii) - does anyone (iii) could sketch a proof?\n\n• These polytopes are called hypersimplices, and to show that they have property X is easy; for hints see for example this very readable paper of Rispoli, especially its Proposition 4(a). – Konrad Swanepoel Dec 23 '09 at 20:58" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8983584,"math_prob":0.9678966,"size":585,"snap":"2019-43-2019-47","text_gpt3_token_len":190,"char_repetition_ratio":0.1342513,"word_repetition_ratio":0.0,"special_character_ratio":0.25982907,"punctuation_ratio":0.20168068,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99543834,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-19T10:10:09Z\",\"WARC-Record-ID\":\"<urn:uuid:dbec172c-dd8c-46f5-b6f2-0522a136a2b3>\",\"Content-Length\":\"151104\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:908f3200-a5ac-4bff-b8ab-ed86a6148fd9>\",\"WARC-Concurrent-To\":\"<urn:uuid:0233e6fb-ac32-4d9d-8413-f47a58377079>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://mathoverflow.net/questions/9274/combinatorial-distance-%E2%89%A1-euclidean-distance/9368\",\"WARC-Payload-Digest\":\"sha1:6H44KLCY2UDQNSG5D25E2EISY5JHBFM4\",\"WARC-Block-Digest\":\"sha1:H2AX5A365HPIJBRE3LKITRGBX5ODLC7C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670135.29_warc_CC-MAIN-20191119093744-20191119121744-00281.warc.gz\"}"}
https://dsp.stackexchange.com/questions/62859/intuitive-meaning-of-complex-time-domain-signal-representation-quadrature-sampl
[ "# Intuitive meaning of complex time domain signal representation (quadrature sampling)\n\nMy question is about I Q samples as recorded from an SDR receiver (e.g. RTL-SDR).\n\nMy understanding is that we can call these I Q samples a complex time domain signal and these can have both non-zero real and imaginary components (the question I linked above is talking about zero imaginary part in the time domain).\n\nIs there an intuitive explanation of what is the meaning of this type of complex time domain signal?\n\nI can think of taking a simple case of an input sine wave of a frequency below the sampling frequency which would make the IQ signal's phasor rotate with the frequency equal to the difference between the input sine wave's frequency and the sampling frequency thus allowing to unambiguously reconstruct frequencies up to (but not including) the sampling frequency from the IQ samples.\n\nBut I'm struggling to extend this special case to an input composed of several sinusoids of different frequencies.\n\n• Are you familiar with Euler’s identities that express sines and cosines in terms of exponentials such as $e^{j\\omega t}$ and do you have an intuitive understanding of that exponential expression, and what “positive” and “negative” frequencies are? If not this was all answered in other posts here we can link you to as I think starting with that will really help you and is already answered. Dec 25 '19 at 23:26\n• @Dan Boschen, I'm familiar with the identities, but don't have an intuitive understanding of what a negative frequency would mean in the time domain. The phasor would rotate in the opposite direction I suppose, but I don't know what it means in the time domain.\n– axk\nDec 26 '19 at 8:26\n• The \"rotating phasor\" is in the time domain (the constant magnitude phasor rotating at a constant rate is just a single impulse in the frequency domain). Look at this answer by @endolith where he shows the spinning phasor, and the sine and cosine components: dsp.stackexchange.com/questions/431/…. So a positive frequency is cosine + j sine and a negative frequency is cosine - j sine Dec 26 '19 at 12:55\n• And here are other related posts that may further help: dsp.stackexchange.com/questions/52826/… and dsp.stackexchange.com/questions/31355/… Dec 26 '19 at 13:02\n• Yes that makes sense to say. Also note that we can represent that signal at baseband (carrier =0) as long as we use the complex signal representation meaning we need I and Q (or magnitude and phase) for each sample to represent it. For linear processes it doesn’t matter then what carrier the signal is at. Multiplying by $e^{j\\omega_c t}$ just shifts it to a new frequency. Did any other the answers below meet your question? If so best to select the one that did to close this out. Dec 26 '19 at 18:42\n\n$$r(t)=I(t)\\cos(2\\pi f_ct+\\phi)-Q(t)\\sin(2\\pi f_ct+\\phi)\\tag{1}$$\nwhere $$I(t)$$ and $$Q(t)$$ are the in-phase and quadrature components, respectively. After demodulation you're left with the two signals $$I(t)$$ and $$Q(t)$$, and if you like you can combine them into a complex-valued baseband signal $$x(t)=I(t)+jQ(t)$$." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9488137,"math_prob":0.96781754,"size":982,"snap":"2021-43-2021-49","text_gpt3_token_len":195,"char_repetition_ratio":0.1380368,"word_repetition_ratio":0.0,"special_character_ratio":0.1904277,"punctuation_ratio":0.05,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99763936,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-17T22:48:23Z\",\"WARC-Record-ID\":\"<urn:uuid:f13a50f4-fcfa-435b-8254-859efa38143d>\",\"Content-Length\":\"187548\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:78b40bd4-1c69-4cb4-873d-2c6def296a77>\",\"WARC-Concurrent-To\":\"<urn:uuid:d74cf14b-fe7e-4e3d-89d4-91e39d0200fd>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://dsp.stackexchange.com/questions/62859/intuitive-meaning-of-complex-time-domain-signal-representation-quadrature-sampl\",\"WARC-Payload-Digest\":\"sha1:PJ7T7K6Y4F7EOODLRGQMHRJQONUDRZQY\",\"WARC-Block-Digest\":\"sha1:IDPHO3YZQONPDPFMVH2ZAYANBNRVSIMW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585183.47_warc_CC-MAIN-20211017210244-20211018000244-00298.warc.gz\"}"}
https://git.matthewbutterick.com/mbutterick/aoc-racket/src/branch/master/day11.rkt
[ "mbutterick\n/\naoc-racket\nArchived\nYou cannot select more than 25 topics Topics must start with a letter or number, can include dashes ('-') and can be up to 35 characters long.\nThis repo is archived. You can view files and clone it, but cannot push or open issues/pull-requests.\n\n#### 103 lines 4.2 KiB Racket Raw Permalink Blame History Unescape Escape\n\nThis file contains invisible Unicode characters!\n\nThis file contains invisible Unicode characters that may be processed differently from what appears below. If your use case is intentional and legitimate, you can safely ignore this warning. Use the Escape button to reveal hidden characters.\n\n ```#lang scribble/lp2 ``` ```@(require scribble/manual aoc-racket/helper) ``` ``` ``` ```@aoc-title ``` ``` ``` ```@defmodule[aoc-racket/day11] ``` ``` ``` ```@link[\"http://adventofcode.com/day/11\"]{The puzzle}. Our @link-rp[\"day11-input.txt\"]{input} is a short alphabetic key that represents a password. ``` ``` ``` ```@chunk[ ``` ``` ``` ``` ``` ``` ``` ``` ] ``` ``` ``` ```@isection{What's the next password that meets the criteria?} ``` ``` ``` ```Though the password is alphabetic, we can increment it as we would a numerical password, by changing the rightmost letter to the next letter (for instance @litchar{x} to @litchar{y}, @litchar{y} to @litchar{z}). When we reach @litchar{z}, we roll over to @litchar{a}, and ``carry over'' the surplus by incrementing the letter to the left. ``` ``` ``` ```Furthermore, like @secref{Day_5}, the puzzle provides certain criteria that must be met: ``` ``` ``` ```@itemlist[ ``` ``` @item{The password must have a sequence of three consecutive letters (like @litchar{bcd}).} ``` ``` ``` ``` @item{The password may not contain @litchar{i}, @litchar{o}, or @litchar{l}.} ``` ``` ``` ``` @item{The password must contain two different, non-overlapping pairs of letters.} ``` ``` ] ``` ``` ``` ```As in @secref{Day_5}, we'll use @iracket[regexp-match] to implement tests for these conditions. We'll also use @iracket[regexp-replace*] to build the function that increments a password alphabetically. Then it's a simple matter of looking at passwords until we find one that works. ``` ``` ``` ```The @racket[increment-password] function works by using the observation that if the password ends in any number of @litchar{z}s, you have to roll them over to @litchar{a} and increment the letter to the left. Otherwise, you can just increment the last letter — which is actually the same rule, but with zero @litchar{z}s. This logic can all be captured in one regular expression — @racket[#rx\"^(.*?)(.)(z*)\\$\"]. ``` ``` ``` ```The @racket[three-consecutive-letters?] test works by converting the letters to numbers and creating a list of the differences betweeen adjacent values. Any three consecutive letters will differ by value of @racket. So if the list of differences contains the subsequence @racket['(1 1)], then the string has three consecutive letters. ``` ``` ``` ``` ``` ```@chunk[ ``` ``` (require racket rackunit) ``` ``` (provide (all-defined-out)) ``` ``` ``` ``` (define (increment-password password) ``` ``` (define (increment-letter c) ``` ``` ((compose1 ~a integer->char add1 char->integer car string->list) c)) ``` ``` ``` ``` (match-define (list _ prefix letter-to-increment trailing-zs) ``` ``` (regexp-match #rx\"^(.*?)(.)(z*)\\$\" password)) ``` ``` ``` ``` (string-append* (list prefix (increment-letter letter-to-increment) ``` ``` (regexp-replace* #rx\"z\" trailing-zs \"a\")))) ``` ``` ``` ``` (define (three-consecutive-letters? str) ``` ``` (define ints (map char->integer (string->list str))) ``` ``` (let loop ([differences (map - (cdr ints) (drop-right ints 1))]) ``` ``` (if (empty? differences) ``` ``` #f ``` ``` (or (list-prefix? '(1 1) differences) (loop (cdr differences)))))) ``` ``` ``` ``` (define (no-iol? str) ``` ``` (not (regexp-match #rx\"[iol]\" str))) ``` ``` ``` ``` (define (two-nonoverlapping-doubles? str) ``` ``` (regexp-match #px\"(\\\\w)\\\\1.*?(\\\\w)\\\\2\" str)) ``` ``` ``` ``` (define (valid? password) ``` ``` (and (three-consecutive-letters? password) ``` ``` (no-iol? password) ``` ``` (two-nonoverlapping-doubles? password))) ``` ``` ``` ``` (define (find-next-valid-password starting-password) ``` ``` (define candidate-pw (increment-password starting-password)) ``` ``` (if (valid? candidate-pw) ``` ``` candidate-pw ``` ``` (find-next-valid-password candidate-pw))) ``` ``` ``` ``` ] ``` ``` ``` ```@chunk[ ``` ``` ``` ``` (define (q1 input-key) ``` ``` (find-next-valid-password input-key))] ``` ``` ``` ``` ``` ``` ``` ```@section{What's the next valid password after that?} ``` ``` ``` ```We take the answer to question 1 and use it as input to the same function. ``` ``` ``` ```@chunk[ ``` ``` ``` ``` (define (q2 input-key) ``` ``` (find-next-valid-password (q1 input-key))) ] ``` ``` ``` ``` ``` ```@section{Testing Day 11} ``` ``` ``` ```@chunk[ ``` ``` (module+ test ``` ``` (define input-key (file->string \"day11-input.txt\")) ``` ``` (check-equal? (q1 input-key) \"hxbxxyzz\") ``` ``` (check-equal? (q2 input-key) \"hxcaabcc\"))] ``` ``` ``` ``` ```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6363785,"math_prob":0.62860495,"size":4360,"snap":"2023-40-2023-50","text_gpt3_token_len":1328,"char_repetition_ratio":0.24311295,"word_repetition_ratio":0.14209116,"special_character_ratio":0.37431192,"punctuation_ratio":0.093457945,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96591336,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-23T03:50:04Z\",\"WARC-Record-ID\":\"<urn:uuid:ef365745-81e0-46ce-a22e-d361a4d04df9>\",\"Content-Length\":\"82351\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ff70605d-2a58-44b4-bb91-6dfc519abdda>\",\"WARC-Concurrent-To\":\"<urn:uuid:6f4ef950-221a-48c4-9d87-6df2c9e55a8b>\",\"WARC-IP-Address\":\"192.241.210.178\",\"WARC-Target-URI\":\"https://git.matthewbutterick.com/mbutterick/aoc-racket/src/branch/master/day11.rkt\",\"WARC-Payload-Digest\":\"sha1:PAYB27CCEJPDGX5ZDFUPHL5BZ7OZ5GFK\",\"WARC-Block-Digest\":\"sha1:ZQ5L66XT4QDEOCBSXEUMV6OXCGVCIEV6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506479.32_warc_CC-MAIN-20230923030601-20230923060601-00523.warc.gz\"}"}
https://www.owatonnautilities.com/residential-customers/billing-customer-service/billing-units/
[ "### Units of Measurements\n\nTherms and kilowatt hours are basic units of gas and electricity. They are used on your bill to show how much gas and electricity you have used.\n\nCCF (Water) – Your water meter measures the volume of water you use in hundreds of cubic feet (CCF). One CCF of water equals approximately 748 gallons. One CF equals 7.48 gallons.\n\nCCF (Natural Gas) – Your gas meter measures the volume of natural gas you use in hundreds of cubic feet (CCF). The difference between prior and present meter readings is your usage in hundreds of cubic feet. (CCF or therms).  One CCF of natural gas equals approximately 100,000 btu’s.\n\nKilowatt-hours – The basic unit of electric power is the watt. Because a watt is small, a unit called a kilowatt is used for measurement. One kilowatt is 1,000 watts.\n\nThe amount of electricity recorded by your meter reflects the wattage rating of lights and appliances in your home and the length of time you use them. This is expressed in kilowatt hours (kWh). For instance, a small portable heater rated at 1,000 watts and operated for one hour would use one kWh; so would ten 100-watt light bulbs.\n\nThe difference between prior and present meter readings equals the number of kWh you used.\n\n### Energy Basics\n\nBelow are some basic facts about the terminology that describes the different types and amounts of energy you use.\n\n• Amps x Volts = Watts\n• kW x Hrs = kWh\n• 1000 Watts = kW\n• 1Btu = the heat needed to raise 1 pound of water 1 degree Fahrenheit (1 btu./lb. x degree F).\n\nConversion Factors:\n\n• 100 cubic feet = 1 CCF\n• One CCF = 100,000 Btus\n• One therm = 100,000 Btus\n• One kWh = 1,000 Watts = 3,413 Btus\n• 100,000 Btus/therm divided by 3,413 Btus/kWh = 29.3 kWhs/therm or CCF\n• One gallon of water weighs 8.33 Lbs.\n• One cubic foot of water = 7.48 gallons, 1 CCF = 748 gallons\n• One hundred cubic (CCF) feet of water = 748 gallons\n\n### Other Term Definitions\n\nECI – ENERGY CONSERVATION INVESTMENT.  The 2001 Minnesota omnibus bill requires Owatonna Public Utilities (OPU) to invest 1.5% of our gross electric revenue and 1.5% of our gross natural gas revenue in energy conservation programs each year. This amount is built into the rates collected from all OPU customers and is used for approved energy conservation programs for our customers. Any unused funds at year end are turned over to the State." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8870032,"math_prob":0.9823872,"size":2331,"snap":"2023-14-2023-23","text_gpt3_token_len":611,"char_repetition_ratio":0.10270735,"word_repetition_ratio":0.047732696,"special_character_ratio":0.26254827,"punctuation_ratio":0.097613886,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9906926,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-03T14:31:10Z\",\"WARC-Record-ID\":\"<urn:uuid:52ebee6a-f18d-4289-9bea-b3c90c70a3df>\",\"Content-Length\":\"125919\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7ef569c4-c7f9-4a4c-b2e1-cc68e686e6f1>\",\"WARC-Concurrent-To\":\"<urn:uuid:33f36e08-c117-47a9-b489-5752da021d41>\",\"WARC-IP-Address\":\"3.141.115.141\",\"WARC-Target-URI\":\"https://www.owatonnautilities.com/residential-customers/billing-customer-service/billing-units/\",\"WARC-Payload-Digest\":\"sha1:U6P5BH6734NCOV5V35INXN2B56LS5QZY\",\"WARC-Block-Digest\":\"sha1:3BZCKVBKPYW2EVEVBIYSERCK26POYASQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224649293.44_warc_CC-MAIN-20230603133129-20230603163129-00288.warc.gz\"}"}
https://ouncetocup.com/cups-to-ml/
[ "# Cups to Milliliters (cup to ml conversion)\n\nAre you cooking your favorite dish? The detailed chart in the recipe includes the calculation of 1 cups to ml conversion? Do not worry; check this conversion tool to find how many 1 cups equal to ml in a minute. This 1 cup to ml converter gives an exact measurement for any recipe you prepare.\n\nCup Value:\n\nCups\n\nMilliliter Value:\n\nml\n\n1 Cup = 236.588 Milliliter\n(1 c = 236.588 ml)\n\nTry our auto 1 cup to milliliter calculator (Without Convert Button), Just change the first field value and you got final value.", null, "## How many ml is a cup?\n\nWe know that the volume value of 1 c is equal to 236.588 ml. If you want to convert 1 cup to an equal number of ml, just multiply the volume value by 236.588. Hence, 1 Cup is equal to 236.588 ml.\n\nThe Answer is: 1 US Cups = 236.588 US ml\n\n1 c = 236.588 ml\n\nMany of them try to search or find an answer for what is 1 cups in ml? So, we’ll start with 1 c to ml conversion to know how big is 1 c.\n\n## How To Calculate 1 cup to ml?\n\nTo calculate 1 cups to an equal number of milliliter, simply follow the steps below.\n\nFluid Cups to Milliliters formula is:\n\nMilliliter = Fluid Cup * 236.588\n\nAssume that we are finding out how many ml were found in 1 fl c of water, multiply by 236.588 to get the result.\n\nApplying to Formula: ml = 1 c * 236.588 = 236.588 ml.\n\n## How To Convert c to ml?\n\n• To convert fluid cups to ml,\n• Simply multiply the us cup value by 236.588.\n• Applying to the formula, ml = 1 cups * 236.588 [1x236.588].\n• Hence, 1 cups is equal to 236.588 ml.\n\n## Some quick table references for cups to milliliter conversions:\n\nCup [c]Milliliter [ml]\n1 cup236.588 ml\n2 cup473.176 ml\n3 cup709.764 ml\n4 cup946.352 ml\n5 cup1182.94 ml\n6 cup1419.528 ml\n7 cup1656.116 ml\n8 cup1892.704 ml\n9 cup2129.292 ml\n10 cup2365.88 ml\n11 cup2602.468 ml\n12 cup2839.056 ml\n13 cup3075.644 ml\n14 cup3312.232 ml\n15 cup3548.82 ml\n\n## Reverse Calculation: How many cups are in a ml?\n\n• To convert 1 ml to c,\n• Simply divide the 1 ml by 236.6.\n• Then, applying the formula, cup = 1 ml / 236.6 [1/236.6 = 0.00422675].\n• Hence, 1 ml is equal to 0.00422675 c.\n\n### Related Converter:\n\nFormula: Cup to ml\n\nml = cup * 236.588\n\nApplying to Formula,\n\nml = 1*236.588 = 236.588\n\n1 c = 236.588 ml" ]
[ null, "https://ouncetocup.com/images/cupstoml.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8265588,"math_prob":0.9909104,"size":1980,"snap":"2023-40-2023-50","text_gpt3_token_len":669,"char_repetition_ratio":0.16700405,"word_repetition_ratio":0.0125,"special_character_ratio":0.410101,"punctuation_ratio":0.15524194,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997272,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-26T05:09:53Z\",\"WARC-Record-ID\":\"<urn:uuid:8994431a-e382-4cc0-a269-bb92d3e383c2>\",\"Content-Length\":\"16711\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b8fe4be2-2ab9-4d28-b78f-fc7be176dc6e>\",\"WARC-Concurrent-To\":\"<urn:uuid:907d9ab9-eada-4b8d-a6fc-d7b1dbed8cdf>\",\"WARC-IP-Address\":\"104.21.52.158\",\"WARC-Target-URI\":\"https://ouncetocup.com/cups-to-ml/\",\"WARC-Payload-Digest\":\"sha1:4PZZILP54C3WFSS3NWRBYCMJSCIHITCN\",\"WARC-Block-Digest\":\"sha1:PMNP6FWJ2FCBDBZI2OXSM2CRWWYWLBEK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510149.21_warc_CC-MAIN-20230926043538-20230926073538-00379.warc.gz\"}"}
https://exceltutorialworld.com/unit-conversion-in-excel/
[ "# Unit Conversion In Excel\n\nLet’s see how to do unit conversion in excel.\n\nWe will see how we could leverage a simple convert function in excel to do unit conversion.\n\nConvert function is very useful in eliminating any extra multiplication and division operation associated with unit conversion in excel.\n\nConvert(number,from_unit,to_unit)\n\nSuppose we want to convert the number in A2 from meters to feet.\n\nYou will observe a drop down when you enter the “from unit” and “to unit” argument in the formula.", null, "", null, "", null, "Hit enter to get the desired result.\n\nHope this helped.\n\nShare The Knowledge\n•", null, "" ]
[ null, "https://exceltutorialworld.com/wp-content/uploads/2016/07/Capture-58.png", null, "https://exceltutorialworld.com/wp-content/uploads/2016/07/Capture-59.png", null, "https://exceltutorialworld.com/wp-content/uploads/2016/07/Capture-60.png", null, "https://secure.gravatar.com/avatar/e1686a40d329d0c90607ac4bd7b54a4b", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8203938,"math_prob":0.61885536,"size":527,"snap":"2023-40-2023-50","text_gpt3_token_len":109,"char_repetition_ratio":0.14340344,"word_repetition_ratio":0.0,"special_character_ratio":0.20113853,"punctuation_ratio":0.089108914,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9824096,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,1,null,1,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-02T17:47:29Z\",\"WARC-Record-ID\":\"<urn:uuid:3a9ccea9-d282-4551-abd0-6903844d2079>\",\"Content-Length\":\"104610\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:99addf70-6337-47fb-b7dc-f3ce0cb87a31>\",\"WARC-Concurrent-To\":\"<urn:uuid:fe1509d7-9c77-4ce2-896a-587cb9f342ff>\",\"WARC-IP-Address\":\"158.106.136.250\",\"WARC-Target-URI\":\"https://exceltutorialworld.com/unit-conversion-in-excel/\",\"WARC-Payload-Digest\":\"sha1:K6FLFRMVVRITOPQM6QMM7ZZO5IRKNRMY\",\"WARC-Block-Digest\":\"sha1:C4W5GFQCKAMWZAKUC2MMMRTWU5DZI6CW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100448.65_warc_CC-MAIN-20231202172159-20231202202159-00215.warc.gz\"}"}
http://compfight.com/search/cape-town-office/1-0-1-1
[ "Professional Stock Photos from \\$1", null, "", null, "5070 x 3380", null, "x", null, "x", null, "x", null, "x", null, "", null, "", null, "x", null, "x", null, "x", null, "x", null, "x", null, "", null, "x", null, "x", null, "x", null, "x", null, "x", null, "x", null, "x", null, "x", null, "x", null, "x", null, "x", null, "x", null, "x", null, "x", null, "x", null, "x", null, "x", null, "x", null, "x", null, "x", null, "x", null, "x", null, "x", null, "x", null, "x", null, "x", null, "x", null, "1024 x 681", null, "1984 x 1488", null, "x", null, "x", null, "x", null, "x", null, "x", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "x", null, "x", null, "x", null, "x", null, "x", null, "x", null, "x", null, "x", null, "x\n\nMore images", null, "© 2010 Compfight  |  We make good use of the flickr™ API, but aren't affiliated with flickr.\n\n>" ]
[ null, "https://static.flickr.com/65535/49533435778_21894768ae_t.jpg", null, "https://static.flickr.com/4792/25825058847_ae62493816_t.jpg", null, "https://static.flickr.com/2711/5835535230_14de91f045_t.jpg", null, "https://static.flickr.com/3471/5835535052_0805665540_t.jpg", null, "https://static.flickr.com/3110/5835534872_1f28039911_t.jpg", null, "https://static.flickr.com/2455/5835534694_660e9da76c_t.jpg", null, "https://static.flickr.com/48/134168977_81ad873e05_t.jpg", null, "https://static.flickr.com/65535/49560604126_47d98f5c54_t.jpg", null, "https://static.flickr.com/65535/49491292012_96ef8043eb_t.jpg", null, "https://static.flickr.com/0/49176700338_63cfda2123_t.jpg", null, "https://static.flickr.com/65535/49167347133_ffb8bbaeba_t.jpg", null, "https://static.flickr.com/5656/31154858605_c451d7cc05_t.jpg", null, "https://static.flickr.com/5810/30978595095_94f0c31a42_t.jpg", null, "https://static.flickr.com/1582/25073383035_0c984d27ce_t.jpg", null, "https://static.flickr.com/383/18989828758_39f17dcdf2_t.jpg", null, "https://static.flickr.com/8780/16909146898_9c92d1346e_t.jpg", null, "https://static.flickr.com/8745/17095397022_070c4e8a71_t.jpg", null, "https://static.flickr.com/7670/16474464504_b1d86c0059_t.jpg", null, "https://static.flickr.com/6161/6165131925_9f6a987931_t.jpg", null, "https://static.flickr.com/2594/5851838637_7c0d5ffb13_t.jpg", null, "https://static.flickr.com/2713/5852389006_edd122fd6d_t.jpg", null, "https://static.flickr.com/4131/5128418734_99a7b850ae_t.jpg", null, "https://static.flickr.com/1254/5128418732_745de38308_t.jpg", null, "https://static.flickr.com/4013/5128418722_9342f4ca39_t.jpg", null, "https://static.flickr.com/1147/5128418728_b4de2f4c5e_t.jpg", null, "https://static.flickr.com/1259/5104039515_6af7b64323_t.jpg", null, "https://static.flickr.com/1098/5104633594_9e51041d1c_t.jpg", null, "https://static.flickr.com/1045/5104037677_640e7168d2_t.jpg", null, "https://static.flickr.com/4110/5104036917_f3a556a6b3_t.jpg", null, "https://static.flickr.com/4110/5104631132_cb996cf6ee_t.jpg", null, "https://static.flickr.com/1182/5104630092_58b402dd7f_t.jpg", null, "https://static.flickr.com/4086/5082094454_7b40f844a9_t.jpg", null, "https://static.flickr.com/4090/5044948579_fa3a0f8d34_t.jpg", null, "https://static.flickr.com/4082/4856632836_6861ce66e1_t.jpg", null, "https://static.flickr.com/4053/4699718867_4055a0ea11_t.jpg", null, "https://static.flickr.com/4019/4626033571_a473ce92e6_t.jpg", null, "https://static.flickr.com/3381/4618134337_55ca5bceae_t.jpg", null, "https://static.flickr.com/3322/4618134331_a74281db37_t.jpg", null, "https://static.flickr.com/4060/4546230288_2781004e4f_t.jpg", null, "https://static.flickr.com/4007/4490143228_ac893b4a94_t.jpg", null, "https://static.flickr.com/3038/2810123017_30482e68e2_t.jpg", null, "https://static.flickr.com/3057/2315760259_3c1fa2a1e6_t.jpg", null, "https://static.flickr.com/3201/2299251630_51a975fa7f_t.jpg", null, "https://static.flickr.com/224/471465155_241b47b07c_t.jpg", null, "https://static.flickr.com/191/471465147_21540234d5_t.jpg", null, "https://static.flickr.com/220/471465129_9b7b0b1554_t.jpg", null, "https://static.flickr.com/197/471428628_f1f7b5a277_t.jpg", null, "https://static.flickr.com/96/229842189_a74e718e09_t.jpg", null, "https://static.flickr.com/53/134171182_5ee9a22771_t.jpg", null, "https://static.flickr.com/52/134171175_ab1bbc0f88_t.jpg", null, "https://static.flickr.com/55/134171172_fb630ec261_t.jpg", null, "https://static.flickr.com/45/134171169_4f1799a6b5_t.jpg", null, "https://static.flickr.com/45/134171171_c02ac352fc_t.jpg", null, "https://static.flickr.com/52/134171165_c3f11fca9d_t.jpg", null, "https://static.flickr.com/55/134168978_23a508acbd_t.jpg", null, "https://static.flickr.com/50/134168980_ee6e2881b4_t.jpg", null, "https://static.flickr.com/46/134168975_13424440f2_t.jpg", null, "https://static.flickr.com/45/134168971_6d5e706b1c_t.jpg", null, "https://static.flickr.com/49/134168973_84ba94d8b7_t.jpg", null, "https://static.flickr.com/50/134167218_c57f898de8_t.jpg", null, "https://static.flickr.com/44/134167223_fa528c51fc_t.jpg", null, "https://static.flickr.com/47/134167217_9bf2a183d6_t.jpg", null, "https://static.flickr.com/55/134167220_e0586c676a_t.jpg", null, "https://static.flickr.com/54/134167224_c7807378cf_t.jpg", null, "https://static.flickr.com/49/134167216_d42db45784_t.jpg", null, "https://static.flickr.com/47/106926839_ff4eb8cce3_t.jpg", null, "https://static.flickr.com/44/106926838_9df15adc86_t.jpg", null, "https://static.flickr.com/40/106926842_b68d7d6218_t.jpg", null, "https://static.flickr.com/50/106924440_9208702e3f_t.jpg", null, "https://static.flickr.com/52/106924439_2069bc7502_t.jpg", null, "https://static.flickr.com/32/39586299_43caf48394_t.jpg", null, "https://static.flickr.com/30/39586298_f2943649f9_t.jpg", null, "https://static.flickr.com/28/39586296_3baf0d7d5b_t.jpg", null, "https://static.flickr.com/22/39586297_ebb59a7b8c_t.jpg", null, "http://compfight.com/images/cf_arrow_dn.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.55834717,"math_prob":1.00001,"size":388,"snap":"2020-24-2020-29","text_gpt3_token_len":184,"char_repetition_ratio":0.3359375,"word_repetition_ratio":0.44210526,"special_character_ratio":0.3273196,"punctuation_ratio":0.01,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9691944,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-05T04:15:17Z\",\"WARC-Record-ID\":\"<urn:uuid:2d8ff0c6-2839-459a-bea5-215d88f0f884>\",\"Content-Length\":\"52728\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0a8c611d-be43-4ef9-b89d-13fe27fb56aa>\",\"WARC-Concurrent-To\":\"<urn:uuid:3145ba9b-7453-4e76-b329-9092f47861bd>\",\"WARC-IP-Address\":\"54.162.127.159\",\"WARC-Target-URI\":\"http://compfight.com/search/cape-town-office/1-0-1-1\",\"WARC-Payload-Digest\":\"sha1:T3GVGIBCS6IFMB5MLT2RADQJ4FD7N2U7\",\"WARC-Block-Digest\":\"sha1:75PPDJUD2UBXTBIR4WJMQIBAIHVXD356\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590348492427.71_warc_CC-MAIN-20200605014501-20200605044501-00093.warc.gz\"}"}
https://www.statisticshowto.com/absolute-error/
[ "", null, "# Absolute Error & Mean Absolute Error (MAE)\n\nShare on\n\nMeasurement Error > Absolute Error\n\n## What is Absolute Error?\n\nAbsolute Error is the amount of error in your measurements. It is the difference between the measured value and “true” value. For example, if a scale states 90 pounds but you know your true weight is 89 pounds, then the scale has an absolute error of 90 lbs – 89 lbs = 1 lbs.\n\nThis can be caused by your scale not measuring the exact amount you are trying to measure. For example, your scale may be accurate to the nearest pound. If you weigh 89.6 lbs, the scale may “round up” and give you 90 lbs. In this case the absolute error is 90 lbs – 89.6 lbs = .4 lbs.\n\n## Formula\n\nThe formula for the absolute error (Δx) is:\n\n(Δx) = xi – x,\n\nWhere:\n\n• xi is the measurement,\n• x is the true value.\n\nUsing the first weight example above, the absolute error formula gives the same result:\n(Δx) = 90 lbs – 89 lbs = 1 lb.\n\nSometimes you’ll see the formula written with the absolute value symbol (these bars: | |). This is often used when you’re dealing with multiple measurements:\n\n(Δx) = |xi – x|,\n\nThe absolute value symbol is needed because sometimes the measurement will be smaller, giving a negative number. For example, if the scale measured 89 lbs and the true value was 95 lbs then you would have a difference of 89 lbs – 95 lbs = -6 lbs. On its own, a negative value is fine (-6 just means “six units below”) but the problem comes when you’re trying to add several values, some of which are positive and some are negative. For example, let’s say you have:\n\n• 89 lbs – 95 lbs = -6 lbs and\n• 98 lbs – 92 lbs = 6 lbs\n\nOn their own, both measurements have absolute errors of 6 lbs. If you add them together, you should get a total of 12 lbs of error, but because of that negative sign you’ll actually get -6 lbs + 6 lbs = 0 lbs, which makes no sense at all — after all, there was a pretty big error (12 lbs) which has somehow become 0 lbs of error. We can solve this by taking the absolute value of the results and then adding:\n|-6 lbs| + |6 lbs| = 12 lbs.\n\nNeed help with a homework question? Check out our tutoring page!\n\n## Absolute Accuracy Error\n\nAbsolute error is also called Absolute Accuracy Error. You might see the formula written this way:\n\nE = xexperimental – xtrue.\n\nThe formula is the exact same thing, just with different names. “xexperimental” is the measurement you take and xtrue is the true measurement.\n\n## Mean Absolute Error\n\nThe Mean Absolute Error(MAE) is the average of all absolute errors. The formula is:", null, "Where:\n\n• n = the number of errors,\n• Σ = summation symbol (which means “add them all up”),\n• |xi – x| = the absolute errors.\n\nThe formula may look a little daunting, but the steps are easy:\n\n1. Find all of your absolute errors, xi – x.\n3. Divide by the number of errors. For example, if you had 10 measurements, divide by 10.\n\n## Absolute Precision Error\n\nAbsolute precision error is something completely different: it is the standard deviation of a set of measurements, given by the following formula:", null, "Where:\n\n## References\n\nKotz, S.; et al., eds. (2006), Encyclopedia of Statistical Sciences, Wiley.\nEveritt, B. S.; Skrondal, A. (2010), The Cambridge Dictionary of Statistics, Cambridge University Press.\n\nCITE THIS AS:\nStephanie Glen. \"Absolute Error & Mean Absolute Error (MAE)\" From StatisticsHowTo.com: Elementary Statistics for the rest of us! https://www.statisticshowto.com/absolute-error/\n------------------------------------------------------------------------------\n\nNeed help with a homework or test question? With Chegg Study, you can get step-by-step solutions to your questions from an expert in the field. Your first 30 minutes with a Chegg tutor is free!" ]
[ null, "https://www.statisticshowto.com/wp-content/uploads/2013/10/cropped-banner-21.jpg", null, "https://www.statisticshowto.com/wp-content/uploads/2016/10/MAE.png", null, "https://www.statisticshowto.com/wp-content/uploads/2016/10/absolute-error.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8954211,"math_prob":0.88763255,"size":3677,"snap":"2021-31-2021-39","text_gpt3_token_len":904,"char_repetition_ratio":0.15654778,"word_repetition_ratio":0.020833334,"special_character_ratio":0.26081043,"punctuation_ratio":0.14266843,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9949373,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,6,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-23T05:15:52Z\",\"WARC-Record-ID\":\"<urn:uuid:8eebd93c-c77d-437d-a359-fd0e6915609b>\",\"Content-Length\":\"56977\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f21b404f-633a-4235-88e3-27b1c2a90913>\",\"WARC-Concurrent-To\":\"<urn:uuid:a95cd16d-0345-4027-8f98-9fa20133e0de>\",\"WARC-IP-Address\":\"172.66.40.136\",\"WARC-Target-URI\":\"https://www.statisticshowto.com/absolute-error/\",\"WARC-Payload-Digest\":\"sha1:R5XRQNKSTFADI7C3WHDEALT7VW3JNLPU\",\"WARC-Block-Digest\":\"sha1:6Z3QSOLFVHV762ZCADJR6F2ZJXTT4RVA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057417.10_warc_CC-MAIN-20210923044248-20210923074248-00082.warc.gz\"}"}
http://worldresearchlibrary.org/abstract.php?pdf_id=937
[ "Paper Title\nLmi Lyapunov Based Ts Fuzzy Modeling And Controller Synthesis For A Nonlinear Ball And Beam System\n\nAbstract\nTakagi-Sugeno (TS) fuzzy modeling and control techniques are applied to the classical nonlinear Ball and Beam problem. The nonlinear model is segmented to different local linear models. Local controllers are hence designed using LMI theory for achieving a robust control behavior for ball position. TS fuzzy modeling is applied to generate suitable fuzzy models, and the associated controllers. Using fuzzy if-then, a model knowledge based fuzzy control is then designed to regulate the original system response and behavior while relying on LMI-Lyapunov stability synthesis technique. That was achieved while interfacing the Ball and Beam system with fuzzy Gain Scheduling mechanism coding. For validating such control methodology, fuzzy control system has been synthesized and implemented practically. Keywords- Takagi-Sugeno (TS) Fuzzy Modeling, LMI, Lyapunov Stability, Nonlinear Dynamics." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8724845,"math_prob":0.47086024,"size":1013,"snap":"2020-24-2020-29","text_gpt3_token_len":199,"char_repetition_ratio":0.13875124,"word_repetition_ratio":0.0,"special_character_ratio":0.1638697,"punctuation_ratio":0.086419754,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96170527,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-07T19:47:06Z\",\"WARC-Record-ID\":\"<urn:uuid:33c67411-5b1e-41db-855c-67e69d11986b>\",\"Content-Length\":\"1493\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4ec4d3d1-041c-4308-9a4f-f38ccd04d8c8>\",\"WARC-Concurrent-To\":\"<urn:uuid:d573d1e7-bc2f-4725-8745-460ecf20d293>\",\"WARC-IP-Address\":\"104.211.115.6\",\"WARC-Target-URI\":\"http://worldresearchlibrary.org/abstract.php?pdf_id=937\",\"WARC-Payload-Digest\":\"sha1:4WPY7QYOFS6E44XPQOYJCWW7MSCQVNB4\",\"WARC-Block-Digest\":\"sha1:VWRB7LWLRPHBBQFIPHJCCHWKKBPOI7FY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655894904.17_warc_CC-MAIN-20200707173839-20200707203839-00169.warc.gz\"}"}
https://www.edaboard.com/threads/hilbert-filter-with-matlab.158593/
[ "# Hilbert filter with Matlab\n\nStatus\nNot open for further replies.\n\n#### telosa\n\n##### Newbie level 1", null, "hilbert filter matlab\n\nI designed a filter hilbert with Matlab (function fdesign.hilbert and design(,'firls'). I tested it with a cosine but the results are very stranges: for my sampling frequency 100 Mhz if I try with a cosine of an interger frequency (1,2,3 .. 49) Mhz the result is ok, hilbert filter returns the 90 degree shift signal of my cosine. But if I try with other frequencies (20,15 or 36, 34) filter doesn't work.\n\nHave anybody any explanation?\n\nThank you very much\n\n#### kumarravi0086\n\n##### Newbie level 4", null, "Re: hilbert filter matlab\n\nI designed a filter hilbert with Matlab (function fdesign.hilbert and design(,'firls'). I tested it with a cosine but the results are very stranges: for my sampling frequency 100 Mhz if I try with a cosine of an interger frequency (1,2,3 .. 49) Mhz the result is ok, hilbert filter returns the 90 degree shift signal of my cosine. But if I try with other frequencies (20,15 or 36, 34) filter doesn't work.\n\nHave anybody any explanation?\n\nThank you very much\n\ni have same type of problem if u have the solution then please help me its urgent\n\n---------- Post added at 11:51 ---------- Previous post was at 11:45 ----------\n\nI designed a filter hilbert with Matlab (function fdesign.hilbert and design(,'firls'). I tested it with a cosine but the results are very stranges: for my sampling frequency 100 Mhz if I try with a cosine of an interger frequency (1,2,3 .. 49) Mhz the result is ok, hilbert filter returns the 90 degree shift signal of my cosine. But if I try with other frequencies (20,15 or 36, 34) filter doesn't work.\n\nHave anybody any explanation?\n\nThank you very much\n\ni have same type of problem if u have the solution then please help me its urgent\n\nStatus\nNot open for further replies." ]
[ null, "https://www.edaboard.com/styles/images/ranks/alevel1.gif", null, "https://www.edaboard.com/styles/images/ranks/blevel1.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8656918,"math_prob":0.58120006,"size":955,"snap":"2022-27-2022-33","text_gpt3_token_len":253,"char_repetition_ratio":0.13880126,"word_repetition_ratio":0.9325153,"special_character_ratio":0.26282722,"punctuation_ratio":0.13793103,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9604684,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-16T06:42:30Z\",\"WARC-Record-ID\":\"<urn:uuid:18e36a22-ed6e-4506-84a1-f44951e1335c>\",\"Content-Length\":\"73143\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4fccc4d9-72da-49fe-9f45-f065ce81c7b8>\",\"WARC-Concurrent-To\":\"<urn:uuid:994270b0-9226-44e2-acf1-2a4b1a0c0610>\",\"WARC-IP-Address\":\"67.227.166.80\",\"WARC-Target-URI\":\"https://www.edaboard.com/threads/hilbert-filter-with-matlab.158593/\",\"WARC-Payload-Digest\":\"sha1:2USQLFBUFUKUHPOWU67ILMEZAWY4L6BO\",\"WARC-Block-Digest\":\"sha1:VIEO6OG2LA2LMCFTHGL3JZ7DURSKJCTK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572221.38_warc_CC-MAIN-20220816060335-20220816090335-00473.warc.gz\"}"}
https://sentence.yourdictionary.com/mv
[ "mv", null, "mv\n\nmv Sentence Examples\n\n• Obviously equimolecular surfaces are given by (Mv) 3, where M is the molecular weight of the substance, for equimolecular volumes are Mv, and corresponding surfaces the two-thirds power of this.\n\n• Hence S may be replaced by (Mv) 3.\n\n• Ramsay and Shields found from investigations of the temperature coefficient of the surface energy that Tin the equation y(Mv) 3 = KT must be counted downwards from the critical temperature T less about 6°.\n\n• Their surface energy equation therefore assumes the form y(Mv)i=K(T-6°).\n\n• n is the mean number of molecules which associate to form one molecule, then by the normal equation we have y (Mnv) 3 =2.121(r -6°); if the calculated constant be K 1, then we have also y(Mv)3=K,(r-6°).\n\n• Since the volume is constant, we have the condition mv'--l-(I-m)v\"=constant.\n\n• Since the weights used in conjunction with a balance are really standard masses, the word \"weight\" may be substituted for the word \"mass\" in the preceding definitions; and we may symbolically express the relations thus: - If M be the weight of substance occupying a volume V, then the absolute density O = M/V; and if m, m 1 be the weights of the substance and of the standard substance which occupy the same volume, the relative density or specific gravity S = m/m l; or more generally if m i be the weight of a volume v of the substance, and m l the weight of a volume v l of the standard, then S = mv l /m l v.\n\n• If 2mu 2 denote the mean value of 2mu 2 averaged over the s molecules of the first kind, equations (3) may be written in the form Z mu g = 2 mv 2 = 2 mw 2 = 2x,0 2 1 =.\n\n• We accordingly put 1/2h = RT, where T denotes the temperature on the absolute scale, and then have equations (7) in the form mu 2 = mv '2 = ...\n\n• We may also write ur 1 = I +zu 1+ &c., since z is very small compared with u, and expressing u in terms of w by (25), (we find l 21- mv i fi(z) i I +z(c R w + ' R 2 w) do) = 27rmoti(z) I -f-ZZ (Ki + R2/ This then expresses the work done by the attractive forces when a particle m is brought from an infinite distance to the point P at a distance z from a stratum whose surface-density is a, and whose principal radii of curvature are R 1 and R2.\n\n• St vcTal of der sites La [he /Yorthern part were suggested by the lace MV La Trr/be ...Bateman and some of those Southern part by M' &cm,Ztort H Fulton en Mein report, the RuyalCammuswn.\n\n• actinic tubes; the paddock a 125 MV light.\n\n• The majority of members in the MV accreditation scheme are pedigree breeders who sell quality, healthy breeding stock at premium prices.\n\n• MV - Sizewell - 22nd September A visit to this exposed coastal site on a rather breezy day.\n\n• computes coefficients which require only the correlations between the AR parameters of the specified MV order.\n\n• doughnuteper threw a cream donut across MV's line of sight which allowed him to smother the distracted Aussie's shot.\n\n• fuzzy inference will result in confidence factors (MV's) assigned to each outcome in the rule base.\n\n• guano island The 460 refugees on board MV Tampa dreamed of an idyllic life overlooking Sydney Harbor.\n\n• inference engine will assign the outcome ' credit limit is low ', the maximum MV from all the fired rules.\n\n• Watch out for the MV old stagers - they still turn in a good time.\n\n• strain gages, mV, thermocouples and RTD's.\n\n• In MV they take some of these subjects further and include chapters on the four-colour theorem, Ramsey theory, Catalan numbers and more.\n\n• Obviously equimolecular surfaces are given by (Mv) 3, where M is the molecular weight of the substance, for equimolecular volumes are Mv, and corresponding surfaces the two-thirds power of this.\n\n• Hence S may be replaced by (Mv) 3.\n\n• Ramsay and Shields found from investigations of the temperature coefficient of the surface energy that Tin the equation y(Mv) 3 = KT must be counted downwards from the critical temperature T less about 6°.\n\n• Their surface energy equation therefore assumes the form y(Mv)i=K(T-6°).\n\n• n is the mean number of molecules which associate to form one molecule, then by the normal equation we have y (Mnv) 3 =2.121(r -6°); if the calculated constant be K 1, then we have also y(Mv)3=K,(r-6°).\n\n• Since the volume is constant, we have the condition mv'--l-(I-m)v\"=constant.\n\n• Since the weights used in conjunction with a balance are really standard masses, the word \"weight\" may be substituted for the word \"mass\" in the preceding definitions; and we may symbolically express the relations thus: - If M be the weight of substance occupying a volume V, then the absolute density O = M/V; and if m, m 1 be the weights of the substance and of the standard substance which occupy the same volume, the relative density or specific gravity S = m/m l; or more generally if m i be the weight of a volume v of the substance, and m l the weight of a volume v l of the standard, then S = mv l /m l v.\n\n• If 2mu 2 denote the mean value of 2mu 2 averaged over the s molecules of the first kind, equations (3) may be written in the form Z mu g = 2 mv 2 = 2 mw 2 = 2x,0 2 1 =.\n\n• We accordingly put 1/2h = RT, where T denotes the temperature on the absolute scale, and then have equations (7) in the form mu 2 = mv '2 = ...\n\n• We may also write ur 1 = I +zu 1+ &c., since z is very small compared with u, and expressing u in terms of w by (25), (we find l 21- mv i fi(z) i I +z(c R w + ' R 2 w) do) = 27rmoti(z) I -f-ZZ (Ki + R2/ This then expresses the work done by the attractive forces when a particle m is brought from an infinite distance to the point P at a distance z from a stratum whose surface-density is a, and whose principal radii of curvature are R 1 and R2.\n\n• St vcTal of der sites La [he /Yorthern part were suggested by the lace MV La Trr/be ...Bateman and some of those Southern part by M' &cm,Ztort H Fulton en Mein report, the RuyalCammuswn.\n\n• Watch out for the MV old stagers - they still turn in a good time.\n\n• The OM2 modules interface directly with sensors or analog signals such as strain gages, mV, thermocouples and RTD 's.\n\n• In MV they take some of these subjects further and include chapters on the four-colour theorem, Ramsey theory, Catalan numbers and more.\n\n• She becomes a purple MV Agusta F4 retroSBK motorcycle." ]
[ null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgMCAyMCAyMCI+PHBhdGggZD0iTTExLjY0MiAxNy43bC02LjIxNi0zLjc3NlY2LjEzbDYuMjE2LTMuNzc2VjE3Ljd6TTEuNjY2IDYuMzg3aDIuMDk4djcuMTZIMS42NjZ2LTcuMTZ6TTEyLjg4NC4xNDdjLS4yNi0uMTQ3LS41ODItLjE0My0uODQuMDEyTDQuNTM4IDQuNzJILjgzMmEuODMuODMgMCAwIDAtLjgzMS44MzF2OC44MjdhLjgzLjgzIDAgMCAwIC44MzEuODMxaDMuNTI4bDcuNjgyIDQuNjY4Yy4yNTcuMTU1LjU3OC4xNjMuODQuMDEyYS44My44MyAwIDAgMCAuNDI0LS43MjVWLjg3MmEuODIuODIgMCAwIDAtLjQyLS43MjV6bTUuMjIyIDQuNDc2YS44My44MyAwIDEgMC0xLjI3OSAxLjA2N2MxLjk4OCAyLjQgMS45ODggNi4yNzcgMCA4LjY2OC0uMjkzLjM1NC0uMjQ0Ljg3Ni4xMDYgMS4xNzMuMTU1LjEzLjc5NC4zNDYgMS4xNzMtLjEgMi40NjQtMyAyLjQ3Ny03LjgyIDAtMTAuNzk4ek0xNS45NCA3LjIyNmEuODMuODMgMCAwIDAtMS4yNzkgMS4wNjdjLjc3OC45MzcuNzc4IDIuNTIgMCAzLjQ1OC0uMjkzLjM1NC0uMjU3Ljg4OC4xMDYgMS4xNzMuNDM2LjMzOCAxLjAwNi4xIDEuMTczLS4xIDEuMy0xLjU2OCAxLjMtNC4wMiAwLTUuNnoiIGZpbGw9IiMyOThDRDEiIGZpbGwtcnVsZT0ibm9uemVybyIvPjwvc3ZnPg==", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9008436,"math_prob":0.99488455,"size":6383,"snap":"2019-43-2019-47","text_gpt3_token_len":1658,"char_repetition_ratio":0.12149239,"word_repetition_ratio":0.77712363,"special_character_ratio":0.2536425,"punctuation_ratio":0.091044776,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9948462,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-20T18:51:07Z\",\"WARC-Record-ID\":\"<urn:uuid:0157c129-91a9-458b-bdac-7895b4c3e247>\",\"Content-Length\":\"73346\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:81a0c0c6-5182-4cc1-827d-60195d9aacaf>\",\"WARC-Concurrent-To\":\"<urn:uuid:57b2924a-47a6-4252-9e8b-b06ca259981b>\",\"WARC-IP-Address\":\"99.84.181.48\",\"WARC-Target-URI\":\"https://sentence.yourdictionary.com/mv\",\"WARC-Payload-Digest\":\"sha1:TUQCVVX7U4I7CYTCFLX23FIRPJO5ECCQ\",\"WARC-Block-Digest\":\"sha1:E3HS2W42JYGHPCT7NCRNWSWLGH4JWUIM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986718918.77_warc_CC-MAIN-20191020183709-20191020211209-00113.warc.gz\"}"}
https://www.asknumbers.com/inch-to-mm/10-inches-to-mm.aspx
[ "# How Many Millimeters in 10 Inches?\n\n10 Inches to mm converter. How many millimeters in 10 inches?\n\n10 Inches equal to 254.0 mm or there are 254.0 millimeters in 10 inches.\n\n←→\nstep\nRound:\nEnter Inch\nEnter Millimeter\n\n## How to convert 10 inches to mm?\n\nThe conversion factor from inches to mm is 25.4. To convert any value of inches to mm, multiply the inch value by the conversion factor.\n\nTo convert 10 inches to mm, multiply 10 by 25.4, that makes 10 inches equal to 254.0 mm.\n\n10 inches to mm formula\n\nmm = inch value * 25.4\n\nmm = 10 * 25.4\n\nmm = 254.0\n\nCommon conversions from 10.x inches to mm:\n(rounded to 3 decimals)\n\n• 10 inches = 254.0 mm\n• 10.1 inches = 256.54 mm\n• 10.2 inches = 259.08 mm\n• 10.3 inches = 261.62 mm\n• 10.4 inches = 264.16 mm\n• 10.5 inches = 266.7 mm\n• 10.6 inches = 269.24 mm\n• 10.7 inches = 271.78 mm\n• 10.8 inches = 274.32 mm\n• 10.9 inches = 276.86 mm\n\nWhat is a Millimeter?\n\nMillimeter (millimetre) is a metric system unit of length. The symbol is \"mm\".\n\nWhat is a Inch?\n\nInch is an imperial and United States Customary systems unit of length, equal to 1/12 of a foot. 1 inch = 25.4 mm. The symbol is \"in\".\n\nCreate Conversion Table\nClick \"Create Table\". Enter a \"Start\" value (5, 100 etc). Select an \"Increment\" value (0.01, 5 etc) and select \"Accuracy\" to round the result." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7778065,"math_prob":0.99476254,"size":905,"snap":"2022-40-2023-06","text_gpt3_token_len":321,"char_repetition_ratio":0.23307437,"word_repetition_ratio":0.0,"special_character_ratio":0.4243094,"punctuation_ratio":0.18454936,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98610955,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-06T01:42:12Z\",\"WARC-Record-ID\":\"<urn:uuid:99756ec6-051c-431c-8b76-6cf549298636>\",\"Content-Length\":\"44191\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2a06f94e-d4c6-4c1b-93c4-4267c512c11c>\",\"WARC-Concurrent-To\":\"<urn:uuid:c78b08cf-d83f-4c65-8f17-00b408036e59>\",\"WARC-IP-Address\":\"104.21.33.54\",\"WARC-Target-URI\":\"https://www.asknumbers.com/inch-to-mm/10-inches-to-mm.aspx\",\"WARC-Payload-Digest\":\"sha1:3OBCO66UHF5EGVGERDZARKIUDHWEJNBU\",\"WARC-Block-Digest\":\"sha1:SHDUAKUPXS2XSUWY7I3DWAERXS7L54CH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337680.35_warc_CC-MAIN-20221005234659-20221006024659-00633.warc.gz\"}"}
https://blog.themusio.com/2016/06/30/variational-autoencoder/
[ "Meet Musio\n\n## Variational Autoencoder\n\nGoal\nVariational methods for inference of latent variables became popular in the last years.\nHere, we have a look at variational autoencoders from a slightly more mathematical point of view.\n\nMotivation\nSome deep generative models for vision tasks, such as style transfer or simply generating images similar to some given images, rely heavily on variational autoencoder frameworks.\nOnly very recently latent variables have been introduced into the hierarchical recurrent encoder decoder framework to enhance the expressive power of the model when coming up with responses to utterances.\nFurther variational autoencoder allow to perform unsupervised learning and are thus in general interesting to solving artificial intelligence.\n\nIngredients\nvariational inference, posterior distribution, latent variable, Bayes model, Kullback-Leibler divergence, objective function, lower bound\n\nSteps\nImagine we are in the situation were we have a distribution over some data points and we are not interested in putting a label on them, but we want to generate more data points which are similar to the ones we got.\nIn particular, we can think of a set of images of birds or 3D models of spaceships and we would like to come up with more of the same kind.\nSo far, model building for generation required either strong assumptions on the structure of the data, severe approximations which lead to sub-optimal models or were simply computationally expensive.\nOne approach to tackle these problems is to introduce latent variables into the models.\nIn principle these are capable of capturing some of the underlying structure of the data we want to model and allow us to maximize the probability of the seen data points such that we can generate probable new ones.\n\nThe variational autoencoder is one model that incorporates latent variables and only shares its name with the sparsity and denoising autoencoder because of its encoder decoder framework.\nThe most important question we have to answer is how we manage to maximize the probability of the data points for the variational autoencoder.\nFirst, we have a look at how to choose the latent variables.\nIn general, these might experience a very complicated dependency structure, since for example for handwritten character generation the angle of the character might be influenced by the writing style and velocity which we all have to capture in the latent variable.\nLuckily, we have neural networks around, we draw the latent variables from a normal distribution and hope that our network will come up with a useful distribution over the latent variables in some layer.\nNext, we have to approximate the data distribution which can be done by sampling.\nSo we sample a bunch of latent variables and calculate the expectation of the conditional data distribution.\nNow for high dimensional spaces we will run into problems since generated data points will mostly have low probability since they will in general look very different from the data points we already have.\nIt would be a better idea to sample latent variables that are more likely to have produced the data points we observed and then compute the conditional distribution using these values for the latent variables.\nThe mathematical tool to use is called Kullback-Leibler divergence and measures the similarity of two probability distributions.\nThis leads us to an objective function which can be optimized by looking at a lower bound to the logarithm of the data distribution.\nThe lower bound in particular includes a term that drives an arbitrarily chosen distribution over the latent variables to one that gives high probability to those latent variable values that have generated the observed data points.\nAnother advantages of this is that we make the inference of the posterior distribution over the latent variables tractable in this way.\n\nThe optimization of this lower bound can be done by the conventional methods of stochastic gradient descent.\nWe choose an inference model that comes up with parameters for a normal distribution which we try to match to a standard normal distribution over the latent variables.\nSampling becomes also a minor problem since during gradient descent we already go over the whole data set and it suffices to pick one sample for the latent variables.\nSince the sampling process is non-continuous a reparametrization trick is needed to push the gradients through to the inference model.\nInstead of sampling over the the normal distribution, we sample an error value from a standard normal distribution and deterministically calculate the latent variables using the parameter output of the inference model.\nIn this way none of the expectations in our objective depend on the parameters of our models and we can perform stochastic gradient descent.\nDuring testing we sample a latent variable from the standard normal distribution and we get a lower bound on the logarithm of the probability for the generated data point.\n\nThere is also a nice interpretation of lower bound in terms of information theory.\nUsually the logarithm of the data distribution tells us the total number bits needed to reconstruct the data point from an ideal encoding.\nWhat we are doing with the lower bound is calculating the extra information when the latent variable is sampled from the inference model instead of the standard normal distribution plus the information needed to reconstruct the data point given this latent variable.\nIn the end there is a small penalty for sub-optimal encoding since the inference model is used instead of the posterior distribution coming directly from the observed data points.\n\nFrom a more practical point of view and having applications with respect to natural language processing in mind the shortcoming of recurrent language models in producing averaged responses to given utterances might be circumvented by allowing the models to generate several probable responses.\nWe will pursue this direction further and follow recent developments with interest.\n\nResources\n\nTutorial on Variational Autoencoders” (PDF). Tutorial on Variational Autoencoders. Published June 21, 2016. Accessed 29 June 2016.\nVariational auto-encoders do not train complex generative models” (WEB). Variational auto-encoders do not train complex generative models. Published June 22, 2016. Accessed 29 June 2016.\nChainer-Variational-Recurrent-Autoencoder” (GIT). Chainer-Variational-Recurrent-Autoencoder. Published August 23, 2015. Accessed 29 June 2016." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88166887,"math_prob":0.94926864,"size":6494,"snap":"2021-43-2021-49","text_gpt3_token_len":1206,"char_repetition_ratio":0.16594762,"word_repetition_ratio":0.020895522,"special_character_ratio":0.17570065,"punctuation_ratio":0.064958826,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9773028,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-21T01:16:37Z\",\"WARC-Record-ID\":\"<urn:uuid:e43ac757-bfbf-43df-a7cd-36307e98359e>\",\"Content-Length\":\"49666\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:88f6e253-a767-4589-a8cc-e37d2204ecd4>\",\"WARC-Concurrent-To\":\"<urn:uuid:df6d88ef-57fc-4f6f-bd23-52fac0c61bfe>\",\"WARC-IP-Address\":\"52.199.22.172\",\"WARC-Target-URI\":\"https://blog.themusio.com/2016/06/30/variational-autoencoder/\",\"WARC-Payload-Digest\":\"sha1:77IQI2TE6BEPUFQCKRKF32J2UOFJ4W5M\",\"WARC-Block-Digest\":\"sha1:AOZ625GM4RUEOY6N5T6EL3XLGVGRQQ53\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585380.70_warc_CC-MAIN-20211021005314-20211021035314-00486.warc.gz\"}"}
https://forum.allaboutcircuits.com/threads/codewar-problem-looking-for-alternative-way-to-solve-this-problem-mathematical.183714/
[ "# codewar problem-looking for alternative way to solve this problem-mathematical\n\n#### terabaaphoonmein\n\nJoined Jul 19, 2020\n109\nhttps://www.codewars.com/kata/5552101f47fc5178b1000050/train/python\n\nThis is how I youtube'd and found a solution-:\n\nCode:\nn=89\np=1\nmy_sum=0\nfor num in str(n):\nmy_sum=my_sum+(int(num))**p\np=p+1\nif(my_sum%n==0):\nprint(int(my_sum/n))\nelse:\nprint(-1)\nWhat is another way to solve this problem? I prefer no code solutions with flowcharts/algorithms rather than writing codes so that I can try writing codes on my own. But I of course won't mind code with comments tbh.\n\nI want a more mathematical way of solving this problem. like using that given equation in code and finding a solution...I know we are already using it...but I want sth different.\n\n#### KeithWalker\n\nJoined Jul 10, 2017\n2,804\nI assume that this is written in python. Although currently it is a very popular software language, a lot of the older members, including myself, evolved through 4-bit assembler, Cobol, Lisp, Basic, Fortran, Forth and a whole slew of other languages to C and C++ and have never needed to use Python.\nIf you can describe, line by line, what the program is doing, I will see if I can offer any alternate ways of getting the result.\n\n•", null, "djsfantasi\n\n#### click_here\n\nJoined Sep 22, 2020\n545\nThe brute force solution is fine, but it might be a good opportunity to make some reusable code.\n\nI would make a function called \"isFactor(a,b)\" that returned a non-zero number if number \"a\" was a factor of \"b\"\n\nWhen dealing with larger numbers you would usually get all the \"prime factors\" and see if the number can be made from the product of them, but this is overkill for the simple numbers here...\n\nYou can also make a function that returns the value of a specific digit in a decimal number \"foo(1234,3)\" returns 3..." ]
[ null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8882268,"math_prob":0.85924685,"size":927,"snap":"2023-14-2023-23","text_gpt3_token_len":252,"char_repetition_ratio":0.11375948,"word_repetition_ratio":0.45714286,"special_character_ratio":0.27508092,"punctuation_ratio":0.145,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97848797,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-04T18:33:43Z\",\"WARC-Record-ID\":\"<urn:uuid:e20e4fff-b07e-4adb-ac27-93f88df482ae>\",\"Content-Length\":\"102115\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1e766975-0963-46c5-9871-a13b80fe026e>\",\"WARC-Concurrent-To\":\"<urn:uuid:2bff6290-2466-48b0-9bfb-c456c2d53fa4>\",\"WARC-IP-Address\":\"172.67.34.128\",\"WARC-Target-URI\":\"https://forum.allaboutcircuits.com/threads/codewar-problem-looking-for-alternative-way-to-solve-this-problem-mathematical.183714/\",\"WARC-Payload-Digest\":\"sha1:P4EY3CP5VTYBIU4SBGVRNGVMCCMQF5GE\",\"WARC-Block-Digest\":\"sha1:D6EL4OXJCZKPVAUBV222GTIO72JUVIY2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224650201.19_warc_CC-MAIN-20230604161111-20230604191111-00179.warc.gz\"}"}
https://fr.scribd.com/document/358293038/Source-Codes
[ "Vous êtes sur la page 1sur 37\n\n# Source Codes\n\n3.1 INTRODUCTION\n\nIn Chapter 2, it has been discussed that both source and channel coding\nare essential for error-free transmission over a communication channel\n(Figure 2.1). The task of the source encoder is to transform the source\noutput into a sequence of binary digits (bits) called the information\nsequence. If the source is a continuous source, it involves analog-to-\ndigital (A/D) conversion. An ideal source encoder should have the\nfollowing properties:\n\n1. The average bit rate required for representation of the source output should\nbe minimized by reducing the redundancy of the information source.\n2. The source output can be reconstructed from the information sequence\nwithout any ambiguity.\n\n## The channel encoder converts the information sequence into a\n\ndiscrete encoded sequence (also calledcode word) to combat the noisy\nenvironment. A modulator (not shown in the figure) is then used to\ntransform each output symbol of the channel encoder into a suitable\nwaveform for transmission through the noisy channel. At the other end\nof the channel, a demodulator processes each received waveform and\nproduces an output called the received sequence which can be either\ndiscrete or continuous. The channel decoder converts the received\nsequence into a binary sequence called theestimated sequence. Ideally\nthis should be a replica of information source even in presence of the\nnoise in the channel. The source decoder then transforms the estimated\nsequence into an estimate of the source output and transfers the\nestimate to the destination. If the source is continuous, it involves\ndigital-to-analog (D/A) conversion. For a data storage system, a\nmodulator can be considered as awriting unit, a channel as a storage\nmedium and a demodulator as a reading unit. The process of\ntransmission can be compared to recording of data on a storage\nmedium. Efficient representation of symbols leads to compression of\ndata.\n\n## In this chapter we will consider various source coding techniques and\n\ntheir possible applications. Source coding is mainly used for\ncompression of data, such as speech, image, video, text, etc.\n\n## 3.2 CODING PARAMETERS\n\nIn Chapter 2, we have seen that the inputoutput relationship of a\nchannel is specified in terms of either symbols or messages (e.g., the\nentropy is expressed in terms of bits/message or bits/symbols). In fact,\nboth representations are widely used. In this chapter the following\nterminology has been used to describe different source coding\ntechniques:\n\n## 1. Source AlphabetA discrete information source has a finite set of source\n\nsymbols as possible outputs. This set of source symbols is called the source\nalphabet.\n2. Symbols or LettersThese are the elements of the source alphabet.\n3. Binary Code WordThis is a combination of binary digits (bits) assigned to\na symbol.\n4. Length of Code WordThe number of bits in the code word is known as the\nlength of code word.\n\n## A. Average Code Length Let us consider a DMS X having finite\n\nentropy H(X) and an alphabet {x1,x2, ..., xm} with corresponding\nprobabilities of occurrence P(xj), where j = 1,2,...,m. If the binary code\nword assigned to symbol Xj is nj bits, the average code word length L per\nsource symbol is defined as\n\nIt is the average number of bits per source symbol in the source coding\nprocess. L should be minimum for efficient transmission.\n\n## where Lmin is the minimum possible value of L. Obviously when = 1, the\n\ncode is the most efficient.\n\n=1 (3.3)\n\n## 3.3 SOURCE CODING THEOREM\n\nThe source coding theorem states that if X be a DMS with\nentropy H(X), the average code word lengthL per symbol is bounded as\n\nL H(X) (3.4)\n\n## Example 3.1: A DMS X produces two symbols x1 and x2. The\n\ncorresponding probabilities of occurrence and codes are shown in Table\n3.1. Find the code efficiency and code redundancy.\n\nxj P(xj) Code\n\nx1 0.8 0\n\nx2 0.2 1\n\nThe entropy is\n\n## The code efficiency is\n\nThe code redundancy is\n\n## 3.4 CLASSIFICATION OF CODES\n\nA. Fixed-length Codes If the code word length for a code is fixed, the\ncode is called fixed-length code. A fixed-length code assigns fixed\nnumber of bits to the source symbols, irrespective of their statistics of\nappearance. A typical example of this type of code is the ASCII code for\nwhich all source symbols (A to Z, a to z, 0 to 9, punctuation mark,\ncommas etc.) have 7-bit code word.\n\nLet us consider a DMS having source alphabet {x1, x2, ..., xm}. If m is a\npower of 2, the number of bits required for unique coding is log2m.\nWhen m is not a power of 2, the bits required will be [(log2m) + 1].\n\n## B. Variable-length Codes For a variable-length code, the code word\n\nlength is not fixed. We can consider the example of English alphabet\nconsisting of 26 letters (a to z). Some letters such as a, e, etc. appear\nmore frequently in a word or a sentence compared to the letters such\nas x, q, z, etc. Thus, if we represent the more frequently occurring letters\nby lesser number of bits and the less frequently occurring letters by\nlarger number of bits, we might require fewer number of bits overall to\nencode an entire given text than to encode the same with a fixed-length\ncode. When the source symbols are not equiprobable, a variable-length\ncoding technique can be more efficient than a fixed-length coding\ntechnique.\n\n## C. Distinct Codes A code is called distinct if each code word is\n\ndistinguishable from the other. Table 3.2 is an example of distinct code.\n\nxj Code Word\nx1 00\n\nx2 01\n\nx3 10\n\nx4 11\n\n## D. Uniquely Decodable Codes The coded source symbols are\n\ntransmitted as a stream of bits. The codes must satisfy some properties\nso that the receiver can identify the possible symbols from the stream of\nbits. A distinct code is said to be uniquely decodable if the original\nsource sequence can be reconstructed perfectly from the received\nencoded binary sequence. We consider four source symbols A, B, C, and\nD encoded with two different techniques as shown in Table 3.3.\n\nSymbol Code 1 Co\n\nA 00 0\n\nB 01 1\n\nC 10 00\n\nD 11 01\n\n## Code 1 is a fixed-length code, whereas code 2 is a variable-length code.\n\nThe message A BAD CAB can be encoded using the above two codes. In\ncode 1 format, it appears as 00 010011 100001, whereas using code 2\nformat the sequence will be 0 1001 0001. Code 1 requires 14 bits to\nencode the message, whereas code 2 requires 9 bits. Although code 2\nrequires lesser number of bits, yet it does not qualify as a valid code as\nthere is a decoding problem with this code. The sequence 0 1001 0001\ncan be regroupedin different ways, such as \nwhich stands for A BAAB AAD or which\ntranslates to A BCB AAAB. Since in code 2 format we do not know\nwhere the code word of one symbol (letter) ends and where the next one\nbegins it creates an ambiguity; it is not a uniquely decodable code.\nHowever, there is no such problem associated with code 1 format since it\nis a fixed-length code and each group must include 2 bits together.\nHence, code 1 format is a uniquely decodable code. In should be noted\nthat a uniquely decodable code can be both fixed-length code and\nvariable-length code.\n\n## E. Prefix-free Codes A code in which no code word forms the prefix\n\nof any other code word is called a prefix-free code or prefix code. The\ncoding scheme in Table 3.4 is an example of prefix code.\n\nA 0\n\nB 10\n\nC 110\n\nD 1110\n\n## We consider a symbol being encoded using code 2 in Table 3.3. If a 0 is\n\nreceived, the receiver cannot decide whether it is the entire code word\nfor alphabet A or a partial code word for C or D that it has received.\nHence, no code word should be the prefix of any other code word. This is\nknown as the prefix-free property or prefix condition. The code\nillustrated in Table 3.4 satisfies this condition.\n\n## It is to be mentioned that if no code word forms the prefix of another\n\ncode word, the code is said to be uniquely decodable. However, the\nprefix-free condition is not a necessary condition for unique\ndecodability. This is explained in Example 3.2.\n\n## F. Instantaneous Codes A uniquely decodable code is said to be\n\nan instantaneous code if the end of any code is recognizable without\nchecking subsequent code symbols. Since the instantaneous codes also\nhave the property that no code word is a prefix of another code word,\nprefix codes are also calledinstantaneous codes.\nG. Optimal Codes A code is called an optimal code if it is\ninstantaneous and has minimum average length L for a given source\nwith a particular probability assignment for the source symbols.\n\n## H. Entropy Coding When a variable-length code is designed such\n\nthat its average code word length approaches the entropy of the DMS,\nthen it is said to be entropy coding. ShannonFano coding andHuffman\ncoding (discussed later) are two examples of this type of coding.\n\nExample 3.2: Consider Table 3.5 where a source of size 4 has been\nencoded in binary codes with 0 and 1. Identify different codes.\n\n## Codes 2, 4, and 6 and code 5 are uniquely decodable codes.\n\n(Note that code 5 does not satisfy the prefix-free property, and still it is\nuniquely decodable since the bit 0 indicates the beginning of each code\nword.)\n\nExample 3.3: Consider Table 3.6 illustrating two binary codes having\nfour symbols. Compare their efficiency.\n\n## Table 3.6 Two Binary Codes\n\nSolution: Code 1 is a fixed-length code having length 2.\n\nThe entropy is\n\n## In this case, the average code length per symbol is\n\nThe entropy is\nThe code efficiency is\n\n## Let X be a DMS X having an alphabet {xj} ( j = 1,2,...,m). If the length of\n\nthe binary code word corresponding to xj be nj, a necessary and sufficient\ncondition for existence of an instantaneous binary code is\n\n## The above expression is known as Kraft inequality. It indicates the\n\nexistence of an instantaneously decodable code with code word lengths\nthat satisfy the inequality. However, it does not show how to obtain these\ncode words, nor does it tell that any code, for which inequality condition\nis valid, is automatically uniquely decodable.\n\nExample 3.4: Verify that L H(X), where L and H(X) are the average\ncode word length per symbol and the source entropy, respectively.\n\n## where the equality holds only if Qj = Pj.\n\nFrom the Kraft inequality, we get\n\n## The equality holds if Qj = Pj and K = 1.\n\nExample 3.5: Consider a DMS with four source symbols encoded with\nfour different binary codes as shown in Table 3.7. Show that\n\n## 1. all codes except code 2 satisfy the Kraft inequality\n\n2. codes 1 and 4 are uniquely decodable but codes 2 and 3 are not uniquely\ndecodable.\n\n## Table 3.7 Different Binary Codes\n\nSolution:\n\n1. For code 1: n1 = n2 = n3 = n4 = 2\nHence, all codes except code 2 satisfy Kraft inequality.\n2. Codes 1 and 4 are prefix codes; therefore, they are uniquely decodable.\n\n## Code 2 does not satisfy the Kraft inequality. Thus, it is not\n\nuniquely decodable.\nCode 3 satisfies the Kraft inequality; yet it is not uniquely\ndecodable. This can be verified considering the following\nexample:\nLet us consider a binary sequence 0110110. Using code 3, this\nsequence can correspond to x1x2x1x4or x1x4x4.\n\n## Like an electronic communication system, an image compression system\n\ncontains two distinct functional components: an encoder and a decoder.\nThe encoder performs compression, while the job of the decoder is to\nexecute the complementary operation of decompression. These\noperations can be implemented by the using a software or a hardware or\na combination of both. A codec is a device or a program that performs\nboth coding and decoding operations. In still-image applications, both\nthe encoded input and the decoder output are the functions of two\ndimensional (2-D) space co-ordinates, whereas video signals are the\nfunctions of space co-ordinates as well as time. In general, decoder\noutput may or may not be an exact replica of the encoded input. If it is an\nexact replica, the compression system is error free,\nlossless or information preserving, otherwise, the output image is\ndistorted and the compression system is called lossy system.\n\n## 3.6.1 Image Formats, Containers, and Compression Standards\n\nAn image file format is a standard way to organize and store image data.\nIt specifies how the data is arranged and which type of compression\ntechnique (if any) is used. An image container is akin to a file format but\ndeals with multiple types of image data. Image compression\nstandards specify the procedures for compressing and decompressing\nimages. Table 3.8 provides a list of the image compression standards, file\nformats, and containers presently used.\n\n## CCITT Group 3 (Consultative Committee JPEG (Joint Photographic H.261\n\nof the International Telephone and Experts Group standard)\nTelegraph standard)\n\n## CCITT Group 4 JPEG-LS (Loss less or H.262\n\nnear loss less JPEG)\n\n## JBIG (or JBIG1) (Joint Bi-level Image JPEG-2000 H.263\n\nExperts Group standard)\n\n## TIFF (Tagged Image File Format) GIF (Graphic Interchange MPEG-1 (\n\nFormat) Expert Gr\n\nFormat)\n\nGraphics)\n\nTIFF MPEG-4 A\nCoding)\n\nAVS (Aud\nStandard)\n\nHDV (Hig\nVideo)\n\nM-JPEG (\n\nQuick Tim\n\nVC-1 (or W\n\n## Digital audio technology forms an essential part of multimedia standards\n\nand technology. The technology has developed rapidly over the last two\ndecades. Digital audio finds applications in multiple domains such as\nCD/DVD storage, digital telephony, satellite broadcasting, consumer\nelectronics, etc.\n\n## Based on their applications, audio signals can be broadly classified into\n\nthree following subcategories:\n\n## 1. Telephone SpeechThis is a low bandwidth application. It covers the\n\nfrequency range of 3003400 Hz. Though the intelligibility and naturalness\nof this type of signal are poor, it is widely used in telephony and some video\ntelephony services.\n2. Wideband SpeechIt covers a bandwidth of 507000 Hz for improved\nspeech quality.\n3. Wideband AudioWideband audio includes high fidelity audio (speech as\nwell as music) applications. It requires a bandwidth of at least 20 kHz for\ndigital audio storage and broadcast applications.\n\nThe conventional digital format for these signals is the Pulse Code\nModulation (PCM). Earlier, the compact disc (CD) quality stereo audio\nwas used as a standard for digital audio representation having sampling\nfrequency 44.1 kHz and 16 bits/sample for each of the two stereo\nchannels. Thus, the stereo net bit rate required is 2 16 44.1 = 1.41\nMbps. However, the CD needs a significant overhead (extra bits) for\nsynchronization and error correction, resulting in a 49-bit representation\nof each 16-bit audio sample. Hence, the total stereo bit rate requirement\nis 1.41 49/16 = 4.32 Mbps.\n\n## Although high bandwidth channels are available, it is necessary to\n\nachieve compression for low bit rate applications in cost-effective storage\nand transmission. In many applications such as mobile radio, channels\nhave limited capacity and efficient bandwidth compression must be\nemployed.\n\n## Speech compression is often referred to as speech coding, a method for\n\nreducing the amount of information needed to represent a speech signal.\nMost of the speech-coding schemes are usually based on a lossy\nalgorithm. Lossy algorithms are considered acceptable as far as the loss\nof quality is undetectable to the human ear. Speech coding or\ncompression is usually implemented by the use ofvoice\ncoders or vocoders. There are two types of vocoders as follows:\n\n## 1. Waveform-following CodersWaveform-following coders exactly\n\nreproduce the original speech signal if there is no quantization error.\n2. Model-based CodersModel-based coders cannot reproduce the original\nspeech signal even in absence of quantization error, because they employ a\nparametric model of speech production which involves encoding and\ntransmitting the parameters, not the signal.\n\n## One of the model-based coders is Linear Predictive Coding (LPC)\n\nvocoder, which is lossy regardless of the presence of quantization error.\nAll vocoders have the following attributes:\n\n## 1. Bit RateIt is used to determine the degree of compression that a vocoder\n\nachieves. Uncompressed speech is usually transmitted at a rate of 64 kbps\nusing 8 bits/sample and 8 kHz sampling frequency. Any bit rate below 64\nkbps is considered compression. The linear predictive coder transmits the\nsignal at a bit rate of 2.4 kbps.\n2. DelayIt is involved with the transmission of an encoded speech signal.\nAny delay that is greater than 300 ms is considered unacceptable.\n3. ComplexityThe complexity of algorithm affects both the cost and the\npower of the vocoder. LPC is very complex as it has high compression rate\nand involves execution of millions of instructions per second.\n4. QualityQuality is a subjective attribute and it depends on how the speech\nsounds to a given listener.\n\nAny voice coder, regardless of the algorithm it exploits, will have to make\n\n## ShannonFano coding, named after Claude Shannon and Robert Fano,\n\nis a source coding technique for constructing a prefix code based on a set\nof symbols and their probabilities. It is suboptimal as it does not achieve\nthe lowest possible expected code word length like Huffman coding.\nShannonFano algorithm produces fairly efficient variable-length\nencoding. However, it does not always produce optimal prefix codes.\nHence, the technique is not widely used. It is used in the IMPLODE\ncompression method, which is a part of the ZIP file format, where a\nsimple algorithm with high performance and the minimum requirements\nfor programming is desired.\n\n## The steps or algorithm of ShannonFano algorithm for generating\n\nsource code are presented as follows:\n\n## Step 1: Arrange the source symbols in order of decreasing probability.\n\nThe symbols with equal probabilities can be listed in any arbitrary order.\nStep 2: Divide the set into two such that the sum of the probabilities in\neach set is the same or nearly the same.\nStep 3: Assign 0 to the upper set and 1 to the lower set.\nStep 4: Repeat steps 2 and 3 until each subset contains a single symbol.\nExample 3.6: A DMS X has five symbols x1, x2, x3, x4, and x5 with P(x1) =\n0.4, P(x2) = 0.17, P(x3) = 0.18,P(x4) = 0.1, and P(x5) = 0.15, respectively.\n\n## 1. Construct a ShannonFano code for X.\n\n2. Calculate the efficiency of the code.\n\nSolution:\n\n## Table 3.9 Construction of Shannon-Fano Code\n\n2.\n3.9 HUFFMAN CODING\n\nHuffman coding produces prefix codes that always achieve the lowest\npossible average code word length. Thus, it is an optimal code which has\nthe highest efficiency or the lowest redundancy. Hence, it is also known\nas the minimum redundancy code or optimum code.\n\n## Huffman codes are used in CCITT, JBIG2, JPEG, MPEG-1/2/4, H.261,\n\nH.262, H.263, H.264, etc.\n\n## Step 1: List the source symbols in order of decreasing probability. The\n\nsymbols with equal probabilities can be arranged in any arbitrary order.\nStep 2: Combine the probabilities of the symbols having the smallest\nprobabilities. Now, reorder the resultant probabilities. This process is\ncalled reduction 1. The same process is repeated until there are exactly\ntwo ordered probabilities remaining. Final step is called the last\nreduction.\nStep 3: Start encoding with the last reduction. Assign 0 as the first digit\nin the code words for all the source symbols associated with the first\nprobability of the last reduction. Then assign 1 to the second probability.\nStep 4: Now go back to the previous reduction step. Assign 0 and 1 to\nthe second digit for the two probabilities that was combined in this\nreduction step, retaining all assignments made in step 3.\nStep 5: Repeat step 4 until the first column is reached.\nExample 3.7: Repeat Example 3.6 for the Huffman code and compare\ntheir efficiency.\n\n## Table 3.10 Construction of Huffman Code\n\nThe average code word length for the Huffman code is shorter than that\nof the ShannonFano code. Hence, the efficiency of Huffman code is\nhigher than that of the ShannonFano code.\n\nExample 3.8: A DMS X has seven symbols x1, x2, x3, x4, x5, x6,\n\nand x7 with\n\nrespectively.\n\n## 1. Construct a Huffman code for X.\n\n2. Calculate the efficiency of the code.\n\n## Solution: If we proceed similarly as in the previous example, we can\n\nobtain the following Huffman code (see Table 3.11).\n\n## Table 3.11 Huffman Code for Example 3.8\n\nIn this case, the efficiency of the code is exactly 100%. It is also\ninteresting to note that the code word length for each symbol is equal to\nits self-information. Therefore, it can be concluded that to achieve\noptimality ( = 100%), the self-information of the symbols must be\ninteger, which in turn, requires that the probabilities must negative\npowers of 2.\n\n## 3.10 ARITHMETIC CODING\n\nIt has been already been shown that the Huffman codes are optimal only\nwhen the probabilities of the source symbols are negative powers of two.\nThis condition of probability is not always valid in practical situations. A\nmore efficient way to match the code word lengths to the symbol\nprobabilities is implemented by using arithmetic coding. No one-to-one\ncorrespondence between source symbols and code words exists in this\ncoding scheme; instead, an entire sequence of source symbols (message)\nis assigned a single code word. The arithmetic code word itself defines an\ninterval of real numbers between 0 and 1. If the number of symbols in\nthe message increases, the interval used to represent it becomes\nnarrower. As a result, the number of information units (say, bits)\nrequired to represent the interval becomes larger. Each symbol in the\nmessage reduces the interval in accordance with its probability of\noccurrence. The more likely symbols reduce the range by less, and\ntherefore add fewer bits to the message.\n\n## Arithmetic coding finds applications in JBIG1, JBIG2, JPEG-2000,\n\nH.264, MPEG-4 AVC, etc.\n\n## Example 3.9: Let an alphabet consist of only four symbols A, B,\n\nC, and D with probabilities of occurrence P(A) = 0.2, P(B) = 0.2, P(C) =\n0.4, and P(D) = 0.2, respectively. Find the arithmetic code for the\nmessage ABCCD.\n\n## Table 3.12 Construction of Arithmetic Code\n\nWe first divide the interval [0, 1) into four intervals proportional to the\nprobabilities of occurrence of the symbols. The symbol A is thus\nassociated with subinterval [0, 0.2). B, C, and D correspond to [0.2, 0.4),\n[0.4, 0.8), and [0.8, 1.0), respectively. A is the first symbol of the\nmessage being coded, the interval is narrowed to [0, 0.2). Now, this\nrange is expanded to the full height of the figure with its end points\nlabelled as 0 and 0.2 and subdivided in accordance with the original\nsource symbol probabilities. The next symbol B of the message now\ncorresponds to [0.04, 0.08). We repeat the process to find the intervals\nfor the subsequent symbols. The third symbol C further narrows the\nrange to [0.056, 0.072). The fourth symbol C corresponds to [0.06752,\n0.0688). The final message symbol Dnarrows the subinterval to\n[0.06752, 0.688). Any number within this range (say, 0.0685) can be\nused to represent the message.\n3.11 LEMPELZIVWELCH CODING\n\n## There are many compression algorithms that use a dictionary or code\n\nbook, known to the coder and the decoder. This dictionary is generated\nduring the coding and decoding processes. Many of these algorithms are\nbased on the work reported by Abraham Lempel and Jacob Ziv, and are\nknown asLempelZiv encoders. In principle, these coders replace\nrepeated occurrences of a string by references to an earlier occurrence.\nThe dictionary is basically the collection of these earlier occurrences. In a\nwritten text, groups of letters such as th, ing, qu, etc. appear many\ntimes. A dictionary-based codingscheme in this case can be proved\neffective.\n\n## One widely used LZ algorithm is the LempelZivWelch (LZW)\n\nalgorithm reported by Terry A. Welch. It is a lossless\nor reversible compression. Unlike Huffman coding, LZW coding requires\nno a priori knowledge of the probability of occurrences of the symbols to\nbe encoded. It is used in variety of mainstream imaging file formats,\nincluding GIFF, TIFF and PDF.\n\nExample 3.10: Encode and decode the following text message using\nLZW coding:\n\n## itty bitty bit bin\n\nSolution: The initial set of dictionary entries is a 8-bit character code\nhaving values 0-255, with ASCII as the first 128 characters, including\nspecifically the following which appear in the string.\n\n## Table 3.13 LZW Coding Dictionary\n\nValue Character\n\n32 Space\n\n98 b\n\n105 i\n\n110 n\n116 t\n\n121 y\n\nDictionary entries 256 and 257 are reserved for the clear dictionary and\nend of transmission commands, respectively. During encoding and\ndecoding process, new dictionary entries are created using\nall phrases present in the text that are not yet in the dictionary.\nEncoding algorithm is as follows.\n\nAccumulate characters of the message until the string does not match\nany dictionary entry. Then define this string as a new entry, but send the\nentry corresponding to the string without the last character, which will\nbe used as the first character of the next string to match.\n\nIn the given text message, the first character is i and the string\nconsisting of just that character is already present in the dictionary. So\nthe next character is added, and the accumulated string becomes it.\nThis string is not in the dictionary. At this point, i is sent and it is\nadded to the dictionary, at the next available entry, i.e., 258. The\naccumulated string is reset to be just the last character, which was not\nsent, so it is t. Now, the next character is added; hence, the accumulated\nstring becomes tt which is not in the dictionary. The process repeats.\n\n## Initially, the additional dictionary entries are all two-character strings.\n\nHowever, the first time one of these two-character strings is repeated, it\nis sent (using fewer bits than would be required for two characters) and a\nnew three-character dictionary entry is defined. For the given message, it\nhappens with the string itt. Later, one three-character string gets\ntransmitted, and a four-character dictionary entry is defined.\n\n## Decoding algorithm is as follows.\n\nOutput the character string whose code is transmitted. For each code\ntransmission, add a new dictionary entry as the previous string plus the\nfirst character of the string just received. It is to be noted that the coder\nand decoder create the dictionary on the fly; the dictionary therefore\ndoes not need to be explicitly transmitted, and the coder deals with the\ntext in a single pass.\n\nAs seen from Table 3.14, we sent eighteen 8-bit characters (144 bits) in\nfourteen 9-bit transmissions (126 bits). It is a saving of 12.5% for such a\nshort text message. In practice, larger text files often compress by a\nfactor of 2, and drawings by even more.\n\n## Run-length encoding (RLE) is used to reduce the size of a repeating\n\nstring of characters. The repeating string is referred to as a run. It can\ncompress any type of data regardless of its information content.\nHowever, content of data affects the compression ratio. Compression\nratio, in this case, is not so high. But it is easy to implement and quick to\nexecute. Typically RLE encodes a run of symbols into two bytes, a count\nand a symbol.\n\nRLE was developed in the 1950s and became, along with its 2-D\nextensions, the standard compression technique in facsimile (FAX)\ncoding. FAX is a two-colour (black and white) image which is\npredominantly white. If these images are sampled for conversion into\ndigital data, many horizontal lines are found to be entirely white (long\nruns of 0s). Besides, if a given pixel is either black or white, the\npossibility that the next pixel will match is also very high. The code for a\nfax machine is actually a combination of a Huffman code and a run-\nlength code. The coding of run-lengths is also used in CCITT, JBIG2,\nJPEG, M-PEG, MPEG-1/2/4, BMP, etc.\n\n## Example 3.11: Consider the following bit stream:\n\n11111111111111110000000000000000000011\nFind the run-length code and its compression ratio.\n\nSolution: The stream can be represented as: sixteen 1s, twenty 0s and\ntwo 1s, i.e., (16, 1), (20, 0), (2, 1). Since the maximum number of\nrepetitions is 20, which can be represented with 5 bits, we can encode\nthe bit stream as (10000,1), (10100,0), (00010,1).\n\n## The Motion Pictures Expert Group (MPEG) of the International\n\nStandards Organization (ISO) provides the standards for digital audio\ncoding, as a part of multimedia standards. There are three standards\ndiscussed as follows.\n\nA. MPEG-1 In the MPEG-1 standard, out of a total bit rate of 1.5 Mbps\nfor CD quality multimedia storage, 1.2 Mbps is provided to video and 256\nkbps is allocated to two-channel audio. It finds applications in web\nmovies, MP3 audio, video CD, etc.\n\n## B. MPEG-2 MPEG-2 provides standards for high-quality video\n\n(including High-Definition TV) at a rate ranging from 3 to 15 Mbps and\nabove. It also supports new audio features including low bit rate digital\naudio and multichannel audio. In this case, two to five full bandwidth\naudio channels are accommodated. The standard also provides a\ncollection of tools known as Advanced Audio Coding(MPEG-2 AAC).\n\n## C. MPEG-4 MPEG-4 addresses standardization of audiovisual coding\n\nfor various applications ranging from mobile access, low-complexity\nmultimedia terminals to high-quality multichannel sound systems with\nwide range of quality and bit rate, but improved quality mainly at low bit\nrate. It provides interactivity, universal accessibility, high degree of\nflexibility, and extensibility. One of its main applications is found in\ninternet audio-video streaming.\n\n## 3.14 PSYCHOACOUSTIC MODEL OF HUMAN HEARING\n\nThe human auditory system (the inner ear) is fairly complicated. Results\nof numerous psychoacoustic tests reveal that human auditory response\nsystem performs short-term critical band analysis and can be modelled\nas a bank of band pass filters with overlapping frequencies. The power\nspectrum is not on linear frequency scale and the bandwidths are in the\norder of 50 to 100 Hz for signals below 500 Hz and up to 5000 Hz at\nhigher frequencies. Such frequency bands of auditory response system\nare calledcritical bands. Twenty six critical bands covering frequencies\nof up to 24 kHz are taken into account.\n\nIt is observed that the ear is less sensitive to low level sound when there\nis a higher level sound at a nearby frequency. When this occurs, the low\nlevel audio signal becomes either less audible or inaudible. This\nweaker signal is calledmasker and the weaker one that is masked is\nknown as maskee. It is also found that the masking is the largest in the\ncritical band within which the masker is present and the masking is also\nslightly effective in the neighbouring bands.\n\n## We can define a masking threshold, below which the presence of any\n\naudio will be rendered inaudible. It is to be noted that the masking\nthreshold depends upon several factors, such as the sound pressure\nlevel (SPL), the frequency of the masker, and the characteristics of the\nnoise).\nFigure 3.1 Effects of Masking in Presence of a Masker at 1 kHz\n\nIn Figure 3.1, the 1-kHz signal acts as a masker. The masking threshold\n(solid line) falls off sharply as we go away from the masker frequency.\nThe slope of the masking threshold is found to be steeper towards the\nlower frequencies. Hence, it can be concluded that the lower frequencies\nare not masked to the extent that the higher frequencies are masked. In\nthe above diagram, the three solid bars represent the maskee frequencies\nand their respective SPLs are well below the masking threshold. The\ndotted curve represents quiet threshold in the absence of any masker.\nThe quiet threshold has a lower value in the frequency range from 500\nHz to 5 kHz of the audio spectrum.\n\nparameters:\n\n## Signal-to-mask ratio (SMR): The SMR at a given frequency is defined as\n\nthe difference (in dB) between the SPL of the masker and the masking\nthreshold at that frequency.\n\n## Mask-to-noise ratio (MNR): The MNR at a given frequency is the\n\ndifference (in dB) between the masking threshold at that frequency and\nthe noise level. To make the noise inaudible, its level must be below the\nmasking threshold; i.e., the MNR must be positive.\n\nat a frequency fm. The SMR, the signal-to-noise ratio (SNR) and the\nMNR for a particular frequency f corresponding to a noise level have also\nbeen shown in the figure. It is evident that\nSMR (f) = SNR (f) MNR (f) (3.7)\n\nSo far we have considered only one masker. If more than one maskers\na global masking threshold is evaluated that describes just noticeable\ndistortion as a function of frequency.\n\n## The masking phenomenon described in the previous subsection is also\n\nsounds occur within a small interval of time, the stronger signal being\nplays an important role in human auditory perception. Temporal\nshort time interval and is associated\ntenth duration of that of the latter. The order of postmasking duration is\naudio coding algorithms.\n3.14.3 Perceptual Coding in MPEG Audio\n\nAn efficient audio source coding algorithm must satisfy the following two\nconditions:\n\n## 1. Redundancy removal: It will remove redundant components by exploiting\n\n2. Irrelevance removal: It is perceptually motivated since any sound that our\nears cannot hear can be removed.\n\n## In irrelevance removal, simultaneous and temporal masking phenomena\n\nplay major roles in MPEG audio coding. It has already been mentioned\nthat the noise level should be below the masking threshold. Since the\nquantization noise depends on the number of bits to which the samples\nare quantized, the bit allocation algorithm must take care of this\nfact. Figure 3.3 shows the block diagram of a perception-based coder\nthat makes use of the masking phenomenon.\n\n## Figure 3.3 Block Diagram of a Perception Based Coder\n\nAs seen from the figure, Fast Fourier Transform (FFT) of the incoming\nPCM audio samples is computed to obtain the complete audio spectrum,\nfrom which the tonal components of masking signals can be determined.\nUsing this, a global masking threshold and also the SMR in the entire\naudio spectrum is evaluated. The dynamic bit allocator uses the SMR\ninformation while encoding the bit stream. A coding scheme is\ncalled perceptually transparent if the quantization noise is below the\nglobal masking threshold. The perceptually transparent encoding\nprocess will produce the decoded output indistinguishable from the\ninput.\nHowever, our knowledge in computing the global masking threshold is\nlimited as the perceptual model considers only simple and stationary\nmaskers and sometimes it can fail in practical situations. To solve this\nproblem, sufficient safety margin should be maintained.\n\n3.15 DOLBY\n\n## Dolby Digital was first developed in 1992 as a means to allow 35-mm\n\ntheatrical film prints to store multichannel digital audio directly on the\nfilm without sacrificing the standard analog optical soundtrack. It is\nbasically a perceptual audio coding system. Since its introduction the\nsystem has been adopted for use with laser disc, DVD-audio, DVD-video,\nDVD-ROM, Internet audio distribution, ATSC high definition and\nstandard definition digital television, digital cable television and digital\nsatellite broadcast. Dolby Digital is used as an emissions coder that\nencodes audio for distribution to the consumer. It is not a\nmultigenerational coder which is exploited to encode and decode audio\nmultiple times.\n\nDolby Digital breaks the entire audio spectrum into narrow bands of\nfrequency using mathematical models derived from the characteristics of\nthe ear and then analyzes each band to determine the audibility of those\nsignals. A greater number of bits represent more audible signals, which,\nin turn, increases data efficiency. In determining the audibility of signals,\nthe system makes use of masking. As mentioned earlier, a low level audio\nsignal becomes inaudible, if there is a simultaneous occurrence of a\nstronger audio signal having frequency close to the former. This is\nsignals can be encoded much more efficiently than in other coding\nsystems with comparable audio quality, such as linear PCM. Dolby\nDigital is an excellent choice for those systems where high audio quality\nis desired, but bandwidth or storage space is limited. This is especially\ntrue for multichannel audio. The compact Dolby Digital bit stream allows\nfull 5.1-channel audio to take less space than a single channel of linear\nPCM audio.\n\n## Linear predictive coding is a digital method for encoding an analog\n\nsignal (e.g., speech signal) in which a particular value is predicted by a\nlinear function of the past values of the given signal. The particular\nsource-filter model employed in LPC is known as the linear predictive\ncoding model. It consists of two main components: analysis or encoding\nand synthesis or decoding. The analysis part involves examining the\nspeech signal and breaking it down into segments or blocks. Each\nsegment is further examined to get the answers to the following key\nquestions:\n\n1. Is the segment voiced or unvoiced? (Voiced sounds are usually vowels and\noften have high average energy levels. They have very distinct resonant or\nformant frequencies. Unvoiced sounds are usually consonants and generally\nhave less energy. They have higher frequencies than voiced sounds.)\n2. What is the pitch of the segment?\n3. What parameters are needed to construct a filter that models the vocal tract\nfor the current segment?\n\n## LPC analysis is usually conducted by a sender who is supposed to answer\n\nactually performs the task of synthesis. It constructs a filter by using the\ncan reproduce the original speech signal.\n\n## Essentially, LPC synthesis simulates human speech production. Figure\n\n3.4 illustrates which parts of the receiver correspond to which parts in\nthe human anatomy. In almost all voice coder models, there are two\nparts: excitation and articulation. Excitation is the type of sound that is\ntransmitted to the filter or vocal tract and articulation is the\ntransformation of the excitation signal into speech.\nFigure 3.4 Human vs. Voice Coder Speech Production\n\n## Problem 3.1: Consider a DMS X having a symbol xj with corresponding\n\nprobabilities of occurrenceP(xj) = Pj where j = 1,2,...,m. Let nj be the\nlength of the code word assigned to symbol xj such\n\n## that Prove that this relationship satisfies the\n\nKraft inequality and find the bound on K in the expression of Kraft\ninequality.\n\nSolution:\nThe result indicates that the Kraft inequality is satisfied. The bound\non K is\n\nProblem 3.2: Show that a code constructed with code word length\nsatisfying the condition given inProblem 3.1 will satisfy the following\nrelation:\n\nH(X) L H(X) + 1\n\nwhere H(X) and L are the source entropy and the average code word\nlength, respectively.\n\n## Solution: From the previous problem, we have\n\nlog2Pj nj log2Pj + 1\n\n## Multiplying by Pj and summing over j yields\n\nProblem 3.3: Apply the ShannonFano coding procedure for a DMS\nwith the following source symbols and the given probabilities of\noccurrence. Calculate its efficiency.\n\nSolution:\n\n## Table 3.16 Construction of ShannonFano Code\n\nAnother ShannonFano code for the same source symbols:\n\n## The above two procedures reveal that sometimes ShannonFano method\n\nis ambiguous. The reason behind this ambiguity is the availability of\nmore than one equally valid schemes of partitioning the symbols.\n\n## Problem 3.4: Repeat Problem 3.3 for the Huffman code.\n\nSolution:\nTable 3.18 Construction of Huffman Code\n\n## 1. The coding efficiency is expressed as\n\n1. 1 + redundancy\n2. 1 redundancy\n3. 1/redundancy\n4. none of these\nAns. (b)\n\n0.\n\n1.\n2. = Lmin L\n3. none of these\nAns. (a)\n\n## 3. The efficiency of Huffman code is linearly proportional to\n\n0. average length of code\n1. average entropy\n2. maximum length of code\n3. none of these\nAns. (b)\n\n0.\n\n1.\n\n2.\n\n3.\nAns. (c)\n\n## 5. An example of dictionary based coding is\n\n0. ShannonFano coding\n1. Huffman coding\n2. arithmetic coding\n3. LZW coding\nAns. (d)\n\n## 6. The run-length code for the bit stream: 11111000011 is\n\n0. (101,1), (100,0), (010,1)\n1. (101,0), (100,0), (010,1)\n2. (101,1), (100,1), (010,1)\n3. (101,1), (100,0), (010,0)\nAns. (a)\n\n7. The signal-to-mask ratio (SMR), mask to noise ratio (MNR) and signal to\nnoise ratio (SNR) are related by the formula\n0. SMR (f) = SNR (f) MNR (f)\n1. SMR (f) = MNR (f) SNR (f)\n2. SMR (f) = SNR (f) + MNR (f)\n3. none of these\nAns. (a)\n\n## 8. Dolby Digital is based on\n\n0. multigenerational coding\n1. perceptual coding\n2. ShannonFano coding\n3. none of these\nAns. (b)\n\n## 9. The frequency range of telephone speech is\n\n0. 47 kHz\n1. less than 300 Hz\n2. greater than 20 kHz\n3. 3003400 Hz\nAns. (d)\n\n10. LPC is a\n\n0. waveform-following coder\n1. model-based coder\n2. lossless vocoder\n3. none of these\nAns. (b)\n\nREVIEW QUESTIONS\n\n1.\n1. Define the following terms:\n1. average code length\n2. code efficiency\n3. code redundancy.\n2. State source coding theorem.\n2. With suitable example explain the following codes:\n0. fixed-length code\n1. variable-length code\n2. distinct code\n3. uniquely decodable code\n4. prefix-free code\n5. instantaneous code\n6. optimal code.\n3. Write short notes on\n0. ShanonFano algorithm\n1. Huffman coding\n4.\n0. Write down the advantages of Huffman coding over ShannonFano\ncoding.\n1. A discrete memoryless source has seven symbols with\nprobabilities of occurrences 0.05, 0.15, 0.2, 0.05, 0.15,\n0.3 and 0.1. Construct the Huffman code and determine\n0. entropy\n1. average code length\n2. code efficiency.\n5. A discrete memoryless source has five symbols with probabilities of\noccurrences 0.4, 0.19, 0.16, 0.15 and 0.1. Construct both the ShannonFano\ncode and Huffman code and compare their code efficiency.\n6. With a suitable example explain arithmetic coding. What are the advantages\nof arithmetic coding scheme over Huffman coding?\n7. Encode and decode the following text message using LZW coding:" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8643862,"math_prob":0.91649646,"size":16190,"snap":"2020-34-2020-40","text_gpt3_token_len":3931,"char_repetition_ratio":0.11855925,"word_repetition_ratio":0.021137586,"special_character_ratio":0.23014206,"punctuation_ratio":0.13980521,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9770058,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-05T14:04:48Z\",\"WARC-Record-ID\":\"<urn:uuid:75eb1303-2c35-47f8-945d-3f4df6a3af7c>\",\"Content-Length\":\"421395\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6a140f3d-6e87-4a2b-824c-696b883a8e9d>\",\"WARC-Concurrent-To\":\"<urn:uuid:9952ff33-f0e8-455b-ac22-a76665bceaa7>\",\"WARC-IP-Address\":\"199.232.66.152\",\"WARC-Target-URI\":\"https://fr.scribd.com/document/358293038/Source-Codes\",\"WARC-Payload-Digest\":\"sha1:S5DYTNELGUHZRU35NXTGDX7WLNEVQES4\",\"WARC-Block-Digest\":\"sha1:5OT2HLXG4MNBUSXKQGK37SADGD5A7JFH\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439735958.84_warc_CC-MAIN-20200805124104-20200805154104-00263.warc.gz\"}"}
https://spaceforum.sk/en/space-data-analysis-course/astrometry-gaia/
[ "# Astrometry – Gaia\n\n## Introduction\n\nGaia relies on the proven principles of ESA’s Hipparcos mission to help solve one of the most difficult yet deeply fundamental challenges in modern astronomy: the creation of an extraordinarily precise three-dimensional map of about one billion stars throughout our Galaxy and beyond.\n\nGaia Mission – Gaia – Cosmos\n\nOn this exercise usage of the Gaia data will be demonstrated by two approaches. Both utilize TAP protocol to access the data, but the first approach will use dedicated software – TOPCAT, and the second approach will use python software library to access the data programmatically.\n\n## Setup of the environment\n\n• Setup of the python environment\n• The python and necessary libraries can be installed locally, or the examined jupyter notebooks can be run in cloud-based solutions such as Google colaboratory.\n• Python 3.X installation\n• Environment setup:\n• Virtual environments (venv, conda) are recommended for the local installation.\n• Virtual environment can be set-up by the following command:\n`python3 -m virtualenv -p python3 venv# activation. venv/bin/activate`\n• Required non-standard libraries are listed in requirements.txt file\n• With pip package installer, all required packages can be installed via:\n`pip install -r requirements.txt`\n• With conda package installer, all required packages can be installed via:\n`conda env create -f environment.yml`\n• To use files in Google Drive (such as requirements.txt), the Google Drive directories can be mounted in a notebook using google.colab.drive library (see the example notebook). Shell commands can be executed from a jupyter notebook by putting an exclamation point as the first letter of a line (example of shell commands in IPython).\n• TOPCAT installation\n\n## Summary of the activities\n\n##### Astroquery Gaia TAP example (Jupyter)\n• `Astroquery Gaia TAP example.ipynb `\n• The notebook illustrates synchronous TAP query using astroquery.gaia module, some possibilities to show data of astropy.Table, and finally visualization of the data using scatter plot without any modifications.\n• The rendering of the sky does not  adhere to the conventions used in astronomy. See the next notebook which examines this topic.\n##### Skymap coordinates plotting  (Jupyter)\n• `Skymap plots in matplotlib.ipynb`\n• This notebook demonstrates plotting of a subset of Gaia data in galactic coordinates using Matplotlib library. Although it is easy to use, Matplotlib is not ideal for plotting sky map coordinates, and additional fixes are required to plot the data in a more conventional way.\n• The data are loaded from a file created by the previous notebook (only a sample of 10000 random entries).\n##### Plotting Number of sources per subarea of the sky using ADQL (Jupyter)\n• `Number of sources per subarea of the sky using ADQL.ipynb`\n• Bin galactic longitude and galactic latitude into bins of size 5 degrees.\n• Use division by the bin size and round the result (for instance by FLOOR function)\n• Group subquery results by longitude and latitude bin.\n• Don’t forget to rescale-back the values after division.\n• Returned longitude and latitude values as bin center values.\n• Required columns are the following: source_id_count, l_bin_center, b_bin_center\n• Applied plotting approach, might be puzzling. It transforms the table data into a grid of pixels almost in one line. However, it  requires identical number longitude bins of rows of each latitude bin.\n• The notebook shows another functioning method for rendering mollweide projection-based histogram.\n• Task #2: Make a copy of this notebook and instead of the number of sources plot total luminosity at a particular location.\n##### Estimated distance of a source (Jupyter)\n• `Estimated distance of a source.ipynb`\n• The notebook illustrates usage of the external table in the Gaia archive, which contains precomputed distances to many sources in the gaiadr2.gaia_data table.\n• The table is a result of work done by C.A.L. Bailer-Jones et al. in paper titled “Estimating distances from parallaxes IV: Distances to 1.33 billion stars in Gaia Data Release 2” (see: https://arxiv.org/abs/1804.10121).\n\n#### Messier 4 in proper motion space (Jupyter)\n\n• `Cluster identification n1 Messier 4 in proper motion space.ipynb`\n• This notebook is python reproduction of a tutorial by Mark Taylor. It is the first tutorial in the document linked below. The tutorial focuses on identification of a star cluster in the proper motion space.\n• Task #1: Implementation of clustering (DBSCAN)\nAim of the task is to separate the clusters in the proper motion space. The recommended method here is to use clustering instead of selecting sources in a manually specified region.\n• Task #2: Select the “comoving” cluster out of the identified clusters\nAssign the “commoving” cluster’s label to the variable comoving_cluster_label\n• Task #3: Visualize proper monition using Matplotlib.Axes.arrow\n• Draw arrows marking the proper motion vectors.\n• Make the celestial coordinates start points of the arrows.\n• Scale proper motion values to fit into the image.\n• For drawing the arrows, you can use matplotlib.axes.Axes.arrow. Recommended options: width=0.00005, linewidth=0, head_width=0.0005\n• Compare the resulting image with the TOPCAT result.\n\n#### Hertzsprung-Russell diagram for a subset of sources (Jupyter)\n\n• `Local Herzsprung-Russell Diagram.ipynb`\n• This notebook is python reproduction of a tutorial by Mark Taylor. It is the fourth tutorial in the document linked below. The tutorial focuses on plotting Local Herzsprung-Russell Diagram from Gaia data.\n• Task #1: Calculate mean values of astrometric_excess_noise per each bin.\nOne possibility is to use scipy.stats.binned_statistic_2d. In the case of this function, it might be necessary to resolve NaN values. This can be done by np.nan_to_num.\n• Task #2: Apply a cut on the astrometric excess noise and create a filtered subset.\n• Apply a cut to select only sources with astrometric_excess_noise under 1\n• Calculate 2D binned statistic (for instance using scipy.stats.binned_statistic_2d\n• Task #3: Apply a cut based on the predefined polygon in the space defined by BP-RP colour and BP/RP excess factor.\nCreate a mask of sources which are contained inside the predefined polygon.\nPossible inspiration:\n• Task #4: Apply both photometric and astrometric-cut masks on the original data from the archive.\n• Create a subset of data satisfying both photometric and astrometric cuts (OK subset). Create a combined mask of the photometric and astrometric mask and use it to filter the data. Save a reference to the filtered dataset into variable gaia_data_ok.\n• Create a 2D array of the binned statistic for the OK subset. You can copy and modify code from the task #2. Save a reference to the filtered dataset into variable bin_means_ok.\n\n#### Hertzsprung-Russell diagram in single ADQL (Jupyter)\n\n• `Herzsprung-Russell Diagram from ADQL Mora et al.ipynb `\n• The notebook shows an example of creation of the similar histogram as the previous example, but here summation and more detailed filtration is done in the ADQL query, which also includes cross-match with an external catalogue available through the Gaia archive.\n\nADQL query is taken from a presentation by A. Mora et al. at IAU Symposium 330. Nice, France. The slide are available here: https://iaus330.sciencesconf.org/data/pages/Mora.pdf" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.75022835,"math_prob":0.54697216,"size":9609,"snap":"2022-27-2022-33","text_gpt3_token_len":2180,"char_repetition_ratio":0.10515357,"word_repetition_ratio":0.07805907,"special_character_ratio":0.20668124,"punctuation_ratio":0.1233993,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9596586,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-03T05:41:41Z\",\"WARC-Record-ID\":\"<urn:uuid:0fb90c24-f42e-48c1-a885-d56e9ac83d68>\",\"Content-Length\":\"86627\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:daa6631a-6e95-459a-8cbb-aaca99480635>\",\"WARC-Concurrent-To\":\"<urn:uuid:19d499c4-6538-4f53-8d1d-1eea2c369859>\",\"WARC-IP-Address\":\"147.232.41.66\",\"WARC-Target-URI\":\"https://spaceforum.sk/en/space-data-analysis-course/astrometry-gaia/\",\"WARC-Payload-Digest\":\"sha1:TOZR65TNJEK26ITLOLZLEYURKIVUKDYR\",\"WARC-Block-Digest\":\"sha1:NETK7PKEYRVQ7SAA3F7IJSBXHRNZYOZA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104215790.65_warc_CC-MAIN-20220703043548-20220703073548-00673.warc.gz\"}"}
https://www.encyclopedia.com/science-and-technology/mathematics/mathematics/parallelogram
[ "# Parallelogram\n\nviews updated Jun 27 2018\n\n# Parallelogram\n\nA parallelogram, in plane geometry, is a plane figure of four sides whose opposite sides are parallel to each other and where each side is equal in length to its opposite side. This four-sided figure is called a polygon (a two-dimensional [2D] figure formed of three or more straight sides) or, more specifically, a quadrilateral (a 2D figure formed of four straight sides).\n\nIn all types of parallelograms, the diagonals cut each other in half (or, bisect each other) where they intersect. (A diagonal is a line that joins two opposite corners of a geometric figure, such as a parallelogram.)\n\nThree special types of parallelograms are called the rhombus, the rectangle, and the square.\n\nA rhombus is a parallelogram with all four sides of equal length but the sides to not necessarily meet at right angles to one another.\n\nA rectangle is a parallelogram whose adjacent sides are perpendicular (meet at right angles) to one another and opposite sides are equal in length.\n\nA square is a parallelogram whose adjacent sides are both perpendicular (angled at 90°) and equal in length. That is, all four sides are equal in length and the sides that contain a common point are at right angles to one another.\n\nThe area (A) of a parallelogram is equal to the distance (length) of one side, called its base b, times the shortest distance (length) to the opposite side, called its altitude or height h. It can be notated as A= bh. Since all four sides (s) of a square are equal, then b= h= s, and its equation can be written A= 2s. The circumference (C) of a parallelogram (or the distance around it) is denoted: C= 2b+ 2H. In the case of a square (where all four sides are equal in length), the equation becomes: C= 4s.\n\n# Parallelogram\n\nviews updated May 11 2018\n\n# Parallelogram\n\nA parallelogram is a plane figure of four sides whose opposite sides are parallel . A rhombus is a parallelogram with all four sides of equal length; a rectangle is a parallelogram whose adjacent sides are perpendicular ; and a square is a parallelogram whose adjacent sides are both perpendicular and equal in length.\n\nThe area of a parallelogram is equal to the length of its base times the length of its altitude.\n\n# parallelogram\n\nviews updated Jun 11 2018\n\npar·al·lel·o·gram / ˌparəˈleləˌgram/ • n. a four-sided plane rectilinear figure with opposite sides parallel.\n\nparallelograms\n\n# parallelogram\n\nviews updated May 18 2018\n\nparallelogram Quadrilateral (four-sided plane figure) having each pair of opposite sides parallel and equal. Both the opposite angles of a parallelogram are also equal. Its area is the product of one side and its perpendicular distance from the opposite side. A parallelogram with all four sides equal is a rhombus." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91169316,"math_prob":0.99033964,"size":2303,"snap":"2022-27-2022-33","text_gpt3_token_len":561,"char_repetition_ratio":0.2044367,"word_repetition_ratio":0.16879795,"special_character_ratio":0.21276596,"punctuation_ratio":0.08924485,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.998751,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-29T14:05:44Z\",\"WARC-Record-ID\":\"<urn:uuid:6a527c92-fb34-42f6-92aa-aca551851f01>\",\"Content-Length\":\"75371\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e77b9856-5f16-48f4-9e60-5124eddf355e>\",\"WARC-Concurrent-To\":\"<urn:uuid:cac5814d-068d-4e55-9114-e0379856323d>\",\"WARC-IP-Address\":\"172.67.74.5\",\"WARC-Target-URI\":\"https://www.encyclopedia.com/science-and-technology/mathematics/mathematics/parallelogram\",\"WARC-Payload-Digest\":\"sha1:XY36EHRMTP42SM27AEHUSRR65WGDRMRQ\",\"WARC-Block-Digest\":\"sha1:ASR4P3SPQUZHGKA446FPEFSULBJBP7GP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103639050.36_warc_CC-MAIN-20220629115352-20220629145352-00766.warc.gz\"}"}
https://stats.stackexchange.com/questions/21926/latent-class-model
[ "Latent class model [closed]\n\nI have asked following questions before\n\nMore than one outcome (dependent) variables in ordinal logistic regression\n\nHow to handle more than one dependent variable (categorical) in logistic regression?\n\nUsing Rasch model to explain relationships between a set of dependent and independent variables\n\nUse of further analysis on factors formed by principal component analysis in regression\n\nanswers of the above questions help me to understand what could be done with my data. My data consists of 2 independent variable (income and years- continuous data) with control variables (dichotomous or categorical data) and I have a dependent variable with 6 proxies e.g for dependent variable Business development (latent variable), six questions were asked in 5-point likert scale (categorical-ordinal data) increase in profit, sales, size, marketing, asset and labour (manifest variables).\n\nI want to find out the relationship between dependent and independent variable. I want to use measurement model to find out this. I came to know from http://en.wikipedia.org/wiki/Latent_variable_model\n\nthat I could use latent variable model. I am very much sure that my manifest variables are categorical and I suppose my latent variable is also categorical (need to know), therefore Latent Class Analysis seems to be the best choice for my data. My question is that:\n\n• Am I right that the Latent Class Analysis is best for my data or have to think of any other thing?\n• Could you give an example when latent variable is continuous and manifest variable is categorical? these are the situations when there is need to use latent trait analysis. I just ask this question to clarify my latent variable as continuous or categorical.\n• I know that for latent class analysis LatentGold software and Mplus are best but expensive to buy, Do you think Lem-a free software could serve my purpose, as I want to do factor and regression in Latent class analysis? or what to you think of SATA for it?\n• It is possible in latent class analysis to do cluster method, factor analysis and regression for categorical data. Is it possible to do more than that in it like structural equation modelling and path analysis for categorical data etc?\n• Latent class analysis is mostly used in medicine and psychology and less frequently in development or applied economics. Could I able to justify the use of latent class analysis for research related to business (mix of management and development economics area)." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92374045,"math_prob":0.63079613,"size":2503,"snap":"2019-43-2019-47","text_gpt3_token_len":486,"char_repetition_ratio":0.1664666,"word_repetition_ratio":0.0,"special_character_ratio":0.18857372,"punctuation_ratio":0.07256236,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.971949,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-23T03:54:37Z\",\"WARC-Record-ID\":\"<urn:uuid:baa4875e-e6c0-4a84-9317-e14e94ab9ff3>\",\"Content-Length\":\"125347\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:38ed62e6-f5b8-44e6-81f4-37e07ee1ecdd>\",\"WARC-Concurrent-To\":\"<urn:uuid:580a3b93-26d0-42ae-8d2d-bef165d51fc6>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://stats.stackexchange.com/questions/21926/latent-class-model\",\"WARC-Payload-Digest\":\"sha1:CHSSAVZKEDC26KXHQTJBSZKE2AMOEHL2\",\"WARC-Block-Digest\":\"sha1:7UBYE6DP4LMLKQFO3HXWBRAQN6ZQCHZL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987828425.99_warc_CC-MAIN-20191023015841-20191023043341-00278.warc.gz\"}"}
https://deepai.org/publication/unsupervised-learning-of-full-waveform-inversion-connecting-cnn-and-partial-differential-equation-in-a-loop
[ "DeepAI\n\n# Unsupervised Learning of Full-Waveform Inversion: Connecting CNN and Partial Differential Equation in a Loop\n\nThis paper investigates unsupervised learning of Full-Waveform Inversion (FWI), which has been widely used in geophysics to estimate subsurface velocity maps from seismic data. This problem is mathematically formulated by a second order partial differential equation (PDE), but is hard to solve. Moreover, acquiring velocity map is extremely expensive, making it impractical to scale up a supervised approach to train the mapping from seismic data to velocity maps with convolutional neural networks (CNN). We address these difficulties by integrating PDE and CNN in a loop, thus shifting the paradigm to unsupervised learning that only requires seismic data. In particular, we use finite difference to approximate the forward modeling of PDE as a differentiable operator (from velocity map to seismic data) and model its inversion by CNN (from seismic data to velocity map). Hence, we transform the supervised inversion task into an unsupervised seismic data reconstruction task. We also introduce a new large-scale dataset OpenFWI, to establish a more challenging benchmark for the community. Experiment results show that our model (using seismic data alone) yields comparable accuracy to the supervised counterpart (using both seismic data and velocity map). Furthermore, it outperforms the supervised model when involving more seismic data.\n\n01/23/2022\n\n### Application of an RBF-FD solver for the Helmholtz equation to full-waveform inversion\n\nFull waveform inversion (FWI) is one of a family of methods that allows ...\n06/10/2018\n\n### Stochastic seismic waveform inversion using generative adversarial networks as a geological prior\n\nWe present an application of deep generative models in the context of pa...\n04/28/2022\n\n### An Intriguing Property of Geophysics Inversion\n\nInversion techniques are widely used to reconstruct subsurface physical ...\n02/03/2022\n\n### Exploring Multi-physics with Extremely Weak Supervision\n\nMulti-physical inversion plays a critical role in geophysics. It has bee...\n08/02/2022\n\n### Velocity estimation via model order reduction\n\nA novel approach to full waveform inversion (FWI), based on a data drive...\n06/08/2020\n\n### Schrödinger PCA: You Only Need Variances for Eigenmodes\n\nPrincipal component analysis (PCA) has achieved great success in unsuper...\n03/25/2020\n\n### EikoNet: Solving the Eikonal equation with Deep Neural Networks\n\nThe recent deep learning revolution has created an enormous opportunity ...\n\n## 1 Introduction\n\nGeophysical properties (such as velocity, impedance, and density) play an important role in various subsurface applications including subsurface energy exploration, carbon capture and sequestration, estimating pathways of subsurface contaminant transport, and earthquake early warning systems to provide critical alerts. These properties can be obtained via seismic surveys, i.e., receiving reflected/refracted seismic waves generated by a controlled source. This paper focuses on reconstructing subsurface velocity maps from seismic measurements. Mathematically, the velocity map and seismic measurements are correlated through an acoustic-wave equation (a second-order partial differential equation) as follows:\n\n ∇2p(r,t)−1v(r)2∂2p(r,t)∂t2=s(r,t), (1)\n\nwhere denotes the pressure wavefield at spatial location and time , represents the velocity map of wave propagation, and is the source term. Full Waveform Inversion (FWI) is a methodology that determines high-resolution velocity maps of the subsurface via matching synthetic seismic waveforms to raw recorded seismic data , where represents the locations of the seismic receivers.\n\nA velocity map describes the wave propagation speed in the subsurface region of interest. An example in 2D scenario is shown in Figure 1(a). Particularly, the x-axis represents the horizontal offset of a region, and the y-axis stands for the depth. The regions with the same geologic information (velocity) are called a layer in velocity maps. In a sample of seismic measurements (termed a shot gather in geophysics) as depicted in Figure 1(b), each grid in the x-axis represents a receiver, and the value in the y-axis is a 1D time-series signal recorded by each receiver.\n\nExisting approaches solve FWI in two directions: physics-driven and data-driven. Physics-driven approaches rely on the forward modeling of Equation 1\n\n, which simulates seismic data from velocity map by finite difference. They optimize velocity map per seismic sample, by iteratively updating velocity map from an initial guess such that simulated seismic data (after forward modeling) is close to the input seismic measurements. However, these methods are slow and difficult to scale up as the iterative optimization is required per input sample. Data-driven approaches consider FWI problem as an image-to-image translation task and apply convolution neural networks (CNN) to learn the mapping from seismic data to velocity maps\n\n(Wu and Lin, 2019). The limitation of these methods is that they require paired seismic data and velocity maps to train the network. Such ground truth velocity maps are hardly accessible in real-world scenario because generating them is extremely time-consuming even for domain experts.\n\nIn this work, we leverage advantages of both directions (physics + data driven) and shift the paradigm to unsupervised learning of FWI by connecting forward modeling and CNN in a loop. Specifically, as shown in Figure 1, a CNN is trained to predict a velocity map from seismic data, which is followed by forward modeling to reconstruct seismic data. The loop is closed by applying reconstruction loss on seismic data to train the CNN. Due to the differentiable forward modeling, the whole loop can be trained end-to-end. Note that the CNN is trained in an unsupervised manner, as the ground truth of velocity map is not needed. We name our unsupervised approach as UPFWI (Unsupervised Physical-informed Full Waveform Inversion).\n\nAdditionally, we find that perceptual loss (Johnson et al., 2016) is crucial to improve the overall quality of predicted velocity maps due to its superior capability in preserving the coherence of the reconstructed waveforms comparing with other losses like Mean Squared Error (MSE) and Mean Absolute Error (MAE).\n\nTo encourage fair comparison on a large dataset with more complicate geological structures, we introduce a new dataset named OpenFWI, which contains 60,000 labeled data (velocity map and seismic data pairs) and 48,000 unlabeled data (seismic data alone). 30,000 of those velocity maps contain curved layers that are more challenge for inversion. We also add geological faults with various shift distances and tilting angles to all velocity maps.\n\nWe evaluate our method on this large dataset. Experimental results show that for velocity maps with flat layers, our UPFWI trained with 48,000 unlabeled data achieves 1146.09 in MSE, which is 26.77% smaller than that of the supervised method, and 0.9895 in Structured Similarity (SSIM), which is 0.0021 higher than the score of the supervised method; for velocity maps with curved layers, our UPFWI achieves 3639.96 in MSE, which is 28.30% smaller than that of supervised method, and 0.9756 in SSIM, which is 0.0057 higher than the score of the supervised method.\n\nOur contribution is summarized as follows:\n\n• We propose to solve FWI in an unsupervised manner by connecting CNN and forward modeling in a loop, enabling end-to-end learning from seismic data alone.\n\n• We find that perceptual loss is helpful to boost the performance comparable to the supervised counterpart.\n\n• We introduce a large-scale dataset as benchmark to encourage further research on FWI.", null, "Figure 3: Unsupervised UPFWI (ours) vs. Supervised H-PGNN+ (Sun et al., 2021). Our method achieves better performance, e.g. lower Mean Squared Error (MSE) and higher Structural Similarity (SSIM), when involving more unlabeled data (>24k).\n\n## 2 Preliminaries of Full Waveform Inversion (FWI)\n\nThe goal of FWI in geophysics is to invert for a velocity map from seismic measurements , where and denote the horizontal and vertical dimensions of the velocity map, is the number of sources that are used to generate waves during data acquisition process, denotes the number of samples in the wavefields recorded by each receiver, and represents the total number of receivers.\n\nIn conventional physics-driven methods, forward modeling is commonly referred to the process of simulating seismic data from given estimated velocity maps . For simplicity, the forward acoustic-wave operator can be expressed as\n\n ~p=f(^v). (2)\n\nGiven this forward operator , the physics-driven FWI can be posed as a minimization problem (Virieux and Operto, 2009)\n\n E(^v)=min^v{||p−f(^v)||22+λR(^v)}, (3)\n\nwhere is the the distance between true seismic measurements and the corresponding simulated data , is a regularization parameter and is the regularization term which is often the or norm of . This requires optimization per sample, which is slow as the optimization involves multiple iterations from an initial guess.\n\nData-driven methods leverage convolutional neural networks to directly learn the inverse mapping as (Adler et al., 2021)\n\n ^v=gθ(p)≈f−1(p), (4)\n\nwhere is the approximated inverse operator of parameterized by . In practice, is usually implemented as a convolutional neural network (Adler et al., 2021; Wu and Lin, 2019)\n\n. This requires paired seismic data and velocity maps for supervised learning. However, the acquisition of large volume of velocity maps in field applications can be extremely challenging and computationally prohibitive.\n\n## 3 Method\n\nIn this section, we present our Unsupervised Physics-informed solution (named UPFWI), which connects CNN and forward modeling in a loop. It addresses limitations of both physics-driven and data-driven approaches, as it requires neither optimization at inference (per sample), nor velocity maps as supervision.\n\n### 3.1 UPFWI: Connecting CNN and Forward Modeling\n\nAs depicted in Figure 1, our UPFWI connects a CNN and a differentiable forward operator to form a loop. In particular, the CNN takes seismic measurements as input and generates the corresponding velocity map . We then apply forward acoustic-wave operator  (see Equation 2) on the estimated velocity map to reconstruct the seismic data . Typically, the forward modeling employs finite difference (FD) to discretize the wave equation (Equation 1). The details of forward modeling will be discussed in the subsection 3.3. The loop is closed by the reconstruction loss between input seismic data and the reconstructed seismic data . Notice that the ground truth of velocity maps is not involved, and the training process is unsupervised\n\n. Since the forward operator is differentiable, the reconstruction loss can be backpropagated (via gradient descent) to update the parameters\n\nin the CNN.\n\n### 3.2 CNN Network Architecture\n\nWe use an encoder-decoder structured CNN (similar to Wu and Lin (2019) and Zhang and Lin (2020)) to model the mapping from seismic data to velocity map\n\n. The encoder compresses the seismic input and then transforms the latent vector to build the velocity estimation through a decoder. Since the number of receivers\n\nand the number of timesteps in seismic measurements are unbalanced (), we first stack a 71 and six 3\n\n1 convolutional layers (with stride 2 every the other layer to reduce dimension) to extract temporal features until the temporal dimension is close to\n\n. Then, six 33 convolutional layers are followed to extract spatial-temporal features. The resolution is down-sampled every the other layer by using stride 2. Next, the feature map is flattened and a fully connected layer is applied to generate the latent feature with dimension 512. The decoder first repeats the latent vector by 25 times to generate a 55\n\n512 tensor. Then it is followed by five 3\n\n3 convolutional layers with nearest neighbor upsampling in between, resulting in a feature map with size 808032. Finally, we center-crop the feature map (7070) and apply a 3\n\n3 convolution layer to output a single channel velocity map. All the aforementioned convolutional and upsampling layers are followed by a batch normalization\n\n(Ioffe and Szegedy, 2015)\n\nand a leaky ReLU\n\n(Nair and Hinton, 2010)\n\n### 3.3 Differentiable Forward Modeling\n\nWe apply the standard finite difference (FD) in the space domain and time domain to discretize the original wave equation. Specifically, the second-order central finite difference in time domain ( in Equation 1) is approximated as follows:\n\n ∂2p(r,t)∂t2≈1(Δt)2(pt+1r−2ptr+pt−1r)+O[(Δt)2], (5)\n\nwhere denotes the pressure wavefields at timestep , and and are the wavefields at and , respectively. The Laplacian of can be estimated in the similar way on the space domain (see Appendix). Therefore, the wave equation can then be written as\n\n pt+1r=(2−v2∇2)ptr−pt−1r+v2(Δt)2str, (6)\n\nwhere here denotes the discrete Laplace operator.\n\nThe initial wavefield at the timestep 0 is set zero (i.e. ). Thus, the gradient of loss with respect to estimated velocity at spatial location\n\ncan be computed using the chain rule as\n\n ∂L∂v(r)=T∑t=0[∂L∂p(r,t)]∂p(r,t)∂v(r), (7)\n\nwhere indicates the length of the sequence.\n\n### 3.4 Loss Function\n\nThe reconstruction loss of our UPFWI includes a pixel-wise loss and a perceptual loss as follows:\n\n L(p,~p)=Lpixel(p,~p)+Lperceptual(p,~p), (8)\n\nwhere and are input and reconstructed seismic data, respectively. The pixel-wise loss combines and distance as:\n\n Lpixel(p,~p)=λ1ℓ1(p,~p)+λ2ℓ2(p,~p), (9)\n\nwhere and are two hyper-parameters to control the relative importance. For the perceptual loss , we extract features from conv5 in a VGG-16 network  (Simonyan and Zisserman, 2015)\n\npretrained on ImageNet\n\n(Krizhevsky et al., 2012) and combine the and distance as:\n\n Lperceptual(p,~p)=λ3ℓ1(ϕ(p),ϕ(~p))+λ4ℓ2(ϕ(p),ϕ(~p)), (10)\n\nwhere represents the output of conv5 in the VGG-16 network, and and are two hyper-parameters. Compared to the pixel-wise loss, the perceptual loss is better to capture the region-wise structure, which reflects the waveform coherence. This is crucial to boost the overall accuracy of velocity map (e.g. the quantitative velocity values and the structural information).\n\n## 4 OpenFWI Dataset\n\nWe introduce a new large-scale geophysics FWI dataset OpenFWI, which consists of 108K seismic data for two types of velocity maps: one with flat layers (named FlatFault) and the other one with curved layers (named CurvedFault). Each type has 54K seismic data, including 30K with paired velocity maps (labeled) and 24K unlabeled. The 30K labeled pairs of seismic data and velocity maps are splitted as 24K/3K/3K for training, validation and testing respectively. Samples are shown in Appendix.\n\nThe shape of curves in our dataset follows a sine function. Velocity maps in CurvedFault are designed to validate the effectiveness of FWI methods on curved topography. Compared to the maps with flat layers, curved velocity maps yield much more irregular geological structures, making inversion more challenging. Both FlatFault and CurvedFault contain 30,000 samples with 2 to 4 layers and their corresponding seismic data. Each velocity map has dimensions of 70\n\n70, and the grid size is 15 meter in both directions. The layer thickness ranges from 15 grids to 35 grids, and the velocity in each layer is randomly sampled from a uniform distribution between 3,000 meter/second and 6,000 meter/second. The velocity is designed to increase with depth to be more physically realistic. We also add geological faults to every velocity map. The faults shift from 10 grids to 20 grids, and the tilting angle ranges from -123\n\nto 123.\n\nTo synthesize the seismic data, five sources are evenly distributed on the surface with a spacing of 255 meter, and the seismic traces are recorded by 70 receivers positioned at each grid with an interval of 15 meter. The source is a Ricker wavelet with a central frequency of 25 Hz Wang (2015). Each receiver records time-series data for 1 second, and we use a 1 millisecond sample rate to generate 1,000 timesteps. Therefore, the dimensions of seismic data become 5100070. Compared to the existing datasets (Yang and Ma, 2019; Moseley et al., 2020), OpenFWI is significantly larger. It includes more complicated and physically realistic velocity maps. We hope it establishes a more challenging benchmark for the community.\n\n## 5 Experiments\n\nIn this section, we present experimental results of our proposed UPFWI evaluated on the OpenFWI dataset. We also discuss different factors that affect the performance of our method.\n\n### 5.1 Implementation Details\n\nTraining Details: The input seismic data are normalized to range [-1, 1]. We employ AdamW (Loshchilov and Hutter, 2018) optimizer with momentum parameters , and a weight decay of to update all parameters of the network. The initial learning rate is set to be , and we reduce the learning rate by a factor of 10 when validation loss reaches a plateau. The minimum learning rate is set to be . The size of mini-batch is set to be 16. All trade-off hyper-parameters\n\nin our loss function are set to be 1. We implement our models in Pytorch and train them on 8 NVIDIA Tesla V100 GPUs. All models are randomly initialized.\n\nWe consider three metrics for evaluating the velocity maps inverted by our method: MAE, MSE and Structural Similarity (SSIM). Both MAE and MSE have been employed in the existing methods (Wu and Lin, 2019; Zhang and Lin, 2020) to measure the pixel-wise error. Considering that the layered-structured velocity maps contain highly structured information, degradation or distortion in velocity maps can be easily perceived by a human. To better align with human vision, we employ SSIM to measure the perceptual similarity. It is important to note that for MAE and MSE calculation, we denormalize velocity maps to their original scale while we keep them in normalized scale [-1, 1] for SSIM according to the algorithm.\n\nComparison: We compare our method with three state-of-the-art algorithms: two pure data-driven methods, i.e., InversionNet (Wu and Lin, 2019) and VelocityGAN (Zhang and Lin, 2020), and a physics-informed method H-PGNN (Sun et al., 2021). We follow the implementation described in these papers and search for the best hyper-parameters for OpenFWI dataset. Note that we improve H-PGNN by replacing the network architecture with the CNN in our UPFWI and adding perceptual loss, resulting in a significant boosted performance. We refer our implementation as H-PGNN+, which is a strong supervised baseline. Our method has two variants (UPFWI-24K and UPFWI-48K), using 24K and 48K unlabeled seismic data respectively.\n\n### 5.2 Main Results\n\nResults on FlatFault: Table 1 shows the results of different methods on FlatFault. Compared to data-driven InversionNet and VelocityGAN, our UPFWI-24K performs better in MSE and SSIM, but is slightly worse in MAE score. Compared to the physics-informed H-PGNN+, there is a gap between our UPFWI-24K and H-PGNN+ when trained with the same amount of data. However, after we double the size of unlabeled data (from 24K to 48K), a significant improvement is observed in our UPFWI-48K for all three metrics, and it outperforms all three supervised baselines in MSE and SSIM. This demonstrates the potential of our UPFWI for achieving higher performance with more unlabeled data involved.\n\nThe velocity maps inverted by different methods are shown in Figure 4. Consistent with our quantitative analysis, more accurate details are observed in the velocity maps generated by UPFWI-48K. For instance, in the first row of Figure 4, although all models somehow reveal the small geophysical fault near to the right boundary of the velocity map, only UPFWI-48K reconstructs a clear interface between layers as highlighted by the yellow square. In the second row, we find both InversionNet and VelocityGAN generate blurry results in deep region, while H-PGNN+, UPFWI-24K and UPFWI-48K yield much clearer boundaries. We attribute this finding as the impact of seismic loss. We further observe that the slope of the fault in deep region is different from that in the shallow region, yet only UPFWI-48K replicates this result as highlighted by the green square.\n\nResults on CurvedFault Table 1 shows the results of CurvedFault. Performance degradation is observed for all models, due to the more complicated geological structures in CurvedFault. Although our UPFWI-24K underperforms the three supervised baselines, our UPFWI-48K significantly boosts the performance, outperforming all supervised methods in terms of all three metrics. This demonstrates the power of unsupervised learning in our UPFWI that greatly benefits from more unlabeled data when dealing with more complicated curved structure.\n\nFigure 5 shows the visualized velocity maps in CurvedFault obtained using different methods. Similar to the observation in FlatFault, our UPFWI-48K yields more accurate details compared to the results of supervised methods. For instance, in the first row, only our UPFWI-24K and UPFWI-48K precisely reconstruct the fault beneath the curve around the top-left corner as highlighted by the yellow square. Although some artifacts are observed in the results of UPFWI-24K around the layer boundary in deep region, they are eliminated in the results of UPFWI-48K. As for the example in the second row, it is obvious that the shape of geological anomalies in the shallow region is best reconstructed by our UPFWI-24K and UPFWI-48K as highlighted by the red square. More visualization results are shown in the Appendix.", null, "Figure 4: Comparison of different methods on inverted velocity maps of FlatFault. Our UPFWI-48K reveals more accurate details at layer boundaries and the slope of the fault in deep region.", null, "Figure 5: Comparison of different methods on inverted velocity maps of CurvedFault. Our UPFWI reconstructs the geological anomalies on the surface that best match the ground truth.\n\n### 5.3 Ablation Study\n\nBelow we study the contribution of different loss functions: (a) pixel-wise distance (MSE), (b) pixel-wise distance (MAE), and (c) perceptual loss. All experiments are conducted on FlatFault using 24,000 unlabeled data.\n\nFigure 5(a) shows the predicted velocity maps for using three loss combinations (pixel-, pixel-, pixel-+perceptual) in UPFWI. The ground truth seismic data and velocity map are shown in the left column. For each loss option, we show the difference between the reconstructed and the input seismic data (on the top) and predicted velocity (on the bottom). When using pixel-wise loss in distance alone, there are some obvious artifacts in both seismic data (around 600 millisecond) and velocity map. These artifacts are mitigated by introducing additional pixel-wise loss in distance. With perceptual loss added, more details are correctly retained (e.g. seismic data from 400 millisecond to 600 millisecond, velocity boundary between layers). Figure 5(b) compares the reconstructed seismic data (in terms of residual to the ground truth) at a slice of 525 meter offset (orange dash line in Figure 5(a)). Clearly, the combination of pixel-wise and perceptual loss has the smallest residual.\n\nThe quantitative results are shown in Table 2. They are consistent with the observation in qualitative analysis (Figure 5(a)). In particular, using pixel-wise loss in distance has the worst performance. The involvement of distance mitigates all velocity errors but is slightly worse on MSE and SSIM of seismic error. Adding perceptual loss further boosts the performance in all performance metrics by a clear margin. This shows that perceptual loss is helpful to retain waveform coherence, which is correlated to the velocity boundary, and validates our proposed loss function (combining pixel-wise and perceptual loss).\n\n## 6 Discussion\n\nOur UPFWI has two major limitations. Firstly, it needs further improvement on a small number of challenging velocity maps where adjacent layers have very close velocity values. We find that the lack of supervision is not the cause as our UPFWI can yield comparable or even better results compared to its supervised counterparts. Another limitation is the speed and memory consumption for forward modeling, as the gradient of finite difference (see Equation 6) need to be stored for backpropagation. We will explore different loss functions (e.g. adversarial loss) and the methods that can balance the requirement of computation resources and the accuracy in the future work. We believe the idea of connecting CNN and PDE to solve full waveform inversion has potential to be applied to other inverse problems with a governing PDE such as medical imaging and flow estimation.\n\n## 7 Related Work\n\nPhysics-driven Methods There are two primary physics-driven methods, depending on the complexity of the forward model. The simpler one is via travel time inversion (Tarantola, 2005), which has a linear forward operator, but provides results of inferior accuracy (Lin et al., 2015). FWI techniques (Virieux and Operto, 2009), being the other one, provide superior solutions by modeling the wave propagation, but the forward operator is non-linear and computationally expensive. Furthermore the problem is ill-posed (Virieux and Operto, 2009), making a prior model of the solution space essential. Since regularized FWI solved via iterative techniques need to apply the forward model many times, these solutions are very computationally expensive. In addition, existing regularized FWI methods employ relatively simplistic models of the solution space (Hu et al., 2009; Burstedde and Ghattas, 2009; Ramírez and Lewis, 2010; Lin and Huang, 2017, 2015b, 2015a; Guitton, 2012; Treister and Haber, 2016), leaving considerable room for improvement in the accuracy of the solutions. Another common approach to alleviate the ill-posedness and non-linearity of FWI is via multi-scale techniques (Bunks et al., 1995; Boonyasiriwat et al., 2009; Feng and Schuster, 2019). Rather than matching the seismic data all at once, the multi-scale techniques decompose the data into different frequency bands so that the low-frequency components will be updated first and then followed with higher frequency components.\n\nData-driven Methods\n\nRecently, a new type of methods has been developed based on deep learning.\n\nAraya-Polo et al. (2018) proposed a model based on a fully connected network. Wu and Lin (2019) further converted FWI into an image-to-image translation task with an encoder-decoder structure that can handle more complex velocity maps. Zhang and Lin (2020)\n\nadopted GAN and transfer learning to improve the generalization.\n\nLi et al. (2020) designed SeisInvNet to solve the misaligned issue when dealing sources from different locations. In Yang and Ma (2019), a U-Net architecture was proposed with skip connections. Feng et al. (2021) proposed a data-driven multi-scale framework by considering different frequency components. Rojas-Gómez et al. (2020) developed an adaptive data augmentation method to improve the generalization. Ren et al. (2020) combined the data-driven and physics-based methods and proposed H-PGNN model. Some similar ideas were developed on different physical forward models. Wang et al. (2020) proposed a model by connecting two CNNs approximating the forward model and inversion process, and their model was tested on well-logging data. Alfarraj and AlRegib (2019) utilized the forward model to constrain the training of convolutional and recurrent neural layers to invert well-logging seismic data for elastic impedance. All of those aforementioned works were developed based on supervised learning. Biswas et al. (2019) designed an unsupervised CNN to estimate subsurface reflectivity using pre-stack seismic angle gather. Comparing to FWI, their problem is simpler because of the approximated and linearized forward model.\n\n## 8 Conclusion\n\nIn this study, we introduce an unsupervised method named UPFWI to solve FWI by connecting CNN and forward modeling in a loop. Our method can learn the inverse mapping from seismic data alone in an end-to-end manner. We demonstrate through a series of experiments that our UPFWI trained with sufficient amount of unlabeled data outperforms the supervised counterpart on our dataset to be released. The ablation study further substantiates that perceptual loss is a critical component in our loss function and has a great contribution to the performance of our UPFWI.\n\n## References\n\n• A. Adler, M. Araya-Polo, and T. Poggio (2021) Deep learning for seismic inverse problems: toward the acceleration of geophysical analysis workflows. IEEE Signal Processing Magazine 38 (2), pp. 89–119. Cited by: §2.\n• M. Alfarraj and G. AlRegib (2019) Semisupervised sequence modeling for elastic impedance inversion. Interpretation 7 (3), pp. SE237–SE249. Cited by: §7.\n• M. Araya-Polo, J. Jennings, A. Adler, and T. Dahlke (2018) Deep-learning tomography. The Leading Edge 37 (1), pp. 58–66. Cited by: §7.\n• R. Biswas, M. K. Sen, V. Das, and T. Mukerji (2019) Prestack and poststack inversion using a physics-guided convolutional neural network. Interpretation 7 (3), pp. SE161–SE174. Cited by: §7.\n• C. Boonyasiriwat, P. Valasek, P. Routh, W. Cao, G. T. Schuster, and B. Macy (2009) An efficient multiscale method for time-domain waveform tomography. Geophysics 74 (6), pp. WCC59–WCC68. Cited by: §7.\n• C. Bunks, F. Saleck, S. Zaleski, and G. Chavent (1995) Multiscale seismic waveform inversion. Geophysics 60 (5), pp. 1457–1473. Cited by: §7.\n• C. Burstedde and O. Ghattas (2009) Algorithmic strategies for full waveform inversion: 1D experiments. Geophysics 74 (6), pp. 37–46. Cited by: §7.\n• F. Collino and C. Tsogka (2001) Application of the perfectly matched absorbing layer model to the linear elastodynamic problem in anisotropic heterogeneous media. Geophysics 66 (1), pp. 294–307. Cited by: §A.1.\n• S. Feng, Y. Lin, and B. Wohlberg (2021) Multiscale data-driven seismic full-waveform inversion with field data study. IEEE Transactions on Geoscience and Remote Sensing (), pp. 1–14. External Links: Document Cited by: §7.\n• S. Feng and G. T. Schuster (2019) Transmission+ reflection anisotropic wave-equation traveltime and waveform inversion. Geophysical Prospecting 67 (2), pp. 423–442. Cited by: §7.\n• A. Guitton (2012) Blocky regularization schemes for full waveform inversion. Geophysical Prospecting 60, pp. 870–884. Cited by: §7.\n• W. Hu, A. Abubakar, and T. Habashy (2009) Simultaneous multifrequency inversion of full-waveform seismic data. Geophysics 74 (2), pp. 1–14. Cited by: §7.\n• S. Ioffe and C. Szegedy (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. In\n\nInternational Conference on Machine Learning\n\n,\npp. 448–456. Cited by: §3.2.\n• J. Johnson, A. Alahi, and L. Fei-Fei (2016)\n\nPerceptual losses for real-time style transfer and super-resolution\n\n.\nIn\n\nEuropean Conference on Computer Vision\n\n,\npp. 694–711. Cited by: §1.\n• A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems 25, pp. 1097–1105. Cited by: §3.4.\n• S. Li, B. Liu, Y. Ren, Y. Chen, S. Yang, Y. Wang, and P. Jiang (2020) Deep-learning inversion of seismic data. IEEE Transactions on Geoscience and Remote Sensing 58 (3), pp. 2135–2149. External Links: Document Cited by: §7.\n• Y. Lin and L. Huang (2015a) Acoustic- and elastic-waveform inversion using a modified Total-Variation regularization scheme. Geophysical Journal International 200 (1), pp. 489–502. External Links: Document Cited by: §7.\n• Y. Lin and L. Huang (2015b) Quantifying subsurface geophysical properties changes using double-difference seismic-waveform inversion with a modified Total-Variation regularization scheme. Geophysical Journal International 203 (3), pp. 2125–2149. External Links: Document Cited by: §7.\n• Y. Lin and L. Huang (2017) Building subsurface velocity models with sharp interfaces using interface-guided seismic full-waveform inversion. Pure and Applied Geophysics 174 (11), pp. 4035–4055. External Links: Document Cited by: §7.\n• Y. Lin, E. M. Syracuse, M. Maceira, H. Zhang, and C. Larmat (2015) Double-difference traveltime tomography with edge-preserving regularization and a priori interfaces. Geophysical Journal International 201 (2), pp. 574. External Links: Document Cited by: §7.\n• I. Loshchilov and F. Hutter (2018) Decoupled weight decay regularization. In International Conference on Learning Representations, Cited by: §5.1.\n• B. Moseley, T. Nissen-Meyer, and A. Markham (2020) Deep learning for fast simulation of seismic waves in complex media. Solid Earth 11 (4), pp. 1527–1549. Cited by: §4.\n• V. Nair and G. E. Hinton (2010) In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 807–814. Cited by: §3.2.\n• A. Ramírez and W. Lewis (2010) Regularization and full-waveform inversion: a two-step approach. In 80th Annual International Meeting, SEG, Expanded Abstracts, pp. 2773–2778. Cited by: §7.\n• Y. Ren, X. Xu, S. Yang, L. Nie, and Y. Chen (2020) A physics-based neural-network way to perform seismic full waveform inversion. IEEE Access 8, pp. 112266–112277. Cited by: §7.\n• R. Rojas-Gómez, J. Yang, Y. Lin, J. Theiler, and B. Wohlberg (2020) Physics-consistent data-driven waveform inversion with adaptive data augmentation. IEEE Geoscience and Remote Sensing Letters. Cited by: §7.\n• K. Simonyan and A. Zisserman (2015) Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, Cited by: §3.4.\n• J. Sun, K. A. Innanen, and C. Huang (2021) Physics-guided deep learning for seismic inversion with hybrid training and uncertainty analysis. Geophysics 86 (3), pp. R303–R317. Cited by: Figure 3, §5.1.\n• A. Tarantola (2005) Inverse problem theory and methods for model parameter estimation. SIAM. Cited by: §7.\n• E. Treister and E. Haber (2016) Full waveform inversion guided by travel time tomography. SIAM Journal on Scientific Computing 39, pp. S587–S609. Cited by: §7.\n• J. Virieux and S. Operto (2009) An overview of full-waveform inversion in exploration geophysics. Geophysics 74 (6), pp. WCC1–WCC26. Cited by: §2, §7.\n• Y. Wang (2015) Frequencies of the Ricker wavelet. Geophysics 80 (2), pp. A31–A37. Cited by: §4.\n• Y. Wang, Q. Ge, W. Lu, and X. Yan (2020) Well-logging constrained seismic inversion based on closed-loop convolutional neural network. IEEE Transactions on Geoscience and Remote Sensing 58 (8), pp. 5564–5574. Cited by: §7.\n• Y. Wu and Y. Lin (2019) InversionNet: an efficient and accurate data-driven full waveform inversion. IEEE Transactions on Computational Imaging 6 (1), pp. 419–433. Cited by: §1, §2, §3.2, §5.1, §5.1, §7.\n• F. Yang and J. Ma (2019) Deep-learning inversion: a next-generation seismic velocity model building method. Geophysics 84 (4), pp. R583–R599. Cited by: §4, §7.\n• Z. Zhang and Y. Lin (2020) Data-driven seismic waveform inversion: a study on the robustness and generalization. IEEE Transactions on Geoscience and Remote Sensing 58, pp. 6900–6913. Cited by: §3.2, §5.1, §5.1, §7.\n\n## Appendix A Appendix\n\n### a.1 Derivation of Forward Modeling in Practice\n\nSimilar to the finite difference in time domain, in 2D situation, by applying the fourth-order central finite difference in space, the Laplacian of can be discretized as\n\n ∇2p(r,t)=∂2p∂x2+∂2p∂z2,≈1(Δx)22∑i=−2ciptx+i,z+1(Δz)22∑i=−2ciptx,z+i+O[(Δx)4+(Δz)4], (11)\n\nwhere , , , , and and stand for the horizontal offset and the depth of a 2D velocity map, respectively. For convenience, we assume that the vertical grid spacing is identical to the horizontal grid spacing .\n\nGiven the approximation in Equations 5 and 11, we can rewrite the Equation 1 as\n\n pt+1x,z=(2−5α)ptx,z−pt−1x,z+(Δx)2αstx,z+α2∑i=−2i≠0ci(ptx+i,z+ptx,z+i), (12)\n\nwhere .\n\nDuring the simulation of the forward modeling, the boundaries of the velocity maps should be carefully handled because they may cause reflection artifacts that interfere with the desired waves. One of the standard methods to reduce the boundary effects is to add absorbing layers around the original velocity map. Waves are trapped and attenuated by a damping parameter when propagating through those absorbing layers. Here, we follow Collino and Tsogka (2001) and implement the damping parameter as\n\n κ=d(u)=3uv2L2ln(R), (13)\n\nwhere denotes the overall thickness of absorbing layers, indicates the distance between the current position and the closest boundary of the original velocity map, and is the theoretical reflection coefficient chosen to be . With absorbing layers added, Equation 6 can be ultimately written as\n\n pt+1x,z=(2−5α−κ)ptx,z−(1−κ)pt−1x,z+(Δx)2αstx,z+α2∑i=−2i≠0ci(ptx+i,z+ptx,z+i). (14)\n\n### a.2 OpenFWI Examples and Inversion Results of Different Methods", null, "Figure 7: More examples of velocity maps and their corresponding seismic measurements in OpenFWI dataset.", null, "Figure 8: Comparison of different methods on inverted velocity maps of FlatFault. The details revealed by our UPFWI are highlighted.", null, "Figure 9: Comparison of different methods on inverted velocity maps of CurvedFault. The details revealed by our UPFWI are highlighted." ]
[ null, "https://deepai.org/publication/None", null, "https://deepai.org/publication/None", null, "https://deepai.org/publication/None", null, "https://deepai.org/publication/None", null, "https://deepai.org/publication/None", null, "https://deepai.org/publication/None", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8828055,"math_prob":0.8469222,"size":32318,"snap":"2022-40-2023-06","text_gpt3_token_len":7440,"char_repetition_ratio":0.14219843,"word_repetition_ratio":0.022313204,"special_character_ratio":0.22977906,"punctuation_ratio":0.12955397,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9581772,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-06T08:55:22Z\",\"WARC-Record-ID\":\"<urn:uuid:1e62dc59-7c8c-4571-a3ee-c9a876233b28>\",\"Content-Length\":\"529247\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:458e5127-3758-431b-b58a-2d874d5552b8>\",\"WARC-Concurrent-To\":\"<urn:uuid:64c630d4-c1e6-49aa-abfb-0e529a7c6141>\",\"WARC-IP-Address\":\"18.67.76.81\",\"WARC-Target-URI\":\"https://deepai.org/publication/unsupervised-learning-of-full-waveform-inversion-connecting-cnn-and-partial-differential-equation-in-a-loop\",\"WARC-Payload-Digest\":\"sha1:GELHVZWMACSWMLPROWV4MW4CW66FTSF7\",\"WARC-Block-Digest\":\"sha1:GCJIRI7DJS2M3RHE2Y6CABC666T6SZI4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500334.35_warc_CC-MAIN-20230206082428-20230206112428-00835.warc.gz\"}"}
https://math.stackexchange.com/questions/4777165/abelianization-of-non-abelian-groups
[ "# Abelianization of Non-Abelian Groups\n\nLet $$\\mathbf{Grp}$$ be the category of groups and $$\\mathbf{Ab}$$ be the category of abelian groups, whose $$\\text{Hom}$$ sets are group homomorphisms.\n\nWe can define a forgetful functor $$\\mathcal{F}: \\mathbf{Ab}\\to\\mathbf{Grp}$$ by \"forgetting\" that the group is abelian; i.e. $$\\mathcal{F}(G) = G$$, $$\\mathcal{F}(f) = f$$ for all abelian groups $$G$$ and abelian group homomorphisms $$f$$.\n\nLikewise, we can define an abelianization functor $$\\mathcal{G}:\\mathbf{Grp}\\to\\mathbf{Ab}$$ by $$\\mathcal{G}(G) = G/[G, G]$$, where $$[G, G]=\\{ghg^{-1}h^{-1}\\mid g, h\\in G\\}$$ is the commutator of $$G$$, and for any $$f\\in \\text{Hom}_{\\mathbf{Grp}}(G, H)$$, we have $$\\mathcal{G}(f):G/[G, G]\\to H/[H, H]$$ given by $$x[G, G]\\mapsto f(x)[H, H]$$.\n\nNow one can trivially check that $$\\mathcal{G}$$ is a surjective functor; that is, every abelian group is the abelianization of some group (for instance, itself). But this conclusion didn't seem satisfying to me.\n\nConsider the category $$\\mathbf{Nab}$$ of non-abelian groups. We define a functor $$\\mathcal{H}:\\mathbf{Nab}\\to\\mathbf{Ab}$$ similarly to $$\\mathcal{G}$$; by abelianizing the group.\n\nMy question: Is $$\\mathcal{H}$$ surjective? That is, is every abelian group the abelianization of a non-abelian group?\n\n• Note that a more interesting property of a functor than being \"surjective on objects\" is that it be full: a functor $F\\colon\\mathscr{A}\\to\\mathscr{B}$ is full if and only if for every $A,B\\in\\mathrm{Ob}(\\mathscr{A})$, the induced map $\\mathscr{A}(A,B)\\to\\mathscr{B}(F(A),F(B))$ is surjective. (The functor is \"faithful\" if the induced map is one-to-one). Sep 28 at 15:51\n• Yes, I just thought the \"full\" condition was too strong for the question I wanted to answer\n– IAAW\nSep 28 at 16:06\n• You also kind of want the image to be equivalent (rather than equal) to the target, though that doesn't matter here. Sep 28 at 17:35\n• I think you'd want to ask about the functor being essentially surjective. Otherwise, a stickler could say: if $G$ is a group, then $G^{ab}$ always has an underlying set all of whose elements have the same (nonzero) cardinality. Therefore, the group with underlying set $\\{ 0, 1 \\}$ and group structure transported from $C_2$ is not in the image of the abelianization functor since 0 is empty whereas 1 is nonempty. Sep 28 at 19:16\n• @chi you're right. Will edit\n– IAAW\nSep 29 at 14:03\n\nOf course - let $$G$$ be any abelian group. Then $$A_5\\times G$$ has abelianisation $$G$$.\nMore generally, we have $$\\mathcal{G}(H\\times G)=\\mathcal{G}(H)\\times \\mathcal{G}(G)$$. Therefore, if $$G$$ is abelian and $$H$$ has trivial abelinisation then $$\\mathcal{G}(H\\times G)=\\mathcal{G}(H)\\times \\mathcal{G}(G)=\\mathcal{G}(G)=G$$." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.74214596,"math_prob":0.99994946,"size":1214,"snap":"2023-40-2023-50","text_gpt3_token_len":416,"char_repetition_ratio":0.20661157,"word_repetition_ratio":0.0,"special_character_ratio":0.30313015,"punctuation_ratio":0.14229248,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000004,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-01T21:30:42Z\",\"WARC-Record-ID\":\"<urn:uuid:c1dba80f-bfa8-4a10-a672-755f805a1c11>\",\"Content-Length\":\"151095\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3fd531e7-9573-4e07-8760-9e7f433dfcfd>\",\"WARC-Concurrent-To\":\"<urn:uuid:2f37ae3f-f302-47a0-bedf-3b2f2410bf49>\",\"WARC-IP-Address\":\"172.64.144.30\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/4777165/abelianization-of-non-abelian-groups\",\"WARC-Payload-Digest\":\"sha1:ECCVFB3UNMDFCIZ6XIAHUO33XTMZNVJJ\",\"WARC-Block-Digest\":\"sha1:AYYQYROP37ABIHESQ5X4QAXI4YPJNGTO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100304.52_warc_CC-MAIN-20231201183432-20231201213432-00005.warc.gz\"}"}
https://www.scribd.com/presentation/331279713/Ranges-New-pptx
[ "You are on page 1of 49\n\nCOMPACT RANGES\n\nMicrowave antennas need\n\n-U.P.W for illumination\n-far field distances in the setup\nCATR (Compact Antenna Test Range)\na collimating device that generates\nUPW in a short distance compared to 2D 2\n/\n\nTwo types\n1.Curved metal reflectors\n2.Dielectric lens\nPrinciple\nEquality of path length to generate plane\nwave\nFig.1\n\nDraw backs\n-aperture blockage\n-spill over of feed to test antenna\n-diffraction from edges of reflector\n-polarization problems\nSolution\n-using off-set feed\n-long focal length\n\nCATR performance\nto produce perfect UPW a CATR should\nhave\n-ideal parabolic curvature\n-infinite size\n-point source at focus\nA practical CATR produces approximate\nUPW\nFigures of merit are\n-phase errors\n-amplitude tapper\n- amplitude & phase ripples\n\nPhase errors : due to diffraction and\n\ncurvature\nAmplitude tapering:\n- feed pattern\n- space attenuation (1/r2 )\nAmplitude & phase ripples:\n- due to diffraction\nFig.2\n\nDiffraction : to reduce\n1.Serrated edges\nFig.3\n\n2.Rolled edges\nFig.4\n\nCATR designs:\nFour configurations\n1. Single paraboloid:\nFig:5(copy fig.1)\n\n- no blocking\n-more depolarization\n-No spill over\n\n2.Dual paraboloid:\nFig.6\n\n-spill over exist\n\n-low cross polarization\n\n3.Dual shaped reflector:\n\nFig.7\n\n-similar to cassegrain\n-high illumination efficiency\n-low spill over\n-increased power density\n\nFig.8\n\nVertical plane -parabolic\n\nHorizontal plane flat\nSmall antennas direct far field pattern\nLarge antennas NF/FF transform for FF\npattern\n\nMeasurements in the near field\n\nare transformed to obtain FF\nmeasurements using analytical\nmethods.\nMethods:\nNear field data:\nA scanning probe over\n1.planar surface or\n2.cylindrical or\n3.spherical\n\nE-field (magn. & phase) on the surface\n\nModel expansion methods\nFF\npattern\nFF Transformation complexity\nPlanar(high G antennas) < cylindrical < spherical(low\nG antennas)\n\nPlanar scanning:\ndata acquired on rect. Plane\nMax. sample spacing: x = y = /2\n\nFig.9\n\n- Mathematical simplicity\n- Less complex transformation(FFT\nalgorithms)\n- Suitable for horn, reflector, planar\narrays\n- Resulting FF pattern is over limited\nangular span(draw back)\n\nCylindrical scanning:\nFig.10\n\n- Produce complete azimutal pattern\n\n- Moderate FF transformation complexity\n- Maximum angular & vertical sample spacing is\n=/2(+a) ; z = /2\nwhere , a= radius of smallest cylinder\nenclosing\n\nFig.11- cylindrical scan\n\nSpherical scanning:\nFig.12\n\n- Complete FF pattern can be obtained\n\n- Maximum sampling spacing is\n= /2(+a) ; = /2(+a)\nWhere, a = radius of smallest sphere\n\nFig. 13-spherical scanning\n\nPatterns measured on surface of\nsphere of\nFig.14\n\nField pattern : E (, ) (r is fixed)\n\n- three dimensional fig\n- E-plane & H-plane\n- vertical pattern( = 900 )\n- azimuthal pattern( =\n900 )\nAmplitude pattern:\nTotal amplitude = vector sum of two\northogonal\ncomponents\n\nFig.15- Amplitude pattern\n\nPhase pattern:\nHow the phase of the field varies\nover the surface of fixed radius\nFig. 16-far field phase measurement\n\nGain Measurement\nAbsolute Gain measurement\n- based on Friis transmission formula\n(i).Two- antenna method:\n\nFriis transmission formula in\n\nlogarithmic decibel form can be\nwritten as\n(Got )dB + (Gor )dB\n= (20 log(4R/) + 10\nlog(Pr/Pt) )\n\nIf the transmitter & Receiver are\n\nidentical\ni.e (Got )dB = (Gor )dB then\n\n(ii)Three antenna method\n\nIf the two antennas are not identical\nThen three antennas a, b, c can used\na-b combination\n(Ga )dB +(Gb )dB = 20 log(4R/) + 10\nlog(Prb/Pta)\na-c comination\n(Ga )dB +(Gc )dB = 20 log(4R/) + 10\nlog(Prc/Pta)\n\nb-c combination\n(Gb )dB +(Gc )dB = 20 log(4R/) + 10\nlog(Prc/Ptb)\n\nIMPEDANCE MEASUREMENT\nThe i/p Impedance (driving point\npoint impedance) of an antenna\nIs its self impedance (when radiating\ninto unbounded medium, no coupling)\nIs a function of self impedance & Mutual\nimpedance between it and other\nsources/obstacles\n\nTo ensure max.power transfer\n\nbetween Antenna & Transmission line\nConjugate match is required\nIf No Conjugate matching, then power\nlost is given by\n\nWhere\nZant = i/p impedance of the antenna\nZcct = impedance of the Tr.Line/ Circuit\nconnected to the antenna\n\nThe degree of impedance\n\nmismatch\nis a function of Zant and\n\nChar.impedance, Zc of the tr.line\n\ndetermines the amount of power\nreflected at the i/p terminals of\n\nNow to measure the i/p Impedance\n\nVSWR is measured to compute the\nmagnitude of Reflection coefficient,\nTo find the phase , of of the reflection\ncoefficient, voltage minimum on the\ntr.line is identified from the antenna\nterminals and then,\n\nand\n\nAfter measuring the reflection\n\ncoefficient (both magnitude &\nphase), the i/p impedance can be\ncalculated using\n\nMEASUREMENT\n\nUsed to determine the degree of mismatch\n\nbetween the source and load when the value\nVSWR 1.\nCan be measured by using a slotted line. Direct\nMethod Measurement is used for VSWR values\nusing a standing wave detector .\nThe measurement consists simply of adjusting\nsure that the frequency is correct and then using\nthe dc voltmeter to measure the detector output\nat a maximum on the slotted section and then at\nthe nearest minimum.\n\nISWR = Imax / Imin\n\n= k (V\n\n2\n)\n/ k (V\nmax\n\n2\n)\nmin\n\n(V\n\nmax\n\n2\n=\nVSWR\nVSWR = ( I / I ) = ISWR\nmax\n\nmin\n\n/ V\n\n2\n)\nmin\n\nMethods used depends on the value of VSWR\n\nwhether it is high or low. If the load is not exactly\nmatched to the line, standing wave pattern is\nproduced.\nReflections can be measured in terms of voltage,\ncurrent or power. Measurement using voltage is\npreffered because it is simplicity.\nWhen reflection occured, the incident and the\nreflected waves will reinforce each other in some\nplaces, and in others they will tend to cancel each\nother out.\n\nMICROWAV\nE SOURCE\n\nPOWER\nMETER\nISOLAT\nOR\n\nATTENU\nATOR\n\nWAVE\nMETER\n\nDIRECTI\nONAL\nCOUPLE\nR\n\nVSWR\nINDICAT\nOR\n\nSLOTTED\nLINE\n\nTUNE\nR\n\nTERMINAT\nOR\n\nFUNCTION OF EACH BLOCK\n\nMICROWAVE SOURCE generates microwave\nsource in X-band (8 12 GHz);\ne.g klystron, magnetron or TWT\nISOLATOR /CIRCULATOR - Allow wave to travel\nthrough in one direction while being\nattenuated in the other direction or it is use to\neliminate the unwanted generator frequency\npulling (changing the frequency of the\ngenerator) due to system mismatch or\ndiscontinuity. (to prevent reflected energy from\nreaching the source)\n\nATTENUATOR - Control the amount of\n\npower level in a fixed amount, variable\namount or in a series of fixed steps from the\nfrom the microwave source to the\nwavemeter.\n\nWAVEMETER - Used to select / measure\n\nresonant cavity frequencies by having a\nplunger move in and out of the cavity thus\ncauses the the cavity to resonate at different\nfrequencies.\nDIRECTIONAL COUPLER - Samples part of\nthe power travelling through the main\nwaveguide and allows part of its energy to\nfeed to a secondary output port. Ideally it is\nused to separate the incident and reflected\nwave in a transmission line.\nSLOTTED LINE - Used to determine the field\n\nVSWR INDICATOR - Denotes the value of\n\nVSWR measured by the slotted line.\nTUNER\n\nappear at the output. Any harmonic frequencies\n\nthat appear at the output are reduced to an\nacceptable level.\nTERMINATOR\n\nresistive termination to some sort of deep-space\n\nantenna array, active repeater or similar devices.\n3 special cases of transmission line i.e short\ncircuit, open circuit, match impedance.\n\nDouble Minimum method is usually employed\n\nfor VSWR values greater than about 10.\nE2MAX\n\n2E2MIN\n\nSWR PATTERN\n\nE2MIN\n\n/2\n\nd/2\n\nThe detector output (proportional to field strength\n\nsquared) is plotted against position. The probe is\nmoved aling the line to find the minimum value of\nsignal.\nIt is then moved either side to determine 2\npositions at which twice as much detector signal is\nobtained. The distance d between these two\npositions then gives the VSWR according to the\nformula :\nS =\n\n1 + 1/Sin2(d/)\n\n(OR)\nThe value of VSWR is calculated by\nVSWR = g/ 2(d2- d1)\nWhere g = guided wave length\nd2- d1 = seperation between\nsuccessive\nminima\n\nS-PARAMETERS : Introduction\nA two port network is shown in the figure below.\n\nFrom network theory a two port network can be\n\ndescribed by a number of parameters such as H, Y ,\nABCD parameters.\n\nIf the frequencies are in microwave region these parameters\n\ncannot be used due to the following reasons.\n\nThe figure below shows the S parameters of two port network.\n\nScattering Parameters\nConsider a circuit or device inserted into a\nT-Line as shown in the Figure. We can\nrefer to this circuit or device as a two-port\nnetwork.\nThe behavior of the network can be\ncompletely characterized by its scattering\nparameters\n(S-parameters),\nor\nits\nscattering matrix, [S].\nScattering matrices are frequently used to\ncharacterize multiport networks, especially\nat high frequencies. They are used to\nrepresent microwave devices, such as\namplifiers and circulators, and are easily\nrelated to concepts of gain, loss and\nreflection.\n\nScattering matrix\n\nS11\nS\n\nS 21\n\nS12\n\nS 22\n\nScattering Parameters (S-Parameters)\n\nThe scattering parameters represent ratios of voltage waves entering and\nleaving the ports (If the same characteristic impedance, Zo, at all ports in\nthe network are the same).\n\nIn matrix form this is written\n\nWhere,\n\nProperties:\n1)\nThe two-port network is reciprocal if\nReciprocity\n\nthe transmission characteristics are\n\nthe same in both directions (i.e. S21\n= S12).\nIt is a property of passive circuits\n(circuits with no active devices or\nferrites) that they form reciprocal\nnetworks.\nA network is reciprocal if it is equal\nto\nits\ntranspose.\nStated\nmathematically, for a reciprocal\nnetwork\nCondition for Reciprocity:\n\nS12= S21\n\n2) Lossless\nNetworks\nA\nlossless network does\n\nnot contain any\n\nresistive elements and there is no\nattenuation of the signal. No real power is\ndelivered to the network. Consequently, for\nany passive lossless network, what goes in\nmust come out!\nIn terms of scattering parameters, a\nnetwork is lossless if\n\nS S\nt\n\n1\nU , where [U] is the unitary matrix\n[U ]\n\n.\n0\n1\n\nFor a 2-port network, the product of the transpose matrix and the\ncomplex conjugate matrix yields\n\nS S\n\nS S\nS S\n\nS11 S 21\n\nS S* S\n12 11\n22\n\nIf the network is\nreciprocal and\nlossless\n\n*\n21\n\n*\n11 12\n\n12\n\nS 21S 22*\n\nS 22\n\nS11 S 21 1\n2\n\n1 0\n\n0 1\nS11 S12* S 21S 22* 0" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.761967,"math_prob":0.92781204,"size":10456,"snap":"2019-26-2019-30","text_gpt3_token_len":2839,"char_repetition_ratio":0.10084195,"word_repetition_ratio":0.0068649887,"special_character_ratio":0.22628155,"punctuation_ratio":0.081567794,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98405826,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-25T15:44:34Z\",\"WARC-Record-ID\":\"<urn:uuid:e742a1be-0f7b-47d0-9bdb-3f350912cbfc>\",\"Content-Length\":\"248485\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e50c5c7f-aae1-4f0e-9040-266080632b70>\",\"WARC-Concurrent-To\":\"<urn:uuid:85a29e9e-87c9-4944-9b22-1528c07490d6>\",\"WARC-IP-Address\":\"151.101.250.152\",\"WARC-Target-URI\":\"https://www.scribd.com/presentation/331279713/Ranges-New-pptx\",\"WARC-Payload-Digest\":\"sha1:C7XGJ4Y66M6CJ5FPFZCKAHMVJZSWU4W4\",\"WARC-Block-Digest\":\"sha1:M3N4QU77TLR5O2MVUZRO4OJSTB7JJJO4\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627999853.94_warc_CC-MAIN-20190625152739-20190625174739-00230.warc.gz\"}"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-1/chapter-8-polynomials-and-factoring-8-1-adding-and-subtracting-polynomials-practice-and-problem-solving-exercises-page-478/24
[ "## Algebra 1\n\n$-2y^2+5y$ $quadratic$ $binomial$\nIn order to solve this problem, we need to put the equation in standard form and name the polynomial. To put the equation in standard form, we must order the monomials from highest to lowest. $-2y^2$ has a higher degree than $5y$, so it will go first. Our answer is: $-2y^2+5y$ Next, we need to name the polynomial. It is a binomial because it has two monomials (terms), and it is quadratic because its highest degree is 2. Therefore, it is a quadratic binomial." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9208754,"math_prob":0.9993631,"size":521,"snap":"2020-34-2020-40","text_gpt3_token_len":147,"char_repetition_ratio":0.12765957,"word_repetition_ratio":0.022727273,"special_character_ratio":0.26103646,"punctuation_ratio":0.11818182,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99978477,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-18T20:12:23Z\",\"WARC-Record-ID\":\"<urn:uuid:e75872dc-9ca9-4dd4-8667-51598e026fa9>\",\"Content-Length\":\"90425\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f219f0ff-3259-44ab-a81a-95c080096aa6>\",\"WARC-Concurrent-To\":\"<urn:uuid:1417d471-4744-49e1-9fdf-4e26be155f41>\",\"WARC-IP-Address\":\"34.238.129.158\",\"WARC-Target-URI\":\"https://www.gradesaver.com/textbooks/math/algebra/algebra-1/chapter-8-polynomials-and-factoring-8-1-adding-and-subtracting-polynomials-practice-and-problem-solving-exercises-page-478/24\",\"WARC-Payload-Digest\":\"sha1:N2KA5QEUQD3IICQUIV4ZIMCOII4QFXLL\",\"WARC-Block-Digest\":\"sha1:7INCVIC7G37ZZAC7JEANNMVFQWSKWDJ3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400188841.7_warc_CC-MAIN-20200918190514-20200918220514-00772.warc.gz\"}"}
https://www.netcomputerscience.com/2017/03/gate-computer-science-2017-session-2-p5.html
[ "", null, "Sample Questions, Previous Year Solved Papers, Study Materials For Competitive Examinations Like UGC NET, SET And GATE Computer Science.\n\n## Wednesday, 15 March 2017\n\n41.       Let L(R) be the language represented by regular expression R. Let L(G) be the language generated by a context free grammar G. Let L(M) be the language accepted by a Turing machine M.\nWhich of the following decision problems are undecidable?\nI. Given a regular expression R and a string w, is w ϵ L(R)?\nII. Given a context-free grammar G, is L(G) = Φ?\nIII. Given a context-free grammar G, is L(G) = ∑* for some alphabet ∑?\nIV. Given a Turing machine M and a string w, is w ϵ L(M)?\n(A) I and IV only\n(B) II and III only\n(C) II, III and IV only\n(D) III and IV only\n42.       The next state table of a 2-bit saturating up-counter is given below.\nThe counter is built as a synchronous sequential circuit using T flip-flops. The expressions for T1 and T0 are\n43.       Consider the following snippet of a C program. Assume that swap (& x, & y) exchanges the contents of x and y.\nint main() {\nint array[] = {3, 5, 1, 4, 6, 2);\nint done = 0;\nint i;\nwhile (done == 0) {\ndone = 1;\nfor (i=0; i<=4; i++) {\nif (array[i] < array[i+1]) {\nswap(&array[i], &array[i+1]);\ndone = 0;\n}\n}\nfor (i=5; i>=1; i--) {\nif (array[i] > array[i-1]) {\nswap(&array[i], &array[i-1]);\ndone = 0;\n}\n}\n}\nprintf (“%d”, array);\n}\nThe output of the program is ...................\n44.       Two transactions T1 and T2 are given as\nT1: r1(X)w1(X)r1(Y)w1(Y)\nT2: r2(Y)w2(Y)r2(Z)w2(Z)\nwhere ri(V) denotes a read operation by transaction Ti on a variable V and wi(V) denotes a write operation by transaction Ti on a variable V. The total number of conflict serializable schedules that can be formed by T1 and T2 is ...............\n45.       The read access times and the hit ratios for different caches in a memory hierarchy are as given below.\nThe read access time of main memory is 90 nanoseconds. Assume that the caches use the referred-word-first read policy and the write back policy. Assume that all the caches are direct mapped caches. Assume that the dirty bit is always 0 for all the blocks in the caches. In execution of a program, 60% of memory reads are for instruction fetch and 40% are for memory operand fetch. The average read access time in nanoseconds (up to 2 decimal places) is ..................\n\n46.       Consider the following database table named top_scorer.\nConsider the following SQL query:\nSELECT ta.player FROM top scorer AS ta\nWHERE ta.goals > ALL (SELECT tb.goals\nFROM top_scorer AS tb\nWHERE tb.country = ‘Spain’)\nAND ta.goals >ANY(SELECT tc.goals\nFROM top_scorer AS tc\nWHERE tc.country = ‘Germany’)\nThe number of tuples returned by the above SQL query is ...............\n47.       If the ordinary generating function of a sequence\nthen a3-a0 is equal to ....................\n48.       If a random variable X has a Poisson distribution with mean 5, then the expectation E[(X+2)2] equals ................\n49.       In a B+ tree, if the search-key value is 8 bytes long, the block size is 512 bytes and the block pointer size is 2 bytes, then the maximum order of the B+ tree is ...................\n50.    A message is made up entirely of characters from the set X={P,Q,R,S,T}. The table of probabilities for each of the characters is shown below:\nIf a message of 100 characters over X is encoded using Huffman coding, then the expected length of the encoded message in bits is ..............\n51.    Consider the set of processes with arrival time (in milliseconds), CPU burst time (in milliseconds), and priority (0 is the highest priority) shown below. None of the processes have I/O burst time.\nThe average waiting time (in milliseconds) of all the processes using preemptive priority scheduling algorithm is ................\n52.    If the characteristic polynomial of a 3 x 3 matrix M over R (the set of real numbers) is λ3-4λ2+aλ+30, a ϵ R, and one eigen value of M is 2, then the largest among the absolute values of the eigen values of M is .................\n53.    Consider a machine with a byte addressable main memory of 232 bytes divided into blocks of size 32 bytes. Assume that a direct mapped cache having 512 cache lines is used with this machine. The size of the tag field in bits is .............\n54.    Consider the following C Program.\n#include<stdio.h>\nint main() {\nint m = 10;\nint n, n1;\nn = ++m;\nn1=m++;\nn--;\n--n1;\nn-=n1;\nprintf(“%d”, n);\nreturn 0;\n}\nThe output of the program is ..................\n55.    Consider the following C Program.\n#include<stdio.h>\n#include<string.h>\nint main() {\nchar* c = “GATECSIT2017”;\nchar* p = c;\nprintf(“%d”, (int)strlen(c+2[p]-6[p]-1));\nreturn 0;\n}\nThe output of the program is ................." ]
[ null, "https://3.bp.blogspot.com/-uJFqsWMxi2E/Wa4jgRUlpJI/AAAAAAAACE8/3gd2wEOMRp0LzWREUBFXqUL3o6tRBtkQgCK4BGAYYCw/s1600/Logo.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7562982,"math_prob":0.99136335,"size":4683,"snap":"2019-13-2019-22","text_gpt3_token_len":1382,"char_repetition_ratio":0.134003,"word_repetition_ratio":0.04761905,"special_character_ratio":0.337177,"punctuation_ratio":0.2936893,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9963568,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-22T21:01:39Z\",\"WARC-Record-ID\":\"<urn:uuid:ba524231-1093-4b82-9aa9-3236bb6b3bf5>\",\"Content-Length\":\"270492\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:92b452eb-b118-4ed0-b435-bfbccc22ff1c>\",\"WARC-Concurrent-To\":\"<urn:uuid:6f749d65-4b8c-4e89-8f49-7ac4d995eda1>\",\"WARC-IP-Address\":\"172.217.15.115\",\"WARC-Target-URI\":\"https://www.netcomputerscience.com/2017/03/gate-computer-science-2017-session-2-p5.html\",\"WARC-Payload-Digest\":\"sha1:I7O22ZZ4Q5RWC5TXKJPL6ZDUARWNWW6Y\",\"WARC-Block-Digest\":\"sha1:2X5R43Z6T6LBJ4RERJOXFGREC2B53VIZ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202689.76_warc_CC-MAIN-20190322200215-20190322222215-00364.warc.gz\"}"}
https://www.mecaenterprises.com/components-and-cladding-example/
[ "Jun 26, 2020\n\nThis article provides a Components and Cladding (C&C) example calculation for a typical building structure.  We will first perform the calculations manually, and then show how the same calculations can be performed much easier using the MecaWind software.\n\n## What is Components and Cladding?\n\nASCE 7-16 defines Components and Cladding (C&C) as: “Elements of the building envelope or elements of building appurtances and rooftop structures and equipment that do not qualify as part of the MWFRS (Main Wind Force Resisting System).”   In simple terms, C&C would be considered as windows, doors, the siding on a house, roofing material, etc..", null, "## Example Problem:\n\nWe will use ASCE 7-16 for this example and the building parameters are as follows:\n\nBuilding Eave Height:  EHt = 40 ft [12.2 m]\n\nBuilding Length:  L = 200 ft [60.96 m]\n\nBuilding Width:  W = 100 ft [30.48 m]\n\nRoof Type:  Monoslope with 1:12 Slope\n\nEnclosure Type:  Enclosed\n\nElevation:  Job site is at sea level\n\nWind Speed:  V = 150 mph [67.1 m/s]\n(Based upon Category III)\n\nExposure:  C (Open Terrain)\n\nTopography:  Flat, no topographic features", null, "## Effective Area:\n\nIn order to calculate the wind pressures for each zone, we need to know the effective area of the C&C.   To determine the area we need the Width and Length:\n\nLength = Span of the component\n\nWidth =  The effective width of the component which need not be less than 1/3 of the span length.\n\nAs an example, a roof joist that spans 30 ft and are spaced 5 ft apart would have a length of 30 ft and the width would be the greater of 5 ft or 30 ft / 3 = 10 ft.  In this case the 1/3 rule would come into play and we would use 10ft for the width.\n\nEff Area = 30 ft x 10 ft = 300 sq ft\n\n(Note: MecaWind makes this adjustment automatically, you just enter the Width and Length and it will check the 1/3 rule)\n\nWhen calculating C&C pressure, the SMALLER the effective area the HIGHER the wind pressure.\n\nMost of the figures for C&C start at 10 sq ft [0.9 sq m] and so for the purpose of this example we will consider an effective area of 10 sq ft for all wall and roof wind zones.  This will give us the most conservative C&C wind pressure for each zone.\n\n## Chapter 30, but Which Part?\n\nChapter 30 of ASCE 7-16 provides the calculation methods for C&C, but which of the seven (7) parts in this section do we follow?  We just have to follow the criteria for each part to determine which part(s) our example will meet.  To do this we first need our mean roof height (h) and roof angle.\n\nRoof Angle = arctan(1/12) = 4.76 Deg\n\nSec 2.62 defines the mean roof height as the average of the roof eave height and the height to the highest point on the roof surface, except that, for roof angles less than or equal to 10 deg, the mean roof height is permitted to be taken as the roof eave height.\n\nSince our Roof Angle (4.76 Deg) <= 10 Deg, then we can take h as the eave height (EHt).\n\nh = EHt = 40 ft [12.2 m]\n\nThe other determination we need to make is whether this is a low rise building.  To be considered a low rise, the building must be enclosed (this is true), the h <= 60 ft (this is true) and the h<= least horizontal width.  Our least horizontal dimension is the width of 100 ft [30.48] and our h is less than this value, so this criteria is met as well.  Therefore this building is a low rise building.\n\nUsing all of this criteria, we can then determine that the only two methods of Chapter 30 where we meet all criteria are Part 1 and 4 (see chart).", null, "## Chapter 30 Part 1:", null, "We now follow the steps outlined in Table 30.3-1 to perform the C&C Calculations per Chapter 30 Part 1:\n\nStep 1: We already determined the risk category is III\n\nStep 2: V = 150 mph\n\nStep 3: Determine Wind Load Parameters\nKd = 0.85 (Per Table 26.6-1 for C&C)\nKzt = 1  (There are no topographic features)\nKe = 1  (Job site is at sea level)\nGCpi = +/-0.18 (Tabel 26.13-1 for enclosed building)\n\nStep 4: Determine Velocity pressure exposure coefficient\nzg = 900 ft [274.32]  (Table 26.11-1 for Exposure C)\nAlpha = 9.5                 (Table 26.11-1 for Exposure C)\nKh = 2.01*(40 ft / 900 ft)^(2/9.5) = 1.044\n\nStep 5: Determine velocity pressure\nqz = 0.00256*Kh*Kzt*Kd*Ke*V^2\n= 0.00256*(1.044)*(1)*(0.85)*(1.0)*(150^2) = 51.1 psf\n\nStep 6: Determine External Pressure Coefficient (GCp).\n\nWe are looking at pressures for all zones on the wall and roof.  For the wall we follow Figure 30.3-1:\n\nFor 10 sq ft, we get the following values for GCp.  Note 5 of Figut 30.3-1 indicates that for roof slopes <= 10 Deg that we reduce these values by 10%, and since our roof slope meets this criteria we multiply the figure values by 0.9\n\nZone 4:  GCp = +1.0*0.9 = +0.9 / -1.1*0.9 = -0.99\n\nZone 5: GCp = +1.0*0.9 = +0.9 / -1.4*0.9 = -1.26", null, "", null, "A Monoslope roof with a slope between 3 deg and 10 deg follows Fig 30.3-5A.  For each zone, we get the following values:\n\nZone 1:  GCp = +0.3 / -1.1\n\nZone 2: GCp = +0.3 / -1.3\n\nZone 2′: GCp = +0.3 / -1.6\n\nZone 3: +0.3 / -1.8\n\nZone 3′: +0.3 / -2.6", null, "", null, "Step 7: Calculate wind pressure\n\nWe can then use all of these values to calculate the pressures for the C&C.  Since we have GCp values that are postive and negative, and our GCpi value is also positive and negative, we take the combinations that produce the largest positive value and negative value for pressure:\n\np1 = qh*(GCp – GCpi)\n= 51.1 * (0.3 – (-0.18)) = 24.53 psf    (Zone 1)\n\np2 = 51.1*(-1.1 – (+0.18)) = -65.41   (Zone 1)\n\nThe calculations for Zone 1 are shown here, and all remaining zones are summarized in the adjacent tables.\n\nThese pressures follow the normal ASCE 7 convention, Positive pressures are acting TOWARD the surface, and Negative Pressures are acting AWAY from the surface.", null, "## Chapter 30 Part 4:\n\nChapter 30 Part 4 was the other method we could use.  This is considered a “Simplified” method and is supposed to be easier to calculate by looking up values from tables.  Using the same information as before we will now calculate the C&C pressures using this method.", null, "Step 1:  The category is III\n\nStep 2: V = 150 mph\n\nStep 3: Wind load parameters are the same as earlier\n\nStep 4: For walls and roof we are referred to Table 30.6-2.", null, "Table 30.6-2 (above) refers us to Fig 30.4-1, which is shown below.", null, "Referring back to Table 30.6-2, it indicates in note 5 that when Fig 30.4-1 applies then we must use the adjustment factor Lambda for building height and exposure.\n\nReferring to this table for a h = 40 ft and Exposure C, we get a Lambda value of 1.49.  This value is then multiplied by the value obtained from Fig 30.4-1.", null, "", null, "## Is there an Easier Way?\n\nFortunately, there is an easier way to make this conversion. Meca has developed the MecaWind software, which can make all of these calculations much easier.  As you can see in this example, there are many steps involved and it is very easy to make a mistake.  MecaWind can do a lot of the busy work for you, and let you just focus on your inputs and outputs.\n\nHere are the input and output files associated with these examples:\n\nChapter 30 Part 1:        Input File         Output PDF File\n\nChapter 30 Part 4:        Input File         Output PDF File\n\nWe have worked this same example in MecaWind, and here is the video to show the process.  There is no audio, it is just a 2.5 minute video showing how you enter Part 1 and then switch to Part 4 for the results." ]
[ null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20290%20174'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%201258%20686'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20852%20401'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20456%20491'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20511%20349'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20440%20460'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20820%20458'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20692%20721'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20649%20370'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%201105%20285'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20812%20139'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%201169%20632'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20779%20334'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20513%20567'%3E%3C/svg%3E", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8867509,"math_prob":0.99151194,"size":7170,"snap":"2023-40-2023-50","text_gpt3_token_len":2040,"char_repetition_ratio":0.11652247,"word_repetition_ratio":0.020879941,"special_character_ratio":0.3004184,"punctuation_ratio":0.122132674,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99240506,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-27T22:32:04Z\",\"WARC-Record-ID\":\"<urn:uuid:62878664-48a6-481e-afb2-42931eb01528>\",\"Content-Length\":\"221334\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a7f7e8e3-905d-4460-8f7e-2eadfca775dd>\",\"WARC-Concurrent-To\":\"<urn:uuid:9aa0a9d0-2d52-4b72-a612-a9d8d99f42f2>\",\"WARC-IP-Address\":\"104.21.30.126\",\"WARC-Target-URI\":\"https://www.mecaenterprises.com/components-and-cladding-example/\",\"WARC-Payload-Digest\":\"sha1:AP5Y6D5DB4OUWCD534BN4EJJT3Q4F3GB\",\"WARC-Block-Digest\":\"sha1:2ZKT64U4SJBQDAWJPP4VFPD7FVZ22BKR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510326.82_warc_CC-MAIN-20230927203115-20230927233115-00594.warc.gz\"}"}
https://www.scielo.cl/scielo.php?script=sci_arttext&pid=S0717-97072005000400016&lng=en&nrm=iso
[ "## Indicators\n\n•", null, "Cited by SciELO\n•", null, "Access statistics\n\n## On-line version ISSN 0717-9707\n\n### J. Chil. Chem. Soc. vol.50 no.4 Concepción Dec. 2005\n\n#### http://dx.doi.org/10.4067/S0717-97072005000400016\n\nJ. Chil. Chem. Soc., 50, N° 4 (2005), págs: 739-743\n\nOPTIMIZATION OF PHYSICO-CHEMICAL MODELS USING THE GAUSS-NEWTON METHOD\n\nSALVADOR BARBATO RAVERA*1 ,MIGUEL ALVAREZ CHAVEZ2 .\n\n1 Department of Chemistry, Sciences Faculty. Universidad de Antofagasta. E-mail: [email protected]\n2. Department of Engineering, Engineering Faculty. Universidad de Antofagasta.\n\nABSTRACT\n\nThis study proposes the use of a numerical calculation method for optimizing non-linear physico-chemical models based on the Gauss-Newton algorithm. This method can be applicable to the Wagner-Traud model used to determine the parameters of corrosion and to adsorption isotherm models, like those of Langmuir and Langmuir-Freundlich, to determine the free energy of adsorption. This method in present work was applicated to experimental data of polarization of the iron electrode in medium of sulfuric acid 0.5 M and experimental values of adsorption of 3-Mercaptopropyltrimethoxisilane on copper surface. For the first case (polarization) the results showed with the proposed method was obtained smaller relative errors than the relative errors obtained by the Polynomial method and the computational method of Betacrunch. In the second case, the results showed that the adsorption parameters agree with the model of isotherm of Langmuir-Freundlich, obtaining a free energy of adsorption of 40 kJ/mol.\n\nKeywords: Gauss-Newton algorithm, isotherm of Langmuir and Langmuir-Freundlich, Wagner-Traud equation, corrosion current, Tafel slopes, free energy of adsorption\n\nINTRODUCTION\n\nThe efficiency of interpretation of experimental data requires good determination of the parameters which regulate the physico-chemical model, a condition which is not generally met when the model is non-linear. One example is the use of the non-linear Wagner-Traud (1) model (Eq. 8) for the determination of corrosion currents and Tafel slopes. Due to non-linear characteristics, the usual procedure in information processing of this type is to insert extreme conditions into the model, so that a linear correlation is achieved between the independent and dependent variables at these extremes; this then becomes a problem of partial or incomplete interpretation of the experimental data assimilated in the model2. In order to avoid situations of this type, the present study develops the Gauss-Newton3-8 numerical analysis algorithm for treatment of complete data sets, and we optimize, as examples, two nonlinear physico-chemical models including, (a) the Wagner-Traud corrosion model, and (b) an adsorption isotherm model.\n\nDESCRIPTION OF THE METHOD\n\nWe considered that the function which represents the non-linear physico-chemical model to be optimized is given in its general form by equation 1:", null, "where:\n\ny = Dependent variable.\nx = Independent variable.\naj = parameters of the model.\nn = number of parameters in the mode\nl\n\nThus, if there is a set of experimental values available [yexp, xexp], the parameters aj represents the values to be determined by the model . The squared value of the residual that is generated by each of the experimental data points when comparing the model with the data is given by Eq. 2:", null, "where:\n\nei = square of the residual for each experimental data point i.\ni = 1, m.\nm = Number of experimental data points.\n\nIf we define the residual according to Eq. 3:", null, "Then the optimal values for aj are obtained when the value of the residual is at a minimum. Each of the residuals of Eq. 2 depends on the values of aj. The matrix of Eq.4 shows the variations which affect the residual of Eq. 3:", null, "Given that the function f (x, aj) which represents the model is not linear, and that the system of equations is over-determined (i.e.:there exist a larger number of data points than equations), to find the solution we use the Gauss-Newton algorithm following Eq. 5:", null, "Equation 5 can be written in vectorial form according to Eq. 6:", null, "Where vector represents the parameters which lead to the solution, and vector represents the variation of the parameters in each iteration and which must converge on values which produce minimal residuals.", null, "", null, "RESULTS AND DISCUSSION\n\nGauss-Newton method for determination of corrosion parameters\n\nTheoretical data were generated using the Wagner-Traud model to test the validity of the model for corrosion currents according to Eq. 9:", null, "where:\n\nIc = Corrosion current.\nba = Slope of anodic Tafel.\nbc = Slope of cathodic Tafel.\nh = Overpotential.\n\nThe data generated by this equation are included in Table 1, together with the information generated by the Gauss-Newton method.", null, "According to the data of Table 1, complete coincidence is observed between the theoretical model and the optimized model (residual value=0), validating the proposed method.\n\nExperimental polarization data were selected in order to determine the error of the proposed method. Table 2 and Figure 1 show the optimization of the experimental data of polarization of the Fe electrode in 0.5 M H2SO4 at 25 ºC as obtained by J. Jankowski and R. Jchniewczs9.", null, "", null, "Figure 1. Polarization curve of Fe in H2SO4 0,5 M. Experimental data are compared with optmized model using the Gauss-Newton algorithm\n\nBased on the data from Table 3, it is observed that the Gauss-Newton method showed a lower percentage of relative error, that is, 0.7 %, compared with the polynomial method2 and the Betacrunch10 computational method, demonstrating that the proposed method was stable and reliable in quantifying the corrosion parameters and that it also permitted working over the entire range of overpotential. This cannot be done with the polynomial method, which does not allow the use of overpotential values near zero. In the case of the Betacrunch computational method, work was done within an overpotential range of ± 100 mV and with an odd number of data points.", null, "Gauss-Newton method for determining DGº of adsorption of organic substances that act as corrosion inhibitors.\n\nThe adsorption equilibrium in the electrode / electrolyte interface is described for non-linear models, termed adsorption isotherms, which in their general form are given by the function in Eq.10 as:\n\nf (q X)e-aq = KC000000(10)\n\nwhere:\n\nq = Molar fraction of the surface covered by the adsorbed molecule.\na = Parameter of the molecular interaction.\nX = Relation between the adsorbed molecule and the solvent molecule.\nK = Equilibrium constant of the adsorption.\nC = Concentration of the substance that is adsorbed in the bulk of the solution.\n\nThe constant K is related to the free energy of adsorption by Eq 11:", null, "where:\n\nA = Reciprocal of the solvent concentration.\nDG = Free energy of adsorption.\n\nThe isotherm most used because of its simplicity is that of Langmuir11, Eqs.12 & 13.", null, "", null, "This isotherm has the following restrictions regarding the adsorbed molecules:\n\n o A single molecule per active site. o Supposition of the formation of a molecular monolayer. o No lateral interactions between the molecules.\n\nAs can be observed from equations 12 and 13 the Langmuir adsorption model is non-linear, however, it is common to transform it to a linear form using logarithmic relations or reciprocal values for the data, which implies the use of transformed values in the optimization of the system. The Langmuir isotherm can be improved by introducing the heterogeneity parameter h , thus obtaining the Langmuir-Freundlich12,13 isotherm given by Eq. 14:", null, "The heterogeneity parameter may assume a range of values between 0< h <1 and it is considered that it is a measure of the distribution of energy of adsorption at the different active sites over the surface14. Figures 2 and 3 show the results of the optimization using the Gauss-Newton algorithm proposed for the Langmuir and Langmuir-Fruendlich isotherms in the adsorption of 3-mercaptopropyltrimethoxysilane in copper15. These results are summarized in Table 3.", null, "", null, "Figure 2. Langmuir isotherm model. Degree of coverage vs. concentration of 3-mercaptopropyltrimethoxisilane in Cu Figure 3. Langmuir-Freundlich isotherm model. Degree of coverage vs.concentration of 3-mercaptopropyltrimethoxisilane in Cu\n\nThe data from Table 3 (Figures 2 and 3) show that the results of the free energy of adsorption are similar in the three cases.\n\nNevertheless, the application of Gauss-Newton method to the model of adsorption of Langmuir- Freundlich is obtained the greater coefficient of correlation (0.99). From this result can conclude that the Langmuir-Freundlich isotherm is the best that represent the experimental15 values and therefore, that the free energy of adsorption for this case should be of 40 kJ/mol.", null, "CONCLUSIONS:\n\nApplication of the Gauss-Newton model for optimizing non-linear physico-chemical models produces optimal results.\n\nThe proposed model has the advantage of being applicable to a complete range of experimental data.\n\nThe non-linear model is a good alternative for determining physico-chemical parameters in problems concerning both corrosion and adsorption.\n\nThe proposed model can be applied to any physico-chemical system which has non-linear behavior\n\nALGORITHM CALCULATION:\n\nThe calculation programs developed in this paper were programmed in Matlab 6.5.\n\nREFERENCES:\n\n1. C.Wagner and W.Traud. Z. Elektrochem, 44 (1938) 461.        [ Links ]\n\n2. M.Guzman. P.Ortega, L.Vera. Bol. Soc. Quim, 45 (2000) 191.        [ Links ]\n\n4. Coleman, T.F. and Y. Li, \"An Interior, Trust Region Approach for Nonlinear Minimization Subject to Bounds,\" SIAM Journal on Optimization, Vol. 6, pp. 418-445, 1996.        [ Links ]\n\n5. Coleman, T.F. and Y. Li, \"On the Convergence of Reflective Newton Methods for Large-Scale Nonlinear Minimization Subject to Bounds,\" Mathematical Programming, Vol. 67, Number 2, pp. 189-224, 1994.        [ Links ]\n\n6. Dennis, J. E. Jr., \"Nonlinear Least Squares,\" State of the Art in Numerical Analysis, ed. D. Jacobs, Academic Press, pp. 269-312, 1977.        [ Links ]\n\n7. Levenberg, K., \"A Method for the Solution of Certain Problems in Least Squares, Quarterly Applied Math. 2, pp. 164-168, 1944.        [ Links ]\n\n8. Marquardt, D., \"An Algorithm for Least Squares Estimation of Nonlinear Parameters,\" SIAM Journal Applied Math. Vol. 11, pp. 431-441, 1963.        [ Links ]\n\n9. Jankowski and Jchniewczs, Corros. Sci, 20 (1980) 841.        [ Links ]\n\n10. N. D. Greene and R. H. Gandhi, Mater. Perform, 21 (1982) 34.        [ Links ]\n\n11. I. Langmuir, J. Am. Chem. Soc. 40 (1918) 1361.        [ Links ]\n\n12. R. Sips, J.Chem. Phys., 16 (1948) 490.        [ Links ]\n\n13. R. A. Koble and T. E. Corrigan, Ind. Eng. Chem., 44 (1952) 387.        [ Links ]\n\n14. P. Kern and D. Landoit, J. Electrochem. Soc., 148 (2001) B228.        [ Links ]\n\n15. R.Tremont, H.De Jesús-Cardona, J.Garcia-Orosco, R.J. Castro asnd C.RCabrera, Journal of Applied Electrochemistry 30 (2000) 737.        [ Links ]", null, "All the contents of this journal, except where otherwise noted, is licensed under a Creative Commons Attribution License" ]
[ null, "https://www.scielo.cl/img/en/iconCitedOff.gif", null, "https://www.scielo.cl/img/en/iconStatistics.gif", null, "https://www.scielo.cl/fbpe/img/jcchems/v50n4/for16-01.jpg", null, "https://www.scielo.cl/fbpe/img/jcchems/v50n4/for16-03.jpg", null, "https://www.scielo.cl/fbpe/img/jcchems/v50n4/for16-04.jpg", null, "https://www.scielo.cl/fbpe/img/jcchems/v50n4/for16-05.jpg", null, "https://www.scielo.cl/fbpe/img/jcchems/v50n4/for16-06.jpg", null, "https://www.scielo.cl/fbpe/img/jcchems/v50n4/for16-07.jpg", null, "https://www.scielo.cl/fbpe/img/jcchems/v50n4/for16-08.jpg", null, "https://www.scielo.cl/fbpe/img/jcchems/v50n4/for16-09.jpg", null, "https://www.scielo.cl/fbpe/img/jcchems/v50n4/for16-10.jpg", null, "https://www.scielo.cl/fbpe/img/jcchems/v50n4/tb16-01.jpg", null, "https://www.scielo.cl/fbpe/img/jcchems/v50n4/tb16-02.jpg", null, "https://www.scielo.cl/fbpe/img/jcchems/v50n4/fig16-01.jpg", null, "https://www.scielo.cl/fbpe/img/jcchems/v50n4/tb16-03.jpg", null, "https://www.scielo.cl/fbpe/img/jcchems/v50n4/for16-11.jpg", null, "https://www.scielo.cl/fbpe/img/jcchems/v50n4/for16-12.jpg", null, "https://www.scielo.cl/fbpe/img/jcchems/v50n4/for16-13.jpg", null, "https://www.scielo.cl/fbpe/img/jcchems/v50n4/for16-14.jpg", null, "https://www.scielo.cl/fbpe/img/jcchems/v50n4/fig16-02.jpg", null, "https://www.scielo.cl/fbpe/img/jcchems/v50n4/fig16-03.jpg", null, "https://www.scielo.cl/fbpe/img/jcchems/v50n4/tb16-04.jpg", null, "http://i.creativecommons.org/l/by-nc/4.0/80x15.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8446952,"math_prob":0.92279303,"size":10019,"snap":"2022-27-2022-33","text_gpt3_token_len":2464,"char_repetition_ratio":0.1341987,"word_repetition_ratio":0.008344031,"special_character_ratio":0.23615131,"punctuation_ratio":0.15743138,"nsfw_num_words":2,"has_unicode_error":false,"math_prob_llama3":0.9914509,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46],"im_url_duplicate_count":[null,null,null,null,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-07T05:30:24Z\",\"WARC-Record-ID\":\"<urn:uuid:fe89042f-07e4-4972-9559-d29251731e6a>\",\"Content-Length\":\"42954\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aa787fb7-d23b-409c-a0c0-f0cc1088e6da>\",\"WARC-Concurrent-To\":\"<urn:uuid:5259af54-7163-487e-8ab1-5724a184d125>\",\"WARC-IP-Address\":\"146.83.150.119\",\"WARC-Target-URI\":\"https://www.scielo.cl/scielo.php?script=sci_arttext&pid=S0717-97072005000400016&lng=en&nrm=iso\",\"WARC-Payload-Digest\":\"sha1:NALJNPPMV4TS6KYYURI35PJV7DLO2R4I\",\"WARC-Block-Digest\":\"sha1:VD2G67VISJVAQ27GIGWM7ZYDIOLHCEEC\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104683683.99_warc_CC-MAIN-20220707033101-20220707063101-00766.warc.gz\"}"}
https://www.vedantu.com/rd-sharma-solutions/class-9-maths-chapter-6-exercise-6-3
[ "Courses\nCourses for Kids\nFree study material\nOffline Centres\nMore\n\n# RD Sharma Class 9 Solutions Chapter 6 - Factorization of Polynomials (Ex 6.3) Exercise 6.3", null, "Last updated date: 30th Nov 2023\nTotal views: 525k\nViews today: 7.25k\n\n## What are Polynomials?\n\nPolynomials in class 9 are the basics of algebra where you need these theories for solving future classes problems. Polynomials are nothing but algebraic expressions which have one or more terms with a non-zero coefficient. Every term of the polynomial has at least one coefficient. The word “polynomial” is derived from the “poly” which can be termed as “too many” and the word “nominal” means “the term”.  They are said to be the subset of the algebraic expression. The word polynomial is nothing but the expression which contains one or more terms with a non- zero coefficient.\n\nFor example, the coefficient is 2 for the first term, x is the variable, 2 is the exponent, so it will be in this format 2x2+3x-2. As the variables may have constants, variables, terms, exponents, and variables.  Constants are nothing but the numerical which have a fixed value. Variable is the symbol that is assigned for the different values. Various parts of the expression in the above example which are separated by the sign can be termed as a term. The coefficient is the numerical factor in the expression. Exponent is the number that tells how many times that number needs to be multiplied.\n\nFree PDF download of RD Sharma Class 9 Solutions Chapter 6 - Factorization of Polynomials Exercise 6.3 solved by Expert Mathematics Teachers on Vedantu.com. All Chapter 6 - Factorization of Polynomials Ex 6.3 Questions with Solutions for RD Sharma Class 9 Math to help you to revise the complete Syllabus and Score More marks. Register for online coaching for IIT JEE (Mains & Advanced) and other engineering entrance exams.\n\n## Class 9 Topics for Polynomials\n\n• Introduction\n\n• Polynomials in one variable\n\n• Zeros of polynomials\n\n• Remainder theorem\n\n• Factorization of polynomials\n\n• Algebraic identities\n\n### Exercise 6.3 of Chapter 6 - Factorization of Polynomials\n\nIn this exercise, we will look into the remainder theorem of the factorization of polynomials. As we know about the equation that we use to solve for the sums of division method, the same is the formula we need to find : Dividend = Divisor x Quotient + Remainder. This formula will help you to know how to solve the sums for the remainder theorem. For example f(x) is dividend, k(x) is divisor, q(x) is quotient and g(x) is the remainder wherein we have the equation as f(x)= k(x)- q(x) + g(x).\n\n## FAQs on RD Sharma Class 9 Solutions Chapter 6 - Factorization of Polynomials (Ex 6.3) Exercise 6.3\n\n1. What do you mean by the factorization of polynomials?\n\nFactorization of polynomials means expressing the polynomial as a product of its two or more factors.\n\n2. Where can  I download the free pdf of RD Sharma Class 9 Solutions Chapter 6?\n\nYou can download the free pdf of RD Sharma Class 9 Solutions Chapter 6 from Vedantu’s official website and also from the Vedantu app available in the Google Playstore.\n\n3. Are the Class 9 study materials of Vedantu reliable?\n\nYes, you can completely rely on the study materials provided by Vedantu as these materials are prepared by the expert professionals. The study materials are as per the latest guidelines and are prepared after a thorough research." ]
[ null, "https://seo-fe.vedantu.com/cdn/images/new-header-img/bg2_dw.webp", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9251746,"math_prob":0.94485927,"size":2336,"snap":"2023-40-2023-50","text_gpt3_token_len":507,"char_repetition_ratio":0.12349914,"word_repetition_ratio":0.064432986,"special_character_ratio":0.20505136,"punctuation_ratio":0.08944954,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99766743,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-03T07:30:50Z\",\"WARC-Record-ID\":\"<urn:uuid:4585bf8c-6ebd-47b0-9014-d121b2ee21ec>\",\"Content-Length\":\"194972\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:38885efb-d86c-4374-9aa2-2bff83976e1d>\",\"WARC-Concurrent-To\":\"<urn:uuid:f840b6e7-9295-4e7f-afa1-af35b7823b45>\",\"WARC-IP-Address\":\"108.138.64.12\",\"WARC-Target-URI\":\"https://www.vedantu.com/rd-sharma-solutions/class-9-maths-chapter-6-exercise-6-3\",\"WARC-Payload-Digest\":\"sha1:ACGHICLTINMJ2HNJT5BAL7FVSWNH6CVJ\",\"WARC-Block-Digest\":\"sha1:U4GR57TKEXPSWF666KOOHNN5NZS7POMS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100489.16_warc_CC-MAIN-20231203062445-20231203092445-00814.warc.gz\"}"}
https://www.colorhexa.com/29374a
[ "# #29374a Color Information\n\nIn a RGB color space, hex #29374a is composed of 16.1% red, 21.6% green and 29% blue. Whereas in a CMYK color space, it is composed of 44.6% cyan, 25.7% magenta, 0% yellow and 71% black. It has a hue angle of 214.5 degrees, a saturation of 28.7% and a lightness of 22.5%. #29374a color hex could be obtained by blending #526e94 with #000000. Closest websafe color is: #333333.\n\n• R 16\n• G 22\n• B 29\nRGB color chart\n• C 45\n• M 26\n• Y 0\n• K 71\nCMYK color chart\n\n#29374a color description : Very dark desaturated blue.\n\n# #29374a Color Conversion\n\nThe hexadecimal color #29374a has RGB values of R:41, G:55, B:74 and CMYK values of C:0.45, M:0.26, Y:0, K:0.71. Its decimal value is 2701130.\n\nHex triplet RGB Decimal 29374a `#29374a` 41, 55, 74 `rgb(41,55,74)` 16.1, 21.6, 29 `rgb(16.1%,21.6%,29%)` 45, 26, 0, 71 214.5°, 28.7, 22.5 `hsl(214.5,28.7%,22.5%)` 214.5°, 44.6, 29 333333 `#333333`\nCIE-LAB 22.647, 0.024, -13.513 3.516, 3.698, 7.007 0.247, 0.26, 3.698 22.647, 13.513, 270.1 22.647, -6.488, -15.41 19.23, -1.013, -8.141 00101001, 00110111, 01001010\n\n# Color Schemes with #29374a\n\n• #29374a\n``#29374a` `rgb(41,55,74)``\n• #4a3c29\n``#4a3c29` `rgb(74,60,41)``\nComplementary Color\n• #29484a\n``#29484a` `rgb(41,72,74)``\n• #29374a\n``#29374a` `rgb(41,55,74)``\n• #2c294a\n``#2c294a` `rgb(44,41,74)``\nAnalogous Color\n• #484a29\n``#484a29` `rgb(72,74,41)``\n• #29374a\n``#29374a` `rgb(41,55,74)``\n• #4a2c29\n``#4a2c29` `rgb(74,44,41)``\nSplit Complementary Color\n• #374a29\n``#374a29` `rgb(55,74,41)``\n• #29374a\n``#29374a` `rgb(41,55,74)``\n• #4a2937\n``#4a2937` `rgb(74,41,55)``\n• #294a3c\n``#294a3c` `rgb(41,74,60)``\n• #29374a\n``#29374a` `rgb(41,55,74)``\n• #4a2937\n``#4a2937` `rgb(74,41,55)``\n• #4a3c29\n``#4a3c29` `rgb(74,60,41)``\n• #0e1219\n``#0e1219` `rgb(14,18,25)``\n• #171f29\n``#171f29` `rgb(23,31,41)``\n• #202b3a\n``#202b3a` `rgb(32,43,58)``\n• #29374a\n``#29374a` `rgb(41,55,74)``\n• #32435a\n``#32435a` `rgb(50,67,90)``\n• #3b4f6b\n``#3b4f6b` `rgb(59,79,107)``\n• #445c7b\n``#445c7b` `rgb(68,92,123)``\nMonochromatic Color\n\n# Alternatives to #29374a\n\nBelow, you can see some colors close to #29374a. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #293f4a\n``#293f4a` `rgb(41,63,74)``\n• #293d4a\n``#293d4a` `rgb(41,61,74)``\n• #293a4a\n``#293a4a` `rgb(41,58,74)``\n• #29374a\n``#29374a` `rgb(41,55,74)``\n• #29344a\n``#29344a` `rgb(41,52,74)``\n• #29324a\n``#29324a` `rgb(41,50,74)``\n• #292f4a\n``#292f4a` `rgb(41,47,74)``\nSimilar Colors\n\n# #29374a Preview\n\nThis text has a font color of #29374a.\n\n``<span style=\"color:#29374a;\">Text here</span>``\n#29374a background color\n\nThis paragraph has a background color of #29374a.\n\n``<p style=\"background-color:#29374a;\">Content here</p>``\n#29374a border color\n\nThis element has a border color of #29374a.\n\n``<div style=\"border:1px solid #29374a;\">Content here</div>``\nCSS codes\n``.text {color:#29374a;}``\n``.background {background-color:#29374a;}``\n``.border {border:1px solid #29374a;}``\n\n# Shades and Tints of #29374a\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. In this example, #06080b is the darkest color, while #fdfefe is the lightest one.\n\n• #06080b\n``#06080b` `rgb(6,8,11)``\n• #0d1118\n``#0d1118` `rgb(13,17,24)``\n• #141b24\n``#141b24` `rgb(20,27,36)``\n• #1b2431\n``#1b2431` `rgb(27,36,49)``\n• #222e3d\n``#222e3d` `rgb(34,46,61)``\n• #29374a\n``#29374a` `rgb(41,55,74)``\n• #304057\n``#304057` `rgb(48,64,87)``\n• #374a63\n``#374a63` `rgb(55,74,99)``\n• #3e5370\n``#3e5370` `rgb(62,83,112)``\n• #455d7c\n``#455d7c` `rgb(69,93,124)``\n• #4c6689\n``#4c6689` `rgb(76,102,137)``\n• #536f96\n``#536f96` `rgb(83,111,150)``\n• #5a79a2\n``#5a79a2` `rgb(90,121,162)``\n• #6683aa\n``#6683aa` `rgb(102,131,170)``\n• #728db1\n``#728db1` `rgb(114,141,177)``\n• #7f97b8\n``#7f97b8` `rgb(127,151,184)``\n• #8ca1bf\n``#8ca1bf` `rgb(140,161,191)``\n• #98acc6\n``#98acc6` `rgb(152,172,198)``\n• #a5b6cd\n``#a5b6cd` `rgb(165,182,205)``\n• #b2c0d4\n``#b2c0d4` `rgb(178,192,212)``\n``#becadb` `rgb(190,202,219)``\n• #cbd5e2\n``#cbd5e2` `rgb(203,213,226)``\n• #d7dfe9\n``#d7dfe9` `rgb(215,223,233)``\n• #e4e9f0\n``#e4e9f0` `rgb(228,233,240)``\n• #f1f3f7\n``#f1f3f7` `rgb(241,243,247)``\n• #fdfefe\n``#fdfefe` `rgb(253,254,254)``\nTint Color Variation\n\n# Tones of #29374a\n\nA tone is produced by adding gray to any pure hue. In this case, #36393d is the less saturated color, while #013172 is the most saturated one.\n\n• #36393d\n``#36393d` `rgb(54,57,61)``\n• #323841\n``#323841` `rgb(50,56,65)``\n• #2d3846\n``#2d3846` `rgb(45,56,70)``\n• #29374a\n``#29374a` `rgb(41,55,74)``\n• #25364e\n``#25364e` `rgb(37,54,78)``\n• #203653\n``#203653` `rgb(32,54,83)``\n• #1c3557\n``#1c3557` `rgb(28,53,87)``\n• #17345c\n``#17345c` `rgb(23,52,92)``\n• #133460\n``#133460` `rgb(19,52,96)``\n• #0e3365\n``#0e3365` `rgb(14,51,101)``\n• #0a3269\n``#0a3269` `rgb(10,50,105)``\n• #06326d\n``#06326d` `rgb(6,50,109)``\n• #013172\n``#013172` `rgb(1,49,114)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #29374a is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.50779337,"math_prob":0.6263843,"size":3680,"snap":"2021-43-2021-49","text_gpt3_token_len":1625,"char_repetition_ratio":0.12295974,"word_repetition_ratio":0.011070111,"special_character_ratio":0.56902176,"punctuation_ratio":0.23463687,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99172115,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-19T18:00:18Z\",\"WARC-Record-ID\":\"<urn:uuid:f619b1c5-f1e8-493c-9264-2da6be1dab7f>\",\"Content-Length\":\"36115\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d84c05b5-28e2-47ce-9291-d6b17245fe4a>\",\"WARC-Concurrent-To\":\"<urn:uuid:a09425b5-06be-440e-b718-39249fef87bb>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/29374a\",\"WARC-Payload-Digest\":\"sha1:JZKSOGG6HUU6MLW5USLJ56QYLLT5QYV4\",\"WARC-Block-Digest\":\"sha1:X33GQCPYYFJZHZWX7OLGUQZC26Y5MQJX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585280.84_warc_CC-MAIN-20211019171139-20211019201139-00626.warc.gz\"}"}
https://export.arxiv.org/abs/1812.00067?context=math
[ "math\n\n# Title: The investigation of Euler's totient function preimages\n\nAbstract: We propose a lower estimation for computing quantity of the inverses of Euler's function. We answer the question about the multiplicity of $m$ in the equation $\\varphi(x) = m$ \\cite{Ford}. An analytic expression for exact multiplicity of $m = {{2}^{{{2}^{n}}+a}}$, where $a\\in N$, $a<{{2}^{n}}$, $\\varphi(t)={{2}^{{{2}^{n}}+a}}$ was obtained. A lower bound of inverses number for arbitrary $m$ was found. New numerical metric was proposed.\n Comments: \"Sixth International Conference on Analytic Number Theory and 11 Spatial Tessellations. Voronoy Conference\" Subjects: Number Theory (math.NT) MSC classes: 11R20 Journal reference: Sixth International Conference on Analytic Number Theory.(2018) 39. ages\" Sixth International Conference on Analytic Number Theory and 11 Spatial Tessellations. Voronoy Conference\" Book of abstracts. pp. 37- 39 Report number: 02 Cite as: arXiv:1812.00067 [math.NT] (or arXiv:1812.00067v2 [math.NT] for this version)\n\n## Submission history\n\nFrom: Ruslan Skuratovskii Viacheslavovich [view email]\n[v1] Fri, 30 Nov 2018 21:52:17 GMT (11kb)\n[v2] Fri, 22 Feb 2019 22:36:06 GMT (12kb)\n\nLink back to: arXiv, form interface, contact." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.79753965,"math_prob":0.9406936,"size":894,"snap":"2022-40-2023-06","text_gpt3_token_len":272,"char_repetition_ratio":0.08539326,"word_repetition_ratio":0.0,"special_character_ratio":0.32550335,"punctuation_ratio":0.15294118,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95428026,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-29T14:54:08Z\",\"WARC-Record-ID\":\"<urn:uuid:218bc5c7-e6bb-4d42-b60d-2e9fb1250e81>\",\"Content-Length\":\"16341\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3553e9f0-e570-497c-ae27-b94ceae747c5>\",\"WARC-Concurrent-To\":\"<urn:uuid:e55ee520-00dd-4b72-baa0-e2723d681bb2>\",\"WARC-IP-Address\":\"128.84.21.203\",\"WARC-Target-URI\":\"https://export.arxiv.org/abs/1812.00067?context=math\",\"WARC-Payload-Digest\":\"sha1:GJPFKMKQ427ZD4B74JUQOV36Q3WOCZK5\",\"WARC-Block-Digest\":\"sha1:OKRUEHM66HFCEFOZD7CZN63QGTAZMAEX\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499744.74_warc_CC-MAIN-20230129144110-20230129174110-00158.warc.gz\"}"}
https://developer.mozilla.org/zh-CN/docs/Web/JavaScript/Guide/Iterators_and_Generators
[ "# 迭代器和生成器\n\n## 迭代器\n\nJavascript中最常见的迭代器是Array迭代器,它只是按顺序返回关联数组中的每个值。 虽然很容易想象所有迭代器都可以表示为数组,但事实并非如此。 数组必须完整分配,但迭代器仅在必要时使用,因此可以表示无限大小的序列,例如0和无穷大之间的整数范围。\n\n```function makeRangeIterator(start = 0, end = Infinity, step = 1) {\nlet nextIndex = start;\nlet iterationCount = 0;\n\nconst rangeIterator = {\nnext: function() {\nlet result;\nif (nextIndex < end) {\nresult = { value: nextIndex, done: false }\nnextIndex += step;\niterationCount++;\nreturn result;\n}\nreturn { value: iterationCount, done: true }\n}\n};\nreturn rangeIterator;\n}```\n\n```let it = makeRangeIterator(1, 10, 2);\n\nlet result = it.next();\nwhile (!result.done) {\nconsole.log(result.value); // 1 3 5 7 9\nresult = it.next();\n}\n\nconsole.log(\"Iterated over sequence of size: \", result.value); // 5\n\n```\n\n## 生成器函数\n\n```function* makeRangeIterator(start = 0, end = Infinity, step = 1) {\nfor (let i = start; i < end; i += step) {\nyield i;\n}\n}\nvar a = makeRangeIterator(1,10,2)\na.next() // {value: 1, done: false}\na.next() // {value: 3, done: false}\na.next() // {value: 5, done: false}\na.next() // {value: 7, done: false}\na.next() // {value: 9, done: false}\na.next() // {value: undefined, done: true}\n```\n\n## 可迭代对象\n\n### 自定义的可迭代对象\n\n```var myIterable = {\n*[Symbol.iterator]() {\nyield 1;\nyield 2;\nyield 3;\n}\n}\n\nfor (let value of myIterable) {\nconsole.log(value);\n}\n// 1\n// 2\n// 3\n\n// 或者\n\n[...myIterable]; // [1, 2, 3]\n```\n\n### 内置可迭代对象\n\n`String``Array``TypedArray``Map``Set` 都是内置可迭代对象,因为它们的原型对象都拥有一个 `Symbol.iterator` 方法。\n\n### 用于可迭代对象的语法\n\n```for (let value of ['a', 'b', 'c']) {\nconsole.log(value);\n}\n// \"a\"\n// \"b\"\n// \"c\"\n\n[...'abc']; // [\"a\", \"b\", \"c\"]\n\nfunction* gen() {\nyield* ['a', 'b', 'c'];\n}\n\ngen().next(); // { value: \"a\", done: false }\n\n[a, b, c] = new Set(['a', 'b', 'c']);\na; // \"a\"\n\n```\n\n## 高级生成器\n\nThe `next()` 方法也接受一个参数用于修改生成器内部状态。传递给 `next()` 的参数值会被yield接收。要注意的是,传给第一个 `next()` 的值会被忽略。\n\n```function* fibonacci() {\nvar fn1 = 0;\nvar fn2 = 1;\nwhile (true) {\nvar current = fn1;\nfn1 = fn2;\nfn2 = current + fn1;\nvar reset = yield current;\nif (reset) {\nfn1 = 0;\nfn2 = 1;\n}\n}\n}\n\nvar sequence = fibonacci();\nconsole.log(sequence.next().value); // 0\nconsole.log(sequence.next().value); // 1\nconsole.log(sequence.next().value); // 1\nconsole.log(sequence.next().value); // 2\nconsole.log(sequence.next().value); // 3\nconsole.log(sequence.next().value); // 5\nconsole.log(sequence.next().value); // 8\nconsole.log(sequence.next(true).value); // 0\nconsole.log(sequence.next().value); // 1\nconsole.log(sequence.next().value); // 1\nconsole.log(sequence.next().value); // 2```" ]
[ null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.8176056,"math_prob":0.98620266,"size":4021,"snap":"2020-34-2020-40","text_gpt3_token_len":2184,"char_repetition_ratio":0.17226785,"word_repetition_ratio":0.06935123,"special_character_ratio":0.29296196,"punctuation_ratio":0.24328358,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9940871,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-19T07:36:06Z\",\"WARC-Record-ID\":\"<urn:uuid:69bf202b-8f6d-4cd2-8e23-7f93c4823518>\",\"Content-Length\":\"273909\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:93e29c63-0590-44f0-9dde-bee8964ddeb8>\",\"WARC-Concurrent-To\":\"<urn:uuid:f8763ef3-5a9e-4746-8d6a-d10e8a7743d2>\",\"WARC-IP-Address\":\"13.32.179.14\",\"WARC-Target-URI\":\"https://developer.mozilla.org/zh-CN/docs/Web/JavaScript/Guide/Iterators_and_Generators\",\"WARC-Payload-Digest\":\"sha1:RA47XKQK3FTTMNMRFDSTBNDE5IZGSOSX\",\"WARC-Block-Digest\":\"sha1:GHVQ5WH7LN6K4C5TC6GZNC54JVYWLIPA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400190270.10_warc_CC-MAIN-20200919044311-20200919074311-00138.warc.gz\"}"}
https://www.tutoringhour.com/worksheets/perimeter/circles/
[ "# Circumference of a Circle Worksheets\n\n1. Math >\n2. Geometry >\n3. Perimeter >\n4. Circles\n\nShow your smarts by solving our free worksheets on the circumference of a circle and calculating the distance around a circle. Our pdf exercises help master finding the circumference in terms of pi as well as by using the decimal value 3.14 for pi. The radius or diameter of circles is expressed in customary as well as metric units. Also assess your skills with the printable revision worksheets that feature a mix of circles with diameters and radii. Find the radius or diameter from circumference as you level up!\n\nOur printable circumference of circles worksheets are ideal for students in grade 6, grade 7, and grade 8.\n\nCCSS: 7.G\n\nSelect the Measurement Units\n\nEnable 6th and 7th grade learners to forge ahead while calculating the circumference of circles using their radii! Use π ≈ 3.14, substitute the radius, find the circumference, and round it to the nearest tenth.", null, "Finding Circumference in Terms of Pi Using Radius\n\nRecall the formula for the circumference of a circle, C = 2πr, where r is the radius of the circle, to solve the exercises in this pdf resource on finding circumference in terms of π, meticulously designed for 6th grade children.", null, "Finding Circumference Using Diameter\n\nStudy the circles depicted with their diameters(d) and apply the formula C = πd, where C is the circumference of a circle and d is the diameter, to work out the problems here. Use π ≈ 3.14 and round your answer to the nearest tenth.", null, "Finding Circumference in Terms of Pi Using Diameter\n\nWhat's the relation between a radius and diameter? Explore this as you determine the circumference of circles in terms of π using diameter in these printable worksheets that work best for grade 7 kids!", null, "How do you find the radius(r) of a circle using circumference(C)? Remember r = C/2π is the formula for such problems. Plug in the circumference C, offered in whole numbers as well as decimals, in the formula and solve for the radius r.\n\nFinding Diameter from Circumference Worksheets\n\nProgress to the next level with these pdf exercises where 6th grade, 7th grade, and 8th grade students determine the diameter using the circumference of circles. These worksheets are available in customary and metric units.\n\nFinding Circumference | Mixed Review\n\nChildren in 6th grade and 7th grade will be all agog as they revise the topics by plugging into our pdf worksheets. Apply the formula C = πd, use π ≈ 3.14, and round your answer to the nearest tenth to obtain the circumference.", null, "Area and Circumference of a Circle Worksheets\n\nCruise through a collection of exercises that call for kids to flex their grasp on both the area and the circumference of a circle. Also, get to know how to find one of the two, when the other is given." ]
[ null, "https://www.tutoringhour.com/images/worksheet-th-all-before.svg", null, "https://www.tutoringhour.com/images/worksheet-th-all-before.svg", null, "https://www.tutoringhour.com/images/worksheet-th-all-before.svg", null, "https://www.tutoringhour.com/images/worksheet-th-all-before.svg", null, "https://www.tutoringhour.com/images/worksheet-th-all-before.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89928174,"math_prob":0.912295,"size":2786,"snap":"2023-40-2023-50","text_gpt3_token_len":609,"char_repetition_ratio":0.20956147,"word_repetition_ratio":0.059196617,"special_character_ratio":0.20638908,"punctuation_ratio":0.08952381,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99658984,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-26T06:59:18Z\",\"WARC-Record-ID\":\"<urn:uuid:5deb2c7e-7311-41b6-b790-aefa4cc1fe36>\",\"Content-Length\":\"63799\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:56f420a0-d51e-4a99-8944-3120ed56508d>\",\"WARC-Concurrent-To\":\"<urn:uuid:637942fa-ca5c-4091-b75b-47b50e2e3059>\",\"WARC-IP-Address\":\"67.227.190.101\",\"WARC-Target-URI\":\"https://www.tutoringhour.com/worksheets/perimeter/circles/\",\"WARC-Payload-Digest\":\"sha1:VYS2KC7DLXEWTW35RV4R4NTLY2D3YNEE\",\"WARC-Block-Digest\":\"sha1:EYBUEBAVMOMSA3P2UCK4DZDK73WILMQB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510149.21_warc_CC-MAIN-20230926043538-20230926073538-00131.warc.gz\"}"}
https://metroplexmathcircle.wordpress.com/tag/mersenne-prime/
[ "Feeds:\nPosts\n\n## April 13, 2013 – Dr. Paul Stanford – Primes and Misdemeanors", null, "For 2ⁿ – 1 to be prime we also need n itself to be prime, but that is not sufficient. For example, 2¹¹ – 1 is composite even though 11 is prime.  However, if you look at tables of Mersenne primes it is interesting to note that if you start with 2 and use that to make a new number 2ⁿ – 1 with n = 2 you get 3, then recycling the 3 you get 7, use n = 7 and you get 127, another prime! How long could this go on?\n\nLet f(n) = 2ⁿ – 1.  The iterations you get, starting from 2, are f⁰(2) = 2, f¹(2) = 3, f²(2) = 7, f³(2) = 127, f⁴(2) = 1701411834604692317316873037158884105727.\n\nIt turns out these are all prime!  But what of the next one???  Well, it may be a very long time before any of us know. The largest value of n for which 2ⁿ – 1 is known to be prime is n = 57885161, and that after a concerted effort using volunteers from around the globe.  Not much chance of answering this one in our lifetimes, unless some really new idea arrives.Before you make a hasty conjecture (as has already been done), a cautionary piece of history is in order.  If you define a new function to iterate you get some other interesting numbers.  Let g(n) = n² – 2n + 2. Then the iterates are g⁰(3) = 3, g¹(3) = 5, g²(3) = 17, g³(3) = 257, g⁴(3) = 65537, and all of these are prime!  So, with forgivable excitement, the conjecture was made that all of these will be prime, especially as the next one, g⁵(2) = 4294967297, was much too large at the time for mere mortals to conceive of factoring with their bare hands.\n\nNone, that is, until Euler combined his genius with an impish disbelief in Fermat’s conjecture to discover that g⁵(2) = 4294967297 = 641 * 6700417.  And since then we have found many more composite Fermat numbers, and no further Fermat primes, leading to the complementary conjecture that all the rest are composite!  It seems that we never learn to be humble around these things…\n\nIt takes a larger number to be “forever beyond reach” these days.  Rather than the now puny 4294967297 we cower before f⁵(2) = 2¹⁷⁰¹⁴¹¹⁸³⁴⁶⁰⁴⁶⁹²³¹⁷³¹⁶⁸⁷³⁰³⁷¹⁵⁸⁸⁸⁴⁴¹⁰⁵⁷²⁷ – 1, and who can blame us?" ]
[ null, "https://metroplexmathcircle.files.wordpress.com/2009/08/stanford-paul-2009-08.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.907273,"math_prob":0.98489034,"size":1144,"snap":"2021-31-2021-39","text_gpt3_token_len":417,"char_repetition_ratio":0.11140351,"word_repetition_ratio":0.0,"special_character_ratio":0.35402098,"punctuation_ratio":0.11440678,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95027506,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-28T01:23:40Z\",\"WARC-Record-ID\":\"<urn:uuid:165f2509-3eda-4463-a978-b4f0c48c1d58>\",\"Content-Length\":\"83054\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:56848266-77dc-439c-8ab2-bcf3a9f372f6>\",\"WARC-Concurrent-To\":\"<urn:uuid:bd4cedbb-7bdd-4b8c-b53f-bb41c5e972ed>\",\"WARC-IP-Address\":\"192.0.78.12\",\"WARC-Target-URI\":\"https://metroplexmathcircle.wordpress.com/tag/mersenne-prime/\",\"WARC-Payload-Digest\":\"sha1:MPEVZW723RLRUWEII472VMUKYUSRBOLK\",\"WARC-Block-Digest\":\"sha1:VD4XC7XMKCIYDFHNSROXUU5XPCSIICN6\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780058589.72_warc_CC-MAIN-20210928002254-20210928032254-00414.warc.gz\"}"}
https://www.colorhexa.com/18e96f
[ "# #18e96f Color Information\n\nIn a RGB color space, hex #18e96f is composed of 9.4% red, 91.4% green and 43.5% blue. Whereas in a CMYK color space, it is composed of 89.7% cyan, 0% magenta, 52.4% yellow and 8.6% black. It has a hue angle of 145 degrees, a saturation of 82.6% and a lightness of 50.4%. #18e96f color hex could be obtained by blending #30ffde with #00d300. Closest websafe color is: #00ff66.\n\n• R 9\n• G 91\n• B 44\nRGB color chart\n• C 90\n• M 0\n• Y 52\n• K 9\nCMYK color chart\n\n#18e96f color description : Vivid cyan - lime green.\n\n# #18e96f Color Conversion\n\nThe hexadecimal color #18e96f has RGB values of R:24, G:233, B:111 and CMYK values of C:0.9, M:0, Y:0.52, K:0.09. Its decimal value is 1632623.\n\nHex triplet RGB Decimal 18e96f `#18e96f` 24, 233, 111 `rgb(24,233,111)` 9.4, 91.4, 43.5 `rgb(9.4%,91.4%,43.5%)` 90, 0, 52, 9 145°, 82.6, 50.4 `hsl(145,82.6%,50.4%)` 145°, 89.7, 91.4 00ff66 `#00ff66`\nCIE-LAB 81.629, -71.599, 46.122 32.383, 59.616, 24.838 0.277, 0.51, 59.616 81.629, 85.168, 147.211 81.629, -72.645, 71.734 77.211, -60.257, 34.975 00011000, 11101001, 01101111\n\n# Color Schemes with #18e96f\n\n• #18e96f\n``#18e96f` `rgb(24,233,111)``\n• #e91892\n``#e91892` `rgb(233,24,146)``\nComplementary Color\n• #2ae918\n``#2ae918` `rgb(42,233,24)``\n• #18e96f\n``#18e96f` `rgb(24,233,111)``\n• #18e9d8\n``#18e9d8` `rgb(24,233,216)``\nAnalogous Color\n• #e9182a\n``#e9182a` `rgb(233,24,42)``\n• #18e96f\n``#18e96f` `rgb(24,233,111)``\n• #d818e9\n``#d818e9` `rgb(216,24,233)``\nSplit Complementary Color\n• #e96f18\n``#e96f18` `rgb(233,111,24)``\n• #18e96f\n``#18e96f` `rgb(24,233,111)``\n• #6f18e9\n``#6f18e9` `rgb(111,24,233)``\n• #92e918\n``#92e918` `rgb(146,233,24)``\n• #18e96f\n``#18e96f` `rgb(24,233,111)``\n• #6f18e9\n``#6f18e9` `rgb(111,24,233)``\n• #e91892\n``#e91892` `rgb(233,24,146)``\n• #10a54e\n``#10a54e` `rgb(16,165,78)``\n• #12bc59\n``#12bc59` `rgb(18,188,89)``\n• #14d364\n``#14d364` `rgb(20,211,100)``\n• #18e96f\n``#18e96f` `rgb(24,233,111)``\n• #2feb7e\n``#2feb7e` `rgb(47,235,126)``\n• #47ed8c\n``#47ed8c` `rgb(71,237,140)``\n• #5ef09b\n``#5ef09b` `rgb(94,240,155)``\nMonochromatic Color\n\n# Alternatives to #18e96f\n\nBelow, you can see some colors close to #18e96f. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #18e93b\n``#18e93b` `rgb(24,233,59)``\n• #18e94c\n``#18e94c` `rgb(24,233,76)``\n• #18e95e\n``#18e95e` `rgb(24,233,94)``\n• #18e96f\n``#18e96f` `rgb(24,233,111)``\n• #18e980\n``#18e980` `rgb(24,233,128)``\n• #18e992\n``#18e992` `rgb(24,233,146)``\n• #18e9a3\n``#18e9a3` `rgb(24,233,163)``\nSimilar Colors\n\n# #18e96f Preview\n\nThis text has a font color of #18e96f.\n\n``<span style=\"color:#18e96f;\">Text here</span>``\n#18e96f background color\n\nThis paragraph has a background color of #18e96f.\n\n``<p style=\"background-color:#18e96f;\">Content here</p>``\n#18e96f border color\n\nThis element has a border color of #18e96f.\n\n``<div style=\"border:1px solid #18e96f;\">Content here</div>``\nCSS codes\n``.text {color:#18e96f;}``\n``.background {background-color:#18e96f;}``\n``.border {border:1px solid #18e96f;}``\n\n# Shades and Tints of #18e96f\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. In this example, #000201 is the darkest color, while #effdf5 is the lightest one.\n\n• #000201\n``#000201` `rgb(0,2,1)``\n• #021409\n``#021409` `rgb(2,20,9)``\n• #042612\n``#042612` `rgb(4,38,18)``\n• #05381a\n``#05381a` `rgb(5,56,26)``\n• #074923\n``#074923` `rgb(7,73,35)``\n• #095b2b\n``#095b2b` `rgb(9,91,43)``\n• #0a6d34\n``#0a6d34` `rgb(10,109,52)``\n• #0c7f3c\n``#0c7f3c` `rgb(12,127,60)``\n• #0e9144\n``#0e9144` `rgb(14,145,68)``\n• #10a34d\n``#10a34d` `rgb(16,163,77)``\n• #11b555\n``#11b555` `rgb(17,181,85)``\n• #13c75e\n``#13c75e` `rgb(19,199,94)``\n• #15d966\n``#15d966` `rgb(21,217,102)``\n• #18e96f\n``#18e96f` `rgb(24,233,111)``\n• #2aeb7a\n``#2aeb7a` `rgb(42,235,122)``\n• #3cec85\n``#3cec85` `rgb(60,236,133)``\n• #4eee90\n``#4eee90` `rgb(78,238,144)``\n• #60f09c\n``#60f09c` `rgb(96,240,156)``\n• #72f2a7\n``#72f2a7` `rgb(114,242,167)``\n• #83f3b2\n``#83f3b2` `rgb(131,243,178)``\n• #95f5bd\n``#95f5bd` `rgb(149,245,189)``\n• #a7f7c8\n``#a7f7c8` `rgb(167,247,200)``\n• #b9f8d3\n``#b9f8d3` `rgb(185,248,211)``\n``#cbfadf` `rgb(203,250,223)``\n• #ddfcea\n``#ddfcea` `rgb(221,252,234)``\n• #effdf5\n``#effdf5` `rgb(239,253,245)``\nTint Color Variation\n\n# Tones of #18e96f\n\nA tone is produced by adding gray to any pure hue. In this case, #79887f is the less saturated color, while #05fc6c is the most saturated one.\n\n• #79887f\n``#79887f` `rgb(121,136,127)``\n• #70917e\n``#70917e` `rgb(112,145,126)``\n• #669b7c\n``#669b7c` `rgb(102,155,124)``\n• #5ca57a\n``#5ca57a` `rgb(92,165,122)``\n• #52af79\n``#52af79` `rgb(82,175,121)``\n• #49b877\n``#49b877` `rgb(73,184,119)``\n• #3fc276\n``#3fc276` `rgb(63,194,118)``\n• #35cc74\n``#35cc74` `rgb(53,204,116)``\n• #2bd672\n``#2bd672` `rgb(43,214,114)``\n• #22df71\n``#22df71` `rgb(34,223,113)``\n• #18e96f\n``#18e96f` `rgb(24,233,111)``\n• #0ef36d\n``#0ef36d` `rgb(14,243,109)``\n• #05fc6c\n``#05fc6c` `rgb(5,252,108)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #18e96f is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.51038486,"math_prob":0.70124495,"size":3701,"snap":"2020-24-2020-29","text_gpt3_token_len":1662,"char_repetition_ratio":0.121720314,"word_repetition_ratio":0.011049724,"special_character_ratio":0.55525535,"punctuation_ratio":0.23522854,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99123615,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-07T01:13:48Z\",\"WARC-Record-ID\":\"<urn:uuid:8610234b-a84f-48fc-933e-9d301a7b123e>\",\"Content-Length\":\"36298\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8fc7894a-7b01-4132-ad25-74d6396d6ed7>\",\"WARC-Concurrent-To\":\"<urn:uuid:2c47a3d6-563a-4fd8-a7e0-1a0f9ae7aed6>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/18e96f\",\"WARC-Payload-Digest\":\"sha1:VP4HMY2U42XMZKOG7YC36A2AK7G3WFUC\",\"WARC-Block-Digest\":\"sha1:KT7XY46LFYANMHU4LZKMCGQJMFJRJ5YL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655890566.2_warc_CC-MAIN-20200706222442-20200707012442-00053.warc.gz\"}"}
https://fronteirastral.com/solving-quadratic-equations-by-factoring-worksheet/
[ "# Solving Quadratic Equations by Factoring Worksheet\n\nSolving Quadratic Equations By Factoring Worksheet\n\n60 best factoring and quadratics images on Pinterest from Solving Quadratic Equations By Factoring Worksheet\n, source: pinterest.com", null, "Fantastic Factoring Answers Generator Ideas Printable Math from Solving Quadratic Equations By Factoring Worksheet\n, source: kumander.com", null, "Solve Quadratic Equations by peting the Square Worksheets from Solving Quadratic Equations By Factoring Worksheet\n, source: thoughtco.com", null, ", source: homeshealth.info", null, ", source: homeshealth.info", null, "Solving Quadratic Equations by pleting the Square from Solving Quadratic Equations By Factoring Worksheet\n, source: pinterest.com", null, "Solving Quadratic Equations by Factoring Maze from Solving Quadratic Equations By Factoring Worksheet\n, source: pinterest.com", null, "Lovely Solving Quadratic Equations By Factoring Worksheet Unique from Solving Quadratic Equations By Factoring Worksheet\n, source: latinopoetryreview.com", null, "Mathematics 9 from Solving Quadratic Equations By Factoring Worksheet\n, source: slideshare.net", null, "Solving Quadratic Equations by pleting the Square from Solving Quadratic Equations By Factoring Worksheet\n, source: pinterest.co.uk", null, "Worksheets 41 Fresh Factoring By Grouping Worksheet Full Hd from Solving Quadratic Equations By Factoring Worksheet\n, source: latinopoetryreview.com", null, "Lovely Solving Quadratic Equations By Factoring Worksheet Unique from Solving Quadratic Equations By Factoring Worksheet\n, source: latinopoetryreview.com", null, ", source: thoughtco.com", null, "What is a Quadratic Equation Definition & Examples Video from Solving Quadratic Equations By Factoring Worksheet\n, source: study.com", null, "Word problems involving quadratic equations from Solving Quadratic Equations By Factoring Worksheet\n, source: pinterest.com", null, "60 best factoring and quadratics images on Pinterest from Solving Quadratic Equations By Factoring Worksheet\n, source: pinterest.com", null, ", source: thoughtco.com", null, "116 best Math 12 Stuff images on Pinterest from Solving Quadratic Equations By Factoring Worksheet\n, source: pinterest.com", null, ", source: bonlacfoods.com", null, "Best Solving Quadratic Equations By Factoring Worksheet Luxury from Solving Quadratic Equations By Factoring Worksheet\n, source: latinopoetryreview.com", null, "103 best Quadratics & Polynomials images on Pinterest from Solving Quadratic Equations By Factoring Worksheet\n, source: pinterest.com", null, "48 Beautiful Naming Chemical pounds Worksheet High Resolution from Solving Quadratic Equations By Factoring Worksheet\n, source: latinopoetryreview.com", null, ", source: homeshealth.info", null, "Algebra Worksheet Section 10 5 Factoring Polynomials The Form from Solving Quadratic Equations By Factoring Worksheet\n, source: minimalistgranny.com", null, "Quadratic Master on the App Store from Solving Quadratic Equations By Factoring Worksheet\n, source: itunes.apple.com", null, "Greater Than Less Than Worksheets Math Aids from Solving Quadratic Equations By Factoring Worksheet\n, source: math-aids.com", null, "Lovely Solving Quadratic Equations By Factoring Worksheet Unique from Solving Quadratic Equations By Factoring Worksheet\n, source: latinopoetryreview.com", null, "Use the Quadratic Formula to solve the equations Quadratic formula from Solving Quadratic Equations By Factoring Worksheet\n, source: thoughtco.com", null, "Review Packet 1st Quarter Topics Lessons Tes Teach from Solving Quadratic Equations By Factoring Worksheet\n, source: tes.com", null, "Review Packet 1st Quarter Topics Lessons Tes Teach from Solving Quadratic Equations By Factoring Worksheet\n, source: tes.com", null, ", source: pinterest.com", null, "Best Solving Quadratic Equations By Factoring Worksheet Elegant from Solving Quadratic Equations By Factoring Worksheet\n, source: latinopoetryreview.com", null, "576 best Algebra Ideas images on Pinterest from Solving Quadratic Equations By Factoring Worksheet\n, source: pinterest.com", null, "Lovely Solving Quadratic Equations By Factoring Worksheet Unique from Solving Quadratic Equations By Factoring Worksheet\n, source: latinopoetryreview.com", null, "Factoring Trinomials Worksheet Best Pin Od Pou…¾vate„¾a Jamie from Solving Quadratic Equations By Factoring Worksheet\n, source: tblbiz.info" ]
[ null, "http://winonarasheed.com/wp-content/uploads/fantastic-factoring-answers-generator-ideas-printable-math-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/solve-quadratic-equations-by-peting-the-square-worksheets-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/algebra-2-chapter-5-quadratic-equations-and-functions-answers-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/algebra-2-chapter-5-quadratic-equations-and-functions-answers-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/solving-quadratic-equations-by-pleting-the-square-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/solving-quadratic-equations-by-factoring-maze-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/lovely-solving-quadratic-equations-by-factoring-worksheet-unique-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/mathematics-9-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/solving-quadratic-equations-by-pleting-the-square-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet-1.jpg", null, "http://winonarasheed.com/wp-content/uploads/worksheets-41-fresh-factoring-by-grouping-worksheet-full-hd-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/lovely-solving-quadratic-equations-by-factoring-worksheet-unique-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet-1.jpg", null, "http://winonarasheed.com/wp-content/uploads/quadratic-equation-worksheets-printable-pdf-download-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/what-is-a-quadratic-equation-definition-amp-examples-video-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/word-problems-involving-quadratic-equations-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/60-best-factoring-and-quadratics-images-on-pinterest-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet-1.jpg", null, "http://winonarasheed.com/wp-content/uploads/quadratic-equation-worksheets-printable-pdf-download-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet-1.jpg", null, "http://winonarasheed.com/wp-content/uploads/116-best-math-12-stuff-images-on-pinterest-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/quadratic-equation-word-problems-worksheet-with-answers-worksheets-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/best-solving-quadratic-equations-by-factoring-worksheet-luxury-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/103-best-quadratics-amp-polynomials-images-on-pinterest-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/48-beautiful-naming-chemical-pounds-worksheet-high-resolution-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/algebra-2-chapter-5-quadratic-equations-and-functions-answers-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet-1.jpg", null, "http://winonarasheed.com/wp-content/uploads/algebra-worksheet-section-10-5-factoring-polynomials-the-form-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/quadratic-master-on-the-app-store-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/greater-than-less-than-worksheets-math-aids-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/lovely-solving-quadratic-equations-by-factoring-worksheet-unique-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet-2.jpg", null, "http://winonarasheed.com/wp-content/uploads/use-the-quadratic-formula-to-solve-the-equations-quadratic-formula-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/review-packet-1st-quarter-topics-lessons-tes-teach-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/review-packet-1st-quarter-topics-lessons-tes-teach-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet-1.jpg", null, "http://winonarasheed.com/wp-content/uploads/quadratic-equations-factoring-and-quadratic-formula-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/best-solving-quadratic-equations-by-factoring-worksheet-elegant-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/576-best-algebra-ideas-images-on-pinterest-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet.jpg", null, "http://winonarasheed.com/wp-content/uploads/lovely-solving-quadratic-equations-by-factoring-worksheet-unique-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet-3.jpg", null, "http://winonarasheed.com/wp-content/uploads/factoring-trinomials-worksheet-best-pin-od-pouaavateaaa-jamie-image-below-solving-quadratic-equations-by-factoring-worksheet-of-solving-quadratic-equations-by-factoring-worksheet.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.77271605,"math_prob":0.9266138,"size":4861,"snap":"2021-31-2021-39","text_gpt3_token_len":1057,"char_repetition_ratio":0.34671608,"word_repetition_ratio":0.56935483,"special_character_ratio":0.15778646,"punctuation_ratio":0.14754099,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9915813,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68],"im_url_duplicate_count":[null,3,null,3,null,6,null,6,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-28T02:58:15Z\",\"WARC-Record-ID\":\"<urn:uuid:d63f45fa-4106-45cd-a74a-fae4f5962bb1>\",\"Content-Length\":\"75196\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5a5cfe2f-0ee6-4647-8414-cf07579d1a2d>\",\"WARC-Concurrent-To\":\"<urn:uuid:a1d6e87c-8744-4c72-acba-29750c2bc25f>\",\"WARC-IP-Address\":\"172.67.221.168\",\"WARC-Target-URI\":\"https://fronteirastral.com/solving-quadratic-equations-by-factoring-worksheet/\",\"WARC-Payload-Digest\":\"sha1:NRK74NQXDSYJUMSHZAUWER5QLC6U3HYV\",\"WARC-Block-Digest\":\"sha1:2FYOQ5YT7FAQFFLLWETZBWZDFOHUW6WG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153521.1_warc_CC-MAIN-20210728025548-20210728055548-00585.warc.gz\"}"}
https://exceptionshub.com/datagrid-excel-turning-my-numbers-to-floats.html
[ "Home » excel » datagrid – Excel turning my numbers to floats\n\n# datagrid – Excel turning my numbers to floats\n\nPosted by: admin May 14, 2020 Leave a comment\n\nQuestions:\n\nI have a bit of ASP.NET code that exports data in a datagrid into Excel but I noticed that it messes up a particular field when exporting.\n\nE.g. I have the value of something like 89234010000725515875 in a column in the datagrid but when exported, it turns it into 89234+19.\n\nIs there any Excel formatting that will bring back my original number? Thanks.\n\nHow to&Answers:\n\nExcel isn’t really messing up the field. Two things are happening:\n\n1. Excel formats large numbers in scientific notation. So “89234010000725515875” becomes “8.9234E+19” or “8.9234 x 10 ^ 19”.\n2. The size of the number “89234010000725515875” exceeds the precision in which Excel uses to store values. Excel stores your number as “89234010000725500000” so you’re losing the last five digits.\n\nDepending on your needs you can do one of two things.\n\nYour first option is to change the formatting from “General” to “0” (Number with zero decimal places.) This will give you “89234010000725500000” so you will have lost precision but you will be able to perform calculcations on the number.\n\nThe second option is to format the cell as text “@” or to paste your field with an apostrophe at the beginning of the line to force the value to be text. You’ll get all of the digits but you won’t be able to do calculations of the value.\n\nI hope this helps.\n\n### Answer:\n\nYou can add a space to the field, then when you export it to Excel, it’s considered as string:\n\n``````lblTest.Text = DTInfo.Rows(0).Item(\"Test\") & \" \"\n``````\n\nGood luck.\n\n### Answer:\n\nBelow is the C# source code to do this with SpreadsheetGear for .NET. Since the SpreadsheetGear API is similar to Excel’s API, you should be able to easily adapt this code to Excel’s API to get the same result.\n\nYou can download a free trial here if you want to try it yourself.\n\nDisclaimer: I own SpreadsheetGear LLC\n\n``````using System;\nusing SpreadsheetGear;\n\nnamespace Program\n{\nclass Program\n{\nstatic void Main(string[] args)\n{\n// Create a new workbook and get a reference to A1.\nIWorkbook workbook = Factory.GetWorkbook();\nIWorksheet worksheet = workbook.Worksheets;\nIRange a1 = worksheet.Cells[\"A1\"];\n// Format A1 as Text using the \"@\" format so that the text\n// will not be converted to a number, and put the text in A1.\na1.NumberFormat = \"@\";\na1.Value = \"89234010000725515875\";\n// Show that the formatted value is\nConsole.WriteLine(\"FormattedValue={0}, Raw Value={1}\", a1.Text, a1.Value);\n// Save the workbook.\nworkbook.SaveAs(@\"c:\\tmp\\Text.xls\", FileFormat.Excel8);\nworkbook.SaveAs(@\"c:\\tmp\\Text.xlsx\", FileFormat.OpenXMLWorkbook);\n}\n}\n}\n``````" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7744185,"math_prob":0.7448028,"size":2518,"snap":"2021-21-2021-25","text_gpt3_token_len":632,"char_repetition_ratio":0.11296738,"word_repetition_ratio":0.0,"special_character_ratio":0.29666403,"punctuation_ratio":0.14285715,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95723987,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-23T04:45:30Z\",\"WARC-Record-ID\":\"<urn:uuid:db54cb1e-75f7-42f8-874d-a2bcf3f6afef>\",\"Content-Length\":\"54610\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cc7bec73-c2fe-479f-9ee8-ef981e0c65ab>\",\"WARC-Concurrent-To\":\"<urn:uuid:1b0e4e84-104f-43b5-8ae4-f70ce153421b>\",\"WARC-IP-Address\":\"172.67.182.150\",\"WARC-Target-URI\":\"https://exceptionshub.com/datagrid-excel-turning-my-numbers-to-floats.html\",\"WARC-Payload-Digest\":\"sha1:QWNTLFDB3FD6F7UID4X5Y3TMZ2QC6IEY\",\"WARC-Block-Digest\":\"sha1:SIPEZO6NFF2KDSR7XICSGYLL66QQXBT4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488534413.81_warc_CC-MAIN-20210623042426-20210623072426-00372.warc.gz\"}"}
https://math.stackexchange.com/questions/2191651/triangles-formed-by-the-points-of-contact-of-the-sides-with-the-excircles-and-by
[ "Triangles formed by the points of contact of the sides with the excircles and by that of the sides of the triangle with the inscribed circle.\n\nProve that the triangle formed by the points of contact of the sides of a given triangle with the excircles corresponding to these sides is equivalent to the triangle formed by the points of contact of the sides of the triangle with the inscribed circle.\n\nCan it be approached using Ceva's, Menalaus' or Stewart's theorems?\n\n• Hmm. Is this true? (I'm thinking of a skinny isosceles triangle). What do you mean by \"equivalent\" here? Mar 18 '17 at 1:20\n• Probably, it means having same area according to the same question on Google search. Mar 18 '17 at 1:23\n• I have a GeoGebra sketch that seems to confirm that \"equivalent\" means \"having equal area\" in this context.\n– Blue\nMar 18 '17 at 2:26\n• Please give solution? Mar 18 '17 at 10:50\n\nI don't see a clever way to invoke Ceva, Menelaus, or Stewart here. Nevertheless, there's a general principle at work. Consider $\\triangle ABC$ with points $D$, $E$, $F$ on appropriate sides at shown ...", null, "... where we define $$p := \\frac{|\\overline{BD}|}{|\\overline{BC}|} \\qquad q := \\frac{|\\overline{CE}|}{|\\overline{CA}|} \\qquad r := \\frac{|\\overline{AF}|}{|\\overline{AB}|} \\tag{1}$$\n\n(These are not Ceva-Menelaus ratios.) Then, for instance, since $\\triangle AEF$ shares an angle with $\\triangle ABC$, but the corresponding sides enclosing that angle are scaled by $(1-q)$ and $r$, we can write $$|\\triangle AEF| = (1-q) r\\;|\\triangle ABC| \\tag{2}$$ Likewise, $$|\\triangle BFD| = (1-r)p\\;|\\triangle ABC| \\qquad\\qquad |\\triangle CDE| = (1-p)q\\;|\\triangle ABC| \\tag{3}$$ so that \\begin{align} |\\triangle DEF| &= |\\triangle ABC| - |\\triangle AEF| - |\\triangle BFD| - |\\triangle CDE| \\\\[4pt] &=|\\triangle ABC|\\;\\left(1-(1-q)r-(1-r)p-(1-p)q\\right) \\\\[4pt] &=|\\triangle ABC|\\;\\left( 1 - p - q - r + p q + p r + q r \\right) \\\\[4pt] &=|\\triangle ABC|\\;\\left(\\; (1-p)(1-q)(1-r) + p q r \\;\\right) \\tag{4} \\end{align}\n\nObserve that $(4)$ is obviously unchanged under the substitutions $$p \\leftrightarrow 1-p \\qquad q \\leftrightarrow 1-q \\qquad r \\leftrightarrow 1-r$$\n\nThis implies that,\n\nIf $D$, $E$, $F$, $D^\\prime$, $E^\\prime$, $F^\\prime$ are such that $$\\overline{BD} \\cong \\overline{D^\\prime C} \\qquad \\overline{CE} \\cong \\overline{E^\\prime A} \\qquad \\overline{AF} \\cong \\overline{F^\\prime B} \\tag{\\star}$$ then $$|\\triangle DEF| = |\\triangle D^\\prime E^\\prime F^\\prime| \\tag{\\star\\star}$$", null, "(Note: To avoid marking overlapping segments, the diagram depicts $$\\overline{BD^\\prime} \\cong \\overline{DC} \\qquad \\overline{CE^\\prime} \\cong \\overline{EA} \\qquad \\overline{AF} \\cong \\overline{F^\\prime B}$$ but clearly these conditions are equivalent to $(\\star)$.)\n\nFor the problem at hand, one needs only show that the points of contact of $\\triangle ABC$'s edges with its incircles and excircles make a collection of points $D$, $E$, $F$, $D^\\prime$, $E^\\prime$, $F^\\prime$ satisfying $(\\star)$. Well, this certainly looks true:", null, "Proof is not too difficult. See, for instance, the first part of this answer. $\\square$" ]
[ null, "https://i.stack.imgur.com/LZT4g.png", null, "https://i.stack.imgur.com/k9qUP.png", null, "https://i.stack.imgur.com/cITSD.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.55614924,"math_prob":0.9996842,"size":2210,"snap":"2022-05-2022-21","text_gpt3_token_len":754,"char_repetition_ratio":0.20580235,"word_repetition_ratio":0.01369863,"special_character_ratio":0.35067874,"punctuation_ratio":0.12682927,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999076,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-18T02:05:15Z\",\"WARC-Record-ID\":\"<urn:uuid:3d87d36a-89da-4441-bb51-911c535b9584>\",\"Content-Length\":\"140775\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2df14805-8ebd-4a9d-80c7-a8cf7888c351>\",\"WARC-Concurrent-To\":\"<urn:uuid:70e3c560-a432-452c-9dc6-907b1825d876>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/2191651/triangles-formed-by-the-points-of-contact-of-the-sides-with-the-excircles-and-by\",\"WARC-Payload-Digest\":\"sha1:QRVQUWQ7JH72Z4IJBLHE3E5EPTWVJPHN\",\"WARC-Block-Digest\":\"sha1:IHYSUYYIZREOFTLKJZA6WNTEY465J3PX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320300658.84_warc_CC-MAIN-20220118002226-20220118032226-00394.warc.gz\"}"}
http://napitupulu-jon.appspot.com/posts/paired-data-coursera-statistics.html
[ "# Paired data and Bootstrapping\n\n|   Source\n\nThis will become the foundation for the inference for numerical variables. This blog will discuss how to compare mean for each of the group.Specifically how we can make inference other than CI(because median and other point estimate can't be fit in CLT). We also learn how to do bootstrapping (randomize permuation for each step), working for small example(less than 30, like t-distribution), and comparing many means (ANOVA).", null, "Screenshot taken from Coursera 02:09\n\nThis is an examples where students' score being observed for their read and writing test. This have 200 data. This is of course read and writing are dependant variables. Students with high-achieving scores are most likely having both higher scores. This two variables is what we called paired data.\n\nParameter of Interest that we're going to observe is the population mean. But since we don't know the value, we're going to use average difference of sampled students as point estimate.\n\n$$PoI = \\mu_\\mathbf{diff}$$$$PO = \\bar{x}_\\mathbf{diff}$$", null, "Screenshot taken from Coursera 03:36\n\nHere we have calculate each of the difference of the students and make a summary statistics. Now we should expect that the average difference is zero, but turn out it's not the exact value. Is the difference is due to chance? is it statiscally significant? To test this, we use previous tool, hypothesis testing.\n\n#### Hypothesis Testing for Paired Data¶", null, "Screenshot taken from Coursera 07:17\n\nNow we can use Hypothesis Framework to test our data. The sample size is 200, which confirm higher than 30, but less than 10% population. The students is random sampled, so it is independent of one another. Than we can make a distribution, compute test statistics and shade the p-value. Standard Error can be get, we already have sample size and standard deviation. Because we investigate the different, we doubled the p-value.\n\nThis is what we can do in paired data, where data have 2 variables(can be more) but as long as it dependant, we can reduce to 1 variable. The null value is 0, but can be set with other value to your need. The benefits for HT paired data is same individuals: we can have pre/post studies about one person, identify the variable at the beginning and end of observation, we can have repeated measurement of groups. And when it's not about same individuals, we can use it on different(but dependant variables), such as twins, partners, family, etc.\n\n#### CI intervals for Paired Data¶\n\nUsing the same examples earlier, we calculate its confidence interval. We don't want to just rejecting the hypothesis, but also measures its uncertainty. Recall that significance level used earlier is 5%, which equal to 95% confidence interval. Since we failed to reject null hypothesis, the observed outcome should be within the interval. The interval will be calculated as,\n\nIn :\n%load_ext rpy2.ipython\n\nIn :\n%%R\n\n#95% = 1.96\n#99% = 2.58\n\nn = 200\nmu = -0.545\ns = 8.887\n# z = 1.96\nCL = 0.95\nz = round(qnorm((1-CL)/2,lower.tail=F),digits=2)\nSE = s/sqrt(n)\nME = z*SE\n\nc(mu-ME,mu+ME)\n\n -1.7766754 0.6866754\n\n\nRecall that the difference that we get is reading subtracted by writing, we can make a confident interval statement as, \"We are 95% confident that high school students on average, reading is 1.78 lower to 0.69 higher than writing.\"\n\n#### Comparing Independent Means¶\n\nWe already discussed earlier how we can compare two dependant variables, but how about independent? In this section we discuss how we can make hypothesis testing and confidence interval for independent variables.", null, "Screenshot taken from Coursera 01:12\n\nLet's take a look athe example of 2010 GSS, where among the variables are highest degree, categorical and hours, numerical discrete. We can used side-by-side boxplot to plot between categorical and numerical variables.But what we're currently concern about is whether they got college degree or not. This way we group the degrees into college degree and the opposite, no degree.", null, "Screenshot taken from Coursera 04:20\n\nSo to begin with, we use CI. As usual we estimate PoI and PE. our parameter interest this time, is the different between two variables, and point estimate is the difference in the sampled variables.\n\n$$PoI = \\mu_\\mathbf{col} - \\mu_\\mathbf{nocol}$$$$PO = \\bar{x}_\\mathbf{col} - \\bar{x}_\\mathbf{nocol}$$\n\nNext, we calculate the margin of error. Given the equation, we have our standard error. While explaining why the addition is beyond the scope of this blog, think about how the addition is a joint of two error variability of each of the variables. Therefore, the standard error will be higher.Because the calculations of standard error is different for independent vs dependent variables, we must validate the conditions. To be more precise, the requirements are:", null, "Screenshot taken from Coursera 06:50\n\nWe see some similarities for confidence interval. In addition, pay attention to the groups. These groups must be independent. And the sample size must met the criteria. We already seen that the sample size is different for college vs no college, so we know that it can't be paired(different sample size, independent). We also see that both distributions are not very skewed and the sample size meet the criteria. If it dependent, we must revert back to paired CI earlier.\n\nIn :\nimport numpy as np", null, "Screenshot taken from Coursera 08:13\n\nThen, after the data meet the criteria earlier, we can calculate CI based on the given parameters. What we get is positive 0.66 and 4.14. Since we know that CI is calculated based on $\\bar{x}_\\mathbf{coll} - \\bar{x}_\\mathbf{no coll}$, in CI statement, we say \"College grads work on average is 0.66 to 4.14 hours more per week than those without a college degree.\n\nFor hypothesis testing, we're being skeptical if there is no difference between college degree and without a degree. Alternative one can be different, less than, or greater than. Same conditions and SE have been meassured by CI earlier.", null, "Screenshot taken from Coursera 13:08\n\nIn :\n%R pnorm(2.4,mean=0,sd=0.89,lower.tail=F)\n\nOut:\n<FloatVector - Python:0x10a2727e8 / R:0x10785c278>\n[0.003502]\nIn :\n%%R\ns_1 = 3.4\ns_2 = 2.7\n\nn_1 = 18\nn_2 = 18\n\nsqrt(s_1**2/n_1 + s_2**2/n_2)\n\n 1.023339\n\n\nCalculating the p-value, what we get is 0.7%. Such small p-value would means that we reject the null hypothesis and favor the alternative. Interpret p-value for each of problem is often complex, the best way to interpret this is use the foundation definition of p-value, and work our way up the problem.\n\nWe know that null hypothesis is no difference of work hours on average between college degree and not college degree. We know the observe outcome, which is 2.4. We also know that more extreme, means that at least 2.4, and different means either direction. Based on that foundations, we can make HT statement,\n\n\"If there is no difference of work hours on average between college degree vs non college degree, there is 0.7% chance of obtaining random samples of 505 college and 667 non-college degree give average difference of work hours at least 2.4 hours\". Readers can conclude, that such small probability is very rare, and won't likely happen due to chance.Beside the p-value definition, we also have to mention each of the sample size. Because different sample size would yield different outcome, hence different p-value.\n\n### Bootstrapping¶", null, "Screenshot taken from Coursera 01:35\n\nSo we have twenty samples and skewed distribution. This data is taken from search engine of raleigh, on apartment in Durham, city where Duke university is located. Because this is skewed data, median is better fit. We can't use CLT, since we calculate the median, and the sample size is less than 30. Since no known tools that we can used so far,introduced Bootstrapping.Bootstrapping is different from sampling distribution in that bootstrapping generate samples from sample with replacement, sampling distribution generate samples from population with replacement.", null, "Screenshot taken from Coursera 03:46\n\nThis is what we have to do in bootstrapping. The idea is, there might be some similar value within the population. So we're taking random sampling with replacement, from values in sample. So it's like taking with replacement sampling of Bootstrap from the sample, and bootstrap consider the sample as the population. By doing this, we're regenerate fake data from the samples. Doing this of course we have to have independent variables. Bootstrap can use median,proportion, or any other point estimate.Take a look here where we have only have 5 samples, but resampling with replacement with 6 bootstrap.", null, "Screenshot taken from CS109Harvard\n\nThen bootstrap will perform same step like sampling distribution, but this time its different, and called a distribution of bootstrap statistics. This is very useful method. Even Harvard statistics professor,Joe Blitztein in the CS109 Harvard 2013 Data Science online class, stated, and I quoted here, \" Bootstrap is one of the biggest statistical breakthrough in the 21th century\".", null, "Screenshot taken from Coursera 05:50\n\nSo based on such small sample size, we can construct bootstrap distribution. this should give us a sense of median from the population distribution. If you're taking \\$676 apartment from bootstrap, it's likely that you also have similar value within the population. So next, how do construct CI based on bootstrap statistics?", null, "Screenshot taken from Coursera 07:17\n\nFirst we can use percentile method, the bootstrap distribution can be considered as centered around the population median.Because this looks like normal distribution, we can estimate 95% interval and assign lower bound and upper bound.\n\nUsing standard error method, we can calculate the Standard error, and assigning the point estimate, creating upper bound and lower bound. We don't have to use CLT, but we may take extra steps for regenerating bootstrap sample, using computation.\n\nConsider the following examples.", null, "Screenshot taken from Coursera 08:54\n\nUsing the percentiles method, we simulated the distribution and cut off 5 dot points on each side. Recall that the dot plot is the median of one bootstrap sample. Doing bootstrap doesn't mean we can't infer the population parameter, CI is still always about the population. So we can say, \"We are 95% confident that median rent of 1+ apartments in Durham is somwhere between 740 and 1050 dollar.", null, "Screenshot taken from Coursera 11:02\n\nFrom the bootstrap simulation, we're given mean and SE. Using that as a basis, we calculate the interval\n\nIn :\n%%R\n\n#95% = 1.96\n#99% = 2.58\n\nn = 100\nmu = 882.515\n# s = 89.5758\n# z = 1.96\nCL = 0.9\nz = round(qnorm((1-CL)/2,lower.tail=F),digits=2)\n# SE = s/sqrt(n)\nSE = 89.5759\nME = z*SE\n\nc(mu-ME,mu+ME)\n\n 735.6105 1029.4195\n\n\nIn the plot above, we see that the intervals are different, but pretty close.\n\n#### Bootstrap Limitation¶", null, "Screenshot taken from Coursera 12:14\n\n#### Bootstrap vs Sampling distribution¶\n\n• samping distritbuion with replacement from population, where bootsrap with replacement from sample\n• Both are distributions of sample statistics. CLT can explicitly describe the distribution of the population, where bootstrap also describe that using one sample.\n\n#### Summary¶\n\nBootstrap can be created by using with replacement in one sample. This is different from sampling distribution, where it takes with replacement from population.We can use percentile method; take 100 sample size bootstrap and cut off the sided for XX% interval, or calculate percentile based on the known condition that the distribution is normal, use point estimate bootstrap and standard error bootstrap.There's one weakness of bootstrap, is that when you have skew and sparse bootstrap distribution, it's not reliable.\n\nPaired data, is when you have one observation dependent on other variable.We can use these set differences as a basis to use hypothesis testing and confidence interval.\n\nREFERENCES:" ]
[ null, "http://napitupulu-jon.appspot.com/galleries/coursera-statistics/4w1.jpg", null, "http://napitupulu-jon.appspot.com/galleries/coursera-statistics/4w2.jpg", null, "http://napitupulu-jon.appspot.com/galleries/coursera-statistics/4w3.jpg", null, "http://napitupulu-jon.appspot.com/galleries/coursera-statistics/4w4.jpg", null, "http://napitupulu-jon.appspot.com/galleries/coursera-statistics/4w5.jpg", null, "http://napitupulu-jon.appspot.com/galleries/coursera-statistics/4w6.jpg", null, "http://napitupulu-jon.appspot.com/galleries/coursera-statistics/4w7.jpg", null, "http://napitupulu-jon.appspot.com/galleries/coursera-statistics/4w8.jpg", null, "http://napitupulu-jon.appspot.com/galleries/coursera-statistics/4w9.jpg", null, "http://napitupulu-jon.appspot.com/galleries/coursera-statistics/4w10.jpg", null, "http://napitupulu-jon.appspot.com/galleries/coursera-statistics/4w11.jpg", null, "http://napitupulu-jon.appspot.com/galleries/coursera-statistics/4w12.jpg", null, "http://napitupulu-jon.appspot.com/galleries/coursera-statistics/4w13.jpg", null, "http://napitupulu-jon.appspot.com/galleries/coursera-statistics/4w14.jpg", null, "http://napitupulu-jon.appspot.com/galleries/coursera-statistics/4w15.jpg", null, "http://napitupulu-jon.appspot.com/galleries/coursera-statistics/4w16.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91946316,"math_prob":0.97828454,"size":10893,"snap":"2019-13-2019-22","text_gpt3_token_len":2463,"char_repetition_ratio":0.1367435,"word_repetition_ratio":0.019241653,"special_character_ratio":0.22711833,"punctuation_ratio":0.12505877,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.993779,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-21T04:16:55Z\",\"WARC-Record-ID\":\"<urn:uuid:749e356d-af4c-4b18-81a4-476487e1e772>\",\"Content-Length\":\"37047\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:902527ca-fe8b-4ca4-a618-f455884575f6>\",\"WARC-Concurrent-To\":\"<urn:uuid:3ab832aa-1868-4bd1-9196-fd794b8c4857>\",\"WARC-IP-Address\":\"172.217.15.116\",\"WARC-Target-URI\":\"http://napitupulu-jon.appspot.com/posts/paired-data-coursera-statistics.html\",\"WARC-Payload-Digest\":\"sha1:3PWIC5H6S4GV3GOQIZYR7MPI4C5PCGBE\",\"WARC-Block-Digest\":\"sha1:SXPGRCWAENDGHCL2ZUOY5ESX2GS4O6G7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202484.31_warc_CC-MAIN-20190321030925-20190321052925-00285.warc.gz\"}"}
http://wiki.san-ss.com.ar/project-euler-problem-026
[ "Project Euler Problem 026\n\n# Statement\n\nA unit fraction contains 1 in the numerator. The decimal representation of the unit fractions with denominators 2 to 10 are given:\n\n(1)\n\\begin{align} \\begin{flushleft} 1/2 = 0.5 1/3 = 0.(3) 1/4 = 0.25 1/5 = 0.2 1/6 = 0.1(6) 1/7 = 0.(142857) 1/8 = 0.125 1/9 = 0.(1) 1/10 = 0.1 \\end{align}\n\nWhere 0.1(6) means 0.166666…, and has a 1-digit recurring cycle. It can be seen that $1/7$ has a 6-digit recurring cycle.\n\nFind the value of d < 1000 for which $1/d$ contains the longest recurring cycle in its decimal fraction part.\n\n# Solution\n\nHere the problem is how to determinate when the recurrence has started. After thinking deeply it's pretty obvious, when we find\nthat the tuple formed by the digit that we have to add to the result and the module that we have to continue dividing has\nappeared before then we have a chain.\n\ndef find_cycle_length(n):\ntmp = 1\ndecimal = []\ndigit = tmp / n\nmodule = tmp % n\nwhile (digit, module) not in decimal:\ndecimal.append((digit, module))\ntmp = module * 10\ndigit = tmp // n\nmodule = tmp % n\nwhile (digit, module) != decimal:\ndecimal.pop(0)\nreturn len(decimal)\n\nif __name__ == '__main__':\nresult = 0\nmax_len = 0\nfor d in range(2,1000):\nlength = find_cycle_length(d)\nif length > max_len:\nresult = d\nmax_len = length\nprint(\"The result is:\", result)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.73118246,"math_prob":0.99953187,"size":1357,"snap":"2019-51-2020-05","text_gpt3_token_len":414,"char_repetition_ratio":0.12564671,"word_repetition_ratio":0.042372882,"special_character_ratio":0.35003686,"punctuation_ratio":0.12587413,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9995302,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-15T01:08:04Z\",\"WARC-Record-ID\":\"<urn:uuid:fe6f5b48-2614-48b5-ae15-59ab3bb9edf4>\",\"Content-Length\":\"29785\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:07634cef-7e20-4d4d-b97f-87b434b14057>\",\"WARC-Concurrent-To\":\"<urn:uuid:8073d7d4-ffb0-450d-8c54-d6252daa4ad2>\",\"WARC-IP-Address\":\"107.20.139.170\",\"WARC-Target-URI\":\"http://wiki.san-ss.com.ar/project-euler-problem-026\",\"WARC-Payload-Digest\":\"sha1:OEVMVQKKGLV4Y5FSDQNQBOW3ZJJRG5BU\",\"WARC-Block-Digest\":\"sha1:WAFOGTNPXFNUPI6PYGW2KCBDVFBYYBRL\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541297626.61_warc_CC-MAIN-20191214230830-20191215014830-00449.warc.gz\"}"}
http://bance.transform-eu.org/how-to-calculate-a-timecard/
[ "# How To Calculate A Timecard\n\nhow to calculate hours worked ontheclock .\n\nsample time card calculator 7 documents in pdf .\n\nfree time card calculator timesheet calculator for excel .\n\nhow to calculate hours worked ontheclock .\n\ntime card calculator .\n\n10 best time card calculators getsling .\n\nfree time card calculator timesheet calculator for excel .\n\ntime card calculator .\n\nhow to calculate overtime hours on a time card in excel .\n\ntimeclicks free online time card calculator .\n\ncopleys timecard cycling studio .\n\nsample time card calculator 7 documents in pdf .\n\nfree excel time card calculator with lunch and overtime .\n\ncreate time card calculator in microsoft excel 2013 2016 tips and tricks itfriend .\n\nexcel formula basic timesheet formula with breaks exceljet .\n\nhow to calculate a timecard with your eyes closed .\n\nfree time card calculator reviews and pricing 2019 .\n\nfree excel time card calculator with lunch and overtime .\n\nweekly time card calculator .\n\ntime card calculator excel major magdalene project org .\n\ntimesheet in excel guide to create timesheet calculator .\n\nfree timesheet calculator online time card calculator .\n\n001 excel time card template ideas monthly rare microsoft .\n\nworking with time cards answer key fill online printable .\n\nfree time card calculator timesheet calculator for excel .\n\nemployee timesheets excel template time card work hours calculator .\n\n10 best time card calculators getsling .\n\neasy free time card calculator with lunch breaks and .\n\ntime card calculator software .\n\ntime sheet calculator .\n\ntime card calculator .\n\nhours calculator .\n\nemployee timecard daily weekly monthly and yearly .\n\ncopleys timecard cycling studio .\n\nhow to calculate an employees time card using military time .\n\nhow to calculate a time card .\n\ntime card calculator geek awwwards nominee .\n\ntime card calculator free timesheet calculator with lunch .\n\nexcel formula basic timesheet formula with breaks exceljet .\n\ntime rounding in virtual timeclock .\n\nemployee timesheet template excel time card work hours .\n\nexcel for commerce time card calculator payroll template .\n\nweb based online time card calculator for construction .\n\nbi weekly timecard calculator with lunch break wages and ot .\n\nhow to calculate time in google sheets .\n\nfree online time card calculator time clock wizard .\n\ncalculating your paycheck weekly time card 1 fill online .\n\nexcel for commerce time card calculator payroll template .\n\nfree time card calculator homebase .\n\nhumanity demo video .\n\ntime card calculator .\n\n2 weeks timecard template templates report card template .\n\nexcel timesheet calculator how to calculate hours in excel .\n\naccounting user guide media services mobile .\n\nemployee time clock .\n\nreporting activities in virtual timeclock .\n\nfree online time card calculator time clock wizard .\n\ntime card calculator lets automate it honeybeebase .\n\nopen dental software manage time cards .\n\nwhy actual cost and actual hours are not calculating from .\n\nbiweekly timesheet calculator free time card employee excel .\n\ntime card calculator biweekly xlsx biweekly time card .\n\nbuild a simple timesheet in excel techrepublic .\n\n10 best time card calculators getsling .\n\naccounting user guide media services mobile .\n\nannual bi weekly timecard payroll calculator .\n\ncalculated industries used timecard tabulator ii 9530 time .\n\nexcel time card template step by guide to create time card .\n\ncalculating upunch time clock bundle with 200 cards 3 ribbons 2 time card racks 2 keys hn4500 .\n\ntime card calculator excel major magdalene project org .\n\nhow to build a time card in excel time card template .\n\nfree timesheet calculator for payroll timecard calculator .\n\nrules in calculating time cards manually chron com .\n\ncrew time card the entertainment industry standard for .\n\nopen dental software timecard .\n\nbuild a simple timesheet in excel techrepublic .\n\nhow to calculate hours worked and minus lunch time in excel .\n\nhow to create a simple excel timesheet .\n\ntimecard calculator 9 1650x1275 bi brucker holz de frisch .\n\ncopleys timecard calculator best of biweekly timesheet .\n\nhow to calculate time in google sheets .\n\ncheck timecards for mistakes hr payroll and employee .\n\n7 8 time card calculator with lunch break star wars .\n\ngo green with this paperless time card app .\n\nhow to calculate payroll time in quarters .\n\ndpc 3 94 06 04 the timecard tab on center software support ." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.71927476,"math_prob":0.9945629,"size":4137,"snap":"2020-45-2020-50","text_gpt3_token_len":814,"char_repetition_ratio":0.3048633,"word_repetition_ratio":0.1826087,"special_character_ratio":0.19917814,"punctuation_ratio":0.11527377,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95758724,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-20T08:18:25Z\",\"WARC-Record-ID\":\"<urn:uuid:b9fff3fe-5f0a-450e-9cb2-3e3cb88047f3>\",\"Content-Length\":\"83834\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0ca59fc0-3dfe-44b4-8148-d4be46b178fa>\",\"WARC-Concurrent-To\":\"<urn:uuid:c095611d-5eda-429f-a009-d8a8649ab764>\",\"WARC-IP-Address\":\"213.202.241.219\",\"WARC-Target-URI\":\"http://bance.transform-eu.org/how-to-calculate-a-timecard/\",\"WARC-Payload-Digest\":\"sha1:RZSTMM3DBHMVPKAUFHODGTTYDZUOVKKZ\",\"WARC-Block-Digest\":\"sha1:LHMN5YBU47NMLPL545GY2Y5Y642N7UQN\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107871231.19_warc_CC-MAIN-20201020080044-20201020110044-00532.warc.gz\"}"}
https://metanumbers.com/552063
[ "## 552063\n\n552,063 (five hundred fifty-two thousand sixty-three) is an odd six-digits composite number following 552062 and preceding 552064. In scientific notation, it is written as 5.52063 × 105. The sum of its digits is 21. It has a total of 3 prime factors and 8 positive divisors. There are 361,688 positive integers (up to 552063) that are relatively prime to 552063.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity Odd\n• Number length 6\n• Sum of Digits 21\n• Digital Root 3\n\n## Name\n\nShort name 552 thousand 63 five hundred fifty-two thousand sixty-three\n\n## Notation\n\nScientific notation 5.52063 × 105 552.063 × 103\n\n## Prime Factorization of 552063\n\nPrime Factorization 3 × 59 × 3119\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 3 Total number of distinct prime factors Ω(n) 3 Total number of prime factors rad(n) 552063 Product of the distinct prime numbers λ(n) -1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) -1 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 552,063 is 3 × 59 × 3119. Since it has a total of 3 prime factors, 552,063 is a composite number.\n\n## Divisors of 552063\n\n8 divisors\n\n Even divisors 0 8 4 4\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 8 Total number of the positive divisors of n σ(n) 748800 Sum of all the positive divisors of n s(n) 196737 Sum of the proper positive divisors of n A(n) 93600 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 743.009 Returns the nth root of the product of n divisors H(n) 5.89811 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 552,063 can be divided by 8 positive divisors (out of which 0 are even, and 8 are odd). The sum of these divisors (counting 552,063) is 748,800, the average is 93,600.\n\n## Other Arithmetic Functions (n = 552063)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 361688 Total number of positive integers not greater than n that are coprime to n λ(n) 90422 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 45368 Total number of primes less than or equal to n r2(n) 0 The number of ways n can be represented as the sum of 2 squares\n\nThere are 361,688 positive integers (less than 552,063) that are coprime with 552,063. And there are approximately 45,368 prime numbers less than or equal to 552,063.\n\n## Divisibility of 552063\n\n m n mod m 2 3 4 5 6 7 8 9 1 0 3 3 3 1 7 3\n\nThe number 552,063 is divisible by 3.\n\n## Classification of 552063\n\n• Arithmetic\n• Deficient\n\n### Expressible via specific sums\n\n• Polite\n• Non-hypotenuse\n\n• Square Free\n\n### Other numbers\n\n• LucasCarmichael\n• Sphenic\n\n## Base conversion (552063)\n\nBase System Value\n2 Binary 10000110110001111111\n3 Ternary 1001001021210\n4 Quaternary 2012301333\n5 Quinary 120131223\n6 Senary 15455503\n8 Octal 2066177\n10 Decimal 552063\n12 Duodecimal 227593\n16 Hexadecimal 86c7f\n20 Vigesimal 39033\n36 Base36 btz3\n\n## Basic calculations (n = 552063)\n\n### Multiplication\n\nn×i\n n×2 1104126 1656189 2208252 2760315\n\n### Division\n\nni\n n⁄2 276032 184021 138016 110413\n\n### Exponentiation\n\nni\n n2 304773555969 168254203628914047 92886920417989175528961 51279431946716358210044796543\n\n### Nth Root\n\ni√n\n 2√n 743.009 82.0344 27.2582 14.0734\n\n## 552063 as geometric shapes\n\n### Circle\n\nRadius = n\n Diameter 1.10413e+06 3.46871e+06 9.57474e+11\n\n### Sphere\n\nRadius = n\n Volume 7.04782e+17 3.8299e+12 3.46871e+06\n\n### Square\n\nLength = n\n Perimeter 2.20825e+06 3.04774e+11 780735\n\n### Cube\n\nLength = n\n Surface area 1.82864e+12 1.68254e+17 956201\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 1.65619e+06 1.31971e+11 478101\n\n### Triangular Pyramid\n\nLength = n\n Surface area 5.27883e+11 1.98289e+16 450758\n\n## Cryptographic Hash Functions\n\nmd5 d87a90f9b18002e714653181490097bb 4ef68ed69f26ecf16ce144bbfece89e9d2220aae 4a977e5a07f980302e59baee67d9bbd81e410eb8260b13acdb00341ac47bc992 b51df9449a6501580fcb8a61296cb4b8d79a911b42142498e5f279edb27e5f4bd316fb6c99322a1066014dc32d7b3ee9c65bae67f102de007b86dd35a64276c9 493917da243b21e4448e7101da2d6aa4a89393de" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5980889,"math_prob":0.98190355,"size":4653,"snap":"2021-21-2021-25","text_gpt3_token_len":1632,"char_repetition_ratio":0.12067111,"word_repetition_ratio":0.028023599,"special_character_ratio":0.46292713,"punctuation_ratio":0.075738125,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99669045,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-23T14:43:15Z\",\"WARC-Record-ID\":\"<urn:uuid:f1db6482-a6c6-4d38-9e47-4c5e2df79a01>\",\"Content-Length\":\"60288\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:760fe981-697d-4119-aeca-ee339e9e56e0>\",\"WARC-Concurrent-To\":\"<urn:uuid:ac36eb96-92fc-4ed5-93e7-be342943a61c>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/552063\",\"WARC-Payload-Digest\":\"sha1:7CTDXIR3ZNTY2GGRDIIC5UZM7IBEITBK\",\"WARC-Block-Digest\":\"sha1:WS7Z35RF5NX6NXG72BOY57WUKG6TT63O\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488539480.67_warc_CC-MAIN-20210623134306-20210623164306-00546.warc.gz\"}"}
https://answers.everydaycalculation.com/subtract-fractions/90-40-minus-2-60
[ "Solutions by everydaycalculation.com\n\n## Subtract 2/60 from 90/40\n\n1st number: 2 10/40, 2nd number: 2/60\n\n90/40 - 2/60 is 133/60.\n\n#### Steps for subtracting fractions\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 40 and 60 is 120\n2. For the 1st fraction, since 40 × 3 = 120,\n90/40 = 90 × 3/40 × 3 = 270/120\n3. Likewise, for the 2nd fraction, since 60 × 2 = 120,\n2/60 = 2 × 2/60 × 2 = 4/120\n4. Subtract the two fractions:\n270/120 - 4/120 = 270 - 4/120 = 266/120\n5. After reducing the fraction, the answer is 133/60\n6. In mixed form: 213/60\n\nMathStep (Works offline)", null, "Download our mobile app and learn to work with fractions in your own time:" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8517526,"math_prob":0.99085647,"size":731,"snap":"2020-24-2020-29","text_gpt3_token_len":287,"char_repetition_ratio":0.14305365,"word_repetition_ratio":0.0,"special_character_ratio":0.49384406,"punctuation_ratio":0.09756097,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99812967,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-03T00:56:44Z\",\"WARC-Record-ID\":\"<urn:uuid:dd532027-2c95-467f-9333-352c62c4b2a2>\",\"Content-Length\":\"7639\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d0867269-da4b-4b3f-9705-6c19856f400f>\",\"WARC-Concurrent-To\":\"<urn:uuid:50a6a1af-de4d-48c7-a8f0-e9904c6d3d89>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/subtract-fractions/90-40-minus-2-60\",\"WARC-Payload-Digest\":\"sha1:FAE53UX6F56PRD6OTGFGYVFQ4OGDOR7B\",\"WARC-Block-Digest\":\"sha1:EZSZBARU6M6WJNASPXSTX3J65VZVDZ3X\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347426956.82_warc_CC-MAIN-20200602224517-20200603014517-00279.warc.gz\"}"}