http://mathematica.stackexchange.com/questions/13320/how-to-pull-scalars-out-of-a-function-that-should-act-on-lists?answertab=votes
# How to pull scalars out of a function that should act on lists?

Suppose I have

    A = a vecA
    B = b vecB

where a and b are supposed to be arbitrary scalars and vecA={vA1,vA2} and vecB={vB1,vB2} are vectors, i.e. lists (I guess from a Mathematica point of view the difference is that a and b have head Symbol, whereas vecA and vecB have head List?). Now imagine an arbitrary function of two arguments f[A,B]. What I would like to do is to pull out the scalars and multiply them:

    f[A,B] -> a * b * g[vecA,vecB]

where g[vecA,vecB] is again an arbitrary function that only works on vectors (lists). How can I achieve that?

The background is that I would like to process (simplify, rearrange, expand in series, etc.) expressions that mix scalars and vectors and contain inner products of such mixtures, without explicitly expanding the inner products into the components of the vectors, so that I can later easily replace occurrences of some inner products. See also my (not yet really solved) question: Symbolically associate vectors and their norms. My current take on this is to use Inner[f,A,B] instead of Dot[A,B] so that the inner product is not explicitly expanded into components. However, I do need to pull out the scalars somehow.

## Answer (celtschk)

The following seems to do what you want:

    f[x: {(0|a_|a_*__) ..}, y: {(0|b_|b_*__) ..}] := a b g[x/a, y/b]

With this, you get

    f[a {1, 2, 3}, b {4, 5, 6}]
    (* ==> a b g[{1, 2, 3}, {4, 5, 6}] *)

However, I didn't do much testing.

- As far as I can tell right now, your solution relies on explicitly dividing out the prefactors a and b. However, these are just examples of generic scalar prefactors that are complicated and that I do not know a priori. See also my comment to Mr.Wizard's answer for more details. – janitor048 Oct 19 '12 at 19:37
- @janitor048: No, the a and b in the definition of f are patterns. For example, f[Sin[x^2+y^2] {a, 2, l}, Exp[I k x]/y {x+1, x-1, 0}] gives (E^(I*k*x)*g[{a, 2, l}, {1 + x, -1 + x, 0}]*Sin[x^2 + y^2])/y. I probably would have avoided confusion if I had used different names in the definition of f. – celtschk Oct 19 '12 at 19:49
- Yes, I realized this after writing my comment. I'm sorry. I will test your approach more carefully later. Does it work when a and b are themselves composed of multiple factors? – janitor048 Oct 19 '12 at 20:04
- f[a b {1,2,3}, c d {4,5,6}] gives a b c d g[{1, 2, 3}, {4, 5, 6}], so obviously the answer is yes. – celtschk Oct 19 '12 at 20:06
- Ok, this is pretty cool! I've done some basic testing (not applied it to my real problem though) and it does the job. The way you've built up your pattern appears a bit like black voodoo though :-) Would you maybe like to elaborate a bit on how this works? – janitor048 Oct 22 '12 at 12:54

## Another answer

I believe that what you're looking for is some data structure Vector which has some list defining direction and some scalar which in part defines magnitude. Here you go:

    Vector[a_List] := Vector[1, a]
    Vector[b_, _]["scalar"] := b
    Vector[_, a_List]["vector"] := a
    Vector /: (b_ Vector[c_, a_List]) := Vector[c b, a]

    a = 3 Vector[{1, 1, 0}];
    b = 2 Vector[.3, {3, 2, 0}];

If you want to "extract" the scalars, then use magicfunction:

    magicfunction[a__Vector, z_] :=
      Times @@ (#["scalar"] &) /@ List[a] z @@ (#["vector"] &) /@ List[a]

For instance:

    magicfunction[a, b, Cross]
    (* {0., 0., -1.8} *)

    magicfunction[a, b, Hold]
    (* 1.8 Hold[{1, 1, 0}, {3, 2, 0}] *)

In order to get the regular vector back, just use Normal.
Make sure you have a copy of your Vector, however, as this transformation will lose the information about the scalar.

    Vector /: Normal[Vector[b_, a_List]] := b a

    Normal[a]
    (* {3, 3, 0} *)

- This is indeed some magic function :-) Thanks a lot! I think that for the question as asked here, the answer by celtschk is probably the best suited. But I am really considering whether using a data structure Vector as you propose would actually be beneficial for the real problem I am after. I'll do some testing. – janitor048 Oct 22 '12 at 14:26

## Answer (Mr.Wizard)

I'm sure this is far too straightforward to be what you're asking, but since I'm apparently not understanding the question, maybe your telling me why this is wrong will help:

    vecA = Range[7]
    vecB = Range[5, 15]

    {1, 2, 3, 4, 5, 6, 7}
    {5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15}

    A = a vecA;
    B = b vecB;
    f[x_, y_] := a * b * g[A/a, B/b]

    f[A, B]

    a b g[{1, 2, 3, 4, 5, 6, 7}, {5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15}]

- My understanding of the question (I've had that problem myself many times) is to do this when it's too late... i.e. when Mathematica has already distributed a and b (?) – chris Oct 19 '12 at 17:13
- @chris What do you mean by distributed? This still works if a and b have numeric values, if that is your concern. Otherwise I guess I don't understand. – Mr.Wizard Oct 19 '12 at 17:51
- The problem is that a, b, etc. are just examples. The real expressions are a lot more complicated and I don't know a priori what they look like, i.e. I can't just divide out the prefactors. In more detail, I have two terms and all I know is their general structure: a lot of scalar prefactors times a vector. I then need to dot these terms into each other and simplify the result. The thing is that the prefactors can be related to the inner products of the vectors (e.g. they are the norms) - so there is a lot of possible simplification that only works if I do NOT explicitly expand the dot – janitor048 Oct 19 '12 at 19:34
- [continued] product into its components but rather keep it intact as long as possible and then use replacement rules on the appropriate terms. – janitor048 Oct 19 '12 at 19:35
- BTW: In case you are curious, I am trying to compute squared amplitudes of Feynman diagrams, which eventually involves computing inner products of a lot of terms of the generic structure I discussed above. – janitor048 Oct 19 '12 at 19:39
https://ccssmathanswers.com/180-days-of-math-for-fourth-grade-day-43-answers-key/
180 Days of Math for Fourth Grade Day 43 Answers Key

By accessing our 180 Days of Math for Fourth Grade Answers Key Day 43 regularly, students can build better problem-solving skills.

180 Days of Math for Fourth Grade Answers Key Day 43

Directions: Solve each problem.

Question 1.
Explanation: Perform the addition operation on the two given numbers. Adding 22 and 17 gives the sum 39.

Question 2.
$$\frac{1}{8}$$ of 48 is ___
Multiply $$\frac{1}{8}$$ by 48; the product is 6. $$\frac{1}{8}$$ x 48 = 6. So, $$\frac{1}{8}$$ of 48 is 6.

Question 3.
List all the factors of 16. ______________
Factors of 16 are 1, 2, 4, 8, 16.
Explanation: The numbers which divide 16 exactly, leaving a remainder of zero, are the factors of 16. So, the factors of 16 are 1, 2, 4, 8, 16.

Question 4.
Explanation: Perform the division operation on the two given numbers. Dividing 64 by 8 gives the quotient 8.

Question 5.
Which is greater: 0.03 or 0.3? ______________
The given decimal numbers are 0.03 and 0.3. The decimal number 0.3 is greater than 0.03.

Question 6.
48 ÷ ___ = 12
48 ÷ 4 = 12
Explanation: To get the quotient 12 we need to perform a division operation. Dividing 48 by 4 gives the quotient 12. The missing divisor is 4.

Question 7.
Fill in the blanks for the time shown.
Explanation: In the image we can observe an analog clock. The time on the clock is 12:09. The time 12:09 can also be read as 9 past 12.

Question 8.
How many days are in April? ______________
There are 30 days in April.

Question 9.
Draw in the axes of symmetry.

Question 10.
Dad bought 4 hats that cost $2.50 each. How much did he spend? ___________
Answer: Dad bought 4 hats that cost $2.50 each. 1 hat = $2.50, so 4 hats = 4 x $2.50 = $10.00. He spent $10.00.
https://codegolf.stackexchange.com/questions/241983/is-there-a-stable-way-to-stack-these
# Is there a stable way to stack these?

If we have a binary matrix then we will say that a $$1$$ is stable if it is in the bottom row or it is directly adjacent to a $$1$$ which is stable. In other words, there must be a path to the bottom row consisting only of $$1$$s. So in the following matrix the $$1$$s highlighted in red are not stable.

$$0110\color{red}{1}0\\ 0100\color{red}{11}\\ 110000\\$$

A matrix is stable if every $$1$$ in it is stable. Your task is to take a matrix or list of rows and determine if there is some way to rearrange the rows into a stable matrix. The example above can be rearranged into a stable matrix if we swap the top and bottom rows:

$$110000\\ 011010\\ 010011\\$$

But the following matrix cannot:

$$01010\\ 10101\\ 00000$$

You may take input in any reasonable format. You may also assume that there is at least one row and that all rows are at least 1 element long. You should output one of two distinct values if it is possible to rearrange the rows into a stable matrix and the other if it is not. This is code-golf, so the goal is to minimize your source code, with answers being scored in bytes.

## Test cases

000 000 000 -> True
1 -> True
011010 010011 110000 -> True
01010 10101 00000 -> False
01010 10101 01110 -> True
01010 01100 00011 10101 -> False
10 01 -> False

• Why is the third test case true? Jan 29 at 18:40
• @Fmbalbuena That's the case we use as an example in the body of the post. Swap the top and bottom rows. Jan 29 at 18:41

# J, 60 bytes

1 e.i.@!@#(1*/@,@([:+./ .*^:_~1>:[:|@-/~$j./@#:I.@,)@,])@A.]

Try it online! Feels like there's a trick I'm missing, but this takes a brute force approach as follows:

• For each permutation of rows...
• Prepend a row of all ones...
• Check if the "distance of 1" graph of the 1 positions is fully connected.
• If it is, we've found a solution.

# MATL, 27 26 24 bytes

Zy:Y@!"2G@Y)tQ&v4&1ZImvA

Input is a binary matrix. Output is 0 if stable, 1 otherwise. Try it at MATL online! Or verify all test cases.

### Explanation

Zy    % Input (implicit): binary matrix. Size. Gives [r, c], where r and c
      % are the numbers of rows and of columns
:     % Range. Gives [1, 2, ... r] (c is ignored)
Y@    % All permutations of numbers 1, 2, ..., r. Gives an r-column matrix
      % where each row is a permutation
!"    % For each row
2     % Push 2
G     % Push input
@Y)   % Apply current permutation to the rows of the input
tQ    % Duplicate, add 1. Gives a matrix the same size as the input with
      % all entries different from 0
&v    % Concatenate the two matrices vertically. This has the effect of
      % adding a "bottom" of nonzeros to the permutation of the input
4&1ZI % Connected components, using 4-neighbourhood (i.e. not diagonals).
      % Each connected component of nonzeros is labelled 1, 2, ...
m     % Ismember: gives true if there is a connected component labelled
      % with 2. This can only happen if some 1 in the input is not
      % connected to the bottom, meaning that the current permutation
      % is not stable
vA    % Concatenate vertically. All. This acts as a cumulative "and".
      % The result is 1 if and only if all permutations so far were
      % not stable
      % End (implicit). Display (implicit)

# Python 3.8 (pre-release), 159 bytes

lambda l:any(f(0,len(l[0]),*sum(p,[]))for p in permutations(l))
f=lambda p,a,x,*t:a>len(t)or-(p:=p+t[a-1])-len(t)%a*t[0]+x<f(x*p,a,*t)>0
from itertools import*

Try it online! Takes input as a 2d list. f is a function that checks if a matrix is stable. Then we just try every permutation, until we find a matrix that works.
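For readers who prefer a readable reference over golfed code, here is an ungolfed Python sketch of the same brute-force idea used by several of the answers: try every permutation of the rows and flood-fill upward from the bottom row. The function names are my own and this is not taken from any of the answers.

```python
from itertools import permutations

def stable(rows):
    """Return True if every 1 in `rows` has a path of 1s down to the bottom row."""
    h, w = len(rows), len(rows[0])
    # Start the flood fill from every 1 in the bottom row.
    seen = {(h - 1, x) for x in range(w) if rows[h - 1][x] == 1}
    stack = list(seen)
    while stack:
        y, x = stack.pop()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and rows[ny][nx] == 1 and (ny, nx) not in seen:
                seen.add((ny, nx))
                stack.append((ny, nx))
    # Stable iff every 1 in the matrix was reached by the fill.
    return all(rows[y][x] == 0 or (y, x) in seen for y in range(h) for x in range(w))

def can_stack(rows):
    """True if some permutation of the rows forms a stable matrix."""
    return any(stable(list(p)) for p in permutations(rows))

# Two of the test cases from the challenge:
print(can_stack([[0,1,1,0,1,0], [0,1,0,0,1,1], [1,1,0,0,0,0]]))  # True
print(can_stack([[0,1,0,1,0], [1,0,1,0,1], [0,0,0,0,0]]))        # False
```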
# JavaScript (Node.js), 167 bytes f=(a,...b)=>a[0]?a.some(h=>f(a.filter(_=>_!=h),h,...b)):b[b=b.map(x=>[...x]),0].map(g=(y,x)=>(e=b[~y]||0)[x]&&++e[a=2,x]+[-1,1].map(i=>g(y+i,x)+g(y,x+i)))|!/1/.test(b) Try it online! Input -1 for true and 0 for false # Python3, 265 bytes: lambda b:any(v(i)for i in permutations(b)) from itertools import* E=enumerate def p(b,c,d): if c==len(b)-1:return 1 try: for x,y in[(0,1),(0,-1),(1,0)]: if(y:=d+y)*b[c+x][y]:return 1 except:return 0 v=lambda b:all(p(b,x,y)for x,l in E(b)for y,s in E(l)if s) Try it online! • 265 Jan 29 at 21:50 • If you have a function that just returns an expression it's shorter as a lambda. When you are testing if two integers are positive and non-negative you can multiply them together instead of using and. The values in q don't matter, in fact we don't even need q for anything, and can just return directly 0 or 1. from foo import* is always shorter than import foo as F. Finally, I've split c to the x and y components, since they were always accessed separately. Jan 29 at 21:54 • @AnttiP Thank you, updated. Jan 30 at 3:50 • 249 bytes. Moved if statement into for loop. Replaced except:return 0 to except:pass. Changed c==len(b)-1 to c>len(b)-2. Jan 31 at 14:55 # JavaScript (Node.js), 116 bytes f=(a,p=1)=>a.some((r,j)=>r.every((c,i)=>c?(s|=p[i],l=1):!(l=s=l>s),l=s=0)&l<=s|p&&f(b=[...a],b.splice(j,1)[0]))||++a Try it online! Input 0/1 matrix. Output true vs NaN. # 05AB1E, 53 bytes œʒ¬!ªÐU˜!ƶsgäΔ0δ.ø¬0*šĆ2Fø€ü3}εεÅsyøÅs«à}}X*}˜0KÙg}gĀ Explanation: In pseudo-code, I do the following steps (with the code-parts behind it - as you can see, the flood-fill takes up most of the bytes): 1. Get all permutations of rows of the input-matrix (œ) 2. Check if any permutation is truthy for the following steps (ʒ...}gĀ): 1. Append a row of 1s to the matrix as new bottom (¬!ª) 1. E.g. the permutation we want to check is: 0,1,1,0,1,0 0,1,0,0,1,1 1,1,0,0,0,0 2. Then it will become this with bottom row of 1s: 0,1,1,0,1,0 0,1,0,0,1,1 1,1,0,0,0,0 1,1,1,1,1,1 2. Flood-fill the matrix, using only horizontal/vertical moves - done in a similar matter as @Jonah's J answer for the To find islands of 1 and 0 in matrix challenge (ÐU˜!ƶsgäΔ0δ.ø¬0*šĆ2Fø€ü3}εεÅsyøÅs«à}}X*}): 1. We first create a matrix of the same size with unique positive integers: 1, 2, 3, 4, 5, 6 7, 8, 9,10,11,12 13,14,15,16,17,18 19,20,21,22,23,24 2. Then for each cell we get the maximum among itself and its horizontal/vertical neighbors: 7, 8, 9,10,11,12 13,14,15,16,17,18 19,20,21,22,23,24 20,21,22,23,24,24 3. Which we multiply by the matrix of 0s/1s we started with (the one from step 2.1.2): 0, 8, 9, 0,11, 0 0,14, 0, 0,17,18 14,15, 0, 0, 0, 0 20,21,22,23,24,24 4. And we continue steps 2.2.2 and 2.2.3 until the result no longer changes: 0,24,24, 0,18, 0 0,24, 0, 0,18,18 24,24, 0, 0, 0, 0 24,24,24,24,24,24 3. Check if there is just a single island after the flood-fill (˜0KÙg) As for the actual code: œ # Get all permutations of rows of the (implicit) input-matrix ʒ # Filter this list of matrices by: ¬!ª # Append a row of 1s: ¬ # Push the first row (without popping the matrix) ! # Convert all 0s/1s to 1s with the faculty ª # Append this row of 1s to the matrix Ð # Triplicate the matrix U # Pop and store a copy in variable X ˜!ƶsgä # Pop and push a matrix of the same size with values [1,length] ˜ # Flatten the matrix ! 
# Convert everything to 1s with the faculty ƶ # Multiply every 1 by its 1-based index s # Swap so the last copy is at the top g # Pop and push its amount of rows ä # Pop and split the list into that many equal-sized parts Δ # Loop until the result no longer changes # (which will be used to flood-fill the matrix): 0δ.ø¬0*šĆ # Surround the matrix with a border of 0s: δ # Map over each row: 0 .ø # Surround it with a leading/trailing 0 ¬ # Push the first row (without popping) 0* # Convert all 0s/1s to 0s by multiplying by 0 š # Prepend this row of 0s to the matrix Ć # Enclose; append its own head 2Fø€ü3} # Get all 3x3 blocks of this matrix: 2F # Loop 2 times: ø # Zip/transpose; swapping rows/columns € # Map over each row: ü3 # Get all overlapping triplets of this row } # Close the loop # Looking at horizontal/vertical neighbors only, get the maximum # of each 3x3 block: εε # Nested map over each 3x3 block: Ås # Push its middle row yøÅs # Push its middle column « # Merge the two triplets together à # Pop and push the maximum }} # Close the nested maps X* # Then multiply each maximum by matrix X, # so all cells that contained 0s become 0 again } # Close the flood-fill loop ˜ # Flatten the matrix to a list 0K # Remove all 0s Ù # Uniquify the remaining values g # Pop and push the length (only 1 is truthy in 05AB1E) }gĀ # After the filter: check if any permutations remain (length>=1) # (which is output implicitly as result) There are a bunch of 5-bytes alternatives for ˜0KÙg, but I haven't been able to find a 4-byter. # Charcoal, 129 112 bytes WS⊞υ⌕Aι1≔⟦⟦⟧⟧θFυ«≔⟦⟧ηFθF⊕Lκ⊞η⁺⁺✂κ⁰λ¹⟦ι⟧✂κλLκ≔ηθ»Fθ«≔⟦⟧ηFLιF§ικ⊞η⟦κλ⟧≔E⊟ι⟦Lικ⟧ιWΦ⁻ηιΦ⁴№ιEλ⁺π∧⁼ρ﹪ν²⊖⁻νρFκ⊞ιλP↔¬⁻ηι Try it online! Link is to verbose version of code. Takes input as a list of newline-terminated strings of 0s and 1s and outputs a Charcoal boolean i.e. - if a stable stack exists, nothing if not. Explanation: WS⊞υ⌕Aι1 Input the matrix and save the positions of the 1s in each row. ≔⟦⟦⟧⟧θ Start building up the permutations of the rows. Fυ« Loop through the rows. ≔⟦⟧η Start building up the permutations that include this row. Fθ Loop through the permutations of the previous rows. F⊕Lκ Loop through the possible insertion points. ⊞η⁺⁺✂κ⁰λ¹⟦ι⟧✂κλLκ Insert this row at that point and save it to the list of permutations. ≔ηθ Save the list of permutations. »Fθ« Loop through all of the permutations. ≔⟦⟧ηFLιF§ικ⊞η⟦κλ⟧ List the coordinates of all the 1s. ≔E⊟ι⟦Lικ⟧ι List the coordinates of the 1s on the bottom row, which are stable by definition. WΦ⁻ηιΦ⁴№ιEλ⁺π∧⁼ρ﹪ν²⊖⁻νρ While there are unknown coordinates adjacent to at least one stable coordinate, ... Fκ⊞ιλ ... save all the newly discovered stable coordinates. P↔¬⁻ηι Overwrite the output with - if all the coordinates were stable.
https://www.gradesaver.com/textbooks/science/physics/college-physics-4th-edition/chapter-10-problems-page-403/94
College Physics (4th Edition) $v_m = 2.1~m/s$ $a_m = 371~m/s^2$ The amplitude is half the total distance that the blade moves. We can find the amplitude: $A = \frac{2.4~cm}{2} = 1.2~cm$ We can find the angular frequency: $\omega = 2\pi~f$ $\omega = (2\pi)~(28~Hz)$ $\omega = 175.9~rad/s$ We can find the maximum speed of the blade: $v_m = A~\omega$ $v_m = (0.012~m)(175.9~rad/s)$ $v_m = 2.1~m/s$ We can find the maximum acceleration of the blade: $a_m = A~\omega^2$ $a_m = (0.012~m)(175.9~rad/s)^2$ $a_m = 371~m/s^2$
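A quick numerical check of the two formulas above, written as a short Python sketch (the inputs, a total travel of 2.4 cm and a frequency of 28 Hz, come from the worked solution):

```python
import math

f = 28.0          # oscillation frequency in Hz
A = 0.024 / 2     # amplitude in m: half of the 2.4 cm total travel

omega = 2 * math.pi * f      # angular frequency, rad/s
v_max = A * omega            # maximum speed, m/s
a_max = A * omega ** 2       # maximum acceleration, m/s^2

print(f"omega = {omega:.1f} rad/s")   # ~175.9 rad/s
print(f"v_max = {v_max:.2f} m/s")     # ~2.1 m/s
print(f"a_max = {a_max:.0f} m/s^2")   # ~371 m/s^2
```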
https://www.physicsforums.com/threads/boolean-algebra-simplification.663936/
Boolean algebra simplification?

1. Jan 12, 2013 (KaliBanda)

1. The problem statement, all variables and given/known data
2. Relevant equations: DeMorgan's Theorems.
3. The attempt at a solution: I've had a go at it, not sure if I'm heading in the right direction though. Thanks for any help.

2. Jan 13, 2013 (Staff: Mentor)

Where does the second + in line 4 come from? I would expect * there, as you had it in line 3. Anyway, I would use another direction there: $(\overline{AC})\overline{C}$ has a nice simplification.
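A quick way to see why that hint helps (my own check, not part of the original thread): $(\overline{AC})\,\overline{C}$ collapses to $\overline{C}$, which a brute-force truth table confirms.

```python
from itertools import product

# Verify that (NOT(A AND C)) AND (NOT C) == NOT C for every input combination.
for A, C in product([0, 1], repeat=2):
    lhs = (1 - (A & C)) & (1 - C)   # (A*C)' * C'
    rhs = 1 - C                     # C'
    print(A, C, lhs, rhs, lhs == rhs)   # the last column is True in every row
```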
http://pythonic.zoomquiet.top/data/20190710194015/index.html
# What Is Dynamic Programming With Python Examples

Dynamic programming is breaking down a problem into smaller sub-problems, solving each sub-problem and storing the solutions to each of these sub-problems in an array (or similar data structure) so each sub-problem is only calculated once.

It is both a mathematical optimisation method and a computer programming method. Optimisation problems seek the maximum or minimum solution. The general rule is that if you encounter a problem where the initial algorithm is solved in $O(2^n)$ time, it is better solved using Dynamic Programming.

## Why Is Dynamic Programming Called Dynamic Programming?

Richard Bellman invented DP in the 1950s. Bellman named it Dynamic Programming because at the time, RAND (his employer) disliked mathematical research and didn't want to fund it. He named it Dynamic Programming to hide the fact that he was really doing mathematical research.

Bellman explains the reasoning behind the term Dynamic Programming in his autobiography, Eye of the Hurricane: An Autobiography (1984, page 159). He explains:

"I spent the Fall quarter (of 1950) at RAND. My first task was to find a name for multistage decision processes. An interesting question is, Where did the name, dynamic programming, come from? The 1950s were not good years for mathematical research. We had a very interesting gentleman in Washington named Wilson. He was Secretary of Defense, and he actually had a pathological fear and hatred of the word research. I'm not using the term lightly; I'm using it precisely. His face would suffuse, he would turn red, and he would get violent if people used the term research in his presence. You can imagine how he felt, then, about the term mathematical. The RAND Corporation was employed by the Air Force, and the Air Force had Wilson as its boss, essentially. Hence, I felt I had to do something to shield Wilson and the Air Force from the fact that I was really doing mathematics inside the RAND Corporation. What title, what name, could I choose? In the first place I was interested in planning, in decision making, in thinking. But planning, is not a good word for various reasons. I decided therefore to use the word "programming". I wanted to get across the idea that this was dynamic, this was multistage, this was time-varying. I thought, let's kill two birds with one stone. Let's take a word that has an absolutely precise meaning, namely dynamic, in the classical physical sense. It also has a very interesting property as an adjective, and that is it's impossible to use the word dynamic in a pejorative sense. Try thinking of some combination that will possibly give it a pejorative meaning. It's impossible. Thus, I thought dynamic programming was a good name. It was something not even a Congressman could object to. So I used it as an umbrella for my activities."

## What are Sub-Problems?

Sub-problems are smaller versions of the original problem. Let's see an example. With the equation below:

$1+2+3+4$

We can break this down to:

$1+2$

$3+4$

Once we solve these two smaller problems, we can add the solutions to these sub-problems to find the solution to the overall problem.
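A minimal Python sketch of that decomposition (my own illustration; `sub_sum` is a hypothetical helper, not from the original post):

```python
def sub_sum(numbers):
    """Solve 1+2+3+4 by splitting it into smaller sub-problems and combining them."""
    if len(numbers) == 1:                        # smallest possible sub-problem
        return numbers[0]
    mid = len(numbers) // 2
    left, right = numbers[:mid], numbers[mid:]   # divide: [1, 2] and [3, 4]
    return sub_sum(left) + sub_sum(right)        # combine the sub-solutions

print(sub_sum([1, 2, 3, 4]))  # 10
```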
Notice how these sub-problems break down the original problem into components that build up the solution. This is a small example but it illustrates the beauty of Dynamic Programming well. If we expand the problem to adding hundreds of numbers it becomes clearer why we need Dynamic Programming. Take this example:

$6+5+3+3+2+4+6+5$

We have $6+5$ twice. The first time we see it, we work out $6+5$. When we see it the second time we think to ourselves: "Ah, 6 + 5. I've seen this before. It's 11!" In Dynamic Programming we store the solution to the problem so we do not need to recalculate it. By finding the solutions for every single sub-problem, we can tackle the original problem itself. Memoisation is the act of storing a solution.

## What is Memoisation in Dynamic Programming?

Let's see why storing answers to solutions makes sense. We're going to look at a famous problem, the Fibonacci sequence. This problem is normally solved with Divide and Conquer. There are 3 main parts to divide and conquer:

1. Divide the problem into smaller sub-problems of the same type.
2. Conquer - solve the sub-problems recursively.
3. Combine - combine all the sub-problems to create a solution to the original problem.

Dynamic programming has one extra step added to step 2. This is memoisation.

The Fibonacci sequence is a sequence of numbers where each number is the sum of the previous two. We start at 1.

$1+0=1$
$1+1=2$
$2+1=3$
$3+2=5$
$5+3=8$

In Python, this is:

```
def F(n):
    if n == 0 or n == 1:
        return n
    else:
        return F(n-1) + F(n-2)
```

If you're not familiar with recursion I have a blog post written for you that you should read first.

Let's calculate F(4). In an execution tree, F(4) branches into F(3) and F(2), F(3) branches into F(2) and F(1), and so on. We calculate F(2) twice. On bigger inputs (such as F(10)) the repetition builds up. The purpose of dynamic programming is to not calculate the same thing twice. Instead of calculating F(2) twice, we store the solution somewhere and only calculate it once. We'll store the solution in an array, so F[2] = 1. Below is some Python code to calculate the Fibonacci sequence using Dynamic Programming.

```
def fibonacciVal(n):
    memo = [0] * (n + 1)          # table of answers to sub-problems
    memo[0], memo[1] = 0, 1
    for i in range(2, n + 1):
        memo[i] = memo[i-1] + memo[i-2]
    return memo[n]
```

## How to Identify Dynamic Programming Problems

In theory, Dynamic Programming can solve every problem. The question is then: "When should I solve this problem with dynamic programming?"

We should use dynamic programming for problems that are between tractable and intractable problems. Tractable problems are those that can be solved in polynomial time. That's a fancy way of saying we can solve them quickly: binary search and sorting are both fast. Intractable problems are those that run in exponential time. They're slow, and can only be solved by brute-forcing through every single combination (NP hard).

When we see terms like "shortest/longest", "minimized/maximized", "least/most", "fewest/greatest", or "biggest/smallest", we know it's an optimisation problem.

A Dynamic Programming algorithm's proof of correctness is usually self-evident. Other algorithmic strategies are often much harder to prove correct, and thus more error-prone.

When we see these kinds of terms, the problem may ask for a specific number ("find the minimum number of edit operations") or it may ask for a result ("find the longest common subsequence"). The latter type of problem is harder to recognize as a dynamic programming problem.
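To make the tractable-versus-intractable point concrete, here is a small sketch (my own illustration, not from the original post) that counts how many calls the naive recursive Fibonacci makes compared with a memoised version of the same recurrence:

```python
from functools import lru_cache

calls = 0

def naive_fib(n):
    """Plain divide and conquer: recomputes the same sub-problems over and over."""
    global calls
    calls += 1
    return n if n < 2 else naive_fib(n - 1) + naive_fib(n - 2)

@lru_cache(maxsize=None)
def memo_fib(n):
    """Same recurrence, but each sub-problem is computed only once."""
    global calls
    calls += 1
    return n if n < 2 else memo_fib(n - 1) + memo_fib(n - 2)

naive_fib(25)
print("naive calls:", calls)      # 242785 calls: exponential blow-up

calls = 0
memo_fib(25)
print("memoised calls:", calls)   # 26 calls: one per sub-problem
```

The naive version does exponential work, while the memoised one touches each sub-problem exactly once.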
If something sounds like optimisation, Dynamic Programming can solve it. Imagine we've found a problem that's an optimisation problem, but we're not sure if it can be solved with Dynamic Programming. First, identify what we're optimising for. Once we realize what we're optimising for, we have to decide how easy it is to perform that optimisation. Sometimes, the greedy approach is enough for an optimal solution.

Dynamic programming takes the brute force approach. It identifies repeated work and eliminates repetition. Before we even start to plan the problem as a dynamic programming problem, think about what the brute force solution might look like. Are sub-steps repeated in the brute-force solution? If so, we try to imagine the problem as a dynamic programming problem.

Mastering dynamic programming is all about understanding the problem. List all the inputs that can affect the answers. Once we've identified all the inputs and outputs, try to identify whether the problem can be broken into subproblems. If we can identify subproblems, we can probably use Dynamic Programming.

Then, figure out what the recurrence is and solve it. When we're trying to figure out the recurrence, remember that whatever recurrence we write has to help us find the answer. Sometimes the answer will be the result of the recurrence, and sometimes we will have to get the result by looking at a few results from the recurrence.

Dynamic Programming can solve many problems, but that does not mean there isn't a more efficient solution out there. Solving a problem with Dynamic Programming feels like magic, but remember that dynamic programming is merely a clever brute force. Sometimes it pays off well, and sometimes it helps only a little.

## How to Solve Problems using Dynamic Programming

Now we have an understanding of what Dynamic Programming is and how it generally works. Let's look at how to create a Dynamic Programming solution to a problem. We're going to explore the process of Dynamic Programming using the Weighted Interval Scheduling Problem.

Pretend you're the owner of a dry cleaner. You have n customers come in and give you clothes to clean. You can only clean one customer's pile of clothes (PoC) at a time. Each pile of clothes, i, must be cleaned at some pre-determined start time $s_i$ and some pre-determined finish time $f_i$. Each pile of clothes has an associated value, $v_i$, based on how important it is to your business. For example, some customers may pay more to have their clothes cleaned faster, or some may be repeat customers and you want them to be happy.

As the owner of this dry cleaner's, you must determine the optimal schedule of clothes that maximises the total value of this day. This problem is a re-wording of the Weighted Interval Scheduling problem. You will now see 4 steps to solving a Dynamic Programming problem. Sometimes, you can skip a step. Sometimes, your problem is already well defined and you don't need to worry about the first few steps.

## Step 1. Write the Problem out

Grab a piece of paper. Write out:

- What is the problem?
- What are the subproblems?
- What would the solution roughly look like?

In the dry cleaner problem, let's put the subproblems down into words. What we want to determine is the maximum value schedule for each pile of clothes such that the clothes are sorted by start time.

Why sort by start time? Good question! We want to keep track of processes which are currently running. If we sort by finish time, it doesn't make much sense in our heads.
We could have 2 with similar finish times, but different start times. Time moves in a linear fashion, from start to finish. If we have piles of clothes that start at 1 pm, we know to put them on when it reaches 1 pm. If we have a pile of clothes that finishes at 3 pm, we might need to have put them on at 12 pm, but it's 1 pm now.

We can find the maximum value schedule for piles $n-1$ through to n. And then for $n-2$ through to n. And so on. By finding the solution to every single sub-problem, we can tackle the original problem itself: the maximum value schedule for piles 1 through n. Sub-problems can be used to solve the original problem, since they are smaller versions of the original problem.

With the interval scheduling problem, the only way we can solve it is by brute-forcing all subsets of the problem until we find an optimal one. What we're saying is that instead of brute-forcing one by one, we divide it up. We brute force from $n-1$ through to n. Then we do the same for $n-2$ through to n. Finally, we have loads of smaller problems, which we can solve dynamically. We want to build the solutions to our sub-problems such that each sub-problem builds on the previous problems.

## 2. Mathematical Recurrences

I know, mathematics sucks. If you'll bear with me here you'll find that this isn't that hard. Mathematical recurrences are used to define the running time of a divide and conquer (dynamic programming) technique. Recurrences are also used to define problems. If it's difficult to turn your subproblems into maths, then it may be the wrong subproblem.

There are 2 steps to creating a mathematical recurrence:

### 1: Define the Base Case

Base cases are the smallest possible denomination of a problem. When creating a recurrence, ask yourself this question: "What decision do I make at step 0?" It doesn't have to be 0; the base case is simply the smallest possible denomination of a problem. We saw this with the Fibonacci sequence. The base was:

- If n == 0 or n == 1, return n

It's important to know where the base case lies, so we can create the recurrence. In our problem, we have one decision to make:

- Put that pile of clothes on to be washed, or
- Don't wash that pile of clothes today

If n is 0, that is, if we have 0 PoC, then we do nothing. Our base case is: if n == 0, return 0.

### 2: What Decision Do I Make at Step n?

Now we know what the base case is, if we're at step n what do we do? For each pile of clothes that is compatible with the schedule so far. Compatible means that the start time is after the finish time of the pile of clothes currently being washed. The algorithm has 2 options:

1. Wash that pile of clothes
2. Don't wash that pile of clothes

We know what happens at the base case, and what happens otherwise. We now need to find out what information the algorithm needs to go backwards (or forwards).

"If my algorithm is at step i, what information would it need to decide what to do in step i+1?"

To decide between the two options, the algorithm needs to know the next compatible PoC (pile of clothes). The next compatible PoC for a given pile, p, is the PoC, n, such that $s_n$ (the start time for PoC n) happens after $f_p$ (the finish time for PoC p). The difference between $s_n$ and $f_p$ should be minimised.

In English, imagine we have one washing machine. We put in a pile of clothes at 13:00. Our next pile of clothes starts at 13:01. We can't open the washing machine and put in the one that starts at 13:00.
Our next compatible pile of clothes is the one that starts after the finish time of the one currently being washed.

"If my algorithm is at step i, what information did it need to decide what to do in step i-1?"

The algorithm needs to know about future decisions: the ones made for PoC i through n, in order to decide whether to run or not run PoC i-1.

Now that we've answered these questions, we've started to form a recurring mathematical decision in our mind. If not, that's also okay; it becomes easier to write recurrences as we get exposed to more problems. Here's our recurrence:

$$OPT(i) = \begin{cases} 0, & \text{if } i = 0 \\ \max\big(v_i + OPT(next[i]),\; OPT(i+1)\big), & \text{otherwise} \end{cases}$$

Let's explore in detail what makes up this mathematical recurrence. OPT(i) represents the maximum value schedule for PoC i through to n such that the PoC are sorted by start times. OPT(i) is our subproblem from earlier.

We start with the base case. All recurrences need somewhere to stop. If we call OPT(0) we'll be returned 0.

To determine the value of OPT(i), there are two options. We want to take the maximum of these options to meet our goal: the maximum value schedule for all piles of clothes. Once we choose the option that gives the maximum result at step i, we memoize its value as OPT(i). Mathematically, the two options - run or not run PoC i - are represented as:

$v_i + OPT(next[i])$

This represents the decision to run PoC i. It adds the value gained from PoC i to OPT(next[i]), where next[i] represents the next compatible pile of clothing following PoC i. When we add these two values together, we get the maximum value schedule from i through to n such that they are sorted by start time if i runs. They are sorted by start time here because next[i] is the one immediately after $v_i$, so by default, they are sorted by start time.

$OPT(i+1)$

If we decide not to run i, our value is then OPT(i + 1). The value is not gained. OPT(i + 1) gives the maximum value schedule for i+1 through to n, such that they are sorted by start times.

## 3. Determine the Dimensions of the Memoisation Array and the Direction in Which It Should Be Filled

The solution to our Dynamic Programming problem is OPT(1). We can write out the solution as the maximum value schedule for PoC 1 through n such that the PoC are sorted by start time. This goes hand in hand with "maximum value schedule for PoC i through to n". From step 2:

$OPT(1) = \max\big(v_1 + OPT(next[1]),\; OPT(2)\big)$

Going back to our Fibonacci numbers earlier, our Dynamic Programming solution relied on the fact that the Fibonacci numbers for 0 through to n - 1 were already memoised. That is, to find F(5) we already memoised F(0), F(1), F(2), F(3), F(4). We want to do the same thing here.

The problem we have is figuring out how to fill out a memoisation table. In the scheduling problem, we know that OPT(1) relies on the solutions to OPT(2) and OPT(next[1]). PoC 2 and next[1] have start times after PoC 1 due to sorting. We need to fill our memoisation table from OPT(n) to OPT(1).

We can see our array is one dimensional, from 1 to n. But, if we couldn't see that, we can work it out another way. The dimensions of the array are equal to the number and size of the variables on which OPT(x) relies.
In our algorithm, we have OPT(i) - one variable, i. This means our array will be 1-dimensional and its size will be n, as there are n piles of clothes. If we know that n = 5, then our memoisation array might look like this:

memo = [0, OPT(1), OPT(2), OPT(3), OPT(4), OPT(5)]

0 is also the base case. memo[0] = 0, per our recurrence from earlier.

## 4. Coding Our Solution

When I am coding a Dynamic Programming solution, I like to read the recurrence and try to recreate it. Our first step is to initialise the array to size (n + 1). In Python, we don't need to do this, but you may need to do it if you're using a different language. Our second step is to set the base case.

To find the profit with the inclusion of job[i], we need to find the latest job that doesn't conflict with job[i]. The idea is to use Binary Search to find the latest non-conflicting job. I've copied the code from here but edited it. First, let's define what a "job" is. As we saw, a job consists of 3 things:

```
# Class to represent a job
class Job:
    def __init__(self, start, finish, profit):
        self.start = start
        self.finish = finish
        self.profit = profit
```

Start time, finish time, and the total profit (benefit) of running that job. The next step we want to program is the schedule.

```
# The main function that returns the maximum possible
# profit from given array of jobs
def schedule(job):
    # Sort jobs according to start time
    job = sorted(job, key = lambda j: j.start)

    # Create an array to store solutions of subproblems. table[i]
    # stores the profit for jobs till arr[i] (including arr[i])
    n = len(job)
    table = [0 for _ in range(n)]
    table[0] = job[0].profit
```

Earlier, we learnt that the table is 1-dimensional. We sort the jobs by start time, create this empty table and set table[0] to be the profit of job[0]. Since we've sorted by start times, the first compatible job is always job[0].

Our next step is to fill in the entries using the recurrence we learnt earlier. To find the next compatible job, we're using Binary Search. The full code posted later includes this; for now, let's worry about understanding the algorithm. If the search for the next compatible job returns -1, that means that all jobs before the index, i, conflict with it (so cannot be used). inclProf means we're including that item in the maximum value set. We then store it in table[i], so we can use this calculation again later.

```
    # Fill entries in table[] using recursive property
    for i in range(1, n):
        # Find profit including the current job
        inclProf = job[i].profit
        l = binarySearch(job, i)
        if (l != -1):
            inclProf += table[l]

        # Store maximum of including and excluding
        table[i] = max(inclProf, table[i - 1])
```

Our final step is then to return the profit of all items up to n-1.

```
    return table[n-1]
```

The full code can be seen below:

```
# Python program for weighted job scheduling using Dynamic
# Programming and Binary Search

# Class to represent a job
class Job:
    def __init__(self, start, finish, profit):
        self.start = start
        self.finish = finish
        self.profit = profit

# A Binary Search based function to find the latest job
# (before current job) that doesn't conflict with current
# job. "index" is index of the current job. This function
# returns -1 if all jobs before index conflict with it.
def binarySearch(job, start_index):
    # https://en.wikipedia.org/wiki/Binary_search_algorithm
    # Initialize 'lo' and 'hi' for Binary Search
    lo = 0
    hi = start_index - 1

    # Perform binary Search iteratively
    while lo <= hi:
        mid = (lo + hi) // 2
        if job[mid].finish <= job[start_index].start:
            if job[mid + 1].finish <= job[start_index].start:
                lo = mid + 1
            else:
                return mid
        else:
            hi = mid - 1
    return -1

# The main function that returns the maximum possible
# profit from given array of jobs
def schedule(job):
    # Sort jobs according to start time
    job = sorted(job, key = lambda j: j.start)

    # Create an array to store solutions of subproblems. table[i]
    # stores the profit for jobs till arr[i] (including arr[i])
    n = len(job)
    table = [0 for _ in range(n)]
    table[0] = job[0].profit

    # Fill entries in table[] using recursive property
    for i in range(1, n):
        # Find profit including the current job
        inclProf = job[i].profit
        l = binarySearch(job, i)
        if (l != -1):
            inclProf += table[l]

        # Store maximum of including and excluding
        table[i] = max(inclProf, table[i - 1])

    return table[n-1]

# Driver code to test above function
job = [Job(1, 2, 50), Job(3, 5, 20), Job(6, 19, 100), Job(2, 100, 200)]
print("Optimal profit is", schedule(job))
```

Congrats! 🥳 We've just written our first dynamic program! Now that we've wet our feet, let's walk through a different type of dynamic programming problem.

Imagine you are a criminal. Dastardly smart. You break into Bill Gates's mansion. Wow, okay!?!? How many rooms is this? His washing machine room is larger than my entire house??? Ok, time to stop getting distracted. You brought a small bag with you. A knapsack, if you will. You can only fit so much into it. Let's give this an arbitrary number: the bag will support weight 15, but no more. What we want to do is maximise how much money we'll make, $b$.

The greedy approach is to pick the item with the highest value which can fit into the bag. Let's try that. We're going to steal Bill Gates's TV. £4000? Nice. But his TV weighs 15. So... We leave with £4000.

```
TV = (£4000, 15) # (value, weight)
```

Bill Gates has a lot of watches. Let's say he has 2 watches. Each watch weighs 5 and each one is worth £2250. When we steal both, we get £4500 with a weight of 10.

```
watch1 = (£2250, 5)
watch2 = (£2250, 5)

watch1 + watch2
>>> (£4500, 10)
```

In the greedy approach, we wouldn't choose these watches first. But to us as humans, it makes sense to go for smaller items which have higher values. The Greedy approach cannot optimally solve the {0,1} Knapsack problem. The {0, 1} means we either take the whole item {1} or we don't take it at all {0}. However, Dynamic Programming can optimally solve the {0, 1} knapsack problem.

The simple solution to this problem is to consider all the subsets of all items. For every single combination of Bill Gates's stuff, we calculate the total weight and value of this combination. Only those with weight less than $W_{max}$ are considered. We then pick the combination which has the highest value. This is a disaster! How long would this take? Bill Gates would come back home far before you're even 1/3rd of the way there! In Big O, this brute-force algorithm takes $O(2^n)$ time.

You can see we already have a rough idea of the solution and what the problem is, without having to write it down in maths!

## Maths Behind {0, 1} Knapsack Problem

Imagine we had a listing of every single thing in Bill Gates's house. We stole it from some insurance papers. Now, think about the future.
What is the optimal solution to this problem? We have a subset, L, which is the optimal solution. L is a subset of S, the set containing all of Bill Gates's stuff.

Let's pick a random item, N. L either contains N or it doesn't. If it doesn't use N, the optimal solution for the problem is the same as for items $1, 2, \ldots, N-1$. This is assuming that Bill Gates's stuff is sorted by $value/weight$. Suppose that the optimum of the original problem is not an optimum of the sub-problem: if we had a sub-optimal solution to the smaller problem, then we have a contradiction - we should have an optimum of the whole problem.

If L contains N, then the optimal solution for the problem is the same as for items $1, 2, 3, \ldots, N-1$. We know the item is in, so L already contains N. To complete the computation we focus on the remaining items: we find the optimal solution to the remaining items. But we now have a new maximum allowed weight of $W_{max} - W_N$. If item N is contained in the solution, the total weight is now the max weight take away item N (which is already in the knapsack).

These are the 2 cases. Either item N is in the optimal solution or it isn't. If the weight of item N is greater than $W_{max}$, then it cannot be included, so case 1 is the only possibility.

To better define this recursive solution, let $S_k = \{1, 2, \ldots, k\}$ and $S_0 = \varnothing$. Let B[k, w] be the maximum total benefit obtained using a subset of $S_k$, having total weight at most w. Then we define B[0, w] = 0 for each $w \le W_{max}$. Our desired solution is then B[n, $W_{max}$].

$$B[k, w] = \begin{cases} B[k-1, w], & \text{if } w < w_k \\ \max\big(B[k-1, w],\; b_k + B[k-1, w - w_k]\big), & \text{otherwise} \end{cases}$$

### Tabulation of Knapsack Problem

Okay, pull out some pen and paper. No, really. Things are about to get confusing real fast. This memoisation table is 2-dimensional. We have these items:

`(1, 1), (3, 4), (4, 5), (5, 7)`

where the tuples are `(weight, value)`. We have 2 variables, so our array is 2-dimensional. The first dimension is from 0 to 7. Our second dimension is the values. And we want a weight of 7 with maximum benefit. (In the tables below, the row labels are written the other way round, as (value, weight).)

|        | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|--------|---|---|---|---|---|---|---|---|
| (1, 1) |   |   |   |   |   |   |   |   |
| (4, 3) |   |   |   |   |   |   |   |   |
| (5, 4) |   |   |   |   |   |   |   |   |
| (7, 5) |   |   |   |   |   |   |   |   |

The weight is 7. We start counting at 0. We put each tuple on the left-hand side. Ok. Now to fill out the table!

|        | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|--------|---|---|---|---|---|---|---|---|
| (1, 1) | 0 |   |   |   |   |   |   |   |
| (4, 3) | 0 |   |   |   |   |   |   |   |
| (5, 4) | 0 |   |   |   |   |   |   |   |
| (7, 5) | 0 |   |   |   |   |   |   |   |

The columns are weight. At weight 0, we have a total weight of 0. At weight 1, we have a total weight of 1. Obvious, I know. But this is an important distinction to make which will be useful later on. When our weight is 0, we can't carry anything no matter what. The total weight of everything at 0 is 0.

|        | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|--------|---|---|---|---|---|---|---|---|
| (1, 1) | 0 | 1 |   |   |   |   |   |   |
| (4, 3) | 0 |   |   |   |   |   |   |   |
| (5, 4) | 0 |   |   |   |   |   |   |   |
| (7, 5) | 0 |   |   |   |   |   |   |   |

If our total weight is 1, the best item we can take is (1, 1). As we go down through this array, we can take more items. At the row for (4, 3) we can either take (1, 1) or (4, 3). But for now, we can only take (1, 1). Our maximum benefit for this row then is 1.

|        | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|--------|---|---|---|---|---|---|---|---|
| (1, 1) | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| (4, 3) | 0 |   |   |   |   |   |   |   |
| (5, 4) | 0 |   |   |   |   |   |   |   |
| (7, 5) | 0 |   |   |   |   |   |   |   |

If our total weight is 2, the best we can do is 1. We only have 1 of each item. We cannot duplicate items. So no matter where we are in row 1, the absolute best we can do is (1, 1). Let's start using (4, 3) now. If the total weight is 1, but the weight of (4, 3) is 3, we cannot take the item yet until we have a weight of at least 3.

|        | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|--------|---|---|---|---|---|---|---|---|
| (1, 1) | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| (4, 3) | 0 | 1 | 1 |   |   |   |   |   |
| (5, 4) | 0 |   |   |   |   |   |   |   |
| (7, 5) | 0 |   |   |   |   |   |   |   |

Now we have a weight of 3. Let's compare some things.
We want to take the max of:

$\max(4 + T[0][0],\; 1)$

If we're at cell (2, 3) we can either take the value from the last row or use the item on that row. We go up one row and count back 3 (since the weight of this item is 3). Actually, the formula is whatever weight is remaining when we subtract the weight of the item on that row. The weight of (4, 3) is 3 and we're at weight 3. 3 - 3 = 0. Therefore, we're at T[0][0]: T[previous row's number][current total weight - item weight].

$\max(4 + T[0][0],\; 1)$

The 1 is because of the previous item. The max here is 4.

|        | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|--------|---|---|---|---|---|---|---|---|
| (1, 1) | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| (4, 3) | 0 | 1 | 1 | 4 |   |   |   |   |
| (5, 4) | 0 |   |   |   |   |   |   |   |
| (7, 5) | 0 |   |   |   |   |   |   |   |

$\max(4 + T[0][1],\; 1)$

Total weight is 4, item weight is 3. 4 - 3 = 1. Previous row is 0. T[0][1].

|        | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|--------|---|---|---|---|---|---|---|---|
| (1, 1) | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| (4, 3) | 0 | 1 | 1 | 4 | 5 |   |   |   |
| (5, 4) | 0 |   |   |   |   |   |   |   |
| (7, 5) | 0 |   |   |   |   |   |   |   |

I won't bore you with the rest of this row, as nothing exciting happens. We have 2 items, and we've used both of them to make 5. Since there are no new items, the maximum value is 5.

|        | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|--------|---|---|---|---|---|---|---|---|
| (1, 1) | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| (4, 3) | 0 | 1 | 1 | 4 | 5 | 5 | 5 | 5 |
| (5, 4) | 0 |   |   |   |   |   |   |   |
| (7, 5) | 0 |   |   |   |   |   |   |   |

Onto our next row:

|        | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|--------|---|---|---|---|---|---|---|---|
| (1, 1) | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| (4, 3) | 0 | 1 | 1 | 4 | 5 | 5 | 5 | 5 |
| (5, 4) | 0 | 1 | 1 | 4 |   |   |   |   |
| (7, 5) | 0 |   |   |   |   |   |   |   |

Here's a little secret. Our tuples are ordered by weight! That means that we can fill in the previous rows of data up to the next weight point. We know that 4 is already the maximum, so we can fill in the rest. This is where memoisation comes into play! We already have the data, why bother re-calculating it? We go up one row and head 4 steps back. That gives us:

$\max(5 + T[2][0],\; 5) = 5$

|        | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|--------|---|---|---|---|---|---|---|---|
| (1, 1) | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| (4, 3) | 0 | 1 | 1 | 4 | 5 | 5 | 5 | 5 |
| (5, 4) | 0 | 1 | 1 | 4 | 5 |   |   |   |
| (7, 5) | 0 |   |   |   |   |   |   |   |

Now we calculate it for total weight 5.

$\max(5 + T[2][1],\; 5) = 6$

|        | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|--------|---|---|---|---|---|---|---|---|
| (1, 1) | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| (4, 3) | 0 | 1 | 1 | 4 | 5 | 5 | 5 | 5 |
| (5, 4) | 0 | 1 | 1 | 4 | 5 | 6 |   |   |
| (7, 5) | 0 |   |   |   |   |   |   |   |

We do the same thing again:

$\max(5 + T[2][2],\; 5) = 6$

|        | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|--------|---|---|---|---|---|---|---|---|
| (1, 1) | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| (4, 3) | 0 | 1 | 1 | 4 | 5 | 5 | 5 | 5 |
| (5, 4) | 0 | 1 | 1 | 4 | 5 | 6 | 6 |   |
| (7, 5) | 0 |   |   |   |   |   |   |   |

Now we have total weight 7. We choose the max of:

$\max(5 + T[2][3],\; 5) = \max(5 + 4,\; 5) = 9$

|        | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|--------|---|---|---|---|---|---|---|---|
| (1, 1) | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| (4, 3) | 0 | 1 | 1 | 4 | 5 | 5 | 5 | 5 |
| (5, 4) | 0 | 1 | 1 | 4 | 5 | 6 | 6 | 9 |
| (7, 5) | 0 |   |   |   |   |   |   |   |

If we had total weight 7 and we had the 3 items (1, 1), (4, 3), (5, 4), the best we can do is 9. Since our new item starts at weight 5, we can copy from the previous row until we get to weight 5.

|        | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|--------|---|---|---|---|---|---|---|---|
| (1, 1) | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| (4, 3) | 0 | 1 | 1 | 4 | 5 | 5 | 5 | 5 |
| (5, 4) | 0 | 1 | 1 | 4 | 5 | 6 | 6 | 9 |
| (7, 5) | 0 | 1 | 1 | 4 | 5 |   |   |   |

We then do another max. Total weight minus the new item's weight: this is $5-5=0$. We want the previous row at position 0.

$\max(7 + T[3][0],\; 6)$

The 6 comes from the best on the previous row for that total weight.
$\max(7 + 0,\; 6) = 7$

|        | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|--------|---|---|---|---|---|---|---|---|
| (1, 1) | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| (4, 3) | 0 | 1 | 1 | 4 | 5 | 5 | 5 | 5 |
| (5, 4) | 0 | 1 | 1 | 4 | 5 | 6 | 6 | 9 |
| (7, 5) | 0 | 1 | 1 | 4 | 5 | 7 |   |   |

$\max(7 + T[3][1],\; 6) = 8$

|        | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|--------|---|---|---|---|---|---|---|---|
| (1, 1) | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| (4, 3) | 0 | 1 | 1 | 4 | 5 | 5 | 5 | 5 |
| (5, 4) | 0 | 1 | 1 | 4 | 5 | 6 | 6 | 9 |
| (7, 5) | 0 | 1 | 1 | 4 | 5 | 7 | 8 |   |

$\max(7 + T[3][2],\; 9) = 9$

|        | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|--------|---|---|---|---|---|---|---|---|
| (1, 1) | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| (4, 3) | 0 | 1 | 1 | 4 | 5 | 5 | 5 | 5 |
| (5, 4) | 0 | 1 | 1 | 4 | 5 | 6 | 6 | 9 |
| (7, 5) | 0 | 1 | 1 | 4 | 5 | 7 | 8 | 9 |

9 is the maximum value we can get by picking items from the set of items such that the total weight is $\le 7$.

### Finding the Optimal Set for {0, 1} Knapsack Problem Using Dynamic Programming

Now, what items do we actually pick for the optimal set? We start with this item:

|        | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|--------|---|---|---|---|---|---|---|---|
| (1, 1) | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| (4, 3) | 0 | 1 | 1 | 4 | 5 | 5 | 5 | 5 |
| (5, 4) | 0 | 1 | 1 | 4 | 5 | 6 | 6 | 9 |
| (7, 5) | 0 | 1 | 1 | 4 | 5 | 7 | 8 | 9 |

We want to know where the 9 comes from. It's coming from the top because the number directly above the 9 on the 4th row is 9. Since it's coming from the top, the item (7, 5) is not used in the optimal set. Where does this 9 come from?

|        | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|--------|---|---|---|---|---|---|---|---|
| (1, 1) | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| (4, 3) | 0 | 1 | 1 | 4 | 5 | 5 | 5 | 5 |
| (5, 4) | 0 | 1 | 1 | 4 | 5 | 6 | 6 | 9 |
| (7, 5) | 0 | 1 | 1 | 4 | 5 | 7 | 8 | 9 |

This 9 is not coming from the row above it. Item (5, 4) must be in the optimal set. We now go up one row, and go back 4 steps. 4 steps because the item, (5, 4), has weight 4.

|        | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|--------|---|---|---|---|---|---|---|---|
| (1, 1) | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| (4, 3) | 0 | 1 | 1 | 4 | 5 | 5 | 5 | 5 |
| (5, 4) | 0 | 1 | 1 | 4 | 5 | 6 | 6 | 9 |
| (7, 5) | 0 | 1 | 1 | 4 | 5 | 7 | 8 | 9 |

4 does not come from the row above. The item (4, 3) must be in the optimal set. The weight of item (4, 3) is 3. We go up and we go back 3 steps and reach:

|        | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|--------|---|---|---|---|---|---|---|---|
| (1, 1) | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| (4, 3) | 0 | 1 | 1 | 4 | 5 | 5 | 5 | 5 |
| (5, 4) | 0 | 1 | 1 | 4 | 5 | 6 | 6 | 9 |
| (7, 5) | 0 | 1 | 1 | 4 | 5 | 7 | 8 | 9 |

As soon as we reach a point where the weight is 0, we're done. Our two selected items are (5, 4) and (4, 3). The total weight is 7 and our total benefit is 9. We add the two tuples together to find this out. Let's begin coding this.

### Coding {0, 1} Knapsack Problem in Dynamic Programming With Python

Now we know how it works, and we've derived the recurrence for it - it shouldn't be too hard to code it. If our two-dimensional array is indexed by i (row) and j (column) then we have:

`if j < wt[i]:`

If our weight j is less than the weight of item i (so item i cannot contribute to j) then:

```
if j < wt[i]:
    # item i does not fit: carry down the value from the previous row
    T[i][j] = T[i - 1][j]
else:
    # weight of i <= j: take the best of excluding item i, or including it
    # (its value plus the previous row at the remaining weight)
    T[i][j] = max(T[i - 1][j], val[i] + T[i - 1][j - wt[i]])
```

This is the core of what the program does. I've copied some code from here to help explain this. I'm not going to explain this code much, as there isn't much more to it than what I've already explained.
If you're confused by it, leave a comment below or email me 😁

```
# Returns the maximum value that can be put in a knapsack of
# capacity W
def knapSack(W, wt, val, n):
    # Base Case
    if n == 0 or W == 0:
        return 0
    # If weight of the nth item is more than Knapsack of capacity
    # W, then this item cannot be included in the optimal solution
    if wt[n-1] > W:
        return knapSack(W, wt, val, n-1)
    # return the maximum of two cases:
    # (1) nth item included
    # (2) not included
    else:
        return max(val[n-1] + knapSack(W - wt[n-1], wt, val, n-1),
                   knapSack(W, wt, val, n-1))

# To test above function
val = [60, 100, 120]
wt = [10, 20, 30]
W = 50
n = len(val)
print(knapSack(W, wt, val, n))
# output 220
```

## Time Complexity of a Dynamic Programming Problem

Time complexity is calculated in Dynamic Programming as:

number of unique states × time taken per state

For our original problem, the Weighted Interval Scheduling Problem, we had n piles of clothes. Each pile of clothes is solved in constant time. The time complexity is:

O(n) × O(1) = O(n)

I've written a post about Big O notation if you want to learn more about time complexities.

With our Knapsack problem, we had n items. The table grows depending on the total capacity of the knapsack, so our time complexity is:

O(nw)

Where n is the number of items, and w is the capacity of the knapsack.

I'm going to let you in on a little secret. It's possible to work out the time complexity of an algorithm from its recurrence. You can use something called the Master Theorem to work it out. In a nutshell, it gives the asymptotic growth of recurrences of the form T(n) = aT(n/b) + f(n) by comparing f(n) against n^(log_b a).

Now, I'll be honest. The Master Theorem deserves a blog post of its own. For now, I've found this video to be excellent:

### Dynamic Programming vs Divide & Conquer vs Greedy

Dynamic Programming & Divide and Conquer are similar. Dynamic Programming is based on Divide and Conquer, except we memoise the results. But, Greedy is different. It aims to optimise by making the best choice at that moment. Sometimes, this doesn't optimise for the whole problem. Take this question as an example. We have 3 coins:

1p, 15p, 25p

And someone wants us to give change of 30p. With Greedy, it would select 25p, then 5 × 1p, for a total of 6 coins. The optimal solution is 2 × 15p. Greedy works from largest to smallest: at the point where it considered the 25p coin, the best local choice was to pick it. A small sketch contrasting the two approaches on this example follows the table below.

Greedy vs Divide & Conquer vs Dynamic Programming:

| Greedy | Divide & Conquer | Dynamic Programming |
|--------|------------------|---------------------|
| Optimises by making the best choice at the moment | Optimises by breaking down a subproblem into simpler versions of itself and using multi-threading & recursion to solve | Same as Divide and Conquer, but optimises by caching the answers to each subproblem so as not to repeat the calculation twice |
| Doesn't always find the optimal solution, but is very fast | Always finds the optimal solution, but is slower than Greedy | Always finds the optimal solution, but could be pointless on small datasets |
| Requires almost no memory | Requires some memory to remember recursive calls | Requires a lot of memory for memoisation / tabulation |
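To make the greedy-versus-DP contrast concrete, here is a small sketch of the 1p/15p/25p change-making example above (the code and function names are mine, not from the original post):

```python
def greedy_change(coins, target):
    """Repeatedly take the largest coin that still fits (assumes a 1p coin exists)."""
    used = []
    for coin in sorted(coins, reverse=True):
        while target >= coin:
            target -= coin
            used.append(coin)
    return used

def dp_change(coins, target):
    """Bottom-up coin change: fewest coins for every amount from 0 to target."""
    INF = float("inf")
    best = [0] + [INF] * target           # best[a] = fewest coins needed to make amount a
    choice = [0] * (target + 1)           # one coin used in an optimal solution for a
    for amount in range(1, target + 1):
        for coin in coins:
            if coin <= amount and best[amount - coin] + 1 < best[amount]:
                best[amount] = best[amount - coin] + 1
                choice[amount] = coin
    used = []
    while target > 0:                     # walk the recorded choices back
        used.append(choice[target])
        target -= choice[target]
    return used

print(greedy_change([1, 15, 25], 30))   # [25, 1, 1, 1, 1, 1] -> 6 coins
print(dp_change([1, 15, 25], 30))       # [15, 15] -> 2 coins
```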
## Tabulation (Bottom-Up) vs Memoisation (Top-Down)

There are 2 types of dynamic programming: tabulation and memoisation.

### Memoisation (Top-Down)

We've defined all the subproblems but have no idea what the optimal evaluation order is, so we perform a recursive call from the root and hope we get close to the optimal solution, or obtain a proof that we will arrive at the optimal solution. Memoisation ensures you never recompute a subproblem, because we cache the results, so duplicate sub-trees are not recomputed.

From our Fibonacci sequence earlier, we start at the root node, and the subtree F(2) isn't calculated twice. This starts at the top of the tree and evaluates the subproblems from the leaves/subtrees back up towards the root. Memoisation is a top-down approach.

### Tabulation (Bottom-Up)

We've also seen Dynamic Programming being used as a 'table-filling' algorithm. Usually, this table is multidimensional. This is like memoisation, but with one major difference: we have to pick the exact order in which we will do our computations. For the knapsack problem we saw, we filled in the table from left to right, top to bottom. We knew the exact order in which to fill the table.

Sometimes the 'table' is not like the tables we've seen. It can be a more complicated structure, such as a tree, or something specific to the problem domain, such as cities within flying distance on a map.

### Tabulation & Memoisation - Advantages and Disadvantages

Generally speaking, memoisation is easier to code than tabulation. We can write a 'memoiser' wrapper function that automatically does it for us. With tabulation, we have to come up with an ordering.

Memoisation has memory concerns. If we're computing something large such as F(10^8), each computation will be delayed as we have to place the results into the array, and the array will grow in size very quickly.

Either approach may not be time-optimal if the order in which we happen (or try) to visit subproblems is not optimal, or if there is more than one way to calculate a subproblem (normally caching would resolve this, but it's theoretically possible that caching might not in some exotic cases). Memoisation also tends to bring a space cost along with the time cost: with tabulation we have more liberty to throw away calculations, so tabulating Fibonacci lets us use O(1) space, while memoised Fibonacci uses O(n) stack space.

Memoisation vs Tabulation:

| | Tabulation | Memoisation |
|---|---|---|
| Code | Harder to code as you have to know the order | Easier to code as functions may already exist to memoise |
| Speed | Fast as you already know the order and dimensions of the table | Slower as you're creating the entries on the fly |
| Table completeness | The table is fully computed | The table does not have to be fully computed |

## Conclusion

Most of the problems you'll encounter within Dynamic Programming already exist in one shape or another. Often, your problem will build on from the answers to previous problems. Here's a list of common problems that use Dynamic Programming. I hope that whenever you encounter a problem, you think to yourself "can this problem be solved with dynamic programming?" and try it.
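To tie the table walkthrough and the recurrence together, here is a minimal bottom-up (tabulation) sketch of the 0/1 knapsack from the worked example, including the walk back through the table to recover the chosen items. The code is my own illustration, not from the original post:

```python
def knapsack_table(items, capacity):
    """items: list of (value, weight) tuples; returns (best value, chosen items)."""
    n = len(items)
    # T[i][j] = best value using the first i items with total weight <= j
    T = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        value, weight = items[i - 1]
        for j in range(capacity + 1):
            if j < weight:
                T[i][j] = T[i - 1][j]                        # item cannot fit
            else:
                T[i][j] = max(value + T[i - 1][j - weight],  # take the item
                              T[i - 1][j])                   # or leave it
    # Walk back through the table, as in the "finding the optimal set" section.
    chosen, j = [], capacity
    for i in range(n, 0, -1):
        if T[i][j] != T[i - 1][j]:        # the value changed, so item i was used
            value, weight = items[i - 1]
            chosen.append((value, weight))
            j -= weight
    return T[n][capacity], chosen

best, picked = knapsack_table([(1, 1), (4, 3), (5, 4), (7, 5)], 7)
print(best, picked)   # 9 [(5, 4), (4, 3)]
```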
2020-07-04 05:46:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 55, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.528563916683197, "perplexity": 704.5024724106515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655884012.26/warc/CC-MAIN-20200704042252-20200704072252-00547.warc.gz"}
https://studysoup.com/tsg/21548/calculus-early-transcendentals-1-edition-chapter-9-problem-40re
## Solution for Problem 40RE, Chapter 9 - Calculus: Early Transcendentals | 1st Edition (ISBN 9780321570567)

Problem 40RE

Write the remainder term $$R_n(x)$$ for the Taylor series for the following functions centered at the given point a. Then show that $$\lim _{n \rightarrow \infty} R_{n}(x)=0$$ for all x in the given interval.

$$f(x)=\sqrt{1+x},\ \ a=0,\ \ -\frac{1}{2}\ \leq\ x\ \leq\ \frac{1}{2}$$

Step-by-Step Solution:

Step 1: First we find the Taylor series of $$f(x)=\sqrt{1+x}$$ at a = 0; this gives the Taylor series of $$\sqrt{1+x}$$ with center 0.

Step 2 of 4

Step 3 of 4
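The remaining steps are not reproduced on the page above, so here is one standard way to finish the argument, sketched with the Lagrange form of the remainder (this is a reconstruction, not StudySoup's own steps). For $$f(x)=\sqrt{1+x}$$,

$$R_n(x)=\frac{f^{(n+1)}(c)}{(n+1)!}\,x^{n+1}\quad\text{for some } c \text{ between } 0 \text{ and } x,$$

where

$$f^{(n+1)}(t)=\tfrac{1}{2}\left(\tfrac{1}{2}-1\right)\cdots\left(\tfrac{1}{2}-n\right)(1+t)^{\frac{1}{2}-(n+1)}.$$

The product of the coefficients has absolute value at most $$\tfrac{1}{2}\,n!$$ (since $$|\tfrac{1}{2}-k|\le k$$ for $$k\ge 1$$), and for $$|x|\le\tfrac{1}{2}$$ we have $$1+c\ge\tfrac{1}{2}$$, so $$(1+c)^{\frac{1}{2}-(n+1)}\le 2^{\,n+\frac{1}{2}}$$. Therefore

$$|R_n(x)|\le\frac{\tfrac{1}{2}\,n!\cdot 2^{\,n+\frac{1}{2}}\cdot\left(\tfrac{1}{2}\right)^{n+1}}{(n+1)!}=\frac{1}{2\sqrt{2}\,(n+1)}\longrightarrow 0\quad\text{as } n\to\infty.$$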
2022-06-26 21:46:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47582000494003296, "perplexity": 2190.0103883039483}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103271864.14/warc/CC-MAIN-20220626192142-20220626222142-00145.warc.gz"}
https://projecteuclid.org/euclid.pgiq/1436815710
## Proceedings of the International Conference on Geometry, Integrability and Quantization

### Analysis Over $C^*$-Algebras and the Oscillatory Representation

Svatopluk Krýsl

#### Abstract

Over the last two decades, several differential operators have appeared in connection with so-called oscillatory geometry. These operators act on sections of infinite rank vector bundles. Definitions of the oscillatory representation, metaplectic structure, oscillatory Dirac operator, as well as some necessary fundamental results in the analysis in $C^*$-Hilbert bundles, are recalled here. These results are used for a description of the kernel of a certain second order differential operator arising from oscillatory geometry and the cohomology groups of the de Rham complex of exterior forms with values in the oscillatory representation.

#### Article information

Dates: First available in Project Euclid: 13 July 2015

Permanent link to this document: https://projecteuclid.org/euclid.pgiq/1436815710

Digital Object Identifier: doi:10.7546/giq-15-2014-173-195

Mathematical Reviews number (MathSciNet): MR3222636

Zentralblatt MATH identifier: 1321.81036

#### Citation

Krýsl, Svatopluk. Analysis Over $C^*$-Algebras and the Oscillatory Representation. Proceedings of the Fifteenth International Conference on Geometry, Integrability and Quantization, 173--195, Avangard Prima, Sofia, Bulgaria, 2014. doi:10.7546/giq-15-2014-173-195. https://projecteuclid.org/euclid.pgiq/1436815710
2019-08-17 13:54:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4389086961746216, "perplexity": 1831.7124832710576}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313259.30/warc/CC-MAIN-20190817123129-20190817145129-00242.warc.gz"}
http://www.physicsforums.com/showthread.php?t=528905
## Limit help, confused

Prove: the limit as x → 1 of (x^3 - 5x + 6) is 2, with epsilon = 0.2.

I got |x^3 - 5x + 6 - 2| < 0.2. Then I don't know where to go from there. Should I add 2 to 0.2 first, or subtract 2 from 6 to get x^3 - 5x + 4 < 0.2? I'm on mobile, can't use latex. Thanks

What do you mean by "epsilon = 2"? Are you supposed to find a suitable delta?

Whoops, I meant 0.2. Yea, I'm trying to find delta.

Since you somehow want to use the fact that |x - 1| < $\delta$, you should be trying to divide this polynomial by x - 1. Then you'll have |x^3 - 5x + 4| = |x - 1| |P(x)| < $\delta$ |P(x)|, where P(x) is some other polynomial of order 2, which you could also bound... Try that. :)

I'm sorry, but I still do not understand. Couldn't we just solve for epsilon and then replace it for delta? Thanks

Quote by CrossFit415: Whoops I meant 0.2. Yea I'm trying to find delta.

Quote by CrossFit415: I'm sorry but I still do not understand. Couldn't we just solve for epsilon then replace it for delta? Thanks

I don't exactly understand what you mean. You wrote |x^3 - 5x + 6 - 2| < 0.2, defining f(x) = x^3 - 5x + 6. But it's not that you need to solve an inequality. What you need to do, formally, is to find a $\delta$ so that for every x with |x - 1| < $\delta$, we have |f(x) - 2| < 0.2. So, like I said, you have to somehow use the fact that |x - 1| < $\delta$. You start off by writing |x^3 - 5x + 6 - 2|, and you start looking for ways to express it with, apart from other things, $\delta$. You are looking to see how small this $\delta$ needs to be so that |x^3 - 5x + 6 - 2| is small enough. You start off in the way I wrote. Otherwise I don't understand your question.

Well, it asks me to use the graph to find a number delta such that if |x - 1| < d then |(x^3 - 5x + 6) - 2| < 0.2. Then it tells me to find the delta that corresponds to epsilon = 0.2. Oh ok, I'm starting to see it.
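For what it's worth, here is a sketch of the algebraic route suggested above (the specific bound is my own choice, not from the thread). Factor the polynomial:

$|x^3 - 5x + 6 - 2| = |x^3 - 5x + 4| = |x - 1|\,|x^2 + x - 4|$

If we first insist that $|x - 1| < 1$, then $0 < x < 2$, and on that interval $|x^2 + x - 4| < 4$ (the quadratic increases from $-4$ at $x = 0$ to $2$ at $x = 2$). So

$|x^3 - 5x + 4| < 4\,|x - 1| < 0.2$ whenever $|x - 1| < 0.05$,

and $\delta = \min(1, 0.05) = 0.05$ works for $\varepsilon = 0.2$. Reading $\delta$ off the graph, as the textbook asks, should give a value consistent with this.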
2013-05-23 09:59:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7555117011070251, "perplexity": 1138.331635192889}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703108201/warc/CC-MAIN-20130516111828-00051-ip-10-60-113-184.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1418036/why-is-it-necessary-for-a-ring-to-have-multiplicative-identity/1418046
# Why is it necessary for a ring to have multiplicative identity? I have read earlier that in a ring $(R,+,.)$ the following needs to hold: 1. $(R,+)$ is an abelian group 2. multiplication is associative and closed 3. left and right distribution laws hold. However, I recently came across the fact that every ring has to have a multiplicative identity. Can anyone please clarify this? Is it needed for the ring to have a multiplicative identity? (In fact it was mentioned that it is one of the reasons why $ker(f)$ is not a subring where $f$ is a ring homomorphism as the additive identity and the multiplicative identity are not usually in the same subset.) Further in 2 different places I have noticed that there is a difference on whether the mapping $f(1) \to 1$ is a necessary condition for $f$ to be a ring homomorphism. I think this is also related to my doubt as to whether the multiplicative identity is in fact a necessary condition for defining a ring. • A ring does not need to have a multiplicative identity. Probably what your source meant was that in this book or perhaps in this chapter or something like that, all rings will have a multiplicative identity. – David Sep 2 '15 at 6:42 • a few authors define a ring to have a multiplicative identity but most do not. The latter use the name "unit ring" or "ring with identity" to distinguish between the 2 – Alessandro Codenotti Sep 2 '15 at 6:43 • Definitions in mathematics (especially in algebra) are usually made to capture some observed notion, so that we may study such things abstractly. The multiplicative identity of a ring is definitely something important that occurs in many systems and so yes it deserves to be a part of the definition. – fretty Sep 2 '15 at 7:12 • Somewhat related post on meta: Does anyone believe that there are rings without unit elements? – Martin Sleziak Sep 2 '15 at 9:23 • Bjorn Poonen from MIT has a write-up explaining why having 1 makes sense: www-math.mit.edu/~poonen/papers/ring.pdf – user45150 Sep 2 '15 at 17:58 Many authors take the existence of $1$ as part of the definition of a ring. In fact, I would disagree with Alessandro's comment and claim that most authors take the existence of $1$ to be part of the definition of a ring. There is another object, often called a rng (pronounced "rung"), which is defined by taking all the axioms that define a ring except you don't require there to be a $1$. Rng's are useful in of themselves, for example functions with compact support over a non-compact space do not form a ring, they form a rng. But there is also a theorem that states that every rng is isomorphic to an ideal in some ring. So studying rings and their ideals is sufficient, and this is why it is so popular to include the existence of $1$ as one of the axioms of a ring. So to summarize, there isn't really a reason why it's necessary for rings to have a $1$, it certainly does not follow from the other axioms. It's just a choice of terminology: Do you say rings have a $1$ and if they don't have a $1$ call them rngs, or do you say rings don't need a $1$ and when they do have it call them rings with unity? • For completeness, note that "a rng with a multiplicative identity" is still a different concept from "a ring": the difference being what is required of a homomorphism. A homomorphism between rngs with multiplicative identity is not required to map the identity to the identity, but a homomorphism between rings is required to satisfy $f(1) = 1$. 
As an example that this matters, if $R$ is a ring, the map $R \to R \times R: x \mapsto (x,0)$ is a rng homomorphism, but not a ring homomorphism. – Hurkyl Sep 2 '15 at 18:03 I'm currently teaching out of the 4th edition of Stewart's Galois Theory textbook. Stewart defines a ring to be what other authors might call a commutative ring with unity. The reason is simple: in this book, there is not much call for noncommutative rings, nor for rings without unity, and it gets old writing "commutative ring with unity" over and over, when that's the only kind of ring you need. Stewart then defines a subring of a ring to be a subset of a ring closed under addition, subtraction, and multiplication. Note that a subring doesn't have to have unity – a subring doesn't have to be a ring, in this book. Well, it's a convention. As long as it's explained to the reader, and the author is consistent with it, I think it's fine. Then he goes and spoils it by asking, in Exercise 16.2, whether the rings $\bf Z$ and $2\bf Z$ are isomorphic. • Imo requiring rings to have $1$ but subrings not to have $1$ is a really bad convention, because it kind of messes with the whole category theoretic / universal algebraic perspective that sub-$\mathsf{T}$-algebras are subsets on which there exists a $\mathsf{T}$-algebra structure such that the inclusion is a homomorphism. – goblin Sep 2 '15 at 10:26 • @goblin: It depends on whether your definition is "there exists a multiplicative identity" or "there is a constant $1$ that is a multiplicative identity". (with the former, I think the structure so defined isn't even a variety of universal algebras!) – Hurkyl Sep 2 '15 at 18:07 Lang and a few other authors use "Ring" to mean "Ring with unity" and say "Ring without unity" for what I'd call a Ring. This is because Rings with unity are by far the most interesting. There are few things you can say of/do to a ring (or ring without unity to you) but there are MANY MANY things you can do with rings with unity (rings to you) I just thought I would expand on Jim's answer and provide a source that discusses this very question about whether a ring should assume the existence of an identity or not in a bit more detail. (This is definitely one of the better puns in algebra that I have come across. Ring without the $i$ for no identity.) There is a chapter by D.D. Anderson at the beginning of the book on multiplicative ideal theory about rngs. You can see the introduction in the look inside which talks a bit about the history of this. http://link.springer.com/chapter/10.1007/978-0-387-36717-0_1#page-1
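The theorem mentioned above - that every rng is isomorphic to an ideal in some ring - has a short explicit construction behind it, usually called the unitalization (or Dorroh extension). A sketch, stated here for completeness rather than taken from any of the answers: given a rng $R$, set $\hat R = \mathbb{Z}\times R$ with

$$(m,r)+(n,s)=(m+n,\ r+s),\qquad (m,r)\cdot(n,s)=(mn,\ ms+nr+rs).$$

Then $(1,0)$ is a multiplicative identity for $\hat R$, the map $r\mapsto(0,r)$ is an injective rng homomorphism, and its image $\{0\}\times R$ is a two-sided ideal of $\hat R$.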
2019-06-26 02:02:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8188384771347046, "perplexity": 259.9114490915158}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000044.37/warc/CC-MAIN-20190626013357-20190626035357-00543.warc.gz"}
https://quant.stackexchange.com/tags/derivatives/hot
# Tag Info Accepted ### How to estimate real-world probabilities The risk-neutral measure $\mathbb{Q}$ is a mathematical construct which stems from the law of one price, also known as the principle of no riskless arbitrage and which you may already have heard of in ... • 13.8k ### Theoretical limits for contango and backwardation This is a basic fact about futures trading and the storage of commodities. The phrase that was used by futures traders in the old days (and probably still today) was "the contango is limited by the ... • 9,077 Accepted ### Derivation of VIX Formula The piece you are missing is an approximation via the Taylor formula of the logarithm: $$\ln(1+x) \approx x-\frac{x^2}{2} \; .$$ Apply this to the first term in the final formula of the technical ... • 1,487 Accepted ### Find a formula for the price of a derivative paying $\max(S_T(S_T-K),0)$ I provide a solution in three steps. The first step carefully outlines how to split up the expectation and what new measures are used. This first step does not require any special model assumption ... • 13.8k ### How to use the stock as a numeraire to price a derivative with payoff of the form $(S_T f(S_T))^+$? Let $P$ be the risk-neutral measure. We define the measure $P_S$ such that \begin{align*} \frac{dP_S}{dP}\big|_t &=\frac{S_t}{e^{rt}S_0}\\ &=e^{-\frac{1}{2}\sigma^2 t+\sigma W_t}. \end{align*} ... • 20.4k ### Present and future role of pricing quants FO is shrinking across the large investment banks. The market is not developing new products that will need new pricing formulas, if anything it is reverting to more vanilla structures. Nowdays FO ... • 4,217 Accepted • 6,743 Accepted ### Black-Scholes formula for Poisson jumps We assume that the process $\{J_t, \, t\ge 0\}$ is defined at the jump times of the Poisson process $\{N_t, \, t \ge 0\}$, and all the jump sizes are independent and identically distributed. That is, \... • 20.4k ### Why discounted derivative price is a martingale? Under a Black-Scholes framework, the dynamics of the stock price under the risk-neutral measure $\mathbb{Q}$ are given by ... $$S_t = r S_tdt +\sigma S_tdW^{\mathbb{Q}}_t$$ ... and those of the ... • 7,101 Accepted ### The dice game and derivatives trading The interviewer meant that he's smart. Quoting Senior VP of People operations at Google, On the hiring side, we found that brainteasers are a complete waste of time. How many golf balls can you ... • 6,334 ### CMS Pricing - Convexity Adjustment by Replication The CMS represents the value of a swap rate for any point in time, i.e. we are interested in extrapolating the density of the swap rate in a similar way as the IBOR rate. Let us start with the fair ... • 1,002 Accepted ### Static vs Dynamic Hedging: when is each one used? It depends a little bit what you're trying to do. If you can statically replicate the payoff of a position at $t=0$, then putting on that hedge will insulate you from all risk coming from the ... • 2,846 ### How do market makers calculate the IV for options? They do not calculate it, they set it at a market clearing level based on supply and demand. It is similar to the way equity market makers set the price of a stock: a lot of buyers => raise the ... • 9,462 Accepted • 4,891 Accepted ### Curve Euribor - Euribor 3M It is incorrect to use 1m euribor or O/N euribor in a 6m Euribor forward curve. You should only use instruments based on 6M euribor, such as 1x7 FRA, 6x12 FRA or swaps v 6m Euribor, as you have done ... • 304 ### What is a Constant Maturity Swap (CMS) rate? 
In simple terms: An ordinary swap might be a 10 year swap of Libor vs a fixed rate; this fixed rate is determined in the marketplace every day and is published by Reuters, Bloomberg etc. as the '10 ... • 9,077 Accepted ### Derive vega for Black-Scholes call from this formula? Note that, \begin{align*} \frac{\partial{C}}{\partial{\sigma}} &=\frac{S_0}{\sqrt{2\pi}}{e^\frac{-d_+^2}{2}}(\frac{-1}{\sigma})(d_-)-\frac{Ke^{-rt}}{\sqrt{2\pi}}e^{\frac{-d_-^2}{2}}(\frac{-1}{\... • 20.4k ### Using a Constant as a Numeraire A Numeraire must be a tradeable asset. If you can find a constant tradeable asset, then yes a constant can be used as a numeraire. • 1,329
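To complement the "Derive vega for Black-Scholes call" excerpt above, here is a small self-contained Python sketch of the standard Black-Scholes call price and its vega (no dividends; the parameter values and helper names are my own illustration):

```python
from math import log, sqrt, exp, pi, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def norm_pdf(x):
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def bs_call_price_and_vega(S0, K, T, r, sigma):
    d1 = (log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    price = S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
    vega = S0 * norm_pdf(d1) * sqrt(T)   # dC/d(sigma), per unit of volatility
    return price, vega

price, vega = bs_call_price_and_vega(S0=100, K=100, T=1.0, r=0.02, sigma=0.2)
print(round(price, 2), round(vega, 2))   # roughly 8.92 and 39.10
```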
2022-07-07 10:54:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6596535444259644, "perplexity": 1987.1059840234866}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104690785.95/warc/CC-MAIN-20220707093848-20220707123848-00192.warc.gz"}
https://www.iitianacademy.com/cie-a-level-pure-mathematics-1-topic-1-5-trigonometry-solutions-of-simple-trigonometrical-equations-exam-questions-paper-1/
Question

(i) Express $$3\cos \Theta +\sin \Theta$$ in the form $$R\cos (\Theta -\alpha )$$, where R > 0 and $$0^{\circ}< \alpha < 90^{\circ}$$, giving the exact value of R and the value of α correct to 2 decimal places.

(ii) Hence solve the equation 3 cos 2x + sin 2x = 2, giving all solutions in the interval $$0^{\circ}\leq x\leq 360^{\circ}$$.

(i) State $$R=\sqrt{10}$$. Use trig formula to find α. Obtain α = 18.43° with no errors seen.

(ii) Carry out evaluation of $$\cos ^{-1}\left ( \frac{2}{R} \right )\approx 50.77^{\circ}$$. Carry out correct method for one correct answer. Obtain one correct answer e.g. 34.6°. Carry out correct method for a further answer. Obtain remaining 3 answers 163.8°, 214.6°, 343.8° and no others in the range.

Question

(i) Express 5 sin 2θ + 2 cos 2θ in the form R sin(2θ + α), where R > 0 and 0° < α < 90°, giving the exact value of R and the value of α correct to 2 decimal places.

Hence

(ii) solve the equation 5 sin 2θ + 2 cos 2θ = 4, giving all solutions in the interval 0° ≤ θ ≤ 360°;

(iii) determine the least value of $$\frac{1}{(10\sin 2\Theta +4\cos 2\Theta )^{2}}$$ as θ varies.

(i) State $$R=\sqrt{29}$$. Use trig formula to find α. Obtain $$\alpha =21.80^{\circ}$$ with no errors seen.

(ii) Carry out evaluation of $$\sin ^{-1}(\frac{4}{R})\approx 47.97^{\circ}$$. Carry out correct method for one correct answer. Obtain one correct answer e.g. $$13.1^{\circ}$$. Carry out correct method for a further answer. Obtain remaining 3 answers $$55.1^{\circ}$$, $$193.1^{\circ}$$, $$235.1^{\circ}$$ and no others in the range.

(iii) Greatest value of $$10\sin 2\Theta +4\cos 2\Theta$$ is $$2\sqrt{29}$$, so the least value is $$\frac{1}{116}$$.

Question

(i) Show that $$\frac{\sin \Theta }{\sin \Theta +\cos \Theta }+\frac{\cos \Theta }{\sin \Theta -\cos \Theta }=\frac{1}{\sin ^{2}\Theta -\cos ^{2}\Theta }$$.
(ii) Hence solve the equation $$\frac{\sin \Theta }{\sin \Theta +\cos \Theta }+\frac{\cos \Theta }{\sin \Theta -\cos \Theta }=3$$ for $$0^{\circ}\leq \Theta \leq 360^{\circ}$$.

(i) $$\frac{\sin \Theta\left ( \sin \Theta -\cos \Theta \right )+\cos \Theta \left ( \sin \Theta +\cos \Theta \right ) }{\left ( \sin \Theta +\cos \Theta \right )\left ( \sin \Theta -\cos \Theta \right )}$$
$$=\frac{\sin^{2}\Theta -\sin \Theta \cos \Theta +\sin \Theta \cos \Theta +\cos ^{2}\Theta }{\sin ^{2}\Theta -\cos ^{2}\Theta }$$
$$=\frac{1}{\sin ^{2}\Theta -\cos ^{2}\Theta }$$

(ii) $$\sin ^{2}\Theta -\left ( 1-\sin ^{2} \Theta \right )=\frac{1}{3}$$ or $$1-\cos ^{2}\Theta -\cos ^{2}\Theta =\frac{1}{3}$$ or $$3\left ( \sin ^{2} \Theta -\cos ^{2}\Theta \right )=\cos ^{2}\Theta +\sin ^{2}\Theta$$
$$\sin \Theta =\pm \sqrt{\frac{2}{3}}$$ or $$\cos \Theta =\pm \sqrt{\frac{1}{3}}$$
$$\tan \Theta =\pm \sqrt{2}$$
$$\Theta =54.7^{\circ},125.3^{\circ},234.7^{\circ},305.3^{\circ}$$

Question

Solve the equation $$\sin 2x=2\cos 2x$$, for $$0^{\circ}\leq x\leq 180^{\circ}$$.

$$\tan 2x=2$$
2x = 63.4° or 243.4°
x = 31.7° or 121.7° (allow 122°)

Question

(i) Show that $$\cos ^{4}x=1-2\sin ^{2}x+\sin ^{4}x.$$

(ii) Hence, or otherwise, solve the equation $$8\sin ^{4}x+\cos ^{4}x=2\cos ^{2}x$$ for $$0^{\circ}\leq x\leq 360^{\circ}$$.

(i) $$\cos ^{4}x=\left ( 1-\sin ^{2}x \right )^{2}=1-2\sin ^{2}x+\sin ^{4}x$$

(ii) $$8\sin ^{4}x+1-2\sin ^{2}x+\sin ^{4}x=2\left ( 1-\sin ^{2}x \right )$$
$$9\sin ^{4}x=1$$
$$x=35.3^{\circ}$$ (or any correct solution)
Any correct second solution from $$144.7^{\circ},215.3^{\circ},324.7^{\circ}$$
The remaining 2 solutions

Question

(i) Solve the equation 2 cos²θ = 3 sin θ, for 0° ≤ θ ≤ 360°.

(ii) The smallest positive solution of the equation $$2\cos ^{2}(n\Theta )=3\sin \left ( n\Theta \right )$$, where n is a positive integer, is 10°. State the value of n and hence find the largest solution of this equation in the interval 0° ≤ θ ≤ 360°.

(i) $$2(1-\sin ^{2}\Theta )=3\sin \Theta$$
$$(2\sin \Theta -1)(\sin \Theta +2)=0$$
$$\Theta =30^{\circ}$$ or $$150^{\circ}$$

(ii) $$n=\frac{their\ 30}{10}=3$$
(their 3)θ = 720 + their 150 = 870
$$\Theta =290^{\circ}$$

Question

Solve the equation $$\frac{13\sin ^{2}\Theta }{2+\cos \Theta }=2$$ for $$0^{\circ}\leq \Theta \leq 180^{\circ}$$.

$$13\sin ^{2}\Theta +2\cos \Theta +\cos ^{2}\Theta =4+2\cos \Theta$$
$$13\sin ^{2}\Theta +1-\sin ^{2}\Theta =4\rightarrow \sin ^{2}\Theta =\frac{1}{4}$$ or $$13-13\cos ^{2}\Theta +\cos ^{2}\Theta =4\rightarrow \cos ^{2}\Theta =\frac{3}{4}$$
$$30^{\circ}$$, $$150^{\circ}$$

Question

(i) Solve the equation $$4\sin ^{2}x+8\cos x-7=0$$ for $$0^{\circ}\leq x\leq 360^{\circ}$$.

(ii) Hence find the solution of the equation $$4\sin ^{2}\left ( \frac{1}{2}\Theta \right )+8\cos \left ( \frac{1}{2} \Theta \right )-7=0$$ for $$0^{\circ}\leq \Theta \leq 360^{\circ}$$.

(i) $$4(1-\cos ^{2}x)+8\cos x-7=0$$
$$4c^{2}-8c+3=0\rightarrow \left ( 2\cos x-1 \right )(2\cos x-3 )=0$$
$$x=60^{\circ}$$ or $$300^{\circ}$$

(ii) $$\frac{1}{2}\Theta =60^{\circ}$$ or $$300^{\circ}$$
$$\Theta =120^{\circ}$$ only

#### Question.

(i) Prove the identity $$\frac{1+\cos \Theta }{\sin \Theta }+\frac{\sin \Theta }{1+\cos \Theta }\equiv \frac{2}{\sin \Theta }$$.

(ii) Hence solve the equation $$\frac{1+\cos \Theta }{\sin \Theta }+\frac{\sin \Theta }{1+\cos \Theta }=\frac{3}{\cos \Theta }$$ for $$0^{\circ}\leq \Theta \leq 360^{\circ}$$.

(i) $$\frac{1+\cos \Theta }{\sin \Theta }+\frac{\sin \Theta }{1+\cos \Theta }=\frac{\left ( 1+c \right )^{2}+s^{2}}{s\left ( 1+c \right )}=\frac{1+2c+c^{2}+s^{2}}{s(1+c)}$$
$$=\frac{2+2c}{s(1+c)}=\frac{2(1+c)}{s(1+c)}\rightarrow \frac{2}{s}$$

(ii) $$\frac{2}{s}=\frac{3}{c}\rightarrow t=\frac{2}{3}$$
$$\rightarrow \Theta =33.7^{\circ}$$ or $$213.7^{\circ}$$

#### Question.
The diagram shows the graphs of y = tan x and y = cos x for $$0 \leq x \leq \pi$$. The graphs intersect at points A and B.

(i) Find by calculation the x-coordinate of A.

(ii) Find by calculation the coordinates of B.

(i) $$\tan x =\cos x\rightarrow \sin x=\cos ^{2}x$$
$$\sin x=1-\sin ^{2}x$$
$$\sin x=0.6180$$ (allow $$\frac{-1+\sqrt{5}}{2}$$)
x-coordinate of A = $$\sin ^{-1}0.618=0.666$$

(ii) EITHER: x-coordinate of B is $$\pi -$$ their 0.666; y-coordinate of B is $$\tan$$(their 2.475) or $$\cos$$(their 2.475); x = 2.48, y = -0.786 or -0.787.
OR: y-coordinate of B is -(cos or tan of their 0.666); x-coordinate of B is $$\cos ^{-1}$$(their y) or $$\pi +\tan ^{-1}$$(their y); x = 2.48, y = -0.786 or -0.787.

#### Question.

(a) Solve the equation $$\sin^{-1}(3x) = -\frac{1}{3}\pi$$, giving the solution in an exact form.

(b) Solve, by factorising, the equation $$2 \cos\Theta \sin\Theta - 2 \cos\Theta - \sin\Theta +1 = 0$$ for $$0 \leq \Theta \leq \pi$$.

Ans:
(a) $$3x=-\frac{\sqrt{3}}{2}\rightarrow x=-\frac{\sqrt{3}}{6}$$
(b) $$(2\cos\Theta -1)(\sin\Theta -1)=0$$
$$\cos\Theta =\frac{1}{2}$$ or $$\sin\Theta =1$$
$$\Theta =\frac{\pi }{3}$$ or $$\frac{\pi }{2}$$

Question

(a) Solve the equation $$3\sin^{2}2\Theta+8\cos 2\Theta =0$$ for 0° ≤ θ ≤ 180°.

(b) The diagram shows part of the graph of y = a + tan bx, where x is measured in radians and a and b are constants. The curve intersects the x-axis at $$(-\frac{\pi }{6},0)$$ and the y-axis at $$(0,\sqrt{3})$$. Find the values of a and b.

(a) $$3(1-\cos^{2}2\Theta)+8\cos 2\Theta =0\rightarrow 3\cos^{2}2\Theta -8\cos 2\Theta -3(=0)$$
$$\cos 2\Theta =-\frac{1}{3}$$
2θ = 109.47° or 250.53°
θ = 54.7° or 125.3°

(b) $$a+\tan 0=\sqrt{3}\rightarrow a=\sqrt{3}$$
$$0=\sqrt{3}+\tan\left(-\frac{b\pi}{6}\right)$$, taken as far as $$\tan^{-1}$$, angle units consistent
b = 2

### Question

(a) Solve the equation $$3\tan^2 x-5 \tan x - 2 =0$$ for $$0^{\circ}\leq x\leq 180^{\circ}$$.

(b) Find the set of values of k for which the equation $$3\tan^2 x - 5 \tan x + k = 0$$ has no solutions.

(c) For the equation $$3\tan^2 x - 5 \tan x + k = 0$$, state the value of k for which there are three solutions in the interval $$0^{\circ}\leq x\leq 180^{\circ}$$, and find these solutions.

Ans:
(a) $$(\tan x-2)(3\tan x+1)(=0)$$, or formula, or completing the square.
$$\tan x =2$$ or $$-\frac{1}{3}$$
$$x=63.4^{\circ}$$ (only value in range) or $$161.6^{\circ}$$ (only value in range)
(b) Apply $$b^2-4ac<0$$; $$k>\frac{25}{12}$$
(c) k = 0; $$\tan x = 0$$ or $$\frac{5}{3}$$; $$x=0^{\circ}$$ or $$180^{\circ}$$ or $$59.0^{\circ}$$

### Question.

The diagram shows part of the graph of y = a cos (bx) + c.

(a) Find the values of the positive integers a, b and c.

(b) For these values of a, b and c, use the given diagram to determine the number of solutions in the interval 0 ≤ x ≤ 2π for each of the following equations.

(i) $$a \cos (bx) + c =\frac{6}{\pi }x$$

(ii) $$a \cos (bx) + c = 6- \frac{6}{\pi }x$$

(a) $$a = 5, b = 2, c = 3$$
(b)(i) 3
(b)(ii) 2

### Question

Solve the equation $$\frac{\tan\Theta + 2\sin\Theta }{\tan\Theta - 2\sin\Theta}=3$$ for $$0^{\circ}<\Theta <180^{\circ}$$.

Ans: $$\tan\Theta+2\sin\Theta=3\tan\Theta-6\sin\Theta$$, leading to $$2\tan\Theta - 8\sin\Theta\ [=0]$$
$$\cos\Theta =\frac{1}{4}$$
$$\Theta=75.5^{\circ}$$ only

### Question.

Solve, by factorising, the equation 6 cos θ tan θ − 3 cos θ + 4 tan θ − 2 = 0, for 0° ≤ θ ≤ 180°.

[leading to tan θ = $$\frac{1}{2}$$, cos θ = $$-\frac{2}{3}$$]
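As a quick illustration of the last question (my working, not the original mark scheme), group the terms to factorise:

$$6\cos\theta\tan\theta-3\cos\theta+4\tan\theta-2 = 3\cos\theta\,(2\tan\theta-1)+2\,(2\tan\theta-1) = (2\tan\theta-1)(3\cos\theta+2)=0,$$

so $$\tan\theta=\frac{1}{2}$$ or $$\cos\theta=-\frac{2}{3}$$, giving $$\theta\approx 26.6^{\circ}$$ or $$\theta\approx 131.8^{\circ}$$ in the interval 0° ≤ θ ≤ 180°.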
2023-02-08 18:03:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7895843982696533, "perplexity": 2516.315341327479}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500837.65/warc/CC-MAIN-20230208155417-20230208185417-00689.warc.gz"}
http://ci.nii.ac.jp/naid/10018381149
# The second term of the semi-classical asymptotic expansion for Feynman path integrals with integrand of polynomial growth

## Abstract

Recently N. Kumano-go [15] succeeded in proving that the piecewise linear time slicing approximation to the Feynman path integral
$$\int F(\gamma)e^{i\nu S(\gamma)}\,\mathscr{D}[\gamma]$$
actually converges to the limit as the mesh of the division of time goes to 0 if the functional F(γ) of paths γ belongs to a certain class of functionals, which includes, as a typical example, Stieltjes integrals of the following form:
$$F(\gamma) = \int_0^T f(t,\gamma(t))\, \rho(dt), \tag{1}$$
where ρ(t) is a function of bounded variation and f(t, x) is a sufficiently smooth function with polynomial growth as |x| → ∞. Moreover, he rigorously showed that the limit, which we call the Feynman path integral, has rich properties (see also [10]).

The present paper has two aims. The first aim is to show that a large part of the discussion in [15] becomes much simpler and clearer if one uses piecewise classical paths in place of piecewise linear paths.

The second aim is to explain that the use of piecewise classical paths naturally leads us to an analytic formula for the second term of the semi-classical asymptotic expansion of the Feynman path integrals under slightly stronger assumptions than those in [15]. If F(γ) ≡ 1, this second term coincides with the one given by G. D. Birkhoff [1].

## Journal

Tokyo Sugaku Kaisya Zasshi 58(3), 837-867, 2006-07-01. The Mathematical Society of Japan.

## Codes

• NII Article ID (NAID) 10018381149
• NII NACSIS-CAT ID (NCID) AA0070177X
• Text Lang ENG
• Article Type ART
• ISSN 00255645
• NDL Article ID 7987167
• NDL Source Classification ZM31 (Science and Technology -- Mathematics)
• NDL Call No. Z53-A209
• Data Source CJP NDL J-STAGE
2016-10-27 04:01:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.6975697875022888, "perplexity": 2473.1696096028854}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721067.83/warc/CC-MAIN-20161020183841-00538-ip-10-171-6-4.ec2.internal.warc.gz"}
https://socratic.org/questions/the-sum-of-6-consecutive-integers-is-393-what-is-the-third-number-in-this-sequen
# The sum of 6 consecutive integers is 393. What is the third number In this sequence? Dec 10, 2016 65 #### Explanation: Let's define the first integer as $x$. Then the next five consecutive integers would be: $x + 1$, $x + 2$, $x + 3$, $x + 4$ and $x + 5$. The sum of these six integers is 393 so we can write: $x + x + 1 + x + 2 + x + 3 + x + 4 + x + 5 = 393$ $6 x + 1 + 2 + 3 + 4 + 5 = 393$ $6 x + 15 = 393$ $6 x + 15 - 15 = 393 - 15$ $6 x + 0 = 378$ $\frac{6 x}{6} = \frac{378}{6}$ $x = 63$ Because the first integer is 63 then the third would be $x + 2$ or $63 + 2 = 65$
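A quick check of the arithmetic above (not part of the original answer), in Python:

```python
first = 63
run = [first + k for k in range(6)]   # the six consecutive integers
print(sum(run), run[2])               # 393 65
```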
2021-12-04 18:10:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 15, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8973833918571472, "perplexity": 281.12409685762526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362999.66/warc/CC-MAIN-20211204154554-20211204184554-00068.warc.gz"}
https://www.commontools.org/tool/cascaded-noise-figure-calculator-27
This is a free online tool to calculate the cascaded noise figure. The tool supports up to 4 cascaded amplifiers.

Created by Commontools | Updated on: June 23, 2022

## Introduction

The noise figure calculator determines the noise figure, a measurement of a device's contribution to the overall noise of the system in which it is installed. Using this information, you can work out how much noise is produced in that system.

In the following article, we define the terms noise factor and noise figure, which are similar but differ slightly in how they are calculated. To calculate the degradation of the signal-to-noise ratio (SNR) in such a system we use the latter, for instance in the noise figure formula for a cascaded amplifier. Read on to discover more about the noise figure formula and the real-world uses of noise figures.

## What do noise factor and noise figure mean?

The noise figure is defined in terms of the signal-to-noise ratio (SNR): it is based on the common logarithm of the ratio of the input SNR to the output SNR. The SNR gauges how strong the desired signal is relative to the background noise. The noise factor is the same ratio without the logarithm.

Any undesired disturbance that degrades the signal's quality and interferes with the transmission of text, graphics, audio, and video can be categorised as noise. Therefore, if you want to improve a system's performance, studying its noise contribution is essential.

## Definitions of noise figure and noise factor

The noise factor and noise figure both measure the worsening of the signal-to-noise ratio. When we express the value on a linear scale, the result is the noise factor; when we take the common logarithm (in decibels), the result is the noise figure. When many devices are connected in a sequential or cascaded fashion, the total noise figure of such a system is known as the cascaded noise figure.

### Noise figure calculator

Depending on the situation in front of you, the noise figure calculator enables you to calculate the noise figure's value in various ways. The calculator offers four different calculation types, each of which has a unique formula that you must use to calculate the noise value based on your inputs.

## The computation techniques are:

1. The signal-to-noise ratios;
2. The signal-to-noise ratios in decibels (dB);
3. Convert from noise factor to noise figure; and

## Noise figure formula & calculation

It is possible to determine the noise figure formula from the conditions described above:

$$NF=10\log_{10}\left(\frac{S_i/N_i}{S_o/N_o}\right)$$

Where

Si is the signal at the input
Ni is the noise at the input
So is the signal at the output
No is the noise at the output

For instance, if the signal-to-noise ratio was 4:1 at the input and 3:1 at the output, the noise factor would be 4/3 and the noise figure would be 10 log (4/3), or 1.25 dB. Alternatively, the noise figure can be calculated easily if the signal-to-noise ratios are given in decibels, because dividing two quantities corresponds to subtracting their logarithms. In other words, the circuit would have a noise figure of 13 - 11, or 2 dB, if the signal-to-noise ratio was 13 dB at the input and only 11 dB at the output.
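As a quick sanity check of the two worked numbers above, here is a minimal sketch in Python (the helper names are mine, not part of the calculator):

```python
from math import log10

def noise_figure_from_snr(snr_in, snr_out):
    """SNRs given as plain ratios, e.g. 4.0 for a 4:1 ratio."""
    noise_factor = snr_in / snr_out          # linear-scale noise factor
    return 10 * log10(noise_factor)          # noise figure in dB

def noise_figure_from_snr_db(snr_in_db, snr_out_db):
    """SNRs already in dB: the division becomes a subtraction."""
    return snr_in_db - snr_out_db

print(round(noise_figure_from_snr(4, 3), 2))   # 1.25 dB, as in the 4:1 -> 3:1 example
print(noise_figure_from_snr_db(13, 11))        # 2 dB, as in the 13 dB -> 11 dB example
```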
## What exactly is gain, and how is the gain of a cascaded circuit determined? The gain in electronics refers to the amount that a two-port circuit (typically an amplifier) enhances a signal's power or amplitude from the input to the output port. The ratio of the output signal power to the input signal power determines an amplifier's power gain. The gain is frequently stated in decibels (dB gain). For example, the following formula determines an amplifier's power gain in dB units. ## Applications When working with weak signals, noise in a circuit is a crucial factor to consider. Although you can partially eradicate it, noise in electronic communication systems is undesirable. It is the designer's responsibility to ensure that each component's noise contribution to the circuit is minimal enough to prevent noticeably degrading the signal-to-noise ratio. The system's total noise figure determines the minimum magnitude of a signal you may recognize in the presence of noise. This indicates that the lowest possible noise level is required to ensure the best performance from a system. We can determine a device's noise contribution by looking at its noise figure. For example, the noise figure of an ideal amplifier is 1, as the signal-to-noise ratio for both its input and output is infinity. However, a true amplifier will add its noise to the signal in addition to amplifying the noise at its input. As a result, the amplifier's output has a lower noise figure and a lower signal-to-noise ratio. ## Measurements of noise performance There are several approaches to specify a radio receiver's noise performance. The most visible is the signal-to-noise ratio, and there is also a SINAD (Signal to Noise And Distortion). In addition, other metrics, such as Bit Error Rate and others, can assess sensitivity performance. However, because noise figures may be applied to both the entire system and individual components of it, it has emerged as one of the more crucial parameters linked to radio receiver performance. The noise figure can be used to analyze the various components and create an overall figure. Noise affects all frequencies and is brought in by circuitry components like electronic parts. As a result, the component selection can significantly affect how well the circuit handles noise. Thermal noise makes up the majority of radio receiver circuit noise, but not all of it. Due to this, some specialized applications, such as radio astronomy, may require input circuits to be cooled to shallow temperatures in order to eliminate thermal noise. In addition, these applications require extremely low noise levels in order to detect tiny signals. While thermal noise is the primary source of noise, there are other mechanisms that contribute to noise as well. These mechanisms must be taken into account when designing an RF circuit in order to select circuit configurations, electronic components, and design strategies that will minimize overall noise. ## Basics of noise factor and noise figure In essence, the measurement evaluates the amount of noise that the system as a whole or each component of the system introduces. This could, for instance, be an RF amplifier or a radio receiver. If the system were flawless, there would be no noise added to the signal as it moved through it, and the signal-to-noise ratio at the output and input would be the same. This is not the case, as we are all aware, and additional noise is always present. 
SNR, or signal-to-noise ratio, is worse at the output than at the input, according to this statement. ## Two fundamental figures can be employed: Noise factor: To calculate the noise factor, divide the SNR at the input by the SNR at the output. The noise factor is always more than one since the SNR at the output will always be worse or lower. Specifications mentioning the noise factor are uncommon. Noise figure: it is a parameter that is frequently used to specify and describe radio receivers and the components that makeup receiver systems. The noise figure is just the noise factor stated in decibels and employs a logarithmic scale. ## Noise figure for cascaded stages A typical radio receiver will have the input tuner, an RF amplifier, maybe an RF attenuator, an RF mixer, and so forth. This is true of all RF circuit designs. The total noise figure and, thus, the noise performance of the entire RF circuit design will tend to be defined in the initial stages. Consider a two-stage RF circuit design. The input noise will be equal to kTB, and the gains G1 and G2 for each stage will amplify this noise. There will be noise from the first stage, which the second stage will amplify, and then there will be noise from the second stage. Because they are not associated, the noise powers can be summed. Calculating the impact of the noise performance of several stages on the total noise figure is frequently important as part of the RF design process. $$NF_{system}=NF_1+\frac{NF_2-1}{G_1}+\frac{NF_3-1}{G_1G_2}+...$$ Where: NF = Noise Figure for a System or for Stage 1, Stage 2, or Stage 3, as indicated by the Subscript G stands for the gain for the step indicated by the subscript. The first stage is the one that has the biggest impact on the noise figure for the entire RF circuit design, according to the noise figure formula for series or cascaded stages. ## Noise figure measurement There are several approaches to measuring the noise figure of an element used in a radio communications system. There are numerous test instruments available. In actuality, the available test tools might dictate the procedure that is utilized. 1. A particular noise figure meter could be offered in some labs for measuring noise levels. These test instruments are produced by numerous manufacturers, and they offer a quick, simple, and precise noise figure measurement. If one is available, noise figure analyzers are an excellent choice since they offer a very rapid and simple way to determine the noise figure of an object. In addition, they are accurate. The measurements only require that the noise figure meter be connected to the input and output of the circuit being tested. The test is started, the test instrument is set up, and the results are presented. A straightforward yet reliable test. 1. Noise figure measurements with a spectrum analyzer: Using a spectrum analyzer, noise figure measurements are very simple. Some of these test instruments are equipped with built-in processes that let you measure noise levels. There are two basic ways to measure noise figures, and both of them can make use of a spectrum analyzer. The gain method and the Y method are their names. ## Examples of noise figures Diverse pieces of equipment used for various radio communications systems will have quite different requirements. A common HF radio receiver used by professionals or amateurs may have a noise figure of 15 dB or higher and still perform quite effectively. 
Due to the high degree of atmospheric noise, a higher performance level is not required. Even at frequencies around 30 MHz, when the spectrum is on the verge of VHF, interference levels can still be high enough not to require very high levels of noise performance, since atmospheric noise at these frequencies can be very high.

The noise figure of a receiver utilized for narrow band applications at VHF or above, however, may be 3 or 4 dB. A noise figure of about 1 dB is typical for some narrow band RF amplifiers. It is noteworthy that even the best wide-band VHF/UHF receivers for professionals may only have a noise figure of 8 dB or less. These radio receivers could be utilized for radio reception, radio communications, or spectrum monitoring.

Applications like radio astronomy at frequencies extending into the UHF region of the spectrum and beyond require particularly high levels of performance.

## Conclusion

The noise figure is a very practical parameter to use because it can tell you how well certain system components perform in terms of noise. Through the use of noise figures, it is also possible to determine a system's overall performance by knowing the noise figures and gain levels of each component. As a result, it is simple to compute and optimize the system noise figure. Noise figures are frequently stated in the overall specification of radio communications equipment used for commercial or amateur radio applications.
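To make the cascade formula quoted earlier concrete, here is a small Python sketch of it (the stage values below are made-up examples, not numbers from the article):

```python
from math import log10

def db_to_linear(x_db):
    return 10 ** (x_db / 10)

def linear_to_db(x):
    return 10 * log10(x)

def cascaded_noise_figure_db(stages):
    """stages: list of (noise_figure_dB, gain_dB) tuples, in signal order."""
    total_factor = 0.0
    gain_product = 1.0       # product of the gains of all preceding stages
    for i, (nf_db, gain_db) in enumerate(stages):
        f = db_to_linear(nf_db)
        if i == 0:
            total_factor = f
        else:
            total_factor += (f - 1) / gain_product
        gain_product *= db_to_linear(gain_db)
    return linear_to_db(total_factor)

# e.g. a low-noise amplifier (1 dB NF, 20 dB gain) followed by a lossy mixer (8 dB NF, -6 dB gain)
print(round(cascaded_noise_figure_db([(1, 20), (8, -6)]), 2))   # about 1.18 dB - the first stage dominates
```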
2022-07-07 10:42:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6719205379486084, "perplexity": 783.5423251329428}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104690785.95/warc/CC-MAIN-20220707093848-20220707123848-00080.warc.gz"}
http://physics.stackexchange.com/questions/116806/calculating-pressure-in-cgs-units
# Calculating Pressure in CGS units calculate pressure in CGS units using following data : $$\mbox{Specific gravity of mercury},\gamma_{Hg}=13.6\\ \mbox{Density of water}, \rho=10^3{\rm kg/m^3}\\ \mbox{Gravity}, g=9.8{\rm m/s^2}\\ \mbox{height}, h=75{\rm cm}$$ I know, $P=h\rho g$ i have also converted all the data into CGS units $\gamma_{hg}=13.6,\rho=10^6{\rm g/m^3},\ g=980{\rm m/s^2},\\ h=75{\rm cm}$ After that I thought it is easy, i just have to substitute the value in the equation. But then i saw specific gravity of mercury (frankly i saw this for first time) i thought what is this. Now i don't know what to do. I think it it has some relation with $g$ as there is word specific gravity. I have no idea what to do. - Have you checked out the Wikipedia article on specific gravity? –  Kyle Kanos Jun 4 at 13:43 Note also that the "C" in CGS is for centimeters which it seems you've not changed over in your data. Also, why do you go from $h=75\,{\rm cm}\to h=7.5\,{\rm cm}$? –  Kyle Kanos Jun 4 at 14:20 Since this has been open for a while now, here's the solution. The thing with unit conversions is to remember to be systematic and to double-check for typos and power-of-ten slipups. Even after fifteen years of unit conversion problems I find stupid little mistakes when I write the things out in full like I have below; if I try to "save time" by not writing the conversions out all the way, I make the mistakes but I don't find them. The density of water is \begin{align} \rho_\mathrm{H_2O} &= 10^3\, \mathrm{\frac{kg}{m^3}} \mathrm{ \times \frac{10^3\,g}{1\,kg} \times\left(\frac{1\,m}{10^2\,cm}\right)^3 } \\&= 1\,\mathrm{\frac{g}{cm^3}}, \end{align} so the density of mercury is $\rho_\mathrm{Hg} = \gamma_\mathrm{Hg}\rho_\mathrm{H_2O} = 13.6\,\mathrm{g/cm^3}$. (This is one of the plusses of CGS units, that the density of water is unity and specific gravities and densities have the same values.) The acceleration due to gravity is \begin{align} g &= \mathrm{ 9.8\,\frac {m}{s^2} \times \frac{100\,cm}{1\,m} }\\&= 980\,\mathrm{\frac{cm}{s^2} } \end{align} So the pressure under a 75 cm column of mercury is \begin{align} P = \rho g h &= \mathrm{ 13.6\,\frac{g}{cm^3} \times 980\,\frac {cm}{s^2} \times 75\,cm }\\ &=\mathrm{ 0.9996\times10^6 \,\frac{dyne}{cm^2} \approx 1\,megabarye } \end{align} - The value of g in cgs unit is all wrong. It is 980 cm/s^2 The density of mercury is NOT 13.6 but it is 13.6 times that of water. - i have no idea about gravitational density of mercury, i have just written it as it was in the book. –  Freddy Jun 4 at 17:39
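As a quick cross-check of the accepted answer's arithmetic, the conversion can be scripted; the short Python sketch below (variable names chosen here purely for illustration) converts the given SI data to CGS and evaluates $P = \rho g h$:

```python
# Hydrostatic pressure P = rho * g * h for a 75 cm column of mercury, in CGS units.

gamma_hg = 13.6        # specific gravity of mercury (dimensionless)
rho_water = 1.0        # g/cm^3   (10^3 kg/m^3 expressed in CGS)
g = 980.0              # cm/s^2   (9.8 m/s^2 expressed in CGS)
h = 75.0               # cm

rho_hg = gamma_hg * rho_water   # density of mercury: 13.6 g/cm^3
P = rho_hg * g * h              # pressure in dyne/cm^2 (barye)

print(round(P, 1))     # 999600.0 dyne/cm^2, i.e. about 1 megabarye
```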
2014-12-26 18:41:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9981639981269836, "perplexity": 1240.9244551233849}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447549585.143/warc/CC-MAIN-20141224185909-00075-ip-10-231-17-201.ec2.internal.warc.gz"}
http://nag.com/numeric/fl/nagdoc_fl24/html/D01/d01intro.html
D01 Chapter Contents NAG Library Manual # NAG Library Chapter IntroductionD01 – Quadrature ## 1  Scope of the Chapter This chapter provides routines for the numerical evaluation of definite integrals in one or more dimensions and for evaluating weights and abscissae of integration rules. ## 2  Background to the Problems The routines in this chapter are designed to estimate: (a) the value of a one-dimensional definite integral of the form $∫abfxdx$ (1) where $f\left(x\right)$ is defined by you, either at a set of points $\left({x}_{\mathit{i}},f\left({x}_{\mathit{i}}\right)\right)$, for $\mathit{i}=1,2,\dots ,n$, where $a={x}_{1}<{x}_{2}<\cdots <{x}_{n}=b$, or in the form of a function; and the limits of integration $a,b$ may be finite or infinite. Some methods are specially designed for integrands of the form $fx=wxgx$ (2) which contain a factor $w\left(x\right)$, called the weight-function, of a specific form. These methods take full account of any peculiar behaviour attributable to the $w\left(x\right)$ factor. (b) the values of the one-dimensional indefinite integrals arising from (1) where the ranges of integration are interior to the interval $\left[a,b\right]$. (c) the value of a multidimensional definite integral of the form $∫Rnfx1,x2,…,xndxn⋯dx2dx1$ (3) where $f\left({x}_{1},{x}_{2},\dots ,{x}_{n}\right)$ is a function defined by you and ${R}_{n}$ is some region of $n$-dimensional space. The simplest form of ${R}_{n}$ is the $n$-rectangle defined by $ai≤xi≤bi, i=1,2,…,n$ (4) where ${a}_{i}$ and ${b}_{i}$ are constants. When ${a}_{i}$ and ${b}_{i}$ are functions of ${x}_{j}$ ($j), the region can easily be transformed to the rectangular form (see page 266 of Davis and Rabinowitz (1975)). Some of the methods described incorporate the transformation procedure. ### 2.1  One-dimensional Integrals To estimate the value of a one-dimensional integral, a quadrature rule uses an approximation in the form of a weighted sum of integrand values, i.e., $∫abfxdx≃∑i=1Nwifxi.$ (5) The points ${x}_{i}$ within the interval $\left[a,b\right]$ are known as the abscissae, and the ${w}_{i}$ are known as the weights. More generally, if the integrand has the form (2), the corresponding formula is $∫abwxgxdx≃∑i=1Nwigxi.$ (6) If the integrand is known only at a fixed set of points, these points must be used as the abscissae, and the weighted sum is calculated using finite difference methods. However, if the functional form of the integrand is known, so that its value at any abscissa is easily obtained, then a wide variety of quadrature rules are available, each characterised by its choice of abscissae and the corresponding weights. The appropriate rule to use will depend on the interval $\left[a,b\right]$ – whether finite or otherwise – and on the form of any $w\left(x\right)$ factor in the integrand. A suitable value of $N$ depends on the general behaviour of $f\left(x\right)$; or of $g\left(x\right)$, if there is a $w\left(x\right)$ factor present. Among possible rules, we mention particularly the Gaussian formulae, which employ a distribution of abscissae which is optimal for $f\left(x\right)$ or $g\left(x\right)$ of polynomial form. The choice of basic rules constitutes one of the principles on which methods for one-dimensional integrals may be classified. The other major basis of classification is the implementation strategy, of which some types are now presented. (a) Single rule evaluation procedures A fixed number of abscissae, $N$, is used. 
This number and the particular rule chosen uniquely determine the weights and abscissae. No estimate is made of the accuracy of the result. (b) Automatic procedures The number of abscissae, $N$, within $\left[a,b\right]$ is gradually increased until consistency is achieved to within a level of accuracy (absolute or relative) you requested. There are essentially two ways of doing this; hybrid forms of these two methods are also possible: (i) whole interval procedures (non-adaptive) A series of rules using increasing values of $N$ are successively applied over the whole interval $\left[a,b\right]$. It is clearly more economical if abscissae already used for a lower value of $N$ can be used again as part of a higher-order formula. This principle is known as optimal extension. There is no overlap between the abscissae used in Gaussian formulae of different orders. However, the Kronrod formulae are designed to give an optimal $\left(2N+1\right)$-point formula by adding $\left(N+1\right)$ points to an $N$-point Gauss formula. Further extensions have been developed by Patterson. (ii) adaptive procedures The interval $\left[a,b\right]$ is repeatedly divided into a number of sub-intervals, and integration rules are applied separately to each sub-interval. Typically, the subdivision process will be carried further in the neighbourhood of a sharp peak in the integrand than where the curve is smooth. Thus, the distribution of abscissae is adapted to the shape of the integrand. Subdivision raises the problem of what constitutes an acceptable accuracy in each sub-interval. The usual global acceptability criterion demands that the sum of the absolute values of the error estimates in the sub-intervals should meet the conditions required of the error over the whole interval. Automatic extrapolation over several levels of subdivision may eliminate the effects of some types of singularities. An ideal general-purpose method would be an automatic method which could be used for a wide variety of integrands, was efficient (i.e., required the use of as few abscissae as possible), and was reliable (i.e., always gave results to within the requested accuracy). Complete reliability is unobtainable, and generally higher reliability is obtained at the expense of efficiency, and vice versa. It must therefore be emphasized that the automatic routines in this chapter cannot be assumed to be $100%$ reliable. In general, however, the reliability is very high. ### 2.2  Multidimensional Integrals A distinction must be made between cases of moderately low dimensionality (say, up to $4$ or $5$ dimensions), and those of higher dimensionality. Where the number of dimensions is limited, a one-dimensional method may be applied to each dimension, according to some suitable strategy, and high accuracy may be obtainable (using product rules). However, the number of integrand evaluations rises very rapidly with the number of dimensions, so that the accuracy obtainable with an acceptable amount of computational labour is limited; for example a product of $3$-point rules in $20$ dimensions would require more than ${10}^{9}$ integrand evaluations. Special techniques such as the Monte–Carlo methods can be used to deal with high dimensions. 
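As a concrete illustration of that last point (this is plain Python/NumPy, not a NAG routine, and the integrand is an arbitrary smooth test function chosen for the example), a crude Monte–Carlo estimate of a 20-dimensional integral over the unit hypercube costs only as many integrand evaluations as there are sample points, and supplies a statistical standard error:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Arbitrary smooth test integrand; x has one row per sample point.
    return np.cos(x.sum(axis=1) / x.shape[1])

n_dim, n_samples = 20, 100_000
points = rng.random((n_samples, n_dim))   # uniform samples in the unit hypercube
values = f(points)

estimate = values.mean()                                 # Monte-Carlo estimate of the integral
std_error = values.std(ddof=1) / np.sqrt(n_samples)      # statistical standard error

print(f"integral ~ {estimate:.5f} +/- {std_error:.5f}")
```

The error falls only like 1/sqrt(N), which is why such estimates are of limited accuracy, but the cost does not grow with the number of dimensions in the way a product rule's does.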
(a) Products of one-dimensional rules Using a two-dimensional integral as an example, we have $∫a1b1∫a2b2fx,ydy dx≃∑i=1Nwi ∫a2b2fxi,ydy$ (7) $∫a1b1∫a2b2fx,ydy dx≃∑i=1N∑j=1Nwivjfxi,yj$ (8) where $\left({w}_{i},{x}_{i}\right)$ and $\left({v}_{i},{y}_{i}\right)$ are the weights and abscissae of the rules used in the respective dimensions. A different one-dimensional rule may be used for each dimension, as appropriate to the range and any weight function present, and a different strategy may be used, as appropriate to the integrand behaviour as a function of each independent variable. For a rule-evaluation strategy in all dimensions, the formula (8) is applied in a straightforward manner. For automatic strategies (i.e., attempting to attain a requested accuracy), there is a problem in deciding what accuracy must be requested in the inner integral(s). Reference to formula (7) shows that the presence of a limited but random error in the $y$-integration for different values of ${x}_{i}$ can produce a ‘jagged’ function of $x$, which may be difficult to integrate to the desired accuracy and for this reason products of automatic one-dimensional routines should be used with caution (see Lyness (1983)). (b) Monte–Carlo methods These are based on estimating the mean value of the integrand sampled at points chosen from an appropriate statistical distribution function. Usually a variance reducing procedure is incorporated to combat the fundamentally slow rate of convergence of the rudimentary form of the technique. These methods can be effective by comparison with alternative methods when the integrand contains singularities or is erratic in some way, but they are of quite limited accuracy. (c) Number theoretic methods These are based on the work of Korobov and Conroy and operate by exploiting implicitly the properties of the Fourier expansion of the integrand. Special rules, constructed from so-called optimal coefficients, give a particularly uniform distribution of the points throughout $n$-dimensional space and from their number theoretic properties minimize the error on a prescribed class of integrals. The method can be combined with the Monte–Carlo procedure. (d) Sag–Szekeres method By transformation this method seeks to induce properties into the integrand which make it accurately integrable by the trapezoidal rule. The transformation also allows effective control over the number of integrand evaluations. (e) Automatic adaptive procedures An automatic adaptive strategy in several dimensions normally involves division of the region into subregions, concentrating the divisions in those parts of the region where the integrand is worst behaved. It is difficult to arrange with any generality for variable limits in the inner integral(s). For this reason, some methods use a region where all the limits are constants; this is called a hyper-rectangle. Integrals over regions defined by variable or infinite limits may be handled by transformation to a hyper-rectangle. Integrals over regions so irregular that such a transformation is not feasible may be handled by surrounding the region by an appropriate hyper-rectangle and defining the integrand to be zero outside the desired region. Such a technique should always be followed by a Monte–Carlo method for integration. The method used locally in each subregion produced by the adaptive subdivision process is usually one of three types: Monte–Carlo, number theoretic or deterministic. 
Deterministic methods are usually the most rapidly convergent but are often expensive to use for high dimensionality and not as robust as the other techniques. ## 3  Recommendations on Choice and Use of Available Routines This section is divided into five subsections. The first subsection illustrates the difference between direct and reverse communication routines. The second subsection highlights the different levels of vectorization provided by different interfaces. Sections 3.3, 3.4 and 3.5 consider in turn routines for: one-dimensional integrals over a finite interval, and over a semi-infinite or an infinite interval; and multidimensional integrals. Within each sub-section, routines are classified by the type of method, which ranges from simple rule evaluation to automatic adaptive algorithms. The recommendations apply particularly when the primary objective is simply to compute the value of one or more integrals, and in these cases the automatic adaptive routines are generally the most convenient and reliable, although also the most expensive in computing time. Note however that in some circumstances it may be counter-productive to use an automatic routine. If the results of the quadrature are to be used in turn as input to a further computation (e.g., an ‘outer’ quadrature or an optimization problem), then this further computation may be adversely affected by the ‘jagged performance profile’ of an automatic routine; a simple rule-evaluation routine may provide much better overall performance. For further guidance, the article by Lyness (1983) is recommended. ### 3.1  Direct and Reverse Communication Routines in this chapter which evaluate an intergal value may be classified as either direct communication or reverse communication. (a) Direct communication Direct communication routines require a user-supplied (sub)routine to be provided as an actual argument to the NAG Library routine. These routines are usually more straightforward to use than a reverse communication equivalent, although they require the user-supplied (sub)routine to have a specific interface. (b) Reverse communication Instead of calling a user-supplied (sub)routine to evaluate a function, reverse communication routines return repeatedly, requesting different operations to be performed by the calling program. Reverse communication routines will typically be more complicated to use than direct communication equivalents. However, they provide great flexibility for the evaluation of the integrands. In particular, as the function evaluations are performed by the calling program, any information required for their evaluation that is not generated by the library routine is immediately available. Currently in this chapter the only routine explicitly using reverse communication is D01RAF. ### 3.2  Choice of Interface This section concerns the design of the interface for the provision of abscissae, and the subsequent collection of calculated information, typically integrand evaluations. Vectorized interfaces typically allow for more efficient operation. (a) Single abscissa interfaces The algorithm will provide a single abscissa at which information is required. These are typically the most simple to use, although they may be significantly less efficient than a vectorized equivalent. Most of the algorithms in this chapter are of this type. Examples of this include D01AJF and D01FBF. (b) Vectorized abscissae interfaces The algorithm will return a set of abscissae, at all of which information is required. 
While these are more complicated to use, they are typically more efficient than a non-vectorized equivalent. They reduce the overhead of function calls, allow the avoidance of repetition of computations common to each of the integrand evaluations, and offer greater scope for vectorization and parallelization of your code. Examples include D01RGF, D01UAF, and the routines D01ATF and D01AUF, which are vectorized equivalents of D01AJF and D01AKF. (c) Multiple integral interfaces These are routines which allow for multiple integrals to be estimated simultaneously. As with (b) above, these are more complicated to use than single integral routines, however they can provide higher efficiency, particularly if several integrals require the same subcalculations at the same abscissae. They are most efficient if integrals which are supplied together are expected to have similar behaviour over the domain, particularly when the algorithm is adaptive. Examples include D01EAF and D01RAF. ### 3.3  One-dimensional Integrals over a Finite Interval (a) Integrand defined at a set of points If $f\left(x\right)$ is defined numerically at four or more points, then the Gill–Miller finite difference method (D01GAF) should be used. The interval of integration is taken to coincide with the range of $x$ values of the points supplied. It is in the nature of this problem that any routine may be unreliable. In order to check results independently and so as to provide an alternative technique you may fit the integrand by Chebyshev series using E02ADF and then use routine E02AJF to evaluate its integral (which need not be restricted to the range of the integration points, as is the case for D01GAF). A further alternative is to fit a cubic spline to the data using E02BAF and then to evaluate its integral using E02BDF. (b) Integrand defined as a function If the functional form of $f\left(x\right)$ is known, then one of the following approaches should be taken. They are arranged in the order from most specific to most general, hence the first applicable procedure in the list will be the most efficient. However, if you do not wish to make any assumptions about the integrand, the most reliable routines to use will be D01ATF (or D01AJF), D01AUF (or D01AKF), D01ALF, D01RGF or D01RAF, although these will in general be less efficient for simple integrals. (i) Rule-evaluation routines If $f\left(x\right)$ is known to be sufficiently well behaved (more precisely, can be closely approximated by a polynomial of moderate degree), a Gaussian routine with a suitable number of abscissae may be used. D01BCF or D01TBF with D01FBF may be used if it is required to examine the weights and abscissae. D01TBF is faster and more accurate, whereas D01BCF is more general. D01UAF uses the same quadrature rules as D01TBF, and may be used if you do not explicitly require the weights and abscissae. If $f\left(x\right)$ is well behaved, apart from a weight-function of the form $x-a+b2 c or b-xcx-ad,$ D01BCF with D01FBF may be used. (ii) Automatic whole-interval routines If $f\left(x\right)$ is reasonably smooth, and the required accuracy is not too high, the automatic whole-interval routines D01ARF or D01BDF may be used. D01ARF incorporates high-order extensions of the Kronrod rule and is the only routine which can also be used for indefinite integration. 
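To make the rule-evaluation idea in (i) and (ii) concrete, the following sketch (plain Python/NumPy, shown only as an illustration and not a NAG routine) applies an N-point Gauss–Legendre rule as the weighted sum of Section 2.1, mapped from the reference interval [-1,1] to a general finite interval [a,b]:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def gauss_legendre(f, a, b, n):
    """n-point Gauss-Legendre approximation to the integral of f over [a, b]."""
    x, w = leggauss(n)                        # abscissae and weights on [-1, 1]
    t = 0.5 * (b - a) * x + 0.5 * (b + a)     # map the abscissae to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(t))   # weighted sum, scaled by the half-length

# A smooth (polynomial-like) integrand needs only a handful of points.
print(gauss_legendre(np.exp, 0.0, 1.0, 8))    # compare with exp(1) - 1 = 1.718281828...
```

No error estimate is produced, which is exactly the trade-off described for single rule evaluation procedures in Section 2.1(a); the library routines mentioned above additionally cover other weight functions and, in Section 3.4, semi-infinite and infinite intervals.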
(iii) Automatic adaptive routines

Firstly, several routines are available for integrands of the form $w(x)g(x)$ where $g(x)$ is a 'smooth' function (i.e., has no singularities, sharp peaks or violent oscillations in the interval of integration) and $w(x)$ is a weight function of one of the following forms:

1. if $w(x)=(b-x)^{\alpha}(x-a)^{\beta}(\log(b-x))^{k}(\log(x-a))^{l}$, where $k,l=0$ or $1$, $\alpha,\beta>-1$: use D01APF;
2. if $w(x)=\frac{1}{x-c}$: use D01AQF (this integral is called the Hilbert transform of $g$);
3. if $w(x)=\cos(\omega x)$ or $\sin(\omega x)$: use D01ANF (this routine can also handle certain types of singularities in $g(x)$).

Secondly, there are multiple routines for general $f(x)$, using different strategies.

D01ATF (and D01AJF), and D01AUF (and D01AKF) use the strategy of Piessens et al. (1983), using repeated bisection of the interval, and in the first case the $\epsilon$-algorithm (Wynn (1956)), to improve the integral estimate. This can cope with singularities away from the end points, provided singular points do not occur as abscissae. D01AUF tends to perform better than D01ATF on more oscillatory integrals.

D01ALF uses the same subdivision strategy as D01ATF over a set of initial interval segments determined by supplied break-points. It is hence suitable for integrals with discontinuities (including switches in definition) or sharp peaks occurring at known points. Such integrals may also be approximated using other routines which do not allow break-points, although such integrals should then be evaluated over each of the sub-intervals separately.

D01RAF again uses the strategy of Piessens et al. (1983), and provides the functionality of D01ALF, D01ATF and D01AUF in a reverse communication framework. It also supports multiple integrals and uses a vectorized interface for the abscissae. Hence it is likely to be more efficient if several similar integrals are required to be evaluated over the same domain. Furthermore, its behaviour can be tailored through the use of optional parameters.

D01AHF uses the strategy of Patterson (1968) and the $\epsilon$-algorithm to adaptively evaluate the integral in question. It tends to be more efficient than the bisection-based algorithms, although these tend to be more robust when singularities occur away from the end points.

D01RGF uses another adaptive scheme due to Gonnet (2010). This attempts to match the quadrature rule to the underlying integrand as well as subdividing the domain. Further, it can explicitly deal with singular points at abscissae, should NaNs or ∞ be returned by the user-supplied (sub)routine, provided the generation of these does not cause the program to halt (see Chapter X07).

### 3.4  One-dimensional Integrals over a Semi-infinite or Infinite Interval

(a) Integrand defined at a set of points

If $f(x)$ is defined numerically at four or more points, and the portion of the integral lying outside the range of the points supplied may be neglected, then the Gill–Miller finite difference method, D01GAF, should be used.

(b) Integrand defined as a function

(i) Rule evaluation routines

If $f(x)$ behaves approximately like a polynomial in $x$, apart from a weight function of the form: 1.
${e}^{-\beta x},\beta >0$ (semi-infinite interval, lower limit finite); or 2. ${e}^{-\beta x},\beta <0$ (semi-infinite interval, upper limit finite); or 3. ${e}^{-\beta {\left(x-\alpha \right)}^{2}},\beta >0$ (infinite interval), or if $f\left(x\right)$ behaves approximately like a polynomial in ${\left(x+b\right)}^{-1}$ (semi-infinite range), then the Gaussian routines may be used. D01UAF may be used if it is not required to examine the weights and abscissae. D01BCF or D01TBF with D01FBF may be used if it is required to examine the weights and abscissae. D01TBF is faster and more accurate, whereas D01BCF is more general. (ii) Automatic adaptive routines D01AMF may be used, except for integrands which decay slowly towards an infinite end point, and oscillate in sign over the entire range. For this class, it may be possible to calculate the integral by integrating between the zeros and invoking some extrapolation process (see C06BAF). D01ASF may be used for integrals involving weight functions of the form $\mathrm{cos}\left(\omega x\right)$ and $\mathrm{sin}\left(\omega x\right)$ over a semi-infinite interval (lower limit finite). The following alternative procedures are mentioned for completeness, though their use will rarely be necessary. 1. If the integrand decays rapidly towards an infinite end point, a finite cut-off may be chosen, and the finite range methods applied. 2. If the only irregularities occur in the finite part (apart from a singularity at the finite limit, with which D01AMF can cope), the range may be divided, with D01AMF used on the infinite part. 3. A transformation to finite range may be employed, e.g., $x=1-tt or x=-loge⁡t$ will transform $\left(0,\infty \right)$ to $\left(1,0\right)$ while for infinite ranges we have $∫-∞∞fxdx=∫0∞fx+f-xdx.$ If the integrand behaves badly on $\left(-\infty ,0\right)$ and well on $\left(0,\infty \right)$ or vice versa it is better to compute it as $\underset{-\infty }{\overset{0}{\int }}f\left(x\right)dx+\underset{0}{\overset{\infty }{\int }}f\left(x\right)dx$. This saves computing unnecessary function values in the semi-infinite range where the function is well behaved. ### 3.5  Multidimensional Integrals A number of techniques are available in this area and the choice depends to a large extent on the dimension and the required accuracy. It can be advantageous to use more than one technique as a confirmation of accuracy particularly for high-dimensional integrations. Two of the routines incorporate a transformation procedure, using a user-supplied routine parameter REGION, which allows general product regions to be easily dealt with in terms of conversion to the standard $n$-cube region. (a) Products of one-dimensional rules (suitable for up to about $5$ dimensions) If $f\left({x}_{1},{x}_{2},\dots ,{x}_{n}\right)$ is known to be a sufficiently well behaved function of each variable ${x}_{i}$, apart possibly from weight functions of the types provided, a product of Gaussian rules may be used. These are provided by D01BCF or D01TBF with D01FBF. Rules for finite, semi-infinite and infinite ranges are included. For two-dimensional integrals only, unless the integrand is very badly behaved, the automatic whole-interval product procedure of D01DAF may be used. The limits of the inner integral may be user-specified functions of the outer variable. 
Infinite limits may be handled by transformation (see Section 3.4); end point singularities introduced by transformation should not be troublesome, as the integrand value will not be required on the boundary of the region. If none of these routines proves suitable and convenient, the one-dimensional routines may be used recursively. For example, the two-dimensional integral $I=∫a1b1∫a2b2fx,ydy dx$ may be expressed as $I=∫a1b1 Fxdx, where Fx=∫a2b2 fx,ydy.$ The user-supplied code to evaluate $F\left(x\right)$ will call the integration routine for the $y$-integration, which will call more user-supplied code for $f\left(x,y\right)$ as a function of $y$ ($x$ being effectively a constant). From Mark 24 onwards, all direct communication routines are defined as recursive. As such, you may use any routine, including the same routine, for each dimension. Note however, in previous releases, direct communication routines were not defined as recursive, and thus a different library integration routine must be used for each dimension if you are using an older product. Apart from this restriction, the following combinations were not permitted: D01AJF and D01ALF, D01ANF and D01APF, D01APF and D01AQF, D01ANF and D01AQF, D01ANF and D01ASF, D01AMF and D01ASF, D01ATF and D01AUF. Otherwise the full range of one-dimensional routines are available, for finite/infinite intervals, constant/variable limits, rule evaluation/automatic strategies etc. The reverse communication routine D01RAF may be used by itself in a pseudo-recursive manner, in that it may be called to evaluate an inner integral for the integrand value of an outer integral also being calculated by D01RAF. (b) Sag–Szekeres method Two routines are based on this method. D01FDF is particularly suitable for integrals of very large dimension although the accuracy is generally not high. It allows integration over either the general product region (with built-in transformation to the $n$-cube) or the $n$-sphere. Although no error estimate is provided, two adjustable parameters may be varied for checking purposes or may be used to tune the algorithm to particular integrals. D01JAF is also based on the Sag–Szekeres method and integrates over the $n$-sphere. It uses improved transformations which may be varied according to the behaviour of the integrand. Although it can yield very accurate results it can only practically be employed for dimensions not exceeding $4$. (c) Number Theoretic method Two routines are based on this method. D01GCF carries out multiple integration using the Korobov–Conroy method over a product region with built-in transformation to the $n$-cube. A stochastic modification of this method is incorporated hybridising the technique with the Monte–Carlo procedure. An error estimate is provided in terms of the statistical standard error. The routine includes a number of optimal coefficient rules for up to $20$ dimensions; others can be computed using D01GYF and D01GZF. Like the Sag–Szekeres method it is suitable for large dimensional integrals although the accuracy is not high. D01GDF uses the same method as D01GCF, but has a vectorized interface which can result in faster execution, especially on vector-processing machines. You are required to provide two subroutines, the first to return an array of values of the integrand at each of an array of points, and the second to evaluate the limits of integration at each of an array of points. 
This reduces the overhead of function calls, avoids repetitions of computations common to each of the evaluations of the integral and limits of integration, and offers greater scope for vectorization of your code. (d) A combinatorial extrapolation method D01PAF computes a sequence of approximations and an error estimate to the integral of a function over a multidimensional simplex using a combinatorial method with extrapolation. (e) Automatic routines (D01FCF and D01GBF) Both routines are for integrals of the form $∫a1b1 ∫a2b2 ⋯ ∫anbn fx1,x2,…,xndxndxn-1⋯dx1.$ D01GBF is an adaptive Monte–Carlo routine. This routine is usually slow and not recommended for high-accuracy work. It is a robust routine that can often be used for low-accuracy results with highly irregular integrands or when $n$ is large. D01FCF is an adaptive deterministic routine. Convergence is fast for well behaved integrands. Highly accurate results can often be obtained for $n$ between $2$ and $5$, using significantly fewer integrand evaluations than would be required by D01GBF. The routine will usually work when the integrand is mildly singular and for $n\le 10$ should be used before D01GBF. If it is known in advance that the integrand is highly irregular, it is best to compare results from at least two different routines. There are many problems for which one or both of the routines will require large amounts of computing time to obtain even moderately accurate results. The amount of computing time is controlled by the number of integrand evaluations you have allowed, and you should set this parameter carefully, with reference to the time available and the accuracy desired. D01EAF extends the technique of D01FCF to integrate adaptively more than one integrand, that is to calculate the set of integrals $∫a1b1 ∫a2b2 ⋯ ∫anbn f1,f2,…,fm dxndxn-1⋯dx1$ for a set of similar integrands ${f}_{1},{f}_{2},\dots ,{f}_{m}$ where ${f}_{i}={f}_{i}\left({x}_{1},{x}_{2},\dots ,{x}_{n}\right)$. ## 4  Decision Trees ### Tree 1: One-dimensional integrals over a finite interval Is the functional form of the integrand known? _yes Is indefinite integration required? _yes D01ARF | no| | Do you require reverse communication? _yes D01RAF | no| | Are you concerned with efficiency for simple integrals? _yes Is the integrand smooth (polynomial-like) apart from weight function ${\left|x-\left(a+b\right)/2\right|}^{c}$ or ${\left(b-x\right)}^{c}{\left(x-a\right)}^{d}$? _yes D01ARF, D01UAF, D01TBF or D01BCF and D01FBF, or D01GCF | | no| | | Is the integrand reasonably smooth and the required accuracy not too great? _yes D01BDF and D01UAF | | no| | | Are multiple integrands to be integrated simultaneously? _yes D01RAF | | no| | | Has the integrand discontinuities, sharp peaks or singularities at known points other than the end points? _yes Split the range and begin again; or use D01AJF, D01ALF or D01RGF | | no| | | Is the integrand free of singularities, sharp peaks and violent oscillations apart from weight function ${\left(b-x\right)}^{\alpha }{\left(x-a\right)}^{\beta }\phantom{\rule{0ex}{0ex}}{\left(\mathrm{log}\left(b-x\right)\right)}^{k}{\left(\mathrm{log}\left(x-a\right)\right)}^{l}$? _yes D01APF | | no| | | Is the integrand free of singularities, sharp peaks and violent oscillations apart from weight function ${\left(x-c\right)}^{-1}$? _yes D01AQF | | no| | | Is the integrand free of violent oscillations apart from weight function $\mathrm{cos}\left(\omega x\right)$ or $\mathrm{sin}\left(\omega x\right)$? 
_yes D01ANF | | no| | | Is the integrand free of singularities? _yes D01AJF, D01AKF or D01AUF | | no| | | Is the integrand free of discontinuities and of singularities except possibly at the end points? _yes D01AHF | | no| | | D01AJF, D01ATF, D01RAF or D01RGF | no| | D01AHF, D01AJF, D01ATF, D01RAF or D01RGF no| D01GAF Note: D01ATF, D01AUF, D01RAF and D01RGF are likely to be more efficient due to their vectorized interfaces than D01AJF and D01AKF, which use a more conventional user-interface, consistent with other routines in the chapter. ### Tree 2: One-dimensional integrals over a semi-infinite or infinite interval Is the functional form of the integrand known? _yes Are you concerned with efficiency for simple integrands? _yes Is the integrand smooth (polynomial-like) with no exceptions? _yes D01UAF, D01BDF, D01ARF with transformation. See Section 3.4 (b)(ii). | | no| | | Is the integrand smooth (polynomial-like) apart from weight function ${e}^{-\beta \left(x\right)}$ (semi-infinite range) or ${e}^{{-\beta \left(x-a\right)}^{2}}$ (infinite range) or is the integrand polynomial-like in $\frac{1}{x+b}$? (semi-infinite range)? _yes D01UAF, D01TBF and D01FBF or D01BCF and D01FBF | | no| | | Has integrand discontinuities, sharp peaks or singularities at known points other than a finite limit? _yes Split range; begin again using finite or infinite range trees | | no| | | Does the integrand oscillate over the entire range? _yes Does the integrand decay rapidly towards an infinite limit? _yes Use D01AMF; or set cutoff and use finite range tree | | | no| | | | Is the integrand free of violent oscillations apart from weight function $\mathrm{cos}\left(\omega x\right)$ or $\mathrm{sin}\left(\omega x\right)$ (semi-infinite range)? _yes D01ASF | | | no| | | | Use finite-range integration between the zeros and extrapolate (see C06BAF) | | no| | | D01AMF | no| | D01AMF no| D01GAF (integrates over the range of the points supplied) ### Tree 3: Multidimensional integrals Is dimension $\text{}=2$ and product region? _yes D01DAF no| Is dimension $\text{}\le 4$ _yes Is region an $n$-sphere? _yes D01FBF with user transformation or D01JAF | no| | Is region a Simplex? _yes D01FBF with user transformation or D01PAF | no| | Is the integrand smooth (polynomial-like) in each dimension apart from weight function? _yes D01TBF and D01FBF or D01BCF and D01FBF | no| | Is integrand free of extremely bad behaviour? _yes D01FCF, D01FDF or D01GCF | no| | Is bad behaviour on the boundary? _yes D01FCF or D01FDF | no| | Compare results from at least two of D01FCF, D01FDF, D01GBF and D01GCF and one-dimensional recursive application no| Is region an $n$-sphere? _yes D01FDF no| Is region a Simplex? _yes D01PAF no| Is high accuracy required? _yes D01FDF with parameter tuning no| Is dimension high? _yes D01FDF, D01GCF or D01GDF no| D01FCF Note: in the case where there are many integrals to be evaluated D01EAF should be preferred to D01FCF. D01GDF is likely to be more efficient than D01GCF, which uses a more conventional user-interface, consistent with other routines in the chapter. 
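The recursive use of one-dimensional routines described in Section 3.5(a) can be illustrated with a short sketch. Here SciPy's QUADPACK-based quad is used as a stand-in for an adaptive one-dimensional library routine (this is only an illustration, not NAG code, and the integrand is an arbitrary example); the outer integration calls a function F whose evaluation performs the inner y-integration:

```python
from scipy.integrate import quad

def f(x, y):
    # Example integrand over the rectangle [0, 1] x [0, 2].
    return x * y**2

def F(x):
    # Inner integral: F(x) = integral of f(x, y) dy over [0, 2], with x held fixed.
    value, _err = quad(lambda y: f(x, y), 0.0, 2.0)
    return value

# Outer integral: I = integral of F(x) dx over [0, 1]; the exact value is 4/3.
I, err = quad(F, 0.0, 1.0)
print(I)
```

As noted in Section 2.2(a), the limited accuracy of the inner integration makes F(x) slightly 'jagged', so requesting a much tighter tolerance on the outer integral than on the inner one can be counter-productive.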
## 5  Functionality Index

Korobov optimal coefficients for use in D01GCF and D01GDF:
- when number of points is a product of 2 primes: D01GZF
- when number of points is prime: D01GYF

over a general product region:
- Korobov–Conroy number-theoretic method: D01GCF
- Sag–Szekeres method (also over n-sphere): D01FDF
- variant of D01GCF especially efficient on vector machines: D01GDF

over a hyper-rectangle:
- multiple integrands: D01EAF
- Gaussian quadrature rule-evaluation: D01FBF
- Monte–Carlo method: D01GBF

over an n-simplex: D01PAF

over an n-sphere (n ≤ 4), allowing for badly behaved integrands: D01JAF

adaptive integration of a function over a finite interval:
- strategy due to Gonnet, suitable for badly behaved integrals, vectorized interface: D01RGF
- strategy due to Patterson, suitable for well-behaved integrands, except possibly at end-points: D01AHF
- strategy due to Piessens and de Doncker:
  - allowing for singularities at user-specified break-points: D01ALF
  - suitable for badly behaved integrands: single abscissa interface D01AJF, vectorized interface D01ATF
  - suitable for highly oscillatory integrals: single abscissa interface D01AKF, vectorized interface D01AUF
  - weight function 1/(x − c) Cauchy principal value (Hilbert transform): D01AQF
  - weight function cos(ωx) or sin(ωx): D01ANF
  - weight function with end-point singularities of algebraico-logarithmic type: D01APF

adaptive integration of a function over an infinite interval or semi-infinite interval:
- no weight function: D01AMF
- weight function cos(ωx) or sin(ωx): D01ASF

integration of a function defined by data values only, Gill–Miller method: D01GAF

non-adaptive integration over a finite, semi-infinite or infinite interval, using pre-computed weights and abscissae: D01UAF

non-adaptive integration over a finite interval: D01BDF

non-adaptive integration over a finite interval, with provision for indefinite integrals also: D01ARF

reverse communication, adaptive integration over a finite interval, multiple integrands, efficient on vector machines: D01RAF

Service routines:
- array size query for D01RAF: D01RCF
- general option getting: D01ZLF
- general option setting and initialization: D01ZKF
- monitoring information for D01RAF: D01RBF

Two-dimensional quadrature over a finite region: D01DAF

Weights and abscissae for Gaussian quadrature rules:
- more general choice of rule, calculating the weights and abscissae: D01BCF
- restricted choice of rule, using pre-computed weights and abscissae: D01TBF

## 6  Auxiliary Routines Associated with Library Routine Parameters

D01BAW (nagf_quad_1d_gauss_hermite): see the description of the argument D01XXX in D01BAF and D01BBF.
D01BAX (nagf_quad_1d_gauss_laguerre): see the description of the argument D01XXX in D01BAF and D01BBF.
D01BAY (nagf_quad_1d_gauss_rational): see the description of the argument D01XXX in D01BAF and D01BBF.
D01BAZ (nagf_quad_1d_gauss_legendre): see the description of the argument D01XXX in D01BAF and D01BBF.
D01FDV (nagf_quad_md_sphere_dummy_region): see the description of the argument REGION in D01FDF.
D01RBM (nagf_quad_d01rb_dummy): see the description of the argument MONIT in D01RBF.

## 7  Routines Withdrawn or Scheduled for Withdrawal

The following lists all those routines that have been withdrawn since Mark 17 of the Library or are scheduled for withdrawal at one of the next two marks.
| Withdrawn Routine | Mark of Withdrawal | Replacement Routine(s) |
|---|---|---|
| D01BAF | 26 | D01UAF |
| D01BBF | 26 | D01TBF |

## 8  References

Davis P J and Rabinowitz P (1975) Methods of Numerical Integration Academic Press
Gonnet P (2010) Increasing the reliability of adaptive quadrature using explicit interpolants ACM Trans. Math. Software 37 26
Lyness J N (1983) When not to use an automatic quadrature routine SIAM Rev. 25 63–87
Patterson T N L (1968) The optimum addition of points to quadrature formulae Math. Comput. 22 847–856
Piessens R, de Doncker–Kapenga E, Überhuber C and Kahaner D (1983) QUADPACK, A Subroutine Package for Automatic Integration Springer–Verlag
Sobol I M (1974) The Monte Carlo Method The University of Chicago Press
Stroud A H (1971) Approximate Calculation of Multiple Integrals Prentice–Hall
Wynn P (1956) On a device for computing the $e_m(S_n)$ transformation Math. Tables Aids Comput. 10 91–96
2014-09-30 18:52:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 140, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9851747751235962, "perplexity": 5953.838913229084}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663060.18/warc/CC-MAIN-20140930004103-00277-ip-10-234-18-248.ec2.internal.warc.gz"}
https://measurementsandunits.appspot.com/metricSystem.html
For in GOD we live, and move, and have our being. - Acts 17:28 The Joy of a Teacher is the Success of his Students. - Samuel Chukwuemeka

# Solved Examples - The Metric System of Measurements and Units

For ACT Students
The ACT is a timed exam...$60$ questions for $60$ minutes
This implies that you have to solve each question in one minute.
Some questions will typically take less than a minute to solve.
Some questions will typically take more than a minute to solve.
The goal is to maximize your time. You use the time saved on those questions you solved in less than a minute, to solve the questions that will take more than a minute.
So, you should try to solve each question correctly and on time.
So, it is not just solving a question correctly, but solving it correctly on time.
Please ensure you attempt all ACT questions. There is no "negative" penalty for any wrong answer.

For JAMB and CMAT Students
Calculators are not allowed. So, the questions are solved in a way that does not require a calculator.

Solve all questions.
Use at least two methods as applicable.
State the measurement.
Show all work.

NOTE: Unless specified otherwise:
(1.) Use only the tables provided for you.
(2.) Please do not approximate intermediate calculations.

Metric to Metric Conversions

| Prefix | Symbol | Multiplication Factor |
|---|---|---|
| yocto | y | $10^{-24}$ |
| zepto | z | $10^{-21}$ |
| atto | a | $10^{-18}$ |
| femto | f | $10^{-15}$ |
| pico | p | $10^{-12}$ |
| nano | n | $10^{-9}$ |
| micro | $\mu$ | $10^{-6}$ |
| milli | m | $10^{-3}$ |
| centi | c | $10^{-2}$ |
| deci | d | $10^{-1}$ |
| deka | da | $10^1$ |
| hecto | h | $10^2$ |
| kilo | K | $10^3$ |
| mega | M | $10^6$ |
| giga | G | $10^9$ |
| tera | T | $10^{12}$ |
| peta | P | $10^{15}$ |
| exa | E | $10^{18}$ |
| zetta | Z | $10^{21}$ |
| yotta | Y | $10^{24}$ |

(1.) Convert $16.27$ meters to kilometers

Measurement is Length

$\underline{First\:\:Method:\:\:Unity\:\:Fraction\:\:Method} \\[3ex] 16.27\:m\:\:to\:\:km \\[3ex] = 16.27\:m * \dfrac{.....km}{.....m} \\[5ex] = 16.27\:m * \dfrac{1\:km}{10^3\:m} \\[5ex] = 16.27\:m * \dfrac{1\:km}{1000\:m} \\[5ex] = \dfrac{16.27}{1000}\:km \\[5ex] = 0.01627\:km \\[3ex] \underline{Second\:\:Method:\:\:Proportional\:\:Reasoning\:\:Method} \\[3ex] 1\:km = 10^3\:m \\[3ex] 1\:km = 1000\:m \\[3ex] Let\:\:p = length\:\:of\:\:16.27\:m\:\:in\:\:km \\[3ex]$

| $km$ | $m$ |
|---|---|
| $1$ | $1000$ |
| $p$ | $16.27$ |

$\dfrac{p}{1} = \dfrac{16.27}{1000} \\[5ex] p = 0.01627\:km \\[3ex] \therefore 16.27\:m = 0.01627\:km$

(2.) Convert $16.27$ kilometers to meters

Measurement is Length

$\underline{First\:\:Method:\:\:Unity\:\:Fraction\:\:Method} \\[3ex] 16.27\:km\:\:to\:\:m \\[3ex] = 16.27\:km * \dfrac{.....m}{.....km} \\[5ex] = 16.27\:km * \dfrac{10^3\:m}{1\:km} \\[5ex] = 16.27\:km * \dfrac{1000\:m}{1\:km} \\[5ex] = 16.27 * 1000 = 16270\:m \\[3ex] \underline{Second\:\:Method:\:\:Proportional\:\:Reasoning\:\:Method} \\[3ex] 1\:km = 10^3\:m \\[3ex] 1\:km = 1000\:m \\[3ex] Let\:\:p = length\:\:of\:\:16.27\:km\:\:in\:\:m \\[3ex]$

| $km$ | $m$ |
|---|---|
| $1$ | $1000$ |
| $16.27$ | $p$ |

$\dfrac{p}{1000} = \dfrac{16.27}{1} \\[5ex] \dfrac{p}{1000} = 16.27 \\[5ex] Multiply\:\:both\:\:sides\:\:by\:\:1000 \\[3ex] 1000 * \dfrac{p}{1000} = 1000(16.27) \\[5ex] p = 16270\:m \\[3ex] \therefore 16.27\:km = 16270\:m$
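The same prefix table can also be driven programmatically. The short Python sketch below (written for this page, not part of the original examples, and listing only a few of the prefixes) applies the unity-fraction idea by multiplying by the source prefix factor and dividing by the target prefix factor:

```python
# Metric-prefix conversion using the multiplication factors from the table above.
FACTOR = {
    "": 1, "k": 1e3, "h": 1e2, "da": 1e1,
    "d": 1e-1, "c": 1e-2, "m": 1e-3,
}

def convert(value, from_prefix, to_prefix):
    """Convert a value between metric prefixes of the same base unit."""
    # value * (factor of source prefix) gives base units; divide by the target factor.
    return value * FACTOR[from_prefix] / FACTOR[to_prefix]

print(convert(16.27, "", "k"))    # 16.27 m  -> 0.01627 km   (Example 1)
print(convert(16.27, "k", ""))    # 16.27 km -> 16270 m      (Example 2)
print(convert(25, "da", "d"))     # 25 dag   -> 2500 dg      (Example 3, below)
```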
(3.) Convert $25$ dekagrams to decigrams

Measurement is Mass

$\underline{First\:\:Method:\:\:Unity\:\:Fraction\:\:Method} \\[3ex] 25\:dag\:\:to\:\:dg \\[3ex] = 25\:dag * \dfrac{.....g}{.....dag} * \dfrac{.....dg}{.....g} \\[5ex] = 25\:dag * \dfrac{10\:g}{1\:dag} * \dfrac{1\:dg}{10^{-1}\:g} \\[5ex] = 25\:dag * \dfrac{10\:g}{1\:dag} * \dfrac{1\:dg}{0.1\:g} \\[5ex] = \dfrac{25 * 10}{0.1}\:dg \\[5ex] = \dfrac{250}{0.1}\:dg \\[5ex] = 2500\:dg \\[3ex] \underline{Third\:\:Method:\:\:Fast\:\:Proportional\:\:Reasoning\:\:Method} \\[3ex] 1\:dag = 10\:g \\[3ex] 1\:dg = 10^{-1}\:g \\[3ex] 1\:dg = 0.1\:g \\[3ex]$

| $dag$ | $g$ | $dg$ |
|---|---|---|
| $1$ | $10$ | |
| $k$ | $0.1$ | $1$ |
| $25$ | | $p$ |

$Let\:\:k = mass\:\:of\:\:1\:dg\:\:in\:\:dag \\[3ex] Let\:\:p = mass\:\:of\:\:25\:dag\:\:in\:\:dg \\[3ex] First: \\[3ex] \dfrac{k}{1} = \dfrac{0.1}{10} \\[5ex] k = 0.01\:dag \\[3ex] Next: \\[3ex] \dfrac{p}{1} = \dfrac{25}{k} \\[5ex] p = \dfrac{25}{0.01} \\[5ex] p = 2500\:dg \\[3ex] \therefore 25\:dag = 2500\:dg$

(4.) Convert $25$ decigrams to dekagrams

Measurement is Mass

$\underline{First\:\:Method:\:\:Unity\:\:Fraction\:\:Method} \\[3ex] 25\:dg\:\:to\:\:dag \\[3ex] = 25\:dg * \dfrac{.....g}{.....dg} * \dfrac{.....dag}{.....g} \\[5ex] = 25\:dg * \dfrac{10^{-1}\:g}{1\:dg} * \dfrac{1\:dag}{10\:g} \\[5ex] = 25\:dg * \dfrac{0.1\:g}{1\:dg} * \dfrac{1\:dag}{10\:g} \\[5ex] = \dfrac{25 * 0.1}{10}\:dag \\[5ex] = \dfrac{2.5}{10}\:dag \\[5ex] = 0.25\:dag \\[3ex] \underline{Third\:\:Method:\:\:Fast\:\:Proportional\:\:Reasoning\:\:Method} \\[3ex] 1\:dg = 10^{-1}\:g \\[3ex] 1\:dg = 0.1\:g \\[3ex] 1\:dag = 10\:g \\[3ex]$

| $dg$ | $g$ | $dag$ |
|---|---|---|
| $1$ | $0.1$ | $a$ |
| | $10$ | $1$ |
| $25$ | | $c$ |

$Let\:\:a = mass\:\:of\:\:1\:dg\:\:in\:\:dag \\[3ex] Let\:\:c = mass\:\:of\:\:25\:dg\:\:in\:\:dag \\[3ex] First: \\[3ex] \dfrac{a}{1} = \dfrac{0.1}{10} \\[5ex] a = 0.01\:dag \\[3ex] Next: \\[3ex] \dfrac{c}{a} = \dfrac{25}{1} \\[5ex] \dfrac{c}{0.01} = 25 \\[5ex] Multiply\:\:both\:\:sides\:\:by\:\: 0.01 \\[3ex] 0.01 * \dfrac{c}{0.01} = 0.01(25) \\[5ex] c = 0.25\:dag \\[3ex] \therefore 25\:dg = 0.25\:dag$

(5.)

$Let\:\: \angle OPK = x \\[3ex] \angle OPR = \angle ORP = y ...base\:\:\angle s \:\:of\:\:isosceles\:\: \triangle \\[3ex] \underline{\triangle OPR} \\[3ex] y + y + 108 = 180 ... sum\:\:of\:\:\angle s\:\:of\:\:a\:\:\triangle \\[3ex] 2y = 180 - 108 \\[3ex] 2y = 72 \\[3ex] y = \dfrac{72}{2} \\[5ex] y = 36 \\[3ex] \angle POR = 2 * \angle PKR ...\angle \:\:at\:\:center\:\:is\:\:twice\:\:\angle\:\:at\:\:circumference \\[3ex] 108 = 2 * \angle PKR \\[3ex] \angle PKR = \dfrac{108}{2} \\[5ex] \angle PKR = 54 \\[3ex] \underline{\triangle PKR} \\[3ex] \angle KPR + \angle KRP + \angle PKR = 180 ... sum\:\:of\:\:\angle s\:\:of\:\:a\:\:\triangle \\[3ex] \angle KPR = \angle OPK + \angle OPR \\[3ex] \angle KPR = x + 36 \\[3ex] \angle KRP = 20 + 36 = 56 \\[3ex] \therefore x + 36 + 56 + 54 = 180 \\[3ex] x + 146 = 180 \\[3ex] x = 180 - 146 \\[3ex] x = 34^\circ$

(6.)

$x = 2y ...\angle s \:\:in\:\:the\:\:same\:\:segment \\[3ex] p + 60 = 180 ...\angle s \:\:in\:\:a\:\:straight\:\:line \\[3ex] p = 180 - 60 \\[3ex] p = 120 \\[3ex] 120 + 2y + 2x = 180 ... sum\:\:of\:\:\angle s\:\:of\:\:a\:\:\triangle \\[3ex] 120 + x + 2x = 180 \\[3ex] 3x = 180 - 120 \\[3ex] 3x = 60 \\[3ex] x = \dfrac{60}{3} \\[5ex] x = 20 \\[3ex] x = 2y \rightarrow y = \dfrac{x}{2} \\[5ex] y = \dfrac{20}{2} \\[5ex] y = 10^\circ$

(7.) CSEC
The scale on a map is $1:25000$
(i) Anderlin and Jersey are $31.8\:cm$ apart on the map. Determine, in $km$, the actual distance between Anderlin and Jersey.
(ii) The actual distance between Clifton and James Town is $2.75\:km$. How many units apart are they on the map?

$(i) \\[3ex] Scale = 1:25000 \\[3ex] 31.8\:cm \rightarrow ? \\[3ex] \underline{Second\:\:Method:\:\:Proportional\:\:Reasoning\:\:Method} \\[3ex]$

| $Scale$ | $Actual$ |
|---|---|
| $1$ | $25000$ |
| $31.8$ | $y$ |

$\dfrac{1}{31.8} = \dfrac{25000}{y} \\[5ex] Cross\:\:Multiply \\[3ex] 1(y) = 31.8(25000) \\[3ex] y = 795000\:cm \\[3ex] \therefore 31.8\:cm = 795000\:cm \\[3ex]$

We need to convert this distance to $km$

$\underline{First\:\:Method - Unity\:\:Fraction\:\:Method} \\[3ex] 795000\:cm\:\:to\:\:km \\[3ex] = 795000\:cm * \dfrac{0.01\:m}{1\:cm} * \dfrac{1\:km}{1000\:m} \\[5ex] = \dfrac{7950}{1000}\:km \\[5ex] = 7.95\:km \\[3ex] \underline{Second\:\:Method:\:\:Proportional\:\:Reasoning\:\:Method} \\[3ex]$

| $cm$ | $m$ |
|---|---|
| $1$ | $0.01$ |
| $795000$ | $x$ |

$\dfrac{1}{795000} = \dfrac{0.01}{x} \\[5ex] Cross\:\:Multiply \\[3ex] 1(x) = 795000(0.01) \\[3ex] x = 7950\:m \\[3ex]$

| $m$ | $km$ |
|---|---|
| $1000$ | $1$ |
| $7950$ | $p$ |

$\dfrac{1000}{7950} = \dfrac{1}{p} \\[5ex] Cross\:\:Multiply \\[3ex] 1000p = 7950(1) \\[3ex] 1000p = 7950 \\[3ex] p = \dfrac{7950}{1000} \\[5ex] p = 7.95\:km \\[3ex] (ii) \\[3ex] \underline{First\:\:Method - Unity\:\:Fraction\:\:Method} \\[3ex] 2.75\:km\:\:to\:\:cm \\[3ex] = 2.75\:km * \dfrac{1000\:m}{1\:km} * \dfrac{1\:cm}{0.01\:m} \\[5ex] = \dfrac{2750}{0.01}\:cm \\[5ex] = 275000\:cm \\[3ex] \underline{Second\:\:Method:\:\:Proportional\:\:Reasoning\:\:Method} \\[3ex]$

| $Scale$ | $Actual$ |
|---|---|
| $1$ | $25000$ |
| $d$ | $275000$ |

$\dfrac{1}{d} = \dfrac{25000}{275000} \\[5ex] \dfrac{25000}{275000} = \dfrac{5}{55} = \dfrac{1}{11} \\[5ex] \dfrac{1}{d} = \dfrac{1}{11} \\[5ex] Cross\:\:Multiply \\[3ex] d = 1(11) \\[3ex] d = 11\:cm \\[3ex]$

Clifton and James Town are $11\:cm$ apart on the map.

(8.)

$Obtuse\angle O = 2(65) ...\angle \:\:at\:\:center\:\:is\:\:twice\:\:\angle\:\:at\:\:circumference \\[3ex] Obtuse\angle O = 130 \\[3ex] Reflex\angle ROP + Obtuse\angle ROP = 360 ...\angle s\:\:around\:\:a\:\:point \\[3ex] Reflex\angle ROP = x \\[3ex] Obtuse\angle ROP = 130 \\[3ex] x + 130 = 360 \\[3ex] x = 360 - 130 \\[3ex] x = 230^\circ$

(9.)
$\angle SQR = 79^\circ \\[3ex] \angle QRS = x \\[3ex] Reflex\:\:P\hat{O}S = 252^\circ \\[3ex] Reflex\:\:P\hat{O}S + Obtuse\:\:P\hat{O}S = 360 ...\angle s\:\:around\:\:a\:\:point \\[3ex] Obtuse\:\:P\hat{O}S = 360 - 252 \\[3ex] Obtuse\:\:P\hat{O}S = 108 \\[3ex] \underline{\triangle POS} \\[3ex] \angle SPO = \angle PSO = y...base\:\:\angle s\:\:of\:\:isosceles\:\:\triangle \\[3ex] \angle SPO + \angle POS + \angle PSO = 180 ...sum\:\:of\:\:\angle s\:\:of\:\:a\:\:\triangle \\[3ex] x + 108 + x = 180 \\[3ex] 2x + 180 - 108 \\[3ex] 2x = 72 \\[3ex] x = \dfrac{72}{2} \\[5ex] x = 36 \\[3ex] Obtuse\:\:P\hat{O}S = 2 * \angle PQS ...\angle \:\:at\:\:center\:\:is\:\:twice\:\:\angle\:\:at\:\:circumference \\[3ex] 108 = 2 * \angle PQS \\[3ex] \angle PQS = \dfrac{108}{2} \\[5ex] \angle PQS = 54 \\[3ex] \underline{\triangle PQS} \\[3ex] \angle QPS = \angle QSP = p...base\:\:\angle s\:\:of\:\:isosceles\:\:\triangle \\[3ex] \angle QPS + \angle QSP + \angle PQS = 180 ...sum\:\:of\:\:\angle s\:\:of\:\:a\:\:\triangle \\[3ex] p + p + 54 = 180 \\[3ex] 2p + 54 = 180 \\[3ex] 2p = 180 - 54 \\[3ex] 2p = 126 \\[3ex] p = \dfrac{126}{2} \\[5ex] p = 63 \\[3ex] \angle QSR = 63 ...\angle \:\:between\:\:tangent\:\:and\:\:chord=\angle\:\:in\:\:alternate\:\:segment \\[3ex] \\[3ex] OR \\[3ex] \angle PSQ = \angle PSO + \angle OSQ ...as\:\:shown \\[3ex] 63 = 36 + \angle OSQ \\[3ex] \angle OSQ = 63 - 36 \\[3ex] \angle OSQ = 27 \\[3ex] \angle OSR = 90^\circ ... radius \perp tangent\:\:at\:\:point\:\:of\:\:contact \\[3ex] \angle OSR = \angle OSQ + \angle QSR ...as\:\:shown \\[3ex] 90 = 27 + \angle QSR \\[3ex] \angle QSR = 90 - 27 \\[3ex] \angle QSR = 63 \\[3ex] \underline{\triangle QSR} \\[3ex] \angle QRS + \angle QRS + \angle SQR = 180 ...sum\:\:of\:\:\angle s\:\:of\:\:a\:\:\triangle \\[3ex] 63 + x + 79 = 180 \\[3ex] 142 + x = 180 \\[3ex] x = 180 - 142 \\[3ex] x = 38^\circ$ (10.) $\angle OPT = 90^\circ ... radius \perp tangent\:\:at\:\:point\:\:of\:\:contact \\[3ex] \underline{\triangle QPT} \\[3ex] 30 + x + 90 + 2x + x = 180 ...sum\:\:of\:\:\angle s\:\:of\:\:a\:\:\triangle \\[3ex] 4x + 120 = 180 \\[3ex] 4x = 180 - 120 \\[3ex] 4x = 60 \\[3ex] x = \dfrac{60}{4} \\[5ex] x = 15 \\[3ex] \angle PTO = 2x \\[3ex] \angle PTO = 2(15) \\[3ex] \angle PTO = 30^\circ$ (11.) $\underline{\triangle PSQ} \\[3ex] \angle PSQ = 50^\circ ...\angle \:\:between\:\:tangent\:\:and\:\:chord=\angle\:\:in\:\:alternate\:\:segment \\[3ex] \angle PSQ = \angle PQS = 50 ...base\:\: \angle s\:\:of\:\:isosceles\:\: \triangle \\[3ex] \angle PSQ + \angle PQS + \angle SPQ = 180 ...sum\:\:of\:\:\angle s\:\:of\:\:a\:\:\triangle \\[3ex] 50 + 50 + \angle SPQ = 180 \\[3ex] 100 + \angle SPQ = 180 \\[3ex] \angle SPQ = 180 - 100 \\[3ex] \angle SPQ = 80 \\[3ex] \underline{Cyclic\:\:Quadrilateral\:\:SPQR} \\[3ex] \angle P + \angle R = 180 ...sum\:\:of\:\:interior\:\:opposite\:\:\angle s\:\:of\:\:a\:\:cyclic\:\:Quad \\[3ex] 80 + \angle R = 180 \\[3ex] \angle R = 180 - 80 \\[3ex] \angle R = \angle QRS = 100^\circ$ (12.) $\angle OCA = \angle OAC = 48^\circ ...base\:\: \angle s\:\:of\:\:isosceles\:\: \triangle \\[3ex] \angle ACB = 90^\circ ...\angle \:\:in\:\:a\:\:semicircle \\[3ex] \underline{\triangle ABC} \\[3ex] \angle CAB + \angle ABC + \angle ACB = 180^\circ ...sum\:\:of\:\:\angle s\:\:of\:\:a\:\:\triangle \\[3ex] \angle CAB = \angle OCA = 48 ...as\:\:shown \\[3ex] \rightarrow 48 + \angle ABC + 90 = 180 \\[3ex] 138 + \angle ABC = 180 \\[3ex] \angle ABC = 180 - 138 \\[3ex] \angle ABC = 42^\circ$ (13.) 
$(i) \\[3ex] \angle HJK = 90^\circ ...\angle \:\:in\:\:a\:\:semicircle \\[3ex] \angle HJK = \angle HJL + \angle LJK...as\:\:shown \\[3ex] 90 = 20 + \angle LJK \\[3ex] \angle LJK = 90 - 20 \\[3ex] \angle LJK = 70 \\[3ex] \angle LHK = \angle LJK ...\angle s \:\:in\:\:a\:\:straight\:\:line \\[3ex] \rightarrow \angle LHK = 70 \\[3ex] \angle HLK = 90^\circ ...\angle \:\:in\:\:a\:\:semicircle \\[3ex] \underline{\triangle HKL} \\[3ex] \angle LHK + \angle HLK + \angle HKL = 180 ...sum\:\:of\:\:\angle s\:\:of\:\:a\:\:\triangle \\[3ex] 70 + 90 + \angle HKL = 180 \\[3ex] 160 + \angle HKL = 180 \\[3ex] \angle HKL = 180 - 160 \\[3ex] \angle HKL = 20^\circ \\[3ex] (ii) \\[3ex] \angle OKJ = \angle OJK ...base\:\: \angle s\:\:of\:\:isosceles\:\: \triangle \\[3ex] \rightarrow \angle OJK = 50 \\[3ex] \underline{\triangle JOK} \\[3ex] \angle OKJ + \angle OJK + \angle JOK = 180 ...sum\:\:of\:\:\angle s\:\:of\:\:a\:\:\triangle \\[3ex] 50 + 50 + \angle JOK = 180 \\[3ex] 100 + \angle JOK = 180 \\[3ex] \angle JOK = 180 - 100 \\[3ex] \angle JOK = 80^\circ \\[3ex] (iii) \\[3ex] \underline{\triangle JHK} \\[3ex] \angle HJK + \angle JKH + \angle JHK = 180 ...sum\:\:of\:\:\angle s\:\:of\:\:a\:\:\triangle \\[3ex] 90 + 50 + \angle JHK = 180 \\[3ex] 140 + \angle JHK = 180 \\[3ex] \angle JHK = 180 - 140 \\[3ex] \angle JHK = 40^\circ$ (14.) $\angle TSP = \angle TQP ... \angle s \:\:in\:\:the\:\:same\:\:segment \\[3ex] \angle TQP = 31 + 58 ...exterior\:\: \angle \:\:of\:\:a\:\: \triangle \\[3ex] \angle TQP = 89 \\[3ex] \therefore \angle TSP = 89^\circ$ (15.) $x = y + 17...eqn.(1) ...\angle \:\:between\:\:tangent\:\:and\:\:chord=\angle\:\:in\:\:alternate\:\:segment \\[3ex] 2x - 43 = y...eqn.(2) ...\angle \:\:between\:\:tangent\:\:and\:\:chord=\angle\:\:in\:\:alternate\:\:segment \\[3ex] Substitute\:\:eqn.(1)\:\:into\:\:eqn.(2) \\[3ex] 2(y + 17) - 43 = y \\[3ex] 2y + 34 - 43 = y \\[3ex] 2y - y = 43 - 34 \\[3ex] y = 9^\circ$ (16.) $\angle ORP = 90^\circ ... radius \perp tangent\:\:at\:\:point\:\:of\:\:contact \\[3ex] \angle ORP = \angle ORQ + \angle QRP \\[3ex] 90 = \angle ORQ + 34 \\[3ex] \angle ORQ = 90 - 34 \\[3ex] \angle ORQ = 56 \\[3ex] \angle ORQ = \angle OQR = 56 ...base\:\: \angle s\:\:of\:\:isosceles\:\: \triangle \\[3ex] \underline{\triangle ORQ} \\[3ex] \angle ORG + \angle OQR + x = 180 ...sum\:\:of\:\:\angle s\:\:of\:\:a\:\:\triangle \\[3ex] 56 + 56 + x = 180 \\[3ex] 112 + x = 180 \\[3ex] x = 180 - 112 \\[3ex] x = 68^\circ$ (17.) We can solve (i) in at least two ways $(i) \\[3ex] \underline{First\:\:Method} \\[3ex] \angle AOD = 2 * \angle ACD ...\angle \:\:at\:\:center\:\:is\:\:twice\:\:\angle\:\:at\:\:circumference \\[3ex] 114 = 2 * \angle ACD \\[3ex] \angle ACD = \dfrac{114}{2} \\[5ex] \angle ACD = 57^\circ \\[3ex] \underline{Second\:\:Method} \\[3ex] \angle ODG = 90 ... 
radius \perp tangent\:\:at\:\:point\:\:of\:\:contact \\[3ex] \angle ODA = \angle OAD = y ...base\:\: \angle s\:\:of\:\:isosceles\:\: \triangle \\[3ex] \underline{\triangle OAD} \\[3ex] \angle OAD + \angle ODA + \angle AOD = 180 ...sum\:\:of\:\:\angle s\:\:of\:\:a\:\:\triangle \\[3ex] y + y + 114 = 180 \\[3ex] 2y + 114 = 180 \\[3ex] 2y = 180 - 114 \\[3ex] 2y = 66 \\[3ex] y = \dfrac{66}{2} \\[5ex] y = 33 \\[3ex] \angle ADE + \angle ODA + \angle ODG = 180 ...\angle s\:\:in\:\:a\:\:straight\:\:line \\[3ex] \angle ADE + 33 + 90 = 180 \\[3ex] \angle ADE + 123 = 180 \\[3ex] \angle ADE = 180 - 123 \\[3ex] \angle ADE = 57 \\[3ex] \angle ADE = \angle ACD ...\angle \:\:between\:\:tangent\:\:and\:\:chord = \angle\:\:in\:\:alternate\:\:segment \\[3ex] \rightarrow \angle ACD = 57^\circ \\[3ex] (ii) \\[3ex] \angle EAD = \angle ACD ...\angle \:\:between\:\:tangent\:\:and\:\:chord = \angle\:\:in\:\:alternate\:\:segment \\[3ex] \rightarrow \angle EAD = 57^\circ \\[3ex] \underline{\triangle AED} \\[3ex] \angle ADE + \angle EAD + \angle AED = 180 ...sum\:\:of\:\:\angle s\:\:of\:\:a\:\:\triangle \\[3ex] 57 + 57 + \angle AED = 180 \\[3ex] 114 + \angle AED = 180 \\[3ex] \angle AED = 180 - 114 \\[3ex] \angle AED = 66^\circ \\[3ex] (iii) \\[3ex] \angle OAD = \angle OAC + \angle CAD ...as\:\:shown \\[3ex] \angle CDG = \angle CAD ...\angle \:\:between\:\:tangent\:\:and\:\:chord=\angle\:\:in\:\:alternate\:\:segment \\[3ex] \therefore \angle CAD = 18 \\[3ex] \rightarrow 33 = \angle OAC + 18 \\[3ex] \angle OAC = 33 - 18 \\[3ex] \angle OAC = 15^\circ \\[3ex] (iv) \\[3ex] \underline{Quadrilateral\:\:ABCD} \\[3ex] \angle B + \angle D = 180 ...sum\:\:of\:\:interior\:\:opposite\:\:\angle s\:\:of\:\:a\:\:cyclic\:\:Quad \\[3ex] \angle D = \angle ODA + \angle ODC ...as\:\:shown \\[3ex] \angle ODG = \angle ODC + \angle CDG ...as\:\:shown \\[3ex] 90 = \angle ODC + 18 \\[3ex] \angle ODC = 90 - 18 \\[3ex] \angle ODC = 72 \\[3ex] \rightarrow \angle D = 33 + 72 \\[3ex] \angle D = 105 \\[3ex] \rightarrow \angle B + 105 = 180 \\[3ex] \angle B = 180 - 105 \\[3ex] \angle B = 75^\circ$ (18.) $\angle MQP = 90 ...\angle \:\:in\:\:a\:\:semicircle \\[3ex] \angle MPQ = \angle MNQ ...\angle s\:\:in\:\:same\:\:segment \\[3ex] \rightarrow \angle MPQ = 42 \\[3ex] \underline{\triangle QMP} \\[3ex] \angle QMP + \angle MPQ + \angle MQP = 180 ...sum\:\:of\:\:\angle s\:\:of\:\:a\:\:\triangle \\[3ex] \angle QMP + 42 + 90 = 180 \\[3ex] \angle QMP + 132 = 180 \\[3ex] \angle QMP = 180 - 132 \\[3ex] \angle QMP = 48^\circ$ (19.) 
$(i) \\[3ex] \angle OQR = \angle ORQ = 32 ...base\:\: \angle s\:\:of\:\:isosceles\:\: \triangle \\[3ex] \underline{\triangle OQR} \\[3ex] \angle OQR + \angle ORQ + \angle QOR = 180 ...sum\:\:of\:\:\angle s\:\:of\:\:a\:\:\triangle \\[3ex] 32 + 32 + \angle QOR = 180 \\[3ex] 64 + \angle QOR = 180 \\[3ex] \angle QOR = 180 - 64 \\[3ex] \angle QOR = 116 \\[3ex] \angle QOR = 2 * \angle QPR ...\angle \:\:at\:\:center\:\:is\:\:twice\:\:\angle\:\:at\:\:circumference \\[3ex] 116 = 2 * \angle QPR \\[3ex] \angle QPR = \dfrac{116}{2} \\[5ex] \angle QPR = 58^\circ \\[3ex] (ii) \\[3ex] \underline{Quadrilateral\:\:TPRQ} \\[3ex] \angle P + \angle Q = 180 ...sum\:\:of\:\:interior\:\:opposite\:\:\angle s\:\:of\:\:a\:\:cyclic\:\:Quad \\[3ex] \angle P = \angle TPQ + \angle QPR ...as\:\:shown \\[3ex] \angle P = 15 + 58 \\[3ex] \angle P = 73 \\[3ex] \angle Q = \angle TQO + \angle OQR ...as\:\:shown \\[3ex] \angle Q = \angle TQO + 32 \\[3ex] \rightarrow 73 + \angle TQO + 32 = 180 \\[3ex] \angle TQO + 105 = 180 \\[3ex] \angle TQO = 180 - 105 \\[3ex] \angle TQO = 75^\circ$ (20.) $\underline{Quadrilateral\:\:TUVW} \\[3ex] \angle U + \angle W = 180 ...sum\:\:of\:\:opposite\:\:\angle s\:\:of\:\:a\:\:cyclic\:\:Quad \\[3ex] 3x + 20 + 88 = 180 \\[3ex] 3x + 108 = 180 \\[3ex] 3x = 180 - 108 \\[3ex] 3x = 72 \\[3ex] x = \dfrac{72}{3} \\[5ex] x = 24^\circ$ (21.) $a) \\[3ex] \angle VZW = 51^\circ ...\angle \:\:between\:\:tangent\:\:and\:\:chord = \angle\:\:in\:\:alternate\:\:segment \\[3ex] b) \\[3ex] \angle VWZ = 78^\circ ...\angle \:\:between\:\:tangent\:\:and\:\:chord = \angle\:\:in\:\:alternate\:\:segment \\[3ex] \underline{Cyclic\:\:Quadrilateral\:\:WXYZ} \\[3ex] \angle XYZ = 78^\circ ...exterior\:\: \angle \:\:of\:\:a\:\:cyclic\:\:Quad = interior\:\:opposite\:\: \angle$ (22.) $\angle SQR = \angle SPQ ...\angle \:\:between\:\:tangent\:\:and\:\:chord = \angle\:\:in\:\:alternate\:\:segment \\[3ex] \rightarrow \angle SPQ = 50 \\[3ex] \angle SPQ = \angle SQP ...base\:\: \angle s\:\:of\:\:isosceles\:\: \triangle \\[3ex] \rightarrow \angle SQP = 50 \\[3ex] \underline{\triangle SRQ} \\[3ex] \angle RSQ = \angle SPQ + \angle SQP ...exterior\:\: \angle \:\:of\:\:a\:\: \triangle \\[3ex] \angle RSQ = 50 + 50 \\[3ex] \angle RSQ = 100 \\[3ex] \angle SRQ + \angle RSQ + \angle SQR = 180 ...sum\:\:of\:\:\angle s\:\:of\:\:a\:\:\triangle \\[3ex] \angle SRQ + 100 + 50 = 180 \\[3ex] \angle SRQ + 150 = 180 \\[3ex] \angle SRQ = 180 - 150 \\[3ex] \angle SRQ = 30^\circ$ (23.) $\angle TAC = 30 ...\angle \:\:between\:\:tangent\:\:and\:\:chord = \angle\:\:in\:\:alternate\:\:segment \\[3ex] \angle ATC = 90 ...\angle \:\:in\:\:a\:\:semicircle \\[3ex] \underline{\triangle TKC} \\[3ex] \angle TCK = 90 + 30 ...exterior\:\: \angle \:\:of\:\:a\:\: \triangle \\[3ex] \angle TCK = 120 \\[3ex] \angle TKC + \angle TCK + \angle CTK = 180 ...sum\:\:of\:\:\angle s\:\:of\:\:a\:\:\triangle \\[3ex] \angle TKC + 120 + 30 = 180 \\[3ex] \angle TKC + 150 = 180 \\[3ex] \angle TKC = 180 - 150 \\[3ex] \angle TKC = 30^\circ$ (24.)
$(i) \\[3ex] \angle BOA = 2 * \angle ACB ...\angle \:\:at\:\:center\:\:is\:\:twice\:\:\angle\:\:at\:\:circumference \\[3ex] 130 = 2 * \angle ACB \\[3ex] \angle ACB = \dfrac{130}{2} \\[5ex] \angle ACB = 65^\circ \\[3ex] (ii) \\[3ex] \angle CBD = \angle CAD ...\angle s\:\:in\:\:same\:\:segment \\[3ex] \therefore \angle CBD = 30^\circ \\[3ex] (iii) \\[3ex] \angle ADB = \angle ACB ...\angle s\:\:in\:\:same\:\:segment \\[3ex] \rightarrow \angle ADB = 65 \\[3ex] \underline{\triangle AED} \\[3ex] \angle DAE + \angle AED + \angle EDA = 180 ...sum\:\:of\:\:\angle s\:\:of\:\:a\:\:\triangle \\[3ex] \angle EDA = \angle ADB = 65 ...as\:\:shown \\[3ex] 30 + \angle AED + 65 = 180 \\[3ex] \angle AED + 95 = 180 \\[3ex] \angle AED = 180 - 95 \\[3ex] \angle AED = 85^\circ$
https://rujec.org/article_preview.php?id=27976
Leniency programs and socially beneficial cooperation: Effects of type I errors Natalia Pavlova¦, Andrey Shastitko¦» ₣ Russian Presidential Academy of National Economy and Public Administration, Moscow, Russia ¦ Lomonosov Moscow State University, Moscow, Russia » National Research University Higher School of Economics, Moscow, Russia Corresponding author: Andrey Shastitko ( [email protected] ) © 2016 Non-profit partnership “Voprosy Ekonomiki”.This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY-NC-ND 4.0), which permits to copy and distribute the article for non-commercial purposes, provided that the article is not altered or modified and the original author and source are credited. Citation: Pavlova N, Shastitko A (2016) Leniency programs and socially beneficial cooperation: Effects of type I errors. Russian Journal of Economics 2(4): 375-401. https://doi.org/10.1016/j.ruje.2016.11.003 # Abstract This study operationalizes the concept of hostility tradition in antitrust as mentioned by Oliver Williamson and Ronald Coase through erroneous law enforcement effects. The antitrust agency may commit type I, not just type II, errors when evaluating an agreement in terms of cartels. Moreover, firms can compete in a standard way, collude or engage in cooperative agreements that improve efficiency. The antitrust agency may misinterpret such cooperative agreements, committing a type I error (over-enforcement). The model set-up is drawn from Motta and Polo (2003) and is extended as described above using the findings of Ghebrihiwet and Motchenkova (2010). Three effects play a role in this environment. Type I errors may induce firms that would engage in socially efficient cooperation absent errors to opt for collusion (the deserved punishment effect). For other parameter configurations, type I errors may interrupt ongoing cooperation when investigated. In this case, the firms falsely report collusion and apply for leniency, fearing being erroneously fined (the disrupted cooperation effect). Finally, over-enforcement may prevent beneficial cooperation from starting given the threat of being mistakenly fined (the prevented cooperation effect). The results help us understand the negative impact that a hostility tradition in antitrust — which is more likely for inexperienced regimes and regimes with low standards of evidence — and the resulting type I enforcement errors can have on social welfare when applied to the regulation of horizontal agreements. Additional interpretations are discussed in light of leniency programs for corruption and compliance policies for antitrust violations. # Keywords antitrust, competition, collusion, cooperation agreements, leniency, enforcement errors, corruption, compliance policies JEL classification: D43, K21, L41 # 1. Introduction Cartels are considered to be one of the most dangerous types of antitrust law violations. The substantial harm that they can cause — extensively documented by Connor and Bolotova (2006) and many other researchers in subsequent publications — is only one part of the problem. The other part is that because cartels are considered to be an illegal (sometimes criminal) practice, their participants go to great lengths to hide the existence of such agreements, making this type of violation one of the most difficult for antitrust authorities to detect. 
Among the methods of uncovering information about cartels is active repentance in the form of leniency programs for cartel participants along with screening (Harrington, 2007). As leniency programs (LP) are implemented in more and more countries, we find evidence of both their success and failure. 1 Researchers have noted many possible ambiguous effects such programs can have on firms’ incentives. One of the topics that has not been sufficiently studied is the effect of type I errors on deterrence in the presence of LPs. This is supported by the recent study by Yusupova (2013), who found that in the Russian case, many agreements that were uncovered with the help of leniency are not hard-core cartels at all but other types of agreements (and not only horizontal ones), including those that can hardly be considered as restricting competition. De facto, this means that cartels as well as other horizontal agreements are not self-evident unless they are reduced to well documented cases of price-fixing and market-sharing. This can be illustrated by some examples from the experience of the Russian antitrust authority — the Federal Antimonopoly Service. One of these is a 2009 case on the agreement between two banks — Bank Uralsib and Toyota Bank. 2 At that time, Toyota Bank did not yet have the necessary license for acquiring money sums from individuals. The process of obtaining that license could take up to two years, but Toyota Bank wanted to give out loans to individuals for the purpose of buying cars from Toyota. Toyota Bank entered into an agreement with Bank Uralsib, which agreed to open current accounts for individuals for the purpose of transferring to them the car loans that were taken out at Toyota Bank and managing all subsequent loan payments. This agreement included as a provision the obligation of Bank Uralsib to abstain from recommending to individuals their own bank as a source of car loans for buying Toyotas from official dealers. This agreement was found by the antimonopoly authority to be anticompetitive and harmful, but the case was closed because both banks pleaded guilty, applied for leniency and eliminated the offending clause in the agreement. However, the reason for the agreement and its nature leave considerable doubt concerning the qualification of the agreement as intentionally anticompetitive. Interestingly, the case was repeated in 2012, when a similar agreement between Bank Uralsib and Volkswagen Bank RUS was uncovered by the Russian FAS 3 — except this time neither of the companies applied for leniency or pleaded guilty, choosing instead to appeal the authority's decision in court. Although these two cases seem to be obvious candidates for closer study from the point of view of possible benefits of cooperation, they have not been rigorously studied by researchers. However, there are other examples of possible type I errors in qualifying horizontal agreements that have been discussed in the past few years. Some examples are related to a recent case on larger diameter pipes (LDP) initiated by the Federal Antimonopoly Service against Russian pipe producers in 2011. Among the evidence presented in the case were schedules for LDP delivery on OJSC Gasprom (main buyer) pipeline projects, signed by representatives of all four domestic producers. 
Initially, this fact was qualified as an agreement for market sharing per se and directly prohibited by Russian law “On the protection of competition.” Only after more than one year of investigations (in March 2013) were LDP producers acquitted due to a requalification of the agreement and implementation of the rule of reason. 4 There were no LP applications as such, but this is a good example of how the disclosure of a horizontal agreement that looks like a cartel is only the start in the long process of its interpretation. The aim of this paper is twofold. First, we analyze how LPs could have affected the incentives of firms that took part in socially beneficial cooperation, considering that such a program gave them a potential way of escaping liability erroneously imposed on parties to horizontal cooperation agreements that were mistakenly qualified as cartels. It seems that such firms could have made false claims for leniency to guarantee that they paid no fines, whereas if the agreements were analyzed in more detail with a wider set of economic tools they would have been found to be beneficial to social welfare. Second, we analyze whether the affected incentives could explain why the LP in Russia (and, probably, in other countries with emerging markets) resulted in such a structure of uncovered cases where most of the cases are not hard-core cartels. To answer these questions, we extend the models of Motta and Polo (2003) and Ghebrihiwet and Motchenkova (2010) to include the probability of both type I and type II errors committed by an antitrust agency, and three alternative strategies for firms: collude, compete, or enter cooperation agreements. The underlying logic is that if the antitrust agency considers evidence of efficiency-promoting cooperation agreements as proof of collusion, the gains from cooperation decrease. If gains from cooperation are low enough, producers will give up efficiency-promoting cooperation agreements in equilibrium. Additionally, we consider a set of implications for a wider area of research and practice. First, leniency programs analogous to those in antitrust exist in other areas, such as anticorruption legislation, and we examine how our results can apply to corruption schemes. Second, even if we stay in the realm of antitrust, leniency programs are not the only possible means for a firm to secure a reduction of fines: among the other means are antitrust compliance programs, which are currently widely discussed in Russia through the lens of their possible promotion in exchange for a discount of 1/8 of the antitrust fine (Shastitko, 2016). We briefly examine the possible interplay between leniency and compliance in light of our results. The paper is organized as follows. Section 2 gives a brief summary of the relevant literature. Section 3 introduces our main assumptions, the model and the equilibria. Section 4 describes the main results. Section 5 provides the discussion in terms of corruption and compliance. Section 6 concludes the paper. # 2. Literature review Multiple strands of literature have a direct bearing on our model. The first is the literature on LPs. We shall build upon the models of Motta and Polo (2003), which show how implementing an LP can lead to contradictory effects and ambiguous results. Spagnolo (2004) demonstrates the important role of rewards to whistle-blowers for the efficiency of LPs.
Harrington (2008) clearly delineates some of the ambiguous effects of such programs (the “race to the courthouse”, “cartel amnesty” and “deviator amnesty” effects) and shows which forms of the programs can encourage the prevalence of wanted effects. Aubert et al. (2006) take into account not only corporate LPs but also individual leniency and more specifically individual rewards for whistle-blowing, demonstrating the important effect individual leniency can have on destabilizing cartels but also pointing out its potential spillover effects. Harrington (2013) proposes a model of an LP when firms have private information regarding the likelihood of prosecution. Harrington and Chang (2015) study how an LP, given its possibly ambiguous consequences, affects the overall number of cartels in an economy. Most of the other, more recent works build upon these models, expanding them to predict the different possible effects of the chosen forms of LPs. Motchenkova and Leliefeld (2010) capture the effect of industry asymmetry, Motchenkova and van der Laan (2011) address the asymmetry of firms, while Herre and Rasch (2009) and Bos and Wandschneider (2011) tackle the problem of leniency for cartel ring-leaders. Roux and von Ungern-Sternberg (2007), Dijkstra and Schoonbeek (2010), Lefouili and Roux (2012), and Marshall et al. (2013) address the effects of leniency in multi-market settings. Houba et al. (2009) and Chen and Rey (2012) consider optimal amnesty for repeat violators, among other aspects. While most of these works incorporate the assumption that the antitrust authority can make type II errors, mistakenly allowing violators to “walk free” (not literally acquitting them but also finding insufficient evidence that is not sustainable in the court room), almost none of them take into account the non-zero probability of type I errors, when the authority mistakenly fines innocent firms (or firms with minor violations). There is broad literature on judicial (enforcement) errors — wrongful conviction and prosecution (type I errors) and release of violators (II type errors). Unlike the straightforward conclusions on the applicability of punitive fines combined with the rather small probabilities of imposition (Becker, 1968, 1974)due to type II errors, type I errors change conclusions on integral deterrence effects of law enforcement under judicial errors. These ideas might be found in papers related to individual choice and the strategic interaction between economic exchangeparticipants with third-party enforcer involvement (Garoupa and Rizolli, 2012; Rizolli and Saraceno, 2011; Rizolli and Stanca, 2012; Shastitko, 2011, 2013), although some doubts are expressed (Lando, 2006). A broader view, combining issues of deterrence, optimal evidence and incentives for desirable behavior, is proposed by Kaplow (2011). Can we find some theoretical support for the idea of deterrence intensity being reduced due to type I errors as applied to antitrust law enforcement with LPs? There are some applications of studies in antitrust law enforcement errors. For example, some asymmetry in the study of two types of errors and their effect on deterrence and socially beneficial cooperation is a topic actively debated, and the discussion might easily be found in the literature on antitrust economics and law and economics 5 . However, this is not the case for LPs under judicial errors of both types. An exception is Aubert et al. 
(2006), who established that the size of individual rewards should be limited to not trigger false claims from firms engaging in socially optimal cooperation. A more thorough study of the effects of type I errors can be found in Ghebrihiwet and Motchenkova (2010). Our own model will rely heavily on the latter, and the similarities and differences between their model and ours will be expanded upon in the next section. The negative effects of type I errors in deterring cartels would not be as critical if not for the fact that so many forms of cooperation between competitors (so-called horizontal agreements) might be socially beneficial. The nature of these “non-standard” contracts, which can (and did) arouse suspicion from researchers and regulators as potentially harmful to competition, is closely studied (albeit mostly in terms of vertical contracts) in transaction cost economics (Williamson, 1985, 1996; Ménard, 2004).The term “hostility tradition” was introduced by Williamson to describe the situation of any economic practice deviating from a simplified standard, which is considered to be evidence of market power and exclusive (as opposed to exploiting)commercial practices that are harmful for competition and social welfare. This idea might also be found in the paper by Coase (1972) devoted to the achievements and development of industrial organization theory. Although clearly stating the problem of the origins of the hostility tradition, researchers have so far been unable to show just how such a tradition can manifest itself and to what sort of consequences it can ead if cartels and socially beneficial cooperation between competitors are not sufficiently demarcated. # 3. The model ## 3.1. The intuition Before describing the model, let us examine very shortly the intuition behind the problem. If a firm is wrongfully accused and prosecuted for an offence and imputed with some evidence, it might expect a change in the balance of the expected costs and benefits of its actions. The violation of rules becomes relatively more attractive, and welfare-inducing agreements are concluded either more rarely or interrupted. If this is so, the effects of LPs devoted to reestablishing the oneshot prisoners’ dilemma game between competitors might change compared to the presence of only type II errors. Intuitively, it is quite clear that several types of negative effects can arise, including not only false self-reporting and reporting by counter-agent of agreements but also abstaining from the use of particular clauses in contracts and refraining from concluding these contracts as a whole. That is why we can expect multiple forms of harm related not only to prospective market actors but also to principals of enforcement — tax payers. In our model, we limit ourselves only to direct effects. In any case, the intuition leaves us with some doubts as to what the structure of current and potential strategic interactions between firms will look like. ## 3.2. Assumptions The presented model is an extension of the model developed by Ghebrihiwet and Motchenkova (2010), which itself builds upon the model by Motta and Polo (2003). Ghebrihiwet and Motchenkova (2010) attempt to fill the void in the study of type I errors and leniency by adding the probability of type I errors to the model of Motta and Polo (2003). They derive some interesting results, e.g., that innocent firms may use plea bargaining as insurance against a type I error. 
At the same time, this model does not allow us to analyze the self-reporting (including counter-part reporting) of cooperating firms. We extend the model by Ghebrihiwet and Motchenkova (2010) to take into account the effects of LPs on horizontal cooperation agreements that are beneficial to social welfare. Additionally, the model by Ghebrihiwet and Motchenkova (2010) does not allow innocent firms to apply for leniency because there is no legal uncertainty on particular forms of market behavior. Instead, it gives them the opportunity to plead guilty in a pre-trial settlement. The main reason given for this is that in exchange for leniency, the firm must provide evidence of collusion, whereas an innocent firm can provide none. We assume that firms can enter into agreements that are not aimed at harming competition but can be interpreted as such by an authority that can make errors. That is why the notion of evidence quality is important. In this case, innocent firms — in exchange for leniency — can provide the sort of information that can be used to “prove” the fact of collusion. Finally, in the model by Ghebrihiwet and Motchenkova (2010), the probabilities of type I and type II errors are the same across all possible behavioral strategies. We propose taking into account that the antimonopoly authority has some experience that allows it to distinguish different types of behavior on a market. In this way, the probability of a colluding firm being found guilty is higher than that for a firm that does not in fact violate the law. This point reflects some particularities of administrative procedures taken into account by the antitrust authority to initialize the case and to make decisions based on the collected and interpreted evidence. Following Motta and Polo (2003) and Ghebrihiwet and Motchenkova (2010), we analyze a group of perfectly symmetric firms. The firms choose between competing, colluding, deviating from the collusive strategy and cooperating (the corresponding profits are Π N , Π M , Π D and Π COOP) . Because all firms are symmetric, they all choose the same strategy in equilibrium. The antitrust authority chooses an enforcement policy that can include the use of a LP. Firms take into account the policy of the antitrust authority. Thecollusive agreement prescribes both the market behavior and the behavior towards the antitrust authority: whether the firm reveals information about the cartel if monitored. At period t = 0 the antitrust authority sets the policy parameters: the full fine F (F > 0), the reduced fine R (0 ≤ R < F) 6 and the probabilities of firms being investigated and prosecuted. We extend the model by Ghebrihiwet and Motchenkova (2010) by assuming that the probabilities of an investigation opening and ending in a conviction are different across different market strategies in the following way. We denote the probability of the antitrust authority starting an investigation against a firm that neither colludes nor cooperates by α 0, and the probability of that investigation ending in a conviction by p 0. For colluding firms, the probabilities are α 1 and p 1; for firms deviating from a cartel agreement, they are α 2 and p 2; for cooperating firms, they are α 3 and p 3, α 0 ≠ α 1 ≠ α 2 ≠ α 3, p 0 ≠ p 1 ≠ p 2 ≠ p 3. To simplify the comparison, we make some additional assumptions about probabilities α and p. This can be done in multiple ways, but the key will be the markers that the antitrust authority uses to identify cartel agreements. 
A study of cartel behavior and the possible effects that can draw the attention of antitrust authorities can be found in the work of Harrington (2006). We will use two characteristics that can be interpreted by the antitrust authorities as markers of cartels: the existence of an agreement between competitors and the existence of profits that are higher than the competitive level. It seems logical to assume that the lowest probabilities are applicable for firms that originally compete — that is, they neither collude nor cooperate on the market. In this case, not only is there no trace of any agreement, there is also no evidence of excessive profit. By the same logic, the highest probability of investigation and prosecution exists for the case where both a collusive agreement and a collusive profit are present — and this is the case of collusive strategies, so the highest probabilities are α 1 and p 1. For firms deviating from the agreement, we can assume the following. Although the firm acted competitively in the first period by undercutting its rivals’ price, it has still entered the agreement at some previous point in time — otherwise there would be nothing from which to deviate. Therefore, some proof of the existence of a cartel agreement exists, even though the profits received by the firms do not support the assumption that collusion took place. For these reasons, we maintain that the probability of prosecution in this case, p 2 is higher than in the case of competition, but lower than in the case of collusion: α 0 < α 2 < α 1 and p 0 < p 2 < p 1. For cooperating firms, the situation is as follows. Because there is a certain agreement between firms, which is difficult to distinguish from a cartel agreement due to the inclusion of ancillary restraints, and because if the cooperation is successful, firms will receive a profit that is higher than the competitive profit (as in the “Uralsib” and “Toyota Bank” example), we assume that the probabilities of prosecution are higher than in the case of competition, but lower than in the case of collusion: α 0 < α 3 < α 1 and p 0 < p 3 < p 1. A more difficult issue is the correlation between probabilities for deviating firms and cooperating firms. In both cases, some sort of agreement between competitors exists that can be detected by the antitrust authorities (ex post)and interpreted as evidence of collusion. However, in the case of deviating, competition can be observed (as a process): behavior on the market shows that firms actively compete by undercutting each others’ prices. In contrast, in the case of thedeviating strategy, the available evidence that can be used as proof of collusion is only the agreement itself and during a limited period of time. In the case of cooperation, there is both an agreement and a market outcome that can resemblecollusion 7 . Thus, we can assume that a cooperation agreement is more likely to draw attention and end in prosecution than an agreement that has never been executed. Hence, we consider α 0 < α 2 < α 3 < α 1 and p 0 < p 2 < p 3 < p 1. The timing of the game is as follows. The antitrust authority monitors the behavior of firms in the market, prioritizing the directions and scope of screening. An investigation, once opened, can last one or two periods. In the first phase, an investigation is started with a certain probability. If a firm confesses, the authority ends the investigation and finds a violation with probability 1 (not checking whether the confession is false). 
The firm that confessed receives a reduced fine and is made to compete in the current period. If none of the firms confess, the investigation continues for a second period and ends in a conviction with a probability that is less than 1. If found guilty, the firm is made to pay the full fine and compete in the second period (it is not assumed that it can exit the market). We assume that any firm that admits to a cartel is granted a reduced fine, independent of whether it was the first to do so. Consequently, the game restarts. We assume infinite repetition of the game. We now take a closer look at the firms’ strategies and their corresponding values. ## 3.3. Values of strategies ### A. Not collude or cooperate (N) By choosing this strategy, each firm receives profits Π N in each period. In the first period, the antitrust authority starts an investigation with probability α 0. In the second period with probability p 0, the antitrust authority mistakenly finds an infringement and makes the firm pay the full fine F. 8 Because the firms in fact compete, they will not be able to provide evidence of collusion in exchange for leniency. In fact, false positives on the screening side cannot be compensated by access to leniency. ### B. Collude and not reveal (CNR) Colluding firms receive Π M . In the first period, the antitrust authority starts an investigation with probability α 1. Because the firm does not confess, the investigation continues into the second period, in which the antitrust authority makes the firm pay the full fine F with probability p 1 while forcing it to compete for one period, or mistakenly lets the firm go without a fine with probability (1 – p 1). ### C. Collude and reveal (CR) Again, here the firm receives profit Π M by colluding with other firms on the market. If the antitrust authority starts an investigation (and this happens with probability α 1), then the firm self-reports in the first period, providing evidence to the antitrust authority. The investigation does not continue into the second period. The firm is found guilty and pays the reduced fine R. ### D. Deviate and not reveal (DNR) In this case, the firm prefers to take part in a collusive agreement and afterwards to deviate from it. If the other competitors (and counterparts to the agreement) continue to abide by the agreement, it will allow the deviating firm to increase its market share and receive a higher profit Π D > Π M for one period. Next period, the deviation will be observed by the rivals, and collusion will be terminated. Π D can be interpreted the following way: Π D = Π N + Δ e , where Δ e is the expected extra profit that the firm expects to gain from deviating if it manages to be the first deviator. Therefore, if the unconditional deviator's extra profit is Δ, then $\Delta_e = \dfrac{1}{n}\Delta$, where n is the number of participants in the cartel. The antitrust authority starts investigating this firm's behavior with probability α 2. Because the firm does not confess in period 1, the investigation lasts for two periods. In the second period, the firm, having deviated already, receives profit Π N . The antitrust authority concludes the investigation, falsely establishing the fact of collusion with probability p 2, which results in the full fine F. ### E. Deviate and reveal (DR) As in the previous case, the firm enters into a collusive agreement only to deviate from it in the first period (which results in profit Π D). What follows is infinite punishment for deviation with competitive profits Π N .
Intuitively this way of behavior might be explained in terms of unfair competition with the use of LPs as an instrument to outperform rivals. In the first period, the antitrust authority starts an investigation with probability α 2. The firm self-reports and receives the reduced fine R. Because in our model evidence provided by one firm is enough to find an infringement, the investigation does not enter into the second period. Starting from the second period, the firm's profit falls to Π N , but it has the ability to secure for itself a lower fine by using the leniency program because it can use the initial agreement (even though it was not upheld) as proof of collusion. We note here that, as in the previous case (DNR), if all firms choose to deviate, then nobody obtains the deviator's profit Π D and the market outcome is the same as if the firms initially competed. ### F. Cooperate and not reveal (COOPNR) By choosing this strategy, the firm decides to cooperate (without harm to consumers) with other market participants and earns the cooperative profit Π COOP . We assume that under some conditions cooperation—as a result of combining resources (selective systems to arrange interaction, joint planning, systems of information disclosure), the use of specialized mechanisms of governance, etc. —yields profits higher than competitive, but lower than collusive (monopoly) profits, thus Π COOP > Π N . A different question is how the cooperative profit relates to the collusive profit. 9 In theory, any ratio is possible. Note further that collusive profit does not include any parts of cooperative profits because there is no welfare-enhancing agreements leading to any Schumpeterian innovations (product, process, resource, organization). In an ideal case, the cartel profit reaches the level of monopoly profit, and therefore becomes the highest possible profit on the market. Cooperation between firms can lead to an even higher profit because it leads not to an increase in prices but to a decrease of costs (for example, due to process innovation). Another possibility is that an increase in price will rise to reflect the enhanced product quality due to cooperation (and, correspondingly, increased willingness of consumers). At least one obvious example of Π COOP > Π M is the case for radical process innovation, where the price might not be higher than the initial competitive price while the quantity is significantly larger than in the monopoly case. This case even allows the presence of a competitive frame for cooperating firms. Either way, in reality, there is no guarantee that the cooperative profit will be higher or lower than the collusive profit. From the point of view of our model, in the case where Π COOP > Π M choosing between colluding and cooperating can lead to only one result: in the case of a cooperation agreement, not only is the profit higher, but the risk of being fined is simultaneously lower, so the cooperating strategy becomes dominant. The case we will focus on is Π COOP < Π M , and we shall examine it more closely. The antitrust authority opens an investigation with probability α 3. The profit in the first period is Π COOP , and the firm does not collaborate with the authorities, so the investigation takes up one more period. If in the second period, the authority falsely finds an infringement (which happens with probability p 3), then the firm pays the full fine F and receives profit Π N . Otherwise, there is no fine, and the profit is Π COOP . Then, the game restarts. ### G. 
Cooperate and reveal (COOPR) Here, again, the antitrust authority starts the investigation with probability α 3. However, unlike the previous case, the firm makes a false confession, admitting to collusion in exchange for a reduction of fines (even though in reality the agreement did not cause harm to social welfare). The antitrust authority accepts the provided information as proof of collusion and the firm pays the reduced fine. We assume that the confession of a firm automatically leads to the authority finding an infringement. Simultaneously, in the first period, the authority forces the firm to behave competitively (the firm's profit equals Π N ) and breaks up the cooperation. The game restarts in the second period. Is it a valid assumption that, on the one hand, the antitrust authority can distinguish between different types of market behavior (although errors are possible), which is expressed in our model by the different probabilities of opening an investigation and finding an infringement for different strategies, but on the other hand, it cannot tell a cooperation agreement from a cartel agreement, even after “getting its hands on” the agreement itself? This is where what authors have called the “hostility tradition” in antitrust comes into play: antitrust authorities, when dealing with a practice that has attributes of possibly being anticompetitive, tend to interpret it as having an anticompetitive aim while simultaneously ignoring any other interpretation. In this case, type I errors, just like type II errors, can be made by antitrust authorities maximizing social welfare. We model the antitrust authority as having precisely this goal —maximizing social welfare. However, the real-world behavior of antitrust authorities makes us consider the possibility of type I errors as even more plausible —judging, for example, by the experience of antitrust enforcement in Russia (as not just a theoretical but quite a realistic perspective), and also by the possible incentives that define the behavior of the authority's staff. Here we will not be getting too deep into this problem, but consider that, if we take as a starting point not the “public interest” view, but public choice theory, and if we take into account some political factors —namely, the incentive to show as many cases solved with the help of LPs as possible, in a situation where the fight against cartels is positioned as a high priority and the new LP is expected to yield a visible, tangible result —the antitrust authority may find itself in no position to decline leniency applications on the grounds that the agreement that the applicant admitted to being part of is in fact a legal one. On the other hand, the authority may have some incentive to analyze the detected agreement and refrain from punishing innocent firms, but in our model we will assume that the confession of a firm automatically leads to the authority finding an infringement (which stems from the authority's assumed incentive structure). Similarly to the model by Motta and Polo (2003), values of the above-mentioned strategies in parametrical form can be found in Table 1. Values of strategies. ## 3.4. Subgame perfect equilibria To find the subgame perfect equilibria, we compare the values of the strategies listed above. Because from the start we assumed symmetry between the firms, it follows that if one firm finds a certain strategy optimal, so do all other firms. 
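The trade-off behind the COOPR strategy can be sketched numerically. The snippet below is a rough two-period illustration under stated simplifying assumptions (no discounting, no continuation values, and invented profit and fine levels); it is not the recursive value-function comparison summarized in Table 1. Conditional on an investigation being opened against cooperating firms, it compares staying silent with making a false confession under full leniency (R = 0).

```python
# Illustrative sketch only: a two-period payoff comparison for a cooperating firm
# once an investigation has been opened. Profit and fine levels are invented;
# discounting and continuation values are ignored, unlike in the paper's model.

def payoff_not_reveal(pi_coop, pi_n, fine, p3):
    """Stay silent (COOPNR): cooperate in period 1; in period 2 the authority
    wrongly convicts with probability p3 (pay the full fine and compete),
    otherwise cooperation continues."""
    return pi_coop + p3 * (pi_n - fine) + (1 - p3) * pi_coop

def payoff_reveal(pi_coop, pi_n, reduced_fine):
    """Falsely confess (COOPR): pay the reduced fine, compete for one period,
    then the game restarts (approximated here by one period of cooperation)."""
    return (pi_n - reduced_fine) + pi_coop

if __name__ == "__main__":
    pi_n, pi_coop, fine, reduced_fine = 10.0, 14.0, 30.0, 0.0  # R = 0: full leniency
    for p3 in (0.1, 0.3, 0.6):
        silent = payoff_not_reveal(pi_coop, pi_n, fine, p3)
        confess = payoff_reveal(pi_coop, pi_n, reduced_fine)
        print(f"p3 = {p3:.1f}: stay silent = {silent:.1f}, "
              f"false confession = {confess:.1f}, confession pays: {confess > silent}")
```

For a high enough conviction probability p 3 (or a large enough fine F), the false confession dominates in this stylized comparison, which is the intuition behind the disrupted cooperation effect discussed in the results and conclusions.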
Following the discussion on the values of α and p presented in section 3.2, we will try to define the conditions for α and p that influence which strategy becomes dominant. To do this, for the purposes of simplification and obtaining an illustration to our conclusions, we assume fixed ratios between probabilities α i and p i and compare the values of the denoted strategies. We assume that α 0 = 0.2α, α 1= α, α 2 = 0.4α, α 3 = 0.6α, p 0 = 0.2p, p 1= p, p 2 = 0.4p, and p 3 = 0.6p. As mentioned above, these values satisfy the conditions α 0< α 2< α 3< α 1, p 0< p 2< p 3< p 1, and seem feasible in light of the meaning of these parameters. We will also assume that the amount of the reduced fine is zero (R=0), corresponding to a 100% fine discount. The appendix contains all the necessary calculations. We find the values of α and p that cause certain strategies to dominate. For our chosen illustrative example (see Appendix), the equilibria are as follows: 1. CNR $0 2. CR $0.75 3. COOPNR $2568 4. COOPR $33.2 5. N —for all other intervals (as long as all the values of α i and p i fall into the segment [0; 1]). # 4. Results and discussion ## 4.1. Characterization of subgame perfect equilibria The model by Motta and Polo (2003), which we used as our benchmark model, resulted in three types of subgame perfect equilibria: CR, CNR and N. They are illustrated in Fig. 1. One of the main findings of Motta and Polo (2003) was that even when using a very “generous” version of the program —where the applicant can receive full immunity from fines (R = 0) —not all cartels on the market are broken up; there are areas where firms still choose to collude and either reveal or do not reveal (CNR and CR). This happens when the probability of starting an investigation, α, is low. If at the same time the probability of successful prosecution (p) is low, then firms do not have an incentive to confess and we end up in the CNR area, where firms collude and do not reveal information about it. In contrast, if the antitrust authority has sufficient resources and incentive to ensure high probabilities of investigation and prosecution, then cartels are prevented. For our extended model, we find that the number of possible types of subgame perfect equilibria increases to five: 1. firms collude and do not reveal information about the cartel to antitrust authorities (CNR); 2. firms collude and reveal (CR); 3. firms cooperate and do not confess to colluding (COOPNR); 4. firms cooperate and confess to colluding (COOPR); 5. no collusion or cooperation occurs (N). 10 The results are illustrated in Fig. 2. The N, COOPNR, COOPR, CNR, and CR areas denote different types of equilibria that depend on the values of α and p. α COOPNR/DR (p) is a curve above which the firms prefer the strategy DR (resulting in the equilibrium N), and below which the firms prefer COOPNR; thresh-olds α CNR/COOPNR (p), α CR/COOPNR (p), α COOPR/DR and α CR/COOPR have similar interpretation. The line p CNR/CR defines the border between areas of a CNR-type and a CR-type equilibrium; the line p COOPNR/COOPR —the border between COOPNR and COOPR. Proposition 1. Accounting for the possibility of type I errors and cooperation agreements leads to an increase in the number of types of possible subgame perfect equilibria compared to the benchmark model. ## 4.2. Impact of type I errors Before attempting to define the role of leniency programs in these results, we will analyze what effect the additional assumption of type I errors has on market behavior. Proposition 2. 
Excluding the possibility of type I errors in the model leads to only three types of remaining equilibria: CNR, CR and COOPNR. Because the probability of being unfairly fined by the antitrust authority is now zero, the value of the COOPNR strategy changes. The value of this strategy is now given by eq. (1). COOPNR starts to dominate COOPR, DNR, DR and N for the following reasons. First, because the antitrust authority now no longer confuses cooperation and collusion, there is no incentive to make a false confession and not only incur an undeserved fine, even if it is reduced, but also to destroy the existing cooperation for one period. Similarly, DNR starts to dominate DR. Second, because the antitrust authority does not make type I errors, cooperation becomes a better strategy than competition for any given values of the parameters α and p 11 (provided that Π COOP > Π N holds). It follows that if a firm has the ability to take part in a cooperation agreement, it will always be profitable for it to do so. Third, the ratio of the values of the COOPNR and DNR strategies stops being dependent upon α and p and is now defined by the ratio of the corresponding profits. In our example, the ratio of the profits ensures that COOPNR becomes the dominating strategy. By comparing values of strategies and using the same parameters as previously, we derive that an analogue of the model of Motta and Polo (2003) in our example would lead to the results illustrated in Fig. 3. Finally, we illustrate the comparison of the results derived with and without the assumption of type I errors (Fig. 4). The grey areas are those where, in the absence of type I errors, firms used to cooperate (and not make false claims for leniency) in equilibrium; but after taking into consideration type I errors, we find that these are the areas where collusion appears. Not all of this grey area is where firms confess after colluding: if p is low enough, firms collude without confessing. This result corresponds with the results of Ghebrihiwet and Motchenkova (2010): by taking into account type I errors, we see that for certain policy parameters, firms that in fact never caused damage to social welfare change their behavior and start taking actions that do cause damage. Expecting that even competitive behavior can be prosecuted, firms find it best to start “deserving” their punishment: in this way, they at least compensate by receiving collusive profits. An effect that was not studied by Ghebrihiwet and Motchenkova (2010) and that has not yet been the object of systematic analysis in the context of leniency programs is the impact on “conscientious” cooperation. Our model shows that in areas where socially beneficial cooperation was possible in equilibrium in the absence of type I errors, “switching on” such errors leads to the appearance of areas where cooperation either never arises (N) or arises only to be terminated if it draws the attention of the antitrust authority (COOPR). The latter two effects correspond to the findings of Shavell and Polinsky (1989), who argued that an increase in the probability of type I errors can lead to economic agents becoming more inclined towards violating rules, and to the results of Png (1986), who concluded that an increase in the probability of type I errors can lead to an even higher level of compliance.
In their own way, our results reconcile these two seemingly contradictory findings: in our model, these effects are not mutually exclusive, but the prevalence of one or the other depends on the deterrence parameters α and p. This leads us to Proposition 3. The presence of type I errors results in collusion becoming sustainable for a wider set of parameter values and has a detrimental impact on socially beneficial cooperation. ## 4.3. Effect of leniency on the incentives to cooperate To analyze the effect of leniency on incentives to cooperate in the presence of type I and II errors, we will first look at the case in which a confession is not rewarded by a reduction of fines. In this case, CNR, DNR and COOPNR become dominant strategies over CR, DR and COOPR, which is intuitively clear. Additionally, the chosen parameters ensure that DNR dominates N. In this way, three types of equilibria are possible: where all firms collude and do not reveal, where all firms cooperate and do not reveal, and where firms compete. The results are illustrated in Fig. 5. The labeled areas correspond with the equilibria in our main model with leniency. In the dark-grey area, the equilibrium in the absence of leniency is CNR; in the light-grey area, the equilibrium in the absence of leniency is COOPNR. The N equilibrium (white area), where the dominant strategy is DNR, is also possible. The results make it possible to derive some information about the effect of leniency programs when the antitrust authority can make both type I and type II errors. First, we confirm the result obtained by Motta and Polo (2003). With the inclusion of leniency, the area where collusion can (in principle) be maintained becomes larger (transition from the dark-grey area to CNR + CR). However, the participants of the newly formed cartels prefer to collude and confess; in addition, some cartels that previously would not have been voluntarily revealed to the authorities are now discovered thanks to confessions exchanged for leniency (dark-grey part of CR). That is why the presented model provides grounds to expect a more complicated picture as could be presupposed intuitively. It is worth mentioning that in our model the “donor” area for collusion is the locus where in the absence of leniency, cooperation is feasible. One of the most interesting results is that in the appearance of leniency programs in a part of the area where firms used to cooperate they now make false confessions and apply for leniency to insure themselves against possible unfair punishment (locus COOPR). This means that in case an investigation starts, the cooperation will break up. Because we assume the cooperation to be socially beneficial, its destruction due to false self-reporting has a negative impact on social welfare. Another effect is the dramatic decrease in the area where cooperation can be maintained at all. Previously, with our chosen parameters and without leniency, all the firms that did not collude preferred to cooperate, if given the possibility — but after introducing leniency, the area where COOPNR and even COOPR are feasible decreased noticeably, whereas the area where no cooperation arises increased in size. The effects described above are summarized in Proposition 4. Leniency in the presence of type I errors can lead to the destruction of welfare-enhancing cooperation in the market and can also depress incentives to enter into new cooperation agreements. It is difficult to say whether the total effect on welfare will be negative or positive. 
With the introduction of leniency, the less harmful CR strategy partially replaces CNR, but the overall collusive area expands by reducing the potential for cooperation. In addition, incentives for choosing to compete grow, which, on its own, may be beneficial for welfare. Still, the possibilities of welfare-reducing effects should be enough to make regulators consider the importance of raising the standards of evidence, including access to relevant information and adequate interpretation by means of economic analysis while looking at horizontal agreements. # 5. Discussion ## 5.1. Leniency and corruption The topic of leniency has strong ties with that of corruption that are obviously underdiscussed. The link is two-fold. First, antitrust violations — primarily cartels and bid-rigging — are known to have strong correlations with corrupt practices. In public procurement, for example, collusion is often facilitated through government agents, and antitrust investigations often lead to uncovering cases of bribery and other types of corruption. For this reason, effective leniency programs contribute to the fight against corruption by helping in the acquisition of information about potential violations at a lower cost for the regulator. However, some complications can arise from the fact that when a firm applies for leniency, it can factor into its decision the risk of becoming subject to an anticorruption investigation. If corruption indeed took place, then in terms of modeling it could mean that the reduced fine would be greater than zero, and the effectiveness of leniency policies could be critically reduced. A potential solution would be to ensure that whoever is exempt from liability for an antitrust violation should also be guaranteed protection from sanctions for corruption. The legal mechanism for such a construct requires further discussion, but one point is that the whistle-blower should also be required to collaborate with authorities in investigating the corruption case to receive additional leniency. The discussion of this link between corruption and leniency programs is outside the scope of our paper. Another aspect linking leniency for cartel participants and the fight against corruption is the fact that leniency programs are also widely used to uncover other types of violations, namely corruption schemes. The effectiveness of such programs can be unclear (for a review, see Berlin and Spagnolo, 2015), but an aspect that has not been previously studied is the effect of leniency on corruption, which in fact facilitates welfare-maximizing transactions. The fact that corruption can, under certain circumstances, promote efficiency has been widely debated, with proponents appealing to arguments ranging from the familiar “greasing the wheels” metaphor to more complicated ones, such as in Huntington (1968), and opponents drawing attention to disastrous long-term effects. A relevant concept would be one put forward by Basu (2011) — the category of “harassment bribes”, or bribes that are given by actors to receive benefits to which they are already legally entitled. Assuming that the entitlement is derived from a social welfare-maximizing strategy, corruption becomes a less costly way to attain an efficient outcome in a rigid system, with no tendency towards positive change in the foreseeable future. 
If, by this logic, some forms of corruption can indeed promote efficiency, then treating all acts of corruption as per se illegal creates the risk of type I errors, interpreted not as wrongfully prohibiting a practice that is in fact legal but wrongfully prohibiting an efficiency-enhancing practice. In that case, the introduction of leniency programs for acts of corruption can be modeled in a way similar to that which we use above, where collusion can be replaced, for example, by bribery with an inefficient outcome, and cooperation would be an act of efficiency-promoting corruption. We can easily envision a system of undiscriminating corruption, where bribes are part of the universally accepted rules of the game and even the most efficient companies participate in corrupt schemes, while the enforcement of anticorruption laws is selective (whether due to a lack of resources or due to political concerns). The “deviating” strategy also has a meaning because often corrupt relationships are shown to be susceptible to risks of opportunistic behavior on both sides (and consequently to the “hold up” problem —see Buccirossi and Spagnolo, 2006). Differing probabilities of investigation and conviction would also still be rationalized in a system with only limited resources that can be devoted to fight corruption: while it is impossible to monitor all transactions for potential corruption all the time, it makes sense that transactions that lead to socially unbeneficial outcomes (e.g., receiving a bribe to appoint an inefficient company to be the supplier of certain goods for the government, resulting in disruptions of contracts) will draw more attention from the authorities than transactions that eventually lead to beneficial outcomes (e.g., receiving a bribe for appointing that same contract to the most efficient company on the market, following which the contract is fulfilled without failure and for a low price). With these assumptions, the mechanics of the model will remain the same, while the results can gain an additional interpretation. Assuming corruption can be welfare maximizing (even if only in the short term), introducing a leniency program could not only induce firms to employ bribery to gain benefits to which they have no right in the first place (moving from welfare-beneficial to welfare-unbeneficial corruption as an analogue of the “deserved punishment” effect) but also destabilize existing efficient, but illegally established schemes (as in the “disrupted cooperation effect”) and preclude firms from engaging in welfare-maximizing activities if the only way to access them is by corruption (as in the “prevented cooperation” effect). ## 5.2. Leniency and compliance As the idea of encouraging companies to implement antitrust compliance policies, possibly by way of providing a reduction of fines, becomes more and more popular in Russia, it is interesting to look at the possible consequences of such a measure, given what we know about the effects of leniency programs and how these two instruments can enforce or hinder each other. On the first point, it is worth noting that our results highlight the risks of importing institutions without sufficiently taking into account the nuances of the local institutional environment, including working mechanisms of rules enforcement. 
Antitrust norms rely heavily upon economic analysis and expert judgment, and countries that have relatively less experience in applying the specific methods of economic analysis used in antitrust, as well as countries with insufficient resources dedicated to antitrust enforcement, run a higher risk of type I errors. Knowing this, firms tend to use all the instruments available to them to minimize the risk of wrongful conviction — and sometimes it becomes economically feasible to use as the means of insurance some instruments that were initially not developed for these purposes. Consequently, while antitrust fines in Russia remain large and an asymmetry of information persists between firms and the antitrust authority as to what constitutes a violation (taking into account the general assumption that norms do not necessarily promote the most efficient of all possible outcomes), it is quite possible to expect that compliance programs might be used by firms for which they were not initially meant (i.e., firms with low risks of antitrust violations). All in all, this might result in an additional cost for business (the cost of devising and implementing an unnecessary compliance program) and, eventually, a devaluation of the whole concept of compliance programs —that is, unless specific measures are taken to curb these possible effects. The second point might be better understood if we relax the assumption of a firm functioning as a “black box” with the single purpose of maximizing profit and revert to an approach more in line with methodological individualism. Assuming that an asymmetry of information is present between the owners of a firm and its managers, as well as between top-managers and managers and so on, and a discrepancy exists between the goals of the principals and the agents on different levels, adopting an antitrust compliance program that identifies not only external antitrust risks (such as the types of violations that are most likely to occur based on the market structure and market position of the firm) but also internal ones and develops the necessary corporate procedures to minimize those risks seems both individually and socially beneficial. Firms benefit from an individual reduction of the probability of conviction, society benefits from a reduced probability of violations, and corporate procedures provide the necessary sources of evidence to keep the costs of possible investigations down, including the costs of identifying the individuals responsible, which can be critical for criminal sanctions. In practice, the effects of such policies can be ambiguous depending on, among other things, the design of the liability rules (Buccirossi and Spagnolo, 2008; Shastitko, 2016). If a compliance policy is in place, it may complicate the matter of leniency. First, in countries where both corporate and individual leniency programs exist, if an internal investigation follows a certain procedure and takes time, then a company faces an additional risk of individual whistle-blowers applying for individual leniency throughout the time of such an investigation. If an individual self-reports and the antitrust authority finds out that the company is still investigating his behavior, which is why it has not applied for corporate leniency, the question arises of whether the company should be punished as severely as if an internal investigation was not underway or if their compliance policy was inefficient in preventing and uncovering the violation. 
The intuition behind possible type I errors in the case of compliance policies — the company mistakenly self-diagnosing a violation — and their potential effects on owners, managers at different levels and employees, as well as the actions of the company as a whole, remains an issue for further discussion. We hope that these questions will be expanded upon in future research.

# 6. Conclusions

We have shown that the inclusion of type I errors and the extension of the study of collusion to cooperation agreements that benefit social welfare allow us to infer the existence of additional externalities for firms resulting from the use of LPs. There are three main effects (the first two correspond to the findings of Ghebrihiwet and Motchenkova (2010) but are extensions with the addition of possible cooperation agreements):

1. The deserved punishment effect — resulting from the incentive of a firm to switch from competition or socially beneficial cooperation to collusion in order to guarantee that the punishment it could possibly receive will be deserved. In Fig. 5, this is the intersection of the light-grey area and the CR area.
2. The disrupted cooperation effect — resulting from cooperation agreements becoming destabilized due to the incentive for firms to make false confessions to avoid undeserved punishment. This effect is illustrated by the COOPR area, where in the absence of a leniency program, cooperation is upheld.
3. The prevented cooperation effect — resulting from the fact that any type of agreement with a competitor, even if such an agreement is ultimately beneficial to social welfare, can draw the attention of the antitrust authority and increase the probability of being punished. Consequently, firms start to prefer not to engage in any sort of agreements with competitors (the light-grey area N in Fig. 5) — a factor that impedes technological progress and innovation and hinders the inflow of investment.

The described effects explain how a tradition of hostility in antitrust, by raising the chance of any form of cooperation qualifying as anticompetitive and therefore illegal, not only results in the destruction of welfare-enhancing practices but also reinforces the stability of cartels. Our results have certain implications in connection with anticorruption law and antitrust compliance policies. It can be shown that, by a logic similar to that which we apply to collusion, leniency programs for corruption with type I errors can impair some socially beneficial forms of activity. As for compliance policies, they too may have ambiguous effects and be applied erroneously, which merits further consideration from antitrust authorities on how to design corresponding liability rules and curb undesirable incentives.

# Acknowledgements

The authors are grateful to Evgenia Motchenkova (VU Amsterdam), Svetlana Avdasheva (National Research University Higher School of Economics), as well as the participants of the AEDE 2014 (Málaga) and the CRESSE 2014 (Corfu) conferences for their valuable insights and comments. Any remaining mistakes are our own.

# Appendix

To find the subgame perfect equilibria, we need to find the conditions for α and p that make each of the strategies dominant. To simplify our calculations, we will adopt certain fixed ratios for our probabilities α_i and p_i (i = 0, 1, 2, 3) that satisfy the conditions α_0 < α_2 < α_3 < α_1 and p_0 < p_2 < p_3 < p_1, where α_i ∈ [0,1] and p_i ∈ [0,1]. Let α_1 = α and p_1 = p, while α ∈ [0,1] and p ∈ [0,1].
We will now assume that α_0 = 0.2α, α_2 = 0.4α, α_3 = 0.6α, p_0 = 0.2p, p_2 = 0.4p, and p_3 = 0.6p. We proceed to find the conditions for α and p that ensure each strategy's dominance. To do that, we compare the values of all the strategies, substituting their expressions that we established in section 2 of the paper and simplifying the inequalities. We derive the following results.

1. Conditions for “Neither Collude nor Cooperate” being dominant: (A1)
2. Conditions for “Collude and Not Reveal” being dominant: (A2)
3. Conditions for “Collude and Reveal” being dominant: (A3)
4. Conditions for “Deviate and Not Reveal” being dominant: (A4)
5. Conditions for “Deviate and Reveal” being dominant: (A5)
6. Conditions for “Cooperate and Not Reveal” being dominant: (A6)
7. Conditions for “Cooperate and Reveal” being dominant: (A7)

The probabilities must still satisfy α ∈ [0,1], p ∈ [0,1]. Depending on the specific values of profits, fines and the discounting factor, different inequalities in the system will become binding. We will analyze one of the possible combinations of parameters to illustrate some of the effects. For simplicity, we will assume that П_N = 0, П_M = 1.5, П_D = 3, П_COOP = 1, F = 3, R = 0, and δ = 0.8, which are roughly consistent with the values chosen by our predecessors (Ghebrihiwet and Motchenkova, 2010). It is trivial to show that with this set of parameters “Neither Collude nor Cooperate” will always be strictly dominated by all other strategies, and “Deviate and Reveal” will always dominate “Deviate and Not Reveal”. Consequently, we are left with only the following strategies to analyze: CNR, CR, DR, COOPNR, COOPR. We now find the conditions necessary for each of these strategies to be an equilibrium (Table A1: Conditions for equilibria). With the above parameters, the subgame perfect equilibria of the model are as follows:

1. CNR $0
2. CR $0.75
3. COOPNR $2568
4. COOPR $33.2
5. N — for all other intervals (as long as all the values of α_i and p_i fall into the segment [0; 1]).

1 For some recent examples from the Russian case, see Avdasheva and Shastitko (2011), Pavlova (2012), and Yusupova (2013).
2 Decision of the FAS Russia on case No. 1 11/120-09, http://solutions.fas.gov.ru/documents/169-883e8928-b5c6-4b4b-8130-9fc856f10b5f
3 Decision of the FAS Russia on case No. 1 11/67-12, http://solutions.fas.gov.ru/ca/upravlenie-kontrolyafinansovyh-rynkov/1-11-67-12
4 For more detail, see, for example, Shastitko et al. (2014).
5 Including such works as Posner (1998), Joskow (2002), Manne and Wright (2009), Rill and Dillickrath (2009), and Immordino and Polo (2013).
6 Here we interpret the fine in an economic sense, assuming that any form of punishment for an antitrust violation can be monetized and therefore expressed in terms of a monetary fine. Alternatively, the potential punishment (F) can be interpreted as a composite that can include an administrative or criminal fine (F_f), a prison sentence (F_p) and civil damage claims (F_d) (this corresponds to the Russian system of sanctions for antitrust violations, and the following discussion applies to the situation in Russia): F = F_f + p_p F_p + p_d F_d. Here, we denote the probabilities of a prison sentence and of damage claims as p_p and p_d. Due to some institutional factors, such probabilities may be much smaller than 1: for example, if fines and prison sentences are administered by different authorities, a violator receiving a fine does not receive a guarantee that another authority will find enough proof of him deserving a prison sentence.
Similarly, even though civil damage claims can be theoretically possible, given the fact that cartel damages are frequently distributed among many firms in relatively small amounts, and given the free-rider problem, the probability of civil damage claims may also be de facto close to zero. In this way, the fact that the model explicitly deals with fines and not with other types of potential sanctions may also imply that the probabilities of these sanctions are very small. Our model is based on games without memory, so once the game restarts after one or two periods, it is of no consequence whether a firm has been previously convicted. Therefore, another assumption we use here is that recidivism is not a reason for increasing the severity of the punishment. This might not always be the case with existing fine systems, where recidivism is widely considered to be an aggravating circumstance. A way of making the model more realistic in this aspect is to switch to games with memory, but this lies outside the scope of our current analysis. Consequently, in our model, we will assume a forgiving antitrust authority that does not increase punishment if a firm makes repeated violations. 7 We assume that if specialized tests used by the antitrust authority, such as those described in Harrington (2007), exist, they are not known to the firms and therefore are not considered by them when choosing strategies. 8 The notion that competing firms can be falsely accused of having violated antitrust law is not a new one: for example, Rubin (1995) found that such type I errors appeared in 7 out of 23 antitrust cases analyzed. Recently, the Russian FAS has been under attack for its multitude of cases, many of which, researchers feel, might have been handled with excess strictness (see, for example, Avdasheva et al., 2015). 9 For the purpose of this article, we consider cooperating and colluding to be alternative strategies for a firm. We purposefully do not consider the option when firms “cooperate” and “collude” at the same time, that is when their agreement leads both to a decrease of costs and increase of price. This exclusion stems from one of the aims of this paper, which is to show the effects of type I errors in the case of leniency. When firms both raise prices and cut costs, the overall effect can be ambiguous and we would need additional assumptions to determine within our model whether an agreement is socially beneficial and whether the antitrust authority makes errors in classifying it. Nevertheless, incorporating such agreements in our model constitutes a possible line for further research. 10 In the N area, where no collusion or cooperation occurs, the dominant strategy is DR. It becomes more profitable for the firm to reveal after it has already deviated from the agreement, because in this way, it not only receives a deviator's profit but also exempts itself from paying a fine. 11 In our model we assume that cooperation is an available strategy to all firms, which is not always the case in reality. # References • Aubert, C., Rey, P., & Kovacic, W. (2006). The impact of leniency and whistle-blowing programs on cartels. International Journal of Industrial Organization, 24(6), 1241-1266. • Avdasheva, S., & Shastitko, A. (2011). Introduction of leniency programs for cartel participants: The Russian case. CPI Antitrust Chronicle: [online serial], 8(2). • Avdasheva, S., Tsytsulina, D., Golovanova, S., & Sidorova, Y. (2015). 
Discovering the miracle of large numbers of antitrust investigations in Russia: The role of competition authority incentives. HSE Working papers. • Basu, K. (2011). Why, for a class of bribes, the act of giving a bribe should be treated as legal (Technical Report 172011). Ministry of Finance, Government of India. • Becker, G. (1968). Crime and punishment: An economic approach. Journal of Political Economy, 76(2), 169-217. • Becker, G. (1974). Crime and punishment: An economic approach. In G. Becker & W.M. Landes, Essays in the economics of crime and punishment, (pp. 1-54). New York: National Bureau of Economic Research. • Berlin, M.P., & Spagnolo, G. (2015). Leniency, asymmetric punishment and corruption: Evidence from China. Unpublished manuscript. Available at SSRN: http://ssrn.com/abstract=2718181. • Buccirossi, P., & Spagnolo, G. (2006). Leniency policies and illegal transactions. Journal of Public Economics, 90(6), 1281-1297. • Buccirossi, P., & Spagnolo, G. (2008). Corporate governance and collusive behavior. In W.D. Collins, Issues in competition law and policy. Chicago: American Bar Association. • Bos, I., & Wandschneider, F. (2011). Cartel ringleaders and the corporate leniency program. CCP Working Paper, 11-13. • Chen, Zh., & Rey, P. (2012). On the design of leniency programs. Toulouse: Institut d’Économie Industrielle (IDEI). • Coase, R. (1972). Industrial organization: A proposal for research. In V.R. Fuchs, Policy issues and research opportunities in industrial organization, (pp. 59-73). New York: National Bureau of Economic Research. • Connor, J., & Bolotova, Y. (2006). Cartel overcharges: Survey and meta-analysis. International Journal of Industrial Organization, 24(6), 1109-1137. • Dijkstra, P., & Schoonbeek, L. (2010). Amnesty plus and multimarket collusion. Unpublished manuscript. • Garoupa, N., & Rizolli, M. (2012). Wrongful conviction do lower deterrence. Journal of Institutional and Theoretical Economics, 168(2), 224-231. • Ghebrihiwet, N., & Motchenkova, E. (2010). Leniency programs in the presence of judicial errors.. Tilburg University, Tilburg Law and Economic Center (Discussion Paper 2010-030). • Harrington, J. (2006). How do cartels operate?. The Johns Hopkins University, Department of Economics. • Harrington, J. (2007). Behavioral screening and the detection of cartels. In C.D. Ehlermann & I. Atanasiu, European competition law annual 2006: Enforcement of prohibition of cartels, (pp. 51-68). Oxford: Hart Publishing. • Harrington, J. (2008). Optimal corporate leniency programs. Journal of Industrial Economics, 56(2), 215-246. • Harrington, J. (2013). Corporate leniency programs when firms have private information: The push of prosecution and the pull of pre-emption. Journal of Industrial Economics, 61(1), 1-27. • Harrington, J., & Chang, M.-H. (2015). When can we expect a corporate leniency program to result in fewer cartels?. Journal of Law and Economics, 58(2), 417-449. • Herre, J., & Rasch, A. (2009). The deterrence effect of excluding ringleaders from leniency programs. Unpublished manuscript, University of Cologne. • Houba, H., Motchenkova, E., & Wen, Q. (2009). The effects of leniency on maximal cartel pricing. Tinbergen Institute Discussion Papers, 09-081/ No. 1. • Huntington, S.P. (1968). Political order in changing societies. New Haven and London: Yale University Press. • Immordino, G., & Polo, M. (2013). Antitrust, legal standards and investment. Milano, Italy: Center for Research on Energy and Environmental Economics and Policy, Bocconi University. 
• Joskow, P. (2002). Transaction cost economics, antitrust rules, and remedies. Journal of Law, Economics and Organization, 18(1), 95-116. • Kaplow, L. (2011). Optimal proof burdens, deterrence, and the chilling of desirable behavior. American Economic Review, 101(3), 277-280. • Lando, H. (2006). Does wrongful conviction lower deterrence?. Journal of Legal Studies, 35(2), 327-337. • Lefouili, Y., & Roux, C. (2012). Leniency programs for multimarket firms: The effect of amnesty plus on cartel formation. International Journal of Industrial Organization, 30(6), 624-640. • Manne, G., & Wright, J. (2009). Innovations and the limits of Antitrust. George Mason Law & Economics Research Paper, N09-N54. • Marshall, R., Marx, L.M., & Mezzetti, C. (2013). Antitrust leniency with multi-product colluders. Unpublished manuscript. • Ménard, C. (2004). The economics of hybrid organizations. Journal of Institutional and Theoretical Economics, 160(3), 345-376. • Motchenkova, E., & Leliefeld, D. (2010). Adverse effects of corporate leniency programs in view of industry asymmetry. Journal of Applied Economic Sciences, 5(2(12)/Sum), 114-128. • Motchenkova, E., & van der Laan, R. (2011). Strictness of leniency programs and asymmetric punishment effect. International Review of Economics, 58(4), 401-431. • Motta, M., & Polo, M. (2003). Leniency programs and cartel prosecution. International Journal of Industrial Organization, 21(3), 347-379. • Pavlova, N. (2012). Modes of improving the leniency program as a method of antitrust regulation. Vestnik MGU, 1, 66-73. • Png, I.P.L. (1986). Optimal subsidies and damages in the presence of judicial error. International Review of Law and Economics, 6(1), 101-105. • Posner, R. (1998). Economic analysis of law. New York: Aspen Law & Business. • Rill, J., & Dillickrath, T. (2009). Type I error and uncertainty: Holding the antitrust enforcement pendulum steady. Antitrust Chronicle: [online serial] No. 11. • Rizolli, M., & Stanca, L. (2012). Judicial errors and crime deterrence: Theory and experimental evidence. Journal of Law and Economics, 55(2), 311-338. • Rizolli, M., & Saraceno, M. (2011). Better that ten guilty persons escape: Punishment costs explain the standard of evidence. Public Choice, 155(3), 395-411. • Roux, C., & von Ungern-Sternberg, T. (2007). Leniency programs in a multimarket setting: Amnesty plus and penalty plus. CESifo Working Paper Series No. 1995. • Rubin, P. (1995). What do economists think about antitrust? A random walk down Pennsylvania avenue. In F.S. McChesney & W.F. Shughart, The causes and consequences of antitrust, (pp. 33-62). Chicago: University of Chicago Press. • Shastitko, A. (2011). The rule of law economics: The cost of guarantors’ services and enforcement errors. Social Sciences, 42(4), 3-19. • Shastitko, A. (2016). Does antitrust need the rule “minus one-eighth fines for compliance”?. Voprosy Gosudarstvennogo i Munitsipalnogo Upravleniya, 1, 38-59. • Shastitko, A., Golovanova, S., & Avdasheva, S. (2014). Investigation of collusion in procurement of one Russian large buyer. World Competition: Law and Economics Review, 37(2), 235-247. • Shavell, S., & Polinski, A.M. (1989). Legal error, litigation, and the incentive to obey the law. Journal of Law, Economics and Organization, 5(1), 99-108. • Spagnolo, G. (2004). Divide et impera: Optimal leniency programs. CEPR Discussion Papers No. 4840. • Williamson, O.E. (1985). The economic institutions of capitalism. Firms, markets, relational contracting. New York: Free Press. • Williamson, O.E. 
(1996). Transaction cost economics and the Carnegie connection. Journal of Economic Behavior and Organization, 31(2), 149-155. • Yusupova, G.F. (2013). Leniency program and cartel deterrence in Russia: Effects assessment. Moscow: National Research University Higher School of Economics (Working paper No. WP BRP 06/PA/2012).
2021-09-22 01:19:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 9, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5686838030815125, "perplexity": 2355.826521620622}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057303.94/warc/CC-MAIN-20210922011746-20210922041746-00363.warc.gz"}
https://eprint.iacr.org/2007/333
### Towards Key-Dependent Message Security in the Standard Model

Dennis Hofheinz and Dominique Unruh

##### Abstract

Standard security notions for encryption schemes do not guarantee any security if the encrypted messages depend on the secret key. Yet it is exactly the stronger notion of security in the presence of key-dependent messages (KDM security) that is required in a number of applications: most prominently, KDM security plays an important role in analyzing cryptographic multi-party protocols in a formal calculus. But although often assumed, the mere existence of KDM secure schemes is an open problem. The only previously known construction was proven secure in the random oracle model. We present symmetric encryption schemes that are KDM secure in the standard model (i.e., without random oracles). The price we pay is that we achieve only a relaxed (but still useful) notion of key-dependent message security. Our work answers (at least partially) an open problem posed by Black, Rogaway, and Shrimpton. More concretely, our contributions are as follows:

- We present a (stateless) symmetric encryption scheme that is information-theoretically secure in face of a bounded number and length of encryptions for which the messages depend in an arbitrary way on the secret key.
- We present a stateful symmetric encryption scheme that is computationally secure in face of an arbitrary number of encryptions for which the messages depend only on the respective current secret state/key of the scheme. The underlying computational assumption is minimal: we assume the existence of one-way functions.
- We give evidence that the only previously known KDM secure encryption scheme cannot be proven secure in the standard model (i.e., without random oracles).

Note: This revision contains changes in the explanatory text and corrections of minor mistakes. The results of this paper have not changed. We thank the anonymous referees for their input.

Publication info: Published elsewhere. To be published at Eurocrypt 2008
Keywords: Key-dependent message security, security proofs, symmetric encryption schemes
Contact author(s): unruh @ cs uni-sb de
History: 2008-01-13: revised
Short URL: https://ia.cr/2007/333
License: CC BY

BibTeX
@misc{cryptoeprint:2007/333,
  author = {Dennis Hofheinz and Dominique Unruh},
  title = {Towards Key-Dependent Message Security in the Standard Model},
  howpublished = {Cryptology ePrint Archive, Paper 2007/333},
  year = {2007},
  note = {\url{https://eprint.iacr.org/2007/333}},
  url = {https://eprint.iacr.org/2007/333}
}
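To make the problem the abstract describes concrete, here is a toy illustration (not from the paper, and unrelated to its constructions) of how an otherwise perfectly secret scheme fails once messages may depend on the key: with a one-time-pad-style cipher, encrypting the key under itself yields an all-zero ciphertext that any adversary recognizes.

```python
import os

def keygen(n: int = 16) -> bytes:
    return os.urandom(n)

def enc(key: bytes, msg: bytes) -> bytes:
    # One-time-pad-style encryption: c = m XOR k. Perfectly secret for a single
    # message that does NOT depend on the key.
    return bytes(m ^ k for m, k in zip(msg, key))

key = keygen()

# An ordinary, key-independent message: the ciphertext reveals nothing.
c1 = enc(key, b"sixteen byte msg")

# A key-dependent message with f = identity: Enc_k(k) = k XOR k = 0...0,
# so an observer immediately learns that the plaintext was the key itself.
c2 = enc(key, key)
print(c2 == bytes(len(key)))  # True: the all-zero ciphertext gives it away
```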
2022-08-19 16:47:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28282880783081055, "perplexity": 2169.5877457670663}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573744.90/warc/CC-MAIN-20220819161440-20220819191440-00389.warc.gz"}
https://www.computer.org/csdl/trans/tc/2000/03/t0230-abs.html
**Abstract**—This paper presents a class of count-and-threshold mechanisms, collectively named α-count, which are able to discriminate between transient faults and intermittent faults in computing systems. For many years, commercial systems have been using transient fault discrimination via threshold-based techniques. We aim to contribute to the utility of count-and-threshold schemes, by exploring their effects on the system. We adopt a mathematically defined structure, which is simple enough to analyze by standard tools. α-count is equipped with internal parameters that can be tuned to suit environmental variables (such as transient fault rate, intermittent fault occurrence patterns). We carried out an extensive behavior analysis for two versions of the count-and-threshold scheme, assuming, first, exponentially distributed fault occurrences and, then, more realistic fault patterns.
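The abstract does not spell out the mechanism itself, so the following is only a rough sketch of how a count-and-threshold filter of this general kind is usually formulated (the decay factor and threshold values below are illustrative assumptions, not parameters from the paper): the score is incremented on each observed fault, decays geometrically on fault-free observations, and an alarm is raised once it crosses a threshold, so that isolated transients are forgotten while clustered (intermittent) faults accumulate.

```python
class AlphaCount:
    """Heuristic count-and-threshold filter: transient faults let the score
    decay away, while repeated (intermittent) faults push it over a threshold."""

    def __init__(self, decay: float = 0.9, threshold: float = 3.0):
        assert 0.0 <= decay < 1.0
        self.decay = decay          # how fast isolated faults are forgotten
        self.threshold = threshold  # score at which an alarm is raised
        self.score = 0.0

    def update(self, faulty: bool) -> bool:
        """Feed one observation; return True once the unit looks intermittently faulty."""
        self.score = self.score + 1.0 if faulty else self.score * self.decay
        return self.score >= self.threshold

# Isolated faults do not trip the filter; a burst of faults close together does.
detector = AlphaCount()
observations = [True, False, False, False, True, True, True, True]
flags = [detector.update(x) for x in observations]
print(flags)  # True appears only once the faults start clustering
```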
2017-11-23 09:49:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.386109322309494, "perplexity": 2566.27390887333}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806768.19/warc/CC-MAIN-20171123085338-20171123105338-00085.warc.gz"}
http://peakgroup.net/how-to/cannot-normalize-a-zero-norm-vector.php
# Cannot Normalize A Zero Norm Vector

Mathematically speaking, the zero vector cannot be normalized: it does not really have a direction, and its length cannot be changed by multiplying it by some factor (it will always stay 0). For v = 0 we have ||0|| = sqrt(0^2 + 0^2 + ... + 0^2) = 0, so the division by the norm is simply not defined. So, your options are:

- Return the zero vector
- Return NaN
- Return a bit indicating if the vector was successfully normalized, in addition to the result if successful
- Throw an exception

The same question came up for Eigen: should stable variants of these methods be added, which use stableNorm() instead of norm(), and should there be variants for every combination of using stableNorm instead of norm and of having vectors with stableNorm (or norm) == 0 return Zero() or UnitX()? The agreement (Christoph Hertzberg, 2015-03-28) was that the default normalize[d]() methods shall keep the current fast implementation; replacing (*this) /= std::sqrt(sqnorm) by (*this) /= internal::pfirst(_mm_sqrt_ss(_mm_set_ss(sqnorm))) gives a tie in performance.

Assorted notes on norms from the same discussion: the Euclidean norm assigns to each vector the length of its arrow, and can also be written as ∥x∥ := √(x ⋅ x). The 1-norm is simply the sum of the absolute values of the columns, and for any p-norm the unit circle is a superellipse (with congruent axes). What most mathematicians and engineers call the ℓ0-“norm” is the total number of non-zero elements in a vector; because of many issues from both a computational and a mathematical point of view, ℓ0-optimisation problems are often relaxed to ℓ1- and ℓ2-optimisation instead. If p(v) = 0 then v is the zero vector (separates points); for a quasi-seminorm there is a b ≥ 1 such that p(u + v) ≤ b(p(u) + p(v)). The topology defined by a norm or a seminorm can be understood either in terms of sequences or open sets; any locally convex topological vector space has a local basis consisting of absolutely convex sets, and to each such set A corresponds a seminorm pA, called the gauge of A, defined as pA(x) := inf{α : α > 0, x ∈ αA}. In the field of real or complex numbers, the distance of the discrete metric from zero is not homogeneous in the non-zero point. Equivalent norms define the same notions of continuity and convergence and for many purposes do not need to be distinguished.
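A minimal sketch of the "return a success flag" option above, using NumPy (the tolerance eps is an illustrative choice, not something prescribed by the discussion):

```python
import numpy as np

def normalize(v: np.ndarray, eps: float = 1e-12):
    """Return (unit_vector, ok). If the norm is (numerically) zero, the input
    cannot be normalized, so return the zero vector and ok=False instead of
    dividing by zero."""
    n = np.linalg.norm(v)
    if n < eps:
        return np.zeros_like(v), False
    return v / n, True

u, ok = normalize(np.array([3.0, 4.0]))
print(u, ok)   # [0.6 0.8] True
z, ok = normalize(np.array([0.0, 0.0]))
print(z, ok)   # [0. 0.] False
```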
2018-05-21 12:37:18
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8227995038032532, "perplexity": 2656.0996096090057}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864186.38/warc/CC-MAIN-20180521122245-20180521142245-00105.warc.gz"}
http://physics.aps.org/story/v16/st19
Focus: A Fleeting Detection of Gravitational Waves Published December 22, 2005  |  Phys. Rev. Focus 16, 19 (2005)  |  DOI: 10.1103/PhysRevFocus.16.19 Anisotropy and Polarization in the Gravitational-Radiation Experiments J. Weber Published July 20, 1970 Evidence for Discovery of Gravitational Radiation J. Weber Published June 16, 1969 In honor of the World Year of Physics, which commemorates Einstein’s “miraculous year” in 1905, we’re presenting papers from the Physical Review archive related to Einstein’s accomplishments. In 1918 Einstein used his new general theory of relativity to show that ripples in spacetime could exist and would move at the speed of light. Two papers in PRL, in 1969 and 1970, reported the first detections of such gravitational waves, coming apparently from the center of our galaxy. The discovery was later discredited, and the researcher behind it became a controversial figure. But his imagination and determination inspired other physicists to search for gravitational waves, a quest that continues today with efforts such as the Laser Interferometer Gravitational Wave Observatory (LIGO) project. A gravitational wave, essentially a traveling distortion of the geometry of space, will jiggle the shape of any physical body it encounters. To detect such disturbances, Joseph Weber of the University of Maryland in College Park fashioned solid aluminum cylinders, about 2 meters long and 1 meter in diameter, and suspended them on steel wires. A passing gravitational wave would set one of these cylinders vibrating at its resonant frequency–about 1660 hertz–and piezoelectric crystals firmly attached around the cylinder’s waist would convert that ringing into an electrical signal. Weber took great pains to isolate the cylinders from vibration and from local seismic and electromagnetic disturbances, and claimed that the only significant source of background noise came from random thermal motion of the aluminum atoms. This thermal motion caused a cylinder’s length to vary erratically by about ${10}^{-16}$ meters, less than a proton’s diameter, but the expected gravitational wave signal was not much bigger. As evidence of a passing gravity wave, Weber looked for wiggles in the data that exceeded some “threshold” that characterized the background noise. But he didn’t define this threshold consistently or precisely. Weber’s evidence for detection was based on observing these above-background signals in more than one bar within the same half-second period. After seeing some coincident events between two Maryland bars [1] Weber moved one of his cylinders to Argonne National Laboratory, near Chicago, about 1,000 kilometers away. In 1969 he reported in PRL some two dozen coincident detections at the two locations in an 81-day period. He calculated that some of the signals were so large that coincidences by chance should happen only once in hundreds or thousands of years. This was “good evidence” for gravitational waves, he argued. The following year he claimed to have detected 311 coincident signals in a 7-month period, with a directional concentration, moreover, pointing toward the center of the Milky Way. The second announcement in particular created a stir. Tony Tyson, now at the University of California, Davis, joined with colleagues to build a “Weber bar,” as did a number of other groups around the world, but no one besides Weber ever saw anything but random noise. 
Weber was an electrical engineer turned physicist, who knew little of data analysis, Tyson says, “and that turned out to be his downfall.” Weber’s criteria for evaluating signal coincidences, it slowly emerged, were ill-defined and partly subjective. By the late 1970s, everyone but Weber agreed that his claimed detections were spurious. What’s more, the strength and frequency of Weber’s signals, if real, would have required the sky to be teeming with nearby astrophysical events on the scale of supernovae spewing out gravity waves. But the invalidation of Weber’s claims only pushed other researchers to try harder, says Tyson. Today, the twin LIGO observatories in Louisiana and Washington state use optical techniques on a scale of kilometers to search for the telltale distortions of spacetime. Weber deserves credit for drawing others into this field of physics, says Tyson. “It was the difficulty that attracted us.” –David Lindley David Lindley is a freelance science writer in Alexandria, Virginia. References 1. J. Weber, “Gravitational-Wave-Detector Events,” Phys. Rev. Lett. 20, 1307 (1968).
2014-12-20 14:26:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4536705017089844, "perplexity": 2530.6856875501035}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802769894.131/warc/CC-MAIN-20141217075249-00164-ip-10-231-17-201.ec2.internal.warc.gz"}
http://connection.ebscohost.com/c/abstracts/21502282/e-column
TITLE: E-Column
AUTHOR(S):
PUB. DATE: June 2006
SOURCE: Chemical Business;Jun2006, Vol. 20 Issue 6, p10
SOURCE TYPE:
DOC. TYPE: Abstract
ABSTRACT: The article presents abstracts of research on chemical processing and engineering. They include "A new family of oxide ion conductors based on tricalcium oxy-silicate," "Desilication: on the controlled generation of mesoporosity in MFI zeolites," "Drilling nanoholes in colloidal spheres by selective etching," and "Synthesis and microencapsulation of organo-silica particles."
ACCESSION #: 21502282

## Related Articles

• Mo6S6 nanowires: structural, mechanical and electronic properties. Vilfan, I. // European Physical Journal B -- Condensed Matter;May2006, Vol. 51 Issue 2, p277 The properties of $\ensuremath{\mathrm{ Mo_6S_6 }}$ nanowires were investigated with ab initio calculations based on the density-functional theory. The molecules build weakly coupled one-dimensional chains, like $\ensuremath{\mathrm{ Mo_6Se_6 }}$ and Mo6S9-xIx, and the crystals are strongly...

• Recent Trends in Graphene based Electrode Materials for Energy Storage Devices and Sensors Applications. Ramachandran, Rasu; Mani, Veerappan; Shen-Ming Chen; Saraswathi, Ramiah; Bih-Show Lou // International Journal of Electrochemical Science;Oct2013, Vol. 8 Issue 10, p11680 Graphene is a special allotrope of carbon with two-dimensional monolayered sheet network of sp² hybridized carbon. It possesses novel electronic, mechanical and conducting properties and these properties could be exploited in the field of scientific community in nanotechnology. Numbers of...

• U.S. market to reach $1 billion by 2000. Morris, Gregory DL // Chemical Week;2/5/1997, Vol. 159 Issue 5, p51 Predicts the growth in the market for zeolites in the United States. Average growth between 1995 and 2000; Growth in detergents application; Development in polymers directed toward metallocene and other single-site catalysts.

• Conditional statistics of electron transport in interacting nanoscale conductors. Sukhorukov, Eugene V.; Jordan, Andrew N.; Gustavsson, Simon; Leturcq, Renaud; Ihn, Thomas; Ensslin, Klaus // Nature Physics;Apr2007, Vol. 3 Issue 4, p243 There is an intimate connection between the acquisition of information and how this information changes the remaining uncertainty in the system. This trade-off between information and uncertainty plays a central role in the context of detection. Recent advances in the ability to make accurate,...

• Magnetic monopole drift fits 1934 ionic theory. Bush, Steve // Electronics Weekly;10/28/2009, Issue 2404, p18 The article reports that scientists from Great Britain have directly measured magnetic charge moving in a solid, and proved that the movement exactly parallels the flow of electric charge in ionic solutions. The team of scientists came from the London Centre for Nanotechnology (LCN) and the...

• Direct observation of changes to domain wall structures in magnetic nanowires of varying width. O'Shea, K. J.; McVitie, S.; Chapman, J. N.; Weaver, J. M. R. // Applied Physics Letters;11/17/2008, Vol. 93 Issue 20, p202505 Lorentz microscopy has been used to explore the structure variation of domain walls in thin Permalloy nanowires in the vicinity of symmetric triangular antinotches. The antinotches present a complex potential landscape to domain walls. Walls can be trapped in front of, partly enter, or be...

• Theory of simultaneous control of orientation and translational motion of nanorods using positive dielectrophoretic forces.
Edwards, Brian; Engheta, Nader; Evoy, Stephane // Journal of Applied Physics;12/15/2005, Vol. 98 Issue 12, p124314 The manipulation of individual submicron-sized objects has been the focus of significant efforts over the last few years. A method to arbitrarily move and orient a set of rod-shaped conductive particles in a region defined by a set of electrodes using positive dielectrophoretic forces is... • Morphology of germanium nanowires grown in presence of B2H6. Tutuc, E.; Guha, S.; Chu, J. O. // Applied Physics Letters;1/23/2006, Vol. 88 Issue 4, p043113 We study the Au-catalyzed chemical vapor growth of germanium (Ge) nanowires in the presence of di-borane (B2H6), serving as doping precursor. Our experiments reveal that, while undoped Ge nanowires can be grown epitaxially on Si(111) substrates with very long wire lengths, the B2H6 exposure... • Andreev Spectroscopy in Three-Terminal Hybrid Nanostructure. MICHAŁEK, G.; BUŁKA, B. R.; URBANIAK, M.; DOMAŃSKI, T.; WYSOKIŃSKI, K. I. // Acta Physica Polonica, A.;2015, Vol. 127 Issue 2, p293 We consider a hybrid three-terminal structure consisting of a quantum dot coupled to three leads, two normal and one superconducting. The current flowing between one of the normal and the superconducting electrodes induces voltage in the other normal (floating) electrode. The value of the... Share
2019-03-25 13:21:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25304192304611206, "perplexity": 7332.120885937601}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203947.59/warc/CC-MAIN-20190325112917-20190325134917-00062.warc.gz"}
http://iboyko.com/articles/configuring-directory-environment-variables-when-using-python-in-emacs/
iBoyko - "sort of" a website

# Configuring Directory Environment Variables When Using Python in Emacs

## An example of how to configure directory environment variables for python.el

Posted by Yakov Boyko on November 02, 2019

If you are trying to use Emacs for your Python programming needs, you are probably familiar with the Python major mode, which was originally started as python.el. This mode is aware of directory-based environment variable configurations. That is, one can configure environment variables by creating a file called .dir-locals.el and populating it with an appropriate (associative) property list. Here is an example of such a file:

((python-mode
  (python-shell-interpreter . "/home/user-name/miniconda3/envs/env-name/bin/python")
  (python-shell-interpreter-args . "/var/www/sites/project-name/project/manage.py shell")
  (python-shell-prompt-regexp . "In \\[[0-9]+\\]: ")
  (python-shell-prompt-output-regexp . "Out\\[[0-9]+\\]: ")
  (python-shell-completion-setup-code . "from IPython.core.completerlib import module_completion")
  (python-shell-completion-module-string-code . "';'.join(module_completion('''%s'''))\n")
  (python-shell-completion-string-code . "';'.join(get_ipython().Completer.all_completions('''%s'''))\n")
  (python-shell-virtualenv-root . "/home/user-name/miniconda3/envs/env-name")))

Some basic decoding for the entries is as follows:

• python-shell-interpreter: The interpreter for shell interactions. Always set it to python or python2; if you are using IPython, Django will take care of enabling it itself when you use the shell.
• python-shell-interpreter-args: Arguments to pass to the shell. This trick lets you start the Django shell by default when spawning shell processes for buffers.
• python-shell-prompt-regexp: This is needed for comint interaction with IPython. It helps comint keep track of the prompt.
• python-shell-prompt-output-regexp: This is needed for comint interaction with IPython. Same as above, but for the output prompt.
• python-shell-completion-setup-code: Code to set up completion. All completion retrieval commands should get all they need from this code.
• python-shell-completion-module-string-code: Tells comint how to autocomplete modules. Since comint can't use IPython completion by default because of limitations on shell escape codes, ipython.el makes it use this code to retrieve available completions.
• python-shell-completion-string-code: This tells comint how to autocomplete everything else. Same idea as above.
• python-shell-extra-pythonpaths: Extra dirs where to find Python modules. One can put all project apps in a separate folder and use this variable for that, as it lets Emacs know of this directory independently of the Django project.
• python-shell-virtualenv-root (formerly python-shell-virtualenv-path): virtualenv path for the current project. This variable by itself enables the virtualenv for the current project. As soon as you visit a file, the virtualenv will be detected and enabled for the file.
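As a quick sanity check of what those prompt patterns are meant to match (shown in plain Python only to illustrate the pattern; inside the Elisp strings above the backslashes are doubled):

```python
import re

# The same patterns the config asks python.el/comint to use for IPython prompts.
prompt_re = re.compile(r"In \[[0-9]+\]: ")
output_re = re.compile(r"Out\[[0-9]+\]: ")

print(bool(prompt_re.match("In [42]: print('hi')")))  # True
print(bool(output_re.match("Out[42]: 'hi'")))         # True
print(bool(prompt_re.match(">>> print('hi')")))       # False: plain Python prompt
```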
2020-02-18 15:21:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41751807928085327, "perplexity": 10371.672517004821}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143784.14/warc/CC-MAIN-20200218150621-20200218180621-00065.warc.gz"}
http://tex.stackexchange.com/questions/16429/equation-spanning-two-columns-in-ieeetran
# equation spanning two columns in ieeetran I am writing a paper using ieee style in two columns format. Do you know how can I write an equation that spans over the two columns? - You could put it in a double column float. –  TH. Apr 23 '11 at 0:02 See page 11 of the IEEEtran manual; there is an example at the top of the page. Now, the double column equations are defined on the page prior to the one in which they are to appear (and in this example supposed that they are to be equation numbers six and seven): \begin{figure*}[!t] % ensure that we have normalsize text \normalsize % Store the current equation number. \setcounter{MYtempeqncnt}{\value{equation}} % Set the equation number to one less than the one % desired for the first equation here. % The value here will have to changed if equations % are added or removed prior to the place these % equations are referenced in the main text. \setcounter{equation}{5} $$\label{eqn_dbl_x} x = 5 + 7 + 9 + 11 + 13 + 15 + 17 + 19 + 21+ 23 + 25 + 27 + 29 + 31$$ $$\label{eqn_dbl_y} y = 4 + 6 + 8 + 10 + 12 + 14 + 16 + 18 + 20+ 22 + 24 + 26 + 28 + 30$$ % Restore the current equation number. \setcounter{equation}{\value{MYtempeqncnt}} % IEEE uses as a separator \hrulefill % The spacer can be tweaked to stop underfull vboxes. \vspace*{4pt} \end{figure*} The result of which is shown at the top of this page. This technique allows the definition of the equations to be positioned arbitrarily as needed so that the (floating) equations will appear where desired. The “[!t]” option forces LATEX to do its best to place the equations at the top of the next page. Had it been “[!b]” instead, then the stfloats package would need to be loaded and the \vspace command, followed by the \hrulefill command, would have to occur before the equations in the figure. - Why the incantations to store and restore the "current" equation number? (Also, shouldn't there be appropriate increments added to the current equation number?) –  Willie Wong Apr 23 '11 at 3:03 @Willie: I quoted some more of the manual. –  Emre Apr 23 '11 at 3:09 I've tried using the above and it does span both columns but the problem is that it does not place the long equation where I want it (following the appropriate text). How do I get it place correctly? –  BeauGeste Aug 28 '11 at 2:36 I tried to use this example for the same case, but the equation goes to the next page, while I use \begin{figure*}[b] instead of \begin{figure*}[!t]. How can I have the equation in the current page? –  user42212 Dec 5 at 12:07
2013-12-13 21:36:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.8049259185791016, "perplexity": 1103.1994520962296}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164997874/warc/CC-MAIN-20131204134957-00096-ip-10-33-133-15.ec2.internal.warc.gz"}
https://riverml.xyz/latest/api/drift/HDDM-W/
# HDDM_W¶ Drift Detection Method based on Hoeffding’s bounds with moving weighted average-test. HDDM_W is an online drift detection method based on McDiarmid's bounds. HDDM_W uses the Exponentially Weighted Moving Average (EWMA) statistic as estimator. It receives as input a stream of real predictions and returns the estimated status of the stream: STABLE, WARNING or DRIFT. Input: value must be a binary signal, where 0 indicates error. For example, if a classifier's prediction $$y'$$ is right or wrong w.r.t the true target label $$y$$: • 0: Correct, $$y=y'$$ • 1: Error, $$y \neq y'$$ Implementation based on MOA. ## Parameters¶ • drift_confidence – defaults to 0.001 Confidence to the drift • warning_confidence – defaults to 0.005 Confidence to the warning • lambda_option – defaults to 0.05 The weight given to recent data. Smaller values mean less weight given to recent data. • two_sided_test – defaults to False If True, will monitor error increments and decrements (two-sided). By default will only monitor increments (one-sided). ## Attributes¶ • change_detected Concept drift alarm. True if concept drift is detected. • warning_detected Warning zone alarm. Indicates if the drift detector is in the warning zone. Applicability depends on each drift detector implementation. True if the change detector is in the warning zone. ## Examples¶ >>> import numpy as np >>> from river.drift import HDDM_W >>> np.random.seed(12345) >>> hddm_w = HDDM_W() >>> # Simulate a data stream as a normal distribution of 1's and 0's >>> data_stream = np.random.randint(2, size=2000) >>> # Change the data distribution from index 999 to 1500, simulating an >>> # increase in error rate (1 indicates error) >>> data_stream[999:1500] = 1 >>> # Update drift detector and verify if change is detected >>> for i, val in enumerate(data_stream): ... in_drift, in_warning = hddm_w.update(val) ... if in_drift: ... print(f"Change detected at index {i}, input value: {val}") Change detected at index 1011, input value: 1 ## Methods¶ SampleInfo clone Return a fresh estimator with the same parameters. The clone has the same parameters but has not been updated with any data. This works by looking at the parameters from the class signature. Each parameter is either - recursively cloned if it's a River classes. - deep-copied via copy.deepcopy if not. If the calling object is stochastic (i.e. it accepts a seed parameter) and has not been seeded, then the clone will not be idempotent. Indeed, this method's purpose if simply to return a new instance with the same input parameters. reset Reset the change detector. update Update the change detector with a single data point. Parameters • value (numbers.Number) Returns typing.Tuple[bool, bool]: tuple ## References¶ 1. Frías-Blanco I, del Campo-Ávila J, Ramos-Jimenez G, et al. Online and non-parametric drift detection methods based on Hoeffding’s bounds. IEEE Transactions on Knowledge and Data Engineering, 2014, 27(3): 810-823. 2. Albert Bifet, Geoff Holmes, Richard Kirkby, Bernhard Pfahringer. MOA: Massive Online Analysis; Journal of Machine Learning Research 11: 1601-1604, 2010.
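Following the same pattern as the example above, the two_sided_test option can be enabled when decreases in the error rate should also be reported. A small variation of the simulation is sketched below; the exact index at which the detector fires depends on the random stream, so no output is shown.

```python
import numpy as np
from river.drift import HDDM_W

np.random.seed(12345)

# Two-sided monitoring: report changes in both directions.
hddm_w = HDDM_W(two_sided_test=True)

# Error rate starts at 100% (all 1's) and then drops back to a 50/50 stream,
# i.e. the signal improves, which the default one-sided test would ignore.
data_stream = np.random.randint(2, size=2000)
data_stream[:500] = 1

for i, val in enumerate(data_stream):
    in_drift, in_warning = hddm_w.update(val)
    if in_drift:
        print(f"Change detected at index {i}")
        break
```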
2021-07-26 04:46:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33401864767074585, "perplexity": 8055.709554529768}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152000.25/warc/CC-MAIN-20210726031942-20210726061942-00534.warc.gz"}
https://twiki.cern.ch/twiki/bin/view/CMSPublic/PhysicsResultsQCD10026
# Dijet Azimuthal Decorrelations in pp Collisions at 7 TeV ## Abstract Measurements of dijet azimuthal decorrelations in pp collisions at a center-of-mass energy of 7 TeV using the CMS detector at the CERN LHC are presented. The analysis is based on an inclusive dijet event sample corresponding to an integrated luminosity of 2.9 . The results are compared to predictions from perturbative QCD calculations and various Monte Carlo event generators. The dijet azimuthal distributions are found to be sensitive to initial-state gluon radiation. ## Approved Plots from QCD-10-026 Normalized distributions in several regions, scaled by the multiplicative factors given in the figure for easier presentation. The curves represent predictions from PYTHIA6, PYTHIA8, HERWIG++, and MADGRAPH. The error bars on the data points include statistical and systematic uncertainties. Ratios of measured normalized distributions to PYTHIA6, PYTHIA8, HERWIG++, and MADGRAPH predictions in several regions. The shaded bands indicate the total systematic uncertainty. Normalized distributions in several regions, scaled by the multiplicative factors given in the figure for easier presentation. The curves represent predictions from LO (dotted line) and NLO pQCD (solid line). Non-perturbative corrections have been applied to the predictions. The error bars on the data points include statistical and systematic uncertainties. This plot is included in the arXiv version but not in the PRL version! Ratios of measured normalized distributions to NLO pQCD predictions with non-perturbative corrections in several regions. The error bars on the data points include statistical and systematic uncertainties. The effect on the NLO pQCD predictions due to and scale variations and PDF uncertainties, as well as the uncertainties from the non-perturbative corrections are shown. Ratios of measured normalized distributions to PYTHIA6 tune D6T with various values of in several regions. The shaded bands indicate the total systematic uncertainty. Responsible: CosminDragoiu - 26-Jun-2011
2020-10-28 09:56:16
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9116280674934387, "perplexity": 5621.758420268042}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107897022.61/warc/CC-MAIN-20201028073614-20201028103614-00258.warc.gz"}
http://codeforces.com/
By antontrygubO_o, 39 hours ago, translation. Hello again, Codeforces! I am glad to invite you to Codeforces Round 580, which will take place on Aug/18/2019 16:45 (Moscow time). The round will be rated for both divisions. All problems in this round were created and prepared by me, antontrygubO_o. I tried to make them interesting and hope that you will enjoy them! A lot of thanks to arsijo for the excellent coordination of the round, kefaa2, gepardo, danya.smelskiy, re_eVVorld, Xellos, GandalfTheGrey, prof.PVH, KAN for the testing and valuable comments, and to Mike MikeMirzayanov Mirzayanov for the amazing platforms Codeforces and Polygon. Participants in each division will be offered 6 problems and 2 hours 10 minutes to solve them. As usual, I strongly recommend reading the statements of all problems! I wish you good luck and high rating! • +679 By MikeMirzayanov, 4 days ago. Hello! We are very pleased to cooperate with XTX Markets, thanks to whom we are able to hold the Global rounds. Four of the six of them have already passed, and here are the current results. In short, XTX Markets is a leading quantitative-driven electronic market maker. Launched in 2015, XTX Markets has rapidly become the number 1 FX spot liquidity provider by volumes globally, surging ahead of global banks. Thanks for supporting our community! We are happy to announce that XTX recently launched the XTX Markets Global Forecasting Challenge, powered by Correlation One. The XTX Markets Global Forecasting Challenge is an online competition for aspiring quantitative professionals, where contestants are tasked to develop a predictive model based on training data provided by XTX. Competition highlights include: • Over $100,000 in total cash prizes. • $7,500 for the best submission from a participant aged under-25 (as of 1st July 2019). • Exciting job opportunities in London. • Opportunity to compete against the best quantitative minds from around the world. The competition is open to all data-minded contestants from 1st of July to 30th of September, 2019. APPLY HERE→ We hope you'd be interested! • +129 By Vovuh, history, 5 days ago, translation. <almost-copy-pasted-part> Hello! Codeforces Round #579 (Div. 3) will start at Aug/13/2019 17:35 (Moscow time). You will be offered 6 or 7 problems (or 8) with expected difficulties to compose an interesting competition for participants with ratings up to 1600. However, all of you who wish to take part and have a rating of 1600 or higher can register for the round unofficially. The round will be hosted by the rules of educational rounds (extended ACM-ICPC). Thus, during the round, solutions will be judged on preliminary tests, and after the round there will be a 12-hour phase of open hacks. I tried to make strong tests — otherwise you will be upset if many solutions fail after the contest is over. You will be given 6 or 7 (or 8) problems and 2 hours to solve them. Note that the penalty for a wrong submission in this round (and the following Div. 3 rounds) is 10 minutes. Remember that only the trusted participants of the third division will be included in the official standings table. As written at the link, this is a compulsory measure for combating unsporting behavior. To qualify as a trusted participant of the third division, you must: • take part in at least two rated rounds (and solve at least one problem in each of them), • not have a rating of 1900 or higher.
Regardless of whether you are a trusted participant of the third division or not, if your rating is less than 1600, then the round will be rated for you. Thanks to MikeMirzayanov for the platform, help with ideas for problems and for coordination of my work. Thanks to my good friends Mikhail PikMike Piklyaev, Maksim Ne0n25 Mescheryakov and Ivan BledDest Androsov for help in round preparation and testing the round. Good luck! I also would like to say that participants who will submit wrong solutions on purpose and hack them afterwards (example) will not be shown in the hacking leaders table. </almost-copy-pasted-part> UPD: Editorial is published! • +135 By djm03178, 7 days ago, , 안녕하세요, 코드포스! (Hello, Codeforces!) We're glad to introduce you to Codeforces Round #578 (Div. 2), which will start at Aug/11/2019 15:35 (Moscow time). The round is rated for Div. 2 participants. You will be given 6 problems and 2 hours to solve them. The score distribution will be announced later. The problems are prepared by hyunuk and me. Thanks to pllk, Learner99, Rox, mohammedehab2002, cheetose, jh05013, rkm0959, edenooo, and alex9801 for testing the round. We would also like to specially thank to KAN and arsijo for coordinating the round, and of course, MikeMirzayanov for Codeforces and Polygon platform. This is our very first round, so I hope you enjoy it a lot! UPD: The scoring distribution is 500 — 1000 — 1250 — 2000 — 2000 — 2500 UPD2: The contest is finished. Thanks for joining us! Here's the editorial. UPD3: Congratulations to the winners! Div. 2 3: ccf_n0i 4: 2om_neek Unofficial Div. 1 1: kcm1700 2: uwi 4: kmjp 5: KrK • +583 By ICPCNews1, 9 days ago, , Dear friends! We're thrilled to invite you to the two great events which will be held in Turkey for the first time ever! ICPC is delighted to announce the First National Programming Contest in Turkey, organized by inzva in collaboration with the Middle East Technical University (METU) to be held on September 13-15, 2019. We’re inviting university teams across Turkey to participate in the national contest in Beykoz Kundura, Istanbul — completely free of charge — thanks to the ICPC local partner — inzva, a non-profit community focused on artificial intelligence and algorithm, supported by BEV — an Education Foundation for the Digital Native Generation. Top ten teams qualifying in this contest can then participate in the Southerneastern Europe Regional Contest, and will be financially supported for this purpose. The winner teams of the Southern Europe Regional Contest are eligible to participate at the ICPC Moscow 2020 World Finals! Also, this weekend on September 14 in Istanbul will be held the Annual ICPC Alumni Reunion Dinner. It is the first ICPC Alumni event happening in Southeastern European Region, and we will be extremely honored to welcome you and your colleagues/classmates/coaches on this special day. Please feel free to extend this invitation to your guests. ICPC Alumni Reunion Dinner will be held on Saturday, September 14, 2019, at Beykoz Shoe Factory, Yaliköy Mah. Süreyya İlmen Cad. No:1/1 Beykoz, Istanbul. There will be a reception that begins at 4pm, with dinner to follow at 5pm. Please feel free to extend this invitation to your guests. This event is free of charge but requires reservation before September 1, 2019. 
Please be sure to RSVP via this form to secure your place: Alumni registration In addition to this being a fun reunion for local ICPC participants, we will be recognizing the accomplishments of various ICPC volunteers who make the show happen each year. ICPC Executive Director Bill Poucher will be in attendance, and various other judges, staff, and coaches. We would be honored if you can make it as well, and partake in the festivities. We are unfortunately unable to welcome all outside guests but have done our best to invite as many ICPC Alumni as possible, please feel free to extend this invitation to any other finalists/regional participants that you know to be in the area. You can also send their names and contact info to [email protected] so that we can add them to the invitation list. Once registered — you’ll receive official confirmation of your personal invitation, as well as all travel details. Looking forward to meeting you in Turkey! • +86 By PikMike, history, 11 days ago, translation, , Hello Codeforces! Series of Educational Rounds continue being held as Harbour.Space University initiative! You can read the details about the cooperation between Harbour.Space University and Codeforces in the blog post. This round will be rated for the participants with rating lower than 2100. It will be held on extended ICPC rules. The penalty for each incorrect submission until the submission with a full solution is 10 minutes. After the end of the contest you will have 12 hours to hack any solution you want. You will have access to copy any solution and test it locally. You will be given 6 problems and 2 hours to solve them. The problems were invented and prepared by Roman Roms Glazov, Adilbek adedalic Dalabaev, Vladimir Vovuh Petrov, Ivan BledDest Androsov, Maksim Ne0n25 Mescheryakov and me. Also huge thanks to Mike MikeMirzayanov Mirzayanov for great systems Polygon and Codeforces. Good luck to all participants! UPD: Our friends at Harbour.Space also have a message for you: Hello Codeforces! Take a minute and ask yourselves — what skills are I missing, that I would love to have? What are those key qualities that would take me from being a decent developer to an extraordinary one? Am I where I want to be, in terms of personal progress? At Harbour.Space, we focus on the key technical and social requirements of the jobs of the future. Our three-week courses are designed for students to acquire and develop specific skills through seminars, workshops, projects, and case studies in a very short amount of time. Mike Mirzayanov’s course, Advanced Algorithms and Data Structures, is a great example of this: This, in combination with our unique academic environment, extensive professional network, and our seaside campus in the beautiful city of Barcelona, provides for a truly unique learning experience. Interested? Fill out the form below, get more information, and see how you can come attend Mike’s course to see what all the fuss is about! APPLY HERE→ See you soon! Harbour.Space Congratulations to the winners: Rank Competitor Problems Solved Penalty 1 esbee 6 258 2 tfg 6 302 3 Heltion 6 310 4 jiangly 6 338 5 Geothermal 5 195 109 successful hacks and 290 unsuccessful hacks were made in total! And finally people who were the first to solve each problem: Problem Competitor Penalty A nantf 0:03 B Denverjin 0:11 C Geothermal 0:20 D dhxh 0:10 E lqs2015 0:19 F duxing201606 0:59 UPD: Editorial is out • -208 By SPatrik, history, 2 weeks ago, , Hello, Codeforces! 
I am glad to invite you to take part in Codeforces Round #577 (Div 2), which will be held on Sunday, 4 August at 16:35 UTC. You will be given 5 problems, one of them will have 2 subtasks and 2 hours to solve them. The round will be rated for the second division. Huge thanks to _kun_ and KAN for helping to prepare the round. I would like to thank to 300iq, isaf27, V--gLaSsH0ldEr593--V, pllk, mohammedehab2002, tractor74.ru, Rox, opukittpceno_hhr for testing the round. And thanks to MikeMirzayanov for the great codeforces and polygon platforms. This is my first Codeforces round. Hope you will enjoy it. Good luck and have fun! UPD: The scoring distribution is: 500 — 1000 — 1500 — 2000 — (2000 + 1000) UPD2: Editorial UPD3: Congratulations to the winners: Div2: 4: Yazmau Unofficial Div1: 1: neal 2: uwi 3: scott_wu • +207 By Um_nik, history, 3 weeks ago, , Hello! I'm glad to invite you all to Round 576 which will take place on Jul/30/2019 17:35 (Moscow time). There will be 6 problem in both divisions. Round is based on Team Olympiad in Computer Science Summer School. It is (yet another) summer school for schoolchildren organized by Higher School of Economics and "Strategy" Center in Lipetsk. Almost all the problems are authored and prepared by teachers and teaching assistants in CSSS: Um_nik, Burunduk1, I_love_fake123, MakArtKar, Villen3tenmerth, Aphanasiy, Gadget. One of the problems is authored by Merkurev (just because we are friends :) ). One more problem for the round was added by KAN. I would like to thank KAN for CF round coordination, I_love_Tanya_Romanova, Merkurev, Rox and tractor74.ru for testing, and Codeforces and Polygon team for these beautiful platforms. Scoring will be announced. Upd: We added one more problem to div.1 contest, now both contests have 6 problems (4 in common). The round is not combined, if it were, I would write "combined" in the title. Scoring distribution: div2: 500-750-1250-1750-2500-3000 div1: 500-750-1250-1500-1750-2250 Congratulations to our winners! div.1: 2. tourist 3. mnbvmar 4. Benq 5. pashka div.2: 1. ChthollyNotaSeniorious 2. Honour_34 3. ldxcaicai 4. shogunator 5. Yatsumura Editorial won't be published. • +192 By majk, 3 weeks ago, , The 26th Central European Olympiad in Informatics will take place in Bratislava, Slovakia, during July 23rd-29th 2019. The most gifted high school students from 13 countries will have the opportunity to prove their knowledge and skills in informatics. Codeforces will organise the online mirror for this year's competition. The online mirrors will take place after the finish of each of the 2 competition days, having the same scoring format. The online mirror schedules are the following: Contest format • The contest will be unrated for all users. • You will have to solve 3 tasks in 5 hours. • There will be full feedback throughout the entire contest. • The scoreboard will be hidden until the end of the contest. • The tasks will have partial scoring. The maximum score for each problem will be 100 points. • Among multiple submissions only the one that achieves the maximum score is counted towards the final ranking. • The submission time does not matter for ranking. • There will be enough fun for all colours ranging from newbie to international grandmaster. Legendary grandmasters can spice it up by turning it into a drinking game (ask Radewoosh for details). Link to onsite contest with official rules and scoreboard UPDATE: Much nicer scoreboard than on the first day made by arsijo. Many thanks! 
Congratulations to all onsite contestants who battled our unusually hard problemset for 10 hours. You can view the final standings. Many thanks to KAN for running the mirror, MikeMirzayanov for both platforms, all of our authors, testers and the whole CEOI staff and sponsors! Day 1 mirror: 1. mnbvmar 300 2. Benq 300 3. gamegame 281 4. ainta 271 5. __.__ 254 Day 2 mirror: 1. zx2003 230 2. saba2000 230 3. TLE 200 4. cuizhuyefei 200 5. panole 200 6. dacin21 200 Results of both days combined: (https://codeforces.com/spectator/ranklist/e354b9b95c3626a3cfdfdb9eb37a7a6f) Editorials: day1 day2
2019-08-17 14:04:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1700373739004135, "perplexity": 4785.569634010059}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313259.30/warc/CC-MAIN-20190817123129-20190817145129-00022.warc.gz"}
https://gdal.org/contributing/rst_style.html
# Sphinx RST Style guide¶ This page contains syntax rules, tips, and tricks for using Sphinx and reStructuredText. For more information, please see this comprehensive guide to reStructuredText, as well as the Sphinx reStructuredText Primer. ## Basic markup¶ A reStructuredText document is written in plain text. Without the need for complex formatting, one can be composed simply, just like one would any plain text document. For basic formatting, see this table (format: syntax, with the output being the correspondingly styled text): Italics: *italics* (single asterisk); Bold: **bold** (double asterisk); Monospace: ``monospace`` (double back quote). Warning Use of basic markup is not recommended! Where possible use Sphinx inline directives to logically mark commands, parameters, options, input, and files. By using directives consistently these items can be styled appropriately. ## Lists¶ There are two types of lists, bulleted lists and numbered lists. A bulleted list looks like this: • An item • Another item • Yet another item This is accomplished with the following code: * An item * Another item * Yet another item A numbered list looks like this: 1. First item 2. Second item 3. Third item This is accomplished with the following code: #. First item #. Second item #. Third item Note that numbers are automatically generated, making it easy to add/remove items. ## List-tables¶ Bulleted lists can sometimes be cumbersome and hard to follow. When dealing with a long list of items, use list-tables. For example, to talk about a list of options, create a table with a Shapes column and a Description column: Square: Four sides of equal length, 90 degree angles; Rectangle: Four sides, 90 degree angles. This is done with the following code: .. list-table:: :widths: 20 80 * - Shapes - Description * - Square - Four sides of equal length, 90 degree angles * - Rectangle - Four sides, 90 degree angles ## Page labels¶ Ensure every page has a label that matches the name of the file. For example if the page is named foo_bar.rst then the page should have the label: .. _foo_bar: Other pages can then link to that page by using the following code: :ref:`foo_bar` Links to other pages should never be titled as “here”. Sphinx makes this easy by automatically inserting the title of the linked document. To insert a link to an external website: `Text of the link <http://example.com>`__ Warning It is very easy to have two links with the same text, resulting in the following error: (WARNING/2) Duplicate explicit target name: foo. To avoid these warnings, use a double underscore __, which generates an anonymous link. ## Sections¶ Use sections to break up long pages and to help Sphinx generate tables of contents. ================================================================================ Document title ================================================================================ First level ----------- Second level ++++++++++++ Third level *********** Fourth level ~~~~~~~~~~~~ ## Notes and warnings¶ When it is beneficial to have a section of text stand out from the main text, Sphinx has two such boxes, the note and the warning. They function identically, and only differ in their coloring. You should use notes and warnings sparingly, however, as adding emphasis to everything makes the emphasis less effective. Here is an example of a note: Note This is a note. This note is generated with the following code: .. note:: This is a note. Similarly, here is an example of a warning: Warning Beware of dragons. This warning is generated by the following code: .. warning:: Beware of dragons.
## Images¶ Add images to your documentation when possible. Images, such as screenshots, are a very helpful way of making documentation understandable. When making screenshots, try to crop out unnecessary content (browser window, desktop, etc). Avoid scaling the images, as the Sphinx theme automatically resizes large images. It is also helpful to include a caption underneath the image: .. figure:: image.png :align: center *Caption* In this example, the image file exists in the same directory as the source page. If this is not the case, you can insert path information in the above command. The root / is the directory of the conf.py file: .. figure:: /../images/gdalicon.png ## External files¶ Text snippets, large blocks of downloadable code, and even zip files or other binary sources can all be included as part of the documentation. To include a link to a sample file, use the download directive: :download:`An external file <example.txt>` The result of this code will generate a standard link to an external file. To include the contents of a file, use the literalinclude directive: Example of :command:`gdalinfo` use: .. literalinclude:: example.txt Example of gdalinfo use: Driver: GTiff/GeoTIFF Size is 512, 512 Coordinate System is: DATUM["North_American_Datum_1927", SPHEROID["Clarke 1866",6378206.4,294.978698213901]], PRIMEM["Greenwich",0], UNIT["degree",0.0174532925199433]], PROJECTION["Transverse_Mercator"], PARAMETER["latitude_of_origin",0], PARAMETER["central_meridian",-117], PARAMETER["scale_factor",0.9996], PARAMETER["false_easting",500000], PARAMETER["false_northing",0], UNIT["metre",1]] Origin = (440720.000000,3751320.000000) Pixel Size = (60.000000,-60.000000) Corner Coordinates: Upper Left ( 440720.000, 3751320.000) (117d38'28.21"W, 33d54'8.47"N) Lower Left ( 440720.000, 3720600.000) (117d38'20.79"W, 33d37'31.04"N) Upper Right ( 471440.000, 3751320.000) (117d18'32.07"W, 33d54'13.08"N) Lower Right ( 471440.000, 3720600.000) (117d18'28.50"W, 33d37'35.61"N) Center ( 456080.000, 3735960.000) (117d28'27.39"W, 33d45'52.46"N) Band 1 Block=512x16 Type=Byte, ColorInterp=Gray The literalinclude directive has options for syntax highlighting, line numbers and extracting just a snippet: Example of :command:`gdalinfo` use: .. literalinclude:: example.txt :language: txt :linenos: :emphasize-lines: 2-6 :start-after: Coordinate System is: :end-before: Origin = ## Reference files and paths¶ Use the following syntax to reference files and paths: :file:`myfile.txt` This will output: myfile.txt. You can reference paths in the same way: :file:`path/to/myfile.txt` This will output: path/to/myfile.txt. For Windows paths, use double backslashes: :file:`C:\\myfile.txt` This will output: C:\myfile.txt. If you want to reference a non-specific path or file name: :file:`{your/own/path/to}/myfile.txt` This will output: your/own/path/to/myfile.txt ## Reference commands¶ Reference commands (such as gdalinfo) with the following syntax: :program:`gdalinfo` Use the option directive for command line options: .. option:: -json Display the output in json format. Use describe to document create parameters: .. describe:: WORLDFILE=YES Force the generation of an associated ESRI world file (with the extension .wld).
2019-10-18 06:11:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7110318541526794, "perplexity": 10379.907054332893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986677964.40/warc/CC-MAIN-20191018055014-20191018082514-00168.warc.gz"}
https://math.stackexchange.com/questions/2981045/preserving-the-bitwise-dot-product-distributive-property
# Preserving the bitwise dot product distributive property We can define the bitwise dot product as the dot product between the vectors of the binary bit representations of two numbers. E.g. $$5\cdot 7 = (1,0,1)\cdot(1,1,1) = 2$$ But curiously, this dot product does not always obey the distributive property. For example, $$8\cdot 7 = (1,0,0,0)\cdot(0,1,1,1) = 0$$ but $$(6\cdot7) + (2\cdot7) = 2 + 1 = 3 \equiv 1 \;\text{mod}\;2$$ This dot product does distribute (mod 2) if you ignore the carry when adding (so that 6+2 = 4), so something about the carry breaks the property. How do you prove this mathematically? • When you add, the value of a bit depends on all of the sums to its right. This is not true of your bitwise AND. – amd Nov 2 '18 at 0:18
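To make the observation concrete, here is a small check (a sketch in Python, not from the original post) that the bitwise dot product distributes over XOR (carry-less addition) modulo 2, but not over ordinary addition:

```python
def bitdot(a: int, b: int) -> int:
    """Dot product of the binary representations of a and b,
    i.e. the number of bit positions where both have a 1."""
    return bin(a & b).count("1")

# The example from the question: 8 = 6 + 2, yet the dot products disagree mod 2.
print(bitdot(8, 7) % 2)                   # 0
print((bitdot(6, 7) + bitdot(2, 7)) % 2)  # 1

# With carry-less addition (XOR), distributivity mod 2 holds: 6 XOR 2 == 4.
print(bitdot(6 ^ 2, 7) % 2)               # 1
print((bitdot(6, 7) + bitdot(2, 7)) % 2)  # 1
```

This matches the comment above: the $$k$$-th bit of $$a+b$$ is $$a_k \oplus b_k \oplus c_k$$, where $$c_k$$ is the carry into position $$k$$, so modulo 2 the dot product of $$a+b$$ with a fixed mask differs from the distributed sum exactly by the dot product of the carry bits with that mask.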
2019-05-24 01:06:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9426335096359253, "perplexity": 520.3075260890979}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257481.39/warc/CC-MAIN-20190524004222-20190524030222-00231.warc.gz"}
https://stackabuse.com/encoding-and-decoding-base64-strings-in-node-js/
## Encoding and Decoding Base64 Strings in Node.js ### What is Base64 Encoding? Base64 encoding is a way to convert data (typically binary) into the ASCII character set. It is important to mention here that Base64 is not an encryption or compression technique, although it can sometimes be confused with encryption due to the way it seems to obscure data. In fact, the size of a Base64 encoded piece of information is about 1.33 times (4/3) the actual size of your original data. Base64 is the most widely used base encoding technique, with Base16 and Base32 being the other two commonly used encoding schemes. ### How Does Base64 Work? Converting data to Base64 is a multistep process. Here is how it works for strings of text: 1. Calculate the 8 bit binary version of the input text 2. Re-group the 8 bit version of the data into multiple chunks of 6 bits 3. Find the decimal version of each of the 6 bit binary chunks 4. Find the Base64 symbol for each of the decimal values via a Base64 lookup table For a better understanding of this concept, let's take a look at an example. Suppose we have the string "Go win" and we want to convert it into a Base64 string. The first step is to convert this string into binary. The binary version of "Go win" is: 01000111 01101111 00100000 01110111 01101001 01101110 You can see here that each character is represented by 8 bits. However as we said earlier, Base64 converts the data in 8 bit binary form to chunks of 6 bits. This is because the Base64 format only has 64 characters: 26 uppercase alphabet letters, 26 lowercase alphabet letters, 10 numeric characters, and the "+" and "/" symbols. Base64 doesn't use all the ASCII special characters, but only these few. Note that some implementations of Base64 use different special characters than "+" and "/". Coming back to the example, let us break our 8 bit data into chunks of 6 bits. 010001 110110 111100 100000 011101 110110 100101 101110 You won't always be able to divide up the data into full sets of 6 bits, in which case you'll have to deal with padding. Now for each chunk above, we have to find its decimal value. These decimal values are given below: 010001 = 17, 110110 = 54, 111100 = 60, 100000 = 32, 011101 = 29, 110110 = 54, 100101 = 37, 101110 = 46. Finally, we have to look up the Base64 symbol for each of the decimal values that we just calculated, using a standard Base64 encoding table. There you can see that decimal 17 corresponds to "R", and decimal 54 corresponds to "2", and so on. Using this encoding table we can see that the string "Go win" is encoded as "R28gd2lu" using Base64. You can use any online text to Base64 converter to verify this result. ### Why use Base64 Encoding? Sending information in binary format can sometimes be risky since not all applications or network systems can handle raw binary. On the other hand, the ASCII character set is widely known and very simple to handle for most systems. For instance email servers expect textual data, so ASCII is typically used. Therefore, if you want to send images or any other binary file to an email server you first need to encode it in a text-based format, preferably ASCII. This is where Base64 encoding comes in extremely handy in converting binary data to the correct formats. ### Encoding Base64 Strings with Node.js The easiest way to encode Base64 strings in Node.js is via the Buffer object. In Node.js, Buffer is a global object, which means that you do not need to use a require statement in order to use the Buffer object in your applications.
Internally, Buffer is a fixed-length array of bytes that is also capable of performing many different encodings/decodings. These include to/from UTF-8, UCS2, Base64 or even Hex encodings. As you write code that deals with and manipulates data, you'll likely be using the Buffer object at some point. Take a look at the following example. Here we will encode a text string to Base64 using the Buffer object. Save the following code in a file "encode-text.js": 'use strict'; let data = 'stackabuse.com'; let buff = new Buffer(data); let base64data = buff.toString('base64'); console.log('"' + data + '" converted to Base64 is "' + base64data + '"'); In the above script we create a new buffer object and pass it our string that we want to convert to Base64. We then call the "toString" method on the buffer object that we just created and pass it "base64" as a parameter. The "toString" method with "base64" as a parameter will return the data in the form of a Base64 string. Run the above code and you shall see the following output. $ node encode-text.js "stackabuse.com" converted to Base64 is "c3RhY2thYnVzZS5jb20=" In the output we can see the Base64 counterpart of the string that we converted to Base64. ### Decoding Base64 Strings with Node.js Decoding a Base64 string is quite similar to encoding it. You have to create a new buffer object and pass two parameters to its constructor. The first parameter is the data in Base64 and the second parameter is "base64". Then you simply have to call "toString" on the buffer object, but this time the parameter passed to the method will be "ascii" because this is the data type that you want your Base64 data to convert to. Take a look at the following code snippet for reference. 'use strict'; let data = 'c3RhY2thYnVzZS5jb20='; let buff = new Buffer(data, 'base64'); let text = buff.toString('ascii'); console.log('"' + data + '" converted from Base64 to ASCII is "' + text + '"'); Add this code to a file "ascii.js" and save it. Here we have used "c3RhY2thYnVzZS5jb20=" as the Base64 input data. When this data is decoded it should display "stackabuse.com", because from the last example we know that "stackabuse.com" encoded in Base64 is "c3RhY2thYnVzZS5jb20=". Run the above code with Node.js. It will display the following output: $ node ascii.js "c3RhY2thYnVzZS5jb20=" converted from Base64 to ASCII is "stackabuse.com" ### Encoding Binary Data to Base64 Strings As mentioned in the beginning of the article, the primary purpose of Base64 encoding is to convert binary data into textual format. Let us see an example where we will convert an image (binary data) into a Base64 string. Take a look at the following example. 'use strict'; const fs = require('fs'); let buff = fs.readFileSync('stack-abuse-logo.png'); let base64data = buff.toString('base64'); console.log('Image converted to base 64 is:\n\n' + base64data); In the above code we load an image into a buffer via the readFileSync() method of the fs module. The rest of the process is similar to creating a Base64 string from a normal ASCII string. When you run the above code you will see the following output (the long Base64 string itself is omitted here). $ node encode-image.js Image converted to Base64 is: Although the actual image is very small (25x19), the output is still fairly large, partially because Base64 increases the size of the data, as we mentioned earlier. ### Decoding Base64 Strings to Binary Data The reverse process here is very similar to how we decode Base64 strings, as we saw in an earlier section. The biggest difference is the output destination and how data is written there.
Let's see the example: 'use strict'; const fs = require('fs'); let data = 'iVBORw0KGgoAAAANSUhEUgAAABkAAAATCAYAAABlcqYFAAAABGdBTUEAALGPC/xhBQAAACBjSFJNAAB6JgAAgIQAAPoAAA' + 'YSB4bWxuczp4PSJhZG9iZTpuczptZXRhLyIgeDp4bXB0az0iWE1QIENvcmUgNS40LjAiPgogICA8cmRmOlJERiB4bWxuczpyZGY9Imh0dHA6Ly' + '93d3cudzMub3JnLzE5OTkvMDIvMjItcmRmLXN5bnRheC1ucyMiPgogICAgICA8cmRmOkRlc2NyaXB0aW9uIHJkZjphYm91dD0iIgogICAgICAg' + 'ICAgICB4bWxuczp0aWZmPSJodHRwOi8vbnMuYWRvYmUuY29tL3RpZmYvMS4wLyI+CiAgICAgICAgIDx0aWZmOk9yaWVudGF0aW9uPjE8L3RpZm' + 'TUtcZxR+7ufkXp1SZ4iZRE1EDVQRnTAhowsZMFm40I2rNqUIIev8hvoPQroQXBTqwiAWcd0EglEhiZNajVZrQGXAWAzaZpzMnZn7lXPeeIe5Da' + 'Wb9Ax33vOec8/znI/3vVI6nfbxP4v8b/iSJIGfzyGfkPi+D13XUalUBL6qqmIvy5+8WuX/r2RCkUzAoIuLi2hqaoLrutjb28P6+josyxJkiqJA' + '07SQXiqVwHaOZYx/itLc3Px9YIxEIlheXsbExATGxsYwMjIiwEdHRwXA/Pw8EokEcrkcDg4OYJomVlZWMDU1JSqfmZlBR0cHbNsOtVoNCHjlTF' + 'iSySQMwxAVxONxQbi0tIRMJoPe3l5MT0+jtbUVg4ODYGImY18qlcL4+DhisZjoggCjv1C7uOyenh7Mzs5iY2ND6FQpdnd3sba2JloSjUYxPDyM' + '/v5+TE5OYn9/X9jZtrOzg+3t7WqyAUmoEu419/+HBw9E+eVymbJqAJP39fWBCR3HEU+hUMDQ0JCYGc8um81iYGAAjY2N8DwvwBdraCY8tHhDA1' + 'Y3N9Hd3S2yvH37O7RcbsF7AuUsD9+8wdOFBTx/8QJtbW1C5/nMzc3R0D2UyxXk83lRXcAk1V5GCT5sSUGDbeHxy9/EO98M9OOXzT9wfHISxKC1' + 'VdXJJ81F7j6kwUynPHlXDPdFB2fRj+KVK0KvT2rbp3uKYryJU11Cke8qqMuOoioeeJ1MPDYxM36m1cNSq4GdFx58RAWvbx8TrXnK4IgR16Em5G' + 'K4iqHi5GHHxLgcSDn97WgZPoND+GGZRpPYH85cgiiRQl1ltXxmFFQ5PuopP8TrW5ZyRcWp7AbmkeZefg5+N6PPnbRJdpw/YlfB0vQiPQZwVdZN' + 'tFZEVK6D1VTnccJlXzuqTjvOZiq6Rhj2KqLSJsofOHgIl8+t0/qsfDioxmSUWGjrRFzhYi/5Oynrdl3KXHIZDXtF6hil8R6I9FBV/RvDLnXKxS' + '2cKMfUSm3rhD0g4E2g197fwMZ+Bgt8rNe2iP2BhL5dgfFzrx8AfECEDdx45a0AAAAASUVORK5CYII='; let buff = new Buffer(data, 'base64'); fs.writeFileSync('stack-abuse-logo-out.png', buff); console.log('Base64 image data converted to file: stack-abuse-logo-out.png'); Here you can see that we start with the Base64 data (which could've also been received from a socket or some other communication line) and we load it in to a Buffer object. When creating the buffer we tell it that it's in base64 format, which allows the buffer to parse it accordingly for internal storage. To save the data back in its raw PNG format, we simply pass the Buffer object to our fs.writeFileSync method and it does the conversion for us. ### Conclusion Base64 encoding is one of the most common ways of converting binary data into plain ASCII text. It is a very useful format for communicating between one or more systems that cannot easily handle binary data, like images in HTML markup or web requests. In Node.js the Buffer object can be used to encode and decode Base64 strings to and from many other formats, allowing you to easily convert data back and forth as needed. What do you typically use Base64 formatting for in Node.js? Let us know in the comments!
2020-04-09 23:32:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2513750195503235, "perplexity": 1623.747895115992}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371880945.85/warc/CC-MAIN-20200409220932-20200410011432-00346.warc.gz"}
https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/sbyte
# sbyte (C# Reference) sbyte denotes an integral type that stores values according to the size and range shown in the following table. Type: sbyte; Range: -128 to 127; Size: Signed 8-bit integer; .NET type: System.SByte ## Literals You can declare and initialize an sbyte variable by assigning a decimal literal, a hexadecimal literal, or (starting with C# 7.0) a binary literal to it. In the following example, integers equal to -102 that are represented as decimal, hexadecimal, and binary literals are converted from int to sbyte values. sbyte sbyteValue1 = -102; Console.WriteLine(sbyteValue1); unchecked { sbyte sbyteValue2 = (sbyte)0x9A; Console.WriteLine(sbyteValue2); sbyte sbyteValue3 = (sbyte)0b1001_1010; Console.WriteLine(sbyteValue3); } // The example displays the following output: // -102 // -102 // -102 Note You use the prefix 0x or 0X to denote a hexadecimal literal and the prefix 0b or 0B to denote a binary literal. Decimal literals have no prefix. Starting with C# 7.0, a couple of features have been added to enhance readability. • C# 7.0 allows the usage of the underscore character, _, as a digit separator. • C# 7.2 allows _ to be used as a digit separator for a binary or hexadecimal literal, after the prefix. A decimal literal isn't permitted to have a leading underscore. Some examples are shown below.
unchecked { sbyte sbyteValue4 = (sbyte)0b1001_1010; Console.WriteLine(sbyteValue4);
sbyte sbyteValue5 = (sbyte)0b_1001_1010; // C# 7.2 onwards
Console.WriteLine(sbyteValue5); }
// The example displays the following output: // -102 // -102
If the integer literal is outside the range of sbyte (that is, if it is less than SByte.MinValue or greater than SByte.MaxValue), a compilation error occurs. When an integer literal has no suffix, its type is the first of these types in which its value can be represented: int, uint, long, ulong. This means that, in this example, the numeric literals 0x9A and 0b10011010 are interpreted as 32-bit signed integers with a value of 154, which exceeds SByte.MaxValue. Because of this, the casting operator is needed, and the assignment must occur in an unchecked context. ## Compiler overload resolution A cast must be used when calling overloaded methods. Consider, for example, the following overloaded methods that use sbyte and int parameters: public static void SampleMethod(int i) {} public static void SampleMethod(sbyte b) {} Using the sbyte cast guarantees that the correct type is called, for example:
// Calling the method with the int parameter:
SampleMethod(5);
// Calling the method with the sbyte parameter:
SampleMethod((sbyte)5);
## Conversions There is a predefined implicit conversion from sbyte to short, int, long, float, double, or decimal. You cannot implicitly convert nonliteral numeric types of larger storage size to sbyte (see Integral Types Table for the storage sizes of integral types). Consider, for example, the following two sbyte variables x and y: sbyte x = 10, y = 20; The following assignment statement will produce a compilation error, because the arithmetic expression on the right side of the assignment operator evaluates to int by default.
sbyte z = x + y; // Error: conversion from int to sbyte
To fix this problem, cast the expression as in the following example:
sbyte z = (sbyte)(x + y); // OK: explicit conversion
It is possible though to use the following statements, where the destination variable has the same storage size or a larger storage size: sbyte x = 10, y = 20; int m = x + y; long n = x + y; Notice also that there is no implicit conversion from floating-point types to sbyte. For example, the following statement generates a compiler error unless an explicit cast is used:
sbyte x = 3.0; // Error: no implicit conversion from double
sbyte y = (sbyte)3.0; // OK: explicit conversion
For information about arithmetic expressions with mixed floating-point types and integral types, see float and double. For more information about implicit numeric conversion rules, see the Implicit Numeric Conversions Table. ## C# Language Specification For more information, see Integral types in the C# Language Specification. The language specification is the definitive source for C# syntax and usage.
2019-06-20 06:45:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4401074945926666, "perplexity": 4474.659959395988}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999141.54/warc/CC-MAIN-20190620044948-20190620070948-00292.warc.gz"}
https://cs.stackexchange.com/questions/69435/spacial-access-methods-and-z-order
# Spatial access methods and z-order I'm relatively familiar with spatial access methods and related data-structures such as Kd-trees and R-trees, but there is a related issue which I don't remember having seen mentioned when reading on that subject. I'm concerned with the two-dimensional case. Programs manipulating shapes often sort overlapping shapes in z-order, giving a predictable behavior. Querying a data structure for a range access usually provides the shapes in the region in an order not guaranteed to respect the z-order. It is obviously possible to sort the result, but are there some data-structures allowing to avoid that step? Especially when z-order is considered important only for overlapping shapes. • This question is not entirely clear to me. First of all, what are you querying for, precisely? All shapes completely inside the query region? All shapes intersecting the query region? All groups of intersecting shapes for which at least one shape intersects the region? Secondly, what are the shapes? Rectangles? Polygons? Something else? Finally, what is your query range? A rectangle? A circle? Something else? Jan 29 '17 at 10:21 • @Discretelizard, I don't have any application in mind (more precisely my application does not depend on z-order and is suitably handled with what we have); I was wondering whether z-order could be handled in a non-trivial way (i.e. without post-processing the result or doing a simple selection on the z-ordered list of shapes). If you need one, you can consider displaying from back to front. Jan 29 '17 at 12:25 • Well, then I'm afraid I don't understand what sort of answer you're looking for. I could elaborate a bit more on how storing the lists of shapes at the 'bottom' of the tree sorted by their z-order can be used to get a bit better performance than simply sorting, but most of the answers I can think of likely depend on the answer to the questions I asked above. Jan 29 '17 at 13:34 • @Discretelizard, what that problem is called in academia and a reference to a survey paper would be enough; an outline of the main techniques would be outstanding; a statement that it is not a studied problem would also work, although disappointing, but that's not the news-bearer's fault. I'm asking in a technological watch state of mind --just collecting facts related to what I do in case they come up useful-- so ideas about how to adapt a data-structure not designed for z-order handling, without experience feedback, are not what I'm looking for. Jan 29 '17 at 14:26 First of all, Kd-trees are data-structures for a number of points, while your problem is concerned with shapes. Although you can (and we will) represent the shapes as points in a higher dimensional space in some way, querying shapes is an important special case over arbitrary points. I don't see why Z-orders would be interesting for points, so we really want a data-structure for shapes. R-trees store rectangles, but if you are interested in finding intersections of query regions with non-rectangular shapes, R-trees probably aren't very helpful. Since a general shape is not so clearly defined, I will consider searching for a range in a set of $$n$$ simple polygons; this should be sufficient for most cases. Querying this structure with a 2D range (that is, an orthogonal rectangle) for shapes can mean two different things: • Find all polygons fully contained in an orthogonal query rectangle. • Find all polygons intersecting an orthogonal query rectangle.
## Polygons fully contained in an orthogonal query rectangle For the first query, note that a polygon is fully contained in a rectangle if and only if its rectangular bounding box is fully contained inside the query rectangle. So, we can search for the bounding boxes of the shapes. To do this, we represent the bounding box of every shape $$s$$ as a tuple of the bottom left and upper right corner $$(p_s, q_s)$$. I denote the $$x$$- and $$y$$-coordinates of a point using brackets, e.g. $$p[x]$$. We want to find all $$s$$ such that $$p_r \leq p_s$$ and $$q_s \leq q_r$$ for a query rectangle $$r$$. We can use a 4D version of your favorite point range query data structure (Kd,R,Range,etc.-tree) to find all those rectangles in $$O(\log^3 n + k)$$ time, where $$k$$ is the amount of rectangles reported, and use $$O(n\log^3 n)$$ storage for the data-structure. You can actually reduce the storage to $$O(n\log^2 n)$$, using a 2D-Range tree on the points $$p_s$$ and 'linking' this tree to a priority search tree for the $$q_s$$ coordinate (since we only have to search in one direction on the second coordinate). However, Range trees tend to have large coefficients in practice, so you should probably stick to the trees you're familiar with. ## Polygons intersecting an orthogonal query rectangle For the second type of query, finding all polygons intersecting an orthogonal query rectangle, searching for intersecting bounding boxes may lead to a lot of bounding boxes where the polygon does not actually intersect the query rectangle, so we need another representation. If we represent all polygons by the $$m$$ segments of their boundaries, we can use a windowing query to find the segments in $$O(\log^2 m + i)$$, where $$i$$ is the amount of segments found, using a $$O(m\log m)$$ size data-structure. However, we missed the polygons that contain our query rectangle, as we only looked for boundary intersections. We can check this by storing the bounding boxes of all polygons in 2 'chained' segment trees (similar to creating 2D-range trees from BSTs) and querying for the bounding boxes that contain endpoints of the query rectangle. This has a query time of $$O(\log n + k)$$ and uses $$O(n\log n)$$ storage, but this is dominated by the other part, as $$n\leq m$$ and $$k \leq i$$. ## Now, for z-orders... First, to summarize, we now have algorithms to solve the following problems with the following bounds: • Finding all polygons fully contained in an orthogonal query rectangle takes $$O(\log^3 n + k)$$ time and $$O(n\log^3 n)$$ space, where $$n$$ is the total number of polygons and $$k$$ is the number of polygons reported. • Finding all polygons intersecting an orthogonal query rectangle takes $$O(\log^2 m + i)$$ time and $$O(m\log m)$$ space, where $$m$$ is the total number of polygon segments and $$i$$ is the number of segments of the polygons reported. Now, I feel that I can finally consider the z-order. I will assume we want to query for shapes fully contained in our rectangle, since the reasoning and conclusion are similar for the other case. I do not know why you need the objects sorted on z-order, but I assume you want to iterate over all reported shapes in their z-order (if this is not what you want, there might be a far better method than sorting). If we query as discussed and sort the found rectangles, we get a running time of $$O(\log^3 n + k\log k)$$. Generally, $$k$$ is small compared to $$n$$, so the additional log factor does not cost that much, so you might as well sort.
In fact, $$n\log n$$ is usually the best you can get for a geometric algorithm that is not a variant of point location/range search. If $$k$$ is not small compared to $$n$$, you're probably better off not running all this complicated range searching and just running the trivial $$O(n)$$-algorithm that traverses all shapes in their z-order, while checking in constant time whether each is in your query range. (I ignore sorting all shapes, as you only need to do that once) ## Tldr;+Conclusion If range searching is your problem, sorting probably isn't, and if sorting is your problem, range searching probably isn't. Although there are probably cases where they both are a problem (e.g. zooming in maps), getting a good solution for this case could be rather tricky. I hope that if you learned anything from this answer, it is that range searching can already be quite a tricky business, so I'll leave it at this for now. It might be interesting to look at whether this specific topic has some useful literature. • Actually, one idea is to consider z-orders as the third dimension (I know, who would have thought?!) and store them (not much different than storing the points in a sorted binary search tree). However, we still have to merge these sorted results, which we may be able to do in a logarithmic amount of time (maybe worse), to get a $$k\log \log k$$ cost for the sorting (at best), so this probably has little value. Jan 28 '17 at 22:46 • Thanks for your answer. Sadly, it fell short of addressing my concerns, probably because I was not clear and you ended up retracing the journey I made before asking my question -- which is a nice confirmation that I've not missed something obvious. I just want overlapping shapes to be reported in z-order; if they are not overlapping I don't care about it -- and that's why sorting seems overkill and is painful not so much for the time but for the latency introduced. (On a side note, R-trees for sure were introduced for sized objects). Jan 29 '17 at 7:58 • @AProgrammer Ah, I see, R-trees store rectangles, not points. This does mean that it isn't very suitable for the intersection range query on polygons, though. Jan 29 '17 at 16:43 I know two data structures that return results in z-order without further sorting. The CritBit-Tree is a binary prefix sharing tree. If you interleave the coordinates to get a single key, the result is a z-ordered query result. Bit interleaving is straightforward for integer values, but also possible with floating point values (with some caveats). A multidimensional implementation is available here (also includes floating-point interleaving). The PH-Tree is a z-ordered tree, essentially a CritBit tree that splits in every node in all dimensions, like a quadtree. For lower dimensions I would recommend the CritBit, but for higher dimensions (>5 dimensions?) I would recommend the PH-Tree. The PH-Tree also supports rectangle data, while the CritBit only supports point data. Essentially the PH-Tree is performance-wise comparable to an R*Tree, sometimes worse, sometimes better.
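For reference, the bit interleaving that the last answer relies on (building a Morton / Z-order-curve key from integer coordinates) can be sketched as follows; this is an illustrative implementation, not the one from the CritBit/PH-Tree library mentioned in that answer:

```python
def interleave2(x: int, y: int, bits: int = 32) -> int:
    """Build a Morton (z-order) key by interleaving the bits of two
    unsigned integer coordinates: bit i of x goes to bit 2*i, and
    bit i of y goes to bit 2*i + 1."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key

# Sorting points by this key yields a z-order-curve traversal of the plane,
# so a tree keyed on it returns range-query results in that order.
points = [(3, 5), (2, 2), (7, 1), (0, 6)]
for p in sorted(points, key=lambda p: interleave2(*p)):
    print(p, bin(interleave2(*p)))
```

Note that this is the space-filling-curve sense of "z-order" used in that last answer, which is different from the stacking (back-to-front) order the question asks about.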
2022-01-28 18:52:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 39, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.381401926279068, "perplexity": 666.3038437524136}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320306335.77/warc/CC-MAIN-20220128182552-20220128212552-00262.warc.gz"}
https://www.physicsforums.com/threads/can-very-distant-particles-be-entangled.858161/
# Can very distant particles be entangled? 1. Feb 19, 2016 ### naima Entangled particles give no interference. What happens in the Fraunhofer approximation when the source of entangled pairs is far away? If it depends on distance, what about the apparent collapse? 2. Feb 19, 2016 ### vanhees71 As long as nothing disturbs the entangled particles, entanglement stays intact. This is clear from the fact that the time evolution of a closed system is a unitary (linear) transformation of the state (in the Schrödinger picture). 3. Feb 19, 2016 ### naima A far distant source of incoherent light can give interference. The lack of coherence is the usual explanation for the lack of interference. In the Fraunhofer approximation interference reappears. Isn't it as if we were in the focal plane of a Heisenberg lens where the which-path information is erased? Last edited: Feb 19, 2016 4. Feb 19, 2016 ### DrChinese Great question. And perhaps if you did coincidence counting you would be able to see that interference pattern. 5. Feb 19, 2016 ### vanhees71 I'm not sure what you are referring to. Do you mean the Hanbury-Brown-Twiss effect? Then have a first look here https://en.wikipedia.org/wiki/Hanbury_Brown_and_Twiss_effect Of course, the article is a bit sloppy in invoking old-fashioned ideas about "wave-particle duality" which doesn't exist, but the general ideas are right. 6. Feb 19, 2016 ### naima I did not know this effect, but it looks like the Fraunhofer interference behind two slits. My idea came from a link that DrChinese gave me: Experiment and the foundations of quantum physics; look at fig. 3. With coincidence counting an interference pattern can be observed in the focal plane of the Heisenberg lens (detector D1). Zeilinger explains that in the focal plane the which-way information is erased. What surprises me is that the coincidence counting is done with the D2 detector behind the slits, where there is no interference pattern! Suppose that the Heisenberg lens and the slits are at the same distance from the source. The mixed state at these points is (|1><1| + |2><2|)/2; one of them then evolves through the slits and gives no interference pattern. The Heisenberg device transforms the other and gives (taking the coincidence logic into account) the pattern. What is the mathematics behind this transformation? And when we are not in the focal plane the fringe visibility decreases. Did entanglement disappear? Last edited by a moderator: May 7, 2017 7. Feb 19, 2016 ### DrChinese Did you notice that you are mixing ideas which are not really alike and attempting to synthesize something that is neither? Light from distant stars is generally not entangled. The Hanbury-Brown-Twiss effect is fascinating; I think of it as something like entangled histories. When you attempt to say: what if I took entangled photons, and sent them across the universe, and made them coherent, and then ran them through a double-slit apparatus, and then used them to send signals FTL: you are oversimplifying the ideas. The devil is in the details. Coherence (or lack thereof) of entangled pairs is a complex subject. Entanglement involves things such as fixed photon numbers (Fock states) which are not necessarily present when you also consider the Hanbury-Brown-Twiss effect across large distances (where there is bunching). The full treatment is way beyond my pay grade. 8. Feb 19, 2016 ### naima I readily admit that I mix different situations and ideas, but in no case do I assume FTL signals. In which sentence? In this post I read what Zeilinger wrote.
Can you help me with the formulas giving the interference pattern in the focal plane? I recall that this pattern only appears when the non-coinciding events are discarded. The remaining ones are locally in a mixed state. Those at the slits give no interference pattern. Those at the lens give an interference pattern, but Alice must wait for a classical message to filter her screen data. So no FTL. The relationship between the Zeilinger paper and my title is that the simple fact that the source of entangled particles is far away could give a natural Heisenberg lens where we are in the focal plane. Last edited: Feb 19, 2016 9. Feb 19, 2016 ### DrChinese I guessed you were going in that direction. Like I say, there are plenty of issues to address and none can be glossed over. I generally stay away from detailed discussions of the eraser setup you are referring to for the simple reason that they are almost impossible to easily explain in the context of a series of posts. So I am not going to start here. I will remind you of this: you get interference when there is no way to obtain which-path information from the entangled partner. That tells you everything you need to know. The photons registering in the Heisenberg lens will not yield which-slit information, and there is no way to place them in that state at will. That will be true even in your "natural Heisenberg lens" concept. 10. Feb 19, 2016 ### vanhees71 I've no clue what "entangled histories" should be. It's usually called intensity correlations or, more generally in the QFT context, second-order coherence: https://en.wikipedia.org/wiki/Degree_of_coherence For a pedagogic introduction to HBT, see http://arxiv.org/abs/nucl-th/9804026 11. Feb 22, 2016 ### sciencejournalist00 Teleportation can be applied not just to pure states, but also to mixed states, which can be regarded as the state of a single subsystem of an entangled pair. The so-called entanglement swapping is a simple and illustrative example. If Alice has a particle which is entangled with a particle owned by Bob, and Bob teleports it to Carol, then afterwards, Alice's particle is entangled with Carol's. A more symmetric way to describe the situation is the following: Alice has one particle, Bob two, and Carol one. Alice's particle and Bob's first particle are entangled, and so are Bob's second and Carol's particle. Now, if Bob does a projective measurement on his two particles in the Bell state basis and communicates the results to Carol, as per the teleportation scheme described above, the state of Bob's first particle can be teleported to Carol's. Although Alice and Carol never interacted with each other, their particles are now entangled. https://en.wikipedia.org/wiki/Quantum_teleportation#Entanglement_swapping 12. Feb 22, 2016 ### DrChinese Could you give us an example of that? Your example of teleportation via entanglement swapping would not apply, since the states being teleported are pure states (being entangled). 13. Feb 22, 2016 ### Strilanc The standard teleportation procedure works on mixed states. (A mixed state can be eigen-decomposed into a weighted sum of pure states. When you apply the teleportation operations, they will distribute over that sum and into each of the pure-state cases. So the final state will be a weighted sum of teleported pure states, which is just a teleported mixed state.) 14. Feb 22, 2016 ### DrChinese Are you sure? I admit that I am open to enlightenment on the point, but don't believe I have seen an example previously. 15.
Feb 22, 2016 ### StevieTNZ I asked William Wootters whether you can teleport a horizontally polarized photon (photon A) to another photon (photon C; entangled with photon B) - he replied yes 16. Feb 22, 2016 ### DrChinese I just can't picture C ending up horizontally polarized after such operation, except perhaps half the time at most. Unless there is some kind of classical operation occurring. 17. Feb 22, 2016 ### Strilanc Yes, I'm sure. Given a teleportation operator $T(\left| \psi \right\rangle \left\langle \psi \right| \otimes R_\text{epr}) = R_\text{junk} \otimes \left| \psi \right\rangle \left\langle \psi \right|$ and a mixed state $M = \sum_k \lambda_k \left| \psi_k \right\rangle \left\langle \psi_k \right|$ we find: $T(M \otimes R_\text{epr})$ $= T((\sum_k \lambda_k \left| \psi_k \right\rangle \left\langle \psi_k \right|) \otimes R_\text{epr})$ $= \sum_k \lambda_k T(\left| \psi_k \right\rangle \left\langle \psi_k \right| \otimes R_\text{epr})$ $= \sum_k \lambda_k R_\text{junk} \otimes \left| \psi_k \right\rangle \left\langle \psi_k \right|$ $= R_\text{junk} \otimes \sum_k \lambda_k \left| \psi_k \right\rangle \left\langle \psi_k \right|$ $= R_\text{junk} \otimes M$ We can also confirm by doing a full calculation of the teleportation process given an unknown mixed state. Suppose Alice has a qubit in the state represented by the density matrix $M = \begin{bmatrix} a & b \\ \overline{b} & c \end{bmatrix}$. It might be mixed. She also shares an EPR pair $P = \frac{1}{\sqrt{2}} \left| 00 \right\rangle + \frac{1}{\sqrt{2}} \left| 11 \right\rangle$ with Bob. The density matrix for state of the system as a whole is: $\psi_1 = M \otimes PP^\dagger = \frac{1}{2} \begin{bmatrix} M & 0 & 0 & M \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ M & 0 & 0 & M \end{bmatrix} = \frac{1}{2} \begin{bmatrix} a & b & 0 & 0 & 0 & 0 & a & b \\ \overline{b} & c & 0 & 0 & 0 & 0 & \overline{b} & c \\ 0 & 0 & 0 & 0 & 0 & 0& 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0& 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0& 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0& 0 & 0\\ a & b & 0 & 0 & 0 & 0 & a & b \\ \overline{b} & c & 0 & 0 & 0 & 0 & \overline{b} & c \end{bmatrix}$ Now we apply the teleportation operations. First, a controlled-not of Alice's qubit containing $M$ onto the first qubit of $P$. The odd-index rows and columns (at indices 1, 3, 5, 7; you may be more used to calling them 2nd, 4th, 6th, and 8th) correspond to states where $M$ is ON. Toggling the first qubit of $P$ corresponds to pairing columns and rows whose index in binary have matching bits everywhere except in the second position (0-2, 1-3, 4-6, and 5-7) and swapping them. So a CNOT of $M$ onto the first qubit of $P$ swaps the 2nd and 4th columns, the 6th and 8th columns, the 2nd and 4th rows, and finally the 6th and 8th rows: $\psi_{2} = \frac{1}{2} \begin{bmatrix} a & 0 & 0 & b & 0 & b & a & 0 \\ 0 & 0 & 0 & 0 & 0 & 0& 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0& 0 & 0\\ \overline{b} & 0 & 0 & c & 0 & c & \overline{b} & 0 \\ 0 & 0 & 0 & 0 & 0 & 0& 0 & 0\\ \overline{b} & 0 & 0 & c & 0 & c & \overline{b} & 0 \\ a & 0 & 0 & b & 0 & b & a & 0 \\ 0 & 0 & 0 & 0 & 0 & 0& 0 & 0 \end{bmatrix}$ Second, a Hadamard operation is applied to the qubit that started off storing $M$. Group the rows by everything except $M$ (0 with 1, 2 with 3, 4 with 5, 6 with 7), then give the even-index row of each pair the pair's sum while the odd index gets the difference. Repeat for the columns. Gain a factor of 1/2. 
$\psi_{3} = \frac{1}{4} \begin{bmatrix} a & a & b & -b & b & -b & a & a \\ a & a & b & -b & b & -b & a & a \\ \overline{b} & \overline{b} & c & -c & c & -c & \overline{b} & \overline{b} \\ -\overline{b} & -\overline{b} & -c & c & -c & c & -\overline{b} & -\overline{b}\\ \overline{b} & \overline{b} & c & -c & c & -c & \overline{b} & \overline{b} \\ -\overline{b} & -\overline{b} & -c & c & -c & c & -\overline{b} & -\overline{b}\\ a & a & b & -b & b & -b & a & a \\ a & a & b & -b & b & -b & a & a \end{bmatrix}$ Third, a CNOT of $P$'s first qubit onto its second qubit is performed. Swap the 3rd and 7th columns. And rows. Also 4th and 8th. Technically a measurement is supposed to happen beforehand, since $P$'s first qubit was originally with Alice and now we're conditioning on its value in Bob-land. But you can defer measurement when performing calculations; you'll get the same result at the end as long as the measured qubits are only used as controls in the interim. $\psi_{4} = \frac{1}{4} \begin{bmatrix} a & a & a & a & b & -b& b & -b \\ a & a & a & a & b & -b& b & -b \\ a & a & a & a & b & -b& b & -b \\ a & a & a & a & b & -b& b & -b \\ \overline{b} & \overline{b}& \overline{b} & \overline{b} & c & -c & c & -c \\ -\overline{b} & -\overline{b}& -\overline{b} & -\overline{b} & -c & c & -c & c\\ \overline{b} & \overline{b}& \overline{b} & \overline{b} & c & -c & c & -c \\ -\overline{b} & -\overline{b}& -\overline{b} & -\overline{b} & -c & c & -c & c \end{bmatrix}$ The final non-measurement operation is a controlled-Z of the qubit that started off storing $M$ onto $P$'s second qubit. This negates the columns and rows whose indices are of the form 1X1. So negate the 6th and 8th columns, and rows: $\psi_{4} = \frac{1}{4} \begin{bmatrix} a & a & a & a & b & b& b & b \\ a & a & a & a & b & b& b & b \\ a & a & a & a & b & b& b & b \\ a & a & a & a & b & b& b & b \\ \overline{b} & \overline{b}& \overline{b} & \overline{b} & c & c & c & c \\ \overline{b} & \overline{b}& \overline{b} & \overline{b} & c & c & c & c\\ \overline{b} & \overline{b}& \overline{b} & \overline{b} & c & c & c & c \\ \overline{b} & \overline{b}& \overline{b} & \overline{b} & c & c & c & c \end{bmatrix}$ Which factors: $\psi_{4} = \frac{1}{4} \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} \otimes \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} \otimes \begin{bmatrix} a & b \\ \overline{b} & c \end{bmatrix}$ Measuring the first two qubits will give a uniformly random result (since they are each in the state $\frac{1}{\sqrt{2}} \left| 0 \right\rangle + \frac{1}{\sqrt{2}} \left| 1 \right\rangle$), and drop them into the maximal mixed state. The actual protocol performs measurement earlier, but we used the deferred measurement principle to get the same final state despite delaying the measurement calculations until now: $\psi_{5} = \frac{1}{4} \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \otimes \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \otimes \begin{bmatrix} a & b \\ \overline{b} & c \end{bmatrix} = \frac{1}{4} I_2 \otimes I_2 \otimes M$ As you can see, the third qubit (which corresponds to the second qubit of $P$; i.e. Bob's qubit) has ended up in the state $M$. Also you can see that it's much easier to use the intuition that "if it works on pure states then it must work on mixed states since mixed states are like not knowing which pure state you're in" than it is to run out the full calculation. Last edited: Feb 22, 2016 18. 
Feb 22, 2016 ### StevieTNZ I guess it depends on what kind of Bell state you get for photons A and B (I think if it is |H>|V> - |V>|H>, then no operation needs to be performed on photon C to make it horizontally polarized). But yes, it only happens a certain percentage of the time. 19. Feb 22, 2016 ### DrChinese Thanks. Usually, I would expect one Bell state to produce H, another to produce V. Thus no FTL signalling is possible. I am still looking at Strilanc's derivation to understand how it is both correct and not in violation of signalling. 20. Feb 22, 2016 ### Strilanc The teleportation I described requires sending two classical bits (the results of measuring Alice's qubits) from Alice to Bob, so it can't be used to signal faster than light. If you're not familiar with quantum teleportation from the perspective of quantum information, you might find this video by Michael Nielsen useful. Also, if you're thinking in terms of photons, then the Wikipedia article on linear optical quantum computing is probably a good resource.
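For readers who want to check the claim numerically, here is a small sketch (not from the thread) using numpy. It follows the gate order of the derivation above in its deferred-measurement form, where the classical corrections appear as the controlled gates acting on Bob's qubit, and verifies that Bob's qubit ends up holding the original mixed state; the example state M is arbitrary.

    import numpy as np
    from functools import reduce

    I2 = np.eye(2)
    X = np.array([[0., 1.], [1., 0.]])
    Z = np.diag([1., -1.])
    H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)

    def kron(*ops):
        return reduce(np.kron, ops)

    def lift(op, idx, n=3):
        ops = [I2] * n
        ops[idx] = op
        return kron(*ops)

    def controlled(op, control, target, n=3):
        P0, P1 = np.diag([1., 0.]), np.diag([0., 1.])
        a = [I2] * n; a[control] = P0
        b = [I2] * n; b[control] = P1; b[target] = op
        return kron(*a) + kron(*b)

    # Qubit 0: Alice's state M; qubits 1, 2: the shared EPR pair (qubit 2 is Bob's).
    M = 0.7 * np.array([[1., 0.], [0., 0.]]) + 0.3 * np.array([[.5, .5], [.5, .5]])  # a mixed state
    epr = np.zeros(4); epr[0] = epr[3] = 1 / np.sqrt(2)
    rho = kron(M, np.outer(epr, epr))

    # CNOT(0->1), H on 0, CNOT(1->2), CZ(0->2), as described above (rightmost acts first).
    U = controlled(Z, 0, 2) @ controlled(X, 1, 2) @ lift(H, 0) @ controlled(X, 0, 1)
    rho_out = U @ rho @ U.conj().T

    # Trace out qubits 0 and 1; Bob's qubit should be left holding M exactly.
    bob = np.einsum('abcabd->cd', rho_out.reshape(2, 2, 2, 2, 2, 2))
    print(np.allclose(bob, M))  # True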
2017-10-21 09:50:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7271742224693298, "perplexity": 886.139204429491}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824675.67/warc/CC-MAIN-20171021081004-20171021101004-00836.warc.gz"}
https://cs.stackexchange.com/questions/13771/undecidability-of-a-restricted-version-of-the-acceptance-problem
# Undecidability of a restricted version of the acceptance problem It's known that the following language, the so-called acceptance problem is undecidable: $A_{TM} = \{\langle M,w\rangle\,\vert\,M\text{ is a TM which accepts }w\}$ The proof is by contradiction: Assume there is a TM $H$ which decides $A_{TM}$. Let $D$ be another TM. Given the code of a TM $M$, $\langle M\rangle$ as input, $D$ simulates $H$ on $\langle M,\langle M\rangle\rangle$, and accepts, if $H$ rejects this input and rejects, if $H$ accepts it. That is, $D$ accepts $\langle M\rangle$ if $M$ rejects its own code, and vice versa. Running $D$ on its own code, $\langle D\rangle$, leads to contradiction. Let's restrict $A_{TM}$ by excluding all input strings which encode a TM: $E = \{w\,\vert\,w\text{ is a structurally valid encoding of a TM}\}$ $A'_{TM} = \{\langle M,w\rangle\,\vert\,M\text{ is a TM which accepts }w\text{ and }w\not\in E\}$ I'd like to know whether $A'_{TM}$ is also undecidable. I tried to prove it the above way: Assume there is a TM $H'$ which decides $A'_{TM}$. Let $D'$ be another TM. Given the code of a TM $M$, $\langle M\rangle$ as input, $D'$ simulates $H'$ on $\langle M,\langle M\rangle\rangle$, and accepts, if $H'$ rejects this input and rejects, if $H'$ accepts it. The problem is that running $D'$ on its own code, $\langle D'\rangle$, doesn't necessarily lead to contradiction. I mean since $\langle D',\langle D'\rangle\rangle$ is not a member of $A'_{TM}$, we don't know what $H'$ will do with it. Note: An encoding of TMs, and TMs along with an input string Let $M = (Q, \Sigma, \Gamma, \delta, q_{i}, q_{a}, q_{r})$ be a TM, where • $Q$ is the set of states, • $\Sigma = \{0, 1\}$ is the input alphabet, • $\Gamma$ is the tape alphabet ($\Sigma\subset\Gamma$), • $\delta: (Q-\{q_a, q_r\})\times\Gamma\rightarrow Q\times\Gamma\times\{L,R,S\}$ is the transition function, • $L$, $R$ and $S$ denote the respective head movements, "left", "right" and "stay", and • $q_i$, $q_a$ and $q_r$ are the initial, accepting and rejecting state, respectively. Let's assign a unique positive integer to each element of $Q$, and do the same in case of $\Sigma$, $\Gamma$ and $\{L,R,S\}$. Now every transition rule $\delta(p, a) = (q, b, m)$ can be encoded as $\langle p\rangle 1\langle a\rangle 1\langle q\rangle 1\langle b\rangle 1 \langle m\rangle$, where $\langle x\rangle$ denotes a sequence of $0$'s, with length being equal to the positive integer assigned to $x$. The encoding of $M$, denoted by $\langle M\rangle$, can be created by concatenating its transition rules, separated by $11$'s. The combined encoding of $M$, and an input string, $w\in\Sigma^*$, denoted by $\langle M,w\rangle$ is $\langle M\rangle111w$. • You just need to tweak $D$ a bit - the general idea is the same. Instead of $\langle M \rangle$, encode the input in such a way that (1) you can decode $\langle M \rangle$, and (2) it's not a valid encoding of a TM. Aug 16, 2013 at 14:05 The complication forbidding the input from being an encoding of a Turing machine is easy to overcome. All you need to do is tweak $D$ a bit, so that instead of accepting an encoding $\langle M \rangle$ of a Turing machine, it accepts some other input which can be decoded into an encoding of a Turing machine, while at the same time not being an encoding of a Turing machine itself.
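To make the answer's tweak concrete, here is a small illustrative Python sketch (the helper names are hypothetical, not from the question or the answer). It relies on the encoding described in the note above: every structurally valid encoding starts with a 0, since each block $\langle x\rangle$ is a nonempty run of 0's, so prefixing a 1 yields a string outside $E$ that still determines the machine.

    def pad(tm_encoding):
        # "1" + <M> is not a structurally valid encoding (those start with "0"),
        # yet <M> is trivially recoverable from it.
        return "1" + tm_encoding

    def unpad(s):
        # Recover <M> from a padded string; None if s is not of the padded form.
        return s[1:] if s.startswith("1") else None

    # D' on input s would decode M = unpad(s), run the assumed decider H' on <M, s>
    # (note s is not in E), and answer the opposite. Running D' on pad(<D'>) then
    # reproduces the usual diagonal contradiction.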
2022-10-01 23:58:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8272703289985657, "perplexity": 285.2033264829568}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00336.warc.gz"}
https://pos.sissa.it/358/434/
Volume 358 - 36th International Cosmic Ray Conference (ICRC2019) - CRI - Cosmic Ray Indirect Analysis of Data from Surface Detector Stations of the AugerPrime Upgrade A. Taboada* on behalf of the Pierre Auger Collaboration Full text: pdf Pre-published on: July 22, 2019 Published on: July 02, 2021 Abstract Measuring the different components of extensive air showers is of key importance in reconstructing the mass composition of ultra-high energy cosmic rays. AugerPrime, the upgrade of the Pierre Auger Observatory, aims to enhance the sensitivity of its surface detector to the masses of cosmic rays by installing a $3.8~\mathrm{m^2}$ plastic scintillator detector on top of each of the 1660 Water-Cherenkov Detectors (WCDs). This Scintillator Surface Detector (SSD) provides a complementary measurement which allows for disentanglement of the electromagnetic and muonic shower components. Another important improvement of AugerPrime are the surface-detector electronics. The new electronics will process signals from the WCD and the SSD with higher sampling frequency and enhanced resolution in signal amplitude. Furthermore, a smaller photomultiplier tube will be added to each WCD, thus increasing its dynamic range. Twelve upgraded surface detector stations have been operating since September 2016. Additionally, seventy-seven SSDs have been deployed and are taking data since March 2019. In this work, the analysis of the data from these detectors is presented. DOI: https://doi.org/10.22323/1.358.0434
2022-01-20 05:19:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3857620358467102, "perplexity": 3220.3503399652227}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301720.45/warc/CC-MAIN-20220120035934-20220120065934-00490.warc.gz"}
https://learn.careers360.com/ncert/question-write-a-pair-of-integers-whose-product-is-minus-12-and-there-lies-seven-integers-between-them-excluding-the-given-integers/?question_number=114.0
#### Write a pair of integers whose product is –12 and there lies seven integers between them (excluding the given integers).

The integers -2 and 6 are such that $(-2) \times 6=-12$. There are seven integers, i.e. -1, 0, 1, 2, 3, 4, 5, which lie between them.
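A quick brute-force check (illustrative Python, not part of the original answer) confirms which pairs work; "seven integers between them" means the two numbers differ by 8.

    # Find integer pairs whose product is -12 with exactly seven integers
    # strictly between them (i.e. |a - b| = 8).
    pairs = [(a, -12 // a) for a in range(-12, 13)
             if a != 0 and -12 % a == 0 and abs(a - (-12 // a)) == 8]
    print(pairs)  # [(-6, 2), (-2, 6), (2, -6), (6, -2)]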
2023-03-23 00:53:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.484296590089798, "perplexity": 1237.666894058577}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944606.5/warc/CC-MAIN-20230323003026-20230323033026-00511.warc.gz"}
http://www.vallis.org/blogspace/preprints/1209.0718.html
## [1209.0718] A method to estimate the significance of coincident gravitational-wave observations from compact binary coalescence

Authors: Kipp Cannon, Chad Hanna, Drew Keppel

Date: 4 Sep 2012

Abstract: Coalescing compact binary systems consisting of neutron stars and/or black holes should be detectable with upcoming advanced gravitational-wave detectors such as LIGO, Virgo, GEO and KAGRA. Gravitational-wave experiments to date have been riddled with non-Gaussian, non-stationary noise that makes it challenging to ascertain the significance of an event. A popular method to estimate significance is to time shift the events collected between detectors in order to establish a false coincidence rate. Here we propose a method for estimating the false alarm probability of events using variables commonly available to search candidates that does not rely on explicitly time shifting the events while still capturing the non-Gaussianity of the data. We present a method for establishing a statistical detection of events in the case where several silver-plated (3--5$\sigma$) events exist but not necessarily any gold-plated ($>5\sigma$) events. We use LIGO data and a simulated, realistic, blind signal population to test our method.

#### Sep 17, 2012 1209.0718 (/preprints) 2012-09-17, 13:39
2018-04-20 20:08:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37258774042129517, "perplexity": 1897.932920232865}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125944682.35/warc/CC-MAIN-20180420194306-20180420214306-00404.warc.gz"}
https://codereview.stackexchange.com/questions/277341/simple-number-guessing-game-in-haskell
I've been learning Haskell for 2 weeks now, so I'm very new to it. I would like to have feedback on what is good/bad about my code and how I could improve in learning the functional programming paradigm.

{-- Number guessing game
 -- Generate a random number between 1 and 100
 -- Ask the user to guess the number
 -- If the user guesses the number, congratulate them
 -- If the user guesses incorrectly, tell them if they guessed too high or too low
 -- If the user runs out of guesses, tell them they lost
--}
module Main where

import System.Random
import System.Exit (die)

message :: String
message = "Welcome to the number guessing game! \n\
          \You have 10 guesses to guess the number.\n\
          \The computer will generate a random number between 1 and 100.\n\
          \Try to guess it in as few attempts as possible.\n"

askNumber :: IO Integer
askNumber = do
  putStrLn "Take a guess: "
  input <- getLine
  let number = read input :: Integer
  if number < 1 || number > 100
    then do
      putStrLn "Number must be between 1 and 100"
      askNumber
    else return number

playGame :: Integer -> IO ()
playGame secretNumber = counter 1
  where
    counter getTries = do
      let maxGeusses = 10
      putStrLn $ "You have " ++ show (maxGeusses - getTries) ++ " tries left.\n"
      guess <- askNumber
      if guess == secretNumber && getTries <= maxGeusses
        then do
          putStrLn "You guessed the number!"
          die "You won!"
        else
          if getTries < maxGeusses
            then do
              if guess > secretNumber
                then do
                  putStrLn "Your guess was too high\n" >> counter (getTries + 1)
                  playGame secretNumber
                else do
                  putStrLn "Your guess was too low\n" >> counter (getTries + 1)
                  playGame secretNumber
            else do
              die "You lost!"

main :: IO ()
main = do
  putStrLn message
  secretNumber <- randomRIO (1, 100) :: IO Integer
  playGame secretNumber

2 Answers

This is pretty good for your first project! And it mostly works!

Wall

Using the option -Wall will get the compiler to complain more about little ambiguities in your code, kinda like a linter. In this case it alerts that it's filling in the type of counter for us; we can shut it up by adding counter :: Int -> IO () right above it inside the where clause.

Style

• Why is counter called that? It contains most of the game! Maybe you can give it a better name. If not, the traditional name for a recursive sub-function that does all the work is "go". (I don't like it, but it's convention.)

• Why is getTries called that? It's neither a function nor an IO.

• Using >> like that is kinda out of place. It's the same as just having the two expressions as separate lines of the do block. That's what do is for, to save us having to use lots of >>s and >>=s.

• On the flip side, it'd be pretty normal to skip the variable input altogether. Remember that monads are functors; we can just write number <- read <$> getLine. If you prefer number :: Integer <- read <$> getLine, you'll need to add -XScopedTypeVariables to your compiler options.

• The IO expression counter (renamed go) never "returns". I'll talk more below about whether that's good in itself; the point here is that it makes the recursive calls to playGame secretNumber unreachable. It's good to treat unreachable code as a bug and squash it.

• For a human-interactive CLI tool, it's nice to be able to edit the input before hitting enter. Poking about for two seconds did not find me a working off-the-shelf solution, which is annoying but neither your problem nor mine.

• Be clear about what you're importing; either qualify the import or explicitly import just the names you want. This makes it easier for new people to figure out where stuff comes from.

• maxGeusses is spelled wrong.
Also, move it up to the where clause so it's not cluttering up the do block.

Behavior

• Why are you using die? That clearly shows up as an error (*** Exception: ExitFailure 1), which isn't appropriate even when the player looses. Probably System.Exit has some other command for non-failure exiting, but you shouldn't need it; just let the program finish naturally (i.e. let program flow get to the end of main).

• Be more careful with read, it will crash your program if it can't parse the string. readMaybe works. Actually dealing with the Maybe makes the function look pretty ugly. Finding pleasant ways to talk about things like failure is its own learning-curve in Haskell.

• While not technically wrong, it's kinda sketchy the way you're checking the number of guesses twice. A lot can go wrong in a recursive function; doing a thing right once is preferable. For example, as written, it will sometimes say you have 0 guesses left; what does that mean? In this case, we can do it all the way up as a guard on the definition of go.

Here's what I came up with while trying out all the above:

module Main where

import System.Random (randomRIO)
import Text.Read (readMaybe)

message :: String
message = "Welcome to the number guessing game! \n\
          \You have 10 guesses to guess the number.\n\
          \The computer will generate a random number between 1 and 100.\n\
          \Try to guess it in as few attempts as possible."

askNumber :: IO Integer
askNumber = do
  putStrLn "Take a guess: "
  maybeNumber :: Maybe Integer <- readMaybe <$> getLine
  maybe recurseIfUnparseable recurseIfOutOfRange maybeNumber
  where
    recurseIfUnparseable = do
      putStrLn "Only enter integers!"
      askNumber
    recurseIfOutOfRange n
      | n < 1 || n > 100 = do
          putStrLn "Number must be between 1 and 100"
          askNumber
      | otherwise = return n

playGame :: Integer -> IO ()
playGame secretNumber = go 0
  where
    maxGuesses = 10

    go :: Integer -> IO ()
    go failureCount
      | maxGuesses <= failureCount = putStrLn "You lost!"
      | otherwise = do
          putStrLn $ "You have " ++ show (maxGuesses - failureCount) ++ " tries left."
          guess <- askNumber
          if guess == secretNumber
            then putStrLn "You guessed the number; you won!"
            else do
              putStrLn (if guess > secretNumber
                          then "Your guess was too high."
                          else "Your guess was too low.")
              go (failureCount + 1)

main :: IO ()
main = do
  putStrLn message
  secretNumber <- randomRIO (1, 100) :: IO Integer
  playGame secretNumber

• This is very helpful, thanks a lot. Do you have a book that you recommend that I can read? – JHV Jun 15 at 4:49

• I'm not the person who wrote this post, but I really like "Learn You a Haskell for Great Good!" by Miran Lipovaca: learnyouahaskell.com Jun 15 at 14:21

• "Learn You A Haskell" is good; oldschool but well liked. Possibly too introductory for OP, but I don't know. Jun 15 at 23:31

• One small improvement would be to conditionally print "try" or "tries" to avoid printing "You have 1 tries left." Also, you mean, "even when the player loses," not "looses." Jun 16 at 15:19

Aside from what @ShapeOfMatter has already said, I'd suggest forcing yourself into pure functions as much as you can; that is what functional programming is designed for. Notice that when you make a guess you have four possible outcomes: either you win, lose, guess too high or guess too low. You can express that as an enumeration (an ADT in Haskell); then you can neatly define the playRound function which, given the necessary information, produces a result.
What's left is calling that function over and over again, producing the corresponding messages to the user, and making the validations (out of range, and input should be a number). This is done easily via pattern matching.

{-# LANGUAGE TypeApplications #-}
module Main where

import System.Random
import Text.Read (readMaybe)

-- Check @ShapeofMatter answer for this
data EndGame = Win | Lose | TooHigh | TooLow

-- The pure function for one guess. Clear and simple
playRound :: Int -> Int -> Int -> EndGame
playRound numguess solution guess
  | numguess <= 0     = Lose
  | guess > solution  = TooHigh
  | guess < solution  = TooLow
  | solution == guess = Win

-- This function just executes playRound given the user's input.
-- It makes some validations you can even factor out into another function
playGame :: Int -> Int -> IO ()
playGame guesses_left solution = do
  putStrLn $ "\nYou have " ++ show guesses_left ++ " tries left."
  putStrLn "Take a guess: "
  input <- readMaybe @Int <$> getLine
  case input of
    Nothing -> do
      putStrLn "input isn't a number"
      playGame guesses_left solution
    -- Input is out of range
    Just guess | (guess < 1 || guess > 100) -> do
      putStrLn "Number must be between 1 and 100"
      playGame guesses_left solution
    -- Input is ok
    Just guess -> do
      case playRound guesses_left solution guess of
        Lose -> putStrLn "You lost!"
        Win -> do
          putStrLn "You guessed the number!"
          putStrLn "You Won!"
        TooHigh -> do
          putStrLn "Your guess was too high"
          playGame (guesses_left - 1) solution
        TooLow -> do
          putStrLn "Your guess was too low"
          playGame (guesses_left - 1) solution

message :: String
message = "Welcome to the number guessing game! \n\
          \You have 10 guesses to guess the number.\n\
          \The computer will generate a random number between 1 and 100.\n\
          \Try to guess it in as few attempts as possible.\n"

main :: IO ()
main = do
  putStrLn message
  -- putStrLn "Set the max number of guesses:"
  secretNumber <- randomRIO (1, 100) :: IO Int
  playGame 10 secretNumber

Edit: Notice that guards can be used within pattern matching just like normal guards, making the code more succinct. I didn't use them in the first solution because I didn't want to introduce too much syntactic overhead. In general, whenever you find if .. then .. else .. you can substitute it with guards or with pattern matching on booleans. Many times it will improve readability, but not always... so just choose the option that reads better. If you have nested if .. then .. else .. I think it is somewhat of a code smell. The rule of thumb which works for me is "don't use if .. then .. else .. unless it clearly improves readability". The reason for that is that branching is much clearer when you have many equations via guards or pattern matching instead of a huge expression in a single if .. then .. else ..

...
    Just guess
      | guess < 1 || guess > 100 -> do
          -- Input is out of range
          putStrLn "Number must be between 1 and 100"
          playGame guesses_left solution
      | otherwise -> do
          -- Input is ok
          case playRound guesses_left solution guess of
            Lose -> putStrLn "You lost!"
            Win -> do
              putStrLn "You guessed the number!"
              putStrLn "You Won!"
            TooHigh -> do
              putStrLn "Your guess was too high"
              playGame (guesses_left - 1) solution
            TooLow -> do
              putStrLn "Your guess was too low"
              playGame (guesses_left - 1) solution

• Thanks a lot, this is very big help. I'm new to FP; do you have resources to recommend to dig deeper into FP, like principles of this paradigm, concepts, etc.? – JHV Jun 15 at 10:01

• Nice use of type application on readMaybe! Try to help new people notice and understand cool features like that.
I also really like the way you pattern-matched input with a guard to flatten out the validation logic. Jun 15 at 23:28

• @ShapeOfMatter thanks! Actually, guards work within pattern matching just like regular guards. I've added an Edit section explaining why if then else isn't a good idea in many cases. Also I provide a rewrite of the guard case so you can see there is nothing special about using guards within pattern matching ;) Jun 16 at 6:45

• @ShapeOfMatter oh! I was thinking you were the OP, sorry hahahaha. Still a useful edit though Jun 16 at 6:48
2022-07-02 05:56:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39911285042762756, "perplexity": 6751.55369828953}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103984681.57/warc/CC-MAIN-20220702040603-20220702070603-00147.warc.gz"}
https://psychology.stackexchange.com/questions/15433/what-is-the-role-of-why-universal-gas-constant-in-nernst-equation
# What is the role of (why) the universal gas constant in the Nernst equation?

The Nernst equation is an equation that relates the total voltage, i.e. the electromotive force, of the full cell at any point in time to the standard electrode potential, temperature, activity, and reaction quotient of the underlying reactions and species used. I have been wondering why the universal gas constant (R) is included in the Nernst and Goldman equations while describing the steady state of membrane potential?

• I'm voting to close this question as off-topic because this should go to physics – AliceD Jul 11 '16 at 8:45

• Are questions about neuron membrane properties off-topic here? I see a lot of them around, so I am not clear why this is any different. If such questions are off-topic, then this would be on-topic for the proposed Neuroscience stack exchange site on Area 51. Jul 11 '16 at 14:29

• @TheBlackCat, if the question is relevant for understanding membrane potentials, we ask the OP (or anyone else) to add that part in the post. Not everybody is knowledgeable about neuroscience at such a detailed level. Providing more context to the question would prevent ambiguity and confusion about the topic. Jul 11 '16 at 15:59

• @RobinKramer: It is at the end of the post, "...while describing the steady state of membrane potential?" Jul 11 '16 at 16:23

• You are absolutely right. I completely missed that part of the question. Then, since two people apparently missed it, perhaps some more emphasis on the neuro part may be handy, also to avoid further confusion :) Jul 11 '16 at 16:31
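For reference, and not taken from the post itself, these are the standard textbook forms of the equations the question refers to, which is where R appears:

    E = E^0 - \frac{RT}{zF}\ln Q
    \qquad\text{(Nernst equation for a cell reaction)}

    E_X = \frac{RT}{z_X F}\,\ln\frac{[X]_\text{out}}{[X]_\text{in}}
    \qquad\text{(single-ion form used for membrane potentials)}

Here R is the universal gas constant, T the absolute temperature, z the charge number of the ion or reaction, F the Faraday constant, and Q the reaction quotient.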
2021-09-27 06:47:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5763975977897644, "perplexity": 722.8433029778304}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058373.45/warc/CC-MAIN-20210927060117-20210927090117-00255.warc.gz"}
https://archive.mcpt.ca/cpt-editorials/jdcc/2016/december/e/
# This content is archived!

For the 2018-2019 school year, we have switched to using the WLMOJ judge for all MCPT related content. This is an archive of our old website and will not be updated.

Considering edge cases gets you $40\%$ of the points. These include: no overlap area, and one circle lying within the other.

The intersection of two circles can be seen as two circular segments joined along the common chord. The area of a segment can be found by subtracting the area of the inner isosceles triangle from the area of the corresponding circular sector. To make the problem simpler, translate the two circles so that their centers lie on the x-axis, with one of them at the origin. To do this, find the distance $d$ between the two centers; the new centers are $(0, 0)$ and $(d, 0)$. Then solve for the points of intersection. Now you have all the pieces of information you need to find the areas.

## Time complexity

$\mathcal{O}(1)$
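A short Python sketch of this approach (illustrative only, not the official reference solution): handle the two edge cases, then add the two segment areas in closed form.

    from math import acos, sqrt, pi

    def intersection_area(x1, y1, r1, x2, y2, r2):
        d = sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2)
        if d >= r1 + r2:                 # edge case: no overlap
            return 0.0
        if d <= abs(r1 - r2):            # edge case: one circle inside the other
            return pi * min(r1, r2) ** 2
        # Each r^2 * acos(...) term is a sector area; the sqrt term is the area of
        # the kite formed by the two centers and the two intersection points, so
        # subtracting it leaves exactly the two circular segments.
        a1 = r1 * r1 * acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
        a2 = r2 * r2 * acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
        kite = 0.5 * sqrt((-d + r1 + r2) * (d + r1 - r2) * (d - r1 + r2) * (d + r1 + r2))
        return a1 + a2 - kite

    print(intersection_area(0, 0, 1, 1, 0, 1))  # two unit circles at distance 1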
2021-09-26 14:55:23
{"extraction_info": {"found_math": true, "script_math_tex": 5, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5304503440856934, "perplexity": 263.2509501027951}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057882.56/warc/CC-MAIN-20210926144658-20210926174658-00569.warc.gz"}
https://tex.stackexchange.com/questions/467710/in-the-figure-form-how-to-adjust-the-whole-size-of-text-and-math-format-at
# In the figure form, how to adjust the whole size of “text” and “math” format at once? In the figure form, how to adjust the whole size of "text" and "math" format "as a combined figure" at once? If this is a usual pdf figure, we can do \includegraphics[width=4.4in], such as below, the size is tuned by [width=4.4in] \begin{figure}[htbp] \centering \includegraphics[width=4.4in]{.pdf} \caption{}\label{} \end{figure} Can we adjust the whole overall size of the figure (including the "text" and "math" format) at once by the similar function, like "size"? I hope to have a 3-times-larger figure overall. \documentclass{article} \usepackage{mathtools,amssymb} \begin{document} \begin{figure}[!h] \begin{center} \begin{gather*} \overbrace{\underbrace{A \times B}_E\times \underbrace{C\times {D}}_{EFG}}^{\text{ABCDEFG}} \\[-\normalbaselineskip] \underbrace{\kern5em}_{\text{family}} \end{gather*} \end{center} \caption{} \end{figure} \end{document} \documentclass[fleqn]{article} \usepackage{mathtools,amssymb,varwidth} \usepackage{showframe} \begin{document} \begin{figure}[!htb] \resizebox{\linewidth}{!}{% \begin{varwidth}{\linewidth} \mathindent=0pt \begin{gather*} \overbrace{\underbrace{A \times B}_E\times \underbrace{C\times {D}}_{EFG}}^{\text{ABCDEFG}}\\[-\normalbaselineskip] \underbrace{\hphantom{A\times B\times C\times D}}_{\text{family}} \end{gather*} \end{varwidth}} \caption{foo} \end{figure} \end{document} Instead of \resizebox you can also use \scalebox: [...] \begin{figure}[!htb] \centering \scalebox{3}{% \begin{varwidth}{\linewidth} [...] • Thanks, may I make sure the complete code for your "\scalebox"? – wonderich Dec 28 '18 at 20:52 • yes. By the way: If you do not need the environment gather* then it can be done in an easier way: \scalebox{3}{$\displaystyle\overbrace{\underbrace{A \times B}_E\times ...$} – user2478 Dec 28 '18 at 20:56 • actually you taught me to use "gather*" - -what was that purpose in a previous post you answer? Thanks!!! – wonderich Dec 28 '18 at 21:00 • It depends on what you want: Only one time an equation as big as possible or an equation as part of a document with other equations? In the first case it doesn't matter what you use ... – user2478 Dec 28 '18 at 21:06 • Package graphicx is needed, but it is loaded already by mathtools. At least in my TeX distribution. – user2478 Dec 28 '18 at 21:13 If you want to resize content using a specific width/height, then you can use \resizebox{<width>}{<height>} (using ! to maintain the aspect ratio if you only specify one or the other). If you want to scale content using a number, you can use \scalebox{<num>}: \documentclass{article} \usepackage{graphicx,amsmath} \begin{document} \begin{figure} \centering \scalebox{3}{$\displaystyle % If needed... \underbrace{ \overbrace{ \underbrace{ A \times B }_E \times \underbrace{ C \times D }_{EFG} }^{\text{ABCDEFG}} }_{\text{family}}$} \caption{A caption} \end{figure} \end{document} I've nested the family \underbrace as part of the bigger expression, so there's no need to manually place it based on the location. Also note the use of \centering rather than the center environment, and there's no need for using gather.
2020-07-12 20:09:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.930628776550293, "perplexity": 1500.0694308330635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657139167.74/warc/CC-MAIN-20200712175843-20200712205843-00205.warc.gz"}
http://everettbrazilianjiujitsu.com/iqobog/article.php?tag=f926f3-elasticity-physics-notes
For the same material, the three coefficients of elasticity Y, η and K have different magnitudes. Young's modulus is defined as the ratio of normal stress to the longitudinal strain within the elastic limit. The property of an elastic body by virtue of which its behaviour becomes less elastic under the action of a repeated alternating deforming force is called elastic fatigue. The internal restoring force acting per unit area of a deformed body is called stress. A material is said to be elastic if it deforms under stress (e.g., external forces) but then returns to its original shape when the stress is removed; to a greater or lesser extent, most solid materials exhibit elastic behaviour. The amount of deformation is called the strain. Depression at the free end of a cantilever is given by δ = Wl³ / (3YI_G), where W is the load, l the length of the cantilever and I_G the geometrical moment of inertia of its cross-section. The ratio between the adiabatic and isothermal elasticity of a gas is γ = Cp / Cv, the ratio of the specific heats at constant pressure and at constant volume.

Elasticity is that property of an object by virtue of which it regains its original configuration after the removal of the deforming force. It is also described as the ability of a deformed material body to return to its original shape and size when the forces causing the deformation are removed; substances that display a high degree of elasticity are termed "elastic". The change in the shape or size of a body when external forces act on it is determined by the forces between its atoms or molecules. For liquids, the modulus of rigidity is zero. Solids are more elastic and gases are least elastic; steel is more elastic than rubber.

Elasticity definition (physics): (i) Normal stress: if the deforming force is applied normal to the area, the stress is called normal stress; if there is an increase in length the stress is called tensile stress, and if there is a decrease in length it is called compression stress. (ii) Tangential stress: if the deforming force is applied tangentially, the stress is called tangential stress.

Elastic limit: the elastic limit is the upper limit of deforming force up to which, if the deforming force is removed, the body regains its original form completely, and beyond which, if the deforming force is increased, the body loses its property of elasticity and gets permanently deformed. The maximum value of deforming force for which elasticity is present in the body is called its limit of elasticity. Perfectly elastic bodies are those which regain their original configuration immediately and completely after the removal of the deforming force, e.g. quartz, phosphor bronze. Plastic bodies are bodies which do not show a tendency to recover their original configuration on the removal of the deforming force; perfectly plastic bodies do not regain their original configuration at all, e.g. putty, paraffin, wax. On a stress-strain curve, from J to K the material flowed like a fluid; such behaviour is called plastic flow. After a region K to L of partial elastic behaviour, plastic flow continued from L to M. The materials which show a large plastic range beyond the elastic limit are called ductile materials, e.g. copper, silver, iron, aluminium; ductile materials are used for making springs and sheets. The minimum value of stress required to break a wire is called the breaking stress, and safety factor = breaking stress / working stress.

The temporary delay in regaining the original configuration by an elastic body after the removal of the deforming force is called the elastic after-effect. The time delay in restoring the original configuration after removal of the deforming force is called the elastic relaxation time.

The fractional change in configuration is called strain. According to the change in configuration, the strain is of three types: (1) longitudinal strain = change in length / original length; (2) volumetric strain = change in volume / original volume; (3) shearing strain. Within the elastic limit, stress / strain = constant; this constant, E, is known as the modulus of elasticity (or coefficient of elasticity) of the material of the body. Elastic modulus or Young's modulus definition: the ratio of stress and strain is called the modulus of elasticity or elastic modulus. The modulus of rigidity is defined as the ratio of tangential stress to the shearing strain, within the elastic limit. Young's modulus (Y) and the modulus of rigidity (η) are possessed by solid materials only. Compressibility of a material is the reciprocal of its bulk modulus of elasticity (K); its SI unit is N⁻¹ m² and its CGS unit is dyne⁻¹ cm². Relation between Y, η and K: 9/Y = 1/K + 3/η, or Y = 9Kη / (η + 3K).

Poisson's ratio: when a deforming force is applied at the free end of a suspended wire of length l and radius R, its length increases by dl but its radius decreases by dR, so two types of strain are produced by a single force. Poisson's ratio (σ) = lateral strain / longitudinal strain = (−ΔR/R) / (Δl/l). The theoretical value of Poisson's ratio lies between −1 and 0.5; its practical value lies between 0 and 0.5.

Elastic potential energy in a stretched wire: potential energy U = average force × increase in length = 1/2 × stress × strain × volume of the wire. Elastic potential energy of a stretched spring = 1/2 kx², where k = force constant of the spring and x = change in length.

Torsion of a cylinder: the relation between the angle of twist (θ) and the angle of shear (φ) is rθ = lφ, or φ = rθ / l, where r is the radius, l the length of the cylinder and η the modulus of rigidity of its material. Work done in twisting the cylinder through an angle θ is W = 1/2 Cθ², where C is the restoring couple per unit twist.

Thermal stress: when the temperature of a gas enclosed in a vessel is changed, the thermal stress produced is equal to the change in pressure (Δp) of the gas. A beam clamped at one end and loaded at the free end is called a cantilever. Note: the fourth state of matter, in which the medium is in the form of positive and negative ions, is known as plasma; plasma occurs in the atmosphere of stars (including the sun) and in discharge tubes.

Practical applications of elasticity: bungee jumping utilizes a long elastic strap which stretches until it reaches a maximum length that is proportional to the weight of the jumper. A related question on collisions: a van fails to stop as it approaches a line of traffic and hits a stationary car, and they move forwards together; is this an elastic or an inelastic collision?

Elastic Hysteresis. Factors Affecting Elasticity. Relation Between Volumetric Strain, … What is Hooke's law and how is it applicable to the concept of elasticity?

Elasticity in economics: elasticity is a measure of a variable's sensitivity to a change in another variable; most commonly this sensitivity is the change in price relative to changes in other factors. Price elasticity of demand, also called the elasticity of demand, refers to the degree of responsiveness of the quantity demanded with respect to price, and the midpoint method can be used to calculate elasticities. Perfectly elastic demand refers to a situation when demand is infinite at the prevailing price; in this case the elasticity of demand is infinite, and the slightest rise in price causes the quantity demanded of the commodity to fall to zero. In a case where demand is very elastic, that is, where the demand curve is almost flat, you can see that if the price changes from $.75 to $1, the quantity demanded decreases by a lot. How elasticity affects the incidence of a tax, and who bears its burden, is a further question treated in economics. The Class 12 microeconomics chapter "Elasticity of Demand" is a numerical-based chapter on elasticity of demand, price elasticity of demand and its measurement, also discussing the factors affecting it.

Preface (from "Theory of Elasticity: Exam Problems and Answers", Lecture CT5141, previously B16, Delft University of Technology, Faculty of Civil Engineering and Geosciences, Structural Mechanics Section): this lecture book contains the problems and answers of the elasticity theory exams from June 1997 until January 2003.
From a general summary to chapter summaries to explanations of famous quotes, the SparkNotes Elasticity Study Guide has everything you need to ace quizzes, tests, and essays. 4 The World Demand for Oil . Elasticity Of Demand Cbse Notes For Class 12 Micro by learncbse.in. e.g., quartz and phosphor bronze etc. On this page you can read or download elasticity notes for 12th physics in PDF format. genius PHYSICS Elasticity 5 9.5 Types of Solids. Those bodies which does not regain its original configuration at all on the removal of deforming force are called perfectly plastic bodies, e.g., putty, paraffin, wax etc. Class 11 Physics Elasticity – Get here the Notes for Class 11 Physics Elasticity. Candidates who are ambitious to qualify the Class 11 with good score can check this article for Notes. The property of matter by virtue of which it regains its original configuration after removing the deforming force is called elasticity. Elasticity of Demand: The degree of responsiveness of demand to the […] Limit of Elasticity. Elasticity is the property of solid materials to return to their original shape and size after the forces deforming them have been removed. To get fastest exam alerts and government job alerts in India, join our Telegram channel. Hoogenboom CT5141 August 2003 21010310399. 3 Defining and Measuring Elasticity The price elasticity of demand is the ratio of the percent change in the quantity demanded to the percent change in the price as we move along the demand curve. 4, DD is the perfectly elastic demand curve which is parallel to OX-axis. There are three types of modulus of elasticity, Young’s modulus, Shear modulus, and Bulk modulus. It has been assembled … Learn about the deforming force applied on an elastic object and how the stress and strain works on an object. AS Level Physics Notes and Worksheets. The SI unit applied to elasticity is the pascal (Pa), which is used to measure the modulus of deformation and elastic limit. Its unit is N/m2 or Pascal and its dimensional formula is [ML-1T-2]. Coefficient of elasticity depends upon the material, its temperature and purity but not on stress or strain. y = Young’s modulus of elasticity, and IG = geometrical moment of inertia. Breaking stress is fixed for a material but breaking force varies with area of cross-section of the wire. γ = coefficient of cubical expansion of the gas. (iii) Shearing strain = Angular displacement of the plane perpendicular to the fixed surface. 
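A quick worked example of the physics relations above, with assumed, illustrative numbers: take a steel wire of length l = 2 m and cross-sectional area A = 1 mm2 = 10−6 m2 under a load F = 100 N, with Young's modulus of steel taken as Y ≈ 2 × 1011 N/m2. Then

$$\text{stress} = \frac{F}{A} = 10^{8}\ \mathrm{N/m^2},\qquad \text{strain} = \frac{\text{stress}}{Y} = 5\times10^{-4},\qquad \Delta l = \text{strain}\times l = 1\ \mathrm{mm},$$

and the stored elastic potential energy is U = 1/2 × stress × strain × volume = 1/2 × 108 × 5 × 10−4 × (2 × 10−6 m3) = 0.05 J, which is the same as 1/2 F Δl.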
2021-12-07 06:57:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6204622387886047, "perplexity": 1672.3419847712373}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363336.93/warc/CC-MAIN-20211207045002-20211207075002-00153.warc.gz"}
https://www.nature.com/articles/s41370-019-0175-9?utm_source=commission_junction&utm_medium=affiliate&error=cookies_not_supported&code=fb3342af-4e8f-490d-b878-72f13a2eb017
# Potted plants do not improve indoor air quality: a review and analysis of reported VOC removal efficiencies ## Abstract Potted plants have demonstrated abilities to remove airborne volatile organic compounds (VOC) in small, sealed chambers over timescales of many hours or days. Claims have subsequently been made suggesting that potted plants may reduce indoor VOC concentrations. These potted plant chamber studies reported outcomes using various metrics, often not directly applicable to contextualizing plants’ impacts on indoor VOC loads. To assess potential impacts, 12 published studies of chamber experiments were reviewed, and 196 experimental results were translated into clean air delivery rates (CADR, m3/h), which is an air cleaner metric that can be normalized by volume to parameterize first-order loss indoors. The distribution of single-plant CADR spanned orders of magnitude, with a median of 0.023 m3/h, necessitating the placement of 10–1000 plants/m2 of a building’s floor space for the combined VOC-removing ability by potted plants to achieve the same removal rate that outdoor-to-indoor air exchange already provides in typical buildings (~1 h−1). Future experiments should shift the focus from potted plants’ (in)abilities to passively clean indoor air, and instead investigate VOC uptake mechanisms, alternative biofiltration technologies, biophilic productivity and well-being benefits, or negative impacts of other plant-sourced emissions, which must be assessed by rigorous field work accounting for important indoor processes. ## Introduction Inhabitants of developed countries spend up to 90% of their lives indoors [1]. As such, the quality of indoor air is critical to human exposure to pollution. Indoor pollution is composed of myriad constituents, which include oxidants and irritants, volatile organic compounds (VOC), and particulate matter (PM) [2,3,4,5,6,7,8,9,10]. Much, though not all, of indoor pollution is sourced directly from the indoor environment itself. VOC concentrations particularly are driven by indoor emissions, traceable to building materials and furnishings [11], use of consumer products and air fresheners [12], and cooking [13], among others. VOCs may be a primary cause of many sick building syndrome (SBS) symptoms and other health problems associated with indoor air [14,15,16,17,18]. Oxidation of VOCs can also produce secondary organic aerosols [19,20,21,22,23,24,25], which compound the PM burden and may pose harmful health risks themselves [26,27,28]. To reduce VOCs and other indoor-sourced pollutants from the indoor environment, buildings traditionally make use of infiltration and natural or mechanical ventilation air exchange [29], which is the replacement of stale indoor air with fresh air from the outdoors. Higher ventilation rates have been correlated with lower absenteeism and SBS symptom incidences, reductions in perceptions of odors, and increased task performance [30,31,32,33,34,35]. However, increased ventilation may augment the indoor concentration of outdoor-sourced pollutants, such as ozone and PM [9, 10, 36,37,38]. Increased ventilation also typically uses more energy [39,40,41], as outdoor air must be conditioned to be thermally comfortable. To address these drawbacks, alternative means of purifying indoor air to replace or supplement ventilation air are being investigated. Experiments have demonstrated the ability of potted plants to reduce airborne VOC concentrations within sealed chambers. 
Many studies which carried out these experiments subsequently draw conclusions that potted plants may improve indoor air quality, spurring a presence of nonacademic resources (predominantly online) touting the use of houseplants as a sustainable means of cleaning indoor air. However, the experimental results of the underlying scientific works are often reported in ways such that they cannot simply be extrapolated into impacts in real indoor environments. Typical for these studies, a potted plant was placed in a sealed chamber (often with volume of ~1 m3), into which a single VOC was injected, and its decay was tracked over the course of many hours or days [42,43,44,45,46,47,48,49,50,51,52]. In contrast, building volumes are much larger than that of an experimental chamber, and VOC emissions are persistent. Also, indoor air is continuously exchanged with the outdoors. For instance, the median of measured residence times for air in US offices is about 50 min [53], and 80 min for US homes [19, 54, 55], corresponding to air exchange rates (AER) of 1.2 and 0.75 h−1, respectively, contrasting sharply with the long timescales needed for the chamber experiments to produce meaningful VOC reductions. Some endeavors to minimize these differences between chambers and indoor environments have been pursued in studies, though not all issues have been resolved. For instance, Xu et al. [56] attempted to mirror more realistic conditions in what they referred to as a “dynamic” chamber, but no mention of air exchange was explicitly found in their work. Liu et al. [57] incorporated continuous airflow into their experiments, with constant upstream benzene concentrations of about 150 ppb. However, they maintained a very small chamber volume, inflating the relative influence of the plants. Sorption of VOCs onto the surfaces of the chamber is sometimes, but not always considered by these studies, which may be the cause of some of the observed VOC decay, rather than uptake by the plants. Other studies have proposed improvements to the design of plant chamber experiments, but they focused on conditions such as temperature, humidity, and carbon dioxide concentrations (all of which may impact plant health), instead of parameters which affect pollutant-building interactions [58, 59]. A few field campaigns have tried to measure the impact of plants within indoor environments, although Girman et al. [60] documented in detail the likely inaccuracies of the measuring equipment used in these studies. More importantly, none of them controlled or measured the outdoor air exchange rate. Conclusions can therefore not be drawn about the influence of plants versus the influence of VOC removal by air exchange. Of these studies, however, Dingle et al. [61] found no reduction in formaldehyde until plant density reached 2.44 plants/m2, at which point only a 10% reduction was seen. Wood et al. [62] claimed to observe VOC reductions of up to 75% within plant-containing offices at high VOC loadings, but they only sampled 5-min measurements once each week and neglected to report air exchange. Only two publications were found that not only acknowledge these issues, but explicitly refute the notion that common houseplants improve indoor air quality. They were written by Girman et al. [60] and Levin [63]. 
Those works, authored by indoor air and building scientists, discuss in detail the history and limitations of the chamber and field studies, and provide a mass balance calculation that highlights the predicted ineffectiveness of using potted plants to remove VOCs from indoor air. Building upon that foundation, the work herein presents a review and impact analysis of removal rates reported by 12 cited works, most of which were conducted after the 1992 publication by Levin [63]. Among these works, the metrics used to report VOC removal are inconsistent, so comparisons and reproducibility are difficult to assess, as is predicting indoor air impacts. The present analysis thus first standardizes 196 experimental results into a metric useful for measuring indoor air cleaning, and then uses those standardized results to assess the effectiveness of using potted plants to remove VOCs and improve indoor air quality. ## Methodology ### Standardization of reported VOC removal Within the building sciences, the indoor air-cleaning potential of a standalone device is parameterized with the clean air delivery rate (CADR). The CADR is the effective volumetric flow rate at which “clean” air is supplied to the environment, reflecting the rate at which the air cleaner removes pollutants. It is the product of the flow rate of air through the air cleaner (Qac, m3/h) and its removal efficiency (η), so CADR = Qacη (m3/h). The same air cleaner will have a greater impact in a smaller environment, so to gauge the impact of an air cleaner within the context of the indoor space it occupies, CADR must be normalized by the relevant indoor volume (V, m3). This CADR/V (h−1) parameter corresponds to a first-order loss rate constant (i.e., the rate of pollutant removal is proportional to the pollutant concentration). Given that sufficient information is provided by a chamber study (e.g. physical chamber characteristics, experimental parameters), a CADR-per-plant (CADRp, m3 h−1 plant−1) can be computed using its results. The experimental procedures of the 12 considered studies used one of two general experimental setups. The first setup (setup I) assumes a perfectly sealed chamber with no VOC sources, with uptake by the plant being the only loss mechanism, so the corresponding differential mass balance equation is: $$V_{\mathrm{c}}\frac{\mathrm{d}C}{\mathrm{d}t} = -\mathrm{CADR}_{\mathrm{p}}\,C,$$ (1) where C represents the VOC concentration in the chamber; Vc (m3) is the volume of the chamber; and t (h) is time. Integrating Eq. 1 gives: $$C_t = C_0\,e^{-\left(\frac{\mathrm{CADR}_{\mathrm{p}}}{V_{\mathrm{c}}}\right)t},$$ (2) where C0 is the initial concentration within the chamber; and Ct is the concentration in the chamber after t hours have elapsed. Using data provided by the chamber studies, the CADRp can be computed by rearranging Eq. 2: $$\mathrm{CADR}_{\mathrm{p}} = -\frac{V_{\mathrm{c}}}{t}\ln\left(\frac{C_t}{C_0}\right).$$ (3) The second experimental setup (setup II) consists of steady state conditions in a flow-through chamber, instead of pollutant decay occurring in a sealed chamber. Equations 1–3 no longer apply to this condition.
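As a concrete illustration of the setup I calculation, the short sketch below evaluates Eq. 3 for a hypothetical decay measurement; the chamber volume, elapsed time, and concentration ratio are illustrative placeholders rather than values taken from any particular reviewed study.

```python
import math

def cadr_p_sealed(chamber_volume_m3: float, elapsed_h: float, c_ratio: float) -> float:
    """Per-plant CADR (m3/h) from a sealed-chamber decay (Eq. 3),
    where c_ratio = C_t / C_0 measured after elapsed_h hours."""
    return -(chamber_volume_m3 / elapsed_h) * math.log(c_ratio)

# Hypothetical example: a 1 m3 sealed chamber in which the VOC concentration
# falls to one-third of its initial value over 24 h.
print(round(cadr_p_sealed(1.0, 24.0, 1.0 / 3.0), 4))  # ~0.046 m3/h per plant
```

Even this generous single-VOC decay corresponds to an effective clean air delivery of only a few hundredths of a cubic meter per hour.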
For setup II, the differential mass balance is described by the difference between the source term (inlet flow) and the loss terms (outlet flow + plant filtration): $$V_{\mathrm{c}}\frac{\mathrm{d}C}{\mathrm{d}t} = Q_{\mathrm{c}}C_{\mathrm{inlet}} - \left(Q_{\mathrm{c}} + \mathrm{CADR}_{\mathrm{p}}\right)C_{\mathrm{outlet}},$$ (4) where Qc (m3/h) is the flow rate through the chamber; Cinlet is the VOC concentration entering the chamber through its inlet; and Coutlet is the VOC concentration exiting the chamber (where C = Coutlet). Solving for CADRp under steady state conditions yields: $$\mathrm{CADR}_{\mathrm{p}} = \frac{Q_{\mathrm{c}}}{\left(C_{\mathrm{outlet}}/C_{\mathrm{inlet}}\right)} - Q_{\mathrm{c}}.$$ (5) The biases produced by neglecting surface sorption (in both setups) and chamber leakage (in setup I) from the mass balance equations (Eqs. 1 and 4, respectively) implicitly favor the efficacy of the plant removal, thereby providing absolute best-case estimates of the CADRp for the reviewed chamber studies. ### Description of considered chamber experiments A CADRp dataset was developed using the results of 12 published studies, comprising 196 potted plant chamber experiments. The experimental details of the 12 publications are summarily presented in Table 1, with further experimental detail and CADRp calculation results provided in the supplementary information (SI). All experiments measured VOC removal by a single plant within a controlled chamber, and one CADRp was computed for each experiment per plant per VOC species removed. However, the 12 studies reported their results in a variety of inconsistent metrics, as follows. Some studies only displayed plots of pollutant decay. Others included tables listing an initial concentration and the concentration after a certain amount of time (e.g. 24 h). Some reported the drop in concentration per hour (in reality, the concentration reduction each hour will not be constant, because removal is likely first order, not linear). Furthermore, some normalized their results by the surface area of plant leaf, while others did not measure leaf area at all—though if anything, large leaf surface areas may hinder VOC uptake, as the leaves serve to block air from passing over the growth substrate, which can dominate VOC removal [44, 64]. Table 1 broadly categorizes the studies into three groups based on their experimental setups and how their results were reported, each necessitating a different approach to determining CADRp values: (1) a sealed chamber (setup I) presenting only initial and final concentration measurements (or their ratios) for a certain duration of time; (2) a sealed chamber (setup I) presenting a timeseries of concentration measurements; and (3) a flow-through chamber (setup II) presenting Cinlet and Coutlet measurements. For the first category, Eq. 3 was used to compute CADRp values for the experiments. Aydogan and Montoya [42] tabulated the time taken for two-thirds of the initial formaldehyde to be removed for four different plant species. Orwell et al. [47] tabulated the average 24-h removal of benzene (C0 − Ct) from an initial dose (C0) for seven plant species, while Orwell et al. [48] tabulated the time required to reach Ct/C0 = 0.5 for various combinations of plant species with toluene and xylene. Wolverton et al. [49] tabulated the percent of formaldehyde, benzene, and trichloroethylene (TCE) removed after 24 h for several plant species. Yoo et al.
[51] reported removal per hour per leaf area (ng m−3 h−1 cm−2) for four plants removing benzene and toluene, providing initial concentrations and leaf surface areas. This CADRp calculation was carried out assuming their reported numbers corresponded to the first hour of the chamber experiment. Yang et al. [50] presented results similarly for five VOCs across several plant species organized qualitatively by performance (i.e., “superior,” “intermediate,” and “poor” performing plants). Zhang et al. [52] used a genetically modified version of Pothos Ivy, designed to enhance VOC uptake, and provided a percent reduction of concentration achieved over a timespan of days. The CADRp results for these studies are detailed in Table S1. For the second category, a CADRp value was computed using Eq. 3 for each reported point in the timeseries. Their average was taken as the overall CADRp for that experiment. Irga et al. [43] plotted the percent of benzene removed for two plant setups over the course of four days. Kim et al. [45] took hourly measurements over a 5-h period of the cumulative concentration reduction of formaldehyde normalized by leaf area (µg m−3 cm−2) for dozens of plant species spanning four categories. Their 36 woody and herbaceous foliage plants were used for this dataset. Given the leaf area of all plant species and an initial concentration in the chamber, conversion to CADRp was possible. Kim et al. [46] plotted concentration over time for two distinct plant species removing three different VOCs. The CADRp results for these studies are detailed in Table S2. For the third category, computing CADRp necessitated the use of Eq. 5. The Coutlet/Cinlet expression within Eq. 5 is the fraction of the inlet VOC remaining at the outlet, so 1 − (Coutlet/Cinlet) is the fractional VOC removal, which Liu et al. [57] reported using setup II for benzene. Three of their plant species yielded 60–80% removal, 17 species yielded 20–40%, another 17 yielded 10–20%, 13 removed less than 10%, and 23 did not yield any benzene removal. These CADRp results are detailed in Table S3. ### Assessing effectiveness of potted plants as indoor air cleaners The most prominent way by which VOCs are removed from indoor spaces is outdoor-to-indoor air exchange. Air flows through a building at a certain flow rate (Qb, m3/h), which may be a combination of mechanical ventilation, natural ventilation, and uncontrolled infiltration through the building envelope. Typically, Qb scales with building size, so the volume-normalized flow, which is the air exchange rate (called AER or λ, h−1), is used to parameterize building airflow, where λ = Qb/V. This metric, as with CADR/V, is a first-order loss rate constant. Consequently, λ and CADR/V can be directly compared to assess the relative efficacy of each removal type. For air cleaning to be considered effective, the loss rate due to the air cleaner (CADR/V) must be of the same order as, or higher than, the air exchange (λ) loss rate. So, if λ ≫ CADR/V, most of the pollution removal is accomplished via air exchange alone. If λ ≪ CADR/V, the air cleaner is responsible for most of the removal. If λ = CADR/V, the two loss mechanisms have the same influence.
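To make this comparison concrete, the following sketch converts a handful of potted plants into a volume-normalized loss rate and sets it against typical air exchange rates; the room size and plant count are assumed for illustration, and the per-plant CADR is the median value reported in this analysis (0.023 m3 h−1 plant−1).

```python
# Minimal sketch comparing plant-driven VOC loss (CADR/V) with air exchange (lambda).
# The room geometry and plant count are assumptions; CADR_P is the dataset median.
CADR_P = 0.023        # m3/h per plant (median per-plant CADR)
FLOOR_AREA = 50.0     # m2, assumed zone floor area
CEILING_HEIGHT = 2.5  # m, as used in this analysis
N_PLANTS = 5          # assumed number of potted plants in the zone

volume = FLOOR_AREA * CEILING_HEIGHT       # m3
cadr_over_v = N_PLANTS * CADR_P / volume   # h^-1, first-order loss by plants

for aer in (0.75, 1.2):  # typical US home and office AERs, h^-1
    print(f"plants: {cadr_over_v:.5f} h^-1 vs air exchange: {aer:.2f} h^-1 "
          f"(plants/AER ratio ~ {cadr_over_v / aer:.4f})")
```

Even with five plants in a modest room, the plant-driven loss rate is roughly three orders of magnitude smaller than typical air exchange, which is the λ ≫ CADR/V regime described above.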
For the case of multiple indoor potted plants combining their individual CADRp to remove VOCs from an indoor environment, the net CADR/V loss rate may be computed given the density of plants in a given floor area (ρp, plants/m2), and the volume of the considered building in terms of the product of an average ceiling height (h, m) and the given floor area (A, m2) by: $$\frac{\mathrm{CADR}}{V} = \frac{\left(\mathrm{CADR}_{\mathrm{p}}\,\rho_{\mathrm{p}}\,A\right)}{\left(hA\right)} = \frac{\mathrm{CADR}_{\mathrm{p}}\,\rho_{\mathrm{p}}}{h}$$ (6) so that CADR/V depends on CADRp, ρp, and h. Since the ceiling height h is likely far less varied than CADRp or ρp throughout the US building stock, excluding atriums, it is taken as a constant h = 2.5 m ≈ 8 ft throughout the following analysis. Comparisons of plant and AER loss mechanisms may be quantified by the effectiveness parameter (Γ), defined as the fraction of VOC removal for which plant-induced air cleaning alone is responsible: $$\Gamma = \frac{(\mathrm{CADR}/V)}{\lambda + (\mathrm{CADR}/V)}$$ (7) Thus, Γ is bounded by 0 and 1. If Γ ≈ 0 (λ ≫ CADR/V), the air cleaner is wholly ineffective compared to air exchange loss; if Γ ≈ 1 (λ ≪ CADR/V), the air cleaner dominates removal; and if Γ = 0.5 (λ = CADR/V), the air cleaner and air exchange losses contribute equally to total removal. Substituting the right-hand side of Eq. 6 for (CADR/V) in Eq. 7 facilitated a simulation-based parametric analysis of the effectiveness of VOC removal by potted plants indoors. ## Results and discussion ### CADR of potted plants in reviewed studies In total, 196 CADRp values were computed from the 12 reviewed chamber studies. A histogram expressing this entire dataset is provided in Fig. 1a, which possesses a wide spread of nearly four orders of magnitude (ranging from 0.0004–0.2 m3 h−1 plant−1 at the 10th and 90th percentiles), a median CADRp = 0.023 m3 h−1 plant−1, and a mean (standard deviation) of 0.062 (0.089) m3 h−1 plant−1. Even though these CADRp values represent best-case scenarios (as they were computed assuming negligible chamber sorption and leakage), their magnitudes are exceedingly small. For context, typical gas or particle air cleaners possess average CADR values on the approximate order of ~100 m3/h [65,66,67]. Figure 1b resolves all 196 datapoints contributing to the Fig. 1a histogram by type of VOC measured, labeled by the study's first author and reference number. This figure thus explores the possibility of constraining CADRp for each VOC. Some of the data preliminarily indicate that certain VOCs may be more efficiently removed by potted plants; for instance, Kim et al. [44,45,46] observed better formaldehyde removal than for xylene, and Wolverton et al. [49] observed a much lower TCE removal than for formaldehyde and benzene. However, these trends are not consistent throughout all studies; for instance, Yang et al. [50] observed similar removal of TCE, benzene, and toluene. Also, not enough studies assessed the same combinations of VOCs for a definitive trend to be established. Furthermore, some results vary widely from study to study even for the same VOC. More notably, however, the variance of CADRp values belonging to a particular study is much smaller than the variance of the dataset as a whole (intra-study values range 1–2 orders of magnitude, as compared to the total CADRp range of ~4 orders of magnitude). For example, of the 46 CADRp values calculated from Kim et al.
[44,45,46], 32 of them (70%) reside above 0.1 m3 h−1 plant−1, making up 84% of the total of 38 CADRp values greater than 0.1 m3 h−1 plant−1. On the other end of this spectrum, all CADRp values belonging to Irga et al. [43] and Yang et al. [50] were less than 0.001 m3 h−1 plant−1, making up all but one other CADRp below 0.001 m3 h−1 plant−1. The one remaining CADRp in this lowest-performing interval belongs to Zhang et al. [52], who also conducted an experiment with chloroform, despite their use of genetically modified plants shown to enhance VOC uptake. We believe these trends suggest that the varying VOC removal performance among different research studies may be an indicator of differences among removal measurement methodologies, which should be further investigated. These perhaps include measurement techniques, plant and rhizosphere health, and other characteristics and relative sizes of the chamber, soil, pot, or the plant itself (e.g. VOC sorption onto competing surfaces). ### Effectiveness in typical buildings Using the entire CADRp dataset (Fig. 1a), Eq. 6 was used to compute four sets of total CADR/V loss rates, binned into four distinct plant density (ρp) cases separated at logarithmic intervals (0.1, 1, 10, and 100 plants/m2). In Fig. 2, these loss rates are compared directly to a distribution representing the AERs typical of US residences [54, 55] and another representing AERs typical of US offices [53]. Again, these two types of loss rates can be directly compared to demonstrate their relative impacts on VOC removal. The two boxes corresponding to ρp values of 0.1 and 1 plants/m2 are barely visible, so their corresponding loss rates are almost certain to be negligible, even if plants exhibiting the highest plausible CADRp are used. For ρp = 10 plants/m2, some of the loss rates due to VOC removal by plants from the upper end of the CADRp distribution may be comparable to air exchange losses in particularly tight buildings, but the median CADR/V is still negligible compared to the median AER for both residences and offices. This assessment is in strong agreement with the conclusions of Girman et al. [60] and Levin [63]. Using similar mass balance calculations and the most generous selection of the early published Wolverton et al. [49] data, Levin [63] determined that a ~140 m2 house (1500 ft2) would require 680 houseplants (i.e., ρp = 4.9 plants/m2) for the removal rate of VOCs by plants indoors to just reach 0.096 h−1. Achieving such plant densities throughout a building is obviously not attainable. Even ρp = 1 plants/m2 would rule out any useful occupant-driven architectural programming being applied to a building, and it would take a theoretical ρp = 100 plants/m2 for the entire CADR/V loss rate distribution to be comparable to the AER distributions as a whole. A parametric analysis was used to predict the ρp required to achieve a desired effectiveness for various combinations of AER and representative CADRp. The analysis computed the ρp required for Γ varied between 0 and 1 and AER varied between 0.1 and 10 h−1, thus exhausting all Γ possibilities and all reasonably expected indoor AERs in typical buildings. The CADRp was set at one of three discrete cases. The first was a low CADRp case, corresponding to the 10th percentile of the complete CADRp dataset (0.00014 m3 h−1 plant−1); the second used the median of the CADRp dataset (0.023 m3 h−1 plant−1); while the third used the 90th percentile (0.19 m3 h−1 plant−1).
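This parametric calculation amounts to inverting Eqs. 6 and 7 for ρp; a minimal sketch is shown below, using the three CADRp cases quoted in the preceding paragraph, with the target effectiveness and AER chosen purely for illustration.

```python
def required_plant_density(gamma: float, aer: float, cadr_p: float,
                           ceiling_height: float = 2.5) -> float:
    """Plants per m2 of floor area needed for plants to account for a fraction
    `gamma` of total VOC removal (Eqs. 6 and 7 rearranged for rho_p)."""
    return gamma * aer * ceiling_height / ((1.0 - gamma) * cadr_p)

# Per-plant CADR cases (m3/h per plant) from the preceding paragraph
cases = {"low (10th pct)": 0.00014, "median": 0.023, "high (90th pct)": 0.19}

# Illustrative target: plants match air exchange (gamma = 0.5) at a home-like AER of 0.75 h^-1
for label, cadr_p in cases.items():
    rho_p = required_plant_density(gamma=0.5, aer=0.75, cadr_p=cadr_p)
    print(f"{label}: ~{rho_p:,.0f} plants/m2")
```

Under these assumptions, even the 90th-percentile plants would need to be packed at roughly ten per square meter of floor just to match the removal already provided by air exchange, and the median case requires on the order of eighty per square meter.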
The ρp predictions are presented as contour plots in Fig. 3, which are binned at factor-of-ten intervals from ρp < 1 to ρp > 10,000 plants/m2. At the strongest-case CADRp assumptions (Fig. 3c), an effectiveness of ~20% may be realized in an extremely low-AER building (e.g. λ < 0.2 h−1) if one potted plant is used per square meter of the indoor floor area. This effectiveness quickly falls off if an even slightly higher air exchange rate is experienced. But, as was stated, this ρp = 1 plants/m2 is too dense to be practical within a building, and it barely registers as effective under the most generous CADRp and AER assumptions. Under the more likely plant-removal characteristics (Fig. 3a, b), any legitimate effectiveness, even in buildings with the lowest air exchange, would require ρp values that are not only impractical or infeasible indoors, but are ludicrously large. Note again that the analyses in this section were carried out with a best-case CADRp dataset, which computed CADRp assuming neither chamber leakage nor surface sorption contributed to observed losses, so even these impossibly large ρp values essentially represent a lower bound. ### Other considerations The conditions within sealed chambers do not scale up to the conditions of real indoor environments, which have high AER, large volumes, and persistent VOC emissions. Our conclusion that plants have negligible impact on indoor VOC loads is consistent with the results of field studies that did not observe real VOC reductions when plants were placed in buildings. Despite potted plants not appreciably affecting indoor VOC concentrations, conducting chamber experiments on plants can remain a consequential effort. There is much to still be learned pertaining to the mechanisms of botanical uptake of VOCs. And, other applications of botanical filtration do exist (although passively cleaning indoor air is not one of them). Potential usefulness for further research perhaps lies in plant-assisted botanical bio-trickling purifiers (colloquially, “biowalls” or plant walls), which mechanically pull air through a porous substrate supporting plants and their root ecosystems [68,69,70]. These may create a more effective means of VOC removal because of their size, exposed rhizosphere, and controlled and continuous airflow. Some recent studies suggest that biowalls may yield CADRs on orders of 10–100 m3/h for certain VOCs [71, 72], with the potential to make worthy contributions to indoor VOC removal. However, more biowall field assessments and modeling endeavors are required to better hone our understanding of their true air cleaning and cost effectiveness. Regardless of application, more rigor is required in future chamber experiments to remove methodological ambiguities. First-order loss must be used to interpret results, and chamber leakage and surface sorption (to the chamber walls as well as to the pot and soil) must be accounted for. A standardized metric to be used in mass balance calculations, such as the CADR, should also be a critical aspect of future experimental reporting. Research also suggests that the plant itself is less crucial to VOC removal than the microbial community which resides within the rhizosphere/soil system of the plant [73, 74]. The issue of bringing plant life into the indoor environment is also a complex one, not settled by a potted plant’s (in)ability to reduce airborne VOCs. 
Indoor plants, by helping to create a more biophilic indoor environment, may have a positive impact on occupant well-being [75], which may also translate into productivity improvements for businesses. However, plant introduction may also come with certain costs or trade-offs. One potential associated downside of plants indoors may be increased humidity. Also, plants have been shown to produce certain VOCs under particular conditions [76, 77]. So even if a potted plant works to slightly reduce, for instance, the persistence of formaldehyde indoors, its net impact on total VOC concentrations and overall indoor air quality is less clear. Spores and other bioparticle emissions may also be produced by plants, which have been observed from biowall systems [65, 74, 75]. Continued rigorous laboratory and field studies are required to develop a more complete and nuanced understanding of the interplay between plants and indoor environmental outcomes. ## References 1. 1. Klepeis NE, Nelson WC, Ott WR, Robinson JP, Tsang AM, Switzer P, et al. The National Human Activity Pattern Survey (NHAPS): a resource for assessing exposure to environmental pollutants. J Exposure Sci Environ Epidemiol. 2001;11:231–52. 2. 2. Weschler CJ. Ozone’s impact on public health: contributions from indoor exposures to ozone and products of ozone-initiated chemistry. Environ Health Perspect. 2006;114:1489–96. 3. 3. Wallace L. Indoor particles: a review. J Air Waste Manag Assoc. 1996;46:98–126. 4. 4. Wallace L. Indoor sources of ultrafine and accumulation mode particles: size distributions, size-resolved concentrations, and source strengths. Aerosol Sci Technol. 2006;40:348–60. 5. 5. Weschler CJ, Shields HC. Production of the hydroxyl radical in indoor air. Environ Sci Technol. 1996;30:3250–8. 6. 6. Weschler CJ, Nazaroff WW. Semivolatile organic compounds in indoor environments. Atmos Environ. 2008;42:9018–40. 7. 7. Brown SK, Sim MR, Abramson MJ, Gray CN. Concentrations of volatile organic compounds in indoor air—a review. Indoor Air. 1994;4:123–34. 8. 8. Morawska L, Afshari A, Bae GN, Buonanno G, Chao CYH, Hänninen O, et al. Indoor aerosols: from personal exposure to risk assessment. Indoor Air. 2013;23:462–87. 9. 9. Johnson AM, Waring MS, DeCarlo PF. Real-time transformation of outdoor aerosol components upon transport indoors measured with aerosol mass spectrometry. Indoor Air. 2017;27:230–40. 10. 10. Avery AM, Waring MS, DeCarlo PF. Seasonal variation in aerosol composition and concentration upon transport from the outdoor to indoor environment. Environ Sci: Process Impacts. 2019;21:528–47. 11. 11. Uhde E, Salthammer T. Impact of reaction products from building materials and furnishings on indoor air quality—a review of recent advances in indoor chemistry. Atmos Environ. 2007;41:3111–28. 12. 12. Nazaroff WW, Weschler CJ. Cleaning products and air fresheners: exposure to primary and secondary air pollutants. Atmos Environ. 2004;38:2841–65. 13. 13. Huang Y, Ho SSH, Ho KF, Lee SC, Yu JZ, Louie PKK. Characteristics and health impacts of VOCs and carbonyls associated with residential cooking activities in Hong Kong. J Hazard Mater. 2011;186:344–51. 14. 14. Brinke JT, Selvin S, Hodgson AT, Fisk WJ, Mendell MJ, Koshland CP, et al. Development of new volatile organic compound (VOC) exposure metrics and their relationship to “sick building syndrome” symptoms. Indoor Air 1998;8:140–52. 15. 15. Jones AP. Indoor air quality and health. Atmos Environ. 1999;33:4535–64. 16. 16. Wallace LA. 
Corresponding author: Michael S. Waring.

Cummings, B.E., Waring, M.S. Potted plants do not improve indoor air quality: a review and analysis of reported VOC removal efficiencies. J Expo Sci Environ Epidemiol 30, 253–261 (2020). https://doi.org/10.1038/s41370-019-0175-9

### Keywords

• Empirical/statistical models
• Volatile organic compounds
• Exposure modeling
2020-08-08 17:57:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6260975003242493, "perplexity": 11499.303309572555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738015.38/warc/CC-MAIN-20200808165417-20200808195417-00113.warc.gz"}
http://sitacuisses.blogspot.com/2016/11/why-i-write-careful-posts-on.html
## Wednesday, November 16, 2016

### Why I write careful posts on nonsensical topics

Basically, because I'm not allowed to write or talk about work-related matters. So I apply my considerable intelligence, broad knowledge, and unbeatable modesty to things like the differences between powerlifting and bodybuilding (and the superiority of the former over the latter), using the standard B-school two-by-two matrix format (click for bigger):

I also take to task people who think that knowledge is superfluous as long as their intentions are good (or at least consistent with the current "virtuous" narrative). For example, I did congratulate TIME for not using a photo of cooling towers for this article (unlike almost everyone else who uses images of cooling towers' steam to write about pollution), but I do have to point out that most of what's seen coming out of those stacks is also steam. First, the color and the shape of the expansion give that away, but even if they didn't, gaseous $\mathrm{CO}_{2}$ is transparent, as is water vapor. (Steam is liquid water suspended in water vapor.) And soot and other common pollutants have distinctive colors; that white means water.

If you're surprised that combustion would generate water vapor, which condenses when it expands at the top of the stack, remember that hydrocarbon-based fuel combustion is mostly $\mathrm{C}_{n}\mathrm{H}_{m} + (n + m/4)\, \mathrm{O}_{2}\rightarrow n\, \mathrm{CO}_{2} + (m/2)\, \mathrm{H}_{2}\mathrm{O},$ and most of the rest (nitrous and sulfurous compounds, metals, soot and ash, the souls of the damned) are removed from the smoke before it's allowed to leave through the stacks (because of laws against pollution).

Sometimes I do take the nonsense dial to 11 --- but all the calculations are correct.

About a year ago, when I temporarily changed the name of this blog to Project 2016, the idea was to track non-work related learning, which is one of my hobbies; but time constraints made me choose between actually learning stuff and blogging about it, and I chose the learning. So, expect some more carefully thought-out nonsense. Careful thinking is another one of my hobbies, so I practice it even on nonsensical topics. I have very strange hobbies: another one is moving heavy objects for no immediate purpose, like this gentleman.

Live long and prosper -- JCS
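For a concrete instance of that combustion formula, take methane, i.e. $n=1$ and $m=4$ (my example, not from the original post): the general equation above reduces to

$\mathrm{CH}_{4} + 2\,\mathrm{O}_{2} \rightarrow \mathrm{CO}_{2} + 2\,\mathrm{H}_{2}\mathrm{O},$

so every molecule of CO$_{2}$ leaving the stack is accompanied by two molecules of water, which is consistent with the point that the visible white plume is mostly condensed water.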
2017-05-01 04:25:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4556092619895935, "perplexity": 2299.4839253827454}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917127681.50/warc/CC-MAIN-20170423031207-00420-ip-10-145-167-34.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/3302785/definition-of-adjoint-functors-in-leinsters-basic-category-theory
# Definition of adjoint functors in Leinster's “Basic Category Theory”

Here is how the definition of adjoint functor reads on page 41 of Leinster's "Basic Category Theory":

I understood this part thanks to Definition of Adjunction in Category Theory. It clarifies that the adjunction is a natural isomorphism between certain functors. However, as Remark 2.1.2 in the book seems to suggest, Leinster is postponing this interpretation to Chapter 4 and transcribing the definition of natural isomorphism for these functors:

Why is he doing so?

• Probably he planned to introduce natural transformations only in chapter 4, for some reason.. – Berci Jul 24 at 17:12
• @Berci but natural transformations are introduced in chapter 1... – Rodrigo Jul 24 at 17:13
• Ahh, indeed? Then I see no reason.. – Berci Jul 24 at 17:14

Actually the isomorphisms are a little bit more than natural transformations. Suppose the isomorphisms are denoted by $$\eta_{A,B}:\mathscr{B}(F(A),B)\to\mathscr{A}(A,G(B))$$; then for fixed $$A\in\mathscr{A}$$, $$\eta_{A,-}$$ is a natural transformation $$\mathscr{B}(F(A),-)\Rightarrow\mathscr{A}(A,G(-))$$, and similarly for fixed $$B\in\mathscr{B}$$, $$\eta_{-,B}$$ is a natural transformation $$\mathscr{B}(F(-),B)\Rightarrow\mathscr{A}(-,G(B))$$. The "weird" statements in the definition are just saying this. These conditions make $$\eta_{-,-}$$ a natural transformation between bifunctors.
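Spelled out explicitly (a standard way of writing the two naturality conditions, using the $$\eta_{A,B}$$ notation from the answer above rather than quoting the book), the requirements are that for every $$f\in\mathscr{B}(F(A),B)$$, every $$g: B\to B'$$ in $$\mathscr{B}$$ and every $$h: A'\to A$$ in $$\mathscr{A}$$,

$$\eta_{A,B'}(g\circ f)=G(g)\circ\eta_{A,B}(f) \qquad\text{and}\qquad \eta_{A',B}(f\circ F(h))=\eta_{A,B}(f)\circ h.$$

The first equation is naturality of $$\eta_{A,-}$$ in the second variable, and the second equation is naturality of $$\eta_{-,B}$$ in the first variable.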
2019-12-09 13:52:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9509836435317993, "perplexity": 762.5630008446484}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540518882.71/warc/CC-MAIN-20191209121316-20191209145316-00300.warc.gz"}
https://publications.mfo.de/handle/mfo/1079?show=full
dc.contributor.author  Ivanov, Anatoli F.
dc.contributor.author  Trofimchuk, Sergei I.
dc.date.accessioned  2014-05-13T12:00:00Z
dc.date.accessioned  2016-10-05T14:13:58Z
dc.date.available  2014-05-13T12:00:00Z
dc.date.available  2016-10-05T14:13:58Z
dc.date.issued  2014-05-13
dc.identifier.uri  http://publications.mfo.de/handle/mfo/1079
dc.description  Research in Pairs 2013  en_US
dc.description.abstract  Several aspects of global dynamics and the existence of periodic solutions are studied for the scalar differential delay equation $x'(t) = a(t)f(x([t-K]))$, where $f(x)$ is a continuous negative feedback function, $x \cdot f(x) < 0$ for $x \neq 0$, $0\leq a(t)$ is continuous and $\omega$-periodic, $[\cdot]$ is the integer part function, and the integer $K \geq 0$ is the delay. The case of integer period $\omega$ allows for a reduction to finite-dimensional difference equations. The dynamics of the latter are studied in terms of corresponding discrete maps, including the partial case of interval maps $(K = 0)$.  en_US
dc.language.iso  en  en_US
dc.publisher  Mathematisches Forschungsinstitut Oberwolfach  en_US
dc.relation.ispartofseries  Oberwolfach Preprints;2014,08
dc.subject  Periodic differential delay equations  en_US
dc.subject  Discretizations  en_US
dc.subject  Difference equations  en_US
dc.subject  Periodic solutions and their stability/instability  en_US
dc.subject  Global dynamics  en_US
dc.subject  Reduction to discrete and one-dimensional maps  en_US
dc.subject  Interval maps  en_US
dc.title  On periodic solutions and global dynamics in a periodic differential delay equation  en_US
dc.type  Preprint  en_US
dc.identifier.doi  10.14760/OWP-2014-08
local.scientificprogram  Research in Pairs 2013
local.series.id  OWP-2014-08
local.subject.msc  34
local.subject.msc  37
local.subject.msc  39
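Because $[\,\cdot\,]$ freezes the delayed argument on each unit interval, the equation can be stepped from integer to integer, which is essentially the reduction to a difference equation mentioned in the abstract. The sketch below is only an illustration with assumed ingredients (it is not taken from the preprint): $f(x)=-x$, which satisfies the negative feedback condition, $a(t)=1+0.5\sin(2\pi t)\geq 0$ with period $\omega=1$, and delay $K=1$.

```python
import numpy as np

# Assumed example ingredients (not from the preprint):
f = lambda x: -x                                  # negative feedback: x*f(x) < 0 for x != 0
a = lambda t: 1.0 + 0.5 * np.sin(2 * np.pi * t)   # continuous, nonnegative, 1-periodic
K = 1                                             # integer delay

def int_a(n, steps=1000):
    """Trapezoid approximation of the integral of a over [n, n+1]."""
    s = np.linspace(n, n + 1, steps + 1)
    v = a(s)
    return (v[0] / 2 + v[1:-1].sum() + v[-1] / 2) / steps

# On [n, n+1) the equation reads x'(t) = a(t) * f(x(n - K)), so integrating gives
# the difference equation  x(n+1) = x(n) + f(x(n-K)) * integral_n^{n+1} a(s) ds.
x = [0.5] * (K + 1)                               # initial history x(-K), ..., x(0)
for n in range(40):
    x.append(x[-1] + f(x[-1 - K]) * int_a(n))

print(np.round(x[-6:], 4))                        # tail of the reduced trajectory
```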
2020-03-29 19:08:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6226727962493896, "perplexity": 13213.584257621635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370495413.19/warc/CC-MAIN-20200329171027-20200329201027-00354.warc.gz"}
https://crypto.stackexchange.com/questions/53559/how-to-encrypt-a-message-such-when-a-certain-condition-is-met-it-can-be-decrypte
# How to encrypt a message such that when a certain condition is met it can be decrypted?

Assume there is a certificate authority whose public key, $pk$, is known. Also, the certificate $c$ is known, but not the signed certificate. I want to encrypt a message $m$ such that whoever has a signed certificate can decrypt the message. Note that the signed certificate may not be present at encryption time. To clarify, the signed certificate must be valid and related to $c$ and $pk$.

Question: What encryption can support the above scenario? I'm aware that witness encryption may help but it's so inefficient.

Edit Application: assume party $A$ has a message $m$ and encrypts it as $Enc(m)$, such that when he dies, whoever gets his death certificate (i.e. a signed message from a certain authority that confirms his death) can decrypt the ciphertext $Enc(m)$. So the question is: what encryption scheme can the encryptor use?

• What is certificate $c$ supposed to attest? What's it for? Also, a "signed certificate" is a signed public key. How do you suppose that one can decrypt with a public key, and what does this signed certificate attest? – Artjom B. Dec 2 '17 at 9:54
• What is a ‘certificate’ that is not a ‘signed certificate’? Wild guess: Maybe you're looking for identity-based encryption? – Squeamish Ossifrage Dec 3 '17 at 0:00
• thanks for the comments, could you please see the edit section of my question. – Ay. Dec 4 '17 at 11:12

You can use a standard asymmetric encryption like RSA. Certificates merely serve the purpose of linking things to other things, approved by some (hopefully trusted and trustworthy) entity. What you're probably thinking about is an ID certificate, which links an identity to a public key. This key is in no way encrypted. It simply occurs in a field of the certificate. Signing the certificate does not change its fields. It just adds a signature of everything else. This means that anyone with access to the certificate has access to the public key, be the certificate signed or not. Signing the certificate merely means: "I approve of this."

Regarding your edit: when you want to decrypt something, having the ciphertext alone is not enough to recover the plaintext, given that the encryption is secure. You need some other information. Note that I don't say what that other information is, but we know for certain that additional information is needed, because if it weren't, anyone who obtained the ciphertext could compute the plaintext, which would mean that the encryption is not secure. We now need to determine a possible piece of additional information α. That information has to come from somewhere. It doesn't just emerge for no reason because some outside event happened. You dictated in your edit that α has to be found on the death certificate. To have any chance of encrypting something using any such information, we need to require at least that these conditions are met:

• As the encrypter, we need to know some mathematical property of α. Note that we don't necessarily need to know α itself.
• α needs to be of a nature such that there are plenty of possible variations for different α with the same structure. Otherwise an attacker could brute-force α.
• The attacker must not know α or be able to reduce the solution space to a small number of known elements.
If it's not possible for the encrypter to collaborate with the authority issuing the death certificate, there is no α we can find, because we can only use the information found on any normal death certificate: name of the issuer, name of the person who died and other personal information like date of birth or address, date of death, location of death, file number, and signature of the issuer.

The name of the person and their other personal information is not a secret. As the encrypter, we don't have control over the signature of the issuer nor over the file number, as we cannot collaborate with the authority. The attacker will already know what authority issues the death certificate, so the issuer is known. So we're left with date and location. The date is only stated with the precision of days, and an attacker can pin the date of death down to within roughly 100 years, which is 36524 days and therefore 36524 possibilities (if we're generous). The location of death is only stated to the precision of the municipality where the person died, so there aren't many options for this either. Even for large countries like the US, there are only 19492 options. So we have a total solution space of 36524*19492 = 711925808 < 10^9 elements. This is far too small to exclude brute-force attacks, and even this assumes we have total control over where in the country the person dies and how old they become, which is impractical. So there is no possible α to choose from, and therefore there cannot be any such encryption.

• thanks for the answer, could you please see the edit section of my question. – Ay. Dec 4 '17 at 11:12
• thanks, but witness encryption can do it. I need something more efficient.... – Ay. Dec 8 '17 at 10:26
• It can't. You still need a large enough secret. This is fundamentally always the case. – UTF-8 Dec 9 '17 at 1:42

Think of it as using DSA or RSA on an encrypted certificate. The user signs an encrypted certificate. Now, when the user tries to read the encrypted certificate, first the signature is confirmed using the user's public key. Only after that confirmation completes does the user get access to the decryption of the certificate.

• thanks for the answer, could you please see the edit section of my question. – Ay. Dec 4 '17 at 11:12

You seem to want to send a message based on a not-yet-verified certificate. This is a two-part problem: one part is making sure only the owner of the matching private key can decrypt; the other is requiring that the certificate be signed at some point. The first is easy. The second, not so much, especially if you want to use standard PKI, which will pad and add randomness, so the signed certificate isn't even uniquely defined. However, I suspect a smart contract may allow solving this.

• thanks for the answer, could you please see the edit section of my question. – Ay. Dec 4 '17 at 11:12
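A quick check of the search-space estimate in the first answer (the day and municipality counts are that answer's own figures):

```python
# Size of the guessable part of a death certificate, per the answer above.
days = 36524            # roughly 100 years of candidate death dates
places = 19492          # quoted number of US municipalities
space = days * places

print(space)            # 711925808
print(space < 2**30)    # True: fewer than 2^30 candidates, trivially brute-forceable
```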
2020-02-27 03:13:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46874263882637024, "perplexity": 919.7418382531764}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146643.49/warc/CC-MAIN-20200227002351-20200227032351-00246.warc.gz"}
https://www.projecteuclid.org/euclid.tjm/1264170242
Tokyo Journal of Mathematics Higher Dimensional Compacta with Algebraically Closed Function Algebras Kazuhiro KAWAMURA Abstract For a compact Hausdorff space $X$, $C(X)$ denotes the ring of all complex-valued continuous functions on $X$. We say that $C(X)$ is \textit{algebraically closed} if every monic algebraic equation with $C(X)$-coefficients has a root in $C(X)$. Modifying the construction of [2], we show that, for each $m = 1,2, \cdots, \infty$, there exists an $m$-dimensional compact Hausdorff space $X(m)$ such that $C(X(m))$ is algebraically closed. Article information Source Tokyo J. Math., Volume 32, Number 2 (2009), 441-445. Dates First available in Project Euclid: 22 January 2010 https://projecteuclid.org/euclid.tjm/1264170242 Digital Object Identifier doi:10.3836/tjm/1264170242 Mathematical Reviews number (MathSciNet) MR2589955 Zentralblatt MATH identifier 1197.54044 Citation KAWAMURA, Kazuhiro. Higher Dimensional Compacta with Algebraically Closed Function Algebras. Tokyo J. Math. 32 (2009), no. 2, 441--445. doi:10.3836/tjm/1264170242. https://projecteuclid.org/euclid.tjm/1264170242 References • G. E. Bredon, Sheaf Theory, McGraw-Hill, New York 1967. • N. Brodskiy, J. Dydak, A. Karasev and K. Kawamura, Root closed function algebras on compacta of large dimensions, Proc. Amer. Math. Soc., 135 (2007), 587–596. • R. S. Countryman, Jr., On the characterization of compact Hausdorff $X$ for which $C(X)$ is algebraically closed, Pacific J. Math., 20 (1967), 433-438. • E. A. Gorin and V. Ja. Lin, Algebraic equations with continuous coefficients and certain questions of the algebraic theory of braids, Mat. Sb. 78 (120) (1969), 579-610, English translation: Math. USSR Sbornik, 7 (1969), 569–596. • V. L. Hansen, Braids and Coverings, London Math. Soc. Student Text 18, Cambridge Univ. Press, 1989. • O. Hatori and T. Miura, On a characterization of the maximal ideal spaces of commutative $C^\ast$-algebras in which every element is the square of another, Proc. Amer. Math. Soc., 128 (1999), 1185–1189. • D. Honma and T. Miura, On a characterization of compact Hausdorff space $X$ for which certain algebraic equations are solvable in $C(X)$, Tokyo J. Math., 31 (2007), 403–416. • K. Kawamura and T. Miura, On the existence of continuous (approximate) roots of algebraic equations, Top. Appl., 154 (2007), 434–442. • T. Miura and K. Niijima, On a characterization of the maximal ideal spaces of algebraically closed commutative $C^\ast$-algebras, Proc. Amer. Math. Soc., 131 (2003), 2869–2876. • E. L. Stout, The theory of uniform algebras, Bogden-Quigley, 1971.
2019-09-18 04:02:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6646439433097839, "perplexity": 1137.0529765464428}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573176.51/warc/CC-MAIN-20190918024332-20190918050332-00052.warc.gz"}
http://mathhelpforum.com/trigonometry/191665-question-about-periodicity.html
Ok, as we know, if $f(x+P) = f(x)$ then the function is periodic. So if the function is not periodic and $f(x+P) = f(x)$, what is the value of $P$?

It's just a positive real number; for example, $f(x)=\sin(x)$ is periodic with period $2\pi$, which is a positive real number.

But in that case $f(x)$ is periodic. For a function that is not periodic, is the period a positive real number too?

Originally Posted by Fabio010: Ok, as we know, if $f(x+P) = f(x)$ then the function is periodic. So if the function is not periodic and $f(x+P) = f(x)$, what is the value of $P$?

Is this always true: $f(x+0)=f(x)~?$

So you are telling me that for non-periodic functions $P = 0$?

If you know that $f(x)$ is not periodic but for all $x$ you have $f(x+P)=f(x)$, then $P=~?$
2013-12-12 03:55:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9412015080451965, "perplexity": 291.3276214190637}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164447901/warc/CC-MAIN-20131204134047-00055-ip-10-33-133-15.ec2.internal.warc.gz"}
https://zbmath.org/?q=an:0898.58020
# zbMATH — the first resource for mathematics

Classification of Poisson structures. (English. Russian original) Zbl 0898.58020 Dokl. Math. 54, No. 2, 706-709 (1996); translation from Dokl. Akad. Nauk, Ross. Akad. Nauk 350, No. 3, 304-307 (1996).

From the text: “In this work, Poisson structures that vanish at some point are classified. To find the normal forms of such Poisson structures, we apply the method of spectral sequences. The spectral sequences that we use are spectral sequences of Poisson cohomologies. Using a $$\mu$$-adic filtration in a Poisson complex, we find the normal forms of Poisson structures and formulate the conditions of their sufficiency”.

##### MSC:

37J99 Dynamical aspects of finite-dimensional Hamiltonian and Lagrangian systems
37G05 Normal forms for dynamical systems
37A30 Ergodic theorems, spectral theory, Markov operators
2021-01-18 01:58:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6173964738845825, "perplexity": 717.3487806010917}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703514046.20/warc/CC-MAIN-20210117235743-20210118025743-00383.warc.gz"}
https://cs.stackexchange.com/questions/6871/how-does-the-parallel-radix-sort-work
# how does the parallel radix sort work?

I'm currently learning computer science, and there is a slide of notes that briefly describes the parallel radix sort under data parallelism.

number 101 110 011 001 111
(1st bit) order 2 1 3 4 5
(new order) number 110 101 011 001 111
(2nd bit) order 3 1 4 2 5
(new order) number 101 001 110 011 111
(3rd bit) order 3 1 4 2 5
(new order) number 001 011 101 110 111

I roughly know how to sort it from the lecturer's explanation, but how is it related to parallel computing to increase the performance?

It turns out that within each round of radix sort, we can take advantage of parallelism. We need to reorder the keys (in a stable manner) according to the relevant bit. The simplest way to do this in parallel would be as follows:

/* perform one round of radix-sort on the given input
 * sequence, returning a new sequence reordered according
 * to the kth bit */
front = filter(input, kth bit is zero)
back = filter(input, kth bit is one)
return concatenate(front, back)

In this approach, on each round, we "filter" the sequence twice. The first filter selects the subsequence of elements whose kth bit is zero. The second filter similarly selects the elements whose kth bit is one. We then complete the round by returning the concatenation of these two sequences. Here's a trace of your small example:

round 0: input = [101, 110, 011, 001, 111]  front = [110]       back = [101, 011, 001, 111]
round 1: input = [110, 101, 011, 001, 111]  front = [101, 001]  back = [110, 011, 111]
round 2: input = [101, 001, 110, 011, 111]  front = [001, 011]  back = [101, 110, 111]
round 3: input = [001, 011, 101, 110, 111]  (done)

Now all we have to do is explain how to do filter and concatenate in parallel. Assuming sequences are just implemented as arrays, concatenate is pretty simple. All we have to do is allocate an output array of the appropriate size and then, in parallel, write all the elements out to this new array:

function concatenate(a, b):
    n = length(a)
    m = length(b)
    result = allocate(n+m)
    for i from 0 to n+m-1 do in parallel:
        if i < n:
            result[i] = a[i]
        else:
            result[i] = b[i-n]
    return result

Implementing filter in parallel is not so immediately obvious. The basic idea which is typically used in practice is to do a parallel prefix sum to count, at each position, how many elements satisfying the predicate precede that position. This gives you the index of each surviving element of the output, so that in parallel you can write them into an output array.

With this implementation, you can do each round of radix sort in $O(n)$ work and $O(\log n)$ parallel time. This gives you a work-efficient radix sort with total work $O(wn)$ and parallel time $O(w \log(n))$, assuming a bit width of $w$. In practice, you would want to optimize this implementation to not allocate too many intermediate arrays, and to hopefully only make a single "pass" over the input on each round.

There are many ways to do it. The following approach splits the work fairly between many cores. I believe it's used even in GPU implementations of radix sort, such as the ones provided by Boost.Compute and CUDA Thrust. I describe here one pass of LSD radix sort that distributes data into R buckets:

• First stage: split the input block into K parts, where K is the number of cores sharing the work.
Each core builds a histogram for one part of the data, counting how many elements from this part should go into each bucket: Cnt[part][bucket]++

• Second stage: wait until all cores have finished stage one, and then compute an exclusive prefix sum over the counts, thus revealing the initial output index of each bucket for every part of the data. This is a sequential step.

• Third stage: each core again processes its own part of the data, sending each element into the position determined by Cnt[part][bucket].

In pseudocode the entire pass looks like the following (the middle loop replaces each Cnt[part][bucket] by the total count of all earlier (bucket, part) pairs, i.e. an exclusive prefix sum, so that the writes in the third stage start at the correct offsets):

parallel_for part in 0..K-1
    for i in indexes(part)
        bucket = compute_bucket(a[i])
        Cnt[part][bucket]++

base = 0
for bucket in 0..R-1
    for part in 0..K-1
        count = Cnt[part][bucket]
        Cnt[part][bucket] = base
        base = base + count

parallel_for part in 0..K-1
    for i in indexes(part)
        bucket = compute_bucket(a[i])
        out[Cnt[part][bucket]++] = a[i]
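To make the counting variant concrete, here is a small serial Python sketch of one pass. The split into parts stands in for the per-core work, and the names `K`, `R` and `compute_bucket` follow the pseudocode above; everything else (byte-sized buckets, the sample data) is my own illustration rather than anything from the original answer.

```python
def radix_pass(a, K=4, R=256, shift=0):
    """One LSD pass: distribute a into R buckets by byte (x >> shift) & (R-1),
    using per-part histograms exactly as in the pseudocode above (serially)."""
    n = len(a)
    bounds = [n * p // K for p in range(K + 1)]       # K contiguous parts
    compute_bucket = lambda x: (x >> shift) & (R - 1)

    # First stage: per-part histograms.
    cnt = [[0] * R for _ in range(K)]
    for part in range(K):
        for i in range(bounds[part], bounds[part + 1]):
            cnt[part][compute_bucket(a[i])] += 1

    # Second stage: exclusive prefix sum over (bucket, part) pairs.
    base = 0
    for bucket in range(R):
        for part in range(K):
            cnt[part][bucket], base = base, base + cnt[part][bucket]

    # Third stage: scatter each element to its output position.
    out = [None] * n
    for part in range(K):
        for i in range(bounds[part], bounds[part + 1]):
            b = compute_bucket(a[i])
            out[cnt[part][b]] = a[i]
            cnt[part][b] += 1
    return out

data = [0x3A1, 0x05F, 0x3A0, 0x17F, 0x001]
for sh in (0, 8):                  # two byte-passes sort these 16-bit keys
    data = radix_pass(data, shift=sh)
print(data)                        # [1, 95, 383, 928, 929]
```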
2020-02-29 01:38:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4483047425746918, "perplexity": 1608.1615230022094}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875148163.71/warc/CC-MAIN-20200228231614-20200229021614-00434.warc.gz"}
https://gmatclub.com/forum/integers-x-and-y-are-both-positive-and-x-y-how-many-different-com-197455.html
# Integers x and y are both positive, and x > y. How many different committees of y people can be chosen from a group of x people?

Math Expert (Bunuel), 07 May 2015:

Integers x and y are both positive, and x > y. How many different committees of y people can be chosen from a group of x people?

(1) The number of different committees of x-y people that can be chosen from a group of x people is 3,060.
(2) The number of different ways to arrange x-y people in a line is 24.

Kudos for a correct solution.

Manager, 07 May 2015:

Choosing y people out of x = xCy = $$\frac{x!}{(y!)(x-y)!}$$

A) choosing x-y out of x = xC(x-y) = $$\frac{x!}{(x-x+y)!(x-y)!}$$ = $$\frac{x!}{(y!)(x-y)!} = 3060$$ --- Sufficient

B) No. of ways arranging x-y people in line = 24 => (x-y)! = 24 => x-y = 4, still no info about x and y in particular --- Insufficient

GMAT Tutor (IanStewart), 07 May 2015:

I'll give kudos to anyone who can post a correct solution without using any algebra. There is one. Nice question Bunuel!
Math Expert (Bunuel), 11 May 2015:

KAPLAN OFFICIAL SOLUTION:

The first step in this problem is to determine what we are really being asked. If we want to select committees of y people from a group of x people, we should use the combinations formula, which is n!/[k!(n-k)!]. Remember, in this formula n is the number with which we start and k is the number we want in each group. Thus, we can reword the question as: what does x!/[y!(x-y)!] equal?

Statement 1 tells us how many committees of x-y people we can make from our initial group of x people. If we plug this information into the combinations formula, we get x!/[(x-y)!(x-(x-y))!] = 3,060. This can be simplified to x!/[(x-y)!(x-x+y)!] = 3,060, which in turn is simplified to x!/[(x-y)!y!] = 3,060. The simplified equation matches the expression in our question, and gives us a numerical solution for it. Therefore, statement 1 is sufficient.

Statement 2 tells us how many ways we can arrange a number of people. The formula for arrangements is simply n!. In this case we have x-y people, thus (x-y)! = 24. Therefore, x-y must equal 4. However, we have no way of calculating what x and y actually are. This means that we cannot calculate the number of combinations in our question. Statement 2 is insufficient.

So our final answer choice for this Data Sufficiency question is (A): Statement 1 is sufficient on its own, but Statement 2 is not.

~Bret Ruber

Current Student, 19 Jul 2015:

IanStewart wrote: I'll give kudos to anyone who can post a correct solution without using any algebra. There is one. Nice question Bunuel!

Correct me if I am wrong. As 5C2 = 5C3, so likewise xCy = xC(x-y), and we have been given x > y. So from A we directly get the value. B is insufficient as we don't know anything about the exact values of x or y.
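A quick numerical illustration of the two statements (my own example values; note that 3,060 = C(18, 4), so x = 18, y = 14 is one scenario consistent with statement 1):

```python
from math import comb

# Statement (1): C(x, x-y) = 3060. By the symmetry C(x, y) == C(x, x-y),
# the asked-for count C(x, y) must also be 3060, whatever x and y are.
x, y = 18, 14
print(comb(x, x - y), comb(x, y))      # 3060 3060

# Statement (2): (x-y)! = 24 only fixes x - y = 4; different (x, y) pairs
# with x - y = 4 give different committee counts, so it is insufficient.
print(comb(10, 6), comb(18, 14))       # 210 3060
```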
2018-11-19 22:02:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6140654683113098, "perplexity": 1674.105399976928}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746112.65/warc/CC-MAIN-20181119212731-20181119234731-00415.warc.gz"}
https://gmatclub.com/forum/a-fishing-boat-receives-1-50-for-each-tuna-it-brings-back-to-port-and-238938.html
# A fishing boat receives $1.50 for each tuna it brings back to port and $1.70 for each mackerel

nguyendinhtuong, 25 Apr 2017:

A fishing boat receives $1.50 for each tuna it brings back to port and $1.70 for each mackerel. How many mackerels did it bring back to port yesterday?

(1) Yesterday the number of tuna that the boat brought back was 5 less than twice the number of mackerel brought back.
(2) Yesterday the boat received a total of $2,930 from tuna and mackerel brought back.

Source: GMAT Free

Senior CR Moderator, 04 May 2017:

Let $$x, y$$ be respectively the number of tuna and the number of mackerel.

(1) We have $$x=2y-5$$. However, we cannot know the values of $$x$$ and $$y$$. Insufficient.

(2) We have $$1.5x+1.7y=2930$$. We still don't know their values. Insufficient.

Combining (1) and (2) we have $$\Big \{ \begin{array}{lr} x-2y =-5 \\ 1.5x+1.7y =2930 \end{array} \implies \Big \{ \begin{array}{lr} x = 1245\\ y =625 \end{array}$$ Sufficient: the boat brought back 625 mackerel.

The answer is C.
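A quick integer check of the combined statements (multiplying the revenue equation by 10 to stay in whole numbers):

```python
# x = number of tuna, y = number of mackerel
# (1) x = 2y - 5    (2) 1.5x + 1.7y = 2930, i.e. 15x + 17y = 29300
# Substituting (1) into (2): 15(2y - 5) + 17y = 29300  ->  47y = 29375
y = 29375 // 47            # 625 mackerel
x = 2 * y - 5              # 1245 tuna
print(x, y)                # 1245 625
print(15 * x + 17 * y)     # 29300, i.e. $2,930 of revenue
```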
2018-06-18 09:59:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2609032094478607, "perplexity": 11257.905930043424}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267860168.62/warc/CC-MAIN-20180618090026-20180618110026-00483.warc.gz"}
https://www.lessonplanet.com/teachers/one-two-three-isaac-newton-and-me-9th-12th
# One Two three Isaac Newton and Me

Young scholars use the learning cycle to develop the concepts of Newton's Laws and apply these concepts to travel in space.
2018-11-18 04:25:02
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9362950325012207, "perplexity": 1633.1498929015647}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743963.32/warc/CC-MAIN-20181118031826-20181118053826-00302.warc.gz"}
https://math.stackexchange.com/questions/2039873/if-p-is-a-polynomial-with-simple-zeros-then-sum-k-1n-fraca-kmpa
# If $P$ is a polynomial with simple zeros, then $\sum_{k=1}^n\frac{{a_k}^m}{P'(a_k)}=0$ So this is an exercise from my complex analysis course. Let $P(z)$ be a polynomial of degree $n\geq 2$ with simple zeros $a_1, \dots, a_n$. I'm trying to prove that $$\sum_{k=1}^n\frac{{a_k}^m}{P'(a_k)}=0$$ for $m=0, \dots, n-2$ My first thought is to use induction. When $n=2$ we obtain the desired identity by straightforward computation. Assume that the identity holds for $n=N$. Consider a polynomial $P$ of degreee $N+1$ with simple zeros $a_1, \dots, a_{N+1}$. Since $P$ has at most $N+1$ zeros, the $a_k$'s must be all of them, each with multiplicity $1$. Therefore we can write $$P(z)=c\prod_{k=1}^{N+1} (z-a_k)$$ where $c\in \mathbb{C}$ is a constant. Letting $Q(z)=c\prod_{k=1}^{N} (z-a_k)$, we have $P(z)=(z-a_{N+1})Q(z)$. Therefore $$P'(z)=Q(z)+(z-a_{N+1})Q'(z)$$ Hence $$P'(a_k)=Q(a_k)+(a_k-a_{N+1})Q'(a_k)=(a_k-a_{N+1})Q'(a_k)$$ for $k=1, \dots, N$. Moreover, $P'(a_{N+1})=Q(a_{N+1})$. By induction hypothesis, we have $$\sum_{k=1}^N\frac{{a_k}^m}{Q'(a_k)}=0$$ for $m=1, \dots, N-2$. We want to show that $$\sum_{k=1}^{N+1}\frac{{a_k}^m}{P'(a_k)}=0$$ for $m=1, \dots, N-1$. But I just can't simplify the first one to get the second. Can someone help me? Am I on the right track (do we really have to use induction)? Any effort is appreciated! • Consider $\oint_{|z|=R} \frac{z^m{\rm d}z}{P(z)}$. Evaluate this using the residue theorem and estimate the integral using the estimation lemma. Take $R\to\infty$ in the end. – Winther Dec 2 '16 at 3:12 • @Winther Thanks! I didn't realize it was that easy! :-) So there is nothing special about $n-2$, and we need it only because the integral vanishes for $0\leq m\leq n-2$ (by estimation lemma). Is that correct? – Liebster Jugendtraum Dec 2 '16 at 3:18 • Yes this argument only works when $m \leq n-2$, but it's easy to show by example that it can fail when $m = n-1$. Take for example if $n=2$ and $P(z) = (z-1)(z+1)$ then $\sum_{x_i=\pm 1} \frac{x_i^m}{P'(x_i)} = \frac{1}{2}\sum_{x_i=\pm 1} x_i^{m-1} \not = 0$ if $m = 1 = n-1$. – Winther Dec 2 '16 at 3:22
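The identity is easy to sanity-check numerically before proving it. Here is a small sketch (my own illustration, not part of the question or comments), which uses the fact that for simple roots $P'(a_k) = c\prod_{j\neq k}(a_k - a_j)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
roots = rng.normal(size=n) + 1j * rng.normal(size=n)   # random distinct (hence simple) roots
c = 2.3                                                # leading coefficient of P

# For simple roots, P'(a_k) = c * prod_{j != k} (a_k - a_j).
def P_prime(k):
    return c * np.prod([roots[k] - roots[j] for j in range(n) if j != k])

for m in range(n - 1):                                 # m = 0, ..., n-2
    s = sum(roots[k] ** m / P_prime(k) for k in range(n))
    print(m, abs(s))                                   # all ~ 0 up to rounding error
```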
2020-03-29 10:33:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9744917750358582, "perplexity": 110.74683693041031}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370494064.21/warc/CC-MAIN-20200329074745-20200329104745-00030.warc.gz"}
https://unterschwelligeverfuehrung.com/escapade-larry-kzvmpym/0c5c7b-use-of-calculus-in-building-bridges
whats the purpose of using its derivative $\cos{x}$. For instance, Newton perceived the applications of Calculus as being geometrical and having a strong link to the physical world. samples by all dates, 4 (1000 words), how to calculate average physical product, What Determines The Steepness Of The Sratc Curve, Civil Engineering Construction and Types of Bridges, Using Calculus in curves for bridges, tunnels, and more (engineering). Lesson 2: Angles, Scale Factors and  Bridge Design In engineering, calculus is used for designing bridges. Quality Plan for project Quality should be attained although the project. Calculus in the engineeringfield Calculus initially developed for better navigation system. Students will explore the design options to meet requirements for loads, safety, and traffic flow as well as the efficiency of these structures for various capacities. In robotics calculus is used how robotic parts will work on given command. why we cant use $\sin{x}$ itself to solve the equation. Calculus is used to improve safety of vehicles. Approaches of De Oversteek Bridges are one of the longest integral bridge systems in the world. How will you use scale models, and scale factors to create your bridge as a model for the Brent Spence Bridge? The issue of today’s deteriorating infrastructures is an issue that surrounds all of us. Units of run are typically based on 12 inches. The RUN is one-half the span. There are a large number of applications of calculus in our daily life. It's crazy how math is basically found everywhere. Most engineers and architects will use calculus to know the shape and size of the curves. On the other hand, the Agile concentrates on intrinsic, extrinsic and the constraints of a given project. Structural analysis – Seismic design Exploratorium This spun off many modern words, including "calculate" (use stones for mathematical purposes), and "calculus", which came to be used, in the 18th century, for accidental or incidental mineral buildups in human and animal bodies, like kidney stones and minerals on teeth. Built at a cost in excess of £150m its world record and cost of … Calculus is used in geography, computer vision (such as for autonomous driving of cars), photography, artificial intelligence, robotics, video games, and even movies. A scalar quantity is fully described or specified, ... the second derivative tell you? Calculus was a term used for various kinds of stones. We are also able to use integrals to find the arc length of any suspension cables. Calculus is used to improve safety of vehicles. He even used Calculus to try to explain how planets orbit around the sun. Instead of finding the area under the arc I found the arc length. Even though they are the primary founding fathers of Calculus, they developed it independently and perceived the fundamental concepts in contrasting manners. For a detailed overview of parabolas, see the page, Parabola. (“Using Calculus in curves for bridges, tunnels, and more (engineering) Research Paper”, n.d.), (Using Calculus in Curves for Bridges, Tunnels, and More (engineering) Research Paper). Since the cables always form a parabola I just used integrals to figure out the distance between the 2 towers. Reply. f' (x)= (x/4410)- (10/21) The Calculus In Building. 
This website offers many PDF files for download, which require, Remodel, Replace, or Maintain?, The Locker Issue, Using Percent to Reduce Your Carbon Footprint, Outbreak: Analyzing An Epidemic Using Quadratic Functions, Lesson 2: Angles, Scale Factors and  Bridge Design. What are the specific qualities that go into engineering a safe and efficient bridge? Here, the discussion seeks to differentiate the Iron Triangle and Agile. Connections between the two courses, examples that display connections between physics and calculus, and terms used in both courses are given. Thank you! The foundation – also called pile – serves as the legs or main support of the entire bridge structure. Engineers use calculus for building skyscrapers , bridges. Most cantilever bridges are designed so that a gap remains between two cantilevered arms that reach out from their abutments: the gap is bridged by a simple beam. Apply the steps of the design process to solve a variety of design problems. There can be many skilled offered but in this project, we would want to ensure that they are quality and are useful to the youth in the area. What types of angles allow for more support for the structure? On the other hand, according to Leibniz, Calculus entailed analyzing the changes in graphs (Simmons 67). Integral Bridge design is an expert area in the design of bridges. This is 100% legal. Use facts about supplementary, complementary, vertical, and adjacent angles in a multi-step problem to write and solve simple equations for an unknown angle in a figure. Lesson 3: Bridge Design Challenge. You may not submit downloaded papers as your own, that is cheating. In order to assure the youth we will offer quality services, we will lay down the objectives of the projects. 18 Days (60 minute), Unit Lessons: Newest Essay Topics, Index The state-of-the-art technologies and ideas where used in the design. Calculus is essential in the following tasks assigned to the Calculus is essential in the following tasks assigned to the architectural team: A. The second calculus is first-order (multiplicative intuitionistic) linear logic, which turns out to have several other, independently proposed extensions of the Lambek calculus as fragments. Using methods such as the first derivative and the second derivative, a graph and its dimensions can be accurately estimated. Find the derivatives for the following functions: a. f(X) = ln250X b. f(X) = ln (20X-20) c. f(X) = ln (1- X2) d. f(X) = ln (5X + X-1) e. f(X) = Xln (12- 2X) f. f(X) = 2Xln(X3 + X4) g. f(X) =... ... decisions will be made with the help of Average Cost, ...Vectors and the Geometry of Space Q1. The last thing that students will need in order to grasp the concepts in the unit are social, interaction skills for completing projects with peers. Bridges range from small structures such as simple footbridges to iconic structures such as the Humber Bridge which, when opened in 1981, held a 17 year world record for being the longest single span suspension bridge in the world. The Difference between Iron Triangle and Agile This paper is based on the lecture by Erin on Agile online. Solve problems involving scale drawings of geometric figures, including computing actual lengths and areas from a scale drawing and reproducing a scale drawing at a different scale. On EVERY level of engineering calculus is used as a way of testing out a system without actually building it. 
Because construction is often made up of multiple layers of wood, building plans often provided detailed descriptions to make clear where to begin or end measurements. Students have traditionally struggled with when and how they can use proportions to solve real-life problems, and proportions can be used in so many ways. Therefore, Agile is an advancement of the Iron Triangle, since the constraints comprise the cost, schedule and scope of a project. If you find papers matching your topic, you may use them only as an example of work. They use calculus to improve on the architectural design of the structure. In robotics, calculus is used to work out how robotic parts will respond to a given command. For the counting of infinitely smaller numbers, mathematicians began using the same term, and the name stuck. Modern bridge construction makes use of in-situ piling. What types of structural changes would you make to construct an improved model of the Brent Spence Bridge? Building bridges requires knowledge of parabolas and trig. Suspension bridges. Calculus has many practical applications in real life. Building Bridges Between Calculus and Physics. From this discussion, it can be inferred that the Iron Triangle demonstrates the aspects of scope, cost and schedule of a project. To successfully understand the concepts in the unit, students must be aware of how to set up a proportion of any kind. Though it was proved that some basic ideas of Calculus were known to our Indian mathematicians, Newton and Leibniz initiated a new era of mathematics. 'Calculus' is a Latin word, which means 'stone.' Romans used stones for counting. Here's video from today's bridge testing session in Prof. Bob Parker's Pre-Calculus II class. Without the accuracy afforded by the use of calculus, bridges, roads and tunnels would not be as safe as they are today. Engineers use calculus for building skyscrapers and bridges. Also, the video I created for the unit will be used as an overall related hook. Background Knowledge: the big idea of the unit is: how is math used in the building and engineering of infrastructure? Newton and Gottfried Wilhelm von Leibniz are the primary founding fathers of calculus, and using integrals we could find the arc length of a cable supported by two poles ("Using Calculus in Curves for Bridges, Tunnels, and More (engineering) Research Paper", n.d., https://studentshare.org/mathematics/1630657-using-calculus-in-curves-for-bridges-tunnels-and-more-engineering). Integral bridge piers are connected to bridge decks without any joints and bearings. After each milestone we will countercheck how the bridge structure distributes the load; these considerations come into play for the Brent Spence Bridge and for the safety of roads in general.
Draw, construct, and describe geometrical figures and describe the relationships between them. We will provide a brief summary and description of parabolas below before explaining their application to suspension bridges. Much of the physical sciences is made easier by math, and engineers are able to use calculus for motion and for system design. Students will develop ways to build a model and determine whether the model they have designed would meet the requirements and needs, using proportional relationships to solve multi-step ratio and percent problems, and using calculus to find the center of mass and make sure that the support beams will be sufficient for the load. The hook will be used to give students some information in the news regarding the situation with the Brent Spence Bridge in Cincinnati. The project listing describes the activities that occur during each phase, and after each milestone we will countercheck and see if the objectives have been approved. You should remember that this work was already submitted once by a student who originally wrote it.
2021-05-18 16:45:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4284140467643738, "perplexity": 1560.3872426136577}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991288.0/warc/CC-MAIN-20210518160705-20210518190705-00266.warc.gz"}
https://bobsegarini.wordpress.com/tag/the-bells/
## GWNtertainment #6 by Jaimie Vernon Posted in life, music, Opinion, Review on February 15, 2021 by segarini Well, here we are, half a dozen issues into our rebirth and we're getting great feedback from the masses. Thank you for reading and thanks for the time you've taken to write to us with your new releases and news. The theme in this week's issue seems to be a leaning toward synth-pop. A coincidence or are we on the verge of a 1980s synth renaissance? Check out the tunes in our Absolutely Indie section below and see for yourself. Anyway, it's onward we go…. Posted in Opinion, Review on July 5, 2017 by segarini A weekend (and a year) of celebrations across Canada….150 years young…..but is it really? Oh how I love history! An item on my "to do" list before I pop my clogs is to work towards, and obtain, my BA Mediaeval History. Cambridge University is where I'd love to be (I do love England) but University of St. Michaels College here in Toronto is also an excellent choice and known for their studies in Mediaeval history. The bucket list is not long, just really big projects. ## Roxanne Tellier – Fly Me High, Ken Tobias Posted in Opinion, Review on June 26, 2016 by segarini "I remember being asked when I was very young what did I want to be when I grow up. I remember saying "I want to be an artist, a singer, and a scientist…..well it turned out that I am a professional singer, an avid science fan, and yes an artist…painting in acrylics for 30 years." ## JAIMIE VERNON – TRUE NORTH GLUTEN FREE Posted in Opinion on October 25, 2014 by segarini In the early morning aftermath of two tragic military assassinations in Quebec and Ottawa, Ontario this week America tuned in and saw a different kind of Canada. ## JAIMIE VERNON – WE'VE HEARD THIS SONG BEFORE Posted in Opinion on September 8, 2013 by segarini WARNING: This blog contains political ranting…and sexual panting. I expect many will be offended by the sex part…but not by the politics. By the time you read this the Stukas should be flying over Disneyland dropping bombs on Damascus' strategic military targets in response to the 1300+ chemically assassinated civilians two weeks ago. Obama wants to spank Syrian president Assad for the killings despite having unanimous universal rejection of the plan from every nation on the planet except our own asshole Prime Minister -Harpo, The Marxist Brother.
2023-03-22 19:57:05
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8942286968231201, "perplexity": 1010.8378043693101}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944452.74/warc/CC-MAIN-20230322180852-20230322210852-00190.warc.gz"}
https://github.com/Khan/khan-exercises/pull/11127
# Khan/khan-exercises # Removes multiple choice from radians_and_degrees.html exercise #11127 (Closed) Wants to merge 3 commits, +58 −44, 4 participants. This might stir up controversy, but I have added problems to these exercises that use radians in terms of tau. Personally, I find the tau conversions easier, but sadly, pi is still the established standard. For more information on tau, see: http://www.khanacademy.org/video/tau-versus-pi?playlist=New%20and%20Noteworthy Mathman06: updated for tau compatibility (6a3903e); note: this requires my tau update to utils/angles.js (1be6371); note: this also uses my tau update to utils/angles.js (65fcb4d). Owner: This is unlikely to be included but we can check with Sal and see what he thinks. (cc @mwahl) While this is interesting, the difference from watching the video seems relatively minor. No big difference, except you divide everything by 2 to get things in tau. As Sal points out, the formula for area isn't quite as nice. But from an educator's standpoint I can see a big problem with using this in exercises. Pi has been around for over 3900 years; tau (based on the sources Sal quotes in the video) has been around just over 10 years. I know when I looked at this request, my first thought was "I've never heard of tau other than as a Greek letter." And I have a Master's in math. What did I miss? Now, add to this the fact that you have teachers of math who have Elementary Education degrees... And the majority of students in my Math for Elementary Education Majors class (back when I was an Elementary Education major) said the reason they were majoring in Elementary Education was because they couldn't handle the math for ... Keep in mind that the schools piloting Khan Academy are elementary schools and middle schools... And keep in mind that tau isn't part of the standardized curriculum. Not trying to say that we shouldn't teach it because it's new... but if it ain't broke, don't fix it. (Other note: one negative comment I got on my teaching was on tossing in looks at the future, like ordered triplets, being too complicated. And this is something students will get into when they get to 3-dimensional math.) Oh, and one other note: in the videos on converting between radians and degrees, Sal doesn't use tau. Owner: You'll probably find a number of people at KA who appreciate the elegance of tau, myself included. Sal declined to take sides in his video, but Vi Hart is certainly very vocal in her support of it. Nevertheless, all of the other videos use pi, particularly, as @christi points out, the ones relating to this exercise. If we want to introduce using tau to the knowledge map I think it would be preferable to have a separate set of exercises that introduce the concept and/or fully embrace it and integrate it across the exercises. In either case, I think that would need to be a decision that is coordinated to be consistent in the videos as well. With Vi on board, I'm sure we haven't heard the last of tau, but for now I'm closing this pull request until we figure out how we want to approach it. Closed this. Commits on Jan 13, 2012: 3 commits by Mathman06.
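For reference, the conversions the exercise tests, restated with both constants (this is just the math under discussion, using $\tau = 2\pi$, not text from the pull request):

$$\text{radians} = \text{degrees} \cdot \frac{\pi}{180^\circ} = \text{degrees} \cdot \frac{\tau}{360^\circ}, \qquad \text{degrees} = \text{radians} \cdot \frac{180^\circ}{\pi} = \text{radians} \cdot \frac{360^\circ}{\tau}$$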
Mathman06 authored. [Stripped diff of radians_and_degrees.html across hunks @@ -13,29 +13,40 @@, @@ -45,11 +56,6 @@, @@ -13,27 +13,39 @@ and @@ -45,11 +57,7 @@. The recoverable content: new variables commonAngles[COMMON_INDEX].trad, COMMON_DEGREES/180 and COMMON_DEGREES/360 alongside the existing roundTo( 2, toRadians(NUM_DEGREES) ), rand( commonAngles.length ), commonAngles[COMMON_INDEX].deg and commonAngles[COMMON_INDEX].rad; the prompt "Convert the angle COMMON_DEGREES° into radians. (Note that answer is in terms of \pi)" with the hint "To convert from degrees to radians, you multiply by \pi and then divide by 180^{\circ}."; an added tau variant "Convert the angle COMMON_DEGREES° into radians. (Note that answer is in terms of \tau, which equals 2\pi)" with the hint "To convert from degrees to radians, you multiply by \tau and then divide by 360^{\circ}."; the retained "Convert the angle NUM_DEGREES° into radians. (Round to the nearest hundredth of a radian.)"; and, in the radians-to-degrees exercise, the answer COMMON_DEGREES^{\circ} with the hint "To convert from radians to degrees, you multiply by 180^{\circ} and then divide by \pi.", the removed multiple-choice entries wrongCommonAngle( COMMON_INDEX, 1 ).deg°, wrongCommonAngle( COMMON_INDEX, 2 ).deg° and wrongCommonAngle( COMMON_INDEX, 3 ).deg°, plus an added tau variant "Convert the angle COMMON_TAURADIANS radians into degrees. (Note: this angle is given in terms of \tau, which equals 2\pi)" with the hint "To convert from radians to degrees, you multiply by 360^{\circ} and then divide by \tau."]
2015-07-03 06:37:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.664286196231842, "perplexity": 4115.990738376152}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375095775.68/warc/CC-MAIN-20150627031815-00251-ip-10-179-60-89.ec2.internal.warc.gz"}
http://mathoverflow.net/revisions/44327/list
2 added 65 characters in body This isn't the answer you seek, but let me observe merely that if you allow extra balls and if the table width is an integer number of balls in each direction, then we may imagine a cross pattern with the cue ball at the intersection.

    /------------------\
    |        O         |
    |        O         |
    |        O         |
    |        O         |
    |OOOOOOOOOCOOOOOOOO|
    |        O         |   ( O )
    |        O         |
    |        O         |
    |        O         |
    |        O         |
    |        O         |
    \------------------/

Under your idealized physical interactions, it seems that the cue ball cannot move, since the forces acting on the other balls are all transverse. Probably one can also imagine other highly-packed arrangements.

1 This isn't the answer you seek, but let me observe merely that if you allow extra balls and if the table width is an integer number of balls in each direction, then we may imagine a cross pattern with the cue ball at the intersection.

    /------------------\
    |        O         |
    |        O         |
    |        O         |
    |        O         |
    |OOOOOOOOOCOOOOOOOO|
    |        O         |   ( O )
    |        O         |
    |        O         |
    |        O         |
    |        O         |
    |        O         |
    \------------------/

Under your idealized physical interactions, it seems that the cue ball cannot move. Probably one can also imagine other highly-packed arrangements.
2013-05-19 15:10:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8531173467636108, "perplexity": 1836.4992664511706}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697745221/warc/CC-MAIN-20130516094905-00045-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.wyzant.com/resources/answers/262075/bus_calc_dimensions
Lexi D. # Bus. Calc dimensions For reasons too complicated to explain, I need to create a rectangular orchid garden with an area of exactly 169 sq. ft. abutting my house so that the house itself forms the northern boundary. The fencing for the southern boundary costs $4 per foot, and the fencing for the east and west sides costs $2 per foot. What are the dimensions of the orchid garden with the least expensive fence?
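The post itself contains no worked answer; what follows is a hedged sketch of the standard single-variable optimization, writing x for the southern boundary (parallel to the house) and y for each of the east and west sides:

$$xy = 169, \qquad C = 4x + 2(2y) = 4x + 4y = 4x + \frac{676}{x}$$

$$C'(x) = 4 - \frac{676}{x^2} = 0 \;\Rightarrow\; x^2 = 169 \;\Rightarrow\; x = 13, \quad y = \frac{169}{13} = 13$$

Since $C''(x) = 1352/x^3 > 0$ for $x > 0$, this is a minimum: a 13 ft by 13 ft garden, with a fence cost of $4(13) + 4(13) = \$104$.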
2021-08-02 05:54:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1800849884748459, "perplexity": 2933.0446582573286}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154304.34/warc/CC-MAIN-20210802043814-20210802073814-00195.warc.gz"}
https://luminousmonkey.org/
# LuminousMonkey ## Shit Project So, I've been a bit busy lately, so much so that I had to defer a semester of uni. The reason? I've had to manage the installation of a bunch of GPS trackers for a large state government organisation. It's been, in a word, shit. I inherited the management from someone who, basically, didn't do much except attempt to make pretty graphs of unrealistic timelines in Microsoft Project. They left, so I was thrown into the project. Now, even though it's not a software project, you still have the same sort of things that projects get (funny that). So, I will write a few blog posts, when I can, to cover just some minor things that I found. ## Beep Beep Beep I use ZFS, and I love it; I think it is the best filesystem out there. Its primary focus is on integrity, which is the most important thing. What is also important: backups. Even with the data integrity that ZFS offers (which far surpasses any hardware RAID), you still have to back up. Again, with ZFS, this is much easier than with other solutions (like Bacula, for example). Since we run Sun servers, we also run Solaris, since when you run Solaris on Sun hardware the licence is relatively cheap. As a result, I use the Timeslider service to automatically create snapshots (which, when you share a ZFS filesystem out via CIFS, show up in the Windows GUI as "previous versions"). Because of this, I also use the "zfs-send" plugin, basically backing up snapshots to a separate Solaris server. However, there are some gotchas which may catch you out if you had a working config, then changed things around and found the zfs-send service failing. First, zfs-send will put a hold on snapshots. It does this so they don't get deleted before they're used to send to the remote server. However, you may be in a situation where you need to clear all the snapshots (for example, you've moved, or changed which ZFS filesystems you want to back up). Then you will find you can't delete these; what you have to do is "zfs release" the snapshots. Here is a little snippet that will do this (and delete ALL zfs-auto-snap snapshots on the system): for snap in $(zfs list -H -o name -t snapshot | grep @zfs-auto-snap); do zfs release org.opensolaris:time-slider-plugin:zfs-send $snap; zfs destroy $snap; done Then, secondly, zfs-send stores the name of the previously sent snapshot as a property on the filesystem. It does this so it knows it can use an incremental zfs send. However, if you have broken this sequence, or deleted the snapshots, then this will cause it to break. You can look for the property with: zfs get -r org.opensolaris:time-slider-plugin:zfs-send storage Where "storage" can be replaced with your particular zpool name. To clear a property, you use "zfs inherit", like so: zfs inherit org.opensolaris:time-slider-plugin:zfs-send storage/shares Changing "storage/shares" to the particular ZFS filesystem you want to clear the property from. You can clear this property recursively by just adding the "-r" option: zfs inherit -r org.opensolaris:time-slider-plugin:zfs-send storage/shares Once you've done this, just enable the service (or clear it if it was forced into maintenance) and you should be golden. ## I'm not joking Jonathan Blow's talk on Software Quality: you should watch this if you're interested in writing software. I used to have an Amiga, and to be honest, it was far more responsive than my current beast of a PC.
## She Blinded Me With Science It seems to me, that one of the most important aspects of software development is one that doesn't get a great amount of focus. Debugging. Sure, it's mentioned here and there, but, for example, first year students aren't even taught about the command line Java debugger. So, I believe this video of Stuart Halloway, "Debugging with the Scientific Method" is required viewing. Of course, it's not just debugging, but any sort of performance or work on a website or application. Take stackoverflow for example, it's a popular site and hosted on their own servers. I have been reading lately on their setup and the monitoring they do, not only for uptime, but for performance. For example, they use HAProxy to load balance to their web tier servers, obviously not unusual, that's what HAProxy is for. But, they also have these proxies capture and filter performance data from their application via headers in the HTTP response. It's probably something that everyone does, but to be honest, I've never come across any mention of this trick. (There's also their miniprofiler tool, which I'm using a variant of). Given how little debugging is taught in university (well, my university) I can't judge on how common and detailed this sort of performance measurement is. I suspect that it might not be very common, so could be an interesting area for me to focus on. ## I don't know SQL I mentioned in the previous post that I'm not a database guru, luckily I haven't actually had to do a great deal of complex SQL queries. Which, is a shame in a way, because I was working on a particular SQL query this week just past, and it was interested. I learned a few things about PostgreSQL that I think make it the database to select when starting a project. Of course I know that SQL Server and Oracle would have these features, but I would be honestly surprised if MySQL did. I'm constantly surprised by how many projects use/support MySQL when it really is the inferior database to PostgreSQL. Again, I would argue that it's because ORM frameworks abstract away useful distinguishing features of the underlying database. But, I could obviously be wrong… unless you're dealing with GIS data, such as I am with my recent project. It's an application that reads and stores GPS data from a Tait DMR network. The specifics aren't too important, but, basically, every thirty seconds we get a GPS location for a fleet of vehicles that we store in a database for querying. You could just get the decimal latitude and longitudes and store them in any database, but then when you try to do something with the data, it can get difficult. For example, we have a customer, who wants to track the time that his employees are onsite. This is so he can charge his customer the correct amount. The thing is, when you get a GPS reading, it may be different. If they drive off for lunch, then head back, it would be a fine trick for them to get the exact GPS reading again. Thankfully, the PostGIS extension for PostgreSQL gives datatypes and functions to help with this. First, the table definition for the GPS readings: CREATE TABLE gps_readings ( location GEOGRAPHY(POINT,4326) NOT NULL, speed integer NOT NULL, time_and_date timestamp NOT NULL, unit_id integer REFERENCES units(id) ON UPDATE CASCADE, PRIMARY KEY (time_and_date, unit_id) ); Nothing really too surprising here, just the geography type that PostGIS gives you. 
With GIS geography data, there's different ways you can do data projections, since you're trying to map a coordinate system of a spheroid (the Earth). GPSs return data in decimal latitude and longitude (WGS84, which is the same system used by Google maps), this means you're trying to use a Cartesian system to map onto a sphere. This results in distortion, if you've ever seen a 2D map of the Earth, you can see how massive Greenland is, even though it's a tiny country. That's a result of the distortion. PostGIS uses the geography type to keep track of what system you're using, in this case I'm using SRID 4326. Which I believe is the expected GPS coordinate system. Anyway, basically, I have to get all the locations of the vehicle and group them into clusters, with the clusters being within a certain radius of each other. Actually, I'll just include the code here, since I already commented what it does… --- The following is a pretty messy SQL query. --- How it works as follows: --- --- First, it takes all the distinct gps readings for a unit where the --- unit hasn't been moving. It then does cluster analysis of these --- groups to organise them into clusters. --- --- Then, the gps readings for the units are taken, each reading being --- compared to each cluster, basically associating each reading with --- the corresponding cluster. With the clusters generalised to a --- common point. --- --- Then these results are parsed through using windows, with the --- window, the current grouped_location is compared with the last --- grouped_location. If they are different, it means that the vehicle --- has moved out of a cluster, so the time of the current record must --- be the starting time of movement into a new location. --- --- Then end_time is then calculated by looking ahead for a change in --- location (meaning the record is the last reading inside the current --- location). --- --- This gives us start and end times, but with a few readings when the --- unit was inside the location. They are removed, and a final window --- is used to put the end_time into the same row as the start_time (so --- we just end up with a single grouped_location with a start and end --- time). Rows with the same start and end time are removed, before we --- finally return the grouped_location, start, end, and total times. --- --- Because of the way the Clojure JDBC works, and because we use the --- same parameters for two different subqueries, the same arguments --- need to be substituted in twice. --- --- They are, unit_id, start_date, end_date, unit_id, start_date, end_date. SELECT ST_Y(grouped_location) AS latitude, ST_X(grouped_location) AS longitude, start_time, end_time, end_time - start_time AS total_time FROM (SELECT grouped_location, start_time, CASE WHEN ST_Equals((lead(grouped_location) OVER tadw),grouped_location) AND lead(end_time) OVER tadw IS NOT NULL THEN lead(end_time) OVER tadw WHEN end_time IS NOT NULL THEN end_time END AS end_time FROM (SELECT *, CASE WHEN lag(grouped_location) OVER tadw IS NULL THEN time_and_date WHEN ST_Equals((lag(grouped_location) OVER tadw),grouped_location) THEN NULL WHEN NOT ST_Equals((lag(grouped_location) OVER tadw),grouped_location) THEN time_and_date END AS start_time, CASE WHEN NOT ST_Equals((lead(grouped_location) OVER tadw),grouped_location) THEN time_and_date ELSE NULL END AS end_time FROM (SELECT ST_Centroid(UNNEST(ST_Clusterwithin(location::geometry, 0.01))) AS grouped_location FROM (SELECT DISTINCT location FROM gps_readings WHERE unit_id = ? 
AND speed = 0) AS clus_loc_filter) AS clusters INNER JOIN gps_readings ON (ST_DWithin(clusters.grouped_location, gps_readings.location::geometry, 0.01) AND unit_id = ? AND speed = 0) WINDOW tadw AS (ORDER BY time_and_date)) AS tbl_start_times WHERE ((start_time IS NOT NULL) OR (end_time IS NOT NULL)) WINDOW tadw AS (ORDER BY time_and_date)) AS tbl_end_times WHERE (start_time IS NOT NULL AND (start_time <> end_time) OR end_time is null) AND (end_time - start_time) > interval '5 minutes'; The question marks aren't part of the SQL, since I'm using Clojure JDBC, they're where the unit id for the vehicle gets substituted in. Again, I think that there is room for improvement here. Simply because I haven't used enough SQL to learn the best way to approach this. As it stands, I'm pretty happy with it. On the low-end VM (Linux, 2GB RAM) PostgreSQL is running on, it will get the result in about 1100ms, that's reading all the GPS position data for that unit in the system. It's been collecting the GPS data since the 11th of March 2016, which is about 20,000 rows. Considering the ClusterWithin function it's running, that's not too bad, it was a bit slower than that without that DISTINCT that I mentioned in my last post. Also, it should get faster, since I haven't added date ranges to restrict the number of rows searched. So, in summary, if you use MySQL, you should be using PostgreSQL and you should be taking advantage of the database features where possible, you can do some pretty cool stuff with those Window functions. I just wonder how many web applications using frameworks are missing out on easy performance gains because they've got sloppy SQL queries.
2017-05-27 02:16:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3361758887767792, "perplexity": 2383.3597889635516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608765.79/warc/CC-MAIN-20170527021224-20170527041224-00050.warc.gz"}
https://socratic.org/questions/how-do-you-differentiate-y-3y-4-4u-5-u-x-3-2x-5-using-the-chain-rule
# How do you differentiate y = 3y^4 - 4u + 5; u = x^3 - 2x - 5 using the chain rule? Oct 9, 2017 $\frac{dy}{dx}=\frac{-12x^2+8}{1-12y^3}$ #### Explanation: When we put the value of $u$ in $y$ we get $y = 3 {y}^{4} - 4 \left({x}^{3} - 2 x - 5\right) + 5$, i.e. $y = 3 {y}^{4} - 4 {x}^{3} + 8 x + 25$. When we differentiate both sides with respect to $x$ we apply the chain rule: $\frac{d}{\mathrm{dx}} y = \frac{d}{\mathrm{dx}} \left(3 {y}^{4} - 4 {x}^{3} + 8 x + 25\right)$ $\frac{\mathrm{dy}}{\mathrm{dx}} = 12 {y}^{3} \frac{\mathrm{dy}}{\mathrm{dx}} - 12 {x}^{2} \frac{\mathrm{d}}{\mathrm{dx}} x + 8$ $\frac{\mathrm{dy}}{\mathrm{dx}} = 12 {y}^{3} \frac{\mathrm{dy}}{\mathrm{dx}} - 12 {x}^{2} + 8$ $\frac{\mathrm{dy}}{\mathrm{dx}} - 12 {y}^{3} \frac{\mathrm{dy}}{\mathrm{dx}} = - 12 {x}^{2} + 8$ $\frac{\mathrm{dy}}{\mathrm{dx}} \left(1 - 12 {y}^{3}\right) = - 12 {x}^{2} + 8$ Therefore $\frac{dy}{dx}=\frac{-12x^2+8}{1-12y^3}$
2019-06-25 07:30:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 12, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9532614350318909, "perplexity": 1323.3528163703909}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999814.77/warc/CC-MAIN-20190625072148-20190625094148-00290.warc.gz"}
http://www.pacm.princeton.edu/node/632
# Low-Rank Covariance for Cryo-EM Clustering Speaker: Joakim Anden Date: Apr 5 2016 - 12:30pm Event type: Cryo-electron microscopy (cryo-EM) provides 2D projections of 3D molecules by measuring electron absorption. Due to very high noise, a large demonstrating its effectiveness for clustering heterogeneous cryo-EM samples.
2017-09-24 15:38:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27436941862106323, "perplexity": 14435.073832788068}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818690035.53/warc/CC-MAIN-20170924152911-20170924172911-00304.warc.gz"}
https://stats.stackexchange.com/questions/439566/logit-probit-regression
# Logit - probit regression I was running a regression of the determinants of acceptance into a social science college. I found this unrelated paper (screenshot of the relevant page attached herewith). Here, they have computed logit and probit regressions at probabilities of 0.5 and 0.8 and compared them with a linear probability regression. I was wondering, can we do the same in Stata? (Can we run a logistic regression in Stata at a particular probability, say 0.5?) I tried to look this problem up on the internet but couldn't find any useful resource on this.

The authors are evaluating the marginal effects at the two levels of probability. It's just the derivative of the conditional probability for the logit or probit model times the corresponding coefficient. I knew this because the multiplicative factors on the index function coefficients (.25, .4, .16, .28) correspond to those derivatives. You can do this with lincom or margins of an expression in Stata. This is an old-fashioned approach to showing marginal effects that was more popular when statistical software was less developed.

Edit: In response to your comment, here is some code showing this calculation using lincom and margins on the cars dataset. I think you are mistaken about what these commands can accomplish. You can find the derivation of the marginal effect for logit here and probit here. There is some code at the end showing how to calculate the average marginal effects, which should have correct larger SEs, will also handle categorical variables correctly, and are arguably more representative of your data than the approach in this paper. Moreover, their approach in the paper treats the categorical variables as if they were continuous, which can be odd.

#delimit;
sysuse auto, clear;

/* OLS */
regress foreign c.mpg, robust;
margins, dydx(mpg);

/* Logit MEs at p = 0.5 and p = 0.8 */
logit foreign c.mpg, nolog;
/* NB: these SEs are too small */
lincom .5*(1-.5)*_b[mpg];
lincom .8*(1-.8)*_b[mpg];
margins, expression(.8*(1-.8)*_b[mpg]);

/* Probit MEs at p = 0.5 and p = 0.8 */
probit foreign c.mpg, nolog;
/* NB: these SEs are too small */
lincom `=normalden(invnormal(.5))'*_b[mpg];
lincom `=normalden(invnormal(.8))'*_b[mpg];
margins, expression(normalden(invnormal(.8))*_b[mpg]);

/* Plot for all possible values of p (not just 0.5 and 0.8) */
tw (function y = .0312915)
   (function y = x*(1-x)*.1597621, range(0 1))
   (function y = normalden(invnormal(x))*.0960601, range(0 1))
   , ylab(#10, angle(horizontal) grid)
   ytitle("Change in Probability")
   xlab(#10, grid)
   xtitle("Predicted Probability of Foreign Origin")
   xline(.5 .8, lpattern(dash))
   title("Marginal Effect of One Additional MPG at Different Pr(Foreign)", span size(medium))
   legend(label(1 "OLS ME" ) label(2 "Logit ME") label(3 "Probit ME") rows(1));

/* Average Marginal Effects with continuous and categorical covariates */
gen high_mpg = mpg>21;
logit foreign c.weight i.high_mpg, nolog;
margins, dydx(*);

The general plot looks like this, which shows that the effect depends on the baseline probability for logit and probit, but not for OLS, where the ME is constant. That is, the effect is biggest for observations that are likely to go either way and smallest for the very likely and very unlikely observations in the non-OLS models:

• Thanks you so much. Dec 6, 2019 at 7:24
• I looked into the commands that you provided. I don't know how that is useful. Lincom is used to compare variables and not model, not the way I want.
Margin compute odd ratios at different values of independent variable and not for specific values of dependent variable. Dec 6, 2019 at 16:48 • @ElinaGilbert I added some more material in response to your concerns. Dec 6, 2019 at 21:39 • Great explanation indeed! Dec 7, 2019 at 8:33
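For reference, a sketch of the marginal-effect algebra behind the multipliers quoted in the answer (standard logit/probit results, not taken from the paper itself). With $p = \Lambda(x'\beta)$ for the logit and $p = \Phi(x'\beta)$ for the probit,

$$\frac{\partial p}{\partial x_k} = \beta_k\, p(1-p) \quad \text{(logit)}, \qquad \frac{\partial p}{\partial x_k} = \beta_k\, \phi\!\left(\Phi^{-1}(p)\right) \quad \text{(probit)}.$$

At $p = 0.5$ these give $0.25\,\beta_k$ and $\phi(0)\,\beta_k \approx 0.40\,\beta_k$; at $p = 0.8$ they give $0.16\,\beta_k$ and $\phi(0.8416)\,\beta_k \approx 0.28\,\beta_k$, matching the factors .25, .4, .16 and .28 mentioned above.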
2022-08-15 08:37:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7065156102180481, "perplexity": 2545.9722283053916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572161.46/warc/CC-MAIN-20220815054743-20220815084743-00590.warc.gz"}
https://mathematica.stackexchange.com/questions/135156/generate-a-list-of-randomly-distributed-1s-and-0s-with-fixed-proportion-of/135157
# Generate a list of randomly distributed $1$s and $0$s with fixed proportion of $1$s [duplicate] I need to simulate a random initial state of a 1D cellular automaton, but with different 'densities' of filled cells. Let's say the size of the list is $N$; then I need to be able to fix a number $P$ such that there are exactly $P$ $1$s and $N-P$ $0$s. RandomInteger gives $1$s or $0$s with probability $p=1/2$, but first, I still didn't work out how to correctly modify the probability so it can change from $0$ to $1$, and second, I would prefer for the number to be exact. In other words, I have $P$ $1$s and $N-P$ $0$s and I need to randomly and uniformly distribute them inside a single list. I'm not sure how to do that efficiently. I suppose I could create a list of all the possible positions and use RandomSample[list,P] to fill them with $1$s. But is there a better way? Important point! $N$ will be very large (up to 100 000). • You want to first create a list with the correct numbers of 1s and 0s and then create a random permutation, which you can do with RandomSample. Therefore try: RandomSample[Join[ConstantArray[1, p], ConstantArray[0, n - p]]] where n is the length of the list overall and p the number of 1s. – Quantum_Oli Jan 11 '17 at 11:20 • @Quantum_Oli, thank you, that's a good idea. – Yuriy S Jan 11 '17 at 11:21 If you need precisely m zeros and n ones in random order, just put them in a list and use RandomSample to shuffle it. m = 10;
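For comparison, here is a minimal sketch of the same fixed-count idea outside Mathematica, using Python and NumPy (both are assumptions for illustration only; the question itself is about Mathematica):

import numpy as np

def random_binary_state(n, p, seed=None):
    # Return a length-n array with exactly p ones and n - p zeros, uniformly shuffled.
    rng = np.random.default_rng(seed)
    state = np.zeros(n, dtype=np.int8)
    state[:p] = 1          # place the required number of ones
    rng.shuffle(state)     # uniform in-place permutation
    return state

cells = random_binary_state(100_000, 30_000, seed=42)
print(int(cells.sum()))    # exactly 30000

Shuffling a pre-built list is the same idea as the RandomSample[Join[...]] suggestion in the comment above.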
2020-04-02 16:57:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7299324870109558, "perplexity": 353.82931130231486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506988.10/warc/CC-MAIN-20200402143006-20200402173006-00125.warc.gz"}
http://mathforum.org/mathimages/index.php?title=Social_Networks&curid=5195&diff=38026&oldid=38020
# Social Networks (Difference between revisions) Revision as of 12:20, 19 July 2013 (edit) ← Previous diff | Revision as of 13:02, 19 July 2013 (edit) (undo) Next diff → Line 231: Before we begin, here are some definitions: *'''Degree Sequence''': the degree sequence of the vertex set V of a graph $G=(V,E)$ is the list of vertex degrees, written in decreasing order. **For example, {3,2,2,1} *'''Degree Distribution''': (Talk about the Bernoulli graph and how it was the first option but that it wasn't the most accurate)

## Revision as of 13:02, 19 July 2013

Friend network of a particular Facebook account. The pink indicates a "mob" of tightly interconnected friends, such as high school or college friends.

# Basic Description

The picture above may seem like an innocent model of someone's friend network on Facebook, but it reveals plenty about how modern society operates. First, it indicates that people inherently have preferences for whom they choose to associate with. Taken in the context of economic transactions, these preferences can determine whether or not a firm, state, or country prospers economically. Think about it: there are infinite possibilities of associations, but people sometimes prefer not to associate. Second, the pink area (which probably represents high school friends, college friends, or coworkers) indicates something called clustering, which is kind of like grouping within a group. Given the context above, grouping can be thought of as a similar preference among certain individuals. Being in groups has allowed people to protest for their rights, start a new company, take over countries, etc. The odds are more in your favor in a group if you want to achieve something. These kinds of preferences have undoubtedly shaped our world. Therefore, it is beneficial to analyze such powerful phenomena. In other words, social networks, or the pattern of connections between agents, are of enough importance to study through mathematics. Social network analysis (SNA) can provide valuable information about a network by answering the following questions:

• "Who is most central to a network?"
• "Who has the greatest influence in a group?"
• "How many different clusters can you find in a network?"
• "Which connections are pivotal for the functioning of a group?"
• "Who is Mr./Ms. Popular?"

Most importantly, this analysis allows us to make more rational decisions based on data (rather than intuition) and make predictions about the behavior of a group. For example, if we are able to determine that in the case of an epidemic the virus is concentrated around a select few, something could be done to prevent its spread. Conversely, if we wanted to make sure an important idea would spread quickly (a pitch for a school candidate, for example) we would first target the most influential and central people. Of the topics explained above, perhaps the easiest and most fundamental measures of network structure are centrality measures, which are given in more detail below. Keep in mind, however, that these measures quantify characteristics of people (or whatever objects pertain to the network) rather than the network as a whole. ## Centrality Measures The answers to all of these questions revolve around the concept of centrality. The most frequently used measures of network structure, therefore, are centrality measures.
These include Degree, Eigenvector, Closeness and Betweenness measures of centrality. More information about the measures is found below. For illustrative and simplistic purposes, these explanations will be in the context of human interactions. ### Degree Centrality (Social Network of Shakespeare's Hamlet, illustrating how protagonists are central to the plot, as evidenced by their degree centrality) Degree centrality measures how well connected a person is to a network. It does this by simply counting how many people a person is connected to. It seeks to measure: • The level of influence someone can establish in a community, organization, group, etc. • The opportunity to be influenced by someone in a community, organization, group, etc • How exposed someone is in a network, mostly known as the index of exposure Example: The characters in the image to the right will hopefully seem familiar. It is a network analyzing the ties among characters of one of William Shakespeare's famous plays, Hamlet.If you are not familiar with the play, just know that Hamlet (a prince) is the protagonist and is next in line to inherit the throne of his father, king Hamlet. However, prince Hamlet's uncle, Claudius, inherits the throne. Hamlet later finds out that Claudius was responsible for killing his father. Because Hamlet is a prince, and the protagonist of the play, the story mostly centers around Hamlet's quest to avenge the death of his father, which involves him talking to almost all the characters in the play. It seems like prince Hamlet is rather, central, right? In the most simple sense, because the he has most ties to any character in the play (as diagrammed by the image) he has the highest degree centrality, and thus it makes sense to conceive of him as the protagonist, or main character. ### Eigenvector Centrality Eigenvector centrality is a more sophisticated version of degree centrality that measures not only the number of people someone is connected to, but also the quality of the connections. For a connection to have quality, in this context, it means that it has lots of connections. It seems necessary because, in an intuitive sense, connections with influential people will make you more influential than just having non-influential connections. Eigenvector Centrality seeks to measure: • Who the most popular person is in the group • How well someone is connected to the well-connected, in other words, knowing the "right people" ### Closeness Centrality Closeness centrality, without a surprise, measures how close a person is within a network.Close in this context refers to how many "friends of friends" one would need to be related to another person. A not so close relation, for example, would be someone in California being related to some random person in North Dakota through 1 friend that knows another friend which knows another friend...(6 intermediate friends in total) that finally knows that random person in North Dakota. The 6 in the last example indicates the degrees of separation. A person who is well connected therefore has, on average, fewer degrees of separation to reach everyone on the network. ### Betweenness Centrality Betweenness centrality measures how "in between" someone is in the network. "In between" refers to how often the flow of information in a network passes through a specific person. A person with a high measure of betweenness usually "knows what's going on" and can act as a liaison to separate parts of the network. 
# A More Mathematical Explanation From now on, we will add to our understanding of social networks as graphs in the context of interact [...] From now on, we will add to our understanding of social networks as graphs in the context of interacting people. The people will now be thought of as vertices and the connections between people will be thought of as edges. For more information on the basics of graphs, which will be imperative to the mathematical understanding of social netoworks, look at the page Graph Theory. The following graph of a hypothetical social network will serve as the example for the following sections, and its centrality measures will be determined: If we are to do mathematical analyses there must be one way to store information about the network as a whole. Luckily there is such a way! It is through the Adjacency Matrix. The $i,j$ entry of the adjacency matrix is defined as follows: $a_{ij}=\left\{ \begin{array}{rcl} 1 & \mbox{for} & i, j \mbox{ adjacent}\\ 0 & \mbox{for} & i, j \mbox{ non-adjacent} \end{array}\right.$ As an example, we will determine the adjacency matrix, A, of the graph above: $A= \begin{pmatrix} 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 \\ 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \end{pmatrix}$ For a better visual and mathematical understanding of the adjacency matrix, you can look at the following chart: Now with the graph and adjacency matrix at our disposal, we are ready to find certain measures of centrality. Typically, computers are used to measure centrality, but our graph is small enough that we can show most of the procedures to find certain measures of centrality. ## Degree Centrality Definition: The degree $k_{i}$ of a vertex i is $k_{i} = \sum_{j=1}^n(A)_{ij}$ in other words, it means adding all of the edges adjacent to a given vertex. Having the adjacency matrix in mind, it is like counting all of the ones in a column or row for a specific vertex. We can use Chris's row in the adjacency matrix as an example: As you can see, Chris's degree centrality is: 1+1+1+1=4 Let's calculate the degree centralities of the others: Jason: 1+1+1=3 Austin: 1+1=2 Donald: 1 Bernie: 1 Mark: 1 David: 1 Elissa: 1 Conveniently. they were displayed in a way that ranked them at the same time. We can now conclude that Chris has the highest degree centrality (he knows the most people!) and Donald, Bernie, Mark, David and Elissa are tied for the lowest degree centrality, only knowing one person. ## Eigenvector Centrality (Note: This section requires a knowledge of Linear Algebra) If you can remember one thing about eigenvector centrality, it is that it's a more sophisticated version of degree centrality (counting edges from a vertex) that takes in account of the quality of the connections! How can math store information regarding relative centralities (quality) and connections toward other vertices? If you thought through Matrices and Vectors, you were right! We will utilize both in our focus on eigenvector centrality. Let $x_{i}$ be the eigenvector centrality score of a vertex i and let $A=[a_{ij}]$ be the adjacency matrix of the graph containing i. The inclusion of the adjacency matrix is important because it tells us which vertices are connected to each other (we want to include everyone's data and we don't want to include ones that aren't connected to us!) 
## Eigenvector Centrality

(Note: This section requires a knowledge of Linear Algebra)

If you can remember one thing about eigenvector centrality, it is that it is a more sophisticated version of degree centrality (counting edges from a vertex) that takes into account the quality of the connections! How can math store information regarding relative centralities (quality) and connections toward other vertices? If you thought of matrices and vectors, you were right! We will utilize both in our treatment of eigenvector centrality.

Let $x_{i}$ be the eigenvector centrality score of a vertex i and let $A=[a_{ij}]$ be the adjacency matrix of the graph containing i. The inclusion of the adjacency matrix is important because it tells us which vertices are connected to each other and how many vertices are connected (we want to include everyone's data, and we don't want to include vertices that aren't connected to us!). Let $x_{j}$ be the eigenvector centrality scores of the vertices adjacent to i. The eigenvector centrality score of vertex i is proportional to the sum of the scores of all vertices that are connected to it:

$x_{i}=\frac{1}{\lambda}\sum_{j=1}^N a_{ij}x_{j}$

where $\lambda$ is a constant and N is the number of vertices in the graph. What this basically says is that one person's eigenvector score depends on the eigenvector scores of its neighbors. Following the properties of matrix multiplication, the equation above can be written as:

$\overrightarrow{x}=\frac{1}{\lambda}A\overrightarrow{x}$

which simplifies into the eigenvector equation:

$A\overrightarrow{x}=\lambda\overrightarrow{x}$

where $\overrightarrow{x}$ is an eigenvector. This is all nice and dandy. The eigenvector centrality score of one vertex relies on the eigenvector scores of the others. But how would we find the eigenvector centrality score of a particular vertex if it depends on the others? Well, the answer is that, strangely, we find them all at once! We just find the eigenvector associated with the highest eigenvalue, as justified by the Perron-Frobenius Theorem. The ith entry of that vector will correspond to the eigenvector centrality score of the ith vertex. To get through the material faster, the process of finding eigenvectors is omitted; look here for reference.

We will now find the eigenvector associated with the highest eigenvalue, and thus the vector of eigenvector centralities, of our hypothetical network:

With $\lambda=2.50$

Let's test out the formula $x_{i}=\frac{1}{\lambda}\sum_{j=1}^N a_{ij}x_{j}$ by applying it to Jason.

$x_{1}=\frac{1}{2.50}\times(2+2.56+2)=2.56$

We can clearly see here that even though Chris had a higher degree centrality, Jason has the same eigenvector centrality. Eigenvector centrality was created for reasons like these: to determine someone's importance in a network by looking at more than the number of connections. In most settings, who you are connected to is more important than how many people you are connected to.
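As a numerical cross-check of the idea (again an added illustration, not part of the original page), the leading eigenvector can be computed directly from the transcribed adjacency matrix. Since that matrix may not match the picture the page used, the value of λ and the scores may differ from the 2.50 and 2.56 quoted above.

import numpy as np

# Same adjacency matrix as in the degree centrality sketch above.
A = np.array([
    [0, 1, 0, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 0, 0, 0],
    [0, 1, 0, 1, 0, 0, 0, 0],
    [0, 1, 1, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 1, 1, 1],
    [1, 0, 0, 0, 1, 0, 0, 0],
    [0, 0, 0, 0, 1, 0, 0, 0],
    [0, 0, 0, 0, 1, 0, 0, 0],
])

# Eigenvector centrality: the eigenvector belonging to the largest eigenvalue of A.
eigenvalues, eigenvectors = np.linalg.eigh(A)   # A is symmetric, so eigh is appropriate
leading = np.argmax(eigenvalues)
scores = np.abs(eigenvectors[:, leading])       # fix the arbitrary overall sign

print(round(float(eigenvalues[leading]), 2))    # the constant lambda
print(np.round(scores / scores.max(), 2))       # scores scaled so the largest is 1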
## Closeness and Betweenness Centrality

Background Definitions:

• A path is a sequence of vertices that are traversed by "walking through" edges from one to another across the network. More information can be found in Graph Theory
• A geodesic path is the shortest path between a pair of vertices.

Closeness Centrality of a vertex $i$ is based on the mean geodesic path length between i and each other vertex in the graph. Intuitively, if this mean geodesic path length is low, we say that i is located close to every other vertex. Be wary, however, that a low number in this case indicates a higher centrality score, which may be confusing for some. Therefore, closeness centrality is more easily understood as the reciprocal of the average geodesic path length, as follows:

$c_{i}=\frac{n-1}{\sum_{j \neq i} d_{ij}}$

where $d_{ij}$ is the geodesic (shortest-path) distance from i to j, and n is the number of vertices in the graph.

We will use our example graph once more to illustrate the shortest paths for certain individuals (the rest will be calculated and left to be verified by the reader). Let's look at the shortest path from Donald to Elissa: we individually count the number of edges required to get to the other vertex (person). To get to Elissa, Donald's geodesic path is 1+1+1+1=4 edges long. We hope that the shortest path seems relatively straightforward given the simplicity of the model. For most real and complex models, computers calculate these shortest paths with a specific algorithm, Dijkstra's Algorithm, mentioned below.

Now let's see an example of someone who is "close" within the network, which is to say someone with a high closeness centrality. Let's look at some of Jason's shortest paths to other people: we see that to get to the two most distant people, Donald and Elissa, Jason only has to "walk" through two edges. In other words, Donald and Elissa are 2nd degree friends. Again, to have high closeness centrality, one must have the lowest mean degree of separation to everyone in the network. Things are looking good for Jason so far. Let's now compute the closeness centralities for all:

You can check your own answers for the geodesic paths/closeness centralities. It looks like Jason came in first with a closeness centrality score of 0.63, and Donald, Bernie, David and Elissa tied for last with a closeness centrality score of 0.39. Jason's high score makes sense because, visually, he is centrally located. The tie between Donald, Bernie, David and Elissa makes sense because they are only part of the system through a direct connection to one person, and thus have to travel "farther" (meaning having to go through more "friends of friends") to reach someone in the network.

Betweenness Centrality of a vertex k is the fraction of geodesic paths within the graph that include k. In other words, it takes every pair of vertices, computes the geodesic paths between them, and then determines the fraction of those paths that pass through k. Mathematically,

$b_{k}=\sum_{i,j}\frac{g_{ij}(k)}{g_{ij}}$

where $g_{ij}$ is the number of geodesic paths from i to j, and $g_{ij}(k)$ is the number of those paths that pass through k.

Again, we will use our example network to find betweenness. The following picture illustrates how betweenness is calculated (although not completely) for Chris and Austin in the context of paths between Jason and Elissa, and Donald and Mark. As you can see, shortest paths consisting of only one traversed edge (first degree friends) won't have any "middle men", or intermediaries. Therefore, in our matrix of geodesic distances, entries equal to 1 (or zero) can be ignored for the purposes of calculating betweenness centrality. What matters most are the entries greater than one, because those entries indicate that someone had to know an intermediary (or several) to be associated with a friend. In the next image, the distance matrix is modified in a way that highlights the intermediaries for each geodesic path. For every path, the intermediary's initial (if any) replaces the numerical entry. If there are several intermediaries, they will each have their own initial in the entry. The 0 and 1 entries of the former matrix will be blank in the latter betweenness matrix (as explained above).

With this image in mind, we can now calculate everyone's betweenness centrality! All we do is add up their respective "interruptions", or the number of geodesic paths they sit in the middle of, and divide by the total number of ordered pairs of people, 64 (8*8). This result makes intuitive sense. Jason has the highest betweenness centrality because he is literally "in the middle" of everyone. Donald, Bernie, Mark, David and Elissa all have low betweenness centrality because they are located in the periphery of the network and don't bring people together.
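For readers who would rather let a machine do the path counting, here is a small Python sketch (an added illustration using the adjacency information transcribed above, so the numbers may differ slightly from the picture-based hand counts). It computes closeness as (n-1) divided by the sum of geodesic distances; betweenness additionally needs every geodesic path between every pair, which is easiest to get from a library routine such as networkx's betweenness_centrality.

from collections import deque

# Neighbour lists for the example network (vertices 0..7, same order as the matrix).
adj = {
    0: [1, 4, 5], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2],
    4: [0, 5, 6, 7], 5: [0, 4], 6: [4], 7: [4],
}

def geodesic_distances(source):
    # Breadth-first search gives shortest-path (geodesic) lengths in an unweighted graph.
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

n = len(adj)
for i in adj:
    total = sum(geodesic_distances(i).values())   # distance to itself is 0
    print(i, round((n - 1) / total, 2))           # closeness = (n-1) / sum of distances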
### Dijkstra's Algorithm

Dijkstra's Algorithm is the standard procedure computers use to find shortest paths. Starting from a source vertex, it repeatedly visits the unvisited vertex with the smallest known distance and updates ("relaxes") the distances of that vertex's neighbours, until every vertex has been visited; see here for a full description.

## Bernoulli Graph and Configuration Models

Previously, we were establishing measures concerning the properties of nodes, or people. We will now turn our attention to network properties. Our previous analyses consisted of taking measurements and analyzing them; now we take a step further and model real-life situations knowing only very little about the network. Before we begin, here are some definitions:

• Degree Sequence: The degree sequence of a graph $G=(V,E)$ is the sequence of degrees of the vertices in V, written in decreasing order.
• For example {3,2,2,1}
• Degree Distribution: The degree distribution $p_k$ is the fraction of vertices in the graph that have degree k.

The Bernoulli (random) graph, in which every possible edge is present independently with the same probability, was the first model of this kind, but it is not the most accurate: its degree distribution is approximately Poisson, whereas most real-world networks have strongly skewed degree distributions. To allow for these non-Poisson degree distributions, one can generalize the random graph, specifying a particular, arbitrary degree distribution $p_k$ and then forming a graph that has that distribution but is otherwise random.

The Configuration Model algorithm works as follows. Create vertices $V = \{1, 2, \ldots, n\}$ and assign them stubs or half-edges according to the sequence $\{d_1, d_2, \ldots, d_n\}$, as in Figure 2. (These stubs or half-edges are edges that are connected on one side while the other side remains free.) Then, to make the random graph, pick any two stubs uniformly at random, and connect their free ends. These two stubs become one edge. Repeat this process until no free stubs are left.

## Basic Reproduction Number

As a mathematical application of the ideas above: have you ever wondered if there is a point, call it a tipping point, that determines whether an idea (or virus) will spread or die out? If so, you're in luck! If not, you still are. Some definitions before we begin:

Mean Degree: $\langle k \rangle=\frac{\sum_{i}k_{i}}{n}$, which is simply the sum of the degrees of all vertices divided by the number of vertices.

If an idea reaches a person through one of their ties, that person has $k-1$ remaining contacts to pass it on to, so with an individual probability r of communicating the idea along any one tie, they pass it on to $r(k-1)$ others on average. Given the degrees of everyone in the network, it is possible to take the weighted average of $r(k_i-1)$ to get what is called the basic reproduction number:

$R_o=r\frac{\sum_{i}k_{i}(k_{i}-1)}{\sum_{i}k_{i}}=r\frac{\langle k^2\rangle-\langle k\rangle}{\langle k\rangle}$

If $R_o$ is greater than 1, the idea will spread exponentially (yikes). If $R_o$ is less than 1, the idea will die out.
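The stub-matching procedure and the $R_o$ formula are both easy to express in code. The sketch below is an added illustration (not from the original page); it uses the example degree sequence {3,2,2,1} from the definitions above, whose sum is even, as stub matching requires.

import random

def configuration_model(degrees, seed=None):
    # Stub matching: pair up free half-edges uniformly at random until none are left.
    rng = random.Random(seed)
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]  # one stub per unit of degree
    if len(stubs) % 2:
        raise ValueError("the degree sequence must sum to an even number")
    rng.shuffle(stubs)
    return list(zip(stubs[::2], stubs[1::2]))     # consecutive stubs become edges

def basic_reproduction_number(degrees, r):
    # R_o = r * sum_i k_i (k_i - 1) / sum_i k_i, as in the formula above.
    return r * sum(k * (k - 1) for k in degrees) / sum(degrees)

degrees = [3, 2, 2, 1]
print(configuration_model(degrees, seed=1))       # a random (multi)graph; self-loops can occur
print(basic_reproduction_number(degrees, r=0.5))  # 0.5 * 10 / 8 = 0.625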
2018-02-19 10:30:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 28, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7317840456962585, "perplexity": 788.9886669903206}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812579.21/warc/CC-MAIN-20180219091902-20180219111902-00662.warc.gz"}
https://scikit-learn.org/0.21/auto_examples/inspection/plot_partial_dependence.html
# Partial Dependence Plots

Partial dependence plots show the dependence between the target function [2] and a set of 'target' features, marginalizing over the values of all other features (the complement features). Due to the limits of human perception, the size of the target feature set must be small (usually one or two); thus the target features are usually chosen among the most important features.

This example shows how to obtain partial dependence plots from a MLPRegressor and a GradientBoostingRegressor trained on the California housing dataset. The example is taken from [1].

The plots show four 1-way and two 2-way partial dependence plots (omitted for MLPRegressor due to computation time). The target variables for the one-way PDP are: median income (MedInc), average occupants per household (AveOccup), median house age (HouseAge), and average rooms per household (AveRooms).

We can clearly see that the median house price shows a linear relationship with the median income (top left) and that the house price drops when the average occupants per household increases (top middle). The top right plot shows that the house age in a district does not have a strong influence on the (median) house price; nor does the average number of rooms per household. The tick marks on the x-axis represent the deciles of the feature values in the training data.

We also observe that MLPRegressor has much smoother predictions than GradientBoostingRegressor.

For the plots to be comparable, it is necessary to subtract the average value of the target y: the 'recursion' method, used by default for GradientBoostingRegressor, does not account for the initial predictor (in our case the average target). Setting the target average to 0 avoids this bias.

Partial dependence plots with two target features enable us to visualize interactions among them. The two-way partial dependence plot shows the dependence of median house price on joint values of house age and average occupants per household. We can clearly see an interaction between the two features: for an average occupancy greater than two, the house price is nearly independent of the house age, whereas for values less than two there is a strong dependence on age.

On a third figure, we have plotted the same partial dependence plot, this time in 3 dimensions.

[1] T. Hastie, R. Tibshirani and J. Friedman, "Elements of Statistical Learning Ed. 2", Springer, 2009.

[2] For classification you can think of it as the regression score before the link function.

Out:

Training MLPRegressor...
Computing partial dependence plots...
Computing partial dependence plots...
Custom 3d plot via partial_dependence

print(__doc__)

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

from sklearn.inspection import partial_dependence
from sklearn.inspection import plot_partial_dependence
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.datasets.california_housing import fetch_california_housing


def main():
    cal_housing = fetch_california_housing()

    X, y = cal_housing.data, cal_housing.target
    names = cal_housing.feature_names

    # Center the target: the 'recursion' method used by gradient boosting
    # does not account for the initial estimator (here the average target,
    # by default)
    y -= y.mean()

    print("Training MLPRegressor...")
    est = MLPRegressor(activation='logistic')
    est.fit(X, y)

    print('Computing partial dependence plots...')
    # We don't compute the 2-way PDP (5, 1) here, because it is a lot slower
    # with the brute method.
    features = [0, 5, 1, 2]
    plot_partial_dependence(est, X, features, feature_names=names,
                            n_jobs=3, grid_resolution=50)
    fig = plt.gcf()
    fig.suptitle('Partial dependence of house value on non-location features\n'
                 'for the California housing dataset, with MLPRegressor')
    plt.subplots_adjust(top=0.9)  # tight_layout causes overlap with suptitle

    # Gradient boosting model; parameters other than those shown are left at
    # their scikit-learn defaults here.
    est = GradientBoostingRegressor(learning_rate=0.1, loss='huber',
                                    random_state=1)
    est.fit(X, y)

    print('Computing partial dependence plots...')
    features = [0, 5, 1, 2, (5, 1)]
    plot_partial_dependence(est, X, features, feature_names=names,
                            n_jobs=3, grid_resolution=50)
    fig = plt.gcf()
    fig.suptitle('Partial dependence of house value on non-location features\n'
                 'for the California housing dataset, with Gradient Boosting')

    print('Custom 3d plot via partial_dependence')
    fig = plt.figure()

    target_feature = (1, 5)
    pdp, axes = partial_dependence(est, X, target_feature,
                                   grid_resolution=50)
    XX, YY = np.meshgrid(axes[0], axes[1])
    Z = pdp[0].T
    ax = Axes3D(fig)
    surf = ax.plot_surface(XX, YY, Z, rstride=1, cstride=1,
                           cmap=plt.cm.BuPu, edgecolor='k')
    ax.set_xlabel(names[target_feature[0]])
    ax.set_ylabel(names[target_feature[1]])
    ax.set_zlabel('Partial dependence')
    # pretty init view
    ax.view_init(elev=22, azim=122)
    plt.colorbar(surf)
    plt.suptitle('Partial dependence of house value on median\n'
                 'age and average occupancy, with Gradient Boosting')
    plt.show()


# plot_partial_dependence uses multiprocessing (n_jobs=3), so a main guard is needed
if __name__ == '__main__':
    main()
2021-12-04 20:53:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5130155682563782, "perplexity": 4226.708152448031}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363006.60/warc/CC-MAIN-20211204185021-20211204215021-00622.warc.gz"}
https://toreopsahl.com/publications/thesis/thesis-2-clustering-in-weighted-networks/
## Thesis: 2 Clustering in Weighted Networks

An article based on this chapter has been published (Opsahl and Panzarasa, 2009). This article was written after this chapter and contains a number of changes.

This chapter has a methodological nature in that it builds on, and extends, a fundamental measure of network structure, namely the clustering coefficient, that has long received attention in both theoretical and empirical research. This measure assesses the degree to which nodes tend to cluster together. Evidence suggests that in most real-world networks, and in particular social networks, nodes tend to create tightly knit groups characterised by a relatively high density of ties (Feld, 1981; Heider, 1946; Holland and Leinhardt, 1970; Freeman, 1992; Friedkin, 1984; Louch, 2000; Snijders, 2001; Snijders et al., 2006; Watts and Strogatz, 1998). More generally, one can ask: if there are three nodes in a network, i, j, and k, and i is tied to j and k, how likely is it that j and k are tied? In real-world networks, this likelihood tends to be greater than the average probability of a tie randomly established between two nodes (Holland and Leinhardt, 1971; Wasserman and Faust, 1994).

For social networks, scholars have investigated the mechanisms that are responsible for the increase in the probability that two people will be tied if they share an acquaintance (Holland and Leinhardt, 1971; Snijders, 2001; Snijders et al., 2006). The nature of these mechanisms can be social, as in the case of third-party referral (Davis, 1970; Heider, 1946), or non-social, as in the case of focus constraints (Feld, 1981). On the one hand, an individual may reduce cognitive stress by introducing his or her acquaintances to each other (Heider, 1946). Moreover, indirect ties foster trust, enhance a sense of belonging, facilitate the enforcement of social norms, and enable the creation of a common culture (Coleman, 1988). Burt (2005) found that the reputation of a person is only maintained if his or her contacts can communicate or gossip. This also applies to inter-organisational networks where organisations in tightly knit groups create informal governance arrangements (Uzzi and Lancaster, 2004). On the other hand, focus constraints refer to the increased likelihood of interaction and clustering among nodes that share the same physical, institutional, organisational or social environment. For example, people who share the same office are more likely to create independent dyadic ties, leading to a heightened tendency towards clustering, than people that reside in distant geographical locations (Feld, 1981).

Traditionally, the tendency of nodes to cluster together is measured using the global clustering coefficient (e.g. Feld, 1981; Karlberg, 1997, 1999; Louch, 2000; Newman, 2003) or the local clustering coefficient (Watts and Strogatz, 1998). This chapter deals with the former of these measures. Nevertheless, the local clustering coefficient is briefly introduced below to review and highlight differences between the two measures. The local clustering coefficient is based on ego network density or local density (Scott, 2000; Uzzi and Spiro, 2005). For a node i, this is the fraction of the number of present ties over the total number of possible ties between node i's neighbours.
For undirected networks, the local clustering coefficient is formally defined as (equation 1):

$C_i = \frac{\text{ties between node }i\text{'s neighbours}}{\text{node }i\text{'s neighbours } \times \text{ (node }i\text{'s neighbours } - 1) / 2}$

To obtain an overall coefficient for a network, the fractions for all the nodes in a network are averaged. The main advantage of this measure is that a score is assigned to each node. This enables researchers to study correlations with other nodal properties (e.g. Panzarasa et al., 2009) and perform regression analyses with the observations being the nodes of a network (e.g. Uzzi and Lancaster, 2004). However, this coefficient suffers from two major limitations. First, its outcome does not take into consideration the weight of the ties in the network. As a result, the same value of the coefficient might be attributed to networks that share the same topology, but differ in terms of how weights are distributed across ties and, therefore, may be characterised by different likelihoods of friends being friends with each other. Second, the local clustering coefficient does not take into consideration the directionality of the ties connecting a node to its neighbours. A neighbour of node i might be: 1) a node that has directed a tie towards node i, 2) a node that node i has directed a tie towards, or 3) a node that has directed a tie towards node i and to whom node i has also directed a tie. Barrat et al. (2004) proposed a generalisation of the coefficient to take the weights of the ties into consideration. However, the issue of directionality still remains unsolved (Caldarelli, 2007).

Unlike the local clustering coefficient, the global coefficient is based on a clustering measure for directed networks: transitivity (Wasserman and Faust, 1994, 243). However, it is only defined for networks where ties are without weights. When the weights are attached to the ties, researchers have set an arbitrary cut-off level and then dichotomised the network by removing ties with weights that are below the cut-off, and then removing the weights from the remaining ties (this process is described in detail in Section 1.1). The result is a binary network consisting of ties that are either present (or equal to 1) or absent (or equal to 0; Scott, 2000). Doreian (1969) studied clustering in a weighted network by creating a series of binary networks from the original weighted network using different cut-offs. A sensitivity analysis can address some of the problems arising from the subjectivity inherent in the choice of the cut-off. However, it tells us little about the original weighted network, except that the value of the clustering coefficient changes at different cut-off levels. While we also conduct similar sensitivity analyses on various datasets, here we propose a generalisation that explicitly takes weights of ties into consideration and, for this reason, does not depend on a cut-off to dichotomise weighted networks.

In what follows, we start by discussing the existing literature on the global clustering coefficient in undirected and binary networks¹. In Section 2.2 we propose our generalised measure of clustering. We then test and compare the generalisation with the existing measure by using a number of empirical datasets based on weighted and undirected networks. In Section 2.4, we turn our attention to directed networks and discuss the existing literature on clustering in that type of network.
We then extend our generalised measure to cover weighted and directed networks. Finally, Sections 2.5 and 2.6 highlight the contribution to the literature and offer a critical assessment of the main results.

_____________________

¹ Even though directionality of ties is a key advantage in choosing the global clustering coefficient over the local, for the sake of simplicity, we choose to start by focusing on undirected ties.
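As a concrete illustration of equation 1 (a sketch added for this page rather than taken from the thesis; the toy network is made up, and the weighted generalisation proposed later in the chapter is not reproduced here), the binary local clustering coefficient can be computed as follows:

import itertools

def local_clustering(adj, i):
    # Equation 1: share of the possible ties among node i's neighbours that are present.
    neighbours = adj[i]
    k = len(neighbours)
    if k < 2:
        return 0.0
    present = sum(1 for u, v in itertools.combinations(neighbours, 2) if v in adj[u])
    return present / (k * (k - 1) / 2)

# Toy undirected, binary network as a dict of neighbour sets.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print([local_clustering(adj, i) for i in adj])    # [1.0, 1.0, 0.333..., 0.0]

Averaging these fractions over all nodes gives the overall (Watts-Strogatz) coefficient described above.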
2018-10-17 01:26:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.734605610370636, "perplexity": 918.5723184815511}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583510932.57/warc/CC-MAIN-20181017002502-20181017024002-00093.warc.gz"}
https://getrevising.co.uk/diagrams/c3_14
# Core 3 Spec

• Created by: james
• Created on: 20-01-14 20:27
• C3
• Algebra and functions
• Understand function, domain, range, one-one, inverse and composition
• Identify range of a given function and composition of two functions
• Illustrate relation between one-one and inverse
• Use and recognise compositions of transformations of graphs
• Understand meaning of |x| and solve equations and inequalities
• Understand the relation between y=|f(x)| and y=f(x)
• Understand properties of exponential and log functions e^x and ln x
• Understand exponential growth and decay
• Trigonometry
• Use inverse sin, cos, tan to denote principal values of inverse
• Understand the relationship of sec, cosec, cot and use any graphs for any angles
• Use trig identities
• Differentiation
• Use derivative of e^x and ln x together with constants, sums, differences
• Differentiate composite functions using chain rule
• Differentiate products and quotients
• Understand the relation dy/dx = 1/(dx/dy)
• Apply to connected rates of change
• Integration
• Integrate e^x and 1/x, constants, sums and differences
• Integrate expressions involving linear substitution
• Use definite integrals to find a volume of revolution about one of the coordinate axes
2017-01-20 06:15:04
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8679549694061279, "perplexity": 7935.720366202038}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00465-ip-10-171-10-70.ec2.internal.warc.gz"}
https://codegolf.stackexchange.com/questions/68103/electrostatic-potential-of-a-simple-system/68112
Electrostatic potential of a simple system In physics, like electric charges repel, and unlike charges attract. The potential energy between two unit charges separated by a distance d is 1/d for like charges and -1/d for unlike charges. The potential energy of a system of charges is the sum of the potential energies between all pairs of charges. Challenge Determine the potential energy of a system of unit charges represented by a string. This is , so the shortest solution in bytes wins. Input A nonempty multiline string, consisting of only +, -, and newlines, with each line a constant width. The + and - represent charges of +1 and -1 respectively. For example, the following string: + - + (considering the top-left to be the origin) represents a system with positive charges at (4,0) and (1,-1) and a negative charge at (6,0). Alternatively, you may take input as a list of lines. Output A signed real number representing the potential energy of the system of charges. Output should be correct to four significant figures or 10-4, whichever is looser. Test cases: - Should output 0. There are no pairs of charges to repel or attract, and the whitespace doesn't change anything. + - There are only two charges; they are 1 unit away in the vertical direction and 2 units away in the horizontal direction, so their distance is sqrt(5). Output should be -1/sqrt(5)=-0.447213595. + - - + Should give -2.001930531. - -- -+ - - -+-++-+ +-- + +-- + ++-++ - ---++-+-+- -+- - +- -- - -++-+ --+ + - + --+ ++-+ +- -- ++- + + -+--+ + +++-+--+ +--+++ + -+- +-+-+-+ -+ +--+ - +-+- + ---+ - - ++ -+- --+-- Should give -22.030557890. ---+--- ++-+++- -+ + -+ ---+++-+- +- + + ---+-+ - ---- +-- - - + +--+ -++- - - --+ - --- - -+---+ - +---+----++ - + + -+ - ++-- ++- -+++ +----+- ++-+-+ - ++- -+ -+---+ -- -+ +-+++ ++-+-+ -+- +- Should give 26.231088767. • Plus points for implementing periodic boundary conditions and computing the Madelung energy. – Andras Deak Dec 30 '15 at 11:15 • @AndrasDeak That would be interesting. – lirtosiast Dec 30 '15 at 21:53 Pyth, 34 bytes smc*FhMd.atMd.cs.e+RkCUBxL" +"b.z2 Demonstration First, we convert each character to +1 for +, -1 for -, and 0 for . Then, each number is annotated with its position in the matrix. At this point, we have a matrix that looks like: [[[-1, 0, 0], [-1, 1, 0], [-1, 2, 0], [1, 3, 0], [-1, 4, 0], [-1, 5, 0], [-1, 6, 0]], [[1, 0, 1], [1, 1, 1], [-1, 2, 1], [-1, 3, 1], [0, 4, 1], [1, 5, 1], [0, 6, 1]]] The code that reaches this point is .e+RkCUBxL" +"b.z Then, we flatten this matrix into a list and take all possible pairs, with .cs ... 2. Then, he find the distance between the pair with .atMd, and the sign of the potential with *FhMd, divide, and sum. CJam, 51 chars Counting all pairs, filtering Inf/NaN out and dividing by two: q_N#:L;N-" +"f#ee2m*{z~:*\Lfmd2/:.-:mh/}%{zL<},:+2/ Alternatively, filtering coordinates first so we count each pair once and don't run into Inf/NaN: q_N#:L;N-" +"f#ee2m*{0f=:<},{z~:*\Lfmd2/:.-:mh/}%:+ Explanation (old) q Get all input. _N#:L; Store the line width in L. N- Flatten it into one long line. :i Get all ASCII values. :(3f%:( Map space to 0, + to 1, - to -1. ee Enumerate: list of [index, sign] pairs. 2m* Get all possible pairs. { }% For each pair: e_~ i1 s1 i2 s2 @* i1 i2 s (multiply signs) \aa@aa+ s [[i2] [i1]] (put indices in nested list) Lffmd s [[x2 y2] [x1 y1]] (divmod by L) :.- s [xD yD] (point diff) :mh s d (Euclidean dist.) / s/d (divide) {zL<}, Filter out infinite results. :+2/ Sum all charges, and divide by two. 
(We counted each pair twice.) • So explanation is TBA? :P – Rɪᴋᴇʀ Dec 29 '15 at 17:05 • Did you write this while it was sandboxed, or are you just really fast? – lirtosiast Dec 29 '15 at 17:41 • I'm quite fast :) The first version was "the simplest thing that worked", which took me only a couple of minutes to write, so I immediately posted that, then golfed it down over the next half hour. – Lynn Dec 29 '15 at 17:46 z=zip[0..] g n|f<-[(x,y,c)|(y,r)<-z$lines n,(x,c)<-z r,c>' ']=sum[c%d/sqrt((x-i)^2+(y-j)^2)|a@(x,y,c)<-f,b@(i,j,d)<-f,a/=b]/2 c%d|c==d=1|1<2= -1 Usage example: *Main> g " - -- -+ - - -+-++-+\n +-- + +-- + ++-++ -\n---++-+-+- -+- - +- \n-- - -++-+ --+ + \n- + --+ ++-+ +- \n-- ++- + + -+--+ \n+ +++-+--+ +--+++ + \n-+- +-+-+-+ -+ +--+\n- +-+- + ---+ \n- - ++ -+- --+--" -22.030557889699853 f is a list of all triples (x-coord, y-coord, unit charge). g calculates the potential energy for all combinations of two such triples which are not equal, sums them and divides the result by 2. Ruby, 133 ->n{t=i=j=0.0 c=[] n.tr(' ',?,).bytes{|e|e-=44 z="#{j}+#{i}i".to_c i+=1 e<-1?i=0*j+=1:(c.map{|d|t+=d[0]*e/(d[1]-z).abs};c<<[e,z])} t} Maintains an array of previous charges in the form of tuples [charge, location(complex number)] and compares each new charge with this list, before appending it to the list. All spaces in the input are replaced with commas. This enables the following assignment by subtracting 44 from their ascii code: symbol charge (internal representation) + -1 , 0 - +1 The fact that the program considers + to be -1 and - to be +1 makes no difference to the final result. The fact that the program goes to the effort of calculating the influence of the charges of 0 for the spaces makes no difference, apart from slowing it down a bit :-) Ungolfed in test program g=->n{ t=i=j=0.0 #t=total potential; i and j are coordinates of charge. c=[] #array to store tuples: charge + location (complex number). n.tr(' ',?,).bytes{|e| #replace all spaces with commas, then iterate through characters. e-=44 #subtract 44 from ascii code: + -> -1; comma -> 0; - -> 1 z="#{j}+#{i}i".to_c #position of current character as complex number i+=1 #advance x coordinate to next character. e<-1?i=0*j+=1: #if current character is newline, set i to zero and advance j instead, (c.map{|d|t+=d[0]*e/(d[1]-z).abs};#else add up the contribution for interaction of the current charge with all previous charges, c<<[e,z])} #and append the current charge to the list of previous charges. t} #return t p g[ '+ - - +' ] p g[ ' - -- -+ - - -+-++-+ +-- + +-- + ++-++ - ---++-+-+- -+- - +- -- - -++-+ --+ + - + --+ ++-+ +- -- ++- + + -+--+ + +++-+--+ +--+++ + -+- +-+-+-+ -+ +--+ - +-+- + ---+ - - ++ -+- --+--' ] MATL, 39 42 bytes jt]N$v'- +'FT#m2-I#fbbhtZPwt!**1w/XRss Works in current release (5.1.0). The compiler runs on Matlab or Octave. Each line is a separate input. End is signalled by inputting an empty line. Examples >> matl > jt]N$v'- +'FT#m2-I#fbbhtZPwt!**1w/XRss > > + - > - + > -2.001930530821583 >> matl > jt]N$v'- +'FT#m2-I#fbbhtZPwt!**1w/XRss > > - -- -+ - - -+-++-+ > +-- + +-- + ++-++ - > ---++-+-+- -+- - +- > -- - -++-+ --+ + > - + --+ ++-+ +- > -- ++- + + -+--+ > + +++-+--+ +--+++ + > -+- +-+-+-+ -+ +--+ > - +-+- + ---+ > - - ++ -+- --+-- > -22.03055788969994 Explanation jt] % keep inputting lines until an empty one is found N\$v % concatenate all inputs vertically. 
This removes the last empty line '- +'FT#m % replace '-', ' ', '+' by numbers 1, 2, 3 2- % transform into -1, 0, 1 for '-', ' ', '+' I#f % find rows, columnss and values of nonzeros bbh % concatenate rows and columns into 2-col matrix or coordinates tZP % compute pair-wise distances for those coordinates wt!* % generate matrix of signs depending on signs of charges * % multiply distances by signs, element-wise 1w/ % invert element-wise XR % keep part over the diagonal ss % sum along colums, then rows % (output is implicitly printed) Lua, 293255246 228 Bytes e=0l={}p={}i=1while l[i-1]~=""do l[i]=io.read()for k=1,#l[i]do c=l[i]:sub(k,k)if(c>" ")then for h,v in ipairs(p)do e=e+(v.s==c and 1 or-1)/math.sqrt((v.y-i)^2+(v.x-k)^2)end table.insert(p,{s=c,x=k,y=i})end end i=i+1 end print(e) Ouch, 228 bytes...I can probably golf this significantly, but I'll post it here for now. Probably update it later tonight with a few more musings and (hopefully) some improvements to the length. Ungolfed e=0l={}p={}i=1 while l[i-1]~=""do for k=1,#l[i]do c=l[i]:sub(k,k) if(c>" ")then for h,v in ipairs(p) do e=e+(v.s==c and 1 or-1)/math.sqrt((v.y-i)^2+(v.x-k)^2) end table.insert(p,{s=c,x=k,y=i}) end end i=i+1 end print(e) Update 255 Bytes: Removed old bottom two for loops, processing is now done as strings are added to string array. Update 246 Bytes: Replaced c=="+"or"-"==c with c>" " as per nimi's suggestion. Great idea, thanks! Update 228 Bytes: If statement could be removed completely by inserting in table after the for loop, saving quite a few bytes. Mathematica 223 bytes Still golfing to do. f[{{c1_,p1_},{c2_,p2_}}]:=N[(c1 c2)/EuclideanDistance[p1,p2],13]; h[charges_]:=Tr[f/@Subsets[DeleteCases[Flatten[Array[{r[[#,#2]],{#,#2}}&,Dimensions[r=Replace[Characters[charges],{"+"-> 1,"-"->-1," "->0},2]]],1],{0,_}],{2}]] Last test case: h[{" - -- -+ - - -+-++-+", " +-- + +-- + ++-++ -", "---++-+-+- -+- - +- ", "-- - -++-+ --+ + ", "- + --+ ++-+ +- ", "-- ++- + + -+--+ ", "+ +++-+--+ +--+++ + ", "-+- +-+-+-+ -+ +--+", "- +-+- + ---+ ", "- - ++ -+- --+--"}] -22.030557890
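For anyone who just wants a reference value to test a submission against, here is a straightforward, ungolfed Python sketch of the computation described in the challenge (an illustration only, not a competing answer):

import itertools

def potential_energy(lines):
    # Collect (x, y, charge) triples from the ASCII grid.
    charges = [(x, y, 1 if c == '+' else -1)
               for y, line in enumerate(lines)
               for x, c in enumerate(line)
               if c in '+-']
    # Sum q1*q2/d over every unordered pair of charges.
    return sum(q1 * q2 / ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
               for (x1, y1, q1), (x2, y2, q2) in itertools.combinations(charges, 2))

print(potential_energy(['+', '  -']))   # -0.4472135955, i.e. -1/sqrt(5) as in the second test case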
2019-08-24 06:40:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48651474714279175, "perplexity": 9342.810145057907}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027319915.98/warc/CC-MAIN-20190824063359-20190824085359-00138.warc.gz"}
https://study.com/academy/answer/when-your-reverse-the-digits-in-a-certain-two-digit-number-you-decrease-its-value-by-27-what-is-the-number-if-the-sum-of-its-digit-is-3.html
# When you reverse the digits in a certain two-digit number you decrease its value by 27. What is...

## Question:

When you reverse the digits in a certain two-digit number, you decrease its value by 27. What is the number if the sum of its digits is 3?

## Elimination Method:

(i) The elimination method is used to solve a system of two equations in two unknowns (variables).

(ii) In this method, we add or subtract the given equations and eliminate one variable.

(iii) We solve the resultant equation in one variable using algebraic operations.

(iv) We can then substitute the value of this known variable in one of the given two equations and solve for the other variable.

Let us assume the required two-digit number to be $xy$. Here, $x$ is in the tens place and $y$ is in the ones place. So, the two-digit number = $10x+y$.

The two-digit number when the digits are reversed is $yx$. In the same way as explained above, the reversed number = $10y+x$.

The problem says, "When you reverse the digits in a certain two-digit number you decrease its value by $27$". So, we get:

$$\text{Reversed number}= \text{Original number} - 27 \\ 10y+x = (10x+y)-27 \\ \text{Subtracting 10x and y from both sides}, \\ -9x+9y =-27 \\ \text{Dividing both sides by 9},\\ -x+y=-3 \,\,\,\,\,\,\,\rightarrow (1)$$

The problem also says, "the sum of its digits is $3$". So, we get:

$$x+y=3 \,\,\,\,\,\,\,\rightarrow (2)$$

Adding equations (1) and (2) eliminates x:

$$2y=0 \\ \text{Dividing both sides by 2}, \\ y=0$$

Substitute this in (2):

$$x+0=3\\ x=3$$

Therefore, the required number is $xy = \boxed{\mathbf{30}}$.
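As a quick sanity check (not part of the original solution), a brute-force search over all two-digit numbers confirms that 30 is the only number satisfying both conditions:

# Check every two-digit number against both conditions in the question.
for n in range(10, 100):
    tens, ones = divmod(n, 10)
    if tens + ones == 3 and 10 * ones + tens == n - 27:
        print(n)   # prints 30, and nothing else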
2020-04-05 23:40:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5430378913879395, "perplexity": 1037.2588776237985}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371611051.77/warc/CC-MAIN-20200405213008-20200406003508-00512.warc.gz"}
https://devzone.nordicsemi.com/questions/scope:unanswered/sort:activity-desc/page:1/
# 4,012 questions 2 views no no ## Nrf ESB using nrf_esb_suspend, nrf_esb_disable functions Which case i have to use nrf_esb_suspend function instead of nrf_esb_disable/enable? What function i have to use when i change radio RX to TX mode, changing radio channel? 2 views no no ## BLE multi communication Hi, Is it possible to have multiple BLE units that can talk to each and any of the others. ie if two come in close proximity they can connect and communicate a small amount of data ie Rssi? 37 views no no I'm trying to use a the COAP server example from SDK 14.2 running on nRF52DK. I've got a RPI3 acting as a router between my Ethernet network and the 6lowpan/BLE link. I'm running a python ... (more) 2 views no no ## ANT tx slots missed during channel scanning I have an NRF52832 configured with 3 ANT+ channels (and BLE). 2 are master and 1 is slave. Anytime the slave channel is opened and scanning for a master the 2 master channels miss a large number of transmission time ... (more) 3 views no no ## Everything You Need to Know About Printed Lanyards Printing techniques vary from a hot stamp (basic quality) and silk screen (medium quality) to a more costly dye sublimation process that results in a premium quality print. Printed lanyards are also available in a wide variety of patterns including ... (more) 12 views no no Hello, deavteam. This is MANGO. The Capacitive Touch on the nRF52 series blog post is really impressive. I'm planning to start a haptic driver project using TI's DRV2605 driver, an LRA (Linear Resonant Actuators), and Nordic's nRF52 ... (more) 9 views no no ## LPCOMP Interrupt Resets MCU (nRF51-DK) Hi All, I am new to nRF development and purchased the nRF51-DK (PCA10028). I have successfully experimented with the examples for ble_app_uart and lpcomp. My Application will require a a combination of these two examples. As such I have added ... (more) 11 views no no ## ttl converter + NRF52832-QFAA-R7 (QFN48) Hello. I have a ttl converter (https://goo.gl/images/uFSPHN). Can I write a hex file through it? What are the possibilities to program so as not to buy a debug card? 11 views no no Hello, I need to read SAADC input pin of nRF52832 that is multiplexed and wonder what the optimal strategy is to implement that. Especially: setting a ppi channel so that a timer compare event is triggering sample task in SAADC ... (more) 4 views no no Hello I am using nrf51822 .The function of DFU has worked well,but some weird problems puzzle me. 1、After DFU is successful,what is nrf51822's working state? Does it reset?From bootloader?Or directly enter into the BANK1 ... (more) 19 views no no ## 6KRO protocol Hi everyone, Currently, my team is developing a BLE keyboard which uses 6KRO protocol. Are there any examples for the 6KRO protocol using nRF52 chips? Thanks! 7 views no no ## SDK14.2 DFU power cycle issue Situation: Trying to upgrade previous versions of the SD and BL (V3.0.0) to (V5.0.0) Problem: device power cycles after a new application is flashed to the device into the V5.0.0 SD and BL. Steps ... (more) 9 views no no Hello, I am using nrf52840 PDK, sdk 13.0.0. I managed to read accel, gyro data from the MPU9250 using MPU library based on this example. However, I got magnetometer reading 0. I've checked this question and this ... (more) 6 views no no ## Support for ANT Blaze I want to implement a multi-protocol firmware stream on nRF52 chips. Does the S332 softdevice ANT Blaze for MESH applications? 
13 views no no ## Mesh Health Model The mesh health model state is not restored following a reboot in the current version of the Mesh SDK using the light switch example. According to the documentation, the configuration should be restored automatically so I think this is a ... (more) 8 views 1 vote no ## Power Profiler Kit PPK with DK RTT debug out not working. Current latest JLink firmware won't allow PPK working with DK RTT debugging. The cause is the JLink chip on the DUT board won't put the SH_SWO/SH_SWCLK/SH_SWDIO in hi-Z mode when SH_VTG is grounded, which prevents the ... (more) 7 views no no I am having problems getting my client to connect to a peripheral I have gone thru the central tutorial and tried many things here on the devzone to no avail. state: s140, multirole_lesc sdk 14.2 On the peripheral side ... (more) 7 views no no Hi all, I'm currently building a bootloader suitable for Dual Channel DFU and SD132. It is on nrf52832, currently on PCA10040. Although the linker script works, and creates a hex file, I am left with an app_error_code 1 after ... (more) 8 views no no ## app_twi_perform fails with NRF_ERROR_INTERNAL Hello Nordic Forum! Our product relies heavily in the TWI Bus, we use it to communicate with different sensors periodically. The most "important" one is read once every 4 ms. Additionally we have a TWI controlled IO expander, which has ... (more) 6 views no no ## NRF24L01+ ACTIVATE COMMAND The question is related to "nRF24L01P_Product_Specification_1_0.pdf" datasheet in "https://www.nordicsemi.com/eng/content/download/2726/34069/file/nRF24L01P_Product_Specification_1_0.pdf" and to NRF24L01P chip. In the Page 51, ACTIVATE command is not present. What happens if use ... (more) 10 views no no ## Store beacon parameters in flash (fstorage) hello, I modified the beacon example that allows me to modify major and minor value from mobile app, but when i restart the nrf52840 dev board, the modified values are reset to the their initial values, so, how can i ... (more) 11 views no no ## LSM9DS1 driver for nrf5 SDK Hi, I've been searching for a LSM9DS1 driver compatible with the nRF5 SDK >= 14.0 but I couldn't find any which uses the nrf_drv_twi so I started to write my own but it takes quite a bit of ... (more) 10 views no no ## What's static OOB in bluetooth mesh? Hi all !!! I understand Output OOB, Input OOB. I case of No OOB, value which is attached to Confirmation value is 0x00, what happens in case of static OOB? 13 views no no ## How to use PCA10040 as Dongle Scanner Hi, I'm trying to have a PCA10040 working as a scanner of BLE peripherals to be able to connect to my own BLE Nordic device, write a characteristic and close the connection. To do that I followed those steps ... (more) 89 views no no ## Drivers from nRF5_SDK to Mesh SDK Hello, I would like to add the ST7735 driver from the nRF5_SDK to the Mesh SDK along with the necessary components (SPI,..). I copied the entire components directory to my Mesh SDK. But I have an issue with the cmake ... (more) 10 views no no ## Bug report: BLE multirole example SDK 13.1 Hello, I'm working on a project based on the ble_app_multirole_lesc example, under examples/ble_central_and_peripheral/experimental and I found a bug. In ble_evt_dispatch when there hasn't been a connection yet and the advertising times out, the event is not ... 
(more) 17 views no no ## P0.09 and P0.10 on nRF52810 Hi, I'm developing on nrf52810 using the SDK14.1.0 and I've take as starting point a template application that targets nRF52810 running on the nRF52 Development Kit. But I'm working on nrf52810 hardware, NOT the nrf52DK ... (more) 6 views no no ## Neither a disconnect nor a timeout after a LL_TERMINATE_IND Hi, we have an issue, related to an older post, but we don't get the timeout outlined in that post. Our setup: we use the nRF52832 with the S132 V4.0.2 in the peripheral role and a iPhone ... (more) 19 views no no ## Erase multiple pages not working(nrf51822) I want to erase selected pages from the NRFflash memory (Not whole flash). I have defined one function for NRF memory erase. In that, I have followed following sequence of operations: for (Number of page erase) { 1. Set NVMC mode: Erase ... (more) 15 views no no ## Suggestion for Custom nRF52832 specially RF Antenna Hello Sir, we have designed custom PCB using nRF52832, I am little confused for is it OK. I have attached top and bottom layer layout of my PCB please sir review it and is it ok like antenna and Specially ... (more) #### Statistics • Total users: 26289 • Latest user: Malinda Buddicom • Resolved questions: 11535 • Total questions: 28672 ## Recent blog posts • ### Difference between nRF52840 Engineering A and B reference designs Posted 2018-01-15 12:27:08 by Martin Børs-Lind • ### [For Hire] Expert development services of custom Hardware devices | IoT solutions | Mobile Apps Posted 2018-01-15 09:08:42 by Ilya Surinsky • ### Rust bindings to nrf52 series Posted 2018-01-12 23:23:07 by Coleman McFarland • ### Show And Tell: Poly - Building an RC Airplane The Hard Way Posted 2018-01-05 01:17:57 by Daniel Veilleux • ### Bluetooth on a desktop computer (Windows, Mac, Linux) Posted 2018-01-04 17:56:57 by kbaud ## Recent questions • ### Nrf ESB using nrf_esb_suspend, nrf_esb_disable functions Posted 2018-01-20 19:31:28 by Amigo • ### BLE multi communication Posted 2018-01-20 18:30:08 by paul • ### ANT tx slots missed during channel scanning Posted 2018-01-20 18:27:27 by dlip • ### Everything You Need to Know About Printed Lanyards Posted 2018-01-20 18:01:16 by Fermin Murtagh • ### LPCOMP Interrupt Resets MCU (nRF51-DK) Posted 2018-01-20 15:32:47 by Anthony
2018-01-20 19:14:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1786859780550003, "perplexity": 11761.489408239286}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889681.68/warc/CC-MAIN-20180120182041-20180120202041-00058.warc.gz"}
https://www.enotes.com/homework-help/prove-that-sequence-convergent-first-term-u1-sqrt2-374553
# Prove that the following sequence is convergent: u_1=sqrt(2), u_2=sqrt(2+sqrt(2)), ldots, u_n=sqrt(2+sqrt(2+sqrt(2+sqrt(2+cdots+sqrt(2+sqrt(2))))))

tiburtius | Certified Educator | Educator since 2012 | Top subjects are Math, Science, and History

In order to prove that your sequence is convergent we will use the following theorem:

A sequence of real numbers is convergent if it is monotone and bounded.

Throughout, write x_n for the terms u_n of the sequence.

1. Proof of monotonicity

We will prove this by induction:

i) x_2=sqrt(2+sqrt(2))>sqrt(2)=x_1, which proves the base case.

ii) Assume that for every k leq n, x_k>x_(k-1).

iii) Now we use the assumption for k=n and add the number 2 to each side: 2+x_n>2+x_(n-1), hence

x_(n+1)=sqrt(2+x_n)>sqrt(2+x_(n-1))=x_n

which proves the inductive step.

2. Proof that the sequence is bounded

Since the sequence is increasing, we only need to find an upper bound (a lower bound is x_1). We will prove that 2 is an upper bound. Again we do this by induction:

i) x_1=sqrt(2)<2

ii) Assume that x_n<2.

iii) x_(n+1)=sqrt(2+x_n)<sqrt(2+2)=2, which proves the inductive step.

We have now proven that the sequence is convergent. We can also find the limit of the sequence. We know that the sequence is convergent, meaning it has a limit a. This means:

lim_(n rightarrow infty) x_n=lim_(n rightarrow infty) sqrt(2+x_(n-1))

a=sqrt(2+a)                                                       (1)

By squaring this equation we get a^2-a-2=0. The solutions to this equation are a_1=2 and a_2=-1. Since only a_1=2 satisfies (1), that is our limit.

## Related Questions

edobro | Student

In the proof of monotonicity, why did you add 2 to both sides, and is there another way of proving this that is more compact?
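Incidentally, for anyone who wants to see the convergence numerically as well as symbolically (an added illustration, not part of the answer above), iterating the recursion shows the terms climbing monotonically towards the limit 2:

from math import sqrt

x = sqrt(2)                   # x_1
for n in range(2, 12):
    x = sqrt(2 + x)           # x_n = sqrt(2 + x_{n-1})
    print(n, round(x, 6))     # increasing values approaching 2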
2019-11-18 06:50:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9753035306930542, "perplexity": 1347.8959952264136}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669454.33/warc/CC-MAIN-20191118053441-20191118081441-00370.warc.gz"}
https://electronics.stackexchange.com/questions/620809/why-are-these-capacitors-treated-as-if-they-are-in-series-why-must-the-incoming
# Why are these capacitors treated as if they are in series? Why must the incoming and outgoing current in a battery always be equal?

A circuit with three capacitors and an open end at A:

I am a high school student. I am very confused by this example. The circuit is open at end "A" as shown, but there would still be some surface charges on the capacitor which is not connected in the closed circuit. There would be surface charges on the wires so that the net electric field everywhere inside the conductors is 0. But in the school textbook they say that current will not go towards an open end, so we can treat the two capacitors which are connected in the closed circuit as a series combination. Also, all the charge that accumulates on one plate would come from the other plate connected in series. In this way all the charge that leaves one end of the battery would enter the other end, but how is this even possible? Charges accumulate on the wires as well, and on the capacitor which has an open end, but we are ignoring them. Why? If we don't ignore them, then the capacitors would not be in series, and the charge leaving one end of the battery and entering the other would not be equal.

• I'd recommend breaking your wording into phrases, it's difficult to read. Anyway, this is a tricky question since usually you either a) ignore parasitics or b) compute parasitic effects. If you consider residual charges on the wire you are considering parasitics, but at the same time you are ignoring leakage current in the capacitor and parasitic capacitance to the external ground. So IMHO the problem is not well-posed. I think your doubts are legitimate, ask your instructor May 24 at 7:57
• +1 on what @LorenzoMarcantonio said. It's down to whether you treat this as an ideal circuit or a real one with parasitics, such as stray capacitance via air. May 24 at 8:12
• The pair of capacitors connected in the closed circuit are in series. It makes no difference to that statement whether the 3rd capacitor's terminal A is connected somewhere or not. Obviously it makes a difference to how you would analyze the circuit, but those 2 caps are in series no matter what you do with 'A'. May 24 at 10:07
• @brhans, that's not correct. If A is connected somewhere (with a return path to the power source), then not all current that flows through the left capacitor must also flow through the right capacitor, and they are not in series. For example, we can't use the series capacitors formula to find the equivalent capacitance of the combination, or the series impedance formula to find the equivalent impedance of the combination. May 24 at 15:27
• "Why must the incoming and outgoing current in a battery always be equal?" -- because, like gravity, it's the law. May 24 at 17:22

Unless this circuit is being operated with a voltage source with a frequency in the megahertz or more, there's not much point in analysing what happens to actual charges here. You can rely on Kirchhoff's Current Law to tell you everything you need to know about this circuit, and yes, it works for capacitors too. If you insist on talking about charges, let me assure you that if one single electron enters one side of a capacitor, then another exits the other side. If that happens to the top capacitor in your diagram, you now have a greater charge density at A than before, and the change in potential that this causes would simply push the electron right back in again.
Because current in equals current out, and because there is "infinite" resistance to current flow out of A, no current flows in that top capacitor, and you can consider it to be entirely absent. Consequently, all that remains is the other two capacitors in series.

As far as charges entering and leaving the voltage source are concerned, one thing is certain: no electron that leaves that source can ever return to it "in person", because there are dielectric barriers in the capacitors which the electrons cannot traverse. This should tell you that the electrons entering the voltage source must already be present in the wire connected to it. I'm alluding to your idea that charges accumulate in wires, which is not the case. They are always there. It is like when you switch on the tap in your bathroom: water emerges immediately, because water was already in the pipe. If the pipe was empty, sure, you would have to wait for the water to arrive from the tank, but that's not how electric current works. Charges are always present in conductors; it's simply not possible to "empty" them of charge.

What I just said is true, but capacitors seem to act differently. It is possible to remove charges from a plate, but as you do, more charges will be drawn onto the other plate as a result. The total quantity of charge on both plates of a capacitor doesn't change, but its distribution within does. This is why I can say that if charge enters one capacitor terminal, then the same amount of charge must necessarily leave out of the other terminal. Current in equals current out, in accordance with Kirchhoff's Current Law. All that has changed, really, is the distribution of charges on the plates of the capacitor, and no charge ever crosses the dielectric.

So, I reiterate: since by Kirchhoff's Current Law we see that no current can flow out of the capacitor to node A, no current can flow into that capacitor, and it can be disregarded. If for some reason charge made it out of the capacitor into node A, this would disturb the equilibrium of charge distribution within that node, and the resulting change in potential would simply cause the equilibrium to restore itself, pushing charge back into the capacitor.

Of course, charges are moving all the time, mainly due to kinetic energy which we call "heat", so what I am describing here is an "average" behaviour of trillions of charges. Sure, a few might move upwards towards A, but they will displace another bunch back down again, so on average there's no current flow. So, ignoring individual charge behaviour, en masse they behave in such a way that it is simply not possible for charges to enter anything without an equivalent quantity also leaving, which may or may not be the same actual charges. When you charge a battery, there are not more electrons in it now than there were before; they are simply distributed differently within the battery. Current in always equals current out, and that's Kirchhoff's Current Law (KCL) in a nutshell.

Edit: I'd like to say also that when people refer to "charge accumulating in a capacitor", that's misleading. Accumulation of charge on one plate results in an equal depletion of charge on the other, so there's no net accumulation; only a change of distribution. Also, just to drive the point home: within a section of wire, there are a certain number of electrons which are mobile and can participate in "electric current". That number doesn't change, whether current is flowing or not.
When a million electrons are pushed in one end, a million are shoved out of the other end, the total number occupying the wire section never changing. You should discard the notion that charge "builds up" anywhere, except under very particular circumstances. For example, a FET's gate may accumulate electrons, but that will simply expel electrons from the channel, forming a depletion zone. From the perspective of an observer outside this FET, though, KCL is obeyed, because those expelled electrons will manifest as an equal charge flow out of the drain and/or source. In other words, even though individual devices may rely on some imbalance of charge distribution within them, their net charge content does not change. To the engineer outside, no imbalance would be visible, and no violation of KCL would occur.

• But if you think logically: I am saying that there should be some surface charge on the top capacitor, because only that can change its potential, which was initially taken to be 0 with respect to infinity and finally becomes, let's say, "V". Of course there wouldn't be any current leaving node "A", but charges can redistribute themselves to cancel the net electric field on that top isolated conductor. I think you don't know much about surface charges and how they are set up in a circuit; I like to think in terms of electric fields and not just by considering voltages at different parts. May 26 at 5:32
• @ArunBhardwaj you are right, I don't know much about surface charge May 26 at 7:16
• The ambiguous description of charge building up stems from the multiple definitions of charge that are (unintentionally) applied. I'd say it's correct to say that a capacitor is charged (like a battery is charged), but what's built up in there is energy. Jun 3 at 11:49
• Your remark about the electron re-entering the battery in person raises an interesting philosophical question. On the one hand, you are "obviously" right. On the other hand, as an indistinguishable particle, an electron has no real identity... but does that mean that your statement is inapplicable (trying to define something undefinable), or that you're explicitly wrong, or that you're still right but it just doesn't matter? Jun 3 at 11:54
• By the way (going on a tangent here, hardly related to the question any more), do you know whether electrons in a current exclusively move in one direction? Or could there be "turbulence" that causes some of them to actually go upstream, as long as the net displacement follows the current? If so, that poor electron might find its way home just yet. Jun 3 at 11:59

This is a model, or ideal circuit, that is a useful approximation (in many cases) to a real-world circuit. If we wanted to use a circuit model to take account of the charges on the surfaces of the wires, and on the dangling connection to the top capacitor, then we would add extra capacitor components and label them Cstray. For typical circuits they would be quite small, often just a few pF, to correctly account for the amount of charge that's there at battery voltages. If the components shown in the circuit are in the nF or uF region, then you can see that the stray capacitance absorbs so little charge that we can safely ignore it, compared to the charge on the main components. Not all teachers make this approximation aspect of circuit theory clear; it seems yours hasn't. It's always a problem when teaching a potentially very complicated topic.
Do you make the approximation explicit and frighten the noobs, or just present the approximation and leave the deeper thinkers mistrusting what you've said?

Note that even when we have modeled the distributed charges with stray capacitors, we only have a better description of some real physical circuit, not a correct or ideal one. The actual circuit will have conductors that have some physical length to them, and so have inductance, or might even need to be treated as transmission lines or antennae. In the limit, we can always find some operating condition or accuracy level that defeats a circuit model. Depending on the frequency we want to operate at, and the accuracy we want, the trick is to model the circuit in as few ideal components as possible, and still capture the behaviour that we are interested in. We often sum it up in the aphorism by George Box: "All models are wrong, but some are useful."

The easiest way to handle surface charges is to use a fields-and-charges model, rather than a circuit model. This is ideal for capturing the electric fields that result from some geometry of voltage sources and wires. While that model is very good for electrostatics, it's not the right tool to attempt to describe circuits with.

• In EMC you usually consider the 'air capacitance to ground' stray as 4 pF nominal. It's surprisingly good as a rule-of-thumb value. But in these cases you also consider the capacitor lead inductances and wiring resistances… I guess the OP is at the 'pure electrostatic physics' level May 24 at 8:10
• @LorenzoMarcantonio yes, I am seeing it from the electrostatic point of view. Also, even if I assume for now that the charge on the open capacitor and the wires is so small that we can ignore it, it is still there, right? So does it mean that the incoming and outgoing current in the battery would not be equal here? And wouldn't that change the potential difference of the battery? May 24 at 8:18
• @ArunBhardwaj All of what you're saying is correct, at some level. If you want to illustrate it with a circuit model, then just sprinkle around the extra components that will absorb charge in the right places, and restore circuit theory's charge rules. You could even have a stray capacitor to the middle of the battery if you wanted. But we rarely do that because it's just too complicated. If we want to represent surface charges in a detailed way, then we use a fields and charges model, which can represent them much more easily, but then it's far too complicated to represent a 'circuit' easily May 24 at 8:22
• @ArunBhardwaj Read the George Box article, this is one of his quotes: "2.4 Worrying Selectively - Since all models are wrong the scientist must be alert to what is importantly wrong. It is inappropriate to be concerned about mice when there are tigers abroad." Those stray Cs are mice. However, your teacher has not told you what sort of zoo he is showing you around. May 24 at 8:47
• @ArunBhardwaj "will I get to know more about how electricity actually travels when I study at a higher level?" Yes and no. Yes, you'll get to know more: transmission line theory for RF engineers, for instance, and Fermi levels for hole and electron transport in semiconductors (did you know a hole is just as 'real' a particle as an electron?). Everything, no, because even those are approximations. It's all emergent from quantum theory. This is why teachers don't spell out their approximations, but present them as 'truth', as 'lies to children'. May 24 at 8:53
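To put rough numbers on that mice-versus-tigers point, here is a small illustrative sketch (Kotlin used purely as a calculator; the 100 nF loop capacitors are an assumed value, and 4 pF is the rule-of-thumb stray figure quoted above). It compares the ideal series combination with the size of an assumed stray capacitance at the open node.

fun main() {
    // equivalent capacitance of two ideal capacitors in series: 1/Ceq = 1/C1 + 1/C2
    fun series(c1: Double, c2: Double) = 1.0 / (1.0 / c1 + 1.0 / c2)

    val c1 = 100e-9      // assumed 100 nF capacitor in the closed loop
    val c2 = 100e-9      // assumed 100 nF capacitor in the closed loop
    val stray = 4e-12    // assumed ~4 pF stray capacitance at the dangling node

    val ceq = series(c1, c2)
    println("ideal series combination: ${ceq * 1e9} nF")    // 50 nF

    // at a given battery voltage, charge is Q = C*V, so this ratio is also the ratio of
    // charge the stray path absorbs relative to the charge moved around the main loop
    println("stray / series capacitance: ${stray / ceq}")   // 8.0E-5, i.e. negligible
}

With values like these, the charge parked on the dangling node is of the order of 0.01% of the charge exchanged by the series pair, which is why the ideal-series treatment and the KCL bookkeeping hold to an excellent approximation.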
2022-07-05 07:31:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47606492042541504, "perplexity": 571.93504796823}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104514861.81/warc/CC-MAIN-20220705053147-20220705083147-00682.warc.gz"}
https://nm.education/courses/introduction-to-data-science/lessons/linear-regression/topic/simple-linear-regression/
We have all plotted line graphs on coordinate planes (X-axis and Y-axis) before, where the x variable is the independent variable, usually called the predictor variable, and y is the dependent variable, called the criterion variable. To predict the relationship between these two variables, regression analysis is used. For instance, suppose you know that a house of 2,200 square feet costs $50,000, and you wish to buy a 3,500 square feet house. How much will it cost you? In order to find the price, you need a relationship between the price and the area of the house. Linear regression helps you do this. Let's find out!

Simple Linear Regression

Simple linear regression is the simplest approach for statistical learning. It is a part of bivariate statistics; bivariate means two variables. The relationship between variable 1 and variable 2 is special in the case of linear regression: the value of one variable is a function of the other variable.

$$y = f(x)$$

The first equation which crosses our minds after reading this is the equation of a line. The slope-intercept form of a line:

$$y=m*x+b$$

In this formula, x is the independent variable and y is the dependent variable, along with two real constants, m and b, where m is the slope of the line and b is the y-intercept, that is, where the line crosses the Y-axis (the value of y when x=0). Now, let's take an example:

$$y=2*x+3$$

We can learn a lot about this equation by superimposing it on the slope-intercept form of the line equation. The slope of the line = 2. Y-intercept = 2(0)+3 = 3. On plotting, it gives a straight line.

We had a look at the algebraic formula of lines, and now we will examine its connection with linear regression.

Linear Regression Equation: Consider the following dataset in Table-1, which has prices of houses based on the area of each house.

| Area (in square feet) | Price (in thousand $) |
|---|---|
| 1500 | 340 |
| 1750 | 390 |
| 2600 | 550 |
| 3000 | 565 |
| 3200 | 610 |
| 3600 | 680 |
| 4000 | 725 |
| 2000 | 490 |

Now, with the help of the data provided in the table, we have to predict the price of a house that has an area of 3,500 square feet.

Step – 1: We will plot a scatter graph using the available data points. Note: The order in which data points are plotted doesn't matter. We can graph the points in any order. This just happens to be the final graph we ended up with.

Step – 2: Look for a rough visual line. Warning: If you try to find a linear regression equation through an automated program like Excel, you will find a solution, but it does not necessarily mean the equation is a good fit for your data. As you can see, there are several lines passing through the data points. We don't know that any one of these lines here is the actual regression line. So which line is it? To answer this question, let's get to the next step.

Step – 3: In this step, descriptive statistics is used in order to find the best-fit line. We will start by finding the mean of each variable. The formula for finding the mean:

$$Mean = \frac{\text{Sum of Observations}}{\text{Total number of Observations}}$$

Mean for area = xm = 2706.25 square feet
Mean for house prices = ym = 543.75 thousand $

Plot this point, (xm, ym), in the graph. This point is termed the centroid. Note: The best-fit regression line must pass through the centroid, which comprises the mean of the x variable and the mean of the y variable.

Step – 4: To make the best-fit line we need two points. The centroid gives you one point to work with. Let's move ahead with the calculations.
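Before pushing ahead with the hand calculation, the Step – 3 means are easy to double-check in code. The snippet below is only an illustrative sketch on the Table-1 house data (the lesson's own Kotlin walkthrough later on uses a different, salary dataset); it simply averages the two columns.

fun main() {
    // Table-1: area in square feet and price in thousand $
    val area = listOf(1500.0, 1750.0, 2600.0, 3000.0, 3200.0, 3600.0, 4000.0, 2000.0)
    val price = listOf(340.0, 390.0, 550.0, 565.0, 610.0, 680.0, 725.0, 490.0)

    val xm = area.average()    // 2706.25
    val ym = price.average()   // 543.75
    println("centroid = ($xm, $ym)")
}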
Mathematically, the regression line shares the characteristic of being linear with adjustable parameters. Remember the general equation of a line we learned, the slope-intercept form:

$$y = mx+b$$

where m is the slope of the line and b is the y-intercept.

Formula for finding the slope:

$$m=\frac{\displaystyle\sum_{i=1}^{n}(x - xm)(y - ym)}{\displaystyle\sum_{i=1}^{n}(x - xm)^2}$$

To find the value of the y-intercept b, we take ym, which is the mean of the dependent variable, and subtract the slope times the mean of the independent variable:

$$b=ym-m*(xm)$$

We already calculated the values of xm and ym earlier in Step – 3.

Calculations: In order to form our linear equation, each data point should be considered when calculating the slope. So let's re-create Table-1 by adding the necessary columns.

| Area (sq. feet) $$x$$ | Price ($) $$y$$ | Area deviation $$(x - xm)$$ | Price deviation $$(y - ym)$$ | Deviation product $$(x - xm)(y - ym)$$ | Area deviation squared $$(x - xm)^2$$ |
|---|---|---|---|---|---|
| 1500 | 340 | -1206.25 | -203.75 | 245773.4375 | 1455039.063 |
| 1750 | 390 | -956.25 | -153.75 | 147023.4375 | 914414.0625 |
| 2000 | 490 | -706.25 | -53.75 | 37960.9375 | 498789.0625 |
| 2600 | 550 | -106.25 | 6.25 | -664.0625 | 11289.0625 |
| 3000 | 565 | 293.75 | 21.25 | 6242.1875 | 86289.0625 |
| 3200 | 610 | 493.75 | 66.25 | 32710.9375 | 243789.0625 |
| 3600 | 680 | 893.75 | 136.25 | 121773.4375 | 798789.0625 |
| 4000 | 725 | 1293.75 | 181.25 | 234492.1875 | 1673789.063 |

Note: Regression is very sensitive to rounding, so it is best to take your calculations to four decimal places.

From the table we get:

Sum of deviation products = $$\displaystyle\sum_{i=1}^n(x - xm)(y - ym)$$ = 825312.5

Sum of area deviations squared = $$\displaystyle\sum_{i=1}^n(x - xm)^2$$ = 5682187.5

Slope = m = $$\frac{\displaystyle\sum_{i=1}^{n}(x - xm)(y - ym)}{\displaystyle\sum_{i=1}^{n}(x - xm)^2}$$ = $$\frac{825312.5}{5682187.5}$$ = 0.14526

We found the values xm, ym and m, and thus the y-intercept is

$$b=ym-m*(xm)$$ = $$543.75 - 0.14526 * 2706.25$$ = 150.64

Now, as our final step, assembling the values of the slope and y-intercept gives us the following regression equation:

$$y=0.14526*x+150.64$$

Remember we discussed that our centroid has to fall on the best-fit regression line. Well, it does. This method is the Least Squares method.

Interpretation: For every 1 square foot the area increases, we would expect the price to increase by 0.14526 thousand $, that is, $145.26.

Implementation using Kotlin

You are now aware of the concept of linear regression. Let's start with the implementation of what we learned.

Dataset: We will be using a self-generated dataset. This is just an arbitrary choice; you can use any other dataset of your choice.

The first step is splitting the data between testing and training data sets. The training process builds up the machine learning algorithm. This data is fed to the algorithm; the model evaluates the data repeatedly to learn about the behavior of the data and then reveals patterns to serve the intended purpose. After the model is built, testing data validates that it can make accurate predictions. The testing data should be unlabeled. It acts as a real-world check on an unseen dataset to confirm that the algorithm was trained effectively. We usually split the data around 20%-80% between the testing and training stages.

Code: Starting with the training dataset representing salary in thousands per year of experience.

val xs = arrayListOf(1, 2, 3, 4, 5, 6, 7, 8, 9, 10) //independent variable
val ys = arrayListOf(25, 35, 49, 60, 75, 90, 115, 130, 150, 200) //dependent variable

As discussed earlier, the next step is to find the centroid.
That is, the mean of both variables.

val meanX = xs.average() //Mean of independent variable
println(meanX) //print function

Output: 5.5

val meanY = ys.average() //Mean of dependent variable
println(meanY) //print function

Output: 92.9

The coordinates of the centroid are (5.5, 92.9). Let's move forward!

Training the model:

val yearsdeviation = xs.map{it - 5.5} //Finding the deviation of years
//map is an inbuilt function which applies the specified operation to every element of the list
println(yearsdeviation)

Output: [-4.5, -3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5, 4.5]

val salarydeviation = ys.map{it - 92.9} //Finding the deviation in the salary
println(salarydeviation)

Output: [-67.9, -57.900000000000006, -43.900000000000006, -32.900000000000006, -17.900000000000006, -2.9000000000000057, 22.099999999999994, 37.099999999999994, 57.099999999999994, 107.1]

val deviationproduct = xs.zip(ys) { x, y -> (x - meanX) * (y - meanY) } //Multiplying the two lists element by element
//zip merges two different lists into a new list
println(deviationproduct)

Output: [305.55, 202.65000000000003, 109.75000000000001, 49.35000000000001, 8.950000000000003, -1.4500000000000028, 33.14999999999999, 92.74999999999999, 199.84999999999997, 481.95]

val sqofyearsdeviation = yearsdeviation.map {e -> e.pow(2)} //Squaring the deviation-of-years column
//e denotes each element in the list
println(sqofyearsdeviation)

Output: [20.25, 12.25, 6.25, 2.25, 0.25, 0.25, 2.25, 6.25, 12.25, 20.25]

val num = deviationproduct.sum() //summation of all elements in the list
println(num)

Output: 1482.5

val deno = sqofyearsdeviation.sum()
println(deno)

Output: 82.5

Now let's calculate the value of the slope.

Slope = $$\frac {num}{deno}$$

val slope = num / deno
println(slope)

Output: 17.96969696969697

val slope:Double = String.format("%.3f", 17.96969696969697).toDouble() //Rounding off the value of the slope
println(slope)

Output: 17.97

After calculating the slope, the centroid is used to find the Y-intercept:

val yIntercept = meanY - slope * meanX // Finding the value of the Y-intercept
yIntercept

Output: -5.934999999999988

//Final equation of linear regression
val simpleLinearRegression = { independentVariable: Double -> slope * independentVariable + yIntercept }

Our model is trained now. Let's test it!

Testing the model: For instance, how much should an individual with 2.5 or 7.5 years of experience be entitled to earn?

val testcaseone = simpleLinearRegression.invoke(2.5) // Predicting the salary of an employee with 2.5 years of experience
testcaseone

Output: 38.99000000000001

//Rounding off predicted value of test case 1
val testcaseone:Double = String.format("%.3f", 38.99000000000001).toDouble()
testcaseone

Output: 38.99

val testcasetwo = simpleLinearRegression.invoke(7.5) // Predicting the salary of an employee with 7.5 years of experience
testcasetwo

Output: 128.83999999999997

//Rounding off predicted value of test case 2
val testcasetwo:Double = String.format("%.3f", 128.83999999999997).toDouble()
testcasetwo

Output: 128.84

Our model predicted the salary with respect to the years of experience of the employee. Now the question is, how accurate is this value? What is the error percentage between the original value and the predicted value? To answer these questions let's move to the next section.

Building a Regression Model in S2

S2/Kotlin has ample packages for solving linear model problems using regression models. We will implement the Ordinary Least Squares (OLS) model. To build and analyze a model for a given dataset, an $$LMProblem$$ is constructed.

Q.
Consider a response vector (dependent variable): $$Y = \begin{bmatrix}1\\2\\4\\5\\10\end{bmatrix}$$ and a design matrix for the explanatory vector (independent variable): $$X = \begin{bmatrix}25\\35\\60\\75\\200\end{bmatrix}$$. Construct an OLS model using the S2 IDE.

%use s2 //a keyword used in every program of S2

//Defining the vector Y
val Y: Vector = DenseVector(arrayOf(1.0, 2.0, 4.0, 5.0, 10.0))

//Defining the matrix X
val X: Matrix = DenseMatrix(
    arrayOf(
        doubleArrayOf(25.0),
        doubleArrayOf(35.0),
        doubleArrayOf(60.0),
        doubleArrayOf(75.0),
        doubleArrayOf(200.0)
    )
)

//estimation of the true intercept
val intercept = true
val problem1 = LMProblem(Y, X, intercept)
printOLSResults(problem1) //calling the function; runs an OLS regression

//Block for defining the function
fun printOLSResults(problem: LMProblem?) {
    val ols = OLSRegression(problem)
    val olsResiduals: OLSResiduals = ols.residuals()
    //coefficients for explanatory variables
    println("beta hat: ${ols.beta().betaHat()}\nstderr: ${ols.beta().stderr()},\nt: ${ols.beta().t()},\nresiduals: ${olsResiduals.residuals()}")
    //beta hat holds the slope and intercept estimates; stderr is the standard error
    //a residual is the difference between the original value and the model value
}

Output:
beta hat: [0.048918, 0.535481]
stderr: [0.005264, 0.532022] ,
t: [9.293036, 1.006500]
residuals: [-0.758430, -0.247609, 0.529441, 0.795672, -0.319074]

Fit of the regression model

So far we have covered and coded the fundamental concepts of simple linear regression. In this section, we will learn to evaluate how a regression line fits the data it models. A regression model is unique to the data it represents. Once a regression model is built, the sum of squared errors or residuals is calculated using the regression line. Up to now we have calculated the regression line using the Least Squares method. Now, we will find the errors and residuals.

In statistics, the residual and the error are not the same. Error is defined as the difference between the observed value and the true value (an unobserved value), whereas residual is defined as the difference between the observed value and the model value (the predicted value).

Error Calculation: Recall the example of house prices we considered. We already discussed and found the equation of linear regression for the selected data, and how the line passes through the centroid. Now let's move on to the part where we calculate the predicted price for each house with the help of the equation we derived earlier.

| Area (square feet) | Price (thousand $) | $$y=0.14526*x+150.64$$ | $$y$$ (predicted price) |
|---|---|---|---|
| 1500 | 340 | $$y=0.14526*(1500)+150.64$$ | 368.53 |
| 1750 | 390 | $$y=0.14526*(1750)+150.64$$ | 404.845 |
| 2600 | 550 | $$y=0.14526*(2600)+150.64$$ | 528.316 |
| 3000 | 565 | $$y=0.14526*(3000)+150.64$$ | 586.42 |
| 3200 | 610 | $$y=0.14526*(3200)+150.64$$ | 615.472 |
| 3600 | 680 | $$y=0.14526*(3600)+150.64$$ | 673.576 |
| 4000 | 725 | $$y=0.14526*(4000)+150.64$$ | 731.68 |
| 2000 | 490 | $$y=0.14526*(2000)+150.64$$ | 441.16 |

All we are doing in the above table is substituting each area value in place of x. Evaluating these equations gives us the predicted price in the last column. The point to note in this case is that our training data and testing data are the same.

Note: Instead of grabbing a calculator for the above calculations, open S2 and code each data point.

val y=0.14526*(1500)+150.64 //Calculating for the first row
println(y)

Output: 368.53

Observations: For a house of 1500 square feet, the price was 340 thousand $, and according to our regression equation we predicted 368.53 thousand $.
Thus, you can observe that the price of a house is not exactly the same as our predicted value. This discrepancy in values is what we refer to as an error. So let's find the difference between the original price and the predicted price for all the data points we have.

| Area (square feet) | Price (thousand $) | $$y$$ (predicted price) | Error = Observed – Predicted | Squared Error |
|---|---|---|---|---|
| 1500 | 340 | 368.53 | -28.53 | 813.9609 |
| 1750 | 390 | 404.845 | -14.845 | 220.3740 |
| 2600 | 550 | 528.316 | 21.684 | 470.1958 |
| 3000 | 565 | 586.42 | -21.42 | 458.8164 |
| 3200 | 610 | 615.472 | -5.472 | 29.9428 |
| 3600 | 680 | 673.576 | 6.424 | 41.2678 |
| 4000 | 725 | 731.68 | -6.68 | 44.6224 |
| 2000 | 490 | 441.16 | 48.84 | 2385.3456 |

Squaring the differences gives the values in the last column of the above table. Adding them up, we get the Sum of Squared Errors (SSE).

Sum of Squared Errors = SSE = $$\displaystyle\sum_{i=1}^n(Observed-Predicted)^2$$ = 4464.5257

The error in the scatter graph of Table-1 is the distance from the fitted line to each observed data point. Squaring all these errors (SSE) can be represented on the graph.

Residual Analysis: Residual analysis is important to assess the appropriateness of a linear regression model. It is the second major step towards validating a model. If the model assumptions are not satisfied, residual analysis often suggests ways of improving the model in order to obtain better results.

For understanding residuals, recall the dataset and its scatter plot we studied in Table-1. Now, construct a line parallel to the X-axis using the value of ymean, that is, 543.75.

Interpretation: In the above graph, observe how the distance from the line ymean to an observed data point can be divided into exactly two distances. One is the SSE we discussed in the previous section and the second is the SSR.

The Sum of Squares due to Regression (SSR, also called the explained sum of squares) is a statistical way to study the amount of variance captured by a regression model. Its graphical interpretation is:

SSR = SST – SSE

We have already calculated SSE, so now let's calculate the Total Sum of Squares (SST). Consider the following diagram. Note: In this case, we considered only the dependent variable, which is the price of the house. We figured out its mean and marked a horizontal line at 543.75 (thousand $). We square the distance between each observed data point and the mean line and then add them up.

| Area (square feet) | Price (thousand $) | Price difference w.r.t. ymean | Squared |
|---|---|---|---|
| 1500 | 340 | -203.75 | 41514.0625 |
| 1750 | 390 | -153.75 | 23639.0625 |
| 2600 | 550 | 6.25 | 39.0625 |
| 3000 | 565 | 21.25 | 451.5625 |
| 3200 | 610 | 66.25 | 4389.0625 |
| 3600 | 680 | 136.25 | 18564.0625 |
| 4000 | 725 | 181.25 | 32851.5625 |
| 2000 | 490 | -53.75 | 2889.0625 |

Adding the values in the last column, we get the Total Sum of Squares (SST).

Total Sum of Squares = SST = $$\displaystyle\sum_{i=1}^n(Observed(dependent) - ymean)^2$$ = 124337.5

Recall the relation between SSR, SSE, and SST:

SSR = SST – SSE = 124337.5 – 4464.5257 = 119872.9743

Thus, we can say that "the residual is an estimate of the error".

Coefficient of Determination: Now that we have the values of SSR, SSE, and SST, let's go ahead and find the coefficient of determination, represented as $$R^2$$. It is a statistical measure of the goodness of fit. R-squared is generally interpreted as a percentage. If SSR is large, more of the SST is explained, and thus SSE is smaller relative to the total. This ratio acts as a percentage. Mathematically, it's the sum of squares due to regression divided by the Total Sum of Squares.
$$R^2 = \frac{\text{Sum of Squares due to Regression}}{\text{Total Sum of Squares}} = \frac{SSR}{SST}$$

Note-1: The value is always between 0 (0%) and 1 (100%).
Note-2: Larger values of $$R^2$$ suggest that our linear model is a good fit for the data we provided.

In our case, $$R^2 = \frac{119872.9743}{124337.5} = 0.9641$$, or 96.41%.

Conclusion: We can conclude that 96.41% of the total sum of squares can be explained by our regression equation to predict house prices. Thus the error percentage is less than 4%, which implies a GOOD FIT for our model.

While coding in S2, we include the following line to print the coefficient of determination.

println("R2: ${olsResiduals.R2()}",)

Mean Square Error: Represented as $$s^2$$, it tells us how spread out the data points are from the regression line. $$s^2$$ is the estimate of sigma squared, that is, the variance. MSE is SSE divided by its degrees of freedom, which in our case is n - 2 = 6, because we lose two degrees of freedom by estimating the slope and the intercept.

Note-1: In simple linear regression, two degrees of freedom are always used up by the slope and the intercept, so the error degrees of freedom are n - 2.
Note-2: MSE is not simply the average of the residuals.

$$s^2 = \frac{SSE}{n-2} = \frac{4464.5257}{6} = 744.08$$

Standard Error: It is the standard deviation of the overall error, represented as $$s$$. It can be read as the average distance an observation/data point falls from the regression line, in units of the dependent variable.

$$s = \sqrt{MSE} = \sqrt{744.08} = 27.278$$

$$s$$ is a measure of how well the regression model makes predictions. When coding in S2, the following line prints it:

println("standard error: ${olsResiduals.stderr()}, f: ${olsResiduals.Fstat()}",) //standard error

Testing Data Implementation

Consider the same question we coded previously in the "Building a Regression Model in S2" section. Here, we will predict values and find the errors associated with the data. Consider the vector for testing as: $$W = \begin{bmatrix}1.2\\2.4\\4.1\\5.3\\9.9\end{bmatrix}$$.

%use s2

val Y: Vector = DenseVector(arrayOf(1.0, 2.0, 4.0, 5.0, 10.0))
val X: Matrix = DenseMatrix(
    arrayOf(
        doubleArrayOf(25.0),
        doubleArrayOf(35.0),
        doubleArrayOf(60.0),
        doubleArrayOf(75.0),
        doubleArrayOf(200.0)
    )
)
val intercept = true
val problem1 = LMProblem(Y, X, intercept)
printOLSResults(problem1)

// Testing data vector
val W: Vector = DenseVector(arrayOf(1.2, 2.4, 4.1, 5.3, 9.9))
val problem2 = LMProblem(Y, X, intercept, W)
printOLSResults(problem2)

fun printOLSResults(problem: LMProblem?)
{
    val ols = OLSRegression(problem)
    val olsResiduals: OLSResiduals = ols.residuals()
    println("beta hat: ${ols.beta().betaHat()},\nstderr: ${ols.beta().stderr()}, \nresiduals: ${olsResiduals.residuals()}")
    println("R2: ${olsResiduals.R2()}, standard error: ${olsResiduals.stderr()}, f: ${olsResiduals.Fstat()}",)
    println("fitted values: ${olsResiduals.fitted()}")
    println("sum of squared residuals: ${olsResiduals.RSS()}")
    println("total sum of squares: ${olsResiduals.TSS()}")
    println()
}

Output:

beta hat: [0.048918, 0.535481] ,
stderr: [0.005264, 0.532022] ,
residuals: [-0.758430, -0.247609, 0.529441, 0.795672, -0.319074]
R2: 0.9664281242711773, standard error: 0.7420099473407974, f: 86.36051188299813
fitted values: [1.758430, 2.247609, 3.470559, 4.204328, 10.319074]
sum of squared residuals: 1.6517362858580784
total sum of squares: 49.2

beta hat: [0.045201, 1.055123] ,
stderr: [0.003621, 0.504376] ,
residuals: [-1.185147, -0.637157, 0.232818, 0.554804, -0.095319]
R2: 0.9811093504747105, standard error: 1.23873309823374, f: 155.80872682454918
fitted values: [2.185147, 2.637157, 3.767182, 4.445196, 10.095319]
sum of squared residuals: 4.60337906597928
total sum of squares: 243.68558951965065

Outliers

We have now covered the most important parts of regression. In this section, we will understand the concept of outliers. Sometimes, when you make a scatter plot, one or more data points just don't look right. Consider the following scatter plot, for example. Here, you will notice that the orange point looks way out of place. So, for one of the variables considered, a value can appear far outside the norm. A point with such a large residual, even though it lies within the range of one variable, is termed an outlier.

Note: An outlier affects the slope of the regression line, as it falls outside the general pattern of the data.

We now conclude the topic, "Simple Linear Regression". Thank you for spending time on this course! Hope you learned a lot.
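As a closing exercise, the fit statistics derived by hand for the house data can be reproduced in a few lines. This is only an illustrative sketch, reusing the lesson's rounded regression equation y = 0.14526*x + 150.64, so small rounding differences from the hand-built tables are expected.

fun main() {
    val area = listOf(1500.0, 1750.0, 2600.0, 3000.0, 3200.0, 3600.0, 4000.0, 2000.0)
    val price = listOf(340.0, 390.0, 550.0, 565.0, 610.0, 680.0, 725.0, 490.0)

    val predict = { x: Double -> 0.14526 * x + 150.64 }   // regression line from the lesson
    val ym = price.average()                              // 543.75

    val sse = area.zip(price) { x, y -> (y - predict(x)) * (y - predict(x)) }.sum()   // sum of squared errors
    val sst = price.sumOf { (it - ym) * (it - ym) }                                   // total sum of squares
    val ssr = sst - sse                                                               // explained (regression) sum of squares

    println("SSE = $sse")          // about 4464.5
    println("SST = $sst")          // 124337.5
    println("R^2 = ${ssr / sst}")  // about 0.964
}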
2023-03-22 03:44:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6323685646057129, "perplexity": 1905.057463859798}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943749.68/warc/CC-MAIN-20230322020215-20230322050215-00661.warc.gz"}
https://barneyshi.me/2021/08/18/power-of-three/
# Leetcode 326 - Power of three Note: It’s not that hard but there are some traps in this problem. method 1: method 2:
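For reference, one common approach to this problem (not necessarily the post's own method 1 or method 2) is a plain division loop; the usual traps are n = 0, negative n, and floating-point log tricks that fail because of rounding. A hedged Kotlin sketch:

// LeetCode 326: return true iff n is a power of three (n = 3^k for some k >= 0)
fun isPowerOfThree(n: Int): Boolean {
    if (n < 1) return false          // 0 and negative numbers are never powers of three
    var x = n
    while (x % 3 == 0) x /= 3        // strip out every factor of three
    return x == 1                    // only powers of three reduce all the way down to 1
}

fun main() {
    println(isPowerOfThree(27))   // true
    println(isPowerOfThree(0))    // false
    println(isPowerOfThree(45))   // false (45 = 3^2 * 5)
}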
2022-10-05 00:07:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4305623173713684, "perplexity": 3354.3265364632416}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00362.warc.gz"}
https://marktheballot.blogspot.com/2012/11/cube-law.html
Sunday, November 18, 2012

Cube Law

I have spent the past few days playing with Bayesian statistics, courtesy of JAGS (which is a Markov chain Monte Carlo (MCMC) engine where the acronym stands for Just Another Gibbs Sampler). The problem I have been wrestling with is what the British call the Cube Law. In first-past-the-post voting systems with a two-party outcome, the Cube Law asserts that the ratio of seats a party wins at an election is approximately the cube of the ratio of votes the party won in that election. We can express this algebraically as follows (where s is the proportion of seats won by a party and v is the proportion of votes won by the party; both s and v lie in the range from 0 to 1):

$$\frac{s}{1-s} = \left(\frac{v}{1-v}\right)^3$$

My question was whether the relationship held up under Australia's two-party-preferred voting system. For the record, I came across this formula in Simon Jackman's rather challenging text: Bayesian Analysis for the Social Sciences.

My first challenge was to make the formula tractable for analysis. I could not work out how Jackman did his analysis (in part because I could not work out how to generate an inverse gamma distribution from within JAGS, and it did not dawn on me initially to just use normal distributions). So I decided to pick at the edges of the problem and see if there was another way to get to grips with it.

There are a few ways of algebraically rearranging the Cube Law identity. In the first of the following equations, I have made the power term (relabeled k) the subject of the equation; in the second, I made the proportion of seats won the subject:

$$k = \frac{\log\left(s/(1-s)\right)}{\log\left(v/(1-v)\right)} \qquad\qquad s = \frac{v^k}{v^k + (1-v)^k}$$

In the end, I decided to run with the second equation, largely because I thought it could be modeled simply from the beta distribution, which provides diverse output in the range 0 to 1. The next challenge was to construct a linking function from the second equation to the beta distribution. I am not sure whether my JAGS solution is efficient or correct, but here goes (constructive criticism welcomed).

model {
    # likelihood function
    for(i in 1:length(s)) {
        s[i] ~ dbeta(alpha[i], beta[i])    # s is a proportion between 0 and 1
        alpha[i] <- theta[i] * phi
        beta[i] <- (1-theta[i]) * phi
        theta[i] <- v[i]^k / ( v[i]^k + (1 - v[i])^k )    # Cube Law
    }

    # prior distributions
    phi ~ dgamma(0.01, 0.01)
    k ~ dnorm(0, 1 / (sigma ^ 2))       # vaguely informative prior
    sigma ~ dnorm(0, 1/10000) I(0,)     # uninformative prior, positive
}

The results were interesting. I used the Wikipedia data for Federal elections since 1937, and I framed the analysis from the ALP perspective (ALP TPP vote share and the ALP proportion of seats won). The mean result for k was 2.94. The posterior distribution for k had a 95% credibility interval between 2.282 and 3.606. The median of the posterior distribution was 2.939 (pretty well the same as the mean; and both were very close to the magical 3 of the Cube Law). It would appear that the Federal Parliament, in terms of the ALP share of TPP vote and seats won, operates pretty close to the Cube Law. The distribution of k, over 4 chains each with 50,000 iterations of the MCMC, was:

The files I used in this analysis can be found here.

Technical follow-up: Simon Jackman deals with the Cube Law with what looks like an equation from a classical linear regression of logits (logs of odds). The core of this regression equation is as follows:

$$\log\left(\frac{s_i}{1-s_i}\right) = \beta_0 + \beta_1 \log\left(\frac{v_i}{1-v_i}\right)$$

By way of comparison, the k in my equation is algebraically analogous to the β1 in Jackman's equation. Our results are close: I found a mean of 2.94, Jackman a mean of 3.04.
In my equation, I implicitly treat β0 as zero. Jackman found a mean of -0.11. He uses β0 to assess bias in the electoral system. Nonetheless, the density kernel I found for k (below) looks very similar to the kernel Jackman found for his β1 on page 149 of his text. [This last result may surprise a little, as my data spanned the period 1937 to 2010, while Jackman's data spanned a shorter period: 1949 to 2004.]

I suspect the pedagogic point of this example in Jackman's text was the demonstration of a particular "improper" prior density and the use of its conjugate posterior density. I suspect I could have used Jackman's approach with normal priors and posteriors. For me it was a useful learning experience looking at other approaches as a result of not knowing how to get an inverse gamma distribution working in JAGS. Nonetheless, if you know how to do the inverse gamma, please let me know.

1 comment:

1. Why not just use the dgamma function and then take its reciprocal?
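For readers who want to play with the fitted relationship, here is a tiny illustrative sketch (Kotlin used purely as a calculator; it is not part of the JAGS analysis) that converts a two-party-preferred vote share into the seat share implied by the second rearrangement above, using the posterior mean k = 2.94.

import kotlin.math.pow

// seat share implied by s = v^k / (v^k + (1 - v)^k)
fun seatShare(v: Double, k: Double): Double = v.pow(k) / (v.pow(k) + (1 - v).pow(k))

fun main() {
    val k = 2.94   // posterior mean reported above
    for (v in listOf(0.48, 0.50, 0.52, 0.55)) {
        println("vote share $v -> predicted seat share ${"%.3f".format(seatShare(v, k))}")
    }
}

With k near 3, a 52 per cent vote share maps to roughly 56 per cent of the seats, which is exactly the amplification the Cube Law is meant to capture.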
2019-07-16 03:16:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6755276918411255, "perplexity": 1078.2931440445714}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524475.48/warc/CC-MAIN-20190716015213-20190716041213-00270.warc.gz"}
https://byjus.com/question-answer/show-that-the-tangents-at-the-extremities-of-a-chords-of-a-circle-makes-equal/
Question # Show that the tangents at the extremities of a chord of a circle make equal angles with the chord. Solution ## Let $$PQ$$ be the chord of a circle with center $$O$$. Let $$AP$$ and $$AQ$$ be the tangents at points $$P$$ and $$Q$$ respectively, and let us assume that both tangents meet at point $$A$$. Join $$O$$ to $$P$$ and $$O$$ to $$Q$$, and let $$OA$$ meet $$PQ$$ at $$R$$. Here we have to prove that $$\angle APR = \angle AQR$$. Consider $$\Delta APR$$ and $$\Delta AQR$$: $$AP=AQ$$ (tangents drawn from an external point to a circle are equal), $$\angle PAR = \angle QAR$$ ($$OA$$ bisects $$\angle PAQ$$, since $$\Delta OPA \cong \Delta OQA$$ by RHS: $$OP=OQ$$ are radii, $$OA$$ is common, and each radius is perpendicular to its tangent), $$AR=AR$$ (common side). $$\therefore \Delta APR \cong \Delta AQR$$ [SAS congruence criterion]. Hence, $$\angle APR = \angle AQR$$ [CPCT].
2022-01-23 13:08:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3494907319545746, "perplexity": 499.42402289760895}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304261.85/warc/CC-MAIN-20220123111431-20220123141431-00266.warc.gz"}
https://www.zora.uzh.ch/id/eprint/122742/
# Search for a charged Higgs boson in $pp$ collisions at $\sqrt{s}$ = 8 TeV

CMS Collaboration; Khachatryan, V; Sirunyan, A; Tumasyan, A; Aarestad, T K; Amsler, C; Caminada, L; Canelli, M F; Chiochia, V; De Cosa, A; Galloni, C; Hinzmann, A; Hreus, T; Kilmister, B; Lange, C; Ngadiuba, J; Pinna, D; Robmann, P; Ronga, F J; Salerno, D; Yang, Y; et al (2015). Search for a charged Higgs boson in $pp$ collisions at $\sqrt{s}$ = 8 TeV. Journal of High Energy Physics, 2015(11):18.

## Abstract

A search for a charged Higgs boson is performed with a data sample corresponding to an integrated luminosity of 19.7 $\pm$ 0.5 inverse-femtobarns collected with the CMS detector in proton-proton collisions at $\sqrt{s}$ = 8 TeV. The charged Higgs boson is searched for in top quark decays for m(H+/-) < m(t) - m(b), and in the direct production pp -> t (b) H+/- for m(H+/-) > m(t) - m(b). The H+/- -> tau+/- nu[tau] and H+/- -> t b decay modes in the final states tau[h]+jets, mu tau[h], l+jets, and ll' (l = e, mu) are considered in the search. No signal is observed and 95\% confidence level upper limits are set on the charged Higgs boson production. A model-independent upper limit on the product branching fraction $\mathrm{\mathcal{B}}\left(\mathrm{t}\to {\mathrm{H}}^{\pm}\mathrm{b}\right)\mathrm{\mathcal{B}}\left({\mathrm{H}}^{\pm}\to {\tau}^{\pm }{\nu}_{\tau}\right)=1.2-0.15\%$ is obtained in the mass range m(H+/-) = 80-160 GeV, while the upper limit on the cross section times branching fraction $\sigma \left(\mathrm{pp}\to \mathrm{t}\left(\mathrm{b}\right){\mathrm{H}}^{\pm}\right)\mathrm{\mathcal{B}}\left({\mathrm{H}}^{\pm}\to\ {\tau}^{\pm }{\nu}_{\tau}\right)=0.38-0.025$ pb is set in the mass range m(H+/-) = 180-600 GeV. Here, cross section sigma( pp -> t (b) H+/- ) stands for the sum $\sigma \left(\mathrm{pp}\to \overline{\mathrm{t}}\left(\mathrm{b}\right){\mathrm{H}}^{+}\right)+\sigma \left(\mathrm{pp}\to \mathrm{t}\left(\overline{\mathrm{b}}\right){\mathrm{H}}^{-}\right)$. Assuming $\mathrm{\mathcal{B}}\left({\mathrm{H}}^{\pm}\to \mathrm{t}\mathrm{b}\right)=1$, an upper limit on sigma ( pp -> t (b) H+/- ) of 2.0-0.13 pb is set for m(H+/-) = 180-600 GeV. The combination of all considered decay modes and final states is used to set exclusion limits in the m(H+/-)-tan $\beta$ parameter space in different MSSM benchmark scenarios.
2022-01-18 03:29:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.843187689781189, "perplexity": 3856.3606756758795}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300722.91/warc/CC-MAIN-20220118032342-20220118062342-00604.warc.gz"}
http://gmatclub.com/forum/what-is-an-assumption-for-the-argument-if-i-do-x-i-will-140349.html
# what is an assumption for the argument if I do x, I will have y

Posted 09 Oct 2012, 03:24

What is an assumption for an argument of the form "if I do X, I will have Y"? For example: "If I learn GMAT, I will get a higher score", or "If I take part in GMAT Club, I will learn GMAT better." Do you have any general assumptions for this kind of argument? I ask this question because I see that this type of argument is popular in the OG books. Please help. Thank you.

Reply (09 Oct 2012, 03:30):

Two assumptions I can come up with are:
1. Doing X helps in achieving Y. (Taking part in GMAT Club helps to learn GMAT better.)
2. There is no other, alternate way you can achieve Y.

Reply (09 Oct 2012, 06:18):

Hi thangvietnam, these are logical connections, and the type of statement that you wrote is an implication. Let me illustrate:

IF X occurs -> then Y occurs

If this is given as a statement and we need to conclude something, what do we have to look out for?

Case 1: Y occurs. We cannot say for sure that X occurred, so Y -> X is not a safe inference.
Case 2: X occurs. Then Y must also occur; that is exactly what the implication asserts.
Case 3: X does not occur.
We cannot say whether Y occurred or not; Y may or may not occur.
Case 4: Y does not occur. In that case we can be sure that X has not occurred. Hence (NOT)Y -> (NOT)X.

Just remember this: if X -> Y, then (NOT)Y -> (NOT)X (the contrapositive). So, in CR answer choices, if you encounter such cause-effect relationships, you can use this relation, as the sketch below illustrates.

Regards, Shouvik.
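A quick way to convince yourself of that last rule is to enumerate the four truth-value combinations. The snippet below is just an illustrative Kotlin sketch, encoding "if P then Q" as material implication (!P || Q):

fun main() {
    // "if p then q" as material implication
    fun implies(p: Boolean, q: Boolean) = !p || q

    println("X     Y     X->Y  !Y->!X")
    for (x in listOf(true, false)) {
        for (y in listOf(true, false)) {
            val direct = implies(x, y)
            val contrapositive = implies(!y, !x)   // matches the direct implication in every row
            println("$x $y $direct $contrapositive")
        }
    }
}

The two columns agree in every row, while the converse Y -> X does not, which is why only the contrapositive is a safe inference.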
2017-01-17 15:55:47
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8010753393173218, "perplexity": 3734.8332225374893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00436-ip-10-171-10-70.ec2.internal.warc.gz"}
http://mathhelpforum.com/statistics/116713-help-poisson-distribution.html
# Math Help - Help with Poisson distribution

1. ## Help with Poisson distribution

I have a question; if possible, I need someone to help me solve it. I know it's easy, but it kind of makes me dizzy. A cable is manufactured such that the probability of one blemish per foot is 0.001, and the probability of more than one blemish in the same length is 0. If X is the number of blemishes per 3000 feet, find Pr(X=5). Is the solution as follows?

mu = n * p = 3000 * .001 = 3
Pr(X=5) = [ 3^5 * e^(-3) ] / 5! = .100 = 10% (Poisson distribution)

2. Yes, that seems correct. Using my tables I get $P(X \leq 5) = 0.9161$ and $P(X \leq 4) = 0.8153$, therefore $P(X = 5) = P(X \leq 5) - P(X \leq 4) = 0.1008$.

3. Thanks, man; you are quick. Can you just explain your answer, please? And which table did you use?

4. Originally Posted by craig: $P(X = 5) = P(X \leq 5) - P(X \leq 4) = 0.1008$.

Poisson is a discrete distribution, therefore the values can only be integers: 0, 1, 2, 3, etc. $P(X \leq 5)$ covers the values 0..5, and $P(X \leq 4)$ covers the values 0..4. This implies that $P(X = 5) = P(X \leq 5) - P(X \leq 4)$. The tables that I use are Poisson statistical tables. They are basically a shortcut for working out probabilities without doing the calculations. If your course does not use them, though, then I do not recommend using them, as this could have a negative effect when you come to do the calculations in the exams and you haven't a clue how to do them. From what I've seen, though, you seem to have got to grips with the distribution quite fine.
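For anyone who prefers to check the arithmetic without tables, the probability mass function e^(-mu) * mu^k / k! can be evaluated directly. The snippet below is only an illustrative sketch (Kotlin here, but any calculator would do):

import kotlin.math.exp
import kotlin.math.pow

// Poisson probability P(X = k) for a mean of mu
fun poissonPmf(k: Int, mu: Double): Double {
    var factorial = 1.0
    for (i in 2..k) factorial *= i            // k!
    return mu.pow(k) * exp(-mu) / factorial
}

fun main() {
    println(poissonPmf(5, 3.0))   // ~0.1008, matching the table-based answer above
}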
2015-05-06 01:02:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7287993431091309, "perplexity": 929.2431909022713}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430457655609.94/warc/CC-MAIN-20150501052055-00072-ip-10-235-10-82.ec2.internal.warc.gz"}
https://stackoverflow.com/questions/54792865/configuring-graphicspath-in-latex-for-parent-folders
# Configuring \graphicspath in LaTeX for parent folders I've been trying to use \graphicspath in LaTeX to add some figures to a document, but it doesn't seem to be able to go up to the parent directory and find the folder. For example, the main .tex file is stored in Parent/Write UP and the graphics in Parent/Graphs. The LaTeX code I'm trying to use: \graphicspath{{../Graphs/}} \begin{document} \includegraphics{anything.png} \end{document} When I build, nothing shows up for the \includegraphics, and I get errors saying that the file isn't found. When I put the Graphs folder into the Write UP folder, as Parent/Write UP/Graphs, and run this LaTeX: \graphicspath{{/Graphs/}} \begin{document} \includegraphics{anything.png} \end{document} I'm able to see all the graphics. I'm using Sublime Text v3.11, Build 3176 with MiKTeX. • Can you show your .log file? – samcarter Feb 26 at 16:01
2019-03-20 12:25:00
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9060615301132202, "perplexity": 4963.360686539933}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202326.46/warc/CC-MAIN-20190320105319-20190320131319-00267.warc.gz"}
https://ibpsonline.in/questions/IBPS-Clerk/Quantitative-Aptitude/Test-10/1036
# IBPS Clerk :: Quantitative Aptitude :: Test 10 ## Questions and Answers 1 . An item when sold at a profit of 20% yields Rs. 260 more than when sold at a loss of 20%. What is the cost price of the item? Rs. 550 Rs. 650 Rs. 750 Rs. 850 Rs. 950 2 . 5 men and 4 women together earn Rs. 517 and 3 men and 6 women together earn Rs. 483. Then what is the wage of 8 men and 8 women? Rs. 864 Rs. 884 Rs. 904 Rs. 924 Rs. 942 3 . When 40 is added to 40% of a number, the result thus obtained is 40% of 325. What is the number? 200 225 250 275 300 4 . If $(84)^2$ is added to the square of a number, the answer so obtained is 10900. What is the number? 62 64 66 68 70 5 . What is the average of the following set of scores? 189, 276, 312, 447, 581, 613, 774 452 454 456 458 462 6 . When 36 is added to a number, the number becomes $7\over 4$ of itself. What is the number? 42 44 48 52 56 7 . $1088 \over ?$ = $? \over 833$ 952 916 884 842 804 8 . Which of the following is the largest fraction? $3 \over 7$ $4 \over 9$ $5 \over 11$ $10 \over 17$ $11 \over 19$ 9 . What should come in place of the question mark (?) in the following number series: 4 5 19 82 377 ? 1904 1928 1942 1956 1966 10 . $3 \over 7$ of a number exceeds its one-fourth by 10. What is the number? 42 56 70 84 28
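As an illustration of the arithmetic behind these questions, here is a short check of question 1 (my own sketch, not part of the quiz page): selling at a 20% profit brings in 0.4 times the cost price more than selling at a 20% loss.

```python
from sympy import Rational, symbols, Eq, solve

C = symbols('C', positive=True)          # cost price in rupees
# (price at 20% profit) - (price at 20% loss) = 260
equation = Eq(Rational(120, 100)*C - Rational(80, 100)*C, 260)
print(solve(equation, C))                # [650]  -> Rs. 650
```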
2019-06-24 21:33:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7222688794136047, "perplexity": 1630.7144880091823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999740.32/warc/CC-MAIN-20190624211359-20190624233359-00040.warc.gz"}
https://clouddocs.f5.com/cloud/openstack/v1/lbaas/capacity-based-scaleout.html
# Capacity-Based Scale Out

When using differentiated service environments, you can configure capacity metrics for the F5 Agent for OpenStack Neutron to provide scale out across multiple BIG-IP device groups.

## Prerequisites

• F5 Agent installed on all hosts.
• One (1) F5 OpenStack service provider driver instance installed on the Neutron controller for each of your custom service environments.

## Caveats

• All hosts running the F5 Integration for OpenStack Neutron LBaaS must use the same Neutron database.
• F5 does not support the use of multiple F5 Agent instances on the same host, in the same service environment, to manage a single BIG-IP device or cluster. When using multiple F5 Agent instances to manage a single BIG-IP device/cluster, each Agent must run in a different service environment.

## Configuration

Edit the following items in the F5 Agent configuration file.

1. Set the desired environment_group_number in the Environment Settings section, for example: environment_group_number = 1

2. Provide the iControl endpoint and login credentials for one (1) of the BIG-IP devices in the device group, for example: icontrol_hostname = 1.2.3.4

3. Define the capacity score metrics via capacity_policy in the Environment Settings section, for example: capacity_policy = throughput:1000000000, active_connections: 250000, route_domain_count: 512, tunnel_count: 2048. The available metrics are:

• throughput: total throughput in bps of the TMOS devices
• inbound_throughput: throughput in bps inbound to TMOS devices
• outbound_throughput: throughput in bps outbound from TMOS devices
• active_connections: number of concurrent active connections on a TMOS device
• tenant_count: number of tenants associated with a TMOS device
• node_count: number of nodes provisioned on a TMOS device
• route_domain_count: number of route domains on a TMOS device
• vlan_count: number of VLANs on a TMOS device
• tunnel_count: number of GRE and VxLAN overlay tunnels on a TMOS device
• ssltps: the current measured SSL TPS count on a TMOS device
• clientssl_profile_count: the number of clientside SSL profiles defined

The F5 Agent environment_group_number and environment_capacity_score configuration parameters allow the F5 Driver for OpenStack LBaaSv2 to assign requests to the group that has the lowest capacity score. The environment_group_number provides a convenient way for the F5 driver to identify F5 Agent instances that are available to handle requests for any of the devices in a given group. You can configure a variety of capacity metrics via the capacity_policy configuration parameter. These metrics contribute to the overall environment_capacity_score for the environment group. Each F5 Agent instance calculates the capacity score for its group and reports the score back to the Neutron database. To find the capacity score, the F5 Agent divides the collected metric by the max specified for that metric in the capacity_policy Agent configuration parameter. An acceptable reported environment_capacity_score is between zero (0) and one (1). If an F5 Agent instance in the group reports an environment_capacity_score of one (1) or greater, the device is at capacity. As demonstrated in the figure, when the F5 Driver receives a new LBaaS request, it consults the Neutron database.
It uses the environment_group_number and the group’s last reported environment_capacity_score to assign the task to the group with the lowest utilization. The F5 Driver then selects an F5 Agent instance from the group (at random) to handle the request. If any F5 Agent instance has previously handled requests for the specified tenant, that F5 Agent instance receives the task. If that F5 Agent instance is a member of a group for which the last reported environment_capacity_score is above capacity, the F5 Driver assigns the request to an F5 Agent instance in a different group where capacity is under the limit. Danger If all F5 Agent instances in all environment groups are at capacity, LBaaS service requests will fail. LBaaS objects created in an environment that has no capacity left will show an error status. ## Use Case¶ Capacity-based scale out provides redundancy and high availability across the F5 Agent instances responsible for managing a specific service environment. The capacity score each F5 Agent instance reports back to the Neutron database helps ensure that the F5 Driver assigns tasks to the F5 Agent instance currently handling the fewest requests.
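The scoring rule described above can be sketched in a few lines of Python. This is an illustration only, not the F5 Agent's actual code: the capacity_policy values are taken from the example configuration, and combining the per-metric ratios by taking the largest one is an assumption on my part, since the page does not spell out how the ratios are aggregated.

```python
# Illustrative sketch of the capacity scoring rule (not the real F5 Agent code).
capacity_policy = {                 # maxima from the example configuration above
    "throughput": 1_000_000_000,
    "active_connections": 250_000,
    "route_domain_count": 512,
    "tunnel_count": 2048,
}

def environment_capacity_score(collected: dict) -> float:
    """Divide each collected metric by its configured max; assume the worst
    (largest) ratio is the reported score, so >= 1.0 means 'at capacity'."""
    return max(collected[name] / limit for name, limit in capacity_policy.items())

# Example: a group that is half-used on connections but nearly full on tunnels.
collected = {"throughput": 2e8, "active_connections": 125_000,
             "route_domain_count": 100, "tunnel_count": 1900}
print(environment_capacity_score(collected))   # ~0.93, still under capacity
```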
2019-03-24 11:41:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2652406692504883, "perplexity": 8075.871854583701}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203438.69/warc/CC-MAIN-20190324103739-20190324125739-00452.warc.gz"}
https://web2.0calc.com/questions/coordinates_98537
# Coordinates The circle 2x^2 = -2y^2 + 2x - 4y + 20 is inscribed inside a square which has a pair of sides parallel to the x-axis. What is the area of the square? Jan 17, 2022 #1 The area of the square is 80. I'm sorry that there is no diagram; I can't upload pictures for some reason. Jan 18, 2022 #2 2x^2 = -2y^2 + 2x - 4y + 20 $$2x^2 = -2y^2 + 2x - 4y + 20\\ x^2 = -y^2 + x - 2y + 10\\ (x^2-x)\;\;\;+(y^2 +2y) = 10\\ (x^2-x+0.25)\;\;\;+(y^2 +2y+1) = 10+0.25+1\\ (x-0.5)^2\;\;\;+(y+1)^2 = 11.25\\$$ This is a circle with a radius of $$\sqrt{11.25}$$. What will the diameter be? So what is the area of the square that circumscribes this circle? Hint: sketch it. Jan 18, 2022, edited by Melody Jan 18, 2022
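As a quick check of the hint in the second reply (this sketch is mine and is not part of the thread), SymPy confirms the completed-square form, and the area follows because the side of the circumscribing square equals the circle's diameter:

```python
from sympy import symbols, simplify, Rational

x, y = symbols('x y')
# Original equation rearranged to "= 0": 2x^2 + 2y^2 - 2x + 4y - 20 = 0
original = 2*x**2 + 2*y**2 - 2*x + 4*y - 20
completed = 2*((x - Rational(1, 2))**2 + (y + 1)**2 - Rational(45, 4))
print(simplify(original - completed))   # 0 -> same circle, so r^2 = 45/4

r_squared = Rational(45, 4)
side = 2 * r_squared**Rational(1, 2)    # square side = diameter of the circle
print(side**2)                          # 45 -> area of the square
```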
2022-07-01 05:46:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8475276231765747, "perplexity": 1678.9687952114568}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103920118.49/warc/CC-MAIN-20220701034437-20220701064437-00349.warc.gz"}
https://proofwiki.org/wiki/Definition:Canonical_Surjection
# Definition:Quotient Mapping ## Definition Let $\RR \subseteq S \times S$ be an equivalence on a set $S$. Let $\eqclass s \RR$ be the $\RR$-equivalence class of $s$. Let $S / \RR$ be the quotient set of $S$ determined by $\RR$. Then $q_\RR: S \to S / \RR$ is the quotient mapping induced by $\RR$, and is defined as: $q_\RR: S \to S / \RR: \map {q_\RR} s = \eqclass s \RR$ Effectively, we are defining a mapping on $S$ by assigning each element $s \in S$ to its equivalence class $\eqclass s \RR$. If the equivalence $\RR$ is understood, $\map {q_\RR} s$ can be written $\map q s$. ## Also known as The quotient mapping is often referred to as: the canonical surjection from $S$ to $S / \RR$; the canonical map or canonical projection from $S$ onto $S / \RR$; the natural mapping from $S$ to $S / \RR$; the natural surjection from $S$ to $S / \RR$; the classifying map or classifying mapping (as it classifies the elements of $S$ into their various equivalence classes); the projection from $S$ to $S / \RR$. Some sources denote the quotient mapping by $\natural_\RR$. This is logical, as $\natural$ is the "natural" sign in music. Some sources use $\pi$ to denote the quotient mapping. ## Examples ### Congruence Modulo $3$ Let $x \mathrel \RR y$ be the equivalence relation defined on the integers as congruence modulo $3$: $x \mathrel \RR y \iff x \equiv y \pmod 3$ defined as: $\forall x, y \in \Z: x \equiv y \pmod 3 \iff \exists k \in \Z: x - y = 3 k$ That is, if their difference $x - y$ is a multiple of $3$. The quotient set is: $\Z / \RR = \set {\eqclass 0 3, \eqclass 1 3, \eqclass 2 3}$ Hence the quotient mapping $q_\RR: \Z \to \Z / \RR$ is defined as: $\forall x \in \Z: \map {q_\RR} x = \eqclass x 3 = \set {x + 3 k: k \in \Z}$ ### Modulo $2 \pi$ as Angular Measurement Let $\RR$ denote the congruence relation modulo $2 \pi$ on the real numbers $\R$ defined as: $\forall x, y \in \R: \tuple {x, y} \in \RR \iff x$ and $y$ measure the same angle in radians. The quotient set is: $\R / \RR = \set {\eqclass \theta {2 \pi}: 0 \le \theta < 2 \pi}$ where: $\eqclass \theta {2 \pi} = \set {\theta + 2 k \pi: k \in \Z}$ Hence the quotient mapping $q_\RR: \R \to \R / \RR$ is defined as: $\forall x \in \R: \map {q_\RR} x = \eqclass x {2 \pi} = \set {x + 2 k \pi: k \in \Z}$ ## Also see • Results about quotient mappings can be found here. ## Linguistic Note The word quotient derives from the Latin word meaning how often.
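To make the congruence-modulo-3 example concrete, here is a small illustrative sketch (mine, not from ProofWiki). It represents each equivalence class by its canonical representative in {0, 1, 2} and lists a finite slice of one class:

```python
def q(x: int) -> int:
    """Quotient mapping Z -> Z/R for congruence modulo 3 (class representative)."""
    return x % 3

def class_members(x: int, k_range=range(-2, 3)):
    """A finite slice {x + 3k} of the (infinite) equivalence class of x."""
    return sorted(x + 3 * k for k in k_range)

print(q(7), q(-5), q(9))    # 1 1 0 -- 7 and -5 lie in the same class (7-(-5)=12)
print(class_members(7))     # [1, 4, 7, 10, 13]
print(q(7) == q(10))        # True: 7 - 10 is a multiple of 3
```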
2023-03-31 09:24:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9868090748786926, "perplexity": 461.3852808718656}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949598.87/warc/CC-MAIN-20230331082653-20230331112653-00468.warc.gz"}
https://math.stackexchange.com/questions/3032472/the-idea-behind-delzant-construction-of-a-toric-manifold-from-a-convex-polytope
# The idea behind Delzant construction of a toric manifold from a convex polytope I am trying to understand how to visualize a symplectic toric manifold from its moment polytope, following chapter 29.4 in "Lectures on Symplectic Geometry" by Ana Cannas da Silva: https://people.math.ethz.ch/~acannas/Papers/lsg.pdf. The explanation on page 188 is what interests me most (and what I understand the least). Here is a copy of that part of the text: We can visualize $$(M_Δ, ω_Δ, \mathbb{T}^n, \mu)$$ from $$Δ$$ as follows. First take the product $$\mathbb T^n \times Δ$$. Let $$p$$ lie in the interior of $$\mathbb T^n \times Δ$$. The tangent space at $$p$$ is $$\mathbb R^n × (\mathbb R^n)^∗$$. Define $$ω_p$$ by: $$ω_p(v, ξ) = ξ(v) = −ω_p(ξ, v),\quad \text{and}\quad ω_p(v, v′) = ω(ξ, ξ′) = 0$$ for all $$v, v′ \in \mathbb R^n$$ and $$ξ, ξ′ \in (\mathbb R^n)^∗$$. Then $$ω$$ is a closed nondegenerate $$2$$-form on the interior of $$\mathbb T^n \times Δ$$. At the corner there are directions missing in $$(\mathbb R^n)^∗$$, so $$ω$$ is a degenerate pairing. Hence, we need to eliminate the corresponding directions in $$\mathbb R^n$$. To do this, we collapse the orbits corresponding to subgroups of $$\mathbb T^n$$ generated by directions orthogonal to the annihilator of that face. My questions are: 1. Is the idea of this construction just to build any symplectic toric manifold so that $$Δ$$ is the orbit space? If this is the case, I suppose we could conclude that this manifold we built must be $$M_Δ$$ by uniqueness in the Delzant correspondence? 2. Does this construction have anything to do with the construction of $$M_Δ$$ from the proof of the theorem in chapters 29.1-29.3? That construction is briefly described in this question Delzant theorem for polyhedra, and I understand it step by step, but I don't see whether it has anything to do with the visualisation above. I also don't understand the boldface sentences in the text: 1. The definition of $$\omega$$? How can we plug in two vectors from $$\mathbb R^n$$, when the second argument of the function must be from the dual? I.e. why is it skew symmetric? 2. There are directions missing at the corners? Can you help me visualise this in the case $$n=1$$ and $$Δ=[-1,1]$$? 3. How do we see what are the orbits that we need to collapse? And why is the quotient a manifold? Is there some argument here that I'm missing? Normally, one has to be careful to obtain a manifold by passing to the quotient. In the actual proof mentioned in $$2)$$, symplectic reduction is used to justify that the quotient there is indeed a symplectic manifold. Is this where a connection to that construction comes in? Answers to any of the questions would be much appreciated! • Q1.1 Yes, for maximal effective Hamiltonian toric actions on a symplectic manifolds, the moment map polytope turns out to be in bijection with the orbit space. Q1.2 Yes, but the Delzant correspondence is established by understanding this 'visualization' of symplectic toric manifold. Q3. If must view $\mathbb{R}^n \times \Delta \subset \mathbb{R}^n \times \mathbb{R^n}^* \cong T^*\mathbb{R}^n$ equipped with its standard symplectic form $\omega((v_1, \xi_1), (v_2, \xi_2)) = \xi_1(v_2) - \xi_2(v_1)$. Dec 11, 2018 at 21:54 • Q4. I'd prefer to speak of the boundary $\mathbb{R}^n \times \partial \Delta$ instead of corners. 
The boundary of the image of the moment map exists because, on this boundary, the inward-outward direction(s) is (are) no longer in the image of the differential of the moment map; these are the missing directions. As for questions 2 and 5, you are on the right track, but they would require a detailed answer (and some time...) and not mere comments. Dec 11, 2018 at 22:01
2022-10-06 13:08:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 26, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8745497465133667, "perplexity": 211.25088050226535}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00763.warc.gz"}
https://notebook.drmaciver.com/posts/2019-04-02-16:45.html
# DRMacIver's Notebook A Boltzmann Agent with Very Bad Judgement As per the previous post, it can make sense when looking at a set of consistent propositions to consider agents as Boltzmann samplers over the set of valid consistent beliefs, with their reliability measured by the expected number of true beliefs. A thing I hadn't previously realised is that this can cause an agent that is on average reliable to be reliably wrong for some propositions. Consider a chain of propositions of the form $P_1 \implies \ldots \implies P_n$. There are exactly $n + 1$ possible consistent beliefs for this sampler (each defined by the first $P_i$ that the agent believes), so the Boltzmann generating function is $B(x) = 1 + \ldots + x^n$. Suppose $n = 10$. Some simple maths (by which I mean I used sympy) shows that this agent ends up believing $P_1$ with probability at least half only when $x \approx 2$, which leads to the expected number of propositions believed being $\approx 9$. So in order to achieve $50\%$ reliability on the base proposition we have to achieve $90\%$ overall reliability! This isn't very surprising in some sense, but it probably puts a bound on how good we can expect judgement aggregation to be in this case.
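The "simple maths" can be re-checked with a short script. This is my own sketch, not the author's sympy code, and it assumes the reading above: the consistent belief in which $k$ propositions end up believed gets Boltzmann weight $x^k$, and $P_1$ is believed only in the state $k = n$.

```python
from fractions import Fraction

n = 10
def stats(x):
    weights = [Fraction(x)**k for k in range(n + 1)]   # B(x) = sum of the weights
    total = sum(weights)
    p_first = weights[n] / total                       # P(agent believes P_1)
    expected = sum(k * w for k, w in enumerate(weights)) / total
    return float(p_first), float(expected)

print(stats(2))   # (~0.5002, ~9.005): P_1 is half-likely only at ~9/10 expected beliefs
print(stats(1))   # (~0.091, 5.0): the uniform case, for comparison
```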
2021-07-29 22:03:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8732457160949707, "perplexity": 649.4860751963503}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153897.89/warc/CC-MAIN-20210729203133-20210729233133-00236.warc.gz"}
https://mathematica.stackexchange.com/questions/94475/parallelization
# Parallelization [closed] I am new to Mathematica and have the following question regarding parallelization of the code. I have input data of the form data = Import["data.dat", "Table"] Then I create a module which executes various lines of code, and I use the module to do the following. Table[module[data[[i]]], {i, 1, 10}] This works perfectly fine; the only problem is that it takes a lot of time, close to 15 minutes, as there is a differential equation which it has to solve 10 times. Then I tried ParallelTable, but it does not work effectively. Thus I tried parallelising the code using ParallelSubmit as follows. tab = ParallelSubmit[Table[module[data[[i]]], {i, 1, 3}]] WaitAll[%] This gives me the error Part specification data[[1]] is longer than depth of object. This is just a sample of 10 points under consideration, and in the actual situation it may include 20 or more points. It would be of great help if someone could help me resolve the problem. I have made the following changes: 1) Table[ParallelSubmit[module[data[[j]]]], {j, 1, 3}] I get the error The expression j cannot be used as a part specification. ## closed as off-topic by m_goldberg, MarcoB, Oleksandr R., dr.blochwave, Öskå Sep 13 '15 at 9:10 This question appears to be off-topic. The users who voted to close gave this specific reason: • "This question cannot be answered without additional information. Questions on problems in code must describe the specific problem and include valid code to reproduce it. Any data used for programming examples should be embedded in the question or code to generate the (fake) data must be included." – m_goldberg, Oleksandr R., dr.blochwave, Öskå • Greetings! Make the most of Mma.SE and take the tour. Help us to help you, write an excellent question. Edit if improvable, show due diligence, give brief context, include minimum working examples of code and data in formatted form. As you receive give back, vote and answer questions, keep the site useful, be kind, correct mistakes and share what you have learned. – rhermans Sep 11 '15 at 16:43 • I don't know about the error, but you should review the docs for ParallelSubmit. ParallelSubmit should be applied to the expression inside the table, not the table itself. There is an example of exactly that in the docs. – george2079 Sep 11 '15 at 16:45 • Can you please expand on "I tried ParallelTable but it does not work effectively"? Share the code you tried; why didn't it work? Did you DistributeDefinitions? How big is the data? There is an overhead moving the data to other kernels. Are the other kernels remote or local? – rhermans Sep 11 '15 at 16:46 • I have included it in the function and get the following error: The expression i cannot be used as a part specification – Abhishek Subramanian Sep 11 '15 at 17:30 • re: edit.. you must give {j} as the first arg to ParallelSubmit (as in the docs). That said, I think you should go back and figure out what was wrong with your ParallelTable implementation. (I'd use ParallelSubmit only if your module calls are wildly unbalanced.) – george2079 Sep 11 '15 at 18:01
2019-11-18 04:38:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19782836735248566, "perplexity": 1103.21400571178}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669431.13/warc/CC-MAIN-20191118030116-20191118054116-00264.warc.gz"}
https://kbase.us/services/ws/docs/devtypedobjects.html
# Developing typed object definitions Providing a comprehensive guide for developing type specifications (typespecs) for typed objects (TOs) in the Workspace Service (WSS) is far beyond the scope of this documentation, but provided here are some general guidelines and hints. ## TO size and composition • Generally speaking, the approach of translating each row from a traditional RDBMS into a single TO is very wrong. The major advantage of TOs is that they allow you to compose various related data into a single object. • It is faster to save and load a single large TO as opposed to many small TOs. Many small objects will also slow the WSS overall and increase the WSS index size. • The get_objects2 method allows retrieving subsets of a TO from the WSS to provide the equivalent of retrieving a few small TOs rather than one large TO and then manually extracting the small TOs. • TOs are currently limited to 1GB by the WSS. • When contemplating TO design, consider how user interfaces might display workspaces and objects. Note that workspaces containing thousands of objects quickly become untenable. • Objects which consist mostly of very long strings are usually much less useful when stored in the workspace than more structured data objects. Objects like this (for example DNA sequence or raw FASTA files) might be candidates for storage in Shock. ## Very large objects • Although in general, one larger object is better than many smaller objects, when objects are in the hundreds of megabytes they become less useful and more difficult to deal with. • One cannot realistically fetch a very large object (VLO) to a webpage. • Even when using workspace functions to extract subdata from a VLO, the VLO must still be loaded from disk into the workspace service, which could take significant time. • VLOs are slow to transfer in general. • VLOs take a large amount of memory. • VLOs can often take 3-20 times the size of the serialized object to represent in memory. • Objects with large numbers of mappings or structures can use large amounts of resources due to repeated keys. Consider using tuples instead of mappings or structures. ## Annotations ### TO to TO references (@id ws) • TO to TO references using the @id ws annotation [see ID annotations] greatly enhance the utility of typed objects. • For example, linking a data set TO to the genome TO that the data set references enforces and records the relationship in the workspace database. • If a TO to be saved references a TO that doesn't exist, the error is caught prior to saving the TO in the workspace. • If you have access to a TO, you can always access the TOs referenced by that TO, regardless of the workspace in which they're stored. • However, there is a performance cost - each reference must be checked for existence in the database. For tens or even hundreds of references this cost is not high, but thousands or more unique references will likely slow saving of the TO. ### @optional • Avoid the @optional annotation whenever possible. In some cases its use is required, but every @optional annotation in a typespec makes the associated TOs more difficult to use for downstream programmers. If a typespec has no @optional annotations, a programmer knows exactly what data the TO contains and so the code to manipulate it can be simpler and therefore less buggy, easier to maintain, and less work to test.
2022-01-20 11:40:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4207606613636017, "perplexity": 2314.3025950735573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301737.47/warc/CC-MAIN-20220120100127-20220120130127-00077.warc.gz"}
http://spot.pcc.edu/math/orcca/knowl/exercise-1063.html
###### Exercise 17 Use the associative property of multiplication to write an equivalent expression to $4\left(5m\right)$.
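One acceptable answer, spelled out here for the reader (this worked line is my addition; the page itself leaves the exercise open):

$$4\left(5m\right)=\left(4\cdot 5\right)m=20m$$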
2018-10-17 18:11:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9476216435432434, "perplexity": 2565.308695002809}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511206.38/warc/CC-MAIN-20181017174543-20181017200043-00252.warc.gz"}
http://spamdestructor.com/how-to/propagation-of-error-for-log.php
# Propagation Of Error For Log

Starting with a simple equation: $x = a \times \dfrac{b}{c} \tag{15}$ where $x$ is the desired result with a given standard deviation, and $a$, $b$, and $c$ are experimental variables, each with its own standard deviation. If we know the uncertainty of the radius to be 5%, the uncertainty is defined as $(dx/x) = (\Delta x/x) = 5\% = 0.05$. See Ku (1966) for guidance on what constitutes sufficient data. However, in complicated scenarios the results may differ because of unsuspected covariances, or because of errors in which the reported value of a measurement is altered rather than the measurements themselves (usually a result of mis-specification). The term "average deviation" is a number that is the measure of the dispersion of the data set. The determinate error equation may be developed even in the early planning stages of the experiment, before collecting any data, and then tested with trial values of data. That is, the more data you average, the better is the mean. When the underlying values are correlated across a population, the uncertainties in the group averages will also be correlated.

Derivation of Exact Formula. Suppose a certain experiment requires multiple instruments to carry out. Now we are ready to use calculus to obtain an unknown uncertainty of another variable. In the volume example, the next step is to take the derivative of the equation to obtain $(dV/dr) = (\Delta V/\Delta r) = 2cr$. Plugging the 5% value in for $\Delta r/r$ we get $(\Delta V/V) = 2(0.05) = 0.1 = 10\%$, so the uncertainty of the volume is 10%. This method can be used in chemistry as well.

Square Terms: $\left(\dfrac{\delta{x}}{\delta{a}}\right)^2(da)^2,\; \left(\dfrac{\delta{x}}{\delta{b}}\right)^2(db)^2, \;\left(\dfrac{\delta{x}}{\delta{c}}\right)^2(dc)^2\tag{4}$ Cross Terms: $\left(\dfrac{\delta{x}}{da}\right)\left(\dfrac{\delta{x}}{db}\right)da\;db,\;\left(\dfrac{\delta{x}}{da}\right)\left(\dfrac{\delta{x}}{dc}\right)da\;dc,\;\left(\dfrac{\delta{x}}{db}\right)\left(\dfrac{\delta{x}}{dc}\right)db\;dc\tag{5}$ Square terms, due to the nature of squaring, are always positive, and therefore never cancel each other out. Assuming the cross terms do cancel out, then the second step - summing from $i = 1$ to $i = N$, where $N$ is the total number of measurements - would be: $\sum{(dx_i)^2}=\left(\dfrac{\delta{x}}{\delta{a}}\right)^2\sum(da_i)^2 + \left(\dfrac{\delta{x}}{\delta{b}}\right)^2\sum(db_i)^2\tag{6}$ The equation for propagation of standard deviations is easily obtained by rewriting the determinate error equation; clearly any constant factor placed before all of the standard deviations "goes along for the ride" in this derivation. The result is most simply expressed using summation notation, designating each measurement by $Q_i$ and its fractional error by $f_i$. Equation 9 shows a direct statistical relationship between multiple variables and their standard deviations.

Note: addition, subtraction, and logarithmic equations lead to an absolute standard deviation, while multiplication, division, exponential, and anti-logarithmic equations lead to relative standard deviations. Logarithms do not have units: $\ln(x \pm \Delta x)=\ln(x)\pm \frac{\Delta x}{x}$, so $\ln((95 \pm 5)~\mathrm{mm})=\ln(95~\mathrm{mm})\pm \frac{5~\mathrm{mm}}{95~\mathrm{mm}} = 4.554 \pm 0.053$. This linearised rule is only valid for situations where the error $\Delta x$ of the argument $x$ you're feeding to the logarithm is much smaller than $x$ itself; consider, for example, a case where $x=1$ and $\Delta x=1/2$. When that condition fails you have stumbled out of the regime where the linear approximation applies, and you would be assigning a symmetric distribution of errors in a situation where that doesn't even make sense. Uncertainty never decreases with calculations, only with better measurements.

SOLUTION: Since Beer's Law deals with multiplication/division, we'll use Equation 11: $\dfrac{\sigma_{\epsilon}}{\epsilon}={\sqrt{\left(\dfrac{0.000008}{0.172807}\right)^2+\left(\dfrac{0.1}{1.0}\right)^2+\left(\dfrac{0.3}{13.7}\right)^2}}$ $\dfrac{\sigma_{\epsilon}}{\epsilon}=0.10237$ As stated in the note above, Equation 11 yields a relative standard deviation. Accounting for significant figures, the final answer would be $\epsilon = 0.013 \pm 0.001~\mathrm{L\,mol^{-1}\,cm^{-1}}$.

Further reading: Ku, Harry H. (1966), "Notes on the Use of Propagation of Error Formulas", Journal of Research of the National Bureau of Standards; Bevington, Philip R.; Robinson, D. Keith (2002), Data Reduction and Error Analysis for the Physical Sciences (3rd ed.), McGraw-Hill, ISBN 0-07-119926-8; Meyer, Stuart L. (1975), Data Analysis for Scientists and Engineers, Wiley, ISBN 0-471-59995-6; Goodman, Leo (1960), "On the Exact Variance of Products"; GUM, Guide to the Expression of Uncertainty in Measurement; EPFL, An Introduction to Error Propagation, Derivation, Meaning and Examples of Cy = Fx Cx Fx'; the uncertainties package, a program/library for transparently propagating uncertainties.
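The logarithm rule quoted above is easy to check numerically; the following sketch is my own addition (the 95 mm and 5 mm figures come from the example in the text), not code from the original page:

```python
from math import log

# For y = ln(x), the propagated uncertainty is dy = dx / x (valid when dx << x).
x, dx = 95.0, 5.0
y, dy = log(x), dx / x
print(f"ln(({x:.0f} +/- {dx:.0f}) mm) = {y:.3f} +/- {dy:.3f}")
# ln((95 +/- 5) mm) = 4.554 +/- 0.053
```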
2018-10-20 11:25:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8611117005348206, "perplexity": 867.2569243253453}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512693.40/warc/CC-MAIN-20181020101001-20181020122501-00510.warc.gz"}
https://math.stackexchange.com/questions/1647058/how-to-show-that-this-modification-of-thomaes-function-is-riemann-integrable
# How to show that this modification of Thomae's function is Riemann integrable I am dealing with the function $f(x)=\begin{cases} \frac{1}{n} & \text{if }\frac{1}{n+1}<x<\frac{1}{n},\:n\in\mathbb{N},\\ 0 & \text{ otherwise.} \end{cases}$ I want to show it is Riemann integrable without using the fact that every bounded function containing at most countably many discontinuities is Riemann integrable. I have considered using the fact that the composition of a function $g$ that is Riemann integrable on $[a,b]$ with $g([a,b])\subset[c,d]$ and $h$ that is continuous on $[c,d]$ is Riemann integrable, ie. $h\circ g\in\mathcal{R}$. I think using Thomae's function as the $g$ is in the solution by I couldn't find a suitable $h$. Other options include the squeeze theorem for integrals (although I am told the solution for this should be simple, and I find squeeze theorem is often lengthy), as well as linearity of the integral, the fact that the product of two Riemann integrable functions is Riemann integrable, and the additivity of the Riemann integrable. • Note that the restriction of $f$ to $[\delta , 1]$ is Riemann integrable for all $\delta$. This fact together with the boundedness of $f$ implies that $f$ is Riemann integrable on $[0,1]$. – user99914 Feb 9 '16 at 3:52 • @JohnMa I understand that restricting the function to that domain makes it a step function on that interval and thus integrable, but how does the second part follow? – jofl Feb 9 '16 at 4:08 Theorem: Let $f : [a, b]\to \mathbb R$ be a bounded function. If for all $\delta >0$, $f$ is Riemann integrable on $[a+\delta , b]$, then $f$ is Riemann integrable on $[a,b]$. To show that, let $\epsilon >0$. Then let $\delta < \epsilon/4C$, where $C$ is the bound of $f$. Since $f$ is integrable on $[a+\delta, b]$, there is a partition $P = \{\delta = x_1< x_2<\cdots< x_n = b\}$ of $[a+\delta, b]$ so that $$U(P, f|_{[a+\delta, b]}) - L(P, f|_{[a+\delta, b]}) < \epsilon/2.$$ Let $\tilde P$ be the partition of $[a, b]$ given by $$a = x_0< \delta = x_1 < x_2 <\cdots < x_n = b.$$ Then $$\begin{split} U(P, f) - L(P, f) &= (M-m) \delta + U(P, f|_{[a+\delta, b]}) - L(P, f|_{[a+\delta, b]})\\ &< 2C\delta + \epsilon /2 \\ &< \epsilon/2 + \epsilon/2 = \epsilon. \end{split}$$ Since $\epsilon$ is arbitrary, $f$ is integrable. Remark: One can indeed show that $$\int_a^b f (x) dx = \lim_{\delta \to 0} \int_{a+\delta} ^b f(x)dx.$$ It's kind of groady but you can proceed directly by constructing partitions. Let $$P_n=\{0,{1\over n},{1\over n-1},\ldots,1/2,1\}$$ The upper and lower sums, if we ignore endpoints of intervals, are $$\left({1\over n}-0\right)\cdot{1\over n}+\left({1\over n-1}-{1\over n}\right)\cdot {1\over n-1} + \ldots + \left(1-{1\over 2}\right)\cdot 1$$ and $$\left({1\over n}-0\right)\cdot 0+\left({1\over n-1}-{1\over n}\right)\cdot {1\over n-1} + \ldots + \left(1-{1\over 2}\right)\cdot 1$$ respectively. They differ only in the first term, $1/n^2$, and so these "ignoring endpoints" upper and lower sums converge to each other as $n\to\infty$, and we have integrability under Baby Rudin 6.6. Of course, if you religiously follow Baby Rudin, you can't ignore the endpoints of the intervals in partitions, and since the function as you have defined it is $0$ on all of them, the above proof doesn't work. 
You can fix it, however, by having a modified version of $P_n$ as follows: $$P'_n=\{0,{1\over n}-2^{-n},{1\over n}+2^{-n},{1\over n-1}-2^{-n},\ldots,1/2-2^{-n}, 1/2+2^{-n},1-2^{-n},1\}$$ That way the upper and lower sums differ from what is given above by at most $n2^{-n+1}$, the length of all the funny intervals $[1/i-2^{-n},1/i+2^{-n}]$, and so the upper and lower sums still converge to each other as $n\to\infty$. Consider a partition $P$ with subintervals of the form $$[1/(n+1), 1/(n+1) + \epsilon/2^{n}],\,\,[1/(n+1) + \epsilon/2^{n},1/n - \epsilon/2^{n}],\,\, [1/n - \epsilon/2^{n},1/n],$$ for $n = 1, 2, \ldots, m-1,$ along with $[0,\epsilon/2^{m}]$, $[\epsilon/2^{m}, 1/m - \epsilon/2^{m}]$, and $[1/m - \epsilon/2^{m},1/m]$ Then the upper sum is $$U(P,f) = \sum_{n=1}^{m} \frac{1}{n}\left(\frac{1}{n}-\frac{1}{n+1}\right) + \frac{1}{m^2} \\ = \sum_{n=1}^{m} \frac{1}{n^2(n+1)} + \frac{1}{m^2},$$ and the lower sum is $$L(P,f) = \sum_{n=1}^{m} \frac{1}{n}\left(\frac{1}{n}-\frac{1}{n+1} -\frac{2\epsilon}{2^n}\right) + \frac{1}{m}\left(\frac{1}{m} - \frac{2\epsilon}{2^m}\right) \\= \sum_{n=1}^{m} \frac{1}{n^2(n+1)} + \frac{1}{m^2} - \epsilon\sum_{n=1}^{m}\frac{1}{2^{n-1}}.$$ The difference satisfies $$U(P,f) -L(P,f) < 2\epsilon.$$ Hence, $f$ is integrable by the Riemann criterion.
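As a numerical complement to the proofs above (my own sketch, not part of the thread), midpoint Riemann sums of $f$ settle down near $\sum_{n\ge 1} \frac{1}{n^2(n+1)} = \frac{\pi^2}{6} - 1 \approx 0.6449$, consistent with $f$ being Riemann integrable on $[0,1]$:

```python
from math import floor, pi

def f(x: float) -> float:
    """f(x) = 1/n on the open interval (1/(n+1), 1/n), and 0 otherwise."""
    if x <= 0 or x >= 1:
        return 0.0
    n = floor(1 / x)
    return 1 / n if 1 / (n + 1) < x < 1 / n else 0.0

def midpoint_sum(num: int) -> float:
    h = 1 / num
    return sum(f((i + 0.5) * h) for i in range(num)) * h

print(midpoint_sum(10_000))   # about 0.6449
print(pi**2 / 6 - 1)          # 0.6449340668...
```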
2019-07-21 14:58:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9686352014541626, "perplexity": 80.17790019227029}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527048.80/warc/CC-MAIN-20190721144008-20190721170008-00410.warc.gz"}
https://math.stackexchange.com/questions/1671709/inequality-between-the-norm-of-derivative-and-the-derivative-of-norm
# Inequality between the norm of derivative and the derivative of norm Let $x(t)=[x_1(t)~x_2(t)~\cdots ~x_n(t)]^T$, where each function $x_i:\mathbb R\rightarrow \mathbb R$ is differentiable; then it can be shown that when $p=2$, $\|\frac{d}{dt}x(t)\|_p\geq \frac{d}{dt}\|x(t)\|_p$. I wonder if this inequality holds when $p\neq 2$, and if it does, show why. • Please give more context... Is it $L^p(\mathbb R)$ or $L^p(\Omega)$ for a particular $\Omega$? Also, what is $x$? Do you mean $x(t)\in L^p(\Omega)$ for every $t$? And if yes, what are the assumptions on $x$? – Renart Feb 25 '16 at 12:49 • As it is stated, this inequality is always true: the left hand side is something which is nonnegative and the right hand side is the derivative of a constant, hence zero. If you mean by $\|\cdot\|_p$ the $p$-norm for vectors in $\mathbb R^n$, and if $x:\mathbb R\to\mathbb R^n$ is a differentiable curve, then your question gets a meaning. Is this what you mean? – frog Feb 25 '16 at 12:57 By the reverse triangle inequality, $\big|\|x(t+h)\|-\|x(t)\|\big|\leq\|x(t+h)-x(t)\|$ for any norm on $\mathbb R^n$ (not just the $2$-norm), so dividing by $|h|$ gives $$\left|\frac{\|x(t+h)\|-\|x(t)\|}{h}\right|≤\left\|\frac{x(t+h)-x(t)}{h}\right\|$$ From this you get that whenever $\partial_t x(t)$ and $\partial_t \|x(t)\|$ exist in a normed vector space, then you must have $$\partial_t \|x(t)\|≤\big|\partial_t\|x(t)\|\big|≤\|\partial_t x(t)\|$$
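The inequality is easy to probe numerically for other values of $p$. The following sketch is mine (the test curve $x(t)=(\cos t,\sin t,t^2)$ and the choice $p=3$ are arbitrary); it compares a finite-difference estimate of $\frac{d}{dt}\|x(t)\|_p$ with $\|x'(t)\|_p$:

```python
import numpy as np

p = 3
x  = lambda t: np.array([np.cos(t), np.sin(t), t**2])
dx = lambda t: np.array([-np.sin(t), np.cos(t), 2*t])      # exact derivative of x
norm = lambda v: np.sum(np.abs(v)**p) ** (1 / p)           # the p-norm, here p = 3

h = 1e-6
for t in (0.3, 1.0, 2.5):
    d_norm = (norm(x(t + h)) - norm(x(t - h))) / (2 * h)   # d/dt ||x(t)||_p
    print(f"t={t}: {d_norm:.4f} <= {norm(dx(t)):.4f}")
```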
2019-10-23 14:10:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9537982940673828, "perplexity": 127.44320782475788}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987833766.94/warc/CC-MAIN-20191023122219-20191023145719-00215.warc.gz"}
https://math.stackexchange.com/questions/2363079/gcd-of-two-polynomials-without-euclidean-algorithm
# GCD of two polynomials without Euclidean algorithm The book gives this example of greatest common divisor: The quadratic polynomials $2x^{2}+7x+3$ and $6x^{2}+x-1$ in $\mathbb{Q}[x]$ have GCD $x+\frac{1}{2}$ since $$2x^{2}+7x+3=(2x+1)(x+3)=2\left(x+\frac{1}{2}\right)(x+3),\\6x^{2}+x-1=(2x+1)(3x-1)=2\left(x+\frac{1}{2}\right)(3x-1).$$ I understand that $2x+1$ is a common divisor, and we divide out $2$ to make it monic. I understand that $\left(x+\frac{1}{2}\right)$ is a common divisor because you can multiply it by the polynomials $2(x+3)$ and $2(3x-1)$ to get the two original polynomials. My questions are: 1) How did they know $\left(x+\frac{1}{2}\right)$ would be divisible by all the other common divisors? I started by saying let $p(x)$ be another common divisor, but I don't know why $p(x)$ would have to divide $\left(x+\frac{1}{2}\right)$. 2) These two polynomials were easy to factor by hand. What if we had polynomials that weren't so easy to factor? How would you find a common divisor to start with? (Note: I am self-learning. This is from the book Groups, Rings, and Fields by Wallace. I say "without Euclidean algorithm" because I tried looking up stuff about this but got answers saying to use the Euclidean algorithm, which is covered in the next section of the book.) • I think you have what you need. Polynomials have unique factorization over most of the common rings (UFDs). If the degree of the polynomials is small and you know how to factor them, find the appropriate factorizations and take the common factors. If you have something too difficult to factor by hand, then the Euclidean algorithm, to be covered in the next chapter, will be a tool. – Doug M Jul 18 '17 at 20:00 • i.e. $\,c\mid f,g\iff c\mid (f,g) =: d.\ \$ All the linear factors are nonassociate primes, so it is clear that the only prime common factor is $\,x+1/2\ \$ – Bill Dubuque Jul 18 '17 at 21:37 • Oh okay, I didn't know that. Thank you very much. – anonanon444 Jul 19 '17 at 12:30 1) By definition, the GCD includes all common factors. If the factorizations of the polynomials into first-degree binomials are available, it is trivial to find it. 2) If you may not use Euclid, then there are special methods for the factorization of certain polynomials (f.i. https://en.wikipedia.org/wiki/Factorization_of_polynomials#Factoring_univariate_polynomials_over_the_integers). But in the general case, polynomial factorization can only be achieved numerically with root finders. The wonderful thing with Euclid is that it doesn't require any factorization to deliver the GCD. Learn the Euclidean algorithm for polynomials. If you can do that fairly well, you will be ahead of the game. It is necessary to allow rational coefficients, not just integers.
$$\left( 2 x^{2} + 7 x + 3 \right)$$ $$\left( 6 x^{2} + x - 1 \right)$$ $$\left( 2 x^{2} + 7 x + 3 \right) = \left( 6 x^{2} + x - 1 \right) \cdot \color{magenta}{ \left( \frac{ 1}{3 } \right) } + \left( \frac{ 20 x + 10 }{ 3 } \right)$$ $$\left( 6 x^{2} + x - 1 \right) = \left( \frac{ 20 x + 10 }{ 3 } \right) \cdot \color{magenta}{ \left( \frac{ 9 x - 3 }{ 10 } \right) } + \left( 0 \right)$$ $$\frac{ 0}{1}$$ $$\frac{ 1}{0}$$ $$\color{magenta}{ \left( \frac{ 1}{3 } \right) } \Longrightarrow \Longrightarrow \frac{ \left( \frac{ 1}{3 } \right) }{ \left( 1 \right) }$$ $$\color{magenta}{ \left( \frac{ 9 x - 3 }{ 10 } \right) } \Longrightarrow \Longrightarrow \frac{ \left( \frac{ 3 x + 9 }{ 10 } \right) }{ \left( \frac{ 9 x - 3 }{ 10 } \right) }$$ $$\left( x + 3 \right) \left( \frac{ 3}{10 } \right) - \left( 3 x - 1 \right) \left( \frac{ 1}{10 } \right) = \left( 1 \right)$$ $$\left( 2 x^{2} + 7 x + 3 \right) = \left( x + 3 \right) \cdot \color{magenta}{ \left( 2 x + 1 \right) } + \left( 0 \right)$$ $$\left( 6 x^{2} + x - 1 \right) = \left( 3 x - 1 \right) \cdot \color{magenta}{ \left( 2 x + 1 \right) } + \left( 0 \right)$$ $$\mbox{GCD} = \color{magenta}{ \left( 2 x + 1 \right) }$$ $$\left( 2 x^{2} + 7 x + 3 \right) \left( \frac{ 3}{10 } \right) - \left( 6 x^{2} + x - 1 \right) \left( \frac{ 1}{10 } \right) = \left( 2 x + 1 \right)$$ • Thanks for the help! – anonanon444 Jul 19 '17 at 12:31 You don't need to worry about "monic" polynomials until the end. In fact, you can use any (non zero) rational multiple of the polynomials that are convenient. $$\color{red}{(6x^2 + x - 1)} = 3\color{red}{(2x^2 + 7x + 3)} - \color{red}{(20x + 10)}$$ $$\color{red}{(6x^2 + x - 1)} = 3\color{red}{(2x^2 + 7x + 3)} - 10\color{red}{(2x + 1)}$$ $$\color{red}{(2x^2 + 7x + 3)} = (x + 3)\color{red}{(2x+1)}$$ So the gcd is $$2x+1 = 2\left(x + \dfrac 12 \right)$$.
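For question 2, when hand factoring is impractical, a computer algebra system gives the GCD directly. This SymPy check is my addition and is not part of the answers above; the monic normalisation recovers the book's $x+\frac{1}{2}$:

```python
from sympy import symbols, gcd, Poly

x = symbols('x')
f = 2*x**2 + 7*x + 3
g = 6*x**2 + x - 1

d = gcd(f, g)
print(d)                              # 2*x + 1
print(Poly(d, x).monic().as_expr())   # x + 1/2, the monic GCD over Q
```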
2019-06-16 02:53:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7999355792999268, "perplexity": 202.77718803070303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627997533.62/warc/CC-MAIN-20190616022644-20190616044644-00335.warc.gz"}
https://www.maths.usyd.edu.au/u/UG/JM/MATH1111/Quizzes/quiz9.html
MATH1111 Quizzes
The Derivative Function Quiz
Web resources available

Questions

This quiz tests the work covered in Lecture 9 and corresponds to Section 2.3 of the textbook Calculus: Single and Multivariable (Hughes-Hallett, Gleason, McCallum et al.). There is a web quiz at Wiley. It is the same quiz for each section in the chapter, and you should attempt it now. Be aware that it doesn't seem to accept the written answers, so you will have to check whether your answers are correct when the correct answers are printed. Questions 11 and 12 were illegible on 14/11/05.

The Learning Hub (Mathematics) has a booklet on differentiation, Introduction to Differential Calculus, which covers all of the topics for the next few lectures. In particular, Chapters 2 and 3.1 of the booklet cover this topic. The site http://www.math.uncc.edu/~bjwichno/fall2004-math1242-006/Review_Calc_I/lec_deriv.htm covers some of the material in Sections 2.1-2.3. There is an applet that lets you sketch the derivative of a given function at http://www.ltcconline.net/greenl/java/Other/DerivativeGraph/classes/DerivativeGraph.html

After you have mastered the topic you might like to try the tests at http://www.univie.ac.at/future.media/moe/tests/diff1/defabl.html and http://www.univie.ac.at/future.media/moe/tests/diff1/poldiff.html and the puzzle at http://www.univie.ac.at/future.media/moe/tests/diff1/ablerkennen.html

Which of the following is the derivative of $f(x)=3x+2$? (Exactly one option must be correct.)
a) $f'(x)=2$.
b) $f'(x)=3$.
c) $f'(x)=5$.
d) There is not enough information to answer the question.
Choice (a) is incorrect. Try again: if $f(x)=mx+b$ then the derivative is $m$.
Choice (b) is correct! Since the derivative of $f(x)=mx+b$ is $m$, $f'(x)=3$.
Choice (c) is incorrect. Try again: if $f(x)=mx+b$ then the derivative is $m$.
Choice (d) is incorrect. Try again: if $f(x)=mx+b$ then the derivative is $m$.

Which of the following graphs of $y=f(x)$ satisfies the following three conditions:
• $f'(x)<0$ for $x<-1$
• $f'(x)>0$ for $-1<x<2$
• $f'(x)=0$ for $x>2$.
(Exactly one option must be correct.)
a) [graph] b) [graph] c) [graph] d) [graph]
Choice (a) is correct!
Choice (b) is incorrect. Try again: this graph is of a function that is negative, positive and zero in the required regions, not of a function whose derivative satisfies these conditions.
Choice (c) is incorrect. Try again: your graph has the wrong sign for its derivative.
Choice (d) is incorrect. Try again: you may need to review what having a positive or negative derivative means.

Consider the graph below. Which of the following is the matching derivative function? (Exactly one option must be correct.)
a) [graph] b) [graph] c) [graph] d) [graph]
Choice (a) is correct! The graph has turning points at $x=1$ and $x=3$, so the graph of the derivative must cut the axis at these points. The graph moves from positive gradient to negative gradient and back to positive, so the graph of the derivative is positive, then negative, and then positive again.
Choice (b) is incorrect. Try again: this graph has the correct zeros but is not positive where the graph has positive gradient, etc.
Choice (c) is incorrect. Try again: you do not have the zeros in the correct spots.
Choice (d) is incorrect. Try again: this graph does not have the correct zeros and is not positive where the graph has positive gradient, etc.

Which of the statements below correctly matches the functions with their derivatives? (Exactly one option must be correct.)
a) C is the graph of the derivative of B; F is the graph of the derivative of C; D is the graph of the derivative of A; E is the graph of the derivative of D
b) A is the graph of the derivative of C; C is the graph of the derivative of F; B is the graph of the derivative of D; D is the graph of the derivative of E
c) C is the graph of the derivative of A; E is the graph of the derivative of C; D is the graph of the derivative of B; F is the graph of the derivative of D
d) C is the graph of the derivative of A; F is the graph of the derivative of C; D is the graph of the derivative of B; E is the graph of the derivative of D
Choice (a) is incorrect. Try again, looking carefully at the gradients of graphs A and B before the first turning point.
Choice (b) is incorrect. Try again: graph A represents a function with two turning points, so its derivative must have 2 zeros. Look at the graphs again.
Choice (c) is incorrect. Try again: graph C has negative gradient and then positive gradient, so its derivative cannot be graph E.
Choice (d) is correct! Graph A has positive gradient to almost 1, negative gradient to a bit more than 3, and then positive gradient. This matches graph C. Graph C has negative gradient to a bit more than 2 and then positive gradient. This matches graph F. Similarly for the other 3 graphs.
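If you want to check an answer such as the first question numerically, a central-difference estimate of the derivative is enough. The short Python sketch below is my own illustration and is not part of the quiz; the test points and step size are arbitrary choices.

    def numerical_derivative(f, x, h=1e-6):
        """Central-difference estimate of f'(x); h is a small step size."""
        return (f(x + h) - f(x - h)) / (2 * h)

    f = lambda x: 3 * x + 2          # the first question's f(x) = 3x + 2

    # For a straight line f(x) = mx + b the derivative is the slope m,
    # so the estimate should come out (essentially) 3 at every test point.
    for x in (-2.0, 0.0, 5.0):
        print(x, numerical_derivative(f, x))

The same idea, sampling the sign of the estimate on a grid of x values, can be used to check the sign conditions in the later questions.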
http://openstudy.com/updates/55b54e71e4b0ce105661620f
## anonymous one year ago

Solve for x: |3x + 12| = 18

x = 2, x = –10
x = 2, x = –2
x = 10, x = –10
x = –2, x = 10

$3x + 12 = 18$
$3x + 12 = -18$

Now solve for x; you end up with two answers.
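To see where the two branches come from and to check them, solve each linear case and substitute back into the original equation. This small Python sketch is my own illustration, not part of the thread.

    # |3x + 12| = 18 splits into the branches 3x + 12 = 18 and 3x + 12 = -18.
    candidates = [(18 - 12) / 3, (-18 - 12) / 3]

    # Substitute each candidate back into the original equation to confirm it.
    solutions = [x for x in candidates if abs(3 * x + 12) == 18]
    print(solutions)   # [2.0, -10.0] -> the choice x = 2, x = -10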
https://gamedev.stackexchange.com/questions/103682/how-to-resolve-collisions-when-using-ray-casting-to-predict-the-location-of-futu
# How to resolve collisions when using ray-casting to predict the location of future collisions?

I ran into a question I can't seem to solve while implementing the movement/collision-checking code for my game. The game is in 2D and all objects in my game use either AABBs or circles as collision masks. My current procedure for moving a game object uses the following steps:

1. [Motion definition] Set the motion vector for the game object as vec2(xSpeed, ySpeed)
2. [Broad phase] Gather the nearby objects that the moving object could collide with during this update
3. [Collision detection] Raycast the AABB along the motion vector, against all objects found in step 2, and find the first collision
4. [Collision resolution] Move the game object to this position where the first collision occurs

Below is a screenshot of the collision detection. It casts the object's AABB (bottom-left blue rectangle) along the motion vector (red line) and finds the points where the AABB would enter and exit the other object's AABB (big blue rectangle). It returns the position, normal and lambda (the progression [0.0, 1.0] along the motion vector when entering/exiting the other object).

Practically, the ray casting of AABBs is performed as a line raycast on the configuration space obstacle of the two AABBs. I've implemented this for the line-circle, line-AABB, circle-circle, and AABB-AABB cases and they work correctly.

My problem is the collision resolution step: my objects get stuck in each other using the current procedure. The reason is that the raycast procedure returns the position where the object "just collides" with the other object. Because of this, on the next update of my game world, the object is already colliding before it even moves. When I perform my collision detection step, the result is that the "first collision" occurs at the starting position of the object.

I've been reading related questions here, but haven't found a definitive answer yet. Here are some of the things I've read and the problems I found with them (from "bad" to "better"):

1. [Don't move if a collision is encountered on the way] This could cause jittery and weird behavior when moving towards an object. For example, if the object is moving at 5 pixels per frame towards a wall, it will end up somewhere between 5 and 0 pixels away from the wall, depending on its position. I suspect jittery behavior would also happen when this solution is extended to collision checking between moving objects.
2. [Implement IsTouching() methods to check whether an object is colliding but not penetrating another object] I have no idea how to elegantly use this information in the raycasting algorithms to allow "sliding behavior" of shapes without sacrificing performance (the algorithms are very optimized right now). Also, I imagine objects "touching" is defined as having a penetration depth of 0.0; checking for this could fail because of float rounding errors.
3. [Push away both the object and the colliding object upon colliding] Some of my objects (such as walls and terrain) are static and I'd rather not move them even slightly to resolve collision problems.
4. [Push the object out of the colliding object along the collision normal] Using this solution, the object can get stuck in another object, as it gets "blindly" pushed out of a colliding object.
5. [Push the object out of the colliding object backwards along the motion vector] This sounds good, as the object won't get stuck in other objects. However, I'm not sure how to choose "how much" to push the object out of the other object.
The best thing I can think of is to move it back by "a very small float number" such as 0.00001f, but this sounds very ad hoc. In short: where do I move my object, if I know exactly where the first collision would occur, without getting it stuck in other objects?

There are two steps to solving this problem. First, you need some extra data on the collision: when two objects collide, you want to know how far they've penetrated into each other. After that, you want to move the two objects backwards. Depending on how accurate you want it, you could just take the amount that they overlap, divide it in half, and move each object back by that amount (obviously not good if only one object is moving). If you want to be more accurate, you can try to determine how fast each object was going on impact and move them apart depending on which one was going faster.

After you've separated the two objects, you can apply your resultant forces. Again, if you don't need to be super accurate, you can end here and just let further collisions sort themselves out. A way to increase the accuracy of your collision response is to iterate this process: each object detects and reacts to collisions multiple times per frame, so that if one collision results in another immediate collision you can account for it. You want to put a limit on this so that you don't end up with an infinite loop of collision detection. Objects that are more important can have higher iteration limits.

There are a few different algorithms to determine object overlap that can be done as part of the collision detection step. If you use the Separating Axis Theorem (SAT) you can do this pretty easily, but that's more for 3D. It's even easier to do it with circles and AABBs in 2D. Here's a quick example to get the overlap for two colliding circles:

    float IsColliding(Circle c1, Circle c2)
    {
        float distanceBetweenCircles = (c2.center - c1.center).magnitude;
        float radiiSum = c1.radius + c2.radius;
        if (radiiSum > distanceBetweenCircles)
        {
            // Return the amount of overlap
            return radiiSum - distanceBetweenCircles;
        }
        return 0.0f;   // no overlap, so no collision
    }

• One thing you could do is use a value that's small relative to your overlap; 1/100th would be good. So if you have an overlap of 1.0f, then you'd move back 1.0f + (1.0f * .01f). This way, even if you have a really large or small overlap, you'll never be moving back too much, but enough to get out of the collision. Either that, or notice that I've used radiiSum > distanceBetweenCircles: if you move the object back so that radiiSum == distanceBetweenCircles, there will be no collision detected. Jul 10, 2015 at 16:38
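To make the suggestion above concrete, here is a minimal Python sketch of the iterated overlap-and-separate idea for a moving circle against a static one. It is my own illustration, not the answerer's code: the class and function names are invented, and the 1% slop follows the commenter's suggestion of backing off slightly more than the overlap so the shapes do not start the next frame already touching.

    from dataclasses import dataclass
    import math

    @dataclass
    class Circle:
        x: float
        y: float
        radius: float

    def overlap(c1, c2):
        """Penetration depth of two circles; 0.0 means no collision."""
        distance = math.hypot(c2.x - c1.x, c2.y - c1.y)
        return max(0.0, c1.radius + c2.radius - distance)

    def separate(moving, static, max_iterations=4):
        """Push the moving circle out of the static one along the line of centers.

        A small slop (1% of the overlap) is added so the shapes end up just clear
        of each other instead of exactly touching, which avoids the "already
        colliding at the start of the next frame" problem from the question.
        """
        for _ in range(max_iterations):
            depth = overlap(moving, static)
            if depth == 0.0:
                break
            distance = math.hypot(moving.x - static.x, moving.y - static.y)
            if distance == 0.0:
                # Degenerate case: centers coincide; pick an arbitrary direction.
                nx, ny = 1.0, 0.0
            else:
                nx = (moving.x - static.x) / distance
                ny = (moving.y - static.y) / distance
            push = depth * 1.01
            moving.x += nx * push
            moving.y += ny * push

    # usage: a moving circle that has penetrated a static one
    player = Circle(0.9, 0.0, 0.5)
    wall   = Circle(0.0, 0.0, 0.5)
    separate(player, wall)
    print(overlap(player, wall))   # 0.0 after separation

Because only the moving circle is displaced, static geometry such as walls and terrain stays put, which matches constraint 3 in the question.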